Influence of available resources on the defect elimination

The paper investigates the influence of the resources needed for maintaining buildings. The quality and quantity of maintenance work determine the level of service provided to building users. The solution is based on the system dynamics methodology. A model has been developed whose main input parameters are the available resources: components for the maintenance of the building, workers, and financial resources. The main output parameters are the number of defects and the balance of the financial resources; the latter includes the costs of planned maintenance and the costs of repair works. The case study presents the calculation of the main parameters. The outputs demonstrate the importance of a well-designed budget and an adequate hire rate.

Introduction
The number of defects is strongly influenced by the maintenance performed in the building and is related to the building structure and the HVAC systems. The capacity of the maintenance depends on the number of workers and their productivity. Maintenance comprises planned maintenance and the solving of ad hoc problems, usually repairs. Human resources management is one of the most important parts of facility management. It includes the allocation of workers to both kinds of maintenance, but also the decision about hiring new workers in the case of a worker shortage. Materials and components are other resources needed for the maintenance. An important task is balancing the level of the spare-parts stock: a deficiency can cause a decrease of the service level [1]. All activities are connected to the financial resources. For maintenance services, a one-year budget is usually proposed. The importance of this resource is evident: it covers all expenses such as manpower, materials, components and overhead costs. All these costs are running costs, but it is also necessary to consider investment costs for the equipment used for the maintenance activities [2]. The paper investigates the dynamic behaviour in the case of restricted resources and focuses on the financial resources, manpower and material. Like any other system model, it is simplified by considering only the relevant elements.

Method
The model is developed as a system dynamics model. The method is a suitable tool for describing the dynamic behaviour of technical, economic and social systems. The main parameters are represented as stock elements or as flow elements that change the stock values [3][4]. The basic model is described in [5]. For the investigation of the presented problem, the model has been changed by adding new elements and by changing the input values of the parameters.

Model description
The model is depicted in Figure 1. The investigated elements in the model are Budget, Active workers maintenance, Active workers repair and Available workers, together with the flow element hire flow and the converter element hire rate. The first simulation was performed for the situation when the budget is high enough to cover all expenses, meaning the budget does not constrain the service operations in the building. The next simulations were done for different values of the yearly budget planned for the maintenance work. Table 1 lists the initial values that are changed during the simulation. The initial values can substantially influence the dynamics of the parameter changes.
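To make the stock-and-flow logic concrete, the following minimal Python sketch simulates the interaction of defects, workers and budget over time. All element names and numeric values here are illustrative assumptions, not the paper's calibration (which is given in Table 1 and Figure 1).

```python
# Minimal stock-flow sketch of the maintenance model: stocks are changed by flows.
# All numeric values below are assumed for illustration only.
months = 120
defects = 50.0            # stock: open defects
available = 10.0          # stock: available workers
budget = 10e6             # stock: budget balance in CZK (one-off stock in this sketch)

defect_creation = 8.0     # flow: new defects per month (depends on ageing and load)
repairs_per_worker = 1.5  # defects one active worker can repair per month
hire_rate = 0.2           # flow: workers hired per month
wage = 30e3               # CZK per worker-month (assumed)
material_cost = 5e3       # CZK of components per repaired defect (assumed)

for t in range(months):
    active = min(available, defects / repairs_per_worker)  # allocate workers to repairs
    repaired = active * repairs_per_worker
    spending = active * wage + repaired * material_cost
    if budget < spending:                                   # depleted budget stops maintenance
        active = repaired = spending = 0.0
    defects += defect_creation - repaired                   # flows change the stocks
    available += hire_rate
    budget -= spending
```

Running the loop with a low initial budget reproduces the qualitative behaviour reported below: once the budget stock is exhausted, repairs stop and the defect stock grows without bound.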
Results
The resulting values for the first case are shown in Figure 2. It is possible to observe an increase of the active workers caused by the growing number of defects, and at the same time a decrease of the available workers. With fewer and fewer available workers, it is evident from Figure 3 that the number of defects grows very quickly. This development can be stopped by hiring more workers: hiring 0.2 worker every month, i.e., 2 and later 3 workers per year, with an adequate budget increase of 0.45·10^6 CZK per year, is sufficient. This solution can be used for stable conditions. Another strategy is to change the parameter defect creation, which depends on time and load. This can be arranged during the design process (e.g., using more reliable materials) and during the construction period (through the quality of the construction works). The material stock changes are depicted in Figure 4. The sustainable supply of material is 9.6 material units per month, and the yearly consumption is 115 units. The next investigation was focused on the influence of the budget on the number of defects, the active workers for planned maintenance and the active workers for repairs. These parameters are interconnected, because a low budget limits the level of maintenance activities and consequently affects the defect creation. The budget is changed from 4·10^6 CZK·year⁻¹ to 14·10^6 CZK·year⁻¹ in seven steps. The output values are presented in Figures 5-7. After the budget is depleted, the number of defects immediately increases. The final number of defects for the budget level of 4·10^6 CZK is five times higher than in the case of a sufficient budget. The consequences of a low budget are also evident for the human resources: without the possibility to cover the personnel costs, the number of active workers decreases to zero.

Conclusions
The developed model allows us to investigate the complex system and to make decisions concerning the maintenance activities in buildings. These activities are influenced by the available resources: the manpower, the materials and the planned budget. The model does not cover building refurbishment. The improvement of the building structure and HVAC systems can, after its implementation, substantially change the number of defects; the time of ageing is then counted from the end of the refurbishment works. It is necessary to pay more attention to the design activities, i.e., to use elements with well-described parameters, including the maintenance work demand and the implementation conditions. The BIM approach, where construction elements carry these data, can support this solution. Skills in using BIM tools relevant to the work performed by each professional or technical participant are also a necessity [6]. This influences the productivity of the design work but also the operations in the buildings.
A Multinomial Ordinal Probit Model with Singular Value Decomposition Method for a Multinomial Trait

We developed a multinomial ordinal probit model with singular value decomposition (SVD) for testing a large number of single nucleotide polymorphisms (SNPs) simultaneously for association with multidisease status when the sample size is much smaller than the number of SNPs. The validity and performance of the method were evaluated via simulation. We applied the method to our real study sample recruited through the Mexican-American Coronary Artery Disease (MACAD) study. We found 3 genes (SORCS1, AMPD1, and PPARα) to be associated with the development of both IGT and IFG, 5 genes (AMPD2, PRKAA2, C5, TCF7L2, and ITR) with the IGT mechanism only, and 6 genes (CAPN10, IL4, NOS3, CD14, GCG, and SORT1) with the IFG mechanism only. These data suggest that IGT and IFG may reflect different physiological mechanisms leading to prediabetes, via different genetic determinants.

Introduction
Genome-wide association studies (GWASs) examine genetic variants across the entire genome to improve the understanding of the genetic components underlying complex human disease. With whole-genome genotyping techniques that allow a GWAS to involve hundreds of thousands of single nucleotide polymorphisms (SNPs), many studies have successfully identified novel genetic components for many diseases or related quantitative traits. However, the sample size is often limited due to the difficulty of recruiting patients.

Multinomial Probit Model with Singular Value Decomposition
Logit and probit models are statistical models widely used for the analysis of categorical ordinal/nominal data. The difference between the two models is the choice of the link function relating the linear predictor to the expected value: the probit model uses the inverse normal cumulative distribution, and the logit model uses the logit transformation. As discussed by Greene [4], in most cases the choice of the link function is largely a matter of taste. We utilize the probit model here to analyze data with polytomous ordinal response variables. In general, the multinomial ordinal probit model can be expressed by latent unobserved continuous variables associated with the categorical responses. Assume that responses $y_1, y_2, \ldots, y_n$ are observed, where $y_i$ takes one of $J$ ordered categories, and that $\theta_0, \theta_1, \ldots, \theta_J$ are real-valued bin boundaries satisfying $-\infty = \theta_0 \le \theta_1 \le \cdots \le \theta_J = \infty$. As discussed by Albert and Chib [5], we denote by $z_1, z_2, \ldots, z_n$ latent continuous random variables. We assume that the latent variable $z_i$ associated with a categorical outcome $y_i$ can be explained in terms of an underlying linear model, and that the observed response $y_i$ has category $j$ if and only if $z_i$ falls between $\theta_{j-1}$ and $\theta_j$.
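As a minimal illustration of this latent-variable rule, the following Python snippet maps latent draws to ordered categories for J = 4; the interior cut-point values are arbitrary assumptions.

```python
import numpy as np

# Cut points: theta_0 = -inf, interior cut points, theta_J = +inf (values assumed)
theta = np.array([0.0, 1.0, 2.0])              # interior cut points for J = 4
z = np.random.default_rng(0).normal(size=8)    # latent variables z_i
y = np.digitize(z, theta, right=True) + 1      # y_i = j  iff  theta_{j-1} < z_i <= theta_j
```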
The multinomial ordinal probit model is equivalent to the following model:

$z_i = x_i \beta + \epsilon_i, \qquad y_i = j \ \text{iff} \ \theta_{j-1} < z_i \le \theta_j,$   (2.1)

where $x_i$ is a $1 \times m$ vector of the explanatory variables for the $i$th sample, and $\beta$ is an $m \times 1$ vector of parameters to be estimated. In vector-matrix notation, the multinomial ordinal probit model is

$z = X\beta + \epsilon,$   (2.2)

where $z$ is the $n \times 1$ vector of latent variables, $X$ is the $n \times m$ matrix of the explanatory variables, $\beta$ is the $m \times 1$ vector of unknown regression coefficients, and $\epsilon$ follows an independent standard multivariate normal distribution, $\epsilon \sim N(0, I_n)$. By applying the SVD to the matrix $X$ in (2.2), when $\mathrm{rank}(X) = n$ the matrix can be expressed as $X' = ADF'$, where $A$ is the $m \times n$ singular value factor loading matrix with orthonormal columns so that $A'A = I_n$, $F$ is the $n \times n$ SVD orthogonal factor matrix with $F'F = FF' = I_n$, and $D = \mathrm{diag}(d_1, \ldots, d_n)$ is the diagonal matrix of positive singular values, ordered as $d_1 \ge d_2 \ge \cdots \ge d_n > 0$. When $\mathrm{rank}(X) = r < n$, the last $n - r$ columns of both $A$ and $F$, for which $d_{r+1} = \cdots = d_n = 0$, are ignored, since they interact with the block of zeros in $D$. This leads to another form of the SVD, $X' = A_r D_r F_r'$, that is, the product of the first $r$ columns of $A$, the upper $r \times r$ block of $D$, and the first $r$ columns of $F$. Since the difference between the two scenarios lies only in the dimensions of the matrices in the SVD, we assume that $\mathrm{rank}(X) = n$ in the rest of the paper for convenience. Thus, the model in (2.2) with the SVD of $X$ can be written as

$z = L\gamma + \epsilon,$   (2.3)

where $L = FD$ and $\gamma_{n \times 1} = A'_{n \times m} \beta_{m \times 1}$. Therefore $z$, the $n \times 1$ vector of latent variables in (2.3), has a multivariate normal distribution, $z \sim N(L\gamma, I_n)$. As shown in (2.3), $\gamma$ is expressed as a linear combination of the original parameters $\beta$; hence we call $\gamma$ the vector of superfactors. The model in (2.3) represents a massive dimension reduction from $m$ to $n$ parameters: the regression model with $m$ parameters is reduced to one with $n$ parameters derived from the SVD of the covariate matrix $X$, so the statistical inference on the original parameters turns into inference on the superfactors. Let $p_i = (p_{i1}, \ldots, p_{iJ})$ denote the vector of probabilities associated with the assignment of the $i$th sample into categories $1, \ldots, J$, where $p_{ij}$ denotes the probability that a sample falls into category $j$. From (2.1) and (2.3), it follows that

$p_{ij} = \Phi(\theta_j - l_i\gamma) - \Phi(\theta_{j-1} - l_i\gamma),$   (2.4)

where $\phi(\cdot)$ and $\Phi(\cdot)$ denote the probability density function and the cumulative distribution function of the standard normal distribution, respectively, and $l_i$ is the $i$th row of the matrix $L = FD$. Let $y = (y_1, \ldots, y_n)$ denote the vector of responses observed for all samples. Then the probability of observing the data $y$ is given as

$P(y \mid \gamma, \theta) = \prod_{i=1}^{n} \prod_{j=1}^{J} p_{ij}^{I\{y_i = j\}}.$   (2.5)

From (2.4) and (2.5), the log-likelihood function for $(\gamma, \theta)$ can be written as

$\ell(\gamma, \theta) = \sum_{i=1}^{n} \sum_{j=1}^{J} I\{y_i = j\} \log\left[\Phi(\theta_j - l_i\gamma) - \Phi(\theta_{j-1} - l_i\gamma)\right].$   (2.6)

Model Fitting with Maximum Likelihood Estimation
The maximum likelihood estimates (MLEs) of the superfactors $\gamma$ in (2.3) can be obtained by the iteratively reweighted least squares (IRLS) procedure [6] using the log-likelihood function for $(\gamma, \theta)$ in (2.6). The procedure can be briefly described as follows.
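The dimension reduction in (2.3), and the back-transformation to the original parameters described in the next subsection, can be verified numerically. The following sketch uses numpy on synthetic data (all sizes and values are illustrative); note that $A\gamma$ recovers the minimum-norm solution, i.e., the projection of $\beta$ onto the row space of $X$, not $\beta$ itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 1000                         # n samples, m SNPs (m >> n)
X = rng.normal(size=(n, m))              # synthetic design matrix for illustration

# Thin SVD in the orientation used above: X = F D A', with A'A = I_n, F'F = I_n
F, d, At = np.linalg.svd(X, full_matrices=False)
A = At.T                                 # m x n loading matrix
L = F * d                                # L = F @ diag(d), an n x n matrix

beta = rng.normal(size=m)                # some original parameter vector
gamma = A.T @ beta                       # superfactors: gamma = A' beta (n values)
assert np.allclose(X @ beta, L @ gamma)  # X beta = L gamma: the reduction is exact

beta_back = A @ gamma                    # beta-hat = A gamma: minimum-norm solution,
                                         # the projection of beta onto the row space of X
```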
Let $\eta$ denote the vector of all model parameters, that is, $\eta = (\theta_2, \ldots, \theta_{J-1}, \gamma_1, \ldots, \gamma_{n-J+2})$. Note that $\theta_1$ and $\theta_J$ are not included in this vector because their values are assumed to be 0 and $\infty$, respectively, for the purpose of model identifiability. Also note that the $J - 2$ smallest singular values, together with their corresponding factors, are dropped from the parameters, since the number of parameters must not exceed the number of samples. Assuming that $J = 4$, define $H_i = \mathrm{diag}(f_{i1}, \ldots, f_{i,J-1})$, where $f_{ij}$ denotes the derivative of the standard normal cumulative distribution function evaluated at $\theta_j - l_i\gamma$. Take $W_i = \mathrm{diag}(p_i)$, where $p_i$ is the $J \times 1$ vector of probabilities that the $i$th individual falls in each category, that is, $p_i = (p_{i1}, \ldots, p_{iJ})$, and let $N_i$ be a $J \times 1$ vector of observations, that is, $N_i = (I\{y_i = 1\}, \ldots, I\{y_i = J\})$. After initialization of all elements, the iteration $s + 1$ ($s = 1, 2, \ldots$) can be written as in (2.8). The MLE of $\eta$ is found by performing the process recursively until the change between $\eta^{(s+1)}$ and $\eta^{(s)}$ is negligible.

General Solution for the Original Parameters
We have discussed how to estimate the superfactor $\gamma$ in (2.3) thus far. Since the primary interest is to find SNPs that are significantly associated with a disease, it is necessary to transform the superfactor $\gamma$ back to the original parameters $\beta$ in (2.1). The equation $\gamma = A'\beta$ in (2.3) can be utilized for the transformation even though $A'$ is an $n \times m$ nonsquare matrix. As discussed by Graybill [7], a unique solution for $\beta$ can be achieved by taking the generalized inverse of $A'$, which is $A$, since $A'A = I_n$. Therefore, the unique solution for the SNP effects $\beta$ can be calculated as $\hat{\beta} = A\hat{\gamma}$.

Selection of Significant SNPs
Finding significant SNPs is the same as testing whether each SNP effect $\beta_i$, $i = 1, \ldots, m$, is statistically significant, that is, testing the hypothesis $H_0: \beta_i = 0$. A simple approach is Wald's test statistic, which takes the form $(\hat{\beta} - \beta)/\mathrm{se}(\hat{\beta})$ and assumes a normal distribution. However, when $m \gg n$, it is hard to calculate $\mathrm{se}(\hat{\beta})$ directly from the data. We therefore utilized a permutation test to select significant SNPs. The rationale behind the test is that, under the null hypothesis, the estimate of $\beta$ obtained from the raw (unpermuted) data is similar to the estimate of $\beta$ obtained from permuted data; that is, the difference between the two estimates is close to zero under $H_0$. With this idea, we can construct Wald's test statistic as follows. Let $\hat{\beta}_i$ ($i = 1, \ldots, m$) be the estimate of the $i$th SNP effect from the raw data and $\hat{\beta}_i^{(k)}$ ($k = 1, \ldots, K$) be the estimate of the $i$th SNP effect from the $k$th permuted data set. Define $\beta_i^{d(k)}$ as the difference between $\hat{\beta}_i$ and $\hat{\beta}_i^{(k)}$, that is, $\beta_i^{d(k)} = \hat{\beta}_i - \hat{\beta}_i^{(k)}$. Then Wald's test statistic (2.9) is formed from the mean of the $\beta_i^{d(k)}$ over the permutations and its standard error, $\Lambda_i = \bar{\beta}_i^{d} / \mathrm{se}(\bar{\beta}_i^{d})$. Under the null hypothesis, the statistic $\Lambda_i$ defined in (2.9) follows approximately a standard normal distribution when $K$ is large. The P value for rejecting the null hypothesis at a significance level $\alpha = 0.05$ can be utilized to identify significant SNPs.
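A compact sketch of this permutation-based test follows. Since the exact form of (2.9) is garbled in this extraction, the statistic is written here as the mean difference over the K permutations divided by its standard error; that specific form, and the two-sided normal P value, are assumptions.

```python
import numpy as np
from scipy.stats import norm

def permutation_wald(beta_hat, beta_perm):
    """beta_hat: (m,) SNP-effect estimates from the raw data.
       beta_perm: (K, m) estimates from K permuted data sets."""
    d = beta_hat[None, :] - beta_perm         # beta_i^{d(k)} = beta_hat_i - beta_hat_i^(k)
    mean_d = d.mean(axis=0)                   # mean difference per SNP
    se_d = d.std(axis=0, ddof=1) / np.sqrt(d.shape[0])
    lam = mean_d / se_d                       # approx. N(0, 1) under H0 for large K
    pvals = 2 * norm.sf(np.abs(lam))          # two-sided P values
    return lam, pvals
```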
Simulated Multinomial Ordinal Data
The validity of the proposed method was evaluated using simulated data sets. The data-generation procedure comprised three steps: generating genotype data under a given genetic model, generating the latent variable, and defining the disease status variable by applying predefined bin boundaries. The scheme of each step is as follows. We first generated 10 sets of simulated genotype data under an additive genetic model; each set consists of 100 samples and 1000 SNPs. From (2.1), the latent variable $z_i$ consists of two parts: the expected value $x_i\beta$ and the random error $\epsilon_i$. To generate the expected value, we assumed that, for each sample, 9 out of the 1000 SNPs (every 101st SNP, except the last one) contribute to disease status, where $\beta_1$ and $\beta_2$ are set to −1 and 1, respectively. The latent variable is then obtained as the sum of the expected value $x_i\beta$ and a random error generated from the standard normal distribution. We then generated the disease status variable $y_i$ assuming 3 disease development stages. Therefore, when applying the proposed method to the simulated data sets, we would expect 9 strong signals corresponding to the 9 disease-associated SNPs. We also compared the results obtained from the proposed method with those from a single-SNP analysis with the multinomial ordinal probit model.

Mexican-American Coronary Artery Disease (MACAD) Study
We also applied the proposed method to a study sample recruited through the Mexican-American Coronary Artery Disease (MACAD) study [8, 9]. The study population consists of probands who are Mexican Americans aged between 45 and 75 with coronary artery disease, spouses of probands, adult offspring (≥18 years), and their spouses. For the offspring generation, we performed an oral glucose tolerance test and genotyped 132 SNPs in 32 genes selected based on a prior relationship to insulin physiology. The goal of the study was to identify genes involved in the development of IGT and/or IFG, where IGT was defined as a 2-hr glucose level between 140 and 199 mg/dL and IFG as a fasting glucose level between 100 and 125 mg/dL. In order to identify and compare genes affecting the development of IGT and/or IFG, we generated two study samples, each with 3 disease stages: (D1) both 2-hr and fasting glucose normal (N/N, n1 = 60), IGT only (IGT/N, n2 = 31), and IGT and IFG (IGT/IFG, n3 = 15); (D2) both 2-hr and fasting glucose normal (N/N, n1 = 60), IFG only (N/IFG, n2 = 34), and IGT and IFG (IGT/IFG, n3 = 15).
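Before turning to the results, the simulation design above can be re-created in a few lines of Python. The 0/1/2 genotype coding, the tertile bin boundaries, and the alternating ±1 assignment across the nine causal SNPs are assumptions not stated explicitly in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 1000
X = rng.integers(0, 3, size=(n, m)).astype(float)   # additive genotype coding 0/1/2 (assumed)

beta = np.zeros(m)
causal = np.arange(100, m - 1, 101)                  # every 101st SNP, last one excluded: 9 SNPs
beta[causal] = np.where(np.arange(causal.size) % 2 == 0, -1.0, 1.0)  # effects -1 and 1

z = X @ beta + rng.normal(size=n)                    # latent variable: x_i beta + eps_i
theta = np.quantile(z, [1/3, 2/3])                   # assumed bin boundaries for 3 stages
y = np.digitize(z, theta, right=True) + 1            # disease stages 1, 2, 3
```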
Simulated Multinomial Data
Figure 1 summarizes the results of the association analyses when applying the multinomial ordinal probit model with SVD to the simulated data sets. All numbers shown in the figures are averages of the estimates obtained from the 10 simulated data sets. As mentioned previously, we expected 9 strong signals corresponding to the 9 SNPs designed to be associated with disease development, and 9 were observed in our analysis. Similar results from the single-SNP analysis are shown in Figure 2. Figure 1(a) summarizes the MLEs of the SNP effects calculated with the multinomial ordinal probit model with SVD. The figure shows that almost all MLEs except 9 were between −0.1 and 0.1, while there were 9 large MLEs (4 around 0.3, 5 around −0.3) corresponding to the 9 SNPs contributing to disease status. Figure 1(b) gives P values on the $-\log_{10}$ scale for testing the SNP effects. The line in Figure 1(b) corresponds to significance level $\alpha = 0.05$; the 9 SNPs were clearly separated from the rest and had $-\log_{10}$ P values > 1.3.

Figure 2(a) summarizes the MLEs of the SNP effects obtained by the single-SNP analysis and shows that no signal was strong enough to be distinguished from all the others. The P values are given in Figure 2(b) on the $-\log_{10}$ scale. SNP 501, in the middle of the figure, had a relatively strong signal compared to all the others; however, its $-\log_{10}$ P value was much less than 1.3, which corresponds to significance level $\alpha = 0.05$. Thus, no SNPs were identified as statistically significant by the single-SNP analysis method. In contrast, the multinomial ordinal probit model with SVD was able to identify all 9 SNPs contributing to disease status as statistically significant at significance level $\alpha = 0.05$. These results indicate that the proposed method should be reliable for the analysis of large-scale genome-wide association data with polytomous ordinal responses when $m \gg n$.

Mexican-American Coronary Artery Disease (MACAD) Study
We analyzed the data sets D1 and D2 (see Methods), generated from a subsample of subjects recruited through a coronary artery disease proband in the Mexican-American Coronary Artery Disease Project, using both the multinomial ordinal probit model with SVD and the single-SNP analysis method. Figure 3 summarizes the analysis results for data set D1 (N/N-IGT/N-IGT/IFG) using the single-SNP analysis: Figure 3(a) gives the MLEs of the SNP effects, and Figure 3(b) plots the P values of the association analysis on the $-\log_{10}$ scale. With the Sidak correction, often used to correct for the multiple testing problem, the adjusted significance level is $1 - (1 - \alpha)^{1/m}$, where $\alpha$ is the significance level and $m$ represents the number of tests. Thus, the corrected $-\log_{10}$ P value threshold for significance level $\alpha = 0.05$ is 3.4, which corresponds to the line in Figure 3(b). We applied the adjusted significance level to the P values in Figure 3(b), since these P values are not corrected for multiple testing. No SNP was identified as statistically significant (Figure 3(b)).

The data set D2 (N/N-N/IFG-IGT/IFG) was analyzed with the same method, and the results are given in Figure 4. Figure 4(a) plots the MLEs of the SNP effects. Since the P values in Figure 4(b) are not corrected for multiple testing, we again used 3.4 as the $-\log_{10}$ P value threshold corresponding to the 0.05 significance level after multiple-testing correction. Two SNPs, corresponding to SORCS1 and SORT1, were found significant.

We then analyzed D1 and D2 with the multinomial ordinal probit model with SVD. Figure 5 summarizes the analysis results for data set D1: Figure 5(a) plots the MLEs of the SNP effects, and Figure 5(b) plots the P values on the $-\log_{10}$ scale for testing the SNP effects. The multiple-testing correction does not need to be applied here, since the method tests all SNPs simultaneously. With the 1.3 $-\log_{10}$ P value threshold, corresponding to the 0.05 significance level, we identified 8 out of the 32 candidate genes (SORCS1, AMPD1, PPARα, AMPD2, PRKAA2, C5, TCF7L2, and ITR) as associated with the disease path defined in D1.
The multinomial ordinal probit model with SVD was applied to data set D2 as well; the results are shown in Figure 6. Figure 6(a) summarizes the MLEs of the SNP effects, and Figure 6(b) plots the P values on the $-\log_{10}$ scale for testing the SNP effects. It shows that 11 SNPs, corresponding to 9 out of the 32 candidate genes (SORCS1, AMPD1, PPARα, CAPN10, IL4, NOS3, CD14, GCG, and SORT1), have $-\log_{10}$ P values greater than the 1.3 threshold. From the analyses of D1 and D2, we found that SNPs in 3 genes (SORCS1, AMPD1, and PPARα) were associated with both IGT and IFG; SNPs in 5 genes (AMPD2, PRKAA2, C5, TCF7L2, and ITR) were associated with IGT only; and SNPs in 6 genes (CAPN10, IL4, NOS3, CD14, GCG, and SORT1) were associated with IFG only. These results suggest that IGT and IFG may represent different pathways to diabetes, with different genetic determinants. Thus, using both simulated data and a real study sample, we demonstrated that the multinomial ordinal probit model with SVD can be utilized to identify associated markers involved in disease development when multiple disease stages are considered. For the relatively small data sets used in this paper (100 samples and 1000 SNPs in the simulation study), the computation took less than 10 minutes to complete. However, computation time might be a concern when applying this method to large data sets, such as GWAS data with millions of SNPs and thousands of samples.

Figure 1: Analysis of the simulated data sets with the multinomial ordinal probit model with SVD.
Figure 2: Analysis of the simulated data sets with single-SNP analysis.
Figure 3: Analysis of genes for IGT/IFG through the IGT pathway (data set D1) with single-SNP analysis.
Figure 4: Analysis of genes for IGT/IFG through the IFG pathway (data set D2) with single-SNP analysis.
Figure 5: Analysis of genes for IGT/IFG through the IGT pathway (data set D1) with the multinomial ordinal probit model with SVD.
Figure 6: Analysis of genes for IGT/IFG through the IFG pathway (data set D2) with the multinomial ordinal probit model with SVD.
Virulence and antimicrobial susceptibility profile of Listeria monocytogenes isolated from frozen vegetables available in the Egyptian market

Listeria monocytogenes is among the most important foodborne pathogens. It may enter food-processing environments through raw materials, handlers or equipment, and may persist due to ineffective cleaning or sanitation. The bacterium can be isolated from both frozen vegetables and fresh food substances. This study aimed to estimate the prevalence of L. monocytogenes in spices and frozen vegetables and to screen for some virulence factors and drug-resistance determinants of the isolated bacteria. First, conventional microbiological methods were used for the isolation and identification of the bacteria. Next, the identity of the isolated bacteria was confirmed by molecular techniques, and the virulence genes iap and hlyA were identified by real-time polymerase chain reaction (PCR). The hemolytic activity of the isolates was assessed by cultivation on sheep blood agar. Furthermore, the antimicrobial susceptibility of the confirmed L. monocytogenes isolates was tested by the disk diffusion method against 10 antibiotics. Out of 331 vegetable samples, 47 isolates were confirmed to contain L. monocytogenes, whereas none of the 40 spice samples tested positive. All isolates were positive for the iap and hlyA genes. Susceptibility testing indicated that all isolates were sensitive to trimethoprim/sulfamethoxazole, but only 36% were sensitive to penicillin G, while 100% and 70% showed intermediate resistance to chloramphenicol and erythromycin, respectively. All tested isolates were resistant to amoxicillin, gentamicin and norfloxacin; in addition, 90, 86 and 84% of the tested strains were resistant to ciprofloxacin, ceftazidime/clavulanic acid and amikacin, respectively. In summary, the L. monocytogenes isolates disseminated in frozen vegetable samples from the Egyptian market were highly virulent, entirely multiple-drug resistant, and enriched in iron-containing vegetables. Since L. monocytogenes is primarily pathogenic to humans and causes a life-threatening disease, there is a potential infection risk for people who handle frozen vegetables before cooking. Hence, surveillance of L. monocytogenes in frozen products, together with the implementation of tight control measures, would be valuable in preventing listeriosis and is highly recommended.

INTRODUCTION
Bacteria of the genus Listeria are Gram-positive, facultatively anaerobic, non-spore-forming bacilli (Wong and Freitag, 2004). The genus is represented by eight major species: Listeria monocytogenes, Listeria innocua, Listeria welshimeri, Listeria grayi, Listeria seeligeri, Listeria ivanovii, Listeria marthii and Listeria rocourtiae; recently added species (Weller et al., 2015) are Listeria booriae and Listeria newyorkensis. The most medically relevant species, L. monocytogenes, is classified into 13 serotypes. Serotypes 1/2a, 1/2b, 1/2c and 4b are associated with human infections (Graves et al., 2010; Leclercq et al., 2010), and almost all major outbreaks of invasive listeriosis are due to serotype 4b strains (Salcedo et al., 2003). The ability of these bacteria to survive and grow over a wide range of environmental conditions, including high salt concentration, refrigeration temperature and low pH, makes them a potential hazard in foods (Ryser and Marth, 2007), and the ability of L. monocytogenes to persist in the environment is due to its capacity to form biofilms (Colagiorgi et al., 2016).
This organism is a recognized foodborne pathogenic bacterium that causes many diseases, from mild gastroenteritis to severe blood and/or central nervous system infections, as well as abortion in pregnant women. Many studies have detected L. monocytogenes in fresh produce samples and even in some minimally processed vegetables (Lopez, 2008; Zhu et al., 2017). However, L. ivanovii and L. seeligeri have also rarely been associated with disease in humans (Lopez, 2008). Listeriosis was responsible for 30% of foodborne deaths from 1996 to 2005 and had a high case-fatality rate of 16.9% according to FoodNet US (Barton et al., 2011). L. monocytogenes expresses a highly conserved pore-forming toxin known as listeriolysin O (LLO). LLO is a member of a large family of cholesterol-dependent cytolysins (CDCs) found in several bacterial pathogens (e.g., streptolysin O of Streptococcus pyogenes and alveolysin of Bacillus alvei). It is the primary virulence factor of L. monocytogenes and is essential for its pathogenesis (Tweten, 2005; Cossart et al., 1989; Jaradat et al., 2002). L. monocytogenes is susceptible to many antibiotics, but multi-drug-resistant isolates have been reported (Jaradat et al., 2002). Listeria species are generally susceptible to a wide range of antimicrobials, yet the first multiresistant L. monocytogenes strain was isolated in 1988; since then, antibiotic-resistant L. monocytogenes isolates have been recovered from food, the environment, and human listeriosis cases (Soni et al., 2014). Currently, a β-lactam antibiotic (e.g., ampicillin or penicillin) combined with an aminoglycoside (for example, gentamicin) is the reference therapy for human listeriosis, while the second choice of treatment is a combination of vancomycin, erythromycin and trimethoprim-sulfamethoxazole for pregnant women or patients allergic to β-lactams (Hof, 2004).

This study aimed to estimate the prevalence of L. monocytogenes in spices and frozen vegetables, and to screen for some virulence factors and drug-resistance determinants of the isolated bacteria.

Twenty-five grams of each food sample was weighed and mixed with 225 ml of half-Fraser primary enrichment medium. The mix was incubated at 30 ± 1°C for 24 ± 2 h. Then 0.1 ml of the primary enrichment was transferred to a tube containing 10 ml of Fraser broth, and this inoculated medium was incubated at 37°C for 4 ± 2 h. From the primary enrichment culture, a loopful (10 μl) was inoculated onto the surface of Listeria agar according to the Ottaviani and Agosti medium (MERCK) (Ottaviani et al., 1997) and onto chromogenic Listeria agar medium (OXOID), and the plates were observed for typical L. monocytogenes colonies. The identity of the isolated colonies was further confirmed biochemically following the Microbact 12L scheme (Table 1).

Molecular identification of L. monocytogenes and detection of virulence genes
Real-time PCR was used to identify the Listeria genus. DNA was extracted with PrepMan® Ultra according to the manufacturer's protocol. Ten microliters of the supernatant was transferred to a new tube containing 90 µl of ultra-pure water and then vortexed; the mixture was used as the DNA template for PCR. The real-time PCR mixture was prepared using a Promag™ custom kit (PROMAGA GMBH, Berlin, Germany) according to the manufacturer's procedure and then added to a PIKO 96-well PCR plate (Thermo Fisher Scientific, Vantaa, Finland). The primers and probes used for the detection of the hlyA and iap genes are listed in Table 2.
Hemolytic activity assay
Haemolysin production was detected by culturing the L. monocytogenes isolates on blood agar base supplemented with 5% defibrinated sheep blood. The blood agar plates were incubated at 37°C for 24 h. Colonies producing clear zones of haemolysis were classified, according to the zone diameter of haemolysis, as strong, intermediate or weak (ISO 11290-1, 2014).

Antimicrobial susceptibility testing
Antibiotic susceptibility was determined by the Kirby-Bauer disc diffusion method (Bauer et al., 1966) as recommended by the National Committee for Clinical and Laboratory Standards (NCCLS, 2012). Four to five colonies were picked from overnight cultures; a loopful was inoculated into sterile TSB (about 3-4 ml/tube) and incubated for 2 to 4 h. The culture turbidity was adjusted to 0.5 McFarland (equal to 0.08-0.1 absorbance at a wavelength of 624 nm). Using a sterile cotton swab, the bacterial broth culture was streaked on the Mueller-Hinton agar surface, and the inoculum was left to dry for 3 to 5 min. Discs were placed individually on the agar surface with sterile forceps and then gently pressed down onto the agar surface to provide uniform contact. Plates were left for 2 h in a refrigerator to allow diffusion and then incubated at 37 ± 2°C for 18 to 24 h. The susceptibility of the Listeria isolates was detected by a clear zone around the discs, and the results were interpreted according to the standardized interpretive chart of the NCCLS (NCCLS, 2012). The antibiotics used were as follows: penicillin G (PG 10), trimethoprim 1.25 µg + sulfamethoxazole 23.75 µg (TS25), erythromycin (E15), ciprofloxacin (CIP5), amoxicillin (AML10), amikacin (AK30), norfloxacin (NOR 10 µg), gentamicin (GM 200 µg), ceftazidime + clavulanic acid (CAL40) and chloramphenicol (C30) (MAST Diagnostics, UK).

Distribution of L. monocytogenes in tested food samples
When 40 spice samples and 331 frozen samples were examined for L. monocytogenes, 47 of the 331 vegetable samples (14.2%) were positive for the presence of L. monocytogenes (Figure 1), while none of the spice samples was positive.

Haemolytic activity and frequency of virulence genes among L. monocytogenes isolates
All 47 L. monocytogenes isolates were PCR-positive for the iap and hlyA genes. The L. monocytogenes isolates showing haemolytic activity were classified according to their potency as shown in Figure 2.

Antimicrobial susceptibility of the L. monocytogenes isolates
The in vitro susceptibility of the 47 L. monocytogenes strains isolated from different kinds of foods was tested against 10 antibiotics. All tested strains were sensitive to trimethoprim/sulfamethoxazole, while 36% of the tested strains were sensitive to penicillin G. Moreover, 100 and 70% of the isolates showed intermediate resistance to chloramphenicol and erythromycin, respectively. All tested strains were resistant to amoxicillin, gentamicin and norfloxacin, while 90, 86 and 84% of the tested strains were resistant to ciprofloxacin, ceftazidime/clavulanic acid and amikacin, respectively.

Statistical analysis
Chi-square tests were used to determine significant trends in the data. First, it was obvious from the culture results that L. monocytogenes was not isolated from spices (0% of 40 spice samples, as opposed to 14.2% of 331 frozen food samples). Among the food samples, however, a clear overrepresentation of L. monocytogenes was observed in okra, spinach and artichoke (p < 0.05), which indicates a statistically significant relationship between the categorical variables.
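As a worked version of this comparison, the snippet below runs a chi-square test on the 2 × 2 table implied by the reported counts (0 of 40 spice samples vs. 47 of 331 frozen vegetable samples positive); the use of scipy is an assumption, as the paper does not name its statistical software.

```python
from scipy.stats import chi2_contingency

# Rows: sample type; columns: L. monocytogenes positive / negative (counts from the text)
table = [[0, 40],       # spices: 0 positive of 40
         [47, 284]]     # frozen vegetables: 47 positive of 331
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```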
DISCUSSION
This study aimed to isolate L. monocytogenes from different kinds of spices and frozen vegetables. Overall, 40 spice and 331 vegetable samples were examined for the presence of L. monocytogenes. It was found that 47 (14.2%) of the 331 samples (17 okra, 1 carrot, 6 green beans, 9 artichoke, 8 molokia, 3 spinach, 1 green peas, 1 strawberry and 1 grape leaves) were positive for L. monocytogenes, while, surprisingly, none of the spice samples showed any positive result for the pathogen. The absence of Listeria in spices may suggest a potential antimicrobial activity of these spices, which will need confirmation in further studies. Although the sensitivity of L. monocytogenes to spices such as ginger, finger-root and turmeric has been studied (Thongson et al., 2005), the current search for L. monocytogenes in spices was based on recent reports of the detection of a number of food pathogens, including L. monocytogenes, in spices and herbs (Thongson et al., 2005; Kara et al., 2015).

Previous studies among the analyzed categories showed variation in the occurrence of L. monocytogenes. For instance, Byrne et al. (2016) studied the occurrence and antimicrobial resistance patterns of Listeria isolated from vegetables in Brazil and found that 3% of the samples were contaminated with L. monocytogenes, including 2% of raw vegetables and 5.5% of ready-to-eat vegetables. They confirmed the virulence potential of the isolates and their antimicrobial susceptibility, revealing that 50% of the isolates were susceptible to antibiotics (Byrne et al., 2016).

In Uruguay, on the other hand, 11.2% of different food samples were positive for L. monocytogenes. The highest percentage was among frozen food samples (38%), followed by cheese (10%). The same study discussed the serotype distribution among the samples and concluded on the prevalence of serotypes 1/2b and 4b. These results highlight the role that frozen foods may play in the spread of this pathogen (Braga et al., 2017).

Moreover, the prevalence of L. monocytogenes in frozen burger patties was studied by Wong et al. (2012) in Malaysia. L. monocytogenes was detected in 33% of the chicken burger patties, 22.9% of the beef patties and 10% of the fish patties; their results suggest that burgers act as a potential source of listeriosis if adequate cooking is not applied.

Finally, the prevalence of Listeria species in fresh and frozen fish and shrimp was studied in Iran by Rahimi et al. (2012). Listeria species were isolated from 7.5, 4.2, 11.7 and 6.6% of fresh fish, frozen fish, fresh shrimp and frozen shrimp, respectively. Almost 2% of the identified species were L. monocytogenes, which led to the conclusion that consumption of seafood, either raw or frozen, may lead to foodborne illness in Iran (Rahimi et al., 2012).

The L. monocytogenes isolates detected in this study were positive for both the iap and hlyA genes. Isolates showing haemolytic activity were classified, according to their degree of haemolysis, into strong, intermediate and weak. Previous studies reported isolates positive for the virulence genes inlA, inlB, prfA, iap, actA, plcB and hlyA; their results suggest that all L. monocytogenes isolates have the potential to cause listeriosis in humans (Xiaolong et al., 2017). Various genes, such as the hlyA and iap genes, have been targeted for the detection of L. monocytogenes using PCR (Aznar et al., 2003).
The pulsed-field gel electrophoresis (PFGE) methodology is recommended in the identification protocol to identify the food implicated in an outbreak, which is considered a key point for public health.

From previous reports, it is evident that differences in the prevalence of L. monocytogenes in different types of food reflect the effects of geographical location, demography, food type and hygiene standards, among other factors. Food containing only spices, or high levels of them, like Indian food, almost lacks L. monocytogenes (Suriyapriya et al., 2016). As indicated above, none of our 40 spice samples collected from the Egyptian market contained Listeria, agreeing with what was found in Indian spicy food (Suriyapriya et al., 2016).

The susceptibility testing results (Figure 3) indicate that all tested strains were multi-drug resistant, as they were resistant to amoxicillin, gentamicin and norfloxacin; moreover, 90, 86 and 84% of the tested strains were resistant to ciprofloxacin, ceftazidime + clavulanic acid and amikacin, respectively.

In previous studies, all L. monocytogenes isolates were sensitive to most of the commonly used antibiotics, such as ampicillin, penicillin G and vancomycin. However, some multidrug-resistant L. monocytogenes isolates had been reported, which were resistant to ampicillin, erythromycin, gentamicin, trimethoprim-sulfamethoxazole or rifampin. For example, an L. monocytogenes strain isolated from a meningoencephalitis patient was resistant to chloramphenicol, erythromycin, streptomycin and tetracycline (Charpentier et al., 1999).

These antibiotics have been increasingly used as supplements in animal feed, as growth promoters and for the treatment of human disease (Adzitey et al., 2013). Some common antibiotics, such as ampicillin, that are commonly used to treat clinical listeriosis show a pronounced drug-resistance phenomenon in L. monocytogenes strains. In recent years, with the extensive use and abuse of antibiotics, multi-drug-resistant strains have been detected in a variety of food samples (Ling et al., 2006). These findings confirm that the prevalence of antibiotic resistance in L. monocytogenes might be increasing (Chen et al., 2014).

Conclusion
The findings of this study revealed a relatively high prevalence of virulent L. monocytogenes in frozen food in Egypt, which could potentially cause human disease. It is therefore necessary to take precautions in food factories, and periodical inspection must be performed on frozen food, which would be valuable in preventing human infection through the consumption of this kind of food. All isolates recovered in this study were multi-drug resistant to most available antimicrobial agents. This study is a full-scale, systematic investigation of the prevalence of L. monocytogenes in frozen foods in Egypt and of the contamination of these foods, and it provides baseline information for the Egyptian regulatory authorities to allow the formulation of a regulatory framework for controlling L. monocytogenes and for improving the microbiological safety of frozen foods.

Figure 1. Distribution of isolated L. monocytogenes among vegetable samples.
Figure 3. Percentage of sensitivity and resistance among the 47 L. monocytogenes isolates.
Table 1. Substrates and reactions in the Microbact 12L system used to identify Listeria monocytogenes.
Multiobjective Reptile Search Algorithm Based Effective Image Deblurring and Restoration

Images are frequently degraded by blurring and by data loss caused by sampling and noise. Images become blurred because of object movement in the scene, atmospheric distortions, and optical aberrations. The main objective of image restoration is to estimate the original image from the corrupted data. To address this issue, the multiobjective reptile search algorithm is proposed for performing effective image deblurring and restoration (MORSA-IDR). The proposed MORSA is used in two different processes: threshold calculation and kernel parameter calculation. The threshold values are used for detecting and replacing noisy pixels with the help of a deep residual network, and kernel estimation is performed for deblurring the images. The main objective of the proposed MORSA-IDR is to enhance the deblurring process for recovering low-level contextual information. MORSA-IDR is evaluated using the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). Existing approaches, namely the enhanced local maximum intensity (ELMI) prior and deep unrolling for blind deblurring (DUBLID), are used to evaluate MORSA-IDR. The PSNR of MORSA-IDR for image 6 is 30.98 dB, which is higher than that of ELMI and DUBLID.

I. INTRODUCTION
The basis of various image processing applications is the collection of data in the form of digital images. High standards of image processing are essential in numerous science and engineering fields, whether seen from the perspective of human or machine vision. Precise and reliable data collection is key to enhancing all image types with improved quality [1][2][3]. Images captured in low-light circumstances suffer degradations such as high noise, low brightness, and low contrast. The degraded images create difficulties in essential tasks such as object detection, semantic segmentation, and object tracking. Hence, it is essential to develop an appropriate image enhancement approach to obtain an image of enhanced quality from degraded inputs [4][5][6]. On the other hand, camera movement or lens defocusing also causes blurred images [7, 8]. The profile of the blur is mainly based on the intensity distribution, which is related to the spreading of point images in the blurred area [9].
Image restoration and representation are considered two challenging tasks in computer vision. Image restoration is the process of reconstructing a high-quality image from degraded versions, e.g., blurry and noisy ones. In image representation, sparse coding enables an effective representation of signals with only a few active elements [10]. Image restoration is also referred to as image inpainting, where the automatic restoration of a damaged image region is accomplished such that the image appears natural and a person not familiar with the original image cannot notice the restoration [11]. Image deblurring aims to obtain a sharp image by eliminating the blur from a degraded image. Image deblurring is a common yet highly necessary step in the computer vision field [12][13][14]. Low frame rates/low shutter speeds, object motion, and camera shake affect image/video quality, resulting in information loss. Eliminating such blurring restores the image for use in numerous applications such as moving object segmentation, text recognition, facial detection, and so on [15]. Image restoration is highly complex to solve with a single result because of the ill-posed character of the problem [16]. The aforementioned limitations are the motivation for this research to perform effective image deblurring and restoration.

The contributions are summarized as follows:
• The MORSA is used in this research for performing effective threshold and kernel parameter computations. The reptile search algorithm (RSA) is chosen mainly because of its effective equilibrium between exploitation and exploration.
• The threshold from MORSA is used to perform noise removal with the DRN, where noisy pixels are identified and replaced with new pixels. Further, kernel estimation along with MORSA is performed to enhance the deblurring process.

The paper is organized as follows: Section II gives the existing research related to the image restoration and deblurring process. Image deblurring and restoration using MORSA are detailed in Section III. The results of the proposed method are given in Section IV. The discussion and conclusion of this research are given in Sections V and VI.

II. RELATED WORK
The existing research on image deblurring and restoration is summarized in the following section.

Sadok et al. [17] developed a regularized dispersion particle filter (RDPF) to accomplish restoration. The developed RDPF depends on the hidden Markov model (HMM) and the utilization of the exponential dispersion unit and the expectation maximization (EM) approach. The EM and Newton-Raphson approaches were used to calculate the unknown noise variance and dispersion parameters. The extended EM developed in this work was used to deal with non-Gaussian noise. On the other hand, the developed RDPF required a large number of iterations to restore the image.

Malik et al. [18] presented a self-operational neural network (Self-ONN) for handling image restoration issues. Self-ONNs have generative neurons that can synthesize the nodal operator by leveraging Taylor polynomials. The Self-ONN offers an optimal balance between the number of parameters and denoising performance compared with convolutional networks. However, denoising with the Self-ONN mainly depends on the weight values, and the performance degrades when the weights in the network are small.
Hu et al. [19] developed an enhanced local maximum intensity (ELMI) prior for deblurring the image. ELMI is the combination of the local maximum gradient (LMG) prior and the local maximum intensity (LMI) prior. ELMI was motivated by the principle that the high values of local patch pixels and gradients are reduced by the blurring process. The integration of LMG into LMI was used to enhance the latent image as well as to aid kernel estimation. However, the developed ELMI incorporated a large number of nonlinear operations while deblurring the image.

Li et al. [20] presented an interpretable neural network structure, namely the deep unrolling for blind deblurring (DUBLID) approach. DUBLID depends on recasting a generalized total-variation-regularized method into a neural network and optimizing its parameters through custom backpropagation. The developed DUBLID has the advantage of interpretability, and it recovered kernels similar to the ground truth. DUBLID would need to incorporate noisy-pixel discovery to improve its deblurring performance; moreover, extra hardware support such as a graphics processing unit (GPU) was required for a faster deblurring process.

Eqtedaei, A. and Ahmadyfard, A. [21] developed a multiscale approach based on the maximum a posteriori (MAP) framework to perform image motion deblurring. In this MAP-based approach, the blurry image is represented at various scales, and k-means clustering is used to segment each scale of the image. For each scale, the blur kernel is computed by utilizing the image data at dominant edges. Starting from the coarser levels and proceeding in a coarse-to-fine manner, the blur kernel is computed at the pyramid's finest level. The developed MAP-based approach does not require complex considerations for estimating the intermediate latent image; however, the time consumption is high when the sharp image is recovered at the pyramid's finest level.

Zhao et al. [22] presented a context-aware multiscale convolutional neural network for image deblurring, namely CDMC-Net. Two components, a multiscale network and cross-stage feature aggregation (CSFA), were developed for restoring latent sharp images, where CSFA is used for improving the flow of information. The multiscale blurry images are processed in a coarse-to-fine manner, and multistrip feature extraction is used to obtain long-range context information in various scenarios. The developed network failed to deblur low-light images because strong image edges were mistakenly considered as structural edges.

The literature survey along with the advantages and limitations is given in Table I.

Table I. Summary of related work.
- Sadok et al. [17]: restoration of images using the RDPF filter. Advantage: non-Gaussian noise handled by the extended EM in RDPF. Limitation: a large number of iterations required to restore the image.
- Malik et al. [18]: Self-ONN with generative neurons for image restoration. Advantage: trade-off between the number of parameters and denoising performance. Limitation: the weight values of the Self-ONN decide the denoising performance during restoration.
- Hu et al. [19]: ELMI prior, combining the LMG and LMI priors, for deblurring. Advantage: latent image improved by integrating LMG and LMI. Limitation: a large number of nonlinear operations required while deblurring.
- Li et al. [20]: DUBLID interpretable unrolled network. Advantage: recovers kernels similar to the ground truth thanks to its interpretability. Limitation: extra hardware support such as a GPU required for a faster deblurring process.
- Eqtedaei and Ahmadyfard [21]: multiscale MAP-based motion deblurring. Advantage: no complex considerations required for estimating the intermediate latent image. Limitation: high time consumption when the sharp image is recovered at the pyramid's finest level.
- Zhao et al. [22]: context-aware multiscale CNN (CDMC-Net) for deblurring. Advantage: context information from different scenarios obtained by multistrip feature extraction. Limitation: fails to deblur low-light images because strong edges are mistakenly considered as structural edges.

The limitations found in the related work are inefficient deblurring for low-light images and high time consumption for sharp images. Noisy-pixel discovery is essential for effective deblurring of the images. The DRN is used in this research to identify the noisy pixels. Next, the MORSA is used in two different processes: threshold calculation for noise removal and kernel estimation. After computing the threshold values, the noisy pixels are deblurred by using kernel estimation. Further, the salient-edge information is used to estimate the noisy kernel in less time for sharp images.
III. PROPOSED METHOD
The MORSA is developed for performing image restoration and deblurring to improve the PSNR. In general, a metaheuristic algorithm has a good learning strategy, so it is considered for the automatic selection of the optimal threshold instead of a manual calculation. Therefore, this research uses the MORSA to ensure the selection of the best threshold values. The MORSA is used in two different stages: threshold calculation for noise removal and kernel estimation. The threshold computation using MORSA is used to effectively discover and remove the noisy pixels using the DRN. Further, MORSA is also used in the kernel estimation to deblur the image by computing the optimal kernel parameters. The block diagram of the proposed method is shown in Fig. 1.

A. DATA ACQUISITION AND DISCOVERY OF NOISY PIXELS USING DRN
Consider the image obtained from the database to be H with dimensions U × V; it is given as input to the noisy-pixel identification. The recording process creates different unwanted effects, such as blurring and noise, in images observed in various situations. The pixel location of the input image H is represented as H(u,v). The noisy pixels are discovered by using a deep residual network (DRN), which has various layers such as residual blocks, convolutional (Conv) layers, intermediate pooling, and a linear classifier. The steps processed in the DRN are as follows:
• The computation of the Conv layer is expressed in equations (1) and (2), which take the form of a 2-D cross-correlation,
$(H * X_Z)(u,v) = \sum_{a}\sum_{s} X_Z(a,s)\,H(u+a, v+s),$

where the recording coordinates are denoted as u and v; X denotes the E × E kernel matrix, which is also referred to as a learnable parameter; the kernel matrix's position indices are denoted as a and s; the kernel for input neuron Z is denoted as $X_Z$; and the cross-correlation operator is denoted as *.
• Next, a pooling layer is placed between the Conv layers and is used to reduce the spatial size of the feature map. Each slice and depth of the feature map is operated on by the selected average pooling layer.
• The DRN uses the rectified linear unit (ReLU), i.e., the nonlinear activation function $\mathrm{ReLU}(K) = \max(0, K)$ given in equation (3), where the feature is denoted as K.
• The input layers are normalized by the batch normalization function, which scales and shifts the activations to enhance training speed and reliability.
• Residual blocks provide shortcut links between the Conv layers. The input is added to the output when the input and output have the same dimensions; if the dimensions are different, a dimension-matching factor is used to match the input and output.
• The linear classifier discovers the noisy pixels from the input image once the processing of the Conv layers in the DRN is done.

B. STATISTICAL MODEL-BASED NOISE REMOVAL
A new pixel value is computed for each noisy pixel by using the statistical model. Equation (4) is used to eliminate a pixel that is flagged as noise:

$H_\gamma(u,v) = \begin{cases} H_c(u,v) & \text{if } B(u,v) = 1 \\ H(u,v) & \text{otherwise,} \end{cases}$   (4)

where the new pixel value is denoted as $H_c(u,v)$. The detected noise pixels are used for computing the new pixel value. Initially, a 3 × 3 window H(r,q) is generated around the noisy pixel and compared with the input image H(u,v). Matching pixel values are selected by evaluating H(r,q) against H(u,v), and their count is denoted as A; these matched pixels are utilized in the subsequent processing. The new pixel value is calculated when A is higher than the threshold $s_1$; otherwise, the noise pixel is replaced from the original input image within the chosen 3 × 3 window. The parameters $X_d$, R and J are used to create the new pixel value f(u,v) as shown in equation (5), where the initial value chosen for $X_d$ is denoted as J, and the absolute difference calculated from the adjacent pixel values, denoted R, is expressed in equation (6):

$R = \mathrm{abs}(H(r,q) - H(r+u-3, q+v-3)).$   (6)

$X_d$ is found over a prefixed window according to the absolute result; $X_d$ is initialized to zero, and d varies according to the prefixed window $\eta$, as shown in equation (7). The rounding of $X_d$ is computed as $\mathrm{round}(X_d/8)$ and denoted as b. For $X_d$, a sorting process is performed such that the preliminary value of $X_d$ is chosen as J, and equation (5) is rewritten as shown in equation (8). The value f(u,v) is compared with the threshold $s_2$; a function G is created when $f(u,v) > s_2$, as shown in equation (9), where the mean value of the adjacent pixels is denoted as $\chi$ and $c_1$ is fixed at 4. The new pixel is calculated according to equation (10) when $G > s_3$. The threshold parameters of the statistical model, $s_1$, $s_2$ and $s_3$, are calculated using the MORSA algorithm.
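Equations (5)-(10) are too garbled in this extraction to reproduce faithfully, so the following Python sketch is only a loose stand-in for the replacement step of Section B: it keeps the structure of equation (4) (replace only pixels flagged by the mask B) but substitutes a simple 3 × 3 neighbourhood median for the f(u,v)/G cascade. The mask B, the default threshold s1, and the median rule are all assumptions for illustration.

```python
import numpy as np

def remove_noise(H, B, s1=4):
    """H: 2-D image array; B: boolean mask of DRN-flagged noisy pixels (assumed given)."""
    B = B.astype(bool)
    out = H.astype(float).copy()
    U, V = H.shape
    for u in range(1, U - 1):
        for v in range(1, V - 1):
            if not B[u, v]:
                continue                              # eq. (4): clean pixels pass through
            win = out[u-1:u+2, v-1:v+2]               # 3x3 window around the noisy pixel
            clean = win[~B[u-1:u+2, v-1:v+2]]         # usable (non-noisy) neighbours
            if clean.size >= s1:                      # s1 plays the match-count role of A > s1
                out[u, v] = np.median(clean)          # stand-in for the new pixel H_c(u, v)
    return out
```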
C. THRESHOLD CALCULATION FOR NOISE REMOVAL USING MORSA

In this phase, MORSA is used for discovering the optimal threshold values for the statistical model. For an effective computation of new pixels, it is essential to select appropriate threshold values for the noisy pixels. In general, the conventional RSA is motivated by the encircling, hunting, and social behavior of crocodiles. The iterative process of MORSA and its objective function calculation are detailed in the following sections.

1) ITERATIVE PROCESS OF MORSA. In MORSA, exploration and exploitation are obtained from the motion of the crocodile while encircling the target prey. The parameters of MORSA are as follows: population size = 50, dimension = 3, and iterations = 150. There are two different kinds of motion, high walking and belly walking, based on the encircling actions during the exploration phase. Equation (11) represents the location update of MORSA. If iteration t is less than T/4, high walking is performed, where T defines the maximum iteration; otherwise, belly walking is done as per equation (11), where y_(i,j) is position j of solution i; the best solution is denoted as y*_j(t); r is a random value in [0,1]; the hunting parameter φ_(i,j)(t) is formulated in equation (12); μ is set to 0.1; RF_(i,j) is the reduce function, expressed in equation (13); r_1-r_4 are random numbers; y_(r1,j) is a random location; and the evolutionary sense ES(t) is expressed in equation (14):

RF_(i,j) = (y*_j(t) − y_(r2,j)) / (y*_j(t) + ε),   (13)

where ε is a small value and DV_(i,j) denotes the difference value expressed in equation (15), in which the average location is denoted as M(y_i) and expressed in equation (16). The lower and upper limits of MORSA are LB_j and UB_j, and α is fixed as 0.1.

Next, the exploitation, i.e., hunting, is accomplished using two approaches: hunting coordination and hunting cooperation. The MORSA performs hunting coordination when the condition 2T/4 < t ≤ 3T/4 is satisfied; otherwise, hunting cooperation takes place, as shown in equation (17). The iteration of MORSA is repeated until the specified iteration count is met or the best solution is obtained in the selection process. The computational complexity of MORSA is O(T × PS × Dim), where T defines the maximum iteration, the population size is denoted as PS, and the dimension is denoted as Dim. The objective function used to find the optimal threshold is derived in the following section.

2) OBJECTIVE FUNCTION OF MORSA FOR NOISE REMOVAL. The optimal threshold for noise removal is chosen according to the objective function expressed in equation (18). The solution with the lower value is selected as the optimal solution, i.e., the optimal threshold values for the statistical model, where Obj1 denotes the objective function for noise removal; the hyperparameter is denoted as τ; the first term denotes the generative subnetwork used to obtain an enhanced output that is close to images with higher contrast; and the second term offers an enhanced outcome that is indistinguishable from images with higher contrast. The derived fitness function is used to find the optimal thresholds s_1, s_2, and s_3 for the statistical model, which are then used to perform noise removal. The pseudo code for the MORSA-based threshold calculation is shown in Algorithm 1.
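A structural sketch of the search loop described above is given below. The population size (50), dimension (3, for s_1-s_3), and iteration count (150) follow the paper; the per-phase update formulas, however, are simplified stand-ins in the style of the standard RSA, since equations (11)-(17) are not fully reproduced in the extracted text.

```python
# Structural sketch of the MORSA loop: four phases split by iteration quarter.
# Phase update expressions are simplified/assumed, not the paper's exact Eqs. 11-17.
import numpy as np

def morsa_minimize(obj, lb, ub, pop_size=50, dim=3, T=150, mu=0.1, eps=1e-10):
    rng = np.random.default_rng(0)
    y = rng.uniform(lb, ub, size=(pop_size, dim))      # candidate threshold vectors
    fit = np.apply_along_axis(obj, 1, y)
    best = y[fit.argmin()].copy()
    for t in range(1, T + 1):
        ES = 2.0 * rng.uniform(-1, 1) * (1 - t / T)    # evolutionary sense (Eq. 14 analogue)
        for i in range(pop_size):
            r = rng.integers(pop_size)
            RF = (best - y[r]) / (best + eps)          # reduce function (Eq. 13)
            if t <= T / 4:                             # exploration: high walking
                step = -mu * rng.random() * best - rng.random() * RF
            elif t <= 2 * T / 4:                       # exploration: belly walking
                step = best * y[r] * ES * rng.random()
            elif t <= 3 * T / 4:                       # exploitation: hunting coordination
                step = best * rng.random() * ES
            else:                                      # exploitation: hunting cooperation
                step = best * eps - rng.random() * RF
            cand = np.clip(y[i] + step, lb, ub)
            f = obj(cand)
            if f < fit[i]:                             # greedy selection step
                y[i], fit[i] = cand, f
        best = y[fit.argmin()].copy()
    return best, fit.min()
```

A hypothetical call would pass Obj1 as `obj` with bounds on s_1-s_3, e.g. `morsa_minimize(obj1, lb=0.0, ub=255.0)`.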
D. KERNEL ESTIMATION-BASED IMAGE DEBLURRING

After removing the noisy pixels, the image is further processed by image deblurring, which is accomplished by kernel estimation. The estimation of the kernel is represented in equation (19) according to the hyper-Laplacian model. Equation (19) is used to preserve sparsity; however, it does not capture the blur kernel's continuity. The noisy kernel is estimated using the salient edges ∇K. The term N(g), expressed in equation (20), is used to control the gradients to preserve the kernel continuity, where the number of pixels with nonzero gradients is denoted as N(g). Accordingly, the estimation of the kernel is written as shown in equation (21), where the parameter used for controlling the smoothness of g is denoted as σ. Equation (21) is modified as shown in equation (22). Further, it is minimized with the iteratively reweighted least squares method, as shown in equation (23). The parameters p and σ are also identified by using the same MORSA, and the objective function for kernel estimation is given in the following section.

E. OBJECTIVE FUNCTION FOR KERNEL ESTIMATION

The iterative process of kernel estimation using MORSA is similar to "Kernel Estimation-based Image Deblurring." A quadratic programming function is used to calculate the objective measure for creating a matrix. Consequently, the search agents are evaluated by utilizing the objective function for kernel estimation (Obj2), as expressed in equation (24), where the matrix is denoted as D and the transpose of the coefficient function is denoted as CF′.

IV. RESULTS AND DISCUSSION

This section provides a comparison of the proposed method with the existing methodologies. The proposed method is developed and executed in MATLAB R2020a, operated under 16 GB RAM and an i5 core processor. The proposed method is used to accomplish image representation and restoration by deblurring the images. Here, the performances are evaluated in terms of PSNR and SSIM, which are expressed in equations (25) and (26), where the maximum image pixel value is denoted as m_max; the mean square error is denoted as MSE; pixels are denoted as e and h; the mean pixel values are denoted as η_e and η_h; the pixel variances are denoted as ξ_e and ξ_h; the covariance of pixels is denoted as ξ_eh; and φ_1 and φ_2 are used for stabilization.

A. PERFORMANCE ANALYSIS

The sample images processed in this proposed method for performing image representation and restoration are shown in Fig. 2. The six images considered for the evaluation of MORSA-IDR are referred to as im1, im2, im3, im4, im5, and im6. These sample blurred images are processed by the proposed method to deblur the given input. For example, im6 shown in Fig. 2 is processed, and the deblurred output obtained using the proposed method is shown in Fig. 3. The PSNR and SSIM of the deblurred im6 are 30.98 dB and 0.93, respectively.

The fitness function graph for MORSA against different optimizations, namely particle swarm optimization (PSO) and Grey wolf optimization (GWO), is shown in Fig. 4. With the objective function considered for noise removal, MORSA converges faster than PSO and GWO.
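For reference, the two evaluation metrics of equations (25)-(26) can be computed with scikit-image as sketched below. The file names are hypothetical placeholders standing in for the paper's im1-im6 test set.

```python
# PSNR and SSIM (Eqs. 25-26) computed with scikit-image for a restored image.
# File names are hypothetical placeholders for the paper's test images.
from skimage import io, img_as_float
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = img_as_float(io.imread("im6_sharp.png", as_gray=True))
restored = img_as_float(io.imread("im6_restored.png", as_gray=True))

psnr = peak_signal_noise_ratio(reference, restored, data_range=1.0)
ssim = structural_similarity(reference, restored, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")  # paper reports 30.98 dB / 0.93 for im6
```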
The PSNR is analyzed for different optimization and deblurring models, as shown in Table II. Here, two different block sizes, 3 × 3 and 5 × 5, are considered for analyzing the restoration performance. The different optimizations include particle swarm optimization (PSO) and Grey wolf optimization (GWO), whereas the different deblurring models include a convolutional neural network (CNN) and U-Net. Further, the PSNR comparison for the different optimization methods is shown in Fig. 5. This analysis shows that MORSA and the proposed deblurring method provide better performance than the other approaches. For example, the PSNR of MORSA for im1 is 31.021 dB, whereas PSO obtains 28.578 dB and GWO obtains 29.593 dB. The developed MORSA provides better performance than PSO and GWO because of its effective equilibrium between the exploration and exploitation operations. Moreover, the kernel estimation computes a new pixel for each noisy pixel of the blurred image based on the threshold estimated by MORSA. Accordingly, this kernel estimation is used to perform effective deblurring of images, which further helps to improve the PSNR.

Similar to the PSNR analysis, the SSIM is also analyzed for the different optimization and deblurring models, as shown in Table III. Further, the SSIM graph for the different optimizations is shown in Fig. 6. From this analysis, it is found that the SSIM of the proposed MORSA and deblurring model is better than that of PSO, GWO, CNN, and U-Net. For example, the SSIM of MORSA for im1 is 0.881, whereas PSO obtains 0.678 and GWO obtains 0.793. The equilibrium between the exploration and exploitation searching operations of MORSA yields optimal threshold and kernel parameters; the MORSA therefore achieves higher structural similarity because of the effective identification of the threshold and kernel parameters for deblurring and restoring the input images.

The runtime, memory, and entropy analyses for the different block sizes, 3 × 3 and 5 × 5, are shown in Table IV. This analysis shows that the runtime for the 3 × 3 block size varies between 6.18 s and 9.04 s, whereas for 5 × 5 it varies between 6.34 s and 9.11 s. On the other hand, the memory used during the simulation varies from 3.04 KB to 5.88 KB for 3 × 3, while 3.81 KB to 7.22 KB is used for 5 × 5. Further, the entropy for the 3 × 3 block size varies between 6.44 and 7.58, whereas for 5 × 5 it varies between 5.97 and 6.84.

B. COMPARATIVE ANALYSIS

Existing research works, namely ELMI [19], DUBLID [20], and MAP [21], are used to compare the MORSA-IDR method. The ELMI [19], DUBLID [20], and MAP [21] processed the im6 shown in Fig. 2, so the comparison is done for the same image, as shown in Table V. Further, the graphical illustration of the PSNR for MORSA-IDR against ELMI [19] and DUBLID [20] is shown in Fig. 7.
This comparison depicts that the MORSA-IDR outperforms the ELMI [19], DUBLID [20], and MAP [21]. For example, the PSNR of MORSA-IDR for im6 is 30.98 dB, whereas the ELMI [19] obtains 30.45 dB and DUBLID [20] obtains 29.83 dB. The ELMI [19] and DUBLID [20] would have to incorporate noisy-pixel discovery to further enhance their deblurring performance. Due to the utilization of a GPU, the runtime of DUBLID [20] is lower than that of the proposed MORSA-IDR. The main goal of the MORSA-IDR, however, is to increase the PSNR and SSIM of restored images, and the MORSA-IDR achieves high PSNR and SSIM with a reasonable runtime when compared to DUBLID [20]. The combination of noisy-pixel discovery using the DRN and kernel estimation along with MORSA helps to enhance the deblurring and restoration performance.

V. DISCUSSION

This section provides a brief discussion of the results obtained from the MORSA-IDR for image deblurring and restoration. First, the results of MORSA-IDR are compared with different optimization and deblurring models. The results show that the MORSA-IDR achieves better PSNR and SSIM than the PSO, GWO, CNN, and U-Net approaches. For example, the PSNR of MORSA-IDR is 30.668 dB, which is high when compared with PSO, GWO, CNN, and U-Net. Further, the MORSA-IDR is compared with the ELMI [19], DUBLID [20], and MAP [21] in the comparative analysis section and outperforms all three. For example, the PSNR of MORSA-IDR for im6 is 30.98 dB, which is higher than that of the ELMI [19] and DUBLID [20]. In this research, the DRN-based noisy-pixel discovery and MORSA-based kernel estimation are used to enhance the image deblurring and restoration performance. The MORSA-IDR works well for unstructured and low-light images during image deblurring and restoration. However, when MORSA-IDR processes highly unstructured images, the PSNR and SSIM measures are strongly affected.

Algorithm 1: MORSA-based threshold calculation
Input: Noisy central pixel
Output: Threshold values computed by the proposed MORSA
  Initialize the population
  Evaluate the fitness function
  While (the end criterion is not satisfied)
    For each member of the population
      Update the solution
    End for
    Evaluate the fitness function
    Find the best solution
    Update the population
  End while
  Return the best solution

Table II. PSNR analysis of the proposed method.
Table III. SSIM analysis of the proposed method.
Fig. 6. SSIM graph for different optimization methods.
Table IV. Runtime, memory, and entropy analysis (columns: Block size, Images, Runtime (s), Memory (KB), Entropy).
Table V. Comparative analysis for MORSA-IDR.
2023-07-11T00:49:32.675Z
2023-06-17T00:00:00.000
{ "year": 2023, "sha1": "8d2bbfd6c2d11a816aa40a755002e49442217a04", "oa_license": "CCBY", "oa_url": "https://ojs.istp-press.com/jait/article/download/204/229", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "27d99273f914a38d5f707fc25268ad8f2c791273", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
125650657
pes2o/s2orc
v3-fos-license
The Thermal Encroachment of Microwave Heating with Nano Ferro Fluids Injection on Heavy Oil Deposits

Heavy oil demands more energy for its lifting to the surface facilities. A critical parameter that can be altered to enhance production from the reservoir is the viscosity, and lowering oil viscosity is predominantly achieved by thermal methods. This study investigated the thermal encroachment in sand pack layers: a simulated heavy oil reservoir was heated by a microwave stack, and the sand packs contained mixtures of 22 °API Indonesian heavy crude, nano-ferro fluid Fe2O3, and saturated brines. A wave guide was used to focus the microwave radiation into the sand bed. The experimental results showed that microwave heating with a maximum output power of 900 watts and Fe2O3 as the nano particles, operating at a frequency of 2.45 GHz, reduces the oil viscosity from 4,412.11 cP at its pour point of 51 °C to 134.24 cP at 90 °C. Thermal heating with nano-ferro fluids decreased the viscosity of the heavy oil, making it easier to flow. The temperature increase is directly proportional to the output power and the nano-ferro concentration.

Introduction

The abundant heavy oil resources can be one of the alternative solutions to fulfill the world's energy demand. Commonly, heavy and extra-heavy oils are difficult to flow naturally to the wellbore; therefore, they demand more energy for their production. A critical parameter that can be altered to enhance production from these reservoirs is viscosity. Lowering oil viscosity is predominantly achieved by thermal methods (Bera & Babadagli, 2015). Nowadays, many conventional methods have been applied to decrease heavy oil viscosity, such as steam injection, hot water injection, or gas injection. However, some of those methods have limitations. For instance, steam injection can only be used in shallow reservoirs and is not permitted offshore, and it requires an abundant brine supply and a hot water system. Other methods become ineffective due to corrosion problems, significant heat loss, and economic criteria. Therefore, the development of a heating concept as an alternative to drain additional reserves of heavy crudes is needed. A thermal method that has recently attracted many researchers is electromagnetic heating.

Previous research on this heating concept was conducted by Chakma & Jha (1992), who showed that electromagnetic (EM) heating is an effective way to introduce energy to the reservoir in a controlled manner and that this energy can be directed into a specific region. Carrizales, et al. (2008) introduced EM heating referring to Radio Frequency (RF) or Microwave (MW) heating, where heating is produced by the absorption of electromagnetic energy by the polar molecules in the formation. Pramana, et al. (2012) found that the combination of resistive heating and nano-ferro fluid injection directly increases the temperature distribution.

In this research, the advanced technology used to decrease the viscosity of the heavy fluid is the electromagnetic microwave method, with nano-ferro fluid Fe2O3 as a stimulant injection, to achieve a low heavy-oil viscosity. Microwave thermal heating was explored as an alternative solution for heavy oil drainage, and sand packs and artificial cores were made to simulate oil deposit sediments. Multiphase fluids (nano-ferro fluid, saturated brines, and heavy crude) were then saturated into those simulated reservoir samples. The microwave thermal heating was limited to 90 °C, and the heating exposure was recorded every 20 s.
Materials

The heavy oil originates from the Jatibarang formation, Indonesia. The samples were originally in solid condition and were liquefied at the laboratory in a beaker glass with boiling water at 100 °C. The oil viscosity, gravity, and pour point were measured using a water bath viscometer, a pycnometer, and a pour point tube, respectively.

The powder nano particles (Fe2O3) supplied by Aldrich Chemistry have a size of ≤ 50 nm. The sand pack container was made from a modified cylindrical Pyrex glass tube, 9 cm in height and 8.56 cm in inside diameter, equipped with a cone-shaped lid. This sand pack mimics the oil reservoir to simulate the reservoir layers. The sandstone grain size was 45-50 mesh, supplied by the geology laboratory of Institut Teknologi Bandung. Table 1 presents the physical properties of the sand pack and the fluids used in this study.

Experimental Procedures

The nano-ferro fluid was made by mixing hematite powder and brine solution in a sonicator for 20 minutes. In this research, the nano-ferro fluid concentrations were 10 and 14 ppm to minimize aggregation (Santoso, et al., 2016). The experiment used an alternating current to generate the supply voltage. The voltage was then stepped up by a transformer before it passed into the capacitor. The magnetron and microwave antenna transformed the electrical energy into microwave heating. The maximum input power of this arrangement was 1,300 watts, with a maximum output power of 900 watts. The magnetron operates at a frequency of 2.45 GHz and at various power levels, namely 900, 792, 657, and 468 watts. The thermometers recorded the temperature encroachment in the sand pack every 20 s. Four thermometers were inserted through the Pyrex wall, 2 cm apart, measuring the temperature changes occurring inside the sand pack. The lowest thermometer was tagged as Point-1, continuing upward with Point-2, Point-3, and Point-4. The observed temperature was limited to 90 °C because water evaporation might occur above 90 °C (Fig. 1).

In the heating process, two sand packs with different nano-ferro concentrations of 10 ppm and 14 ppm were used. A sand pack without nano ferro (0 wt.%) was used as the reference sample. Further, the samples were heated at 6 different output powers: 900, 792, 657, 468, 378, and 180 watts. Temperature changes were recorded every 20 s, starting at 25 °C until a point of measurement reached 90 °C.

Figure 1. Configuration of the microwave heating apparatus used in this study.
Results and Discussion

The study used nano-ferro fluid at concentrations of 10 ppm and 14 ppm. The 10 ppm nano-ferro fluid was slower in encroaching the heat up to 90 °C than the 14 ppm fluid. During the heating, only the thermometer at Point-1 could reach 90 °C; therefore, all the analyses use Point-1 as the reference point. Figures 2 and 3 show the temperature profiles of the heat encroachment when the apparatus operated at low and high power, 180 and 900 watts, respectively. At the low power of 180 watts, the temperature encroachment increased as the nano-ferro concentration was increased. A similar trend also appeared at the high power of 900 watts. The temperature changes affected the heavy oil viscosity: with this heating method and the microwave power tuned at 180 watts, the heavy oil viscosity decreased from 4,412.11 cP at its pour point of 51 °C to 134.24 cP at 90 °C (Fig. 5).

Conclusions

The study has demonstrated the application of microwave heating to reduce the heavy oil viscosity. The experimental results show that the heating rate is directly proportional to the output power of the microwave and the nano-ferro concentration. The threshold concentration of nano-ferro Fe2O3 was 14 ppm, which was determined when the microwave was tuned at the low power of 180 watts. The experimental results also show that, at a given microwave power, the time to reach a set temperature decreased as the concentration of nano-ferro Fe2O3 was increased. Further, the increase of temperature also reduced the oil viscosity, which could increase the oil production rate. Furthermore, the increase of temperature could also affect the heavy oil recovery of a reservoir.

Figure 2. Temperature versus time profile at low microwave power.
Table 1. Physical properties of the sand pack and the fluids used in this study.
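As an illustration of the viscosity-temperature behavior reported above, the sketch below fits a two-parameter Andrade/Arrhenius-type model, μ(T) = A·exp(B/T), through the two reported data points and interpolates between them. The paper does not itself fit such a model, so both the model form and the interpolated values are assumptions.

```python
# Hedged illustration: Andrade/Arrhenius-type fit mu(T) = A * exp(B / T)
# through the two reported viscosity points (the paper does not fit this model).
import math

T1, mu1 = 51.0 + 273.15, 4412.11   # pour point condition (K, cP)
T2, mu2 = 90.0 + 273.15, 134.24    # after microwave heating (K, cP)

B = math.log(mu1 / mu2) / (1.0 / T1 - 1.0 / T2)  # activation-like constant (K)
A = mu1 * math.exp(-B / T1)                      # pre-exponential factor (cP)

def viscosity(T_celsius):
    """Interpolated viscosity (cP) at a given temperature (deg C)."""
    return A * math.exp(B / (T_celsius + 273.15))

for T in (51, 60, 70, 80, 90):
    print(f"{T:3d} C -> {viscosity(T):9.2f} cP")
```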
2018-12-27T11:52:35.867Z
2018-08-08T00:00:00.000
{ "year": 2018, "sha1": "1a4ac829683208212c6498fe1ae19774cde8d623", "oa_license": "CCBY", "oa_url": "https://ccsenet.org/journal/index.php/mas/article/download/75976/42627", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1a4ac829683208212c6498fe1ae19774cde8d623", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Mathematics" ] }
149208969
pes2o/s2orc
v3-fos-license
Learning Approach as Predictor of Students' Epistemological Development in the Framework of Self-Authorship Theory

Past studies have found that an individual's epistemological development is predicted by learning that is meaningful to the learner. The current research addresses whether a deep learning approach can predict students' epistemological ability (self-authorship, defined as the internal capacity to construct and evaluate knowledge claims, to comprehend the nature of contextual knowledge, and to have independence in the acquisition of knowledge). The researchers hypothesized that the deeper the learning approach adopted by students, the higher their self-authorship; conversely, the more students utilize a surface approach to learning, the lower their self-authorship. A total of 346 students enrolled in a university in Indonesia participated in the study. The results supported both hypotheses, and we discuss the role of cognitive dispositions in the development of epistemological ability.

Introduction

The essence of higher education is to create 'thinkers', defined as those who possess self-reliance in thinking and a commitment to the search for the truth of knowledge (see Hedges, 2009). Being self-sufficient in producing knowledge is believed to be a common goal that individuals must possess upon college graduation in the 21st century (Baxter Magolda, 2004b, 2004c, 2010; Meszaros, 2007). However, students' dependence on authority in the context of the search for knowledge has become a common phenomenon in universities in Indonesia.

Students' progression from absolute dependence on authority figures to gaining independence in constructing knowledge has been extensively studied within the framework of self-authorship theory (see Baxter Magolda, 2001b; King, 2010; Kegan, 1994), wherein such progression has been investigated since Perry's (1970) study of epistemological development and its various trajectories. Self-authorship is a term used to label an adult's stage of accomplishment at the peak of epistemological development (see Pizzolato, Hicklen, Brown, & Chaudhari, 2009).

Students with a more advanced epistemological development are objective, have more elaborate reading comprehension, are skilled information seekers, have a disposition for seeking the truth, and possess academic honesty (Valanides & Angeli, 2008; Bråten, 2008; Bråten, Strømsø, & Samuelstuen, 2005). They are able to think critically, to review theories, and to evaluate arguments (Kuhn, 1991; Kuhn, Cheney, & Weinstock, 2000). It is only natural, then, that self-authorship is recognized as a benchmark of student achievement, one that has been shown to correspond to high academic achievement in samples of students in America and Africa (Strayhorn, 2014; Pizzolato, et al., 2009). The question is: in what way do students learn in order to achieve this peak of epistemological development?

Most of the theories in the research on epistemological development state that the meaning-making process (a contemplative reflection that challenges what students believe) is a crucial component in the progression of epistemological development (see provocative moment and dissonance; Pizzolato, 2005; Pizzolato et al., 2009; Baxter Magolda, 1999b; Bekken & Marie, 2007).
The experience of dissonance in the meaning-making process encourages students to rethink the way they interpret the knowledge that they have accumulated, ultimately transitioning from a simplistic view of knowledge to being able to construct knowledge independently (Bekken & Marie, 2007). As an epistemological development, self-authorship emphasizes the meaning-making process occurring in students (see King, 2010; Baxter Magolda, 2001b). When students engage in an intensive meaning-making process, they are able to progress faster in the trajectory of epistemological development (LPM; Baxter Magolda, 2001b).

Literature from studies of learning approaches (Learning Process Complex; Biggs, 1987) reveals that students are able to use a combination of various strategies and motives when learning, whether or not it involves interpretation (deep approach versus surface approach) (see Biggs, 1987; Bowden & Marton, 1998; Entwisle, 2009). Students who study in depth utilize strategies to "interpret", namely by expanding the scope of their reading and connecting new knowledge to any relevant past knowledge. Such students are usually driven by intrinsic motivation, being compelled to actualize their interests and to become competent in specific academic subjects (see Biggs, 1987; Bowden & Marton, 1998; Entwisle, 2009).

Considering the close relationship between the deep learning approach and the progression of epistemological development, the present study focuses on learning approach as a predictor variable for epistemological development, more particularly within the framework of self-authorship theory. Therefore, the research question is as follows: Can a deep learning approach that involves meaning-making predict the achievement of a more advanced epistemological development (i.e., self-authorship)?

Despite the close connection to learning outcomes, explanations of how related variables like student learning approaches may support the progression of self-authorship have not been described in the empirical-scientific literature. To date, almost all of the literature on self-authorship has attempted to conceptualize self-authorship in the context of developmental stages (see Kegan, 1982, 1994; Baxter Magolda, 1999a, 1999c, 2001b; Pizzolato, 2005a). Empirically, other constructs directly associated with self-authorship have yet to be widely recognized (Pizzolato, 2005b), studied, and directly substantiated. Therefore, Baxter Magolda (2004a) stated that there is a need to identify the factors that influence students' progression in achieving self-authorship.

This study aims to explain the role of learning approach as a predictor of self-authorship epistemological development. Through this research, we attempt to explain why some students can achieve independence in the search for knowledge, displaying the tendency not to rely on authority as the determinant of truth. Self-authorship is defined as the internal capacity to construct and evaluate knowledge claims, to comprehend the nature of contextual knowledge, and to have independence in the acquisition of knowledge (Baxter Magolda, 2008; Pizzolato, 2007) in the context of higher education (Meszaros, 2007). It is a major theory explaining individual development independent of absolute reliance on authority toward the achievement of internal maturity (Kegan, 1982).
Moreover, the theory became increasingly complex upon Kegan's (1994) and Baxter Magolda's (1999a, 1999c, 2000, 2001b) identification of three main dimensions of self-authorship, namely the epistemological, intrapersonal, and interpersonal dimensions. In this decade, discourses on whether there exists a dominant dimension and whether the dimensions are intertwined have been much debated (see Baxter Magolda et al., 2010). This debate renders the claim of self-authorship as the representation of epistemological development less clear and precise in its measurement, due to the simultaneous measurement of other constructs (i.e., the interpersonal and intrapersonal dimensions) in the same bundle of measurement.

One argument maintains that the epistemological dimension is the basic and core representation of the attainment of self-authorship (King, 2010), emphasizing the development of an individual's ability to evaluate knowledge claims and belief systems in constructing knowledge (Baxter Magolda, 2001b). King (2010) states that epistemological development is the foundation that serves as a prerequisite of interpersonal and intrapersonal development; therefore, it needs to be prioritized (becoming "first among equals") compared to the other dimensions. Furthermore, by the same argument, an individual's lack of epistemological development guarantees low intrapersonal and interpersonal development. King (2010) suggests that individuals need to first possess a sophisticated thought complexity (epistemological) as a requirement for the ability to self-reflect (intrapersonal) and to understand how to meet the expectations of others (interpersonal). In addition, the measurement of self-authorship as an epistemological development has been performed in Pizzolato et al.'s (2009) study, in which self-authorship was conceptualized as a representation of epistemological development. In light of this, in the present study we view self-authorship as a representation of epistemological development. To stay consistent with King's (2010) assertion, we also plan to re-test the construct validity of self-authorship against epistemological attribute constructs that are closely linked to thought complexity, such as the Epistemological Beliefs Inventory (EBI; Schraw, Bendixen, & Dunkle, 2002) and the Need for Cognition Scale (NCS; see Cacioppo, Petty, & Kao, 1984).

Research about the student learning process is discussed in studies of learning approach (learning process complex) (see Biggs, 1987). In short, in his theory, Biggs (1987, 1999, 2001, 2012) explains that the learning process consists of a combination of different learning motives and strategies, or different 'learning approaches', including (1) deep learning (a combination of deep information processing and intrinsic motivation) and (2) surface learning (a combination of shallow information processing and extrinsic motivation). Furthermore, Biggs, Kember, and Leung (2001) claim that, psychometrically, the components of motivation and strategy can be adequately explained through the two aforementioned learning approach constructs without the need to involve the achieving approach. This is because students who use the achieving approach can use either the deep or the surface approach, depending on the demands of the task (Wilding & Andrews, 2006; Evans, Kirby, & Fabrigar, 2003). Following this argument, the achieving approach is not measured in the current research.
Students with a deep learning approach focus on learning outcomes and attach meaning to learning (Bowden & Marton, 1998; Entwisle, 2009), have no motive for finding shortcuts when faced with a task (Biggs, 2012), learn for the sake of learning ("learning for its own sake"), and are able to deal with uncertain information in the era of globalization (Barros, Monteiro, Nejmedinne, & Moreira, 2013). In contrast, students with a surface learning approach tend to use formulas they do not understand when solving problems (Bowden & Marton, 1998). Such students cannot deal with ambiguous information (Barros et al., 2013). They also learn for the mere sake of graduating, investing the minimal time and effort needed to learn (Yonker, 2011; Biggs, Kember, & Leung, 2001; Biggs, 1999).

The dynamics of the relationship between learning approach and self-authorship epistemological development originate from several prior research results which found that certain learning approaches tend to lead students to become independent, separating themselves from reliance on authority. Such results are implicit, in that the term 'epistemological development' was not expressed directly in the research results. Baeten, Dochy, Struyven, Parmentier, and Vanderbruggen's (2015) study on learning approach and instructional preference found that students who use deep learning tend to have a student-centered instructional preference and choose to actively construct knowledge independently through elaboration and cooperation. Conversely, students with a surface learning approach tend to opt for teacher-centered learning; they are passive and prefer to be guided by the instructor when learning. The results implicitly indicate that students with a deep learning approach are independent learners, while students who adopt a surface learning approach are more dependent upon authority. Further, Bliuc, Ellis, Goodyear, and Hendres (2011) revealed that students who engage in deep learning are highly involved in the learning communities of their universities. On the other hand, those with a surface learning approach demonstrate low participation in learning communities. Student involvement in active learning in an informal environment indicates that students with a deep learning approach are more likely to be active in the pursuit of knowledge and have more independence in learning.

The relationship between learning approach and self-authorship can also be determined from indicator similarities within learning. Students who use a deep learning approach and those who attain an advanced level of self-authorship development share a common indicator, namely having faith in their ability to reach their goals (goal-oriented) (Pizzolato, 2007; Pizzolato et al., 2009; Cazan & Indreica, 2014; Strayhorn, 2014). Students who achieve self-authorship and those who use a deep learning approach are equally driven by intrinsic motivation in their learning (see Biggs, 2001; Cazan & Indreica, 2014; Pizzolato et al., 2009). Reflective thought processes, in which students are aware of how their minds work, are also a common indicator shared among students who attain self-authorship and those who adopt a deep learning approach (Cazan & Indreica, 2014). Lastly, students with a deep learning approach focus on the meaning of what has been learned (Bowden & Marton, 1998; Entwisle, 2009). Self-authorship, in this case, centers on how students are able to attach meaning to what has been learned and to integrate knowledge with the internal self (Baxter Magolda, 2004b).
It can be concluded that the more students use a deep learning approach, the more likely they are to exhibit indicators common to self-authorship, such as reflective thinking, the ability to self-regulate, high self-efficacy, being driven by intrinsic motivation, and focusing learning on the meaning-making process. Therefore, we hypothesized that the deep learning approach is a positive predictor of self-authorship and that the surface learning approach is a negative predictor of self-authorship.

Methods

Participants and Procedures. The study sample comprised 220 actively enrolled Psychology students, excluding new students. The researchers undertook some preparations prior to data collection, including adapting the instruments, preparing informed consent forms, conducting a readability assessment of the instruments, making copies of the questionnaires, recruiting field researchers for data collection, and selecting e-books to present to participants as a reward for participation. A total of 500 questionnaire forms were distributed to active students in the Faculty of Psychology in one of the top universities in Indonesia. Out of the 346 questionnaires that were returned, 126 were excluded from the analysis: 96 were not thoroughly completed, while 30 questionnaires were discarded because the participants were observed to interact with other people during the survey period.

Materials. Self-authorship. Defined as the internal capacity to construct and evaluate knowledge claims, to understand the nature of contextual knowledge, and to be independent in the acquisition of knowledge. Self-authorship is measured by the total score of all self-authorship subscales contained in the Self-Authorship Survey (SAS) instrument. The SAS was originally developed by Pizzolato (2005b, 2007) and was used in previous research to measure individual epistemological development within an educational context (Pizzolato et al., 2009). Four subscales are measured in the instrument. First, the 9 items of Capacity for Autonomous Actions (items 1-9) measure the extent to which students feel they are not dependent upon others, such as not feeling pressured to do what others are doing. The 6 items of Problem Solving Orientation (items 10-15) assess whether students are capable of making a decision based on their own values, as well as their orientation to solve problems. The 6 items of Perceptions of Volitional Competence (items 16-21) evaluate how confident students are in planning their targets and in solving problems. The 6 items of Self-Regulation in Challenging Situations (items 22-27) quantify proficiency in self-regulation and persistence in achieving objectives when the unexpected happens. Each subscale of the instrument assesses one or more dimensions of self-authorship. Participants are asked to indicate, on a 5-point Likert scale ranging from 1 (disagree) to 5 (agree), the degree to which they agree with each item. In general, the SAS has good internal consistency (Pizzolato, 2005b, 2007). In the current study, the SAS was adapted into the Indonesian language; the reliability of the Indonesian version is .83. The higher the SAS score, the higher the epistemological capacity of self-authorship.
To test the construct validity of the SAS, tests for convergence were performed by correlating the total SAS score with each of the total scores of the epistemological attribute variables in the study, which include the Epistemological Beliefs Inventory (EBI; Schraw et al., 2002) and the Need for Cognition Scale (NCS; see Cacioppo et al., 1984).

Learning approach. Defined as students' tendency to learn deeply or on the surface (Biggs, 2012). The Revised Two-Factor Study Process Questionnaire (R-SPQ-2F) is a self-report instrument used to measure the deep learning approach and the surface learning approach (Biggs et al., 2001). A total of 20 items are included in the R-SPQ-2F: 10 items form the subscale measuring the surface approach, and 10 other items form the subscale measuring the deep approach. Participants are asked to indicate the extent to which they agree with each item in the inventory. Responses to items are measured on a 5-point Likert scale (1 = Disagree; 2 = Somewhat Disagree; 3 = Neutral; 4 = Somewhat Agree; 5 = Agree). The Cronbach alpha coefficients for the deep learning approach and the surface learning approach subscales are .78 and .74, respectively, suggesting that the Indonesian adaptation of the R-SPQ-2F has good internal consistency.

Results

Description of Participants. Participants (N = 220) were active students enrolled in the Faculty of Psychology at Universitas Indonesia, comprising 92 (41.8%) second-year students and 72 (32.7%) third-year students.

Generally, the results demonstrate that self-authorship is a valid construct for measuring students' epistemological aspect. The more developed a student's epistemological aspect (self-authorship score), the more the student possesses a great sense of curiosity towards knowledge (FOD & FOI), enjoys thinking (NCS), is skeptical towards myths in popular psychology (skepticism), and tends to hold the belief that knowledge is relative rather than certain (certain knowledge) and that knowledge is complex rather than simple (simple knowledge).

Hypothesis Testing. To test the predicted relationship between learning approach and self-authorship, we first performed a zero-order correlation analysis. The results confirmed the prediction that the deep learning approach is significantly correlated with self-authorship (r = 0.28, p < 0.01) (Table 2). Students who learn deeply and attach meaning to learning tend to be oriented towards problem solving (problem solving orientation, r = 0.42, p < 0.01) and confident in their ability to reach their target objectives through the plans that they have set (perception of volitional competence, r = 0.34, p < 0.01). The surface learning approach was found to be negatively correlated with self-authorship (r = -0.38, p < 0.01). That is, students who learn 'on the mere surface' tend to be dependent on others in making decisions (capacity for autonomous action, r = -0.37, p < 0.01), not be oriented towards problem solving (problem solving orientation, r = -0.28, p < 0.01), and be unable to self-regulate when faced with unexpected situations (self-regulation in challenging situations, r = -0.32, p < 0.01). A negative correlation between the deep approach and the surface approach was additionally discovered (r = -0.23, p < 0.01).

A zero-order correlation coefficient only explains the strength of the relationship between two variables without thoroughly conveying the magnitude of the variance contribution of several variables to self-authorship. The analysis also does not explain how some predictors are able to account for a higher proportion of the variance of the outcome compared to other predictors.
To overcome the above limitation of the analysis, a multiple regression was performed. The results of the multiple regression with the forced entry method are presented in Table 3. As predicted, the deep learning approach is a significant positive predictor of self-authorship (β = 0.20, p < 0.001), while the surface learning approach is a significant negative predictor of self-authorship (β = -0.33, p < 0.001). Taken together, learning approach has a positive relationship (R = 0.43) with self-authorship, wherein the variance contribution of the predictors towards self-authorship is 18.3% (F(219) = 24.28, p < 0.001, Adj. R² = 0.17).

Discussion

This study aims to explain the role of learning approach type as a predictor of the achievement of self-authorship epistemological development. Learning approach consists of two types, namely the deep approach and the surface approach (Biggs, 2012). Epistemological development is explained using the theoretical framework of self-authorship epistemological development, which describes individual progression in achieving independence in learning and knowledge construction (Baxter Magolda, 2008). We hypothesized that the more students attach meaning to their learning process, the more likely they are to reach the peak of epistemological development. The results supported our hypothesis, as the deep learning approach and the surface learning approach were shown to be significant predictors of self-authorship.

The deep learning approach was indeed found to be a positive predictor of self-authorship: the more students use a deep learning approach, the higher their self-authorship. That is, individuals who attach meaning to learning, who learn for the sake of learning, and who pursue knowledge for the sake of knowledge itself characterize students who reach maturity in the construction of knowledge. This result is a novel discovery and conforms to the prediction of the researchers. The findings also support King's (2010) argument that self-authorship progresses in accordance with the complexity of the meaning-making process that occurs within an individual.

There are several limitations to the study, the first of which concerns the measurement of the deep learning approach. Firstly, there is evidence that the deep learning approach has a weak negative correlation with classroom learning behavior, while the surface learning approach is strongly negatively correlated with classroom learning behavior (Choy, O'Grady, & Rotgans, 2011). In that study's discussion, it is stated that items in the deep approach subscale of the R-SPQ-2F are too "philosophical in nature" and are therefore difficult to observe from classroom learning behavior, which contrasts with items in the surface approach subscale that directly measure classroom behavior, as they are more "behavioral in nature". Hence, we suggest that the measurement of learning approach be moved to the level of actual behaviors exhibited by students while learning in the classroom, so as to obtain a more coherent picture of the relationship between learning approach and self-authorship. Secondly, in the current study, the 'learning approach' construct is a combination of students' learning 'motivation' and 'strategy'. The combined measurement of motivation and strategy implies that the two constructs are not measured separately and are instead measured through a single composite 'approach' score.
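As a hedged illustration of the forced-entry regression reported above (self-authorship regressed simultaneously on the deep and surface approach scores), the following Python sketch uses pandas and statsmodels. The CSV file and column names are hypothetical, not from the study's materials; statsmodels returns unstandardized coefficients, so all variables would be z-scored first to reproduce the standardized betas.

```python
# Hedged sketch of the forced-entry multiple regression described in the text.
# File and column names ("study_data.csv", sas/deep/surface) are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("study_data.csv")             # expects columns: sas, deep, surface
df = (df - df.mean()) / df.std()               # z-score so coefficients are standardized betas

X = sm.add_constant(df[["deep", "surface"]])   # forced entry: both predictors at once
model = sm.OLS(df["sas"], X).fit()

print(model.summary())                         # betas, R-squared, F statistic
# The paper reports beta_deep = 0.20, beta_surface = -0.33, R = 0.43, adj. R^2 = 0.17
```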
It has been argued that students can use either the deep learning approach or the surface learning approach depending on task demands and time management (Wilding & Andrews, 2006; Evans, Kirby, & Fabrigar, 2003). Measuring motivation and strategy independently is assumed to allow for a more detailed alternative explanation regarding the role of learning approach as a predictor of self-authorship. Subsequent studies should therefore operationalize learning approach by separating motivation and strategy as independent constructs, while at the same time controlling the level of task demand and time management for students.

In the validation test of the Self-Authorship Survey (SAS) instrument, need for cognition was found to be the epistemological attribute variable that has the strongest correlation with self-authorship. Need for cognition is an individual's dispositional trait of liking thinking activities and enjoying complex thinking tasks. This is consistent with the claims of past studies that need for cognition is positively correlated with academic success (Olson, Camp, & Fuller, 1984; Petty & Jarvis, 1996; Tolentino, Curry, & Leak, 1990; Waters & Zakrajsek, 1990). Need for cognition is closely related to academic self-efficacy (Elias & Loomis, 2002), and academic self-efficacy is the strongest non-cognitive predictor of academic achievement (Richardson et al., 2012). Self-authorship has also been shown to have a strong relationship with academic self-efficacy (Strayhorn, 2014). Individuals who seek complex cognitive tasks seemingly have confidence in their ability (efficacy) to complete those tasks.

The relationship between self-authorship and need for cognition has never been discussed in prior research. Individuals with a thinking trait who enjoy thinking activities tend to progress faster towards the peak of epistemological development. This result is consistent with the argument proposed by King (2010), which states that cognitive complexity is a fundamental dimension of self-authorship epistemological development. Future studies attempting to predict the progression of self-authorship need to control for thinking disposition (need for cognition), due to the possibility that students with a natural inclination to enjoy thinking efforts are capable of reaching a higher thinking complexity (i.e., they have a high thinking complexity to begin with).

Self-authorship has been established as a benchmark for student outcomes (Baxter Magolda, 2004b; Meszaros, 2007), such as academic success (GPA) (Strayhorn, 2014; Pizzolato et al., 2009). The deep learning approach has also been claimed as a primary goal of higher education institutions (Biggs, 1999), and the meta-analysis of Richardson et al. (2012) demonstrated that learning approach consistently predicts academic success (GPA). Thus, claims of the objectives of higher education (i.e., attainment of self-authorship and the use of a deep learning approach) are shown to be in accordance with the degree of academic success (GPA). Yet several studies in other countries have yielded inconsistent findings with regard to the relationship between the deep learning approach and academic success. In Australia, Zeegers (2001) discovered that students are not compelled to use a deep learning approach in the classroom. Similarly, Diseth and Martinsen (2003) found that learning approach failed to predict the academic achievement of Psychology students in Norway.
Groves (2005) revealed that first-year students taught with a Problem-Based Learning (PBL) curriculum experienced a shift from the deep learning approach to the surface learning approach throughout the duration of their study in an institution. In the current research, academic success as indicated by GPA was not included. In Indonesian universities, the nature of how learning approach and self-authorship relate to academic success (GPA) remains to be seen. Therefore, future research needs to further clarify how the relationship between learning approach and self-authorship is associated with academic success in Indonesia.

Conclusion

The present study fills a gap in the research on higher education, more particularly pertaining to the relationship between learning approach and self-authorship. This research is the first to propose a structural model of self-authorship epistemological development with various epistemological attributes and cognitive dispositions taken into account, more specifically among students in Indonesian universities. In particular, the current study provides evidence that the use of a deep learning approach and the avoidance of a surface learning approach is a process experienced by students who attain the peak of self-authorship epistemological development.
2019-05-11T13:06:53.093Z
2017-08-16T00:00:00.000
{ "year": 2017, "sha1": "34b7f87c591303d64f841eee1015e3b243ecc885", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7454/mssh.v21i1.3500", "oa_status": "GOLD", "pdf_src": "Neliti", "pdf_hash": "230e3171e5beb7076116d65de8fd74ad0bc376d6", "s2fieldsofstudy": [ "Education", "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
204543114
pes2o/s2orc
v3-fos-license
Hydroxyurea alters circulating monocyte subsets and dampens its inflammatory potential in sickle cell anemia patients

Sickle cell anemia (SCA) is a hemolytic disease in which vaso-occlusion is an important pathophysiological mechanism. The treatment is based on hydroxyurea (HU), which decreases leukocyte counts and increases fetal hemoglobin synthesis. Different cell types are thought to contribute to vaso-occlusion; nevertheless, the role of monocyte subsets remains unclear. We investigated the frequencies of monocyte subsets in blood and their response to HU therapy, testing their ability to express pro-inflammatory molecules and tissue factor (TF). We identified major changes in monocyte subsets, with classical monocytes (CD14++CD16−) appearing highly frequent in patients who were not taking HU, whereas those with a patrolling phenotype (CD14dimCD16+) were enriched in individuals undergoing therapy. Additionally, HU decreased the production of TNF-α, IL-1β, IL-6, and IL-8, as well as TF, by LPS-activated monocytes. Likewise, the frequency of TF-expressing monocytes is increased in patients with previous vaso-occlusion. Moreover, activated monocytes expressing TF produced several pro-inflammatory cytokines simultaneously; such polyfunctional capacity was dramatically dampened by HU therapy. The frequency of the classical monocyte subset was positively correlated with the percentage of cytokine-producing cells upon LPS stimulation. These findings suggest that classical monocytes are the subset responsible for multiple pro-inflammatory cytokine production and possibly drive inflammation and vaso-occlusion in SCA, effects which are dampened by HU.

Sickle cell anemia (SCA) is a genetic disease associated with important alterations of the morphology and function of red blood cells (RBC), which cause a wide range of clinical manifestations linked to vascular injury and coagulation abnormalities 1. The SCA is characterized by homozygosity for hemoglobin S (HbS), and patients with this disease exhibit the most severe clinical forms 1. Of note, polymerization of HbS triggers biochemical and morphological changes in sickle erythrocytes, which interact with other erythrocytes, as well as with reticulocytes, leukocytes, platelets, and endothelial cells, leading to vaso-occlusive events (VOE) 1,2, the main pathophysiological mechanism underlying SCA. VOE is thought to be caused by at least three components: (i) activation of endothelial cells and leukocytes due to adherence of sickle erythrocytes; (ii) nitric oxide (NO) consumption by arginase and free hemoglobin as a result of intravascular hemolysis; and (iii) activation of coagulation cascades due to activation of the endothelium and leukocytes, which drive blood flow obstruction and eventually VOE 1,3. Understanding the mechanisms driving susceptibility to VOE is critical for optimization of clinical management and the development of new therapeutic approaches for SCA patients.

Monocytes play an important role in innate immune responses. These cells originate from a myeloid progenitor in the bone marrow and circulate in peripheral blood for approximately 2-3 days until they undergo apoptosis or migrate into the tissues, where they become macrophages and maintain innate immune surveillance 4,5. They represent a very versatile leukocyte population that is responsible for a wide range of activities involved in immune defense against pathogens and in the maintenance of immune tolerance as well as homeostasis 6.
Human monocyte subsets are characterized based on the dichotomous expression of the surface markers CD14 and CD16 into classical (CD14++CD16−), intermediate (CD14+CD16+), and non-classical or patrolling (CD14dimCD16+) monocytes 7. Such categorization is not stable, and it has been shown that monocytes can turn from one subset to another depending on the microenvironment 8. The diverse monocyte subsets exhibit distinct functions that can range from highly pro-inflammatory to immunosuppressant activities 9. The involvement of monocyte subsets in the pathogenesis of several pathological scenarios has been evaluated, ranging from infectious diseases such as HIV infection 10 and tuberculosis 11 to inflammatory diseases such as atherosclerosis 12 and myocardial infarction 13.

In SCA, activated monocytes were shown to be associated with vascular dysfunction through different mechanisms. During vaso-occlusive crisis, monocytes activate the endothelium by inducing nuclear factor-kappa B (NF-κB) translocation 14. In addition, direct contact with endothelial cells triggers upregulation of genes encoding adhesion molecules and cytokines 15, aside from the production of lipid mediators, adhesion molecules, and coagulation factors 2,16, which may contribute to VOE. Importantly, increased levels of pro-inflammatory cytokines in SCA seem to be a critical factor contributing to the onset of VOE. Elevated serum levels of TNF, IL-1β, IL-6, and IL-8 in SCA patients are correlated with endothelial cell activation and increased cell expression of vascular cell adhesion molecule-1 (VCAM-1) and intercellular adhesion molecule-1 (ICAM-1), as well as of soluble forms of these molecules 17-19. Moreover, monocytes are a main source of tissue factor (TF) 20, a critical molecule involved in the activation of the extrinsic coagulation cascade leading to thrombin generation 20,21. In a recent study, we demonstrated that TF-expressing monocytes are at the epicenter of chronic inflammation and persistent activation of coagulation in patients living with HIV 10. These cells produce multiple pro-inflammatory cytokines and are related to increased cardiovascular risk in HIV infection 10. In SCA, expansion of monocytes producing TF has been reported during VOE 22. The exact mechanisms by which TF-expressing monocytes may drive VOE and/or cardiovascular complications in SCA patients have not been completely described.

Pharmacological treatment of SCA patients with a severe clinical profile is based on hydroxyurea (HU) therapy, which has been associated with beneficial effects on the microvasculature and a decreased occurrence of VOE and other clinical complications 23. This drug exhibits cytostatic properties through the inhibition of ribonucleotide reductase, which stops cell division. Moreover, HU decreases neutrophil, monocyte, and reticulocyte counts in peripheral blood, as well as the expression of adhesion molecules and cytokines, while increasing the synthesis of fetal hemoglobin (HbF) 24. Considering the intricate mechanisms related to SCA pathogenesis, we aimed to investigate in detail the effect of HU therapy on circulating monocyte subsets and on their ability to express TF as well as pro-inflammatory cytokines upon activation in SCA patients. Furthermore, we tested the association between monocyte activation phenotypes and the occurrence of VOE.
Our findings indicate that HU therapy induces substantial changes in the frequency of monocyte subsets as well as in their capacity to promote inflammation and coagulation, which was associated with the occurrence of VOE in SCA. Collectively, our data suggest that HU treatment modulates the inflammatory response driven by monocytes.

Results

Impact of hydroxyurea therapy on laboratory parameters and clinical manifestations. The groups of participants were similar with regard to age and gender (Table S1). HU therapy was associated with improvement of most of the biochemical and hematological parameters, including increases in hemoglobin levels and hematocrit values, as well as reduction of LDH and AST concentrations. In addition, we observed a 2-fold increase in HbF levels and a decrease of HbS levels (Table S1). HU use was also associated with a decreased number of VOE, but no other change in clinical manifestations was noted in this specific study population (Table S2). The two patients undergoing HU therapy who experienced one episode of VOE had it six months prior to blood drawing and reported HU use for the last 6 years.

Characterization of monocyte subsets of SCA patients under hydroxyurea therapy. Monocyte counts were decreased in SCA patients undergoing HU therapy (Fig. 1A). We next performed multicolor flow cytometry assays to better define the effects of HU treatment on monocyte subsets. The experiments revealed that HU decreased the frequency of CD14++CD16- monocytes, while CD14dimCD16+ monocytes were increased compared with patients not undergoing HU therapy (Fig. 1B,C). No statistically significant difference was found in the frequency of the intermediate CD14+CD16+ subset between individuals taking or not taking HU. Altogether, these results suggest that HU induces substantial changes in monocyte subtypes in peripheral blood.

Modulation of cytokine production by monocytes driven by hydroxyurea. We tested the effect of HU on cytokine production by monocytes. In unstimulated conditions, the frequencies of monocytes expressing TNF-α, IL-1β or IL-6 were similar between the groups of patients taking or not taking HU (Fig. 2). Nevertheless, monocytes producing IL-8 were significantly expanded in patients not undergoing HU therapy (Fig. 2). Upon LPS challenge in vitro, monocytes were able to increase the production of TNF-α, IL-1β, IL-6 and IL-8 independently of the clinical group (Fig. 2). Importantly, HU use was associated with a decreased capacity to produce TNF-α, IL-1β or IL-6 relative to patients who were not under HU therapy (Fig. 2). Production of IL-8 was not affected by HU treatment. These effects of HU were not linked to differences in cell death before and after LPS stimulation (data not shown).

Figure 2. Hydroxyurea therapy negatively impacts production of pro-inflammatory cytokines by monocytes in response to LPS. PBMC from sickle cell anemia patients were incubated with 100 ng/mL LPS in vitro, and an intracellular cytokine staining assay was performed to test whether hydroxyurea treatment in vivo changes the capacity of monocytes to respond to LPS by producing TNF-α, IL-1β, IL-6 and IL-8. Data represent frequency of monocytes. HU group n = 17 and no HU group n = 20. At each experimental condition, the study groups were compared using the Mann-Whitney U test. *p < 0.05, ***p < 0.0001.

Figure 3. Sickle cell anemia-associated tissue factor production by monocytes in response to LPS is diminished by hydroxyurea treatment in vivo.
(A) PBMC from sickle cell anemia patients were incubated with 100 ng/mL LPS in vitro, and an intracellular staining assay was performed to test whether hydroxyurea treatment in vivo changes the capacity of monocytes to respond to LPS by producing tissue factor (TF). Data represent frequency of monocytes. HU group n = 17 and no HU group n = 20. At each experimental condition, the study groups were compared using the Mann-Whitney U test. **p < 0.01, ***p < 0.0001. (B) Mean Fluorescence Intensity (MFI) of TF expression by monocytes at the indicated experimental conditions. No statistically significant differences were observed. HU group n = 17 and no HU group n = 20. (C) Frequency of TF-expressing monocytes upon LPS stimulation, compared between SCA patients presenting or not presenting previous occurrence of vaso-occlusive events (VOE). VOE group n = 11 and no VOE group n = 26. The study groups were compared using the Mann-Whitney U test. *p < 0.05. (D) Receiver Operator Characteristics (ROC) curve analysis was employed to test whether the frequency of TF-expressing monocytes after LPS stimulation could discriminate patients with previous occurrence of VOE from those without, as a way to measure the strength of association. AUC, area under the curve. (E) Frequencies of TF-expressing monocyte subsets were compared between the indicated groups using the Mann-Whitney U test. HU group n = 17 and no HU group n = 20. *p < 0.05, **p < 0.01, ***p < 0.0001.

Effect of hydroxyurea treatment on tissue factor expression and vaso-occlusion events. Aside from producing pro-inflammatory cytokines upon stimulation, monocytes are also able to promote coagulation. Hence, we evaluated the production of TF, a central molecule involved in the activation of the coagulation cascade, in our in vitro system. We found that unstimulated cells from both clinical groups displayed a similar frequency of TF-expressing monocytes (Fig. 3A). Upon LPS-driven activation, the percentage of TF-expressing monocytes was dramatically increased in patients not undergoing HU treatment but remained unchanged in those using HU (Fig. 3A). We did not find differences in mean fluorescence intensity values between the clinical groups and experimental conditions, which indicates that, rather than interfering with the magnitude of protein production on a per-cell basis, HU affected the expansion of cells expressing TF.

TF-expressing monocytes are associated with vaso-occlusive events. Additional analyses revealed that activated TF-expressing monocytes were associated with previous occurrence of VOE (Fig. 3C). ROC and C-statistics analyses were used to evaluate the association between VOE and TF-expressing monocytes; the greater the area under the ROC curve (AUC), the better an increased frequency of TF+ monocytes discriminates patients who had VOE from those who had not. Patients with a previous history of VOE had an increased frequency of TF-expressing monocytes (Fig. 3D). This finding indicates that the frequency of TF-expressing monocytes may serve as a biomarker of VOE. Next, we evaluated the ability of the distinct monocyte subtypes to produce TF. Interestingly, in unstimulated cells, HU therapy was associated with a decreased frequency of TF-expressing CD14+CD16+ monocytes (Fig. 3E). However, LPS stimulation induced an increase in the frequency of TF-expressing CD14++CD16- monocytes in patients not undergoing HU treatment compared with those using HU (Fig. 3E). These results uncover a differential ability to induce TF expression among the distinct subsets of monocytes in SCA patients.

Capacity of monocytes to produce multiple inflammatory cytokines is affected by hydroxyurea.
We next examined the capacity of monocytes to produce multiple pro-inflammatory cytokines simultaneously upon LPS-driven activation in vitro. Upon stimulation, TF- monocytes from patients who were not taking HU predominantly produced IL-1β, TNF-α or both cytokines simultaneously (Fig. 4A). On the other hand, in the same clinical group, TF+ monocytes more frequently produced IL-1β, IL-6, IL-8 and TNF-α simultaneously. Interestingly, HU therapy reduced the capacity of monocytes to produce multiple cytokines upon activation (Fig. 4A,B). Thus, the overall functional profile in terms of cytokine production was different between TF- and TF+ monocytes and also between the two clinical groups stratified by HU therapy (Fig. 4C). The frequency of monocytes producing more than one cytokine after the LPS challenge was statistically different, and this polyfunctionality was shown to be dramatically reduced in the monocytes of patients who were taking HU (Fig. 4D).

Frequency of classical monocytes ex vivo and capacity to produce pro-inflammatory cytokines upon LPS stimulation in vitro. After assessing monocyte polyfunctionality, we sought to determine whether the frequency of classical monocytes in peripheral blood ex vivo was associated with the capacity to produce pro-inflammatory cytokines upon LPS stimulation in vitro. Spearman correlation analyses revealed that the frequency of monocytes expressing CD14++CD16- in the entire study population exhibited a strong positive association with the percentage of monocytes expressing TNF-α, TF, IL-1β, IL-6 and IL-8 upon LPS stimulation (Fig. 5A). Noteworthy, in patients undergoing HU therapy, the reduction of CD14++CD16- frequencies was proportional to the reduction of cytokine production (Fig. 5A), implicating the classical monocyte subset as a potential source of such pro-inflammatory molecules. Of note, the frequencies of the other monocyte subsets did not significantly correlate with the frequency of cells expressing these inflammatory mediators (Fig. 5B).

Discussion

Chronic inflammation and persistent activation of coagulation, with systemic involvement, are main features of SCA. This disease has a high prevalence and incidence worldwide and a very complex pathophysiology 1. Although HU is considered the main therapeutic option for SCA, the specific mechanisms leading to the improvement of clinical manifestations are not completely described. As previously described [25][26][27], HU therapy has been associated with improvement of hemolysis markers, increased HbF and decreased HbS levels, as well as reduction of monocyte counts. Our results are in agreement with a previous study reporting that HU therapy reduced the frequency of VOE and pain crises 27. The biological relevance of the role of monocytes in SCA has been previously demonstrated, including their involvement in VOE 14,15. Nevertheless, details regarding monocyte subsets, activation patterns and cytokine production profiles in SCA are not fully understood. The frequencies of monocyte subsets identified herein are in agreement with previous characterization of healthy peripheral blood, where the majority has a classical phenotype, whereas around 6.7% exhibit intermediate and 9.3% non-classical markers 7. In contrast, a previous study found that 75% of monocytes from patients with SCA exhibit a CD14+CD16+ pro-inflammatory phenotype 28. Differences in study populations and/or methodological differences in the gating strategy during flow cytometry assays could, at least in part, explain these discrepancies.
Our results demonstrated that HU therapy decreases the frequency of classical monocytes (CD14++CD16-) while increasing the percentage of non-classical monocytes (CD14dimCD16+). Previous studies have shown that HU increases the frequency of non-classical monocytes 29, although the activation status of this subset had not been evaluated. We hypothesize that HU may directly induce differentiation of classical monocytes into the non-classical/patrolling phenotype by increasing CD16 expression. Future studies are warranted to answer this question.

The specific pathways driving monocyte activation in SCA are not entirely elucidated. Previous studies have suggested the participation of some agonists of toll-like receptor 4 (TLR4), such as free heme 30, high mobility group box 1 (HMGB1) 31 and heparan sulfate 32. It is already known that monocyte activation through TLR4 leads to increased production of TNF-α 33, which can amplify TF and VCAM-1 expression in endothelial cells 33. Our experiments demonstrated that, upon LPS challenge, monocytes from SCA individuals who were not under HU therapy exhibit increased expression of TNF-α, IL-1β and IL-6 compared to those who were taking the drug. Furthermore, unstimulated monocytes from SCA individuals who were not under HU therapy already exhibited increased expression of IL-8. Therefore, the production of pro-inflammatory cytokines seems to be strongly modulated by HU 34. Several studies include monocyte-activating molecules (such as LPS or TNF-α) in order to increase the responsiveness of the cells and to emphasize an activated phenotype 7,35. Monocytes were obtained from patients in steady state (in the absence of inflammatory crisis); thus, we decided to challenge the cells with LPS in order to increase cytokine and TF production and to mimic an activation process. We found that monocytes from patients treated with HU produced fewer cytokines, which allows us to suggest that, although LPS is able to activate the cells, their response is dampened by HU therapy. It has been shown that HU decreases the levels of TNF-α, IL-8 19, IL-1β 36 and IL-6 37 in both plasma and serum of SCA individuals. More recently, it has been demonstrated that heme is able to increase IL-6 expression in SCA monocytes, since the addition of an iron chelator decreased its expression 38. Considering that heme is released during hemolysis, these findings argue that intravascular hemolysis may play a pivotal role in monocyte activation in SCA. Collectively, these data indicate that HU affects not only monocyte subsets but also the ability of these cells to produce pro-inflammatory cytokines.

TF production at sites of vascular damage promotes the activation of factor VII, thrombin generation and fibrin deposition 39. The mechanisms underlying TF production and expression in both endothelial cells and monocytes have been extensively investigated in SCA 40. It has been suggested that heme from intravascular hemolysis can activate endothelial cells, leading to NF-κB nuclear translocation 40.
These events promote the transcription of adhesion molecules such as P-selectin and of pro-inflammatory cytokines 40. The participation of TF from monocytes and endothelial cells in VOE has also been related to microparticle production, which occurs during steady state and is dramatically augmented during crisis 41, and which can contribute to VOE. Here we found that HU therapy reduced TF expression by activated monocytes in patients undergoing treatment, corroborating previous findings demonstrating decreased TF protein levels in plasma 42. Our results further confirmed that TF+ monocytes are associated with the occurrence of VOE in the study population. TF+ monocytes have been described to be increased in SCA individuals (HbSS) compared to those with HbSC disease or controls 21. In addition, the frequency of TF+ monocytes has been shown to correlate with reticulocyte and leukocyte counts and with soluble E-selectin levels 21. Finally, other studies have shown that the percentage of TF+ monocytes in peripheral blood increases during VOE 43.

Immune cell polyfunctionality, in terms of cytokine production, has recently been described in lymphocytes 44 and monocytes 10 in the context of infectious diseases. In sterile inflammatory conditions such as SCA, this polyfunctionality remained to be evaluated. In the present study, we investigated the cytokine production profile of both TF+ and TF- monocytes and also tested the effect of HU on the production of multiple pro-inflammatory cytokines. Our data provide evidence that patients who were not under HU therapy have an increased frequency of monocytes simultaneously producing TF, IL-1β, IL-6, IL-8 and TNF-α. Nonetheless, HU substantially dampened such production without affecting cell death. This result suggests that the inflammatory response promoted by activated monocytes relies on the production of multiple pro-inflammatory cytokines and is directly affected by HU therapy.

Lastly, our correlation analyses revealed that the frequency of classical monocytes was positively correlated with the percentage of cells producing TF as well as all the inflammatory cytokines examined in the entire study population. The role of classical monocytes in the production of pro-inflammatory cytokines has been previously shown in healthy individuals 7. In hematological diseases such as chronic myelomonocytic leukemia (CMML), it was shown that classical monocytes account for 94% of total monocytes and that this frequency could be useful to distinguish between CMML and reactive monocytosis 45. A model of lung ischemia-reperfusion injury has shown that classical monocytes were mobilized from the spleen and also mediated neutrophil extravasation to the sites of injury 46. During human immunodeficiency virus (HIV) infection, classical monocytes were shown to have an increased capacity to promote activation of TF and to produce multiple pro-inflammatory cytokines, suggesting their ability to mediate crosstalk between coagulation and inflammation 10. Regarding sickle cell disease, a previous evaluation of monocyte subsets identified that non-classical or patrolling monocytes express low levels of TNF-α and IL-6 and seem to be important in protecting the microvasculature from VOE 35. To our knowledge, this is the first study to characterize monocyte subsets ex vivo and to identify their polyfunctionality in SCA.
Of note, the association between TF-expressing monocytes and the occurrence of VOE also highlights the importance of these cells in vascular complications linked to SCA. In summary, our data corroborate previous studies showing beneficial effects of HU therapy in SCA. We show that HU is associated with improvement of laboratory parameters and with decreased frequency and activation of the classical inflammatory monocytes. Importantly, HU therapy directly dampened the polyfunctional capacity of monocytes, suggesting an overall anti-inflammatory property whose molecular mechanism still requires elucidation. Considerations regarding monocyte subsets, activation profiles and cytokine production are useful to suggest novel therapeutic targets and may help to understand the inflammatory mechanisms underlying SCA.

Material and Methods

Subjects. Thirty-seven pediatric SCA patients (HbSS genotype) were enrolled in the present study.

Clinical manifestations. At the time of enrollment, clinical data regarding the occurrence of previous clinical manifestations (e.g., VOE) were collected using a standardized questionnaire (self-reported or reported by the parents) and confirmed against the medical records. Patients or their legal guardians were asked whether they had ever had, during their lifetime, any clinical manifestation related to SCA. Hospital admissions were defined as hospitalization for more than three days, and VOE were defined as acute pain affecting any body part, lasting several hours, in association with swelling, especially in the joints and soft tissues, and requiring medication. Patients with a previous history of VOE presented at least one episode of VOE (ranging from 1 to 5 events) in the past six months.

Laboratory characterization. Hematological parameters were obtained using a Beckman Coulter LH 780 Hematology Analyzer (Beckman Coulter, Brea, California, USA), and hemoglobin patterns were confirmed by high-performance liquid chromatography employing an HPLC/Variant-II hemoglobin testing system (Bio-Rad, Hercules, California, USA). Biochemical parameters, including lipid profile, total bilirubin and fractions, lactate dehydrogenase, iron, hepatic metabolism and renal profile, were determined using an automated A25 chemistry analyzer (Biosystems S.A., Barcelona, Catalunya, Spain). Ferritin levels were determined using the Access 2 Immunochemistry System (Beckman Coulter Inc., Pasadena, California, USA). C-reactive protein and alpha-1 antitrypsin levels were measured using the IMMAGE Immunochemistry System (Beckman Coulter Inc., Pasadena, California, USA). Laboratory parameters were analyzed at the Clinical Analyses Laboratory of the College of Pharmaceutical Sciences (Universidade Federal da Bahia).

Ex vivo monocyte phenotyping by flow cytometry. Fresh peripheral blood mononuclear cells (PBMC) were obtained from SCA patients' blood samples collected with heparin, through gradient centrifugation on Ficoll Paque Plus (Gibco, GE Healthcare Bio-Sciences Corp., Piscataway, NJ, USA) at room temperature. All samples were processed within one hour after collection. Isolated PBMC were cryopreserved in 90% fetal bovine serum (FBS, Gibco, GE Healthcare Bio-Sciences Corp., Piscataway, NJ, USA) and 10% DMSO (Sigma, St. Louis, MO, USA) until the flow cytometry assay. PBMC were thawed and resuspended in RPMI 1640 supplemented with 10% FBS at 10^6 cells per well in 96-well plates.
Cells were washed and resuspended in complete media with Brefeldin-A (Biolegend, San Diego, California, USA) and Monensin (Biolegend, San Diego, California, USA), two molecules capable of halting Golgi apparatus and vesicle secretion 47,48, in order to block cytokine secretion, and were stimulated with 100 ng/mL of LPS (Sigma, St. Louis, MO, USA), a well-known TLR4 agonist, in order to increase cytokine and TF expression, for 6 hours at 37 °C in 5% CO2. Following stimulation, extracellular staining of phenotypic markers was performed. Monocyte immunophenotyping was carried out by detection of CD14 (Qdot 605), CD16 (PE-Cy7) and HLA-DR (APC-Cy7) on the cell surface. Several lineage markers, including CD2, CD3, CD19, CD20 and CD56 (Pacific Blue), were used to exclude cells other than monocytes from the analyses (see example flow cytometry plots in Supplementary Fig. 1). Dead cells and debris were also excluded by using the Aqua fluorescent reactive Live/Dead dye (ThermoFisher Scientific, Waltham, MA, USA). Based on CD14 and CD16 surface expression, three monocyte subsets were examined: classical/inflammatory (CD14++CD16-), intermediate (CD14+CD16+) and non-classical (CD14dimCD16+) monocytes. To determine monocyte functionality, cells were fixed and permeabilized using the Intracellular Fixation & Permeabilization Buffer Set from eBioscience (ThermoFisher), and intracellular staining was performed detecting TNF-α (PerCP-Cy5.5), TF (APC), IL-8 (FITC), IL-1β (PE) and IL-6 (AF-700). Flow cytometry results are described as the percentage of positive cells among HLA-DR+DUMP- cells (denominated "monocytes" in the present investigation, as described in the overall gating strategy in Fig. S1), out of a total of 10^6 PBMC/well for each experiment. A description of the antibody clones, conjugated fluorochromes, catalog numbers and dilutions used is shown in Table S3. Antibody dilutions were carried out according to each manufacturer's instructions and validated in titration experiments. Acquisition of the stained cells was performed using a BD LSRFortessa cell analyzer (BD Bioscience, San Jose, CA, USA), and FlowJo software (BD Bioscience, San Jose, CA, USA) was used to analyze the data.

Statistical analysis. Statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS) version 20.0 software (IBM, Armonk, New York, USA), JMP software v.12 (SAS Institute, Cary, North Carolina, USA) and GraphPad Prism version 6.0 (GraphPad Software, San Diego, California, USA), which was also used to assemble the graphs. Baseline values of selected variables are expressed as means with their respective standard deviations. The Shapiro-Wilk test was used to determine variable distribution. The Mann-Whitney U test and the independent t-test were used to compare the groups according to the normality of the distribution of each variable. Fisher's exact test was used to compare the frequency of clinical manifestations as well as the sex distribution between the patient groups. Spearman rank correlation analysis was performed to test correlations between the frequency of monocyte subsets and cytokine production profiles. Results were adjusted for multiple comparisons using Bonferroni's method.
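As an illustration of this correlation-with-correction step, the sketch below computes Spearman correlations and a Bonferroni adjustment in Python; the data, variable names and effect sizes are hypothetical placeholders, not values from the study.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-patient data (n = 37): classical-subset frequency and
# percentages of cytokine-producing monocytes after LPS stimulation.
rng = np.random.default_rng(0)
classical = rng.uniform(60, 90, 37)
df = pd.DataFrame({
    "classical": classical,
    "TNF":  0.4 * classical + rng.normal(0, 5, 37),
    "IL6":  0.3 * classical + rng.normal(0, 5, 37),
    "IL8":  0.2 * classical + rng.normal(0, 8, 37),
})

cytokines = ["TNF", "IL6", "IL8"]
results = {}
for cyt in cytokines:
    rho, p = spearmanr(df["classical"], df[cyt])
    results[cyt] = (rho, p)

# Bonferroni adjustment: multiply each p-value by the number of tests, cap at 1
m = len(cytokines)
adjusted = {cyt: (rho, min(1.0, p * m)) for cyt, (rho, p) in results.items()}
print(adjusted)
```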
Receiver Operator Characteristics (ROC) curve analysis was used to test the association between the frequency of TF-expressing monocytes in blood and the occurrence of VOE. Pearson's chi-square test was employed to compare the polyfunctionality profiles of monocytes 10. All analyses were pre-specified. P values < 0.05 after correction for multiple measurements using the Holm-Bonferroni method were considered statistically significant.
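A companion sketch of the group-comparison, ROC/AUC and Holm-Bonferroni steps described above; again, the input data are simulated stand-ins (only the group sizes follow the n = 11 VOE / n = 26 no-VOE split quoted in the figure legend), and the step-down adjustment is a generic implementation, not the authors' code.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

# Simulated frequencies of TF+ monocytes after LPS for patients with
# (n = 11) and without (n = 26) previous VOE.
rng = np.random.default_rng(7)
no_voe = rng.normal(8.0, 2.5, 26)
voe = rng.normal(12.0, 3.0, 11)

# Group comparison with the Mann-Whitney U test
u_stat, p_value = mannwhitneyu(voe, no_voe, alternative="two-sided")

# ROC / AUC: how well TF+ frequency discriminates prior VOE
labels = np.r_[np.zeros(no_voe.size), np.ones(voe.size)]
scores = np.r_[no_voe, voe]
auc = roc_auc_score(labels, scores)

# Holm-Bonferroni step-down adjustment for a family of p-values
def holm_bonferroni(pvals):
    pvals = np.asarray(pvals, dtype=float)
    order = np.argsort(pvals)
    m = len(pvals)
    adjusted = np.empty(m)
    running = 0.0
    for rank, idx in enumerate(order):
        running = max(running, (m - rank) * pvals[idx])
        adjusted[idx] = min(1.0, running)
    return adjusted

print(p_value, auc, holm_bonferroni([p_value, 0.03, 0.20]))
```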
2019-10-15T14:40:08.741Z
2019-10-15T00:00:00.000
{ "year": 2019, "sha1": "a2a1fccd5991d9d806e59fbe93c94b5d0b15c07d", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-51339-x.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a2a1fccd5991d9d806e59fbe93c94b5d0b15c07d", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
212882341
pes2o/s2orc
v3-fos-license
Compound Difference Anti-Synchronization between Hyper-Chaotic Systems of Fractional Order

In this article, the compound difference anti-synchronization between fractional-order hyper-chaotic systems has been studied. Numerical simulations have been performed using MATLAB to verify the theoretical results on the fractional-order Xling, Vanderpol, Rikitake and Rabinovich hyper-chaotic systems.

Introduction

Chaos theory has been gaining popularity ever since the well-known Lorenz system was discovered. From then on, there has been no looking back in the growth and development of chaos theory. Chaos synchronization [1] was introduced by Pecora and Carroll in 1990. In synchronization, two chaotic systems arising from different initial conditions are made to evolve together by designing controllers. While synchronizing two chaotic systems [2][3][4][5] is considered difficult, synchronizing more than two hyper-chaotic systems is in itself a big challenge. Though fractional calculus is not new to mathematics, it has recently emerged as most useful in the modelling of processes and systems where integer-order models could not serve the purpose. Motivated by the above discussion, hyper-chaotic systems are synchronized here. Numerical simulations performed using MATLAB verify the theoretical results.

Consider the scaling master system

D^q x = f(x),   (1)

and let the base master systems be

D^q y = g(y),   (2)

D^q z = h(z),   (3)

where D^q denotes the Caputo fractional derivative of order q in (0, 1], x, y, z are the state vectors, and f, g, h are the corresponding vector fields. Let the slave system be

D^q w = k(w) + u,   (4)

where u is the controller to be designed. To achieve the desired anti-synchronization we must have the error tending to zero, i.e., ||e(t)|| -> 0 as t -> infinity. We here define the controllers as

u = -k(w) - D^q [x ∘ (y - z)] - e,   (5)

where ∘ denotes the componentwise product of vectors.

Theorem: Systems (1)-(3) will be in compound difference anti-synchronization with (4) if the controllers are designed as in (5).

Proof: We define the compound difference anti-synchronization error as

e = w + x ∘ (y - z).   (6)

Differentiating (6), we get the error dynamical system as

D^q e = D^q w + D^q [x ∘ (y - z)] = k(w) + u + D^q [x ∘ (y - z)].   (7)

Substituting the values of the derivatives and applying the designed controller, the error dynamical system simplifies to

D^q e = -e.   (8)

Next, we consider the Lyapunov function

V(e(t)) = (1/2) e^T e.   (9)

Differentiating, we get

D^q V(e(t)) <= e^T D^q e = -e^T e,   (10)

i.e., V(e(t)) is a positive definite function with a negative definite derivative. Hence, by Lyapunov stability theory, the error tends to zero, implying that the desired anti-synchronization has been achieved.

Note: We have taken Caputo's version of the fractional derivative in this paper.

Scaling master system

We consider the fractional-order hyper-chaotic Xling system as the scaling master system. For the chosen parameter values and initial conditions of the state variables (1, 2, 3, 4), the system shows chaotic behavior, as displayed in Fig. 1(a).

Base master systems

Next, we consider the hyper-chaotic fractional-order Rabinovich and Rikitake systems. For the chosen parameter values and initial conditions, the Rabinovich system shows chaotic behavior, as displayed in Fig. 1(b). With initial conditions of the state variables (3.5, 1.7, -4.5, 2.8), the Rikitake system shows chaotic behavior, as displayed in Fig. 1(c).

Slave system

We consider the hyper-chaotic fractional-order Vanderpol system as the slave system. For the chosen parameter values and initial conditions of the state variables (0.1, -0.5, 0.1, -0.5), the system shows chaotic behavior, as displayed in Fig. 1(d).

Numerical simulations and discussions

Corresponding to the master systems (1)-(3) and the slave system (4), the slave system with control functions, the error given by (6), and the corresponding error dynamical system are written componentwise for the four state variables. Substituting the values of the derivatives from (12)-(15) and designing the controllers as in (5), the error dynamical system simplifies to

D^q e_i = -e_i,  i = 1, ..., 4.

Next, we consider the Lyapunov function

V(e(t)) = (1/2)(e_1^2 + e_2^2 + e_3^2 + e_4^2).

Differentiating, we get

D^q V(e(t)) <= -(e_1^2 + e_2^2 + e_3^2 + e_4^2),

i.e., V(e(t)) is a positive definite function with a negative definite derivative.
Hence, by Lyapunov stability theory, the error tends to zero, implying that the desired anti-synchronization has been achieved.

Conclusion

In this paper, four hyper-chaotic fractional-order systems have been synchronized in a compound difference anti-synchronization manner by designing suitable controllers. This technique will find applications in secure communication, control systems, etc.
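The MATLAB simulations themselves are not reproduced in the text. As a hedged illustration of the numerical side, the Python sketch below integrates the simplified error dynamics D^q e = -e of Eq. (8) with the Grunwald-Letnikov approximation of the fractional derivative; the order q = 0.95, step size and initial error vector are our illustrative choices, not values from the paper.

```python
import numpy as np

def gl_coefficients(q, n):
    # Recursive binomial coefficients of the Grunwald-Letnikov scheme
    c = np.zeros(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = (1.0 - (1.0 + q) / j) * c[j - 1]
    return c

def simulate_error(q=0.95, h=0.005, t_end=10.0,
                   e0=np.array([3.0, -2.0, 1.5, -0.5])):
    """Integrate D^q e = -e with e_k = f(e_{k-1}) h^q - sum_{j=1..k} c_j e_{k-j}."""
    n = int(t_end / h)
    c = gl_coefficients(q, n)
    e = np.zeros((n + 1, e0.size))
    e[0] = e0
    for k in range(1, n + 1):
        memory = c[1:k + 1] @ e[k - 1::-1]   # sum over the full history
        e[k] = -e[k - 1] * h**q - memory
    return e

err = simulate_error()
print(np.abs(err[-1]).max())  # ~0: the anti-synchronization error vanishes
```

The full compound difference anti-synchronization simulation would replace -e by the controlled slave dynamics of Eq. (4) and evolve the three master systems alongside it.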
2020-02-06T09:06:26.998Z
2020-02-01T00:00:00.000
{ "year": 2020, "sha1": "a29b2811e017c53c32c5a2067ef68d2b1aed64fd", "oa_license": "CCBYSA", "oa_url": "https://www.banglajol.info/index.php/JSR/article/download/43764/32955", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9801d835fae2c951723e676523631c48fb612775", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Computer Science" ] }
259882199
pes2o/s2orc
v3-fos-license
Neutron star binaries produced by binary-driven hypernovae, their mergers, and the link between long and short GRBs

The binary-driven hypernova (BdHN) model explains long gamma-ray bursts (GRBs) associated with supernovae (SNe) Ic through physical episodes that occur in a binary composed of a carbon-oxygen (CO) star and a neutron star (NS) companion in close orbit. The CO core collapse triggers the cataclysmic event, originating the SN and a newborn NS (hereafter νNS) at its center. The νNS and the NS accrete SN matter. BdHNe are classified based on the fate of the NS companion and the GRB energetics, mainly determined by the orbital period. In BdHNe I, the orbital period is of a few minutes, so the accretion causes the NS to collapse into a Kerr black hole (BH), explaining GRBs of energies > 10^52 erg. BdHNe II, with longer periods of tens of minutes, yield a more massive but stable NS, accounting for GRBs of 10^50-10^52 erg. BdHNe III have still longer orbital periods (e.g., hours), so the NS companion has a negligible role, which explains GRBs with a lower energy release of < 10^50 erg. BdHNe I and II might remain bound after the SN, so they could form NS-BH and binary NS (BNS) systems, respectively. In BdHNe III, the SN likely disrupts the system. We perform numerical simulations of BdHNe II to compute the characteristic parameters of the BNS left by them, their mergers, and the associated short GRBs. We obtain the mass of the central remnant, whether it is likely to be a massive NS or a BH, the conditions for disk formation and its mass, and the event's energy release. The role of the NS nuclear equation of state is outlined.

Introduction

Gamma-ray bursts (GRBs) are classified using the time (in the observer's frame) T_90 in which 90% of the observed isotropic energy (E_iso) in gamma-rays is released. Long GRBs have T_90 > 2 s and short GRBs T_90 < 2 s [1][2][3][4][5]. The two types of sources, short and long GRBs, are thought to be related to phenomena occurring in gravitationally collapsed objects, e.g., stellar-mass black holes (BHs) and neutron stars (NSs). In this article, we are interested in the direct relationship between long and short GRBs predicted by the BdHN scenario. In this scenario, the CO star undergoes core collapse, ejecting matter in a supernova (SN) explosion and forming a newborn NS (νNS) at its center. The NS companion attracts part of the ejected material, leading to an accretion process with high infalling rates. Also, the νNS gains mass via a fallback accretion process. The orbital period is the most relevant parameter for the fate of the CO-NS system. In BdHNe of type I, the NS reaches the critical mass, gravitationally collapsing into a Kerr BH. This occurs for short orbital periods (usually a few minutes) and explains GRBs with energies above 10^52 erg. In BdHNe II, the orbital period is larger, up to a few tens of minutes, so the accretion rate decreases and the NS becomes more massive but remains stable. These systems explain GRBs with energies of 10^50-10^52 erg. In BdHNe III, the orbital separation is still larger; the NS companion does not play any role, and the energy release is lower than 10^50 erg. If the binary is not disrupted by the mass loss in the SN explosion (see [20] for details), a BdHN I produces a BH-NS binary, whereas a BdHN II produces a BNS. In BdHNe III, the SN is expected to disrupt the system. Therefore, in due time, the mergers of NS-BHs left by BdHNe I and of BNSs left by BdHNe II are expected to lead to short GRBs.
Short GRBs from BNS mergers have been classified into short gamma-ray flashes (S-GRFs) and authentic short GRBs (S-GRBs), depending on whether the central remnant is an NS or a BH, respectively [24]. Three subclasses of short bursts from compact-binary mergers have been electromagnetically proposed [20,24,25]:

1) Authentic short GRBs (S-GRBs): short bursts with isotropic energy E_iso ≳ 10^52 erg and peak energy E_p,i ≳ 2 MeV. They occur when a BH is formed in the merger, which is revealed by the onset of a GeV emission (see [25][26][27]). Their electromagnetically inferred isotropic occurrence rate is ρ_S-GRB ≈ 1.9 (+1.8/-1.1) × 10^-3 Gpc^-3 yr^-1 [24]. The distinct signature of the formation of the BH, namely the observation of the 0.1-100 GeV emission by the Fermi-LAT, needs the presence of baryonic matter interacting with the newly formed BH, e.g., via an accretion process (see, e.g., [26,28]).

2) Short gamma-ray flashes (S-GRFs): short bursts with E_iso ≲ 10^52 erg, in which the merger leaves a massive NS as the central remnant [24].

3) Ultrashort gamma-ray flashes (U-GRFs): in [20], a new class of short bursts has been advanced, the ultrashort GRBs (U-GRBs), produced by NS-BH binaries when the merger leaves the central BH with very little or completely without surrounding matter. An analogous system could be produced in BNS mergers. We shall call these systems ultrashort GRFs, for short U-GRFs. Their gamma-ray emission is expected to occur in a prompt, short radiation phase. The post-merger radiation is drastically reduced, given the absence of baryonic matter to power an extended emission. A kilonova can still be observed days after the merger, at infrared, optical, and ultraviolet wavelengths, produced by the radioactive decay of r-process yields [29][30][31][32]. Kilonova models use a dynamical ejecta component, composed of matter expelled by tides prior to or during the merger, and a disk-wind ejecta component of matter expelled from post-merger outflows in accretion disks [33], so U-GRFs are expected to show only the dynamical-ejecta kilonova emission.

We focus on the BNSs left by BdHNe II and discuss how their properties impact the subsequent merger process and the associated short GRB emission, including their GW radiation. Since an accretion disk around the central remnant of a BNS merger, i.e., a newborn NS or a BH, is an important ingredient in models of short GRBs (see, e.g., [34] and references therein), we give some emphasis to the conditions for, and the consequences of, the merger leaving a disk.

We study BNSs formed through binary evolution channels. Specifically, we expect these systems to form following a binary evolution channel similar to that of two massive stars leading to stripped-envelope binaries, described in previous studies (e.g., [35,36]). In this process, the CO star undergoes mass loss in multiple mass-transfer and common-envelope phases through interactions with the NS companion (see, e.g., [37][38][39]). This leads to the removal of the H/He layers of the secondary star, which ends up as a CO star. Significant progress has recently been made in the study of alternative evolution channels for the progenitors of BNSs, such as hierarchical systems involving triple and quadruple configurations [40,41], which are motivated by the presence of massive stars in multiple systems [42]. These systems are outside the scope of this study.

Figure 1. We follow the expansion of the SN ejecta in the presence of the NS companion and the νNS with a smoothed-particle-hydrodynamics (SPH) code. It is clear that a disk with opposite spins has formed around both stars.

The article is organized as follows.
In Sec. 2, we discuss the numerical simulations of BdHNe and specialize to an example of a BNS left by a BdHN II. Section 3 introduces a theoretical framework to analyze the properties of the BNS merger outcome configuration, based on the conservation laws of baryon number, angular momentum, and mass-energy. We present in Sec. 4 a specific example analyzing a BNS merger using the above-mentioned theoretical framework, including estimates of the energy and angular momentum release. We include the radiation in gravitational waves (GWs) and estimate its detectability by current facilities. Section 5 presents a summary and the conclusions of this work.

Figure 1 shows a snapshot of the mass density with the vector velocity field at the binary's equatorial plane some minutes after the CO collapse and the expansion of the SN ejecta. The system's evolution was simulated with an SPH code, where the NS companion and the νNS are point particles that interact gravitationally with the SPH particles of the SN ejecta. For details of these numerical simulations, we refer to [23,43]. In these simulations, the influence of the star's magnetic field has been disregarded, as the magnetic pressure remains significantly lower than the random pressure exerted on the infalling material. The simulation of Figure 1 corresponds to a CO-NS binary with a CO star evolved from a zero-age main-sequence (ZAMS) star of M_zams = 15 M_⊙. The CO star mass is about 3.06 M_⊙; its core collapse leaves a 1.4 M_⊙ νNS and ejects 1.66 M_⊙. The NS companion's initial mass is 1.4 M_⊙, and the initial binary period of the system is about 4.5 min.

A BNS left by a BdHN II

From the accretion rate onto the NSs, we have calculated the evolution of the mass and angular momentum of the binary components [see 43, for details]. Table 1 summarizes the final parameters of the νNS and the NS, including the gravitational mass, m, the dimensionless angular momentum, j, the angular velocity, Ω, the equatorial radius, R_eq, and the moment of inertia, I. These structure parameters have been calculated with the RNS code [44] and using the GM1 [45,46] and TM1 [47] EOS (see Table 2 for details of the EOS).

Table 1. BNS produced by a BdHN II originating in a CO-NS binary with an orbital period of 4.5 min. The CO star mass is 3.06 M_⊙, obtained from the stellar evolution of a ZAMS star of M_zams = 15 M_⊙, and the NS companion has 1.4 M_⊙. The numerical smoothed-particle hydrodynamics (SPH) simulation follows the SN produced by the CO core collapse and estimates the accretion rate onto the νNS and the NS companion. The structure parameters of the NSs are calculated for the GM1 and TM1 EOS. We refer to [43] for additional details.

The BNS left by the BdHN II event has a period P_orb = 14.97 min, orbital separation a_orb ≈ 2 × 10^10 cm, and eccentricity e = 0.45.

Inferences from conservation laws

We analyze the properties of the central remnant NS formed after the merger. For this aim, we use the conservation laws of baryon number, energy, and angular momentum.

Baryon number conservation

The total baryonic mass of the system must be conserved, so the binary baryonic mass, M_b, will redistribute among the baryonic mass of the post-merger central remnant, m_b,c; the ejecta's mass, m_ej, which is unbound from the system; and the matter kept bound to the system, e.g., in the form of a disk of mass m_d.
Therefore, we have the constraint

M_b = m_b,c + m_d + m_ej.   (1)

For a uniformly rotating NS, the relation among its baryonic mass, m_b,i, gravitational mass, m_i, and angular momentum, J_i, is well represented by the simple function

m_b,i / M_⊙ = m_i / M_⊙ + (13/200) (m_i / M_⊙)^2 (1 - j_i^1.7 / 137),   (2)

where j_i ≡ cJ_i/(G M_⊙^2), which fits numerical integration solutions of the axisymmetric Einstein equations for various nuclear EOS with a maximum error of 2% [48]. Thus, Equation (2) is a nearly universal, i.e., EOS-independent, formula. Equation (2) applies to the merging components (i = 1, 2) as well as to the central remnant (i = c).

Angular momentum conservation

We can make more inferences about the merger's fate from the conservation of angular momentum. The angular momentum of the binary during the inspiral phase is given by

J_insp = µ r^2 Ω + J_1 + J_2,   (3)

where r is the orbital separation, µ = m_1 m_2 / M is the reduced mass, M = m_1 + m_2 is the total binary mass, and Ω = √(GM/r^3) is the orbital angular velocity. The gravitational mass and stellar radius of the i-th stellar component are, respectively, m_i and R_i; J_i = (2/5) κ_i m_i R_i^2 Ω_i is its angular momentum, Ω_i its angular velocity, and κ_i the ratio of its moment of inertia to that of a homogeneous sphere. We adopt the convention m_2 ≤ m_1.

After the merger, the angular momentum is given by the sum of the angular momenta of the central remnant, the disk, and the ejecta. Angular momentum conservation implies that the angular momentum at merger, J_merger, equals that of the final configuration plus losses:

J_merger = J_c + J_d + ∆J,   (4)

where J_c and J_d are, respectively, the angular momenta of the central remnant and of the eventual surrounding disk, and ∆J accounts for angular momentum losses, e.g., via gravitational waves; we have neglected the angular momentum carried away by the ejecta since it is expected to have a small mass, ~ 10^-4 to 10^-2 M_⊙. Simulations suggest that this ejecta comes from the interface of the merger, where matter is squeezed and ejected perpendicular to the orbital plane; see, e.g., [49,50]. The definition of the merger point will be discussed below.

The angular momentum of the binary at the merger point is larger than the maximum value a uniformly rotating NS can attain, i.e., the angular momentum at the Keplerian/mass-shedding limit, J_K. Thus, the remnant NS should first evolve through a short-lived phase that radiates the extra angular momentum over that limit and then enters the rigidly rotating stability region from the mass-shedding limit. Thus, we assume that the remnant NS after that transition phase starts its evolution with angular momentum

J_c = J_K ≈ 0.7 (G/c) m_c^2.   (5)

Equation (5) fits the angular momentum of the Keplerian sequence from full numerical integration of the Einstein equations and is nearly independent of the nuclear EOS [see, e.g., 48, and references therein]. Therefore, the initial dimensionless angular momentum of the central remnant is

j_c ≡ cJ_c/(G M_⊙^2) ≈ 0.7 (m_c / M_⊙)^2.   (6)

We model the disk's angular momentum as that of a ring at the remnant's innermost stable circular orbit (ISCO). Thus, we use the formula derived in Cipolletta et al. [51], which fits, with a maximum error of 0.3%, the numerical results for the angular momentum per unit mass, l_ISCO, of a test particle in a circular orbit in the general relativistic axisymmetric field of a rotating NS. Within this assumption, the disk's angular momentum is given by

J_d = m_d l_ISCO(m_c, j_c).   (7)

Notice that Eq. (7) reduces to the known result for the Schwarzschild metric for vanishing angular momentum, as it must. However, it differs from the result for the Kerr metric, which tells us that the Kerr metric does not describe the exterior spacetime of a rotating NS (see [51] for a detailed discussion).
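A minimal code sketch of the two quasi-universal relations used above may be useful; the numerical coefficients below are our reconstruction of the fits in [48] (an assumption of this illustration rather than values quoted verbatim in the text):

```python
def baryonic_mass(m, j):
    """Quasi-universal fit of Eq. (2): m and the result are in solar masses,
    j = c J / (G Msun^2). Coefficients assumed as reconstructed above."""
    return m + (13.0 / 200.0) * m**2 * (1.0 - j**1.7 / 137.0)

def keplerian_j(m):
    """Dimensionless angular momentum at the Keplerian limit, Eqs. (5)-(6):
    c J_K / (G Msun^2) ~ 0.7 (m / Msun)^2 (assumed EOS-independent fit)."""
    return 0.7 * m**2

# Sanity check: a non-rotating 1.4 Msun NS has m_b ~ 1.53 Msun, and rotation
# at the Keplerian limit slightly lowers m_b at fixed gravitational mass.
print(baryonic_mass(1.4, 0.0), baryonic_mass(1.4, keplerian_j(1.4)))
```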
The estimate of J_merger requires knowledge of the merger point, which depends on whether or not the binary secondary becomes noticeably deformed by the tidal forces. When the binary mass ratio q ≡ m_2/m_1 is close or equal to 1, the stars are only deformed close to the point of contact [52]. Therefore, for q ≈ 1, we can assume the merger point to be the point of contact,

r_merger = R_1 + R_2 = (G/c^2)(m_1/C_1 + m_2/C_2),   (8)

where C_1,2 ≡ Gm_1,2/(c^2 R_1,2) is the compactness of the BNS components. When the masses are different, if we model the stars as Newtonian incompressible spheroids, there is a minimal orbital separation, r_ms, below which no equilibrium configuration is attainable, i.e., one star begins to shed mass to the companion due to the tidal forces. In this approximation, r_ms ≈ 2.2 q^(-1/3) R_2 [53]. Numerical relativity simulations of BH-NS quasi-equilibrium states suggest that the mass-shedding occurs at a distance given by Eq. (9) (see [54] and references therein). Our analysis adopts the mass-shedding distance of Eq. (9). For a system with q = 0.7 (similar to the mass ratio of the one in Table 1), we have found that the less compact star begins to shed mass before the point of contact, independently of the EOS, which agrees with numerical relativity simulations. Consequently, for non-symmetric binaries, q < 1, we define the merger point as the onset of mass-shedding, r_merger ≈ r_ms. Based on the above two definitions of the merger point, Eqs. (8) and (9), the angular momentum at the merger is given by

J_merger = ν M √(G M r_merger) + J_1 + J_2,   (10)

where we have introduced the so-called symmetric mass-ratio parameter, ν ≡ q/(1 + q)^2.

Mass-energy conservation

The conservation of mass-energy before and after the merger implies that the energy released equals the mass defect of the system, i.e.,

E_GW + E_other = ∆M c^2,   (11)

where ∆M is the system's mass defect. We have also defined E_GW = E_GW^insp + E_GW^pm as the total energy emitted in GWs in the inspiral regime, E_GW^insp, and in the merger and post-merger phases, E_GW^pm. The energy E_other is radiated in channels different from the GW emission, e.g., electromagnetic (photons) and neutrinos.

A specific example of BNS merger

We analyze the merger of the 1.505 + 1.404 M_⊙ BNS in Table 1. For these component masses, the inferred orbital separation of a_orb ≈ 2 × 10^10 cm and eccentricity e = 0.45, the merger is expected to be driven by GW radiation on a timescale [55]

τ_GW ≈ (5/256) c^5 a_orb^4 (1 - e^2)^(7/2) / (G^3 m_1 m_2 M) ≈ 73 kyr.   (12)

Combining Eqs. (1), (4) and (11), we can obtain the remnant's and disk's masses as a function of the angular momentum losses, ∆J, as well as an estimate of the energy and angular momentum released in the cataclysmic event. We use the NS structure parameters obtained for the GM1 EOS and the TM1 EOS. The total gravitational mass of the system is M = m_1 + m_2 = 2.909 M_⊙, so using Eq. (2) we obtain the total baryonic mass of the binary, M_b = m_b,1 + m_b,2 ≈ 3.184 M_⊙. The binary's mass ratio is q = 0.933, so we assume the merger starts at the contact point. With this, the angular momentum at the merger, as given by Eq. (10), for the GM1 and TM1 EOS is, respectively, J_merger ≈ 5.65 GM_⊙^2/c and J_merger ≈ 5.73 GM_⊙^2/c. Figure 2 shows the disk's mass versus the central remnant's mass for selected values of the angular momentum loss for the two EOS. The figure shows that the system's final parameters lie between two limiting cases: zero angular momentum loss, leading to maximal disk mass, and maximal angular momentum loss, leading to zero disk mass.

Figure 2. Disk mass versus central remnant mass for selected values of the angular momentum loss and the two EOS. The initial BNS has a total gravitational mass of 2.909 M_⊙ and a mass ratio q = 0.933, so the merger is assumed to start at the contact point.
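As a sketch of the zero-disk limit of this computation (reusing the assumed fits from the previous sketch, repeated here for self-containment), one can invert the baryon-mass constraint for m_c and read off the maximum angular momentum loss:

```python
from scipy.optimize import brentq

# Assumed quasi-universal fits (see the previous sketch)
def baryonic_mass(m, j):
    return m + (13.0 / 200.0) * m**2 * (1.0 - j**1.7 / 137.0)

def keplerian_j(m):
    return 0.7 * m**2

M_b = 3.184       # total baryonic mass of the binary (Msun), from Eq. (2)
J_merger = 5.65   # angular momentum at merger, in G Msun^2 / c (GM1 EOS)

# Zero-disk limit: all baryons end up in the remnant, m_b(m_c, j_K(m_c)) = M_b,
# and the losses take up the rest of the angular momentum budget of Eq. (4)
m_c = brentq(lambda m: baryonic_mass(m, keplerian_j(m)) - M_b, 2.0, 3.0)
dJ_max = J_merger - keplerian_j(m_c)
print(m_c, dJ_max)  # ~2.76 Msun and ~0.33 G Msun^2/c, close to the quoted values
```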
The maximum mass along the Keplerian sequence for the GM1 EOS is 2.84 M_⊙, and for the TM1 EOS it is 2.62 M_⊙ (see Table 2). Thus, for the former EOS the central remnant is a massive, fast-rotating NS, while the latter suggests a prompt collapse into a Kerr BH.

Maximal disk mass

We obtain the configuration corresponding to the maximum disk mass by switching off angular momentum losses. Let us specialize to the GM1 EOS. By setting ∆J = 0, the solution of the system of equations formed by the baryon number and angular momentum conservation equations leads to the central remnant's mass, m_c = 2.697 M_⊙, and the disk's mass, m_d = 0.073 M_⊙. This limiting case switches off the GW emission, so it also sets an upper limit to the energy released by mechanisms different from GWs. Thus, Eq. (11) implies that E_other = 2.484 × 10^53 erg of energy is carried out to infinity by mechanisms different from GWs, not accompanied by angular momentum losses.

Zero disk mass

The other limiting case corresponds to the situation in which the angular momentum loss and the remnant mass are maximized, i.e., when no disk is formed (see Fig. 2). By setting m_d = 0, the solution of the conservation equations leads to the maximum angular momentum loss, ∆J = 0.331 GM_⊙^2/c, and the maximum remnant mass, m_c = 2.756 M_⊙. Thus, the upper limit to the angular momentum carried away by GWs is given by the maximum amount of angular momentum losses, i.e., ∆J_GW ≲ 0.331 GM_⊙^2/c. In the inspiral phase of the merger, the system releases an energy of the order of the orbital binding energy at the merger point, E_GW^insp ≈ GµM/(2 r_merger). For the binary we are analyzing, E_GW^insp ≈ 0.0194 Mc^2 ≈ 0.0563 M_⊙c^2 ≈ 1.0073 × 10^53 erg. In the post-merger phase, the transitional non-axisymmetric object (e.g., a triaxial ellipsoid) formed immediately after the merger mainly generates the GWs, and their emission ends when the stable remnant NS is finally formed. We can model such a rotating object as a compressible ellipsoid with a polytropic EOS of index n = 0.5-1 [56]. The object will spin up by angular momentum loss to typical frequencies of 1.4-2.0 kHz. The energy emitted in GWs in this phase is E_GW^pm ≈ 0.0079 M_⊙c^2 ≈ 1.404 × 10^52 erg. Therefore, the total energy released in GWs is E_GW = E_GW^insp + E_GW^pm ≈ 1.147 × 10^53 erg. If no disk is formed, i.e., for a U-GRF, the mass-energy defect is ∆Mc^2 = (M - m_c)c^2 ≈ 0.153 M_⊙c^2 ≈ 2.738 × 10^53 erg. This implies that E_other = ∆Mc^2 - E_GW ≈ 0.089 M_⊙c^2 ≈ 1.591 × 10^53 erg is released in forms of energy different from GW radiation. Therefore, combining the above two results, we conclude that for the present merger, assuming the GM1 EOS, the merger releases 0 < E_GW ≲ 1.147 × 10^53 erg in GWs, while 1.591 × 10^53 erg ≲ E_other < 2.484 × 10^53 erg is released in other energy forms. The energy observed in short GRBs and further theoretical analysis, including numerical simulations of the physical processes occurring during the merger, will clarify the efficiency of converting E_other into observable radiation. Since no BH is formed (in this GM1 EOS analysis), the assumption that the merger leads to an S-GRF suggests an efficiency lower than 10%.

We now estimate the detectability of the GW radiation released by the system in the post-merger phase when angular momentum losses are maximized, i.e., in the absence of a surrounding disk. We use the root-sum-squared strain of the signal,

h_rss = √( ∫ ( |h̃_+(f)|^2 + |h̃_×(f)|^2 ) df ),   (13)

where h̃_+ and h̃_× are the Fourier transforms of the GW polarizations; for a fixed radiated energy, h_rss decreases with the distance d to the source and with the mean GW frequency f̄ in the post-merger phase. These signals are expected to be detected with 50% efficiency by the LIGO/Virgo pipelines [57] when h_rss ~ 10^-22 Hz^(-1/2) [58]. For the energy release in the post-merger phase, we have f̄ = 1671.77 Hz, so these signals could be detected up to a distance of d ≈ 10 Mpc.
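The arithmetic of this energy budget is simple enough to verify directly; the short sketch below reproduces the quoted bounds (all inputs are the numbers given above; the solar rest-mass energy is rounded):

```python
MSUN_C2 = 1.787e54   # erg, rest-mass energy of one solar mass (rounded)

M_tot   = 2.909      # total gravitational mass of the binary (Msun)
m_c_max = 2.756      # remnant mass in the zero-disk limit (Msun)
m_c_dsk = 2.697      # remnant mass in the maximal-disk limit (Msun)
m_disk  = 0.073      # disk mass in the maximal-disk limit (Msun)
E_gw    = 0.0563 + 0.0079   # inspiral + post-merger GW energy (Msun c^2)

dM_no_disk  = M_tot - m_c_max            # ~0.153 Msun
dM_max_disk = M_tot - m_c_dsk - m_disk   # ~0.139 Msun

E_other_min = (dM_no_disk - E_gw) * MSUN_C2   # ~1.59e53 erg (maximal GW losses)
E_other_max = dM_max_disk * MSUN_C2           # ~2.48e53 erg (no GW losses)
print(E_gw * MSUN_C2, E_other_min, E_other_max)
```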
Discussion and conclusions

As some BdHN I and II systems remain bound after the GRB-SN event, the corresponding NS-BH and BNS systems, driven by GW radiation, will merge and lead to short GRBs. For a binary with an orbital period of a few minutes, the merger time is of the order of 10^4 yr. This implies that the binaries will still be close to the long-GRB site by the merger time, establishing a direct link between long and short GRBs [20]. The occurrence rates of long and short bursts, however, should differ, as the SN explosion likely disrupts the binaries with long orbital periods. We are updating (Bianco et al. 2023, in preparation) our previous analysis of this interesting topic reported in [59]. We refer the reader to Bianco et al. [60] for a preliminary discussion.

As a proof of concept, this article examined this unique connection between long and short GRBs predicted by the BdHN scenario, emphasizing the case of mergers of BNSs left by BdHNe II. The application of the present theoretical framework to the analysis of other merging binaries, such as the BH-NS binaries produced by BdHNe I (see [20] for a general discussion), will be addressed in a separate work.

We have carried out a numerical SPH simulation of a BdHN II occurring in a CO-NS binary with an orbital period of 4.5 min. The mass of the CO star is 3.06 M_⊙ and that of the NS companion, 1.4 M_⊙. The CO star is the pre-SN star obtained from a ZAMS star of M_zams = 15 M_⊙, simulated with the MESA code. The SPH simulation follows [23,43]. It computes the accretion rate onto the νNS (left by the CO core collapse) and the NS companion while the ejecta expands within the binary. The event left a νNS-NS eccentric binary of 1.505 + 1.404 M_⊙, with orbital separation 2 × 10^10 cm, orbital period ≈ 15 min, and eccentricity e = 0.45. These parameters suggest that the BNS merger leading to a short GRB occurs ≈ 73 kyr after the BdHN II event.

Whether the central remnant of the BNS merger will be a Kerr BH or a massive, fast-rotating NS depends on the nuclear EOS. For instance, we have shown that the GM1 EOS leads to the latter, while the TM1 EOS leads to the former. As an example of the theoretical framework presented in this article, we quantified the properties of the merger using the GM1 EOS. We inferred the mass of the NS central remnant and of the surrounding disk as a function of the angular momentum losses. We then emphasized the merger features in the limiting cases of maximum and zero angular momentum loss, corresponding to the absence or the maximum mass of a surrounding disk, respectively. We estimated the maximum energy and angular momentum losses in GWs. We showed that the post-merger phase could release up to ≈ 10^52 erg in ≈ 1.7 kHz GWs, and that LIGO/Virgo could, in principle, detect such emission for sources up to ≈ 10 Mpc. We assessed that up to a few 10^53 erg of energy could be released in other forms, so a ≲ 10% efficiency of its conversion into observable electromagnetic radiation would lead to an S-GRF.

The direct link between long and short GRB progenitors predicted by the BdHN model opens the way to exciting astrophysical developments. For instance, the relative rates of BdHNe I and II and of S-GRBs and S-GRFs might give crucial information on the nuclear EOS of NSs and on the CO-NS parameters. At the same time, this information provides clues to the stellar evolution path of the binary progenitors leading to the CO-NS binaries of the BdHN scenario.
Although challenging because of its expected ultrashort duration, observing a U-GRF would also be relevant for constraining the EOS of NS matter. An extended analysis is encouraged, including additional BNS parameters obtained from SPH simulations of BdHNe for various CO-NS systems and nuclear EOS.

Data Availability Statement: No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
νNS: newborn neutron star
S-GRB: short gamma-ray burst
S-GRF: short gamma-ray flash
SN: supernova
U-GRB: ultrashort gamma-ray burst
U-GRF: ultrashort gamma-ray flash
ZAMS: zero-age main sequence

Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
2023-07-15T15:12:37.404Z
2023-07-12T00:00:00.000
{ "year": 2023, "sha1": "ff7e502de3687855cee40f86f52252a5adf64fab", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2218-1997/9/7/332/pdf?version=1689153159", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "1388affacb6fbf3686c42ba896a447efb9c45c15", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
245672553
pes2o/s2orc
v3-fos-license
Appraisal of water quality and ecological sensitivity with reference to riverfront development along the River Gomti, India

The conflict between the vitality of natural ecosystems and that of artificially developed systems has existed for decades. The ecological sensitivity and socio-economic aspects associated with riverfront development along rivers have attracted the attention of environmentalists and ecologists across the globe. The present study evaluates the impacts of channelization and riverfront development on the water quality of the river Gomti through the Water Pollution Index (WPI) and other statistical tools. Of the total studied sites, 75% were found to be in the 'highly polluted' category even after the development of the riverfront. Approximate increases of 274.5% and 171.76% were witnessed in the WPI values at the midstream sites of Kudiaghat and Daliganj, respectively. This increase in the WPI values clearly indicates the deterioration of the water quality of the river Gomti after channelization. The major issue of the discharge of domestic sewage into the river with partial or no treatment seems to remain unresolved even a considerable time after the riverfront development. This study can provide a reference database for the development of such projects across the globe.

Introduction

The rehabilitative development of urban streams through riverfront construction is a globally employed methodology (Bockelmann et al. 2004; Che et al. 2012; Lu et al. 2019; Mitsch 2014; Thompson et al. 2018). The development of riverfront projects is transforming freshwater ecosystems through alterations in water flow, fluvial habitat, floodplains, and water quality. Riverfront projects have attracted contradictory views, with some researchers emphasizing the vitality of the natural state of rivers and others arguing for the need for artificial modification. The lack of balance between channel engineering and ecological perspectives has created complexities for the river systems in the Ganga Basin, India (Dutta et al. 2018). The reduction in the density of fish populations consequent to the destruction of potential natural habitats post-channelization has been mentioned in various studies (Blake and Rhanor 2020; Dutta et al. 2018; Jurajda 1995). The fragmentation of habitats into sub-clusters makes the survival of aquatic flora and fauna difficult (Jurajda 1995; Khan et al. 2014). Poor recolonization of such clusters, owing to species' sensitivity with respect to habitats, has even caused extinction (Collinge 1998). An interdisciplinary understanding of ecology and channel engineering is vital in such projects. The whole river system is very dynamic in nature; hence, development of even a small part of the river further modifies the channel, limiting the possibilities of restoration.

The River Gomti, a unique groundwater-fed river in the Ganga Alluvial Plain (GAP), is an important source of drinking water in many Indian cities and towns along the river basin (Shukla and Saxena 2020a, b, c, d). Residents in these cities, especially in Lucknow (capital of Uttar Pradesh), are exposed to various pollutants from point and nonpoint sources, including domestic sewage, industrial effluents, and agricultural and livestock waste, which are complex to monitor, assess, and control (Tangri et al. 2018). The population of Lucknow city expanded by approximately 38% from 2001 to 2011, which caused a sharp increase in the demand for drinking water.
The stress on the river is also highlighted by the fact that roughly 60% of the total water demand of 415 MLD (million liters per day) is fulfilled by river Gomti and the remaining 170 MLD by subsurface water resources (Goel et al. 2018). Hence, to address the increasing demand and deteriorating water quality, the 'riverfront development' project for river Gomti was initiated in April 2015 and completed in March 2017 in Lucknow city. It included straightening, narrowing, and lining of the river channel, along with the development of intercepting drains for efficient management of sewage. The beautification of the riverbank with parks was also included in the project. Very limited studies discuss the impact of riverfront development and channelization on the water quality of the river Gomti. A study by Dutta et al. (2018) explained the need for maintaining environmental flows and ecological balance and described changes in water quality after the riverfront development. However, that study was focused more on the morphological aspects of the riverfront development project, and no comparisons of water quality were made pre- and post-channelization. Thus, the present study assesses the detailed impact of riverfront development on the water quality of the river pre- and post-channelization. Because the riverfront development does not cover the entire stretch in the city of Lucknow, the present study discusses the water quality along various stretches of the river. An upstream site located ~30 km from Lucknow city was also selected to present a comparative view of the river water quality at sites with and without riverfront development. The primary aim of this study was to assess the water quality of river Gomti in both the pre- and post-riverfront development phases. Various pollution zones are also identified through Q-mode hierarchical cluster analysis and represented through spatial maps. Finally, the sources and inter-dependability of various water quality parameters were assessed through Pearson's correlation analysis. Study area The Gomti River Basin (GRB) lies between 80°00′-83°10′ E longitude and 24°40′-28°40′ N latitude (Khan et al. 2021a, b, c). It forms the northwest part of the Indo-Gangetic alluvial plain, with a catchment area of 30,437 km2. The GRB is characterized by two primary geologic units of Quaternary age (younger and older alluvium). The younger alluvial plain stretches along the river Gomti and forms a wide flood plain which supports agricultural activities throughout its stretch. The older alluvium occurs at higher elevation and is characterized by alluvial deposits comprising sand, clay, and kankar. The channel sediments of the river consist of quartz, feldspar, rock fragments, mica, and illite (the major clay mineral). The river Gomti originates from 'Fulhar Jheel' at Madhotanda in Uttar Pradesh and meets the river Ganga after meandering through an approximate stretch of 960 km. It has two major tributaries, the Saryu and Sai rivers, which join the river at Mohamadi and Jaunpur, respectively. River Gomti divides the city of Lucknow into two parts (cis and trans), and water from the river is lifted at Aishbagh waterworks for the water supply in the city. The climate of Lucknow city varies from semiarid to subtropical monsoon type, with a mean annual rainfall of 963 mm (Goel et al. 2018). The river flows from the north-west toward the south-east across the city.
The river water is a major drinking water source and also fulfills the requirements of the industrial and agricultural sectors in Lucknow city. However, the increasing population and water demand have contributed to the deterioration of the river water. Furthermore, untreated domestic and industrial wastewater is released through 26 major drains, causing a further decline in the water quality (Khan et al. 2021c). The riverfront development project has largely been unable to tap these drains, and a large quantity of untreated wastewater is still released into the river, which poses human health risks and endangers the aquatic population as well. Sample collection and analysis Twenty-four water samples were collected from river Gomti across the study area, with three samples from each location (one at each bank and one from the center of the river). The present study continues the research conducted by Goel et al. (2018) in 2015 (before the riverfront development) to evaluate the current status and impacts of the riverfront development project on river Gomti. The sampling was done in the post-monsoon period of 2019 at sites S1 (Chandrika Devi), S2 (IIM Road), S3 (Kudiaghat), S4 (Daliganj), S5 (Lakshman Mela), S6 (Bhaisakund), S7 (Dilkusha), and S8 (Shahid Path) (Fig. 1). The channelization of the riverbanks starts at S3 on one bank and, from S5 onward, continues on both banks of the river. The water samples were analyzed for pH, dissolved oxygen (DO), biological oxygen demand (BOD), total coliform (TC), potassium (K+), calcium (Ca2+), phosphate (PO43−), nitrate (NO3−), and fluoride (F−) to compute the WPI. The water samples were analyzed in accordance with the standard methods of APHA (2012). The water quality parameters pH and DO were measured at the site using a portable pH meter and a digital DO meter (Lab Junction, LJ-831), respectively. BOD was determined using Winkler's method in the laboratory, while NO3− and F− were analyzed using an advanced research-grade benchtop ion-selective electrode (Hanna, HI5522). K+, Ca2+, and PO43− were measured using ion chromatography (Metrohm 850). Quality assurance of results Analytical grade chemicals (purity >97%, from Sigma-Aldrich) were used throughout the analytical procedures to maintain quality assurance and quality control (QA/QC) standards. Further, all glassware was soaked for one hour in diluted nitric acid (a 1% nitric acid solution) and subsequently rinsed with distilled water. To minimize analytical inaccuracy, duplicate samples were collected and analyzed for each sampling location. The water samples were analyzed using calibrated equipment with acceptable uncertainties. An internal quality assurance system is applied regularly in the laboratory. During the analyses, different concentrations of standard solutions were measured as an internal quality control; the relative standard deviation (RSD) values were less than 5%. Water pollution index Water quality quantifies the nature of water with respect to anthropogenic requirements (Bempah and Ewusi 2016; Shukla and Saxena 2020a). Water quality assessment is primarily done using various physical, chemical, and biological parameters (Shukla and Saxena 2020b). The evaluation of the water pollution status of rivers is a critical and emerging area of interest around the world, requiring data collection, assessment, and interpretation (Shukla et al. 2020c).
There have been many approaches for assessing overall water quality (Gorgij et al. 2019; Li et al. 2018; Su et al. 2019, 2020; Tian and Wu 2019). The efficacy of the WPI is higher in comparison with that of conventional indexing methodologies. The use of varying weights and theoretical ideal values of any parameter can affect a water quality indexing approach. The unsegregated approach of the WPI is more accurate, considering the conversion of the input variables to a single index value. Hence, the slightest variation in an input parameter concentration can affect the WPI category of the water quality. In the current study, nine water quality parameters, namely pH, DO, BOD, total coliform, F−, Ca2+, K+, NO3−, and PO43−, were selected for the pollution load computation. The number of water quality parameters n can vary, considering the flexible approach adopted in the WPI methodology. The first step is the computation of the pollution load (PLi) of the ith parameter:

$$PL_i = 1 + \frac{C_i - S_i}{S_i} \qquad (1)$$

where Ci is the observed concentration and Si is the standard or highest permissible limit for the respective parameter. The equation for the PLi computation is different for pH, since a pH value of 7 is considered neutral, whereas values < 7 or > 7 are supposedly detrimental. If the pH value is < 7, Eq. 1.1 is used, where Sia is the minimum acceptable pH value, i.e., 6.5:

$$PL_i = \frac{C_i - 7}{S_{ia} - 7} \qquad (1.1)$$

If the pH is > 7, Eq. 1.2 is used, where Sib is the maximum acceptable pH value, i.e., 8.5:

$$PL_i = \frac{C_i - 7}{S_{ib} - 7} \qquad (1.2)$$

Finally, the pollution status of a water sample, i.e., the water pollution index (WPI) with n variables, is calculated by aggregating the PLi values and dividing by n (Eq. 1.3):

$$WPI = \frac{1}{n}\sum_{i=1}^{n} PL_i \qquad (1.3)$$

Statistical analysis The temporal/seasonal variation between the pre-riverfront development dataset and the post-riverfront development dataset was evaluated with the nonparametric Mann-Whitney U test (Woldeab et al. 2018). The spatial variation in water quality parameters among the different sampling sites was evaluated with the Kruskal-Wallis H test (Fatema et al. 2014). Furthermore, Pearson's correlation matrices and hierarchical cluster analysis (HCA) were performed using the OriginPro 2020b software package (OriginLab Corporation, Northampton, USA). Two correlation matrices were generated, one each for the 2015 and 2019 datasets. Pearson's correlation analysis is a widely used tool which estimates the linear dependence between various parameters (Batabyal and Chakraborty 2015; Khan et al. 2021b; Wu et al. 2014, 2020; Li et al. 2019; Ren et al. 2021). The value of Pearson's correlation coefficient 'r' lies between −1 and +1, indicating a negative or positive correlation, and there is no correlation between the parameters when 'r' is zero. When 'r' lies between ±0.9 and ±1, a 'very strong' correlation exists between the parameters. Similarly, a 'strong' correlation exists if the values of 'r' vary between ±0.76 and ±0.89, a 'good' correlation exists when the values of 'r' lie in the range of ±0.51 to ±0.75, and the correlation is called 'poor' for 'r' values of 0 to ±0.50 (Batabyal and Chakraborty 2015). The water quality datasets from 2015 and 2019 were further subjected to Q-mode HCA (Q-HCA).
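To make the aggregation concrete, the following minimal Python sketch implements Eqs. 1-1.3. The permissible limits and sample concentrations are illustrative placeholders, not the standards used in this study, and the plain form of Eq. 1 is applied to every non-pH parameter (including DO, whose "lower is worse" behaviour is not treated specially here).

```python
# Minimal sketch of the WPI aggregation in Eqs. 1-1.3.
# The Si limits below are illustrative placeholders, not the study's standards.
def pollution_load(param, c, s=None):
    """Pollution load PL_i of one parameter (Eq. 1; Eqs. 1.1/1.2 for pH)."""
    if param == "pH":
        if c < 7:
            return (c - 7.0) / (6.5 - 7.0)   # Eq. 1.1 with Si_a = 6.5
        if c > 7:
            return (c - 7.0) / (8.5 - 7.0)   # Eq. 1.2 with Si_b = 8.5
        return 0.0                            # neutral pH adds no load
    return 1.0 + (c - s) / s                  # Eq. 1

def wpi(observed, limits):
    """Eq. 1.3: mean pollution load over the n parameters measured."""
    loads = [pollution_load(p, c, limits.get(p)) for p, c in observed.items()]
    return sum(loads) / len(loads)

# Hypothetical sample; the categories below are consistent with those quoted in
# the text: <= 0.5 excellent, 0.5-0.75 good, 0.75-1 moderate, > 1 highly polluted.
limits = {"BOD": 3.0, "NO3": 45.0, "F": 1.5, "K": 12.0}
sample = {"pH": 8.1, "BOD": 6.5, "NO3": 12.0, "F": 0.6, "K": 20.0}
print(f"WPI = {wpi(sample, limits):.2f}")
```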
HCA uses either similarities or dissimilarities between the parameters within the datasets and classifies the dataset into several clusters according to the distance of similarity/dissimilarity between these clusters (Adimalla et al. 2020; Elumalai et al. 2020; Loh et al. 2020; Shukla and Saxena 2020d). Q-HCA helps in the classification of the monitoring sites based on their similar chemical composition, which suggests a probable common origin of contaminants (Zhu et al. 2017). For the Q-HCA, Ward's method with squared Euclidean distances was used, as it is considered to provide the best results. Variation in water pollution index post-channelization The study by Goel et al. (2018) highlighted the impact of untreated domestic sewage discharged into the river, discussed the possible inefficiency of the sewerage systems, and suggested possible remediation measures. The riverfront development project included straightening and shortening of the river channel, affecting its width, shape, and riverbed, and comprised the construction of a diaphragm wall on both banks along a stretch of ~8 km (Dutta et al. 2018). The water samples at all sites fell in the 'highly polluted' category during both the pre- and post-channelization surveys. The increased WPI values after the riverfront development clearly indicated the deteriorated water quality of the river. A very steep increase in the WPI values in the post-riverfront development phase was witnessed at all sites, in the order S3 > S4 > S5 > S8 > S6 > S7 > S2 > S1, as illustrated in Fig. 2. Only one water sample fell in the 'good' water quality category after the development of the Gomti riverfront. The highest deterioration in water quality was seen at site S3 (~275%), signifying the possible influence of small-scale local manufacturing units, textile dyeing, clothes washing, etc., in the vicinity. The water quality at site S4 showed a significant deterioration, with the WPI value increasing by ~172%, signifying the direct discharge of untreated domestic sewage into the river. The WPI values at sites S5 and S6 showed increases of ~70.1% and ~38.6%, respectively. Various cremation-related rituals performed in the vicinity of both sites and the consequent anthropogenic influence emerge as possible causes of the deteriorated water quality. The WPI of the water sample at site S7 showed an approximate increase of 24.03% after the channelization, and the WPI value at the downstream site S8 increased by 40.26%. The upstream site S1, located ~30 km from the city and not channelized, showed the minimum increase in the WPI value (~5.76%). The WPI value at this site was 0.71, which fell in the 'good' water quality category. Cluster analysis For a better representation and understanding of the impacts of channelization, a stretchwise assessment was required. Hence, the variation in water quality parameters is assessed according to the clusters obtained through Q-HCA. A stretchwise summary of the water quality parameters and their statistical measures is presented in Table 1. The results of the Q-HCA were also plotted as dendrograms for both assessment periods (pre- and post-channelization). The dendrograms present the extent of similarity between the sampling locations, and similar sites are kept in the same cluster. The sampling locations were grouped into two clusters for the pre-channelization period (Fig. 3a) and three clusters for the post-channelization period of assessment (Fig. 3b).
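As a sketch of how such a site-wise clustering can be reproduced, the snippet below applies Ward-linkage HCA to a standardized site-by-parameter matrix with SciPy; the data matrix is a random placeholder, not the study's measurements.

```python
# Sketch: Q-mode HCA of the sampling sites (Ward linkage on standardized data).
# The 8 x 9 matrix is a random placeholder for the sites-by-parameters table.
import numpy as np
from scipy.cluster.hierarchy import dendrogram, fcluster, linkage
from scipy.stats import zscore

sites = ["S1", "S2", "S3", "S4", "S5", "S6", "S7", "S8"]
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 9))            # 8 sites x 9 water quality parameters
Xz = zscore(X, axis=0)                 # standardize so no parameter dominates

# Ward's method merges the pair of clusters giving the smallest increase in
# within-cluster variance, i.e. it operates on (squared) Euclidean distances.
Z = linkage(Xz, method="ward", metric="euclidean")
labels = fcluster(Z, t=3, criterion="maxclust")   # cut the tree into 3 clusters
print(dict(zip(sites, labels.tolist())))
# dendrogram(Z, labels=sites)          # plot to mirror the dendrograms of Fig. 3
```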
For the pre-riverfront development period, cluster I grouped the remaining three downstream sites, Bhaisakund (S6), Dilkusha (S7), and Shaheed Path (S8), and can be considered a 'high pollution' zone (Table 1). Cluster II grouped five sites, Chandrika Devi (S1), IIM Road (S2), Kudia Ghat (S3), Daliganj (S4), and Lakshman Mela (S5), because of the similarities between their values of BOD, EC, NO3−, K+, and F−. Cluster II can be considered a 'low pollution' zone based on the mean values of the water quality parameters (Table 1). In the post-riverfront development period, cluster I had only one site, viz. Shaheed Path (S8), which was the most polluted site across the whole stretch considered in this study, with the maximum values of all the parameters reported at this location (Fig. 3b). Cluster II had two sub-clusters, with Daliganj (S4) and Lakshman Mela (S5) in one group, and Kudia Ghat (S3), Bhaisakund (S6), and Dilkusha (S7) in another group. Cluster II can be categorized as representing 'moderate to high pollution' and has similar values of the water quality parameters at all sites. Further, cluster III can be categorized as the least polluted zone, with Chandrika Devi (S1) and IIM Road (S2) grouped together. Based on the results of the stretchwise cluster analysis, it can be concluded that the riverfront had a critical deteriorating impact on the water quality parameters. Sites S3, S4, and S5, which were in the low pollution zone in the pre-channelization period, were found to be in the 'moderate to high pollution' zone after the channelization. Pearson's correlation matrix The significance of spatial and temporal differences between the water quality parameters was verified using the Kruskal-Wallis H test and the Mann-Whitney U test, respectively. With respect to the temporal variation between the pre- and post-riverfront development datasets, the Mann-Whitney U test indicated that the distributions cannot be considered significantly different at the significance level of 0.05. Moreover, the Kruskal-Wallis H test indicated that the water quality parameters were significantly different among the sampling sites, with p < 0.05 and a Chi-square value of ~70. It can be concluded from these results that the variation in the water quality parameters from pre- to post-channelization can be attributed to various anthropogenic activities. Further, the correlation matrices for the water quality parameters are presented in Fig. 4a, b. It can be seen that pH was negatively correlated with all parameters except DO in both sampling periods. Similarly, DO had a 'very strong' negative correlation with BOD and TC pre- and post-riverfront development, whereas BOD and TC were very strongly positively correlated, suggesting that sewage is the primary source of pollution in River Gomti. All the cations had 'good' to 'very strong' correlations with EC, suggesting that the variation of EC is controlled by these ions in the River Gomti. Nitrate exhibited a 'strong' correlation with K+ in both sampling periods, suggesting that the contribution from agricultural practices and subsequent runoff can be responsible for the occurrence of these ions in the river water. Moreover, Ca2+ and F− also had a 'strong' correlation with K+, suggesting a common anthropogenic origin of these contaminants.
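The following sketch mirrors these statistics with SciPy and pandas on hypothetical stand-ins for the 2015 and 2019 datasets; the parameter names and random values are illustrative only.

```python
# Sketch: Pearson's r matrix plus the two nonparametric tests named above.
# The two data frames are random stand-ins for the 2015 and 2019 datasets.
import numpy as np
import pandas as pd
from scipy.stats import kruskal, mannwhitneyu

params = ["pH", "DO", "BOD", "TC", "NO3", "K"]
rng = np.random.default_rng(1)
pre = pd.DataFrame(rng.normal(size=(24, 6)), columns=params)    # "2015"
post = pd.DataFrame(rng.normal(size=(24, 6)), columns=params)   # "2019"

print(post.corr(method="pearson").round(2))   # one correlation matrix per period

# Temporal variation, parameter by parameter: Mann-Whitney U test.
for p in params:
    u, pval = mannwhitneyu(pre[p], post[p], alternative="two-sided")
    print(f"{p}: U = {u:.0f}, p = {pval:.3f}")

# Spatial variation among sites: Kruskal-Wallis H test across site groups
# (three hypothetical groups of samples drawn from one parameter here).
groups = np.array_split(post["BOD"].to_numpy(), 3)
h, pval = kruskal(*groups)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {pval:.3f}")
```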
Spatial distribution of WPI The spatial variation of the WPI across different stretches of River Gomti in Lucknow city was determined through inverse distance weighted (IDW) interpolation using ArcMap 10.3. The WPI variation along the river clearly indicated the increasing pollution load and deteriorating water quality, as illustrated in Fig. 5. The minimum pollution was witnessed at sites S1 and S2, with water samples showing WPI values of 0.71 (good water quality) and 0.77 (moderately polluted quality), respectively. The upstream location of these sites emerged as the primary reason for the better water quality. This finding was in coherence with the results from the cluster analysis, in which sites S1 and S2 occurred in the least polluted zone, i.e., cluster III (Fig. 3b). The midstream sites (S3, S4, S5, and S6) showed 'highly polluted' water quality, which was again in coherence with their grouping in cluster II by the Q-HCA. Sites S7 and S8 had the worst water quality, with the maximum pollution load. The downstream location of these sites and the sluggish flow of river water caused by the development of the riverfront prominently affect the water quality at sites S7 and S8. The placement of site S8 in cluster I confirms our findings from the WPI values. Thus, it can be clearly seen that the water quality at all sites throughout the stretch has not witnessed any improvement after the riverfront development. The potential causes of the deteriorating water quality include the reduced flow in the river caused by the shortening of the river channel and the reduction in channel width through the construction of a diaphragm wall along both banks in a stretch of ~8 km. The lining of the riverbank has minimized the possibility of adsorption of pollutants on sediments. The extensive development of riverfront infrastructure, including weirs and dams, has caused disruption to the natural river habitats. The declining aquatic population caused by the disruption of natural riffle-pool sequences and the excessive dredging during construction have been highlighted in a previous study (Dutta et al. 2018). A reduction of fish biomass in river Gomti has been reported after channelization, highlighting the negative impact of excessive dredging and disturbance of the river banks. A reduced pace of reinstatement of fish populations, and even no recovery, has been reported after channelization in various studies (Kennedy and Turner 2011; Sharan 2016; Vaughan and Ormerod 2010). Riverfront development and its impacts on river restoration Riverfront development and subsequent restoration projects across the globe aim at the maintenance and improvement of riverine ecosystem goods and services. However, the balance between environment, ecology, and channel engineering should be considered in such river restoration projects. The dominance of channelization engineering can cause major deterioration of the water quality of rivers, as seen in river Gomti. Modifications of the ecology, floodplains, and other key fluvial characteristics as a result of the channelization have been witnessed in the river Gomti (Goel et al. 2018). The channelization, the covering of floodplains with concrete, and the filling of wetlands have contributed to the modification of habitats, critically affecting fish diversity. Large-scale, unpredictable variation in sediment deposition can be witnessed at sites below channelized river sections (Kennedy and Turner 2011).
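To illustrate the interpolation step, below is a minimal IDW sketch in Python; the site coordinates are invented, and the WPI values are only loosely based on those reported in the text (the 2.5 is a made-up midstream value), so this is a sketch of the method rather than a reproduction of the ArcMap surface.

```python
# Sketch of inverse distance weighted (IDW) interpolation of WPI values.
# Coordinates are invented; the WPI values are loosely based on the text,
# with 2.5 as a made-up midstream value.
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Distance-weighted mean of known values at each query point."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power        # nearer sites receive larger weights
    return (w * values).sum(axis=1) / w.sum(axis=1)

sites = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0], [3.0, 1.2]])
wpi_vals = np.array([0.71, 0.77, 2.5, 7.04])
query = np.array([[1.5, 0.7], [2.5, 1.1]])
print(idw(sites, wpi_vals, query))      # interpolated WPI along the reach
```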
River Gomti supports various vegetation patches that sustain the processes of channel erosion and deposition. The existence of pools and riffles is vital for the optimal survival of fishes, considering their use as areas for feeding, cover, and breeding. The tendency of pools to scour at high flow and fill at low flow, and vice versa in riffles, is very important in the maintenance of pool-riffle sequence morphology. Such sequences, in turn, act as dwellings for some organisms, which consequently serve as food for other organisms (Keller 1978). The removal of pool-riffle sequences from the river Gomti has occurred as a result of the heavy channelization. The dredging has led to the degradation of natural habitats, remobilization of contaminants, increases in suspended sediment concentrations, and uneven sedimentation. Pool-riffle sequences are composed of various materials, which include inputs from benthonic species, including fishes (Jurajda 1995). The riverfront development and the modification of the channel width for river straightening led to the loss of natural habitats. The channelization of the river Gomti has also led to a scarcity of lentic zones and side arms with aquatic vegetation. The diaphragm wall across the stretch of River Gomti has removed some fish habitats, given the absence of the required mild-slope shorelines. Local fishermen, during interactions at site visits (Fig. S1), expressed immense worry about the reduction of the fish population. The role of in-stream habitats in maintaining the heterogeneity of fish populations is vital. The absence of many primary fish habitats after the river channelization has also been reported (Dutta et al. 2018). The homogenization of fish-fauna habitats at a regional scale can lead to a potential loss of future habitats, causing depletion of fish diversity. The straightening and the reduction in width of the river have disrupted the natural state of the channel, modifying the flow velocity, gradient, and depth of the river channel. The changed flow and the constant release of untreated/partially treated wastewater from various drains across Lucknow city have directly affected the breeding and survival of aquatic flora and fauna. These changes have consequently led to a reduction in the fish population. A report by Ahmad (2013) had also mentioned the need to strictly avoid reducing the width of the river channel (< 250 m) during the riverfront development and channelization project. However, the design of the diaphragm walls seems to lack a coherent approach toward incorporating these suggestions and providing for the maintenance of the river ecology. The balance between river ecology and channel development was vital in this riverfront development project, but it seems not to have been maintained. Conclusions Riverfronts can prove to be a rare resource for urban development and eco-environmental protection when developed with ecological sensitivity. However, in the case of river Gomti, the aim of developing a water-land and human-nature interaction zone could not be attained with complete efficacy. An average increase of ~81% in the WPI values across all the sites post-riverfront development was assessed in the study. This increase indicated deteriorated river water quality, highlighting the continued discharge of sewage with partial or no treatment and raising concerns about the operation and management of the sewerage systems.
The lowest water quality was witnessed at the downstream site of Shaheed Path, with a WPI value of 7.04. The least deterioration in water quality (WPI values of 0.71 and 0.77) was witnessed at the two upstream sites (S1 and S2), which are notably not channelized. The artificial modification through straightening and the decrease in the width of the river channel have affected the natural flow along with the associated self-cleaning capacity of River Gomti. Thus, the current study highlights the need for a balanced and coherent approach, with ecologists, environmentalists, and civil engineers working together and considering all aspects associated with the impacts of riverfront development on the natural state of surface water resources.
Aqueous dispersions of oxygen nanobubbles for potential application in inhalation therapy Inhalation is a non-invasive method of local drug delivery to the respiratory system. This study analyzed the potential use of aqueous dispersions of oxygen nanobubbles (ADON) as a drug carrier with the additional function of oxygen supplementation to diseased lungs. The suitability of the membrane-based method of ADON preparation and, next, the stability of ADON properties during storage and after aerosolization in nebulizers of various designs (jet, ultrasonic, and two vibrating mesh devices) were investigated. The increased oxygen content in the aerosol generated in the two mesh nebulizers suggests that the proposed concept may be helpful for oxygen supplementation during drug delivery by aerosol inhalation without using an additional oxygen source. This application can increase the overall effectiveness of lung disease treatment and pulmonary rehabilitation. 2. the influence of NBs on the nebulization process in different nebulizers, including aerosol droplet size and fine particle fraction (FPF); 3. the influence of the nebulization process on NB stability and the oxygen concentration in the aerosol. Methods ADON preparation and stability testing. ADONs were prepared in a generation setup (Fig. 1) with a cylindrical porous ceramic membrane (ZrO2 on TiO2 support, SiC membrane pore diameter 0.14 μm, internal/external membrane diameter 8/10 mm, membrane length 125 mm; Tami Industries, France). The membrane was enclosed in a stainless steel casing which allowed gas to be supplied to the membrane in a controlled manner; the gas could freely fill the whole volume of the casing. During the generation process, pressurized oxygen from the cylinder was forced through the membrane, and the shear stress of the flowing distilled water caused the nanobubbles to detach from the membrane surface. The nanodispersion was recirculated in the setup and stored in a 5 L stainless steel tank. Generation was carried out in 4 L of distilled water for 30 min with constant gas pressure and volumetric flow rates of liquid (Ql) and oxygen (Qg) (Table 1). The liquid pressure drop in the membrane module, ΔP, was calculated as the difference between the readings of two pressure transducers on the liquid path (denoted as 7a in Fig. 1). These conditions were selected as optimal after preliminary tests using different values of Qg and Ql (data not shown). Samples were collected into plastic containers (for immediate use) or into glass vials, which were closed and secured with Parafilm (for storage/use after a prolonged time). The samples were then used to measure the quality of the dispersion (density of the nanobubble size distribution, Sauter diameter, oxygen concentration in the liquid) and to determine the characteristics of the atomization process carried out in the nebulization chambers. ADON stability was tested after 1, 4, 7, 14 and 21 days of storage. Density of nanobubble size distribution, Sauter diameter and zeta potential. The density of the NB size distribution was determined by the dynamic light scattering (DLS) method using a Zetasizer NanoZS (Malvern Panalytical, Malvern, UK). The size distribution was additionally characterized by the Sauter mean diameter d32:

$$d_{32} = \frac{\sum_i n_i d_i^3}{\sum_i n_i d_i^2} \qquad (1)$$

where ni is the number fraction of bubbles with diameter di. The Sauter mean diameter is typically used in the analysis of fluid dynamics and mass transfer processes.
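A minimal sketch of the d32 computation from a binned number distribution follows; the bin diameters and number fractions are invented for illustration.

```python
# Sketch: Sauter mean diameter d32 (Eq. 1) from a binned number distribution.
# The DLS bin diameters and number fractions below are invented.
import numpy as np

def sauter_diameter(d, n):
    """d32 = sum(n_i * d_i^3) / sum(n_i * d_i^2)."""
    d, n = np.asarray(d, float), np.asarray(n, float)
    return (n * d**3).sum() / (n * d**2).sum()

diameters_nm = np.array([100, 140, 180, 250, 350])     # hypothetical bins
fractions = np.array([0.15, 0.40, 0.25, 0.15, 0.05])   # number fractions
print(f"d32 = {sauter_diameter(diameters_nm, fractions):.0f} nm")
```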
These measurements were done for ADONs: freshly prepared (30 s after sampling from the generation system), after different periods of storage, and in the liquid samples collected from the nebulized aerosols (see "Nebulization and aerosol collection" section). Once for each generation charge, the zeta potential was assessed using the Zetasizer NanoZS with a dip cell for microelectrophoretic measurements. Nebulization and aerosol collection. Four types of medical nebulizers with different constructions and mechanisms of aerosol generation were selected to atomize the ADONs. They are described in Table 2. The aerosols formed in the nebulizers were condensed and collected in glass vessels and then immediately tested for the density of the NB size distribution. The experiments were done in triplicate. Oxygen concentration measurements were done for distilled water (as a reference) and ADON. The studied nebulizers had different principles of operation, which helps to explain later why not all of them are suitable for the delivery of aerosols formed from ADON. The jet nebulizer consists of a nebulizer head equipped with a nozzle and requires a source of compressed air delivered from an electric compressor. The aerosol is generated in a Venturi-type nozzle in the head and is splashed against an inner baffle which separates large droplets. This causes the partial drainage of the liquid and its recirculation, which extends the residence time of the drug inside the vessel. Only small droplets are carried outside the head with the stream of air delivered from the compressor. Ultrasonic nebulizers consist of a nebulization chamber positioned above a piezoelectric crystal which generates ultrasound with a frequency of 1-3 MHz. The nebulization chamber of the nebulizer used in this study is filled with water and equipped with a medicine cup partially immersed in the water, preventing overheating of the drug during atomization. The drug droplets are torn off the surface of the acoustic fountain and carried away with the auxiliary airflow. The oversized drops are retained on the impaction baffles and returned to the cup, and only the fine droplets forming the inhalable mist can be inhaled by the patient. In vibrating mesh nebulizers (VMNs), a liquid drug is atomized as it passes through a metal or plastic membrane with micrometric pores made precisely by laser processing. The piezoelectric crystal induces vibrations at a frequency of 100-180 kHz. The liquid is pushed through the pores, and liquid fragments are torn off from the mesh surface, forming few-micrometer-size droplets. It is worth noting that the time of liquid conversion to aerosol in VMNs is very short and does not require auxiliary air. Aerosol characteristics. The droplet size distribution (DSD) in the mists generated in the nebulizers was determined using a Spraytec laser diffraction aerosol spectrometer (Malvern Instruments, UK). The device was equipped with a 300 mm detector lens and measured the volumetric size distribution of droplets in the range of 0.1-900 µm. Measurements were done in time mode (30 s) with a rapid data acquisition rate (100 Hz). The raw data were averaged across the measuring time-range during the stable phase of aerosol emission (i.e., for relatively constant values of laser light obscuration and measured droplet diameter). As the final indicators of aerosol quality, the median volumetric diameter (Dv50), the geometric standard deviation (GSD), and the mass fraction of droplets smaller than 5 µm (fine particle fraction, FPF) were determined based on the complete DSD. Respirable particles are typically defined based on the mass median aerodynamic diameter (MMAD); however, under certain conditions, Dv50 can be considered an MMAD equivalent. We used Dv50 because the density of aqueous dispersions is approximately equal to 1 g/mL, and the droplets released from nebulizers are close to spherical28,29.
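As an illustration of how these indicators can be derived from a binned volumetric distribution, consider the sketch below; the bins and volume fractions are invented, not Spraytec output, and the GSD is computed under the usual lognormal assumption (GSD = sqrt(Dv84/Dv16)), which is an assumption of this sketch rather than the instrument's internal method.

```python
# Sketch: Dv50, GSD, and FPF from a binned volumetric droplet size distribution.
# The bins and volume fractions are invented; GSD uses the lognormal
# approximation GSD = sqrt(Dv84 / Dv16).
import numpy as np

d = np.array([1.0, 2.0, 3.0, 5.0, 8.0, 12.0, 20.0])         # bin diameters, um
v = np.array([0.05, 0.15, 0.20, 0.25, 0.20, 0.10, 0.05])    # volume fractions
cdf = np.cumsum(v) / v.sum()                                # cumulative volume

dv50 = np.interp(0.50, cdf, d)          # volume median diameter
dv16 = np.interp(0.16, cdf, d)
dv84 = np.interp(0.84, cdf, d)
gsd = np.sqrt(dv84 / dv16)              # geometric standard deviation
fpf = np.interp(5.0, d, cdf)            # volume fraction of droplets < 5 um
print(f"Dv50 = {dv50:.1f} um, GSD = {gsd:.2f}, FPF = {100 * fpf:.0f}%")
```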
Determination of oxygen concentration. The oxygen concentration was measured using the optical sensor ProSolo (YSI, USA), which also records the temperature and the ambient pressure. Two types of measurements were done: (a) determination of the oxygen concentration in ADONs, either directly after NB generation or after nebulization and liquid collection (this required the collection of samples for 30-40 min); and (b) measurement of the oxygen content in the aerosol phase formed by the nebulized liquids (on-line measurement). In the latter case, the sensor was positioned near the nebulizer outlet, enabling its complete immersion in the freshly formed aerosol. Measurements were carried out until the readings stabilized (steady-state conditions). Results ADON properties after preparation and storage. The nanobubble dispersions generated in the porous-membrane system (Fig. 1) were characterized regarding NB size and oxygen concentration. Figure 2 presents the densities of the number size distributions of NBs in two ADON samples: immediately after NB generation and after 24 h of storage in a sealed glass vessel at room temperature. The results show that 1-day storage of ADON in a sealed glass vessel preserves the oxygen nanobubbles; however, their size distribution becomes narrower, and the mode diameter increases from ~140 to ~160 nm. This suggests that ADON undergoes equilibration with some coalescence of NBs. This agrees with the results of our previous studies3, although the ADONs were then stored in plastic containers with a shallow layer of air below the lid, which might lead to partial oxygen desorption to this layer and also through the polymeric walls of the container. These effects additionally explained the change in the densities of the bubble size distributions in those studies; however, they are absent when ADON is stored in sealed glass vessels without an air layer. Additionally, the zeta potential of the nanobubble dispersions (−22 mV) was in agreement with both our previous results and literature references3,11,30. Assuming that ADON for medical purposes needs to be preserved in securely sealed containers, we checked whether storage in gas-proof glass bottles affects the size and oxygen concentration of the nanobubble dispersions. Table 3 presents the oxygen concentrations and NB Sauter diameters in dispersions directly after generation and after set time intervals. It is worth noting that directly after NB generation (i.e., after the 30 s needed for taking the sample and conducting the measurement), ADON is close to complete saturation with oxygen at the given temperature (35.08 ± 2.87 mg L−1 at complete saturation). The oxygen content in ADON directly after NB generation is 4.5-fold higher than the equilibrium value expected from the physical solubility in water of the oxygen contained in air under these conditions of temperature and pressure.
After the initial decrease of the oxygen content by day 4 to a value 2.3-fold higher than the equilibrium value, the oxygen level remains constant until day 21 (when the stability studies were finished), confirming the good stability of ADON as an oxygen carrier. A post-hoc Tukey test shows that the oxygen concentration in the nanodispersion directly after generation is significantly different (for α = 0.001) from the oxygen concentrations at the following time points, while from the 4th day the oxygen concentration does not change significantly even for α = 0.05. The NBs are preserved, and their Sauter diameter increases from 251 ± 12 nm to 421 ± 73 nm after 4 days and remains at this elevated level (i.e., between 327 ± 73 nm and 454 ± 143 nm) during the following weeks. Table 3: Properties of ADONs during storage in closed glass containers (room temperature, atmospheric pressure); values are means ± SD, n = 3; the columns give the period of storage, the oxygen concentration (mg L−1), the oxygen content in relation to equilibrium with air (%), and the Sauter mean diameter of NBs (nm). Samples denoted with a single asterisk are significantly different from the sample taken directly after generation at α = 0.05; a double asterisk denotes significance at α = 0.1. Similarly to the oxygen concentration, according to the post-hoc Tukey test the Sauter diameter of the bubbles was significantly different (α = 0.05) between the nanodispersion directly after generation and after storage in closed glass bottles, while the measurements on the following days were not significantly different from one another at the same α. The only outlier was the Sauter diameter after 2 weeks, which, while not significantly different from the samples taken 4 days, 1 week, and 3 weeks after generation, was also not significantly different from the freshly generated nanodispersion; however, at α = 0.1, the difference is significant. These results are extremely important when designing any therapeutics for administration a prolonged time after preparation.
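A sketch of such a post-hoc comparison with SciPy's tukey_hsd (available in recent SciPy releases) is shown below; the replicate values are invented to roughly match the reported means (n = 3 per time point), so the printout only illustrates the expected pattern.

```python
# Sketch: post-hoc Tukey HSD on oxygen concentration across storage times.
# Requires SciPy >= 1.8 (tukey_hsd); the replicate values are invented to
# roughly match the reported means, n = 3 per group.
from scipy.stats import tukey_hsd

fresh = [34.1, 35.8, 35.3]    # directly after generation, mg/L
day4 = [17.9, 18.4, 17.5]     # ~2.3-fold above air equilibrium
day7 = [17.6, 18.1, 17.8]
day21 = [17.2, 18.0, 17.7]

res = tukey_hsd(fresh, day4, day7, day21)
print(res)   # pairwise differences with confidence intervals and p-values
# Expected pattern: 'fresh' differs from all later groups, which are similar.
```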
Aerosol characteristics. Figure 3 shows the parameters characterizing the aerosols generated in each nebulizer from water and ADON, i.e., the volume median droplet diameter (Dv50, Fig. 3a), the geometric standard deviation (GSD, Fig. 3b), and the fine particle fraction (FPF, Fig. 3c). Three nebulizers generate aerosol with similar Dv50 (4.5-6 µm), degree of polydispersity (GSD = 1.75-2), and fraction of fine droplets (FPF = 40-50%). In contrast, the VMN Intec produces a mist with significantly different properties (Dv50 > 12 µm, FPF < 10%), suggesting that it is more suitable for the treatment of upper airway diseases, such as laryngotracheobronchitis (croup) or other oropharyngeal infections. Considering the influence of NBs on the properties of the produced aerosol, it may be noted that ADON nebulized in three devices (Aerogen, Pari and Thomex) is characterized by a slightly lower Dv50 and an increased FPF compared to nebulized distilled water. The most significant change is found for Pari, where the FPF increased from 45% for water to 52% for ADON. The opposite effect is seen for Intec, with a slightly increased Dv50 and the FPF reduced from 3% (water) to 1% (ADON), although this parameter is of less importance if the aerosol targets the upper airways. In most cases, the difference between aerosolized water and ADON is not statistically significant (see asterisks in Fig. 3), which suggests that ADON can be effectively atomized in various types of commercially available nebulizers, generating aerosols suitable for drug delivery by inhalation. As the FPF is significantly larger for three out of four nebulizers, we can assume that the presence of nanobubbles in nebulized water facilitates the formation of smaller droplets for most nebulization mechanisms. The next step was to check whether the size of the nanobubbles is affected by the method of nebulization. For that, measurements of the density of the bubble size distributions were done for ADONs after nebulization. As shown in Fig. 4, NBs are still present in the liquids collected from the condensing aerosols of nebulized ADONs, regardless of the nebulizer type. At the same time, there are noticeable differences in the densities of the size distributions of the oxygen NBs present in the original and collected dispersions. The mode of the distribution is shifted towards smaller diameters in all collected samples. Moreover, in the case of the Pari and Thomex nebulizers we can see an additional peak at bubble sizes below 70 nm, i.e., the distribution changes from unimodal to bimodal. This can be explained by the fact that ultrasonic nebulization affects the density of the size distribution of nanobubbles, as ultrasonic waves are able to both destroy and generate new nanobubbles in the liquid, depending on the process parameters31,32. For the jet nebulizer, the destruction of nanobubbles may be caused by shear stresses, droplet impaction, and liquid recirculation inside the nebulizing head. The least change in nanobubble size is observed for the Intec vibrating mesh nebulizer, where the mode of the nanobubble size decreases from about 180 to 150 nm with a slight increase in the peak width. This may be explained by the larger size of the droplets obtained in the Intec nebulizer, which are formed at lower energy densities and shear stresses. In the case of the Aerogen vibrating mesh nebulizer, the bubble size distribution after atomization is wider than in the initial samples; however, the monomodal character is preserved. Taking into account the changes in the density of the NB size distribution, we next focused only on the two VMNs, Aerogen and Intec, which caused the least changes in ADON quality after nebulization. Oxygen content in nebulized ADON. Knowing that ADON after nebulization contains nanobubbles, this part of the study was intended to check for an increase of the oxygen concentration in the ADON aerosol. The presence of NBs is assumed to enhance the therapeutic effect of aerosol inhalation. Initially, we tried to measure the oxygen concentration in the liquids collected after nebulization by coalescence of the aerosol droplets in a glass vessel. The results shown in Fig. 5 for ADON and water show similar values of the oxygen concentration in the liquid phase, regardless of the nebulizer used. These values were significantly lower than in the "fresh" ADON before nebulization. The results suggest that this method of analysis was ineffective, most probably due to the desorption of oxygen from ADON during the 20-30 min required for the collection of the nebulized aerosol. The desorption process is fast because the surface area of air/water contact was very large during ADON atomization to droplets in the micrometer size range. Under such conditions, the oxygen concentration in ADON was reduced to the equilibrium value for oxygen dissolved in water in contact with air at room temperature and atmospheric pressure.
However, this process does not correspond to the conditions of therapeutic application of nebulizers, where the released aerosol immediately flows into the respiratory tract. In such conditions, oxygen from ADON is carried to the lungs both as NBs inside fine droplets and as desorbed gaseous oxygen contained in the inhaled air. As shown before, a higher oxygen concentration is preserved in ADONs during storage, so ADON nebulization should increase the amount of oxygen supplied to the organism during inhalation. To demonstrate this, the oxygen concentration was measured directly in the mists emitted from the nebulizers. The oxygen sensor used in this study could measure the oxygen concentration in either gas or liquid, but measurements in mists pose a significant challenge for the interpretation of the results, due to the difficulty of evaluating the density of the released mists. For the quantitative determination of the oxygen content in ADON mists, we assumed the equilibrium concentration of dissolved oxygen in water as the reference concentration for all the measurements. Table 4 compares the oxygen content in the mists emitted from the VMNs for nebulized ADON and water (Table 4: comparison of the oxygen content in the mists measured directly at the outlet of the VMNs for water and ADON; values are means ± SD, n = 3; for each nebulizer, Aerogen and Intec, the columns give the oxygen content (mg L−1) and the increase (%)). ADON was used directly after generation and after 7 or 14 days of storage in securely sealed glass bottles. The columns denoted as 'Increase (%)' show the relative difference between the average oxygen concentration in each sample and that in water. One can clearly see that the oxygen content in the ADON aerosol directly after nebulization, i.e., in the mist entering the respiratory tract, was 18.85% or 13.62% higher than in the aerosol generated from water in the Aerogen and Intec nebulizers, respectively. For ADONs after storage, this effect is decreased but is still present for Aerogen. In contrast, the oxygen content in ADONs nebulized in Intec after 7 or 14 days of storage was significantly reduced. According to the post-hoc Tukey test (α = 0.05), all of the results are significantly different from one another, apart from two pairs. The first pair shows no significant difference between the oxygen concentrations in water atomized by the Aerogen and the Intec nebulizer, while the second confirms that the oxygen concentration in ADON atomized using the Aerogen nebulizer does not change significantly after the 7th day of storage. Discussion The presented comprehensive studies of ADON properties show that oxygenation in the membrane system under optimized process conditions allows obtaining a dispersion containing nanobubbles in the size range of 80-450 nm (count mode: 140 nm). The NB size range becomes narrower due to equilibration after 1 day (range of 90-350 nm, count mode: 160 nm) and remains practically unchanged for 21 days when stored in a closed glass container at room temperature (Fig. 2, Table 3). These results confirm that it is possible to obtain stable NB dispersions for potential use in nebulization a few weeks after ADON production. The results also show that ADONs can be effectively nebulized in various nebulizers without influencing the droplet size distribution of the generated aerosol (Fig. 3). This is an important factor confirming that aerosol nebulized from ADON maintains the good properties required for targeting various levels of the respiratory system. Three of the studied nebulizers (Pari, Thomex, and Aerogen) generated aerosol with a size appropriate for targeting the lower airways (Dv50 = 4.5-6 µm and FPF = 40-55%), while one nebulizer (Intec) seemed to be more useful for delivering aerosol to the upper airways (Dv50 > 12 µm).
Each device's nebulization parameters were almost unchanged regardless of the liquid used, i.e., water or ADON. It was also confirmed that the nebulization process, despite the substantial energy input to the liquid phase and the extended residence time of NBs in the device, did not destroy the nanobubbles: they were present in ADON collected from the aerosol droplets, although the NB size characteristics were modified. For instance, in the pneumatic (Pari) and ultrasonic (Thomex) nebulizers, bimodal number distribution functions were noted in ADON after nebulization (Fig. 4), suggesting the influence of the liquid atomization mechanism on the NBs present in the water. However, the size characteristics of the NBs in the dispersions nebulized in the VMNs remained stable, which allowed us to select these two nebulizers for the further studies of the oxygen content in aerosolized ADONs. It should be noted that this type of measurement was challenging, since it required a few-minute period of collecting the droplets, during which the liquid was in contact with the atmospheric air. Such conditions caused oxygen desorption from ADON to the air, reducing the oxygen concentration in the dispersion to the equilibrium value, as shown in Fig. 5. One may note that the conditions of this experiment, which were required to evaluate the oxygen content in the liquid after aerosolization, do not correspond to the actual conditions of nebulization, when the aerosol generated in the device flows directly to the respiratory system. In such a situation, the total oxygen content in the aerosol (liquid and gas phases) determines the potential therapeutic gain from the inhalation of aerosolized ADON. By taking measurements in the aerosol phase, we were able to demonstrate that the oxygen content was increased by 13.6-18.9% (depending on the VMN) for nebulized ADON compared to nebulized distilled water (Table 4). This clearly confirms that it is possible to increase the oxygen supply during the inhalation of nebulized fresh ADON. Even for ADON stored for up to 2 weeks, the oxygen content in the aerosol increases by more than 8% for nebulization in the Aerogen VMN, although the increase is much lower for Intec and approaches zero for ADON after a 2-week storage time. The results also show that not all nebulizers are suitable for ADON delivery with the effect of increased oxygen delivery during aerosol inhalation. Jet and ultrasonic nebulizers are not recommended, as the forces and processes responsible for aerosol formation (the energy of ultrasonic waves, droplet impaction, and liquid recirculation inside the nebulizing head) may significantly influence the stability of the nanobubbles and result in rapid oxygen desorption during elution with the high flows of auxiliary air used in these types of nebulizers. A VMN is a better option, since the aerosol is formed during a single liquid passage through the orifices in the vibrating mesh. The short contact time between ADON and air helps to keep the NBs inside the droplets emerging from the nebulizer. It is also interesting to see that one VMN (Aerogen) allows obtaining a higher oxygen concentration in the aerosol phase than the other (here: Intec).
As shown in Fig. 3, each device produces droplets of a different size, and, intuitively, one may expect that the larger droplets obtained from Intec (formed at lower hydrodynamic stresses) should retain more oxygen than the finer droplets from Aerogen. However, one may also note that these nebulizers have quite different designs. The mesh material is not the same (metal in Aerogen, polymeric in Intec; Table 2), and the volume of liquid in the nebulizing vessel and the volume of air over the liquid layer are smaller in Aerogen than in Intec. These two factors are probably responsible for the different degrees of oxygen desorption during ADON nebulization, which became notable in particular for the samples after 1- or 2-week storage. It should be noted that even in the case when oxygen partially desorbs from the ADON droplets nebulized in a VMN, it enriches the gas phase of the inhaled aerosol, so the total oxygen supply to the lungs is increased, as schematically shown in Fig. 6. This should be a benefit of ADON nebulization in treating pulmonary dysfunctions that are associated with reduced oxygenation, or in the recovery from such pathological cases. Aqueous aerosol generated from ADON in mesh nebulizers has a higher oxygen content than water at equilibrium conditions. It may be proposed that when ADON is used as a carrier of pulmonary medicines, the treatment of lung diseases by inhalation will be enhanced by oxygen supplementation even without using oxygen as an additional gaseous carrier. This should allow better treatment in non-hospital conditions. The potential benefit of ADON as a new drug carrier is the decreased time spent in the hospital and hence reduced costs (both economic and psychological). Inhalation of nebulized ADON may also be suggested as a method of home-based pulmonary recovery and rehabilitation after hospital treatment of severe pulmonary dysfunctions, e.g., those caused by COVID-19. The above analysis is based purely on physicochemical considerations, which is a limitation of our study, so future applications of aqueous dispersions of oxygen nanobubbles need in vivo studies to confirm the proposed pharmacological effects. Conclusions We proposed a new potential application area for liquid dispersions of oxygen nanobubbles, which are rapidly gaining acceptance in multiple branches of science and industry. The obtained results indicate their potential usefulness also in the treatment of respiratory diseases by the inhalation of aerosols. The increased oxygen content in the aerosol droplets generated in the two mesh nebulizers suggests that the proposed concept may be helpful for oxygen supplementation during drug delivery by aerosol inhalation without using an additional oxygen source. This application can increase the overall effectiveness of lung disease treatment and pulmonary rehabilitation, simultaneously reducing social and economic costs. Besides the in vivo studies that are needed to confirm the expected oxygenation effect, the next research steps should focus on determining the interactions of ADONs with various inhalation drugs, considering also the effect on nanobubble size and stability in such mixtures. Data availability The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.
Chemicals released by male sea cucumber mediate aggregation and spawning behaviours The importance of chemical communication in reproduction has been demonstrated in many marine broadcast spawners. However, little is known about the use of chemical communication by echinoderms, the nature of the compounds involved and their mechanism(s) of action. Here, the hypothesis that the sea cucumber Holothuria arguinensis uses chemical communication for aggregation and spawning was tested. Water conditioned by males, but not females, attracted both males and females; gonad homogenates and coelomic fluid had no effect on attraction. Male spawning water, but not female spawning water, stimulated males and females to release their gametes; the spermatozoa alone did not induce spawning. H. arguinensis male spawning water also induced spawning in the phylogenetically related H. mammata. This indicates that males release pheromones together with their gametes that induce spawning in conspecifics and possibly sympatric species. Finally, the male pheromone seems to be a mixture with at least one labile compound (biological activity is lost after four hours at ambient temperature), possibly including phosphatidylcholines. The identification of pheromones in sea cucumbers offers a new ecological perspective and may have practical applications for their aquaculture. individuals successfully induced spawning directly and when added to the water, indicating that the coelomic fluid may contain a pheromone35. The commercial demand for sea cucumbers has led to over-exploitation and severe depletion or disappearance in many regions worldwide, and aquaculture programs are being developed to sustain and enhance wild populations36. The sea cucumber Holothuria arguinensis is a recent fisheries target and the first sea cucumber species to be reared in captivity in Europe37-39. A better understanding of the chemical factors influencing the reproductive biology of this broadcast summer-autumn spawner could give valuable insights to improve the management of species reared in captivity. Thermal shock remains the most commonly used method to stimulate spawning in their aquaculture40-42. However, this method gives inconsistent and variable results according to the protocol and species used43,44; therefore, the identification of spawning pheromones may provide a promising alternative. Knowledge of the pheromonal chemicals could also help to control invasive species, which represent a major threat to biodiversity and cause significant damage to the worldwide economy45. A typical case is found in the Mediterranean Sea, where many Indo-Pacific species, including sea cucumbers, have invaded the area through the Suez Canal46,47. Here, the hypothesis that the sea cucumber H. arguinensis uses chemical communication for aggregation and spawning was tested. We show that H. arguinensis males and females are attracted by male-conditioned water but not by female-conditioned water, gonad homogenates or coelomic fluid. Furthermore, male spawning water (but not female spawning water or sperm) induces spawning in males and females. The H. arguinensis spawning water also induces spawning in the related H. mammata. Finally, we attempted to identify the pheromone, and the results indicate it is a mixture containing at least one labile compound and possibly phosphatidylcholines. Material and Methods Ethics statement. Collection, gonadal biopsy and maintenance of specimens. H. arguinensis longer than 210 mm, i.e., adults according to a previous analysis of sexual maturity in this species39, were collected from southern Portugal: during late spring 2015 from the Ria Formosa (37°00′35.02″N; 7°59′46.10″W) for the aggregation behaviour assay, and during the summers of 2015 and 2016 from Sagres (37°00′44.78″N; 8°55′49.51″W) for the spawning experiments. Adult H. mammata (200-250 mm), a sister species to H. arguinensis48, were collected from the Ria Formosa. The sex and maturity stage of the sea cucumbers were determined according to Marquet et al.39 by observation under a light microscope (Leica DM2000) of a gonadal biopsy taken from a small incision on the dorsal side of the animal, previously anesthetized in 5% MgCl2 (ref. 9). All experiments were performed at least one week after the biopsy to allow recovery (all animals recovered without any obvious signs of infection or permanent damage). Females and males were kept in separate tanks (1.2 × 1.0 × 0.6 m) in natural sea water and fed four times a week with sediment collected from their natural environment.
Altogether, our results provide a novel perspective on sea cucumber aggregative and spawning behaviour, with practical applications in ecology and aquaculture. Y-maze tests of attraction. To test the capacity of water-borne stimuli to attract conspecifics, a glass Y-maze (30 cm high, 3 mm thick) was used, with a stem 55 cm long and 25 cm wide separating into two arms 40 cm long and 12.5 cm wide, at the ends of which the stimuli were added (Fig. 1a). The water inflow was 700 ml/min in each arm, and the water drained out of the maze through two holes (2 cm diameter) connected to a standpipe which maintained the water height (10 cm). Tests of the maze plume dynamics, using food colouring (Brilliant Blue FCF, E133) delivered to both arms, revealed by visual inspection small-scale turbulence within the arms but little mixing between the water of the two arms in the stem section. The data collected were based on the first entry into, and the time spent in, either of the two arms, where there was no mixing. The Y-maze was surmounted at 2 m height by an infrared video camera equipped with an infrared filter (ICD-49E, Ikegami Tsushinki, Japan) and at 1.2 m height by two automated infrared light sources (IR-56, Microlight, Russia) oriented diagonally with respect to the bottom of the Y-maze (Fig. 1b). The videos were stored as AVI files on a hard drive and displayed with the Everfocus Player application (EFPlayer v1.0.6.4). Experiments were carried out over four hours at night, when this species is more active. All animals used in this experiment were at stages III to IV, i.e., mature oocytes or spermatozoa filled the gonadal tubules almost completely39. Test animals were placed at the entry area A of the Y-maze (Fig. 1a) and given a choice between control seawater and seawater containing the stimulus, delivered to each arm (B and C) at a rate of 700 ml/min. The stimulus side was alternated between successive tests to eliminate arm preference. The stimuli tested were: (1) conspecific-conditioned water (CCW), (2) gonad homogenates (ovary and testis), and (3) coelomic fluid (CF). To produce conspecific-conditioned water, two individuals of known sex were placed in an aquarium (30 × 20 × 20 cm) from which water flowed by gravity to one of the Y-maze arms.
Separate pools of five testes and five ovaries (total of 120 g each) were homogenised from fresh gonads with a mortar and pestle and filtered (100 µm pore size) to remove large particles. Coelomic fluid (10 ml) was collected from separate pools of five males and five females using a sterile needle inserted in the body wall of the animals, the fluid being withdrawn by gravity. Gonad homogenates and coelomic fluid pools were frozen at −20 °C after collection. On the day of the experiment they were thawed, diluted in 1200 ml of seawater and injected into the maze with a peristaltic pump at a rate of 10 ml/min during the first two hours of the experiments, alongside the seawater inflow of 700 ml/min. At least 10 males and 10 females were tested for each combination (receiver vs donor: male vs. male, male vs. female, female vs. female, female vs. male) and stimulus. Each animal was taken from a pool of 20 males and 20 females kept in separate tanks (1.2 × 1.0 × 0.6 m) and used only once for each stimulus. Between trials, the Y-maze was rinsed and cleaned of debris, and clean seawater was allowed to flow through the entire maze for 15 min to remove any residual stimulus. Fourteen of the 160 individuals tested were excluded from the analysis because they remained immobile for four hours (the immobility time limit set). The behaviours registered in each test were (1) the first choice of arm and (2) the percentage of time spent in each arm (scored in each case when the full body entered one of the two arms). The effect of each stimulus on the first choice of arm (stimulus or control) was evaluated by a two-tailed binomial test to determine if the observed frequency was different from a random choice (50/50). The nonparametric Wilcoxon signed-ranks test was used to compare the percentage of time spent in each arm (stimulus or control).
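For readers wishing to reproduce this kind of analysis, a minimal sketch of the two tests follows (our own illustration in Python with SciPy; the counts and percentages are hypothetical, not data from this study):

from scipy.stats import binomtest, wilcoxon

# First-choice data: e.g. 12 of 15 animals entered the stimulus arm first.
first_choice = binomtest(k=12, n=15, p=0.5, alternative='two-sided')
print(f"two-tailed binomial test: p = {first_choice.pvalue:.3f}")

# Paired percentages of time spent in the stimulus vs. control arm.
time_stimulus = [62.0, 71.5, 55.0, 80.2, 66.3, 49.8, 73.1, 68.0, 58.4, 75.6]
time_control = [100 - t for t in time_stimulus]
stat, p = wilcoxon(time_stimulus, time_control)
print(f"Wilcoxon signed-ranks test: p = {p:.3f}")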
Spawning water tests. Two sets of experiments were designed to test the effect of spawning water on spawning. The first tested whether male or female spawning water could induce spawning in conspecifics, and the second tested for heterospecific responses in the closely related sympatric species H. mammata. All aquaria were filled with seawater coming from the same source and with the same physicochemical properties (22-25 °C, 35 ppt salinity). All experiments were performed at night, before or at full or new moon, as sea cucumber spawning has been seen to occur most frequently at these periods 22 , and only reproductively mature sea cucumbers, previously selected through a gonadal biopsy, were used. For each trial, the sea cucumbers used to obtain the spawning water (the donors) were placed in a 40 × 40 × 40 cm aquarium and induced to spawn by thermal shock (TS). For the TS, the donors were transferred for 10 minutes to an aquarium with 5 to 6 °C cooler water before being returned to their original aquarium. Spawning occurred within one hour for males and two hours for females. The test sea cucumbers were placed individually in a series of smaller experimental aquaria (26 × 16 × 16 cm) in the morning or the day before the experiment, in order to ensure spawning was not induced by the transfer from the larger to the smaller aquarium (in which case they were not used in the experiment). The tests consisted of the addition of 250 ml of fresh female or male spawning water, with or without spermatozoa, or of spermatozoa in seawater, always to the same corner of the small aquarium containing either a male or a female. To test for interspecific spawning activity, 250 ml of male spawning water from H. arguinensis was added to a small aquarium containing an isolated male or female H. mammata. Control aquaria received 250 ml of seawater added the same way as the test water. Spermatozoa were filtered from spawning water (0.7 µm pore size; Whatman, GF/F). The results were scored (spawning or not spawning) after one or two hours for males and females, respectively. The statistical significance of percent spawning for stimuli versus the seawater control was evaluated by a Fisher's exact test (two-tailed). Differences in spawning response time between males and females were compared using the Mann-Whitney test.
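Analogously, the spawning scores and latencies can be compared with SciPy (again a hypothetical illustration; the 2 × 2 table and latencies below are invented, not the study's data):

from scipy.stats import fisher_exact, mannwhitneyu

# Rows: stimulus vs. seawater control; columns: spawned vs. did not spawn.
table = [[12, 0],
         [1, 15]]
odds_ratio, p = fisher_exact(table, alternative='two-sided')
print(f"Fisher's exact test: p = {p:.4f}")

# Latency to spawn (minutes), males vs. females.
males = [42, 45, 50, 48, 55, 60, 44, 52]
females = [55, 68, 72, 80, 90, 100, 65, 75]
u, p = mannwhitneyu(males, females, alternative='two-sided')
print(f"Mann-Whitney test: p = {p:.4f}")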
Tests of male water fractions. In order to characterize the active substance(s) in spawning water, tests were designed to determine whether the biological activity (1) was extractable by HLB+ universal cartridges (reverse-phase sorbent, Waters Corporation, Millipore, Milford, Mass., USA), and (2) was stable. HLB+ cartridge extracts (the fraction retained in the cartridge and eluted in 5 ml of methanol) and filtrates (the flow-through fraction) of 1 l of fresh male spawning water, from which sperm and particles had been filtered as indicated above, were obtained following the generic protocol in the manufacturer's manual. For the spawning tests, the same experimental setup as above was used with the following stimuli: HLB cartridge extract (E), HLB cartridge filtrate (F), E and F together (E + F), fresh spawning water (FSW), spawning water aged 2 hours (2 h FSW) and spawning water aged 4 hours (4 h FSW). The solution containing extract was prepared by adding 1.25 ml of methanol extract to 250 ml of sea water (E) or 1.25 ml of methanol extract to the 250 ml filtrate (E + F). Each extract (E) and filtrate (F) was used in four tests. Two control aquaria were used, one with sea water only and one with methanol (1.25 ml diluted in 250 ml sea water). If sea cucumbers spawned within the expected period, the test was stopped. If they did not respond, the complementary stimulus (E or F) or FSW was added. Finally, if they did not respond to FSW, a TS was provided. FSW and TS were used as positive controls to determine if unsuccessful spawning was due to the sea cucumber not being ready to spawn. Those that did not respond to any stimuli were not considered in the analysis. The statistical significance of percent spawning for stimuli versus the seawater-only control was evaluated by a Fisher's exact test (two-tailed).

Preliminary chemical characterization of spawning water. HLB+ extracts of filtered seawater taken from the same aquarium before and after spawning (males: n = 5; females: n = 2) were used for subsequent analyses by mass spectrometry. The mass spectrometer was a Bruker Esquire HCT ultra ion trap, equipped with an electrospray ionization source (ESI) (Agilent), operating in the negative and positive polarities. For ESI-MSn studies (direct injection), the typical spray and ion optics conditions were the following: capillary voltage, 4.0 kV; nebulizer gas pressure, 30 psi; drying gas, 300 °C; drying gas flow, 6 l/min; capillary exit voltage, 208 V; skimmer voltage, 15 V. The solutions were infused into the ESI source using a syringe pump (model 781100, KDScientific, USA) at a rate of 4 μl/min. Infusion was performed using samples extracted with methanol, after washing the HLB+ cartridges with ultra-pure water. This washing step removes excess salts, which quench the formation of ions under ESI. Direct injection allowed us to obtain fragmentation spectra of order higher than 2 (MSn, n > 2). The samples were also analysed by liquid chromatography (LC, Agilent Technologies 1200 Series) coupled to the above-described mass spectrometer (LC-MS), under Auto-MS mode in both positive and negative polarities. Under LC-MS operation, the spray and ion optics conditions were the following: capillary voltage, 3.5 kV; drying gas (nitrogen), 330 °C at 7 l/min; nebulizer gas pressure, 35 psi; capillary exit voltage, 104 V; skimmer voltage, 32 V. A Hamilton PRP-1 reversed-phase LC column (15.0 cm length, 2.1 mm internal diameter, 5 µm average particle diameter), stabilized at 25 °C, was used for chromatographic separation. The eluent system was ultra-pure water (A) and acetonitrile (B), both with 0.1% formic acid, and ethyl acetate (C). The gradient started with 52% A, 38% B and 10% C. After 5 minutes, an increase of B and C up to 73% and 25%, respectively, took place over 8 minutes. The eluent was then allowed to return to the initial conditions (52% A, 38% B and 10% C) in 1 minute and then stabilize for an additional 5 min before the next run. The flow was 0.35 ml/min. Full-scan mass spectra were generated in the range of 100.00-1500.00 m/z, under both negative and positive ESI. The data were analysed using the Data Analysis software v 3.4 (Bruker Daltonics esquire 6.1). Under LC-MS, a separation by LC took place before the ESI-MS analysis. As salts eluted from the column in the first 1-2 minutes, the flow during this period was sent to waste. LC separation also allowed for the observation of less complex full-scan spectra and for the detection of compounds less prone to ionize and therefore not visible under direct injection. The Auto-MS mode allowed for fragmentation (MS2) of compounds detected after LC separation. This process was done for both polarities in a single run. Compound assignment was based on the m/z values, isotope distributions and fragmentation patterns. The presence of compounds possessing a phosphatidylcholine moiety was confirmed by injection of an authentic phospholipid sample, specifically 1,2-stearoyl phosphatidylcholine, present in the standard Sigma P5394.

Data availability. The datasets generated and/or analysed during the current study are available from the corresponding author on request.

Ethical approval. All applicable international, national and institutional guidelines for the care and use of animals were followed.

Results

Y-maze tests of attraction. For each stimulus, at least 85% of the sea cucumbers chose to move from the entry area to one of the arms. After spending some time in one of the arms, about half of the sea cucumbers went back to the entry area and re-entered the same or the other arm of the Y-maze, and eventually repeated this behaviour. Males chose the male-conditioned water arm first more often than the control arm (Fig. 2a, two-tailed binomial test, p < 0.01), whereas for females there was no apparent preference (p = 0.10). Males and females showed no preference in their first decision when confronted with female-conditioned water (Fig. 2b, males: p = 0.80; females: p = 0.81). However, both males and females stayed significantly longer in the arm with the male-conditioned water (Fig. 2c, Wilcoxon signed-ranks tests, males: p < 0.01, Z = 3.21; females: p < 0.05, Z = 2.50). In contrast, males and females spent the same percentage of time in water conditioned by females and in control water (Fig. 2d, males: p = 0.10, Z = 1.63; females: p = 0.90, Z = 0.13).
CF from males or females did not induce a first-choice preference for any arm in males (Fig. 3a, two-tailed binomial tests, males-CF: p = 1.00; Fig. 3b, females-CF: p = 0.75) or females (Fig. 3a).

Spawning behaviour. Thermally or chemically stimulated male and female H. arguinensis adopted a pre-spawning behaviour in which the anterior body region swayed from one side to the other, with tentacles extended outside the oral cavity (Supplementary material 1a), while the posterior body region rested against the bottom or the side of the glass aquarium (Supplementary material 1b,c). Both males and females released gametes from a gonopore (Supplementary material 1d), located on the dorsal side of the anterior part, opposite the oral cavity, and clearly visible only during spawning. Males started to release gametes between 40 minutes and one hour after stimulation (mean: 49.70 ± 8.17 min, N = 10) and continued to slowly release a continuous flow of sperm for at least one hour (Supplementary material 2b), with some individuals still spawning after three hours. The latency of response of females was longer than that of males (Mann-Whitney U test, U = 16.50, N = 10, p < 0.01), varying between 50 and 100 minutes (mean: 70.30 ± 18.41 min, N = 10). However, in contrast to the continuous slow release by males, females released their gametes quickly and briefly, in three to five pulsatile jets (Supplementary material 2a). Similarly, when males and females received the stimulus at the same time, males also spawned longer than females and generally continued to spawn at least until females stopped releasing gametes.

Spawning water tests. All males and 11 of the 12 females tested spawned when male spawning water was added to their aquaria (Fisher's exact tests, males or females, p < 0.0001), unlike female spawning water, which had no significant effect on the spawning of either males or females (Table 1). If sperm was removed from the spawning water, more than three quarters of the males and females still released their gametes (males or females, p < 0.001), but sperm itself had no significant effect on the spawning of either sex (p > 0.05 for both). None of the males and only 1 of 16 females spawned in the control. Interspecific spawning was observed between H. arguinensis and H. mammata. All males and 5/6 female H. mammata spawned when they received male spawning water from H. arguinensis in their aquarium, while none of the male and female H. mammata spawned in the control (Fisher's exact tests, males, p < 0.01; females, p < 0.05, Table 1).

Tests of male water fractions. All sea cucumbers spawned when they received FSW, and 6/7 spawned with 2 h FSW (Fisher's exact tests, p < 0.01 in both cases; Table 2; Supplementary material 3). However, no sea cucumber released gametes with 4 h FSW (p = 1.00). When fractions E or F were added individually to the experimental tank, fewer than a quarter of the sea cucumbers started to spawn (p > 0.05 in both cases). After the addition of the complementary stimulus, only 4/7 and 1/10 spawned with E and F, respectively (p > 0.05 in both cases). However, more than 50% of the sea cucumbers spawned directly after E and F were added together (close to statistical significance, p = 0.06). The sea cucumbers that did not spawn with extracts or aged spawning water were induced to spawn by FSW or TS. If they failed to respond to any stimuli they were not considered in the analysis (14 out of 76). Also, 20 out of 222 (9%) sea cucumbers spawned spontaneously before any stimulus was added and were not used in the experiments.

Chemical characterization of spawning water.
Full-scan LC-MS profiles of water extracted before and after spawning of males and females were clearly different under both ionization polarities (Supplementary material 4). Since only male spawning water was active, only these LC-MS profiles were analysed further. Major differences between samples obtained before and after spawning could be seen between 8 and 11 min. Among the compounds detected by MS, the most intense was found at m/z 808.7 (positive polarity) in the water after spawning and was absent before spawning (Fig. 5a). A much smaller peak of this compound was also seen in female spawning water (Fig. 5b). Under positive polarity, Auto-MS gave a fragmentation spectrum showing a major signal at m/z 184.1 (Fig. 5c). This result was confirmed by direct injection of the sample into the mass spectrometer (ESI-MS). The m/z value and the daughter ion at m/z 184.1 under positive polarity are common in phospholipids possessing a phosphatidylcholine moiety 49 . To evaluate the presence of a phosphatidylcholine moiety, the phosphatidylcholine standard 1,2-stearoyl phosphatidylcholine was selected and studied by ESI-MSn. This compound is readily seen at m/z 790.5 under positive polarity, as it contains a net positive charge. Its fragmentation led to a major signal at m/z 184.1 (Fig. 5d), which is the same daughter ion observed for the fragmentation of m/z 808.7. This result suggests that the unknown compound might possess a phosphatidylcholine moiety. Another signal detected by LC-MS that could be associated with male spawning was seen at m/z 287 under positive polarity (Fig. 6a). This signal was present in male water prior to spawning but increased greatly after spawning. A much smaller peak was seen in female water, which did not increase after spawning (Fig. 6b). The fragmentation spectra are shown in Fig. 6c-e. The signal intensities of the two compounds increased with time, reaching maximal intensity 30 min and 90 min after the beginning of the spawning process for m/z 808.7 and m/z 287, respectively. Their signal intensities then decreased progressively, even though the sea cucumbers continued to release sperm. This is consistent with the reduction of spawning activity seen in the bioassay (Supplementary material 5).
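As an aside (our own back-of-the-envelope check, not part of the original analysis), the diagnostic daughter ion at m/z 184.1 is consistent with the monoisotopic mass of the protonated phosphocholine head-group cation, C5H15NO4P+:

# Monoisotopic mass of the phosphocholine head-group cation C5H15NO4P+,
# the diagnostic fragment of phosphatidylcholines; element masses taken
# from standard isotope tables.
monoisotopic = {'C': 12.0, 'H': 1.007825, 'N': 14.003074,
                'O': 15.994915, 'P': 30.973762}
composition = {'C': 5, 'H': 15, 'N': 1, 'O': 4, 'P': 1}
electron = 0.000549  # subtract one electron mass for the +1 cation
mz = sum(n * monoisotopic[a] for a, n in composition.items()) - electron
print(f"expected m/z = {mz:.3f}")  # ~184.073, matching the observed m/z 184.1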
Discussion

The present study demonstrates that chemical cues produced by male sea cucumbers attract conspecifics and trigger spawning in both sexes, indicating an important role of the chemosensory system in the coordination of aggregation and spawning behaviours. Pre-spawning males and females spent more time in the arm of the Y-maze with male-conditioned water. This indicates that males release (a) chemical(s) into the water that are attractive to both sexes. The coelomic fluid and gonad homogenates (ovary and testis) did not attract either sex and are therefore unlikely sources of aggregation odorants. This contrasts with the avoidance reaction of sea urchins when confronted with conspecific coelomic fluid or gonad extract 50,51 . However, in the present study the gonad extracts and coelomic fluid were frozen before testing, and active compounds in these homogenates could have degraded; it is pertinent to add that these experiments with gonad extracts and coelomic fluid were carried out before the spawning experiments, wherein the labile nature of the spawning pheromone became apparent. Another possible source of odorants is the mucus of mature individuals which, in Cucumaria frondosa, has been shown to accelerate the gonadal development of less mature individuals 52 . However, we have no indication that more mucus is released during spawning than at any other time, or that it is a source of pheromones in the species we tested. That only the males produce/release the aggregation pheromone could be a strategy to draw sea cucumbers to the same place to spawn, while limiting sperm dispersion in male-male groups and maximizing fertilization success in male-female groups 6,15 . Specific male aggregative behaviours have also been reported in brittle stars 53 , as has sex recognition by mechano-reception in male starfishes 54 . Recently, the sedentary starfish A. planci was seen to be attracted to water-borne chemical plumes released from aggregating individuals 55 , which supports the presence of aggregation pheromones in echinoderms. As described briefly above, H. arguinensis performs a 'nuptial' sequence before spawning, which culminates in gamete release and a sperm mass that slowly disperses. Whether, in the wild, this happens in pairs or in a promiscuous mating mode is not known. Further investigations are needed to better understand the triggers and benefits of sea cucumber breeding aggregations and to determine if male attraction is also present outside the pre-spawning period. Male spawning water, with or without spermatozoa, induced spawning in males and females, whereas female spawning water had no effect. This suggests that males release (a) chemical(s) during spawning which stimulate(s) both sexes to release gametes. To our knowledge, this is the first time this has been shown in sea cucumbers, and it is consistent with what has been observed in sea urchins 31 and starfish 29 , although in those studies spermatozoa were not removed from the water. Consistent with previous studies of marine broadcast spawners, males were quicker to release their gametes than females, a feature that has been suggested to be favoured by sexual selection when males are competing to fertilize the ova, so as to enable fertilization of more eggs over larger areas 25,56,57 . In the present study, males also spawned longer than females; i.e., they continued to spawn at least until females stopped, and beyond. This behaviour has been reported in other Holothuroidea 23,28 and in other marine broadcast spawners such as Ophiuroidea 58 , Echinoidea 27 , Polychaeta 11,59 and Appendicularia 60 . Releasing sperm more slowly than eggs was shown to be a good strategy to avoid sperm attaching uselessly to fertilized eggs, since the permanent block preventing subsequent sperm attachment to the eggs takes longer to form than the first block preventing polyspermy 61,62 . Interestingly, as with H. arguinensis, and with similar efficacy, male and female H. mammata released their gametes in response to H. arguinensis male spawning water. This suggests that the two species use the same or similar chemical signals. Heterospecific spawning-inducing activity has been documented in other holothurians and in other invertebrates 10,23,63-65 and has been suggested to result from the coevolution of pheromones in response to reduced predation risk through predator swamping, since spawn clumps from one species are more likely to be caught when they are isolated than when they are grouped with those from another species 66 .
This observation raises the possibility of inter-specific hybridization between the two species, a phenomenon which has been documented among other sea cucumber species 67,68 , although species-specific circadian patterns and spawning behaviours minimize this possibility 69 . Spawning was rarely induced when male spawning water was 4 h old, indicating degradation or evaporation of the spawning substance(s). Similarly, separate addition of the extract or filtrate had no activity, which could be at least partly restored if the complementary fraction was added subsequently. This suggests that the spawning pheromone consists of more than one compound. However, solid-phase extraction of the water took between one and three hours, and some loss of activity of the extracts could therefore be explained by degradation of the active compound(s) during this time. In this study, two compounds were found in male spawning water under positive polarity, at m/z 808.7 and m/z 287, while they were absent or present at much lower concentrations in female water and before male spawning. Both showed a decrease within two hours, consistent with the loss of biological activity of male spawning water. They are thus good candidates to be involved in sea cucumber spawning. The fragmentation of the compound at m/z 808.7 suggests the presence of a phosphatidylcholine moiety. Phosphatidylcholines are a major component of the plasma membrane and are known to be involved in sperm motility 70 , the acrosomal reaction 71 and the maintenance of sperm membrane integrity 72 . Moreover, phosphatidylcholines have been characterized as key substances governing group recognition in catfish 73 and as phagostimulants in the nuptial secretion of a species of cockroach 74 . However, to our knowledge, they have never been associated with spawning activity in any animal taxon. Saponins are among the most important and abundant secondary metabolites of sea cucumbers and have been reported to be involved in the chemical communication of marine organisms 75,76 . Although we cannot exclude the involvement of saponins in the spawning process, none of the detected compounds show mass spectral properties consistent with those reported for sea cucumber saponins. The compound at m/z 287 is too small to be a saponin and was not detected under negative polarity, while the compound at m/z 808.7 does not release water upon fragmentation (Fig. 5c), a typical neutral loss of compounds possessing sugar moieties, including saponins 76 . Pheromones are highly diverse across animal taxa and are composed either of a mixture of different chemical compounds or of a single compound 77 . Some fish species use steroids and/or prostaglandins as sex pheromones 78 , while others use amino acids 79 or bile acids 80 . In marine invertebrates, for example, peptide pheromones have been identified in the sea slug Aplysia and in nereid worms 81-83 , and nucleotide pheromones in crustaceans 84 . The chemical identification of pheromones is challenging due to the relatively small amount of pheromone released, the large variety of substances usually present in natural waters, and their possible lability 85 . Here, the compounds correlated with spawning need to be isolated, purified and submitted to further analysis, namely nuclear magnetic resonance (NMR) and high-resolution mass spectrometry, to assign a final structure. The identified compounds would then need to be tested on the animal to confirm their biological activity.
It would also be of great interest to know the source of the pheromones and their regulation. Reproductive success in sea cucumbers is likely dependent upon a combination of chemical cues and one or more exogenous factors 31,86 . This study provides evidence that sea cucumbers use pheromones for aggregation and spawning, and their partial characterization could have important ecological and practical applications in the management of sea cucumbers in aquaculture and as an attractant in the control of invasive species, much as pheromones are used in the control of pest insects 87 and of the invasive sea lamprey 88 .
The ATLAS Muon Trigger vertical slice at LHC startup

The ATLAS trigger system has a three-level structure, implemented to retain interesting physics events, described here for the muon case ("Muon Vertical Slice"). The first level, implemented in custom hardware, uses measurements from the trigger chambers of the Muon Spectrometer to select muons with high transverse momentum and defines a Region of Interest (RoI) in the detector. RoIs are then processed by a second trigger level, in which fast algorithms run on an online software architecture. Full-granularity information from the precision chambers is accessed inside the RoIs. A third trigger level (Event Filter), using offline-like algorithms and accessing the full event, provides the best possible muon reconstruction/identification and finally confirms or discards the trigger hypothesis formed at the earlier levels. The implementation and performance of the full muon trigger slice, together with the first events triggered with LHC beams on, are presented.

Introduction

The ATLAS trigger system is designed to keep high efficiency for interesting events while rejecting standard-model low-pT physics events, with a suppression factor of the order of 10⁷, producing a final output rate for offline analysis of 200 Hz. High-pT muons are important for many known processes that can be used for monitoring and calibration (Z → µµ) and for several new phenomena predicted at the LHC energy (Higgs, SUSY); therefore, the muon trigger system is of primary importance. The Muon Spectrometer (MS) is the detector dedicated to the identification of muons. It consists of Resistive Plate Chambers (RPC) and Thin Gap Chambers (TGC) for triggering, and Monitored Drift Tubes (MDT) and Cathode Strip Chambers (CSC) for precision measurements. The Muon Vertical Slice consists of three main trigger steps, one in hardware, level 1 (LVL1), and two in software, level 2 (LVL2) and the event filter (EF). The last two compose the High Level Trigger (HLT). During the run with the first beam, only the LVL1 was inserted in the data taking. The HLT was running in a transparent mode, flagging the events without any rejection. This allowed the trigger capability to be studied without affecting the data taking, and the results to be compared with the full offline reconstruction.

Level 1 trigger

LVL1 selects active detector regions (Regions of Interest, RoI) in each event, using the RPCs for |η| < 1 and the TGCs for 1 < |η| < 2.4. Coincidence windows are defined on the allowed geometrical roads, with their center corresponding to the infinite-momentum track. Six programmable thresholds on the muon pT are applied: a low-pT scheme (6, 8, 10 GeV/c) and a high-pT scheme (11, 20, 40 GeV/c). The overall barrel LVL1 acceptance is 83% for low-pT and 79% for high-pT particles, while it is close to 1 for the endcap. The rates depend on the machine luminosity: at the low luminosity of 10³³ cm⁻² s⁻¹ with the low-pT threshold, the expected rate is about 11 kHz, both for the barrel and the endcap, while at the high luminosity of 10³⁴ cm⁻² s⁻¹ with the high-pT threshold, the expected rate is about 2 kHz for the barrel and close to 8 kHz for the endcap. A special setup was configured for taking data during the first beam, similar to the one used during cosmic runs. The trigger window was fully open, allowing for tracks not pointing to the interaction region. An example of an event triggered during the first beam run is presented in figure 1.
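For illustration only, the window-based threshold logic can be sketched as follows (a toy model in Python; the MU<pT> names follow the usual convention, but the window half-widths are invented and bear no relation to the real ATLAS coincidence windows):

# Toy LVL1-style threshold assignment: the deviation of a hit from the
# infinite-momentum road is tested against per-threshold coincidence
# windows (a narrower window corresponds to a higher pT threshold).
WINDOW_HALF_WIDTH_CM = {"MU6": 30.0, "MU8": 22.0, "MU10": 16.0,
                        "MU11": 14.0, "MU20": 8.0, "MU40": 4.0}

def lvl1_thresholds(deviation_cm):
    """Return all pT thresholds whose window contains the hit deviation."""
    return [name for name, w in WINDOW_HALF_WIDTH_CM.items()
            if abs(deviation_cm) <= w]

print(lvl1_thresholds(10.0))  # passes MU6-MU11, fails MU20 and MU40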
Level 2 trigger

Many different software algorithms compose the LVL2 trigger. The core algorithm is muFast, which confirms or rejects the LVL1 result and refines the muon pT evaluation using the MDT precision measurements. The following steps are executed within a 40 ms processing time: "global pattern recognition", involving the trigger chambers and the positions of the MDT tubes (without using drift time); "local segment reconstruction", involving the drift-time measurements for each station; and a fast "pT estimate" via a lookup table. To refine the muFast pT, the algorithm muComb combines information from the Inner Detector (ID) data, allowing the threshold to be sharpened at low pT. The LVL2 efficiency with respect to LVL1 in the barrel is above 80% for muons with pT at the selection edge and well above 90% for muons with higher pT. Using information from the calorimeter, muTile looks for an energy deposit compatible with the energy loss of muons. Figure 2 shows the reconstructed energy and the η vs φ position of cosmic events.
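A lookup-table pT estimate of the muFast kind can be reduced, schematically, to an interpolation in a sagitta-to-pT table (our own simplified sketch; the real lookup tables are binned in detector coordinates and calibrated on simulation, and the numbers below are invented):

import bisect

# Hypothetical table for one (eta, phi) region: track sagitta (mm) -> pT
# (GeV/c); pT falls roughly as the inverse of the sagitta.
SAGITTA_MM = [0.5, 1.0, 2.0, 4.0, 8.0]   # ascending
PT_GEV = [40.0, 20.0, 10.0, 5.0, 2.5]

def pt_estimate(sagitta_mm):
    """Linear interpolation in the sagitta -> pT table."""
    i = bisect.bisect_left(SAGITTA_MM, sagitta_mm)
    if i == 0:
        return PT_GEV[0]
    if i == len(SAGITTA_MM):
        return PT_GEV[-1]
    x0, x1 = SAGITTA_MM[i - 1], SAGITTA_MM[i]
    y0, y1 = PT_GEV[i - 1], PT_GEV[i]
    return y0 + (y1 - y0) * (sagitta_mm - x0) / (x1 - x0)

print(f"pT estimate: {pt_estimate(1.5):.1f} GeV/c")  # 15.0 for this toy table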
A trigger chain example Trigger software works with objects called Trigger Elements (TE).Feature Extraction Algorithms (FEX) are activated by input TE produced by previous trigger levels.FEX access the detector data and compute physical quantities, Features, that are then associated to the output TE.Selection is done in Hypothesis Algorithms, that validate or reject TE according to trigger menu requirements. In-flight decays of pions and kaons are the main source of LVL1 trigger rate at low p T .One goal of the muon HLT is to reject such secondary muons while having high selection efficiency on prompt muons up to p T of 6 GeV/c. A track from such decays appears with a kink, and the χ 2 of the fit is worse than prompt muon tracks.All possible kinematic parameters and statistical techniques must be used in order to reject such tracks.
Biological activity of oxadiazole and thiadiazole derivatives

Abstract

The 5-membered oxadiazole and thiadiazole scaffolds are among the most privileged and well-known heterocycles, being a common and essential feature of a variety of natural products and medicinal agents. These scaffolds take up a central position as the core structural components of numerous drugs belonging to different categories, including antimicrobial, anti-tubercular, anti-inflammatory, analgesic, antiepileptic, antiviral, and anticancer agents. In this review, we focus on the isomers 1,2,4-oxadiazole and 1,3,4-thiadiazole because of their important pharmacological properties, which stem in part from their chemical and thermal stability, unlike the other isomers, and from their use as bio-isosteric replacements in drug design. We review the structural modifications of different oxadiazole and thiadiazole derivatives and, more specifically, the anti-tubercular and anticancer pharmacological activities reported over the last 5 years, as we have undertaken this as a core area of research. This review article aims to provide a thorough study and analysis of the recent progress made on the biologically important isomers 1,2,4-oxadiazole and 1,3,4-thiadiazole, and will be a useful starting point for future research.

Key points
• A recent survey of the chemistry and biological activity of five-membered heterocyclic compounds.
• The synthesis and pharmacological evolution of 1,2,4-oxadiazole and 1,3,4-thiadiazole are discussed in detail.
• The value and significance of heterocyclic compounds in the field of drug design are highlighted.

Supplementary Information The online version contains supplementary material available at 10.1007/s00253-022-11969-0.

Introduction

It ought to be a top priority of researchers to identify and develop new and better pharmaceuticals, pesticides, and insecticides, and to do so they follow natural models. The pharmaceutical products with biological activity that resemble natural products are heterocycles (Sompalle and Roopan 2017). Heterocyclic compounds are cyclic compounds in which another atom replaces one or more carbons in the ring (Ram et al. 2019). Thus, compounds comprising heteroatoms like nitrogen, oxygen, sulfur, phosphorus, silicon, boron, and the like make the heterocyclic compounds one of the most prevalent classes of organic compounds (Roopan and Palaniraja 2016). They are pervasive in biology and play an integral role in medicinal chemistry, particularly in drug design and synthesis. In nature, heterocycles are broadly distributed; many of them occur in plants as alkaloids, and a few of them were used in ancient times as medicinal agents (Paulo et al. 2018; Cascioferro et al. 2019).

Importance of heterocyclic compounds in medicinal chemistry

Heterocyclic compounds possess a broad spectrum of pharmacological activities, and for this reason the class continues to yield new therapeutic agents. The biological activity exhibited by heterocycles is owed to their potential to bind to various enzymes, either at the active sites or within enzyme pocket structures, through a broad range of intermolecular interactions such as van der Waals and hydrophobic forces, hydrogen bonding, and metallic coordination bonds, making them an important scaffold in medicinal chemistry (Pearce 2017).
A massive variety of naturally occurring substances, such as hemoglobin, chlorophyll, pyrimidine and purine bases, and enzyme co-factors, belong to the family of heterocycles, which are integral to living cells (Arunachalapandi and Roopan 2021; Sompalle and Roopan 2016). They are critical at almost every step of the many biochemical processes that are important to life. Heterocyclic compounds, especially those containing nitrogen, sulfur, and oxygen heteroatoms, are the most important class of compounds in the pharmaceutical and agrochemical industries, in which heterocycles comprise around 60% of drug substances. Five-membered nitrogen- and oxygen- or sulfur-containing heterocycles such as oxazolidine, isoxazolidine, oxazole, isoxazole, thiazolidine, isothiazolidine, thiazole, isothiazole, oxadiazole, and thiadiazole are important structural motifs and are present in a vast number of biologically active compounds. These heterocycles make up the core structure of numerous drugs and are therefore of great interest to the pharmaceutical industry (Manjupriya and Roopan 2021; Li et al. 2013). The following sections review the 5-membered oxadiazole and thiadiazole heterocycles, as their derivatives continue to gain interest from researchers, being used as bio-isosteric replacements in drug design and widely studied for their utility in agrochemical and material science (Sauer et al. 2019; Kumari et al. 2020). The main objective of this review article was not to cover a broad spectrum of topics. Instead, this article focuses more specifically on recent advances in the anti-tubercular and anticancer pharmacological activities of the isomers 1,2,4-oxadiazole and 1,3,4-thiadiazole. Because of their high chemical and thermal stability, these two isomers have the potential to be an essential part of the pharmacological sector. During our literature review, we attempted to cover a large number of related research articles in order to make it as comprehensive as possible, and we included the most recent articles to keep it up to date. This review seeks to conduct an in-depth study and analysis and to provide a comprehensive report in the form of detailed technical discussion, supported by figures. Overall, this will serve as a starting point for future research in this sector.

Therapeutic targeting of the hallmarks of tuberculosis (TB)

Tuberculosis, a communicable disease caused by the bacillus Mycobacterium tuberculosis, is one of the top 10 causes of death. At the first-ever UN High-Level Meeting on TB in September 2018, at the UN headquarters in New York, heads of state came together and made strong commitments to end TB. The title of the meeting was "United to End TB: An Urgent Global Response to a Global Epidemic" (WHO, Global Tuberculosis Report 2018). In the field of anti-TB drug discovery, although many drugs are currently in use, the development of multidrug-resistant M. tuberculosis (MDR-TB) and of extensively drug-resistant TB (XDR-TB) presents two major challenges. The current requirement in the field is the development of an effective anti-TB drug that has a short treatment duration, is simpler and less toxic, is active against drug-resistant strains, and has minimal drug-drug interactions (Ginsberg 2010).

Therapeutic targeting of the hallmarks of cancer

Researchers have produced a vast quantity of data on the proteins and mutated genes involved in the development of cancer cells.
Recently, many environmental factors associated with mutations at the genetic level have been reported. Distinct molecular methods have been used to determine the efficacy of gene expression and defective proteins, as well as to detect novel cancer biomarkers; these can be beneficial in treating cancer and reducing the associated complications. Although many aspects of epigenetics remain unknown, various studies to ascertain the mechanisms and their collective relationship to the development and spread of a number of diseases, mainly cancer, are continuing. Lack of specificity or a low therapeutic index, drug resistance, and poor ADME profiles are a few of the challenges in anticancer drug discovery. The hallmarks of cancer are represented in Figure S2 (Hanahan and Weinberg 2011).

Introduction to 1,2,4-oxadiazole and its biological importance

As mentioned earlier, growing interest in oxadiazole (1) is noted among medicinal chemists due to its biological activity and its use as a privileged scaffold in drug design with various therapeutic applications (Kumar et al. 2011). While the combination of hydrophilic and electron-donor properties makes the oxadiazole ring biologically active, its thermal and chemical resistance provides metabolic stability. In drug discovery and development, 1,2,4-oxadiazoles are widely used as bio-isosteric replacements for ester, amide, and acid compounds to address the metabolism-related liabilities associated with these functional groups (Saunders et al. 1988; Patani and Voie 1996). In a few clinical trials, 1,2,4-oxadiazole-based medicines have been found to be effective. Marketed drugs containing the 1,2,4-oxadiazole scaffold are reported in Fig. 1, which underlines their significance in the discipline of medicinal chemistry (Kleeman et al. 2001). Oxadiazoles have also been used in the area of material science due to their inherent characteristics of excellent thermal and chemical stability and a high photoluminescence quantum yield. The 1,2,4-oxadiazoles show useful properties in blue phosphorescent devices, solar cells, liquid crystals, fluorogenic chemosensory polymers, organic light-emitting diodes, and heat-resistant polymers (Parra et al. 2006; Guo et al. 2014).

Chemistry

As a five-membered ring heterocycle, oxadiazole contains one oxygen and two nitrogen atoms. Based on the position of these N and O heteroatoms in the ring, it has four isomeric structures, as represented in Fig. 2. It is the most electron-poor azole because it contains two pyridine-type nitrogen atoms and a furan-type oxygen atom. Mono-substituted 1,2,4-oxadiazoles are much less stable than the disubstituted ones; the latter can even tolerate strong acids, such as concentrated sulfuric acid, as well as strong bases. While the 3 and 5 positions are amenable to forming 3,5-disubstituted 1,2,4-oxadiazoles via nucleophilic attack, they are almost inert towards electrophilic substitution; thus, the disubstituted compounds are the most commonly synthesized derivatives. The electron-withdrawing effect of the 1,2,4-oxadiazole ring is exerted more effectively via its C-5 position than via its C-3 position (see Fig. 2E).

Synthesis

With time, synthetic chemists have developed many preparatory methods for 1,2,4-oxadiazole. Tiemann and Kruger reported the very first synthetic method in 1884 (Tiemann et al. 1884).

Traditional route

Worldwide, the two most common and widely used synthetic routes start with readily available nitriles, as shown in Fig. 3:
(a) the 1,3-dipolar cycloaddition of nitriles to nitrile N-oxides, and (b) amidoxime heterocyclization. The amidoxime intermediate is formed when hydroxylamine reacts with the nitrile. Acylation of the amidoxime intermediate yields an O-acylamidoxime, isolable in some instances, and cyclization as the final step gives the oxadiazole (Paulo et al. 2018). It must be mentioned that when one pursues the 1,3-dipolar cycloaddition route for the synthesis, the corresponding substituent (R1) of the nitrile occupies the C-5 position of the final oxadiazole, whilst the same substituent occupies the C-3 position when one follows the amidoxime route (Paulo et al. 2018).

Fig. 3 Traditional synthetic routes towards 1,2,4-oxadiazoles: (a) 1,3-dipolar cycloaddition of nitriles to nitrile N-oxides; (b) cyclization of amidoxime derivatives
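The regiochemical difference between the two routes can be made concrete with a short script (our own illustration; it assumes the RDKit library is available, and the SMILES templates and R-groups are arbitrary examples, not compounds from the papers reviewed):

from rdkit import Chem

def oxadiazole_smiles(r_from_nitrile, r_other, route):
    # Cycloaddition places the nitrile substituent at C-5; the amidoxime
    # route places it at C-3. Ring written as C3-N2-O1-C5-N4 (aromatic).
    if route == "cycloaddition":
        c3, c5 = r_other, r_from_nitrile
    else:  # "amidoxime"
        c3, c5 = r_from_nitrile, r_other
    return Chem.MolToSmiles(Chem.MolFromSmiles(f"{c3}c1noc({c5})n1"))

# Methyl from the nitrile, phenyl from the partner reagent:
print(oxadiazole_smiles("C", "c2ccccc2", "cycloaddition"))  # methyl at C-5
print(oxadiazole_smiles("C", "c2ccccc2", "amidoxime"))      # methyl at C-3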
Most synthetic strategies belong to one of the two traditional routes. As variants of the predominant route, different acylating agents were explored to synthesize the 1,2,4-oxadiazoles, and researchers additionally made use of catalysts, coupling agents, or even a preliminary O-acylation of the amidoxime to tolerate the reaction conditions, assist cyclization, and enhance the yield. They also tried some greener methods, such as microwave or ultrasound irradiation. A few of the synthetic approaches developed by researchers are summarized in Fig. 4.

Fig. 4 Some synthetic approaches to oxadiazole molecules: (1) amidoxime reacted with a suitably activated acid derivative; (2) amidoxime reacted with an amide at high temperature; (3) pyrolysis of amidoximes and their esters; (4) ring transformations; (5) amidoxime reacted with acid anhydrides; (6) oxidation of an imino group with sodium hypochlorite; (7) oxidation of an oxadiazoline; (8) addition of benzonitrile oxides to aromatic nitriles

Biological activity

Anti-tubercular activity

Villemagne et al. (2020) have worked on fragment-based drug design and synthesized new oxadiazole compounds as potent EthR inhibitors (Villemagne et al. 2020). As a result of their approach, they successfully synthesized compounds with decreased metabolism and enhanced solubility. The compound 1a (BDM 71,339) showed an EC50 of 0.072 μM, a solubility of 9.9 μg mL⁻¹, and an excellent pharmacokinetic profile, with an elimination half-life (t1/2) of 19 min and a CL of 69 μL min⁻¹ mg⁻¹.

Parikh et al. (2020) have presented the development of substituted 1,2,4-oxadiazoles as potent anti-TB agents (Parikh et al. 2020). The research group tested these compounds for their in vitro anti-tuberculosis activity towards Mycobacterium tuberculosis (Mtb.) H37Rv, their anti-microbial activity, and their antimalarial activity. In the in vitro studies, performed at two different concentrations, the % inhibition observed for compound 2a against M. tuberculosis H37Rv is 92% at both concentrations, while compound 2b shows 96% inhibition at a concentration of 250 μg mL⁻¹ and 91% inhibition at 100 μg mL⁻¹.

Shruthi et al. (2019) worked on a quinoline scaffold in which a 1,2,4-oxadiazole moiety is linked to the quinoline with piperazine as a linker (Shruthi et al. 2019). They generated the structure-activity relationship (SAR) via combinatorial synthesis. Among the synthesized compounds, compound 3a shows a minimum inhibitory concentration (MIC) of 0.5 μg mL⁻¹ towards the wild-type Mycobacterium tuberculosis strain (WT H37Rv). Compound 3a was also found to be active against monoresistant strains of Mtb.; the observed activities against these strains are given in Table S1. Excellent metabolic stability and bioavailability were noted for compound 3a, with a long half-life (T1/2) of 1.63 h and a comparatively low maximum concentration in blood (Cmax) of 2503.25 ng mL⁻¹. In comparison, compound 3b showed a T1/2 of 1 h with a Cmax of 7442.73 ng mL⁻¹, owing to its lower stability in human liver microsomes. Its MIC against the Mtb. strain H37Rv is 0.25 μg mL⁻¹.

Upare et al. (2019) have synthesized styryl oxadiazoles in good yields from cinnamic acid, wherein the terminal carboxylic group of cinnamic acid was replaced with the bio-isostere 1,2,4-oxadiazole (Upare et al. 2019). The in vitro anti-tubercular activity study against the Mycobacterium tuberculosis (Mtb) H37Ra strain showed varied anti-tubercular profiles. Compound 4a was reported to possess the highest anti-tubercular activity [concentration of drug required for 50% inhibition (IC50) = 0.045 µg/mL] among the synthesized compounds. Computational molecular docking studies on the InhA enzyme corroborated the experimentally observed activities.

Desai et al. (2016) have designed molecules based on the nicotinamide and pyrazinamide drugs. The synthesized 1,2,4-oxadiazol-5-ones and -5-thiones are substituted with pyridines and pyrazines at the C-3 position. Pivaloyloxymethyl derivatives (5a) of the 1,2,4-oxadiazole-5-thiones were made with improved lipophilicity and evaluated for anti-TB activity. Because these compounds have MICs greater than 50 μg mL⁻¹ against the Mtb. H37Rv strain, they may not be considered potent molecules (Desai et al. 2016).

Gold et al. (2016) synthesized a novel analogue of the cephalosporins which is selectively active on non-replicating Mtb (Gold et al. 2016). The compound 6a has an MIC of 0.88 μg/mL for non-replicating Mtb, an MIC of > 100 μg/mL for replicating Mtb, and an LD50 of > 100 μg/mL in the human liver cancer cell line HepG2.

Jain et al. (2016) have reported a novel hybrid of quinoline and oxadiazole as an anti-TB agent (Jain et al. 2016). They replaced the isoxazole ring with 1,2,4-oxadiazole as a bio-isosteric replacement. While several molecules tested against the Mtb. H37Rv strain are active, compound 7a possesses the highest activity, with an MIC of 0.4 µM and a selectivity index of > 610.

Shruthi et al. (2016) have proposed a design strategy and synthesized benzimidazole-oxadiazole hybrid compounds as anti-tubercular agents (Shruthi et al. 2016). Among the 20 novel compounds tested, compound 8a was found to be 546-fold selective towards the Mycobacterium tuberculosis strain H37Rv, with an MIC of 1.6 μg mL⁻¹; thus, it is twofold superior to the standard drug pyrazinamide and fourfold superior to isoniazid. Furthermore, the active compounds were tested for toxicity and found to be safe even at high concentrations.

Westhuyzen et al. (2015) optimized a hit compound to produce a class of compounds known as pyrrolo[3,4-c]pyridine-1,3(2H)-diones, which target the mycobacterial respiratory cytochrome bc1 complex, a validated drug target in Mtb (Westhuyzen et al. 2015). The optimized compound 9a was synthesized from its hit compound, wherein the ester functionality was replaced, for metabolic stability, with a methyl oxadiazole as a bio-isosteric replacement. This modification leads to an improved MIC of 0.065 µM and improved metabolic stability, that is, human S9 stability (% remaining after 40 min) of 97% and a good CLint of < 11.6 mL/min/mg.
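Because the MICs above are quoted sometimes in μg mL⁻¹ and sometimes in μM, a one-line conversion is often needed when comparing compounds; a small sketch follows (the molecular weight used is an arbitrary example, not that of any compound discussed):

# MIC[µM] = 1000 * MIC[µg/mL] / MW[g/mol]
def mic_ug_ml_to_um(mic_ug_ml, mw_g_mol):
    return 1000.0 * mic_ug_ml / mw_g_mol

print(f"{mic_ug_ml_to_um(0.5, 350.0):.2f} µM")  # 0.5 µg/mL at MW 350 g/mol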
Flipo et al. (2012) worked on EthR inhibitors. Ethionamide is the main second-line drug used against multidrug-resistant tuberculosis (MDR-TB) (Flipo et al. 2012), but it has been reported to have side effects. To overcome the problems associated with ethionamide, this research group developed their most active compound, 10a, which showed an IC50 of 400 nM and an EC50 of 60 nM against M. tuberculosis-infected macrophages, i.e. activity at nanomolar concentrations. They also reported that this compound has good microsomal stability and suitable physicochemical properties: a solubility of 410 μg mL⁻¹ and a CLint (microsomes) of 15 mL/min/mg. The bioavailability of the compound is AUC (mouse): 98.6 µg mL⁻¹ h. Moreover, compound 10a (BDM41906) shows a tenfold improvement over the activity of ethionamide; such compounds are called "ethionamide boosters." The molecule is in a phase II clinical trial as an anti-TB agent.

Almansour and co-workers (2012) reported a series of substituted 1,2,4-oxadiazoles (Almansour et al. 2012). The in vitro anti-mycobacterial activity of compound 11a against Mtb. is 0.07 μM, and against MDR-TB it is 0.14 μM.

Anticancer activity

Kala et al. (2020) have worked on the design and synthesis of 1,2,4-oxadiazoles with quinoline derivatives and evaluated their anticancer activity against etoposide (Kala et al. 2020). Etoposide is used for the treatment of non-lymphocytic leukemia, lung cancer, lymphoma, testicular cancer, and glioblastoma multiforme; in treating testicular cancer, it is usually given in combination with other drugs such as bleomycin. The compounds 12a, 12b, and 12c exhibited excellent anticancer activity when compared with etoposide (Table S2). The designed series can be considered an extension or modification of the published work of M. Srinivasa, S. Satyavenia, and B. Rambis, wherein the isoxazole and quinazoline heterocycles are replaced with quinoline heterocycles. … cell line (Z138), Maver-1, Jeko-R, and leukemia cell line (Granta-519), respectively.

Cascioferro et al. (2019) reported structural modifications made to nortopsentin, an alkaloid, to obtain new derivatives wherein a 1,2,4-oxadiazole moiety replaces the central imidazole ring as the main modification, and they generated the SAR (Cascioferro et al. 2019). The compounds were tested on cancer cell lines such as HCT-116 (human colon cancer), MCF-7 (breast cancer), and Caco-2. While compounds 18a and 18b are the most prominently active compounds in the series, and both induce a halt of the cell cycle at the resting (G0)-growth (G1) phase, 18a is 2 to 4 times more active than 18b, except for the HeLa cell line. The SAR studies revealed that the halogen atom is crucial for the antiproliferative activity. Table S3 shows the cytotoxic activities measured by the MTT colorimetric assay.

Srinivas et al. (2018) have described the design and synthesis of novel 1,2,4-oxadiazole-linked benzimidazoles (Srinivas et al. 2018). Preliminary evaluations of the substances' antitumor activities against A549 (lung), MCF-7 (breast), and the human melanoma cell line A375 revealed that they were more potent than the drug doxorubicin.

Chakrapani et al. (2018) and co-workers have described the design and synthesis of a new 1,2,4-oxadiazole-linked imidazothiadiazole analogue that is structurally close to levamisole (Chakrapani et al. 2018). Anticancer activity data were generated for these compounds using the MTT assay on three human cell lines: A375, MCF-7, and the human renal carcinoma cell line ACHN (Table S4).
The compounds 20a, 20b, and 20c turned out to be the most prominent compounds, with activity almost similar to that of the doxorubicin drug.

A 1,2,4-oxadiazole scaffold has been developed by S. Moniot et al. (2017) for Sirt2 inhibitors. They successfully optimized their previously reported lead compound, 3-(4-chlorophenyl)-5-(piperidin-1-ylmethyl)-1,2,4-oxadiazole. For the SAR studies, an α-tubulin-acetyl-Lys40 peptide was used as the Sirt2 substrate (Moniot et al. 2017). The study revealed that, for a molecule to possess Sirt2 inhibitory action, the 1,2,4-oxadiazole must have a para-substituted phenyl ring at its 3-position and a cyclic aminomethyl or haloalkyl chain at its 5-position as the crucial substituents. Amongst the compounds tested against leukemia cell lines, 21a and 21b emerged as the most active in the series; 21a shows an IC50 of 10 µM against Sirt2, while compound 21b shows an IC50 of 1.5 µM (Table S5).

Han et al. (2016) and co-workers have designed and synthesized a new series of 4-chlorobenzamide derivatives containing substituted 1,2,4-oxadiazole heteroaryl rings as potent rearranged during transfection (RET) kinase inhibitors for cancer therapy (Han et al. 2016). The compound 22a strongly inhibits cell proliferation driven by RET wild-type and gatekeeper mutants. Western blot analysis revealed that compound 22a completely inhibits the phosphorylation of the RET enzyme at a 1-µM concentration. Compound 22a shows an IC50 of 1.8 nM against RET, whereas ponatinib shows an IC50 of 0.9 nM.

Cai et al. (2016), following their successful investigation of a few derivatives of vorinostat as antitumor agents, explored the optimization of the lead compound entinostat, leading to a series of 2-aminobenzamide and hydroxamate derivatives containing 1,2,4-oxadiazole (Cai et al. 2016). The MTT-based assay performed to investigate the in vitro antiproliferative activities of these compounds revealed that the 2-aminobenzamides 23b [R = phenyl, etc.] are predominantly active towards the histiocytic lymphoma cell line (U937) among the tested human cancer cell lines U937, NCI-H661, A549, HCT116, and MDA-MB-231. Furthermore, the compounds were tested against histone deacetylases (HDAC) 1, 2, and 8 for their inhibitory activities; while the 2-aminobenzamide derivatives 23b are active against HDAC1, the hydroxamate derivatives 23a [R = phenyl, etc.] are active against HDAC8 and exhibit lower IC50 values compared to suberoylanilide hydroxamic acid (SAHA) and entinostat.

Cai et al. (2015a, b) discussed how they made hydroxamate, 2-aminobenzamide, and trifluoromethyl ketone analogues of 1,2,4-oxadiazoles (Cai et al. 2015a, b). They replaced the amide functional group of vorinostat with 1,2,4-oxadiazole as a bio-isostere to investigate the effect of the replacement on activity. When these compounds were tested against three human cancer cell lines, most of them showed more prominent anticancer activity against the human acute myeloid leukemia cell line U937 than against the two human lung cancer cell lines A549 and NCI-H661. Compound 24a shows IC50 values of 5.31 μM, 3.09 μM, and 0.29 μM against the A549, NCI-H661, and U937 cell lines, respectively. Compound 24b shows IC50 values of 9.17 μM, 0.41 μM, and 0.46 μM, and compound 24c shows IC50 values of > 100 μM, 18.68 μM, and 3.73 μM, against the A549, NCI-H661, and U937 cell lines, respectively.
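The IC50 values quoted throughout are typically extracted by fitting a sigmoidal dose-response curve to assay readouts such as MTT viability; a minimal sketch with synthetic data (assuming NumPy and SciPy are available) follows:

import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    # Four-parameter logistic (Hill) dose-response model.
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])    # µM, invented
viability = np.array([98, 95, 85, 60, 35, 12, 5], float)   # % of control

params, _ = curve_fit(hill, conc, viability, p0=[0.0, 100.0, 0.3, 1.0])
print(f"estimated IC50 ≈ {params[2]:.2f} µM")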
Introduction of thiadiazole and its biological importance

Thiadiazole, one of the most prevalent and indispensable heterocycles, is a five-membered heterocyclic compound, a scaffold that forms an important part of the structure of several naturally occurring as well as medicinal products (Dawood and Farghaly 2017). The thiadiazole moiety acts as a "hydrogen-binding domain" and "two-electron donor system," which makes the thiadiazole ring biologically active, while the sulfur atom imparts liposolubility, leading to analogues with higher lipophilicity. 1,3,4-Thiadiazoles are widely used as bioisosteric replacements for pyrimidine, pyridazine, oxadiazole, oxazole, thiazole, and benzene in the development of new drugs (Serban et al. 2018). A few 1,3,4-thiadiazole-scaffold-containing marketed drugs are reported in Fig. 7, which underlines their importance in the field of medicinal chemistry (Serban 2019). Acetazolamide (9) and methazolamide (10) are potent carbonic anhydrase inhibitors, drugs used in the treatment of glaucoma, an eye-related disorder that causes damage to the optic nerves. Megazole (11) is an antitrypanosomal agent, a drug used to treat African trypanosomiasis, also called sleeping sickness. Sulfamethizole (12) is an anti-microbial agent. Cefazolin (13) and cefazedone are antibiotics that belong to the cephalosporin family. Azetepa (14) is a phosphorus-containing drug used for the treatment of cancer.

Chemistry

Thiadiazole is a heterocyclic ring with two nitrogen heteroatoms and one sulfur heteroatom. Based on the positions of the nitrogen and sulfur heteroatoms in the ring, it has four isomers, as represented in Fig. 8a. 1,3,4-Thiadiazole is a conjugated, weakly basic, planar, and electron-deficient ring system. While its high aromaticity together with the +I effect of the sulfur atom makes it weakly basic, the electron-withdrawing effect of the nitrogen atoms makes it electron deficient. This nature makes the carbon atoms at the C-2 and C-5 positions relatively inert towards electrophilic substitution while more reactive to nucleophilic attack. Substituents at the C-2 or C-5 positions activate the ring, favoring nucleophilic attack on the carbon atoms. Although electrophilic attack on the sulfur atom is rare, the ring nitrogen atoms, depending on the nature of the substituent at the C-2 or C-5 position, do undergo electrophilic attack (see Fig. 8b). The ring is quite stable and can tolerate aqueous acidic solutions, but not the strongly basic conditions under which ring cleavage is observed.

Synthesis

Emil Fischer first synthesized 1,3,4-thiadiazole in 1882 (Goerdler et al. 1956). The most common and extensively explored synthetic route is the cyclization of acyl hydrazines, which includes diacylhydrazines and monoacylhydrazines. The other synthetic route to 1,3,4-thiadiazoles makes use of thiohydrazines. From the recent literature, we have summarized a few strategies employed by chemists for the synthesis of 1,3,4-thiadiazoles, as shown in Fig. 9 (Al-Omar et al. 2011; Kristinsson and Winkler 1982). The transformation of the 1,3,4-oxadiazole ring is also a route of choice.

Biological Activity

Anti-tubercular activity

The structures of biologically active 1,3,4-thiadiazole derivatives are illustrated in Fig. 10. Kumar et al. (2019) have synthesized a series of azetidinone-nucleus-containing 1,3,4-thiadiazole derivatives from Schiff bases, which are reported to exhibit good antibacterial, antifungal, and anti-tubercular activity. To obtain anti-tubercular activity data, the microplate Alamar Blue assay was performed. The MIC of these derivatives against the Mtb.
H37Rv strain ranged from 6 to 25 μg mL−1. The compounds 25a, 25b, 25c, and 25d, with the substituent in the para position, are active and have MICs close to 6 μg mL−1, while a substituent in the meta position makes them inactive. Taflan et al. (2019) reported two series of novel imidazo[2,1-b][1,3,4]thiadiazole (ITD) hybrid compounds 26a with excellent anti-tubercular profiles and MICs ranging from 0.24 to 0.49 μg mL−1 against Mycobacterium smegmatis (Taflan et al. 2019). At the same time, these compounds were also found to possess antioxidant effects due to the presence of a 3,4-dihydroxy phenolic group at the C-2 position. Demirci et al. (2018) have reported the synthesis of novel 1,3,4-thiadiazole-based fluoroquinolone hybrid compounds as anti-tuberculosis agents. All of the norfloxacin-derived compounds have 4-chlorophenyl (27a) and 2,4-dichlorophenyl (27b) substituents on the 1,3,4-thiadiazole ring and exhibit significant antimicrobial activity against the Mtb. H37Rv strain. MICs were measured using the broth microdilution method and were found in the range of 8 to 64 μg mL−1. Wadhwa et al. (2017) presented the results of computational studies that revealed the most vital structural requirements of molecules for InhA inhibition. The molecular docking experiment performed on imidazo[2,1-b][1,3,4]thiadiazole 28a revealed that PRO156, GLN100, TYR158, ALA198, LEU197, LEU218, and MET199 (active site) are the essential binding residues responsible for interactions between the InhA enzyme and inhibitors. Karabanovich et al. (2016) presented 5-substituted 2-[(3,5-dinitrobenzyl)sulfanyl]-1,3,4-oxadiazoles and 1,3,4-thiadiazole derivatives as anti-TB agents (Karabanovich et al. 2016). They assessed the anti-tubercular profile (in vitro) of the synthesized compounds as well as studying the SAR deeply to confirm their earlier reported findings that the dinitrobenzylsulfanyl substitution is crucial for anti-mycobacterial activity. The nitro-substituted 2-alkyl/aryl-5-benzylsulfanyl-1,3,4-thiadiazoles are reported to possess greater activity than the first-line anti-TB drugs, with a measured MIC as low as 0.03 μM for compound 29a. The compounds were also tested against nontuberculosis mycobacterial species, namely Mycobacterium avium Czechoslovak National Collection of Type Cultures (CNCTC) My 331/88, Mycobacterium kansasii CNCTC My 235/80, the clinically isolated M. kansasii 6509/96, and six multidrug-resistant clinically isolated strains of M. tuberculosis. Tatar et al. (2016) have synthesized and reported conjugated thiadiazole-thioureas as anti-tuberculosis agents. The 5-(4-chlorophenyl and 4-fluorophenyl)-1,3,4-thiadiazole 3-substituted thioureas were identified as the most active and most selective compounds against the M. tuberculosis H37Rv strain. The MIC values measured for compounds 30a and 30b were 10.96 and 11.48 μM, respectively. Molecular docking studies performed on these compounds showed good docking scores for the enzyme InhA. Tatar et al. (2015) reported another two series of compounds, synthesized from L-methionine, as anti-TB agents. The synthesized 1,3,4-thiadiazole and 1,2,4-triazole derivatives exhibited anti-tubercular activity (Tatar et al. 2015). The 1,3,4-thiadiazole with a [4-chloro-3-(trifluoromethyl)phenyl]-substituted thiourea, 31a, is the most active compound against the M. tuberculosis H37Rv strain, with a measured MIC of 30.88 μM. Batt et al.
(2015) successfully used the target-overexpression approach to screen the lead molecules, identifying GSK710 (32a), one of the series' analogues, as a strong inhibitor of the enzyme Mt-DprE1 (IC50 = 54 nM) in M. bovis BCG. This compound showed an eightfold higher MIC against the overexpressing strain relative to the control strain. Furthermore, sequencing studies of spontaneous resistant mutants were used to conclude that DprE1 is the target of these analogues, with single nucleotide changes at positions E221Q and G248 in the DprE1 gene. SAR studies revealed that modifications are tolerated at the terminal phenyl ring, including fluorine substitution, pyridine regioisomers, and other functional groups substituted at the para position; such changes dramatically improve solubility and distribution while maintaining inhibitory activity.

Anticancer activity

Naggar et al. (2019) recently presented the design, synthesis, and molecular docking of antimicrobial and anticancer 5-(3,5-dinitrophenyl)-1,3,4-thiadiazole compounds. These compounds were studied in vitro and their activities were compared to those of doxorubicin, a standard drug. The compounds 33a, 33b, and 33c are the most active compounds and show the IC50 values listed in Table S6 against various tumor cell lines. Altıntop et al. (2018) have described the design and synthesis of 1,3,4-thiadiazole derivatives and evaluated their anticancer activity towards chronic myelogenous leukemia cells. The compound 34a was screened against eight kinases and found to inhibit Bruton's tyrosine kinase (BTK), FYN proto-oncogene kinase (FYNA), LCK, and C-terminal Src kinase (CSK), with the highest selectivity for the Bcr-Abl-positive K562 cell line. With a measured IC50 value of 7.4 µM against the Abl kinase protein, compound 34a turned out to be the most potent of all the compounds. It also had a different kinase inhibitory profile than imatinib: compound 34a exhibited BTK inhibitory activity of 30.3 µM, whereas imatinib showed activity of > 100 µM. Chowrasia et al. (2017) investigated novel fused thiadiazole scaffolds as anticancer agents. The fluorinated analogue 35a is more active than the parent analogue 35b. The antiproliferative activity observed for the most active compound 35a against different cell lines was an IC50 of 22.1 µM against MCF7, 19 µM against a cell line derived from a primary osteosarcoma (SaOS-2), and 15 µM against K562. The compound 35b showed significantly lower activity against the same cell lines, with an MCF7 IC50 of 30.2 µM, a SaOS-2 IC50 of 39 µM, and a K562 IC50 of 29.4 µM. Amin et al. (2017) presented the synthesis of various coumarin-thiadiazole-based analogues with potential antitumor activity. A DNA binding assay was performed to assess the antitumor activity of the synthesized compounds. These compounds were tested against a few human cancer cell lines: liver cancer (HepG-2), colorectal cancer (HCT-116), and breast cancer (MCF-7). The measured IC50 values for compound 36a in the HepG-2, HCT-116, and MCF-7 human cancer cell lines are 266, 238, and 398 μg mL−1, respectively. Similarly, compound 36b shows IC50 values of 114, 30.7, and 54.9 μg mL−1 in the HepG-2, HCT-116, and MCF-7 cell lines. Jakovljevic et al. (2017) developed two series of 1,3,4-thiadiazole amide derivatives containing a catechol moiety as potential antioxidant and anticancer agents.
Among the 3,4- and 2,3-dihydroxy series of derivatives, the latter showed the best activity. Moreover, the antioxidant properties of these derivatives depend strongly on the substituent attached to the amide bond. Thus, the adamantane-containing compounds 37a and 37b showed enhanced cytotoxic activity, with IC50 values of 7.4 and 7.3 µM, respectively, towards human acute promyelocytic leukemia HL-60 cells and lung carcinoma A549 cells, while they showed decreased toxicity towards normal MRC-5 cells, suggesting a selective nature of action. Wang et al. (2017) identified [1,2,4]triazolo[3,4-b][1,3,4]thiadiazole as a novel scaffold for inhibitors of disruptor of telomeric silencing 1-like (DOT1L), which plays a crucial role in cell cycle regulation and transcriptional elongation (Wang et al. 2017). The compound 38a showed strong binding affinity to DOT1L, with a measured IC50 of 8.3 μM. It has average selectivity over other methyltransferases and non-MLL-rearranged leukemia cell lines. Kenji et al. (2016) have described the synthesis of nitrogen-containing bisphosphonates. These compounds show antitumor activity, in particular in breast cancer and myeloma patients. Due to poor absorption, they have difficulty entering cells and hence exhibit weak activity; they also cause bone side effects. Hence, the research group synthesized their pivaloyloxymethyl esters as a prodrug approach. The compound 39a shows an IC50 of 11 µM for the U937 cell line and an IC50 of 8.5 µM for the EJ-1 cancer cell line. Tingting et al. (2016) synthesized novel disubstituted 1,3,4-thiadiazoles that include substituted N-heterocyclic rings, mainly indole, pyridine, and quinoline, along with aromatic rings (Tingting et al. 2016). The synthesized compounds were tested against CML cells and breast cancer cells. The compound 40a, with indole as the substituent on the thiadiazole, is reported to be the most potent compound of all, with a measured IC50 value of 5.9 ± 0.56 μM against the epithelial human breast cancer cell line MDA-MB-231 and a measured IC50 value of 4.2 ± 0.32 μM against the K562 cell line. Gossypol was used as the positive control. Romagnoli et al. (2015) described the synthesis of hybrid imidazo[1,2-b][1,3,4]thiadiazole compounds as a biologically active scaffold with anticancer activity (Romagnoli et al. 2015). The compound 41a shows IC50 values of 0.17, 0.37, 0.41, and 0.67 µM against the mouse lymphocytic leukemia cell line (L1210), the murine breast cancer cell line (FM3A), human acute lymphoblastic leukemia cells (CEM), and HeLa cell lines, respectively. The compound 41b shows IC50 values of 0.25, 0.61, 0.83, and 0.87 µM against the L1210, FM3A, CEM, and HeLa cell lines, respectively.

Conclusion

It is evident from the plethora of reports that today the 1,2,4-oxadiazole and 1,3,4-thiadiazole scaffolds are widely explored by researchers in medicinal chemistry, particularly as anti-tubercular and anticancer agents. The appearance of these scaffolds in a number of newly reported molecules also underlines their importance as bio-isosteres. In the effort to improve anticancer and anti-tubercular profiles, both positions of these heterocycles have been substituted and explored, and new scaffolds in combination with other heterocycles have also been reported. Although both scaffolds have been tested against various targets for their anti-tubercular and anticancer properties, other potential targets are still to be explored. The SAR studies conducted have helped in identifying a few key structural requirements.
A few of the derivatives showed promising anti-tubercular and anticancer activity. We believe the literature summarized herein provides an overview of the anti-tubercular and anticancer activities demonstrated by 1,2,4-oxadiazoles and 1,3,4-thiadiazoles, and will help researchers in rational drug design and development in this area.
Continuous glucose monitoring for children with hypoglycaemia: Evidence in 2023

In 2023, childhood hypoglycaemia remains a major public health problem and significant risk factor for consequent adverse neurodevelopment. Irrespective of the underlying cause, key elements of clinical management include the detection, prediction and prevention of episodes of hypoglycaemia. These tasks are increasingly served by Continuous Glucose Monitoring (CGM) devices that measure subcutaneous glucose at near-continuous frequency. While the use of CGM in type 1 diabetes is well established, the evidence for widespread use in rare hypoglycaemia disorders is less than convincing. However, in the few years since our last review there have been multiple developments and increased user feedback, requiring a review of clinical application. Despite advances in device technology, point accuracy of CGM remains low for children with non-diabetes hypoglycaemia. Simple provision of CGM devices has not replicated the efficacy seen in those with diabetes and is yet to show benefit. Machine learning techniques for hypoglycaemia prevention have so far failed to demonstrate sufficient prediction accuracy for real world use even in those with diabetes. Furthermore, access to CGM globally is restricted by costs kept high by the commercially-driven speed of technical innovation. Nonetheless, the ability of CGM to digitally phenotype disease groups has led to a better understanding of the natural history of disease, facilitated diagnoses and informed changes in clinical management. Large CGM datasets have prompted re-evaluation of hypoglycaemia incidence and facilitated improved trial design. Importantly, an individualised approach and focus on the behavioural determinants of hypoglycaemia has led to real world reduction in hypoglycaemia. In this state of the art review, we critically analyse the updated evidence for use of CGM in non-diabetic childhood hypoglycaemia disorders since 2020 and provide suggestions for qualified use.

KEYWORDS: hypoglycaemia, continuous glucose monitoring, children, hyperinsulinism, glycogen storage disease, prematurity

Introduction

In 2023, non-diabetes hypoglycaemia remains a major global problem for children. Its effects are far reaching, with impacts on quality of life (1,2), health economics (3) and hypoglycaemia fear (4), reaching beyond the individual to the extended family (5,6). Although recent studies (7), complementing previous work (8,9), have suggested a lesser effect of transient neonatal hypoglycaemia (10), there remains little doubt of the impact of severe childhood hypoglycaemia on neurodevelopmental delay, particularly in those children with severe and recurrent hypoglycaemia due to congenital hyperinsulinism (CHI) (9-11). Essential to all hypoglycaemia management, irrespective of the cause, is the detection, prediction and prevention of episodes through glucose testing (12,13). The first of these three tasks has traditionally been performed by fingerprick blood glucose testing (13), with prediction and prevention reliant on clinical skill and patient experience. However, over recent years, all three tasks are increasingly being performed by continuous glucose monitoring (CGM), either in its raw form or through its manipulation by modern computer algorithmics. For people living with diabetes, CGM and associated predictive algorithms are widely used and well established in the reduction of hypoglycaemia (14-17) and in cost-effectiveness (18-20). However, for those with a non-diabetes hypoglycaemia disorder, the utility seen in diabetes has not been replicated and CGM has not been established in routine clinical practice. The use of CGM in rare hypoglycaemia disorders is a rapidly evolving and expanding field. In this review we follow on from a comprehensive review in 2020 (13) to provide an update on improvements in the technology and utility of CGM, focusing mainly on CHI, glycogen storage diseases (GSD) and neonatal prematurity. We reflect on our predictions from 2020, synthesise current understanding and look to the future.

Accuracy

We have detailed the background to accuracy assessments in CGM elsewhere (13), but it is worth outlining the two differing approaches to accuracy assessment: 1) pairing CGM values with fingerprick glucometer values and measuring the difference; 2) evaluating the ability of CGM devices to detect hypo- (or hyper-) glycaemia within a time window, thus utilising to a fuller extent the semi-continuous nature of CGM. Measures of accuracy differ widely throughout the literature, but the former is more commonly used and tends to incorporate mean absolute relative difference (MARD), mean absolute difference and hypoglycaemia sensitivity/specificity. A summary of CGM accuracy studies by various groups using different CGM devices in non-diabetes hypoglycaemia is presented in Table 1.

Neonates

Beardsall et al.
first evaluated the accuracy of CGM devices in neonates in 2005 (21) and later in 2013 (22); they reported a correlation coefficient of 0.69-0.94 with safe results on an error grid (albeit one designed for those with diabetes). However, hypoglycaemia sensitivity was found to be only 17%. More recent results from the same group showed a relatively small MARD of 11% but a hypoglycaemia sensitivity of only 59% with the latest devices and technologies (23). These calculations were based on a lower threshold for hypoglycaemia (<2.6 mmol/L) than is usually used outside the neonatal unit. Furthermore, as described above, sensitivity is based on point comparisons of accuracy, which can underestimate the clinical value of sensor glucose trends in detecting hypoglycaemic events. Recent work in Australia by Vijayanand et al. (24) has confirmed the poor hypoglycaemia sensitivity seen in this group, with results of 54% when using point comparisons.

Childhood hypoglycaemia disorders

CGM is not routinely used in patients with CHI and therefore data is relatively sparse (Table 1). In the first evaluation of CGM in CHI, Alsaffar et al. (25) reported a hypoglycaemia (3.5 mmol/L) sensitivity of only 52% but did not report a MARD. While an evaluation of a more up to date device by Rayannavar et al. (26) showed a better hypoglycaemia sensitivity of 86%, this was calculated using a higher cut-off for hypoglycaemia (3.9 mmol/L), as is standard practice in some countries. When hypoglycaemia <3.0 mmol/L was investigated, a low sensitivity of 66% was demonstrated. As existing error grids (such as Parkes and Clarke) are designed for evaluation of CGM accuracy for those with diabetes, they have not been used as standard in assessments in CHI. Recently Worth et al. (27) developed an expert-consensus error grid for use in CHI and used this to evaluate the accuracy of one of the most recent CGM sensors, the Dexcom G6. Results suggested the presence of significant clinical risk in the use of CGM for patients with CHI due to poor device accuracy on error grid analysis and a hypoglycaemia sensitivity of only 45%. Analysis of the ability of the Dexcom G6 to detect glucometer-measured hypoglycaemia within a 30 minute window was marginally better but still unreliable at 51% (27). Equally, CGM is also not used routinely in patients with GSD, and assessments of CGM accuracy for this group have been largely incomplete (Table 1). Published papers (28,29) report mean difference or correlation between CGM and glucometer values, but because both overestimation and underestimation are present and no mean absolute difference is reported, it is impossible to determine the average magnitude of errors. Rossi et al. (30) went on to evaluate CGM error by glucose value and also between those with GSD1a and healthy volunteers. They found that CGM overestimation was worse for those with GSD1a and at glucose values <3.9 mmol/L, thereby increasing the risk of missed hypoglycaemia for the most vulnerable groups at the time of greatest need.
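To make the accuracy measures discussed above concrete, the following minimal Python sketch computes MARD, point hypoglycaemia sensitivity, and a simple time-window (event-based) detection rate from paired CGM and reference values. The 3.0 mmol/L threshold and 30-minute window are assumptions chosen to mirror the figures quoted above; published studies vary in both, and the paired data here are invented for illustration.

```python
import numpy as np

def mard(cgm, ref):
    """Mean absolute relative difference (%) between paired CGM and
    reference (e.g. fingerprick glucometer) glucose values."""
    cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
    return float(np.mean(np.abs(cgm - ref) / ref) * 100)

def point_hypo_sensitivity(cgm, ref, threshold=3.0):
    """Fraction of reference values below the threshold (mmol/L) whose
    paired CGM value is also below the threshold."""
    cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
    hypo = ref < threshold
    return float(np.mean(cgm[hypo] < threshold)) if hypo.any() else float("nan")

def windowed_hypo_detection(cgm_t, cgm_v, ref_t, ref_v, threshold=3.0, window=30):
    """Event-based sensitivity: a reference hypoglycaemic value counts as
    detected if any CGM reading within +/- `window` minutes is below the
    threshold (all times in minutes)."""
    detected = events = 0
    for t, v in zip(ref_t, ref_v):
        if v >= threshold:
            continue
        events += 1
        if any(g < threshold for tt, g in zip(cgm_t, cgm_v) if abs(tt - t) <= window):
            detected += 1
    return detected / events if events else float("nan")

# Invented paired readings (mmol/L):
ref = [2.8, 4.5, 6.1, 2.9, 5.0]
cgm = [3.2, 4.3, 5.8, 2.7, 5.4]
print(f"MARD: {mard(cgm, ref):.1f}%")
print(f"Point hypoglycaemia sensitivity (<3.0): {point_hypo_sensitivity(cgm, ref):.2f}")
```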
Efficacy of CGM to detect and prevent hypoglycaemia

We have previously summarised the efficacy of CGM for children with non-diabetes hypoglycaemia due to various conditions (13). Here we summarise recent developments in the field with regards to the conventional use of CGM to detect and prevent hypoglycaemia by simple provision to patients and clinicians. The non-conventional use of CGM is discussed later in Section 7.

Neonates

Previously summarised studies (13) have demonstrated the utility of CGM to reduce painful procedures, detect unsuspected hypoglycaemia and reduce hyperglycaemia. More recently, Fernández Martínez et al. (31) confirmed the ability of CGM to detect unsuspected and prolonged hypoglycaemia in very low birth weight (VLBW) neonates. Win et al. (23) have since demonstrated significant fluctuations in glucose in neonates, more pronounced in those with CHI. The same group recently published the results of an international, multi-centre RCT investigating the use of CGM in preterm neonates and clearly demonstrated a reduction in hypoglycaemia and hyperglycaemia for those in the CGM group (32), encouraging CGM as a potential tool for regular use in the neonatal intensive care unit.

Hypoglycaemia associated with rare endocrine conditions

At the time of our previous review in 2020, there was no evidence for CGM reducing hypoglycaemia for children with any endocrine conditions other than diabetes mellitus. In the absence of larger scale studies, we discussed (13) minimal evidence for use of CGM for both adults and children with adrenal insufficiency (AI) and the anecdotal reports of CGM use for those with CHI. Further single-case, anecdotal reports of the utility of CGM in CHI (33) and hypopituitarism (34) have since been published. Importantly, however, Worth et al. have recently published non-randomised data on CHI patients with periods of blinded and unblinded CGM (35), suggesting that the simple provision of CGM (without expert or algorithmic interpretative support) does not reduce hypoglycaemia for those with CHI. The addition of interpretative algorithmic or clinical support is discussed in Section 7. However, at the time of writing, there are no comprehensive studies evaluating the efficacy of CGM to reduce hypoglycaemia for children with endocrine hypoglycaemia.

Hypoglycaemia associated with rare hereditary metabolic disorders

We have previously outlined (13) the utility of CGM to detect unsuspected hypoglycaemia and facilitate manipulation of diet and treatment for patients with GSD. Previous anecdotal reports highlighted the utility of retrospective CGM data analysis but advised against the provision of real-time CGM to patients for fear of inappropriate treatment alterations (36). Since our previous review, there have been further anecdotal reports of CGM utility in the detection of glycaemic variability and excursions for patients with metabolic causes of hypoglycaemia (37-39), but no systematic evaluations of the use of CGM to actually prevent or reduce hypoglycaemia.

Family perspectives

Our previous review discussed families with CHI and GSD reporting marginal benefit from the use of CGM as secondary outcomes of studies. Anecdotally, families found glucose trends helpful. Since 2020, the significant increase in the use of CGM in hypoglycaemia disorders has led to an increase in literature regarding families' perceptions of this emerging technology, described below.

Patient charity reports

Patient charities fulfil a vital role of providing support to those with hypoglycaemia conditions but also provide an important window into the views and opinions of families. In a recent unpublished study (summarised in an opinion paper (40)), the UK Children's Hyperinsulinism Charity (UK CHC) reported that families with CHI find that CGM offers a safety net, improves quality of life, and reduces worry.
Patients reported (40) difficulty in access to CGM and a call was made for wider availability for families with CHI. While this survey is likely subject to significant positive sampling bias, it does offer an important insight into the opinions of some families with CHI. The charity Congenital Hyperinsulinism International (CHI) recently revealed that 45.7% of respondents to a global registry use CGM, but that access to devices is often a problem and trust in the data generated is often low (2). They also report that families generally find devices useful but that patients experience problems with poor accuracy (6). Again, this is likely open to sampling bias but offers an important user perspective. Within GSD, CGM is a much higher research priority for healthcare professionals than it is for patients and carers (41).

Qualitative studies

While patient organisations have called for wider access to CGM, it is important to formally assess families' experiences of CGM to actively seek out both positive and negative views. As recently highlighted by Peeks et al. (42), "glucose management as assessed with CGM should be balanced against psychosocial well-being and quality of life", which cannot be assumed to be higher with CGM than without. In CHI patients, Auckburally et al. (43) undertook semi-structured interviews with families who had been provided with a CGM for 12 weeks as part of a research project. As there was no existing information on CHI families' experiences of CGM, the authors performed a thematic analysis to identify themes important to patients and their families. Such detailed analysis revealed a rich and complex mixture of attitudes towards CGM. Families reported positive feelings about CGM's function as an educational tool which could motivate behavioural changes to prevent hypoglycaemia. However, the problematic issues of poor accuracy and irritating alarms were raised by all participants. In order to better understand the reasons for the high rate of dissatisfaction with CGM seen in CHI families, Ahmad et al. (44) performed semi-structured telephone interviews with those who had discontinued use. Primary reasons for discontinuation were pain, device inaccuracy and issues with technical setup, and 90% of those surveyed thought that CGM device use would have been easier if their child had been a different age (either younger or older) (44). Comprehensive assessments of families' experiences of CGM, with a focus on the reduction of selection bias, are essential in the journey to establish CGM as a therapeutic option for paediatric hypoglycaemia disorders. The authors are aware of two separate studies aiming to achieve this for families with CHI and the results are eagerly awaited.

Barriers to the use of CGM

In our 2020 review we highlighted the barriers to wider use of CGM in paediatric hypoglycaemia disorders, and to date there are no improvements with regards to lag time, alarms or fingerprick testing. However, with regards to clinician inertia and cost, an update is worthwhile.

Clinician inertia and usability

Over the last three years, the authors have noticed a significant increase in interest in CGM by clinicians working in paediatric hypoglycaemia disorders. There is now less suspicion of the technology and a higher acceptance of using CGM as a routine part of care. This is mirrored in the significant increase in publications relating to CGM in both hypoglycaemia disorders and neonatology.
However, the interest and marketing strategy of device manufacturers remains firmly focused on diabetes mellitus, precluding wider adoption and development specific to hypoglycaemia.

Cost and widening access

As CGM technology develops, it is important that the availability of devices is considered, especially for those in low-income countries (LICs) and for patients with rare diseases. These groups are often marginalised and disadvantaged in the commercially-driven push for technological progression, but efforts must be made to minimise access inequalities (45). As a technology, CGM could arguably have significant impact in LICs due to the added burden of hypoglycaemia from malaria, malnutrition, diarrhoea and sepsis (46). Additionally, for people living with diabetes, access to insulin is often intermittent in LICs (47), leading to hypoglycaemia and hyperglycaemia. CGM would also be highly valuable in the neonatal setting, as capacity for regular glucose monitoring in neonatal units in LICs is often limited and neonatal mortality is high (48). Indeed, neonatal hypoglycaemia is often present in otherwise uncomplicated newborn infants, and recognition and treatment may have a significant impact on neonatal outcomes (49,50). Moreover, the long-term impacts associated with childhood hypoglycaemia, such as neurodisability, epilepsy and reduced cognitive function (9,51), have a higher burden in LICs, being poorly understood by wider society and suboptimally managed due to meagre resources (52-54). So, while the costs of CGM may be high, its implementation may enable faster, more accurate treatment modification, improving outcomes (38) and likely contributing to value-based healthcare in both common, high-volume disease (55) and rare, low-volume disease such as GSD (56). However, it is important to recognise that technology developed for a high-income setting is not always appropriate for LICs, where the environment is different; there can be extremes of temperature, intermittent access to internet and electricity, high levels of dust and minimal access to engineers to repair devices (50, 57-59). A target product profile (TPP)-based approach has been developed to identify key specifications for product innovation in LICs. This approach has been particularly successful in the development of neonatal devices, most notably in bubble CPAP, and a similar approach should be considered in the development of CGM devices (50, 60).

Updates on previously suggested developments

In our 2020 review we predicted that future developments would be focused on CGM device technology and predictive hypoglycaemia algorithms. Here we provide an update on the developments in these areas over the last three years before moving on to discuss alternative and novel areas for CGM use in Section 7.

CGM device technology

The direction for CGM device technology development continues towards miniaturisation, with a focus on reducing the invasive nature of some CGM devices. Dexcom® have since released the G7 device, which is smaller, thinner and predicted to be more accurate. Abbott® have released the FreeStyle Libre 3, also smaller and thinner and now offering real-time readings with optional alerts. Eversense® now have an implantable sensor with a six-month wear time, requiring only a single calibration per day.
There has also been significant interest in the last few years in optical sensors that detect photons to determine the glucose concentration via the interaction between glucose molecules and different wavelengths of light (61). Other sensor developments focus on the non-invasive measurement of sweat, urine, saliva, tears (62) and even thermal monitoring (63); however, these ideas have not yet translated to a commercially viable stage.

Predictive hypoglycaemia algorithms

Our 2020 review (13) outlined the background to the use of predictive algorithms for hypoglycaemia and the different forms that these can take: physiological, data-driven, and hybrid (64). While non-machine-learning algorithms such as Model Predictive Control have been beneficial for adults (65) and neonates (66) using closed loop insulin delivery, these systems are of no use to the majority of patients with rare hypoglycaemia disorders whose hypoglycaemia is not caused by exogenous insulin. Work in the field of data-driven predictions continues to expand rapidly in diabetes, and artificial intelligence and machine learning methods using large historical datasets continue to be used to derive theoretical prediction models (Figure 1).

Figure 1: Publications by year with the search terms "continuous glucose monitoring" (CGM) and "machine learning" (ML) or "artificial intelligence" (AI), found on Google Scholar.

While multiple groups have continued to publish increasingly accurate in-silico algorithms (67-70), these have been evaluated by systematic review (71) and meta-analysis (72) and found to have insufficient ability to detect and prevent hypoglycaemia. The authors conclude that improvement is required before application in clinical settings. As suggested, these algorithms have been evaluated in-silico only, with no conclusive examples of machine-learning-driven predictive algorithms reducing hypoglycaemia in the real world. Decision Support Systems (DSS) are an extension of glucose predictive algorithms and facilitate decision making (e.g. food intake) based on various inputs (e.g. CGM data) and predicted outcomes (e.g. hypoglycaemia). Recent DSSs have shown in-silico (73) and possibly real world (74) reduction in hypoglycaemia through modification of insulin dosing for people living with diabetes. However, Tyler et al. (75) note in their systematic review that "it has not yet been shown that a DSS can improve time in range in human studies" and more work is required. Vitally, all DSSs focus on the use of exogenous insulin as either an input or output and are therefore of no use to those with a rare hypoglycaemia disorder such as CHI or GSD, but may have potential in neonates on insulin therapy (66).
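To illustrate the general shape of the data-driven forecasting approaches described above, the sketch below fits a deliberately simple lag-feature linear model to a synthetic 5-minute CGM trace and predicts glucose 30 minutes ahead, flagging predicted values below an assumed 3.0 mmol/L threshold. This is a toy illustration only: the cited in-silico algorithms use far richer models and inputs, and, as noted, none has yet demonstrated real-world hypoglycaemia reduction.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic 5-minute CGM trace (mmol/L), purely for illustration.
rng = np.random.default_rng(0)
glucose = np.clip(5.5 + np.cumsum(rng.normal(0, 0.15, 2000)), 2.0, 12.0)

LAGS, HORIZON, THRESH = 6, 6, 3.0      # 30 min of history, 30 min ahead
n = len(glucose)
samples = n - LAGS - HORIZON + 1
X = np.array([glucose[t:t + LAGS] for t in range(samples)])    # lag features
y = glucose[LAGS + HORIZON - 1: LAGS + HORIZON - 1 + samples]  # value 30 min later

split = int(0.8 * samples)             # chronological train/test split
model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])

rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"30-min-ahead RMSE: {rmse:.2f} mmol/L")
print(f"Test windows flagged as hypoglycaemia risk: {int(np.sum(pred < THRESH))}")
```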
Novel directions and a possible future for CGM in hypoglycaemia

So far we have provided updates on areas covered in our previous review. In this section we move on to discuss novel areas and uses for CGM which have either emerged since 2020 or are now gaining prominence. Person-centred outcome measures have been defined for type 1 diabetes (76,77) but are currently lacking for rare hypoglycaemia disorders. This causes difficulty in comparing studies and evaluating day-to-day impact for patients. Consensus, person-centred outcomes would greatly enhance routine healthcare and research for these groups, particularly with regards to emerging but as yet unproven technologies such as CGM.

CGM to elicit patterns and digital phenotypes

There is increasing recognition of phenotypes beyond those classically described by physical traits or cellular changes. Most recently established is the "digital phenotype" (78). The digital phenotype covers both behaviours related to technology, such as social media use, and behaviours measured by technology, such as heart rate monitors, accelerometers and CGM. These new measures facilitate a more comprehensive and individualised picture of patients' health and contribute to "P4 medicine" (79), allowing for a predictive, preventative, personalised and participatory approach to management. Worth et al. (80) took the first steps towards extending the digital phenotype of CHI with their analysis of retrospectively collected CGM data. Previously collected CGM data was used to identify periods of high hypoglycaemia risk in the early morning in patients with CHI, opening the door for targeted interventions on a group and individual level. Further work by this group (81) extended this analysis to hypoglycaemia patterns at the level of individual patients.

CGM as a behaviour change tool

CGM is still in its infancy as a technology and new ways are being explored to derive positive impact for patients' health. Traditional usage has focused on high frequency glucose data to allow patients to adjust insulin doses and to predict upcoming excursions from euglycaemia. As discussed above, CGM has been adopted by the computer science community with a focus on the development of glucose forecasting algorithms (64,84) to improve the accuracy with which these excursions are predicted. However, a new direction for CGM use is now being investigated: CGM as a behaviour change tool. In their review, Ehrhardt and Zaghal (85) conclude that "Rather than being used as a "reactionary device" for hypoglycaemia prevention and glycaemic management, CGM should be assessed for its use as a prevention tool. Its potential role as an adjunct to lifestyle changes [ … ] calls for further evaluation". In a survey of 40 people living with diabetes (86), 90% commented that CGM contributed to a healthier lifestyle, with 87% modifying food choices and 47% increasing physical activity based on CGM. Recent publications have also suggested that CGM could act as a behaviour modification tool for those with obesity (87). Combining pattern recognition with behaviour change has the potential to significantly improve self-management behaviours (88). Worth et al. used CGM to identify individual patterns in weekly hypoglycaemia risk of patients with CHI (81). The same group developed interpretative algorithms to facilitate patient understanding of patterns and provided suggestions for reflection designed to modify parental behaviours (35). The resulting change in fingerprick and self-management behaviours led to a reduction in real world hypoglycaemia of 25% (35,81), demonstrating the potential power of using CGM as a tool to identify and modify the behavioural determinants of hypoglycaemia. Due to the focus on weekly patterns and behavioural determinants of hypoglycaemia, this approach is less subject to problems with poor point accuracy and patient dissatisfaction with alarms, suggesting a novel and sustainable path to CGM application.
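As a simplified illustration of the kind of retrospective pattern summary described above (not the published groups' actual algorithms), the sketch below aggregates a synthetic week of 5-minute CGM readings into an hour-of-day hypoglycaemia profile and flags hours of elevated risk. The data, the 3.0 mmol/L threshold and the flagging rule are all assumptions made for illustration.

```python
import numpy as np
import pandas as pd

# One synthetic week of 5-minute readings with an early-morning dip built in.
rng = np.random.default_rng(1)
ts = pd.date_range("2023-01-01", periods=2016, freq="5min")
glucose = 5.0 + 1.5 * np.sin(2 * np.pi * (ts.hour - 12) / 24) + rng.normal(0, 0.8, len(ts))
readings = pd.Series(glucose, index=ts)

hypo = readings < 3.0                                  # assumed threshold (mmol/L)
profile = hypo.groupby(hypo.index.hour).mean() * 100   # % of readings < 3.0, by hour
print(profile.round(1))

elevated = profile[profile > profile.mean() + profile.std()]
print("Hours of elevated hypoglycaemia risk:", list(elevated.index))
```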
CGM to diagnose and inform management

While children with rare hypoglycaemia disorders do not have exogenous insulin to adjust based on CGM readings, there are many other diagnostic and management decisions that can be made on the basis of CGM outputs. Work evaluating the CGM profiles of healthy subjects (89,90) provides more data with which researchers can compare results from disease cohorts and evaluate glycaemic control in context. Rossi et al. (30) have shown this with their own assessment of healthy subjects in comparison to those with GSDIa. Separately, Rossi et al. (91) propose the use of CGM in a hybrid approach to determine fasting tolerance in children with GSDs, rather than the traditional "controlled fast" with multiple fingerprick tests. They go on to highlight the efficacy of CGM in determining the incidence of nocturnal hypoglycaemia as well as the impact of diet and medications on glycaemic profiles. Peeks et al. (42) support this approach and have documented their use of CGM to monitor the impact of nocturnal dietary interventions, changes in starch loads, and treatment with empagliflozin for patients with hepatic GSDs. In the case of treatment with empagliflozin, the authors highlight the utility of CGM in detecting the potential hypoglycaemia resulting from medication-induced glycosuria (42). Logel et al. (92) similarly used intermittent CGM to initiate and then titrate doses of diazoxide in a patient with Glut1 deficiency who had failed ketogenic diet; without the high-granularity data of CGM, it was felt that diazoxide would have been administered at incorrect doses, risking the loss of efficacy seen in other cases treated without CGM.

CGM as an outcome marker in clinical trials

In recent years CGM has become popular as an outcome in clinical trials to determine the efficacy of interventions to reduce hypoglycaemia. The high-granularity data generated by CGM reduces the chance of type II errors in clinical trials and allows investigators better insight into glycaemic changes secondary to therapeutics. CGM has recently been used as an outcome measure for: hypoglycaemia after paediatric cardiac surgery (93); treatment of CHI with dasiglucagon (94); treatment of CHI with RZ358 (95); and treatment of GSDIa with AAV8 gene transfer (96); and is planned for more upcoming therapeutic trials in rare hypoglycaemia disorders. An essential component of using CGM as an outcome measure is understanding the baseline data for each disease and population (42). This requires quantification of as many patients as possible (79); Rossi et al. (30) recently provided the first publication of CGM metrics for patients with GSD1a, as did Worth et al. (80,82) for patients with CHI; these are essential datasets for those utilising baseline characteristics when designing future therapeutic trials using CGM for primary or secondary outcomes.

Conclusion

There has been considerable progress in the development of the relatively new technology of CGM. However, in childhood hypoglycaemia disorders many historical problems remain. CGM continues to be insufficiently accurate, somewhat burdensome for patients and their families, costly, and lacking in evidence for its ability to reduce hypoglycaemia when provided to families without support. However, there is scope for optimism. Devices continue to miniaturise, improve in accuracy and reduce patient burden. Research and clinical teams are working around suboptimal point accuracy and a lack of patient educational resources to develop novel ways of utilising this technology. CGM is being used for diagnostics, monitoring changes in management, establishment of baseline characteristics, modifying behaviour, and ultimately to reduce hypoglycaemia when used retrospectively and combined with interpretative algorithms or clinical expertise.
Use in neonatal medicine is becoming established, with good evidence for a reduction in, and early recognition of, neonatal hypoglycaemia. A lack of guidelines for the use of CGM in hypoglycaemia disorders has restricted progress but, given rapid technological advances, it is predicted to play a larger role in all forms of childhood hypoglycaemia disorders. The challenge is to adapt CGM technology to clinical application with research designed to bring CGM innovations for patient benefit.

Author contributions

CW researched and wrote the first draft of the manuscript other than Section 5.2, which was written by LH. All authors contributed to the article and approved the submitted version.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
POST-1990 MIGRATION BIOGRAPHIES OF SLOVAKS FROM VOJVODINA: A TIME-GEOGRAPHIC PERSPECTIVE

The life histories of migrants whose life-paths were affected by significant political and social events are one of the research interests of human geography, the social sciences, and the humanities. The aim of this paper is to illustrate how time geography can be used as an approach to identify individuals' behavioural patterns. Data were collected from a small sample of Slovaks from Vojvodina (n=34) and subsequently analysed in relation to certain situations in time and space (place). Semi-structured interviews and time-space records were used in constructing 3D life-paths and life charts. Using the time-geographic approach, we provide information enabling the generation of a broader picture of a person's life in order to identify social, economic and behavioural aspects (including the motivation to migrate) affected in some cases by the war conflicts in the former Yugoslavia and in others by subsequent post-war political, legal, social and economic changes. The findings of the research show that this kind of "life-path scheme" helps describe a detailed life situation from one time period to another, where geographical sites serve as important "stations" for Slovaks from Vojvodina who decided to spend a longer period in Slovakia or to settle down in a new homeland.

INTRODUCTION

International migration is currently a characteristic manifestation of globalizing processes and a significant attribute of global change in an increasing number of countries. It is considered one of the fundamental civilization challenges of the 21st century due to its economic, social, population, cultural, political, security, environmental and other impacts. The majority of people migrate internationally for reasons related to work, study and family. In contrast, other people leave their homes and countries for a range of compelling and sometimes tragic reasons, such as war conflict, persecution and marginalization (IOM 2019). The study of the historical migration of contemporary Slovaks from the "Lower Land", originally coming from Upper Old Hungary, is a relatively frequent topic. When examining the diversity of the origin of Slovak settlers, it is mostly approached from a historical, historical-geographic or cultural, especially ethnographic, aspect (e.g. Siracký 1980, Benža et al. 2006, Gurňák 2007, Botík 2011 and Filadelfi 2012), where factual reference is made to the migrations in the 18th and 19th centuries and their intention to repopulate Lower Hungary and present-day Vojvodina. Interest in this topic can be traced mainly among Slovak experts and then in the circles of Vojvodina experts, mostly from the first half of the 20th century. Unfortunately, scientific works with a similar focus, but oriented towards movements in the opposite direction, when the descendants of the Vojvodina Slovaks migrated back to Slovakia, especially from the end of the 20th century to the present, are few (e.g. Zlatanović and Marušiak 2017, Surová 2016 and Marušiak and Zlatanović 2020). For these reasons, we will not be able to rely on numerous credible academic sources treating this issue, but will focus on our own time-space (geographical) research supplemented by knowledge from ethnological, sociological and political science research. Our ambition is to make this new phenomenon visible from the point of view of time-geography and to examine the biographical records of a selected sample of Slovaks from Vojvodina.
The aim is to illustrate how the time-geographical approach, using spatio-temporal diary data and semi-structured interviews, may be used to illuminate the impact of the changing conditions in Vojvodina, an autonomous province in northern Serbia, in the post-1990 period (until 2018), in order to understand the geographical aspects of the migratory behaviour of Slovaks from Vojvodina and their motives for spending a longer period in Slovakia or settling down in a new homeland. The study also suggests the effects of migration on life-paths, trying to articulate the migrants' "emotional geographies" in the studied period.

THEORETICAL BACKGROUND

Migration plays a critical part in people's life-paths, usually being associated with major life changes (Boyle 2009). For those moving abroad, migration can represent significant changes, including separation from the broader family and friends, loss of the familiar and close environment (including the landscape), the stress of the move itself, and the potential difficulties associated with coming to a milieu with a different cultural and socio-political setting. Although the vast majority of migrants are motivated by a desire for better economic opportunities and a higher standard of living for themselves and their families, some are forced to leave due to armed conflict and other causes (Divinský 2007 and Divinský and Zachar Podolinská et al. 2018). Over 82 million international migrants lived in Europe in 2019. A little over half of these (42 million) were born in Europe but were living elsewhere in the region; this number has increased since 1990, when it was much lower, at around 28 million (IOM 2019). For most South-Eastern European countries, emigration rather than immigration has been the key feature over recent decades. "Through migration different communities are linked together in more or less formal adaptation of flows and the implications of these streams may impact on both the migrants and the places through or to which they move" (Boyle 2009, p. 96). The approaches taken by geographers to understand population migration are theoretically and empirically diverse (Uherek 2007 and Boyle 2009). Remigration (or re-emigration), which means the act or process of returning or migrating back to the place of origin (or the origin of one's ancestors), is a specific case of migration (Janská and Drbohlav 2009). During the late 1960s and 1970s, behavioural geographical thought influenced the approach to the study of migration by stressing the mechanisms behind individual acts of migration, and put the individual agent at the heart of the process. The role of perceptions and the limits of the rational decision-making model were emphasized in these approaches. It was supposed that people do not always take perfect or rational decisions (Golledge and Stimson 1997). This encouraged more qualitative approaches to be adopted (e.g. focused attention on beliefs, aspirations, constraints, etc.). Migration is a complex and multifaceted process, and thus the concept of migration is becoming more "fluid" than was assumed in the past. This paper focuses on the issue of migration from the perspective of time-geography, and so the key aspect will be the study of life-paths and their important segments. Time-geography is one way of thinking about a hybrid time-space (Merriman 2012). The time-geographical approach concerns trajectories in time and space of individuals and groups (Hägerstrand 1967).
"Time-geography rests on the premise that each of the actions and events consecutively occurring between the birth and death of an individual has both temporal and spatial attributes, thus the biography of a person is ever on the move with her and can be conceptualized and diagrammed at daily and lengthier scales of observation as an unbroken, continuous path through timespace …" (Pred 1981, p. 9). The continuous flow of time cannot be halted. In time-geography, the lifetime of every human being as an indivisible unit is anchored in the historic time dimension from the moment of birth until death. The life experiences of two individuals are logically (naturally) different. The specific differences, among other things, depend on what happened in society (or specific milieu) during their lives (Ellegård 2018). The path is a fundamental concept in time-geography. The combination of the path and individual concepts into individual path reveals that the path illustrates the time-space movements of an individual. It can be used to visualize the movements of individuals and it works on various scales in time and space. Several authors underline that the path is an abstract illustration of the time-space movements of an individual (Lenntorp 1999, Ira 2001, Lundén 2003and Pred 2005, while the individual represented by the path is something much more complex (Ellegård 1999 and. In one sense the life-path can be read as diagrams of choices made by individuals, in another sense they depict constraints on the movements of people (Cresswell 2013). The path can be regarded as a flow of events, experienced by a human individual as his/her activity sequence. "The path can indicate the individual's getting in touch with other individuals, staying at a place doing something together with them, and leaving it afterwards" (Ellegård 2018, p. 30). The very idea of life-paths shaped Torsten Hägerstrand's geographical thoughts. His first research task was to 'trace the life from year to year' of 10,000 people who inhabited his home parish during 100 years. He succeeded in condensing the whole life story in one diagram; Hägerstrand subsequently developed the paths and projects of time-geographic as a 'scientific observer', and later Hägerstrand's perspective on the 'paths and projects' of time-geography incorporated memories, feelings, knowledge, imagination and goals as elements of a 'living landscape' (Daniels and Nash 2004). The focus of the time-space records is to gain knowledge about opportunities and constraints at the individual level, which can be used for many purposes (Ellegård 2019). The time-space record (time-space activity diary) is not only a significant instrument for a sensible analysis of everyday life but also for life-paths analysis of individuals (Frändberg 2008). The use of time-space records expanded after the introduction of Hägerstrand's time-geography into human geographical research (Hägerstrand 1970). The development of the geographic information systems (GIS) and computational capabilities in the last few decades facilitate the analysis of time-space records (Couclelis 1999). STUDY AREA Vojvodina is an autonomous province in northern part of Serbia ( Fig. 1) bordered by Croatia (West), Hungary (North) and Romania (East). It covers an area of 21,614 square km and has a population of 1.932 million (2011). Most of the territory consists of an extensive plain that is part of the Pannonian Basin. Vojvodina includes 3 historic regions (Bačka, Banat and Srem). 
Vojvodina is divided into seven districts that are further subdivided into 45 municipalities (https://www.sio.vojvodina.gov.rs/index.php/linkovi/linkvi-opstine) and 8 towns or cities. As far as ethnicity is concerned, the province is very diverse, with more than 25 ethnic groups, and six languages (including Slovak) are in official use by the provincial administration. The Slovaks living in Vojvodina are an ethnic group whose existence is the result of historical migration beginning in the eighteenth century. The Ottoman Turks controlled the region from the early 16th century until the late 17th and early 18th centuries, when it was incorporated into the Austrian Habsburg empire; in the course of the 18th century, large numbers of Slovaks migrated to the area. Sirácky (1980) defines three stages of this migration: the first stage, colonization of the Lower Lands (1690-1710); the second stage, colonization of settlements in Hungary (1711-1740); and the third stage, colonization of settlements in Vojvodina (1745-1790). During the third stage, the southernmost parts of Hungary were settled: Bačka, Banat and Srem (Filadelfi 2012). Bački Petrovac (Báčsky Petrovec in Slovak) was the first to be settled, in 1745. To this day it has retained a strong Slovak character and is an important cultural centre of Slovaks in Vojvodina. Descendants of Slovak migrants have lived in the territory of Vojvodina for more than two and a half centuries. Many have kept the language and faith of their ancestors, and they have maintained Slovak traditions and culture despite external pressures (Surová 2018). Most of them are Protestants, especially members of the Lutheran Church. Most Slovaks live in Kovačica (10,577) and Bački Petrovac (8,772); both towns are cultural centres of Slovaks in Vojvodina (Fig. 1). Most members of the Slovak community in Vojvodina speak the Slovak language, and they have the right to education in their mother tongue. The modern history of Vojvodina at the end of the 20th and the beginning of the 21st century was marked by war conflicts. The wars in the former Yugoslavia from 1991 to 2001 were a series of separate but related ethnic (and in some cases also religious) conflicts, wars of independence and insurgencies. Unresolved tensions led the constituent republics to declare independence, and in 1992 the Yugoslav federation broke apart. After the breakup of the former Yugoslavia, the civil war and the NATO bombing of the country in 1999, many Slovaks migrated from Vojvodina (Serbia) to Slovakia. Slovakia provided foreign Slovaks with preferential treatment in the areas of residency, work, education, healthcare and citizenship acquisition (Zákon č. 70/1997 Z.z. o zahraničných Slovákoch; Zákon č. 474/2005 Z.z. o Slovákoch žijúcich v zahraničí a o zmene a doplnení niektorých zákonov). "Some Slovaks from Serbia stayed in Slovakia permanently, some of them migrated further to other states while some returned back to Serbia. Many of them acquired either Slovak living abroad certificate or citizenship or both for various reasons" (Surová 2016, p. 6).

DATA AND METHODS
In this article, we apply a mixed-methods approach using time-space records (diaries), semi-structured interviews and simple observations. The union of qualitative and quantitative methods, and a focus on the specific with different geospatial technologies, leads to innovative and exciting ways of understanding and visualizing the multifaceted relationships between spatial phenomena (Yeager and Steiger 2013).
Qualitative research is the collection of information about human behaviour and perception; it is about focusing in depth to find out why and how certain activities and events occur. In our study, observation was applied as a participatory way of collecting data. Data collection in this semi-structured (semi-systematic) observation was conducted partly using specific variables on time-space behaviour and according to a pre-defined schedule. The advantages of the observational method include a high level of flexibility in application, the generation of a permanent record of phenomena and direct access to the research phenomena (Hai, ed. 2016). In the case of the research among Slovaks from Vojvodina, it was a participatory observation: the observer entered the group of subjects (in some cases in Vojvodina, in others in Slovakia, especially in Bratislava) and became an active member of it. This allowed a more comprehensive examination of the group, aimed at revealing the internal perspective of the participants. In order to collect survey data, we used the time-geographical approach, which is based on records of activities that have taken place in a certain time and space. The time-space records (time-space activity diaries) in our survey enabled us to collect information on: (1) the activity content, i.e. the time that an activity episode starts and ends (when), (2) the geographical context, i.e. the spatial location where the activity takes place (where), (3) the social context, i.e. the person(s) involved in the event (with whom), and (4) the use of transportation mode(s) enabling access to the place where the activity occurs (how). Information was also collected on several additional aspects, such as motivations to move and the respondents' feelings and emotions during activities (Schwanen 2009). This method of recording individuals' activities and movements in time and space may facilitate reflections on changes in the patterns of activities and may enable a deeper understanding of relationships in the community (Díaz-Muňoz et al. 1999 and Ellegård 2019). It illustrates practices and activities in their geographical and social context. The diary method provides an effective approach to collecting data that enables the systematic study of the life activities of individuals, and it has been applied in several scientific disciplines (e.g. Schwanen 2009, Schwanen and Kwan 2012, Ellegård 2019 and Sunnqvist et al. 2020). The participants filled out the time-space records and then participated in interviews to discuss them. We conducted semi-structured interviews, a method of inquiry in which we specified and asked our participants a set of questions, although the order and the way we asked the questions may have varied. The life charts technique was also used in our study. Time-geographical life charts activate autobiographical memories by merely asking about geographical moves through life; Sunnqvist et al. (2007 and 2020), for example, explored the use of time-geography life charts in clinical psychiatric practice. Life charts provide information on a person's sociocultural capacity, as well as on stressful or challenging (difficult to solve) life events. This technique makes it possible to recall and discuss difficult political, economic and social situations and stressful events in order to obtain a comprehensive picture of the entire life situation.
This method can give the researcher deeper knowledge than would otherwise be possible, and the time-geography life chart strengthens the comprehensive picture of the respondent's life situation. Snowball sampling is a method that has been used in the social sciences to study sensitive topics (Cohen and Arieli 2011). It is a nonprobability sampling technique in which existing study subjects recruit future subjects from among their acquaintances (e.g. Goodman 1961 and Rochovská et al. 2014); the sample group grows like a rolling snowball. The method applied in our research involved the selection of a sample of Slovaks from Vojvodina utilizing "insider" knowledge and referral chains among subjects who possess common traits that are of research interest (Kaplan et al. 1987).

RESULTS
In this article we analyse the activity patterns of a selected group of 34 Slovaks from Vojvodina who currently live in Slovakia. The group consisted of fifteen men and a slightly larger sample of nineteen women (average age 44.6 years in 2018). From the total of 34 respondents we selected four persons (two women and two men) with considerably different life destinies, and on the example of their life-paths we analysed the complexity of their behavioural patterns in more detail through two techniques (3D graphs in Figs. 2-4 and one life chart in Fig. 5). The data allowed us new insight into their behaviour and contribute to the understanding of the activities of people affected by specific political, economic, socio-cultural, institutional and geographical factors in Euclidean space and time. Important aspects of the time-space behaviour of all thirty-four interviewees are presented in Tab. 1. The almost three decades covered by our analysis were divided into three periods: the pre-war period in Vojvodina (January 1st 1990 - April 6th 1992), wartime (April 7th 1992 - December 14th 1995) and the post-war period (December 15th 1995 - April 16th 2018), ending at the time when the survey was completed. Among the motivations of Slovaks from Vojvodina for a long-term or permanent residence in Slovakia, we find in most cases either the continuation of higher education (at all university degree levels) or finding jobs with much better financial opportunities (working as managers in big corporations, as IT specialists, teaching at universities, or as specialists in various fields such as software engineering in security services or computer programming). There were also security concerns influenced by their experiences, especially during the toughest military interventions in 1991-1995. After 1995, our respondents also travelled to other countries around the world (Australia, Denmark, Ireland, South Korea, Switzerland, the United Kingdom and the USA), mainly to complete various study stays (Czechia, Ireland and the United Kingdom), longer visits to relatives (Canada, Germany and the USA) or job specialization (Qatar, South Korea and Switzerland). They mostly returned to Slovakia. Respondents who decided to move to Slovakia in the post-conflict period after 2000 in most cases stated family reasons (marriage, family reunion, or starting families and having children after completing university studies, deciding to apply for a mortgage and stay) and a desire to continue higher education under more favourable conditions than in Serbia (university studies, including doctoral education, are free of charge in Slovakia). During their studies many took advantage of the possibility of a short stay abroad (Denmark, the United Kingdom and the USA).
As an example of the visual interpretation of time-space records in 3D graphs we have selected three respondents of Slovak origin, from Stara Pazova (Fig. 2), Bački Petrovac (Fig. 3) and Jánošik (Fig. 4). This group of three respondents, one woman and two men, presents different types of life-paths influenced by different family backgrounds, life situations and other factors. The vertical axis in the graphs represents the temporal dimension (time context), i.e. the duration of an activity in a certain geographical context of one individual (expressed by the other two dimensions). Every interruption or diversion marks a change in the direction of movement. The time span used in the construction of the 3D graphs was from 1990 until 2018. Evaluating the whole set of respondents, we can say that no important changes in place of living were recorded before the war; on the contrary, in the post-war period the movements were very frequent and many of them were directed to Slovakia. The life-paths of the three selected respondents documented by the 3D graphs (Figs. 2-4) are analysed in more detail in the following paragraphs. In the case of the fourth respondent, from Bački Petrovac (39, F), a 28-year segment of her life-path is analysed in more detail through a life chart (Fig. 5).

A woman, 43 years old, from Stara Pazova, was interested in studying in Slovakia (Fig. 2). The war in the former Yugoslavia only hastened the decision. Thanks to her university studies (in Bratislava), she was able to spend time in an au pair programme in Hørsholm, Denmark (September 1998 - July 1999). During her studies, she regularly returned home to her birthplace, where she spent months on vacation, travelling back to Bratislava in the autumn. After graduating from university in June 2000, she got a job in Bratislava, and since then there has been no reason to decide to return to Vojvodina.

Tab. 1 Motivations of Slovak respondents from Vojvodina to move to Slovakia (source: own research). Motivations: 0 - no reason to move, FA - Family, IN - Institution, SE - Security, EC - Economic (financial) reasons, WO - Work, ED - Education, MA - Marriage, LC - Living conditions, OT - Others.
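The construction of these 3D graphs can be made concrete with a minimal sketch in Python (with matplotlib, version 3.2 or later assumed). This is purely illustrative and not the tooling actually used in the study: the record fields mirror the four diary dimensions described in the methods section (when, where, with whom, how), while the place coordinates and decimal-year dates are invented values loosely modelled on the first respondent's account.

from dataclasses import dataclass
import matplotlib.pyplot as plt

@dataclass
class Stay:
    place: str           # where: the geographical context
    x: float             # illustrative map coordinate (longitude)
    y: float             # illustrative map coordinate (latitude)
    start: float         # when: beginning of the stay, as a decimal year
    end: float           # when: end of the stay
    with_whom: str = ""  # social context of the stay
    mode: str = ""       # transport mode used to reach the place

# Hypothetical, simplified life-path; all values are illustrative only.
path = [
    Stay("Stara Pazova", 20.16, 44.98, 1990.0, 1994.7),
    Stay("Bratislava",   17.11, 48.15, 1994.7, 1998.7, mode="bus"),
    Stay("Horsholm",     12.50, 55.88, 1998.7, 1999.5, mode="plane"),
    Stay("Bratislava",   17.11, 48.15, 1999.5, 2018.3),
]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")   # requires matplotlib >= 3.2
xs, ys, ts = [], [], []
for s in path:
    # each stay becomes a vertical segment: time passes, place is fixed
    xs += [s.x, s.x]
    ys += [s.y, s.y]
    ts += [s.start, s.end]
ax.plot(xs, ys, ts)                     # slanted connections = moves
ax.set_xlabel("longitude")
ax.set_ylabel("latitude")
ax.set_zlabel("time (year)")            # vertical axis: temporal dimension
plt.show()

In such a plot each stay appears as a vertical segment (time passes while the place is fixed), and the slanted connections between segments represent the moves, which is exactly how the life-paths in Figs. 2-4 are read.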
The life-path of a man from Bački Petrovac (60, Ma) is visualized in Fig. 3. The reasons for immigrating to Slovakia were primarily of a private, work-related nature; the conflicts during the 1990s had only an indirect effect on his moving away. After immigrating to Slovakia in January 1993, he and his family changed housing several times during that year (Senec until February, then Bratislava until May, then Plavecký Štvrtok until mid-October). After that they moved back to Bratislava, where they have lived ever since. Had there been no wars and no subsequent disintegration of Yugoslavia, he would probably have remained in Vojvodina. At present, he no longer intends to go back.

Fig. 4 shows the life-path of a man from the village of Jánošik (37, Mb). The conflicts in the former Yugoslavia and the accompanying phenomena motivated him and his parents to move to Brezno (September 1991 - June 1994). After the wars of the 1990s, they returned to Jánošik (July 1994 - August 1998). He first attended the Slovak grammar school in Jánošik and then continued his education at secondary schools, spending the first two years at the secondary school in Kovačica (September 1996 - August 1997) and the last two years in Bački Petrovac (September 1997 - June 2000). He then enrolled at the University of Novi Sad but did not succeed in completing his studies there (October 2000 - February 2003). From March till November 2003 he completed his compulsory military service in the Serbian Army. Since he was unable to find a job after returning home, he decided to come to Bratislava, where he continued his university education, completing it by defending his doctoral dissertation in June 2016. He is currently employed in public administration and has a family. The only reason why they would move out of Slovakia could be a "war conflict in Slovakia".

Twenty-eight years of the life-path of a 39-year-old woman from Bački Petrovac are visualised in Fig. 5 (life chart). The conflict environment in the former Yugoslavia had a direct impact on her decision to leave Vojvodina, which happened after the NATO bombing in 1999. She enrolled in the International Baptist Lay Academy in Budapest, Hungary (studying there from October 1999 to February 2000). She did not plan to go immediately to Slovakia, but first to the USA; her decision changed after meeting her husband, and later they decided together to settle in Slovakia. Due to her adventurous nature, she also tried to find a longer-term volunteer job in the United Kingdom, where she spent almost half a year (from February till August 2000) in several cities: Otford, Norwich, Leeds, Glasgow, Edinburgh and London.

Analysing the results of the applied mixed-methods approach using time-space records, semi-structured interviews and simple observations, we can state (similarly to Marušiak and Zlatanović 2020 in their study) that the main motivation (push factor) for the migration of Slovaks from Vojvodina to Slovakia during the 1990s was to avoid the difficulties and hardships of the war in the former Yugoslavia.
Later, the difficult political and economic situation in Serbia, combined with the unfavourable life prospects of our respondents, became an important motivation for migration. The main pull factors for choosing Slovakia included the relatively low administrative barriers, partial or complete knowledge of the language or linguistic proximity, as well as the presence of family or friendship ties in Slovakia. The improved political and economic situation in Slovakia after 1998, and later the accession to the EU, were significant factors that widened the gap between the quality of life in Serbia and in Slovakia. In addition to better material well-being, a better functioning public administration and public services, a more flexible labour market and a more transparent legal environment in Slovakia became motivating factors. The analysis of the responses from the semi-structured interviews also shed light on some details related to the perception of migration to Slovakia. Because all respondents had experienced the period of war conflicts in the former Yugoslavia in some way, they were asked: Did the conflicts in the former Yugoslavia have a direct impact on your decision to move out of Vojvodina? Twelve persons (more than one third) pointed out that the conflict had directly affected their decision to move. People in a fragile and conflict-affected country feel that emigration will provide greater security and enable a higher quality of life. A thirty-nine-year-old female stated: "After the war, the situation in Serbia remained difficult and it was impossible to think about a better future. I planned to leave it permanently since the war ended, since 1999." The response of a forty-five-year-old male was: "At that first moment, it influenced my decision to go to study abroad.", and the answer of a 54-year-old man was: "Yes, because of the situation there. It was chaos ...". Events that occurred on the territory of the former Yugoslavia thus influenced migration to Slovakia; for 22 persons they were the major reason. Frequently mentioned reasons were avoiding potential conflict in the future, getting a better job or education, and helping family members. To the question "Have you already had someone in Slovakia who helped you with your arrival?", the vast majority of the interviewed (almost three fourths) said that they found jobs and accommodation or solved other problems through siblings, family, relatives, friends or, in one case, the local church. Just over one fourth of respondents said that no one had helped them at the time they came to Slovakia. In response to the question "Did you face any problems after arrival in Slovakia?", more than two thirds of the interviewees said that they had had problems. Some of these troubles pertained to daily matters. Among the issues they mentioned, it is worth noting those that also appeared in the studies of Zlatanović and Marušiak (2017) and Marušiak and Zlatanović (2020): adaptation to Slovak society, the new environment and way of life, language pitfalls which caused unpleasant situations in communication, bureaucracy at the Foreign Police, and problems with colleagues. We give some examples: "I had problems with the authorities, who could not understand how I can be Slovak when I was born in Serbia; they called me Serbian and were not willing to help." (female, 39); "Exaggerated bureaucracy at offices and fear of some Slovak citizens of foreigners." (female, 37).
Another question, focused on the relationship of Slovaks from Vojvodina with the inhabitants of Slovakia, was: "How do you assess the behaviour of the Slovak population towards you? (What kind of opinion do they have about you?)" The majority of our respondents had a positive opinion of the behaviour of Slovaks towards Slovaks from Vojvodina; only a small part of them mentioned bad experiences after coming to Slovakia. In the eyes of the foreign Slovaks, the population of Slovakia is friendly, especially once they overcome their first fear or initial suspicion of somebody different. They are willing to help, but many of them know very little about the Slovaks from Vojvodina ("We, the foreign Slovaks, are largely irritated by the fact that Slovaks in Slovakia know very little or nothing at all about expatriate communities." - 40-year-old female). In the semi-structured interviews, our respondents also characterized their perception of relations with the people of Slovakia. The most frequent answers were that they generally perceive their relationship with the domestic Slovak residents as friendly and positive. In several answers, however, respondents shared experiences suggesting that Slovaks can also be narrow-minded, that there are racists among them, that some have problems with foreigners, and that their mentality is different. The last question in our interview was: What reasons would influence your decision to return to Vojvodina? The most serious reason for returning would be if a relative needed help or economic support. Most respondents did not plan to return to Vojvodina. Some of them mentioned that the reasons for a return to their country of birth could be a "spiritual calling", retirement, a good job and financial security in Serbia, or possibly a war in Slovakia or deportation. The analysis of the spatio-temporal records of our respondents between 1990 and 2018 showed a relatively large variability in the patterns of spatio-temporal behaviour. In the first years, trajectories with a wider spatial extent prevailed; later, migration routes were limited to movement within Slovakia and its neighbours and between localities in the new homeland of Slovakia and localities in Vojvodina. To some extent in line with the results of the Marušiak and Zlatanović surveys, respondents in our research stated that leaving for Slovakia was a solution for their individual safety and prosperity. "Slovakia is thus becoming a safe country for the members of this migrant group, given the situation in which they found themselves in their country of origin" (Marušiak and Zlatanović 2020, p. 156).

DISCUSSION AND CONCLUSION
In the future, the diasporas of Slovaks, as well as the diasporas of other Central and Eastern European nations in several developed countries, may generate some return migration. Judging from the magnitude of this phenomenon in recent years, Kupiszewski et al. (2013) assume that these flows will be rather limited in size, because the ethnic factor of international migration within Europe is going to decrease considerably in significance. Our findings identified some of the respondents' attitudes towards emigration. At the time of the decision, they would preferably have migrated anywhere outside Serbia, mostly to Slovakia, but not outside Europe. This could be explained by the overall bad socio-economic situation in Serbia and also by their limited migration experience and residence abroad.
Slovakia is the country to which they feel close: the official language is their mother tongue, some of them studied in Slovakia, and some of them hold Slovak citizenship. According to the respondents, migration to Slovakia also brought some negatives, such as adaptation to Slovak society, the new environment and way of life, language pitfalls which caused unpleasant situations in communication, bureaucracy at the Foreign Police, and problems with colleagues. The results of our survey are in line with the concluding remarks of the research published by Marušiak and Zlatanović (2020, p. 155): "The migration of Vojvodina Slovaks to Slovakia is perceived also as a security issue from several aspects: from the point of view of individual security as well as from the point of view of the security of Serbia as a country of origin and of Slovakia which ceases to be only an 'external homeland' for a great part of the community members, as it is also their country of destination and is becoming their new homeland". The wars in the former Yugoslavia had long-term effects not only on the populations in the conflict zones, but also on populations beyond these territories. A significant part of post-conflict geographical research is focused on the political, social and economic consequences of wars (Uher 2018). In this paper, special attention was given to selected behavioural geographical aspects. The long-term impacts of the wars in the former Yugoslavia are not entirely clear, but our behavioural geographical research shows that there are certain differences between the motivations and life-paths of Serbs from the non-Serbian republics of the former Yugoslavia (see Uher and Ira 2019), the motivations and life-paths of Bosniaks (Uher and Ira 2021), and those of inhabitants of Vojvodina of Slovak origin. Patterns of the post-war spatio-temporal behaviour of Slovaks from Vojvodina are largely influenced by their origin and the emerging type of new relations with the country of their ancestors. It can be stated that part of the migration of Vojvodina's Slovaks has the nature of "reemigration". The time-space records (time-space activity diaries) have proven to be an effective instrument for a sensible analysis of the life-paths of individuals. The detailed life chart provided information not only on challenging but also on stressful life events. These techniques made it possible to recall and discuss difficult situations and stressful events in order to obtain a comprehensive picture of the entire life situation of a small sample of Slovaks from Vojvodina who currently live in Slovakia. The role of time-geographic techniques in understanding time-space processes and tracking an individual's existence in time-space emerges again in various (and in some cases new) contexts. Individuals are entering new situations and may experience social and perceptual unease and stress, which can be felt differently in specific population groups (ethnic minorities, threatened persons, the economically vulnerable population, and so on). Therefore, several time-geographic concepts seem to gain new qualities, significance and meaning (Klapka et al. 2020).
Evaluation of BioCreAtIvE assessment of task 2

Background
Molecular Biology has accumulated substantial amounts of data concerning the functions of genes and proteins. Information relating to functional descriptions is generally extracted manually from textual data and stored in biological databases to build up annotations for large collections of gene products. Those annotation databases are crucial for the interpretation of large-scale analysis approaches using bioinformatics or experimental techniques. Due to the growing accumulation of functional descriptions in the biomedical literature, the need for text mining tools to facilitate the extraction of such annotations is urgent. In order to make text mining tools usable in real-world scenarios, for instance to assist database curators during the annotation of protein function, comparisons and evaluations of different approaches on full text articles are needed.

Results
The Critical Assessment for Information Extraction in Biology (BioCreAtIvE) contest consists of a community-wide competition aiming to evaluate different strategies for text mining tools, as applied to the biomedical literature. We report on task 2, which addressed the automatic extraction and assignment of Gene Ontology (GO) annotations of human proteins, using full text articles. The predictions of task 2 are based on triplets of protein – GO term – article passage. The annotation-relevant text passages were returned by the participants and evaluated by expert curators of the GO annotation (GOA) team at the European Bioinformatics Institute (EBI). Each participant could submit up to three results for each sub-task comprising task 2. In total more than 15,000 individual results were provided by the participants. In addition to the annotation itself, the curators evaluated whether the protein and the GO term were correctly predicted and traceable through the submitted text fragment.

Conclusion
The concepts provided by GO are currently the most widely used set of terms for annotating gene products, and they were therefore explored to assess how effectively text mining tools are able to extract those annotations automatically. Although the obtained results are promising, they are still far from reaching the performance demanded by real-world applications. Among the principal difficulties encountered in addressing the proposed task were the complex nature of the GO terms and protein names (the large range of variants used to express proteins, and especially GO terms, in free text) and the lack of a standard training set. A range of very different strategies was used to tackle this task. The dataset generated in the course of the BioCreAtIvE challenge is publicly available and opens new possibilities for training information extraction methods in the domain of molecular biology.

Background
The recent advances in Molecular Biology are responsible for the accumulation of various and complex data types. They include biological sequences derived from genome projects and structural data of biomolecules from the structural genomics initiatives. One of the most important items is the characterization of protein function obtained through biochemical and genetic experiments. To handle the increasing amount of complex data, computational methods are being developed in the areas of bioinformatics and computational biology.
A number of comparative assessments of the different computational approaches, addressing not only the independent evaluation of resources but also the accessibility of the tools for real-world applications, have been carried out. The Critical Assessment of Protein Structure Prediction (CASP) contest constitutes one of the first community-wide experiments to benchmark the state of the art of protein structure prediction (refer to Proteins. 2003;53 Suppl 6:524-33). CASP has been running for a decade and has served as a model for later initiatives. Among those initiatives are the Critical Assessment of Microarray Data Analysis (CAMDA) contest, which analyzes the performance of microarray bioinformatics tools [1], and the Critical Assessment of PRediction of Interactions (CAPRI) contest for the assessment of protein interaction prediction techniques [2]. An evaluation contest was also carried out for genome bioinformatics, the Genome Annotation Assessment Project (GASP) [3]. Other assessments of computational tools applied to the biomedical domain include the Genetic Analysis Workshop (GAW) for statistical genetics techniques [4] and the Predictive Toxicology Challenge (PTC) for computational toxicology approaches [5]. The biomedical literature constitutes one of the most valuable data sources for functional descriptions of biomolecules, and as such it is constantly subject to manual extraction of relevant information by biological database curators as well as by individual researchers. Given the volume of publications and functional descriptions, a number of computational analysis techniques have been developed in recent years to extract information from biological text sources. Community-wide evaluation strategies are not exclusive to the bioinformatics domain; they are also commonly used to estimate the performance of information extraction and retrieval tools, e.g. in the Message Understanding Conferences (MUCs) [6]. In the domain of biomedical literature, the knowledge discovery and data mining (KDD) challenge cup [7] evaluated how text mining tools could aid the process of database curation, in this case of the FlyBase database [8]. The first Genomics track [9] of the Text REtrieval Conference (TREC) focused on the evaluation of current strategies for ad hoc retrieval and information extraction from biomedical texts. The Critical Assessment for Information Extraction in Biology (BioCreAtIvE) contest was organized to evaluate current text mining techniques applied to the biological research literature in biologically realistic scenarios, comprising two tasks focused on the use of information by biologists and database curators. The first task addressed the extraction of gene names and gene normalization [10,11], while the second task, which is discussed in detail in this article, was the extraction of protein annotations from full text scientific articles. The assessment was discussed in the context of a workshop held in March 2004 (refer to http://www.pdg.cnb.uam.es/BioLINK/workshop_BioCreative_04/handout/).

Task 2 description
Gene Ontology (GO) provides a consistent set of controlled vocabularies (concepts) which are useful for annotating gene products, such as proteins [12]. The terms organized in GO are nowadays the most important biological annotation resource and display a range of advantages over previous annotation efforts based on functional keywords.
There are three main categories used to describe relevant aspects of gene products, namely cellular component, biological process and molecular function. These relevant biological aspects of gene products are extensively used to annotate proteins within biological databases (e.g. GOA) [13]. Therefore GO terms were chosen for task 2 of the BioCreAtIvE contest, which addressed the assignment of functional annotations (GO terms) to human gene products using text mining and information extraction techniques. The training and test sets for annotations using GO terms were provided by human experts (GOA curators) who are involved in the manual assignment of GO terms to gene products [14]. The analyzed annotations were extracted from full text articles, because the annotation-relevant text passages, and especially the experimental evidence supporting those annotations, are often not provided in the abstracts accessible in PubMed. Task 2 was divided into sub-tasks, each focusing on certain aspects associated with the annotation process. A total of nine teams participated in task 2; each group could submit up to three runs for each sub-task. More than 15,000 individual results were submitted for evaluation by a team of three curators, who dedicated several months to the evaluation of the results [14].

Task 2.1 Identification of annotation-relevant text passages
The aim of sub-task 2.1 was to evaluate different approaches for the extraction of text passages which contain statements relating functional annotations (GO terms) to the corresponding gene products. The participating systems were provided with a test set consisting of triplets of protein identifiers (Swiss-Prot accession numbers), GO identifiers and the articles' filenames. They then returned text fragments containing the information relevant to the annotation of the corresponding GO term to the associated gene product. The assessment did not specify any explicit length for the evidence text.

Task 2.2 Assignment of GO terms to gene products
The purpose of sub-task 2.2 resembled the typical human annotation procedure, in the sense that the participants had to return the annotations derivable from a given protein-article pair. The annotations contained within the articles should thus be automatically identified and the corresponding GO term returned together with the supporting text passage. In order to make this task easier, the number of protein-GO term associations for each GO category contained in each article was provided for the test set (see the data set section).

Task 2.3 Selection of relevant papers
Within this sub-task, given a collection of articles, the participants were asked to return the papers relevant for the annotation of certain proteins, together with the derived GO annotations and the evidence text fragments. The evaluation of sub-task 2.3, an ad hoc retrieval task, was not carried out in the current BioCreAtIvE evaluation. A similar task was posed at the TREC Genomics track 2004 [15].

Data set and evaluation strategy
The Gene Ontology Annotation (GOA) database (http://www.ebi.ac.uk/GOA) provides a large collection of manually extracted associations of proteins to GO terms.
Curators responsible for those annotations have a high degree of expertise in carefully annotating proteins with their corresponding functional and biological information. Therefore the GOA curators at the European Bioinformatics Institute (EBI) were asked to evaluate the results of the automatic annotation extraction tools that took part in BioCreAtIvE task 2 [14]. The GOA database contains manually extracted associations of proteins to GO terms, providing the identifier of the article which constitutes the information source for the annotation itself, as well as the type of evidence supporting the annotation [13]. For instance, the following example corresponds to a single GOA entry: P41220 RGS2_HUMAN GO:0005096 PMID:10747990 TAS F Regulator of G-protein signaling 2 IPI00013177. Here the protein with the accession number P41220 has been annotated as a 'Regulator of G-protein signaling 2' (GOID 0005096) using information derived from the article with the PubMed ID '10747990'. For the assessment itself, three distinct expert annotators were responsible for the evaluation of the submitted predictions. This allowed an estimate of inter-annotator agreement and objective evaluation metrics [14].

Data preparation: the training data
As already mentioned, the training data basically encompassed GOA annotations and GO terms, as well as full text articles. Although GOA provides the associations and the corresponding article identifiers, it does not contain a protein dictionary, and the annotated protein often appears in the textual data as a synonym or typographical variant which is not covered by the Swiss-Prot database. As we did not provide a fixed name dictionary for the contest, participants could use external publicly available sources suitable for cross-linking a given protein to additional information, such as synonyms or protein descriptions contained in databases like LocusLink [16] or HUGO [17]. Some participants integrated such additional information sources into their systems. The articles linked through GOA to the annotations are often only accessible as abstracts, as most of the journals do not provide free access to the full text articles. In practice the curators use full text articles for their annotation procedure, especially to support annotations based on experimental evidence. Taking only the abstract is often not enough to recover annotation-relevant text passages. GO annotations are associated with evidence codes, which are assigned to describe the type of evidence used to create the annotations (http://www.geneontology.org/GO.evidence.html). We did not make use of the following evidence codes, because these annotations cannot be retrieved from the literature: IC (based on curator judgment), ND (no data) and IEA (inferred from electronic annotation). The terms which build up GO are categorized into three non-overlapping branches: Cellular Component, Molecular Function and Biological Process. A protein may be annotated with one or more terms from each category, related to information that appears in many different articles. As the curators follow a protein-centered approach, those articles might contain additional functional annotations for other proteins which are not used in GOA. The GOA release of May 2003 was used for this experiment; it contained approximately 84,604 annotations. A total of 9,725 PMIDs were used to derive annotations.
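For illustration, the association line shown above can be read programmatically. The following minimal sketch (Python) assumes a tab-delimited layout with the field order of the in-text example; the delimiter and the key names are our assumptions for illustration and do not reproduce the official GOA file specification.

# Field order taken from the in-text GOA example; names are invented.
GOA_FIELDS = ["accession", "entry_name", "go_id", "reference",
              "evidence_code", "go_aspect", "description", "ipi_id"]

def parse_goa_line(line: str) -> dict:
    # assumes one tab-delimited association per line
    return dict(zip(GOA_FIELDS, line.rstrip("\n").split("\t")))

entry = parse_goa_line(
    "P41220\tRGS2_HUMAN\tGO:0005096\tPMID:10747990\tTAS\tF\t"
    "Regulator of G-protein signaling 2\tIPI00013177"
)
assert entry["go_id"] == "GO:0005096"
assert entry["reference"] == "PMID:10747990"  # source article of the annotation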
The corresponding articles were further processed to select those published in the Journal of Biological Chemistry (JBC) (a total of 1,683). As we had access only to a certain release of the JBC articles, those corresponding to an available full text article were selected. A set of 640 JBC articles remained which had linked GO annotations provided by GOA. A number of full text articles from the journals Nature Medicine, Nature Genetics and Oncogene were also filtered in a similar way to obtain only those articles used for GO annotations as provided in GOA (163 articles). The final training set thus contained a total of 803 full text articles from four different journals, provided in Standard Generalized Markup Language (SGML) format. The provided training set constituted a data source offering only an indirect linkage between articles and protein-GO annotations, and not the exact passages of text in which the GO-protein relation can be found. Although this adds difficulty to the training of computational systems based on learning techniques, we think that it reflects the real-world scenario encountered by database curators.

The test data
The BioCreAtIvE test set contained full text articles, just as the database annotators use for their work. A total of 212 full text articles freely distributed by the Journal of Biological Chemistry (JBC) in SGML format were provided to the participants, 113 for task 2.1 and 99 for task 2.2. The articles were dated between the years 1998 and 2002. The GOA curators provided a total of 1,076 gene product - article - GO term associations to the participants for task 2.1, such as O75612 JBC_1998-2/bc028208.gml 0005515, where O75612 corresponds to the protein accession number from Swiss-Prot, JBC_1998-2/bc028208.gml is the name of the file containing the article, and 0005515 is the GOID of the term which has been manually annotated to the protein. In the case of task 2.2, the test data contained, for each protein and article pair, the number of annotations per GO category encountered by the curators in the article. For instance, the protein with the Swiss-Prot accession number P16471 had 7 biological process terms, 1 cellular component term and 2 molecular function terms associated through the article JBC_1999-2/bc035461.gml. For task 2.3, the teams were asked to provide, for ten proteins, the articles relevant for annotation, together with the GO terms and the annotation text passages. A numerical summary of the training and test sets used in task 2 is given in table 1. When considering the overlap between the proteins used in the training set and the proteins appearing in the test set, 11 of them occur in both the training and the test set of task 2.1, and 8 in the case of task 2.2. A total of 185 GO terms of the task 2.1 test set are also contained in the training set, while 165 GO terms of task 2.2 are also present in the training data. This means that only a fraction of the GO terms were present in both the training and the test set.

Evaluation strategy
The evaluation was carried out by three GOA database curators (see the accompanying article [14]). The Extensible Markup Language (XML)-like submissions contained marked text fragments to allow the evaluators to decide whether the predictions were correct or not. In the case of sub-task 2.2, the prediction of the GO code itself was also assessed, together with the text passage supporting the annotation.
The text passages submitted as evidence by the various teams were highlighted by a tool to facilitate the evaluation. A substantial number of predictions, associated with a randomly selected set of proteins, were revised, providing sufficient grounds for the statistical analysis of the results. After revising the predictions, the GOA evaluators decided on their quality by following the protein accession number and the GOID. Three levels of accuracy were used by the evaluators, covering the presence of the GO term and/or the corresponding protein and the verification of their relation within the submitted text passage. The curators also provided additional comments regarding the quality of the predictions. The independent predictions for both GO terms and proteins were scored as 'high' in cases where the protein or the GO term was extracted correctly. The submissions tagged as 'generally' corresponded to predictions which were correct in principle, but too general to be of practical use. In the case of the protein predictions, this means that the specific protein was not identified, but a homologue from another organism or a general reference to the corresponding protein family was encountered. In the case of GO term predictions scored as 'generally', a high-level parent term of the actual GO term might be referenced. Results tagged as 'low' are basically wrong predictions. A double identification of 'high' for both protein and GO term in a given text passage implies the correct (high) identification of the association between them. Concerning task 2.3, the limited number of participants and the technical difficulty of the evaluation did not allow us to assess the results in time for the assessment workshop.

Results
The dataset produced in task 2 of the BioCreAtIvE contest is freely available from http://www.pdg.cnb.uam.es/BioLINK/BioCreative.eval.html [18] and is given in an XML-like format. From the nine registered users who participated in task 2.1, a total of 15,992 evidence passages were provided to the curators. Of those, 12,014 corresponded to the requested queries (the rest corresponded to new predictions which were not contained in the test set). On average, 11.34 (standard deviation 2.30) submissions of annotation predictions were sent for each single query triplet across all the user submissions (21 runs). Users submitted between a single run and the maximum of three runs allowed; 21 runs were submitted in total for task 2.1. The evaluation was especially work-intensive for the GOA annotators (evaluators), as in many cases the textual passages returned were entire paragraphs. It is possible to distinguish two approaches followed by the participants. The majority of users tried to submit a result for each case contained in the test set; these approaches focused on obtaining a high recall rather than a high precision. On the other hand, there were users who submitted results only for a small number of high-confidence predictions in order to achieve a high precision. Although high precision is desirable for the practical use of text mining applications, a reasonable recall is essential; consequently, a compromise between the two should be favored. Among the diverse approaches adopted, three main strategies can be characterized.
1) Methods centered on the GO terms themselves, which in general used pattern matching and matching of the words making up the GO terms; these words were associated with a certain weight or frequency and with part-of-speech information. These approaches tried to submit results for each query and thus aimed at a high number of correct predictions. For instance, Couto et al. [19] based their information extraction method on the calculation of the information content of each GO term. Ehrler et al. [20] applied manually crafted regular expressions and heuristic rules in their methods. A more computational-linguistic approach was followed by Verspoor et al. [21], incorporating statistical term frequency and part-of-speech information. Finally, Krallinger et al. [22] constructed a heuristic weighting scheme for words or terms associated with the original query GO term, matched against sentence windows.

2) Other strategies were characterized by the use of machine learning techniques. Due to the lack of a high-quality training set, these strategies were less effective than others. Some of these methods use words co-occurring with GO terms to derive their training set. Rice et al. [23] applied term-based support vector machines to return the paragraphs which might contain the annotation-relevant passages, while Ray et al. [24] applied Naïve Bayes models and n-gram models to rank the paragraphs according to their annotation associations.

3) Finally, the third tendency was characterized by the aim of reaching a high precision through pattern matching and template extraction. Chiang et al. [25] implemented a hybrid approach which focused on high precision, based on phrasal pattern matching and a sentence classification system using Naïve Bayes methods, as well as term indexing techniques. Although the obtained recall was low, it achieved a high precision.

Table 2 lists the different features and resources used by the participants. Not only does the basic processing unit differ between the various approaches (e.g. sentence level vs. paragraph level), but the methods themselves are also diverse. Despite this variety within the procedures used, some commonalities among them can be identified. For instance, the majority of users worked at the sentence level and processed the full article. Almost half of the participants integrated a machine learning method into their approach. A significant number of the participants took advantage of pattern matching and regular expressions. Regarding external resources, the HUGO database and the UMLS/MeSH dictionary were used by two participants. An overall summary of the distinct participating groups is provided by table 3.

Table 3. Overall summary of the participating groups (approach; processing at sentence level / use of the full text):
Ehrler et al. [20]: sequentially applied finite state automata (yes/yes)
Couto et al. [19]: information content of terms (yes/yes)
Krymolowski et al. [26]: heuristic rules, query expansion and question answering system (yes/no)
Verspoor et al. [21]: word proximity networks approach (yes/yes)
Krallinger et al. [22]: heuristic weights and sentence sliding window (yes/no)
Rice et al. [23]: term-based SVM approach (yes/yes)
Ray et al. [24]: statistical learning/Naïve Bayes method (yes/yes)
Chiang et al. [25]: hybrid method, pattern matching and sentence classification (yes/yes)
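To illustrate the flavour of the first, term-centered strategy, the following minimal sketch (Python) scores candidate sentences by the fraction of informative GO-term word tokens they contain. It is entirely illustrative and not any participant's actual system; the stop-word list and the uniform token weights are invented stand-ins for the corpus-derived weights, such as term information content [19] or heuristic sub-term weights [22], used by the real systems.

import re

# Invented stop-word list; real systems derived token weights from corpora.
STOP_WORDS = {"of", "the", "a", "an", "in", "to", "by"}

def tokens(text: str) -> set:
    # lowercase word tokens, punctuation stripped, stop words removed
    return set(re.findall(r"[a-z]+", text.lower())) - STOP_WORDS

def score_sentence(go_term: str, sentence: str) -> float:
    """Fraction of the informative GO-term tokens found in the sentence."""
    term = tokens(go_term)
    return len(term & tokens(sentence)) / len(term) if term else 0.0

sentences = [
    "The tagged protein localised to the inner mitochondrial membrane.",
    "Samples were analysed by western blot after transfection.",
]
best = max(sentences,
           key=lambda s: score_sentence("mitochondrial inner membrane", s))
# best -> the first sentence: all three informative term tokens are present

Even this toy scorer suggests why long or polysemic GO terms are harder to match: the more tokens a term has, and the more of them are common words, the less likely a single sentence is to contain enough informative matches.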
Task 2.1
The aim of sub-task 2.1 was to assess tools able to extract text fragments supporting the annotation of a given protein-GO term association. Table 4 shows the overall results obtained for each run by the different groups, and figure 1 shows the results in terms of TP (i.e. correct predictions) vs. precision. The group which obtained the highest precision was Chiang et al. [25], with a precision of 0.80, although the number of correct predictions was only 36 annotations. All the runs submitted by this group are characterized by high precision and low recall (ranging from a total of 45 to 251 submissions and precisions from 0.46 to 0.80). On average, this group also has the highest percentage of overlap with the correct predictions submitted by other groups.

When considering the total number of correct predictions (TP), Krallinger et al. [22] (303 annotations) and Couto et al. [19] (301 annotations) obtained the highest numbers of correctly extracted GO-protein associations. Both groups obtained a very similar number of correct annotations, and there is also a higher overlap between the correct predictions of these two groups when compared to others. The precision of these methods was rather low (0.29), as they submitted results for all queries. Both methods associate the GO terms and the word tokens forming the GO terms with a weight, a heuristic sub-tag weight in the case of [22] and the information content in the case of [19]. Other groups which extracted a large number of correct annotations were Verspoor et al. [21] and Ehrler et al. [20], with 272 and 268 correct predictions respectively and a precision of 0.26.

Gene ontology terms
Not only the evidence passages but also the identification of the GO terms was assessed. The scoring scheme was, as already mentioned, divided into three levels of extraction accuracy (high, generally and low). True positive (TP) predictions were considered to be those which were evaluated as high for both the GO term and the corresponding protein. For GO terms with lengths between 1 and 4 words, shorter terms tend to be easier to predict than longer ones, with the difficulty increasing with the length of the term (see figure 2). This is similar to the case of gene names in task 1, where shorter gene names (e.g. yeast genes) were better extracted than longer gene names (e.g. mouse genes). Nonetheless, terms with a length of 5 words show an increased percentage of correct predictions, which could in part be explained by the presence of some information-rich words in those GO terms. There is again a tendency for shorter terms to be easier to predict than longer ones in the range of GO term lengths between 5 and 8 words. The high percentages of correct predictions for GO terms with lengths of between 9 and 10 words are basically outliers. It is important to take into account that some words forming a GO term are stop words or unspecific words, and others are polysemic (they may have several meanings and might thus be used in a different context, not associated with the sense provided in the GO term). GO terms which contain polysemic words, or words which are often used in a different context (e.g. as part of an experimental method), are more difficult to extract. The predictions were also analysed in detail according to the distinct GO categories. Figure 3 shows the different evaluation types for the annotation predictions related to the three GO categories.
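As a small worked example of how the three-level judgements translate into the counts reported above, the sketch below (Python; the record layout is invented for illustration) counts a submission as a true positive only when both the protein and the GO term were rated 'high' for the returned passage, and derives precision as the fraction of submissions judged correct.

# Toy evaluated submissions; field names and values are illustrative only.
evaluations = [
    {"protein": "high", "go_term": "high"},       # correct association
    {"protein": "high", "go_term": "generally"},  # GO term correct but too general
    {"protein": "low",  "go_term": "high"},       # wrong protein
    {"protein": "high", "go_term": "high"},       # correct association
]

# TP requires a 'high' rating for both the protein and the GO term.
tp = sum(e["protein"] == "high" and e["go_term"] == "high" for e in evaluations)
precision = tp / len(evaluations)   # fraction of submissions judged correct
print(tp, precision)                # 2 0.5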
Protein names
To extract correct annotations it is also important to identify the protein names and symbols in the articles. This was the main concern of task 1 of the BioCreAtIvE contest. In task 2, the participants were provided with the Swiss-Prot accession numbers of human proteins rather than the protein names themselves, and as proteins usually appear in free text as symbols or names, they had to use links to databases such as UniProt or HUGO to obtain lists of protein names, symbols and descriptions. The tools used in task 1 generally performed significantly better than the protein identification strategies used in task 2, as most of the task 2 participants focused on the identification of the GO term. The overall performance of protein identification was better than that of GO term extraction, not only for sub-task 2.1 but especially for sub-task 2.2, meaning that it is easier to find text passages which refer to a given query protein than to a given GO term. The identification of protein names is actually a variant of the named entity task, which is known to perform well, at around 80 percent for proteins and genes in the case of task 1A. A detailed analysis of the evaluation of the protein extraction is given in the BioCreAtIvE workshop handouts [18].

Task 2.2
Sub-task 2.2 was concerned with automatically assigning GO terms to protein-article pairs, returning the text passages which support those assignments. Thus, it consists basically of a text categorization and passage retrieval task. A total of 5,258 predictions were submitted by the participants, corresponding to 3,882 unique protein-GO term-article triplets. A total of 4,976 were completely evaluated, i.e. with an evaluation of both the protein and the GO term (figure 4 and table 5). There are also predictions which are correct in principle, but where the assigned GO term is too general to be useful for practical purposes (evaluated as 'generally').

BioCreAtIvE corpus
The evaluation of the task 2 predictions was carried out by GOA database curators and was based on the returned evidence text. In the case of sub-task 2.2, the prediction of the GO code itself was also assessed, together with the annotation text passage. The XML-like submissions thus contained text fragments critical for the evaluators to decide whether the predictions were correct. Those text passages were highlighted by a tool used by the evaluators to visualize the submitted passages within the whole article. This visualization and text highlighting program was implemented for the evaluation team and facilitated the assessment of the submitted text passages within their context in the whole article. It therefore helped to speed up the evaluation and provided a standard interface to assist the scoring of the submitted predictions. This was done with future practical applications of the resulting prediction data in mind. The data set produced during the BioCreAtIvE contest, i.e. the evaluated predictions, has been released and is freely accessible through the web [18]. It is provided in an XML-like format and contains tags which label the evaluation type for each prediction. To obtain the dataset, an agreement must be signed which contains the contact information and assures that the dataset will be used for research purposes only. The length of the evidence passages is highly variable, as some of the predictions consist of entire paragraphs, while others consist of only a single sentence.

Discussion
The use of GO terms for a text mining task was challenging because the terms which build up GO are controlled concepts which might be expressed in natural language text in a number of different ways. Moreover, there are over 15,000 concepts in GO.
GO is actively maintained and continually expanded. It constitutes a widely used set of terms for protein annotation, fulfilling the demands to support annotation in multiple biology databases, such as Swiss-Prot and UniProt. Only the use of biologically inspired tasks for text mining tools will provide methods which are of practical relevance for biologists and bioinformaticians. The integration of bioinformatics applications with text mining tools might create new knowledge sources in the future. Community-wide evaluations of biomedical text mining strategies can assist the process of improving currently available text mining and information extraction tools and speed up the integration of the heterogeneous data types produced in the life sciences. A broad range of techniques was applied to extract the relation of GO terms to proteins in text (task two). Among the main difficulties encountered in task two was the lack of a high quality training set consisting of the annotation-relevant text passages, rather than full text articles associated with certain protein-GO annotations. The overlap between GO terms in the training and the test set was also rather low, which especially affected approaches relying on machine learning techniques. The over-annotation of the test set (on average, more GO terms were extracted by the GOA evaluation team from test set articles than was the case for the training set articles) reflected the article-centric approach of the test set versus the protein-centered approach of the training set. Therefore, GO terms which might have been discarded in the case of the GOA (training set) annotations were included in the case of this challenge. The vast amount of existing GO terms (a large number of classes), the lack of a substantial number of available synonyms for those GO terms, and the use of full text articles rather than abstracts posed additional difficulties for the participants. Although the number of GO terms which comprise the test sets of task two is small when compared to all GO terms, it is still useful in providing an insight into particular aspects of the three categories which build up GO. For instance, when looking at the length of correctly predicted GO terms of sub-task 2.1, there was an inverse relation between the average length of the GO terms of each category and the percentage of correct predictions. This means that the terms belonging to the Cellular Component category are on average shorter (average length of 2.03 words) and contain more informative words, and were therefore easier to detect (the percentage of correct predictions was 34.61%) when compared, for instance, to the Biological Process terms (with an average length of 3.56 words and a percentage of correct predictions of 23.02%). The order of difficulty in predicting the terms belonging to each GO category is identical for sub-task 2.2. Although in general shorter terms seem to be easier to predict, this is not always the case when retrieving terms which are formed by a single word. We propose that some of those single-word terms are too general to be of practical use when predicted (task 2.2). There are also cases where those words are retrieved but are used with a different meaning (polysemy) which does not correspond to the meaning provided for the GO term. Moreover, they often appear as part of expressions in a different semantic context (task 2.1 and task 2.2).
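The term-length analysis described above can be reproduced with a few lines of code. The sketch below assumes a hypothetical list of (GO term, evaluated-as-correct) pairs rather than the actual evaluation data.

```python
# Minimal sketch: bucket evaluated predictions by GO term length (in words)
# and compute the percentage evaluated as correct. `sample` is hypothetical.
from collections import defaultdict

def percent_correct_by_length(predictions):
    totals, correct = defaultdict(int), defaultdict(int)
    for term, ok in predictions:
        n = len(term.split())              # term length in words
        totals[n] += 1
        correct[n] += int(ok)
    return {n: 100.0 * correct[n] / totals[n] for n in sorted(totals)}

sample = [("apoptosis", True), ("cell cycle", True),
          ("regulation of transcription, DNA-dependent", False)]
print(percent_correct_by_length(sample))
```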
There were groups taking part in task 2 who gave priority to recall (predictions for every query case), while others focused on precision (only predicting a relatively small number of high confidence cases). Although a trade-off between both would be desirable, the potential end users have to decide, depending on the needs in each case, whether they are interested in recall, precision or F-score (i.e. balanced precision and recall). For instance, proteins which are highly quoted in the literature might be a case for high precision demands, while sparsely quoted proteins might be a target for high recall methods. In general, the overlap between the predictions made by the different groups is relatively small (except in the case of Chiang et al.), especially in the case of sub-task 2.2. This agrees with the diverse methodological approaches implemented by the participants. In task 2.1 (retrieving the terms), most of the correct predictions were made only between 1 and 3 times, and in task 2.2 (predicting the terms) the vast majority of the correct predictions were made only once. This implies that the features and methods exploited by a certain participant are useful only for certain scenarios, while in other situations other properties adopted by different strategies might be advantageous. An approach which is able to efficiently integrate the characteristics used by the different methods into a single tool could increase the performance significantly. The dataset produced within task 2 serves as a 'weakly labelled' training set for future applications, meaning that although the text passages and their corresponding evaluations are provided, the exact words relating to the protein entity, the GO terms and the relationship are not explicitly highlighted.

Conclusion

The BioCreAtIvE challenge for the evaluation of text mining tools applied to biomedical literature was organized in two main tasks: the first related to the detection of protein and gene names, and the second concerned with the extraction of protein annotations based on GO terms. The assessment of the submitted predictions for task 2 pointed out that there is still need for significant improvement to make the existing tools valuable for practical purposes, especially in sub-task 2.2. Thus, to monitor future improvements in this field, a similar set-up in the context of future evaluations will be necessary. The data set derived from this challenge, which is freely available, might serve as valuable training data for new text mining tools. The progress based upon the availability of such training data should be monitored through future contests, which in turn could provide new data resources. The evaluation of large collections of predictions in this field is very expensive and time consuming, and relies on the expertise of professional database curators such as the GOA team. There are also lessons learned from this edition of BioCreAtIvE which might improve future assessments: for instance, a limitation to one or two runs per participant instead of three would facilitate the task of the curators who evaluated the predictions, as this process is especially work intensive. A limitation on the length of the evidence passage could also reduce the workload of the curators assessing the evidence passages. Also, two variants of submission types could be adopted in future tasks, in analogy to task 1.
For instance, a closed submission type would allow only the use of previously specified external resources, while an open submission type might also integrate other additional information resources or databases. In this way, a comparison between the distinct methods would be easier. A future extension of GO itself, in terms of an enriched lexicon of synonyms for the GO terms, would perhaps make it more suitable for NLP strategies. The use of such resources might increase the importance of text mining applications in the near future.
Pretreatment with Relaxin Does Not Restore NO-Mediated Modulation of Calcium Signal in Coronary Endothelial Cells Isolated from Spontaneously Hypertensive Rats

We demonstrated that in coronary endothelial cells (RCEs) from normotensive Wistar Kyoto rats (WKY), the hormone relaxin (RLX) increases NO production and reduces calcium transients by a NO-related mechanism. Since an impairment of the NO pathway has been described in spontaneously hypertensive rats (SHR), the present study was aimed at exploring RLX effects on RCEs from SHR, hypothesizing that RLX could restore calcium responsiveness to NO. RCEs were isolated from WKY and SHR. Calcium transients were evaluated by image analysis after the administration of angiotensin II or α-thrombin. Angiotensin II (1 µM) caused a prompt rise of [Ca2+]i in WKY and SHR RCEs and a rapid decrease, with the decay time being higher in SHR than in WKY. NOS inhibition increased the calcium transient in WKY, but not in SHR RCEs. Whereas RLX pretreatment (24 h, 60 ng/mL) was ineffective in SHR, it strongly reduced the calcium transient in WKY in a NO-dependent way. A similar behavior was measured using 30 U/mL α-thrombin. The current study offers evidence that RLX cannot restore NO responsiveness in SHR, suggesting an accurate selection of patients eligible for RLX treatment of cardiovascular diseases.

Introduction

In the vascular system, the nitric oxide (NO) pathway mediates vasorelaxation and platelet anti-aggregation and protects from ischemic disorders [1]. NO, physiologically produced by different nitric oxide synthase isoforms (eNOS, nNOS and iNOS), can activate soluble guanylyl cyclase (sGC) to produce cyclic guanosine monophosphate (cGMP), which in turn activates the cGMP-dependent protein kinase (cGK-I), modulating ion channels, phosphodiesterases and calcium pumps [2]. In this context, convincing evidence exists in the literature that the hormone relaxin (RLX) can promote coronary and systemic vasodilatation by increasing NO bioavailability and NOS enzyme expression [3], thereby reducing hypertension and protecting the heart against ischemia/reperfusion-induced injury [4,5]. Thus, administration of recombinant human H2 RLX, or serelaxin, has been proposed as a potential therapeutic strategy for hypertension and heart ischemia [6]. The spontaneously hypertensive rat (SHR) is an animal model used for the study of hypertension, hypertensive heart disease, cardiac remodeling and hypertrophy. In this model, alterations in Ca2+ handling have been described at very early stages of the disease, even before the appearance of cardiac remodeling [7]. Many different factors are involved in this spontaneous, age-dependent pathological condition, including an impairment of the NO pathway. In particular: (i) the mRNA expression of cGMP-dependent protein kinase I (cGKI) was found to be reduced in aortic rings of 6-week-old SHR [8]; (ii) decreased cGK activity was detected in ventricular and atrial tissue of aged SHR [9] and other forms of hypertensive animals [10]; and (iii) according to our research data, cGKI expression is reduced in cardiomyocytes and coronary endothelial cells (RCEs) of 12-week-old SHR [11,12]. Of note, cGKI is a major regulator of intracellular calcium homeostasis, and its over-expression was found to restore NO-mediated calcium regulation in RCEs and aortic smooth muscle cells isolated from SHR [12,13].
Along this line of thought, the administration of RLX to female non-pregnant SHR was reported to cause a sustained decrease in blood pressure [14] and to substantially blunt the vascular response to vasoconstrictors in the mesenteric vasculature but not in the portal vein [15]. Besides these acute functional effects, RLX was also capable of reverting adverse arterial remodeling and decreased compliance in elderly SHR [16]. The cellular mechanisms underlying these vascular effects of RLX are not fully understood but represent a topic worthy of investigation because of their obvious medical interest. In this context, previous studies performed by our team on RCEs from Wistar Kyoto (WKY) rats, the normotensive counterpart of SHR, have demonstrated that RLX increases NO production by up-regulating NOS expression and decreases the vasoconstrictor-induced intracellular calcium concentration ([Ca2+]i) rise by a NO-related mechanism [17]. Therefore, it was reasonable to assume that a similar mechanism may also be operating in SHR. In the present study we aimed at exploring the effects of RLX on RCEs isolated from SHR, based on the working hypothesis that RLX may restore [Ca2+]i responsiveness to NO. According to our previous studies on WKY and SHR, we used angiotensin II (AT-II) and α-thrombin (THR) to induce a [Ca2+]i increase in RCEs, since these cells have been shown to express AT-II and THR receptors and to respond to exogenous AT-II and THR by modulation of a NO-dependent [Ca2+]i increase [12,18,19]. Preliminary data were presented at the Fourth International Conference on "Relaxin and related peptides" [20].

Intracellular Ca2+: Control Conditions

At baseline, [Ca2+]i, evaluated by Fura 2 fluorescence, was 112.5 ± 2.76 nM in RCEs isolated from WKY and slightly higher in those from SHR (144.1 ± 7.43 nM). Stimulation of RCEs with 1 µM AT-II caused a prompt rise of [Ca2+]i in both the WKY and SHR strains (Figure 1). In the SHR cells, the maximum [Ca2+]i increase was slightly, albeit not significantly, higher than in those from WKY (Figure 2A). Calcium signals decreased rapidly in WKY RCEs, with a decay time of 19.2 ± 0.61 s, whereas in SHR cells the decay time was significantly higher (Figure 2B). A 10 min incubation with the NO-donor SNAP significantly decreased the maximum [Ca2+]i increase and the decay time in the WKY cells, whereas it was ineffective in the SHR RCEs (Figure 2). Moreover, a 20 min preincubation with the non-selective NOS inhibitor L-NAME significantly increased the maximum [Ca2+]i increase and the decay time in the WKY, but not in the SHR cells. Inhibition of NOS II with 1400W had no effect on the calcium transient in either strain. The data depicted in Figure 2 are also reported in Table 1 as differences (delta) between the calcium transient parameters in control (basal) cells and cells incubated with SNAP, 1400W or L-NAME.

Table 1. Differences between control cells and cells incubated with the specified molecules (SNAP, 1400W, L-NAME) in RCEs isolated from WKY and SHR after AT-II activation of the calcium transient. The effect of RLX pretreatment is also reported.

In WKY RCEs, the maximal calcium values and decay times obtained were strongly reduced by SNAP and RLX, whereas L-NAME increased both parameters. 1400W only marginally influenced the calcium transients, suggesting that under basal conditions NO production was mainly dependent on the activation of a constitutive endothelial NOS. On the contrary, in SHR RCEs, SNAP, NOS inhibition and RLX pretreatment were ineffective.
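As an illustration of how the two transient parameters reported above (maximal [Ca2+]i rise and decay time) can be extracted from a recorded trace, the following is a minimal sketch, not the authors' analysis code; it assumes a mono-exponential decay and uses synthetic data in the range reported for WKY RCEs.

```python
# Minimal sketch (not the study's code): extract the peak [Ca2+]i rise and
# a decay time constant from a transient by fitting a mono-exponential.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, amplitude, tau, baseline):
    return baseline + amplitude * np.exp(-t / tau)

def transient_parameters(t, ca, baseline):
    """t in s, ca in nM; returns (peak rise, decay time constant tau)."""
    i_peak = int(np.argmax(ca))
    peak_rise = ca[i_peak] - baseline
    td = t[i_peak:] - t[i_peak]            # fit the decay phase only
    popt, _ = curve_fit(mono_exp, td, ca[i_peak:],
                        p0=(peak_rise, 20.0, baseline))
    return peak_rise, popt[1]              # tau plays the role of "decay time"

# Synthetic trace with values in the range reported for WKY RCEs.
t = np.linspace(0, 120, 600)
trace = 112.5 + 400 * np.exp(-t / 19.2) * (t > 0)
print(transient_parameters(t, trace, baseline=112.5))
```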
Intracellular Ca2+: RLX Effects

In WKY RCEs, a 24-h incubation with RLX decreased both the peak and the decay time of the agonist-induced [Ca2+]i transient (Figures 1A and 3); the effect of RLX preincubation was potentiated by a 10-min incubation with SNAP. A 20-min incubation with either the nonspecific NOS inhibitor L-NAME or the selective NOS II inhibitor 1400W modified the kinetics of the AT-II-induced [Ca2+]i transients in RLX-pretreated WKY RCEs: as shown, both inhibitors increased the maximum and the decay time of the calcium transient. A 24-h incubation with RLX in SHR cells was ineffective in reducing the calcium transients, as evaluated by the maximum values at the peak and the decay times. Again, in SHR cells, the short incubation with SNAP or with either NOS inhibitor did not modify the calcium transients in the RLX-pretreated cells. These data are also reported in Table 2 as delta values. In WKY, SNAP was still effective in reducing the maximal calcium value, while it influenced the decay time only to a minimal extent in RLX-pretreated cells. 1400W and L-NAME strongly increased the calcium transient, as indicated by the delta differences between RLX-pretreated control cells and cells treated with the NOS inhibitors. The direct comparison with the calcium transient values of untreated cells shows that RLX effectively reduced the delta calcium and decay time values. On the contrary, in SHR RCEs, RLX was ineffective, and neither SNAP nor NOS inhibition modified the calcium transients.

Effect of RLX on α-Thrombin-Induced Calcium Transient

As already described [19], 30 U/mL THR induced a rise of [Ca2+]i in RCEs of both strains (Table 3). RLX pretreatment significantly reduced both parameters of the calcium transient (i.e., the maximal calcium peak and the decay time) in WKY cells. Again, in SHR RCEs the THR-induced calcium transients were not modified by RLX pretreatment (Table 3).

Discussion

The present data confirm that [Ca2+]i in RCEs isolated from normotensive WKY rats is strongly modulated by the NO pathway. In particular, while a NO donor decreases the calcium transient induced by AT-II, a nonspecific inhibitor of NOS (L-NAME) can increase it. Moreover, the hormone RLX, known for its vasodilatory properties [5], reduces calcium transients in a NO-dependent mode related to NOS II [17]. These data validate the important role of NO in endothelial cells as a modulator of calcium signals in normotensive rats and suggest that NO can act in an autocrine manner in RCEs [12,17]. A remarkably different behavior is observed in SHR RCEs. In fact, in these cells, NO is unable to modulate [Ca2+]i, and this ineffectiveness is maintained after treatment with RLX, suggesting that a downstream step of the NO pathway is altered. Similarly, using THR as the calcium agonist, RLX pretreatment strongly reduces the calcium transients in WKY, whereas it is ineffective in SHR RCEs. Since RLX pretreatment significantly lowers the calcium transients induced by two different agonists (i.e., AT-II and THR) only in WKY, the dysfunction in SHR cells should lie in the NO pathway rather than in the specific receptor signaling. In this context, we have previously reported that SHR RCEs show low/absent expression of the cGKI enzyme [12]. Moreover, a similar reduction of cGKI has been described in cardiomyocytes [11] and aortic smooth muscle cells [13]. The present data demonstrate that RLX is unable to restore NO responsiveness in SHR RCEs and appear to be partially in disagreement with previous in vivo studies in which RLX reduced blood pressure [14] and cardiac and renal fibrosis [21] in SHR. This discrepancy may depend on many reasons.
Multiple signal transduction pathways are activated in response to relaxin [6]. Indeed, RLX receptors are coupled with different G proteins, including the Gs cyclic AMP-stimulating protein [22]. Therefore, the in vivo effects of RLX in SHR could be mediated by other pathways. Of note, a direct NO-mediated relaxant effect of RLX on smooth musculature has been consistently described in other target organs, including smooth muscle cells [23], the uterus [24,25] and the gastrointestinal tract [26]. The current study provides evidence that RLX cannot restore NO responsiveness in SHR RCEs and underlines the importance of the NO/sGC/cGKI pathway in controlling the [Ca2+]i dynamics presiding over the regulation of vascular tone. A genome-wide association study found that common genetic polymorphisms in the human cGKI gene (PRKG1) are significantly associated with enhanced diastolic blood pressure in response to an acute salt load in patients with hypertension [27]. Chronically elevated blood pressure increases left ventricular (LV) pressure, enhances LV radial systolic performance and leads to LV hypertrophy. Recently, LV systolic radial deformation (strain) has been associated with common polymorphisms in PRKG1 [28]. In particular, LV radial systolic deformation is significantly higher in patients carrying the homozygote PRKG1 polymorphism than in heterozygotes and noncarriers. This knowledge may have clinical implications, as it suggests that NO-modulating drugs (including RLX) used for cardiovascular diseases might be poorly effective or ineffective in these patients. Even if further studies must be undertaken to elucidate how the genetic variants of PRKG1 might influence cardiovascular diseases, in homozygous PRKG1 polymorphism carriers RLX could be ineffective in the treatment of hypertension and hypertensive heart diseases, suggesting the need for an accurate evaluation of the RLX effect in the clinical setting.

Chemicals

Highly purified porcine RLX (2500-3000 U/mg) was a generous gift from O. D. Sherwood. RLX was used at a concentration of 60 ng/mL, which is in the range found effective in inducing coronary vasodilatation in rat hearts [29]. Media, sera, and reagents for cell culture were from Sigma-Aldrich (Milan, Italy) and Gibco Life Technologies (Milan, Italy). Cell culture plasticware was purchased from Costar (Corning Costar Co., Costar Italia, Milan, Italy). Fura 2-AM and Pluronic F127 were from Molecular Probes Life Technologies (Milan, Italy). The selective NOS II inhibitor 1400W [30] was from Alexis Biochemicals (Enzo Life Sciences, New York, USA), and the NO-donor S-nitroso-N-acetylpenicillamine (SNAP) was from Tocris (Bristol, UK). THR was from Roche Life Sciences; NG-nitro-L-arginine methyl ester (L-NAME) and angiotensin II (AT-II) were from Sigma-Aldrich, as were the other chemicals used.

Isolation and Culture of Rat Coronary Endothelial (RCE) Cells

RCEs were isolated from the hearts of 3-4-month-old male Wistar Kyoto rats and age-matched SHR, as described previously [19]. Rats (Charles River, Lecco, Italy) were housed in the Centro per la Stabulazione degli Animali da Laboratorio (Ce.S.A.L., University of Florence) and maintained for at least one week after their arrival in a 12 h dark-light cycle with pellet food and water ad libitum. Formal approval to conduct the experiments described was obtained from the Animal Subjects Review Board of the University of Florence. The ethics policy of the University of Florence complies with the Guide for the Care and Use of Laboratory Animals of the U.S.
National Institutes of Health (NIH Publication No. 85-23, revised 1996; University of Florence Assurance No. A5278-01). Briefly, after enzymatic digestion of the heart, the suspension was centrifuged and the pellet was stirred for 30 min at 37 °C in the presence of 10 mg/50 mL trypsin. The recovered pellet was resuspended in 15 mL of culture medium (see below) and plated. After 4 h, cells were washed twice and grown until confluence (5-6 days) in M199 containing 10% fetal calf serum (FCS), 10% newborn calf serum, 250 U/mL penicillin G, 0.625 µg/mL amphotericin, and 250 µg/mL streptomycin. The isolated RCEs were cytocharacterized as previously reported [19,31]. Cells immunoreactive for endothelial markers ranged between 96% and 98%. For all experiments, cells were used at the first passage. Stimulation of RCEs with RLX was carried out in M199 medium without phenol red.

Determination of Intracellular Ca2+

Intracellular cytosolic Ca2+ ([Ca2+]i) was evaluated with Fura-2 by microscopic image analysis, as described previously [12,19]. Briefly, cells were grown on round cover slips to subconfluence and then incubated for 24 h in serum-free medium in the absence (controls) or presence of RLX (60 ng/mL). Cells were loaded with the Ca2+-sensitive fluorescent probe Fura 2-AM (4 µmol/L) and Pluronic F (0.02%) for 45 min at room temperature in HEPES-bicarbonate buffer containing (mM): NaCl 140, KCl 2.9,
Functional Characterization of a Novel Heterozygous Mutation in the Glucokinase Gene That Causes MODY2 in Chinese Pedigrees

Background

Glucokinase (GCK) plays a central role in glucose regulation. Heterozygous mutations of GCK can directly cause a monogenic form of diabetes, maturity-onset diabetes of the young (MODY). In our study, we aimed to explore the mechanism by which the novel mutation GCK p.Ala259Thr leads to glucokinase deficiency and hyperglycemia.

Methods

Thirty early-onset diabetes pedigrees were referred to whole exome sequencing for the identification of novel mutations. Purified wild-type and mutant GCK proteins were obtained from E. coli systems and then subjected to kinetic and thermal stability analysis to test the effects on GCK activity.

Results

One novel missense mutation, GCK p.Ala259Thr, was identified and co-segregated with diabetes in a Chinese MODY2 pedigree. The kinetic analysis showed that this mutation resulted in a decreased affinity and catalytic capability for glucose. The thermal stability analysis also indicated that the mutant protein presented dramatically decreased activity at the same temperature.

Conclusion

Our study is the first to identify the novel MODY2 mutation p.Ala259Thr in Chinese diabetes pedigrees. The kinetic and thermal stability analysis confirmed that this mutation causes hyperglycemia by severely damaging the enzyme activity and protein stability.

The GCK gene (7p15.3-p15.1) encodes the glucokinase (GCK) enzyme, a rate-limiting enzyme of glycolysis that is responsible for phosphorylating glucose to glucose-6-phosphate. GCK has unique kinetic characteristics, including a low affinity for glucose (S0.5 = 5-8 mmol/L) (4), cooperativity with its glucose substrate (Hill coefficient, h = 1.7), and a lack of inhibition by its product glucose-6-phosphate (G-6-P). In pancreatic β cells, GCK maintains glucose homeostasis by regulating glucose-stimulated insulin secretion in response to the intracellular glucose concentration (5-7). In the liver, GCK stimulates glucose disposal and glycogen storage (8). In addition, the crystal structures of human GCK present both active and inactive forms according to the glucose levels. Kamata and colleagues revealed that GCK has a small and a large domain separated by a deep cleft; these domains undergo a large conformational change through rotation of the small domain, which is induced by binding to glucose (9,10). Given its central role in glucose regulation, mutations in the gene encoding glucokinase can cause both hyper- and hypoglycaemia. Heterozygous activating GCK mutations can cause persistent hyperinsulinaemic hypoglycaemia of infancy (PHHI) (11). Furthermore, homozygous inactivating GCK mutations leading to complete GCK deficiency present as permanent neonatal diabetes mellitus (12), whereas heterozygous inactivating mutations are the underlying cause of MODY2 (13). MODY is the most common type of monogenic diabetes, accounting for 2% to 5% of all diabetes cases in Europe (14). Previous studies indicate that GCK-MODY2, HNF1A-MODY3, HNF4A-MODY1 and HNF1B-MODY5 account for more than 95% of MODY cases in Caucasians, but only for 10-20% of MODY cases in Asia (including China, Japan and Korea) (15). An epidemiological investigation of Chinese hyperglycemia pedigrees fulfilling the clinical diagnostic criteria for MODY showed that the MODY subtype detection rate was 18.42% for GCK (16).
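To make the kinetic characteristics quoted earlier in this introduction concrete, here is a minimal sketch of the Hill relationship between glucose concentration and GCK velocity; S0.5 = 7 mmol/L is an assumed value inside the 5-8 mmol/L range cited above.

```python
# Minimal sketch of cooperative GCK kinetics (Hill equation).
# S0.5 = 7 mmol/L is assumed; the text quotes a 5-8 mmol/L range and h = 1.7.
def gck_velocity(s, vmax=1.0, s05=7.0, h=1.7):
    """Fractional GCK velocity at glucose concentration s (mmol/L)."""
    return vmax * s**h / (s05**h + s**h)

# Around fasting glycaemia (~5 mmol/L) GCK works near its inflection point,
# so small glucose changes translate into large activity changes.
for s in (2.5, 5.0, 7.0, 10.0):
    print(f"{s:5.1f} mmol/L -> v/Vmax = {gck_velocity(s):.2f}")
```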
Heterozygous mutations in GCK lead to decreased glucokinase activity and thus to a deficient sensitivity to glucose in β cells and impaired glycogen synthesis in the liver (17). GCK/MODY2 presents with a mild, non-progressive hyperglycaemia, which is generally asymptomatic and develops without an increased risk of late complications, such as diabetic retinopathy or nephropathy (18,19). Due to the inconspicuous symptoms, MODY2 is often misdiagnosed and treated inappropriately. However, a molecular genetic diagnosis can change the management, since patients with GCK mutations rarely require pharmacological treatment. Thus, a correct genetic diagnosis is important for guiding the prediction for asymptomatic relatives and the personalized treatment of those with diabetes. To date, although more than 600 different GCK/MODY2 mutations have been reported, including nonsense, missense, and frameshift mutations, less than 20% of these mutations have been functionally characterized (20-22). Pathophysiological studies on naturally occurring GCK missense mutations will provide further clues to help elucidate the mechanisms of glycaemic disorders and investigate the biological characteristics of this enzyme. In this study, we report, for the first time, the novel GCK missense mutation Ala259Thr, which co-segregates with diabetes in a Chinese MODY family. This mutation, which alters alanine to threonine at the 259th amino acid, is located proximal to the glucose-binding site but has not been investigated biochemically. Herein, we discovered that the Ala259Thr mutation exerted effects on the catalytic activity and protein thermostability of glucokinase.

Subjects

The retrospective study included 30 early-onset diabetes pedigrees referred for genetic testing. All pedigrees were clinically diagnosed with MODY according to the following classic criteria (23,24): a family history of diabetes for at least two consecutive generations, early onset of diabetes before the age of 25 years, no need for insulin therapy, and negativity for type 1 diabetes antibodies. The diagnosis was made based on an oral glucose tolerance test (OGTT). The fasting blood glucose (FPG), 2h blood glucose (2h PG), fasting insulin (FINS), 2h insulin (2h-INS) and glycated hemoglobin (HbA1c) levels were measured in all family members available for testing. The study was performed according to the Declaration of Helsinki and was approved by our institutional review boards. Informed consent was obtained from all family members.

Identification of Glucokinase Gene Mutations by Whole Exome Sequencing

Genomic DNA was extracted from peripheral lymphocytes using a Qiagen DNA extraction kit (Qiagen, Frankfurt, Germany). Whole exome sequencing was performed to explore novel mutations, and direct sequencing was used to validate the positive mutation. The coding regions of exons 1a-10 and the intron-exon boundaries of the GCK gene were amplified by PCR using self-designed primers (Table 1). PCR products were purified using QIAquick PCR purification columns (Qiagen, Frankfurt, Germany), and both strands were sequenced using the BigDye Terminator Cycle Sequencing Kit (Applied Biosystems, CA, UK) according to the manufacturer's recommendations.

Production and Purification of Recombinant Wild-Type and Mutant Glucokinase

Recombinant human wild-type liver GCK was constructed with a His tag at the NH2-terminus and ligated into the pET-28a(+) vector.
The Ala259Thr mutation was generated based on the His-GCK construct by PCR using a kit (QuikChange II Site-Directed Mutagenesis Kit, Stratagene, CA, USA). The following oligonucleotides were used to generate the Ala259Thr mutation: forward primer (5' CGAGTGGGGCACCTTCGGGGACTCCGGCGAGCTGGACGAGTT 3') and reverse primer (TCCCCGAAGGTGCCCCACTCGGTATTGACGCACATGCGGCCCT). The wild-type and mutant GCK sequences were verified using the ABI 3500xl DNA sequencer (Applied Biosystems, USA). The wild-type and mutant GCKs with His tags were transformed into E. coli (BL21-CodonPlus (DE3)-RIPL chemically competent cells) and then purified from a 30-g cell pellet. Two-step affinity chromatography was used, with Ni-NTA beads to bind the fusion protein, which was eluted from the Ni-NTA column and loaded onto a Superdex™ 200 16/60 column. Both the wild-type and mutant His-GCK purified proteins showed a single band on SDS-PAGE gels. The purified proteins were quantified using the Bradford method (Bradford Protein Assays, Thermo Fisher Scientific, USA) using standard methods and stored at -80 °C in 30% glycerol, 5 mmol/L glutathione, 5 mmol/L dithiothreitol (DTT), 200 mmol/L KCl, and 50 mmol/L Tris buffer (pH 7.4).

Kinetic Analysis

GCK activity was measured luminometrically based on the ADP-Glo™ Kinase Assay (Promega, USA). The luminescent signal generated is proportional to the ADP concentration produced and is correlated with the kinase activity. Kinetic parameters were also determined according to the assay as follows. First, standard ATP/ADP mixtures representing different conversion percentages were prepared to generate the standard curve for the conversion of ATP to ADP. Second, ten serial two-fold dilutions of glucose in the assay buffer (final concentration starting from 200 mM) in the presence of 1 mM ATP were generated to determine the glucose concentration at which half of the maximal velocity (Vmax) of the reaction was reached. The assay buffer contained 50 mM Tris, 100 mM KCl and 10 mM MgCl2 (pH 7.5). GraphPad Prism 7.0 (GraphPad Software, La Jolla, CA, USA) was used to calculate the glucose-Km (S0.5), glucose-Kcat, ATP-Km, ATP-Kcat, Hill coefficient (h) and inflection point of glucose. The relative activity index and the glucose concentration at the inflection point were also calculated.

Thermal Stability Analysis

The thermal stability of the mutant and wild-type His-GCK enzymes was assessed using the ADP-Glo™ Kinase Assay with 3 mM (for the wild-type) or 11 mM (for the mutant) glucose and 1 mM ATP. The enzymes were incubated in a water bath at 25, 30, 35, 40, 45, 50, 55, and 60 °C for 30 min, or at 50 °C for 0, 5, 10, 15, 20, 25, and 30 min. Luminescence was measured to represent the glucokinase activity, as described above.

Statistical Analysis

All results are presented as the mean ± SD. Student's two-tailed unpaired t test was used to assess differences between groups. The Mann-Whitney test was used to evaluate differences in clinical parameters between mutation carriers and non-carriers. The statistical analyses were performed using SAS 8.0 (SAS Institute, Cary, NC, USA). A two-tailed p value less than 0.05 was considered significant.

Identification of a Novel Missense Mutation in the GCK Gene

The 12 exons of the GCK gene were scanned for the validation of mutations using direct sequencing for each of the affected families. A novel heterozygous missense mutation in exon 7 of the GCK gene (codon 259, GCC➔ACC; Figure 1), resulting in an amino acid substitution (Ala259➔Thr), was identified in the proband.
The same mutation was also identified in the proband's father and grandfather. Conversely, this mutation was not found in the four unrelated healthy individuals used as controls.

Clinical Profiles of the Patients

The male proband (III:2) was diagnosed with diabetes at 5 years of age and presented with fasting hyperglycemia. Biochemical studies showed elevated fasting plasma glucose (FPG) (8.0 mmol/L), 2h plasma glucose (2h PG) (12.8 mmol/L) after an oral glucose tolerance test (OGTT), and glycated hemoglobin A1c (HbA1c) (7%). In contrast, fasting insulin (FINS) and 2h insulin (2h-INS) were decreased (FINS: <0.2 µU/mL, 2h-INS: 2.23 µU/mL). The proband's father (II:3) and grandfather (I:2) were diagnosed with fasting hyperglycaemia at the ages of 33 and 47 years, respectively, during routine screening (FPG = 7.8 and 9 mmol/L, respectively). The HbA1c level was elevated in the proband's grandfather (7.5%) but was normal in the proband's father (6%). Both FINS and 2h-INS were normal in the proband's father and grandfather. The proband's paternal aunt (II:2) had presented with gestational diabetes mellitus (GDM) during her first pregnancy according to her previous medical history, but her biochemical results were not available to us. None of the diabetic patients received hypoglycemic drugs. The rest of the family were normal glucose tolerance (NGT) individuals with normal blood glucose and insulin levels (Figure 2).

Production of Recombinant Wild-Type and Mutant Glucokinase

The recombinant wild-type and mutant enzymes were expressed in an E. coli system (BL21(DE3)-Gold). Nine preparations of the His-fusion protein were purified with a Ni-NTA column, thrombin digestion and a Superdex™ 200 column, with yields of 11.5 mg/L and 10 mg/L for the wild-type and mutant proteins, respectively. All His-GCK proteins were proven to be essentially pure based on the presence of a single band at 75 kDa on an SDS-PAGE gel.

Kinetic Analysis

Both the purified wild-type and mutant GCKs were subjected to kinetic analysis using glucose and ATP as substrates. The response curves of GCK for a series of glucose or ATP concentrations, which indicate the affinity of GCK for the substrates, are shown in Figure 3 (glucose as substrate) and Figure 4 (ATP as substrate). The kinetic parameters, including the substrate affinities (S0.5 for glucose and ATP-Km for ATP), catalytic constants (glucose-Kcat and ATP-Kcat, respectively), Hill coefficients and inflection points of glucose, are shown in Table 2.

Thermal Stability Analysis

The thermostability tests of the wild-type and mutant His-GCK enzymes were performed at different temperatures to investigate protein stability, which is also a key determinant of enzyme function. The enzyme activity of wild-type GCK was stable below 45 °C, with a sharp decline at 50 °C (Figure 5A). In contrast, the Ala259Thr mutant maintained its activity below 40 °C, but its activity decreased dramatically at 45 °C (Figure 5A). At 50 °C, however, the mutant showed a decreased enzyme activity similar to that of the wild-type enzyme over the 30 min time course (Figure 5B).

DISCUSSION

MODY is an autosomal dominant form of diabetes that is characterized by early onset, pancreatic dysfunction and a non-insulin-dependent diabetes status. Heterozygous mutations in GCK were first recognized as the intrinsic cause of MODY2 in 1992 (13,25).
HNF1A, GCK, HNF4A, and HNF1B are the most common types in Europeans. MODY2 accounts for approximately 80% of MODY patients in Spain (22), 38-86% in Italy (26-28), 56% in France (29) and 32% in the United Kingdom (30). However, MODY2 is rarely reported in Asian patients, with a prevalence of 1% in Japanese (31), 2% in Korean (32) and 1-4% in Hong Kong Chinese patients (33,34). In our study, we recruited 30 early-onset diabetes pedigrees for genetic testing and discovered the novel mutation Ala259Thr in GCK, which was accompanied by hyperglycaemia and was in accordance with autosomal dominant inheritance. Three of the four diabetic patients in this pedigree were characterized by mild hyperglycaemia. The patients with the Ala259Thr mutation did not require treatment but could be managed with diet or exercise. Although we could not confirm whether the remaining female patient was a mutation carrier, since her DNA sample was not available, this patient was definitely diagnosed with GDM during her first pregnancy and returned to a normal glucose level after delivery. Therefore, this family was confirmed to be a MODY2 pedigree linked to the novel GCK p.Ala259Thr mutation. Different MODY2 mutations have been reported to impair GCK function through different mechanisms, involving kinetics, enzymatic activity or protein thermostability (18,22,35-38). Therefore, we investigated the functional characteristics of the Ala259Thr recombinant protein to elucidate the potential mechanism resulting in hyperglycaemia. In the kinetic analysis, Ala259Thr presented a higher S0.5 value and inflection point, which indicated that a higher glucose concentration was required to achieve the Vmax. Moreover, the lower Hill coefficient and Kcat of the Ala259Thr mutation revealed a decreased affinity and catalytic activity when glucose was used as the substrate. However, when ATP was used as the substrate, the Km-ATP and Kcat showed no significant differences between the mutant and wild-type recombinant proteins, which suggested that the Ala259Thr mutation affected the binding or catalytic capacity for glucose but not for ATP. No mutation at the same position had been reported previously, but the nearby mutations p.Trp257Arg and p.Gly261Arg presented decreased Kcat values (39,40). Most of the reported GCK mutations were found to be kinetically inactivating, with alterations of one or more kinetic parameters (22,35,40). However, kinetic inactivation may not be the only factor that causes hyperglycaemia, since impaired protein thermostability is another reported mechanism (36,37). In our study, we therefore also performed a thermal analysis. The wild-type GCK recombinant protein was stable below 45 °C, whereas the mutant protein presented dramatically decreased activity at the same temperature. Since the temperature at which protein thermostability is lost exceeds the normal temperature of the human body, the effect of thermal stability on hyperglycemia remains to be confirmed. As for the investigation of enzyme function, biochemical experiments are still the first choice in most literature reports, since researchers can directly obtain recombinant protein and perform kinetic and thermal stability analyses to evaluate the activity and stability of the enzyme. However, it might be better to perform functional experiments in vivo to demonstrate the mechanism by which the mutation contributes to hyperglycemia.
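As an illustration of how such kinetic parameters can be estimated, the following is a minimal sketch that mimics the glucose dilution series described in the methods (ten two-fold dilutions from 200 mM); the velocity data are synthetic, not the study's measurements.

```python
# Minimal sketch (synthetic data, not the study's measurements): estimate
# Vmax, S0.5 and the Hill coefficient h from a glucose/velocity series
# such as the ADP-Glo dilution curve described in the methods.
import numpy as np
from scipy.optimize import curve_fit

def hill(s, vmax, s05, h):
    return vmax * s**h / (s05**h + s**h)

# Ten serial two-fold dilutions starting from 200 mM glucose.
glucose = 200.0 / 2 ** np.arange(10)
velocity = hill(glucose, 1.0, 7.0, 1.7)        # noise-free toy data
popt, _ = curve_fit(hill, glucose, velocity, p0=(1.0, 5.0, 1.5))
vmax, s05, h = popt
print(f"Vmax={vmax:.2f}, S0.5={s05:.2f} mM, h={h:.2f}")
```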
In addition, the glucokinase regulatory protein (GKRP) can act as a competitive inhibitor with respect to glucose and regulate GCK activity through protein-protein interactions (41). Posttranslational regulation of GCK can also influence GCK activation (14). For example, cytoplasmic Ca2+ levels may regulate GCK activation and, therefore, glucose metabolism and insulin secretion (42). The GCK-R369P and GCK-V367M mutations can impair glucose-stimulated insulin secretion through posttranslational regulation of GCK S-nitrosylation (14). However, whether these mechanisms participate in GCK p.Ala259Thr activity needs to be elucidated. In the present study, we identified the novel mutation GCK p.Ala259Thr, which co-segregated with diabetes in a Chinese MODY2 pedigree. Our study illustrated that the GCK p.Ala259Thr mutation has a direct impact through the kinetic inactivation and thermal instability of the GCK enzyme, which led to hyperglycaemia in the mutation carriers of this pedigree. Other potential mechanisms, such as posttranslational regulation, as well as a crystallographic analysis of the mutant structure, need to be assessed in future studies.

DATA AVAILABILITY STATEMENT

The data analyzed in this study are subject to the following licenses/restrictions: the datasets consist of data routinely recorded in clinical practice. Requests to access these datasets should be directed to Cheng Hu, alfredhc@sjtu.edu.cn.

ETHICS STATEMENT

Ethical approval was granted by the Institutional Review Board of Shanghai Jiao Tong University Affiliated Sixth People's Hospital. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.

AUTHOR CONTRIBUTIONS

CH and YG contributed to the study design and the acquisition and interpretation of data, and reviewed and edited the manuscript. FJ focused on the biological experiments and the analysis and interpretation of data, and drafted and edited the manuscript. JY contributed to the biological experiments. RZ, XM, and YB contributed to the pedigree collection, genetic testing and clinical diagnosis. All authors contributed to the article and approved the submitted version.

FUNDING

This work was supported by the National Science Foundation of China (NSFC) (31500955, 81800702), the Shanghai Outstanding Academic Leaders program (20XD1433300), the Interdisciplinary Program of Shanghai Jiao Tong University (YG2021ZD20) and the Nantong Municipal Science and Technology Project (MS22019005).
Monte Carlo simulation of amorphous magnets with random exchange interactions

Using the Monte Carlo method, a computer simulation of the magnetic properties of pure amorphous Gd and of Re-Gd amorphous alloys was performed. For the model of amorphous Gd, the temperature dependencies of magnetization and magnetic susceptibility were calculated at different values of the ratio of the exchange interaction integrals within the first and the second coordination spheres, J1/|J2|. The magnetic phase diagram in the coordinates T - J1/|J2| was constructed. For the models of the Re-Gd amorphous alloys, the dependence of the spin-glass transition temperature on the concentration of Gd atoms was calculated. For all our models, the hysteresis loops were calculated at different temperatures.

Introduction

Binary amorphous alloys of heavy rare-earth metals with nonmagnetic transition metals are of great interest due to their unique magnetic properties [1,2]. In these systems, a transition into the spin-glass state takes place. In amorphous alloys of the rhenium-gadolinium (Re-Gd) system, the maximum on the temperature dependence of the magnetic susceptibility χ(T) typical for spin glasses and the irreversibility of the magnetization M(T) were experimentally revealed [3]. The spin-glass state occurs due to random exchange interactions in this system. The nature of the spin-glass state on the microscopic level is insufficiently studied. In this connection, the computer simulation of the atomic structure and magnetic properties of amorphous magnetic materials is a promising direction. In this work, we report on the Monte Carlo simulation of the magnetic properties of pure amorphous Gd and of amorphous alloys of the Re-Gd system, which have not been studied so far.

Simulation technique

Using the molecular dynamics method, we constructed models of the atomic structure of pure amorphous Gd and of the Re-Gd amorphous alloys. Each model contained 100 000 atoms in a cubic cell with periodic boundary conditions. The interatomic interaction was described by an empirical polynomial potential [4]. In the process of simulation, the magnetic moments of the Gd atoms were situated at the corresponding atomic locations in the amorphous structure; the coordinates of the atoms were previously obtained by the molecular dynamics method. The magnetic properties of these models were studied using the Monte Carlo method within the standard Metropolis algorithm [5] in the framework of the Heisenberg model. After the formation of a random initial spin configuration, an attempt to change the configuration is made, i.e. a randomly chosen spin randomly changes its spherical angles. The new configuration is accepted with the probability

$w = \min\{1, \exp(-\Delta E / kT)\}$,

where ΔE is the energy variation. This procedure is repeated for the calculation of the average values of the physical quantities. The Hamiltonian describing the interaction of the magnetic moments of the gadolinium atoms was written in the following form [6,7]:

$H = -\sum_{i<j} J_{ij}\,(\mathbf{S}_i \cdot \mathbf{S}_j) - \mu \sum_i (\mathbf{H} \cdot \mathbf{S}_i)$,

where $J_{ij} = J_1$ for atom pairs within the first coordination sphere and $J_{ij} = J_2$ within the second one, $\mathbf{S}_i$ is the unit vector of the magnetic moment of the i-th Gd atom, μ is the magnetic moment of a Gd atom and $\mathbf{H}$ is the external magnetic field.

Results and discussion

For the model of pure amorphous Gd, we calculated the temperature dependencies of the reduced spontaneous magnetization M/MS, where MS is the saturation magnetization, at various values of the J1/|J2| ratio (J1/|J2| = 10, 11, 12, 13 and 14). In Fig. 1 we present the dependence of the maximum spontaneous magnetization (at T = 1 K) on the J1/|J2| ratio for the model of amorphous Gd. At J1/|J2| ≥ 12 the system is in the asperomagnetic state, and at J1/|J2| ≤ 11 it transits into the spin-glass state. In the asperomagnetic state the spontaneous magnetization differs from zero, but the magnetic moments are non-collinear, arranged randomly as in spin glasses. The type of the phase was determined by the value of M/MS at T = 1 K.
In Fig. 2 we present the temperature dependencies of the magnetic susceptibility for the models of pure amorphous gadolinium at different values of the J1/|J2| ratio, equal to 8, 10, 12, 14 and 16. The values of the susceptibility were calculated during cooling of the model from the paramagnetic state, in the temperature interval from 100 K down to 1 K with the step ΔT = 5 K, in the absence of a magnetic field. At each temperature, the values of the susceptibility were averaged over 10 cycles of 10^3 MC-steps/spin. In all these χ(T) curves a distinct maximum is observed, which proves the presence of a magnetic phase transition. The position of the maximum corresponds to the phase transition temperature. Thus, we obtained the magnetic phase diagram for amorphous Gd in the J1/|J2| - T coordinates (Fig. 3). It allows one to determine the phase state of the system depending on the temperature and on the exchange integral within the second coordination sphere, J2. The temperature of the magnetic phase transition increases monotonically with increasing J1/|J2| and reaches the constant value Tf ≈ 75 K at J2 = 0 (in this case J1/|J2| → ∞).

We also studied the temperature dependencies of the magnetic susceptibility χ(T) for the models of the Re-Gd amorphous alloys. The minimal concentration of gadolinium atoms at which the spin-glass transition takes place is above the percolation threshold in this system, which is 4 at.% Gd. Thus, the spin-glass transition in the Re-Gd system takes place only above the percolation threshold, i.e. at x ≥ 7 at.% Gd. The concentration dependence of the transition temperature (Fig. 4) is linear and agrees well with the experimental results [3].

We studied the behaviour of the models of amorphous Gd and of the Re-Gd amorphous alloys under an applied external magnetic field. The hysteresis loops for the model of amorphous Gd were calculated at J1/|J2| = 10 (this corresponds to the spin-glass state) and at the temperatures T = 1, 30 and 50 K (Fig. 5). The external magnetic field was varied from -100 to 100 kOe with a step of 5 kOe. The coercive field at T = 1 K is ~10 kOe and the remanent magnetization is ~0.55 MS. With increasing temperature, the coercive field and the remanent magnetization monotonically decrease and reduce to zero at the spin-glass transition temperature (≈50 K). The hysteresis loops for the models of the Re100-xGdx amorphous alloys at T = 1 K were also calculated (Fig. 6). The external magnetic field was again varied from -100 to 100 kOe with a step of 5 kOe. At x = 12-61 at.% Gd the coercive field is less than 5 kOe, and only at x = 93 at.% is it about 10 kOe. The remanent magnetization monotonically increases with increasing concentration of gadolinium atoms. The results of the Monte Carlo simulation of the magnetization and remagnetization processes coincide qualitatively with the results of experimental studies of various amorphous alloys based on rare-earth metals [8,9].
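As a minimal sketch of the simulation scheme described above, the following implements one Metropolis sweep for a classical Heisenberg model with first- and second-sphere exchange. The neighbour tables, system size and parameter values are assumed for illustration and this is not the authors' code.

```python
# Minimal sketch (assumed parameters, not the authors' code) of one
# Metropolis sweep for a classical Heisenberg model with exchange J1 within
# the first and J2 within the second coordination sphere.
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def metropolis_sweep(spins, nbr1, nbr2, j1, j2, temperature):
    """One attempted update per spin; nbr1/nbr2 are neighbour index lists."""
    for i in range(len(spins)):
        trial = random_unit_vectors(1)[0]           # new random orientation
        local = j1 * spins[nbr1[i]].sum(axis=0) + j2 * spins[nbr2[i]].sum(axis=0)
        delta_e = -np.dot(trial - spins[i], local)  # from E_i = -S_i . (sum J S_j)
        # Accept with probability min{1, exp(-dE/kT)}, k_B = 1 units.
        if delta_e <= 0 or rng.random() < np.exp(-delta_e / temperature):
            spins[i] = trial
    return spins

# Toy example: 100 spins with hypothetical random neighbour tables.
spins = random_unit_vectors(100)
nbr1 = [rng.choice(100, size=12, replace=False) for _ in range(100)]
nbr2 = [rng.choice(100, size=6, replace=False) for _ in range(100)]
spins = metropolis_sweep(spins, nbr1, nbr2, j1=10.0, j2=-1.0, temperature=5.0)
print("reduced magnetization m =", np.linalg.norm(spins.mean(axis=0)))
```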
Assessing Current Seismic Hazards in Irpinia Forty Years after the 1980 Earthquake: Merging Historical Seismicity and Satellite Data about Recent Ground Movements

Recently, a new strain rate map of Italy and the surrounding areas has been obtained by processing data acquired by the persistent scatterers (PS) of the synthetic aperture radar interferometry (InSAR) satellites ERS and ENVISAT between 1990 and 2012. This map clearly shows that there is a link between the strain rate and all the shallow earthquakes (less than 15 km deep) that have occurred from 1990 to today, with their epicenters being placed only in high strain rate areas (e.g., the Emilia plain, NW Tuscany, the Central Apennines). However, the map also presents various regions with high strain rates, but in which no damaging earthquakes have occurred since 1990. One of these regions is the Apennine sector formed by Sannio and Irpinia. This area represents one of the most important seismic districts, with a well-known and recorded seismicity from Roman times up to the present day. In our study, we merged historical records with new satellite techniques that allow for the precise determination of ground movements, and then derived physical quantities, such as the strain rate. In this way, we verified that in Irpinia the occurrence of new strong shocks, forty years after one of the strongest known seismic events in the district (the Mw 6.8 earthquake of 23 November 1980), is still a realistic possibility. The reason for this is that, since 1990, only areas characterized by high strain rates have hosted significant earthquakes. This picture has also been confirmed by analyzing the historical catalog of events, with seismic completeness for magnitude M ≥ 6 over the last four centuries. It is easy to see that strong seismic events with magnitude M ≥ 6 generally occurred at a relatively short time distance from one another, with a period of 200 years without strong earthquakes between the years 1732 and 1930. This aspect must be considered very important from various points of view, particularly for civil protection plans, as well as for civil engineering and urban planning development.

Introduction

This study is based on the analysis of a fine-scale ground velocity map of Italy determined by the fusion of Global Navigation Satellite Systems (GNSS) data with synthetic aperture radar interferometry (InSAR) data derived from satellites [1]. The dataset derives from a period of observation between 1990 and 2012. The InSAR dataset is part of the "Piano Straordinario di Telerilevamento" (Special Program for Remote Sensing, promoted by the Italian Ministry of Environment). Due to the quasi-polar orbit of the satellites, space-borne InSAR observations can only determine the East-West (E-W) and Up-Down (U-D) components of the movement of the persistent scatterers. Moreover, while there are millions of scatterers, only a few hundred GNSS stations exist, so the North-South (N-S) component cannot be measured directly at the scatterers. The N-S component is therefore provided by a C2-continuous bi-cubic interpolation function that is well suited to interpolate the sparse GNSS stations displaced inside the study area.

Figure 1. Map of the East component of the ground velocity field of the Italian Peninsula, derived from Global Navigation Satellite Systems (GNSS) and synthetic aperture radar interferometry (InSAR) during more than two decades of observation.
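The interpolation step described above can be prototyped as follows. This is a minimal sketch with invented station coordinates and velocities; scipy's smoothing bivariate spline is used here as one possible C2-continuous bicubic interpolant, not necessarily the implementation of [1].

```python
# Minimal sketch: interpolate sparse GNSS N-S velocities onto a persistent-
# scatterer position with a smooth bicubic (C2-continuous) spline.
# All station data below are hypothetical.
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(1)
lon = rng.uniform(7.0, 18.0, 80)          # fake GNSS station longitudes
lat = rng.uniform(37.0, 47.0, 80)         # fake GNSS station latitudes
v_ns = 3.0 * np.sin(lat / 3.0) + rng.normal(0, 0.1, 80)  # mm/a, synthetic

# kx = ky = 3 gives a bicubic surface; s > 0 smooths measurement noise.
spline = SmoothBivariateSpline(lon, lat, v_ns, kx=3, ky=3, s=len(lon))

ps_lon, ps_lat = 14.8, 41.0               # one scatterer, roughly in Irpinia
print("interpolated N-S velocity:",
      float(spline(ps_lon, ps_lat)[0, 0]), "mm/a")
```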
The area of the Central Apennines presents major earthquakes from 1990 to the present day. From the figure above, it is clear that the main seismogenic areas are linked to the boundary that divides the two blocks with opposite E-W components of velocity. The Apennine chain, an approximately linear belt hosting the most rapidly slipping normal faults, and the most damaging earthquakes are coincident with the areas in which the morphological surface height, when averaged on a horizontal scale of tens of kilometers, is greatest [5]. In this area, the first studies based on the relative movements of the GNSS stations had already determined a mean extension of ca. 3 mm/a, linked to the differential movements between the two blocks. This also allows the emplacement of melt intrusions along deep-rooted faults [6] (the last occurrence of this kind probably triggered the 2013/2014 Matese seismic swarm [7]) and the widespread emission of deep-originated CO2 [8]. This regime is dissecting the former Cenozoic east-verging thrust belt related to the west-dipping subduction of the Apulian lithosphere [9]. This compressive regime ended at 650 ka, in the middle Pleistocene [10]. The E-W component of the InSAR movements [1] has also confirmed the frame depicted by [11], in which the Ortona-Roccamonfina is not a single lineament but a 30 km wide deformation channel: this channel is characterized by prevalent west-directed velocities in the stable Europe frame, nested in the eastward-moving Adriatic block.

The vertical component of the InSAR data highlights the current general uplift occurring in most of Southern Italy, even if this uplift is lower than in the Central Apennines (especially in the "Abruzzo Dome" [1]), confirming a wealth of the geological literature. Conversely, a few areas show subsidence, mainly because of human groundwater exploitation. In this frame, the highest uplift values of the whole Southern Apennines, exceeding 1.8 mm/a, are present in the chain segment between Benevento and Potenza. This area of higher-than-surroundings uplift roughly corresponds to the Irpinia sector, in a belt just west of the Campania-Puglia border. Thus, it is possible to call this area the "Irpinian Dome" (Figure 2). The Ufita and Marzano faults represent the surface traces of the two different patterns of the East-West ground velocity component (Figure 3 (top)). The uplifting area is divided into two different parts and, between them, there exists a narrow corridor of lower uplift, <1 mm/a (Figure 3 (bottom)). It is interesting to note that this corridor is placed near the epicentral areas of the 1930 and 1980 earthquakes.
This area of higher-than-surroundings uplift roughly corresponds to the Irpinia sector, in a belt just west of the Campania-Puglia border. Thus, it is possible to call this area the "Irpinian Dome" (Figure 2). The Ufita and Marzano faults represent the surface traces of the two different patterns of the East-West ground velocity component (Figure 3 (top)). Relationship between Strain Rate and Earthquakes The strain rate provides a measure of the superficial deformation, and for this reason, is useful information for studying and analyzing geodynamics. Many authors have produced strain rate maps of the Italian territory using GPS station data. In the last decade, for example, Riguzzi et al., (2012) [12] estimated the strain rate, using the GPS velocity solution, of the Italian area-provided by Devoti et al., (2011) [13]. Palano (2015) [14] carried out an analysis of the stress and strain-rate fields of Italy. He performed a comparison of GPS inferred strain-rate data and 308 stress datasets interpolated at each node of a regular grid. The uplifting area is divided into two different parts and, between them, exists a narrow corridor of lower uplift <1 mm/a, (Figure 3 (bottom)). It is interesting to note that this corridor is placed near the epicentral areas of the 1930 and 1980 earthquakes. The geographical axis of the Irpinian dome is placed east of the main NE-dipping faults, on the surface projection of the hanging wall. Any useful information of the N-S component of the ground movement can be detected by InSAR satellites because their quasipolar orbits only make the detection of vertical and E-W velocity components possible. Relationship between Strain Rate and Earthquakes The strain rate provides a measure of the superficial deformation, and for this reason, is useful information for studying and analyzing geodynamics. Many authors have produced strain rate maps of the Italian territory using GPS station data. In the last decade, for example, Riguzzi et al., (2012) [12] estimated the strain rate, using the GPS velocity solution, of the Italian area-provided by Devoti et al., (2011) [13]. Palano (2015) [14] carried out an analysis of the stress and strain-rate fields of Italy. He performed a comparison of GPS inferred strain-rate data and 308 stress datasets interpolated at each node of a regular grid. Montone and Mariucci (2016) [15] provided an updated present day stress map for the Italian territory combining seismicity, data retrieved from a breakout analysis in deep wells, and fault data. Starting from this base Mastrolembo and Caporali (2017) [16] presented a direct comparison of the principal horizontal directions of stress and strain-rate directions of extension, estimated at the position of each stress measurement in their data set. For this, they used GPS data coming from over 500 stations distributed on the Italian peninsula, however, they did not provide a general map. This work instead benefits from a new fine-scale strain rate field of the whole continental Italy and Sicily ( Figure 4) [17], determined from the surface ground movements map obtained by the satellite InSAR observations between 1990 and 2012 [1]. The twodimensional velocity gradient tensor is calculated by applying the infinitesimal strain approach [18,19] with a grid of 20 km × 20 km. 
The known horizontal incremental velocity vector $V_i$ of the i-vertex polygon is defined as:

$$V_i = A_i + t_{ij}\, x_j$$

where $A_i$ is the unknown velocity at the origin of the coordinate system, $x_j$ is the position of the station, and $t_{ij}$ is the displacement gradient tensor. Following tensor theory, we separated the second-rank tensor into a symmetric and an anti-symmetric tensor. Then, $t_{ij}$ can be additively decomposed as follows:

$$t_{ij} = e_{ij} + \omega_{ij}, \qquad e_{ij} = \tfrac{1}{2}\left(t_{ij} + t_{ji}\right), \qquad \omega_{ij} = \tfrac{1}{2}\left(t_{ij} - t_{ji}\right)$$

The symmetric and anti-symmetric parts of the infinitesimal strain rates can be associated with the infinitesimal strain $e_{ij}$ and rotation $\omega_{ij}$ tensors. The principal strains $e_1$, $e_2$ were computed as:

$$e_{1,2} = \frac{e_{xx} + e_{yy}}{2} \pm \sqrt{\left(\frac{e_{xx} - e_{yy}}{2}\right)^2 + e_{xy}^2}$$

and the horizontal second invariant of the strain rate (SR) tensor was also evaluated as the scalar presented in Figure 4:

$$SR = \sqrt{e_1^2 + e_2^2}$$
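As a concrete illustration of the computation described above, the short sketch below decomposes a two-dimensional velocity gradient tensor into its symmetric and anti-symmetric parts, extracts the principal strain rates, and evaluates the second invariant. The numbers are illustrative placeholders, not values from the survey.

```python
import numpy as np

# Hypothetical 2D velocity gradient tensor t_ij (nstrain/a), e.g., estimated by
# least squares from GNSS/InSAR velocities within one 20 km x 20 km grid cell.
t = np.array([[30.0, 12.0],
              [-4.0, 18.0]])

# Additive decomposition into strain-rate (symmetric) and rotation (anti-symmetric) parts.
e = 0.5 * (t + t.T)      # infinitesimal strain-rate tensor e_ij
omega = 0.5 * (t - t.T)  # rotation tensor omega_ij

# Principal strain rates e1, e2 are the eigenvalues of the symmetric part.
e1, e2 = np.linalg.eigvalsh(e)[::-1]

# Horizontal second invariant of the strain rate (SR), as mapped in Figure 4.
sr = np.hypot(e1, e2)

print(f"e1 = {e1:.2f}, e2 = {e2:.2f}, SR = {sr:.2f} nstrain/a")
```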
The determination of the second invariant of the strain rate provides important additional information to support the analysis of the geodynamics and the earthquake distribution of the study area. A recent study [17], based on the analysis of the seismic events that have occurred since 1990 in the Italian peninsula, shows that the probability of earthquakes occurring is linked to the SR by a linear correlation. More specifically, the probability that a strong seismic event will occur doubles with the doubling of the SR. The SR can therefore be used as an independent and quantitative tool to spatially forecast seismicity.

The results of this study agree with these former studies, especially for the detection of the high strain rate along the Central and Southern Apennines axis and in Northern Sicily [19] (in the Po Plain, the SR is lower than in the buried and seismic Apenninic units because of its attenuation in the plastic Neogene sedimentary cover). Outside these areas, no shallow significant earthquakes have occurred since 1990, even though strong events had occurred there after 1940, such as Friuli (M 6.5, 6 May 1976), Western Sicily (Mw 6.4, 15 January 1968), and Valais (M 6.1, 25 January 1946). In addition to these high strain rate areas that have been hit by strong earthquakes, there are others that, while showing equally high strain rates, have not been hit by relevant earthquakes since 1990. In recent years, only areas characterized by high strain rates have been affected by significant earthquakes; therefore, it is not unreasonable to empirically hypothesize that the significant seismic events of the next decades have a greater chance of occurring in the areas characterized by high strain rates. The year 1990 is taken as a milestone because, with the survey beginning shortly after, earlier earthquakes do not influence the data.

Strain Rate in Irpinia

Irpinia is one of the main areas of the core of the Central and Southern Apennines chain. The differential movements between the two blocks into which the Italian peninsula is divided imply a mean strain rate of 50 nstrain/a [5]. Here, the main fault systems are the Ufita, Monte Marzano, and Caggiano faults [22] (Figure 2 (top)). However, deformation is also linked to faults with a highly different orientation, well constrained in the historical record. On this issue, it is interesting to note that the focal mechanism solutions of the 1930 and 1962 earthquakes are significantly different from the kinematics of the typical large earthquakes that occurred along the crest of the Southern Apennines; these are instead well fitted by the Mw 6.9 23 November 1980 earthquake, caused by predominant normal faulting along NW-SE-striking planes. The fault linked to the Mw 6.7 23 July 1930 earthquake is blind, and its magnitude and focal mechanism are debated ([23] and references therein). Many focal mechanisms have been proposed, from a "classical" NW-SE to an ESE-WNW striking plane. These belong to an array of oblique dextral slips on the EW-trending planes crossing the whole of the Southern Apennines, which is dissecting the orogen into various contiguous sectors. The level of the transcurrent component is debated as well. However, the effects of the earthquake presented in [24] fit better with a NW-SE striking fault. The 1962 sequence is composed of three different shocks at 18:09, 18:19, and 18:44 UTC, the second being the most destructive (Io IX MCS, Mw 6.1, [25]). Additionally, the identification of the faults responsible for these earthquakes is difficult because of the lack of reported surface faulting. Only in 2016 was a reliable focal mechanism produced [25], with two solutions: a dominant strike-slip rupture along a north-dipping, E-W striking plane, or along a west-dipping, N-S striking plane. Its depth is still controversial, varying between 7 and 35 km.

Irpinia is one of the areas in Italy showing a higher strain rate (Figures 4 and 5) during the 1991-2011 InSAR survey: currently, north of it, in the Sannio sector, the strain rate is at a low level, with a value of 20 nstrain/a 10 km north of Benevento. However, to the SE of Benevento, the value increases to 35 nstrain/a in less than 30 km at Grottaminarda, reaching the highest levels (48 nstrain/a) 15 km south of the epicenter of the 23 November 1980 earthquake.
Therefore, Irpinia is still currently one of the areas with a higher strain rate in Italy, with values always >32 nstrain/a, and with a maximum over the hanging wall of the Monte Marzano fault system. Southward, the strain rate dramatically drops to 35 nstrain/a near Polla. However, while north of Irpinia, along the chain axis, the value drops rapidly under 30 nstrain/a, southward the values remain above this level for much longer, up to the Pollino line (the border between the Central Apennines and the Calabria-Peloritani arc). In the picture of the EW-trending lithospheric faults dissecting the Apenninic orogen, these sudden strain rate drops north and south of Irpinia can be related to different strain rate conditions occurring in the adjacent sectors.

Since the strongest shallow events that occurred inland in Italy from 1990 to today are placed only in areas characterized by high strain rates [17], the high strain rate detected in Irpinia implies, from a theoretical point of view, a scenario where a new strong earthquake seems more likely. This can be somewhat counterintuitive, because this area hosted most of the strongest earthquakes in southern Italy after 1908, in 1930, 1962, and 1980: only the Mw 6.4 1968 Belice and the Mw 6.0 1978 Patti gulf events (both in Sicily) reached similar magnitudes [26]. Only in the NE Sicily 1978 earthquake was the strain rate as high as in Irpinia. Therefore, from this point of view, we can hypothesize that in Irpinia, the probability of a new strong event is still very high.

The Historical Record of Earthquakes in Irpinia

Additionally, historical seismicity can support this somehow unexpected statement, given the time intervals between Irpinian earthquakes.
"Irpinia" is a historical-geographical area of southern Italy, located in the Campania region, approximately corresponding to the territory of the current province of Avellino, which in turn, largely recalls the historic province of Principato Ultra of the Kingdom of Naples. The Irpinia area is one of the most seismically active sectors of the entire Italian territory. The seismogenic belt that runs along the Apennine chain, in fact, crosses the northern and eastern part of the province of Avellino, where strong earthquakes have frequently occurred over centuries. The most important historical seismic events are placed in the hanging wall of the Monte Marzano fault system [22]. If we take a polygon with vertices at the coordinate points 41.314 • N, 14 Figure 6 (bottom)), corresponding to the Apennine seismic belt site of the major historical and instrumental seismicity, the parametric catalog of Italian earthquakes CPTI15 [26] reports about twenty earthquakes with magnitude Mw ≥ 5.0, starting from the year 1000 (see Table 1). Of these, seven have a Mw between 6.0 and 6.8. It must be said that the catalog can be considered complete, for the strongest events (Mw ≥ 6.0) only for the last 400 years, namely from 1620 up to today [27]. From the diagram in Figure 6 (top), it can be seen that until the end of the 17th century, the seismic history of the Irpinia sector is largely incomplete and poorly documented. This, obviously, is not because there were no earthquakes at all, but because only little and partial historical information about that area for those ancient periods exists today. Only a couple of earthquakes are known (in 1466 and 1517) to have occurred in this period, plus two events before the year 1000, which occurred in the year 989 and 62 CE [28]; thus, outside the reference window of the historical catalog. Both these events originated from the monte Marzano Fault [29]. The earthquake of 5 December 1456 [30] was deliberately not taken into consideration in the present study, because it is a complex event that affected a very large area of southern Italy, causing damage from Puglia to Abruzzo, and whose epicenter is not well located nor defined. Probably, that earthquake was made up of several shocks that occurred in different sectors of the central-southern Apennines a few days apart, and Irpinia was only one of the several areas that were struck [28]. Table 1. List of the main Irpinia earthquakes (Mw > 5.0) extracted from the CPTI15 catalog [26]. For the description of the various parameters see this catalog at https://emidius.mi.ingv.it/CPTI15-DBMI15/index_en.htm (accessed on 30 January 2021) As attested by Rovida et al. [27], this historical record can be considered complete since 1620 for M 6.0+ earthquakes. A lack of seismic events in the historical record for a given area can be due to the following reasons: Year Mo Da (a) an area of genuinely low long-term seismicity; (b) either the incompleteness or a too-short time-span of the earthquake catalog; (c) a quiescent period in an area characterized by temporal clustering, followed by a long recurrence interval [31]. The seismic history of Irpinia is better documented, starting from the end of 1600, and as minor events (4.0 ≤ Mw < 5.0) can be considered well documented only starting from the end of the 19th century ( Figure 6) (b) either the incompleteness or a too-short time-span of the earthquake catalog; (c) a quiescent period in an area characterized by temporal clustering, followed by a long recurrence interval [31]. 
The seismic history of Irpinia is better documented starting from the end of the 1600s, while minor events (4.0 ≤ Mw < 5.0) can be considered well documented only starting from the end of the 19th century (Figure 6).

From its seismic history, it can also be seen that, over the 400-year time-span of seismic catalog completeness for M ≥ 6.0 events, the strongest Irpinian earthquakes (Mw ≥ 6.0) tend to group over time, separated by long phases characterized by lower and less frequent seismicity (Figure 6 (top and bottom)). At the turn of the seventeenth and eighteenth centuries, over a period of 40 years, Irpinia was affected by four damaging earthquakes, three of which occurred in just 10 years (1692, 1694, 1702, and 1732). Of these, the ones that occurred in 1694 (considered as a sort of twin of the 1980 earthquake), in 1702, and in 1732 were large events of Mw > 6.5. Each of these caused extensive destruction over large areas and many casualties. Another cluster of strong earthquakes is the one that hit the sector in the twentieth century, between 1930 and 1980 (three events with Mw ≥ 6.0 over a period of 50 years). So, Irpinia belongs to the belt of very high seismic hazard running along the Central and Southern Apennines (Figure 7).

Figure 7. Map of seismic hazards in Irpinia (see [32]) and adjoining areas (colors in the background), derived mostly from the historical seismic records, as shown by the overlapping of strong seismic events within the map itself. The main towns of the area are drawn and labeled with dark blue squares, and the main faults (see Figure 2 (top)) are represented with black hashed lines.

It is unlikely that in the 200-year time-span between 1732 and 1930 there were large (M 6.0+) earthquakes in the Irpinia area, since these are not present in the historical record. In the same time interval, not only are "minor" earthquakes well documented in the very same area (i.e., the 1741 Mw 5.4, 1794 Mw 5.3, and 1853 Mw 5.6 Irpinian events; see Table 1 and Figure 5), but strong events are also well known to have struck other adjacent Apennine areas (the 1805 Mw 6.7 Matese earthquake, and those of 1851, Mw 6.5, and 1857, Mw 7.1, in Basilicata [26]).
Therefore, it can be assumed that the historical seismicity of Irpinia has been characterized by periods of intense activity, with strong earthquakes over a few years or decades, interspersed with long periods of minor-to-moderate activity, with earthquakes of magnitude lower than 6.0.

The spatio-temporal clustering of earthquakes in the Southern Apennines is well documented in the scientific literature. By comparing the number of earthquakes on record in the last five to seven centuries with the number implied by slip-rates on active normal faults averaged over 18 kyrs in the Southern Apennines, Papanikolaou and Roberts [33] demonstrated that the long history of earthquakes in the Italian Apennines may indeed contain evidence for earthquake clustering. In particular, according to Papanikolaou and Roberts [33], Irpinia and northern Basilicata show a very high number of earthquakes, and this indicates that this area may be in a temporal earthquake cluster phase. Meanwhile, the sector located slightly further south, up to the Pollino massif, could be in a temporary anti-clustering process. The strain rate map in Farolfi et al. [17], in which the Irpinia-Basilicata sector is characterized by a much higher strain rate than the Pollino sector (while the intermediate sector, Vallo di Diano, shows intermediate values), fits well with these results.

In the Central and Northern Apennines, earthquake clustering is known to exist. For example, Tondi and Cello [34] observed a time interval of ca. 350 years between the onsets of seismic clusters in the Central Apennines Fault System. The current sequence, which started in 1997 and continued with the events of 2009 and 2016, has arrived on time if we consider that two main seismic clusters in the past began in the years 1349 and 1688 [35]. In the Northern Apennines, a major seismic crisis occurred between 1915 and 1921 [36], while the historical record of this area before 1915 is composed of only a few destructive events [26]. Additionally, the two-year period 2012-2013 showed a high level of activity, not only in the area of the Emilia seismic sequence, but also in the Garfagnana sector, accompanied by a high strain rate.

Currently, the most likely explanation for seismic clustering is the "stress transfer" between faults ([36] and references therein), due to coseismic movement rearranging the Coulomb failure stress on other nearby faults [37]. However, this explanation falls short in the case of isolated, single events (such as the Mw 6.1 6 November 1599 Valnerina and the Mw 6.4 13 January 1832 Valle Umbra earthquakes) that did not trigger a level of Coulomb stress transfer resulting in strong earthquakes on other neighboring faults. For other researchers, there is a sort of "domino effect" between the crustal blocks that make up the Apennines [38]. In conclusion, we suppose that the deformation rate value, as described in Farolfi et al. [18], represents a conditio sine qua non for the occurrence of strong earthquakes (M > 5.5). This hypothesis was corroborated by the observation that all the strongest shocks of the last three decades in the Italian territory are located in areas characterized by a high rate of deformation [17].

Conclusions

In the last twenty years, the main shallow earthquakes (depth ≤ 15 km) in Italy and the Alps have occurred only in some of the high horizontal strain rate zones, as depicted by Montone and Mariucci [15].
Meanwhile, the strain rate is currently low in other areas affected by strong earthquakes that occurred before the 1990-2012 survey, such as Belice (1968) and Friuli (1976), even though these areas hosted some of the largest seismic events from 1915 onward. The area of the 1915 Marsica earthquake also shows lower-than-surroundings strain rate values, as do the Central and Eastern sections of the Northern Apennines (in the Western sector, higher seismicity barely corresponds to a slightly higher strain rate). In this picture, the high strain rate level indicates that the scenario of a new strong shock in Irpinia is not unlikely. Additionally, the historical record is in agreement with this, given the short temporal distance between strong (M 6+) seismic events in Irpinia during the 400 years of catalog completeness (i.e., from 1620 to the present), and a long 200-year period, between 1732 and 1930, without M 6+ seismic events. Moreover, by merging historical seismicity and InSAR satellite data, we think that, in the future, hypotheses related to the following scenarios should be explored:
• the short time gap between strong events in Irpinia during the 1694-1732 and 1930-1980 periods is linked to periods of continuously high strain rates;
• conversely, the long seismic gap (a lack of strong seismicity) between 1732 and 1930 could have originated from a strain rate drop after the 1732 earthquake.
Analyzing Sustainable 3D Printing Processes: Mechanical, Thermal, and Crystallographic Insights

In this study, the objective was to optimize energy consumption in the fused deposition modeling (FDM) 3D printing process via a detailed analysis of printing parameters. By utilizing thermal analysis techniques, this research aimed to identify lower printing temperatures that could lead to reduced energy usage. Experimental analysis was conducted using a three-level L9 Taguchi orthogonal array, which involved a systematic combination of different extruder temperatures and cooling fan capacities. Furthermore, the research incorporated differential scanning calorimetry (DSC) and X-ray diffraction (XRD) methods to analyze the thermal properties and crystallinity of the 3D-printed specimens. The results indicated that temperature was a key factor affecting crystallinity, with samples printed at 190 °C and 60% fan capacity showing the highest mean values. By conducting a multi-objective desirability analysis, the optimal conditions for maximizing ultimate tensile strength (UTS), tensile modulus, and elongation at break while minimizing energy consumption for PLA 3D-printed samples were determined to be a temperature of 180 °C and a fan speed of 80%.

Introduction

Additive manufacturing (AM) encompasses a group of processes that facilitate the incremental production of objects based on three-dimensional (3D) model data, layer by layer. This is in contrast to subtractive manufacturing, the predominant method in most manufacturing operations. An advantage of AM lies in its ability to create functional components with complex geometries that prove challenging to produce using traditional methods. Fused deposition modeling (FDM), among various AM techniques, stands out as a highly popular and extensively utilized method for producing parts from plastic materials [1][2][3]. Various factors within the FDM process can impact the mechanical properties of 3D-printed parts. For instance, parameters such as layer thickness [4][5][6], infill density, printing temperature, and filament type [7][8][9][10][11][12][13] all play a significant role in determining the mechanical characteristics of the produced object. Post-processing treatments can also considerably influence the mechanical behavior of 3D-printed parts [14][15][16][17]. On the other hand, the sustainability and energy efficiency of AM have become increasingly critical considerations in contemporary industrial practices. Numerous scientific studies [5,6,18-24] have concentrated on this aspect of AM technology, aiming to establish a compromise between energy efficiency and mechanical strength. Shifting focus back to the correlation between production costs and the fabrication of printed components, the current emphasis, as highlighted in reference [6], is on optimizing energy efficiency and cost reduction within the manufacturing process, particularly in the domain of material extrusion additive manufacturing. This optimization is essential for achieving both sustainability and cost-effectiveness in production. Notably, MEX (material extrusion) 3D printing consistently demonstrates its capability to produce high-quality parts, especially when utilizing expensive high-performance polymers with applications in the biomedical, automotive, and aerospace sectors. The referenced paper explores the effect of three crucial parameters (layer thickness, nozzle temperature, and printing speed) on the energy consumption (which directly influences production costs) and the tensile strength of 3D-printed PEEK (polyetheretherketone) parts. The dominant parameter influencing both the tensile strength and the printing energy consumption was found to be layer thickness.
The research outlined in [18] extensively investigates the impact of seven universal and machine-agnostic 3D printing configurations on both the energy usage and the mechanical properties of parts manufactured from PLA (polylactic acid) using the MEX 3D printing method. The results underscored that printing speed and layer thickness had the most substantial impact on energy consumption. Additionally, infill density and orientation angle were identified as the primary factors influencing compressive strength.

Furthermore, in a related study [19], statistical modeling tools were employed to evaluate various metrics associated with compression and energy consumption in 3D printing. These metrics encompassed printing time, weight, energy printing consumption, specific printing energy, specific printing power, compression strength, compression modulus of elasticity, and toughness. Among the analyzed factors, layer thickness emerged as the most influential control parameter, while nozzle temperature and raster deposition angle exhibited a lesser impact on the outcomes.

Study [20] quantifies the energy consumption for producing 3D-printed PLA parts, providing valuable insights for assessing the process's sustainability. Six machine-independent parameters with three levels were analyzed, revealing a significant impact on energy consumption, with differences of up to 250% among the studied cases. The study also evaluated the effect of these parameters on the flexural strength of PLA parts, finding substantial variations of up to 300%. Although efforts were made to identify optimal 3D-printing settings for balanced performance, achieving both minimal energy consumption and high flexural strength proved challenging. High layer thickness values were associated with minimum energy consumption, although they produced parts with average flexural strength.

Vidakis et al. [21] investigated the impact of six control factors (infill density, raster deposition angle, nozzle temperature, print speed, layer thickness, and bed temperature) on the energy performance and mechanical properties of poly(methyl methacrylate) in 3D printing. The study found that raster deposition angle and printing speed were the most influential parameters for tensile strength, while layer thickness and printing speed significantly affected energy consumption. Quadratic regression models were developed for each response metric, allowing the identification of an optimal balance between energy efficiency and mechanical strength in engineering applications. Similarly, article [22] addresses the contemporary industrial demand for sustainability and energy efficiency in AM, particularly focusing on the need for 3D-printed parts with strong mechanical properties, especially in high-performance polymers such as the polycarbonates used in MEX. The study investigates the impact of seven control parameters on the energy consumption and compressive performance of polycarbonate in MEX AM. Using a three-level L27 Taguchi experimental design with 135 experiments, the research identifies layer thickness and infill density as the most influential factors in energy consumption, while infill density and orientation angle significantly affect compressive strength.
PLA filament is widely favored in the 3D printing market due to its affordability, smooth printing operation, and satisfactory physical properties [25]. It is crucial to carefully design and control the PLA crystallization temperature as an essential parameter. Throughout the processing conditions, changes in the PLA crystallization temperature can influence the formation of specific crystalline structures in PLA, favoring one structure over another [26].

The correlation between crystallinity and mechanical performance in 3D-printed materials is a critical aspect of understanding how the structural arrangement of a material at the molecular level influences its mechanical properties. The degree of crystallinity in a 3D-printed object can significantly impact its strength, stiffness, and other mechanical characteristics. In [27], a positive correlation was observed between crystallinity data and impact strength data, indicating that higher crystallinity resulted in enhanced impact strength.

Ansari et al. [25] found that crystallinity has an inverse relationship with print speed, with higher crystallinity achieved at lower print speeds. Additionally, infill density plays a significant role in influencing crystallinity. A lower infill density of 50% is associated with higher crystallinity, but this decreases significantly when the infill density is increased to 75%. Notably, both the grid and trihexagonal patterns exhibit higher crystallinity, while the triangular pattern is linked to the lowest crystallinity. Magri et al. [1] conducted research on the mechanical properties of carbon fiber-reinforced PLA composites. They found that the inclusion of fiber reinforcement led to enhanced tensile properties. The maximum tensile strength was achieved at the highest nozzle temperature (230 °C). Additionally, their findings indicated that annealing increased crystallinity, and that a lower cooling rate resulted in a higher crystalline structure in the polymer.

The literature extensively underscores that the quality and performance of 3D-printed parts are inherently tied to the intricacies of the chosen process parameters. The nuanced interplay of variables such as layer thickness, printing speed, and infill density plays a pivotal role in shaping the final outcome. This realization is of paramount importance in the landscape of AM, where precision and efficiency are continually optimized. Recognizing the multifaceted impact of these parameters on mechanical properties and energy consumption, the literature emphasizes the ongoing need for systematic investigation and optimization, particularly in the realm of AM.

In the context of this imperative, the present work assumes significant relevance. The primary objective was to derive optimized printing parameters that not only ensure comparable mechanical performance but also contribute to a reduction in energy consumption. This dual focus on performance and energy efficiency is vital for leading additive manufacturing towards a sustainable and cost-effective trajectory. As industry increasingly prioritizes environmental responsibility, achieving this delicate balance becomes imperative for the widespread adoption and continued growth of 3D printing technologies.
Moreover, this exploration delves into the realm of material science via the application of differential scanning calorimetry (DSC) and X-ray diffraction (XRD). By utilizing DSC to evaluate the crystallography of both the colored thermoplastic aliphatic polyester filament and the 3D-printed specimens, the study extends beyond conventional analyses. Establishing a direct correlation between crystallinity and mechanical properties offers a profound understanding of how the molecular structure influences the final product's strength, stiffness, and other mechanical characteristics. This linkage between crystallinity and performance is a critical aspect in adjusting printing parameters to control the crystalline structure of the material.

The use of DSC and XRD provides nuanced insights that extend beyond traditional mechanical testing. The targeted adjustments informed by crystallinity data offer a pathway to enhance not only the mechanical performance of printed objects but also their sustainability. This dual optimization aligns with the evolving expectations of industries seeking to embrace 3D printing as a versatile and environmentally responsible manufacturing method.

By emphasizing the crucial relationship between process parameters, mechanical performance, and energy consumption, the present study contributes valuable insights into the global perspective of sustainable manufacturing. The integration of advanced analytical techniques, such as DSC and XRD, improves the understanding of material behavior, allowing for more precise adjustments and, consequently, promoting the evolution of additive manufacturing towards higher performance and heightened sustainability.

Materials and Methods

As mentioned above, the aim of the present work was to identify the influence of printing parameters (extrusion temperature and fan speed) on the mechanical behavior and energy consumption of 3D-printed colored thermoplastic aliphatic polyester objects, so as to establish the optimal solution using the steps presented in Figure 1.

The performed investigation begins with an analysis of the thermal properties of the PLA filament (using the characteristics given in Table 1) via thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC). The specific filament used was Polymaker PolyTerra (Utrecht, The Netherlands) with a 1.75 mm diameter.
The next step involves establishing the printing parameters, followed by the fabrication of tensile test specimens. These specimens are then subjected to tensile testing, and the experimental data (tensile properties and energy consumption) are collected. Using these data, a statistical analysis is performed, focusing on the input parameters (fan speed and printing temperature), with the aim of identifying the optimal settings that minimize energy consumption while maximizing the tensile properties. Finally, conclusions are drawn from the interpretation of the results, which should ideally provide the optimal 3D printing parameters balancing energy efficiency with material performance.

DSC Analysis

In order to ensure the stability of the colored thermoplastic aliphatic polyester polymer and prevent any degradation that could potentially influence the experimental results, the temperatures used in the extrusion processes were evaluated for their impact on the thermal properties. The crystallization behavior of the material was analyzed with a DSC 3+ Star system from METTLER TOLEDO (Leicester, England) under a N2 atmosphere, between 10 and 260 °C, at 10 °C/min.

The investigation into the melting and crystallization characteristics of the material involved analyzing the endothermic and exothermic peaks in the DSC thermograms. The degree of crystallinity (X_c) was determined from the DSC thermogram using the equation [25]:

$$X_c = \frac{\Delta H_m}{w \cdot \Delta H_m^{100\%}} \times 100\%$$

where ΔH_m is the measured melting enthalpy of the PLA specimen [J/g], ΔH_m^100% is the melting enthalpy of 100% crystalline PLA (93.7 J/g [25]), and w is the mass fraction of PLA in the analyzed sample.
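As a minimal illustration of the equation above (the enthalpy value below is a placeholder, not a measurement from this study), the crystallinity calculation reduces to a few lines:

```python
PLA_MELT_ENTHALPY_100 = 93.7  # J/g, melting enthalpy of 100% crystalline PLA [25]

def dsc_crystallinity(delta_h_m: float, w: float = 1.0) -> float:
    """Degree of crystallinity X_c [%] from the measured melting enthalpy.

    delta_h_m: measured melting enthalpy of the specimen [J/g]
    w: mass fraction of PLA in the sample (1.0 for neat PLA)
    """
    return delta_h_m / (w * PLA_MELT_ENTHALPY_100) * 100.0

# Placeholder value: a neat PLA sample with a measured melting enthalpy of 30 J/g
print(f"X_c = {dsc_crystallinity(30.0):.1f} %")  # -> X_c = 32.0 %
```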
3D Printing Process

The printing process utilized the Raise E2 3D printer, characterized by a build volume of 330 × 240 × 240 mm. The particular printing parameters employed in this study are detailed in Table 2, with the build orientation set as X-Y model lines oriented at a 45° angle. Extruder temperature and fan speed were the variable parameters. The extrusion process parameters for the samples were precisely defined by setting the extruder temperatures at 160 °C, 170 °C, and 180 °C. These values were chosen based on the thermodynamic analysis of the filament (given in Supplementary Material Figure S1: TGA curve for the filament), which presented a glass transition temperature (Tg) of 62.85 °C, a cold crystallization temperature (Tc) of 112.40 °C, and a melting temperature (Tm) of 160.34 °C (Figure 2). Therefore, the authors chose the range of extrusion temperatures between the melting temperature obtained from the DSC analysis (160.34 °C) and the extrusion temperature indicated by the filament provider (210 °C), presented in Table 1. The selected temperatures are intentionally maintained below 210 °C to achieve a dual objective: firstly, to ensure the thermal integrity of the filament during the extrusion process, and secondly, to enhance the energy efficiency of the system by minimizing the power consumption required for extrusion.

The geometrical characteristics and dimensions of the specimen are shown in Figure 3a, and the printed specimens are presented in Figure 3b.

Energy Consumption Measurement

Figure 4 shows the electrical diagram used to measure the active energy consumed by the 3D printer. The equipment used in the scheme from Figure 4 is as follows:
• K, voltage disconnect switch;
• F 10A, fuse;
• active energy meter: a 3-phase digital power clampmeter PeakTech P1660 (Distrelec, Vienna, Austria) with an active energy measurement function, mounted on a single phase (see Figure 5).

To connect the power clampmeter P1660 in a single-phase circuit in order to measure the active energy, only two voltage test leads are connected: one on the phase on which the current is measured (named Yellow in Figure 5) and the other on the neutral conductor.
The power clampmeter PeakTech P1660 measures the AC voltage V, frequency f, current I on the phase conductor surrounded by the amperometric clamp, active power P, reactive power Q and apparent power S, power factor cosφ, as well as the active energy Wa consumed by the load.

By definition, the active energy is the integral of the active power, calculated for a time interval [t1, t2]:

$$W_a = \int_{t_1}^{t_2} P(t)\, dt$$

and the active power is

$$P = V \cdot I \cdot \cos\varphi$$

where V and I are the root-mean-square (RMS) values of the voltage and current intensity, and cosφ is the power factor. The voltage and current values measured by the clampmeter, which enter the calculation of power and energy, are true RMS values, which indicates better measurement accuracy, considering that voltages and currents are not always perfectly sinusoidal.

XRD Analysis

In order to study the structure of the analyzed samples, a D8 Advance diffractometer (Bruker-AXS, Karlsruhe, Germany) with Cu-Kα radiation (λ = 1.54 Å) was used. The tests were performed on fragments of the 3D-printed specimens in the measurement range (2θ) of 10-70°. The equipment, with a θ-θ configuration and Bragg-Brentano geometry, was operated at 40 kV and 40 mA, with the following scanning conditions: step 0.1° and scan speed 0.1°/5 s. The measurements were carried out via the XRD Commander software, and the raw files were obtained. The DIFFRAC.EVA v14 software and the PDF-ICDD database were used for the qualitative interpretations, and the Rietveld refinements (quantitative interpretations) were run in the TOPAS 4.1 software. Although the X-ray spectra show the same trend, a difference can be seen depending on the temperature and the cooling rate.

The degree of crystallinity X_C from the XRD spectra was calculated using the equation

$$X_C = \frac{A_{crys}}{A_{crys} + A_{amorph}} \times 100\%$$

where A_crys is the fitted area of the crystalline phase and A_amorph is the fitted area of the amorphous phase [28].
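Both the active-energy integral and the crystallinity ratio defined above reduce to a few lines of code. The sketch below (all numeric values are illustrative placeholders, not data from this study) integrates logged power samples with the trapezoidal rule and evaluates the XRD crystallinity from fitted peak areas:

```python
import numpy as np

def active_energy_wh(t_s: np.ndarray, v_rms: np.ndarray,
                     i_rms: np.ndarray, cos_phi: np.ndarray) -> float:
    """Active energy Wa [Wh] from sampled RMS voltage, current, and power factor."""
    p_w = v_rms * i_rms * cos_phi          # instantaneous active power P = V*I*cos(phi) [W]
    return np.trapz(p_w, t_s) / 3600.0     # integrate over time [s], convert J -> Wh

def xrd_crystallinity(a_crys: float, a_amorph: float) -> float:
    """Degree of crystallinity X_C [%] from fitted crystalline and amorphous areas."""
    return a_crys / (a_crys + a_amorph) * 100.0

# Placeholder log: 1 Hz samples over a 30-minute print at ~230 V.
t = np.arange(0, 1800.0, 1.0)
v = np.full_like(t, 230.0)
i = np.full_like(t, 0.8)
pf = np.full_like(t, 0.95)
print(f"Wa  = {active_energy_wh(t, v, i, pf):.1f} Wh")
print(f"X_C = {xrd_crystallinity(12.5, 30.0):.1f} %")
```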
Tensile Testing

The 9 groups of specimens, each containing 3 samples (printed with a 0.2 mm layer thickness, 50% infill percentage, and 2 shells), underwent mechanical testing to establish the main material characteristics, including the ultimate tensile strength (UTS), elongation at break (A), and tensile Young's modulus (E). Tensile tests were conducted using an electromechanical machine equipped with a 2.5 kN force cell, operating at a speed of 5 mm/min, and the elongation at break was measured with an axial extensometer (see Figure 6).

Design of Experiments and Optimization

To optimize the 3D printing process and enhance the tensile properties of the printed materials while minimizing energy consumption, a systematic approach utilizing the Design of Experiments (DOE) was employed. The primary objective was to maximize the tensile properties (UTS, ultimate tensile strength; E, Young's modulus; and A, elongation at break), encompassing strength and flexibility, while concurrently minimizing the energy consumed during the printing process. The factors under investigation were extrusion temperature and fan speed, recognized as pivotal parameters influencing the printing outcome. A full factorial experimental design was chosen to comprehensively explore all possible combinations of factor levels.

The experiments involved 3D printing runs for each combination of extrusion temperature and fan speed, with subsequent measurement and recording of the tensile properties, alongside monitoring of the energy consumption for each run. This meticulous data collection aimed to establish a robust understanding of the relationships between the chosen parameters and the desired outcomes.

The subsequent statistical analysis, employing tools such as Analysis of Variance (ANOVA), facilitated the identification of significant factors. This step was crucial in discerning which of the variables (extrusion temperature and fan speed) significantly influenced the tensile properties and the energy consumption. Upon identifying these influential factors, optimization techniques, such as desirability analysis, were applied to determine the optimal combination of extrusion temperature and fan speed that would yield the maximum tensile properties while concurrently minimizing energy consumption; a minimal sketch of this analysis step is given below.
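The sketch below illustrates the factorial design and ANOVA step just described, using the statsmodels formula API; the factor levels and response values are hypothetical placeholders standing in for the study's measured data tables:

```python
import itertools
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Full factorial design: 3 temperature levels x 3 fan-speed levels, 3 replicates each.
temps = [170, 180, 190]   # degrees C (levels assumed here for illustration)
fans = [60, 80, 100]      # % fan capacity
rows = []
for temp, fan in itertools.product(temps, fans):
    for rep in range(3):
        # Placeholder response, not measured data.
        uts = 40.0 - 0.02 * abs(temp - 180) + 0.01 * fan + 0.1 * rep
        rows.append({"temp": temp, "fan": fan, "uts": uts})
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction: which factor significantly affects UTS?
model = smf.ols("uts ~ C(temp) * C(fan)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```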
To enhance sustainability in manufacturing, a decision-making method is essential, capable of concurrently addressing requirements related to energy consumption and mechanical properties. In this study, the desirability approach is applied to optimize the printing parameters (temperature and fan speed), ensuring a simultaneous consideration of objectives related to both energy efficiency and mechanical performance. Desirability analysis provides values within a range of zero to one, with one indicating the highest level of suitability.

For the case when the importance is the same for each response, the composite desirability D is calculated with the formula [29]:

$$D = \left( \prod_{i=1}^{n} d_i \right)^{1/n}$$

where n is the number of responses and d_i represents the desirability of each individual response, calculated (for the case when the goal is to maximize the response) as [29]:

$$d_i = \begin{cases} 0, & y_i < L_i \\ \dfrac{y_i - L_i}{T_i - L_i}, & L_i \le y_i \le T_i \\ 1, & y_i > T_i \end{cases}$$

When the objective is to minimize the response, the formula used is [4]:

$$d_i = \begin{cases} 1, & y_i < T_i \\ \dfrac{U_i - y_i}{U_i - T_i}, & T_i \le y_i \le U_i \\ 0, & y_i > U_i \end{cases}$$

Here, y_i, T_i, L_i, and U_i represent the predicted value, target value, lowest value, and upper value, respectively, of the analyzed response.
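A minimal sketch of the composite desirability computation defined above (linear desirability, equal importance for each response; the response values, targets, and bounds are illustrative placeholders):

```python
import numpy as np

def d_maximize(y: float, low: float, target: float) -> float:
    """Individual desirability for a larger-is-better response."""
    if y < low:
        return 0.0
    if y > target:
        return 1.0
    return (y - low) / (target - low)

def d_minimize(y: float, target: float, upper: float) -> float:
    """Individual desirability for a smaller-is-better response."""
    if y < target:
        return 1.0
    if y > upper:
        return 0.0
    return (upper - y) / (upper - target)

# Placeholder responses for one (temperature, fan speed) combination:
d_uts = d_maximize(y=42.0, low=35.0, target=45.0)     # UTS [MPa]
d_mod = d_maximize(y=3.1, low=2.5, target=3.5)        # Young's modulus [GPa]
d_elo = d_maximize(y=4.2, low=3.0, target=5.0)        # elongation at break [%]
d_en = d_minimize(y=95.0, target=80.0, upper=120.0)   # energy consumption [Wh]

# Composite desirability: geometric mean of the individual desirabilities.
D = np.prod([d_uts, d_mod, d_elo, d_en]) ** (1 / 4)
print(f"D = {D:.3f}")  # the parameter combination with the highest D is selected
```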
Thermal Characteristics of the Samples Evaluated by DSC Analysis

The thermal characteristics of the samples evaluated by DSC analysis are presented in Figure 7 and Table 4. All the samples exhibited three main phase transitions, corresponding to the material's glass transition temperature (Tg), cold crystallization temperature (Tc), and melting point (Tm). As can be seen, regardless of the printing temperature and cooling rate, all experiments presented an endothermic glass transition peak (Tg) around 63 °C, similar to the filament sample.

The cold crystallization process indicates the crystallization of a material during heating and is a typical behavior for aliphatic polyesters such as PLA, which, due to the structure of its macromolecules, does not crystallize easily [30]. The broad exothermic peaks associated with the cold crystallization temperature (Tc) are observed within the temperature range of 118-124 °C. At a fan speed of 60%, the cold crystallization temperature Tc shifts to lower temperatures, from 124.89 to 118.99 °C, as the printing temperature increases from 170 to 190 °C, due to the increased mobility of the molecular chains throughout the heating cycle [13]. Moreover, experiment no. 1 exhibited a second small exothermic crystallization peak at 140.66 ± 0.12 °C, which means that the low printing temperature of 170 °C and the low fan speed of 60% caused the incomplete melting of the PLA, so unmelted crystals were remelted and recrystallized during the DSC analysis [31]. A further increase in fan speed leads to similar behavior for experiments 4-9, namely, the crystallization temperature increases with the printing temperature from 160 to 180 °C, followed by a decrease in Tc at the printing temperature of 190 °C. This can probably be attributed to insufficient time for the macromolecular chains to reorganize and form crystals at the temperature of 190 °C [32]. It should be mentioned that, regardless of the fan speed, the lowest values of the Tc temperature were recorded at a temperature of 190 °C.

Compared with the filament sample, which presents one sharp melting peak (Tm) at 160.34 ± 0.22 °C, all the printed samples exhibit a broad peak with a maximum around 160 °C. In addition, for experiment nos. 7, 8, and 9, two melting peaks, around 154 °C and 160 °C, were observed. According to Y. Xu et al. [31] and B. Ma et al. [33], the double melting peak is a typical behavior for PLA, where the weaker and irregular crystals melt first and change into α crystals, after which the formed α crystals melt, resulting in the second Tm peak. These results are consistent with the data obtained from the X-ray diffraction analysis, as seen in Table 4.

XRD Analysis

Figure 8 shows the XRD patterns of the 3D-printed samples. The X-ray diffraction investigations show a change in the degree of crystallinity depending both on the temperature and on the rate of cooling (Figure 8). The semi-crystalline nature of the colored thermoplastic aliphatic polyester is revealed by a broad peak (a hump effect) in the range of 10-25° 2θ.

Based on the experimental data and an adequate structural model, the lattice parameters were refined using the Rietveld method and the Topas 4.1 program. From the many crystal structures of the four polymorphs (α, α', β, and γ) of the colored thermoplastic aliphatic polyester solved over time, starting from 1968 with the study of De Santis and Kovacs [34], it was assumed that an orthorhombic lattice similar to that reported by Aleman et al. [35] was obtained in our experiments. The lattice parameters a, b, and c, refined for the studied samples, were in the ranges a = 10.47-10.75 Å, b = 6.41-6.59 Å, and c = 27.50-28.14 Å (see Table 5), in good agreement with the literature [28,35-38]. The basic configuration of the colored thermoplastic aliphatic polyester crystal (α-phase) structure consists of two 10_3 helices packed in an orthorhombic unit cell.

The significant difference between the values of the crystallinity degree resulting from DSC and XRD, respectively, can be explained by the fact that the XRD technique scanned the sample surface, while the DSC technique evaluated the bulk crystallinity. The extrusion of the PLA material in the printing process has modeled the morphology of the surface by flattening it and, as a result, the orientation led to higher crystallinity.
The values of the crystallinity degree calculated from the X-ray spectra show that the degree of crystallinity tends to decrease with the rise of temperature and with the rise of the cooling rate (with the exception of the 190 °C-80% sample, which shows the highest degree of crystallinity). It appears that lower printing temperatures associated with medium-to-lower cooling rates provide a higher degree of crystallinity, whereas at higher printing temperatures the cooling rate must be increased to maximize the degree of crystallinity.

Process Parameters Effect on Tensile Properties, Crystallinity and Energy Consumption

To establish a correlation between the tensile properties and the crystallinity of the material, we need to examine how the mechanical properties (such as ultimate tensile strength, Young's modulus, and elongation at break) vary with changes in its crystallinity. Based on the charts provided, the following observations can be made. Generally, there seems to be a trend where materials with higher crystallinity exhibit greater tensile strength; for instance, at temperatures and fan speeds that lead to higher crystallinity (e.g., 190 °C and 60% fan speed), the tensile strength is also higher. Young's modulus appears to have a similar correlation with crystallinity: more crystalline materials generally have a higher Young's modulus, indicating greater stiffness. The elongation at break shows an inverse trend; materials with higher crystallinity seem to have lower elongation at break, suggesting that they are less ductile and more brittle.

Regarding the crystallinity (XRD and DSC), there is a clear trend where increased temperature and decreased fan speed lead to higher crystallinity, as measured by both X-ray diffraction (XRD) and differential scanning calorimetry (DSC).

To make a direct correlation, we should compare each set of printing parameters to the respective crystallinity values, as shown in the sketch below. For example, we could compare the tensile strength at different degrees of crystallinity to see if there is a linear or non-linear relationship between them. Similarly, we can compare Young's modulus and elongation at break to crystallinity to better understand the relationship between the material structure and its mechanical behavior.
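A minimal sketch of such a comparison is given below, assuming Python with NumPy and SciPy available; the numeric arrays are placeholders standing in for the measured values in the paper's tables, not the actual data.

```python
# Hypothetical illustration: correlating crystallinity with tensile
# properties across the nine printing conditions. The values below are
# placeholders, not the measured data from this study.
import numpy as np
from scipy import stats

crystallinity = np.array([18.2, 15.4, 21.0, 14.8, 13.9, 16.5, 15.1, 14.2, 22.3])  # % (placeholder)
uts           = np.array([46.1, 45.3, 47.8, 45.0, 44.6, 46.0, 45.2, 44.9, 48.1])  # MPa (placeholder)
elongation    = np.array([5.8, 6.1, 4.9, 6.4, 6.6, 5.9, 6.2, 6.5, 4.6])           # % (placeholder)

# Pearson correlation of each mechanical property against crystallinity
for name, prop in [("UTS", uts), ("elongation at break", elongation)]:
    r, p = stats.pearsonr(crystallinity, prop)
    print(f"{name}: Pearson r = {r:.2f} (p = {p:.3f})")
```

A positive coefficient for strength and stiffness and a negative one for elongation at break would quantify the qualitative trends described above.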
These correlations are crucial for understanding how the mechanical properties of a polymer change with its crystalline structure, which can be influenced by processing conditions such as temperature and cooling. Such understanding is vital for optimizing manufacturing processes and designing materials with specific properties required for particular applications.

The results suggest that, within the tested range of fan speeds and temperatures, the 3D-printed samples exhibited relatively stable mechanical properties (UTS, A%, and E) with minor variations, very similar to samples printed at the provider's recommended temperature of 210 °C, as in the authors' previous studies [40].

Generally, the highest values of tensile modulus are obtained at 180 °C, while the lowest correspond to 190 °C, and the highest value of Young's modulus is obtained at 180 °C temperature and 100% fan speed.

Energy consumption increases with higher fan speeds and temperatures, which is important to consider in terms of cost and energy efficiency.

The obtained results regarding the crystallinity revealed that the printing temperature significantly influences this parameter: the highest values were obtained at 190 °C, while the lowest values correspond to 180 °C. It can be observed that the lowest elongation at break was obtained at 190 °C, showing that higher crystallinity tends to reduce flexibility. On the other hand, at 180 °C, the elongation at break had the highest values; this result supports the statement presented before regarding the correlation between crystallinity and elongation at break. Complementary to this investigation, the authors conducted additional research into the tensile characteristics of specimens fabricated using a similar grade of polymeric colored thermoplastic aliphatic polyester. Cross-referencing with findings from study [40], employing identical parameters of infill density (50%) and layer thickness (0.2 mm), but at a printing temperature of 200 °C, marginal discrepancies can be observed, specifically a 9.81% difference in ultimate tensile strength (UTS) as compared to the UTS at 170 °C, a 9.43% difference compared to 180 °C, and a 10.8% difference relative to the 190 °C extrusion temperature. These variations suggest that a reduction in extrusion temperature marginally influences the tensile strength of the printed samples. Conversely, reference [41] indicates that, under analogous conditions of a 0.2 mm layer thickness and a 200 °C extrusion temperature, the material exhibits an approximately 20% enhancement in Modulus of Elasticity (E) in comparison to the median values reported in the present study, which may be attributed to divergent factors during printing, but is still approximately in the same range. Additionally, in the same reference [41], the UTS shows an approximate 15% decrement at an extrusion temperature of 200 °C relative to the average UTS measurements documented herein; this implies that reducing the temperature to improve energy efficiency can be successfully applied without compromising the tensile performance of the material.
Single Response Analysis

To assess the impact of the printing parameters (temperature and layer fan speed) on energy consumption and mechanical properties, Pareto charts (Figure 14) and main effects plots (Figure 15) are illustrated. It can be seen that the main factor influencing the energy consumption and the tensile modulus is the temperature, while UTS and elongation at break are influenced by temperature and fan speed almost equally. Regarding the crystallinity, the printing temperature has a great influence, as was also concluded in [25], and the main effects plot for the degree of crystallinity is similar to the graph presented in [25], indicating that the lowest value of crystallinity is obtained at the intermediate printing temperature.

Multi-Response Optimization

Optimization analysis was carried out to find the best combination of printing parameters to achieve maximum mechanical performance, considering the mechanical properties (UTS, A, E) and minimum energy consumption, as illustrated in Table 6. Table 7 shows the ranks allocated to the different options of printing parameters. In Table 7, we observe that rank 1 corresponds to the experimental condition of 180 °C temperature and 80% fan speed, while rank 2 corresponds to the same temperature (180 °C) but with a slightly lower fan speed of 60%. The difference between the corresponding desirabilities is only 0.95%. The observation of similar composite desirability scores between these two conditions may initially seem unexpected, especially considering the Pareto chart's indication that temperature is the most significant factor affecting energy consumption, tensile modulus, and crystallinity, while the impact of fan speed is less pronounced. These differences, although minor, are captured by the composite desirability analysis and reflected in the ranks assigned to each condition. Therefore, despite temperature being the dominant factor according to the Pareto chart, the combined effect of temperature and fan speed still warrants consideration in assessing the overall desirability of each experimental condition. The close ranks between conditions with similar temperatures but different fan speeds highlight the importance of comprehensively evaluating all factors to optimize the desired outcomes effectively.

The optimization plot depicted in Figure 16 presents the impact of each factor (columns) on the responses or composite desirability (rows). Vertical red lines on the graph mark the current factor settings, and the numbers in red at the top of each column indicate the present factor level settings. The horizontal blue lines, along with accompanying numbers, signify the responses associated with the current factor levels.

The optimization process yielded specific printing parameter values, namely 180 °C printing temperature and 80% fan speed, which are depicted in red on the optimization plot in Figure 16.
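For illustration, the composite desirability behind these ranks can be computed with a short script implementing the formulas recalled earlier in this section; the limit and response values below are placeholders, not the settings or measurements used in the study.

```python
# Minimal sketch of desirability-based multi-response scoring with
# equal importance. Limits (L, T, U) and responses are placeholders.
import numpy as np

def d_max(y, L, T):
    """Desirability when the goal is to maximize the response."""
    return float(np.clip((y - L) / (T - L), 0.0, 1.0))

def d_min(y, T, U):
    """Desirability when the goal is to minimize the response."""
    return float(np.clip((U - y) / (U - T), 0.0, 1.0))

# One experimental condition: UTS, Young's modulus, and elongation
# maximized; energy consumption minimized (placeholder numbers).
d = np.array([
    d_max(46.0, L=40.0, T=50.0),        # UTS [MPa]
    d_max(2500.0, L=2000.0, T=3000.0),  # Young's modulus [MPa]
    d_max(6.0, L=4.0, T=8.0),           # elongation at break [%]
    d_min(95.0, T=80.0, U=120.0),       # energy consumption [Wh]
])
D = d.prod() ** (1.0 / len(d))          # composite desirability
print(f"composite desirability D = {D:.3f}")
```

Ranking the nine temperature/fan-speed conditions by D computed this way reproduces the kind of ordering reported in Table 7.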
Conclusions

Based on the provided results, several general observations and conclusions can be made regarding the tensile properties of colored thermoplastic aliphatic polyester 3D-printed samples under different printing conditions.

The highest tensile modulus value was achieved at 180 °C temperature and 100% fan speed. This indicates that this specific combination of temperature and fan speed is favorable for obtaining the maximum stiffness or tensile modulus in 3D-printed products. Generally, higher values of tensile modulus were consistently obtained at 180 °C compared to other temperature settings. In contrast, smaller tensile modulus values were observed at 190 °C. This suggests that a temperature of 180 °C is ideal for achieving higher stiffness properties in colored thermoplastic aliphatic polyester prints, while temperatures exceeding this level may lead to reduced stiffness.

The highest elongation at break value was obtained at 180 °C temperature and 80% fan speed. This combination allowed the analyzed material to exhibit the greatest deformation or ductility before breaking. Similar to the findings for tensile modulus, higher values for elongation at break were generally associated with a temperature of 180 °C, while smaller values were observed at 190 °C.
Additionally, it was noted that the elongation at break values at 190 °C were nearly equivalent to those obtained at 170 °C. This indicates that temperatures above 180 °C may not provide significant benefits in terms of elongation at break.

The ultimate tensile strength remained relatively similar for all tested printing conditions. This suggests that the UTS of PLA is relatively consistent across the range of parameters investigated. The parameters that exhibited notable changes were the elastic modulus and the elongation at break.

Temperature significantly impacts energy consumption and tensile modulus, while fan speed is a key factor for UTS and elongation at break. On the other hand, the degree of crystallinity of 3D-printed specimens is highly influenced by the printing temperature. The examination of crystallinity via both DSC and XRD analyses reveals intricate relationships influenced by temperature and fan speed. The substantial differences observed between the DSC- and XRD-derived crystallinity values reflect the fact that XRD probed the sample surface, while DSC evaluated the bulk.
The selected temperatures are intentionally maintained below 210 °C to achieve a dual objective: firstly, to ensure the thermal integrity of the filament during the extrusion process, and secondly, to enhance the energy efficiency of the system by minimizing the power consumption required for extrusion. The geometrical characteristics and dimensions of the specimen are shown in Figure 3a, and the printed specimens are presented in Figure 3b. Figures 9-11 present the results regarding the influence of printing parameters on the mechanical characteristics of 3D-printed samples, while Figures 12 and 13 show the values of consumed energy and degree of crystallinity, respectively.

Figure 1. Flow chart diagram illustrating the stages of the performed investigation.
Figure 2. DSC analysis on the colored thermoplastic aliphatic polyester filament (first heating scan).
Figure 3. The 3D-printed specimens: (a) shape and dimensions of specimens; (b) specimens used for testing.
Figure 4. Wiring diagram for measuring the active energy consumed by the 3D printer.
Figure 8. XRD patterns of 3D-printed samples.
Figure 9. The influence of printing parameters on ultimate tensile strength of 3D-printed specimens.
Figure 10. The influence of printing parameters on Young's modulus of 3D-printed specimens.
Figure 11. The influence of printing parameters on elongation at break of 3D-printed specimens.
Figure 12. The influence of printing parameters on the energy consumed for 3D printing of specimens.
Figure 13. The influence of printing parameters on the crystallinity of 3D-printed specimens.
Table 1. Characteristics of colored thermoplastic aliphatic polyester filaments from providers' data sheets.
Table 2. Printing parameters used for specimen fabrication.
Table 4. Thermal characteristics of the analyzed samples.
Table 6. Optimization goals for analyzed characteristics.
Table 7. Composite desirability and ranks. * The highlighted line corresponds to the optimal values (having Rank 1) of printing settings.
2024-05-15T15:02:02.211Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "8e9f857ebc8a4f5144e60a8fe5d188c626f7bce0", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4360/16/10/1364/pdf?version=1715343574", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9fbbe358f6309c788ddfa7ff1aa828e9310ff8b0", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
237463590
pes2o/s2orc
v3-fos-license
Participatory Research on Environmental Health: Exploring the perceptions of family health strategy professionals

The increasing environmental degradation and the diversity of environmental issues affecting public health in Brazil have required changing the routines and norms of primary healthcare services. The aim of the current study was to explore the perceptions of family health strategy professionals about priority environmental issues associated with risks to the health of local communities. Participatory action research based on the photovoice and focus group techniques was conducted with 28 professionals from two family health strategies in the Casimiro de Abreu County, Rio de Janeiro State, Brazil. Although participants were sensitive to health risk situations associated with inadequate environmental sanitation conditions, they showed limited perception of these risks and of possible actions to be taken in order to change local issues. Professionals of the two localities did not perceive themselves as co-responsible actors for improving the environmental conditions of the territory. There is a need for contextualized environmental education actions focused on empowering and engaging health professionals, and the investigated community, to reduce health risk conditions through equal access to sanitation services.

Keywords: Environmental health; Health promotion; Health personnel.

Introduction

The increasing environmental degradation and the diversity of environmental issues affecting public health in Brazil have required changing the routines and norms of primary healthcare services. Old health risks associated with aspects such as disorderly urbanization processes, regional imbalances, non-inspection of environmental situations by responsible bodies and inadequate environmental sanitation conditions were added to new risks deriving from different technologies, industrialization
and from the use of non-renewable natural resources, which together increased the risk of spreading chemical and radioactive agents in the environment (Almeida, Reis & Araújo, 2020). The promotion of strategies focused on acknowledging environmental factors as key determinants of the health condition of individuals and communities is favored by family health units. Thus, the Family Health Strategy (FHS) emerged in the Brazilian scenario as a priority service focused on consolidating and expanding collective actions capable of encouraging the pursuit of healthy environments and sustainable development and, consequently, capable of improving the quality of life of the general population (Dias et al., 2018). Accordingly, primary healthcare professionals should encourage health practices to enable the applicability of community intervention methods focused on broadening the perception of people affected by environmental issues, as well as on empowering them to improve their environmental and health conditions (Moniz, Daher, Sabóia & Ribeiro, 2020). However, these health professionals are often unprepared to take actions aimed at promoting healthy environments and ecologically sustainable attitudes (Bruno et al., 2021). The participatory approach in the environmental health field has been adopted as a scientific methodology to help develop community learning and training practices focused on tackling and solving collective issues. However, studies involving health professionals in this process remain scarce in the Brazilian scenario (Moniz, Sabóia, Carmo & Hacon, 2017). The aim of the current study was to explore the perceptions of family health strategy professionals about priority environmental issues associated with risks to the health of local communities.

Methodology

Participatory action research based on the photovoice and focus group techniques was conducted in the Rio Dourado and Barra districts, Casimiro de Abreu County, Rio de Janeiro State, Brazil. Action research is a qualitative methodology that seeks problem-solving with the groups involved in a social intervention process (Pereira et al., 2018). This type of research was utilized to incorporate the awareness of a social problem, turning the research into a vehicle of change in which the researchers and participants were involved through group techniques (Minayo, Assis & Souza, 2016). Eight professionals of the family health strategy in Rio Dourado district and twelve professionals of the family health strategy in Barra district agreed to participate in the study. The data collection occurred from May 2017 to June 2018. Participants from each locality were asked to take pictures of the environmental issues they thought to have high priority due to risks to the health of the local community. They had one month to photograph the territory as many times as they wanted. Photos were filed and printed before the focus group meeting in order to be used in panels. The next stage comprised a focus group meeting aimed at making an environmental and health-risk diagnosis, based on the expectations and perceptions of professionals who knew the community well. Participants were identified by different numbers, based on the sequence in the group, to assure anonymity. Speeches were audio recorded.
A panel was collectively built with the pictures taken by participants, who discussed them in order to reach group consensus about the selection of a priority issue and to analyze the cause-effect-intervention relationship. Multiple data analysis was carried out through systematized triangulation, based on thematic content analysis (Minayo, Assis & Souza, 2016). The organization of data gathered during the group meetings through drawings, photo panels and speech transcriptions allowed grouping the analyses based on the Protocol for Assessing Community Excellence in Environmental Health (PACE-EH) (Centers for Disease Control and Prevention, 2000). The present study is part of the project entitled "Environmental Education and Nursing: a path to ethics, sustainability and health promotion", and was approved by the Research Ethics Committee of Federal Fluminense University (Opinion N. 1.934.809). The participants provided written informed consent.

Results

The following categories were defined based on the main issues addressed by the participants. The team of the FHS Rio Dourado was named team 01 and the team of the FHS Barra was named team 02.

Water and Sewage Problems and Risks of Infectious and Parasitic Diseases

The lack of a sanitary sewage treatment service and the small coverage of the water treatment service in the investigated community were the main issues highlighted by the participants. Rainwater drainage was not mentioned, and participants did not acknowledge this aspect as a sanitation matter when one of the moderators addressed it at the end of the group discussion. Two participants reported that the water supply for human use in Rio Dourado district derives from the water source at Mountain Farm and from artesian wells. One participant expressed his concern about the location of some wells on the grounds of the dwellings because they were very close to cesspits: When we arrive at these houses, it is possible seeing that the wells are very close to cesspits that were built in the same place... There is a place where the water in natura comes from the source, everyone drinks it and there is no water treatment there (Participant 1, team 01). Participants reported that artesian wells are often inappropriately installed, as well as that the population does not carry out chemical and microbiological water analyses. Accordingly, professionals also mentioned the risk of having the aquifers and the wells contaminated due to the discharge of domestic waste and sewage in the soil and in the river, as seen in the following speeches: All the sewage leaving the houses flows towards the river, half of the population has septic tanks, whereas the other half discharges their sewage in the river (Participant 06, team 01). During the visits, we instruct dwellers to throw chlorine into the wells, to boil the water; nowadays, nobody else boils it because the cooking gas is expensive, so it is just used to cook the food (Participant 1, team 01). One participant addressed the importance of adopting complementary traditional means of treating drinking water, such as boiling, filtering and chlorination, in the neighborhoods. However, these alternatives are not used by dwellers due to the costs of purchasing cooking gas and filters. Two professionals did not know about the existence of a water treatment network in Rio Dourado district.
In addition, no participant in the localities mentioned that the local government should monitor the chemical, bacteriological, parasitological and toxicological aspects of the aquifer and river waters through the environmental health surveillance sector. Participants were also unaware of the state's responsibility to preserve the environmental quality of the water source. Participants of team 01 reported that the means of domestic sewage destination used by the population was based on dumping sewage right into the river or in septic or rudimentary septic tanks, as seen below in participants' speeches: Half of the population living in Rio Dourado district has septic tanks, whereas the other half discharges their sewage in the river (Participant 1, team 01); Open sewage, insects and rats in the houses (Participant 04, team 01); All the sewer leaving the houses flows towards the river, this one still has a current in the gully... (Participant 06, team 01). The same situation was reported by the professionals of team 02, except for the problem of the discharge of sewage in the river: Problems caused by sewage in the tanks (Participant 05, team 02). Three professionals of team 01 and one professional of team 02 associated only the risk of developing diseases such as worm infections and dermatitis with the use of water from artesian wells and aquifers in the region. Two participants of team 01 associated the risk of developing other diseases, such as dengue and zika, with mosquito proliferation due to the incidence of stagnant and contaminated water from the river and from the well. Although one participant mentioned the contamination of water and soil by heavy metals and drugs, no causal relationship between the environmental exposure to these substances and its possible effects on people's health was established by any professional, as seen in the speech of one of them: It's not just parasites, they even throw drugs and cell phone batteries there, and it will contaminate our water tables… light bulbs in the garbage, lead (Participant 01, team 01). Risks to human health due to contact with soil and water possibly contaminated with different chemical substances and with several other infectious agents were also not perceived. Thus, participants associated the risk of groundwater and surface water contamination in the district with the lack of adequate final domestic sewage and waste disposal, rather than with the increasing use of water by local agriculture and human consumption.

Garbage Problem and Risks of Infectious and Respiratory Diseases

According to participants from both localities, household garbage collection is irregularly done by the company hired by the local government, a fact that leads to inappropriate accumulation of domestic waste in the streets and in vacant lots, as shown in the following speeches: The garbage collection situation has worsened, nowadays. I would say that garbage collection is barely done once a week (Participant 01, team 01); The garbage truck goes to Vila Verde once a week (Participant 07, team 01); Look, it's a lot of garbage to be near a commercial establishment and houses (Participant 04, team 02). A single participant from each locality mentioned that garbage accumulation in the streets favors the proliferation of mosquitoes that can be arbovirus vectors (Participant 03, team 01): Diseases caused by mosquitoes: dengue, chikungunya, zika (Participant 09, team 02).
Only one participant of team 01 and two participants of team 02 mentioned the risk of leptospirosis due to the proliferation of rodents attracted by the accumulated garbage: Leptospirosis is another effect (Participant 10, team 01). Thus, one of the alternative practices often inadequately implemented by the population lies in burning household garbage and debris in public places or in vacant and abandoned lots, as seen in the following reports: The garbage, the stench of the material, the garbage being burned, one thing leads to another (Participant 07, team 01); The garbage collection issue and residents burn it (Participant 10, team 02). Garbage burning was perceived as likely to increase the risk of people developing respiratory diseases, as seen in the following statement: The burning process is also directly linked to respiratory diseases (Participant 7). Participants only associated the garbage problem with the risk of developing dengue, leptospirosis and respiratory diseases. Apart from respiratory diseases, such as rhinitis and sinusitis, other chronic diseases were not mentioned by any participant from the two localities. This finding evidenced their insufficient knowledge about other situations, such as garbage burning and accumulation in public spaces, which can be potentially dangerous to the environment and to people's health.

Actions focused on coping with inadequate basic sanitation

In relation to social participation, professionals of both studied communities raised as a priority the need for people from the communities to begin to participate concretely in the deliberative spaces for planning and decision-making on issues related to increased sanitation coverage. The speeches below reflect this situation: Population has to walk side by side with management to function (Participant 3, team 02); The representative of the residents' association must attend the council to follow the situation of the sewage and garbage (Participant 7, team 01). In this sense, it was observed that the health professionals who participated in the current study did not perceive themselves as co-responsible actors for improving the environmental conditions of the territory; they just held the population and the local government accountable for the local environmental condition. Despite this, some family health professionals recognized that they should carry out educational actions, mainly environmental education, to enable the population to apply emancipatory primary health care practices, to improve people's knowledge about their environmental and health rights, as well as to empower them to pursue better health, environmental and life conditions. However, participants said they are not prepared to carry out such actions.

Discussion

Sanitation was selected by the participants as the priority environmental health issue in the studied territories. This is a chronic structural problem of most Brazilian counties. The lack of adequate basic sanitation conditions remains one of the main causes of poor environmental and life quality in Brazil, as well as of hospitalizations and deaths due to infectious and parasitic diseases (Ferreira, Graziele, Marques & Gonçalves, 2021). Sanitation has been highlighted as one of the fundamental pillars of the health and well-being of both humans and the planet.
The advancement and applicability of technologies focused on reducing inequality in the access to sanitation services by vulnerable populations worldwide, mainly by those living in developing countries, have been addressed in United Nations policy through the Millennium Development and Sustainable Development Goals (Zhou et al., 2018; World Health Organization, 2021). Water contamination in the Dourado River and in the aquifers of the region, through the inappropriate discharge of domestic sewage, as well as the proximity between wells and cesspits, were perceived as a worrying situation by participants. The lack of sewage treatment networks contributes to environmental degradation and to the emergence of imminent risks to human health. Household waste generates toxic residues that can contaminate different aquifer layers due to percolation and to groundwater precipitation through natural infiltration processes (Tran, Gin & Ngo, 2015). The contamination of freshwater sources poses one of the main risks to public health. Participants showed insufficient knowledge about the great diversity of microorganisms (viruses, fungi, protozoa, worms, algae and bacteria) that can be found in the water and transmitted through the fecal-oral route due to contaminated water intake, inadequate domestic and personal hygiene, and contact with insect vectors that breed in water (Ferreira et al., 2021). The application of measures focused on expanding the coverage of sanitation services can significantly reduce pathogen-transmission risks (Ferreira et al., 2021). The risk of developing non-infectious diseases associated with contact with contaminated soil or water was not mentioned by the herein investigated professionals. However, there is also the risk of intoxications and diseases due to human exposure to different environmental contaminants such as heavy metals, agrochemicals, volatile organic compounds, drugs, among others (Moniz et al., 2017). According to participants, some residents living in neighborhoods subjected to irregular garbage collection services leave their garbage accumulating in the streets, whereas others resort to burning practices. Only one participant associated garbage accumulation with diseases that have been endemic in the region in recent years, such as zika and chikungunya, transmitted by the mosquito vector (Donalisio, Freitas & Zuben, 2017; Ribeiro, Teles & Tuon, 2020). Another participant associated garbage accumulation with the proliferation of Leptospira-transmitting rodents. Participants were concerned with the fact that garbage-burning practices increase the risk of populations developing respiratory diseases, mainly the most vulnerable groups, such as children and the elderly, who are already affected by these issues due to other factors. This concern of the professionals is in agreement with the literature. The emission of toxic particles and atmospheric gases through the burning of domestic solid waste causes environmental and health impacts. These pollutants are associated with risks of premature birth, low birth weight, and increased incidence of cancer and of respiratory and neurological diseases (Vollmer et al., 2021; Campos & Costa, 2017). In addition, these health issues have a negative impact on the health sector due to increased health system-related expenditures, besides decreasing agricultural productivity (Campos & Costa, 2017).
The professionals showed that they do not take measures to deal with environmental issues, and none of them had knowledge about strategies and educational contents focused on raising the awareness of the population about environmental health matters. Similar results were found in other studies (Bruno et al., 2020; Moniz et al., 2017). This fact is possibly due to the absence of environmental health content in the training of undergraduate health students (Souza, Andrade & Silva, 2017; Vollmer et al., 2021; Kligler, Zipp, Rochetti, Secic & Ihde, 2021). Thus, the results showed the need for contextualized environmental education actions aimed at empowering and encouraging both health professionals and the community to pursue greater equity in sanitation services. Professionals need qualification to improve their knowledge and to prepare them to develop environmental education actions that take into consideration the complexity of environmental determinants and their relation to human health, and it is the government's duty to promote permanent health education actions (Moniz et al., 2020). The sanitation crisis in Brazil highlights the importance of social participation in decision-making instances, because this is the best way to guarantee human rights related to the environmental determinants of health and to minimize the impacts of this crisis, especially on health (Moniz et al., 2017). Environmental education is an important instrument used to encourage people to engage in sanitation matters (Souza, Santos, Guimarães, Ribeiro & Silva, 2018). Such an educational process is one of the main means to encourage people's participation in, and social control of, basic sanitation in the county, since it enables spaces where, in a critical way, it is possible to exchange knowledge about the reality and about the need for organization to comply with sanitation rights and duties (Piccoli, Kligerman, Cohen & Assumpção, 2016). In addition, the educational action enables clarifying doubts and exchanging information about inadequate environmental attitudes, such as garbage burning. The participatory diagnosis proved to be a valuable tool to encourage the participation of all local FHS professionals in the reflexive and critical analysis of the environmental health status, besides contributing to rethinking health promotion practices with emphasis on environmental care within the scope of Primary Health Care.

Conclusion

Although FHS professionals were sensitive to health risk situations associated with inadequate environmental sanitation conditions, they showed limited perception of these risks and of actions that could be taken to change local issues. Based on this finding, it was possible to infer the need for contextualized environmental education actions focused on empowering and engaging health professionals, and the investigated community, to reduce health risk conditions through equal access to sanitation services. It is hoped that the knowledge from this study can subsidize the development of new participatory studies and care practices in environmental health in the territories.
2021-09-09T20:44:25.096Z
2021-07-31T00:00:00.000
{ "year": 2021, "sha1": "c0a84f685a8a8b034364fce6d650b016b1dcfb5f", "oa_license": "CCBY", "oa_url": "https://rsdjournal.org/index.php/rsd/article/download/17956/16365", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "36cb42fe0524f0e4307c268b9520d6fa24d88f80", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Sociology" ] }
14921333
pes2o/s2orc
v3-fos-license
External leg amputation in conformal invariant three-point function

Amputation of external legs is carried out explicitly for the conformal invariant three-point function involving two spinors and one vector field. Our results are consistent with the general result that amputing an external leg in a conformal invariant Green function replaces a field by its conformal partner in the Green function. A new star-triangle relation, involving two spinors and one vector field, is derived and used for the calculation.

Introduction

This work is concerned with the amputation of external legs in conformal invariant Green functions in Euclidean space with a general number of dimensions. Various aspects of CFT (conformal field theory) in D dimensions have been reviewed in Refs. [1], [2] and [3]. We consider conformal invariant Green functions involving spinors and vector fields, which are relevant for the infrared limit of massless QED_3 [4,5], and for conformal QED_4 [6,7]. Some other areas which use D-dimensional CFT's with fields of non-zero spin are N = 4 supersymmetric Yang-Mills theory and unparticle physics [8,9]. Conformal invariant Green functions have the external legs included, but amputed Green functions are easier to calculate. This provides the motivation for studying amputation. Moreover, the conformal partial wave expansion [1,2,3,10,11] involves amputed Green functions. This expansion expresses the contribution of the various quasi-primary fields to the product of two field operators at arbitrary separation. From this, one can find the contributions of the quasi-primary fields to the four-point function. A recent work which uses the conformal partial wave expansion and the amputed three-point function is Ref. [9]. Moreover, the techniques of calculation developed in this paper can be useful in other areas which involve evaluation of massless Feynman integrals, like N = 4 Yang-Mills theory [12]. The star-triangle relation involving scalar fields [13,14] (referred to as the D'EPP formula in Ref. [9]) has wide-ranging applications: see Ref. [15] and references therein. In this work, we have derived an analogous relation, involving two spinors and one vector field.

Formally, amputation of an external leg in a Green function in D-dimensional CFT replaces a field of scale dimension d by its conformal partner, having scale dimension D − d [1,2,3,10,11]. However, only an explicit calculation can determine the coefficient which comes with the amputed Green function. The case of non-zero spin is more involved, as there is more than one invariant structure for a given Green function. A known example is the massless Yukawa theory [1]. But we will find that the case involving spinors and a vector field is much more complicated. For some of the calculations involved, the use of the star-triangle relation derived by us is essential. The paper is organized as follows. In Sec. 2 we introduce the amputed Green function in CFT through the example of massless scalar field theory. In Sec. 3, we introduce amputation of the spinor leg through massless Yukawa field theory. In Sec. 4, we give the structures C_{1µ} and C_{2µ} of the conformal spinor-spinor-vector Green function and state how spinor leg amputation for these structures turns out to be different from the Yukawa case. The star-triangle relation with two spinors and one vector field is derived in Sec. 5. Spinor leg amputation of C_{1µ} and C_{2µ} is carried out in Secs. 6 and 7.
A check of these results is performed by spinor leg amputation in the transverse Green function of the current in Sec. 8. Vector leg amputation of C_{1µ} and C_{2µ} is carried out in Sec. 9. In Sec. 10, we present our conclusions.

Amputation in scalar field theory

In this section, we explain the aim of our work by reviewing the simplest example of scalar field theory. The two-point function and its inverse for a conformal scalar of scale dimension d are given by

G_d(x_{12}) = N / (x_{12}^2)^{d} , \qquad G_d^{-1}(x_{12}) = \tilde{N} / (x_{12}^2)^{D-d} , \qquad (1)

where x_{ab} ≡ x_a − x_b and N is an arbitrary constant. Together they satisfy

\int d^D x_3 \, G_d(x_{13}) \, G_d^{-1}(x_{32}) = \delta^{(D)}(x_{12}) , \qquad (2)

which fixes \tilde{N} in terms of N. Thus, G_d^{-1} is the two-point function of a scalar field of scale dimension D − d. In Appendix A, we indicate how to arrive at G_d^{-1} from G_d. A field of the same spin but of scale dimension D − d is called the conformal partner [3] or shadow operator [10] of the field of scale dimension d. Both these fields have the same set of values for the Casimir operators of the conformal group.

Consider next a three-point function ⟨φ_d(x_1) φ_l(x_2) φ_Δ(x_3)⟩ of three scalar fields of scale dimensions d, l and Δ. The three-point function with the φ_d-leg amputated is defined by

⟨\tilde{φ}_d(x_1) φ_l(x_2) φ_Δ(x_3)⟩ = \int d^D x \, G_d^{-1}(x_1 - x) ⟨φ_d(x) φ_l(x_2) φ_Δ(x_3)⟩ , \qquad (3)

with ˜ on a field denoting amputation. Using Eq. (2), this definition can also be written as

⟨φ_d(x_1) φ_l(x_2) φ_Δ(x_3)⟩ = \int d^D x \, G_d(x_1 - x) ⟨\tilde{φ}_d(x) φ_l(x_2) φ_Δ(x_3)⟩ . \qquad (4)

Next, using the conformal transformation properties of the left-hand side of Eq. (4), it can be shown that the amputed three-point function is again a three-point function, but with φ_d replaced by its conformal partner. [See Ref. [3]; in Appendix B of the present work, we extend the demonstration to the spinor and vector field.] Thus,

⟨\tilde{φ}_d(x_1) φ_l(x_2) φ_Δ(x_3)⟩ ∼ ⟨φ_{D−d}(x_1) φ_l(x_2) φ_Δ(x_3)⟩ , \qquad (5)

where ∼ means up to some coefficient. Now, the structure of the three-point function is fixed by conformal invariance:

⟨φ_d(x_1) φ_l(x_2) φ_Δ(x_3)⟩ ∝ (x_{12}^2)^{-(d+l-Δ)/2} (x_{13}^2)^{-(d+Δ-l)/2} (x_{23}^2)^{-(l+Δ-d)/2} . \qquad (6)

The non-trivial part in determining the coefficient on the right-hand side of Eq. (5) is therefore the evaluation of the integral in Eq. (3). This can be done by using the star-triangle relation of Eq. (69). We then find the explicit amputation formula, Eq. (7), in which G_d^{-1} is as given in Eq. (1). [Let us note that Eq. (2.11) of Ref. [9] can be reproduced from our Eq. (7) by relabelling the scale dimensions and the coordinates appropriately.] The aim of the present work is to derive similar amputation equations for the spinor-spinor-vector Green function, which is relevant to QED.

Amputation of spinor leg

The fermion two-point function ⟨ψ_d(x_1) \bar{ψ}_d(x_2)⟩ and its inverse in CFT are given by

S_d(x_{12}) = N \, \slashed{x}_{12} / (x_{12}^2)^{d+1/2} , \qquad S_d^{-1}(x_{12}) = \tilde{N} \, \slashed{x}_{12} / (x_{12}^2)^{D-d+1/2} . \qquad (8)

Here N is again an arbitrary constant. It will be instructive to first consider the Yukawa (\bar{ψ}γ_5ψφ) theory (D is even and γ_5 = i^{D/2} γ_1 γ_2 ⋯ γ_D). There are two conformal-invariant structures [3] for the spinor-spinor-scalar three-point function, given in Eqs. (9) and (10), with γ_5 C_± γ_5 = ±C_±. Corresponding to Eq. (7), we now have the integrals of Eqs. (11) and (12), evaluated by using the star-triangle relation for the Yukawa theory given in Eq. (71). Here S_d^{-1} and S_l^{-1} are as given in Eq. (8). For the case D = 4, these results are given in a different form in Appendix 6 of Ref. [1]. It may be noted that amputation again replaces d by D − d (or l by D − l) in Eqs. (11) and (12), in accordance with the general result. An additional feature is that C_+ goes over to C_− and vice versa in these equations. This is consistent with the counting of the number of gamma matrices on each side of Eq. (11) and Eq. (12). The point is that we must have either an odd or an even number of gamma matrices on each side of an equation (since the product of an odd (even) number of gamma matrices has a zero (non-zero) trace for even D). That amputation of one spinor leg gives back a standard structure is a special feature of the Yukawa theory. We will see that this feature is not present when we have a vector field coupling to the spinors.
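For orientation, the scalar star-triangle (D'EPP) relation invoked here as Eq. (69) has the following widely quoted form, valid for δ_1 + δ_2 + δ_3 = D; the normalization shown follows the common convention and may differ from the paper's Eq. (69) by convention-dependent factors.

```latex
% Scalar star-triangle (uniqueness / D'EPP) relation, valid when
% \delta_1 + \delta_2 + \delta_3 = D (common normalization convention,
% possibly differing from the paper's by conventions):
\int d^D x_4 \,
\frac{1}{(x_{14}^2)^{\delta_1} (x_{24}^2)^{\delta_2} (x_{34}^2)^{\delta_3}}
= \pi^{D/2}\,
\frac{\Gamma(\tfrac{D}{2}-\delta_1)\,\Gamma(\tfrac{D}{2}-\delta_2)\,\Gamma(\tfrac{D}{2}-\delta_3)}
     {\Gamma(\delta_1)\,\Gamma(\delta_2)\,\Gamma(\delta_3)}\,
\frac{1}{(x_{12}^2)^{D/2-\delta_3} (x_{13}^2)^{D/2-\delta_2} (x_{23}^2)^{D/2-\delta_1}}
```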
Spinor-spinor-vector Green function

The Green function of two spinors and one vector field has two conformal invariant structures [3], C_{1µ} and C_{2µ}, given in Eqs. (14) and (15). To ampute ψ_d (say), we have to proceed as in Eq. (11). But we will now come across an important difference: the amputation of one spinor leg will not give back either C_{1µ} or C_{2µ} (or a linear combination of them). At least for even D, this can be understood from the fact that each amputated structure is a product of an even number of gamma matrices, while both (14) and (15) contain odd numbers. [The structures in Eqs. (14) and (15), and also those in Eqs. (9) and (10), are invariant when x_1 ↔ x_2, d ↔ l and hermitian conjugation are performed together. Recall that the Euclidean gamma matrices are all hermitian.] However, when both ψ_d and \bar{ψ}_l are amputated, we get back linear combinations of C_{1µ} and C_{2µ}: see Secs. 6 and 7.

A star-triangle relation with two spinors and one vector field

The star-triangle relation which we are going to prove, and which will later be used for amputing C^{d,l,Δ}_{1µ}, is Eq. (18), where Eq. (70) holds. The vector field has the propagator corresponding to scale dimension δ_3 (see Eq. (39) with Δ replaced by δ_3). Eq. (18) can be viewed as a generalization of the more familiar star-triangle relations given by Eqs. (69) and (71), as follows. The left-hand side of Eq. (18) represents the propagation of two conformal spinors and one conformal vector field from the external points x_a (a = 1, 2, 3) to the internal vertex x_4 with an interaction γ_ν. The right-hand side is a linear combination of the two available structures (14) and (15).

A check for Eq. (18) can be performed for the case δ_3 = 1. In this case, the vector field propagator on the left-hand side is g_{µν}(x_{34})/x_{34}^2 = ∂^{x_3}_µ ∂^{x_3}_ν ln|x_{34}|, that is, longitudinal in x_3. On the right-hand side, only the second term remains, and this term is also longitudinal in x_3, as follows. Since Eq. (70) now gives D/2 − δ_1 − 1/2 = −(D/2 − δ_2 − 1/2) = n/2 (say), the coordinate x_3 now occurs in the combination (x_{13}/x_{23})^n λ^{x_3}_µ(x_1 x_2), which equals −(1/n) ∂^{x_3}_µ (x_{13}/x_{23})^n.

The vector field propagator in Eq. (18) is invariant under the standard transformation law for a conformal vector. A relation previously derived in Refs. [16] and [17] also involved two spinors and one vector field, but had a covariant gauge propagator. It is the relation (18) above which will be necessary for the amputation of the spinor leg and also the vector leg in the structure C^{d,l,Δ}_{1µ}. Another difference is that here we have completely general values for the scale dimensions δ_1, δ_2 and δ_3; this will also be necessary for the present purpose. However, the derivation of Eq. (18), which is to be presented now, will be along the same lines as followed in Refs. [16] and [17]. We are thus going to use the operator algebraic method due to Isaev [15], which reduces Feynman integrals to products of position and momentum operators q̂_i and p̂_i taken between position eigenstates. As explained in Sec. 2 of Ref. [16], this method involves starting from the "pqp" form and passing to the "qpq" form. In our case, the idea is to split the left-hand side of Eq. (18) into a longitudinal part and a transverse part, and to tackle them as in Sec. 4 of Ref. [16] and Sec. 2 of Ref. [17], respectively. In view of the general values of the scale dimensions, the starting "pqp" forms are somewhat different from those in these references. The starting forms, Γ^{long}_µ and Γ^{tr}_µ, are given in Eqs. (20) and (21). [These are, however, quite similar to the "pqp" form for the three-point function of the Yukawa theory: see Eq. (5) of Ref. [16].]
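The longitudinal identity used in the δ_3 = 1 check above follows from a one-line computation, assuming the usual convention g_{µν}(x) = δ_{µν} − 2x_µx_ν/x²:

```latex
% Check that g_{\mu\nu}(x)/x^2 = \partial_\mu \partial_\nu \ln|x|
% for g_{\mu\nu}(x) = \delta_{\mu\nu} - 2 x_\mu x_\nu / x^2:
\partial_\nu \ln|x| = \frac{x_\nu}{x^2},
\qquad
\partial_\mu \frac{x_\nu}{x^2}
= \frac{\delta_{\mu\nu}}{x^2} - \frac{2 x_\mu x_\nu}{(x^2)^2}
= \frac{1}{x^2}\left(\delta_{\mu\nu} - \frac{2 x_\mu x_\nu}{x^2}\right)
= \frac{g_{\mu\nu}(x)}{x^2}.
```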
To determine the proportion in which Γ^{long}_µ and Γ^{tr}_µ are to be taken, we use the relation of Eq. (22). [This formula can be derived by first evaluating ∂_µ ∂_ν (1/r^{n−2}), and hence ∂^2 (1/r^{n−2}); here r ≡ |x|.] Next, we have to express the position-space matrix elements (that is, between ⟨x| and |y⟩) of the right-hand sides of Eqs. (20) and (21) in terms of the matrix elements of p̂_λ p̂^{−2α−1}, p̂^{−2β} and p̂_ν p̂_µ p̂^{−2} from the Appendix of Ref. [16] (see Eqs. (14) and (15) of Ref. [16]). Then we can write down the matrix element of the right-hand side of Eq. (23) using Eq. (22) for n = D − 2β. This leads to Eq. (24). On the other hand, we can put Γ_µ in the "qpq" form and then take the matrix element. This involves a long calculation, given in Appendix D, and leads to Eq. (25). The right-hand sides of Eqs. (24) and (25) are now to be equated. After that, we let x = x_1 − x_2 and y = x_3 − x_2, and also change to a new integration variable x_4 defined by z = x_4 − x_2. We also define δ_1, δ_2 and δ_3 by D/2 − α = δ_1, α + β = δ_2 and D/2 − β = δ_3. This leads us to the relation given in Eq. (18).

Spinor leg amputation in C_{1µ}

In this Section and the next, we are going to evaluate the amputation integral of Eq. (26). The integration over x_1 amputes ψ_d(x_1), while that over x_2 amputes \bar{ψ}_l(x_2). We consider C_{1µ} in this Section. From Eqs. (26) and (14), we see that the x_1 integration can be done by using the star-triangle relation of Eq. (71). The integration over x_2 then involves the expression of Eq. (27). Consequently, this integral is of the form of Eq. (28), which can be evaluated by using the star-triangle relation of Eq. (18). We thus get the result of Eq. (29), where S_d^{-1} and S_l^{-1} are as in Eq. (8) and the coefficient F(d, l, Δ) is given by Eq. (30).

Spinor leg amputation in C_{2µ}

Using Eq. (15), we write down the integral (26) for C_{2µ} in full. There are two terms. On interchanging the integration variables x_1, x_2 in the second term, we find that the integral under consideration is that of Eq. (31). Let us evaluate the first term in (31). First we perform the x_2 integration using Eq. (71). Then the remaining x_1 integral is of the form of Eq. (32), where the quantity I of Eq. (33) involves the factor x_{15}^{D+d−l−Δ+1}. The right-hand side of Eq. (32) is obtained by writing \slashed{x}_{13} = \slashed{x}_{43} − \slashed{x}_{41} on the left-hand side. Now I can be evaluated by using Eq. (69). After some algebra, the first term in (31) is found to be the structure of Eq. (34), multiplied with a coefficient which is symmetric in d and l. Then, adding the second term in (31), we finally arrive at Eq. (35), where the coefficient F(d, l, Δ) is given by Eq. (30).

Spinor leg amputation in Green function of current

We consider this case for checking the results of Secs. 6 and 7. The current j_µ has the scale dimension D − 1. It can be checked that the Ward identity in position space, Eq. (36), holds. So the transverse spinor-spinor-current Green function is the combination given in Eq. (37). Now from Eq. (26), we see that ∂/∂x_{3µ} commutes with the operation of amputation. Thus, the combination in Eq. (37) should continue to be of the form C_{1µ} − C_{2µ} after the spinor legs are amputed. Indeed, by putting d = l and Δ = D − 1 in Eqs. (29) and (35) and taking the difference, we obtain Eq. (38). This serves as a check on the coefficients obtained in Eqs. (29) and (35).

Vector leg amputation in C_{1µ} and C_{2µ}

The vector field two-point function and its inverse are given by Eq. (39). They satisfy Eq. (40). The amputation equations are Eq. (41), which is obtained by using Eq. (18), and Eq. (42), which is obtained by using Eq. (72). Here the coefficient F′(d, l, Δ) is given by Eq. (43).
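For reference, the conformal vector two-point function and its inverse, referred to above as Eqs. (39) and (40), take the following standard form; the normalization constants are conventions assumed here rather than quantities quoted from the paper.

```latex
% Standard form of the conformal vector two-point function and its
% inverse (normalizations N, \tilde{N} are assumed conventions), with
% g_{\mu\nu}(x) = \delta_{\mu\nu} - 2 x_\mu x_\nu / x^2:
D_{\mu\nu}(x_{12}) = N \, \frac{g_{\mu\nu}(x_{12})}{(x_{12}^2)^{\Delta}},
\qquad
D^{-1}_{\mu\nu}(x_{12}) = \tilde{N} \, \frac{g_{\mu\nu}(x_{12})}{(x_{12}^2)^{D-\Delta}},
% satisfying the vector analogue of the scalar delta-function relation:
\int d^D x_3 \, D_{\mu\lambda}(x_{13}) \, D^{-1}_{\lambda\nu}(x_{32})
= \delta_{\mu\nu} \, \delta^{(D)}(x_{12}).
```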
Next, we have to express the position-space matrix elements (that is, between ⟨x| and |y⟩) of the right-hand sides of Eqs. (20) and (21) in terms of the matrix elements of p̂_λ p̂^{−2α−1}, p̂^{−2β} and p̂_ν p̂_µ p̂^{−2} from the Appendix of Ref. [16] (see Eqs. (14) and (15) of Ref. [16]). Then we can write down the matrix element of the right-hand side of Eq. (23) using Eq. (22) for n = D − 2β. This leads to Eq. (24). On the other hand, we can put Γ_µ in the "qpq" form and then take the matrix element. This involves a long calculation, given in Appendix D, and leads to Eq. (25). The right-hand sides of Eqs. (24) and (25) are now to be equated. After that, we let x = x_1 − x_2 and y = x_3 − x_2, and also change to a new integration variable x_4 defined by z = x_4 − x_2. We also define δ_1, δ_2 and δ_3 by D/2 − α = δ_1, α + β = δ_2 and D/2 − β = δ_3. This leads us to the relation given in Eq. (18).

Spinor leg amputation in C_{1µ}

In this Section and the next, we are going to evaluate the integral of Eq. (26). The integration over x_1 amputates ψ_d(x_1), while that over x_2 amputates ψ̄_l(x_2). We consider C_{1µ} in this Section. From Eqs. (26) and (14), we see that the x_1 integration can be done by using the star-triangle relation of Eq. (71). The integration over x_2 then reduces to an integral of a form which can be evaluated by using the star-triangle relation of Eq. (18). We thus get Eq. (29), where S^{−1}_d and S^{−1}_l are as in Eq. (8), and the coefficient F(d, l, ∆) is given by Eq. (30).

Spinor leg amputation in C_{2µ}

Using Eq. (15), we write down the integral (26) for C_{2µ} in full. There are two terms. On interchanging the integration variables x_1, x_2 in the second term, we find that the integral under consideration is that of expression (31). Let us evaluate the first term in (31). First we perform the x_2 integration using Eq. (71). Then the remaining x_1 integral is of the form given in Eq. (32), with the quantity I defined in Eq. (33) (it involves the factor x_{15}^{D+d−l−∆+1}). The right-hand side of Eq. (32) is obtained by writing x̸_{13} = x̸_{43} − x̸_{41} on the left-hand side. Now I can be evaluated by using Eq. (69). After some algebra, the first term in (31) is found to come with a coefficient which is symmetric in d and l. Then, adding the second term in (31), we finally arrive at Eq. (35), where the coefficient F(d, l, ∆) is again given by Eq. (30).

Spinor leg amputation in the Green function of the current

We consider this case for checking the results of Secs. 6 and 7. The current j_µ has the scale dimension D − 1. It can be checked that Eq. (36) holds, which is the Ward identity in position space. So the transverse spinor-spinor-current Green function is given by Eq. (37). Now from Eq. (26), we see that ∂/∂x_{3µ} commutes with the operation of amputation. Thus, the combination in Eq. (37) should continue to be of the form C_{1µ} − C_{2µ} after the spinor legs are amputated. Indeed, by putting d = l and ∆ = D − 1 in Eqs. (29) and (35) and taking the difference, we obtain the expected combination. This serves as a check on the coefficients obtained in Eqs. (29) and (35).

Vector leg amputation

The vector field two-point function and its inverse are given by Eqs. (39) and (40); they satisfy the expected convolution identity. The amputation equations then follow: one is obtained by using Eq. (18), and the other by using Eq. (72). Here the coefficient F′(d, l, ∆) is given by Eq. (43).

Conclusion

The previous works in the literature on conformal scalar field theory and Yukawa theory have been extended by us to the next, more complicated case of the theory of spinors and vector fields, in two ways. First, we have performed amputation of the external legs of the spinor-spinor-vector Green function; secondly, we have derived a new star-triangle relation involving these fields. Our results for amputation will be useful for conformal partial wave expansion, and the star-triangle relation will be useful for Feynman diagram calculation, in any conformal theory involving spinors and vector fields.

Amputation was discussed for the scalar field in Ref. [3]. Here we consider the spinor field and the vector field, using the specific case of the three-point function. We want to show that the quantity defined in Eq. (45) (compare with Eq. (4) for the scalar field) is a three-point function with dimensions D − d, l and ∆ for the three fields. For this, we need to check that it satisfies the invariance condition for the three-point function with these scale dimensions under conformal inversion and under scale transformation. Let us consider first conformal inversion: x_µ → Rx_µ = x_µ/x². Under this operation, the various (Euclidean) fields transform as given in Ref. [3]. The invariance conditions which we shall need are Eqs. (48) and (49); the latter holds since S^{−1}_d is the two-point function of a spinor of dimension D − d (see Eq. (8)). We insert Eqs. (48) and (49) on the right-hand side of Eq. (45), and then let x_1 → Rx_1. Comparing the resulting expression with Eq. (45) again, and then with Eq. (48), leads to the desired conclusion. For the scale transformation x_µ → λx_µ, we proceed along similar lines, using the scale-transformed field ψ′. Amputation of ψ̄_l can be handled similarly. For amputation of the vector leg, we have to show that the quantity defined in Eq. (51) is a three-point function with dimensions d, l and D − ∆ for the three fields. The required condition is Eq. (52), which holds since D^{−1}_{µν} is the two-point function of a vector field of dimension D − ∆. We insert Eqs. (48) and (52) in the right-hand side of Eq. (51), then let x_3 → Rx_3 and follow the procedure adopted for ψ_d.
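A minimal statement of the scale-transformation step invoked above, in a standard convention (ours, not necessarily the paper's):

```latex
% Under a dilatation x -> lambda x, a Euclidean conformal field of scale
% dimension d transforms as
\[
\phi_d(x)\;\longrightarrow\;\phi'_d(x)=\lambda^{d}\,\phi_d(\lambda x)\,,
\]
% so scale covariance of a three-point function with dimensions
% (d_1, d_2, d_3) is the statement
\[
G(x_1,x_2,x_3)=\lambda^{\,d_1+d_2+d_3}\,G(\lambda x_1,\lambda x_2,\lambda x_3)\,.
\]
```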
Paeoniflorin improves cardiac function and decreases adverse postinfarction left ventricular remodeling in a rat model of acute myocardial infarction

Background Paeoniflorin (PF) is the active component of Paeonia lactiflora Pall. or Paeonia veitchii Lynch. This study, therefore, aimed to evaluate the effect and mechanism of PF on ventricular remodeling in rats with acute myocardial infarction (AMI). Materials and methods In this study, the AMI model was established by ligating the anterior descending coronary artery in Wistar rats. After 4 weeks of gavage with PF, the general signs and the left ventricle weight index of the Wistar rats were observed. The left ventricular ejection fraction (LVEF) was evaluated by Doppler ultrasonography. Changes in cardiac morphology were observed by pathologic examination, and apoptosis was observed by the terminal deoxynucleotidyl transferase dUTP nick end labeling assay. In addition, enzyme-linked immunosorbent assay was used to detect the expression of tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6), interleukin-10 (IL-10) and brain natriuretic peptide (BNP). Immunohistochemistry and Western blotting were applied to detect Caspase-3 and Caspase-9. Results Compared with the model control, the general condition of rats in all treatment groups improved after PF treatment. LVEF was significantly increased, and both left ventricular end-diastolic inner diameter and left ventricular end-systolic inner diameter were significantly reduced. Moreover, pathologic examination showed that myocardial degeneration was decreased in the rats treated with PF, with neater arrangement, more complete myofilaments, more uniform intercellular spaces and fewer interstitial collagen fibers. Furthermore, the mitochondrial structure of cardiomyocytes was significantly improved: the ultrastructure was clear, and the arrangement of myofilaments was more regular. Also, the expression of Caspase-3 and Caspase-9 was inhibited, and apoptosis was obviously reduced in the PF treatment groups. BNP, TNF-α and IL-6 were also decreased, and IL-10 was increased, in the treated rats. Conclusion PF could significantly improve the LVEF of rats. It decreased adverse left ventricular remodeling after myocardial infarction in rat models. The potential mechanism could be that PF decreased BNP, TNF-α and IL-6, increased IL-10 and further inhibited the expression of Caspase-3 and Caspase-9, thus attenuating adverse ventricular remodeling.

Introduction

Acute myocardial infarction (AMI) is caused by a sudden and severe decrease or interruption of the blood supply in the coronary arteries, resulting in severe acute ischemia in the myocardium and further ischemic necrosis. 1 Ventricular remodeling (VR) is the change of the ventricular shape and structure following AMI. The myocardium then gets thinner in the infarct zone, while myocardial hypertrophy becomes more significant in the non-infarct zone. What is worse, contractile dysfunction, neurohormonal activation, histologic remodeling, inflammatory changes and apoptosis arise after AMI. 2 The gradual enlargement of the ventricular chamber caused by a series of physiologic and pathologic processes leads to hemodynamic changes, generating heart failure. Hence, VR following myocardial infarction (MI) is the significant pathologic basis of heart failure and runs through its whole process.
Delaying or preventing VR is, therefore, a critical factor for preventing heart failure. 3,4 Paeoniflorin (PF) is the main active component of the commonly used Traditional Chinese Medicine peony, Paeonia lactiflora Pall. or Paeonia veitchii Lynch. PF has diverse biologic functions: it inhibits platelet aggregation, thrombosis, atherosclerosis and tumor formation, dilates coronary vessels, increases coronary blood flow, improves microcirculation, protects the liver and so on. [5][6][7][8][9][10][11][12] Previous studies have also demonstrated that PF (10 mg/kg) reduced infarct size in ischemia/reperfusion injury rats, improved the hemodynamic parameters, and decreased Caspase-3 and Bax expression but upregulated Bcl-2 in the left ventricles (LVs). 13 Recently, PF (5, 10, 20 mg/kg) has been shown to decrease the expression levels of tumor necrosis factor-α (TNF-α), interleukin (IL)-1β, IL-6 and nuclear factor-κB, to inhibit the activity and protein expression of inducible nitric oxide synthase, and to repress Caspase-3 and Caspase-9 activities. 14 However, whether PF can ameliorate AMI in rats, and the potential underlying mechanisms, remain to be elucidated. The cardioprotection of PF was confirmed in rats with MI during paracmasis in a pilot experiment for this study. Then, to further verify the curative effects and mechanisms of PF in these rats, the inflammatory factors, myocardial mitochondria and the expression of Caspase-3 and Caspase-9 were observed, indicating the effect of PF on VR. We hypothesized that PF inhibits apoptosis, leading to increased myocardial salvage, reduced fibrosis size and mitigated VR in a rat model of AMI.

Materials and methods

Model establishment

A total of 120 male Wistar rats (SPF, 200±10 g) were purchased from Beijing Weitong Lihua Experimental Animal Co., Ltd. (SCXK 2015-0008). The research protocol was approved by the Animal Ethics Committee of Guang'anmen Hospital of China Academy of Chinese Medical Sciences (No. 2015EC035-02). The rats were kept in the animal room of Guang'anmen Hospital of China Academy of Chinese Medical Sciences, according to the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health (Bethesda, MD, USA). The indoor temperature was maintained at 25°C±1°C. All rats had standard feed without eating and drinking restrictions. The AMI model in rats was made by referring to the methods of the references. 15,16 The specific procedures were as follows. The experimental rats were weighed and marked. After intraperitoneal injection of 10% chloral hydrate (0.33 g/kg; Sigma-Aldrich, St Louis, MO, USA) for anesthesia, with skin preparation (shaving the hair), the rats were fixed in a supine position on a surgical table. Endotracheal intubation was conducted and fixed with lines after using iodophor for disinfection. The tracheal tube was connected to the HX-300 medical ventilator (Taimeng Technology Ltd., Chengdu, China), with a tidal volume of 8 mL, a frequency of 70 breaths/min and a respiratory ratio of 1:3. An incision was made about 1 cm above the sternum, slightly toward the left, and the muscular layer was separated bluntly. The thoracic cavity was opened with an ophthalmologic eye speculum through the third and fourth intercostal space to expose the heart, and the anterior descending coronary artery was ligated; in rats randomly assigned in advance to the sham surgery group, only threading was performed, without ligation.
The criteria for successful models were as follows: after ligation, the limb lead II showed a significantly increased R wave amplitude, an increased T wave and a markedly elevated ST segment, with convexity and a monophasic curve. The standard limb electrocardiogram indicated that the ST junction was markedly elevated. The color of the ligation site was grayish white. When the rats were stable, the thoracic skin was clamped with a hemostatic clamp and sutured carefully, with removal of the medical ventilator. A subcutaneous injection of 100,000 IU penicillin was given to each rat to prevent infection. After surgery, the rats were placed on an insulation blanket and covered with rugs to keep warm and increase the survival rate. The rats were observed closely at all times, and chest compression for rescue was performed promptly in the presence of ventricular fibrillation. Animals in each group were given water and feed normally, and medication was administered starting 2 hours after recovery. Rats with successfully established AMI models were gavaged once daily for 28 days.

Animal grouping and administration

One hundred and five of the 120 rats underwent MI modeling and the remaining 15 had sham surgery. Among the 105 rats, 23 were excluded: 10 died, 9 had ventricular fibrillation and 4 models were unsuccessful. Therefore, the 82 rats that survived surgery were randomly assigned to the model group, the captopril group, and the low-, middle- and high-dose PF groups. 17 Rats in the sham surgery group and model group were administered 10 mL/kg/d of 0.5% CMCC-Na solution. The other test substances were also suspended in 0.5% CMCC-Na solution. Rats in the middle-dose group were given the adult human equivalent dose for 70 kg (converted by body surface area; a conversion sketch follows at the end of this section), while rats in the high-dose group were administered two times the adult human equivalent dose, and rats in the low-dose group were given one-half of the adult human equivalent dose.

Sampling and sample processing methods

After intraperitoneal injection of 10% chloral hydrate mixture (300 mg/kg), rats in each group were fixed on a dissecting table and their abdominal cavities were cut open. Blood from the abdominal aorta was collected with a 10 mL syringe and processed for the different indexes. Then, the thoracic cavities were cut open after clamping the abdominal aorta with a hemostatic clamp. Next, the hearts were excised and lavaged quickly with ice-cold normal saline to wash out the residual blood, and water was then blotted up using filter paper. The whole hearts and LVs were set on ice after weighing on an accurate electronic balance (Sartorius Scientific Instruments Co., Ltd., Beijing, China). The apex of the heart, of size 3×3 mm, was selected and fixed with 3% glutaraldehyde for electron microscopy. Finally, the cardiac tissue 0.3 cm under the ligature was fixed with 4% paraformaldehyde for HE staining and Masson staining. Peri-infarct tissue was defined as the area of myocardium within 2 mm of the visible edge of infarction, and the remote area was taken from the interventricular septum, 18 while the rest of the tissues were immediately put into liquid nitrogen and stored in a refrigerator at −80°C.

Experimental methods for determining the effect on VR

The aim of this study was to observe the effect of PF on VR. Hence, the appearance, cardiac function, left ventricle weight index (LVWI), myocardial cell morphology and other indexes reflecting VR were observed in the six groups of rats.
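The paper converts a 70 kg adult human equivalent dose to rat doses by body surface area but does not give the conversion factor. A minimal sketch of the commonly used Km-factor method follows; the Km constants and the 1 mg/kg human dose are standard reference values and a placeholder, respectively, not values taken from this study.

```python
# Human-to-rat dose conversion by body surface area (Km-factor method).
HUMAN_KM = 37.0   # standard body-surface-area constant for a 70 kg adult
RAT_KM = 6.0      # standard constant for a ~200 g rat

def rat_equivalent_dose(human_dose_mg_per_kg: float) -> float:
    """Convert a human dose (mg/kg) to its rat-equivalent dose (mg/kg)."""
    return human_dose_mg_per_kg * (HUMAN_KM / RAT_KM)

middle = rat_equivalent_dose(1.0)          # middle-dose group (placeholder dose)
low, high = 0.5 * middle, 2.0 * middle     # half and double, as in the study design
print(f"low {low:.2f}, middle {middle:.2f}, high {high:.2f} mg/kg/day")
```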
General observations

The appearance, activities, mental state, food intake, water intake, weight and stools were observed. Body weight (BW) and LV weight were measured with the electronic balance (Sartorius, Beijing, China) in each group. The LVWI (unit: mg/g) was calculated according to the following formula: LVWI = LV weight (mg)/BW (g)

Cardiac function detection

After 28 days of gavage, the rats were anesthetized, and cardiac function was evaluated by Doppler ultrasonography. The BW was recorded, and the heart rate, left ventricular ejection fraction (LVEF), left ventricular end-diastolic inner diameter (LVIDd), left ventricular end-systolic inner diameter (LVIDs) and other cardiac function parameters were measured by DW-350 B-mode echocardiography (Dawei Electronic Equipment Co. Ltd., Xuzhou, China).

Morphologic observations on VR

Morphologic changes of myocardial cells in the infarct tissue were observed by HE staining, referring to the production methods of Zhang et al. 15 The degree of myocardial fibrosis in the infarct tissue was observed by Masson staining according to the methods of Itter et al. 16 In all experimental groups, the total amounts of fibrotic tissue (blue color) and muscle tissue (red tone) were determined in a standardized set of 10 cross-sections, starting at the level of the papillary muscle and moving toward the apex every 300 μm. The percentage of fibrosis was calculated as the ratio of the fibrotic (blue) area over the total (red + blue) area. The sections were produced with the assistance of the Pathology Department, Guang'anmen Hospital, China Academy of Chinese Medical Sciences. The mitochondrial ultramicrostructure in the infarct tissue was observed with the transmission electron microscope, assisted by the Peking University School.

Apoptosis observation by terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL)

Apoptosis was observed by the TUNEL assay. 19 Tissue sections were observed under a confocal microscope, in which normal muscle cell nuclei were blue, while apoptotic nuclei showed varying shades of brown. For each slice, five high-power fields were randomly selected in the same area of apoptotic cell distribution. The average number of apoptotic cells in each field was then calculated as a percentage of the total cell number, thereby generating the apoptosis index (%). Serum BNP, TNF-α, IL-6 and IL-10 in rats were detected using standard kits (MultiSciences Biotech Co. Ltd, Hangzhou, China). The specific procedures were performed strictly in accordance with the kit instructions. The OD values of all wells were measured at 450 nm (410 nm if ABTS was used for color development), after zeroing against the blank control well, on an enzyme-linked immunosorbent assay reader.

Detection of Caspase-3 and Caspase-9 proteins in remote heart tissue by immunohistochemistry

Following standard immunohistochemical procedures, 20 the positive criteria and methods of assessment were as follows. The cytoplasm of myocardial cells was taken as the positive site, because it is the major expression site for Caspase-3 and Caspase-9. Positive staining appeared brown, including brownish yellow, brown and dark brown, in sections. The expression levels were estimated by the integrated OD of the positive cells; integrated OD was the average cumulative OD of the positive staining area of each group, determined with ImageJ software.
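A minimal sketch of the three quantitative indexes defined above (LVWI, the Masson fibrosis percentage, and the TUNEL apoptosis index). The input values are illustrative, not study data, and the per-field averaging for the apoptosis index is our reading of the described procedure.

```python
def lvwi(lv_weight_mg: float, body_weight_g: float) -> float:
    """Left ventricle weight index (mg/g) = LV weight (mg) / BW (g)."""
    return lv_weight_mg / body_weight_g

def fibrosis_percent(fibrotic_area: float, muscle_area: float) -> float:
    """Fibrosis % = fibrotic (blue) area / total (red + blue) area."""
    return 100.0 * fibrotic_area / (fibrotic_area + muscle_area)

def apoptosis_index(apoptotic_counts: list[int], total_counts: list[int]) -> float:
    """Mean percentage of apoptotic nuclei over the selected high-power fields."""
    ratios = [a / t for a, t in zip(apoptotic_counts, total_counts)]
    return 100.0 * sum(ratios) / len(ratios)

print(lvwi(650, 310))                                   # ~2.1 mg/g
print(fibrosis_percent(12.5, 87.5))                     # 12.5 %
print(apoptosis_index([8, 11, 9, 10, 12], [200, 210, 190, 205, 198]))
```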
Detection of Caspase-3 and Caspase-9 proteins in remote heart tissue by Western blot

The procedure was as follows. Total proteins in myocardial tissues were extracted and run on a 10% separation gel. The protein samples were then separated, transferred to membranes, blocked and incubated with the primary and secondary antibodies (1:500; Sigma-Aldrich), respectively. Chemiluminescence reagent (Upstate Technology LLC, NYC, NY, USA) was applied to the surface of the membrane-bound protein, and then film exposure was performed, with developing for 2 minutes and fixing treatment. The images were scanned for preservation and analyzed with ImageJ software, with the gray-scale value digitized for each specific band. The gray-scale value of the target protein divided by the gray-scale value of GAPDH was used to express the relative content of the target protein in the samples.

Statistical analysis

The professional statistical software SPSS 17.0 was adopted for statistical analysis of the data. The experimental data were presented as mean±SD (x̄±s), and a normal distribution test was conducted on the data. For data which met normal distribution and homogeneity of variance, the analysis of variance test or Student's t-test was adopted. For data which did not meet normal distribution, a nonparametric test was adopted. P<0.01 indicated a very significant difference, while P<0.05 indicated a significant difference.
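A sketch of the densitometric normalization and the significance convention just described. The band gray values are illustrative, not study data; scipy's ttest_ind is one concrete way to run the two-group comparison.

```python
import numpy as np
from scipy import stats

def relative_expression(target_gray: float, gapdh_gray: float) -> float:
    """Relative content = target band gray value / GAPDH band gray value."""
    return target_gray / gapdh_gray

# Illustrative densitometry readings for two groups (not study data)
model = np.array([relative_expression(t, g)
                  for t, g in [(180, 150), (175, 148), (190, 152), (185, 149)]])
pf = np.array([relative_expression(t, g)
               for t, g in [(120, 151), (115, 150), (125, 149), (118, 152)]])

t_stat, p_val = stats.ttest_ind(model, pf)   # two-tailed Student's t-test
label = "P<0.01" if p_val < 0.01 else ("P<0.05" if p_val < 0.05 else "NS")
print(f"t = {t_stat:.2f}, p = {p_val:.5f} ({label})")
```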
Experimental results and analysis

Analysis of the number of tested animals and general situation

After 28 days of gavage, all rats survived in the sham operation group, 3 rats died in the model group and 1 rat died in each of the remaining groups; therefore, 15 rats survived in each group. Severe arrhythmia, ventricular fibrillation, massive hemorrhage and post-surgical wound infection were the major causes of death during the experiment. Lethargy, epilation, emaciation, anorexia, loose stool and other symptoms were observed to some extent in each group during the treatment, except for the sham surgery group. Among the rats, those in the model group showed the most severe symptoms: they were in generally poor condition, with markedly reduced activity, impetuosity, decreased food and water intake, and tachypnea. In the rats of the treatment groups, however, this general situation was significantly improved. In addition, BW was reduced in the model group rats compared with the sham surgery group, but there was no statistical difference among the groups (P>0.05; Figure 1).

Observation on LVWI

LVWI was increased significantly in the model group compared with the sham surgery group (P<0.01). Further comparison with the model group showed that this index was reduced in the treatment groups.

Cardiac function

No statistically significant difference was observed in any parameter in any group before surgery (P>0.05). Twenty-eight days after surgery, there was no statistical difference in heart rate (P>0.05) (Figure 3A), while, compared with the sham surgery group, the LVEF value decreased obviously in the model group (P<0.01), and significant enlargement was noted in LVIDd and LVIDs (P<0.01). Compared with the model group, the captopril group had a higher LVEF value (P<0.05) and significantly improved LVIDd (P<0.01), but an insignificant LVIDs change (P>0.05). The LVEF value was markedly higher in the PF groups than in the model group, with obvious reductions in LVIDd and LVIDs (Figure 3B-D).

Morphologic observation on VR

In the sham surgery group (Figure 4A), the myocardial cells showed no obvious degeneration and were arranged regularly and clearly. The myofilaments were relatively complete, and the intercellular space was even. In the model group (Figure 4B), focal necrosis in the infarct tissue was identified in some myocardial cells, and the myocardial fibers partly dissolved and broke. A number of myocardial fibers showed obvious disorder in their arrangement. Karyopyknosis and karyorrhexis were noticed in several cells, and dilatation was seen in the myocardial capillaries. Furthermore, more severe inflammatory cell infiltration and myocardial tissue necrosis were noted under the epicardium. In the captopril group (Figure 4C), disordered arrangement was noted in local myocardial fibers, along with a small amount of myocardial cell degeneration, necrosis and slight inflammatory infiltration. A little capillary hyperemia was found (significantly slighter than in the model group). In the PF groups (Figure 4D-F), a small number of myocardial cells were loose, edematous and necrotic. Compared with the model group, some myocardial fibers were arranged more regularly, with only slight inflammatory cell infiltration, and the overall morphology was relatively clear.

Masson staining

Results of Masson staining suggested that the myocardial cells were arranged very regularly in the sham surgery group (Figure 5A), with no obvious hyperplasia of the collagen fibers of the intercellular substance. In the model group (Figure 5B), irregular arrangement was noted in the myocardial cells, with more disordered structures in the infarct tissue; a large number of collagen fibers replaced the myocardial cells, with obvious hypertrophy of some myocardial cells. Collagen fibers were also observed to replace some myocardial cells, alternating with myocardial cells, in the captopril group (Figure 5C) and PF groups (Figure 5D-F), without significant statistical difference; the fibrosis was, however, greatly reduced only in the PF-treated hearts. Quantification of the scar fraction (Figure 5G) confirmed that PF could significantly reduce fibrosis compared with the model group (P<0.01).

Results of mitochondrial ultramicrostructures

In the sham surgery group (Figure 6A), the ultramicrostructures of the myocardial mitochondria were clear, with largely complete membranes, compact cristae, clear matrix, regular intercalated discs and favorable continuity; only occasionally were there broad, broken or vacuolar mitochondrial cristae, with disordered intercalated discs and unfavorable continuity. In the model group (Figure 6B), the mitochondria showed obvious edema, with broken cristae in some portions of the infarct tissue. Significantly increased collagen fibers were observed, with more disordered arrangement and loose structures. The integrity of the inner and outer mitochondrial membranes was markedly destroyed, and they were even broken and dissolved.
Results of apoptosis detected by TUNEL Muscle cell nuclei were blue, but apoptotic nuclei were brown ( Figure 7A-F) in remote heart tissue. Compared with the sham surgery group, apoptosis increased markedly in remote tissue in the model group (P,0.01). In comparison with the 829 Paeoniflorin in an AMI rat model model group, apoptosis reduced obviously in the treatment groups (P,0.01; Figure 7G). Results of BNP, TNF-α, IL-6 and IL-10 Compared with the sham surgery group, BNP increased markedly in the model group (P,0.01). However, BNP reduced markedly in the captopril group and PF groups, compared with that in the model group (P,0.01; Figure 8A). Compared with the sham surgery group, the concentrations of serum TNF-α and IL-6 increased markedly in the model group (P,0.01), while IL-10 reduced significantly (P,0.01). In contrast, TNF-α (P,0.05) and IL-6 (P,0.01) reduced obviously in the captopril group, whereas no statistical difference was noted in IL-10 (P.0.05) when compared to the model group. Similarly, TNF-α and IL-6 reduced markedly in the PF groups compared with those in the model group as well. More importantly, IL-10 was observed to increase significantly in the PF groups ( Figure 8B-D). immunohistochemical results Significant statistical differences were noted between the sham surgery group and other groups (P,0.01) in remote heart tissue. There were also obvious differences between the captopril group and PF groups compared with those in the model group (P,0.01). It was demonstrated that PF and captopril could markedly inhibit the expression of Caspase-3 and Caspase-9 proteins (Figures 9-11). Western blot results Compared with the sham surgery group, the expression levels of Caspase-3 and Caspase-9 proteins were increased significantly in the model group in remote heart tissue, while the expression of Caspase-3 and Caspase-9 proteins reduced markedly in the captopril group and the PF groups. Also, the treatment groups had lower expression than that in the model group. The results suggested that PF could significantly inhibit the expression of Caspase-3 and Caspase-9 proteins (Figures 12 and 13). Discussion LVEF evaluated by echocardiography was a more accurate measure of systolic function than ±dP/dt(max) in an MI model. 21 As revealed in this study, rats in the PF group generally improved with reduced LVWI and markedly increased LVEF, compared with those in the model group. Furthermore, the ventricular chambers reduced during systole and diastole, demonstrating that cardiac function in rats had been improved significantly by the treatment of PF. LV remodeling was characterized by anatomic changes. 2 PF could improve these changes, and thereby inhibit LV remodeling. Further observation of myocardial tissues in the PF group by HE staining and Masson staining showed that the myocardial cells reduced in degeneration and were arranged regularly. In addition, the myofilaments were relatively complete, the space was relatively even, and the collagen fibers of intercellular substances obviously decreased. All the results suggested that the curative effects were equal to those in controls with positive drugs. By the treatment of PF, the structures of myocardial mitochondria in the rats improved markedly. The ultramicrostructures were clear with complete membrane, compact cristae, clear matrix, complete mitochondrial membranes and regularly arranged myofilaments. 
Long-term activation of the neurohormonal response, especially of the sympathetic nervous system and the renin-angiotensin-aldosterone system, is a major molecular hallmark of adverse LV remodeling. 2 BNP is synthesized in myocardial ischemia and hypoxia, and the degree of its increase is positively correlated with the severity of the myocardial ischemia and hypoxia. 22 In this study, BNP was markedly reduced in the captopril group and the PF-treated groups. Some studies have confirmed that the concentrations of TNF-α, IL-6 and IL-10 are closely associated with the severity of heart failure. 23 TNF-α promotes VR after MI by initiating a cascade reaction of inflammatory cells. During VR, the interaction of IL-6 and its receptors can gradually induce cardiac fibroblasts to transform into myofibroblasts. Moreover, it promotes abundant collagen protein secretion by stimulating mature fibroblasts in the heart, and precipitates cardiac fibrosis and myocardial cell growth. 23 IL-10 can effectively prevent the proliferation and activation of inflammatory cells, and thus improve VR. In general, inflammatory cytokines such as IL-10 can affect the outcome in two ways: on the one hand, they can inhibit the inflammatory reaction and protect the heart, helping it gradually recover from injury, and thus playing a positive role; on the other hand, they can play a negative role by stimulating and aggravating the inflammation, resulting in aggravation of the heart injury and further heart failure. In this study, in comparison with the sham surgery group and the model group, significant differences were noted in inflammatory factors such as TNF-α, IL-6 and IL-10. These results suggested that inflammatory mechanisms operate in the MI model caused by ligation of the coronary arteries. Conversely, obvious changes were noticed in the inflammatory cytokines after the treatments. Captopril could downregulate the concentrations of serum TNF-α and IL-6, but without obvious regulation of IL-10, whereas PF could downregulate TNF-α and IL-6 and upregulate IL-10, showing a two-way comprehensive action. This demonstrated that one of the mechanisms of PF against VR is the regulation of inflammatory factors in a comprehensive, all-round and favorable two-way manner. PF improved and slowed down the effects of the inflammatory cascade reaction during VR, and held back the continuous development of VR, thus preventing heart failure. Current research on heart failure has focused on the mechanism of energy metabolism dysfunction, as this myocardial disorder is one of the leading causes of the occurrence and development of heart failure. The mitochondrial structure is destroyed during congestive heart failure (CHF), in which adenosine triphosphate formation decreases, energy supply becomes insufficient and myocardial apoptosis is promoted. In addition, myocardial apoptosis has been demonstrated to be the major mechanism of myocardial remodeling in CHF. Therefore, determining the mechanism of mitochondrial energy metabolism dysfunction may provide new therapeutic targets and strategies for the treatment of heart failure. 24,25
As revealed in studies of chronic heart failure rats, and of model rabbits with heart failure induced by volume overload or by myocardial ischemia and reperfusion, Caspase-3 and Caspase-9 are involved in pressure load-induced heart failure and in the occurrence and development of myocardial remodeling; further, the concentrations of the two factors are positively correlated with the severity of heart failure. 20,26 Hence, we speculated that Caspase-3 and Caspase-9 participate in myocardial apoptosis in rats with heart failure following myocardial infarction. Research has revealed that myocardial apoptosis is involved in the whole process of heart failure. 27 Myocardial apoptosis appears from the onset of heart failure, which may be closely associated with a significant increase in intraventricular pressure and excessive activation of the local intracardiac renin-angiotensin system. Apoptosis, which is active cell death regulated by genes, is characterized by the initiation of apoptotic signaling pathways and the expression of relevant genes. Caspase-3 is an important protease during apoptosis and also a marker enzyme of apoptosis. 28 Myocardial apoptosis activates the caspase family via the death receptor pathway or the mitochondrial pathway. 29 Caspase-9 is activated in the apoptosome, promoted by various proapoptotic factors during the process of apoptosis, and in turn promotes the expression of Caspase-3 downstream in the caspase family; Caspase-3 is finally required to execute the process of myocardial apoptosis. 30 It has been demonstrated in this study that PF could downregulate the expression of Caspase-3 and Caspase-9 in rats with heart failure. The intervention could also markedly improve the clinical symptoms and cardiac function indexes in heart failure rats, showing significant differences compared with the model group. Hence, it is speculated that PF could reduce cell apoptosis, inhibit VR, and improve or delay the formation and development of heart failure. These may, therefore, be the mechanisms of treatment for heart failure following MI.

Conclusion

PF could significantly improve cardiac function in rats with VR after AMI. It also adjusted the inflammatory cytokines TNF-α, IL-6 and IL-10. Furthermore, it reduced the BNP level and inhibited the expression of Caspase-3 and Caspase-9. This study has provided a preliminary discussion of the treatment mechanism of PF for CHF from the viewpoint of the disease, whereas curative effects based on disease-syndrome animal models, the expression of other cytokines and the molecular mechanisms remain worth pursuing in further research.
Prognostic value of sleep apnea and nocturnal hypoxemia in patients with decompensated heart failure

Abstract Background Nocturnal hypoxemia is an important factor underlying the impact of sleep apnea on heart failure. It remains unclear whether nocturnal hypoxemia has a greater prognostic value in acute decompensated heart failure (ADHF) compared with the frequency of sleep apnea. Hypothesis Nocturnal hypoxemia might be better than the frequency of sleep apnea in predicting the outcomes in ADHF. Methods Sleep studies were prospectively performed during an ADHF hospitalization from January 2015 to December 2017. Sleep apnea was defined as an apnea-hypopnea index (AHI) ≥15/h. The severity of nocturnal hypoxemia was determined by the percentage of time with saturation below 90% (T90%). The endpoint was the first event of all-cause death, heart transplantation, implantation of a left ventricular assist device, unplanned hospitalization for worsening heart failure, acute coronary syndrome, significant arrhythmias, or stroke. Results Of 382 patients, 189 (49.5%) had sleep apnea. The endpoint incidence did not differ between AHI categories (≥15/h vs <15/h: 52.4% vs 44.6%, log rank P = .353), but did between T90% categories (≥3.6% vs <3.6%: 54.5% vs 42.4%, log rank P = .023). Multivariate Cox regression analysis showed that T90% was independently associated with the endpoint (hazard ratio [HR] 1.008, 95% confidence interval [CI] 1.001-1.016, P = .033), whereas AHI was not; the risk of the endpoint increased by 40.8% in patients with T90% ≥3.6% (HR 1.408, 95% CI 1.030-1.925, P = .032). Conclusion Nocturnal hypoxemia had a greater prognostic value in ADHF than the frequency of sleep apnea.

INTRODUCTION

Sleep apnea, typically categorized as predominantly obstructive (OSA) or central (CSA), is highly prevalent in both acute decompensated heart failure (ADHF) 1,2 and chronic stable heart failure. 3,4 Sleep apnea is responsible for multiple cardiovascular pathophysiological changes in heart failure, such as myocardial ischemia, 5 increased pulmonary arterial pressure, 6 and abnormal cardiac electrophysiological activities, 7,8 based on complex mechanisms, including nocturnal hypoxemia, increased sympathetic activity, an enhanced renin-angiotensin-aldosterone system, and chronic inflammation. 9,10 It has been reported that sleep apnea, generally scored by the apnea-hypopnea index (AHI), might be an independent risk factor for adverse outcomes in heart failure. 1,2,11 However, AHI has been questioned as a prognostic predictor of heart failure in some studies. 12,13 AHI is only a metric reflecting the frequency of apneas and hypopneas during sleep, and by its very definition it does not take the lengths of apneas and hypopneas into consideration. Therefore, more importance should be attached to the detailed characteristics of sleep apnea. Nocturnal hypoxemia, as a composite consequence of apneas and hypopneas, might better represent the adverse effects of nocturnal respiratory events in heart failure. Gottlieb et al reported that increased hemodynamic stress in heart failure was related to the percentage of time with saturation below 90% (T90%), but not to the AHI. 14 Evidence has also suggested that nocturnal hypoxemia appears to be more robust than AHI in predicting outcomes in stable chronic heart failure. 11,15 However, it is unclear whether nocturnal hypoxemia is better than AHI in predicting the outcomes in ADHF.
Therefore, in the present study, we aimed to compare AHI and several parameters of nocturnal hypoxemia in evaluating the prognosis of hospitalized heart failure patients.

METHODS

Patients

This single-center, prospective, observational study was performed in the Heart Failure Center, Fuwai Hospital. From January 2015 to December 2017, patients with ADHF were consecutively enrolled, including both new-onset heart failure and decompensation of chronic heart failure. ADHF was diagnosed based on symptoms/signs of fluid overload and/or hypoperfusion, and appropriate additional investigations such as chest X-ray, electrocardiogram, N-terminal pro-brain natriuretic peptide (NT-proBNP), and echocardiography, according to the European Society of Cardiology Guidelines. 16 The exclusion criteria were as follows: age <18 or >80 years; any coronary event within the previous 3 months or at the time of enrollment, namely acute coronary syndrome (ACS), percutaneous coronary intervention, or coronary artery bypass grafting; implantation of a pacemaker, implantable cardioverter defibrillator (ICD), or cardiac resynchronization therapy within the previous 3 months; heart valve surgery within the previous 3 months; stroke within the previous 6 months; dialysis; chronic obstructive pulmonary disease; acute myocarditis or infective endocarditis; significant uncorrected valvular heart disease; malignancy; pregnancy; diagnosed sleep apnea; or previous receipt of any type of positive pressure ventilation or oxygen therapy. Patients were also excluded if they were admitted to hospital for cardiovascular interventions and surgeries. The study protocol conformed to the Declaration of Helsinki and was approved by the institutional review board of Fuwai Hospital. Individual informed consents were signed.

Sleep study

Patients received sleep studies by means of the Apnealink Plus (Resmed Ltd, Martinsried, Germany) from 22:00 to 6:00, after an initial improvement of heart failure achieved by intensive therapy during the hospitalization period. Patients undergoing the sleep study were requested to be relieved from edema and to lie in a supine position, without dyspnea, under room air. Sleep studies were not done on patients who were hemodynamically unstable, had nocturnal dyspnea, or needed oxygen supplementation or ventilation. Nasal airflow amplitude and oxygen saturation were measured by a nasal flow pressure cannula and a finger pulse oximeter, respectively. The recorded data were analyzed by a two-step method. First, the data were analyzed automatically by software, Apnealink Version 10.20. Then the recordings were manually reanalyzed by a sleep specialist who was blinded to the clinical status of the patients. In the recordings, only time periods with both sufficient airflow and saturation signals were considered valid recording time. We only took into account those sleep studies with a minimum 4-hour valid recording time. Apnea was defined as breathing amplitude decreased by ≥90% for ≥10 seconds. Hypopnea was defined as breathing amplitude decreased by ≥30% for ≥10 seconds, accompanied by a ≥3% drop in oxygen saturation. 17 AHI was defined as the total number of apneas and hypopneas per hour. Sleep apnea was defined as AHI ≥15/h. The oxygen desaturation index (ODI) was defined as the total number of desaturation events, in which oxygen saturation decreased by ≥3%, per hour. The mean saturation (meanSO2), the minimal saturation (minSO2), and T90% during sleep were also recorded.
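A minimal sketch of how the overnight metrics defined above can be computed from scored events and the saturation trace. The event counts and the toy trace are illustrative assumptions, not study data; uniform sampling of the trace is assumed.

```python
def ahi(n_apneas: int, n_hypopneas: int, valid_hours: float) -> float:
    """Apnea-hypopnea index: apneas + hypopneas per hour of valid recording."""
    return (n_apneas + n_hypopneas) / valid_hours

def odi(n_desat_events: int, valid_hours: float) -> float:
    """Oxygen desaturation index: >=3% desaturation events per hour."""
    return n_desat_events / valid_hours

def t90(spo2_samples: list[float]) -> float:
    """T90%: percentage of (uniformly sampled) valid time with SpO2 < 90%."""
    below = sum(1 for s in spo2_samples if s < 90.0)
    return 100.0 * below / len(spo2_samples)

spo2 = [95, 94, 89, 88, 91, 93, 87, 92]   # toy 1-Hz saturation trace
print(ahi(60, 45, 7.0))                   # 15.0 -> meets the >=15/h cutoff
print(odi(84, 7.0))                       # 12.0 events/h
print(t90(spo2))                          # 37.5 % of samples below 90%
```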
Blood samples and echocardiography

Blood samples were routinely collected for every patient. We examined a series of blood parameters, including NT-proBNP, hemoglobin, serum creatinine (SCr), blood urea nitrogen, potassium, sodium, glycated hemoglobin, total cholesterol, and low-density lipoprotein cholesterol. Renal function was evaluated by the estimated glomerular filtration rate (eGFR, mL/min/1.73 m²), based on SCr, using the modification of diet in renal disease (MDRD) equation. Renal dysfunction was defined as eGFR <60 mL/min/1.73 m². Echocardiography was performed using an ultrasound system (Vivid E9; GE, Norway) on admission.
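The paper cites the MDRD equation without reproducing it. The sketch below uses the commonly applied 4-variable, IDMS-traceable form (coefficient 175); whether the authors used this exact version is our assumption.

```python
def egfr_mdrd(scr_mg_dl: float, age_years: float, female: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) from serum creatinine, age, and sex.
    4-variable MDRD with the 175 coefficient; an assumption, not confirmed
    by the paper."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    return egfr * 0.742 if female else egfr

egfr = egfr_mdrd(scr_mg_dl=1.4, age_years=68, female=False)
print(f"eGFR = {egfr:.1f}; renal dysfunction (<60): {egfr < 60}")
```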
Follow-up and endpoint

The enrolled patients were systematically followed up every 3 months, by outpatient reviews or telephone calls, after discharge until December 31, 2018. Follow-up was terminated when death, heart transplantation, or implantation of a left ventricular assist device (LVAD) occurred. The endpoint was defined as the first event of death from any cause, heart transplantation, LVAD implantation, unplanned hospitalization for worsening heart failure, ACS, significant arrhythmias, or stroke. A significant arrhythmia event was defined as sustained ventricular tachycardia, ventricular fibrillation, or asystole. Information on the adverse events was obtained from the medical records for those patients who were followed up at our hospital. For those patients who were not followed up at our hospital, detailed information was obtained by telephone calls with the patients' families and, if necessary, with the local medical institutions to which they were admitted. Data regarding the adverse events were collected and adjudicated by two blinded cardiologists.

Statistical analysis

Continuous variables were presented as mean ± SD or median with interquartile range (IQR), as appropriate, while categorical variables were expressed as frequency and percentage. Baseline characteristics were compared with Student's t test or the Mann-Whitney U test for continuous variables, and the chi-square test or Fisher's exact test for categorical variables. The impact of each sleep study parameter on the time to the endpoint was assessed by Kaplan-Meier analysis using the log-rank test. The thresholds of the sleep study parameters were determined by the median values, except for AHI. Factors associated with the endpoint were determined using univariate Cox regression analysis, including age, gender, BMI, coronary artery disease, hypertension, diabetes mellitus, dyslipidemia, atrial fibrillation, renal dysfunction, NYHA class, mean arterial blood pressure (MAP) at discharge, NT-proBNP, LVEF, medications prescribed at discharge (ie, angiotensin converting enzyme inhibitor [ACEI]/angiotensin receptor blocker [ARB], β-blocker, spironolactone, calcium channel blocker, and statin) and sleep study parameters. Variables with P < .10 in univariate analysis were included in a multivariate Cox regression analysis to identify the independent risk factors of the endpoint, based on stepwise backward selection using a likelihood ratio (P > .5 for exclusion). Sleep study parameters were included in the multivariate analysis irrespective of their significance in univariate analysis. Because of potential correlation between sleep study parameters, each tested parameter was analyzed separately in the multivariate analysis. Hazard ratios (HR) and 95% confidence intervals (CI) were calculated. A two-tailed P < .05 was considered statistically significant. All data were analyzed using SPSS version 23.0 (IBM Corporation, Armonk, New York).

RESULTS

A total of 420 patients who met the predefined inclusion/exclusion criteria were followed up systematically after discharge. There were no differences in AHI or AHI categories between patients with and without the endpoint, whereas T90% was significantly higher in patients with the endpoint (Table 1). The Kaplan-Meier analysis showed that the incidence of the endpoint did not differ between AHI categories (≥15/h vs <15/h: 52.4% vs 44.6%, χ² = 0.862, log rank P = .353; Figure 2A), but did differ between T90% categories (≥3.6% vs <3.6%: 54.5% vs 42.4%, log rank P = .023).

DISCUSSION

AHI might not be the best metric to determine the severity of sleep apnea in heart failure. As it mainly reflects the frequency of apneas and hypopneas during sleep, by its very definition AHI does not consider the durations of apneas and hypopneas. As a consequence, AHI cannot differentiate between apneas and hypopneas of the same number but different durations. Moreover, the lengths of apneas and hypopneas are dependent on cardiac function. 22 The greater the extent of cardiac dysfunction, the longer the apneas and hypopneas will be. As a result, the total number of apneas and hypopneas is potentially limited in heart failure, and the severity of sleep apnea, as determined by AHI, is consequently underestimated.

The result of our study demonstrated that some parameters representative of nocturnal hypoxemia (ie, T90% and minSO2) were better than AHI in predicting adverse outcomes in ADHF, consistent with previous studies in stable chronic heart failure: Oldenburg et al 11 and others 15 showed a significant association of nocturnal hypoxemia with the prognosis. In addition, minSO2 has also been demonstrated to have a more robust association with fatal or resuscitated sudden cardiac death than AHI. 23 These findings suggested that nocturnal hypoxemia might better represent the detrimental effects of sleep apnea than the frequency of apneas and hypopneas in heart failure. It was reported that T90%, rather than AHI, predicted the elevations in brain natriuretic peptide, 14 indicating that nocturnal hypoxemia might be an important factor underlying the impact of sleep apnea on acute hemodynamic stress in heart failure. Another study found that cardiac norepinephrine spillover was correlated with a reduced oxygen saturation, but not with the AHI, 24 indicating that increased sympathetic activity is more associated with nocturnal hypoxemia. Overall, nocturnal hypoxemia might be a better measure of the adverse effects of sleep apnea than AHI, explaining why it was better than the frequency of apneas and hypopneas in predicting the prognosis in ADHF. 34 A randomized controlled trial has been registered to assess the efficacy of an oral appliance for sleep-disordered breathing and cardiac function in patients with heart failure. 35

CONCLUSION

This study demonstrated that nocturnal hypoxemia was more predictive of adverse outcomes in decompensated heart failure than the frequency of sleep apnea. Prospective studies should be conducted to determine the effect of oxygen therapy on the prognosis of heart failure and sleep apnea.
Prevalence and severity of COVID-19 among children and adolescents with autism spectrum disorders in the Republic of Korea

Autism spectrum disorder is considered a vulnerability factor for many diseases, including coronavirus disease 2019. This study investigated trends in coronavirus disease 2019 among children and adolescents with and without autism spectrum disorder and evaluated whether there are differences in the prevalence, severity, and case fatality rate. We used data from the National Health Insurance Service for all people ⩽19 years of age. Among 9,187,211 children and adolescents ⩽19 years of age, 402,499 (4.4%) were coronavirus disease 2019-positive. Of the total population, 63,054 (0.7%) were diagnosed with autism spectrum disorder, among whom 2557 (4.1%) were coronavirus disease 2019-positive. The coronavirus disease 2019 prevalence was lower among children and adolescents with autism spectrum disorder, at 4055 per 100,000 versus 4383 per 100,000 without autism spectrum disorder (p < 0.001). However, children and adolescents with autism spectrum disorder exhibited a higher proportion of hospitalization (24.8% vs 21.5%) and severe disease (0.2% vs 0.01%) than those without autism spectrum disorder (p < 0.001); the length of hospital stay among inpatients was not different between the two groups (9.5 vs 9.4 days, respectively; p = 0.48). There were six deaths in total, with no deaths among children and adolescents with autism spectrum disorder. The quarantine policies have played a great role in sustaining the low prevalence and the higher hospitalization rates among children and adolescents with autism spectrum disorder.

Lay abstract

It is more difficult to prevent coronavirus disease 2019 in children and adolescents with autism spectrum disorder, as they have trouble communicating and adjusting to new daily routines such as wearing masks and social distancing. However, there have not been many studies that focused on coronavirus disease 2019 among children and adolescents with autism spectrum disorder. We included all Korean citizens under the age of 19 as our study subjects. Among them, we determined the prevalence, severity, and case fatality of coronavirus disease 2019 in children and adolescents with and without autism spectrum disorder. The prevalence of coronavirus disease 2019 among children and adolescents with autism spectrum disorder was lower than that of those without autism spectrum disorder. As for severity, children and adolescents with autism spectrum disorder were more likely to enter severe stages of disease and had higher hospitalization rates than those without autism spectrum disorder. There were no deaths among children and adolescents with autism spectrum disorder, while a few died among children and adolescents without autism spectrum disorder. However, due to the small number of deaths, it was difficult to determine whether there was a link between autism spectrum disorder and coronavirus disease 2019 deaths. We found that the appropriate quarantine policies have played a great role in sustaining the overall low prevalence, and in the higher hospitalization rates, among children and adolescents with autism spectrum disorder compared with those without autism spectrum disorder. Furthermore, because Korea has fewer schools and facilities (i.e. personal care, social training, and skilled nursing facilities) for children and adolescents with autism spectrum disorder than other countries, those with autism spectrum disorder have fewer social contacts than even before the COVID-19 pandemic.
Since the start of the coronavirus disease 2019 pandemic, many people have been infected and have died worldwide (WHO Coronavirus (COVID-19) Dashboard, 2022). However, the threats posed by pandemics do not apply equally to everyone (Centers for Disease Control and Prevention, Pneumonia, 2021; Centers for Disease Control and Prevention, TB and Children, 2021; Hernandez-Vargas et al., 2014; Nwachuku & Gerba, 2006; Poehling et al., 2006; Thompson et al., 2004). Previous studies have reported that COVID-19-related health outcomes, including the incidence of COVID-19, severity, and mortality rates, are worse in socioeconomically disadvantaged groups. Socioeconomic level, race, old age, and disability are known factors that influence health outcomes (Singh & Jemal, 2017; Webb Hooper et al., 2020; Wiemken et al., 2020). In particular, individuals with disabilities are a well-known vulnerable group (Choi et al., 2021; Vai et al., 2021). Among these, individuals with autism spectrum disorders (ASDs) are reported to be vulnerable to infectious diseases such as COVID-19. To prevent the acquisition of infectious diseases, changes in the activities of daily living, such as wearing masks and social distancing, are necessary. However, it is difficult for individuals with ASD to adapt to these changes compared with those without ASD (Bitan et al., 2022). Therefore, it is believed that children and adolescents with ASD are more vulnerable to COVID-19; however, few studies have been conducted. In Korea, there are approximately 470,000 births annually, and 0.5%-0.7% of them have ASD, diagnosed by a doctor at age 8 when they enter elementary school. Epidemiological studies have reported an ASD prevalence of up to 2.7%, but the risk of COVID-19 in children and adolescents with ASD has not yet been studied (Yoo et al., 2022). As such, the present study aimed to determine the prevalence, severity, and case fatality rate associated with COVID-19 among children and adolescents ⩽19 years of age, and to evaluate whether there are differences between children and adolescents with and without ASD. We investigated whether health inequalities exist by examining differences in COVID-19-related health outcomes between children and adolescents with and without ASD.

Study design and population

This retrospective cross-sectional study used claims data provided by the National Health Insurance Service (NHIS). In Korea, it is compulsory to join the NHIS, and 97% of citizens are NHIS beneficiaries. The remaining 3%, who are unable to pay insurance premiums, are Medical Aid recipients and are supported by the government budget for their medical expenses (2018 Medical Aid Statistics, 2019). This study was conducted on the entire population ⩽19 years of age as of March 31, 2022. Data on COVID-19 prevalence, severity, hospitalization, and case fatality for the study participants were used from January 1, 2020, when the first case of COVID-19 was detected in Korea, to March 31, 2022, when a massive spread of the Omicron variant occurred. The total population was based on data from the Korean National Statistical Office. Children and adolescents with ASD were defined as those with ASD diagnosis codes (F84.0, F84.1, F84.5, F84.8, and F84.9) between January 1, 2002 and December 31, 2019. As of March 2022, the total population of Korea was 52,762,651, of which 9,187,211 (17.4%) were ⩽19 years of age.
The number of children and adolescents with ASD ⩽19 years of age was 63,054, corresponding to 0.7% of the population ⩽19 years of age. COVID-19 patients were defined as those with a COVID-19 diagnostic code (U07.1) according to the Korean Standard Classification of Diseases-7, which is a modified version of the International Classification of Diseases, 10th Revision.

Data collection

Data regarding sex, age, insurance type, and residential area were collected from the NHIS database. Insurance type served as a proxy for the economic level of the study participants, who were classified into two groups: NHIS beneficiaries and Medical Aid recipients. Residential areas were divided into the Seoul metropolitan area, Daegu and Gyeongsangbuk province, and other areas, according to the level of the COVID-19 epidemic. Comorbidities were estimated using the Charlson Comorbidity Index (CCI), an index that predicts the risk of death within 1 year of hospitalization according to the type(s) and condition(s) of comorbidities (Sundararajan et al., 2004). Disability was identified using the National Disability Registry. The World Health Organization ordinal scale was used to assess the severity of infection, which was divided into four categories: ambulatory (scores 1 and 2); hospitalized, mild disease (scores 3 and 4); hospitalized, severe disease (scores 5-7); and death (score 8) (WHO R&D Blueprint novel Coronavirus, 2020). Deaths from COVID-19 were defined as death after being hospitalized with a diagnosis of COVID-19 (Korea Centers for Disease Control and Prevention, 2022). The length of admission was defined as the period of hospitalization with a main diagnosis of COVID-19 after the initial diagnosis of COVID-19. The study protocol was approved by the Institutional Review Board (IRB) of Seoul National University Bundang Hospital (IRB number X-2109-709-902).

Statistical analysis

Characteristics of the total population, children and adolescents without ASD, and those with ASD are expressed as mean with standard deviation, or as number and percentage. Prevalence was calculated as the number of COVID-19 patients per 100,000 population, and the case fatality rate was calculated as the proportion of deaths among COVID-19 patients. Descriptive statistics were compared using the two-tailed Student's t-test or analysis of variance for continuous variables, and the χ² test for categorical variables. All tests were two-tailed, and differences with p < 0.05 were considered statistically significant. SAS Enterprise Guide version 8.2 (SAS Institute, Inc., Cary, NC, USA) was used for statistical analysis.
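A sketch of the prevalence and case-fatality computations and the WHO ordinal-scale grouping described above. The counts are the totals reported in this paper; the helper names are ours.

```python
def prevalence_per_100k(cases: int, population: int) -> float:
    """Prevalence: COVID-19 patients per 100,000 population."""
    return 100_000 * cases / population

def case_fatality_rate(deaths: int, cases: int) -> float:
    """Case fatality rate: proportion of deaths among COVID-19 patients (%)."""
    return 100.0 * deaths / cases

def who_category(score: int) -> str:
    """Collapse the WHO ordinal scale (1-8) into the four study categories."""
    if score <= 2:
        return "ambulatory"
    if score <= 4:
        return "hospitalized, mild disease"
    if score <= 7:
        return "hospitalized, severe disease"
    return "death"

print(prevalence_per_100k(2_557, 63_054))       # ~4055 per 100,000 (with ASD)
print(prevalence_per_100k(399_942, 9_124_157))  # ~4383 per 100,000 (without ASD)
print(who_category(5))                          # hospitalized, severe disease
```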
Among children and adolescents with ASD, males outnumbered females by a factor of 2.9 (46,905 vs 16,149, respectively), and the sex distribution of COVID-19 patients (1921 males vs 636 females) among children and adolescents with ASD did not differ significantly from that of the overall ASD population ⩽19 years of age (p = 0.38). For children and adolescents both with and without ASD, the proportion of those with COVID-19 was higher in the 0-4 and 5-9 years age groups (p < 0.001). The proportion of Medical Aid recipients was 6.5% among children and adolescents with ASD and 2.2% among those without ASD. The proportion of Medical Aid recipients was similar between those with and without COVID-19, in children and adolescents both with and without ASD. By region, the proportion of COVID-19 patients in the Seoul metropolitan area was higher than the corresponding proportion of the population ⩽19 years of age, both among children and adolescents with ASD (60.6%) and among those without ASD (59.8%). When the CCI was high, both children and adolescents with ASD (1.7% for CCI ⩾3) and those without ASD (0.5% for CCI ⩾3) had a higher proportion of COVID-19 cases (Table 1).
Prevalence of COVID-19
The prevalence of COVID-19 in the total population ⩽19 years of age was 4381 per 100,000 children and adolescents. It was 4055 per 100,000 among children and adolescents with ASD, which was lower than that in children and adolescents without ASD (4383 per 100,000; p < 0.001). Among children and adolescents with ASD, both males (4096 vs 4520; p < 0.001) and females (3938 vs 4239; p = 0.008) had a lower prevalence than those without ASD. By age, the prevalence was higher among children and adolescents with ASD than in those without ASD at 0-4 years of age (5444 vs 4961; p < 0.001); at 5-9 years of age, it was marginally lower (4909 vs 5088; p = 0.07), and at 10-14 years (3665 vs 3916; p = 0.003) and 15-19 years of age (3003 vs 3762; p < 0.001), it was significantly lower. Among both NHIS beneficiaries and Medical Aid recipients, children and adolescents with ASD had a lower prevalence of COVID-19 than those without ASD (p < 0.001). Children and adolescents with ASD had a lower prevalence of COVID-19 in the Seoul metropolitan area and other areas (p < 0.001); however, there was no significant difference in Daegu and Gyeongsangbuk province (p = 0.05). Children and adolescents with ASD had a higher prevalence than those without ASD when the CCI score was ⩾3 (4988 vs 4673; p = 0.001) or when a disability was present (4005 vs 3789; p = 0.01) (Table 2).
Severity of COVID-19
A total of 634 (24.8%) children and adolescents with ASD and 85,974 (21.5%) without ASD were hospitalized. Hospitalized patients with mild disease, who did not require oxygen or required at most low-flow oxygen via nasal prongs, numbered 628 (24.6%) in the ASD group and 85,920 (21.5%) in those without ASD. Patients hospitalized for severe disease, requiring high-flow oxygen therapy or more intensive support, numbered 6 (0.2%) with ASD and 50 (0.01%) without ASD. There were six deaths, all among children and adolescents without ASD (Figure 1, Table 3).
Length of hospital stay for COVID-19
The mean (standard deviation) length of hospital stay was 2.3 (4.4) days for those with ASD and 2.0 (4.1) days for those without ASD. For inpatients, the mean length of stay was 9.5 (3.6) days for children and adolescents with ASD and 9.4 (3.3) days for those without ASD.
There was no significant difference between the groups (p = 0.48). The number of patients hospitalized for 1-14 days and 15-28 days was 597 (23.4%) and 36 (1.4%), respectively, among children and adolescents with ASD, and 82,946 (20.7%) and 2787 (0.7%), respectively, in those without ASD (Table 4).
Discussion
In this study, there was a lower prevalence of COVID-19 among children and adolescents with ASD than in those without ASD, although those 0-4 years of age, residents of Daegu and Gyeongsangbuk province, those with a CCI score ⩾3, and children and adolescents with disabilities had a higher prevalence of COVID-19 than those without ASD. Children and adolescents with ASD were more severely affected than those without ASD, and the proportions hospitalized for 1-14 days and 15-28 days were slightly greater, although not substantially different. Only children and adolescents without ASD died; however, because the number of deaths was very small, it was difficult to determine whether the difference was statistically significant (Figure 1). A previous study reported that individuals with ASD had a slightly higher prevalence of COVID-19 than the general population, especially among children and adolescents, and were more likely to have severe disease (Krieger et al., 2021). A study from the United States found that patients with intellectual or developmental disabilities along with ASD had higher hospitalization rates and longer hospital stays due to COVID-19 than those without these conditions (Karpur et al., 2022). In a study involving patients with intellectual or developmental disability (IDD), including ASD, the prevalence of COVID-19 was 1.28 times higher than in those without IDD (Lunsky et al., 2022). The higher prevalence among ASD patients in previous studies can be explained by the behavioral characteristics of autism: it may be difficult for individuals with ASD to follow quarantine rules such as social distancing and personal hygiene owing to aggression, low communication skills, and insufficient attention (Eshraghi et al., 2020; Hollis et al., 2021). In addition, ASD-related facilities, such as educational institutions and group homes, can also be risk factors for acquiring COVID-19 (Bergman et al., 2021; Gurdasani et al., 2021). In contrast, our study found that the prevalence of COVID-19 was lower among children and adolescents with ASD. Before the outbreak of the Omicron variant, the overall prevalence of COVID-19 in Korea was low, and strict quarantine restrictions were imposed. Since the beginning of the COVID-19 pandemic, Korea has relied on preventive testing, immediate tracking, and treatment, known as the "3T strategy" (test-track-treatment) (Na et al., 2020). In addition, the difference between our results and those of previous studies may reflect differences between adults and children and adolescents. During the COVID-19 pandemic, schools conducted classes almost exclusively online, or online with only intermittent in-person attendance. Even after quarantine measures against COVID-19 were relaxed, preventive tests were performed on a regular basis in high-risk facilities such as schools and mental health and long-term care facilities (Korea Centers for Disease Control and Prevention, 2021). As a result, the low prevalence of COVID-19 among children and adolescents with ASD in Korea is believed to reflect a combination of reduced social activities and quarantine policies.
Furthermore, because Korea has fewer schools and facilities (i.e., personal care, social training, and skilled nursing facilities) for children and adolescents with ASD than other countries, those with ASD had fewer social contacts even before the COVID-19 pandemic (Kim et al., 2019; Lee & Yun, 2019). In addition, it has been reported that most adults with ASD were not fully vaccinated and that the prevalence of COVID-19 is therefore higher among those with ASD (Lunsky et al., 2022). Because vaccination was introduced late in children and adolescents, this difference may not have been apparent in this age group. The severity of COVID-19 has been reported to be greater among individuals with ASD in previous studies. Several studies have reported that individuals with ASD experience higher hospitalization rates, longer lengths of admission, and more frequent use of invasive mechanical ventilation and intensive care unit admission due to COVID-19 than the general population (Karpur et al., 2022; Koyama et al., 2022). In a study that analyzed 35,898,076 people with disabilities, those with ASD and IDD were 9 times more likely to be hospitalized for COVID-19 infection and were hospitalized 6 times longer than those without ASD and IDD (Karpur et al., 2022). The results of our study are in accordance with those reported in previous studies. It is well known that the severity and risk of death from COVID-19 increase with age (Guan et al., 2020). Previous studies have included only adults, or most of their subjects were adults; therefore, results in children and adolescents, who have a relatively low risk of severe disease, may differ (Karpur et al., 2022; Koyama et al., 2022; Lunsky et al., 2022). The increased disease severity can be attributed to immunological factors. Congenital infection, activation of maternal immunity, and transplacental antibodies are associated with the pathophysiology of ASD and often with co-occurring medical conditions that affect the immune system (Fernandes & Kwak, 2022; Lima et al., 2020). In addition, individuals with ASD have an increased likelihood of experiencing "cytokine storms" due to increased levels of pro-inflammatory cytokines (Lima et al., 2020), which can result in more severe outcomes (Lin et al., 2020). Some studies have shown that low melatonin levels among individuals with ASD may also increase their susceptibility to SARS-CoV-2 infection (Anderson & Reiter, 2020; Brown et al., 2021). Furthermore, Korea's approach to managing COVID-19 patients is related to the higher rate of hospitalization among those with ASD. The Korea Infectious Disease Response Manual for the Disabled requires that special attention be devoted to individuals with ASD, who have limited communication ability and have difficulty in providing sufficient information and understanding, and that they be considered a priority for hospitalization (Ministry of Health and Welfare, 2021). Meanwhile, beginning in December 2021, all confirmed COVID-19 patients in Korea have been treated at home, with only critically ill patients being admitted to hospital (Central Disease Control Headquarters, Central Disaster Management Headquarters, 2021). Children and adolescents with ASD may have a higher hospitalization rate than the general population because undergoing therapy at home is assumed to be challenging for them owing to communication issues. The results of this study revealed a relatively low case fatality rate, with only six deaths in the population ⩽19 years of age.
Owing to the small number of cases, it was difficult to determine the relationship between ASD and death due to COVID-19 in children and adolescents. In previous studies, adults as well as children and adolescents with ASD or IDD exhibited greater mortality or case fatality rates (Henderson et al., 2022; Koyama et al., 2022). In a study that analyzed children and adolescents separately, the case fatality rate was higher in the IDD group than in the non-IDD group; however, the number of deaths was very low (1-2 deaths per group), similar to our study (Turk et al., 2020). In a recent study, the risk factors for COVID-19 infection in people with IDD receiving residential support services were increased age, Down syndrome, an increased number of residents, and chronic kidney disease; heart disease was a risk factor for COVID-19 death (Landes et al., 2021). Underlying medical conditions are among the important risk factors. In the United States, approximately 9.2% of school-age COVID-19 patients with underlying diseases were children and adolescents with disability, including ASD (Leeb et al., 2021). In a study that analyzed six hospitals in the United States, 20.2% of children and adolescents under the age of 18 hospitalized for COVID-19 had neurologic or developmental conditions, including ASD. In addition, when these underlying medical conditions were present, the length of hospital stay and the number of intensive care unit (ICU) admissions increased. However, similar to our study, there were only 11 deaths out of 713 total participants (Wanga et al., 2021). COVID-19 mortality rates increase with age, and other comorbidities that could increase the risk of COVID-19 also rise with age. As a result, children and adolescents with ASD have significantly lower COVID-19 mortality rates than adults. Age is a strong risk factor for COVID-19 infection and death in the general population as well as in those with ASD. Therefore, a large-scale study is needed to investigate the mortality rate of children and adolescents in the future. A strength of our study is that it used nationally representative data covering virtually all children and adolescents with ASD in Korea. This is meaningful in a situation where data regarding COVID-19 health outcomes of children and adolescents with ASD are scarce. Furthermore, because the government covers all COVID-19-related hospitalizations, with no out-of-pocket costs to patients, the hospitalization rate is less influenced by socioeconomic factors. On the other hand, the reported prevalence may underestimate the actual prevalence, because individuals who were not tested for COVID-19, even if symptomatic, were not counted as COVID-19 patients. The validity of the ASD diagnosis is another limitation. There is no precise information in our data about who entered the diagnostic code. In Korea, psychiatric care is available in primary care settings without a referral, and the public is aware that a psychiatrist makes the diagnosis of ASD (Korea National Health Insurance Service and Health Insurance Review and Assessment Service, National Health Insurance Statistical Yearbook, 2021). Therefore, the majority of diagnoses are assumed to have been made in psychiatry.
Conclusion
Children and adolescents with ASD in Korea exhibited a lower prevalence and greater severity of COVID-19 than those without ASD, with no significant difference in length of hospitalization.
The results can be partially explained by the implementation of Korea's COVID-19 quarantine policy, which resulted in a small number of COVID-19 patients. However, they are also assumed to be influenced by a decrease in social contacts resulting from the absence of facilities and services for children and adolescents with ASD. Building and supporting facilities and services for children and adolescents with ASD is vital, as is maintaining them in the event of an infectious disease outbreak.
2023-03-21T06:16:20.789Z
2023-03-19T00:00:00.000
{ "year": 2023, "sha1": "589c620bdcee748745da0698a26b71a753c76fc7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "d69841337c21c0bd2d243cb451a09ec5be96d787", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256056586
pes2o/s2orc
v3-fos-license
Gamma-Aminobutyric Acid Promotes Beige Adipocyte Reconstruction by Modulating the Gut Microbiota in Obese Mice
Given the increasing prevalence of obesity, the white-to-beige adipocyte conversion has attracted interest as a target for obesity treatment. Gamma-aminobutyric acid (GABA) treatment can reduce obesity, but the underlying mechanism remains unclear. Here, we aimed to investigate the mechanism by which GABA triggers weight loss by improving the beiging of inguinal white adipose tissue (iWAT) and the role of the gut microbiota in this process. The results showed that GABA reduced body weight and adipose inflammation and promoted the expression of thermogenic genes in the iWAT. The 16S rRNA sequence analysis of the gut microbiota showed that GABA treatment increased the relative abundance of Bacteroidetes, Akkermansia, and Romboutsia and reduced that of Firmicutes and Erysipelatoclostridium in obese mice. Additionally, serum metabolomic analysis revealed that GABA treatment increased 3-hydroxybutyrate and reduced oxidized lipid levels in obese mice. Spearman's correlation analysis showed that Akkermansia and Romboutsia were negatively associated with the levels of oxidized lipids. Fecal microbiota transplantation analysis confirmed that the gut microbiota was involved in the white-to-beige adipocyte reconstruction by GABA. Overall, our findings suggest that GABA treatment may promote iWAT beiging through the gut microbiota in obese mice. GABA may be utilized to protect obese people against the metabolic abnormalities brought on by obesity and gut dysbiosis.
Introduction
The incidence of obesity has risen at an unprecedented rate over the past 30 years, seriously endangering human health and socio-economic development [1,2]. The most common way to lose weight is through lifestyle interventions, such as exercise and diet, but these are difficult for patients to maintain. Most weight loss drugs on the market, meanwhile, have undesirable side effects [3]. Previous studies have shown that the beiging of inguinal white adipose tissue (iWAT) facilitates weight loss and has an anti-obesity effect [4,5]. Therefore, a strategy for increasing beige fat activity could be an effective therapeutic for preventing obesity. It was previously believed that adipose tissue could be divided into two categories: white and brown adipose. Brown adipose tissue (BAT), which is an important thermogenic tissue in the body, contains uncoupling protein 1 (UCP1) and higher numbers of mitochondria. Meanwhile, white adipose tissue (WAT) plays a major role in lipid and energy storage. In recent years, beige adipose, a third type of adipose tissue, has been identified, which can also dissipate energy as heat.
Glucose and Insulin Tolerance Test
Fasting blood glucose (FBG) levels were measured in blood samples collected from tail veins after fasting the mice for 16 h. Glucose tolerance tests (GTT) and insulin tolerance tests (ITT) were conducted at the beginning and end of animal treatment, according to previously published methods [21]. Briefly, the GTT was performed with a glucose dose of 2 mg·kg⁻¹ body weight; blood glucose was measured before and 30, 60, and 120 min after the intraperitoneal injection of glucose. Similarly, the ITT was performed with an insulin dose of 0.75 U·kg⁻¹; blood glucose was measured at 0, 15, 30, 60, and 90 min after insulin injection.
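The GTT and ITT results reported later are summarized as areas under the glucose curves (Figure 1I,K). The paper does not state how the area was computed; the conventional choice is the trapezoidal rule, sketched here with hypothetical glucose readings at the GTT time points above:

```python
import numpy as np

# Hypothetical blood glucose readings (mmol/L) at the GTT time points above
t = np.array([0, 30, 60, 120])            # minutes after glucose injection
glucose = np.array([5.0, 14.5, 11.8, 8.2])

auc = np.trapz(glucose, t)                # trapezoidal area under the curve
print(f"GTT AUC: {auc:.0f} mmol/L x min")
```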
Cold Stimulation Test
Before cold stimulation, the rectal temperature of each mouse at room temperature (25 °C) was recorded as the basal body temperature (at 0 min); the mice were then placed under cold conditions (4 °C). The rectal temperature of the mice was measured and recorded after 30, 60, 120, and 180 min. Water and food were not provided during the test. After the experiment, the mice were transferred to room temperature as soon as possible, following which they were free to eat and drink water.
Tissue Collection and Immunostaining
Mice were euthanized via cervical dislocation, following which small intestine, colon, iWAT, epididymal WAT (eWAT), and BAT samples were immediately collected. Samples were quickly placed in a freezer at −80 °C until use. Portions of the samples were fixed in 4% formaldehyde overnight for histological examination.
Gut Microbiota Analysis
The gut microbiota was analyzed following a previously published method [22]. Briefly, at the end of the above experiment, fecal samples were collected from each mouse and stored at −80 °C. Genomic DNA was extracted from fecal samples using the QIAamp DNA stool kit (Qiagen, Hilden, Germany), following which the purity and concentration of the DNA were determined via agarose gel electrophoresis. The V3-V4 region of the bacterial 16S ribosomal ribonucleic acid (rRNA) gene was amplified using polymerase chain reaction (PCR); the PCR product was verified with agarose gel electrophoresis (2%), and the target band was recovered. The TruSeq® DNA PCR-Free sample preparation kit (Illumina, San Diego, CA, USA) was used to construct the library. After the library passed quality control, sequencing was performed using an Illumina NovaSeq6000 (Otogenetics, Norcross, GE, USA). Sequences with 97% similarity were clustered into operational taxonomic units (OTUs). The free online Metware Cloud Platform (Metware Biotechnology Co., Ltd., Wuhan, China) was used to conduct linear discriminant analysis (LDA) effect size (LEfSe), Chao 1 and Shannon index, non-metric multidimensional scaling (NMDS), and Tax4Fun analyses.
Non-Targeted Metabolomics
Serum collected at the end of the study period was used for non-targeted metabolomic analysis. Serum was isolated from whole blood collected from anesthetized mice via retro-orbital sampling. In accordance with the manufacturer's instructions, the serum samples were placed on ice, three times the volume of ice-cold methanol was added, and the samples were centrifuged for 10 min (12,000 rpm, 4 °C). The supernatant was collected and centrifuged for 5 min (12,000 rpm, 4 °C). Finally, the supernatant was collected and used for liquid chromatography with tandem mass spectrometry analysis (UPLC: Shimpack UFLC SHIMADZU CBM A system, https://www.shimadzu.com/, accessed 13 November 2022; Shimadzu, Columbia, OR, USA. MS: QTRAP® System, https://sciex.com/, accessed 13 November 2022; Sciex, Framingham, MA, USA), according to standard protocols. The Metware Cloud Platform (Metware Biotechnology Co., Ltd., Wuhan, China) was used for metabolomics data analysis.
Magnetic Resonance Imaging (MRI)
After 16 weeks, MRI was used to assess visceral and subcutaneous fat in mice fed the SD or an HFD. Images were acquired using a 9.4 Tesla, 30 cm-diameter-bore small animal MRI scanner (uMR 9.4T, United Imaging Life Science Instrument Co., Ltd., Wuhan, China) equipped with a gradient insert with a maximum strength of 1000 mT/m and a slew rate of up to 10,000 T/m/s.
For homogeneous total-body scanning, a two-channel volume coil with an 86 mm inner diameter was used for both transmitting and receiving signals. T1-weighted multi-slice fast spin echo scans were performed on the entire mouse body horizontally; the parameters were as follows: TE/TR = 8.72/500 ms, BW = 360 Hz/pixel, field of view: 40 × 96 mm², data matrix size: 426 × 1024, 22 × 0.5 mm slices with a 0.1 mm slice gap. Data were analyzed using Carimas software (version 2.9, Turku PET Centre, Turku, Finland). For the FMT experiment, the antibiotic-treated obese mice were randomly divided into HFD/FMT and GABA/FMT groups. After treatment with antibiotics, the fecal microbial supernatant from the HFD group and the HFD+GABA group was gavaged into the HFD/FMT and GABA/FMT mice (150 µL·d⁻¹ for 7 consecutive days, at 0.1 g feces/mL), respectively, according to a previously reported method [23]. The body weights and FBG of all mice were recorded before and 4 weeks after FMT.
Quantitative PCR Analysis
Total RNA was extracted from iWAT and BAT samples using TRIzol reagent (Takara, Kyoto, Japan), according to the manufacturer's instructions. To synthesize complementary DNA (cDNA), total RNA was reverse-transcribed using a cDNA Synthesis Kit (TaKaRa, Kyoto, Japan). Real-time PCR amplification was performed using gene-specific primers and a SYBR Green real-time PCR kit (Takara, Kyoto, Japan). The gene primers used in this study are listed in Table 1. The expression levels of the target genes were normalized to those of β-actin in the same cDNA sample and were calculated using the 2^(−ΔΔCT) method.
Western Blot Analysis
Western blot analysis of tissues was performed following standard procedures described in a previously published paper [24]. Loading controls were established using β-actin immunoblots. Briefly, for each sample, 40 mg of tissue lysate was separated using 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis, following which the resolved proteins were transferred onto polyvinylidene fluoride (PVDF) membranes. The PVDF membranes were incubated with a blocking buffer (5% skimmed milk) for 1 h and then incubated with the following primary antibodies: anti-UCP1 (1:2000, Abcam, ab10983) and anti-β-actin (1:5000, CST, #4937). The antibodies were diluted in the blocking buffer, and incubation proceeded overnight at 4 °C. The next day, the membranes were washed and incubated with appropriate secondary antibodies for 1 h at room temperature. Finally, the PVDF membranes were washed thrice, and the protein signal was detected using an enhanced chemiluminescence reagent (Pierce, Rockford, IL, USA) and an enhanced chemiluminescence system (GE Healthcare Life Sciences, Buckinghamshire, UK). Western blots were quantified with ImageJ software (NIH, Bethesda, MD, USA), using β-actin as an internal control.
Statistical Analysis
All data are reported as mean ± standard deviation. Statistical differences were analyzed using a two-tailed Student's t-test or analysis of variance (ANOVA); the Tukey-Kramer test was used for post hoc multiple comparisons. A p-value < 0.05 was considered significant. Differences in the LEfSe ratios of the core fecal microbiota were analyzed using the non-parametric Kruskal-Wallis rank sum test. Spearman's correlation was used to determine the relationship between gut microbiota and serum metabolites.
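For concreteness, the 2^(−ΔΔCT) normalization above works as follows; the Ct values in this sketch are hypothetical and serve only to illustrate the arithmetic:

```python
def relative_expression(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    """Fold change of a target gene by 2^-ddCT: each sample's target Ct is
    normalized to beta-actin, then referenced to a control sample."""
    d_ct = ct_target - ct_actin              # dCt of the treated sample
    d_ct_ref = ct_target_ref - ct_actin_ref  # dCt of the control sample
    return 2.0 ** -(d_ct - d_ct_ref)         # 2^-(ddCt)

# Hypothetical Ct values: Ucp1 in an HFD+GABA sample vs an HFD control sample
print(relative_expression(22.1, 17.0, 24.9, 17.2))  # ~6.1-fold upregulation
```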
GABA Treatment Promotes Energy Consumption and Improves Glucose Metabolism
HFD-induced obese mice were used to study the effect of GABA treatment on energy consumption. The body weights of mice in the HFD group significantly increased compared to those in the SD group during animal modeling (Figure 1A). HFD feeding for 16 weeks led to a significant increase in body weight (Figure 1B). Moreover, abdominal MRI revealed that the accumulation of epididymal and subcutaneous fat was significantly higher in HFD mice than in SD mice (Figure 1D,E). Meanwhile, a four-week GABA intervention reduced weight gain and WAT accumulation in mice fed an HFD for 12 weeks, while weight and fat loss in the SD+GABA group were not apparent (Figure 1A-E). However, it remained unclear how GABA reduced fat accumulation in obese mice, leading to weight loss. Obesity develops when energy intake exceeds energy dissipation [25]. In this study, no significant difference was observed in energy intake among the HFD-fed mice (Figure 1C), which indicates that the effects of GABA treatment on body weight and the obesity phenotype were not caused by reduced food consumption or energy intake. This, in turn, suggests that GABA may have promoted energy expenditure in obese mice. After being kept in the cold (at 4 °C) for 180 min, mice in the GABA group showed increased rectal temperatures compared to those in the HFD group (Figure 1F), indicating that the GABA-treated mice dissipated more energy than those in the HFD group. After GABA treatment, the FBG in the HFD+GABA group significantly decreased compared to that in the HFD control mice (Figure 1G). In addition, the area under the curve (AUC) of the GTTs revealed that the HFD-fed mice had worse glucose tolerance than the SD mice and that GABA could restore glucose tolerance (Figure 1H,I). Similarly, in the ITT analysis, the AUC values of GABA-treated mice were remarkably lower than those of the HFD control mice (Figure 1J,K). Overall, GABA treatment successfully promoted energy consumption and improved glucose tolerance and insulin resistance in obese mice.
Figure 1 (caption, partial): ..., and the area under the ITT curve (K) at the end of the experiment. Data are presented as mean ± standard deviation and were analyzed using one-way or two-way ANOVA. * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001 compared to the SD group in (A,B,F-K); ^ p < 0.05, ^^ p < 0.01, and ^^^^ p < 0.0001 compared to the HFD group in (A,H,J). n = 5-6 for all groups. Ns, not significant. ANOVA, analysis of variance; BAT, brown adipose tissue; eWAT, epididymal white adipose tissue; GABA, gamma-aminobutyric acid; GTT, glucose tolerance test; HFD, high-fat diet; ITT, insulin tolerance test; iWAT, inguinal white adipose tissue; SD, standard diet.
GABA Promotes Energy Consumption through iWAT Beiging
Non-shivering thermogenesis is mainly regulated by the activation of brown and/or beige adipocytes [26]. H&E staining revealed that the HFD increased fat deposition in iWAT tissue, whereas GABA treatment reduced lipid accumulation in the HFD-induced obese mice (Figure 2A). Furthermore, quantification of the adipocyte area in the H&E-stained sections revealed that adipocytes in GABA-treated mice were smaller and fewer than those in the HFD control mice (Figure 2B). The expression of thermogenic genes in the BAT and iWAT was also evaluated. The results revealed that the mRNA expression of brown fat cell-specific genes (including Ucp1, Prdm16, Cidea, Pgc1a, and Mct1) in the iWAT was strongly activated in the GABA-treated mice (Figure 2C-G). Meanwhile, UCP1 antibody staining indicated increased UCP1 protein expression in the iWAT in histological sections (Figure 2H). H&E staining of BAT showed that the average volume of intracellular vacuoles in the HFD+GABA group was lower than that in the HFD group, while it was similar between the SD and SD+GABA groups (Supplementary Figure S1A). Although the mRNA expression of Ucp1 and Mct1 in the BAT was increased in the SD+GABA group compared to the SD group, the expression of thermogenic genes in the BAT of the HFD+GABA group was not statistically higher than that in the HFD group (Supplementary Figure S1B-F). These results indicated that GABA played an important role in energy expenditure and iWAT beiging.
Figure 2 (caption, partial): n = 5-6 for all groups; β-actin served as the PCR reference gene in (C-G). Data are presented as mean ± standard deviation and were analyzed using one-way or two-way ANOVA. * p < 0.05 and **** p < 0.0001. ANOVA, analysis of variance; iWAT, inguinal white adipose tissue.
GABA Reduces Fat Inflammation and Restores Intestinal Structure in HFD Mice
According to prior studies, HFD mice produce more macrophages as well as pro-inflammatory cytokines in the WAT than SD mice [27]. Furthermore, Nehemiah Cox et al. found that macrophages in the WAT respond to dietary fat intake and regulate fat storage in a paracrine manner [28]. Therefore, an analysis of macrophage marker and cytokine expression after GABA treatment was conducted using quantitative PCR. Higher expression levels of the macrophage marker F4/80 and pro-inflammatory cytokines (TNFα and IL1β) and lower expression of the anti-inflammatory cytokine IL10 were observed in the iWAT of HFD mice than in that of the SD group (Figure 3A-D).
Notably, obesity significantly increased the number of CD86-positive M1 macrophages in adipose tissues (Figure 3E), whereas GABA treatment reduced inflammation and pro-inflammatory M1 macrophages in the iWAT of the HFD group (Figure 3A-E). Owing to the general association of obesity with damaged intestinal integrity [29], the morphology of the small intestine and colon was examined in this study. H&E staining showed that the villus height of the small intestine was lower and the crypt depth shallower in the HFD group than in the SD group, which was partially improved by GABA treatment (Figure 3F-H). The major tight junction protein ZO-1 plays an important role in maintaining intestinal integrity in the colon. According to IHC analysis, HFD administration reduced ZO-1 expression, which was significantly increased by GABA treatment (Figure 3J). Furthermore, PAS staining demonstrated that HFD administration reduced the number of goblet cells and glycogen accumulation in the colon, which was increased by GABA treatment (Figure 3K). Overall, a major result of GABA treatment in HFD mice was the reduction in fat inflammation and the enhancement of intestinal barrier integrity.
GABA Modulates the Composition of Gut Microbiota
To investigate whether the anti-obesity and iWAT-beiging effects of GABA are related to the gut microbiota, 16S rRNA gene sequencing of the gut microbiota was conducted. On the rank charts, the HFD group had a smaller lateral range than the SD group, indicating lower species richness (Figure 4A). Compared to that of the HFD group, the curve of the GABA-treated HFD group was flatter, indicating that the species were more evenly distributed (Figure 4A). More OTUs were identified in the GABA-treated group than in the HFD control group, indicating a higher microbiota abundance in the former (Figure 4B). Phylum-level analysis of the gut microbiota indicated that HFD administration significantly reduced Bacteroidetes and increased Firmicutes abundance relative to that in the SD group, whereas GABA treatment reversed these changes (Figure 4B). Both the Chao 1 and Shannon indices revealed significant differences in alpha diversity among the SD, HFD, and GABA groups (Figure 4C,D). Compared to that in the SD group, the microbial diversity was low in the HFD mice, which has previously been shown to be strongly associated with adiposity [14]. These results indicated that the gut microbial community of HFD mice was significantly affected by GABA treatment.
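As a point of reference for the alpha-diversity metrics used here, both indices can be computed directly from an OTU count table. A minimal sketch follows; the count vector is made up, and the Chao 1 form shown is the bias-corrected variant:

```python
import numpy as np

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over the observed OTUs."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

def chao1(counts):
    """Bias-corrected Chao 1 richness: S_obs + F1*(F1-1) / (2*(F2+1)),
    where F1 and F2 are the numbers of singleton and doubleton OTUs."""
    c = np.asarray(counts)
    s_obs = int((c > 0).sum())
    f1 = int((c == 1).sum())
    f2 = int((c == 2).sum())
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

otu_counts = [120, 40, 8, 3, 2, 1, 1, 0]   # one hypothetical sample's OTU counts
print(shannon(otu_counts), chao1(otu_counts))
```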
Beta diversity analysis using NMDS revealed that GABA influenced the gut microbiota composition in HFD mice (Figure 4E). A t-test of species differences at the genus level between the SD and HFD groups showed that HFD administration increased the abundance of harmful bacteria (including Colidextribacter, Mucispirillum, and Erysipelatoclostridium) while reducing that of anti-inflammatory bacteria (Bacteroides and Akkermansia) compared to that in the SD group (Figure 4F). Additionally, LEfSe analysis demonstrated significant species differences between the HFD and HFD+GABA groups. Based on the statistical analysis, GABA treatment increased the relative abundance of Ileibacterium, Akkermansia, and Romboutsia while reducing that of Deferribacteres and Mucispirillum (Figure 4G). In addition, Tax4Fun [30] was used to predict the relative abundance of functional categories from databases such as the Kyoto Encyclopedia of Genes and Genomes (KEGG). Our results showed that pathways involved in metabolism and genetic information processing were upregulated in the fecal microbiome of the GABA group, whereas those involved in human diseases were upregulated in the fecal microbiome of the HFD control group (Figure 4H), indicating different functions of the fecal microbiota between the SD and HFD groups with or without GABA treatment. Overall, these results suggest that the composition and function of the gut microbiota are influenced by both an HFD and GABA.
Gut Microbiota Mediates the Effect of GABA on iWAT Beiging
To further validate the role of the gut microbiota in the therapeutic effect of GABA on WAT beiging in HFD mice, antibiotics were used to eliminate the gut microbiota of the HFD model mice, following which GABA was administered for four weeks (Figure 5A). Gut microbiota analysis showed that species richness (measured using the Chao 1 and Shannon indices) significantly decreased after the antibiotic intervention and did not differ significantly between the Abx/HFD and Abx/GABA groups (Figure 5B,C). Without the gut microbiota, GABA treatment only slightly reduced the weights of the obese mice (Figure 5D) and could not ameliorate glucose metabolism or insulin sensitivity (Figure 5E-H). Compared to that in the Abx/vehicle group, the iWAT did not decrease in the Abx/GABA group (Figure 5I).
Importantly, the mRNA expression of brown fat cell-specific genes (including Ucp1, Prdm16, Cidea, Pgc1a, Mct1, and Dio2) in the Abx/GABA group showed no statistically significant increase (Figure 5J). These data indicate that the effects of GABA on glucose metabolism and fat beiging were dependent on the gut microbiota. To further test this hypothesis, an FMT experiment was conducted: fecal matter from HFD mice treated with or without GABA was transplanted into HFD mice treated with antibiotics. Compared with the HFD/FMT group, GABA-derived FMT ameliorated metabolic dysfunction in the GABA/FMT group (Figure 6A,B); that is, the obese mice of the GABA/FMT group lost weight and displayed improved glucose tolerance. Transplantation of fecal matter from GABA-treated mice into microbiota-depleted HFD mice promoted the expression of UCP1 protein and thermogenic genes in the iWAT (Figure 6C-E). LEfSe analysis showed that GABA/FMT increased the relative abundance of Akkermansia, Romboutsia, and Lactobacillus, while it reduced the abundance of Erysipelatoclostridium and Deferribacteres (Figure 6F). Overall, these findings further demonstrate the role of the gut microbiota in the effects of GABA on WAT beiging and the improvement of metabolic diseases.
Figure 6 (caption, partial): Data are presented as mean ± standard deviation and were analyzed using a two-tailed Student's t-test or two-way ANOVA. ** p < 0.01, *** p < 0.001, and **** p < 0.0001 compared to the HFD/FMT group. ANOVA, analysis of variance; FMT, fecal microbiota transplantation; GABA, gamma-aminobutyric acid; GTT, glucose tolerance test; HFD, high-fat diet; iWAT, inguinal white adipose tissue; LDA, linear discriminant analysis.
Effect of GABA on Serum Metabolites
The intestinal microbiota is known to shape the metabolic pathways and obesity of the host; metabolomic studies are continuously expanding our knowledge regarding the impact of the microbiota on metabolic diseases [31].
In this study, non-targeted metabolomic profiling of more than 500 metabolites was conducted on plasma from SD and HFD mice treated with or without GABA. First, an orthogonal partial least squares discriminant analysis (OPLS-DA) was performed on the metabolomics data. The OPLS-DA model indicated significant metabolic variations between the SD and HFD groups as well as between the HFD and HFD+GABA groups (Figure 7A,B). Next, heatmap analyses of the metabolites were performed; metabolites with fold changes ≥2 and variable importance in projection (VIP) values ≥1 were selected (Figure 7C). The heatmap analyses revealed no difference in the expression of most metabolites between the HFD and HFD+GABA groups and showed that, among the differentially expressed metabolites, more were decreased than increased (Figure 7C). The LEfSe analysis showed that GABA treatment resulted in significant metabolic variations compared to those in the HFD control group (Figure 7D). As previously mentioned, obesity promotes an increase in the levels of oxidized lipids [32]. Among the 37 metabolites with statistically significant differences in serum levels, several oxidized lipids, including (±)17-HDHA, (±)18-HEPE, (±)9-HETE, 12-EET, 14(S)-HDHA, and 15-oxoETE, were enriched in the HFD control group, whereas GABA treatment significantly reduced their concentrations and increased the levels of 3-hydroxybutyric acid and hyodeoxycholic acid (Figure 7D). Next, metabolomics pathway analysis was used to explore the metabolic pathways that may be affected by GABA treatment. As shown in Figure 7E, KEGG enrichment analysis showed that the affected metabolic pathways largely involved arachidonic acid metabolism, fatty acid metabolism, and pantothenate and CoA biosynthesis, which are important for understanding the effect of GABA on obesity. Among these, arachidonic acid metabolism was identified as the most important pathway. According to these results, the HFD-induced serum metabolism abnormalities were effectively reversed by GABA treatment.
Potential Relationships between Serum Metabolites and the Gut Microbiota
To visualize the correlations between the differential microbiota and metabolites, Spearman correlation analysis was conducted, and correlation data for the top 20 differential metabolites and microbiota were extracted to draw a heat map. As shown in Figure 7F, some bacteria (including Akkermansia, Romboutsia, Ileibacterium, and Lachnospiraceae_UCG_006) were negatively correlated, while other bacteria (including Mucispirillum, Pseudomonas, and Erysipelatoclostridium) were positively correlated with the levels of oxidized lipids, such as PGF2α. Additionally, 3-hydroxybutyrate was positively correlated with Romboutsia (Figure 7G), and PGF2α was negatively correlated with Akkermansia (Figure 7H). These relationships suggest that the gut microbiota could affect serum metabolite levels.
Metabolomic analysis of the gut microbiota and metabolites may provide direction for further research into the pathogenesis of obesity and the mechanism of GABA treatment.
Discussion
The incidence of obesity and related metabolic abnormalities has been increasing rapidly in recent years; therefore, there is an urgent need for new treatment strategies to prevent obesity. Several studies have shown that increasing heat production in WAT and BAT may be an effective strategy for obesity treatment. In mainstream metabolic disease research, several genetic mouse models (such as ob/ob and db/db mice) and HFD-induced obesity mouse models have been used [33]. Here, a diet-induced obese mouse model was used to mimic the development of human obesity. Studies have reported that reduced thermogenic activity of BAT or the lack of a WAT beiging effect can lead to diet-induced obesity [5,26]. Here, HFD mice showed significantly reduced levels of beige cells in the iWAT, indicating that obesity impaired fat metabolism and adipocyte differentiation. This study confirmed that GABA treatment could improve the metabolic syndrome by regulating the beiging of iWAT. GABA has attracted wide attention because of its beneficial physiological functions and application prospects and has been used in food and medicine. In recent metabolism-related studies, GABA has mainly been assessed with regard to the pathogenesis of type 1 diabetes [18,34]. In fact, GABA receptors are expressed in many tissues, including intestinal, hepatic, and adipose tissues [35,36]. Our previous study revealed that GABA can improve glucose metabolism by reducing β-cell dedifferentiation [17], although the mechanisms of weight loss and increased insulin sensitivity remained unclear. Here, the effect of GABA on weight loss in HFD mice was investigated, demonstrating that GABA plays an important role in the beiging of iWAT. One surprising discovery of this study was that GABA promoted the beiging of iWAT rather than BAT activation in HFD mice, while the effect was not evident in the SD+GABA mice (Figure 2C-H and Supplementary Figure S1B-F). Therefore, understanding the potential mechanism by which GABA promotes the beiging of iWAT may provide new targets for the prevention and treatment of metabolic diseases and could offer an alternative drug choice to help obese patients lose weight. Fecal microbiota analysis showed that GABA treatment improved the gut microbiota dysbiosis induced by an HFD. Consistent with previous reports, obesity was associated with a relative increase in Firmicutes abundance at the expense of Bacteroidetes [37], while GABA treatment significantly reduced Firmicutes and increased Bacteroidetes abundance in HFD mice (Figure 4B). GABA treatment increased the relative abundance of Akkermansia and Romboutsia, which were negatively associated with body weight and positively associated with the beiging of iWAT. As is well known, Akkermansia reduces adiposity and improves glucose homeostasis in HFD mice [38]. Romboutsia is a natural inhabitant of the small intestine that can utilize carbohydrates [39]. Therefore, Romboutsia may also be considered a candidate genus for predicting and treating obesity and related metabolic disorders. As the metabolome comprises both host and microbial metabolic activities, we further examined the serum metabolites.
The analysis of non-targeted serum metabolomics showed that GABA treatment influenced the metabolism of lipids and ketone bodies, particularly by altering the concentrations of oxidized lipids, which appeared to be a hub through which the gut microbiota regulates fat inflammation and metabolism. In the present study, GABA treatment was found to improve the gut microbiota composition of HFD mice. The correlation analysis of serum metabolites and gut microbiota revealed that the bacteria whose abundance was increased by GABA treatment (phylum Verrucomicrobia; genera Akkermansia, Roseburia, and Lactobacillus) correlated negatively with the levels of oxidized lipids. Moreover, 3-hydroxybutyrate showed a positive correlation with the genus Akkermansia (Figure 7F). These findings indicate that gut microbes have an impact on serum metabolism. Moreover, clearing the gut microbiota with antibiotics was found to significantly inhibit the effects of GABA on the beiging of iWAT in obese mice, and after feces from GABA-treated mice were gavaged into microbiota-depleted HFD mice, the thermogenic gene levels in the iWAT significantly increased. These results showed that GABA treatment improved the gut microbiota composition of obese mice, thereby increasing their beige fat content and ultimately improving their metabolic status. If these results can be further confirmed in clinical trials, GABA could be used to treat obesity-related metabolic disorders. Overall, the potential biomarkers related to the gut microbiota provide useful information for understanding the effects of GABA on obesity and strengthen its therapeutic value for treating obesity. The results of this study revealed that the gut microbiota mediates the mechanism of GABA-induced WAT beiging. Previous studies have proposed several mechanisms of fat beiging, such as the β-AR signaling pathway [40]. Although the activation of BAT mostly depends on β-AR signaling, it has been reported that the beiging of iWAT may not be related to β-AR signaling [8,14]. Recent studies have demonstrated that interactions between the host and gut microbiota can affect many aspects of energy metabolism [41]. The microbiome has also been shown to influence cold-induced fat formation and regulate metabolic diseases [42]. Interestingly, the mechanism by which GABA altered the composition of gut microbes in this study was similar to the mechanisms identified in previous studies, supporting a causal role of microbes in inducing WAT beiging [10,14]. The transplantation of fecal matter from GABA-treated HFD mice into microbiota-depleted HFD mice promoted the beiging of iWAT, indicating that GABA-induced WAT beiging requires the participation of the gut microbiota. However, we cannot rule out the possibility that certain metabolic changes in these mice may have been caused by the antibiotics themselves. This study still has some limitations. First, it cannot demonstrate that GABA exerts its weight loss effects through a single bacterial change. Second, the study has not identified which metabolite changes directly mediate GABA-induced beige adipocyte reconstruction; more studies are needed to further explore the regulatory relationship between metabolites and white adipocyte browning. In addition, these findings were obtained in animals and require further confirmation in clinical studies. In summary, this study showed that GABA promoted iWAT beiging via the gut microbiota; however, the molecular mechanism of the gut-fat axis remains unclear.
Further research on the transplantation of specific gut microbiota or metabolites in germ-free mouse models may provide insight into this aspect.
Conclusions
In summary, this study demonstrated that GABA reduced obesity by reducing WAT deposition and increasing iWAT beiging. Additionally, the variations in serum metabolism linked to the gut microbiota contributed to a better understanding of the potential mechanisms underlying the anti-obesity effects of GABA. Furthermore, our findings suggest that GABA may be a potential anti-obesity drug and that the gut microbiota may be a potential target for GABA therapy.
2023-01-22T06:16:09.274Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "2652916acad029dadde4dcdce5cc11e6b02c06c1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/15/2/456/pdf?version=1673776596", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1fdd1289e5e3a4db925aef35e8204b18d41d565a", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
237301690
pes2o/s2orc
v3-fos-license
Application of Computer Aided Design Software in Interior Design
Computer-aided design is a core course of the interior design major. Starting from the characteristics, goals, and existing problems of the computer-aided design course for interior design, this paper discusses the development trends and improved modes of the course, aiming at high-quality design outcomes and better student employment and reflecting the integration of engineering practice and learning. The design concept seeks to unify professional needs with industry standards, coordinate learning with employment, and align design content with job goals.
Introduction
At present, computer-aided design methods mainly produce detailed designs from a conceptual design. Industrial designers describe the appearance of products through drawings or models [1,2]. Traditional computer-aided design systems, such as Autodesk's AutoCAD, can assist designers in depicting product shapes [3,4], but conceptual design depends to a large extent on the designer's experience and inspiration. Existing computer-aided modeling tools can assist designers in constructing models; our research instead proposes an interactive design method to assist designers in better conceptual design [5,6]. This course selects its design content based on the core skill requirements of front-line design and management positions in the decorative design industry, such as interior decorators, interior architects, and rendering designers, and develops curriculum standards based on job standards, implementing the design process by simulating workplace processes. The design mainly adopts project simulation and real-project practice: project-simulation exercises drive the mastery of software knowledge, and the design work is organized around the advanced operating techniques and job tasks of the design industry.
Computer aided design software
In this research, an individual design plan is initially composed of a set of graphic elements, which are stored in a 3D model library. However, the software system should automatically consider factors that affect the structure of the design results; for example, it is necessary to assign a low-cost objective function to an incoherent object to ensure the reproduction of coherent objects. All the goals form the external environment in which the design plan evolves. We define constraints in a "control file", such as the size of an object and its primitives and the number of primitives contained in each object. After setting the evolution environment and initial restrictions, users only need to assign fitness to each new generation. However, some conditions of regeneration, such as the mutation probability and the type of crossover (the way parent genes are combined during regeneration), change during the evolution process. For example, as useful features evolve through the generations, reducing the mutation probability also reduces the chance of losing these features. This article focuses on new developments in the application of genetic algorithms in computer-aided design systems. We build a database of primitives, each of which is a 3D model. First, a number of graphic elements are combined to form individual design schemes; a minimal code sketch of this evolutionary loop follows, before the concrete walk-through.
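This sketch is an illustration, not the authors' system: "sign", "direction", and a color value appear in the text, while the remaining gene names and all numeric ranges are assumptions, and random scores stand in for the designer's interactive fitness assignment.

```python
import random

SIGNS = ["create", "add", "subtract"]   # how a primitive combines with others

def random_primitive():
    return {
        "shape": random.randint(0, 9),               # index into the 3D model library (assumed)
        "color": random.choice(["red", "blue", "yellow"]),
        "position": random.randint(0, 7),            # assumed placement slot
        "direction": random.randint(0, 3),           # orientation gene
        "sign": random.choice(SIGNS),
    }

def random_individual(n_primitives=3):
    ind = [random_primitive() for _ in range(n_primitives)]
    ind[0]["sign"] = "create"                        # the first primitive is always "create"
    return ind

def crossover(a, b):
    # single-point crossover at a primitive boundary; dicts are copied so
    # children do not share gene storage with their parents
    cut = random.randrange(1, len(a))
    return [dict(p) for p in a[:cut] + b[cut:]]

def mutate(ind, p_mut=0.1):
    for prim in ind:
        if random.random() < p_mut:
            prim.update(random_primitive())          # re-roll all genes of one primitive
    ind[0]["sign"] = "create"
    return ind

def next_generation(pop, fitness):
    # roulette-wheel (fitness-proportional) selection, then crossover + mutation
    parents = random.choices(pop, weights=fitness, k=2 * len(pop))
    return [mutate(crossover(parents[2 * i], parents[2 * i + 1]))
            for i in range(len(pop))]

population = [random_individual() for _ in range(10)]
for generation in range(3):
    # in the real system the designer scores each rendered scheme on screen;
    # random scores stand in for that interactive step here
    fitness = [random.uniform(0.1, 1.0) for _ in population]
    population = next_generation(population, fitness)
```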
The way the data is stored is the key to genetic manipulation. Each geometric primitive, together with its interaction with the surrounding primitives, is defined by a chromosome consisting of 5 genes occupying 9 gene positions in total (Table 1); one gene, for example, encodes color (red, blue, yellow, etc.). Several graphic elements together constitute an individual design plan, and the interaction of one graphic element with the others is controlled by the "sign" chromosome. A "create" primitive can be displayed but is not connected to other primitives; the first primitive is always a "create" primitive. An "add" primitive is combined with the primitives it can touch, and a "minus" primitive is removed from the primitive group. The design that results therefore depends to a large extent on the order in which the primitives are introduced. Each graphic element is represented by the five genes mentioned above, and its geometric shape is generated after decoding. In biology, the genotype interacts with the evolutionary environment to form the phenotype; in this study, the phenotype is the geometric shape of a primitive. Several graphic elements are then combined to form individual design schemes, and the same fitness function is applied to graphic elements and to individual design plans. Combining image elements means combining chromosomes: when an individual is displayed, the information is read according to the gene positions in the chromosome. This guarantees that observable genetic information appears in the offspring and makes it possible to analyze the tendency and direction of evolution. Figure 2 illustrates the method of combining image elements and chromosomes; the example can be combined into a wine glass because of the "direction" gene.

The genetic data of design-plan individuals and of graphic elements are stored in the same way. That is, as Figure 3 shows, if a chromosome with 9 gene positions represents one picture element and 3 picture elements form an individual, then a chromosome with 27 positions represents the individual. Under this method, the entire design scheme is the phenotype. This has obvious advantages: it makes the whole system conceptually more concise and allows direct comparison between parent and child. By changing the mutation probability, the crossover type, and other variables, we can change the amount of variation in the evolution process to meet particular needs.

Revise the design plan and rationally plan the curriculum

In the past, the course was offered in the second semester of the first year or the first semester of the second year, with 48 to 64 class periods.
It is very difficult to fully master the content and skills of the course in such a short time. Moreover, as students' ability to absorb knowledge improves, their understanding of and needs for computer-aided design change. We have therefore revised the design plan of the computer-aided design course: it now spans three semesters, with the material distributed by semester according to the progressive relationship of knowledge and skills. The new model is organized around three-stage pre-employment design and is implemented together with companies through design-process outsourcing and other forms of cooperative design.

Build a modular curriculum system and implement a project-based design plan

The computer-aided design course is rich in content and has many knowledge points. 3ds Max is mainly used to express three-dimensional scene effects, and it is integrated with AutoCAD, Photoshop, and other design software. The early stage of the course used to focus on software operation, with the teaching plan arranged according to the chapters of the textbook and design exercises drawn from its content, so students could not systematically master the design process. Based on job requirements and actual project operation, and integrating the knowledge, skill, and attitude requirements of the relevant professional qualification certificates, we now introduce a real design project into the curriculum and subdivide it into multiple sub-projects. Each sub-project is a small work flow; proceeding from shallow to deep, step by step, students master the rendering of indoor 3D scenes while completing tasks. The whole project links the theoretical knowledge and practical skills of 3ds Max, CAD, and Photoshop and highlights the training of students' professional competence in a vivid and interesting way. After mastering the complete work process, students' comprehensive competence is reinforced through reports, comments, and grading.

Development of "workbook"-style teaching materials

At present there is a dazzling array of teaching materials and reference books for computer-aided design, but it is difficult to find one that is fully suitable for design teaching, especially project-based teaching. In view of this, rooted in the development of the industry, we cooperate with well-known decoration design companies and refer to the latest international teaching materials to develop a batch of three-dimensional teaching materials that closely integrate actual production and reflect the latest technological achievements and real production processes. The textbooks have been used in practice with outstanding effect and have the following characteristics: (1) the theoretical part is concise and easy to understand, covering the knowledge required by the corresponding professional qualification standards; (2) the productive-training part is aligned with the skill training of the corresponding vocational qualification standards and is prepared with reference to the actual work process and typical project cases, with training steps, technical indicators, and assessment and evaluation standards; (3) the combination of graphics and text is professional, practical, and operable.
Such materials can be adopted by front-line enterprises as "work manuals" and can become industry technical standards or vocational skills training materials.

Adopt a variety of design methods and means

According to the training objectives of the course, and in order to cultivate applied talents that meet market needs, this course adopts a variety of teaching methods. Through project-based, task-driven, competition-based, and other methods, students' comprehensive knowledge, application skills, and professional qualities are effectively improved. Specifically: (1) A task-driven method built on embedded projects. The course has many knowledge points, and if students study them purely as a knowledge system they often cannot connect what they have learned into a complete design. We therefore carefully select cases extracted from actual enterprise projects that cover the knowledge points of the course and introduce them into teaching. Through the process of "propose tasks → analyze tasks → complete tasks → learn while doing → summarize", a teaching style of interactive innovation, individual development, application, and collaboration is realized. As the cases gradually deepen, students not only exercise their skills but also consolidate their theoretical knowledge without noticing it, truly applying what they have learned. (2) A competitive method that simulates professional positions. Competition-style activities are introduced into the project cases to promote students' overall improvement. Besides mastering the work process, operating methods, and design skills, the computer-aided design course must above all strengthen the quality and efficiency of students' drawing so as to meet the needs of the design industry in a market economy. A small competition is set up within each project so that students can sum up experience, exchange ideas, and innovate methods. The competitive method effectively taps students' potential for active learning, stimulates their enthusiasm for independent study, trains their acumen and agility, and cultivates a collective spirit and a sense of unity and cooperation. (3) Use of modern educational technology. Key and difficult points are demonstrated through multimedia courseware, video recordings, and other means to help students learn theory and master practical skills. These means provide students with high-speed, high-capacity information resources, increase the attractiveness and appeal of the course with dynamic scenes that combine sound, image, and animation and with powerful virtual design functions, and create an environment that fully mobilizes students' interest and activates creative thinking. (4) An interactive learning website that enriches design resources and improves teaching quality. The course runs on a Moodle interactive platform, which teachers use for classroom teaching, homework correction, and extracurricular Q&A guidance, and which students use outside class for individual study of course units, online questions, discussions, homework, and tests.
Through the network platform, synchronous or asynchronous interaction between teachers and students and among students can be carried out, which stimulates collaboration and participation in learning and helps learners solve problems in time. Practice has proved that these rich teaching methods and means stimulate students' interest and make teaching and learning easy and enjoyable. Teachers mainly play a guiding role; students learn from real cases and projects to understand the development of the industry, master first-line enterprise design skills, and move from passive to active learning. The harmonious interaction of teaching and learning creates a new classroom atmosphere.

Innovate the school-enterprise cooperative design organization form to meet the requirements of the combination of work and study

School-enterprise cooperation has been one of the most discussed topics in recent years. This major has actively promoted such cooperation and has signed agreements with many companies. When cooperation between a school and an enterprise reaches a certain depth, various problems tend to occur, and both parties look for new ways to cooperate. This course has found innovative organizational forms in the cooperative process. (1) Micro-order education, in which students strengthen specific professional skills through pre-employment in "school-based enterprises" or in companies. Nowadays the number of large enterprises in the decoration industry is gradually decreasing while the number of specialized, distinctive enterprises is increasing; jobs are more finely segmented, and the core professional skills they require differ more and more. The number of people needed for any one position is therefore small, and a micro-order training method oriented toward a specialization direction has emerged. Computer-aided design trains students to independently produce 3D scene renderings; mastering it provides the skills necessary for various design positions and can also lead directly to employment. The company selects several outstanding students for strengthened training through micro-orders, so that they master the distinctive professional skills required for specific positions and gain good prospects for promotion and career mobility. (2) Design-process outsourcing. Industry backbone enterprises or growth-oriented enterprises are introduced to invest in "school-based enterprises" on campus and establish studios, realizing design training and cooperative employment in the form of outsourced design processes. Students, enterprises, and the school achieve a win-win situation, establishing a long-term school-enterprise cooperation mechanism. In the outsourcing process, actual enterprise projects are introduced and students are organized into project teams that simulate the real work process. Under the guidance of the company's part-time teachers and the class teachers, team members collaborate to complete the project, which cultivates students' ability to cooperate and their comprehensive skills in analyzing and solving practical problems, strengthens their responsiveness to market demand, and improves their job skills and professional quality.
Sharing of human resources, improving the overall quality of the design team

Through enterprise teacher workstations and enterprise backbones serving as part-time teachers, we have realized the exchange of staff between positions, improving the quality of teachers while building a stable pool of high-quality part-time teachers. (1) Key teachers are selected as academic mentors, and industry technical experts are hired as corporate mentors. The mentors' rich experience helps students make academic plans and cultivate good study habits and methods, and the "pass on, help, and lead" approach is used to give students work experience and improve their professional skills. (2) Corporate mentors participate throughout professional planning, curriculum construction, teaching reform, and textbook revision, making professional development more forward-looking and curriculum construction closer to market needs. (3) Academic mentors and corporate mentors discuss teaching methods, study new technologies and new processes, and learn from each other so that both improve. (4) High-quality management and technical support are provided at each stage of students' learning, so that students master knowledge and skills in stages and at levels, striving to achieve the professional talent-training goal.

Strengthen design quality monitoring and break through traditional assessment methods

We set up a design supervision group, a practical design working group, a quality education working group, and other working bodies to evaluate and give feedback on teaching content and to monitor the whole design process; to guide young and part-time teachers on teaching methods; to put forward reasonable suggestions on the management of design work; and to monitor the production training, pre-employment, and internships at each stage, forming an evaluation mechanism that combines qualitative and quantitative assessment to improve teaching quality. We break through the traditional assessment scheme by emphasizing assessment of the learning process, letting corporate mentors participate in evaluation and scoring, and providing timely feedback. Students are encouraged to use their hands and brains and to achieve the desired results with creativity and tools, while their comprehensive ability is fully assessed.

Conclusion

Computer-aided design for the interior design major of higher vocational colleges is a course that needs continuous reform and innovation. To stay in line with the market and at the forefront, we must constantly adjust the design plan, update the content, strengthen school-enterprise cooperation, and coordinate the relationship with other design courses. We must continuously inspire students' thinking, cultivate their ability to research, analyze, and solve problems, enhance their competitiveness in the future interior design industry, and promote the improvement of the overall level of design.
Magnetic and X-ray absorption investigations of Co-doped ZnO films

We present an investigation of the magnetic and structural properties of Co-doped ZnO (ZCO) films grown by pulsed laser deposition at different dopant concentrations (cCo). X-ray diffraction patterns show that the films are single phase and exhibit ferromagnetism (FM) above room temperature (RT), with coercive fields up to 700 Oe. X-ray absorption fine-structure spectroscopy (XAFS) at the Co edge suggests that in films grown below 600 °C dopant clustering involves less than 10% of the Co atoms in the alloy, whereas in samples grown at higher temperature a larger fraction of Co atoms is involved in the formation of small metallic clusters. The experimental work has been accompanied by preliminary first-principles Density Functional Theory calculations.

Introduction

ZnO is expected to play an important role in many optoelectronic applications, such as ultraviolet (UV) lasers, light-emitting diodes, and thin-film transistors. ZnO is transparent in the visible region, and the recent discovery of RT FM, following Dietl's theoretical predictions 1, makes transparent ferromagnets possible. There are several reports of RT FM in both bulk materials and thin films. In Mn-doped ZnO, FM has been attributed to the coexistence of different oxidation states of the dopant ions (3+ and 4+) and the formation of the ZnxMn3-xO4 spinel 2 (0 <= x <= 1), as well as to an oxygen-vacancy-stabilized metastable phase 3, ZnxMn2-xO3-delta. On the other hand, recent papers have reported that in ZnO films doped with transition-metal ions RT ferromagnetism can be due to intrinsic point defects of the lattice 4 or to the carrier concentration 5 rather than to the magnetic dopants themselves. In this paper we investigated the effects of the growth parameters, such as substrate temperature (Ts), oxygen pressure (pO2), and cCo, on the structural and magnetic properties of c-axis-oriented ZCO thin films. A preliminary theoretical investigation of the structural and electronic properties of the oxygen vacancy and of the Co impurity in ZnO has also been performed, in order to set up the theoretical framework to be employed for a more quantitative analysis of the experimental data.

Experimental and theoretical methods

Targets with nominal compositions of 2 at.%, 4 at.% and 6 at.% Co were sintered and used to deposit ZCO thin films by PLD on Al2O3 (0001) substrates, using a Nd:YAG laser operating at 355 nm. Hysteresis cycles were measured at RT by means of a commercial vibrating sample magnetometer (ADE mod. 10 VSM). X-ray absorption fine-structure spectroscopy (XAFS) at the Co K-edge was performed on some of the samples at the Samba beamline of the SOLEIL synchrotron radiation facility (Saint-Aubin, France), using a double-crystal monochromator equipped with Si (111) crystals and fluorescence detection with a single-element silicon drift detector. First-principles calculations were performed with the VASP (Vienna Ab initio Simulation Package) code 6. Exchange and correlation effects are treated in the Generalized Gradient Approximation, in the Perdew-Wang formalism. The positions of defect and impurity electronic levels in the ZnO energy gap were estimated by calculating transition energy levels 7.

Results and discussion

The XRD patterns (Fig. 1) show mainly the presence of the wurtzite phase; all the spectra exhibit mainly the (002) peak, so the films are fully c-axis oriented. However, small quantities of Co-rich phases (CoO, ZnCoO3) appear in the spectra at high cCo values.
Magnetization (M) versus field (H) measurements showed a clear ferromagnetic behaviour in our samples, with saturation magnetization (Ms) values up to ~1 emu/g depending on Ts, pO2 and cCo. The optimal pO2 value (~2x10-5 mbar) is lower than the ones found for Mn-doped films 8, whereas the Ts value giving the highest Ms depends on the dopant concentration. For films with cCo = 4 at.%, the highest Ms value (~1 emu/g) is measured for the film deposited at Ts = 600 °C (pO2 = 2x10-5 mbar). The influence of cCo on the magnetic properties of the films is clearly visible in Fig. 2: on increasing cCo, Ms increases up to ~1 emu/g (cCo = 4 at.%) and then decreases.

In Fig. 3 we show the Fourier transform (FT) of the EXAFS spectra taken at the Co K-edge for two ZCO samples, with cCo values of 4 at.% and 6 at.%, compared to the spectrum recorded on a Co foil in transmission mode. To obtain the FTs, the raw absorption spectra were background subtracted, weighted by k2, and the [2.5-13.5 A-1] k-range was selected before transformation to R space. Phase-shift correction was applied to the data, hence the peak positions of the FTs roughly correspond to the interatomic distances between an "average" Co absorber and its neighbours. In the spectrum of the sample with cCo = 6 at.% (grown at lower temperature) we can clearly distinguish two main peaks, the first at about 1.9 A and the second at 3.12 A (red spectrum). These values are within 0.1 A of the Zn-O first-shell and Zn-Zn second-shell distances extracted for wurtzite ZnO by neutron diffraction using the Rietveld method 9, and close to the results of EXAFS experiments on similar samples 10,11. Considering the approximation made for the calculated phase shift in Fig. 3 and the very close atomic numbers and X-ray backscattering factors of Co and Zn, it follows that in this sample the Co impurities substitute for Zn atoms in the wurtzite structure. The FT of the Co foil spectrum (blue line) shows a dominant peak at about 2.45 A that corresponds to the Co-Co first-shell coordination in the metal. The intensity of this peak is weak in the spectrum of the ZCO sample with cCo = 6 at.%, suggesting that metallic Co clusters, if present, have a rather low concentration. A more quantitative analysis of these EXAFS data is in progress, and detailed results (including a precise determination of the bond lengths and of the substitutional and metallic fractions) will be published in a forthcoming paper; nevertheless, preliminary considerations based on Fig. 3 lead us to believe that the percentage of Co atoms involved in the formation of metallic clusters should be smaller than 10% of the total number of Co atoms in the sample. As for the sample grown at higher temperature with cCo = 4 at.%, its FT (black line) shows a main broad peak with a maximum at a distance close to that of the Co-Co first shell in the metal; however, this FT seems to arise from a mixed contribution of substitutional, metallic (around 50%) and, probably, other minority signals. This result indicates that changing the growth parameters, and in particular the growth temperature, can have large effects on the Co local structure. It should be noted that, since the Co clusters in the sample with cCo = 4 at.% do not give any signature in the XRD rocking curves, their size must be very small, possibly nanometric.
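The k2-weighted Fourier transform described above is a standard EXAFS reduction step. The following Python sketch shows one way to carry it out with numpy, assuming the extracted fine-structure signal chi(k) is already available on a uniform k-grid; the Hann window, grid spacing, normalization, and array names are illustrative assumptions, not the beamline's actual pipeline.

```python
import numpy as np

def exafs_ft(k, chi, kmin=2.5, kmax=13.5, r_points=512):
    """k^2-weighted Fourier transform of chi(k) into R space.

    k   : uniform wavenumber grid in 1/Angstrom
    chi : EXAFS fine-structure signal chi(k)
    """
    sel = (k >= kmin) & (k <= kmax)          # select the [2.5, 13.5] 1/A range
    k, chi = k[sel], chi[sel]
    w = np.hanning(k.size)                   # window to reduce truncation ripple
    dk = k[1] - k[0]
    r = np.linspace(0.0, 6.0, r_points)      # distances in Angstrom
    # chi(R) = integral of k^2 chi(k) w(k) exp(2 i k R) dk
    phase = np.exp(2j * np.outer(r, k))
    chi_r = (phase * (k**2 * chi * w)).sum(axis=1) * dk
    return r, np.abs(chi_r)

# usage with synthetic data: one shell at ~1.9 A gives a peak near that distance
k = np.linspace(0.5, 15.0, 600)
chi = 0.3 * np.sin(2 * k * 1.9) * np.exp(-2 * 0.005 * k**2) / k**2
r, ft = exafs_ft(k, chi)
print(r[np.argmax(ft)])   # peak position, roughly 1.9 A
```

In this toy signal no scattering phase shift is present, so the peak sits at the true distance; for real data the phase-shift correction mentioned above is what brings the FT peak positions into rough correspondence with the interatomic distances.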
In any case, EXAFS can detect the formation of nanoclusters thanks to its strong local and chemical sensitivity. Regarding the theoretical investigations, the standard DFT formalism fails to reproduce the electronic properties of ZnO, underestimating its band gap and underbinding the Zn d bands 7. The description of native defects (VO) and transition-metal impurities in ZnO is also inaccurate. Ad hoc correction schemes have been applied, such as Hubbard-U corrections for the Zn-3d and Zn-4s orbitals 7, hybrid DFT functionals, and self-interaction correction schemes 12. While these cure the ZnO band-gap problem, some uncertainty remains on the positions of the electronic levels induced by native defects and Co impurities. We have therefore started our investigation of Co:ZnO by applying the Hubbard-U correction scheme to different orbitals (the 3d orbitals of Zn and Co, and the O-2p orbitals), studying its effects on the ZnO lattice parameters and band gap and on the structural and electronic properties of VO and of the substitutional Co (CoZn) atom in ZnO. The results are summarized in Table I.

In conclusion, our results point out the role of the growth parameters (Ts and pO2) and of the dopant concentration in tailoring the magnetic properties of Co-doped ZnO thin films. Although the origin of the FM interaction is still to be clarified, we can rule out the double-exchange mechanism arising from the secondary phases as the main one responsible for the observed FM. In fact, the estimated contribution to the total Ms value due to Co clustering is about 60% for the 6 at.% Co-doped film; the remaining fraction of Ms (40%) is not due to secondary phases, because the XRD spectra gave no evidence of any RT ferromagnetic phase. Moreover, the secondary phases detected in the XRD spectra increase with cCo, so even if they could be responsible for the RT FM, the decrease of Ms for cCo > 4 at.% could not be accounted for. On the other hand, defects (oxygen and/or Zn vacancies in the ZnO lattice) should play a very important role. Their role has recently been discussed in connection with the RT ferromagnetism observed in undoped ZnO films 7,8 as well as in Co-doped ones in which the Co atoms exhibit paramagnetic behaviour 13. The role of defects and the mechanisms of magnetic coupling will be the object of future combined experimental and theoretical investigations.
Developing a System for Monitoring Human Resource Risks in a Digital Economy

Human resource (HR) risks are significant negative aspects of any organization. The main problem in the theory and practice of modern organizations is that there is no complex model and algorithm for managing HR risks. To define the essence of HR risks and the basic approaches to their management, the authors conducted a survey of employees concerning the HR sphere and used cluster and correlation-regression analysis to process its results. Relying on general scientific research methods, data from open sources, including a review of scientific papers by foreign and national researchers and practitioners, and the opinions of the survey respondents, the authors concluded that close work with personnel is needed to prevent conflicts in the working environment, increase work motivation, and involve the management team in regulating labor relationships. The scientific novelty of the study is that it considers the process of managing HR risks from a systemic perspective, with monitoring based on the conceptual model suggested in the study. The models developed by the authors can be used in practice for managing the HR risks faced by economic entities.

Introduction

Due to the growing dynamism and uncertainty of the socio-economic environment in which economic entities operate, the number of risks faced by organizations is increasing. These risks and their impact on the current state of an organization and its prospects should be studied in order to constrain negative consequences in operational activities and to prevent security threats in the future. Any risk should be considered in its multidimensional nature (Gorlenko and Mozhaeva 2017), which makes it necessary to study its relationships with all the processes in the management system of an organization, from the managers' mindset and threat acceptance to the analysis and assessment of the possibilities to minimize it.

Today, the spread of digital technologies is accompanied by more sophisticated managerial decision making caused by the virtualization of many business processes. Problems are aggravated because personnel have to adapt to new working conditions, and the generation gap in digital competencies is growing too (Ivanova and Pulyaeva 2022). All this requires constant monitoring of the HR management system for its sustainable development and improvement.

The processes of HR management and HR risk management are insufficiently integrated with each other in the modern management model of an organization. They are often considered separately because different departments (internal audit, economic security, etc.) each want to control negative situations in the work of personnel. This approach follows from the definition of HR risk, which focuses on the existence of danger and undesirable scenarios that have either a direct or an indirect impact on the sustainability of the organization's activities. Thus, HR risks are mainly understood as a loss or shortage of income, the probability of deviation from the planned result, a threat to safety, or, in general, decreasing labor efficiency.
The wish to control HR risks has led to various classifications that consider the many factors causing them. The conceptual basis of HR risk management consists of the objective of keeping HR risks within the limits set by the personnel management strategy; the problems that have to be solved to ensure this; the object, whose definition embraces the factors and sources of HR risks in the organization; the subject (the participants in joint activities); and the principles, functions, tools, and methods of management that can either prevent or compensate for losses (Mitrofanova 2003). An analysis of expert opinions in the field of personnel management and risks allows for the identification of the most common types of HR risks, including an imbalance of the age groups of the personnel, insufficient monitoring of threats that are vital for the organization, a lack of measures for identifying and suppressing undesirable actions of the personnel, the selection of insufficiently competent personnel, insufficient measures aimed at building employees' motivation and loyalty, conflicts of interest between employees and the employer, and the dismissal or resignation of key employees. In addition, the specifics of how HR risks occur and are managed were considered, depending on the scope of the organization's operations and the relevance of a risk-based approach for controlling the organization's HR processes. All of the above shaped this research work's purpose, objectives, and methods and the way the results were interpreted.

The purpose of the work is to determine an algorithm for managing HR risks based on monitoring their effects in the personnel management system of an organization, which is reflected in the conceptual model being developed. In order to achieve this purpose, the following objectives were fulfilled: 1. The HR risks of an organization were analyzed by generalizing scientific sources and by collecting relevant information from the participants of a sociological survey. 2. Significant correlations were identified in the respondents' answers to determine the significance and impact of HR risks on the security of the organization, based on cluster and correlation-regression analysis. 3. A conceptual model of HR risk management was developed.

The second section of this paper reviews the literature on HR risks. The 'Materials and methods' section presents the main methods used by the authors. The 'Results' section describes the results of the correlation and cluster analyses. The 'Discussion' section presents a conceptual model of HR risk management and an algorithm for this process on the basis of the results obtained. The final section presents the conclusions of the research.

Literature Review

When risk is interpreted as the probability of deviation from the planned result (Tapman 2002), this is only true with regard to the nature of the risks present in the management system. The concept of HR risk is multidimensional and requires that many related aspects be considered, as various studies confirm. In 2015, it was revealed that there was no theoretical and methodological approach to human capital management that would contribute to the development of the best methods for analyzing HR risks associated with the development of qualities that influence income generation (Balykhin et al. 2015).
According to experts, the main sources of HR risks faced by knowledge-intensive companies are external conditions, since such firms attract highly qualified personnel using the strategy of an open innovative market (Shpilina et al. 2019). It is important to highlight a study discussing the risk of key knowledge leaking from an organization due to the dismissal or resignation of highly qualified personnel (Jennex 2013).

Many scientists present research results that reflect the specifics of HR risk management given the industry affiliation of an organization. Thus, A. Tikhonov studied the experience of an organization operating in the Russian aviation industry. He developed his own approach to the classification of HR risks and identified four groups: HR structure risks caused by unsatisfactory staffing of the organization and/or turnover; risks from using the personnel (problems with discipline and labor productivity); risks of the personnel reserve (inefficient personnel development and training); and risks of dismissal or resignation, including the loss of the organization's reputation, the disclosure of commercial and confidential information, low workforce morale, the deterioration of the psychological climate, and court proceedings in the process of dismissal (Tikhonov 2020).

Scientists from Slovakia studied the impact of HR risks on the country's transportation sector (Masár and Hudáková 2020). They identified major HR risks such as the insufficient qualification of personnel, human errors, and the decreased involvement of personnel. These scientists then conducted a larger-scale study together with researchers from Poland. It was found that the longer an enterprise operates on the market, the less its managers and owners feel the impact of HR risks, whereas young enterprises suffer mostly from market, financial, and economic risks (Hudáková et al. 2021). In these studies, the main method was an online or offline survey of entrepreneurs or experts. The scientists from Slovakia used Google Forms for the survey, uploaded the responses to MS Excel, and statistically processed the results, including the Bartlett sphericity test and normality tests (the Anderson-Darling, Ryan-Joiner, and Kolmogorov-Smirnov criteria).

Scientists from Bogota determined that one of the sources of operational risk in the Colombian banking sector was fraud committed by personnel and problems in hiring procedures (Campo Elías and Miguel Alejandro 2021). A study of the banking sector in South Africa examined the attitude of staff to the privacy policy and found that representatives of generation Y were more positive toward these policies than generation X; generation X staff therefore needed targeted training to increase their awareness of the bank's privacy policy (Swartz et al. 2021).

In 2019, a methodology for assessing 30 types of risk in a trade organization was developed and presented (Tselyutina et al. 2019).
A total of 20 experts were involved in ranking the 30 types of HR risks identified according to this methodology. As a result, a two-stage methodology was developed for determining the probability of HR risks in the personnel management system of a trade organization. The main research method was an expert survey with cluster processing of the experts' responses, which highlighted the main trends in personnel risks by cluster group. At the second stage, an algorithm was developed for assessing the organization's personnel potential and the level of personnel risk using the Scenario Manager method.

HR risks in the process of training managerial personnel for educational organizations have also been studied. The sources of such risks include the founder of an educational organization, its managers, and the providers of advanced training programs (Guzairova et al. 2018). Another research work worth mentioning studied the HR risks in Russian public authorities (Polyanin et al. 2018). It resulted in a list of HR risks for the key elements of personnel management subsystems, mostly covering the assessment, formation, and development of civil servants' competences.

Scientists from the Czech Republic, Hungary, Poland, and Slovakia assessed the impact of corporate social responsibility (CSR) programs on the performance of small- and medium-sized businesses in the V4 countries. They found that entrepreneurs who carried out CSR programs perceived their personnel as their main capital, appreciated their employees' contribution, and were less afraid of HR risks and of mistakes that could be made by their staff (Rozsa et al. 2021). Authors from Russia established a link between CSR, working conditions, and risks to staff health, including the increased mortality of industrial personnel (Kozlova et al. 2016).

It should be noted that, according to the results of an online survey, representatives of small- and medium-sized businesses consider HR risks the most significant, although businessmen from the Czech Republic rate their staff as more qualified than their peers from Slovakia do. Businessmen from both countries stated that staff turnover was low (Kotaskova et al. 2020).

A special aspect in the study of HR risks is the falsification of financial statements and fraud. According to Russian authors, these should be assessed during an audit (Lishchuk and Zolotareva 2017). Some researchers consider HR risks as part of economic security and of the accounting and analytical functions of personnel (Illiashenko et al. 2020). Russian scientists also consider the HR risks that arise when the labor market is precarious, with unreliable forms of employment emerging and workers being deprived of their basic rights (Solovova et al. 2021).

The digital economy has brought about new HR risks: cybersecurity violations, data leakages, malware, collusion with fraudsters, false information, the forgery of personal data, including on social media, etc. (Cunningham et al. 2018). In this context, it is worth noting the study discussing the use of information platforms for reducing HR management risks (Veres et al. 2020). Authors from Russia studied the specifics of HR risks in the digital economy, identified the HR risks of a digital organization, and justified a model of key competencies that personnel must have in order to prevent them (Manakhova et al. 2020).
An important aspect in the study of HR risks is the definition of psychosocial risks: threats that can lead to negative psychological, physical, and social effects. These risks result from inefficient organizational structures and teamwork management (Paurova et al. 2020).

In summary, the scientific literature today contains a number of studies devoted to HR risks and the various aspects of their occurrence. Most of these studies used sociological surveys and statistical processing of the results, as well as monographic and socio-economic methods, and produced different typologies of HR risks, their specification, and approaches to assessment. However, no one has yet assessed how HR risks are perceived directly by employees. Different authors have described various aspects of HR risks and offered classifications, but there has been no research on how employees themselves perceive HR risks, even though this is important, because the occurrence of many HR risks depends on the personnel. In addition, no model or algorithm of HR risk management has been developed. These gaps in the scientific literature are addressed by this study. The conceptual model and algorithm of HR risk management developed by the authors will allow heads of organizations to implement HR risk management in practice in all subsystems of personnel management.

Materials and Methods

To resolve the problems posed in this study, the authors reviewed national and foreign sources, conducted a sociological survey (the questionnaire is presented in Appendix A), used statistical methods, carried out analyses and syntheses, and generalized the results. The survey was conducted in an online format using Google Forms between March and April 2022 on the territory of the Russian Federation. The reliability of the questionnaire was assessed by calculating Cronbach's alpha coefficient:

alpha = N * r / (1 + (N - 1) * r),

where alpha is Cronbach's coefficient, N is the number of questions in the questionnaire, and r is the average correlation coefficient between each question and the sum of the other questions (a minimal sketch of this computation is given after the description of the research tools). The calculation of Cronbach's alpha for the questionnaire used in this study gave a value of 0.6798, which indicates acceptable reliability.

Study Participants

A total of 237 respondents took part in the study; 68.8% of them were women and 31.2% were men. The average age of the respondents was over 33. A total of 59.5% of the survey participants worked in large organizations employing over 500 people. The respondents represent various sectors, including industrial production (8.5%), trade (7.8%), construction (7.8%), IT (11.3%), etc. Almost 70% of the respondents had been working in their organization for more than 1 year, and 23.4% of the participants held a managerial position.

Research Tools

Cluster analysis and correlation analysis were used. Data processing was carried out in the R language in the RStudio environment: the tidyverse package was used for data preparation and manipulation, the psych package for regression analysis, the cluster package for cluster analysis, and the ggplot2 package for plotting. The built-in MS Excel functions were used for assessing the correlation between the respondents' answers.
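The following Python sketch shows the standardized alpha computed from a response matrix, assuming respondents in rows and questionnaire items in columns; the data below are fabricated placeholders for illustration only, not the survey data.

```python
import numpy as np

def cronbach_alpha(responses):
    """Standardized Cronbach's alpha from an (n_respondents, n_items) array."""
    corr = np.corrcoef(responses, rowvar=False)   # item-by-item correlations
    n_items = corr.shape[0]
    # mean of the off-diagonal (inter-item) correlations
    off_diag = corr[~np.eye(n_items, dtype=bool)]
    r_bar = off_diag.mean()
    return n_items * r_bar / (1 + (n_items - 1) * r_bar)

rng = np.random.default_rng(0)
latent = rng.normal(size=(237, 1))                      # shared trait
items = latent + rng.normal(scale=3.0, size=(237, 20))  # 20 noisy items
print(round(cronbach_alpha(items), 4))                  # roughly 0.7
```

Values around 0.7, like the 0.6798 reported above, are conventionally read as acceptable internal consistency for a survey instrument.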
Correlation and cluster analyses were used to process the respondents' answers: correlation analysis to identify the closeness of the relationship between the answers and the correlations in the respondents' choices, and cluster analysis to identify homogeneous groups of respondents and analyze their answers in detail. The Pearson coefficient was used in the correlation analysis; the Gower distance, the k-means method, and the silhouette method were used in the cluster analysis. These tools made it possible to identify patterns in the respondents' answers about personnel risks and the requirements for the system of their management. Figure 1 shows the algorithm of this study.

As a basis for clustering, the study used a dissimilarity matrix, which, in mathematical terms, describes how different (distant) the points in the dataset are from each other. It allows for the further grouping of the points (survey questionnaires) that are closest to each other, or the separation of the most distant ones, which is the main idea of clustering. In this study the Gower distance was used, based on the sum of the squared differences of the corresponding coordinates (Gower 1983). The matrix was constructed using the daisy tool from the cluster package. The result of this procedure is a description of how similar the answers in different questionnaires are to each other, which is a prerequisite for the identification of clusters.

The next procedure is clustering itself, which divides the entire sample into k clusters based on the dissimilarity matrix according to a certain criterion. The k-means method was chosen; in R it can be implemented by Diana from the cluster package. The estimation of the number of clusters k, in other words the search for the optimal number of homogeneous groups of interviewees, was carried out by silhouette estimation: the silhouette graph, used as a measure of data consistency, shows how close each point within one cluster is to the points in the neighboring clusters, and sharp inflections of the graph show the optimal number of partitions. As a result, 5-7 homogeneous clusters were identified for each of the three groups of questions from the questionnaire.

Among the parameters for assessing the quality of clustering is the average silhouette width s_i, whose value can be described as a measure of the degree to which object i belongs to its cluster. It is calculated as

s_i = (d_i_nearest - d_i_within) / max(d_i_within, d_i_nearest),   (2)

where d_i_within is the average distance from object i to the other objects in the same cluster and d_i_nearest is the average distance to the nearest other cluster. The index ranges from -1 to 1; it is assumed that if s_i < 0.25, the objects are grouped "loosely", i.e., the clusters are heterogeneous and the clustering is poor. The cluster analysis algorithm is shown in Figure 2.
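The authors work in R (daisy and Diana from the cluster package); as a language-neutral illustration of the same pipeline, the following Python sketch builds a Gower-style dissimilarity matrix for mixed data, clusters on it, and scores each partition with the silhouette index. The toy data, the hierarchical (rather than k-means) clustering step, and all names are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import silhouette_score

def gower_matrix(num, cat):
    """Gower dissimilarity for mixed data: range-scaled numeric part,
    simple matching for the categorical part, averaged over all features."""
    rng = num.max(axis=0) - num.min(axis=0)
    d_num = np.abs(num[:, None, :] - num[None, :, :]) / np.where(rng == 0, 1, rng)
    d_cat = (cat[:, None, :] != cat[None, :, :]).astype(float)
    return np.concatenate([d_num, d_cat], axis=2).mean(axis=2)

# toy questionnaire: age (numeric) plus two categorical answers, 137 rows
rng_ = np.random.default_rng(1)
age = rng_.integers(20, 65, size=(137, 1)).astype(float)
answers = rng_.integers(0, 4, size=(137, 2))
D = gower_matrix(age, answers)

# cluster on the precomputed dissimilarity matrix and pick k by silhouette
condensed = squareform(D, checks=False)
tree = linkage(condensed, method="average")
for k in range(2, 8):
    labels = fcluster(tree, t=k, criterion="maxclust")
    print(k, round(silhouette_score(D, labels, metric="precomputed"), 3))
```

Sharp peaks or drops across k play the role of the inflections the authors read off the silhouette plot; in their data this pointed to 5-7 clusters per question group.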
In general, most of the respondents (58.2%) understood HR risk as decreased labor efficiency. At the same time, the correlation analysis did not show any close relationship between the personal characteristics of the survey participants (Questions 1-6) and their understanding of HR risk (Question 7) (Figure 3). As for the stages of work with personnel at which HR risks are most likely, the respondents were not unanimous: 38.3% pointed to direct work with personnel, 31.9% to the selection and assessment of candidates, 22.7% to the dismissal or resignation of employees, and 7.1% to hiring (the conclusion of labor relations). The correlation analysis between the answers to this question and the personal characteristics of the respondents likewise showed no close relationship. In addition, the correlation between the answers to the question about the concept of HR risk and the stages of personnel management at which it arises was assessed; no strong connection was established here either (r = 0.1).

Cluster Analysis

The clustering process consisted of the three steps described above: construction of the Gower dissimilarity matrix, clustering by the k-means method, and estimation of the number of clusters k by the silhouette method. The graph in Figure 4 shows the distribution of the silhouette width of the clusters (the consistency of the data within the clusters); its sharp inflections indicate the optimal number of partitions.

A total of 137 completed questionnaires that contained no unanswered questions were used for clustering. Clustering was carried out for three compositions of key questions of the questionnaire: I (2, 7, 11), II (4, 10, 12), and III (3, 6, 19). Table 1 presents the characteristics of the clustering. As can be seen in Table 1, the Gower distance (row 2 of the table) varies from 17.65 to 35.94; the smaller the Gower distance, the better, and in this case the values are relatively small. The average silhouette width s_i must be greater than 0.25, and, as Table 1 shows, this condition is satisfied everywhere. Thus, the clustering was performed with good quality.

Figure 5 shows a heat map of clustering by cluster group I according to the following questions: the respondent's age (2), the definition of HR risk (7), and the type of personnel management functional subsystem in which, according to the participant, risks mainly arise (11).
Distribution of Characteristics by Clusters

The clustering in Figure 5 illustrates that the division by age is predominant among the participants: clusters 3, 4, 5, 6, and 7 represent five age groups. The third cluster is the largest and includes participants under the age of 25 (70 persons). When asked about the functional subsystems in which HR risks mainly arise, they showed diverse opinions and did not choose one leading factor. Opinions on the type of functional subsystem in which HR risks mainly occur differed, but in most cases the survey participants understood HR risk as decreased labor efficiency. In cluster 7 the attitude of respondents aged over 60 was more consolidated: when asked what HR risk was, all 14 respondents in this age category said that it was decreased labor efficiency, and the intensity of this factor was beyond doubt. The interrelations between the objects of this set are shown most clearly in Figure 6.

Figure 7 shows a heat map of the distribution of characteristics by the clusters of group II, which covers the following questions of the study: the organization's headcount (4), the types of HR risks posing the greatest threat to the effective operation of the organization (10), and the form in which HR risks manifest themselves in the organization (12). The clustering in Figure 7 shows that the leading indicator is the size of the organization as indicated by its headcount. The third cluster consists of workers of large organizations employing 1000 to 5000 people; in their opinion, the greatest threat to the effective operation of the organization is conflicts of interest between employees and employers. For the representatives of the fifth cluster, who worked in medium-sized organizations, the most pronounced form of HR risk manifestation was the violation of labor discipline and the code of ethics. The interrelations between the objects of this set are shown most clearly in Figure 8.

Figure 9 shows a heat map of the distribution of characteristics by the clusters of group III, which refers to the following questions of the study: the industry of the organization (3), the participant's status (position) (6), and the properties essential for the methodology of managing HR risks in the organization (19). Here the leading indicator was the position held in the organization. Clusters 4 and 5 contained respondents holding managerial positions; by number this was the smallest group of respondents, working mainly in the field of education. In their opinion, the main properties of the HR risk management methodology are flexibility and low labor costs of implementation. For cluster 3, the largest group of respondents, the central feature of the HR risk management methodology was also the low labor cost of implementation. The interrelations between the objects of this set are shown most clearly in Figure 10.
The respondents believed that HR risks cause maximum damage to the financial and personnel qualification spheres of the activity of economic entities. The greatest threats to the efficiency of the organization were the recruitment of staff with insufficient competences and conflicts of interest between employees and the employer. According to the survey participants, HR risks mainly occur in labor motivation management (first place), the provision of decent working conditions (second place), and labor relations management (third place). HR risks in the organizations where the respondents work take one of the following forms: insufficient professional competence of employees; high personnel turnover; the personnel's resistance to innovation; destructive conflict and stress in the organizational environment; an organizational structure that is neither effective nor optimal. At the same time, 26.1% of the respondents believed that there were no significant HR risks in their organization. When asked whether they had encountered employees' mistakes that entailed serious negative effects for the organization (legal costs, financial and/or material losses, missed deadlines and broken agreements, etc.), the respondents replied "rarely" (those holding a managerial position) or "very rarely" (those not holding one). Only 16.0% of the survey participants had sometimes encountered situations in which the organization incurred significant costs upon terminating the labor contract with a key employee; the rest said they had encountered such a situation rarely, very rarely, or never. Those who faced it rarely now work as executives in five cases out of six.

It should be noted that the survey participants agreed that the key role in various HR risks is played by remuneration: low pay is a common reason for staff turnover (70.7%), and an imperfect remuneration system is a cause of workforce conflicts (54%). Thus, the processed answers revealed a heterogeneous perception of HR risks among employees; however, the survey participants were unanimous in determining the areas and forms of HR risks and the factors causing them.

Discussion

The data obtained by generalizing the experts' opinions and the survey carried out among respondents from various organizations prove that, in order to assess risks and make good management decisions aimed at eliminating the effect of these risks on the security of the organization, an algorithm has to be developed that sets out the sequence of systematic actions and measures.
Figure 11 shows a conceptual model of an HR risk monitoring system. The existing personnel management system of an organization is chosen as the object of management; its content and processes act as the internal environment of the system. The external environment comprises elements such as authorities of various levels and the organization's owners and founders, whose activities shape the legal, economic, informational, and administrative mechanisms that determine how the organization's HR processes are regulated and establish the norms and standards they must comply with. The HR risk monitoring system includes indicators of the state of the personnel management system and a number of factors that have to be considered in the assessment: socio-cultural, legal, and economic factors and the information environment of the organization. The monitoring system can be used to verify compliance with security standards and to prevent security threats, which is accomplished by diagnosing the current state of the personnel management system.

Table 2 shows the stages of HR risk management. Their practical significance stems from the wish to create a transparent and flexible methodology for HR risk management. It is especially important to ensure the continuous monitoring of the personnel situation and the possibility of integrating such monitoring into the organization's overall corporate security system, given the requirements of international standards. The employees who took part in the survey also highlighted flexibility and comprehensibility (78% of respondents) as important characteristics of an HR risk monitoring and management system. The survey revealed the riskiest areas in personnel management: motivation, the stimulation and remuneration of workers, the provision of normal working conditions, and the management of labor relations. The respondents most often named the following manifestations of personnel risks: an insufficient level of professional qualification of the workers; high personnel turnover; the personnel's unreceptiveness to innovations; destructive conflicts and the stress load of the organizational environment; an organizational structure that is neither effective nor optimal. The following managerial conclusions can be drawn from these results: it is important to provide suitable working conditions for employees, including an optimal organizational structure, safe working conditions, and the localization and resolution of conflicts; to maintain a high level of motivation for effective work and the improvement of qualifications; and to create conditions for building long-term relations with the personnel (a personnel reserve, increased loyalty, etc.). All this is reflected in the presented algorithm of personnel risk management. The distinctive feature of the developed model and algorithm is the systemic approach to HR risk management, which is manifested in each of the seven stages of the algorithm (Table 2). In contrast to the results of earlier studies, the proposed model allows for managing HR risks at all stages of personnel management.
Conclusions

This study demonstrates that HR risk management calls for comprehensive work in the HR sphere. HR risks depend on the specifics of the organization, the sector in which it operates, its size, maturity, and strategy, and the degree of uncertainty of the internal and external environment. By developing stages and measures for eliminating security threats to the organization, one can assess the effect of risk and form an effective management mechanism that supports the stability and harmonious development of the organization. An important scientific contribution of the authors' work is the developed conceptual model of HR risk management and the algorithm of its implementation in the context of the subsystems (elements) of personnel management. Thus, based on the original research and a compilation of various approaches to personnel management and HR risk management, the authors developed a conceptual system for monitoring HR risks in an organization in order to minimize them and prevent negative effects.

The authors conducted a sociological survey of employees, including HR specialists, which allowed for the identification of high-risk areas of personnel management and the requirements that an HR risk management system should meet. The uniqueness of this research lies in the results of the survey, the clustering of groups of respondents, and the identification of typical responses. On this basis, the authors' methodology of personnel risk management was built, taking into account the opinions of the respondents, who are professionals from different organizations.

A limitation of this study is the relatively small number of respondents, representing only one country. In the future, it is recommended to continue this research by expanding the number of respondents and attracting respondents from different regions and countries. This will highlight the regional and national peculiarities of HR risks and allow specific recommendations for their minimization.

Figure legends

Figure 1. General algorithm of the research.
Figure 3. Correlogram of the respondents' answers to the questionnaire. Note: Xi is the number of the question; the cell value is the Pearson correlation coefficient.
Figure 4. Visualization of the silhouette method applied to sample II. The optimal number of clusters is 5.
Figure 5. Heat map of the distribution of characteristics by the clusters of group I (2, 7, 11).
Figure 9. Heat map of the distribution of characteristics by the clusters of group III (3, 6, 19).
Figure 11. Conceptual model of the HR risk monitoring system.

Questionnaire (fragment)

16. In your opinion, what causes violations of labor discipline? (choose no more than 3 answers)
(a) lack of a clear understanding of the goals of the organization by employees
(b) barriers in communications between front-line employees and/or the management team
(c) unstable work rates
(d) abuse of authority by the heads of departments, inconsistency of their requirements, etc.
(e) unjustified disciplinary rules and restrictions

17. In your opinion, what causes conflict situations among the workforce? (choose no more than 3 answers)

18. In your opinion, staff turnover is caused by: (choose no more than 3 answers)
(a) low pay grade
(b) lack of respectful business relationships with the management team
(c) unsatisfactory labor conditions
(d) unfavorable socio-psychological climate in the team
(e) high intensity of work

19. Which properties should the HR risk management methodology of the organization have? (any number of answers can be chosen)
(a) comprehensibility for the management team and employees
(b) low labor costs of implementation
(c) flexibility for adapting the methodology to changes in the organization
(d) ability to continuously monitor the HR situation
(e) ability to be integrated into the general corporate security system
(f) meeting the requirements of international standards

20. In your opinion, which actions should be taken to minimize the risks of negative situations in the work with the personnel? (any number of answers can be chosen)
(b) stricter control over the activities of the personnel
(c) informing employees about their liability
(d) forming a signaling system (informing about a negative situation in the organization)
(e) "exemplary dismissals"
(f) other ______________

Table 1. Characteristics of clustering.
Table 2. Stages and measures for HR risk management (surviving fragment of stage 7): 7.1 Evaluate the efficiency of HR risk management and profitability. 7.2 Audit the personnel and HR risks. 7.3 Continuously improve the system.

Have you ever encountered, in your professional activity, employees' mistakes that entailed serious negative effects for the organization (legal costs, financial and/or material losses, expired deadlines and broken agreements, etc.)?
Have you ever encountered, in your professional experience, a situation when the dismissal or resignation of a key employee brought your organization substantial losses?
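The cluster analysis summarized above (Figures 4, 5, and 9) selects the number of respondent clusters with the silhouette method. The following is a minimal illustrative Python sketch of that selection step, not the authors' actual pipeline: the matrix X of numerically coded questionnaire answers is a hypothetical stand-in for their data.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    # Hypothetical stand-in for the coded questionnaire answers:
    # rows = respondents, columns = numerically coded answers.
    X = rng.integers(0, 5, size=(120, 20)).astype(float)

    def best_k_by_silhouette(X, k_range=range(2, 9)):
        """Return the cluster count with the highest mean silhouette score."""
        scores = {}
        for k in k_range:
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
            scores[k] = silhouette_score(X, labels)
        return max(scores, key=scores.get), scores

    k_opt, scores = best_k_by_silhouette(X)
    print(k_opt, scores)

On real survey data the k with the highest mean silhouette score would play the role of the "optimal number of clusters" reported in Figure 4.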
2023-04-29T15:12:04.942Z
2023-04-27T00:00:00.000
{ "year": 2023, "sha1": "9fd9a55459f8f1466bd5f80aa8408aedf1b86d05", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9091/11/5/82/pdf?version=1682566395", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "2bdbcabf3c0ea8d3a0e9e39d1264914672655f7d", "s2fieldsofstudy": [ "Business", "Computer Science" ], "extfieldsofstudy": [] }
62817497
pes2o/s2orc
v3-fos-license
Scattering data computation for the Zakharov-Shabat system

A numerical method to solve the direct scattering problem for the Zakharov-Shabat system associated to the initial value problem for the nonlinear Schrödinger equation is proposed. The method involves the numerical solution of Volterra integral systems with structured kernels and the identification of coefficients and parameters appearing in monomial-exponential sums. Numerical experiments confirm the effectiveness of the proposed technique.

Introduction

The problem we are addressing concerns the numerical computation of the scattering data of the Zakharov-Shabat (ZS) system associated to the initial value problem (IVP) for the nonlinear Schrödinger (NLS) equation

(1.1)  $iu_t + u_{xx} \pm 2|u|^2 u = 0, \quad x \in \mathbb{R},\ t > 0,$
       $u(x,0) = u_0(x), \quad x \in \mathbb{R},$

where $i$ denotes the imaginary unit, $u = u(x,t)$ is the unknown potential, the subscripts $x$ and $t$ designate partial derivatives with respect to position and time, $u_0 \in L^1(\mathbb{R})$ is the initial potential, and the $\pm$ sign depends on symmetry properties of $u$. The plus sign regards the focusing case and the minus sign the defocusing case.

The solution of the IVP (1.1) can theoretically be obtained by means of the so-called Inverse Scattering Transform (IST) technique [1,3]. The IST allows one, in fact, to obtain the solution of (1.1) by means of the following three steps: (i) starting from the initial potential $u_0$, solve the Zakharov-Shabat (ZS) system associated to the NLS to obtain the initial scattering data; (ii) propagate the initial scattering data in time; (iii) solve the associated Marchenko equations, whose kernels are obtained from the initial scattering data evolved in time, to obtain the solution $u(x,t)$ we are looking for. An effective numerical method to solve steps (ii) and (iii) has been proposed in [4] under the hypothesis that the initial scattering data are known. In this paper we propose a numerical method to solve the direct scattering problem (i), which is also of independent interest in some engineering fields [14]. To the best of our knowledge, our method is the first numerical method proposed for the computation of all scattering data.

The paper is organized as follows. In Section 2 we recall the ZS system associated to the IVP for the NLS equation. Then we recall the definition of the initial scattering data, i.e. the transmission coefficient $T(\lambda)$, the reflection coefficients from the left $L(\lambda)$ and from the right $R(\lambda)$, the bound states $\{\lambda_j\}$ with their multiplicities $\{m_j\}$, and the norming constants from the left $\{(\Gamma_\ell)_{js}\}$ and from the right $\{(\Gamma_r)_{js}\}$. After that, we introduce the initial Marchenko kernels from the left $\Omega_\ell(\alpha)$ and from the right $\Omega_r(\alpha)$, the inverse Fourier transform $\rho(\alpha)$ of $R(\lambda)$, and the Fourier transform $\ell(\alpha)$ of $L(\lambda)$. Then, we show that the spectral sums from the left $S_\ell(\alpha)$ and from the right $S_r(\alpha)$, which depend on the bound states with the respective multiplicities and norming constants from the left and from the right, can be expressed as the difference between the initial Marchenko kernels and the inverse Fourier transform $\rho(\alpha)$ of $R(\lambda)$ and the Fourier transform $\ell(\alpha)$ of $L(\lambda)$, respectively. As these differences are monomial-exponential sums, their parameters and coefficients can be identified by using the numerical method proposed in [6,8]. Section 3 is devoted to the characterization of the auxiliary functions which are basic to the computation of the initial Marchenko kernels as well as of $\rho(\alpha)$ and $\ell(\alpha)$.
In Section 4 we characterize the scattering matrix, derive and analyze the Volterra integral equations of the second kind that characterize the initial Marchenko kernels, and formulate the Fredholm integral equations that characterize the Fourier transforms $\rho(\alpha)$ and $\ell(\alpha)$. The numerical method we propose to obtain the initial scattering data is illustrated in Section 5, while in Section 6 we consider two different initial potentials for which the numerical results are given in Section 7. Finally, we conclude the paper with an Appendix concerning the study of the supports of the auxiliary functions introduced in Section 3.

Initial scattering data

Following the IST technique, to determine the initial scattering data we must consider the ZS system (2.1) associated to the NLS (1.1) [2], where $\lambda \in \mathbb{C}$ is a spectral parameter and with $v_0 = u_0^*$ in the focusing case and $v_0 = -u_0^*$ in the defocusing case. Here and in the sequel the asterisk denotes the complex conjugate. The initial scattering data are the entries of the so-called scattering matrix and the coefficients and parameters of two spectral sums. Denoting the scattering matrix by

$S(\lambda) = \begin{pmatrix} T(\lambda) & L(\lambda) \\ R(\lambda) & T(\lambda) \end{pmatrix},$

$T(\lambda)$ represents the (initial) transmission coefficient, while $L(\lambda)$ and $R(\lambda)$ stand for the initial reflection coefficients from the left and from the right, respectively. This matrix satisfies symmetry properties [13], one in the defocusing case and one in the focusing case, where $I$ denotes the identity matrix. Here and in the sequel the dagger denotes the matrix conjugate transpose. The numerical validity of these properties is used in Section 7 to check the effectiveness of our algorithms. If $T(\lambda)$ has no poles in the complex upper half plane $\mathbb{C}^+$, there are no spectral sums to identify. Otherwise, denoting by $\lambda_1, \ldots, \lambda_n$ the so-called bound states, that is, the finitely many poles of $T(\lambda)$ in $\mathbb{C}^+$, and by $m_1, \ldots, m_n$ the corresponding multiplicities, we have to identify the parameters $\{n, m_j, \lambda_j\}$ as well as the coefficients $\{(\Gamma_\ell)_{js}, (\Gamma_r)_{js}\}$ of the initial spectral sums (2.5) and (2.6) from the left and from the right. In (2.5) and (2.6) the coefficients $(\Gamma_\ell)_{js}$ and $(\Gamma_r)_{js}$ are the so-called norming constants from the left and from the right, respectively, and $0! = 1$.

In the IST technique, a crucial role is played by the initial Marchenko kernels from the left $\Omega_\ell(\alpha)$ and from the right $\Omega_r(\alpha)$, which are connected to the above spectral coefficients and spectral sums; $\rho(\alpha)$ is the inverse Fourier transform of the reflection coefficient from the right $R(\lambda)$ and, apart from the factor $1/2\pi$, $\ell(\alpha)$ is the Fourier transform of the reflection coefficient from the left $L(\lambda)$.

Auxiliary functions

In this section we introduce four pairs of auxiliary functions and the Volterra integral equations that characterize them. Their solution, as shown in the next section (see also [7,13]), is fundamental for computing the initial Marchenko kernels as well as $\rho(\alpha)$ and $\ell(\alpha)$. Following [7], let us introduce, for $y \geq x$, the two pairs of unknown auxiliary functions $(\bar{K}^{up}, \bar{K}^{dn})$ and $(K^{up}, K^{dn})$ and, for $y \leq x$, the two pairs of unknown auxiliary functions $(\bar{M}^{up}, \bar{M}^{dn})$ and $(M^{up}, M^{dn})$. For the sake of clarity, let us explain how these functions are connected to the Jost matrices associated to the ZS system (2.1).
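Since the display forms of (2.1) and of the spectral sums (2.5)-(2.6) are not shown above, the following is a hedged reconstruction consistent with the surrounding definitions; the precise sign conventions are assumptions, not the authors' stated ones. The ZS system (2.1) is, in a standard AKNS form,

$\dfrac{\partial v_1}{\partial x}(x,\lambda) = -i\lambda\, v_1(x,\lambda) + u_0(x)\, v_2(x,\lambda),$
$\dfrac{\partial v_2}{\partial x}(x,\lambda) = v_0(x)\, v_1(x,\lambda) + i\lambda\, v_2(x,\lambda),$

with $v_0 = u_0^*$ (focusing) or $v_0 = -u_0^*$ (defocusing), as stated above. The spectral sums (2.5)-(2.6) are the monomial-exponential sums

$S_\ell(\alpha) = \sum_{j=1}^{n} \sum_{s=0}^{m_j-1} (\Gamma_\ell)_{js}\, \dfrac{\alpha^s}{s!}\, e^{i\lambda_j \alpha}, \qquad S_r(\alpha) = \sum_{j=1}^{n} \sum_{s=0}^{m_j-1} (\Gamma_r)_{js}\, \dfrac{\alpha^s}{s!}\, e^{-i\lambda_j \alpha},$

which is consistent with the convention $0! = 1$ mentioned above and with $z_j = e^{i\lambda_j}$ used in Section 5.4; the sign in the exponent of $S_r$ is an assumption. Per the Introduction, the kernels then split as $\Omega_\ell(\alpha) = \rho(\alpha) + S_\ell(\alpha)$ and $\Omega_r(\alpha) = \ell(\alpha) + S_r(\alpha)$.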
As in [2,13], we represent the Jost matrices as the Fourier transforms of the auxiliary functions: from which inverting the Fourier transforms we get Now, for y ≥ x, the pair (K up ,K dn ) is the solution of the following system of two structured Volterra integral equations [7,13]: while the pair (K up , K dn ) is the solution of the system Similarly, for y ≤ x the pair (M up ,M dn ) is the solution of the system of two structured Volterra equations: and the pair (M up , M dn ) is the solution of the following system From the computational point of view, on the bisector y = x, it is important to note that each auxiliary function is uniquely determined by the initial solution or its partial integral energy. In fact, setting y = x in each of the four Volterra systems, we immediately obtain: Moreover, let us mention that the functionsK and K, as well as the functions M and M, are related to each other. Indeed, in the focusing case the following symmetry properties hold true [13] (3.13) while in the defocusing case the following symmetry relations can be proved (3.14) Similarly, for y ≤ x ≤ 0, the Marchenko kernel Ω r is connected to the auxiliary functions M dn andM dn in this way: As a result, assuming known the auxiliary functions, (4.1) and (4.2) can be interpreted as structured Volterra integral equations having the initial Marchenko kernels Ω ℓ and Ω r as their unknowns. It is important to note that, from the computational point of view, each Marchenko kernel can be treated as a function of only one variable, as we only have to deal with the sum of the two variables. 4.2. The scattering matrix and the Fourier transforms of the reflection coefficients. Let us begin by recalling that, as proposed in [13], the coefficients of the scattering matrix S(λ) can be represented as follows: where the {a ℓj (λ)} and the {a rj (λ)} denote the entries of the transition matrices from the left and from the right, respectively. More precisely, While the approximation of T simply requires the computation of a ℓ4 (λ) and a r1 (λ), that of ρ and ℓ is more complicated. In fact, to approximate ρ(α) and ℓ(α) we first have to compute the scattering coefficients by means of (4.6)-(4.9), then the reflection coefficients R(λ) and L(λ) by using (4.5) and (4.4) and, finally, ρ(α) and ℓ(α) by resorting to the inverse and direct Fourier transforms as indicated in (2.9) and (2.10). The stability of this numerical procedure essentially depends on the decay of R(λ) and L(λ) for λ → ±∞ since the smoother the initial potential the faster their decay. If the initial potential has jump discontinuities then R and L decay as λ −1 for λ → ∞ while if u 0 ∈ C ∞ (R) then R and L decay superpolynomially. Hence, this procedure is effective whenever the initial potential is smooth enough, that is at least u 0 ∈ C(R). If this is not the case the Fourier transforms ρ(α) and ℓ(α) could be approximated by solving structured Fredholm integral equations stated in the following theorems. The development of an effective algorithm for solving these equations is devoted to a subsequent paper. Proof. Let us first note that from (4.5) where a ℓ4 and a ℓ3 are defined in (4.6). Introducing the Heaviside function H(z) = 1 for z ≥ 0 and H(z) = 0 for z < 0, taking into account that and using (4.14) we can write Hence, applying the inverse Fourier transform and the convolution theorem, we have and then the equation (4.12) is an immediate consequence of the convolution definition and the Heaviside function. 
Equation (4.13) can be obtained similarly, noting that R(λ) satisfies the relation and that We note that, from the numerical point of view, it is irrelevant if we solve (4.12) rather than (4.13), since both are Fredholm integral equations of the second kind, equally structured. Applying the same technique we obtain the analogous where Ψ dn is defined in (4.10) and Φ up and Φ dn are given in (4.7)-(4.8). We omit the proof, as it is analogous to the previous one, after noting that The numerical method Let us now assume, for computational simplicity, that the support of the initial solution is bounded, that is which can be considered acceptable whenever u 0 (x) → 0 for |x| → ∞, provided that L is taken large enough. This hypothesis, as in part already proved in [7], allows us to greatly simplify the algorithms for the computation of the auxiliary functions and also those for the computation of the Marchenko kernels and the Fourier transforms of the reflection coefficients. The method we propose provides successively the numerical solution of: (1) the four systems (3.5)-(3.8) of Volterra integral equations for the computation of the four pairs of auxiliary functions; (2) the two Volterra integral equations (4.1)-(4.2) for the computation of the Marchenko kernels from the left and from the right Ω ℓ and Ω r , respectively; (3) the transition matrices from the left and from the right, the scattering matrix and then the inverse Fourier transforms ρ of the reflection coefficients from the right R and the Fourier transform ℓ of the reflection coefficient from the left L. Once the Marchenko kernels Ω ℓ (α) and Ω r (α) and the functions ρ(α) and ℓ(α) have been obtained, the bound states {λ j } n j=1 with their multiplicities {m j } n j=1 and the norming constants {(Γ ℓ ) js , (Γ r ) js } are computed by applying to the monomialexponential sums (2.5)-(2.6) the matrix-pencil method proposed in [8] and [6]. 5.1. Auxiliary functions computation. As said before, our numerical method for the solution of the Volterra systems (3.5)-(3.8) is greatly influenced by the hypothesis (5.1). It implies a reduction of the auxiliary function supports, which allows us to develop algorithms that are simpler and numerically stable. As proved in [7],K up andK dn have the supports depicted in Figure 1. Taking into account the symmetry properties (3.13) or (3.14) of systems (3.5) and (3.6), it is immediate to check that supp(K up ) = supp(K dn ) and supp(K dn ) = supp(K up ). Figure 1. Supports of the auxiliary functionsK up and K dn (to the left) andK dn and K up (to the right) For the numerical solution of system (3.5), the following properties, proved in [7], are also important: These two results are graphically represented in Figure 2, Analogous considerations, based on results reported in [7] allow us to claim that the supports of (M up ,M dn ) are those depicted in Figure 3. As for (K up ,K dn ) and (K up , K dn ) as for the pairs (M up ,M dn ) we have additional properties very useful from the numerical point of view. With obvious meaning of the symbols, they are reported in Figure 4. A simple inspection of Figures 1 and 3 makes it evident that the area where we need to computeK up andK dn , as well as K up and K dn , is given by the orange triangle represented in Figure 5. In the remaining areas of the respective supports their values are immediately obtained by using those of the orange triangle. 
The orange line shows, in particular, the values of the orange triangle we use to compute (K up ,K dn ) and (K up , K dn ) in the point of the gray area. Similar considerations Figure 3. Supports of the auxiliary functions M up andM dn (to the left) and M dn andM up (to the right) Algorithm. Given the initial solution u 0 and v 0 = u * 0 in the focusing case or v 0 = −u * 0 in the defocusing case, we have to solve Volterra systems (3.5)-(3.8). Let us start with the numerical solution of system (3.5). As noted before, under the hypothesis (5.1), we can limit ourselves to solve this system in the triangular computational area represented in Figure 5, as the values ofK up andK dn in the remaining parts of their support are then automatically known. The algorithm that we propose in this paper is more effective that the one reported in [7] whose aim was simply to check the effectiveness of our approach, highlighting the mathematical problems to overcome to obtain a satisfactory solution of the problem. Though the collocation strategy is the same used in [7], the algorithm used here is more complex and effective. In fact, it is based on the combined use of the trapezoidal rule, the composite Simpson quadrature formula and the 3/8 Simpson quadrature rule [12, Section 3.1], instead of only the composite trapezoidal quadrature formula used there. The first step is to fix a proper mesh in the computational area which can be done by fixing n ∈ N, taking h = L n and introducing the following mesh points: where the index k = 0, . . . , 2n identifies the line y = x + 2kh on which we want to compute the unknown functions, whereas i labels the abscissa of the i-th mesh point on the line. For the sake of simplicity, let us hereafter write u and v in place of u 0 and v 0 , respectively. The computational strategy requires us to compute firstK up and K dn in the nodal points of the bisector (x i , x i ). Consequently, recalling (3.9) and denoting byK up r,s ,K dn r,s the approximation ofK up (x, y),K dn (x, y) in the nodal points of D 0 , we can writē To approximate the above integral, it is convenient to use different quadrature formulae, according to the node x i . More precisely for: • i = n, being involved only two nodal points, we use the trapezoidal rulē as, for (3.9), u n+1 v n+1 = 0; • i = n − ℓ, ℓ = 1, 3, 5, . . . , 2n − 1, we apply the composite Simpson rule. Recalling that u n+1 = v n+1 = 0, we then obtain • i = n − ℓ, ℓ = 2, 4, 6, . . . , 2n, noting that and that the first integral involves four nodes, while the second involves an odd number of nodes, we can apply the 3/8 Simpson rule [12, p.128] for computing the first integral and the composite Simpson quadrature formula for the second one. Hence, recalling again that u n+1 = v n+1 = 0, we havē OnceK up andK dn on the nodal points of the bisector y = x are known, to evaluate them on the nodal points of the parallel lines to the bisector, we collocate system (3.5) on the nodes of the mesh (x i , x i+2k ), taking successively k = 1, . . . , 2n and, fixing k, assuming i = n − k, . . . , −n + 1, −n. Hence, we can write These formulae, taking into account the support of the functions involved ( Figure 5), reduce to To compute the first integral we use different quadrature formulae, according to the node x i . 
More precisely, fixing k, for: • i = n − k, being involved only two nodal points, we use the trapezoidal rule and then take as the nodal point (x n−k+1 , x n+k+1 ) is outside of the support ofK dn (x, y); • i = n − k − ℓ, with ℓ odd and ℓ ≤ 2n − k, applying the composite Simpson's rule, we obtain asK dn n−k+1,n+k+1 = 0. • i = n − k − ℓ, with ℓ even and ℓ ≤ 2n − k, noting that we apply the 3/8 Simpson's rule for the first integral and the composite Simpson's quadrature formula for the second one. Hence, we have   as the nodal point (x n−k+1 , x n+k+1 ) is outside the support ofK dn (x, y). The computation of the second integral is also based on the use of quadrature formulae, essentially dependent on the line y = x + 2kh. More precisely, for: • k = 1, as only two nodal points are involved, we apply the trapezoidal rule, obtaining ,i+1 }; • k = 2, 4, 6, ..., 2n, we use the composite Simpson quadrature formula. Proceeding in this way we obtain for i = n − k, ..., −n where w k,i is the sum of theK up values in the nodal points belonging to the bisector and the previous parallels. In fact, theK up values of the first sum belong to the lines y = x + [2k − 2(2j − 1)]h, those of the second one belong to the lines y = x + [2k − 4j]h and the last term to y = x. • k = 3, 5, 7, ..., 2n − 1, we write and then we use the 3/8 Simpson rule for the first integral and again the composite Simpson quadrature formula for the second one: where w k,i is known, being a linear combination ofK up values already computed. Once the integrals have been approximated as described above, we obtain the 2n following structured systems of order 2(2n + 1 − k), k = 1, . . . , 2n that allow us to compute the functionsK up andK dn in the 2n + 1 − k nodal points of D k as k up k = (K up n−k,n+k ,K up n−k−1,n+k−1 , . . . ,K up −n+1,−n+2k+1 ,K up −n,−n+2k ) T k dn k = (K dn n−k,n+k ,K dn n−k−1,n+k−1 , . . . ,K up −n+1−n+2k+1 ,K dn −n,−n+2k ) T . Notice that U k,1 , U k,2 are the following structured matrices: with c 1 = 1/2, c 2 = c 4 = ... = c 2n = 1/3 and c 3 = c 5 = ... = c 2n−1 = 3/8 and The most obvious computational strategy is to reduce (5.2) to a sequence of n − k systems of order two. However, our numerical experiments indicate that the numerical stability increases by using a suitable iterative method. It requires solving iteratively the system (5.3) (I − U k,1 U k,2 )K up k = U k,1 w k , and then computing k . The matrix of system (5.3), for h small enough, is diagonally dominant as each nonzero element of U k,1 U k,2 contains a factor h 2 , so that the Gauss-Seidel method is a suitable choice of iteration method, assuming as an initial vector the values ofk up k in the previous parallel, that is taking in the (k + 1)th parallel to the bisector As I − U k,1 U k,2 is lower triangular, it is of course possible to solve it by a descending technique. Remark 3. Once we have solved system (3.5) we can immediately deduce the solution of system (3.6) taking into account Remark 1. In any case, we note that, as the computational area of system (3.5) is the same as that of (3.6), the algorithm to solve (3.6) is analogous to that adopted for system (3.5). The same comparative considerations hold true for the computation of (M up , M dn ) and (M up , M dn ) in the nodal points of their computational area. Moreover, although the computational area for (M up ,M dn ) is not the same as that for (K up , K dn ), the technique for their computation is essentially the same. 
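The node-dependent mixing of the trapezoidal rule, the composite Simpson rule, and the 3/8 Simpson rule used throughout this section can be packaged as a single helper. The following Python sketch is illustrative only, not the authors' full Volterra solver: for an even number of subintervals it applies composite Simpson, and for an odd number it applies the 3/8 rule on the first three subintervals and composite Simpson on the remainder, exactly mirroring the splitting described above.

    import numpy as np

    def mixed_simpson(f_vals, h):
        """Integrate uniformly spaced samples f_vals with spacing h.

        Uses the trapezoidal rule for 1 subinterval, composite Simpson for an
        even number of subintervals, and the 3/8 rule on the first three
        subintervals plus composite Simpson on the rest otherwise.
        """
        n = len(f_vals) - 1  # number of subintervals
        if n == 0:
            return 0.0
        if n == 1:
            return h * (f_vals[0] + f_vals[1]) / 2.0
        if n % 2 == 0:  # composite Simpson needs an even subinterval count
            return h / 3.0 * (f_vals[0] + f_vals[-1]
                              + 4.0 * f_vals[1:-1:2].sum()
                              + 2.0 * f_vals[2:-2:2].sum())
        # odd n >= 3: 3/8 rule on the first three subintervals, then Simpson
        head = 3.0 * h / 8.0 * (f_vals[0] + 3 * f_vals[1]
                                + 3 * f_vals[2] + f_vals[3])
        return head + mixed_simpson(f_vals[3:], h)

    # sanity check: the integral of sin on [0, pi] is 2
    x = np.linspace(0.0, np.pi, 12)  # 11 subintervals (odd case)
    print(mixed_simpson(np.sin(x), x[1] - x[0]))  # ~2.0

In the paper's scheme the same idea is applied kernel by kernel, with the additional bookkeeping for the supports of the auxiliary functions.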
Noting that (Figures 5, 6) the two computational areas are symmetric with respect to each other, we first have to compute (M up ,M dn ) in the bisector and then on the parallel lines y = x − 2kh, k = 1, 2, ..., 2n. Furthermore, to compute M dn in the bisector we can adopt the same algorithm forK up as relations (3.9) and (3.12) indicate. A comparison between the systems (3.5) and (3.7) also suggests to approximate the first integral in (3.7) by a simple adaptation of the method developed for the second one in (3.5), as well as the second integral of (3.7) by adapting the method for the first integral of (3.5). Marchenko kernel computation. To compute Ω ℓ and Ω r , that is to solve the integral equations (4.1) and (4.2), we first note that ( For the approximation of Ω ℓ we collocate (4.1) in the nodal points Now, to compute the above integral we use different quadrature formula by adopting a steplenght δ = 2h that is twice the one considered in the numerical solution of system (3.6) to avoid the interpolation among the values of the auxiliary functions computed before. More precisely, for • i = 0 we immediately obtain that Ω ℓ,2n = −K dn n,n = − 1 2 v n in virtue of (3.9); • i = 1 we use the trapezoidal rule by getting 1 + δ 2 K dn n−2,n−2 Ω ℓ,2(n−1) = −K dn n−2,n − K dn n−2,n Ω ℓ,2n ; • i = 2, 4, 6, ... we use the Simpson quadrature formula we approximate the first integral by using the 3/8 Simpson rule and the last integral by adopting the composite Simpson quadrature formula. Hence we An analogous procedure can be applied to approximate Ω r in [−2L, 0]. More precisely, we collocate (4.2) in the nodal points Hence, by adopting the technique illustrated above, • for i = 0 we immediately obtain • for i = 2, 4, 6, ... we obtain • for i = 3, 5, 7, ... as we can write we approximate the first integral by using the composite Simpson rule and the second one by adopting the 3/8 Simpson's quadrature formula. Hence we get 5.3. Computation of the scattering matrix and inverse Fourier transforms of reflection coefficients. In this section we illustrate our method to approximate the scattering matrix and then to compute the transmission coefficients T defined in (4.3), the reflection coefficients R and L introduced in (4.5)-(4.4) and their Fourier transforms ρ and ℓ given in (2.9)-(2.10), under the assumption that u 0 ∈ C(R). Approximation of the transmission coefficient T. It is based on the two equivalent definitions of the transmission coefficient that is on the computation of the coefficients of the transition matrices a ℓ4 (λ) = 1 + where H denotes the Heaviside function and F −1 {g} stands for the inverse Fourier transform of g. Let us only illustrated the algorithm for the computation of the coefficient a ℓ4 as the computation of a r1 is analogous. We remark that its computation requires only the values of K up (y, y + 2hj) which we have already computed since they are the values of K up on the jth parallel to the bisector y = x. For this reason Φ up j can be computed by simply adopting the computational strategy that we developed for computing K up . At this point the approximation of T (λ), easily follows by using (5.8). Approximation of the reflection coefficients R and L. In the matter of the computation of the reflection coefficients, taking into account (4.5) and (4.4), we can write Other equivalent expressions can be deducted by using the definitions of R, L and T in terms of the coefficients of the transmission matrix from the right. 
To approximate $a_{\ell 3}$, taking into account (5.1) and the support of $\bar{K}^{up}$, we first note the corresponding representation. Moreover, adopting the notation used before and noting that $\bar{\Phi}^{up}_{-n} = \bar{\Phi}^{up}(-2nh) = 0$, we can write the discretized form. Hence $\bar{\Phi}^{up}_i$, as well as $\Phi^{up}_j$, can be computed by simply adapting the computational strategy developed for $K^{up}$. The approximation of $R$ and $L$ immediately follows by using (5.12).

5.4. Computation of the bound states and the norming constants

For the sake of completeness, we now give a brief description of the matrix-pencil method that we have recently developed for the identification of the bound states and the norming constants [6,8]. Setting $z_j = e^{i\lambda_j}$, the spectral function sum $S_\ell(\alpha)$ introduced in (2.5) can be represented as a monomial-power sum satisfying the Prony relation $\sum_k p_k S_{k+\alpha_0} = 0$, whose characteristic polynomial (Prony's polynomial) is uniquely characterized by the $z_j$ values we are looking for. The identification of the zeros $\{z_j\}$ allows one to compute the coefficients $c_{js}$ by solving a linear system in the least-squares sense. For the computation of $\{z_j\}$, and then of the bound states $\lambda_j$, the given data are arranged in two Hankel matrices of order $N$. To these matrices we then associate the $M \times M$ matrix pencil, where the asterisk denotes the conjugate transpose. As proved in [8], the zeros $z_j$ of the Prony polynomial, with their multiplicities, are exactly the generalized eigenvalues of the matrix pencil $S_{MM}(z)$. The simultaneous factorization of the matrices $S^0_{NM}$ and $S^1_{NM}$ by the Generalized Singular Value Decomposition allows us to compute the zeros $z_j$ and then the bound states $\lambda_j$, as $\lambda_j = -i \log z_j$. Analogous results can be obtained by a proper factorization of the augmented Hankel matrix $S_\ell = [S^0_{\ell,1}, S^1_\ell]$, where $S^0_{\ell,1}$ is the first column of $S^0_{NM}$ and $S^0_\ell$ is obtained from $S^1_\ell$ by simply deleting its last column. As shown in [6], the QR factorization of $S_\ell$ is as effective as the SVD factorization considered in [8], though its computational complexity is generally smaller. The coefficients $\{(\Gamma_r)_{js}\}$ are then obtained by solving, in the least-squares sense, a linear system whose vector of known data is given by $\Omega_r(\alpha)$ evaluated in a set of $N$ points, with a sufficiently large $N > M$.

Examples

Let us now present two examples. The first one is a reflectionless case, while the second one has reflection coefficients different from zero. Each of them will be used in the next section to give numerical evidence of the effectiveness of our method.

Example 1 (One soliton potential). Considering the initial potential for the NLS in the focusing case, we take the potential (6.1), where $\xi, \phi, x_0 \in \mathbb{R}$ and $0 \neq \eta \in \mathbb{R}$. As proved in [9], the corresponding initial value problem (1.1) can be solved exactly, as already considered in several papers, in particular in [5] and [4]. Let us note that $2\eta > 0$ represents the amplitude of the initial potential and $\mu_0 = x_0/2\eta$ is the initial peak position. In this example the norming constants from the left and from the right are given in [4]. Moreover, setting $a = \eta + i\xi$, as is immediate to check, the exact solution of the Volterra system (3.6) for $y \geq x$ is known in closed form, while the exact solution of system (3.5) can be obtained by resorting to relation (3.13). Furthermore, the closed-form solution of the Volterra system (3.8) is also known, while the solution of system (3.7) can be deduced by using relation (3.13).
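Before the worked examples, here is a minimal Python sketch of the matrix-pencil step of Section 5.4, restricted for simplicity to simple poles (m_j = 1). The function name, the explicit sampling step delta, and the use of a plain pseudoinverse-based eigensolver instead of the GSVD/QR factorizations of [6,8] are all simplifying assumptions, not the authors' implementation.

    import numpy as np

    def matrix_pencil_simple(samples, M, delta):
        """Recover z_j (and lambda_j = -i log(z_j)/delta) from uniform samples
        f(k*delta) = sum_j c_j * z_j**k, assuming M simple poles."""
        N = len(samples) - 1
        # Hankel matrices built from the samples (one shifted by a row):
        Y0 = np.array([[samples[i + j] for j in range(M)]
                       for i in range(N - M + 1)])
        Y1 = np.array([[samples[i + j + 1] for j in range(M)]
                       for i in range(N - M + 1)])
        # z_j are the eigenvalues of pinv(Y0) @ Y1 (pencil Y1 - z Y0)
        z = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
        lam = -1j * np.log(z) / delta
        # coefficients c_j by least squares on the Vandermonde system
        V = np.vander(z, N + 1, increasing=True).T
        c, *_ = np.linalg.lstsq(V, samples, rcond=None)
        return lam, c

    # sanity check with a single complex exponential; lambda = 0.1 + 2j is an
    # assumption about the bound state convention of Test 1 (xi = 1/10, eta = 2)
    delta, lam_true, c_true = 0.05, 0.1 + 2.0j, 1.5 + 0.0j
    k = np.arange(60)
    samples = c_true * np.exp(1j * lam_true * k * delta)
    lam, c = matrix_pencil_simple(samples, M=1, delta=delta)
    print(lam, c)  # ~ [0.1+2j], [1.5]

Multiple poles of higher multiplicity require the generalized-eigenvalue treatment of [8]; the sketch above only illustrates the simple-pole mechanics.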
Since Example 1 represents a reflectionless case, the exact initial Marchenko kernels are known in closed form, in particular [4] $\Omega_\ell(x) = \Gamma_\ell e^{-ax}$; the scattering matrix is likewise known explicitly.

Example 2 (Gaussian potential). As a second example of the initial potential for the NLS, we take the potential (6.3), where $q_0 > 0$, $\sigma > 0$ and $\mu \in \mathbb{R}$. As in [11,14], we investigate the defocusing case, in which the scattering coefficients $T(\lambda)$, $R(\lambda)$ and $L(\lambda)$ are all continuous functions and there are no bound states; hence, in this case the relations (6.4) hold true. Moreover, we also consider the focusing case. In such a case there are no discrete eigenvalues whenever $q_0\sqrt{\pi\sigma} < \pi/2$; on the contrary, we have $n$ discrete eigenvalues, all of them simple and having real part $-\mu/2$, if [10]

(6.5)  $\left(n - \tfrac{1}{2}\right)\pi < q_0\sqrt{\pi\sigma} < \left(n + \tfrac{1}{2}\right)\pi.$

Numerical results and conclusions

Test 1 (One soliton potential). Let us consider, as in [4], the initial potential (6.1) with $\xi = 1/10$, $x_0 = \phi = 0$ and $\eta = 2$. In order to compute the non-zero scattering parameters, which in this case are the norming constants, the bound states and the transmission coefficient, we first solve the Volterra systems (3.6) and (3.8) with $L = 8$ and $n = 3000$, obtaining the relative errors reported below, where here and in the sequel the $\sim$ sign denotes the approximation of the exact function previously given and $\|\cdot\|$ denotes the maximum norm of the involved function on its computational area. Identical relative errors are of course obtained for the remaining auxiliary functions, as a result of the symmetry properties (3.13) and (3.14). Once these auxiliary functions are computed, we numerically solve equations (4.1)-(4.2), getting the Marchenko kernels from the right and from the left with the corresponding relative errors, where the symbol $\simeq$ means that the left term coincides with the right term up to the third decimal digit. At this point, by using such kernels, we apply our matrix-pencil method [6], finding a single bound state, a norming constant from the left and a norming constant from the right with the corresponding relative errors. Analogous relative errors are obtained for the scattering matrix. Concerning the transmission coefficient, we can compute it by first approximating the integral $\Phi^{up}$ defined in (4.8), and then using (5.8). In Table 1 we give the relative errors we obtain for this coefficient over segments of width $4L$ on three different lines.

Test 2 (Gaussian potential). Let us first consider the initial potential (6.3) in the defocusing case with $q_0 = 1.9$, $\mu = 1$, $\sigma = 2$, as in [11,14]. To this end, we compute the solution of systems (3.5)-(3.8), considering as in the soliton case $L = 8$ and $n = 3000$; then we solve equations (4.1)-(4.2), compute the scattering matrix, and thus the Fourier transforms of the reflection coefficients. Our numerical method recognizes that, as theoretically expected, there are no bound states, and relations (6.4) are numerically satisfied within the computed errors. As in the one-soliton case, we checked whether our numerical results satisfy the algebraic property (2.4) for the scattering matrix, by considering on a semi-logarithmic scale the error function $E_{GD}(\lambda) = \|\tfrac{1}{2}(S^\dagger(\lambda)S(\lambda) + S(\lambda)S^\dagger(\lambda)) - I\|$ for $\lambda \in [-2L, 2L]$. As shown in Figure 10, its numerical validity is satisfactory, as in the soliton case. Now let us investigate the focusing case, considering the initial potential (6.3) with $q_0 = 2.5$, $\mu = 1$, $\sigma = 2$. In this case, inequality (6.5) implies that we have two simple bound states $\{\lambda_1, \lambda_2\}$ whose real part is $-1/2$.
We first compute the auxiliary functions by solving systems (3.5)-(3.8) with $L = 8$ and $n = 3000$; then we solve equations (4.1)-(4.2), compute the scattering matrix and the Fourier transforms of the reflection coefficients. At this point, we apply the matrix-pencil method described in Section 5.4, assuming that we have no more than five bound states. Our method recognizes that, as theoretically expected, there are two simple bound states having real part equal to $-\mu/2$, and it produces the corresponding numerical values.

Conclusions

The numerical results show that our numerical method is effective in both the focusing and defocusing cases, provided the initial potential decays to zero at infinity and is at least continuous. This positive result is due to the possibility of knowing each pair of auxiliary functions on the whole plane by solving the corresponding Volterra system on a bounded computational triangle. The accuracy of the identification of the spectral parameters strongly depends on this result, since all the subsequent computations require the knowledge of the auxiliary functions on their computational triangles. We believe that the method can be extended, with the same accuracy, to the presence of jump discontinuities of the initial potential. To this end, a numerically stable method for the solution of the Fredholm integral equations (4.12)-(4.13) and (4.15)-(4.16) should be developed. The development of such a method should also be accompanied by an extensive numerical experimentation, which requires the exact knowledge of the scattering data in at least one case in which the initial potential has jump discontinuities. Considering that such research takes a rather long time, the development of such a method is postponed to a subsequent paper.

Supports of the auxiliary functions

In this section we determine the supports of the auxiliary functions $K(x,y)$ and $M(x,y)$ if the potentials $u_0(x)$ and $v_0(x)$ have their supports in $[-L, L]$. It suffices to prove parts (2) of Lemmas 5.1 and 5.2 in [7], because the proofs of the other three parts of these two lemmas are immediate and proceed as in the discrete case. Here $Q$ and $P$ are bounded [13]. Then, for $x \leq L$ and $x + y \geq 2L$, the integral equations (3.1) have zero right-hand sides, because $v_0(\tfrac{1}{2}(x+y)) = 0$ for $x + y > 2L$. Integrating the absolute values of $\bar{K}^{up}(x,y)$ and $\bar{K}^{dn}(x,y)$ with respect to $y \in (x, +\infty)$, we obtain a bound on $\nu(\bar{K}^{up}; x)$; taking the limit as $n \to +\infty$, we get $P(x) = 0$ and hence $\bar{K}^{up}(x,y) = \bar{K}^{dn}(x,y) = 0$ for almost every $y > x$, as claimed. The proof of part (2) of Lemma 5.2 is analogous.

Acknowledgements

The research has been partially supported by INdAM (National Institute for Advanced Mathematics, Italy).
2015-02-16T17:05:34.000Z
2015-02-16T00:00:00.000
{ "year": 2016, "sha1": "38d28c87666de571bf902065d6551b3e8c6fe1e2", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1502.04628", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "38d28c87666de571bf902065d6551b3e8c6fe1e2", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
216032789
pes2o/s2orc
v3-fos-license
Advances in COVID-19: the virus, the pathogenesis, and evidence-based control and therapeutic strategies

Since the outbreak of the COVID-19 pandemic in early December 2019, 81 174 confirmed cases and 3242 deaths have been reported in China as of March 19, 2020. The Chinese people and government have made huge efforts to combat this disease, resulting in significant improvement of the situation, with 58 new cases (34 of which were imported cases) and 11 new deaths reported on March 19, 2020. However, as of March 19, 2020, the COVID-19 pandemic continues to develop in 167 countries/territories outside of China, where 128 665 confirmed cases and 5536 deaths have been reported, with 16 498 new cases and 817 new deaths occurring in the last 24 hours. Therefore, the world should work together to fight against this pandemic. Here, we review the recent advances in COVID-19, including insights into the virus, the responses of the host cells, the cytokine release syndrome, and the therapeutic approaches to inhibit the virus and alleviate the cytokine storm. By sharing knowledge and deepening our understanding of the virus and the disease pathogenesis, we believe that the community can efficiently develop effective vaccines and drugs, and mankind will eventually win this battle against the pandemic.

Introduction

The past three months have witnessed the tremendous efforts that China has been making to tackle the outbreak of the epidemic coronavirus disease 2019 (COVID-19) [1]. These efforts have resulted in the epidemic peaking on February 19, 2020, and declining steadily since then [2]. As of March 19, 2020, 81 174 confirmed cases and 3242 deaths were reported in China, including 58 new cases (34 of which were imported cases) and 11 new deaths reported in the past 24 h [3]. In contrast, there have been more new cases reported from countries outside of China than from China since February 26, 2020 [4]. The situation of Wuhan is getting better, and all of its Fangcang (makeshift) hospitals were closed after the last group of patients moved to other hospitals on March 11, 2020 [5]. By March 19, 2020, however, 128 665 confirmed cases and 5536 deaths had been reported in 167 countries/territories outside of China, with 16 498 new cases and 817 new deaths occurring in the past 24 h [3]. On March 11, 2020, the World Health Organization (WHO) described COVID-19 as a pandemic to spur countries to action [6].

Genome type

The SARS-CoV-2 viruses are positive single-stranded RNA viruses [7]. The whole viral architecture was examined by transmission electron microscopy, and the results showed that the virion particles are roughly spherical or moderately pleiomorphic, with nail-like spikes pointing outward and a long body embedded in the envelope [8]. Population genetic analyses of 103 SARS-CoV-2 genomes indicate that these viruses evolved into leucine (L) and serine (S) types, whereby the L type might be more aggressive and spread more quickly and the S type might be the ancestral version [9]. While criticisms of this work have been posted [10], other reports indicate that genomic variations of SARS-CoV-2 may lead to multiple outbreak sources of transmission [11,12]. The typing of SARS-CoV-2 remains to be repeated by other groups, though the two clades exhibited similar virulence and clinical outcomes.
Analysis of the whole-genome sequences of 104 strains of the COVID-19 virus isolated from patients in different localities between the end of December 2019 and mid-February 2020 showed 99.9% homology, without significant mutations [2]. In another study, however, analysis of 120 genomic sequences of SARS-CoV-2 reported that this virus may increase its infectivity through recombination in the receptor binding domain and a cleavage site insertion [13]. However, phylogenetic analysis of SARS-CoV-2 and its closely related reference genomes indicates that the origin of this virus remains to be determined [9,14]. SARS-CoV-like viruses usually have six critical amino acids in the receptor binding domain (RBD) of the spike (S) protein for binding to the receptor ACE2 and for determining the host range, and 5 of the 6 amino acids in SARS-CoV-2 differ from those in SARS-CoV [15]. Moreover, a sequence encoding the amino acids PRRA is inserted into the genome of SARS-CoV-2; together with the following R within the original sequence of the S protein, a polybasic cleavage site (RRAR) for the Furin protease is generated at the junction of S1 and S2, making the virus more infectious than SARS-CoV [15,16]. Because SARS-CoV-2 bears no clue of genetic manipulation in its genome but exhibits notable features shared with coronaviruses in nature, including the optimized RBD and the polybasic cleavage site, any type of laboratory-based scenario for the virus would not be plausible [15].

Cell entry and life cycle

SARS-CoV-2 uses the SARS-CoV receptor ACE2 for entry and the serine protease TMPRSS2 for S protein priming [17]. The target cells of SARS-CoV-2 have been reported to include type II alveolar cells, myocardial cells, proximal tubule cells of the kidney, ileum and esophagus epithelial cells, and bladder urothelial cells [18]. According to the SARS-CoV model [19] and recent advances [15,16], SARS-CoV-2 may enter target cells through an endosomal pathway (Fig. 1): the S protein binds to ACE2 and is translocated to endosomes, where the S protein is cleaved by the endosomal acid proteases (cathepsin L) to activate its fusion activity. The SARS-CoV-2 S glycoprotein harbors a Furin cleavage site (R-X-X-R; X, any amino acid), facilitating viral entry into target cells and making it more infectious than the SARS virus [16]. The SARS-CoV-2 genome is released and translated, and the protein products are processed by viral proteinases. Meanwhile, the subgenomic negative-strand templates are synthesized and serve as templates for genomic RNA. The synthesized genomic RNA assembles with the nucleocapsid (N) protein in the cytoplasm to form viral nucleocapsids, which bud into the lumen of the endoplasmic reticulum-Golgi intermediate compartment [19]. The replicated virions are released from the cell through exocytosis to infect other cells (Fig. 1).

Pathogenesis of persistent cytokine release syndrome

In response to pathogens, the innate immune system releases cytokines to antagonize the pathogens and recruit additional immune responses. CRS, or cytokine storm, is the uncontrolled release of cytokines that can be triggered by a variety of factors including viruses, bacterial components, sepsis, superantigens, toxins, antibodies, and chimeric antigen receptor T cells [28]. CRS was first reported in 1989, when the anti-T cell antibody muromonab-CD3 was used in the treatment of solid organ transplantation [29].
CRS is a life-threatening toxicity that may lead to detrimental effects such as leakage from capillaries, tissue toxicity and edema, organ failure, and shock. The symptoms of CRS include sustained fever, hepatomegaly with liver dysfunction, coagulopathy, cytopenia, skin rash, and variable neurologic symptoms (Table 1), which are sometimes difficult to distinguish from those of the underlying diseases [30]. CRS is usually initiated by macrophages, dendritic cells, NK cells, and T cells in response to pathogen-associated molecular patterns [28]. In SARS-CoV-induced severe disease, the level of IL-6 was significantly elevated [31]. In influenza virus infection, the infiltration of innate immune cells into the lung and the subsequent CRS are key contributors to morbidity and mortality, whereas endothelial cells play a central role in orchestrating cytokine amplification [32].

Pathogenesis of cytokine release syndrome induced by SARS-CoV-2

Human coronaviruses can be divided into low pathogenic and highly pathogenic coronaviruses [33], and SARS-CoV-2 is obviously a highly pathogenic virus. Clinical studies have shown significant elevation of cytokines and lymphocytopenia in COVID-19. These cytokines include IFN-γ, TNF-α, IL-6, IL-10, IL-2, IL-1, and others. Elevated IL-6 was significantly related to the clinical manifestations of severe-type patients [34]. An analysis of the dynamic characteristics of the host immune system in three critical cases showed that hypoxemia severity was closely related to host immune cell levels [35], and the lymphocytopenia and cytotoxicity may be the result of SARS-CoV-2 infection [36]. An observation showed that after SARS-CoV-2 infection, CD4+ T lymphocytes are rapidly activated to become pathogenic T helper (Th) 1 cells and generate cytokines including GM-CSF. The cytokine environment induces inflammatory CD14+CD16+ monocytes with high levels of IL-6 and accelerates the inflammation. These T cells and monocytes may enter the pulmonary circulation, where the monocytes become macrophages [9]. These cells, together with other cells [28], trigger CRS [37]. While IFN-γ may initiate the cytokine storm in SARS patients [38], several cytokines including IL-6 may trigger CRS in COVID-19. Other cells such as natural killer (NK) cells may also play a role in SARS-CoV-2-induced CRS, and transcription factors such as NF-kB may play a role in regulating cytokine release. The consequences of CRS include epithelial and endothelial cell apoptosis and vascular leakage, suboptimal T cell response (impaired virus clearance), accumulation of alternatively activated macrophages and altered tissue homeostasis, acute lung injury, and acute respiratory distress syndrome (ARDS) [33]. CRS is associated with necrosis and tissue destruction and related symptoms such as extensive pulmonary edema, acute bronchopneumonia, alveolar hemorrhage, reactive hemophagocytosis, and ARDS [10] (Fig. 3), as confirmed by histological examination of COVID-19 patients' lungs [36,39]. The necropsy investigation also shows the infiltration of macrophages and the activation of alveolar macrophages in fatal cases. However, whether the CRS results from persistent viral infection of immune cells, for example alveolar macrophages, or represents an over-activated post-viral-infection immune reaction is worth particular attention.

Control and therapeutics for COVID-19

Containability of SARS-CoV-2

SARS-CoV-2 is a new virus that shares 79.5% sequence identity with the genome sequence of SARS-CoV [40].
The virus exhibits a high reproduction number (R0) [41,42], is more infectious, and spreads more easily between people than the SARS virus, probably due to the gain of the S glycoprotein Furin-cleavage site [16]. Some people doubted whether this virus could be contained and suggested that a "let go" policy might be suitable for this pandemic, since the cost of strict social distancing and isolation would be too high to afford. This notion may be inappropriate for several reasons. Firstly, the historical experience with both SARS and MERS demonstrated that coronaviruses with high virulence do have a tendency toward self-limitation. Secondly, recent studies reported that asymptomatic cases with transmissibility account for only a small proportion (889/72 314, 1.2%) of COVID-19 patients [43]. Thirdly, preliminary data on recovered cases showed the presence of very high titers of neutralizing antibodies (39/40 with a titer of at least 1:640, while the remaining one had a titer of 1:32 [44]), indicating a high probability of viral clearance in the great majority of infected populations. Thanks to the domestic medical workers' great efforts, the central and local governments' tremendous input, the contributions from volunteers and warm-hearted people, and international support, the spread of SARS-CoV-2 in China has been significantly constrained, providing firm evidence that this virus is containable. In some countries outside of China, the policy-makers recognize China's experience in combating COVID-19, while some others have decided to apply a sound, well-designed containment strategy and to avoid the limitations of a herd-immunity approach. In extreme cases, the use of national machinery, including the police and armed forces, is necessary to meet this unprecedented public health crisis.

Anti-virus and CRS-clearing approaches

In addition to oxygen and other supportive therapeutics, some therapeutics are being tested in clinical trials. On one hand, anti-virus agents including convalescent patient plasma [44,45] and remdesivir [46] are being tested in clinical trials. In some COVID-19 patients who had viremia, the transfusion of convalescent plasma (CP) from recovered patients significantly reduced the viral load. Recent studies provided evidence that even after viremia, the viral infection may persist in the target organs including the lungs, justifying CP therapy even in the relatively late stage of severe disease (personal communication, Prof. Chaofu Wang, Ruijin Hospital Affiliated to Shanghai Jiao Tong University School of Medicine). The structure of the primary target of remdesivir, the RNA-dependent RNA polymerase of the virus, has recently been solved [47]. On the other hand, CRS-clearing drugs represent another key remedy to save severe cases. A monoclonal antibody against the IL-6 receptor, tocilizumab, has been shown to be effective in treating COVID-19. Other approaches to eliminate CRS, including an antibody against GM-CSF [37], are emerging. Disruption of a self-amplifying catecholamine loop has been shown to reduce CRS caused by infections and agents including oncolytic bacteria, T cell-targeting antibodies, and CAR-T cells [48]. Activating, with the soluble ligand Slit, an endothelium-specific, Robo4-dependent signaling pathway that strengthens the vascular barrier diminishes deleterious aspects of CRS-induced organ toxicity [49]. To achieve maximal efficacy, the combined usage of anti-viral drugs such as CP and drugs against the cytokine storm (Fig. 3) should be considered in clinical trials for severe cases.
Other therapeutic targets and agents

Vaccines are being designed for SARS-CoV-2, and two vaccines are being tested in phase I clinical trials for their safety and immunogenicity, in the USA and China, respectively. In 12 COVID-19 patients given prophylactic anti-coagulation therapy, the anticoagulant agent dipyridamole (DIP) exerted beneficial effects by reducing viral replication, suppressing hypercoagulability, and enhancing immune recovery [50]. A TMPRSS2 inhibitor approved for clinical use blocked entry and might constitute a treatment option, whereas sera from convalescent SARS patients cross-neutralized SARS-2-S-driven entry [17]. Interferon-inducible lymphocyte antigen 6 complex, locus E (LY6E) inhibits SARS-CoV-2 entry into cells in vitro and in vivo by interfering with spike protein-mediated membrane fusion [51]. Potential therapeutic targets for SARS-CoV-2 were analyzed, and inhibitors of the 3-chymotrypsin-like protease, the spike, the RNA-dependent RNA polymerase (RdRp), and the papain-like protease (PLpro) were screened [52]. The viral 3-chymotrypsin-like cysteine protease and the papain-like protease [53] were used as drug targets to screen for lead compounds [54]. A recent trial in 199 COVID-19 patients showed that no benefit was observed with lopinavir-ritonavir treatment beyond standard care [55].

Perspectives

Owing to the dedication of the medical community and the evidence-based, responsible policy-making of the Chinese leadership, which won the support of the public, the high appreciation of the World Health Organization (WHO), and the firm support of the international community, the SARS-CoV-2 outbreak in China has been contained, and very effective therapeutic strategies are being developed. However, continued vigilance is needed, and only time can tell whether this virus will disappear in summer or remain in the community, and whether COVID-19 will become an influenza-like disease. Now the virus is rapidly spreading in more than 160 countries/territories outside of China, in some of which the experience of China may be helpful to combat this pandemic. Clinically, more methods should be developed to stop the transition of mild cases into severe ones, more effective anti-virus agents should be unveiled, and the harmful effect of CRS should be alleviated to rescue severe cases. With our deepened understanding of the virus and the disease and the development of vaccines and effective drugs, together with the necessary but time-consuming public health intervention measures, we believe that human life and dignity, as a fundamental part of human rights, can be protected, and mankind will eventually win this battle against SARS-CoV-2. The Chinese medical community will be working hand-in-hand with colleagues in other countries to fight against the common enemy, COVID-19.

The study was supported in part by the Laboratory of Medical Genomics of Shanghai Jiao Tong University and the Overseas Expertise Introduction Project for Discipline Innovation (111 Project) (No. B17029). The study sponsors had no role in the design of the study; the data collection, analysis, or interpretation; the writing of the article; or the decision to submit for publication.

Compliance with ethics guidelines

Guangbiao Zhou, Saijuan Chen, and Zhu Chen declare no conflicts of interest. This manuscript does not involve a research protocol requiring approval by the relevant institutional review board or ethics committee.
2020-04-21T14:33:12.744Z
2020-04-01T00:00:00.000
{ "year": 2020, "sha1": "cca8177b94ece2863bbc73840a7b76a9843685a0", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s11684-020-0773-x.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "cca8177b94ece2863bbc73840a7b76a9843685a0", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
25190122
pes2o/s2orc
v3-fos-license
Large-scale inference of conjunctive Bayesian networks

The continuous time conjunctive Bayesian network (CT-CBN) is a graphical model for analyzing the waiting time process of the accumulation of genetic changes (mutations). CT-CBN models have been successfully used in several biological applications such as HIV drug resistance development and genetic progression of cancer. However, current approaches for parameter estimation and network structure learning of CBNs can only deal with a small number of mutations (<20). Here, we address this limitation by presenting an efficient and accurate approximate inference algorithm using a Monte Carlo expectation-maximization algorithm based on importance sampling. The new method can now be used for a large number of mutations, up to one thousand, an increase by two orders of magnitude. In simulation studies, we present the accuracy as well as the running time efficiency of the new inference method and compare it with an MLE method, expectation-maximization, and the discrete time CBN model, i.e., a first-order approximation of the CT-CBN model. We also study the application of the new model on HIV drug resistance datasets for the combination therapy with zidovudine plus lamivudine (AZT+3TC) as well as under no treatment, both extracted from the Swiss HIV Cohort Study database.

Availability and implementation: The proposed method is implemented as an R package available at https://github.com/cbg-ethz/MC-CBN.

Contact: niko.beerenwinkel@bsse.ethz.ch

Supplementary information: Supplementary data are available at Bioinformatics online.

Introduction

In many biological systems, there are constraints on the order in which genetic alterations (mutations) fixate in a population of biological entities such as viruses and cancer cells (Lozovsky et al., 2009; Poelwijk et al., 2007; Weinreich et al., 2006). In HIV infection, drug resistance development is due to the accumulation of resistance mutations, i.e., point mutations in the viral genome that increase viral fitness under the selective pressure of antiretroviral drugs (Seifert et al., 2015). Since resistance mutations give a strong selective advantage under a fixed drug pressure, their fixation is almost an irreversible process. More than 70 mutations in the reverse transcriptase, protease, envelope, and integrase genes of the viral genome are known to be associated with HIV drug resistance (Wensing et al., 2014). Models of the accumulation of mutations have been proposed and extensively used in HIV treatment optimization (Altmann et al., 2009; Beerenwinkel et al., 2005; Deforche et al., 2008; Prosperi et al., 2009; Beerenwinkel et al., 2013; Montazeri et al., 2013). Similarly, genetic progression of cancer is caused by the accumulation of mutations such as changes in single nucleotides or copy numbers (Hanahan and Weinberg, 2011; Merlo et al., 2006). Around 140 mutations are known to have an effect on cancer progression (Vogelstein et al., 2013). Mathematical and statistical models have also been used to describe the genetic progression of cancer (Desper et al., 1999; Gerstung et al., 2011; Heydebreck et al., 2004; Hjelm et al., 2006; Jiang et al., 2000; Mattias, 2004; Rahnenführer et al., 2005).
In the present paper, we study the continuous time conjunctive Bayesian network (CT-CBN) model for describing the accumulation of mutations (Beerenwinkel and Sullivant, 2009). The CT-CBN is a continuous time Markov chain defined on a partially ordered set (poset) of mutations. The poset encodes temporal ordering among mutations by assuming that the waiting time of each mutation begins only after all its predecessor mutations have already occurred. To each mutation, a rate of evolution is assigned, which includes the generation of the mutation and its fixation in the population. In large populations, mutations will be generated almost immediately and the waiting time is effectively dominated by the fixation time. Hence, we use the terms "rate of evolution" and "fixation rate" interchangeably throughout this paper. The fixation rates characterize the waiting time process of mutations.

Maximum likelihood estimation (MLE) is mainly used for inference of CBN models from censored cross-sectional genotypes. Observed genotypes are censored since explicit occurrence times of mutations are not known and it is only possible to measure which mutations have occurred up to a certain time point (i.e., the time of diagnosis). In addition, observed data are often subject to errors in the observation (sequencing) process and consequently true genotypes are not directly observable. Hence, the expectation-maximization (EM) algorithm, which can properly handle unobserved latent variables, has been extensively used for inference of CBN models (Beerenwinkel and Sullivant, 2009; Gerstung et al., 2009; Beerenwinkel et al., 2007, 2011; Montazeri et al., 2015). In (Sakoparnig and Beerenwinkel, 2012), a Bayesian inference approach is used for network learning and parameter estimation of the discrete time CBN. However, all the above-mentioned inference methods for CBN models are only feasible for small sets of mutations of less than around 20 mutations.
The main goal of the present paper is to address this limitation and to develop inference schemes for CBN models that scale to several hundreds of mutations. This model is particularly useful in modeling HIV evolution. Modeling the dynamics and dependencies among hundreds of HIV mutations makes it possible to better quantify drug resistance development and consequently improve the prediction of therapy response. In the present paper, we propose two novel inference methods for parameter estimation of CBN models. The first one is an exact MLE method that uses general-purpose optimization methods to directly maximize the observed likelihood. For this method, a new formulation of the likelihood and its gradient are given based on the properties of continuous-time Markov chains. This method is feasible for up to about 30 mutations. However, for some classes of posets, e.g., empty posets, it is feasible for much larger posets with several hundreds of mutations. The second method is an approximate inference scheme, namely a Monte Carlo expectation-maximization (MC-EM) algorithm (Wei and Tanner, 1990) with importance sampling. We demonstrate that the MC-EM method is almost as accurate as the exact MLE method in simulation studies. The method is applicable for parameter estimation of large posets with around 1000 mutations, an increase by two orders of magnitude. For network learning of the CT-CBN model, we adapted the mixture-model approach outlined in (Beerenwinkel and Sullivant, 2009; Montazeri et al., 2015) and address some of its limitations in order to make it applicable for large numbers of mutations.

The rest of this paper is organized as follows. In Section 2, after giving a brief introduction to the CT-CBN model, we present a new MLE method based on continuous time Markov chains for the CBN model. In addition, we propose an approximate large-scale inference method using MC-EM with importance sampling for the CBN model. We close this section by discussing how to reconstruct the underlying network topology from observed data. Section 3 reports the performance of the MC-EM method in comparison to other inference methods. In addition, we analyze thoroughly the HIV drug resistance development in a clinical dataset of Swiss HIV infected patients. We close with conclusions in Section 4.

Methods

The CT-CBN is defined on a set of genetic events (mutations) $P$ and a partial order $\preceq$ among the mutations. A relation $e_1 < e_2$ in $(P, \preceq)$ indicates that mutation $e_2$ can only happen after the occurrence of $e_1$. The relation $e_1 < e_2$ is called a cover relation if it exists in the transitive reduction of $(P, <)$, i.e., if there is no $e' \in P \setminus \{e_1, e_2\}$ with $e_1 < e' < e_2$. A genotype $g$ is a subset of $P$. The set of all genotypes compatible with the order constraints of the partially ordered set (poset) $P$ is denoted by $J(P)$. For example, the poset shown in Figure 1(a) consists of four mutations $\{1, 2, 3, 4\}$ subject to the relations $1 < 3$, $2 < 3$, and $2 < 4$. Its corresponding genotype lattice, shown in Figure 1(b), is $J(P) = \{\emptyset, \{1\}, \{2\}, \{1,2\}, \{2,4\}, \{1,2,3\}, \{1,2,4\}, \{1,2,3,4\}\}$. The set $\mathrm{Exit}(g)$ is defined as $\{e \in P \mid e \notin g,\ g \cup \{e\} \in J(P)\}$, i.e., the subset of events in $P \setminus g$ that can happen next. In the continuous time CBN, for each mutation $i \in P$ the waiting time to its occurrence, denoted by $T_i$, is defined as

$$T_i = \max_{j \in \mathrm{pa}(i)} T_j + Z_i, \qquad (1)$$

where $\mathrm{pa}(i)$ is the set of parents of mutation $i$ in the poset $P$.
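To make the generative process of equation (1) concrete, the following minimal Python sketch draws a genotype from a CT-CBN by simulating the waiting times in a topological order of the poset, using the poset of Figure 1(a) as an example. This is an illustration only, not the authors' implementation, and all function names are hypothetical.

```python
import random

def topo_order(parents):
    """Return the mutations in an order compatible with the poset."""
    order, placed = [], set()
    while len(order) < len(parents):
        for i in parents:
            if i not in placed and all(j in placed for j in parents[i]):
                order.append(i)
                placed.add(i)
    return order

def sample_genotype(rates, t_s, parents):
    """Draw one genotype from a CT-CBN by ancestral sampling of equation (1)."""
    T = {}
    for i in topo_order(parents):
        start = max((T[j] for j in parents[i]), default=0.0)
        T[i] = start + random.expovariate(rates[i])   # Z_i ~ Exp(lambda_i)
    return {i for i, t in T.items() if t < t_s}       # g = {e : t_e < t_s}

# Poset of Figure 1(a): 1 < 3, 2 < 3, 2 < 4.
parents = {1: [], 2: [], 3: [1, 2], 4: [2]}
print(sample_genotype({1: 1.0, 2: 1.0, 3: 2.0, 4: 0.5}, t_s=1.0, parents=parents))
```

By construction, every sampled genotype is compatible with the poset, i.e., it lies in $J(P)$.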
The expression $\max_{j \in \mathrm{pa}(i)} T_j$ indicates that the parent mutations happen first, and only then does the waiting process for mutation $i$ itself begin. The time needed exclusively for mutation $i$ is represented by $Z_i$ and is assumed to be an exponentially distributed random variable, so that the joint density of the occurrence times is

$$f(t) = \prod_{i=1}^{p} f_{\lambda_i}\!\Big(t_i - \max_{j \in \mathrm{pa}(i)} t_j\Big), \qquad (2)$$

where $p = |P|$ and the density function $f_{\lambda_i}$ is the univariate exponential probability density function with rate $\lambda_i$. A genotype with mutation times $t = (t_1, \ldots, t_p)$ is not compatible with the poset $P$ if the density $f$ is zero, or equivalently, if there exists an event $i \in P$ such that $t_i < \max_{j \in \mathrm{pa}(i)} t_j$. In real world applications, the random vector $T$ is not observed and the mutations are only observable at a certain sampling (sequencing) time, denoted by $t_s$, which itself might not be observable in all settings. Formally, the observed genotype $g$ at time $t_s$ is defined as $g = \{e \mid t_e < t_s\}$. The reader can refer to (Beerenwinkel and Sullivant, 2009; Gerstung et al., 2009; Montazeri et al., 2015) for a more detailed introduction to CBN models. The inference of the model consists of two parts, namely (i) parameter estimation: estimation of the exponential rates $\lambda_i$ for $i = 1, \ldots, p$ for a given poset, and (ii) network learning. For the parameter estimation, we first propose a new MLE method that directly maximizes the observed likelihood of the CBN model. In this approach, the likelihood and its gradient functions are formulated using some properties of continuous time Markov chains. In addition, we propose a large-scale efficient approximate algorithm using Monte Carlo expectation-maximization based on importance sampling for the parameter estimation. Finally, a mixture-model approach outlined in Section 2.3 is used to find the maximum likelihood estimate of the true poset.

Exact inference using continuous time Markov chain

The CT-CBN is a continuous time Markov chain on the genotype lattice $J(P)$ with transition rate matrix $S$ defined as (Beerenwinkel and Sullivant, 2009)

$$S_{i(g), i(h)} = \begin{cases} \lambda_e & \text{if } h = g \cup \{e\} \text{ for some } e \in \mathrm{Exit}(g), \\ -\sum_{e \in \mathrm{Exit}(g)} \lambda_e & \text{if } h = g, \\ 0 & \text{otherwise}, \end{cases}$$

where $i(g)$ is the index of the corresponding row or column of genotype $g$ in the transition matrix. The transition probability from genotype $g$ to $h$ in time $t$ is denoted by $p_{gh}(t)$ and is equal to the element $(i(g), i(h))$ of the matrix exponential $e^{tS}$. Consequently, the probability that genotype $g$ will be observed at time $t_s$, starting from the wild-type $\emptyset$ at time zero, is simply $p_{\emptyset g}(t_s)$. In terms of the hidden random vector $T$ and the observed sampling time $t_s$, this is equivalent to the probability that the events in $g$ happen before the sampling time $t_s$ and the other events happen after $t_s$, i.e., $\Pr(\max_{e \in g} T_e < t_s,\ \min_{e \in P \setminus g} T_e > t_s)$. The log-likelihood of the exponential rates for the fixed poset $P$, given observations $D$ (pairs of genotypes and sampling times), is then

$$\ell(\lambda) = \sum_{(g, t_s) \in D} \log p_{\emptyset g}(t_s), \qquad (3)$$

and the gradient of the log-likelihood function, $\nabla \ell(\lambda) = (\partial \ell(\lambda)/\partial \lambda_1, \ldots, \partial \ell(\lambda)/\partial \lambda_p)$, is given by

$$\frac{\partial \ell(\lambda)}{\partial \lambda_i} = \sum_{(g, t_s) \in D} \frac{\partial p_{\emptyset g}(t_s)/\partial \lambda_i}{p_{\emptyset g}(t_s)}. \qquad (4)$$

The term $\partial p_{\emptyset g}(t_s)/\partial \lambda_i$ can be computed using the matrix exponential of an augmented matrix (Fung, 2004). Here, the $2M \times 2M$ augmented matrix is

$$A_{S_i} = \begin{pmatrix} S & 0 \\ S_i & S \end{pmatrix}, \qquad S_i = \frac{\partial S}{\partial \lambda_i},$$

where $M$ is the size of the transition matrix $S$. The submatrix $(e^{t A_{S_i}})_{(M+1:2M,\ 1:M)}$ of the matrix exponential contains the derivatives of the transition probabilities with respect to the parameter $\lambda_i$. In particular, $\partial p_{\emptyset g}(t_s)/\partial \lambda_i$ is given by the $(M + 1, i(g))$-th element of the matrix exponential of the augmented matrix.
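As a sketch of this construction (not the MC-CBN package code; the rate matrix follows the definition of $S$ above, and the helper names are hypothetical), the observed-genotype probabilities for the poset of Figure 1(a) can be computed with an off-the-shelf matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Genotype lattice J(P) of Figure 1(a).
LATTICE = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2}),
           frozenset({2, 4}), frozenset({1, 2, 3}), frozenset({1, 2, 4}),
           frozenset({1, 2, 3, 4})]
PARENTS = {1: set(), 2: set(), 3: {1, 2}, 4: {2}}

def exit_set(g):
    """Exit(g): events not in g whose parents are all contained in g."""
    return {e for e in PARENTS if e not in g and PARENTS[e] <= g}

def rate_matrix(lam):
    """Transition rate matrix S of the CT-CBN on the genotype lattice."""
    idx = {g: k for k, g in enumerate(LATTICE)}
    S = np.zeros((len(LATTICE), len(LATTICE)))
    for g in LATTICE:
        for e in exit_set(g):
            S[idx[g], idx[g | {e}]] = lam[e]   # jump g -> g + e at rate lambda_e
            S[idx[g], idx[g]] -= lam[e]        # diagonal: minus total exit rate
    return S

S = rate_matrix({1: 1.0, 2: 1.0, 3: 2.0, 4: 0.5})
P_ts = expm(1.0 * S)      # transition probabilities for sampling time t_s = 1.0
print(P_ts[0])            # first row: p_{empty,g}(t_s) for every genotype g
```

The first row sums to one, and the derivative $\partial p_{\emptyset g}(t_s)/\partial \lambda_i$ could be obtained analogously by exponentiating the augmented matrix $A_{S_i}$.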
Example 2. For the poset shown in Figure 1(a), the matrix $\partial S/\partial \lambda_1$ is the $8 \times 8$ matrix whose rows and columns are indexed by the genotypes $\emptyset, \{1\}, \{2\}, \{1,2\}, \{2,4\}, \{1,2,3\}, \{1,2,4\}, \{1,2,3,4\}$; its only nonzero entries are the off-diagonal entries equal to $1$ in the rows of $\emptyset$, $\{2\}$, and $\{2,4\}$ (at the columns of $\{1\}$, $\{1,2\}$, and $\{1,2,4\}$, respectively, i.e., the transitions that add mutation 1) and the corresponding diagonal entries equal to $-1$.

The ML estimation can be performed by standard gradient ascent optimization methods such as the L-BFGS method (Liu and Nocedal, 1989) or by derivative-free optimization algorithms using a quadratic approximation (Powell, 2006). Since there are more genotypes compatible with sparser posets, the size of the transition matrix is larger for sparser posets. For a given number of mutations $p$, the transition matrix has the maximum size $2^p \times 2^p$ for the empty poset. Hence, the likelihood computation, which involves computing a matrix exponential, is not feasible for large posets. However, when the poset consists of $m$ independent components $C_i$, $i = 1, \ldots, m$, the probability of observing genotype $g$ factorizes over the components,

$$p_{\emptyset g}(t_s) = \prod_{i=1}^{m} p_{\emptyset,\, g \cap C_i}(t_s),$$

where each factor is computed in the Markov chain of the corresponding component. In particular, for the empty poset with $p$ mutations, we have $p$ different components and transition matrices. Each component consists of two genotypes: the wild-type and a mutated genotype. Consequently, we only need to compute $p$ matrix exponentials of size $2 \times 2$. In general, the complexity of computing the genotype probabilities is determined by the size of the largest component of the poset. In addition, because we are only interested in the first row of $e^{St}$, the matrix exponential of the whole matrix $S$ is not required. A faster method is to compute the action of the matrix exponential, $u e^{St}$, where $u = [1, 0, \ldots, 0]$ is of size $1 \times M$ (method expAtv in the package expm in R). With these improvements, the likelihood computation, and consequently the parameter estimation, is possible for larger posets of up to around 30 mutations.

In most biological applications, the sampling times are not available. For example, in cancer progression, the start of the tumor evolutionary process is not known. Hence, the time elapsed from the beginning of the process until the sampling time point is not known. In these applications, the sampling time is integrated out for the likelihood computation,

$$p_{\emptyset g} = \int_{t_s = 0}^{\infty} p_{\emptyset g}(t_s)\, f(t_s)\, dt_s, \qquad (5)$$

where $f$ is the density of the sampling time. A reasonable assumption for the sampling time is $T_s \sim \mathrm{Exp}(\lambda_s)$, where $\lambda_s$ is the rate of the sampling process (Beerenwinkel and Sullivant, 2009). This quantity has been computed in (Beerenwinkel and Sullivant, 2009, Theorem 4.1). A more elegant solution to compute this quantity is given in Appendix A. Since the sampling time is not observed, different connected components are not independent of each other anymore and the probability $p_{\emptyset g}$ is not decomposable over the components of the poset. Hence, in this case, the likelihood computation is not possible when the number of genotypes compatible with a poset is very large, irrespective of the sparseness of the poset. Finally, one can learn the CT-CBN model when some sampling times are missing and some of them are observed by maximizing the following log-likelihood function:

$$\ell(\lambda) = \sum_{(g, t_s) \in O} \log p_{\emptyset g}(t_s) + \sum_{g \in M} \log p_{\emptyset g}, \qquad (6)$$

where $O$ is the set of observed genotype-sampling time pairs and $M$ is the set of genotypes for which the corresponding sampling times are missing.
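For the exponential sampling-time assumption, a standard closed form follows from the resolvent identity for the generator $S$ (this is a textbook identity consistent with equation (5); whether it coincides with the derivation in Appendix A is an assumption here):

$$p_{\emptyset g} = \int_0^{\infty} \lambda_s e^{-\lambda_s t}\,\big[e^{tS}\big]_{i(\emptyset),\, i(g)}\, dt = \lambda_s \big[(\lambda_s I - S)^{-1}\big]_{i(\emptyset),\, i(g)},$$

which converges for every $\lambda_s > 0$ because the eigenvalues of a transition rate matrix have non-positive real parts.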
Approximate parameter estimation

Since the mutation occurrence times $t_i$, $i = 1, \ldots, p$, are not directly observable, direct optimization of the density (2) to find the rate parameters is not possible. The EM algorithm has previously been used in (Beerenwinkel and Sullivant, 2009; Gerstung et al., 2011; Montazeri et al., 2015) for parameter estimation of a given poset. It is easy to see that the time differences, $t_i - \max_{j \in \mathrm{pa}(i)} t_j$, are the sufficient statistics for estimating $\lambda_i$. The maximum likelihood estimate of $\lambda_i$ is

$$\hat{\lambda}_i = \frac{N}{\sum_{j=1}^{N} \big(t_i^{(j)} - \max_{k \in \mathrm{pa}(i)} t_k^{(j)}\big)}$$

(Beerenwinkel and Sullivant, 2009), where $t_i^{(j)}$ is the occurrence time of mutation $i$ for the $j$-th observation. In the E-step of the EM algorithm, expected values of the time differences are computed for all mutations given the observation $(g, t_s)$ and the estimate of the parameters from the previous iteration. This expectation is defined as

$$e_i(g, t_s) = \mathbb{E}\big[T_i - \max_{j \in \mathrm{pa}(i)} T_j \mid g, t_s, \lambda, P\big]$$

and is computed analytically in Theorem 2.5 of (Montazeri et al., 2015). A similar expectation for the case in which the sampling time is not observed is computed in (Beerenwinkel and Sullivant, 2009). These expectations involve integrals of the form $\int_{t \,\models\, (g, t_s)} \big(t_i - \max_{j \in \mathrm{pa}(i)} t_j\big) f(t)\, dt$, where $t \models (g, t_s)$ denotes that the occurrence time vector $t$ is in concordance with the observation $(g, t_s)$. Computing these integrals is complicated due to the fact that the density function (2) contains the maximum function in the exponent. In previous approaches, the above-mentioned integrals were decomposed into simpler integrals over all possible maximal extensions of a given poset. Since the total number of maximal chains is factorial in the number of mutations in the worst case, these methods are not feasible in general for large posets. In this paper, we use Monte Carlo integration by importance sampling to compute $e_i(g, t_s)$. In particular, we draw $L$ samples $t^{(1)}, \ldots, t^{(L)}$ from a proposal distribution $q(t)$, and $e_i(g, t_s)$ is approximated as

$$e_i(g, t_s) \approx \frac{\sum_{k=1}^{L} w_k \big(t_i^{(k)} - \max_{j \in \mathrm{pa}(i)} t_j^{(k)}\big)}{\sum_{k=1}^{L} w_k}. \qquad (7)$$

The quantities $w_k = f(t^{(k)})/q(t^{(k)})$ are called importance weights. The Monte Carlo EM (Wei and Tanner, 1990) is called the stochastic EM for the case $L = 1$ (Bishop, 2006; Nielsen, 2000). The choice of the proposal distribution plays an important role in the efficiency and accuracy of the estimation. We use the following proposal distribution, together with the equation $T_i = \max_{j \in \mathrm{pa}(i)} T_j + Z_i$, to generate mutation occurrence times $t$ for genotype $g$ and sampling time $t_s$:

$$Z_i \sim \begin{cases} \mathrm{TExp}\big(\lambda_i,\ 0,\ t_s - \max_{j \in \mathrm{pa}(i)} t_j\big) & \text{if } i \in g, \\ \mathrm{TExp}\big(\lambda_i,\ \max(0,\ t_s - \max_{j \in \mathrm{pa}(i)} t_j),\ \infty\big) & \text{if } i \notin g, \end{cases}$$

where $\mathrm{TExp}(\lambda, a, b)$ is an exponential with rate $\lambda$ truncated to the interval $[a, b]$. Since we use the ancestral sampling method (Bishop, 2006) to sample from the proposal distribution, the maximum of the parent occurrence times, i.e., $\max_{j \in \mathrm{pa}(i)} t_j$, is known before we sample the occurrence time of mutation $i$ using the distribution of $Z_i$. This proposal distribution is a good choice for this problem because all the generated samples are consistent with the observation of interest, $g$ and $t_s$. In addition, due to the memorylessness of exponentials, the proposal distribution of a mutation $j \notin g$ is the same as the true conditional distribution, $Z_j \mid g, t_s$. In the M-step, the new estimate of the rate of mutation $i$ is computed as

$$\hat{\lambda}_{i, \mathrm{new}} = N \Big/ \sum_{k=1}^{N} e_i\big(g^{(k)}, t_s^{(k)}\big).$$

It has been shown that averaging the Markov chain improves the estimation (Nielsen, 2000). Hence, we average over the last $f \times \mathrm{maxIter}$ iterations of the Markov chain, where $f \in (0, 1]$. The MC-EM algorithm is given in Algorithm 1.
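The E-step just described can be sketched in a few lines of Python (a minimal illustration under the proposal above, not the MC-CBN implementation; the per-factor importance weights follow from the ratio of the exponential density to its truncated version, and all names are hypothetical):

```python
import math
import random

def truncexp(lam, a, b):
    """Inverse-CDF sample from Exp(lam) truncated to [a, b] (b may be inf)."""
    Fa = 1.0 - math.exp(-lam * a)
    Fb = 1.0 if b == math.inf else 1.0 - math.exp(-lam * b)
    u = Fa + random.random() * (Fb - Fa)
    return -math.log(1.0 - u) / lam

def e_step_one(g, t_s, lam, parents, topo, L=100):
    """Self-normalized importance-sampling estimate of e_i(g, t_s), Eq. (7)."""
    sums = {i: 0.0 for i in topo}
    wsum = 0.0
    for _ in range(L):
        T, w = {}, 1.0
        for i in topo:
            start = max((T[j] for j in parents[i]), default=0.0)
            if i in g:
                b = t_s - start                   # Z_i must land in [0, b]
                z = truncexp(lam[i], 0.0, b)
                w *= 1.0 - math.exp(-lam[i] * b)  # f/q for the truncation
            else:
                a = max(0.0, t_s - start)         # Z_i must exceed a, so T_i > t_s
                z = truncexp(lam[i], a, math.inf)
                w *= math.exp(-lam[i] * a)        # f/q = Pr(Z_i > a)
            T[i] = start + z
        wsum += w
        for i in topo:
            pmax = max((T[j] for j in parents[i]), default=0.0)
            sums[i] += w * (T[i] - pmax)
    return {i: sums[i] / wsum for i in sums}

parents = {1: [], 2: [], 3: [1, 2], 4: [2]}        # poset of Figure 1(a)
lam = {1: 1.0, 2: 1.0, 3: 2.0, 4: 0.5}
print(e_step_one({1, 2}, 1.0, lam, parents, topo=[1, 2, 3, 4], L=1000))
```

Every generated sample is consistent with the observation $(g, t_s)$, so no sample is wasted, which is the main appeal of this proposal.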
Network learning

Now we explain how we perform the network learning. Under the strong assumption that all observed genotypes are perfect realizations of the CBN model, it has been shown that the largest poset compatible with all the observations is the MLE poset (Beerenwinkel and Sullivant, 2009). However, due to the imperfectness of the model and the fact that observations are subject to noise, the ML poset is often very sparse for most real-world applications. In (Beerenwinkel and Sullivant, 2009; Montazeri et al., 2015), a mixture model approach was employed to address the problem of noisy observations. We follow a similar approach here and modify the method such that it will be tractable for large posets as well. In this method, the CBN model is extended to allow some degree of violation of the poset relations by observed genotypes. In the extended model, $P_c$ represents the maximal poset in which each relation is violated by at most a fraction $c$ of the genotypes. In fact, the extended model is a mixture model with two components. The main component, which is the CBN model, is responsible for generating genotypes compatible with the poset $P_c$. The probability of a genotype being compatible with $P_c$ under the CBN component is $P(g \mid t_s, P_c, \lambda)$. The second component is a noise component, defined as a generative model for the genotypes that are incompatible with the poset $P_c$. The probability of a genotype in the noise component is denoted by $q_c$. In this mixture model, the probability that an observation belongs to the CBN component is denoted by $\alpha$, the mixing proportion. The maximum likelihood estimate of $\alpha$ is the fraction of genotypes compatible with the poset $P_c$. It is noteworthy that $c$ itself is subject to optimization.

In this paper, we define the noise component in two different ways. The first approach is a uniform noise model, in which the probability of observing a genotype incompatible with the poset follows the uniform distribution $q_c = 1/(2^{|P_c|} - |J(P_c)|)$ (Beerenwinkel and Sullivant, 2009; Montazeri et al., 2015). In order to compute $q_c$, we need to compute the number of genotypes compatible with the poset, $|J(P_c)|$. An algorithm that simply calculates the number of compatible genotypes by enumeration is not feasible for large posets. Hence, we use the following efficient divide-and-conquer recursion to calculate $|J(P_c)|$ (Davey and Priestley, 2002):

$$|J(P_c)| = |J(P_c \setminus e^{\uparrow})| + |J(P_c \setminus e^{\downarrow})|,$$

where $e^{\downarrow} = \{y \in P_c \mid y \preceq e\}$ and $e^{\uparrow} = \{y \in P_c \mid y \succeq e\}$. The recursion holds for every $e \in P_c$ and is reasonably fast for posets with up to 50 mutations as well as for large sparse posets (a short implementation sketch is given at the end of this section). For dense large posets (more than 50 mutations and more than 10 edges), we have $q_c \approx 1/2^{|P|}$ and it is not necessary to compute $|J(P_c)|$. However, a limitation of the uniform noise model is that the contribution of the noise component to the likelihood is much smaller than the contribution of the CBN component, particularly for larger posets. Consequently, this approach will result in very sparse posets when dealing with large numbers of mutations. To address this limitation, we assume in a second, alternative approach that the noise component is the independence model. The independence model is the CBN model with the empty poset (i.e., no edges), and it can explain all genotypes incompatible with $P_c$. Its mutation rates are estimated from the incompatible genotypes. A similar noise model has been used in (Beerenwinkel et al., 2005), with an additional latent variable specifying the CBN or noise component that each genotype belongs to. To avoid an additional hidden layer that would require, for example, a nested EM algorithm, here we employ an approximate solution and estimate the mutation rates directly from all incompatible genotypes.
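Returning to the uniform noise model, the counting recursion above can be implemented with memoization; below is a short Python sketch (my reading of the recursion, assuming the input relation is the transitively closed order; names are hypothetical):

```python
from functools import lru_cache

def count_compatible(relations, elements):
    """|J(P)| via the recursion |J(P)| = |J(P \\ up(e))| + |J(P \\ down(e))|."""
    rel = frozenset(relations)   # pairs (x, y) meaning x < y, transitively closed

    @lru_cache(maxsize=None)
    def count(elems):
        if not elems:
            return 1             # only the empty genotype remains
        e = next(iter(elems))
        up = frozenset(y for y in elems if y == e or (e, y) in rel)
        down = frozenset(y for y in elems if y == e or (y, e) in rel)
        return count(elems - up) + count(elems - down)

    return count(frozenset(elements))

# Poset of Figure 1(a): 1 < 3, 2 < 3, 2 < 4 (already transitively closed).
print(count_compatible({(1, 3), (2, 3), (2, 4)}, {1, 2, 3, 4}))  # prints 8
```

The first term counts genotypes that do not contain $e$ (so nothing above $e$ can occur), and the second counts genotypes containing all of $e^{\downarrow}$; for the example poset the result matches the eight genotypes of $J(P)$ listed in Section 2.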
This approach has superior performance in comparison to the uniform noise model for larger posets, while still being computationally efficient. We use this approach as the main noise model in this paper.

Algorithm 1. Parameter estimation of the CBN model using the Monte Carlo EM algorithm based on importance sampling.
INPUT: $N$ genotype-sampling time pairs $(g^{(k)}, t_s^{(k)})$; $\lambda_0$: initial mutation rates; $P$: an input poset; maxIter: maximum number of iterations of the EM; $L$: sample size of the importance sampling integration.
OUTPUT: $\hat{\lambda}_{\mathrm{final}}$: mutation rate estimates.
for iter = 1, ..., maxIter do
  E-step: for all $(g^{(k)}, t_s^{(k)})$ in $D$ do
    generate $L$ samples from the proposal distribution $q(t)$;
    compute $e_i(g^{(k)}, t_s^{(k)})$ for $i = 1, \ldots, p$ using Equation (7) from the $L$ samples generated in the previous step;
  end for
  M-step: $\hat{\lambda}_{i,\mathrm{new}} = N / \sum_{k=1}^{N} e_i(g^{(k)}, t_s^{(k)})$;
end for
$\hat{\lambda}_{\mathrm{final}}$ is the average of the estimated rates over the last $f \times \mathrm{maxIter}$ iterations.

Results and discussion

In this section, we assess the performance of the MC-EM algorithm, the MLE method explained in Section 2.1, and the discrete time CBN (D-CBN) in different simulation experiments. In addition, we analyze the application of the CT-CBN model on two HIV drug resistance datasets extracted from the SHCS.

Simulation study

First, we analyzed the performance of the MC-EM algorithm in rate estimation for different simulation experiments. We investigated different posets with 2, 4, 8, 16, ..., 1024 mutations. For each poset size, we drew 100 random posets from the space of CBNs. To generate a new CBN sample, we first drew a random directed acyclic graph (DAG) by generating a random upper triangular matrix, representing the edges of the DAG. By computing the transitive reduction of the generated DAG, we obtained a new CBN sample. The sampling time distribution was assumed to be exponential with rate $\lambda_s$, $T_s \sim \mathrm{Exp}(\lambda_s)$. However, the EM method has been shown to work well with other sampling time distributions (Montazeri et al., 2015). Mutation rates were drawn uniformly between $\lambda_s$ and $5\lambda_s$.

We drew $N$ observations, pairs of genotypes and sampling times, for each parameter setting. Since we need more observations for a larger poset, we chose $N$ equal to $\max(50p, 1000)$, where $p$ is the number of mutations in the poset. In the first simulation experiment, we compared the MC-EM, the D-CBN (Beerenwinkel et al., 2007), and the MLE method in their parameter estimation for a given poset. It has been shown that the D-CBN method is a first-order approximation of the CT-CBN model (Beerenwinkel and Sullivant, 2009). In this experiment, we used the following parameters for the MC-EM method: $L = 5$, maxIter = 100, and $f = 0.2$. However, the choice of parameters does not have a huge effect on the performance of the MC-EM method. We have shown that the MC-EM method converges for a broad range of parameters using the same simulated genotypes (Supplementary Figs S2-S4).
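The random poset generation described above can be sketched as follows (a simple illustration of the "random upper triangular matrix, then transitive reduction" recipe, not the authors' exact simulation code; the edge probability is an assumed free parameter):

```python
import numpy as np

def random_poset(p, edge_prob=0.2, seed=None):
    """Random poset: random upper-triangular DAG -> transitive closure
    (Warshall) -> transitive reduction (keep cover relations only)."""
    rng = np.random.default_rng(seed)
    adj = np.triu(rng.random((p, p)) < edge_prob, k=1)   # random DAG edges
    closure = adj.copy()
    for k in range(p):                                    # Warshall closure
        closure |= np.outer(closure[:, k], closure[k, :])
    # (i, j) is a cover relation iff there is no m with i < m < j.
    two_step = (closure.astype(int) @ closure.astype(int)) > 0
    return closure & ~two_step

print(np.argwhere(random_poset(6, seed=0)))   # cover relations (i, j), i.e. i < j
```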
Due to the fact that the transition rate matrix grows exponentially with the number of mutations, in general the MLE method is only feasible for posets with up to 32 mutations. It was not possible to run the MLE method for 22 out of 100 posets of size 32 (due to memory constraints). The comparison of the log-likelihoods of the MLE, the MC-EM, and the D-CBN is depicted in Figure 2(a), along with the log-likelihoods of the true rates. The MC-EM method performs as accurately as the MLE method for small posets. Furthermore, the log-likelihood of the MC-EM is almost equal to the log-likelihood of the true rates for all poset sizes. The similarity of the log-likelihood of the true rates to those from the MC-EM and the MLE method indicates that the estimation process did not get stuck in a low-quality local mode of the likelihood function. The D-CBN method has the worst performance for all poset sizes. Figure 2(b) shows the relative absolute errors between the estimated rates and the true rates for the different parameter estimation methods. For each method, we compute the absolute error, $|\hat{\lambda}_e - \lambda_e|$, for each event $e$ of a given poset. The relative absolute error over all mutations is summarized as $\mathrm{median}(|\hat{\lambda} - \lambda|)/\mathrm{median}(\lambda)$. The performance of the MLE and the MC-EM are very similar for small posets. In addition, the MC-EM is better than the D-CBN method for all poset sizes. Figures 2(a) and (b) indicate that the MC-EM is an accurate method for parameter estimation of small and large posets. A similar result to that shown in Figure 2(b) is obtained if the relative absolute error is defined as $\mathrm{mean}(|\hat{\lambda} - \lambda|)/\mathrm{mean}(\lambda)$ (Supplementary Fig. S1).

Supplementary Figure S5 shows the comparison of the running times of the MC-EM and the MLE method. As mentioned above, the MLE method is not feasible for large posets (due to memory and running time constraints); hence, its running times are only shown for small posets. The MC-EM running time on a single computer took between a few seconds and an hour for posets from 2 to 1024 mutations. The complexity of the MC-EM is $O(NLp)$ for each iteration of the EM algorithm. In the next experiment, since the MC-EM has stochastic behavior in the parameter estimation, we assessed the sensitivity of the MC-EM to the parameter $L$ and compared it with the EM method of (Montazeri et al., 2015) for the example poset shown in Supplementary Figure S6(a). The results are illustrated for Mutations 2 and 6, as examples of early and late events, in Supplementary Figure S6(b) and (c), respectively. The MC-EM estimates converge to the EM estimates, shown by the dashed lines, for larger $L$.
In the next experiments, we assessed the performance of the network learning method described in Section 2.3 for posets from 2 to 1024 mutations. For each candidate poset $P_c$ (see Section 2.3 for the definition), we used the MC-EM for computing the approximate MLE of the rate parameters. In addition to the specifications that were used earlier for drawing genotypes for each poset, we perturbed the genotypes by adding observational errors with a per-locus error rate of $\epsilon$ to make the generated genotypes more similar to real-world applications. The error rates were selected as 0, 0.001, 0.005, 0.01, and 0.05, in agreement with the values reported for sequencing error rates in (Hoff, 2009). First, we performed a sensitivity analysis to see the impact of the parameter $L$ on the network learning performance. We observed that the poset learning is not sensitive to this parameter. The estimated posets for $L = 5$ and $L = 100$ were the same for 2314 out of 2500 considered posets, and only in 66 cases did the posets differ by more than two edges. Hence, we chose the parameter $L$ equal to 5 in the subsequent experiments.

Figure 3 shows the performance of the network learning method in terms of true positive rate (TPR) and true negative rate (TNR) for different numbers of mutations $p$ and observational error rates $\epsilon$. The TPR is defined as the proportion of correctly recovered edges of the estimated poset to the total number of edges of the true poset. The TNR is defined as 1 minus the false positive rate (FPR), where the FPR is the number of falsely estimated edges relative to the total number of absent relations in the true poset. Figure 3(a) and (b) show the TPRs and TNRs computed based on the transitive closures of the estimated posets and true posets, respectively. Supplementary Figure S7 illustrates the same quantities for the transitive reductions of these posets. The network learning method performs well for small error rates for all poset sizes. The TPR values of the estimated posets decrease for larger $\epsilon$ and $p$. The algorithm tends to have better performance in terms of TNR, particularly for $p \geq 256$, at the expense of decreasing performance in TPR. The increase of the TNR values for $p \geq 256$ is due to the fact that larger posets are much sparser in comparison to smaller posets. Therefore, it is easier for the network learning method to estimate a large fraction of the true negatives (absent edges) correctly. As shown in Supplementary Figure S8, the network learning running time takes from a minute for small posets to at most an hour for larger posets. The presented methods are available as an R package at https://github.com/cbg-ethz/MC-CBN.
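For completeness, the TPR/TNR bookkeeping can be written as a small function (a sketch; the convention that the FPR denominator counts all ordered pairs of distinct mutations not related in the true poset is an assumption):

```python
def tpr_tnr(true_edges, est_edges, p):
    """TPR and TNR over edge sets of transitively closed posets on p
    mutations; edges are pairs (i, j) meaning i < j."""
    all_pairs = p * (p - 1)                   # ordered pairs of distinct events
    tp = len(true_edges & est_edges)
    fp = len(est_edges - true_edges)
    negatives = all_pairs - len(true_edges)   # relations absent from the truth
    tpr = tp / len(true_edges) if true_edges else 1.0
    tnr = 1.0 - (fp / negatives if negatives else 0.0)
    return tpr, tnr

print(tpr_tnr({(1, 3), (2, 3), (2, 4)}, {(1, 3), (2, 4), (1, 4)}, p=4))
```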
HIV drug resistance data

In this section, we analyze two HIV drug resistance datasets from the Swiss HIV Cohort Study database (SHCS). In particular, we study the accumulation of mutations in the reverse transcriptase (RT) gene of the HIV genome under the drug pressure of the combination therapy zidovudine plus lamivudine, 3TC + AZT, as well as under no treatment, with 264 and 615 observations, respectively. The mutation 41L, for example, denotes that the amino acid Leucine (L) is observed at position 41 of the reverse transcriptase gene of the HIV genome. We required that the genotype was measured at least 90 days after the onset of treatment and no more than 30 days after the treatment end. For all genotypes, the time difference between the treatment start and genotyping is available. The sampling time in HIV can be approximated by the time difference from the start of the treatment (as a proxy for the start of viral progression) to the genotyping time point. The average treatment time is around 700 days for all datasets. We are interested in modeling the accumulation of mutations in the reverse transcriptase for the different datasets. For each dataset, we considered all the RT mutations that occur more than 10 times. This helps to get rid of spurious edges in the estimated posets. In total, we obtained 107 and 155 mutations for the 3TC + AZT and no-treatment datasets, respectively, and CBN models were estimated using these datasets. According to the simulation studies in the previous section, we estimate that the expected TPR of the estimated posets lies roughly between 62% and 87%, while the expected TNR is between 92% and 94%. We obtained these estimates based on the expected TPR and TNR for $p = 128$, which is the poset size in the considered simulations closest to the numbers of mutations in both datasets (Fig. 3).

In this analysis, we mainly focused on the thymidine-analog mutations (TAMs) that arise under the selective pressure of zidovudine. In particular, we are interested in two well-known pathways, TAM1 (41L, 215Y, and 210W mutations) and TAM2 (67N, 70R, and 219Q mutations) (Yahi et al., 1999). Figure 4(a) shows the learned network for the 3TC + AZT dataset. The estimated CBN model for 3TC + AZT successfully recovered both the TAM1 and TAM2 clusters (Fig. 4(b)). The learned network for the 3TC + AZT dataset can explain 69% of the observations. Similarly, for the no-treatment dataset, the learned poset and the subset of the poset for the TAM mutations are shown in Supplementary Figure S9. The corresponding poset, as well as the inferred temporal relations between TAM mutations, are much sparser for the no-treatment dataset than for 3TC + AZT, which indicates that temporal dependencies among mutations are more likely to exist under selective drug pressure. Quantitatively, the density of the poset for 3TC + AZT is 0.034, as opposed to 0.008 for the no-treatment poset. The density of a poset is defined as the ratio of the number of edges of the poset to the number of possible edges. In addition, analysis of the estimated average waiting times for both datasets, obtained from the learned CBN models, reveals that mutations tend to happen much faster under the selective drug pressure of 3TC + AZT in comparison to the no-treatment case (Supplementary Table S1).
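As a rough back-of-the-envelope check (assuming "possible edges" means unordered pairs of distinct mutations), the reported density for the 3TC + AZT poset corresponds to

$$\binom{107}{2} = \frac{107 \times 106}{2} = 5671 \ \text{possible edges}, \qquad 0.034 \times 5671 \approx 193 \ \text{edges},$$

i.e., on the order of two hundred inferred relations among the 107 mutations.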
Conclusion

CT-CBN models have been used for modeling the waiting time process of the accumulation of mutations under temporal ordering constraints. In these models, the waiting time process for a mutation only begins after the occurrence of its predecessor mutations. The waiting time of a mutation is assumed to be exponentially distributed in the CT-CBN model. In addition, the temporal ordering constraints of the CT-CBN model are encoded by a partially ordered set. Inference of CT-CBN models consists of parameter estimation and network learning. For the parameter estimation, the EM algorithm (Beerenwinkel and Sullivant, 2009; Montazeri et al., 2015) and the MCMC method (Sakoparnig and Beerenwinkel, 2012) have been used. Both approaches are limited to at most 20 mutations.

In this paper, we introduced a Monte Carlo EM algorithm with importance sampling for the parameter estimation of CT-CBN models. We demonstrated that this efficient method can be used for accurate parameter estimation of large networks. For the network learning, we modified a mixture-model approach that has been previously used for CBN models (Beerenwinkel and Sullivant, 2009; Beerenwinkel et al., 2007; Montazeri et al., 2015) and made it computationally feasible for large posets. In future work, we aim to develop more sophisticated network learning algorithms that can handle high sequencing error rates, particularly when dealing with large numbers of mutations. A possible approach is to use search algorithms. However, the search space for a large number of mutations is huge, and algorithms such as simulated annealing or MCMC have little chance of finding the optimal network. One possibility is to use the PC algorithm (Spirtes et al., 2000) to first reduce the search space significantly, so that search algorithms can then be employed.

In summary, in this paper we have shown that the MC-EM with importance sampling is an accurate and efficient parameter estimation method for CBN models, and in future work we aim to use this inference algorithm for more complex extensions of the CBN models, such as taking into account patient-specific covariates in the model.

Fig. 1. The poset P, consisting of four elements subject to the relations $1 < 3$, $2 < 3$, and $2 < 4$, is shown in (a). The corresponding genotype lattice J(P), consisting of eight genotypes compatible with the poset P, is shown in (b). Directed transition rates among neighboring genotypes are shown on the edges of the lattice.

Fig. 2. The performance of different parameter estimation methods is compared for different poset sizes. Likelihood values are depicted in (a), and the relative absolute rate errors, defined as $\mathrm{median}(|\hat{\lambda} - \lambda|)/\mathrm{median}(\lambda)$, are compared in (b). The MC-EM and MLE estimates are very similar and close to the true values, and clearly outperform the D-CBN method.

Fig. 3. Performance of the network learning method as the poset size is increased, for various observation error rates. True positive rates of the transitive closure of the estimated posets against the true posets are shown in (a), and true negative rates are shown in (b).

Fig. 4. The ML poset learned from the AZT + 3TC dataset (a) and the corresponding subset of the poset for the TAM mutations (b). The network learning method successfully recovered the well-known TAM1 and TAM2 pathways.
2018-04-03T01:52:09.692Z
2016-09-01T00:00:00.000
{ "year": 2016, "sha1": "6fcc4120aa1899f03752a2b2686a7e335581dc33", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/bioinformatics/article-pdf/32/17/i727/24151366/btw459.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "6fcc4120aa1899f03752a2b2686a7e335581dc33", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
245200606
pes2o/s2orc
v3-fos-license
It Is High Time Physicians Thought of Natural Products for Alleviating NAFLD. Is There Sufficient Evidence to Use Them?

Non-alcoholic fatty liver disease (NAFLD) is the most common form of liver disease all over the world due to the obesity pandemic; currently, therapeutic options for NAFLD are scarce, except for diet recommendations and physical activity. NAFLD is characterized by excessive accumulation of fat deposits (>5%) in the liver with subsequent inflammation and fibrosis. Studies in the literature show that insulin resistance (IR) may be considered the key mechanism in the onset and progression of NAFLD. Recently, using natural products as an alternative approach in the treatment of NAFLD has drawn growing attention among physicians. In this review, the authors present the most recent randomized controlled trials (RCTs) and lines of evidence from animal models about the efficacy of nutraceuticals in alleviating NAFLD. Among the most studied substances in the literature, the following molecules were chosen because of their presence in both clinical and preclinical studies: spirulina, oleuropein, garlic, berberine, resveratrol, curcumin, ginseng, glycyrrhizin, coffee, cocoa powder, epigallocatechin-3-gallate, and bromelain.

Introduction

Non-alcoholic fatty liver disease (NAFLD) is a complicated disease impacted by the complex interplay of genetic, epigenetic, and environmental factors [1,2]. In addition, several lifestyle factors, such as a sedentary lifestyle, a westernized diet, and smoking, enhance NAFLD risk [3]. Unfortunately, the mechanisms inducing/worsening NAFLD/non-alcoholic steatohepatitis (NASH) are to date far from being completely clarified [4]. Nevertheless, there are many lines of research that should be reckoned as highly plausible. The excessive lipid storage in the hepatocytes of NAFLD patients is represented by triglycerides (TG). The augmented influx of fatty acids (FFAs) derived from the diet, associated with de novo lipogenesis (DNL), and FFAs liberated from the adipose tissue contribute to accumulating TG in the liver, although not to a similar extent. FFAs stored in the liver and secreted via lipoproteins in NAFLD patients originate approximately 60% from adipose tissue, 25% from DNL, and 15% from the diet [5]. Accumulation of fat in the liver is associated with impaired insulin suppression of glucose production and serum FFAs [6]. FFAs are liberated by subcutaneous and visceral adipose tissue under the action of cytokines, such as tumor necrosis factor-alpha (TNF-a), interleukin-6 (IL-6), and interleukin-1b (IL-1b) [7], as well as leptin [8], while adiponectin (APN) plays a protective role in these molecular signals, in the sense that it decreases elevated FFAs by oxidizing them in muscle [9,10] (Figure 1).

Figure 1. Principal pathophysiologic mechanisms in NAFLD. Insulin resistance (IR) is a multiorgan phenomenon. Additionally, adipose tissue and liver secrete proinflammatory cytokines. An unhealthy diet, obesity, insulin resistance, dysbiosis, and external factors such as drugs contribute to NAFLD progression.
Elevated concentrations of FFAs cause peripheral and hepatic insulin resistance (IR) by inhibiting insulin-stimulated peripheral glucose uptake. Two mechanisms are responsible: (a) a fat-related inhibition of glucose transport or phosphorylation and (b) a decrease in muscle glycogen synthase activity. Interestingly, FFAs stimulate insulin secretion [11] (Figure 1). Continuing to prove the theory that extrahepatic tissue contributes to liver disease, we should mention the key role of the small intestine. The gut microbiota as a mechanism inducing NAFLD has been receiving utmost interest from researchers. Increased intestinal permeability is related to obesity and NAFLD, and researchers are still debating whether this alteration represents an origin or an effect of the disease [12]. Obesity and other metabolic dysfunctions associated with obesity are identified by peculiar transformations in the assembly/constitution and, consequently, the function of the human gut microbiota. These impairments are linked to decreased microbiome diversity (relative abundance of Firmicutes at the cost of Bacteroidetes) [13], which can be affected by various components of the diet. Specifically, the fasting-induced adipocyte factor is a serum lipoprotein lipase inhibitor,
and its elimination is central to the deposition of TG in adipocytes, a process likely produced by the microbiota [14]. It should be emphasized that other results contradict previous findings regarding the contribution of various bacterial groups to the progress of obesity, pointing to the production of short-chain fatty acids (SCFAs) [15]. It is noteworthy to stress that FFAs have a wide range of antibacterial activity, comprehending lysis and solubilization of bacterial cell membranes as well as interference with adenosine triphosphate (ATP) production [16]. Furthermore, increased lipopolysaccharide production, also termed "metabolic endotoxemia", may play an important role in obesity and related diseases such as NAFLD, due to being associated with an increased pro-inflammatory and oxidant environment, thus representing a key mediator of the metabolic derangements observed in obesity [17]. Still, secondary bile acids, trimethylamine, and pro-inflammatory factors, i.e., the well-known lipopolysaccharide, may negatively impact hepatic lipid metabolism, mediating the production of SCFAs [18]. Finally, the chemical modification of bile acids plays a further role in modifying lipid metabolism [19]. Bile acids activate the farnesoid X receptor (FXR) in the liver and, through the enterohepatic circulation, repress bile acid synthesis. Obesity and type 2 diabetes mellitus (T2DM) are both associated with decreased FXR activity and impaired metabolism of bile acids, with consequent alteration of hepatic lipid homeostasis and, more importantly, of insulin sensitivity [20].

Changes in Mitochondrial Function

Mitochondrial dysfunction is an important mechanism giving rise to NAFLD and to the more critical end of the spectrum, i.e., NASH. Overload of FFAs or conditions inducing hyperglycemia produces increased reactive oxygen species (ROS) and reduces mitochondrial biogenesis, prompting mitochondrial dysfunction that, in turn, gives rise to both decreased β-oxidation and ATP production, as well as further increased ROS production, in a vicious circle, eventually resulting in IR, which is central to NAFLD. Genetic factors related (mt-CYB, POLG, HSD17B13) or not (PNPLA3, GCKR, TM6SF2, MBOAT7) to mitochondria could impact this phenomenon [21][22][23][24]. Hepatic mitochondrial DNA (mtDNA) in NAFLD patients has been demonstrated to host complex genomes, with a mutation rate and a heteroplasmy grade higher (1.28 times) than normal ones [25]. The mitochondrial genome is particularly prone to various mutagenic stressors because mitochondrial genes are closer to the sources of ROS and are not protected by histones. The mitochondrial respiratory chain is the main subcellular source of ROS, which can damage mitochondrial proteins, lipids, and mtDNA [26]. Studies have shown that the intake of FFAs modifies the mitochondrial membranes and causes the production of ROS and damage to nearby structures, ultimately leading to inflammation, apoptosis, and progression of NAFLD [27]. Moreover, IR is intertwined with a decreased number of mitochondria, abnormal morphology, lower levels of mitochondrial oxidative enzymes, and lower ATP synthesis in human muscle biopsies [28]. These abnormalities, comprehending depletion of mtDNA, reduced activity of respiratory chain complexes, and impaired mitochondrial β-oxidation, are connected to the progression of NAFLD through NASH [29].
Mitochondrial biogenesis is propelled by peroxisome proliferator-activated receptor co-activator (PGC)-1, a transcriptional regulator of uncoupling protein (UCP) that is deeply involved in the insulin/gluconeogenesis signaling pathway and plays an important role in thermogenesis in adipose tissue [30]. A further key factor regulating mitochondrial biogenesis is adenosine monophosphate-activated protein kinase (AMPK) [31]. With aging, fat mass, mainly visceral adiposity, tends to increase steadily, while both daily energy expenditure and physical activity tend to decline, since the regulation of energy production is dependent on ATP needs. This process leads to decreased oxidative capacity in skeletal muscles [32]. As previously emphasized, due to the complexity of NAFLD pathogenesis, drug options for this very common disease are very poor. A different and healthier diet, combined with increased physical activity and supplemented by plant elements and extracts containing natural substances, is considered useful and safe in order to reduce excess liver fat and decrease the risk of progression to more severe liver disease.

Clinical Trials and Studies in Animal Models

Interestingly, several lines of research have ascertained a likely therapeutic effect of natural products on NAFLD. Many promising drug candidates of natural origin are present in the current development pipeline. We chose to select ongoing studies concerning natural products, performed both in animal models of NAFLD and in patients suffering from NAFLD, with the aim of showing the utility of these compounds.

Alga Spirulina Maxima

Spirulina maxima is a cyanobacterium characterized by a high content of proteins, comprehending essential amino acids, and by other factors, including the vitamin B complex associated with various minerals, as well as carotenoids, gamma-linolenic acid, and omega-3 and omega-6 fatty acids [33]. A pilot study, designed to determine the effects of Spirulina on 55 Cretan patients with NAFLD orally supplemented with 6 g of this dietary supplement per day, showed at the end of the 6-month intervention period that the mean levels of AST, ALT, gamma-glutamyl-transpeptidase (gamma-GT), triglycerides (TG), LDL-C, total cholesterol (TC), and the ratio of TC to HDL-C were significantly decreased. More interestingly, a significant reduction in weight and in the HOMA-IR index was found. Unfortunately, no changes in sonographic features were observed [34]. Three Hispanic Mexican patients were treated with 4.5 g/day of Spirulina maxima for 12 weeks; interestingly, these patients showed a decrease in TG, TC, LDL-C, and the TC/HDL ratio. Two of them showed a reduction in parenchymal heterogeneity on ultrasonography, while the third patient showed a complete resolution of the "brilliant liver" on ultrasonography, compared with before treatment [35] (Tables 1 and 2).
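For reference, the HOMA-IR index cited throughout these trials is the standard homeostasis model assessment of insulin resistance (this is the textbook definition, not a formula stated by the individual studies):

$$\mathrm{HOMA\text{-}IR} = \frac{\text{fasting insulin } (\mu\mathrm{U/mL}) \times \text{fasting glucose } (\mathrm{mmol/L})}{22.5}.$$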
Olive Oil

Olive oil has been reckoned to have a protective effect on the cardiovascular (CV) system, impacting obesity, type 2 diabetes mellitus (T2DM), and related metabolic disorders [36]. A double-blinded RCT was conducted on 66 NAFLD patients, randomized into two groups, which were administered 20 g/day of either olive oil or sunflower oil for 12 weeks. A hypocaloric diet (nearly 500 kcal/d) was recommended to all participants. The following parameters were examined before and after the intervention: fatty liver severity, liver enzymes, anthropometric parameters, blood pressure, serum lipid profile, glucose, insulin, malondialdehyde, total antioxidant capacity, and IL-6. Olive oil only decreased serum AST. Serum TG and fat mass significantly decreased after the ingestion of olive oil. Changes in the fatty liver damage grade, as well as in skeletal muscle mass, were greatest in subjects in the olive oil group, although the trial reported no changes in body fat percentage [37]. Indeed, the beneficial effects of the Mediterranean diet on human health have been mainly attributed to its high content of extra virgin olive oil [38]. Santini et al. demonstrated that oleuropein (Ole) is able to improve the pro-inflammatory and antioxidant defense status in a murine model of NAFLD [39]. Moreover, oral administration of Ole in C57BL/6J mice fed an unhealthy diet induced activation of autophagy, characterized by AMPK-dependent phosphorylation of ULK1 at Ser555, regardless of sex [40] (Tables 1 and 2).

Garlic

In a recent RCT, 90 NAFLD patients were assigned to take either a garlic powder supplement (1600 mg) or a placebo for 12 weeks. At the end of the study, features of hepatic steatosis were significantly reduced in the treatment group compared with the control group. Specifically, ALT, AST, and gamma-GT, but not ALP levels, significantly decreased, as did TC, TG, and LDL-C, which also decreased in the treatment group compared with the control group [41]. A parallel study of the same NAFLD population revealed a reduction in HOMA-IR, as well as a significant increase in skeletal muscle mass, serum concentration of superoxide dismutase, and total antioxidant capacity in the treatment group [42]. An insulin-resistant mouse (ddY-H), a mouse model of NAFLD, showed improved glucose tolerance and reduced hepatic TG accumulation when treated with garlic extract. Additionally, the intestinal microbiota pattern improved [43] (Tables 1 and 2).

Berberine

Berberine (BBR) is an alkaloid extracted from plants such as European barberry, goldenseal, and goldthread [44]. A parallel, open-label RCT was implemented, enrolling patients from three investigation centers. In total, 184 patients suffering from NAFLD were studied and randomly received (1) lifestyle intervention (LSI), (2) LSI plus pioglitazone (PGZ) 15 mg qd, or (3) LSI plus BBR 0.5 g, respectively, for three and a half months. Interestingly, the authors also offered evidence of hepatic BBR content and examined the expression of genes related to glucose and lipid metabolism in an animal model of NAFLD to which BBR was subsequently administered. Compared with LSI alone, the combination of BBR plus LSI resulted in a significantly greater reduction in hepatic fat content (52.7% vs. 36.4%). This effect was accompanied by a considerable reduction in body weight and an improvement in the homeostasis model assessment of insulin resistance (HOMA-IR) and serum lipid profiles. BBR alone was more effective than PGZ 15 mg qd in reducing body weight and improving the lipid profile. It is necessary to highlight that adverse events likely associated with BBR administration were mild and affected mainly the digestive system [45]. In another experiment, 35 Sprague Dawley rats were randomly split into a NAFLD group and a control group fed a normal diet for two months.
The rats treated with BBR presented reduced liver wet weight, ameliorated liver steatosis, and a significant decrease in liver TG, ALT, AST, and TC. Notably, LDL levels also significantly diminished. This effect was coupled with significant upregulation of microsomal triglyceride transfer protein (MTTP), whose levels increased. None of these findings was present in the saline-treated NAFLD rats. Interestingly, BBR can cause adverse effects, including unexpected and undesirable interactions with prescription drugs, due to interference with the CYP2D6 and CYP3A4 enzymes, which are implicated in the biotransformation of endogenous compounds and xenobiotics [46,47] (Tables 1 and 2).

Resveratrol

Resveratrol is a polyphenolic compound naturally found in peanuts, grapes, red wine, and some berries. In a double-blind, placebo-controlled RCT, 60 subjects with NAFLD were given two placebo capsules (placebo group) or 300 mg resveratrol capsules (resveratrol group) twice daily for three months. Compared with the placebo group, resveratrol significantly decreased GPT, glucose, LDL-C, TC, and HOMA-IR. In the resveratrol group, significant reductions in the levels of TNF-a, cytokeratin 18 fragments, and FGF-21, and an elevation of the APN level, were observed [48]. A crossover, randomized, double-blind study was conducted, including 44 young adults divided into a group taking 250 mL of bayberry juice twice daily for 4 weeks and a placebo control group. The first group showed decreased plasma levels of TNF-a and IL-8, indicating inhibition of the inflammatory and apoptotic responses involved in NAFLD. Additionally, an increased plasma antioxidant status and HDL-C level were detected [49] (Tables 1 and 2).

Curcumin

Curcumin (Cur) is obtained from the species Curcuma longa (turmeric), a member of the ginger family, Zingiberaceae. NAFLD patients with different grades of disease were enrolled in an RCT, and 1 g/day of Cur was administered for 8 weeks. Supplementation with Cur was associated with a significant reduction in body mass index (BMI) and waist circumference in the curcumin and placebo groups. Ultrasound analysis displayed a significant improvement in 75.0% of patients treated with Cur, compared with 4.7% of the control group. Serum levels of ALT and AST significantly decreased only in the Cur group. The authors found that Cur administration significantly reduced TG, LDL-C, fasting blood glucose (FBG), HOMA-IR, body weight, and AST levels. However, the observed decreases in TC, HbA1c, ALT, and insulin levels with Cur were not significant [50]. Very recently, the authors of another study performed a preclinical experiment on mice fed, for 10 weeks, a high-fat diet (HFD) or a normal diet, supplemented or not with 0.2% Cur. The administration of Cur improved body fat, liver steatosis, insulin resistance, and serum LPS levels. Interestingly enough, Cur-related effects were also appreciated on the gut microbiota composition; in fact, the Firmicutes/Bacteroidetes ratio and endotoxin-producing Desulfovibrio bacteria were decreased, whereas the Akkermansia population and SCFA-producing bacteria were increased. These bacterial genera altered by Cur had already been reported to correlate with metabolic parameters in HFD-fed mice [51] (Tables 1 and 2).

Ginseng

Many types of this herb are known, but the most renowned ones are American ginseng (Panax quinquefolium) and Asian ginseng (Panax ginseng).
In total, 80 patients with NAFLD were prospectively randomized to receive a three-week course of Korean red ginseng (KRG) or placebo. In overweight patients with NAFLD, KRG was effective in restoring liver functional parameters and in decreasing fat-related cytokines and molecules with antioxidant activity, whereas APN levels increased [52] (Tables 1 and 2).

Glycyrrhizin

Glycyrrhizin (GL) is the main bioactive element of licorice root. In a double-blind RCT, 66 NAFLD patients were enrolled and separated into two groups: (i) a treated group, which received 2 g of aqueous licorice root extract per day for 2 months, and (ii) a placebo control group. The authors found that GL administration significantly reduced ALT and AST serum levels, whereas BMI did not change significantly in either group [53]. The most important GL-related side effects were hypertension and hypokalemia-induced secondary disorders [54]. Additionally, the authors of another study conducted a preclinical experiment on 32 male Wistar rats randomly divided into (1) a control group fed a normal diet; (2) a high-cholesterol diet (HCD) group; (3) a normal diet plus GL 20 mg/kg group; and (4) a normal diet plus GL 100 mg/kg group, for 12 weeks. Interestingly, GL treatment at both doses, and especially at 100 mg/kg, significantly decreased the expression of the uncoupling protein 2 (UCP2) gene, which is involved in the decrease in ROS production by mitochondria [55] (Tables 1 and 2).

Coffee

In a prospective, cross-sectional study, 1998 NAFLD patients were studied. Coffee drinking was categorized as no (0), moderate (1-2), or frequent (≥3) consumption (in cups/day). Frequent coffee consumption (≥3 cups per day) was inversely correlated with BMI, waist circumference, T2DM, liver enzymes, HOMA-IR, controlled attenuation parameter (CAP), and liver stiffness, compared with consumption of 1-2 cups of coffee per day. In contrast, female gender correlated positively with HDL-C [56]. Coffee intake reduced hepatic fibrosis in NASH patients: a validated questionnaire was used to assess the relationship between caffeine intake and four groups, namely ultrasound negative (controls), light steatosis/not-NASH, NASH stage 0-1, and NASH stage 2-4 [57]. Furthermore, the authors studied the inverse correlation between coffee intake and the risk of NAFLD in C57BL/6 mice. Mice were treated, for 12 weeks, with a high-fat diet (HFD) or a normal diet, supplemented or not with decaffeinated coffee. Coffee intake reduced liver steatosis, beyond reducing transaminases, and improved the oxidation of FFAs through the upregulation of acyl-CoA oxidase 1 (ACOX1). Interestingly, coffee-related effects were also observed as an improvement of gut barrier function [58] (Tables 1 and 2).

Cocoa Powder

Recent studies revealed that the consumption of cocoa powder, derived from Theobroma cacao, correlates with a reduced risk of CV and metabolic diseases. However, the mechanisms of its hepatoprotective role in NAFLD have been investigated in only a limited number of studies. Dark chocolate consumption is associated with a decrease in lipid peroxidation. A total of 100 subjects with T2DM were enrolled in an RCT and randomly assigned to a cocoa group (n = 50; received 10 g cocoa powder) or a placebo group (n = 50) for 6 weeks. Cocoa consumption showed probable interactions with prostaglandin synthase-2 (PTGS-2/COX-2), and it significantly decreased TG, LDL-C, HDL-C, TNF-α, and IL-6 [59].
The key mechanism underlying the clinical benefits of dark chocolate is represented by its polyphenolic compounds, through their ability to inhibit the activity of nicotinamide adenine dinucleotide phosphate oxidase (NADPH oxidase), a major source of oxidative stress [60,61]. Among possible side effects, chocolate has been implicated in conditions such as acne and gastroesophageal reflux disease; overall, the benefits of moderate cocoa consumption likely outweigh the risks [62]. In total, 19 NASH patients were enrolled in a cross-sectional study and separated into two groups, taking 40 g/day of dark chocolate (>85% cocoa) or 40 g/day of milk chocolate for 2 weeks. The study demonstrated an improvement of oxidative stress, evaluated by NOX2 activity and F2-isoprostanes, and of hepatocyte apoptosis, evaluated by cytokeratin-18 (CK-18) levels [63]. A study by Sun et al. examined the hepatoprotective effects of 80 mg/g cocoa powder supplementation for 10 weeks in HFD-fed obese male mice. Cocoa induced an important antioxidant response and mitochondrial biogenesis, ameliorating hepatic oxidative stress and liver steatosis [64] (Tables 1 and 2).

Green Tea

A double-blinded RCT demonstrated that ingestion of a green tea beverage enriched with catechins, including EGCG, reduced body weight (BW) in 126 obese adult patients. The patients were divided into placebo, low-dose, and high-dose groups; BW decreased significantly in both the low-dose and high-dose groups [66]. Abnormalities of the gut microbiota and their metabolites are increasingly indicated as being at the basis of NAFLD. In fact, gut microbes produce SCFAs, hydrogen peroxides, trimethylamine, and ammonia, and in recent years several metabolites produced by the microbiota have been shown to control lipid, carbohydrate, and energy homeostasis in both extrahepatic and hepatic tissues. Oral administration of EGCG in HFD-fed mice affected the gut microbiota, the serum bile acid profile, and gene expression; EGCG significantly improved liver steatosis and intestinal dysbiosis [67] (Tables 1 and 2).

Bromelain

Bromelain is extracted from the stems of pineapples but is present in all parts of the fresh fruit. As a concentrate of proteolytic enzymes, it may enhance anticoagulant activity [60]. In an up-to-date study, HFD-fed mice were treated or not with bromelain (20 mg/kg) for 12 weeks. Bromelain reduced BW by ~30%, liver weight by ~20%, and adipose tissue by ~40%. The underlying mechanisms seem to involve reduced uptake of FFA by the intestinal wall and better lipoprotein internalization. Moreover, bromelain treatment increased bile acid metabolism, cholesterol clearance, the assembly and secretion of very-low-density lipoprotein (VLDL), and the β-oxidation of FFAs [68]. Bromelain treatment in 24 rats improved the non-surgical treatment of periodontitis, decreasing TNF-α; it was also able to reduce cholesterol, TG, ALT, and AST [69] (Tables 1 and 2).

Criticism

Many studies presented in this review are consistent with positive effects of natural products on the histological features and laboratory data characteristic of NAFLD, but it should be highlighted that animal models of NAFLD do not completely mirror human NAFLD. Furthermore, although very important for discovering basic molecular processes, no single animal model encompasses the whole spectrum of human NAFLD, particularly its more severe and progressive form, i.e., NASH [70].
Moreover, animal models dealing with natural products do not permit an understanding of the complex process of drug-drug interactions, which are very frequent in subjects taking various drugs for co-morbidities such as T2DM, hypertension, or CV diseases, or of the altered drug metabolism capacity of NAFLD patients [71]. Finally, the adverse events (AEs) associated with the multiple uses of natural products should be identified [72]. AEs have different causes, such as impurities, batch-to-batch variability, misidentification and/or mislabeling, and different sources of production materials. Unfortunately, classic reporting systems do not always gather sufficient data on adverse events. Further research is mandatory to build models that more accurately mimic the disease spectrum, to provide an increased understanding of the underlying mechanisms and, consequently, to identify correct future therapeutic approaches [73].

Conclusions

Diet and lifestyle modification are the cornerstones of the therapy of NAFLD, although many drugs are on the verge of being licensed. In this review, the authors presented both RCTs and lines of research on animal models suggestive of a possible therapeutic effect of natural products, even though conclusive evidence will only be reached with larger studies in different populations, mainly evaluating the possible AEs.

Conflicts of Interest: The authors declare no conflict of interest.
The anomalous Cepheid XZ Ceti

XZ Ceti is the only known anomalous Cepheid in the Galactic field. Being the nearest and brightest such variable star, a detailed study of XZ Ceti may shed light on the behaviour of anomalous Cepheids, whose representatives have mostly been detected in external galaxies. CCD photometric and radial velocity observations have been obtained. The actual period and amplitude of pulsation were determined by Fourier analysis. The long-time-scale behaviour of the pulsation period was studied by the method of the O−C diagram, using the archival Harvard photographic plates and published photometric data. XZ Ceti differs from ordinary classical Cepheids in several respects. Its most peculiar feature is the cycle-to-cycle variability of the light curve. The radial velocity phase curve is not stable either. The pulsation period is subject to strong changes on various time scales, including a very short one. The ratio of the amplitudes determined from the photometric and radial velocity observations indicates that this Cepheid performs an overtone pulsation, in accord with the other known anomalous Cepheid in our Galaxy, BL Boo (V19 in the globular cluster NGC 5466). Continued observations are necessary to study the deviations from regularity and to determine their time scale, as well as to confirm the binarity of XZ Ceti and to study its role in the observed peculiar behaviour.

Introduction

Variable stars classified as anomalous Cepheids are neglected objects from an observational point of view. Although the first representatives of such variables have been known for more than half a century, the available scanty data are insufficient for a proper explanation of their behaviour. The distinctive features of these variable stars are a short pulsation period (in the period range of RRab stars) and, strangely enough, a luminosity up to 2 magnitudes higher than that of RR Lyrae type stars of the corresponding period. The higher luminosity relates them to Cepheids, and this is the origin of the term "anomalous" Cepheid coined for such variables by Zinn & Searle (1976). Its first representatives were found in the Sculptor dwarf galaxy (Thackeray 1950) and somewhat later in the Ursa Major (van Agt 1967) and Draco dwarf galaxies (Baade & Swope 1961). From the 1970s onward, dozens of further anomalous Cepheids have been identified in nearby dwarf galaxies.

The importance of anomalous Cepheids is twofold. On the one hand, comparison with classical Cepheids may shed light on the differences in the internal structure and evolutionary phase of the two types of pulsators. On the other hand, anomalous Cepheids follow a distinct period-luminosity relationship (see Pritzl et al. 2002 for its most recent form), and therefore they serve as standard candles in distance determination for extragalactic systems with no recent star formation. The existence of pulsating stars in such an unusual region of the H-R diagram has been explained in two different ways: (i) by mass transfer (and possibly coalescence) in a close binary system, producing a more massive and luminous star than the other pulsators on and above the horizontal branch (Zinn & Searle 1976; Renzini et al. 1977), and (ii) by assuming that anomalous Cepheids are not old stars and that their peculiarity stems from extremely low metallicity (Norris & Zinn 1975; Demarque & Hirshfeld 1975). The first reliable model supporting this latter explanation was calculated by Bono et al. (1997) by confronting evolutionary tracks with pulsational calculations.
This nonlinear convective pulsational model was subsequently updated, and the new calculations resulted in good agreement with the observed behaviour (period, amplitude, colour, luminosity) of the anomalous Cepheids in dwarf spheroidal galaxies. Two recent studies, an observational (Dolphin et al. 2002) and a theoretical one, explicitly declare that anomalous Cepheids occur in the region which is the extension of the normal Cepheid sequence toward higher temperature for extremely low metallicity (Z = 0.0002 to 0.008). Spectroscopic confirmation of this explanation is, however, very difficult, because the extragalactic anomalous Cepheids are too faint for a detailed spectroscopic study.

However, anomalous Cepheids also occur in our own Galaxy, in metal-poor environments. Zinn & Dahn (1976) identified the first such variable: the star V19 in the globular cluster NGC 5466. Curiously, a conventional variable star name, BL Bootis, has been assigned to this star, though variable stars belonging to globular clusters are catalogued separately. Owing to its membership in a globular cluster, the luminosity of BL Boo (and its luminosity difference with respect to the RR Lyrae type variables in the same system) could be reliably determined, and it is this star which advanced to be the prototype of anomalous Cepheids. BLBOO type variable stars are thus identical with the anomalous Cepheids, and this designation was introduced in the fifth volume of the General Catalogue of Variable Stars (Samus 1995), listing the extragalactic variables. The first detailed spectroscopic analysis ever made of an anomalous Cepheid was that of BL Boo (McCarthy & Nemec 1997), a 15th magnitude star.

Various pieces of evidence suggest that a much brighter star in the Galactic field, XZ Ceti, also belongs to the anomalous Cepheids. In this paper the behaviour of XZ Ceti is discussed based on new photometric and radial velocity data. Following an overview of the available information on XZ Ceti (Sect. 2), our observational data are described (Sect. 3). We then discuss the value of the pulsation period and its changes, issues related to the pulsation amplitudes of XZ Ceti, and its possible binarity (Sect. 4). Our results are briefly summarized in Sect. 5.

XZ Ceti

Photometric variability of XZ Ceti (HD 12293) was first detected by Hoffmeister (1933). A detailed analysis of the Sonneberg photographic plates led Meinunger (1965) to conclude that XZ Ceti was an RR Lyrae type variable with a period of 0.451 d. This period, however, turned out to be wrong, and Dean et al. (1977) determined the correct periodicity of 0.8231 d from the first photoelectric photometry of XZ Ceti. If XZ Ceti were really an RR Lyrae type variable, the 0.8231 d pulsation period would imply fundamental mode pulsation and a corresponding RRab type asymmetric light curve. The oscillations of XZ Ceti, however, result in nearly sinusoidal light variations resembling those of RRc type variables. This peculiarity stimulated a further thorough study carried out by Teays & Simon (1985). They obtained a photoelectric light curve in Johnson's B and V bands covering a time interval as short as a week. Based on their precise light curve, Teays & Simon confirmed the value of the pulsation period, and were able to study the shape of the light curve in a quantitative manner, with the help of the Fourier coefficients.
In addition, Teays & Simon (1985) determined the energy distribution from spectrum scans, which enabled them to derive the temperature and approximate surface gravity of XZ Ceti. From a quantitative analysis of the light curve (the resemblance of the period, amplitude, and Fourier coefficients to those of BL Bootis itself), the estimated temperature (6450 K), the surface gravity (log g = 2.0), and their own pulsation models, Teays & Simon (1985) suggested the possibility that XZ Ceti is an anomalous Cepheid. Although XZ Ceti would be a promising target for a detailed study with its apparent brightness of about 9.6 magnitude in V, it has not been observed purposely since then. Two major sky surveys, however, supplied photometric data on XZ Ceti: Hipparcos (ESA 1997) and ASAS (Pojmanski 2002). Fortunately, XZ Ceti was among the targets of RAVE, the most ambitious radial velocity project so far, and its radial velocity has been made available through the 1st data release (Steinmetz et al. 2006): 204.1 ± 1.2 km s⁻¹ at JD 2 452 889.213.

Since XZ Ceti had long been considered an RR Lyrae type variable (and is still classified as an RRab star in the GCVS!), it was included in various surveys of RR Lyrae variables. In a major study of the kinematics and metallicity of 300 Galactic RR Lyrae type stars, Layden (1994) published the values 167 ± 10 km/s for the radial velocity of XZ Ceti and [Fe/H] = −2.27 ± 0.13 for its metallicity. In another study, Fernley & Barnes (1997) determined v_rad = 190 ± 10 km/s and [Fe/H] = −2.10 ± 0.13, in reasonable agreement with Layden's results for XZ Ceti. It is to be noted that the metallicity of XZ Ceti corresponds to that of the most metal-deficient RR Lyrae type variables.

Photometry

We made time-series V-band observations with the 1.0 m telescope of the Australian National University in Siding Spring on seven nights between 2004 Dec. 25 and 2005 Jan. 11. The detector was one of the eight 2k×4k chips of the Wide Field Imager, giving a 13.0′ × 26.0′ field of view (corresponding to a 0.38″/pixel image scale). The exposure time was between 3 s and 10 s, depending on seeing. The observations were reduced with standard tasks in IRAF, using the daophot package. Images were corrected with bias and sky-flat frames, while differential magnitudes were determined with simple aperture photometry, relative to two local comparison stars, HD 12451 (V = 9.94) and BD −16° 350 (V = 10.62). The typical photometric uncertainty was better than ±0.01 mag, although the relatively large distance between XZ Cet and HD 12451 (about 23 arcmin) may have introduced slightly larger errors on non-photometric nights. The time series of the magnitude differences between the comparison stars testifies that the brightness of both stars is stable. Because the data were secured in one band (Johnson V) only, the magnitudes have not been transformed into the standard system and are treated as differential magnitudes. The existing BV photometry, however, indicates that the difference between the B−V colour indices of the comparison star (HD 12451) and XZ Ceti is as small as about 0.2 magnitudes, implying that the neglect of the colour correction does not have any influence on the results deduced. The photometric observational data (about a thousand points distributed over six nights) are listed in Table 1. The light curve folded on the best fitting period (0.819 d) is seen in the left panel of Fig. 1. The actual period, however, could be different from this value, as discussed in Section 4.1. (In this paper, zero phase is arbitrarily set at JD 2 400 000.0 for all figures.)
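The phase folding used for the diagrams in Fig. 1 can be illustrated with a minimal sketch (this is not the authors' reduction pipeline; the file name and two-column layout are assumed for the example):

```python
import numpy as np

def fold_light_curve(jd, mag, period, epoch=2400000.0):
    """Fold observation times on a trial period.

    Returns phases in [0, 1) with the data sorted in phase; zero phase
    is set at `epoch`, following the convention used for the figures.
    """
    phase = ((jd - epoch) / period) % 1.0
    order = np.argsort(phase)
    return phase[order], mag[order]

# Hypothetical two-column input: JD and differential V magnitude.
jd, dmag = np.loadtxt("xz_cet_photometry.dat", unpack=True)
phase, mag = fold_light_curve(jd, dmag, period=0.819)
```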
Spectroscopy

Spectroscopic observations were carried out with the 2.3 m telescope at the Siding Spring Observatory, Australia, on four nights between 2004 Dec. 23-29 and during another observing run between 2005 Aug. 17-23. All spectra were taken with the Double Beam Spectrograph using 1200 mm⁻¹ gratings in both arms of the spectrograph. The projected slit width was 2″ on the sky, which was about the median seeing during our observations. The spectra covered the wavelength range 5700-6700 Å; however, we used only 200 Å centered on the Hα line for radial velocity determination. The exposure time was 90-120 s, depending on weather conditions. The dispersion was 0.55 Å px⁻¹, leading to a nominal resolution of about 1 Å. In addition to XZ Cet, we regularly observed telluric and radial velocity standards. All spectra were reduced with standard tasks in IRAF. Reduction consisted of bias and flat field corrections, aperture extraction, wavelength calibration, and continuum normalization. We checked the consistency of the wavelength calibrations via the constant positions of strong telluric features, which proved the stability of the system. Radial velocities were determined with the task fxcor, including barycentric corrections. In applying this cross-correlation method, the target spectra were related to the average spectrum of HD 187 691. This method results in 1-2 km/s uncertainty in the individual radial velocities of XZ Ceti. Different velocity standards (β Virginis, HD 22 484) have shown that our absolute velocity frame was stable to within ±2-3 km s⁻¹. The radial velocity data are listed in Table 2. The data folded on the best fitting period (0.82604 d, determined from the whole radial velocity data set) are seen in the left panel of Fig. 2 for the 2004 data only, while the middle panel of Fig. 2 shows the phase curve constructed from the whole sample. In this latter plot the radial velocities measured in 2004 and 2005 are discerned by different symbols, in order to visualize their deviation from each other. Similarly to the light curve, the actual pulsation period could differ from the value 0.82604 d obtained from the formal fitting procedure. This suspicion has been confirmed by the available single RAVE data point, denoted by a filled triangle in Figure 2. Though the radial velocity phase curve is not covered completely, the mean radial velocity averaged over the whole pulsation cycle, about 175 km s⁻¹, clearly indicates that XZ Ceti is a Pop. II object from a kinematical point of view.
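The cross-correlation principle behind the fxcor measurement can be sketched as follows (a minimal illustration, assuming both spectra are continuum-normalized and resampled onto a common uniform grid in ln wavelength; the function and variable names are illustrative):

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def ccf_radial_velocity(loglam, target_flux, template_flux):
    """Radial velocity from the peak of the cross-correlation function.

    On a uniform ln(wavelength) grid a shift of one pixel corresponds
    to a fixed velocity step, so the lag of the CCF maximum converts
    directly to a velocity (barycentric correction not included).
    """
    dv = (loglam[1] - loglam[0]) * C_KMS          # velocity per pixel
    t = target_flux - target_flux.mean()
    s = template_flux - template_flux.mean()
    ccf = np.correlate(t, s, mode="full")
    lags = np.arange(-len(s) + 1, len(t))
    return lags[np.argmax(ccf)] * dv
```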
The pulsation period

In order to determine the actual value of the pulsation period, the program package MUFRAN was used. This software, developed by Kolláth (1990), is an efficient tool for pointing out periodic patterns in various time series. MUFRAN is a collection of methods for period determination, sine fitting to the observational data, and graphic routines for visualizing the results; its mathematical basis is the Fourier transform. The period analysis of our recent dataset gave the smoothest light curve if the actual pulsation period was assumed to be 0.819 d, as seen in the left panel of Figure 1. It is noteworthy, however, that the data segments obtained on different nights do not overlap perfectly even when this best fitting period is used, which is evidence of instability of the period and/or the light curve. It is to be emphasized that the data have been reduced carefully, and the whole reduction procedure was double-checked when this light curve anomaly was revealed. Adverse effects of bad pixels can be excluded for the following reason: during the photometric observations we constantly monitored the images of XZ Ceti and the two comparison stars in the CCD frames because, being the brightest stars in the field, they could have been close to saturation even with the shortest exposure times. For that reason, almost every image was checked with the task imexam, and we can safely exclude the possibility of being affected by bad pixels; there were no such pixels in the vicinity of XZ Cet or the comparison stars. Given the photometric constancy of the comparison star, the light curve variability of the order of 0.02-0.03 mag is intrinsic to XZ Ceti.

The instantaneous period can be determined from the radial velocity data as well. Although they are relatively less precise than the photometric data, the favourable circumstance that spectroscopic observations were carried out during two runs separated by more than half a year in principle allows a reliable determination of the pulsation period from the recent radial velocity data. The period search routine of MUFRAN gave the best fit with a period of 0.82604 d. This value, though it differs considerably from that derived from the light curve, seems to be more realistic because of the longer time base. Here, however, another complication emerges: a systematic shift between the radial velocities in the subsequent years. An obvious explanation for this feature is that XZ Ceti belongs to a binary system in which the effect of orbital motion is superimposed on the pulsational radial velocity changes. The apparently unstable light curve, however, implies that the radial velocity variations are not strictly repetitive either. This phenomenon and the possible binarity of XZ Ceti are discussed further in Section 4.3.

O−C diagram

Given this ambiguity concerning the instantaneous pulsation period, the period of pulsation was further investigated using previous photometric data, including those obtained by Dean et al. (1977) and Teays & Simon (1985). Additional data are available in the databases of Hipparcos (ESA 1997) and ASAS (Pojmanski 2002). The available precise photometric observations of XZ Ceti cover about thirty years. Such a long time base is sufficient for studying the stability of the pulsation and changes of any origin in the pulsation period. In order to study the behaviour of the pulsation period, the usual method of constructing the O−C diagram was applied. Furthermore, we could extend the time base by utilizing the Harvard College Observatory Photographic Plate Collection. When visiting the Harvard-Smithsonian Center for Astrophysics in early 2006, one of us (LLK) obtained photovisual magnitudes of XZ Ceti on about 2000 photographic plates of the archive. XZ Ceti appears on plates exposed between JD 2 410 955 (1888) and JD 2 447 771 (1989), and the photographic brightness of XZ Ceti was determined from each plate visually, using local comparison stars. The estimated photometric accuracy is about ±0.1 mag per point which, despite the ~0.5 mag full photographic amplitude, allowed us to calculate useful normal light curves for shorter segments. After JD 2 434 692 the observations of the given celestial region became very sporadic: only 1 per cent of the plates cover the last third of the whole interval.
Therefore, we used only the photographic data covering the time interval 1888-1953. The omission of the scanty Harvard data from the years 1979-1989 is also justified by the existence of more accurate photoelectric data from these years. The photographic magnitudes distributed over 65 years were then arbitrarily divided into 21 groups, each segment being about three years long, and the O−C residual was determined from each normal light curve, drawn from about 90 data points on average. A typical binned phase diagram is shown in Fig. 3. In view of the small amplitude of the brightness variation, the flatness of the light curve near maximum light, and the possible cycle-to-cycle changes in the light curve, the behaviour of the pulsation period of XZ Ceti has been studied by timing the moments of median brightness on the ascending branch of the light curve. The phase of occurrence of this feature can be timed more accurately than the phase of maximum brightness, a well-proven feature utilized in the O−C analysis of larger amplitude pulsating variables. For smaller amplitude variable stars like our target, median brightness is reached at the steepest part of the light curve, so its timing has an error as small as about 0.001 in phase, i.e. less than 0.001 d. The individual O−C residuals are listed in Table 3 and plotted in Figure 4, where the size of the circles refers to the weight assigned to the given residual in the fitting procedure. In representing the O−C residuals after JD 2 400 000, circles of increasing diameter correspond to weights 1, 2, and 3. The Harvard data have been treated separately and with equal weight.

The O−C diagram in Fig. 4 indicates that the period of XZ Ceti behaves in a peculiar way: definite short-term changes appear in the period of pulsation. Another obvious feature is that a secular variation also occurs in the pulsation period of XZ Ceti: in the first half of the 20th century the period was shorter than its average present value. A simple least squares fit to the O−C residuals obtained from the Harvard data of XZ Ceti indicates that the oscillation period was 0.8231057 d between JD 2 413 000 and JD 2 434 000. The deviations of the O−C residuals from the straight line at about JD 2 424 000 are intrinsic to the pulsation, because their values much exceed the uncertainty from the photographic magnitude estimations. The average value of the pulsation period determined by a weighted least squares fit resulted in the ephemeris:

C = JD 2 445 285.4260 (±0.0072) + 0.8231298 (±0.0000010) × E.

This ephemeris has been valid after JD 2 440 000, but a period jitter is also present in the oscillation of XZ Ceti throughout the last decades. As a result, the instantaneous period at a given moment can differ considerably from this value, as indicated by the ASAS data and especially by our recent photometry. The best fitting period to the ASAS photometric data is 0.82319 d according to MUFRAN, and 0.82318 d according to the ASAS webpage (http://www.astrouw.edu.pl/~gp/asas/asas.html). The practically coinciding values show the reliability of the mathematical methods involved. Although the non-repetitive character of the phase curves (see Sect. 4.1.3) prevents deriving the precise value of the pulsation period, the O−C diagram itself testifies that period changes have occurred on time scales of years and decades.
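A minimal sketch of how a residual in the O−C diagram follows from a linear ephemeris such as the one above (illustrative only; the weighting scheme and the timing of median brightness on the ascending branch are not reproduced here):

```python
def o_minus_c(t_obs, t0=2445285.4260, period=0.8231298):
    """O-C residual (in days) of an observed timing against a linear ephemeris.

    E is the nearest integer cycle count; the calculated moment is
    C = t0 + period * E, and the residual is observed minus calculated.
    """
    E = round((t_obs - t0) / period)
    return t_obs - (t0 + period * E)
```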
In this respect the anomalous Cepheid XZ Ceti differs from the classical Cepheids of shortest period, because those latter stars pulsate with a more or less stable period. Among the short period Cepheids, only the s-Cepheids have a changeable period (Szabados 1983), and these stars are thought to pulsate in an overtone mode. A comparison with the stability of the pulsation period of BL Bootis would be appropriate. The photometric data of the prototype anomalous Cepheid are, however, of lower quality, so the error bars in the O−C graph constructed by McCarthy & Nemec (1997) are too large to draw any firm conclusion on the period changes of BL Boo.

(In)stability of the pulsation amplitudes

In order to see whether the light curve becomes stable when using a properly chosen pulsation period, the photometric data have also been folded on the period of 0.8231561 d. This period was obtained by fitting a straight line to the last four O−C residuals, i.e. the ASAS and our photometric data, and it can be considered the instantaneous value of the period characterising the oscillations of XZ Ceti. The phase curve plotted with this period (longer than the average value of the period valid for the last decades) is seen in the right panel of Fig. 1. Again, a definite scatter is seen near maximum brightness, testifying to non-repetitive behaviour from cycle to cycle. The radial velocity data have also been folded on the 0.8231561 d period (see the phase curve in the right panel of Fig. 2). Although the phase coverage is not perfect, it is clearly seen that the smallest radial velocity values (coinciding in phase with the brightness maxima) are shifted vertically with respect to each other in the subsequent years. If this is a sign of the non-repetitive behaviour of the radial velocity curve, then this effect much exceeds the variability of the light curve. Note, however, that our photometry of XZ Ceti covers only a week, while the time base of our radial velocity data is about half a year. The radial velocity value obtained for XZ Ceti in the RAVE project strongly deviates from its 'expected' position in the phase curve. The large shift along the horizontal direction cannot be explained by a considerably different period, because this radial velocity observation was obtained at an epoch (JD 2 452 889) when the period behaviour of XZ Ceti was well known from the ASAS photometry. Binarity of XZ Ceti can also explain this shift: in this case the orbital motion of the pulsating star around the common mass centre causes a long period (corresponding to the orbital period) modulation in the radial velocity variation, and the shift occurs in the vertical direction. However, intrinsic amplitude changes and the effect of a companion star cannot be separated from the available data.

Pulsation amplitudes and their ratio

Unlike all other known anomalous Cepheids, XZ Ceti is not a member of any stellar aggregation (cluster or external galaxy) whose distance can be determined by some astrophysical method(s), so it is impossible to deduce its position in the colour-magnitude diagram, which would be necessary to derive its absolute brightness and pulsation mode. The observational data, however, enable us to confirm indirectly that XZ Ceti is in fact an anomalous Cepheid. To this end, the amplitudes of the photometric as well as the radial velocity variations have been studied, which are instrumental in revealing various properties of the pulsating star. The peak-to-peak amplitudes of the brightness and radial velocity variations (based on the instantaneous pulsation period of 0.8231561 d) are as follows: 0.450 mag in the photometric V band and 47.28 km s⁻¹ in the radial velocity (using the 2005 v_rad data only). The ratio of the radial velocity amplitude, A_vrad, and the photometric amplitude, A_V, is A_vrad/A_V = 105.1 km s⁻¹ mag⁻¹ for XZ Ceti.
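As a quick arithmetic check of the quoted ratio, using the two peak-to-peak amplitudes given above:

\[
\frac{A_{v_{\mathrm{rad}}}}{A_V} = \frac{47.28~\mathrm{km\,s^{-1}}}{0.450~\mathrm{mag}} \approx 105.1~\mathrm{km\,s^{-1}\,mag^{-1}}.
\]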
It is known that BL Boo is an overtone pulsator (McCarthy & Nemec 1997, and references therein). From the phase curves shown in the paper by McCarthy & Nemec, an amplitude ratio of 111.5 can be determined, which is even larger than the value for XZ Ceti. This amplitude ratio serves as a useful diagnostic tool in classifying pulsating variable stars. For RR Lyrae type stars this amplitude ratio is 36.4 on average (Liu 1991). In the case of BL Her type variables, the photometric and radial velocity data taken from the literature give an average ratio of 47.2. For classical Cepheids, this ratio is 43.6 for fundamental mode and 59.7 for first overtone pulsators (adapted from Szabados 2000). The value exceeding 100 for XZ Ceti is extremely large for an RR Lyrae or BL Herculis type variable. Based on its phenomenological similarity to BL Boo (the shape and amplitude of the light curve and the amplitude ratio discussed above), XZ Ceti is also an overtone pulsator. The amplitude ratio larger than 100 may even indicate second overtone pulsation (Balona & Stobie 1979a, 1979b).

Binarity

As shown by the radial velocity measurements (Fig. 2), XZ Ceti may belong to a spectroscopic binary system. The shift in the radial velocity at a given phase between the 2004 and 2005 data is as large as about 3σ of the v_rad data, so the systematic difference is significant. A companion decreases the observable photometric amplitude of the pulsating component while leaving the corresponding radial velocity variations unaffected, thus increasing the amplitude ratio. However, one cannot a priori exclude pulsation in the second overtone, in which case an amplitude ratio of about a hundred (twice the value characteristic of fundamental mode pulsation) is expected, corresponding to the frequency ratio of the second overtone and fundamental mode pulsation, f2/f0 ≈ 0.5 for Cepheids. However, the analogy with BL Boo implies first overtone pulsation for XZ Ceti. In this case the large amplitude ratio hints at binarity for both XZ Ceti and BL Bootis.

Conclusions

We studied the photometric and radial velocity variability of XZ Ceti, a star classified as an anomalous Cepheid. The term "anomalous" is obsolete in its original sense, because the recent pulsation models elaborated by Bono et al. (1997), Marconi et al. (2004), and Caputo et al. (2004) give a natural explanation why such variables exist in the given region of the H-R diagram: they are extremely metal-poor classical Cepheids. This new paradigm motivates comparing the behaviour of XZ Ceti with classical Cepheids. The resemblance of the phenomenological parameters of the light and radial velocity curves (period, Fourier parameters, amplitude ratios) of XZ Ceti and the prototype anomalous Cepheid, BL Bootis, corroborates that XZ Cet is also an anomalous Cepheid. The large value of the mean radial velocity (about 175 km s⁻¹) is a further piece of evidence that XZ Ceti does not belong to the same subsystem that includes classical Cepheids. The systematic shift in the radial velocities at given phases of pulsation hints at the possible binarity of XZ Ceti.
The presence of a companion is also inferred from the large radial velocity amplitude as compared with the photometric amplitude, even if the largeness of this amplitude ratio is partly caused by the overtone mode of pulsation. XZ Ceti remains anomalous in the sense that it shows strong period changes on a very short time scale, and possibly slightly unstable pulsation with cycle-to-cycle variations. Summarizing its longer time scale period behaviour, the characteristic periods have been as follows: between JD 2 413 000 and JD 2 434 000, 0.8231057 d; between JD 2 442 000 and JD 2 453 400, 0.8231298 d; however, for the interval JD 2 452 000 to JD 2 453 400 the most appropriate value is 0.8231561 d. Further monitoring of this unique variable star is highly desirable.
Phosphonate analogues of nucleoside polyphosphates

This article provides an overview of the efforts toward the synthesis of nucleoside polyphosphate mimics featuring a P-CXY-P scaffold. The following synthetic approaches to these compounds are summarized: (i) nucleophilic displacement of 5´-O-tosyl nucleosides by ammonium salts of methylenebisphosphonic acid; (ii) synthesis via activated phosphate/phosphonate substrates; (iii) Mitsunobu coupling between a nucleoside and methylenebisphosphonic acid; (iv) phosphorylation of a protected nucleoside under Yoshikawa's reaction conditions with methylenebis(phosphonic dichloride); (v) synthesis via nucleophilic cleavage of cyclic trimetaphosph(on)ates; (vi) enzyme-mediated reactions.

Synthesis via 5´-O-tosyl nucleosides

Lipophilic salts of methylenebisphosphonic acid, such as the tris(tetra-n-butylammonium) bisphosphonate, are good reagents for the synthesis of nucleotide analogues from nucleosides by direct nucleophilic displacement at the 5´-position of 5´-O-tosyl nucleosides. While this is a multi-step synthesis (particularly if protected nucleosides are used), the relative simplicity and high reliability of the procedure make it a good supplement to existing methods based on the addition of the nucleophilic 5´-hydroxyl group to activated phosphonate derivatives.

The preparation of methylene-modified nucleoside polyphosphates by nucleophilic displacement of O-sulfonyl groups, such as p-toluenesulfonyl (Ts) or methylsulfonyl (Ms), was first recorded by Stock in 1979, when it was reported that the action of trialkylammonium salts of methylenebisphosphonic acids on 5´-O-tosylthymidine in DMF leads to the corresponding bisphosphonate analogues of thymidine di- and triphosphate 1, 2 (Scheme 1).25 Later, this approach was adapted by Poulter and co-workers to the preparation of a variety of nucleoside diphosphates and their bisphosphonate analogues.26 Thus, reactions between the 5´-O-tosyl derivatives of adenosine and 2´-deoxyadenosine and tris(tetra-n-butylammonium) salts of methylenebisphosphonic acids in freshly distilled dry acetonitrile afforded the nucleoside bisphosphonates 3-5 in good yields (Scheme 2).27 By essentially the same procedure (sometimes with variations in the reaction conditions), CHF- and CF2-analogues of 2´-deoxythymidine diphosphate,28 CF2-modified guanosine diphosphate,29 and adenosine 5´-(α,β:β,γ-dimethylenetriphosphate)30 have been synthesized and characterized by spectral methods. Findings on the selectivity of the series of modified ATPs for rat P2X2 and P2X2/3 receptors are summarized in ref. 15.

Scheme 1. Synthesis of bisphosphonate analogues of thymidine di- and triphosphate from 5´-O-tosylthymidine.

Scheme 2. Poulter's synthesis of α,β-methylene-modified ADP derivatives.

An attractive feature of Poulter's phosphorylation method is that either protected or unprotected nucleosides can be phosphorylated. Yields are higher for protected nucleosides, suggesting that in appropriate cases it may be worthwhile to use the protected strategy in spite of the added synthetic steps.31 Thus, α,β-methylene-modified derivatives of ADP 6-8 were prepared using 2´,3´-O-isopropylideneadenosine 5´-O-tosylate. After purification, the intermediates were deprotected by treatment with 6-8% trifluoroacetic acid to give the desired products (Scheme 3).
32 With a view to establishing rules for the design of UDP-based reversible P2Y6 receptor antagonists as potential drugs, a variety of protected uridine 5´-tosylates were tested in nucleophilic displacement reactions with tetrabutylammonium bisphosphonates, giving the desired uracil nucleotide analogues 9-17 after acidic workup (Scheme 4).33

Scheme 3. Synthesis of α,β-methylene-substituted ADP derivatives via 2´,3´-O-isopropylideneadenosine 5´-O-tosylate.

Scheme 4. Synthesis of uracil nucleotide analogues.

A new type of nucleoside polyphosphate analogue, in which a pyrophosphate oxygen is replaced by a potentially reactive carbonyl group, was obtained by displacement of the 5´-mesyl group of the corresponding 5´-mesyl nucleoside with carbonylbisphosphonate. Reaction of the tributylammonium salt of carbonylbisphosphonate with N2-(4-butylphenyl)-5´-mesyl-2´-deoxyguanosine in acetonitrile gave 18, isolated as a yellow solid in 93% yield by ion-exchange chromatography. This compound was a potent, competitive inhibitor of human DNA polymerase α (Scheme 5).34

Scheme 5. Synthesis of the carbonylbisphosphonate analogue of BuPdGDP via displacement of the 5´-mesyl group.

Blackburn and Langston prepared α,β-substituted phosphonate analogues of 2´-deoxyadenosine and 2´-deoxythymidine 5´-triphosphates by a two-step reaction starting from the appropriate 5´-O-tosyl deoxynucleosides (Scheme 6).28 In the first step, they prepared the 2´-deoxynucleoside 5´-diphosphate analogues 19 in yields of around 50-60%. The γ-phosphate group was attached subsequently, either via activation of Pβ of the dNDP as its morpholidate followed by reaction with inorganic phosphate (method A), or by phosphorylation of the nucleoside 5´-diphosphate with an excess of p-nitrobenzyl phosphoromorpholidate (method B). The p-nitrobenzyl group was removed by catalytic hydrogenolysis to give the dNTP analogues 20 in good yields.

Scheme 6. Synthesis of α,β-CXY dNTP analogues by a combination of tosylate substitution and the phosphoromorpholidate protocol.

α,β:β,γ-BisCF2-substituted RNA nucleotide analogues 21-24, potentially stable to enzymatic hydrolysis in RNA and DNA polymerase assays, were prepared via nucleophilic displacement of the 5´-tosylate in benzoyl-protected nucleosides by the tetra-n-butylammonium salt of bis(difluoromethylene)triphosphonic acid (Scheme 7). Two equivalents of the tosyl nucleoside were required to ensure maximum consumption of the triphosphonate salt. In the case of the ATP, CTP, and UTP nucleotide analogues the authors were able to achieve conversions close to 90%, although in the case of the GTP analogue the conversion did not exceed 20%. Preliminary biological results have shown that this class of nucleotides with a modified triphosphate moiety exhibits the correct polarity and minimal steric effects compared with the natural molecules.35

As part of a program to investigate the mechanism of action of dinucleoside polyphosphate hydrolases, British biochemists described the synthesis of a range of analogues of diadenosine 5´,5´´´-triphosphate (Ap3A).37 The most effective route to compounds 30 involves the condensation of α,β-methylene analogues of ADP, conveniently prepared by the method of Poulter, with adenosine 5´-phosphoromorpholidate (Scheme 9). The α,β:β,γ-bismethylene analogue 31 was prepared by the reaction between 2´,3´-O-isopropylideneadenosine 5´-tosylate and bis(dihydroxyphosphonomethyl)phosphinic acid. Unfortunately, the yield of pure material was only 3% (Scheme 10).37

Scheme 10. Synthesis of the α,β:β,γ-bismethylene Ap3A analogue via 5´-tosylate substitution.
Pankiewicz and co-workers have reported a one-pot reaction involving initial displacement of the mesyl group of 2´,3´-O-isopropylidene-5´-O-mesylthiazofurin (32) with the tris(tetrabutylammonium) salt of difluoromethylenebisphosphonic acid, followed by DCC coupling of compound 33 with nucleoside 34 to give the desired bisphosphonate analogue 35 (Scheme 11). The latter was found to be a potent inducer of differentiation of K562 erythroid leukemia cells.38

Scheme 11. Synthesis of a nonhydrolyzable CF2-MBP analogue of thiazole-4-carboxamide and benzamide adenine dinucleotide.

Synthesis via activated phosph(on)ate substrates

Diphenyl chlorophosphate,40-43 N,N´-carbonyldiimidazole (CDI),31,44,45 the imidazole/2,2´-dithiopyridine/Ph3P system,46-50 dicyclohexylcarbodiimide (DCC),51,52 and trifluoroacetic anhydride53,54 are the most widely used activating reagents for the synthesis of methylene-modified nucleoside tri- and polyphosphates. All coupling methods have the same strategy in common: one nucleotide subunit (usually a nucleoside monophosphate or a bisphosphonate) is converted via an activation process into an electrophilic substrate and then reacted with a second phosphate or phosphonate subunit acting as a nucleophile. In principle, two alternative approaches can be used for triphosphate bridge formation. In the first, a nucleotide is activated at the monophosphate stage and then coupled to a bisphosphonate. In the second, a bisphosphonate is activated and coupled with a nucleoside monophosphate. Thus, the reaction sequences shown in Scheme 14 were the basis of numerous works in which nucleoside 5´-(β,γ-methylene) triphosphates were prepared via activated phosphate/phosphonate substrates.55

Scheme 14. Simplified general approach for the synthesis of β,γ-methylene nucleoside triphosphates via phosphate/phosphonate activation.

A simple, one-pot method using the diphenyl chlorophosphate technique to prepare the β,γ-modified ATP analogues 38 and 39 is illustrated in Scheme 15.40,56 The reactions proceed at room temperature in Py or Py/DMF solution to give the corresponding products in moderate yields. This approach was the most successful for obtaining solid-supported 5´-(α-P-thio) triphosphate oligonucleotide analogues, compared with other methods involving phosphoroamidate or salicyl phosphite intermediates.41

Scheme 15. Synthesis of β,γ-CH2- and β,γ-CF2-ATP analogues via (PhO)2P(O)-activated AMP.

Schmitt and Tampé reported an application of diphenyl chlorophosphate activation in a late step of the synthesis of a novel class of nonhydrolyzable ATP-lipids 40, in which the nucleotides are covalently attached via the C8 (or N6) position of the adenine ring to a synthetic lipid (Scheme 16).42 Possible applications of this novel class of ATP-lipids have been discussed.

Currently, the most commonly used methods for the preparation of β,γ-methylene-modified nucleotides involve either morpholidate or imidazolidate activation. The morpholidate method, introduced by Khorana as one of the first successful strategies for the synthesis of nucleoside 5´-polyphosphates,58 employs a two-step process involving conversion of the nucleoside monophosphate (NMP) to the corresponding morpholidate via dicyclohexylcarbodiimide (DCC) activation, followed by conjugation with the appropriate bisphosphonate. Myers prepared the first bisphosphonate analogue of adenosine 5´-triphosphate, β,γ-CH2-ATP, by condensing methylenebisphosphonic acid with adenosine 5´-phosphoromorpholidate.
59-62

Scheme 18. Synthesis of β,γ-methylene bisphosphonate (d)NTP analogues via morpholidate intermediates.

An effective route to analogues of diadenosine 5´,5´´´-P1,P3-triphosphate 45 involves the condensation of adenosine phosphoromorpholidate 44 with the P1,P2-methylene analogues of ADP 43, conveniently prepared by the Poulter method (Scheme 19).37

The search for carbocyclic nucleotides with potent anti-HIV activity led to the synthesis of the pyrophosphoryl phosphonate 48,63,64 and its diphosphonate analogues 49,65,66 with progressive fluoro-substitution within the β,γ-methylene linker group, as shown in Scheme 20. Noteworthy features of the chemical syntheses include the transformation of the nucleoside monophosphate 46 into the activated morpholidate 47, and the coupling of 47 with the corresponding bisphosphonate. Nucleotides 50 and 51 were found to be potent inhibitors of HIV reverse transcriptase. The three nucleotide triphosphate mimics 52-54 were also synthesized and tested as inhibitors of HIV RT in an enzyme assay. Both 52 and the monofluorinated analogue 53 showed relatively poor activity, being three orders of magnitude less active than the parent compound 48. The difluorinated analogue 54 was markedly more effective than the monofluorinated substrate but was still two hundred times less potent than 48. The disappointing activity of 52-54 may be due to the fact that the carboxy group is a poor mimic of the terminal phosphonate group of compound 51.66

Scheme 19. Synthesis of analogues of diadenosine 5´,5´´´-P1,P3-triphosphate utilizing a combination of the Poulter method and morpholidate activation.

Scheme 20. Preparation of carbocyclic nucleotide analogues with progressive fluoro-substitution within the β,γ-methylene linker group.

Phosphorimidazolidate intermediates result from activation of nucleoside monophosphates with 1,1´-carbonyldiimidazole (CDI). Hoard and Ott's original research on this transformation featured syntheses of triphosphates from dNMPs and inorganic pyrophosphate.68 A similar approach is applicable to the synthesis of β,γ-methylene nucleoside triphosphates: addition of the acidic bisphosphonate activates the phosphorimidazolidate toward nucleophilic displacement, giving the phosphonate-modified NTP. The imidazolidate method is usually performed as a one-pot synthesis; the nucleoside monophosphate is converted to the imidazolidate by activation with CDI, followed by the addition of the appropriate bisphosphonate.31,44,45

Complications in the imidazolidate procedure have been reported when ribonucleosides with unprotected vicinal diols were activated with CDI. This phosgene equivalent easily forms cyclic carbonates, which were carried through as impurities in the triphosphorylation procedure. Additionally, nucleoside phosphorimidazolidates can react sluggishly with methylenebisphosphonate, requiring prolonged reaction times or the use of catalysts such as ZnCl2 or CdCl2.48,69 In some cases, reversal of the commonly used strategy, such that the α,β-methylene NDP is the nucleophile and the phosphorimidazolidate is the electrophile, results in a high yield of the desired NTP analogue without the need for a catalyst or a long reaction time.

Ingall et al. reported the synthesis of a series of ATP analogues 64 designed to act as antithrombotic agents. Substitution of the adenine moiety enhanced affinity and selectivity for the P2T receptor and led to the development of a highly potent compound 64a with IC50 = 0.4 nM. The whole series was prepared via the phosphorimidazolidate protocol (Scheme 23).
70 Nucleotide analogues modified at the glycone and all three phosphate residues were reported by Roberts and co-workers to be highly stable in human blood serum, with half-lives toward hydrolysis of up to 4.5 days.71 These analogues were shown to be selective inhibitors of DNA synthesis catalyzed by retroviral reverse transcriptases and terminal deoxynucleotidyl transferases. A typical synthetic procedure is shown for the ATP analogues 65 and 66 in Scheme 24.

Scheme 24. Synthesis of glycone- and triphosphate-modified nucleotide analogues.

The special need for nucleotides with a modified polyphosphate chain as rapid and highly efficient coupling reagents led to the development of an effective method for the preparation of phosphorimidazolidate intermediates under the Mukaiyama-Hashimoto oxidation-reduction conditions.72 Scheme 25 illustrates the synthetic pathway for obtaining methylene analogues of nucleoside tri- and tetraphosphates. The reaction was performed by activation of a nucleotide unit with imidazole in the presence of the triphenylphosphine/2,2´-dithiodipyridine (DTDP) system, followed by coupling with an organic salt of the bisphosphonate, carried out in DMF in the presence of an 8-fold excess of ZnCl2.69,47-50 The coupling reactions occurred efficiently without significant accumulation of by-products, which is important because of the purification difficulties common for this class of compounds.

Scheme 25. Synthesis of CH2-modified nucleotide analogues under the Mukaiyama-Hashimoto oxidation-reduction conditions.

In order to obtain dinucleotide cap analogues labeled at the ribose of the 7-methylguanosine moiety with N-methylanthraniloyl, Jemielity and co-workers used the reverse strategy, involving ZnCl2-mediated coupling of the bisphosphonate-modified nucleotide P-imidazolidate 67 with the fluorescently labeled nucleoside monophosphate 68 (Scheme 26).73 Compound 69 was obtained in 12% yield after two purification steps, ion-exchange chromatography and HPLC.

Recently, Sun and co-workers developed a novel P(V)-N activation method for the synthesis of nucleoside 5´-triphosphates and their β,γ-bridging oxygen-modified analogues from nucleoside 5´-phosphoropiperidates, with 4,5-dicyanoimidazole (DCI) as the activator.74 A high-yielding and chromatography-free method for the preparation of nucleoside 5´-phosphoropiperidates 72-76 is shown in Scheme 28.

Scheme 28. Method for the synthesis of nucleoside 5´-phosphoropiperidates.

The nucleoside 5´-phosphoropiperidates obtained exhibited excellent reactivity toward bisphosphonate reagents in the presence of 4,5-dicyanoimidazole and afforded the β,γ-CX2-NTP products in high isolated yields (Scheme 29). In subsequent research, the same authors extended the application of the phosphoropiperidate/DCI system to the preparation of symmetrical and asymmetrical P2,P3-CX2-dinucleoside tetraphosphates (Scheme 30). Compared with the conventional phosphoromorpholidate method, this approach afforded products in shorter reaction times and higher isolated yields.75 A one-pot method for the DCI-promoted synthesis of symmetrical NppCX2ppN bisphosphonate analogues directly from nucleoside 5´-phosphoropiperidates, without using nucleoside phosphonates, has also been described.76

A very effective synthetic procedure for the preparation of a series of dinucleoside tetraphosphate analogues via activated bisphosphonates and nucleoside monophosphates was developed by Yanachkov et al.
77 They found that organic salts of pyrophosphoric acid and its halomethylenebisphosphonate analogues react with an excess of 1,1´-carbonyldiimidazole (CDI) to give stable, isolable diimidazolidates 77, and that these diimidazolidates react with nucleoside 5´-monophosphates or a monothiophosphate 78 to give the corresponding nucleotide analogues 79 conveniently and in high yield (Scheme 31). Several bisphosphonate analogues of P1,P4-di(adenosine-5´) tetraphosphate were evaluated with respect to their effects on platelet aggregation and on the function of the platelet P2Y1, P2Y12, and P2X1 receptors. Some of the compounds showed very potent (nanomolar level) inhibition of ADP-induced human platelet aggregation, thus presenting a new and promising class of antiplatelet drugs (Figure 1).78

The activation strategy in which a bisphosphonate is activated with imidazole and then coupled with a non-activated nucleotide has been applied by Polish researchers to the synthesis of di(7-methylguanosine) tetraphosphates and their α,δ-diborano and α,δ-dithio analogues 80, 81 containing a β,γ-methylene group (Scheme 32).79

Scheme 32. Synthesis of dinucleotide cap analogues modified within the polyphosphate chain.

Synthesis via the Mitsunobu reaction

Discovered in 1967 by Oyo Mitsunobu, this mild multicomponent reaction permits esterification of an acidic component (HX, pKa < 11) with a primary or a secondary alcohol (ROH) in the presence of triphenylphosphine and diethyl azodicarboxylate (DEAD) or diisopropyl azodicarboxylate (DIAD) (Scheme 33).80,81

Scheme 33. The Mitsunobu reaction.

Coupling of nucleosides with phosphoric or phosphonic acids using the Mitsunobu reaction, pioneered by Mioskowski and co-workers,82 has become one of the important methods available for the synthesis of nucleoside polyphosphates and polyphosphonates. The earliest report of the effectiveness of the Mitsunobu condensation in nucleoside phosphonate chemistry was the preparation of the 6-chloroadenosine α,β:β,γ-bismethylenetriphosphate analogue 82 (Scheme 34).83 The original procedure involves treatment of the phosphonic acid salt (1 equiv), the nucleoside (1 equiv), and triphenylphosphine (3 equiv) in anhydrous pyridine with HBF4 (1 equiv), followed by dropwise addition of DEAD (3 equiv).

Scheme 34. Synthesis of a 6-chloroadenosine α,β:β,γ-bismethylenetriphosphate analogue.

The same strategy, namely the Mitsunobu condensation of tribenzyl methylenebisphosphonate with the protected guanosine 83, has been utilized for the synthesis of the nucleoside bisphosphonate 85. Hydrogenolysis of compound 84 was achieved using a mixture of Pd/C and Pearlman's catalyst (Scheme 35).84

Scheme 35. Synthesis of 9-[5´-O-(methylenebisphosphonate)-β-D-ribofuranosyl]guanine.

An attractive feature of the Mitsunobu phosphorylation is the good or high yield of products for some nucleoside substrates. Disadvantages of the method are a lack of tolerance of purine bases and the difficulties that may be encountered in the synthesis of the suitable methylenephosphonate analogues.83-85 Another possible side reaction involves the formation of 5´-hydrazo-substituted compounds, which are most likely formed by rearrangement of an intermediate nucleoside-PPh3/DEAD complex.85 Moreover, the Mitsunobu reaction is a poor choice if the incoming phosphonate substrate has several hydroxyl groups available for coupling.86,87 Nevertheless, in spite of these limitations, several recent innovations have significantly extended the scope and synthetic utility of the method.
Thus, Taylor and co-workers developed an unsymmetrical approach to the synthesis of bismethylenetriphosphate analogue 86 via sequential Michaelis-Arbuzov reactions on bis-halomethylenephosphinates. The ester 86 was monodeprotected at one of the terminal phosphonate groups by reaction with 1.0 equiv of KCN in DMF at 70 °C. The resulting monodeprotected compounds 87 were used to achieve the first synthesis of the bismethylene analogues of UTP and CTP. Acid 87a was coupled to 2´,3´-O,N3-tribenzoyluridine 88 via the Mitsunobu reaction to give 90. While this reaction proceeded smoothly to give the product in 79% yield, the authors had to use the triethylammonium salt 87b to obtain a good yield in the case of 2´,3´-O,N3-tribenzoylcytidine 89. Complete deprotection of 90 and 91 was achieved by subjecting them to bromotrimethylsilane followed by treatment with aq. NH4OH-MeOH (Scheme 36).86

Scheme 36. Synthesis of the bismethylene analogues of UTP and CTP.

Another example of the successful application of the Mitsunobu coupling is the synthesis of bismethylene triphosphate nucleotides of uridine 4-phosphate analogues 101. The bismethylene triphosphate derivative 97, the phosphorus component in the synthesis of 101, was prepared by a Michaelis-Arbuzov route from compound 94. The selective cleavage of the 5´-ester moiety of the 2´,3´,5´-tri-O-acetyl or tri-O-benzoyl U-4-P analogues 98 was accomplished with the aid of a tin catalyst. The Mitsunobu coupling of the 5´-deprotected U-4-P analogues 99 to an unsymmetrical bismethylene triphosphate bearing a free phosphonic acid moiety at one of the terminal positions gave the fully protected bismethylene triphosphate U-4-P analogues 100. Global deprotection of nucleotides 100 was carried out by treatment with 6-9 equiv of TMSBr followed by ammonium hydroxide in methanol (Scheme 37).87

Scheme 37. Synthesis of bismethylene triphosphate nucleotides of uridine 4-phosphate analogues.

To prepare enzymatically and chemically non-hydrolyzable analogues of the dinucleoside triphosphates Ap3A and Gp3G, Lebeau and co-workers developed a new methodology based on O,O-dialkyl selenophosphonate chemistry.85,90 The bisphosphonic acid 102, a key building block in the synthesis of the dinucleoside triphosphate analogues ApCH2pCH2pA and GpCH2pCH2pG, was prepared via a one-pot condensation / transesterification / oxidation / dealkylation sequence involving O,O-dialkyl methaneselenophosphonates. The bisphosphonic acid was then condensed with 2´,3´-O-benzylidene-6-chloroadenosine 103 under modified conditions of the Mitsunobu reaction to afford the dinucleoside triphosphate analogue 104 in 40% yield (Scheme 38). The diguanosine derivative was prepared using a similar strategy.

The Mitsunobu esterification was found to be particularly effective for the preparation of potential bisubstrate inhibitors of Leishmania elongating α-D-mannosyl phosphate transferase.91 Thus, coupling between the phosphonodisaccharide methylenebisphosphonate derivative 105 and the guanosine derivative 106 is the crucial step of the synthesis of the required transition-state analogue 107, in which a guanosine moiety is linked to the acceptor substrate through the methylenebisphosphonate bridge, mimicking the important guanosine-pyrophosphate motif present in the natural substrate donor GDP-mannose (Scheme 39).
Electrophilic phosphorylation of nucleosides by the Yoshikawa and Ludwig-Eckstein approaches

The Yoshikawa procedure involves the selective 5´-monophosphorylation of a nucleoside with the electrophilic phosphorus oxychloride (POCl3), using trimethyl or triethyl phosphate as the solvent (Scheme 40).92 Yoshikawa's initial studies were performed on 2´,3´-O-isopropylidene-protected NTPs, but it was later found that selective reaction at the 5´-OH was possible for unprotected NTPs and dNTPs. An acidic medium was reported to be critical for selective reaction at the 5´-hydroxyl of unprotected NTPs and dNTPs. In particular, the addition of water to the phosphorylating reagent results in selective 5´-phosphorylation of nucleosides in moderate to high yield.93 Nevertheless, the literature data on phosphorylation with POCl3 are contradictory and reveal that good regioselectivity can also be obtained when the medium is slightly basic, so the relationship between the regioselectivity of phosphorylation and pH remains unclear.55 Yoshikawa and co-workers also used pyrophosphoryl chloride in place of POCl3 but reported no significant advantages.93 Thiophosphoryl derivatives of nucleotides can also be obtained via a Yoshikawa procedure that employs PSCl3 to generate 1-thiotriphosphates.94

Scheme 40. The Yoshikawa approach for the synthesis of nucleoside monophosphates.

An important feature of Yoshikawa monophosphorylation reactions is that the phosphorodichloridate intermediates can be used directly for the synthesis of nucleoside triphosphates. Ludwig103 and Ruth94 have shown that treatment of phosphorochloridates, generated in situ via the Yoshikawa procedure, with bis(tri-n-butylammonium) pyrophosphate in dry DMF affords the nucleoside triphosphates in good yields. This approach was successfully adopted for the synthesis of β,γ-methylene-substituted nucleoside triphosphates. Thus, Fisher and co-workers proposed a short one-pot synthesis of 2-MeS-β,γ-CH2-ATP (116), represented in Scheme 45.104 To ensure a selective reaction of 2-methylthioadenosine at the 5´-OH, they used 2´,3´-methoxymethylidene-2-methylthioadenosine 114 as the starting material. Nucleoside 114 was first treated with POCl3 in (MeO)3PO in the presence of 1,8-bis(dimethylamino)naphthalene (proton sponge), followed by the addition of bis(tributylammonium) methylenebisphosphonate and tributylamine. Finally, hydrolysis of the cyclic intermediate 115 and deprotection of the methoxymethylidene group afforded 116 in 35% overall yield.

Scheme 45. Application of the Yoshikawa approach to the synthesis of a β,γ-CH2-ATP analogue.

The fact that P2Y receptors have been found to be implicated in a variety of pathophysiological states, such as vascular, inflammatory, and immune diseases, pushed Müller's team to synthesize a series of UTP, UDP, and UMP derivatives and analogues modified in the uracil part of the molecule.105 Thus, a triphosphate-analogous structure containing a β,γ-dichloromethylene bridge was successfully introduced into 5-bromouridine via the Yoshikawa approach, yielding the nucleotide analogue 117 as shown in Scheme 46. A β,γ-dichloromethylene modification in the triphosphate chain of 5-bromo-UTP was tolerated by all three receptor subtypes, thus opening up a new strategy to obtain ectonucleotide diphosphohydrolase- and phosphatase-resistant P2Y2, P2Y4, and P2Y6 receptor agonists.

Scheme 46. Synthesis of a β,γ-dichloromethylene-substituted 5-bromo-UTP analogue.
A similar procedure was used by Müller and co-workers for the preparation of the new β,γ-CCl2-substituted, ATP-based 3H-labeled radioligand 120 ([3H]PSB-0413) (Scheme 47).106 As the precursor for tritiation, the authors selected the corresponding propargyl derivative. Reaction of 118 with phosphorus oxychloride in trimethyl phosphate, followed by reaction with dichloromethylenediphosphonic acid in DMF, afforded the corresponding triphosphate analogue 119. The latter was subsequently subjected to catalytic hydrogenation using tritium gas. In preliminary saturation binding studies, [3H]PSB-0413 showed high affinity for platelet P2Y12 receptors, with a KD value of 4.57 nM.

Scheme 47. Synthesis of the nucleotide analogue [3H]PSB-0413 via the Yoshikawa procedure.

In 1989, Ludwig and Eckstein published a modification of electrophilic phosphorylation that employs 2-chloro-4H-1,3,2-benzodioxaphosphorin-4-one.107 This reaction gave an activated phosphite that was reacted with pyrophosphate to form a cyclic intermediate. The latter can be oxidized and hydrolyzed to give the corresponding triphosphate (Scheme 48). It was shown that protection of the nucleobase functionality of A, T, G, and C was not required, but selectivity for the 5´-hydroxyl in the initial phosphitylation step was marginal if the 3´- and the 2´-hydroxyls were not protected.55

Scheme 48. Ludwig-Eckstein electrophilic phosphorylation of nucleosides.

Scheme 49. Synthesis of AZT 5´-α-P-borano-β,γ-bridge-modified triphosphates.

The usefulness of the Ludwig-Eckstein approach in the development of a convenient synthetic route to β,γ-methylene-modified nucleotides was demonstrated by Wang and co-workers, who reported the synthesis of AZT 5´-triphosphate mimics 123.108 Thus, reaction of AZT with 2-chloro-4H-1,3,2-benzodioxaphosphorin-4-one, followed by treatment of the phosphite intermediate 121 with a bisphosphonate salt, yielded the cyclic triphosphate analogues 122, which were subjected to boronation and subsequent hydrolysis to give the AZT 5´-α-P-borano-β,γ-bridge-modified triphosphates 123 in moderate to good yields (Scheme 49).

The synthesis of a series of 2´,3´-dideoxynucleoside 5´-α-P-borano-β,γ-(difluoromethylene)triphosphates, ddN 5´-αB-β,γ-CF2 TPs, and their inhibitory properties toward HIV-1 RT have also been studied (Scheme 51).109 Compounds 126 were prepared according to a procedure similar to that used for the AZT 5´-αB-β,γ-CF2 TPs (see Scheme 49). However, this synthetic route did not apply well to nucleosides having an exocyclic amino group; therefore, an alternative synthetic procedure was developed (Scheme 52). The course of the reactions was similar to that in Scheme 49, except that bis(diisopropylamino)phosphites were the active phosphite intermediates. Treatment of 127 with bis(tributylammonium) difluoromethylenediphosphonate presumably yielded the cyclic intermediates 128, which were subsequently subjected to boronation and hydrolysis to give 129. All the resulting ddN 5´-αB-β,γ-CF2 TPs demonstrated essentially the same level of inhibition of HIV-1 RT as the corresponding ddNTPs. Given their enhanced biological stability, these compounds represent a new class of potential antiviral agents.109

Scheme 51. Synthesis of the triphosphate mimics of antiviral 2',3'-dideoxynucleosides.

Scheme 52. Synthetic pathway for the preparation of ddN 5′-αB-β,γ-CF2 TPs.
The modified Ludwig-Eckstein protocol with methylenebisphosphonic acids was also successfully employed for the preparation of AZT tetraphosphate mimics 132, as depicted in Scheme 53. Oxidation of the phosphite intermediates with sulfur, followed by condensation of 130 with the H-phosphonate monoesters 131 (in the presence of excess S8), opens a route to nucleotide analogues AZTpSpCX2ppSA containing two outer thiophosphate moieties and a central bisphosphonate, and to related compounds AZTpSpCX2ppSAZT with AZT at both ends. This family of compounds is a hydrolysis-resistant version of the AZTppppA that results from excision of AZT by AZT-resistant HIV reverse transcriptase and therefore may be useful in drug design.110

Scheme 53. Synthesis of AZT tetraphosphate mimics via the modified Ludwig-Eckstein procedure.

Synthesis involving nucleophilic cleavage of cyclic trimetaphosph(on)ates

Kenyon et al. reported the synthesis of the anhydride of bismethylenetriphosphonic acid 133 via DCC-mediated condensation of bismethylenetriphosphonic acid, HO(O)P[CH2P(O)(OH)2]2.111 The same group described the ring-opening of 133 by 2´,3´-O-isopropylideneadenosine in a polar aprotic solvent at elevated temperature in the presence of a strong acid. This led to a simple synthesis of the α,β:β,γ-bismethylene analogue of ATP 134, as shown in Scheme 54.112 Attempts to use this approach for the synthesis of the phosphonate analogue of thymidine triphosphate were unsuccessful.25

Scheme 54. Synthesis of α,β:β,γ-bis-CH2-ATP via the anhydride of bismethylenetriphosphonic acid.

Another approach to methylene-modified nucleoside polyphosphates via ring-opening reactions is based on the reactions of P1,P3-cyclic nucleoside trimetaphosph(on)ates, which can be prepared by treatment of the corresponding nucleoside triphosph(on)ate analogues with carbodiimides, or by phosphitylation of the 5´-hydroxy group of 2´,3´-protected nucleosides followed by double substitution of the salicylate with bisphosphonates and oxidation of the resulting cyclic phosphite (see also Section iv). For example, an effective method for the synthesis of bis-nucleoside tetraphosphate analogues 135 involves the treatment of nucleoside trimetaphosph(on)ates with nucleoside monophosphates or monothiophosphates (Scheme 55).77

Scheme 55. Synthesis of bis-nucleoside tetraphosphate analogues from nucleoside P1,P3-cyclic triphosph(on)ates.

Scheme 56. Synthesis of methylenebisphosphonate analogues of P1,P2-disubstituted pyrophosphates via bicyclic trisanhydrides.

Pankiewicz and co-workers developed a synthesis of the novel nucleoside bicyclic trisanhydrides 136 by the reaction of nucleoside 5´-methylenebisphosphonates with DCC. These authors took advantage of the generated anhydrides as intermediates in the synthesis of methylenebisphosphonate analogues of P1,P2-disubstituted pyrophosphates. Thus, the reaction of 136 with benzyl 2´,3´-O-isopropylidene-β-D-ribose followed by hydrolysis and deprotection afforded the ADP-ribose analogue 137 in 72% overall yield. Treatment of 136 (R = CAc) with N-acetylethanolamine or 1,2-dipalmitoyl-sn-glycerol gave the methylenebisphosphonate analogues of CDP-ethanolamine and CDP-DAG (138 and 139, respectively) in high yield (Scheme 56).113

Enzyme-mediated reactions

Several review articles highlight developments in this field.17,55,114
Enzymatic phosphorylation was shown to be an ideal method for certain applications. However, enzyme-mediated reactions do not lend themselves to routine use for the synthesis of nucleotides with unnatural base, sugar, and polyphosphate residues. Worth quoting are Burgess and Cook, who noted that "enzyme-mediated syntheses of unnatural nucleoside triphosphates are only cost-effective if the expected advantages of this approach are likely to offset the costs of the additional development time required".55 The merits of enzymatic methods are their minimal need for protection / deprotection steps and the regio- and stereochemical unambiguity of biocatalytic reactions. A combination of chemical and enzymatic methods has particular utility in cases where the reactivity of the nucleobase precludes the use of electrophilic phosphorylation reagents. An elegant example of this technique was reported by Slama and co-workers, who carried out the synthesis of the two novel cyclic ADP-ribose (cADPR) analogues 142 and 143 via cyclization of the corresponding linear nicotinamide adenine dinucleotides (NADs) 140 and 141, catalyzed by Aplysia californica ADP-ribosyl cyclase (Scheme 57).102

Scheme 57. Aplysia ADP-ribosyl cyclase-catalyzed synthesis of cADPR[CH2] (142) and 3-deaza-cADPR[CH2] (143) from their linear precursors.

Prior to this study it had been shown that cyclic ADP-ribose (cADPR) is a natural metabolite of NAD and a potent calcium-releasing second messenger.115 Substitution of the bridging pyrophosphate oxygen with a methylene group resulted in compounds that are full agonists, but with decreased agonist potency. These nucleotide analogues can be useful as a starting point for the development of membrane-permeant cADPR prodrugs.102

Various kinases have been used in biocatalytic conversions of nucleoside diphosphates to the corresponding triphosphates.30,51,116,117 In the example depicted in Scheme 58, the synthesis of α,β-methylene-2´-deoxynucleoside 5´-triphosphates involves preparation of 2´-deoxynucleoside 5´-diphosphate precursors followed by an enzymatic γ-phosphorylation.118 Enzymatic phosphorylation has been shown to be more efficient than the chemical approach for the preparation of α,β-methylene-dNTPs. The α,β-methylene dNDP analogues examined are poor substrates for pyruvate kinase (PK); therefore, the authors employed the substrate-nonspecific nucleoside diphosphate kinase (NDPK). All the synthesized α,β-methylene-dNTPs were found to be potent inhibitors of polymerase β, with Ki values ranging from 1 to 5 μM.

McKenna and co-workers have recently described the synthesis of α,β-difluoromethylene deoxynucleoside 5´-triphosphates (α,β-CF2 dNTPs, N = A or C) using a modified chemical-enzymological approach.16 They first converted dA or N4-benzoyl-dC to the corresponding 5´-tosylates 144 or 147, respectively, by reaction with tosyl chloride in pyridine. The tosylates were converted to the dNTP α,β-CF2 analogues 145 or 148 via condensation with the tris(tetrabutylammonium) salt of difluoromethylenebisphosphonic acid. Phosphorylation to the dNTP analogues 146 or 149 was achieved using nucleoside diphosphate kinase and a catalytic amount of ATP, regenerated with 2.5 equiv of phosphoenolpyruvate (PEP) and pyruvate kinase (PK) in 50 mM HEPES buffer (Scheme 59). The latter modification renders the use of an affinity column to purify the product from excess ATP unnecessary.
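To make the "catalytic amount of ATP" point concrete, a back-of-the-envelope turnover count is sketched below; the quantities are hypothetical, since the text does not state the exact ATP charge, only the 2.5 equiv of PEP.

```python
# Sketch (hypothetical quantities): turnover of the catalytic ATP pool in the
# NDPK/PK/PEP regeneration system described above. Each product-forming step
# converts ATP to ADP, which PK re-phosphorylates at the expense of PEP.
product_umol = 100.0   # target dNTP analogue to be formed
atp_umol = 5.0         # catalytic ATP charged (assumed 5 mol%)
pep_equiv = 2.5        # PEP excess relative to product, as stated in the text

turnovers = product_umol / atp_umol
print(f"Each ATP molecule is recycled ~{turnovers:.0f} times; "
      f"PEP supplied = {pep_equiv * product_umol:.0f} umol")
```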
Conclusion

In 2016, gem-bisphosphonates celebrated 45 years of application in medicinal chemistry, but in view of the burgeoning interest in bisphosphonate drugs and the concomitant desirability of expanding the structural scope of this class, it is not surprising that the synthesis of nucleoside polyphosphate analogues containing the P-CXY-P structural motif remains a challenging topic, and the development of highly efficient methodologies for the synthesis of these species is of significant importance in biology and medicine. Landmarks in the development of the contemporary chemistry of bisphosphonate analogues of nucleoside polyphosphates include Blackburn's synthesis of β,γ-fluoromethylene-bridged analogues of adenosine triphosphate and guanosine triphosphate (1984), Wang's synthesis of 2´,3´-dideoxynucleoside 5´-α-P-borano-β,γ-(difluoromethylene)triphosphates (2005), Prakash's synthesis of fluorinated deoxynucleoside analogues based on bis(difluoromethylene)triphosphoric acid (2010), Pankiewicz's synthesis of the mycophenolic adenine dinucleotide as a potent inhibitor of hIMPDH and of leukemia K562 cell proliferation (2011), and McKenna's and Goodman's synthesis of the first individual β,γ-CHX-dGTP diastereomers [(R)- or (S)-CHX, where X is F or Cl] and the determination of their structures in ternary complexes with DNA polymerase β (2012). The regio- and diastereoselective synthesis of nucleoside polyphosphate analogues with CXY-modified phosphate chains remains an area of great promise, and much attention is likely to be focused on this problem in the coming years. Moreover, further studies to obtain details concerning the interactions of such species with enzymatic binding partners are likely to be the next challenge in bioorganic chemistry.

Figure 1. Structures of bisphosphonate Ap4A analogues presenting a new and promising class of antiplatelet drugs.

Scheme 41. Electrophilic phosphorylation of nucleosides by the Yoshikawa approach.

Valery Kukhar was born in Kiev, Ukraine, in 1942. He graduated from the Dnepropetrovsk Institute of Chemical Technology in 1963 and received his Cand. Chem. Sci. degree in 1967 under the supervision of Professor Alexander Kirsanov at the Institute of Organic Chemistry of the National Academy of Sciences of Ukraine. He received his Doctor of Chemistry degree in 1974 from the Institute of Organic Chemistry. In 1978-1988 he was the Chief of the Chemical Department of the National Academy of Sciences of Ukraine (NASU). Since 1987, he has been Director of the Institute of Bioorganic Chemistry and Petrochemistry of NASU. Professor Valery Kukhar is a member of the National Academy of Sciences (1985), and he was President of the Ukrainian Chemical Society from 1992 to 2002. His research interests concentrate mainly on organophosphorus and organofluorine chemistry. He is the author and editor of 6 books, including Chemistry of Fluorine-Containing Amino Acids (1994) and Aminophosphonic and Aminophosphinic Acids: Chemistry and Biological Activity (2000). He was a recipient of the GLOBAL-500 Prize (UNEP, 1993), the San-Valentino Award (World Federation of Scientists, 1999), and the Ukrainian State Award in Science & Technology (1999). Valery Kukhar is a member of the OPCW Scientific Advisory Board and of the International Advisory Group for the Chernobyl Shelter Fund, EBRD.
Development of Norelgestromin Dissolving Bilayer Microarray Patches for Sustained Release of Hormonal Contraceptive

Microarray patches (MAPs) offer a noninvasive and patient-friendly drug delivery method, suitable for self-administration, which is especially promising for low- and middle-income country settings. This study focuses on the development of dissolving bilayer MAPs loaded with norelgestromin (NGMN) as a first step towards a potential drug delivery system for sustained hormonal contraception. The fabricated MAPs were designed with needle lengths appropriate to penetrate the stratum corneum while remaining minimally stimulating to dermal nociceptors. Ex vivo assessments showed that the MAPs delivered an average of 176 ± 60.9 μg of NGMN per MAP into excised neonatal porcine skin, representing 15.3 ± 5.3% of the loaded drug. In vivo pharmacokinetic analysis in Sprague Dawley rats demonstrated a Tmax of 4 h and a Cmax of 67.4 ± 20.1 ng/mL for the MAP-treated group, compared to a Tmax of 1 h and a Cmax of 700 ± 138 ng/mL for the intramuscular (IM) injection group, with a relative bioavailability of approximately 10% for the MAPs. The MAP-treated rats maintained plasma levels sufficient for therapeutic effects for up to 7 days after a single application. These results indicate the potential of NGMN-loaded dissolving bilayer MAPs, with further development focused on extending the release duration and improving bioavailability for prolonged contraceptive effects.

Introduction

According to recent estimates by the United Nations, approximately 164 million women who want to avoid pregnancy are not using safe, modern contraception [1]. Additionally, nearly 85 percent of these women live in low- and middle-income countries [2]. Almost half of all pregnancies globally (121 million) are unintended, which may lead to significant health, economic, and psychosocial costs. Many factors influence a woman's ability to use contraception, including societal and health system- and product-related factors [3].

Recent analysis of unmet need revealed that a lack of awareness of and access to contraception are no longer cited as the leading causes of poor contraceptive usage [4]. Additionally, although a range of both short-term and long-acting contraceptive methods exist, high rates of method discontinuation are being observed [5]. Women who have access to contraception may not use it, or may discontinue use, for several reasons, including changes in relationship status or fertility intentions, concerns over side effects, and opposition from others [5]. Thus, the definition of unmet need should include both women who do not have access to appropriate preventive methods and those who are dissatisfied with their current method [6].
Among the reversible contraceptive methods, hormonal drug delivery systems represent the broadest array of products, including multiple types of oral pills, injectables, and transdermal patches, as well as vaginal rings, implants, and hormone-releasing intrauterine devices [1,7]. These products differ in the types of hormones delivered and the dosing regimen. User-initiated methods such as oral pills, transdermal patches, and vaginal rings require daily, weekly, or sometimes monthly dosing. Injectables provide 2 to 3 months of protection and in most countries require a clinic visit for administration, but can also be self-administered. Methods such as intrauterine devices and implants provide long-term protection (3 to 8 years) and require a provider to insert the product [8,9].

Nonetheless, there remain product-specific concerns that may contribute to nonuse, noncompliance, or discontinuation of these methods, which affects their overall success rates. For example, when considering options that require daily use for effective protection, like oral pills, the cost of the medication and strict adherence requirements are common challenges. Concerns regarding other longer-acting methods include cost, injection site pain, the need for frequent clinic visits (for injectables), and the need for surgical procedures to insert and remove implants. Since the effectiveness of user-initiated methods depends on patient compliance and adherence to product use, new drug delivery systems are needed to address user concerns and provide women and girls with increased options of contraception that fit their needs and lifestyles [10].

Microarray patches (MAPs), also known as microneedle patches, are under development for the delivery of drugs and vaccines, including for the prevention of pregnancy [11][12][13]. Dissolving MAPs, in particular, consist of an array of water-soluble or biodegradable polymers and the drug(s), which dissolve and release their payload upon exposure to interstitial fluid in the skin [14,15]. MAPs are minimally invasive devices that are inserted into the skin through applied pressure, facilitating drug delivery in a way that is typically perceived as less painful than needles and syringes [16][17][18].

Contraceptive MAPs can be formulated for the controlled release of the drug and are considered easy to use [12,19]. Once a user has been trained to apply MAPs correctly, they can potentially be self-applied at home whenever subsequent dosing is required [20]. This is especially beneficial for women and girls in low- and middle-income countries, where there is great need for contraceptive options that do not require frequent health care visits, as access to health facilities and trained health care providers may be limited, particularly in more rural areas.
There are studies in the literature that support the potential feasibility of MAPs as a long-acting contraceptive delivery platform. One study highlighted the possibility of a controlled release of levonorgestrel for 6 months in vitro through the development of a core-shell MAP made from biodegradable poly(lactic-co-glycolic acid) (PLGA) and polylactic acid (PLA) polymers [21]. Another study demonstrated preclinical in vivo success in the development of contraceptive MAPs that can deliver a sustained release of levonorgestrel through the detachment and embedding of biodegradable PLGA and PLA microneedles into the skin for at least 1 month [22]. Other studies have also highlighted the use of other hormonal contraceptive drugs, including etonogestrel and progesterone, as well as biodegradable materials, including silk fibroin, polyvinylpyrrolidone (PVP), polyvinyl alcohol (PVA), and hydroxypropyl cellulose, to fabricate MAPs with a range of other delivery efficiencies and durations of protection [23][24][25].

Considering the current contraceptive MAP landscape, we designed a dissolving bilayer MAP that aimed to deliver an in vivo sustained release of norelgestromin (NGMN) for a 1- to 3-month duration of protection. Feedback on the ideal duration provided by end users in several studies that assessed MAP product preferences and potential acceptability influenced this target; women in several low- and middle-income countries expressed a desire for MAPs that could protect users for up to 1, 3, or 6 months [26][27][28]. Additionally, NGMN was chosen as the candidate drug for this MAP, since it is already marketed in a conventional transdermal patch format, as Mylan Pharmaceuticals' Xulane ® (delivered with ethinyl estradiol), potentially simplifying the regulatory approval process for such a product. However, unlike the Xulane patch, which was designed to be worn and reapplied every week [29], our MAP was designed to be removed after a short application period of 20 min to 1 h, while still aiming to maintain long-term therapeutic effects.

Materials

Norelgestromin was purchased from Toronto Research Chemicals, Inc. (Toronto, ON, Canada). PVA of molecular weight 9 to 10 kDa, 80% hydrolyzed (PVA 10K), and of molecular weight 31 to 50 kDa, 87% to 89% hydrolyzed (PVA 50K), was purchased from Sigma-Aldrich (Dorset, UK). PVP K-29/32 of molecular weight 58 kDa was provided by Ashland (Kidderminster, UK). A liquid silicone elastomer mix was purchased from Nusil Technology (Buckinghamshire, UK). Ultrapure water was obtained from a water purification system (Elga PURELAB DV 25, Veolia Water Systems, Dublin, Ireland). All other chemicals and materials were of analytical reagent grade and supplied by Sigma-Aldrich.
Fabrication of NGMN-Loaded Bilayer Dissolving MAPs

The overall manufacturing process is illustrated in the steps below (Figure 1). First, NGMN (100 mg) was mixed with 187.5 mg of PVA 10K (40% w/w), 187.5 mg of PVP K-29/32 (40% w/w), and 525 mg of water. This mixture was then homogenized with two metal beads in a TissueLyser LT (QIAGEN ® , Manchester, UK) at 50 Hz for 30 min to create a uniform drug suspension. The first layer was cast with this NGMN mixture into MAP molds (needle density of 16 × 16; cuboidal needles 900 µm in height, with a 300 µm column part and a 600 µm pyramidal tip part, a base width of 300 µm, and interspacing of 100 µm). Subsequently, the MAP molds were placed in a positive pressure chamber for 3 min at a pressure of 5 bars, after which the excess mixture was scraped off. The molds were then placed in the positive pressure chamber for 30 min before being left to dry at ambient temperature for 24 h. Afterward, the drug-free second layer was cast with a mixture of PVP K-29/32 (20% w/w) and PVA 50K (15% w/w) hydrogels (50:50 w/w). The MAP molds were then placed in a positive pressure chamber for 15 min at 5 bars. Finally, the arrays were dried at room temperature for 24 h and then at 37 °C for 24 h.
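A quick check of the dry-basis drug loading implied by this recipe, assuming the parenthetical 40% w/w values denote the concentration of the polymer hydrogel stocks rather than their share of the final blend:

```python
# Sketch: dry-basis drug loading of the needle layer from the casting recipe
# above. The 187.5 mg additions are assumed to be 40% w/w hydrogel stocks,
# so only 40% of each is polymer solids; the rest is water lost on drying.
ngmn_mg = 100.0
pva_solids_mg = 187.5 * 0.40
pvp_solids_mg = 187.5 * 0.40

dry_mass_mg = ngmn_mg + pva_solids_mg + pvp_solids_mg
loading_pct = ngmn_mg / dry_mass_mg * 100
print(f"Drug loading after drying: {loading_pct:.0f}% w/w")  # 40% w/w, consistent
# with the drug-loading figure cited in the discussion section.
```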
Physical Characterization

The NGMN microneedles were visualized with digital and scanning electron microscopes. The stereo microscope, specifically the EMZ4™ model from Leica Microsystems (Milton Keynes, UK), was used to obtain three-dimensional visualizations of the microneedles. Scanning electron microscopy (SEM) was used to achieve high-resolution images of the microneedles, offering detailed insights into their surface morphology and structural features. A High-Resolution Environmental SEM (Quanta FEG 250, FEI, Hillsboro, OR, USA) was used for the detailed characterization of the microneedles. This SEM was operated at acceleration voltages ranging from 10 to 20 kV, allowing us to achieve the optimal resolution and depth of field necessary for capturing the fine structural details of the microneedles. The use of high chamber pressure was particularly beneficial in maintaining the integrity of the samples by reducing charging effects, which is crucial for the accurate imaging of non-conductive and hydrated specimens. This feature eliminated the need for conductive coatings, thus preserving the natural state of the microneedles. The Quanta FEG 250 proved ideal for the detailed examination of the microneedles' pyramidal tips, enabling us to gather valuable insights into their sharpness and potential penetration efficiency, which are critical parameters for their effectiveness in transdermal drug delivery applications.

The compression properties of the microneedles were determined with a TA.XT2 texture analyzer (Stable Microsystems, Haslemere, UK) in compression mode, as previously reported [11,16]. The initial heights of the microneedles were first measured using a stereo microscope. Subsequently, microneedle arrays were affixed to the movable cylindrical probe of the texture analyzer using double-sided adhesive tape and forced by the test station against a flat aluminum block at a rate of 0.5 mm/s for 30 s and a force of 32 N (0.088 N/needle). Pretest and post-test speeds were specified as 1 mm/s, and the trigger force was specified as 0.049 N. Microneedle heights were determined again using the stereo microscope, and the percentage reduction in height following the application of the axial compression load was calculated using Equation (1):

% height reduction = (H0 − H) / H0 × 100 (1)

where H0 and H are the mean microneedle heights before and after compression, respectively.
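A minimal sketch of Equation (1) in code, with a hypothetical post-compression height chosen to give a reduction of a few percent, in line with the results reported below:

```python
# Sketch: percentage height reduction per Equation (1).
h_before_um = 900.0   # nominal needle height before the 32 N load
h_after_um = 851.0    # post-compression height (hypothetical value)

reduction_pct = (h_before_um - h_after_um) / h_before_um * 100
print(f"Height reduction: {reduction_pct:.1f}%")  # ~5.4%
```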
The insertion properties of the microneedle arrays were analyzed with the same setup as the mechanical strength test, by lowering the microneedle arrays onto a stack of eight layers of Parafilm M ® (Bemis, Inc., Soignies, Belgium).

Analytical Method

An HPLC method was developed and validated to quantify NGMN under routine analysis conditions. In total, 10 mg of NGMN was weighed and dissolved in 2 mL of methanol, then diluted over the range of 0.5-50 µg/mL for the NGMN calibration curve, covering the concentrations detected in the in vitro study. Analyses of drug samples were performed on the Agilent 1260 Infinity II LC system (Agilent Technologies UK Ltd., Stockport, UK). The separation and quantification of the drug were accomplished using a Symmetry ® C18 column (4.6 mm × 150 mm, 5 µm) (Waters; Milford, MA, USA) with isocratic elution. The eluent consisted of acetonitrile (mobile phase A) and water containing 0.1% v/v triethylamine adjusted to pH 6.6 (mobile phase B) at a ratio of 35:65 (v/v). The flow rate was set to 0.8 mL/min. The injection volume was 50 µL, and the column temperature was maintained at 40 °C. Detection was performed at a wavelength of 254 nm. The total run time for the analysis was 10 min. The final method had an R2 value of 1 and a limit of quantitation (LOQ) of 0.25 µg/mL. The method had an inter-day variability of 1.62% and an intra-day variability of 0.83%, indicating good precision. Furthermore, the method exhibited stability over 3 days at room temperature, with an assay percentage of 98.58 ± 0.85%, indicating the method's reliability and robustness for NGMN detection.

Drug Content and Deposition Study

The drug content of the microneedle arrays was determined by dissolving them in 5 mL of deionized water, stirring at 200 rpm for 30 min, and then transferring 100 µL of the resultant suspension into 1.5 mL Eppendorf Tubes ® (Eppendorf SE, Hamburg, Germany). The suspension was then diluted with 900 µL of acetonitrile to dissolve the NGMN and ensure the precipitation of the PVA and PVP polymers, and vortexed for 5 min. Subsequently, the mixture was centrifuged at 14,800 rpm for 15 min. The amount of NGMN in the resulting supernatants was analyzed using the high-performance liquid chromatography-ultraviolet method.

NGMN skin deposition was studied using full-thickness, excised, and shaved neonatal porcine skin. The NGMN microneedle array was inserted into the skin using thumb pressure for 30 s and secured using a 20 g stainless steel weight for 24 h, after which the drug was extracted from the skin and quantified.
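The calibration workflow behind the reported R2 and LOQ can be sketched as a simple linear fit; the peak areas below are hypothetical placeholders, not the study's data:

```python
# Sketch: linear calibration for the NGMN HPLC-UV assay described above.
# Standard concentrations span the validated 0.5-50 ug/mL range; the peak
# areas are hypothetical, roughly proportional values.
import numpy as np

standards = np.array([0.5, 1, 5, 10, 25, 50])                     # ug/mL
peak_area = np.array([12.4, 24.9, 124.8, 249.5, 623.9, 1248.1])   # mAU*s

slope, intercept = np.polyfit(standards, peak_area, 1)
r_squared = np.corrcoef(standards, peak_area)[0, 1] ** 2

def concentration(area: float) -> float:
    """Back-calculate an NGMN concentration (ug/mL) from a sample peak area."""
    return (area - intercept) / slope

LOQ = 0.25  # ug/mL, as validated for this method
sample = concentration(88.0)
print(f"R^2 = {r_squared:.4f}; sample = {sample:.2f} ug/mL; above LOQ: {sample >= LOQ}")
```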
In Vivo Study

The Queen's University Belfast Committee of the Biological Services Unit approved the animal studies. All researchers conducting the animal work had obtained personal licenses from the UK Home Office. Female Sprague Dawley rats (Charles River Laboratories, Harlow, UK), 8 to 10 weeks of age (weight: 244 g ± 12 g for the intramuscular (IM) group and 237 g ± 6 g for the MAP group), were used to compare the pharmacokinetics of the two different drug administration methods, IM injection and MAP, with a focus on their potential for a sustained release profile. The rats were acclimatized to the animal housing conditions for 1 week prior to the experiment and separated into two cohorts of six rats each. Table 1 presents the experimental setup. For MAP application, the animals were anesthetized using gas anesthesia with 5% isoflurane in oxygen at a flow rate of 2 L/min. Maintenance anesthesia was achieved by reducing the isoflurane concentration to 2.5% v/v, with a flow rate of 2 L/min. First, the back of each animal (i.e., the intended site of application) was shaved using electric hair clippers (Remington Co., London, UK) to minimize the interference of fur during MAP application. Afterwards, the remaining fur was removed by applying a depilatory cream (Boots Smooth Care Hair Removal Cream for sensitive skin, Boots, Nottingham, UK). Following this, the rats were left for a 24 h period to allow their skin to recover and to ensure the complete restoration of the skin's barrier function before MAP application. On the following day, the MAPs were administered to the rats. The MAPs were secured in place using Microfoam™ Surgical Tape (3M, St. Paul, MN, USA), which was additionally secured using 3M Tegaderm™ film and kinesiology tape for 24 h.

To prepare the NGMN suspension for IM injection, 50 mg of NGMN was dispersed in 1 mL of 2% PVA (stabilizer) and homogenized for 30 min using a TissueLyser (Qiagen, Hilden, Germany) at 50 rpm. This prepared NGMN suspension was injected IM at a dose of 2 mg (40 µL) of NGMN per animal.

Blood samples were collected in 1.5 mL pre-heparinized microtubes over 2 weeks via tail vein bleeds following NGMN administration (either IM or using MAPs). After collection, the samples were centrifuged immediately at 2200× g for 10 min at 4 °C, and the plasma was collected and stored at −20 °C until further analysis with high-performance liquid chromatography-mass spectrometry.

Microscopy and Mechanical Characterization

The microscopy images below show that the bilayer microneedles used in this study had sharp tips and smooth surfaces (Figure 2A), implying the successful use of the manufacturing method to produce consistently defect-free bilayer structures. This was also confirmed using the SEM (Figure 2B), which showed that the NGMN microneedle shafts appeared to have a smooth surface. Drug particulates distributed within the microneedle polymeric matrix can be seen in Figure 2C.
For MAPs to successfully deliver the payload into the patient following skin application, the system must possess sufficient mechanical strength to withstand the application force and enable the insertion of the microneedles into the skin. The NGMN microneedle arrays were tested for mechanical strength via a height reduction test [30,31]. In the current work, the NGMN microneedle array displayed a height reduction of 5.4 ± 2% (shown in Figure 3a). Based on our previous work, this height reduction indicates that the microneedle array exhibited sufficient mechanical robustness to withstand the application force during insertion without buckling or fracturing on the skin surface [11,14].

Further mechanical characterization was conducted by evaluating the insertion profile of the NGMN MAP using Parafilm M as an in vitro skin model [32]. The mean thickness of a Parafilm M layer is 126 ± 7 µm. The MAP insertion studies found that the microneedles inserted within the third and fourth layers of the skin simulant model, resulting in an overall insertion depth of approximately 378 µm (shown in Figure 3b). This insertion depth would suggest the MAP could be adequately inserted into the dermal layer of the skin [33]. This would be the targeted insertion depth needed to deliver the payload, as the dermal layer is rich in microcirculation for carrying the released payload into systemic circulation.
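A minimal sketch of how the Parafilm test converts hole counts into an insertion depth; the per-layer hole percentages are hypothetical, chosen to reproduce the ~378 µm figure:

```python
# Sketch: insertion depth from the Parafilm M stack test. Each layer is
# ~126 um thick; a needle is counted as reaching a layer if it pierces it.
LAYER_THICKNESS_UM = 126
holes_pct = [100, 100, 95, 0, 0, 0, 0, 0]  # % needles piercing each layer (hypothetical)

# Depth is approximated by the deepest layer that still shows holes.
deepest_layer = max(i + 1 for i, pct in enumerate(holes_pct) if pct > 0)
print(f"Approximate insertion depth: {deepest_layer * LAYER_THICKNESS_UM} um")  # 378 um
```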
Drug Loading and Skin Deposition

Following an evaluation of the mechanical strength and insertion capabilities of the patches, a skin deposition study was conducted to evaluate the amount of drug that can be successfully delivered into the skin, as well as the delivery efficiency of the formulation. In general, the formulated patches had an overall drug load of 1150 ± 177 µg. The ex vivo skin deposition of the NGMN MAP was found to be 176 ± 60.9 µg per MAP, equating to a deposition of 15.3 ± 5.3% of the drug payload in excised neonatal porcine skin, as shown in Table 2. This low delivery efficiency may be attributed to the micronized, hydrophobic nature of the drug, indicating a less favorable deposition into the water-rich dermis [34]. The physicochemical properties of the active pharmaceutical ingredient and the particle size have a critical impact on the delivery efficiency of this dissolving MAP design. For example, in a previous research study assessing contraceptive delivery via microneedles, Nestorone ® nanosuspension-loaded microneedles demonstrated a high drug deposition of 904 µg (about 40% of the loaded dose of 2260 µg). This higher deposition could be the result of the nanosuspension formulation, which likely enhanced the solubility and bioavailability of the drug within the skin layers [35]. In contrast, the delivery efficiency of microneedles loaded with the micronized form of Nestorone was comparatively low under similar conditions, with the highest deposition recorded at 504 µg, representing about 25% of the loaded dose (2160 µg). This reduction in the deposition efficiency of the micronized form of Nestorone relative to the nanosuspension could be attributed to differences in the formulation and physical state of the drug within the microneedles.
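The delivery efficiency quoted above follows directly from the loaded and deposited amounts; a one-line check using the values reported in Table 2:

```python
# Sketch: delivery efficiency from the ex vivo deposition data reported above.
loaded_ug = 1150.0      # mean drug content per MAP
deposited_ug = 176.0    # mean NGMN deposited in excised neonatal porcine skin

efficiency_pct = deposited_ug / loaded_ug * 100
print(f"Delivery efficiency: {efficiency_pct:.1f}% of the loaded dose")  # ~15.3%
```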
The low delivery efficiency observed in our skin deposition studies can be attributed to the physicochemical properties of the drug. The hydrophobic nature of NGMN, which exhibits a logP of 4.04, likely reduced the dissolution of the drug into the skin, culminating
In Vivo Study Evaluation Following the in vitro characterization of the fabricated NGMN-loaded MAP, the formulation was evaluated further in vivo.The pharmacokinetics of NGMN following administration via MAP or IM injection are shown in Figure 4 and Table 3.The rats that received NGMN by IM injection reached peak plasma concentrations of the drug (C max of 700 ± 138 ng/mL) at 1 h (T max ).After that, NGMN levels decreased rapidly to approximately 25.7 ± 12.2 ng/mL on day 1 and then declined gradually to reach their lowest concentration of 11.2 ng/mL on day 7.By day 14, NGMN was below the limit of quantification.In contrast, the rats that received the drug via dissolving MAP exhibited a steady increase in NGMN concentration in the plasma that reached a C max of 67.4 ± 20.1 ng/mL at 4 h.Afterwards, the drug plasma levels decreased to approximately 5 ± 0.638 ng/mL on day 7 and went below the limit of quantification after day 14. in a reduction in payload delivery.In addition, the high level of drug loading (40% w/w) in this study may have contributed to the incomplete dissolution of the microneedles within the 24 h timeframe of the experiment, resulting in some of the matrix being dislodged during patch removal, and thus reducing delivery efficiency. In Vivo Study Evaluation Following the in vitro characterization of the fabricated NGMN-loaded MAP, the formulation was evaluated further in vivo.The pharmacokinetics of NGMN following administration via MAP or IM injection are shown in Figure 4 and Table 3.The rats that received NGMN by IM injection reached peak plasma concentrations of the drug (Cmax of 700 ± 138 ng/mL) at 1 h (Tmax).After that, NGMN levels decreased rapidly to approximately 25.7 ± 12.2 ng/mL on day 1 and then declined gradually to reach their lowest concentration of 11.2 ng/mL on day 7.By day 14, NGMN was below the limit of quantification.In contrast, the rats that received the drug via dissolving MAP exhibited a steady increase in NGMN concentration in the plasma that reached a Cmax of 67.4 ± 20.1 ng/mL at 4 h.Afterwards, the drug plasma levels decreased to approximately 5 ± 0.638 ng/mL on day 7 and went below the limit of quantification after day 14.The IM group had a much more rapid Cmax and a higher area under the curve (AUC) relative to the MAP-treated group.This is due to higher dose administered creating a large diffusion gradient and the administration of the drug directly in the capillary-rich muscle The IM group had a much more rapid C max and a higher area under the curve (AUC) relative to the MAP-treated group.This is due to higher dose administered creating a large diffusion gradient and the administration of the drug directly in the capillary-rich muscle tissue, which enables dissolution and the diffusion of the drug into the systemic circulation.This is further enhanced by the that muscle tissue, due to its intrinsically high metabolic rate, has a much denser network of blood capillaries relative to skin tissue [36].This would provide more circulatory surface area for NGMN to diffuse from the injection site and into the systemic circulation.In contrast, the MAP treatment group had a more delayed T max relative to the IM treatment group.This may be attributed to the fact that when NGMN was deposited into the skin via MAP application, the drug needed to first dissolve from the PVP/PVA matrix to then diffuse through the skin tissue before reaching dermal circulation. 
The Cmax and AUC for the MAP-treated group were much lower (p < 0.05) than those for the IM treatment group. This may be attributed to the lower estimated dose delivered (approximately 15%) or the lower MAP bioavailability (10.36%) for the MAP-treated group relative to the IM group. This finding is consistent with the NGMN ex vivo skin deposition studies, which suggest incomplete drug delivery from the MAPs. Unlike the complete delivery achieved with IM injections, incomplete delivery of drugs loaded within MAP tips was observed in previous studies [37]. The lower dose delivery could be caused by the "bed-of-nails" effect, which could be mitigated by further optimizing the drug load towards the microneedle tips, increasing the dissolution of the microneedle matrix, or optimizing the release of the microneedles from the patch backing. The bioavailability of drugs delivered transdermally via dissolving MAPs tends to decrease with increasing logP values [38]. The logP value of NGMN is approximately 4, indicating a low bioavailability in transdermal absorption when administered by dissolving MAPs, due to the hydrophobic nature of the micronized drug. This results in more undissolved drug remaining in the skin tissue. Therefore, enhancing solubility through techniques such as nanosuspension [35] or cyclodextrin complexation [39] may mitigate the effects of high logP values, potentially improving overall drug bioavailability.

The mechanism of MAP-based transdermal delivery can be broken down into three main steps: (i) application, (ii) dissolution, and (iii) diffusion. First, the bilayer design of the MAP allows for the initial penetration of the microneedles across the lipid-rich stratum corneum, enabling the deposition of the drug-loaded polymeric matrix into the dermis. The interstitial fluid in the dermis encounters the fast-dissolving PVA and PVP matrix of the MAP backing, leading to its rapid dissolution. The drug-loaded tips are left in the skin, exposed to the interstitial fluid. The dissolution of the drug from the surface of the MAP tips then results in the initial burst release of NGMN. The inner portion of the tips forms a drug depot in the skin, allowing for the removal of the backing after the wear time.

The release of the NGMN from the MAP results in the formation of a localized region of high drug concentration. This promotes Fickian diffusion of NGMN from the site of administration to the surrounding capillary bed, where the drug diffuses into the micro-capillaries, resulting in transdermal delivery.

The micronized form of NGMN used in the microneedles impacts its solubility and deposition. Smaller particle sizes generally enhance the dissolution rate due to the larger surface-to-volume ratio. The initial phase of drug release from the MAP may be attributed to drug dissolution occurring at the surface of the microneedle, as well as the dissolution of the smaller NGMN particles. In contrast, larger particles, as well as the payload located in the upper portion of the polymeric matrix, will undergo a much more delayed release that may result in a lower deposition due to incomplete dissolution during the application period. Our in vivo studies showed that the NGMN MAPs maintained detectable plasma levels for up to 7 days. This sustained release is attributed to the gradual dissolution of the microneedles and the continuous release of NGMN from the particulate depot into the systemic circulation.
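The Fickian step invoked above is governed by Fick's first law, with the drug flux proportional to the local concentration gradient:

$$J = -D\,\frac{\partial C}{\partial x}$$

where $J$ is the NGMN flux, $D$ its effective diffusivity in the skin, and $C$ the local concentration; the steep gradient around the dissolving depot is what drives the drug toward the capillary bed.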
NGMN plasma levels achieved using the MAPs over 7 days exceeded 0.6 to 1.2 ng/mL, which is the target level needed in humans for contraception [40]. In addition, the plasma levels exhibited by MAP-treated rats between days 2 and 7 were similar (p > 0.05) to those exhibited by IM-treated rats. The much lower AUC (p < 0.05) achieved by the MAP treatment group relative to IM injection would suggest limited systemic exposure to NGMN following patch application. These findings align with what was seen previously with Nestorone when delivered via dissolving MAPs in vivo, where sustained plasma levels were found for up to 6 days [35]. Other dissolving contraceptive MAP development efforts using levonorgestrel found that alternative formulation approaches, such as the inclusion of PLGA and/or PLA release-retarding polymers, achieved a higher dose delivery efficiency and a capability for sustained release over several months [21,41]. To advance towards the development of MAPs capable of meeting women's needs for long-acting contraception [25][26][27], further research is needed on strategies for incorporating these release-retarding polymers, either in polymeric particulate form or as part of the MAP tip matrix, and on optimizing MAP formulations for enhanced drug stability and release kinetics. Future studies should also focus on increasing the bioavailability of contraceptive drugs delivered via MAPs and on addressing the challenge of the incomplete dose delivery observed in our findings. This progression will involve detailed investigations into innovative polymer matrices and comprehensive in vivo studies to ensure long-term therapeutic efficacy and safety.

Despite our comprehensive evaluation of these MAPs at the in vitro and in vivo levels, there are several limitations associated with the current work. The use of Sprague Dawley rats as a model for human skin and systemic absorption may not fully replicate the human physiological response to NGMN delivery via MAPs. For instance, the presence of the panniculus carnosus, a muscle layer in rat skin that is absent in humans, may impact the insertion of the MAP into the skin. In addition, rat skin is intrinsically thinner than human skin, which may have resulted in an overestimation of microneedle penetration and delivery. This is further compounded by inter-species metabolic differences between rats and humans that could affect the extrapolation of the results.

Also, this study focused on a single application of MAPs. The results might differ with repeated applications, which would be more representative of actual clinical scenarios. Therefore, future studies ought to investigate the effect of repeated MAP application on the plasma profile of the drug, as this could provide a more comprehensive understanding of the long-term effects and safety of the MAPs. Another apparent limitation was the size of the patch used in this study, which had an area of 0.36 cm2, appropriate for insertion into rats. It is most likely that, in order to translate this formulation and technology into a clinical setting, a scaling up of these MAPs would be pertinent. This in turn would require the evaluation of these patches from in vivo pharmacokinetic and manufacturability perspectives. In addition, these larger patches would also require end-user evaluation to determine any human factors that would promote or hinder the acceptance or usage of these patches.
Conclusions

The research findings demonstrate the fabrication of dissolving bilayer MAPs for the delivery of NGMN, evaluating a potential approach to the delivery of hormonal contraception. Microscopic and mechanical characterization confirmed the integrity of the microneedle structures, indicating their suitability for skin penetration. The microneedles exhibited minimal height reductions under compression, ensuring their mechanical robustness during insertion. Ex vivo skin deposition studies revealed that the drug deposition efficiency was low; the in vivo study confirmed this and found that the MAP provided a sustained delivery of the drug over a period of only 7 days. Therefore, improvements would be needed to reach the targeted 1- to 3-month duration of efficacy, such as incorporating polymers that can prolong drug release and enhance the MAP's drug delivery efficiency.

Figure 2. (A) Digital microscopy image. Representative SEM images of tip-loaded microneedles containing NGMN, showing (B) a fully formed microneedle array and (C) a needle tip matrix.

Figure 3. (a) Assessment of the mechanical properties of NGMN MAPs using a texture analyzer, measured by the percentage of height reduction in the MAP shaft under a 32 N compression force against an aluminum block (mean ± standard deviation [SD], n = 3). (b) Evaluation of NGMN microneedle penetration in eight Parafilm M layers as an artificial skin model, indicating the percentage of holes and corresponding insertion depths achieved with a 32 N force (mean ± SD, n = 3).

Figure 4. Pharmacokinetic profile of NGMN in Sprague Dawley rats following administration (2 mg NGMN suspension per rat) by IM injection or by applying four dissolving NGMN MAPs. Data are reported as the means ± SDs (n ≥ 3).

Table 1. Rat cohorts, treatment groups, and doses applied per rat.

Table 2. Drug content (mg/MAP) and amount of NGMN deposited in skin following MAP application (mean ± SD, n ≥ 3).

Table 3. The pharmacokinetic parameters of NGMN in Sprague Dawley rats following administration of an IM injection or application of four dissolving NGMN MAPs. Data are reported as the means ± SDs, n = 6.
Phytoplankton and dimethylsulfide dynamics at two contrasting Arctic ice edges

Abstract. Arctic sea ice is retreating and thinning, and its rate of decline has steepened in the last decades. While phytoplankton blooms are known to propagate seasonally along the ice edge as it recedes from spring to summer, the substitution of thick multi-year ice (MYI) with thinner, ponded first-year ice (FYI) represents an unequal exchange when considering the roles sea ice plays in the ecology and climate of the Arctic. Consequences of this shifting sea ice for the phenology of phytoplankton and the associated cycling of the climate-relevant gas dimethylsulfide (DMS) and its precursor dimethylsulfoniopropionate (DMSP) remain ill constrained. In July–August 2014, two contrasting ice edges in the Canadian High Arctic were explored: a FYI-dominated ice edge in Barrow Strait and a MYI-dominated ice edge in Nares Strait. Our results reveal two distinct planktonic systems and associated DMS dynamics in connection with these diverging ice types. The surface waters exiting the ponded FYI in Barrow Strait were characterized by moderate chlorophyll a (Chl a).

1 Introduction

DMSP is thought to serve several physiological functions in unicellular algae, including osmoregulation, cryoprotection, scavenging of free radicals, and overflow of carbon and sulfur (Stefels et al. 2007). The production of DMSP by unicellular algae is highly species-specific, with Bacillariophyceae and Dinophyceae/Prymnesiophyceae being lesser and greater producers, respectively (Keller et al. 1989). The DMSP-to-DMS conversion involves the entire microbial food web: part of the DMS is produced directly by phytoplankton, while another part is produced indirectly via the release of DMSP in the aqueous environment and its subsequent degradation by bacterioplankton (Kiene et al. 2000; Simó 2001; Stefels et al. 2007). The relative importance of these processes is unclear; however, abiotic stressors involving sudden modifications in light intensity, salinity, and temperature may all contribute to the enhanced direct and indirect production of DMS by plankton communities (Sunda et al. 2002; Toole and Siegel 2004).

In the Arctic, peaks in atmospheric methane sulfonic acid (MSA, a DMS proxy) have frequently been measured in spring and in mid-summer (Sharma et al. 2012). The spring peaks have been attributed to phytoplankton blooms at low latitudes, while the mid-summer peaks have been related to more localized high-latitude ice edge blooms (Sharma et al. 2012; Becagli et al. 2016). This interpretation is consistent with the elevated DMS concentrations generally measured at or close to ice edges in association with developing phytoplankton blooms in the North Atlantic and European sectors of the Arctic (Matrai and Vernet 1997; Galí and Simó 2010; Park et al. 2018). The high DMS concentrations measured at ice edges have been associated with a combination of factors including: 1) an increase in phytoplankton biomass and hence DMSP concentrations; 2) the selective growth of strong DMSP and DMS producers such as the prymnesiophyceae Phaeocystis; 3) a physiological stimulation of DMS production due to the increase in irradiance; and 4) an increase in bacterial activity (Galí and Simó 2010).

In the eastern Canadian High Arctic, only a fragmented picture of summer oceanic DMS distributions was available until recently, and none of the snapshots captured the presumably most biologically productive time of July–August: an average of 1.1 nmol DMS L⁻¹ in the North Water and Nares Strait in June (Bouillon et al.
2002) and in October/November (Luce et al. 2011). In spite of the recurring mid-summer atmospheric MSA peak measured at Alert, evidence of high oceanic DMS concentrations associated with summer phytoplankton blooms remained scarce for this part of the Arctic until very recently (Mungall et al. 2016; Collins et al. 2017; Jarníková et al. 2018; Abbatt et al. 2019).

The rapid shifting of the Arctic icescape bears consequences for Arctic primary production and associated DMS dynamics that are still poorly understood. While observations from the field are sparse (Wassmann et al. 2011) and challenging due to the remoteness and harshness of the environment as well as the dynamic nature of ice and its margins (Sakshaug and Skjoldal 1989), it is critical that the impacts of ongoing physical changes on the dynamics of bloom-forming microorganisms and their production of DMS be better constrained. The main objective of this study was to assess and compare mid-summer (July–August) phytoplankton and DMS dynamics at two contrasting ice edges in two regions of the eastern Canadian Arctic: the Barrow Strait first-year ice (FYI) dominated ice edge and the Nares Strait multi-year ice (MYI) dominated ice edge. The opportunity was also taken to investigate the ice-free waters of Lancaster Sound and the North Water (northern Baffin Bay), contiguous to the Barrow Strait and Nares Strait ice edge regions, respectively. Our results reveal two distinct planktonic systems and ensuing DMS dynamics related to the presence of dissimilar icescapes.

2 Study area

The two Straits (Barrow and Nares) were characterized by distinct and well-defined ice edges at the time of sampling (Fig. 2). In Barrow Strait, the ice edge was located at the western end of Lancaster Sound, perpendicular to the channel, between Devon Island and Somerset Island (Fig. 2a). The ice pack was mostly composed of ca. 1 m thick FYI covered by melt ponds over approximately 40% of its total surface (Fig. 3). Soon after our arrival in the study area, a large lead developed south of Griffith Island (south of Cornwallis Island), pushing the detached part of the ice pack slightly eastward (Fig. 2). The BS transect was conducted along the ice edge in this lead. In Barrow Strait, the net surface circulation is predominantly eastward at 10–15 cm s⁻¹ in mid-summer on the south shore, with a mild westward current of ca. 5 cm s⁻¹ on the north shore (Lemon and Fissel 1982; Prinsenberg and Bennett 1987; Pettipas et al. 2008; Michel et al. 2015). This region stands as an important waterway for the transport of fresher Pacific waters, originating from the inflow through Bering Strait, towards the North Atlantic (Jones et al. 2003). The water sampled across this transect was thus mostly exiting the ice pack, which extended several km westwards.

In July 2014, an ice arch formed in the Kennedy Channel of Nares Strait, leaving Kane Basin and the North Water region to the south largely ice-free. The comparison of the position of the ice arch in July 2014 with a decade of remotely sensed data (1997–2007) shows that it formed that year approximately 130 km north of its median historical position (near 79°N) in southern Kane Basin (Kwok et al. 2010), in line with recent trends (2006–2010) of more northern ice bridge formation in the area (Ryan and Münchow 2017). By the time of the sampling (3–6 August), it had retreated to the head of Kennedy Channel
(Fig. 2A), leaving a 350 km stretch of open water north of Smith Sound (Burgers et al. 2017). As expected for this part of the Arctic Ocean, the ice pack north of the ice arch was composed of MYI (Fig. 2C). The presence of MYI (5+ years) north of Nares Strait, near Robeson Channel, was confirmed by the EASE-Grid Sea Ice Age, Version 3 data set, which compiles weekly estimates of sea ice age in the Arctic between 1978 and 2017. Data from 2014, week 31 (28 July–3 August) and week 32 (4–10 August), were consulted for the purpose of this study. Beyond the MYI and to the south, a band of thick (>1.2 m) FYI, without any melt ponds, was also present (Canadian Ice Service (CIS) analysis, Fig. 2B). Because Nares Strait represents a major outflow path for water exiting the Arctic Ocean (Jones et al. 2003; Münchow et al. 2007; McGeehan and Maslowski 2012), the water sampled along the NS transect was exiting the northern MYI edge as it flowed southbound towards Baffin Bay.

Physical, chemical and biological measurements

Water samples were collected at 5 to 9 depths, from the surface down to a maximum of 100 m, with 12-L Niskin-type bottles mounted on a General Oceanics 24-bottle rosette. The rosette sampler was equipped with a Sea-Bird 911plus conductivity–temperature–depth (CTD) probe and a fluorescence sensor (Seapoint). Water for Chl a concentration analysis was collected in 1-L brown polyethylene bottles (Nalgene) and then filtered onto a 25-mm filter (Whatman GF/F). Phytoplankton pigments on the filter were extracted in 90% acetone and stored at 4°C in the dark for 18–24 hours. Fluorescence of the extracted pigments was then measured using a Turner Designs 10-AU fluorometer following the acidification method described by Parsons et al. (1984). Chl a concentrations were calculated from the equation published by Holm-Hansen et al. (1965).

Samples for phytoplankton taxonomy were collected at the surface and at the subsurface chlorophyll maximum (SCM) and preserved in an acidic Lugol's solution (final concentration of 0.4% v:v; Parsons et al. 1984). Identification and enumeration of cells > 2 µm were conducted with a Zeiss Axiovert 10 inverted microscope following the Utermöhl and Lund method (Lund et al. 1958; Parsons et al. 1984). A minimum of 400 cells was enumerated to ensure statistical significance.

Samples for DMS were collected in 23-ml serum vials and allowed to gently overflow, avoiding any bubbling, before capping. Concentrations of DMS were determined onboard within 2 hours of collection using purging, cryotrapping, and sulfur-specific gas chromatography (GC, Varian 3800) as described by Lizotte et al. (2012), with further modifications described here. Briefly, 15 to 20-ml subsamples were gently filtered through a GF/F syringe filter and immediately injected into a sparging vessel. The DMS was stripped from the liquid samples using a constant flow of Ultra High Purity (UHP) helium (He), and the volatile DMS was trapped in a Teflon loop held in liquid N2. Gaseous samples were then analyzed using a Varian 3800 gas chromatograph equipped with a Pulsed Flame Photometric Detector (PFPD) and a capillary column (DB-5ms, 60 m × 320 µm × 1 µm). The samples were calibrated against microliter injections of a DMS standard prepared with a permeation tube (certified calibration by Kin-Tek Laboratories Inc.) maintained at 40°C and diluted with UHP He.
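As an aside, the acidification-corrected fluorometric Chl a computation cited above reduces to a short formula. The sketch below assumes the standard Holm-Hansen form; the calibration factor and acid ratio shown are instrument-specific placeholders, not values from this study.

```python
# Acidification-corrected fluorometric Chl a (Holm-Hansen-type formula).
# f_d (calibration factor) and tau (acid ratio) are instrument constants;
# the numbers used in the example call are placeholders only.

def chl_a_ug_per_l(rb, ra, f_d, tau, v_extract_ml, v_filtered_l):
    """Chl a (ug/L) from fluorescence before (rb) and after (ra) acidification."""
    extract_chl = f_d * (tau / (tau - 1.0)) * (rb - ra)   # ug/L in the extract
    return extract_chl * (v_extract_ml / 1000.0) / v_filtered_l

# Hypothetical reading: 10 mL acetone extract from 0.5 L of filtered seawater
chl = chl_a_ug_per_l(rb=65.0, ra=28.0, f_d=0.12, tau=2.2,
                     v_extract_ml=10.0, v_filtered_l=0.5)
print(f"Chl a = {chl:.2f} ug/L")
```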
Duplicate tubes for total DMSP (DMSPt) samples were filled with 3.5 mL of unfiltered water. For preservation purposes, 50 µL of 50% sulfuric acid (H2SO4) was added to each 3.5 mL sample. All tubes were stored at 4°C in the dark until analysis in the laboratory. DMSP concentrations were quantified over the course of two periods using two analytical systems. A first series of DMSPt samples (stations 323, 322, 325, 301, 304, 305, 305A, 305B, 305C, 305D, and 305E) was analyzed in the laboratories of Laval University using a purge-and-trap system coupled to a Varian 3800 GC-PFPD as described above. DMSPt samples were hydrolyzed with a 5N NaOH solution in order to convert DMSP into DMS, which was purged from the samples via a UHP helium stream, cryo-trapped, and analyzed via gas chromatography (Lizotte et al. 2012). For these DMSP samples, the GC was calibrated with milliliter injections of a 100 nmol L⁻¹ solution of hydrolyzed DMSP (Research Plus Inc.). The analytical detection limit on the Varian GC system was 0.1 nmol L⁻¹ for all sulfur compounds, and the analytical precision (CV) for triplicate measurements of DMS and DMSP was better than 10%. After shortcomings with the aforementioned GC system, a second series of DMSPt samples (stations 300, 324, 346, 115, 111, 108, 105, 101, KEN1, KEN3, KANE1, KANE3, 314, 312, 310, 335, 210, 204, 200, and 120) was determined using an automated purge-and-trap system. Our analysis shows an average loss of 9% in the DMSPt samples between the times of sampling and analysis.

MODIS images, as well as ice charts produced by the CIS, were used to visually assess the presence of ice edges. CIS ice charts, based on Radarsat-2 and NOAA-18 images, show ice properties including stage of development, concentration, and form of the ice (Environment Canada 2005). Color schemes of the CIS ice charts were modified using Adobe Illustrator CS6. A FYI edge appears in Lancaster Sound as a curved line between Devon Island and Somerset Island on July 22 (Fig. 2A). The presence of MYI appears at the northern extremity of Nares Strait, i.e., at the entrance of Robeson Channel between Ellesmere Island and Greenland, on August 1 (Fig. 2C). The MYI was contiguous to a band of thick (>1.2 m) FYI descending into Nares Strait (Fig. 2B).

The surface mixed layer depth (Zm) was estimated as the depth at which the gradient in density (σt) between two successive depths was greater than 0.03 kg m⁻⁴, following the threshold gradient method of Thomson and Fine (2003) with adaptations from Tremblay et al. (2009). Oceanic vertical cross sections and contour plots were drawn using weighted-average gridding and linear mapping in Ocean Data View 5.1.5 (Schlitzer, 2018), and schematic models of FYI and MYI dynamics were constructed in Adobe Illustrator CS6. Statistical analyses were conducted using SYSTAT 13.2, as well as JASP 0.9.2.0, an open-source project supported by the University of Amsterdam (JASP Team 2018). Variables were tested for normality using the Shapiro–Wilk test at a 0.05 significance level, and Spearman's rank correlations (rs) were used to assess the strength of association between variables.
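The threshold-gradient criterion for Zm described above lends itself to a compact implementation. The following sketch applies the 0.03 kg m⁻⁴ threshold to an invented σt profile; it illustrates the stated method and is not the cruise processing code.

```python
import numpy as np

# Threshold-gradient mixed layer depth: Zm is taken here as the first depth
# at which d(sigma_t)/dz between successive samples exceeds 0.03 kg m^-4.
def mixed_layer_depth(depth_m, sigma_t, threshold=0.03):
    depth_m = np.asarray(depth_m, dtype=float)
    sigma_t = np.asarray(sigma_t, dtype=float)
    grad = np.diff(sigma_t) / np.diff(depth_m)   # kg m^-4
    exceed = np.flatnonzero(grad > threshold)
    # Return the deeper of the two bracketing depths (a convention choice);
    # fall back to the deepest sample if the threshold is never exceeded.
    return depth_m[exceed[0] + 1] if exceed.size else depth_m[-1]

z = [2, 5, 10, 15, 20, 30, 40, 60]                     # m (invented profile)
s = [24.1, 24.1, 24.2, 24.3, 24.9, 25.8, 26.2, 26.4]   # sigma_t, kg m^-3
print(f"Zm = {mixed_layer_depth(z, s):.0f} m")         # 20 m for this profile
```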
3 Results

Overview of the sea surface physicochemical and biological characteristics

The main physical and chemical characteristics of the sea surface water at the sampling stations are presented in Table 1 and Table 2.

At a broad scale, and considering only sea surface data from all regions under investigation in this study, Spearman's rank correlation tests (n = 33) reveal no significant relationships between DMS and the abiotic or biotic variables presented in Tables 2 and 3. Beyond sea surface data, water column vertical profiles were also plotted as cross sections in order to identify key features associated with ice dynamics and bloom development in certain regions of the CAA and Baffin Bay. The information is presented below, grouped by transect.

Barrow Strait (BS) transect

Variables measured across the BS transect are presented in Figure 4. Seawater temperatures ranged from −1.6 to −1.2°C, with the lowest values found at intermediate depths (ca. 40–60 m). Surface water temperatures were below −1.4°C at all stations. Salinity varied between 30.4 and 33.0 across the transect, with the lowest and highest surface values measured at the north and south extremities of the transect, respectively. Nitrate concentrations ranged from 0.6 to 11.0 µmol L⁻¹, with the lowest and highest values measured close to the surface and at depth, respectively. The nitracline was located at ca. 30 m. Close to the surface, nitrate concentrations were low at the south end of the transect (0.6 µmol L⁻¹ at station 305B) and increased northward to reach 2.1 µmol L⁻¹ at station 305E. Silicic acid concentrations showed a similar pattern, with a positive south–north gradient ranging from 3.5 to 10.5 µmol L⁻¹ in the upper 30 m and high values at depth (up to 29.2 µmol L⁻¹). Chl a concentrations varied between 0.2 and 2.1 µg L⁻¹, with the highest values measured in the upper 30 m of the water column and toward the northern tip of the transect. Phytoplankton identification and enumeration were conducted at one station on the BS transect (station 305E) and at two stations located in the vicinity under the ponded ice cover (see stations 304 and 305 in Fig. 1 and Table 2). The phytoplankton assemblages at these three stations were similar, dominated by the pennate diatoms Fossula arctica and Pseudo-nitzschia spp. (delicatissima group), the two taxa being responsible for 29 to 71% of the total phytoplankton abundance (Table 2). Another abundant pennate species at these stations was Fragilariopsis oceanica.

Lancaster Sound (LS) transect

Variables measured across the LS transect are presented in Figure 5. Surface temperatures were markedly warmer than those measured across the BS transect, with values ranging between 3.0 and 4.1°C. Surface salinities varied between 30.7 and 32.4, with the highest values measured at stations 323 and 322 towards the north shore. Concentrations of nitrate and silicic acid exhibited no particular cross-channel pattern in the surface mixed layer, with values below 0.5 and 2 µmol L⁻¹, respectively, in the upper 20 m of the water column. Maximum Chl a concentrations were in the same range as in the BS transect (between 1.5 and 2.5 µg L⁻¹) but exhibited a different vertical distribution. Across the BS transect, Chl a concentrations were generally highest in the surface mixed layer (SML), while they formed a SCM at ca. 30–40 m at the stations located across the LS transect, suggesting a more advanced bloom stage in the LS area. The two transects also showed distinct phytoplankton assemblages (Table 2).
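For completeness, the Spearman rank tests reported throughout this section can be reproduced with standard tools; the sketch below uses SciPy on invented paired values, not the campaign data.

```python
from scipy.stats import spearmanr

# Spearman rank correlation on invented DMS-salinity pairs, mirroring the
# rs and p statistics reported in this section (values are NOT cruise data).
dms = [7.2, 9.8, 12.0, 8.1, 10.5, 7.9]            # nmol/L
salinity = [31.6, 31.1, 30.4, 31.5, 30.9, 31.4]

rs, p = spearmanr(dms, salinity)
print(f"rs = {rs:.2f}, p = {p:.3f}")  # strong negative rank correlation here
```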
Nares Strait (NS) transect

Variables measured across the NS transect are presented in Figure 6. Sea surface temperatures started at ca. −1.3°C at the ice edge and increased more or less regularly southward, reaching 2°C at the last station (KANE5) of the transect. In contrast, sea surface salinities were relatively constant at 30.5 along the transect. Nitrate and silicic acid concentrations in surface waters near the ice arch were ca. 1.5 µmol L⁻¹ and 6 µmol L⁻¹, respectively. In the upper 20 m of the water column, concentrations of nitrate and silicic acid decreased with distance from the ice arch as a first algal bloom developed (see below), reaching 0.4 µmol L⁻¹ and 1.9 µmol L⁻¹, respectively, at the southernmost station (KANE5). The silicic acid drawdown along the transect was indicative of a strong diatom dominance (see Table 2).

North Water (NOW) transect

Concentrations of DMSPt were highest in the first 20–30 m of the water column, ranging from 34 to 88 nmol L⁻¹ near the surface, with a distinct positive gradient from west to east. A subsurface peak was observed at the three most eastern stations (108, 111, and 115), with the highest concentration of DMSPt (112 nmol L⁻¹, station 115) measured at 12 m depth. DMS concentrations in the near-surface waters were relatively high and stable at 4.1–5.3 nmol L⁻¹ between stations 101 and 111 and reached 19.5 nmol L⁻¹ at station 115, the highest value measured during this campaign.

4 Discussion

During the joint ArcticNet/NETCARE cruise, summertime DMS distributions were studied in two regions of the Canadian High Arctic characterized by distinct ice edges: a first one featuring mainly ponded FYI, and a second one composed mainly of MYI. Both Barrow and Nares straits, as well as the contiguous regions of Lancaster Sound and the North Water (northern Baffin Bay), embody significant oceanic gateways for Pacific-originating waters towards the North Atlantic (Jones et al. 2003). The results from the four transects conducted in these regions reveal distinctive features in DMS dynamics. The highlights of this study are discussed in the context of a predicted warmer Arctic, loss of perennial sea ice, and increase in the prevalence of seasonal FYI (Nghiem et al. 2007; Kwok and Rothrock 2009; Overland and Wang 2013; AMAP 2017).

Broad regional sea surface distributions of DMS

Over the entire study area, the distribution of sea surface concentrations of DMS is shown in Fig. 8 (see also Table 3). These findings also bring further support to the hypothesis that local DMS sources explain the mid-summer peaks of atmospheric MSA, a DMS proxy, in the High Arctic (Sharma et al. 2012; Becagli et al. 2019). Not surprisingly, considering our limited sea surface dataset (n = 33) and the overall complexity of the DMS cycle, no significant relationships were found at this broad regional scale. The melt of sea ice and snow covers also influences surface water stratification and the ensuing shifts in salinity, temperature, and solar radiation doses experienced by potential DMS-producing communities. The inherent heterogeneity that characterizes spatial distributions of DMS in the Arctic, as well as the presence of sea ice as a potentially critical driving force of these patterns, warrants further investigations into the underlying mechanisms.

The FYI edge in Barrow Strait and the adjacent Lancaster Sound

The seasonal sea ice zone (SIZ) in the Arctic is modulated by large interannual variability (Parkinson and Comiso 2013; Simmonds 2015; Comiso et al. 2017; Serreze and Meier 2019).
Correspondingly, the position of the ice edge in the Barrow Strait/Lancaster Sound area during spring may vary yearly from the mouth of the sound in the east (80°W) to Lowther Island in Barrow Strait in the west (97°W), as revealed by the analysis of CIS ice charts by Peterson et al. (2008). On July 17, 2014, the ice edge was located approximately mid-way along this historical spatial range, near the longitude of Prince Leopold Island (90°W, see Fig. 2A). Satellite imagery reveals that this distinct ice edge was already present a month prior to the arrival of the icebreaker CCGS Amundsen in the area and that the eastern part of Lancaster Sound (east of 90°W) was already mostly ice-free by June 16, 2014 (data from CIS not shown). The ice cover in Barrow Strait, west of the ice edge, was composed mostly of FYI ca. 1 m thick, covered with melt ponds over ca. 40% of its surface. On July 20, part of the ice diverged towards the east, creating a small lead in the FYI near the northern tip of Somerset Island (Fig. 2A). The opportunity was taken to sample the western border of the lead, very close to the newly formed ice edge, in order to capture the outflow of under-ice waters. The predominantly eastward transport of water in the southern portion of the Strait is estimated at 14 ± 4 cm s⁻¹ annually and is strongest in late summer at 27 ± 8 cm s⁻¹ (Hamilton et al. 2013), suggesting that the residence time of seawater in the lead was short. Biogeochemical characteristics of the surface waters sampled on July 22–23 along the BS transect, particularly in its southern area, thus likely reflect conditions prevailing in the ice-covered western portion of the Strait.

Vertical profiles from the BS transect (Fig. 4) in proximity to the newly formed ice edge indicate that an under-ice phytoplankton bloom had developed in the ice-covered Barrow Strait area and was captured during our sampling as it exited the ice. This under-ice bloom coincided with relatively low salinities (ca. 31.5) and temperatures (ca. −1.5°C) within the surface waters. These results suggest that the bloom was linked to the development of a fresher water lens below the ice, likely resulting from the melting of the snow and ice covers. These results agree with studies emphasizing the importance of ice algal communities as a seeding source during spring over oceanic regions when algal abundance in the water column is low (e.g., the Arctic Ocean north of Svalbard by Kauko et al. (2018); Frobisher Bay in Davis Strait by Hsiao (1992)). The presence of species endemic to Arctic sea ice, such as Nitzschia frigida, Fragilariopsis cylindrus, and Fragilariopsis oceanica (Poulin et al. 2011), in the surface waters of the Barrow Strait region brings further support to the ice origin of this under-ice bloom.

The taxonomic composition of the drifting under-ice bloom at station 305E was also dominated by pennate diatoms, but with a lower total cell abundance (0.48 × 10⁶ cells L⁻¹ at 305E) compared to the two other Barrow Strait stations (> 2.00 × 10⁶ cells L⁻¹ at 304 and 305, data not shown), as well as slightly different species. The phytoplankton assemblage at 305E was similar to the one previously described by Galindo et al. (2014) for the under-ice bloom developing at a shallow station (50 m) in Allen Bay in 2011, located ca. 15 km west of 305E.
In both studies, the under-ice bloom was dominated by pennate diatoms, with Fossula arctica and Fragilariopsis oceanica contributing 8.2% and 7.8%, respectively, to the total protist abundance at station 305E.

In the ice-free area of Lancaster Sound, the lower Chl a (0.2 to 1.2 µg L⁻¹) and nutrient concentrations measured in the 13–16 m deep SML, as well as the presence of an SCM (Fig. 5), point to a more advanced bloom stage, consistent with conditions observed by Galindo et al. (2014) in nearby Allen Bay. Despite seemingly varying vertical distribution patterns, DMSPt and in vivo fluorescence of chlorophyll were significantly correlated in the BS transect (rs = 0.80, p < 0.001, n = 20), suggesting that the bulk of DMSPt was intimately linked to algal biomass. In contrast, across much of the LS transect, particularly towards its southern portion, concentrations of DMSPt were highest near the nitracline, deeper in the water column (peak of 96 nmol L⁻¹ at 20 m, station 325). The role played by environmental drivers, such as nutrients, in the accumulation of DMSP-rich organisms at this depth was substantiated by the significant correlation found between water column distributions of NO3⁻ and DMSPt (rs = −0.59, p < 0.001, n = 36). However, contrary to the patterns observed in the BS transect, concentrations of DMSPt bore no significant association with in vivo fluorescence of chlorophyll in this part of the study area, suggesting that the bulk of the algal biomass was not necessarily responsible for the variability in DMSPt concentrations in these waters characterized by mixed algal populations. The above results are not unexpected, seeing as the nature of DMSP synthesis itself is highly species-specific (Keller et al. 1989) and subject to physiological up- or down-regulation and excretion linked to environmental stressors (see review by Stefels et al. 2007). Assuming that almost all DMSPt was particulate (see Kiene and Slezak 2006), DMSPt:Chl a ratios may be compared across the study area.

Notwithstanding the lower DMSPt:Chl a ratios in the BS transect, DMS levels were high in surface waters, ranging from 7.2 to 12 nmol L⁻¹, and revealed two hot spots at either end of the sampled transect (Fig. 4): one in association with a peak in DMSPt (115 nmol L⁻¹, 305E) and a second in conjunction with relatively low DMSPt (ca. 25 nmol L⁻¹) and Chl a (0.83 µg L⁻¹) at station 305B. Statistical analysis suggests that, in the waters exiting the FYI pack in Barrow Strait, variability in DMS concentrations was significantly associated with that of its precursor DMSPt (rs = 0.76, p < 0.001, n = 20) but was most strongly associated with fluctuations in salinity. The highly significant negative correlation (Fig. 9) found between DMS and salinity (rs = −0.91, p < 0.001, n = 20) in the upper ca. 80 m of the water column in this region suggests a strong physical control of DMS distributions associated with ice and snow melting processes. The generally sunny conditions in the days prior to the sampling exclude heavy rain as a significant contributor to this signal. During the thawing season, the increase in ice permeability and basal melting may trigger important releases of DMS into the waters just below the ice cover (Trevena and Jones 2006; Kiene et al. 2007; Tison et al. 2010; Carnat et al. 2014). The formation of an upper fresher water "lens" associated with the FYI melt may also have led to a certain accumulation of DMS following its release from the sea ice.
Furthermore, it cannot be totally excluded that the stratification of the upper water column ensuing from the melting ice could have entailed higher and longer exposures of phytoplankton communities to solar radiation, with enhanced DMS production as a coping mechanism against light-induced stress via an antioxidant cascade (Sunda et al. 2002; Toole and Siegel 2004; Vallina and Simó 2007; Galí and Simó 2010). Indirectly, DMS production could also have been stimulated through the possible increased availability of dissolved DMSP (DMSPd) in the environment and its bacterially mediated enzymatic conversion into DMS (Kiene et al. 2000). Laboratory salinity downshock experiments with batch cultures of diatoms and dinoflagellates have shown an increase in the excretion of cellular DMSP (Van Bergeijk et al. 2003) and an increase in the production of DMS (Stefels et al. 1996; Niki et al. 2007). A DMSP-related osmo-acclimation response to shifts in salinity (Stefels 2000) could be particularly beneficial for algae developing in highly fluctuating environments, such as the Arctic during the thaw season, a phenomenon which could ultimately strengthen DMS production. The strength of the association between DMS and salinity in these waters, however, suggests that physical drivers exercised the greatest control over the distribution of DMS near the FYI ice edge.

The MYI edge in Nares Strait and the adjacent North Water

Water column patterns of salinity along the NS transect were relatively uniform between stations, with fresher waters reaching deeper into the water column at the most northern stations (Fig. 6). This pattern is consistent with the presence of Pacific-originating waters of lower salinity and density that enter the central Arctic Basin through Bering Strait and partly flow south through Nares Strait as a sub-surface current (Jones et al. 2003). It may also reflect the southbound flow through Nares Strait of first-year or multi-year ice floes (Münchow 2016), or of icebergs originating from the glaciers of Greenland or Ellesmere Island (Burgers et al. 2017), which can partially melt in transit and thus freshen the ocean surface waters, an impact that lessens to the south as the ice melts away. Vertical patterns of temperature along the NS transect showed well-mixed waters down to 58 m at the station nearest to the ice edge (KEN1) and a progressive warming of the upper layers of the water column with decreasing latitude (Fig. 6). Reservoirs of nutrients throughout the water column at station KEN1, with 1.4 and 6 µmol L⁻¹ of nitrate and silicic acid, respectively, were at the lower end of expected pre-bloom values for Pacific-derived water of the same salinity in the higher Arctic (Tremblay et al. 2002). As the sampling stations progressed to the south, a drawdown of both these nutrients, associated with the development of phytoplankton biomass, was evident at the surface of the vertical profiles (Fig. 6). At station KANE3, nutrients exhibited a swell-like pattern, associated with an increase of nutrients throughout the water column; the presence of a sill (Bourke et al. 1989) likely contributed to this pattern.

In surface waters near the MYI edge, the phytoplankton community (dominated by unidentified flagellates and Prymnesiophyceae; KEN1, Table 2) showed a moderate abundance (1.3 × 10⁶ cells L⁻¹, data not shown), suggesting that the initiation of a phytoplankton bloom had not yet occurred in the waters underneath the northern ice pack.
The presence of sufficient amounts of nutrients in the surface waters near the ice edge points towards light availability as the primary limiting factor for the proliferation of primary producers under the ice. In seasonally ice-covered seas, the growth of shade-adapted algal cells may begin once a critical incident irradiance threshold is reached at the ice–water interface (Horner and Schrader 1982; Gosselin et al. 1986). These results are in sharp contrast to the patterns observed in the waters exiting the ponded FYI in Barrow Strait, where a bloom had already begun to develop underneath the ice.

The drawdown of silicic acid at the following NS transect stations concurred with the development and dominance of diatoms (see Table 2), notably centrics such as Chaetoceros spp. (5–20 µm) and Chaetoceros gelidus, an assemblage similar to those previously described for the LS transect (as well as for the NOW transect discussed later). Species of the genus Chaetoceros were thus widespread throughout the study area, as previously reported in the Canadian High Arctic (Booth et al. 2002; Ardyna et al. 2011; Poulin et al. 2011).

In proximity to the northern ice edge in Nares Strait (KEN1), concentrations of DMSPt and DMS were rather modest throughout the water column (< 16 nmol L⁻¹ and < 0.4 nmol L⁻¹, respectively). These results reinforce the notion that the autotrophic and heterotrophic processes associated with the production of DMSP and DMS in the waters under the thick, non-ponded MYI may have only truly taken off upon reaching the ice-free, light-sufficient conditions found farther south. This is again in stark contrast with the DMSP and DMS patterns observed at the Barrow Strait ponded ice edge. Surface peaks of 27 nmol DMSP L⁻¹ and 2.6 nmol DMS L⁻¹ were measured at the following station (KEN3), adding support to the requirement for suitable doses of solar radiation to ensure the development of microalgae in ice-covered waters of the Arctic (Horner and Schrader 1982; Gosselin et al. 1986) and the ensuing production of S compounds. At the three southernmost stations of the Nares Strait transect, a subsurface maximum of DMSPt was present at ca. 20 m depth, with a high value of 59 nmol L⁻¹ reached at KANE5, likely in association with an increase in autotrophic biomass fueled by nutrients near the sill, discussed above. Maximal concentrations of DMS were, for the most part, confined to the upper 20 m of the water column, within or above the SCM, with a high value of 10 nmol L⁻¹ reached at KANE5. Along this transect, variations in the vertical distribution of DMS were significantly correlated with those of its precursor DMSPt; however, the strongest association was found between variations in DMS and seawater temperature (rs = 0.81, p < 0.001, n = 44), likely reflecting the seasonal warming of the ice-free surface waters and the ensuing development of DMS-producing organisms. The significant positive correlation found between concentrations of DMS and in vivo fluorescence of chlorophyll (rs = 0.64, p < 0.001, n = 44) throughout the water column in Nares Strait reinforces this suggestion (Fig. 9). Ratios of DMSPt:Chl a (ranging from 10 to 23 nmol µg⁻¹) averaged over the first 20 m of the water column of the NS transect were low compared to those found in the Lancaster Sound transect (max of 170 nmol µg⁻¹).
Taking into account that our DMSPt:Chl a ratios include both particulate and dissolved pools, and considering that dissolved DMSP typically contributes a small fraction of DMSPt (although highly variable; Kiene et al. 2000; Kiene and Slezak 2006), these values are nonetheless similar to previously reported DMSPp:Chl a ratios with a maximum of 39 nmol µg⁻¹ (Luce et al. 2011) and a maximum of 17 nmol µg⁻¹ (Matrai and Vernet 1997), at diatom-dominated stations of the Canadian High Arctic and of the Barents Sea, respectively.

Along the ice-free west–east transect in the North Water (NOW), patterns of temperature and salinity (Fig. 7) revealed the interactions between the southward advection of fresh and cold Arctic waters along Ellesmere Island and the saltier and warmer Atlantic waters flowing northward along western Greenland via the West Greenland Current (WGC) (Curry et al. 2011; Münchow et al. 2015). Surface water concentrations of nitrate were below 0.04 µmol L⁻¹ across the entire transect, indicating a more mature bloom stage, similar to the conditions found in the LS transect. As such, the maximal accumulation of biomass occurred below the surface at most stations along the NOW transect, in association with the nitracline (Spearman's rank correlation between in vivo fluorescence and NO3⁻: rs = −0.86, p < 0.001, n = 42). The phytoplankton assemblage along the NOW transect was similar to the ones observed further south at the mouth of Lancaster Sound and further north along Nares Strait. In the surface waters of stations 101, 108 and 111, the assemblage included the prymnesiophyte Phaeocystis pouchetii. Phaeocystis is widespread across the globe, including in high boreal and Arctic waters (Verity et al. 2007), and its blooming has been linked to vast amounts of DMSP in the marine environment (van Duyl et al. 1998; Stefels et al. 2007; Asher et al. 2017). In this study, the presence of a DMSP hotspot (up to 113 nmol L⁻¹ at ca. 12 m depth) in the upper waters of the easternmost station 115 of the NOW transect may be partially explained by the occurrence of Phaeocystis pouchetii as well as the numerical dominance of unidentified flagellates, including potentially DMSP-rich species (Keller 1989).

Surface waters exiting the FYI in Barrow Strait exhibited high concentrations of DMSPt and DMS, reaching up to 115 and 12 nmol L⁻¹, respectively, suggesting that a bloom had already started to develop under the melt pond-covered ice through the potential seeding of autotrophic organisms from the ice. The strong negative association found between salinity and DMS points towards the ice itself as an important vector for sea surface DMS, contributing to its seeding at the ice–sea interface, as observed elsewhere (Trevena and Jones 2006; Kiene et al. 2007; Tison et al. 2010). Haline-driven stratification of the waters under the ice cover likely promoted the physical accumulation of DMS. Alternately, the surface stratification may have favored the biological production of DMS. The formation of a fresher water lens at the surface could have led to the entrapment of algal cells and to an increase in solar radiation exposure, with heightened DMS production as a defense strategy against light-associated oxidative stress (Sunda et al. 2002; Toole and Siegel 2004; Vallina and Simó 2007; Galí and Simó 2010). The fresher water lens may also have indirectly stimulated DMS production through the possible enhancement of DMSPd availability and its bacterial conversion into DMS, following an osmotic-related excretion of cellular DMSP (Stefels 2000; Van Bergeijk et al. 2003; Niki et al. 2007).
Although biological processes cannot be completely ruled out, the strength of the association between DMS and salinity near the FYI edge suggests that physical drivers most strongly shaped DMS dynamics in Barrow Strait.

In contrast to the FYI-dominated region described above, the waters exiting the MYI-dominated region of Nares Strait did not exhibit the same potential under-ice development of autotrophic organisms. The phytoplankton community in the surface waters of the station sampled nearest to the ice edge was dominated by flagellates, and Chl a concentrations were comparatively low (< 0.5 µg L⁻¹), as were the concentrations of DMSPt (< 16 nmol L⁻¹) and DMS (< 0.4 nmol L⁻¹). The development of a phytoplankton bloom, and the increase in both DMSP and DMS concentrations, occurred ca. 100 km away from the ice edge (station KEN3), highlighting the requirement for sufficient light to initiate the growth of primary producers. One of the distinguishing features between the two ice edges was the presence/absence of melt ponds at their surfaces. This factor likely played a major role in driving the availability of light through the ice, as suggested by Nicolaus et al. (2012), leading to the earlier onset of a bloom (Fig. 10) and shaping the associated DMS cycling under the ice in the Barrow Strait region, where melt ponds covered ca. 40% of the total surface. Findings from this study are of particular significance in light of the suggestion that regions of the CAA (Fortier et al. 2002; Mundy et al. 2014), the Beaufort Sea (Mundy et al. 2014), and Baffin Bay (Oziel et al. 2019) may host regular, yet under-documented, under-ice phytoplankton blooms. The occurrence of these blooms may be linked to the fact that the archipelago is characterized by narrow waterways where landfast ice tends to linger longer, allowing advanced stages of ice melt to be reached prior to break-up, and where shallow waters act to enhance the supply of nutrients into surface waters, fueling the potential growth of under-ice blooms (Michel et al. 2006). Autotrophic biomass accumulations below the Chukchi Sea ice cover described by Arrigo et al. (2012) bring further support to the possible widespread importance of these blooms in waters of the Arctic. Furthermore, FYI has become the prevailing type of ice in the Arctic at the expense of swiftly declining MYI (Comiso et al. 2008). As such, and because FYI tends to have greater areal melt pond coverage than MYI due to a smoother topography, climate-driven changes in sea ice dynamics may lead to modifications in the timing and frequency of under-ice blooms, their role in seeding ice-edge blooms in summer (Strass and Nöthig 1996), and the associated production of DMS (Galí and Simó 2010; Levasseur 2013). It is also worth noting that the highest sea surface DMS concentration measured during this expedition was associated with the presence of Phaeocystis (station 115, West Greenland Current), a genus for which a few modelling studies point towards a poleward expansion of its geographical extent (Cameron-Smith et al. 2011; Menzo et al. 2018) associated with the increased intrusion of warm Atlantic water masses in the Arctic (Neukermans et al. 2018). Altogether, these factors, in conjunction with the projected increase in melt pond cover and temporal span (Agarwal et al. 2011; Stroeve et al. 2014; Holland and Landrum 2015; Liu et al. 2015) and the direct role melt ponds may play in the production of DMS (Gourdal et al.
2018), suggest that there is a need to review the potential production and cycling of DMS in ice-covered areas of the Arctic during summer. As thinner, younger, and more dynamic icescapes may prevail in the Arctic, earlier and more ubiquitous under-ice blooms may lead to earlier pulses of DMS through leads, cracks, and edges of the ice, with implications for climate forecasting. Recent modelling studies predict an increase of DMS emissions in the Arctic, predominantly associated with sea ice retreat, inducing a negative climate feedback through the influence of atmospheric DMS on cloud formation and radiative forcing (Mahmood et al. 2019). Most models, however, consider the ice–atmosphere interface to be inert. Possible diffusion of DMS through porous ice during spring (Gourdal et al. 2019), as well as potential DMS pulses venting to the atmosphere via melt ponds (Gourdal et al. 2018) and through cracks and leads in thinner ice and at ice edges (Hayashida et al. 2017; this study), could lead to a strengthening of the DMS-related polar climate feedback.

Author contributions

M. Lizotte was responsible for a large part of the sampling as well as the data analysis and processing. M. Levasseur and M. Lizotte wrote the initial version of the paper together. Several co-authors provided specific data included in the paper, and all co-authors contributed to the final version of the paper.

Competing interests

The authors declare that they have no conflict of interest.

Author's response. We agree with the referee: on L148 (now L155), chlorophyll a should be written out, and there were different forms of the abbreviation of chlorophyll a.

Author's changes in manuscript. On L148 (now L155), the words "chlorophyll a" were added and we changed "chl a" to "(Chl a)". On L163, we changed "chlorophyll a (chl a)" to simply "Chl a".

Author's response. Yes, we agree with both the comment and the suggestion.

Author's changes in manuscript. The size of the letters on the map was made larger. A new version of Figure 2 was added to the manuscript.

Figure 10: For the FYI diagram, the relationship between the phytoplankton bloom and light availability is clearly indicated, but I'm afraid that the reader may not catch what the authors would like to show in the MYI diagram. Please modify the MYI diagram to show the relationship of phytoplankton abundance and light availability. Also, the second sentence (How these physical changes...) may be omitted from the figure caption.

Author's response. We thank the referee for the insight and agree that the figure should be made clearer.

Author's changes in manuscript. The following modifications were made to Figure 10.

1. On the first panel (MYI), the arrow (light) going from the sun and through the thicker ice was presented as discontinued (dotted arrow) to signify the reduced intensity of light reaching the surface of the water and available for phytoplankton growth. Part of the light is absorbed by the ice (one arrow ending in the ice), and another part of the light is reflected back (2 arrows pointing upwards).

2. On the second panel (FYI), the arrows (light) going from the sun and through the thinner ice and the melt ponds at the surface of the ice show scattering and an increase in the amount of light reaching the surface of the water and available for phytoplankton. Part of the light is absorbed by the ice (one arrow ending in the ice), and another part of the light is reflected back (1 arrow pointing upwards).
Furthermore, as suggested, we modified the caption as follows and took the second sentence out.

Initial version: Figure 10: Conspicuous alterations in the Arctic Ocean are underway and include reductions in snow cover, sea ice extent and thickness, and an increase in melt pond areal coverage, the occurrence of which is linked to profound modifications in light availability in surface waters below the ice and at its margin. How these physical changes will impact the dynamics of bloom-forming microorganisms and their production of the biogenic climate-active gas DMS is still unknown. The conceptual diagram depicts two types of ice edges (top panel MYI and lower panel FYI) and their potential role in modulating light penetration under the ice pack and the development of phytoplankton blooms and associated DMS dynamics.

Author's response. The two tables describe distinct sea surface physicochemical characteristics (Table 1) and biogeochemical characteristics (Table 2), but also present the same inherent structure, which is why several elements are repeated.

Author's changes in manuscript. The caption for Table 2 was modified from its original version to include a more detailed description of the biogeochemical characteristics found in the Table, as per the referee's suggestion. Values that were not available are noted as 'n.a.'
Accuracy Assessment for points coordinates surveyed using low-cost Unmanned Aerial Vehicle and Global Positioning System with 3Dsurvey and 3DF Zephyr software

Ground control points (GCPs) are essential for obtaining accurate positions in aerial surveys. At least three points should be utilized, and the model becomes increasingly accurate in the X, Y, and Z coordinates as their number rises. The accuracy of the 3D model created from aerial photography is also affected by the arrangement of the GCPs. The goal of this research is to determine the optimal number and arrangement of GCPs in order to obtain the lowest possible error in point positioning. A consumer UAV, the DJI Mavic 2 Pro, was used to photograph a 1.5 km² site at a flight height of 100 meters above the earth's surface with a nadir camera configuration. A total of 1515 pictures were collected at a ground sampling distance (GSD) of 2.3 centimeters. Sixty-two GCPs were observed with the post-processed kinematic (PPK) method using a Leica GS15 differential GPS (DGPS) receiver. The study area was split into two parts: one with a straight arrangement of GCPs and the other with a diagonal arrangement of GCPs. The pictures were processed using the 3Dsurvey and 3DF Zephyr software packages, utilizing a full bundle adjustment procedure with an increasing number of GCPs, beginning with three GCPs and ending with twenty-six GCPs for both arrangement layouts, with the remaining points serving as check points for the model's accuracy at each attempt. The check point coordinates obtained were compared to the DGPS coordinates. The results indicate the optimal GCP number and layout needed for the most accurate positioning: the gap between adjacent GCPs should not exceed 100 meters, and the points should be spread homogeneously.

1. Introduction

Stereo photography from unmanned aerial vehicles has recently attracted considerable interest in engineering fields such as surveying and mapping, environmental monitoring, and Geographical Information Systems (GIS) [1,2]. Stereo photography plays a prominent part in generating three-dimensional models of multiple sites and features using three-dimensional photogrammetry [3,4], taking advantage of its accessibility, affordability, portability, and ease of use. Researchers from all around the world have begun to investigate the capabilities and benefits that these systems provide, and many breakthroughs have been achieved in this area. One of the major issues with unmanned aerial vehicle mapping systems is that they need GCPs to attain highly accurate results [5,6] for the locations of the sites and features being surveyed, allowing users to obtain highly precise measurements from the outputs of these systems, including orthomosaics [7,8], three-dimensional point clouds, and digital surface models [9]. Ground control points are utilized to link the project to a well-known reference system and to ensure precise measurements [10]. The arrangement and number of these control points have a significant impact on the accuracy of the results: the accuracy of the resulting model improves as the number of GCPs increases [11], and it is also affected by how the GCPs are dispersed across the project [12]. The optimal number of GCPs for UAV mapping systems is the subject of considerable study [13,14].
Oniga et al. looked into the optimal number of GCPs for UAS images; however, they did not examine the impact of point arrangement on the project's end results, and their research was restricted to a limited region [16-18]. This research seeks to find the optimum number and arrangement of ground control points for producing a precise three-dimensional model from a conventional UAV photography system, which affects products such as the orthomosaic and the DSM. The ground control points were surveyed with millimeter accuracy using the post-processed kinematic approach and used to rectify 1515 overlapping nadiral photographs collected at 100 meters above the ground. The ground control points were laid out in a straight arrangement from the start of the project to the third kilometer and in a diagonal arrangement for the remaining distance of the study area. The pictures were processed using two distinct software solutions: 3Dsurvey and 3DF Zephyr. The project's precision was evaluated by comparing the coordinates of the check points produced by the software with the actual coordinates obtained by GPS. In this research, a DJI Mavic 2 Pro UAV was used to photograph a 1.5 km² region at 100 meters above ground level with a 2.3-centimeter GSD, and 62 ground control points were surveyed using a Leica GS15 GPS receiver and dispersed in straight and diagonal arrangements.

2. Area of the Study

The study area is the initial 6 km of the Qanat Al-Jaish project, a 25-kilometer canal that connects the Tigris River in Baghdad, Iraq, to the Diyala River in the south. The research area is a longitudinal region with an approximate width of 250 meters and a length of 6.15 kilometers that includes the canal as well as gardens and highways on each bank (see Figure 1).

3. Materials and methodology

3.1. Ground Control Points Observation

Observation of the geographic coordinates of the GCPs is critical for obtaining exact positions, and all of these GCPs must have a minor error in order to produce highly accurate measurements. Using the post-processed kinematic (PPK) method, a Leica GS15 GPS receiver was employed to determine the geographic locations of the ground control points in this research. The base station was employed to measure one of the ground control points in the middle of the site; the duration of this measurement was 15 and a half hours. The remaining GCPs were observed using the rover receiver, with a 15-minute observation duration for each GCP. A red dot approximately 15 cm in diameter and a central white dot around 5 cm in diameter were placed on each GCP. After 2 weeks, the data from the base station were processed using the OPUS website with an RMSE (root mean square error) of 1.5 millimeters in easting, 2 millimeters in northing, and 2 millimeters in elevation. The remaining GCPs were then processed in the LGO (Leica Geo Office) software using the base station as a reference. These GCPs had an average RMS error of 1.8 millimeters in the easting coordinate, 1.3 millimeters in the northing coordinate, and 2 millimeters in the elevation coordinate. The site includes sixty-two ground control points placed in two ways along the canal route. The GCPs are spread in a straight arrangement throughout the first 3 kilometers of the site, with two aligned stations every 100 meters along the waterway's route. The GCPs throughout the site's second 3 km are placed in a diagonal arrangement, with one station on one bank of the canal and the next station on the opposite bank, separated by approximately 100 meters (see Figure 2, Figure 3 and Table 1).
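As a small worked example, the per-axis GNSS errors quoted above can be combined into horizontal and 3D RMSE figures. The millimetre values below are those reported for the rover solutions; the quadrature-sum combination itself is a standard convention rather than a step taken from this paper.

```python
import math

# Per-axis rover RMS errors from the text: easting, northing, elevation (mm)
rmse_e, rmse_n, rmse_h = 1.8, 1.3, 2.0

rmse_2d = math.hypot(rmse_e, rmse_n)                    # horizontal, ~2.2 mm
rmse_3d = math.sqrt(rmse_e**2 + rmse_n**2 + rmse_h**2)  # 3D, ~3.0 mm
print(f"horizontal RMSE = {rmse_2d:.1f} mm, 3D RMSE = {rmse_3d:.1f} mm")
```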
(See Figure 2, Figure 3, and Table 1.)

3.2. Data Capturing
DJI's Mavic 2 Pro UAV was used to photograph the site. The UAV features an integrated camera with a sensor 13.2 millimeters wide and 8.8 millimeters high, producing images 5472 pixels wide and 3648 pixels high. The DJI Pilot program was used to plan the flight; its Linear Flight Mission option was configured with identical right and left distances of 250 meters and a band length of 2 kilometers. On a single grid, 9 bands had to be fully surveyed, with 80% side overlap and 70% frontal overlap for the study area. All of the photographs were shot at nadir, at a height of 100 meters above the ground. A total of 1515 photographs covered the 1.5 km2 project, with an average ground sampling distance of 2.3 centimeters.

3.3. Processing of the Data
Processing of the data was carried out with two distinct programs in order to get a basic understanding of how each operates and which one performs better. In both programs the region was divided into two sections: one for the straight GCP arrangement, with 31 GCPs, and the other for the diagonal GCP arrangement, also with 31 GCPs. The model was processed with an increasing number of ground control points, beginning with just three GCPs (the remaining twenty-eight marks serving as check points) and finishing with 26 GCPs and 5 check points. The root mean square error of the check points, which describes the distance between the check-point coordinates estimated by the program and their actual coordinates observed using DGPS, was used to indicate the model's accuracy.

3.3.1. Using 3Dsurvey software to process the data
3Dsurvey processing is done in a series of stages, and for each processing stage the user may choose his own preferences and settings. The user must first establish a new project before importing the pictures that make up the survey model into the program. In our instance, we imported 1515 pictures for the whole project. Each picture has a geotag attached to it, which allows the program to determine the starting location and orientation of the perspective center of the camera, although these locations are not very accurate. After importing the pictures, one must choose the coordinate system used for the camera perspective centers' coordinates and their precision. In our instance, the horizontal coordinate system is WGS 1984 UTM zone 38N, while the vertical datum is EGM 2008. The same coordinate system is used for both the input and the output. Because we had to create a three-dimensional model, we selected 3D map from the processing choices, a predefined set of options for various processing styles. Geometrically validated matching was utilized as the project's matching technique to begin the picture matching process, from which the program is able to compute the interior sensor parameters. The GCPs were imported into the project as CSV files from the Ground Control Points Manager after executing the initial processing phase for the first time, with each point's vertical and horizontal precision set to 2 millimeters. Every GCP was meticulously annotated on the photos in order to link the model to a precise location, and these ground control points were then designated as either CK (check point) or CP (control point). After that, we reoptimized the project to recalculate tie point positions in relation to the newly added ground control points.
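Throughout the results that follow, model accuracy is reported as the check-point RMSE. Assuming the conventional total (3-D) formulation over n check points, where ΔE_i, ΔN_i, and Δh_i denote the differences between the software-estimated and the DGPS-observed easting, northing, and elevation of check point i:

    RMSE = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} \left( \Delta E_i^2 + \Delta N_i^2 + \Delta h_i^2 \right) }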
For each trial, a report was produced for the first processing step and kept in order to compare the root mean square error for each number of ground control points and check points employed (see Figure 6).

3.3.2. Using 3DF Zephyr to process the data
With minor workflow changes, 3DF Zephyr conducts the processing almost identically to 3Dsurvey. The processing starts with the creation of a new project and the importation of the photos from the menu, which includes the necessary processing stages. We began by aligning the photos and generating the initial points that represent the model, picking align photos from the menu and selecting high for precision. The thresholds for key points and tie points were established at forty thousand and ten thousand, respectively. This aligns the perspective centers in a manner comparable to the travel route, resulting in the model's three-dimensional tie points. Following that, the ground control points were inserted into the project in CSV format, each with a unique coordinate and with the precision of the points set to two millimeters. The method of picking ground control points on the photos is identical to how the 3Dsurvey program works, every point being meticulously picked on a minimum of five photographs. The final step in referencing is to optimize the cameras to the recently added GCPs, choose the point kind (check or control point), and then reoptimize the project for each control-to-check-point combination utilized. A report of the processing outcome was produced and stored for comparison at each repetition.

Results for Straight arrangement
When 3Dsurvey was utilized to conduct the full bundle adjustment of the model, the root mean square error was 5.091 m with the minimum number of ground control points (three GCPs), computed from the other twenty-eight check points, whereas the full number of ground control points (26 GCPs) resulted in a root mean square error of 0.046 m measured from the remaining 5 check points. The RMSE was quickly reduced to 0.268 m after 7 more GCPs were added (ten GCPs in total), and once 15 GCPs were employed to reference the model, it decreased to under 10 cm. As extra GCPs are added, the RMSE keeps decreasing, but the difference between successive trials diminishes as well, until it reaches approximately two to five millimetres (see Table 2 and Chart 1). When the 3DF Zephyr program was used to adjust the bundle, a 5.704-meter RMSE was obtained by employing the smallest number of GCPs, three ground control points, and measuring the other twenty-eight check points. The RMSE for the last 5 check points was 0.049 m when the maximum number of ground control points, 26, was used. The RMSE was reduced to 0.294 m with the addition of seven more GCPs, bringing the total to ten, and when fifteen ground control points were used to reference the model, the RMSE fell below 10 cm. As additional GCPs are added, the RMSE keeps decreasing, but the difference between successive trials diminishes as well, until it reaches approximately four to eight millimetres
(see Table 2).

Results for diagonal arrangement
When the 3Dsurvey program was used to adjust the project, the root mean square error was 2.919 m with the minimum number of ground control points (three GCPs), computed from the other twenty-eight check points, whereas the highest number of ground control points (26 GCPs) resulted in a root mean square error of 0.036 m measured from the remaining 5 check points. The RMSE was quickly reduced to 0.221 m after assigning 7 more GCPs, and once 15 GCPs were utilized to reference the model, it fell below 10 cm. As additional ground control points are added, the RMSE keeps decreasing, but the difference between successive trials also shrinks, until it reaches approximately two to five millimeters (see Table 3). When the 3DF Zephyr program was used to adjust the bundle, a 3.674-meter RMSE was obtained by employing the smallest number of GCPs, three ground control points, and measuring the other twenty-eight check points. The RMSE for the last 5 check points was 0.041 m when the maximum number of ground control points, 26, was used. The RMSE was reduced to 0.688 m with the addition of seven more GCPs, bringing the total to ten, and when 19 GCPs were used to reference the model, the RMSE fell below 10 cm. As additional GCPs are added, the RMSE keeps decreasing, but the difference between successive trials diminishes as well, until it reaches approximately four to eight millimeters (see Table 3).

5. Discussion
The data obtained from each project's bundle adjustment give an idea of how different variables influence the precision of point cloud positions. Because this is a longitudinal project, the referencing procedure differs from that of a wide-area project: the area extends far in one direction compared to the other, so ground control points must be arranged in a manner that covers the additional length. The two configurations chosen are the most reasonable ones for ensuring complete coverage of the project area. To evaluate the quality of each arrangement, each point arrangement type was handled independently in a distinct project. The findings indicate that the diagonal design is more efficient than the straight configuration, since it produces superior outcomes. The minimum RMSE achieved with the straight configuration is 0.046 m using 3Dsurvey and 0.049 m using 3DF Zephyr, whereas the minimum RMSE obtained with the diagonal layout is 0.036 m using 3Dsurvey and 0.041 m using 3DF Zephyr. This is because the diagonal layout provides greater coverage of the project region, allowing each point cloud to reference more adjacent GCPs. Regarding the optimal number of GCPs, the RMSE drops every time a new GCP is added as a control point, although the gains become smaller beyond 20 control points; the best results are obtained with the maximum number of GCPs, with an RMSE of 0.036 m. This is because a longitudinal project needs more control points as the surveyed area becomes longer, whereas a restricted number of GCPs would suffice for a broad-area project. With a difference ranging from 0.755 m with the smallest number of GCPs to 5 mm with the largest number, 3Dsurvey produces somewhat more accurate bundle adjustment results than 3DF Zephyr.
As a consequence, 3Dsurvey handles the bundle adjustment better than 3DF Zephyr.

6. Conclusion
This research examined the influence of using additional ground control points to reference the three-dimensional model of an unmanned aerial vehicle survey performed with a low-cost conventional UAV, the DJI Mavic 2 Pro, utilizing two distinct software packages to process the photos and create the three-dimensional model, orthomosaic, and DTM, after referencing the model by adjusting the camera positions and orientations with the full bundle adjustment technique. 3Dsurvey and 3DF Zephyr were the programs utilized. To assess the impact of each arrangement pattern on the accuracy, the GCPs were dispersed differently across the project area. We found that a diagonal layout is better than a straight configuration for obtaining a low check-point RMSE. The ideal number of GCPs is proportional to the longitudinal project's size, with the RMSE almost constant beyond 23 GCPs. The research concludes that the gap between adjacent GCPs ought to be no more than 100 meters in order to achieve reasonably precise results in low-cost UAV mapping.
The Next 700 Accelerated Layers: From Mathematical Expressions of Network Computation Graphs to Accelerated GPU Kernels, Automatically

Deep learning frameworks automate the deployment, distribution, synchronization, and memory allocation of models represented as graphs of computational operators. We contribute: (1) a domain-specific language with a tensor notation close to the mathematics of deep learning; (2) an intermediate representation and Just-in-Time optimizing compiler based on the polyhedral framework, enabling complex program transformations and levels of automation unmatched by any other compiler for the acceleration of computational sub-graphs of neural networks; (3) coordinated optimization algorithms with integrated functional correctness, profitability modeling, and domain and target specialization; we propose a layered approach, relying on integer linear programming and other polyhedral algorithms to address the core program optimization and synthesis challenges, while resorting to evolutionary algorithms as a higher level of control to select high-level strategies and fine-tune transformation parameters; and (4) the transparent integration of our flow into PyTorch [48] and Caffe2 [29], providing the fully automatic synthesis of high-performance GPU kernels from simple tensor algebra. The TC flow is also portable to other ML frameworks with a few lines of code. While our initial implementation focuses on Nvidia GPUs, the core technology applies to other types of accelerators with shared or partitioned memory [43,51,70,76]; these include vector and SIMD accelerators, as well as the generation of computational patterns suitable for ASICs with systolic designs and efficient storage management involving non-volatile memory technologies.

TENSOR COMPREHENSIONS
Tensor Comprehensions (TC) are an algorithmic notation for computing on multi-dimensional arrays. The notation borrows from Einstein notation, a.k.a. the summation convention: (1) index variables are defined implicitly, and their range is inferred from what they index; (2) indices that only appear on the right-hand side of a statement are assumed to be reduction dimensions; (3) the evaluation order of points in the iteration space does not affect the output. A tensor comprehension function (or tensor comprehension for short) defines output tensors from pointwise and reduction operations over input tensors. These operations are defined declaratively as a sequence of pointwise equations or reductions, called tensor comprehension statements (or statements for short). Let us consider matrix-vector product as a simple example of a tensor comprehension with two statements (see the sketches at the end of this section). Such a definition introduces the function mv with A and x as input tensors and C as an output. The shapes of A and x are (M, K) and (K), respectively; the shape of C is inferred automatically. The statements introduce two indices, "i" and "k." Variables not defined in the function signature implicitly become indices. Their range is inferred based on how they are used in indexing (see Section 3.1); here, we will discover i ∈ [0, M) and k ∈ [0, K). Because k only appears on the right-hand side, stores into C will reduce over k with the reduction operator +. Intuitively, a tensor comprehension may be thought of as the body of a loop whose control flow is inferred from context; the equivalent C-style pseudo-code is also given in the sketches below. Importantly, the nesting order (i then k) is arbitrary: the semantics of a tensor comprehension is always invariant to loop permutation. TC allows in-place updates while preserving a functional semantics that is atomic on full tensors: RHS expressions are read in full before assigning any element on the LHS.
This specification is important in case the LHS tensor also occurs in the RHS [24]. The compiler is responsible for checking the causality of in-place updates on element-wise dependences, currently allowing only pointwise updates. Also, to enable in-place updates across TC functions, outputs of a TC statement can also be used as inputs. We provide a short-cut for an initializing reduction, where the result is initialized to the operator's neutral element before reduction, by appending "!" to the operator, e.g., "+=!" instead of "+=". A one-line definition of the matrix-vector product mv, as well as common ML kernels written in just a few lines such as the sgemm function from BLAS, appear in the sketches at the end of this section. Expressing general tensor contractions is equally easy. A fully connected layer followed by a rectified linear unit takes the form of a transposed matrix multiplication initialized to a broadcast bias term and followed by pointwise clamping (applying the built-in scalar function fmaxf with 0); see the fully connected sketch below. The where annotation informs the inference algorithm of the intended index variable ranges when they cannot be unambiguously inferred. In this case, "b" indexes only "out", whose size also needs to be inferred. Unlike tensor kernel libraries with predefined layout conventions, notice that TC lets the user control data layout through the order of tensor indexing dimensions. Here, we chose to reuse the out tensor across all comprehensions, indicating the absence of temporary storage. Similarly, the where clause serves to indicate the ranges of kh and kw in the max pooling layer, which would otherwise be under-constrained. A 2-D convolution is also simple: its reduction is initialized to 0 (note the use of +=!) with reduction dimensions kh and kw. Subscript expressions can be any affine function of iterators, or subscript-of-subscript expressions (a tensor element indexing another), and combinations thereof. The latter capture data-dependent accesses such as a gather operation (also sketched below). The TC algorithmic notation differs from today's prominent frameworks, where most operators are defined as black-box functions. The design of TC makes it easy to experiment with small layer variations while preserving a concise, in-place expression. Thus, a strided convolution is easily created as a tweak on convolution, e.g., strided by 2 along h and 3 along w (see the sketches below).

Fig. 1. Simplified EBNF syntax for core TC. Parentheses denote inline alternatives, brackets denote optional clauses, and angle brackets contain textual descriptions used for simplicity.

Figure 1 shows the grammar of the Tensor Comprehension language in EBNF notation.

Data Layout
TC makes data layout explicit and easy to reason about. It supports generalized tensor transpositions (i.e., applying an n-D permutation matrix where n > 2), and data tiling can be achieved by reshaping tensors and adjusting the index expressions. Range inference and checking guarantee that such reshaping will always be consistent throughout the statements of a tensor comprehension. For instance, NCHW convolution operates on an explicit input, declared as float I(N, C, H, W), with the layout matching the expected row-major semantics. In addition, the TC compiler may transparently apply layout transformations, e.g., when mapping tensor tiles to GPU shared memory.

Automatic Differentiation
TC does not natively deal with automatic differentiation, but we aim to add TC support to an existing differentiation tool in the future. DSLs like PlaidML [49] already demonstrated this.
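The kernels discussed in this section can be written out as follows. These listings are reconstructions consistent with the descriptions above rather than verbatim copies of the original figures; the function names are illustrative, and the last definition anticipates the backward pass of matrix multiplication discussed next.

Matrix-vector product, in two-statement and one-line forms:

    def mv(float(M,K) A, float(K) x) -> (C) {
        C(i) = 0
        C(i) += A(i,k) * x(k)
    }
    def mv1(float(M,K) A, float(K) x) -> (C) {
        C(i) +=! A(i,k) * x(k)
    }

sgemm from BLAS, computing C = a*A*B + b*C:

    def sgemm(float a, float b, float(N,M) A, float(M,K) B) -> (C) {
        C(i,j) = b * C(i,j)
        C(i,j) += a * A(i,k) * B(k,j)
    }

A fully connected layer followed by ReLU; "b" only indexes out, hence the where clause:

    def fcrelu(float(B,I) in, float(O,I) weight, float(O) bias) -> (out) {
        out(b,o) = bias(o) where b in 0:B
        out(b,o) += in(b,i) * weight(o,i)
        out(b,o) = fmaxf(out(b,o), 0)
    }

2x2 max pooling, with the where clause constraining kh and kw:

    def maxpool2x2(float(B,C,H,W) in) -> (out) {
        out(b,c,i,j) max=! in(b,c,2*i+kh,2*j+kw) where kh in 0:2, kw in 0:2
    }

A 2-D convolution with an initializing reduction over kh, kw, and its strided variant (stride 2 along h, 3 along w):

    def conv2d(float(B,IP,H,W) in, float(OP,IP,KH,KW) weight) -> (out) {
        out(b,op,h,w) +=! in(b,ip,h+kh,w+kw) * weight(op,ip,kh,kw)
    }
    def sconv2d(float(B,IP,H,W) in, float(OP,IP,KH,KW) weight) -> (out) {
        out(b,op,h,w) +=! in(b,ip,2*h+kh,3*w+kw) * weight(op,ip,kh,kw)
    }

A data-dependent gather via a subscript-of-subscript expression:

    def gather(float(N) X, int32(A,B) I) -> (Z) {
        Z(i,j) = X(I(i,j))
    }

The backward pass of matrix multiplication O = A*B, given the output gradient d_O:

    def matmul_grad(float(M,K) A, float(K,N) B, float(M,N) d_O) -> (d_A, d_B) {
        d_A(m,k) +=! d_O(m,n) * B(k,n)
        d_B(k,n) +=! d_O(m,n) * A(m,k)
    }

Finally, the C-style expansion of the two-statement mv reads:

    /* Equivalent pseudo-code; the i/k nesting order is arbitrary. */
    for (int i = 0; i < M; i++) {
        C[i] = 0.0f;
        for (int k = 0; k < K; k++)   /* k is a reduction index */
            C[i] += A[i][k] * x[k];
    }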
However, backward passes can readily be implemented in TC as a few lines of code; the backward pass of matrix multiplication is given at the end of the sketches above.

TENSOR COMPREHENSIONS WORKFLOW
The Tensor Comprehensions workflow consists of several stages, progressively lowering the level of abstraction (Figure 2). Given a TC with specialized tensor sizes and strides, we lower it to a parametric Halide-IR expression, which is further lowered to a polyhedral representation where most transformations are applied. The output of the polyhedral flow is CUDA code that can be further JIT-compiled with NVRTC and executed. Complementing this flow, an autotuner and a serializable compilation engine interact with scheduling and mapping strategies to search the optimization space. Much of TC's versatility and effectiveness resides in its embedding of a polyhedral compiler as the main optimization engine. The polyhedral framework is an algebraic representation of "sufficiently regular" program parts, covering arithmetic expressions on arrays surrounded by static control flow [23]. It has been a cornerstone of loop optimization in the past three decades [3,8,14,22,32,70] and is integrated into production compilers [13,30,43,62]. Despite its deceptive apparent simplicity, it covers a large class of computationally intensive kernels. It is parametric in loop bounds and array sizes and captures more transformations of the control and data flow than domain-specific representations such as Halide [55] or TVM [17]. The use of the polyhedral model by TC is derived from that of PPCG [70], and this section only provides a general overview. Our transformation engine is composed of the following specially adapted or algorithmically novel components: (1) range inference and lowering from the high-level TC abstraction to the polyhedral representation; (2) core affine scheduling adapted from isl that automatically optimizes for (outer) loop parallelism and locality, tuned towards folding a complete TC function into a single GPU kernel; (3) further tiling of the schedule to facilitate the mapping and temporal reuse on the deep parallelism and memory hierarchy of GPUs [72]; (4) mapping to GPUs, borrowed from PPCG [70] with extensions to support the more complex and imperfectly nested control structures of ML kernels; (5) memory promotion, dealing with explicit data transfers to and from shared and private memory. This work demonstrates that the polyhedral framework is particularly well suited for deep neural networks, featuring large and deeply nested loops with long dependence chains and non-uniform or all-to-all patterns arising from fully connected layers, tensor contractions, and transpositions. These features push the optimization problem into a different heuristic space than Halide's for image processing, and a wider space than linear algebra alone.

Range Inference
TC loops are implicit and output tensor sizes are inferred from index ranges, which themselves may also be inferred. Our algorithm infers the largest rectangular ranges that avoid out-of-bounds reads on inputs. A where clause allows for disambiguation if multiple such ranges exist. Consider the conv2d kernel sketched earlier. The sizes of the input tensors, in and weight, are known from the function signature. The algorithm needs to infer the ranges of the iterators and the size of the output tensor out. The iterators b, op, kh, and kw appear only once on the RHS, and their ranges are therefore [0, B), [0, OP), [0, KH), [0, KW), so that they index the input tensors maximally.
The iterator ip appears twice, but indexes dimensions of the same size, so its range is [0, IP). Had it been indexing dimensions of different sizes, its range would have been the intersection of all size-imposed ranges. Once the ranges of kh and kw are known, it is possible to infer those of h and w: we require h + kh < H and w + kw < W, which leads to the maximal ranges of [0, H − KH + 1) and [0, W − KW + 1), respectively. Finally, the size of out can be inferred given the ranges of the iterators that index it, yielding float(B, OP, H − KH + 1, W − KW + 1). The user of TC is able to inspect the symbolic sizes inferred for the output tensors using a command-line flag. Consider now a typical stencil operation A(i) += B(i + k) * K(k). There are multiple ways to maximize the ranges of i and k. To disambiguate without annotations, range inference proceeds in rounds. It maintains a set of index variables whose ranges are not yet resolved; initially, it contains all variables not in any where clause. Each round considers argument expressions that contain a single unresolved variable and constructs a Boolean condition stating that the accesses are within bounds. Using Halide [55] mechanisms, range inference computes the maximal range that satisfies this condition given the already known ranges of other variables. If different ranges are computed for the same variable, they are then intersected. For the stencil above, in the first round, we ignore the expression B(i + k), because it contains multiple unresolved variables, and use K(k) to deduce a range for k. In the second round, B(i + k) contains a single unresolved variable, and we use the already-inferred range of k to deduce a maximal range for i.

Lowering to the Polyhedral Representation
The role of lowering is to bridge the impedance mismatch between the logical layout of high-level tensor operations (dimension ordering) and the data format the polyhedral code generator expects (C-style row-major arrays). It ensures the absence of aliasing and performs range inference for output tensors. Based on range inference, TC differs from the NumPy-style implicit "broadcast" semantics (non-trivial tensor dimensionality extension) adopted by XLA, PyTorch, and MXNet. Our representation derives from schedule trees [71], implemented in the isl library [68], and uses a set of node types. Each TC statement corresponds to multiple runtime statement instances: one for every valuation of the index variables. The root domain node defines the set of statement instances to be executed. Due to the nature of the TC language, the constraints on the index variables are always affine, resulting in an exact representation of the set of operations. A band node defines a partial execution order through one or multiple piecewise affine functions defined over iteration domains; the name refers to the notion of a permutable schedule band, a tuple of one-dimensional schedule functions that can be freely interchanged while preserving the semantics of the program. A filter node partitions the iteration space, binding its sub-tree to a subset of the iteration domain; filters can be arranged into set or sequence nodes depending on whether or not the order of execution must be serialized. Context nodes provide additional information on the parameters, e.g., tensor extents or GPU grid/block sizes. Finally, extension nodes introduce auxiliary computations that are not part of the original iteration domain, which is useful for, e.g., introducing data-copy statements.
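As an illustration of these node types, the canonical tree for the two-statement sgemm sketched earlier (and described formally in the next paragraph) can be rendered as follows; this is an informal schematic, not actual isl syntax:

    domain: instances of S (initialization) and T (update)
      sequence
        filter: { S(i,j) }
          band: [i, j]        -- 2-D identity schedule for the initialization
        filter: { T(i,j,k) }
          band: [i, j, k]     -- 3-D identity schedule for the update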
A canonical schedule tree for a TC is defined by an outer sequence node, followed by filter nodes for each TC statement. Inside each filtered branch, band nodes define an identity schedule with as many one-dimensional schedule functions as loop iterators for the statement. The implicit loops form a permutable band as per TC semantics. In addition to the schedule tree, our representation includes tensor access functions that map the index variables to the subscripts of the tensors they access. These subscripts are not necessarily affine, in which case over-approximations are used [11]: a non-affine access is assumed to potentially access all values along the given dimension. After the polyhedral representation is constructed, dependence analysis can be used to ensure the absence of out-of-bounds accesses [53]. Additional lowering steps include forward substitution of convolution expressions (a storage/computation trade-off), padding, mirroring, and clipping. The process is analogous to Halide's [55]. Figure 3(a) shows this canonical schedule tree for the sgemm TC defined earlier, written as unions of relations in which tuples of iterators are guarded with syntactic identifiers [53]. One recognizes a 2-D nest for the initialization statement followed by a 3-D nest for the update statement. The schedule can be either parametric in input sizes or carry extra context information on the tensor sizes. In cases where band nodes do not define an injective schedule, the statement instances are scheduled following the lexicographic order of their domain coordinates.

Tunable Polyhedral Scheduling
Program transformation in the polyhedral model involves defining a different schedule, which corresponds to a different (partial or total) order of traversing the iteration domain. The instances of all statements are scheduled completely automatically [14] using one of several scheduling strategies with which we extended the isl scheduler [72]. The isl scheduler iteratively solves integer linear programming problems to compute piecewise affine functions that form new schedule band nodes. Internally, it operates on a data dependence graph where nodes correspond to statements and edges express dependences between them. It introduces the affine clustering technique, which is based on computing the schedule bands separately for individual strongly connected components of the dependence graph and then clustering these components iteratively and scheduling them with respect to each other. Clustering not only decreases the size of the linear problems the scheduler has to solve, but also serves as a basis for isl's loop fusion heuristic. We extended isl to provide finer-grained control over the scheduling process. For affine transformations, the user can set additional scheduling options. For clustering, the user can supply a decision function for pairwise dependence graph component combination, invoked after this combination has been demonstrated valid by the scheduler. These configuration points serve as a basis for both the fixed scheduling choices made by TC and the tunable scheduling strategies. In particular, TC tells the scheduler to produce schedules with only non-negative coefficients and without any skewing. Clustering decisions allow TC to control the conventional minimum and maximum fusion targets and, additionally, maximum fusion that preserves at least three nested parallel loops (to be mapped to CUDA blocks and threads).
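For sgemm, such fusion-oriented scheduling yields a tree of the following shape; this is again an informal schematic, and the example below discusses it in detail:

    domain
      band: [i, j]              -- fused outer loops, common to S and T
        sequence
          filter: { S(i,j) }    -- initialization executed first
          filter: { T(i,j,k) }
            band: [k]           -- remaining reduction loop, T only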
With the scheduling strategies, one may optionally enable point band rescheduling (i.e., scheduling the inner dimensions after tiling). In particular, two fusion strategies can be specified, one for the global schedule and one for the point band. If these fusion strategies are different, then the point band (along with all its descendants) is rescheduled after tiling, preserving only the outer tile band of the original schedule. Scheduling strategies can be selected through the autotuning process. In all cases, we enforce that a single GPU kernel is generated. Example. Observing that the C tensor in sgemm (defined earlier) is reused between the two nests, the scheduler constructs the tree in Figure 3(b) to leverage access locality and improve performance. This tree features an outer band node with the i and j loops that became common to both statements, which corresponds to loop fusion. The sequence node ensures that instances of S are executed before the respective instances of T, enabling proper initialization. The second band is only applicable to T and corresponds to the innermost (reduction) loop k. Overall, the tuning process is greatly simplified compared to Halide and TVM: relying on a heavy-duty, well-understood analytical optimization framework based on integer linear programming, TC exposes a small, dedicated search space of high-level strategies and block-size parameters. Beyond guaranteeing the validity of a transformation, dependences can be used to explore parallelization opportunities (independent instances can be executed in parallel), to improve data access locality (dependent instances executed close in time), or to automate vectorization [14,50,66,72,77].

Imperfectly Nested Loop Tiling
Let us first describe the general setting for loop tiling on schedule trees, before developing the TC-specific specialization and extensions. Tiling Permutable Bands. Pluto has been very successful at decoupling the actual implementation of loop tiling from the preparation of an affine schedule exposing permutable loops amenable to tiling [14]. This design allows exploring locality and parallelization trade-offs without bloating the schedule representation with complex quasi-affine forms capturing the precise distribution of iterations into tile and point loops. Schedule trees ease the implementation of such a decoupled design, capturing tiling as the conversion of a permutable schedule band into a chain of two bands, with the outer band containing tile loops and the inner band containing point loops with fixed trip counts. This can be seen as a conventional strip-mine and sink transformation. In addition to conventional loop tiling, the schedule tree representation allows tiling imperfectly nested loops. The technique is based on the following observation: if a loop does not carry dependences, it can be sunk below any other loop. In valid schedules, all dependences are carried (or satisfied) by some loop, along which they feature a positive distance. A dependence is only violated if it has a negative distance along some loop before it is carried by another loop [35]. Parallel loops do not carry dependences by definition and therefore do not affect dependence satisfaction or violation. Therefore, imperfectly nested tiling may be implemented by first tiling bands in isolation and then sinking parallel point loops in the tree.
During this process, the point band is replicated in each sub-tree below a sequence (or set) node and its schedule is restricted to only map the relevant points of the iteration domain. Such an extension is particularly helpful in Pluto, where bands of permutable loops are rediscovered through a post-pass traversal of the affine schedule. Parallelism and Locality Trade-offs. TC applies two tiling schemes with complementary purposes. The first one takes place immediately after affine scheduling. It aims at exposing a sufficient number of parallel dimensions, some of which are amenable to memory coalescing and some better suited to block-level parallelism. It also aims at exploiting data locality within thread blocks (through shared memory) and within individual threads (through register reuse). This tiling scheme is influenced by the strong emphasis on loop fusion in the affine scheduling heuristic (to enforce that the generated code runs as a single GPU kernel). In this context, conventional loop nest tiling, considering a single band at a time, appears to be sufficient; this is the hypothesis we make in this article. The second tiling scheme takes place in the block and thread mapping algorithm, which is the topic of the next sub-section. Example. Figure 3(c) shows the schedule tree for the fused and tiled sgemm. It purposely has two imperfectly nested bands. Dependence analysis shows that loops i and j are parallel. Therefore, we can tile them and sink the point loops below the band of the reduction loop k, resulting in the schedule tree in Figure 3(d). Innermost nested bands with point loops can be joined together into a single band after checking for permutability. As indicated earlier, TC implements the fusion and tiling scheme of Figure 3(c) but not the sunk, imperfect scheme of Figure 3(d).

Mapping to Blocks and Threads
A schedule tree can also be used to represent the mapping to an accelerator, in particular a GPU with multiple blocks and threads. This operation is performed by associating certain schedule band members, and the corresponding loops, with thread or block indices. The polyhedral code generator then omits the loops, if possible, and rewrites the index expressions accordingly. Building on PPCG, our mapping approach is decoupled from tiling for data locality: grid and block sizes are specified independently from tile sizes and are exposed as tunable parameters. Due to the semantics of blocks and threads, only parallel loops that belong to a permutable schedule band can be mapped. If point loops are mapped to threads, the ratio between tile sizes and block sizes controls the number of iterations executed by each thread. Note that tile sizes smaller than the block sizes lead to some threads not performing any computation. Contrary to PPCG, which may generate multiple kernels for a given input program, our mapping approach handles imperfectly nested loops in a way that generates a single kernel, as expected by ML frameworks. We require the schedule tree to have at least an outermost band with outer parallel dimensions. The parallel dimensions of the (single) outermost band are mapped to GPU blocks. In each schedule tree branch, the innermost permutable band, typically consisting of point loops, is mapped to GPU threads with the following restrictions: the number of mapped dimensions must be equal across branches, and on each branch, there must be exactly one band mapped to threads.
The mapping is performed bottom-up, first attempting to map the leaf bands to threads, and moving to a parent band only if none of the children could be mapped to threads. Thread mapping can be extended to imperfectly nested loops, following the same principle as imperfect loop tiling: within a given thread block, one may sink parallel point loops so that multiple bands in a sequence (or set) may be equalized in depth and mapped together. However, TC currently does not perform any such sinking. Example. Our mapping strategy produces the schedule tree in Figure 3(e). We introduced a context node in the schedule tree to indicate the effective sizes of the parameters as well as the grid and block sizes (denoted as bx, by and tx, ty, respectively, standing for the values eventually taken by blockIdx.x, blockIdx.y and threadIdx.x, threadIdx.y). This insertion is performed just in time, when the effective tensor sizes are known. Also notice the filter nodes referring to the bx, by, tx, and ty parameters: these nodes express the mapping to the GPU.

Memory Promotion
We are interested in promoting parts of tensors into shared or private GPU memory. While the promotion decision is taken by a heuristic and the corresponding imperative code is generated at a later stage, schedule trees offer a convenient interface for attaching memory-related information. Memory promotion is based on the notion of an array tile, a form of data tiling for software-controlled local memories: a constant-size, potentially strided block of the array that covers all elements accessed within a given (schedule) tile. We build upon and extend PPCG's support for memory promotion [70,72] and expose the promotion to shared and private memory as Boolean options for the autotuner. Promotion of Indirectly Accessed Arrays. Memory promotion is also applicable to indirectly accessed arrays. These frequently occur when modeling variable-length data through embedding layers, such as word embeddings in natural language processing. This is particularly important in the case of latency-bound benchmarks, where there is little computational or additional data processing work to hide global memory latency. Indirect arrays used to be promoted in the initial TC implementation based on PPCG. When implementing parallel reductions, working towards the first released version of TC, we realized that parallelizing reductions was sufficient to deliver comparable or higher speedups on our word-embedding benchmarks; for this reason, indirect array promotion was dropped from the publicly available version of TC. We still report on the design, for it remains interesting to describe how the polyhedral TC flow may optimize non-affine data flow. Because some values can be duplicated, indirect promotion is only possible if both the outer and the index arrays are only read, since writing to them could result in different values that cannot be trivially merged. In general, we require the index array to have an array tile, i.e., only a fixed-size block of it is accessed. When computing the array tile for the outer array, we ignore the indirect parts of the subscript (affine parts are treated as usual). We then introduce as many additional index expressions in the promoted outer array as are associated with the index array. The extents of the array along these new dimensions correspond exactly to the array tile sizes of the index array.
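Schematically, assuming a read-only outer array A accessed as A[I[j]] and an index-array tile covering j in [j0, j0+T), the promotion amounts to the following hand-written CUDA sketch; it illustrates the idea rather than the code TC actually generates:

    __shared__ float shA[T];
    /* Copy in: the indirect subscript appears only here. */
    for (int j = threadIdx.x; j < T; j += blockDim.x)
        shA[j] = A[I[j0 + j]];
    __syncthreads();
    /* Subsequent uses of A[I[j0 + j]] are rewritten to shA[j]. */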
Hence, an element of the promoted array contains a copy of the global array element that would be accessed through the given index array. Indirect subscripts are only used when copying from global memory, while all other accesses are rewritten through code generation. In the presence of multiple indirect index expressions that share sub-expressions and have equal tile sizes along the corresponding dimensions, it is sufficient to introduce a single index expression in the promoted array for all identical sub-expressions. Promotion Heuristics. Directly accessed arrays are promoted to shared memory if there exists an array tile of fixed size, if individual elements are accessed more than once, and if at least one of the accesses does not feature memory coalescing. The latter is visible from the access relation with the schedule applied to the domain: the last access dimension should be aligned with the schedule dimension mapped to threads along x. For indirect arrays, the coalescing requirement may be dropped because of the presence of the additional long memory dependences that these cases entail. The total amount of shared memory being fixed, one may follow a simple greedy heuristic, refusing promotion if the required amount of shared memory would outgrow the available resources.

Matching Library Calls
While TC aims at generating code for any computational kernel expressible in the DSL, if (part of) a kernel happens to match a pattern that is heavily optimized by some library, then it may as well be handled by that library. In particular, and as a proof of concept, TC looks for opportunities to let CUB handle specific forms of reductions [57]; this is currently restricted to single-dimensional addition reductions. A reduction is represented in TC by a binary relation between updated tensor elements and the statement instances that perform the corresponding updates. Right before the mapping to threads, each permutable band with a sufficient number of parallel members is checked for reductions. In particular, the band should have at least one non-parallel member, and the number of parallel members plus one (corresponding to the non-parallel member) should be greater than or equal to the number of dimensions that will be mapped to threads. If the band schedules instances of exactly one reduction statement, and if the instances of any other statement scheduled by the band can be moved before or after the reduction instances, taking into account the active dependences at (the top of) the band, then the remaining band (involving only reduction statement instances) will be considered for replacement by a library call during thread mapping. When a band marked for replacement is considered during thread mapping, full/partial tile separation is applied (using the block size tuning parameter), since only the full tiles can be handled directly by CUB. Furthermore, the condition separating full tiles from partial tiles should be simple enough, as otherwise the cost of determining when to invoke CUB would outweigh any possible benefit obtained from the invocation. If the condition is too complicated, the separation is discarded and the band is treated in the same way as bands that were not marked for replacement. Otherwise, the collection of full tiles is tiled along the parallel dimensions, since a single scalar variable is used to hold the result of the reduction mapped to CUB.
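The replacement itself amounts to a block-level reduction of the following flavor; this is an illustrative sketch using the public CUB API rather than the code TC actually emits, and BLOCK_THREADS and thread_partial are placeholders:

    /* Each full tile collapses into a single block-wide sum. */
    typedef cub::BlockReduce<float, BLOCK_THREADS> BlockReduce;
    __shared__ typename BlockReduce::TempStorage temp;  /* counts toward the shared-memory budget */
    float total = BlockReduce(temp).Sum(thread_partial);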
Synchronization and a special marking are then inserted around the point band of this tiling; the marking is later used during code generation to replace each full tile by a call to CUB. Finally, since CUB uses some shared memory, its consumption is taken into account during the downstream memory promotion step.

Autotuning and Caching
While the polyhedral core of TC is capable of optimizing and generating code for any TC function, it is well known that state-of-the-art linear optimization heuristics are not sufficient to account for all performance anomalies and interactions with downstream program transformations [39,77]. Different kernels need different, target-specific optimization trade-offs. We thus complement our flow with an autotuner that varies the options of the polyhedral JIT compiler marked as tunable in the previous section. These options can be stored and reused for similar operations/kernels (similar shapes, target architecture), since autotuning may require significantly more time than compilation. A tuning session is defined by a list of parameters to tune together with their admissible values, initial values, and the search strategy. We currently implement a genetic search strategy [27]. It runs for multiple steps, each one evaluating multiple candidate values. Each candidate is assigned a fitness value inversely proportional to its runtime. The pool is updated on each generation by cross-breeding three candidates, chosen from the pool at random with fitter candidates having a higher chance of being chosen, such that each of the new candidate's values is inherited from one of its parents. A subsequent mutation phase can change the candidate's values at random with some low probability. Much of the autotuning effort resides in tile size selection, for which no linear objective functions exist in polyhedral compilers. Genetic approaches have been used successfully to explore such spaces, performing better than random search due to the strong coupling of optimization decisions, including tile sizes bound by the limits of the memory hierarchy [18,50]. Autotuning evaluates hundreds to thousands of versions for each kernel. We devised a generic multi-threaded, multi-GPU autotuner. It maintains a queue of candidates to compile with the polyhedral flow and a queue of compiled kernels ready to be profiled on the GPU (see Figure 4). Candidates or kernels are picked up by available worker threads and compiled or profiled concurrently. Profiling results are accumulated in the tuning database and used for setting up successive search steps. Each generated version is "warmed up" by a few executions before being profiled. Without any performance guarantees, autotuning needs to quickly prune poor candidates. Because CUDA kernels cannot be stopped once launched, we rely on the following pruning heuristics to decrease the autotuning time by an order of magnitude: (1) parameter specialization allows the exact number of active threads and blocks to be computed beforehand, and kernels with fewer threads than some configurable threshold (e.g., 256) are not launched; (2) if during the first run a kernel is more than 100× slower than the best version so far, or 5× slower after warmup, it is pruned immediately. While autotuning time may become significant, compilation and autotuning time is not a fundamental limit to TC's applicability. In training scenarios, a significant amount of time is spent computing the same kernel repeatedly over different data during (stochastic) gradient descent.
In inference scenarios, the network is optimized ahead of time. As a result, although TC operates as a JIT compiler, it only marginally hits the typical compilation/run-time trade-offs of JIT compilers. Autotuning time may become an issue in specific training scenarios where hyper-parameters need to be updated frequently, but in such a case one may leverage TC's intrinsic handling of dynamic shapes and generate a single version of each operator, or of fused operators, to handle all hyper-parameter configurations.

INTEGRATION WITH ML FRAMEWORKS
TC is designed to optimize individual layers or small subgraphs of an ML model. Compiling the entire model is not only computationally expensive but often leads to most transformations being hindered by a large number of data dependences. Furthermore, ML frameworks perform work distribution and placement at the model level, treating a layer as a unit of work; extremely large layers could interfere with the framework's operation. Unlike XLA or Glow, TC supports completely custom layers. In TC, layer fusion is merely pasting the code that constitutes the layers into a single function, or inlining TC functions at the AST level. Unlike Halide and TVM, the polyhedral backbone of TC includes instance-wise dependence analysis, capturing dependences and tensor access relations at the level of individual loop iterations and tensor elements. This allows TC to fuse operations without introducing redundant computation, and to combine fusion with enabling transformations such as shifting (for convolutions) or scaling (for pooling layers). TC's polyhedral representation also enables it to automatically infer sizes, and to discover parallelism and locality-parallelism trade-offs beyond a predefined collection of map/reduce/scan combinators. Let us now describe the transparent integration into an ML framework, from a user perspective. Until now, such levels of integration had only been demonstrated on operator graph compilers such as XLA [28] and Glow [58], which start from a lower level of abstraction than TC and miss the genericity and high reusability of a polyhedral framework as well as feedback-directed autotuning. We opted for an "in process" implementation, streamlining the interaction with computation graph engines and ML applications built on top of them, a unique feature for a fully automated scheduling and mapping flow. TC is integrated into an ML framework as follows: we provide a thin API that translates the framework's specific tensor object model to our own (see Figure 5). Operator definitions are overridden to generate TC rather than the framework's backend implementation, and users also gain the ability to write their own TC. A single TC may correspond to a DAG of operators in the ML framework. The tensor comprehensions are then JIT-compiled as shown in Figure 2. DAG partitioning, matching, and rewriting (as in, e.g., TensorRT [47]) is currently not part of the flow, although it would make an interesting future combination, with feedback from the compiler. We report results for nine TC functions, ranging from a simple matrix multiplication kernel to a full WaveNet cell [64]. The individual benchmarks are described below; Figure 6 and Figure 7 show the complete source code. The matrix multiplication and convolution kernels were selected for their dominance of the training and inference time of the most classical networks [4,75].
The other kernels bring interesting computation patterns to enable expressiveness and performance comparisons across more diverse network architectures. These results are all based on TC commit 2e1a0dc54850, available at https://github.com/nicolasvasilache/TensorComprehensions. Running the autotuner for 25 generations of 100 candidates, the (parallel) autotuning process takes up to 1h on the longest-running kernels, and 6h in total. The relative performance of kernels automatically generated with TC compared to Caffe2 is shown in Figure 8 and Figure 9. Caffe2 provides a very strong baseline by wrapping tuned implementations (see Table 1), which originate either from hand-tuned libraries or from other high-performance code generators. (A recent unification effort [59] made Caffe2 the backend for PyTorch 1.0.) We chose to compare against Caffe2 rather than against other optimization flows due to expressivity and automation limitations: XLA and Glow do not support custom layers, and Halide and TVM lack range inference and automatic parallelism discovery, which significantly complicates the expression of new layers such as KRU and WaveNet. The common set of comparable layers would be limited to matrix multiplications and convolutions, while one of the main contributions of TC is to enable exploration of new, unconventional layers before super-optimized implementations are available. In addition, Figure 10 brings together the performance of TC-compiled kernels on both GPU systems, normalized to Caffe2 on P100. This consolidated graph conveys three classes of information in a common context: (1) the speedup of Caffe2 V100 over Caffe2 P100, to illustrate the out-of-the-box benefits (or lack thereof) of a faster GPU; (2) the speedup of TC over Caffe2 on P100 (the main comparison); and (3) the speedup of TC V100 over Caffe2 P100. The last choice may seem surprising but, presented in the context of the other two, allows for relative comparisons: the heights of the Caffe2 V100 and TC V100 bars capture the raw speedups of TC on V100. We aim at compactly illustrating that TC provides a path to performance portability, improving on state-of-the-art frameworks and library primitives. TMM: Transposed Matrix-Multiplication. On matrix multiplications of shapes and sizes relevant to deep learning workloads (i.e., small 128 × 32 × 256, medium 128 × 1,024 × 1,024, and large 128 × 4,096 × 16,384), TC does not perform competitively, except in the low-latency small case. This is due to: (1) the lack of a target-specific register blocking optimization, making kernels bound by shared memory bandwidth, which is an order of magnitude slower than register bandwidth; (2) the lack of target-specific, basic-block-level optimizations, including careful register allocation and instruction scheduling. Matrix multiplication is the most tuned computation kernel in history: the missing optimizations are all well known and may be found in use cases and open-source implementations such as CUTLASS [36]. Alternatively, polyhedral compilation has been shown to match or outperform cuBLAS, provided sufficient target- and operator-specific information has been captured in the optimization heuristic and code generator [20]. While our scientific focus was on covering a wide range of layers with TC, a production release would need to embed such operator-specific strategies as well. One strategy would be to follow the classification and heuristic steering of Kong et al. [39].
Also, TC does not replace all layers: it only acts as a custom operation in a graph, and one may use TC concurrently with numerical libraries as well as with custom implementations provided through TVM. Group Convolution. Group convolution is expressible in two lines of TC. We report comparisons for sizes relevant to the ResNeXt model [75]. Despite using neither register optimizations nor Fourier- or Winograd-domain convolutions, TC produces faster kernels than the cuDNN ones, with running times between 250μs and 750μs. To check how TC fares with respect to recent advances in optimizing group convolutions, we performed an additional comparison with the PyTorch nightly package py36_cuda9.0.176_cudnn7.1.2_1 with torch.backends.cudnn.benchmark=True. TC speedups range from −2% to 8×. We also observe that PyTorch performance on V100 is worse than on P100, while TC achieves performance portability. Group Normalization. Group normalization was recently proposed as a way to overcome the limitations of batch normalization at smaller batch sizes and to increase parallelism [74]. In TC, group normalization is a five-line function. TC performance is roughly 30% better than the hand-tuned Caffe2 implementation. Whereas Caffe2 uses four handwritten CUDA kernels, we chose to write the TC version as two separately compiled TC functions for better reuse and overall performance. We also experimented with writing a single fused TC, but performance degraded; this is mostly due to the kernels requiring substantially different grid configurations, which makes their fusion unprofitable. A larger, graph-level compiler that decides on TC function granularity, informed by the TC mapper and the autotuner, is necessary to automate this decision process, but it is left for future work. Production Model. The kernels 1LUT, 2LUT, MLP1, and MLP3 are the backbone of a low-latency production model used at scale in a large company and correspond to (1) reductions over a large lookup table embedding (10M rows); (2) a fused reduction over two large lookup table embeddings (10M rows); (3) a small Multi-Layer Perceptron (fully connected, bias, ReLU); and (4) three consecutive very small Multi-Layer Perceptrons. Despite the LUT sizes, this model is essentially latency-bound. Existing libraries are often not tuned for low-latency regimes and tend to perform poorly here. On these examples, the need for reuse and instruction-level parallelism is dwarfed by the need to quickly load data from memory into registers. TC is able to adapt to the problem size, leveraging reduction parallelism to hide memory latency. This results in large speedups over Caffe2 with cuBLAS 9.0. Transposed Batch MatMul. This kernel is meant as a case study to characterize performance benefits and losses in the current flow, compared with reference libraries. For the sizes relevant to Factorization Machines [56] (500 × 26 × 72 × 26), the Nvidia profiler reports the TC autotuned kernel taking 56μs on the Nvidia Quadro P6000 GPU (Pascal), while both PyTorch and Caffe2 resort to the specialized cuBLAS function maxwell_sgemm_128x64_nn, which takes 87μs. Beyond the architecture mismatch indicated in the function name, a detailed performance comparison shows that TC executes 500 blocks of 26 × 13 = 338 threads, compared to 500 blocks of 128 threads for cuBLAS, reaching 81.8% occupancy instead of 23.6%. Additionally, the cuBLAS kernel shows a large number of predicated-off instructions due to the block size not matching the problem size.
Occupancy is limited by the number of registers in both cases (11,264 vs. 15,360), but the TC version can be distributed over five blocks instead of four. TC promotes all tensors to shared memory, saturating its bandwidth, whereas arithmetic instructions are the performance limiter for cuBLAS. Given the large occupancy metric, performance can be further increased by promoting one tensor to registers instead, trading off lower occupancy for reduced pressure on memory bandwidth. Kronecker Recurrent Units. These have been recently proposed as a solution to drastically reduce model sizes by replacing the weight matrix of a linear layer with a Kronecker product of much smaller matrices [33]. In TC, a Kronecker product of three matrices is easily written as shown in the kronecker3 function in Figure 6; a NumPy sketch of the underlying factored product appears below. Table 2 shows the running time in μs (or out of memory, OOM) of a large matrix multiplication in Caffe2 and of the equivalent Kronecker product of three matrices. Note that the performance difference mostly comes from using a different algorithm. While no specialized GPU library primitives exist for Kronecker recurrent units, TC's automatic flow enabled rapid exploration and reached unprecedented levels of performance, as shown in Table 2. Clearly, this benchmark deserves a deeper discussion of the space of possible TC derivations, including memory/computation/parallelism trade-offs falling outside the scope of this article. The kronecker3 function is one such possible implementation that performed well for the three selected matrix shapes; it avoids redundant computation at the expense of storage (two tensors for intermediate computations). WaveNet. WaveNet [64] is a popular model that enables generation of realistic-sounding voices, as highlighted at Google I/O 2018. We encoded a full WaveNet cell using a single TC function and compared our generated kernel with a WaveNet layer from PyTorch. This experiment uses a batch size of 1, residual and dilation channels of 32, and 256 skip channels. With TC, we observe performance improvements up to 4× on Volta, as shown in Table 2. RELATED WORK Despite decades of progress in optimizing and parallelizing compilation, programmers of computationally intensive applications complain about the poor performance of optimizing compilers, often missing the machine peak by orders of magnitude. Among the reasons for this state of affairs, one may cite the complexity and dynamic behavior of modern processors, domain knowledge required to prove optimizations' validity or profitability being unavailable to the compiler, program transformations whose profitability is difficult to assess, and the intrinsic difficulty of composing complex transformations, particularly in the case of computationally intensive loop nests [6,26]. Several contributions have successfully addressed this issue, not by improving a general-purpose compiler, but through the design of application-specific program generators, a.k.a. active libraries [67]. Such generators often rely on feedback-directed optimization to select the best generation schema [60], as popularized by ATLAS [73] for dense matrix operations (and more recently BTO [10]) and FFTW [25] for the fast Fourier transform. Most of these generators use transformations previously proposed for traditional compilers, which fail to apply them for the aforementioned reasons. The SPIRAL project [54] made a quantum leap over these active libraries, operating on a domain-specific language (DSL) of digital signal processing formulas.
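Returning briefly to the Kronecker recurrent units evaluated above: the algorithmic advantage over a large matrix multiplication comes from never materializing the full weight matrix. Below is a minimal NumPy sketch of that idea (our own illustration, with hypothetical shapes; it mirrors the semantics, not the text, of the kronecker3 function):

```python
import numpy as np

def kron_matvec3(x, W1, W2, W3):
    """Compute y = x @ (W1 kron W2 kron W3) for a batch of row vectors x
    of shape (batch, d1*d2*d3), with factors Wi of shape (di, ei), without
    ever materializing the (d1*d2*d3, e1*e2*e3) weight matrix."""
    (d1, e1), (d2, e2), (d3, e3) = W1.shape, W2.shape, W3.shape
    t = x.reshape(x.shape[0], d1, d2, d3)
    # Contract each factor separately; optimize=True lets einsum pick an
    # order whose cost scales with the factor sizes, not their product.
    y = np.einsum('bacd,ai,cj,dk->bijk', t, W1, W2, W3, optimize=True)
    return y.reshape(x.shape[0], e1 * e2 * e3)

# Sanity check against the materialized Kronecker product (small sizes).
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 2 * 3 * 5))
W1 = rng.standard_normal((2, 3))
W2 = rng.standard_normal((3, 4))
W3 = rng.standard_normal((5, 2))
assert np.allclose(kron_matvec3(x, W1, W2, W3),
                   x @ np.kron(np.kron(W1, W2), W3))
```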
Compilers for DSLs typically rely on domain-specific constructs to capture the intrinsic parallelism and locality of the application. Using such an approach, DSL compilers such as Halide [55] for image processing show impressive results. Halide's inputs are images defined on an infinite range, while TC sets a fixed size for each dimension using range inference. This is better suited to ML applications, dominated by fixed-size tensors with higher temporal locality than 2-D images; it is also less verbose in the case of reductions and does not carry the syntactic burden of anticipating the declaration of stage names and free variables (Halide needs this as a C++ embedded DSL). OoLaLa [42] takes a similar approach for linear algebra, and TACO [37] and Simit [38] use a notation similar to that of TC but generate sparse matrix code for numerical solvers. Following this trend in the context of deep neural networks, we not only design yet another DSL and compiler but propose a more generic code generation and optimization framework, bringing together decades of research in loop nest optimization and parallelization for high-performance computing. We also design the domain language to cover a variety of existing and emerging machine learning models. Our framework automates a combination of affine transformations involving hierarchical tiling, mapping, shifting, fusion, distribution, and interchange, on either parametric or fully instantiated problems, that are not accessible to Halide [45,55], Latte [63], or XLA's [28] representations of tensor operations. The polyhedral framework is a powerful abstraction for the analysis and transformation of loop nests, and a number of tools and libraries have been developed to realize its benefits [12,14,22,70,77], including production compilers such as GCC (Graphite) and LLVM (Polly). Polyhedral techniques have also been tailored for domain-specific purposes. State-of-the-art examples include the PolyMage [46] DSL for image processing pipelines and the PENCIL approach to the construction of parallelizing compilers for DSLs [5,9]. PolyMage is a clear illustration of the benefits of operating at a high level of abstraction, closer to the mathematics of the domain of interest: While GCC/Graphite and LLVM/Polly struggle to recover affine control and data flow from low-level code, PolyMage natively captures patterns amenable to domain-specific optimization, such as stencil-specific overlapped tiling with or without recomputation, and cache-conscious fusion and tiling heuristics; it also offers a more productive programming experience for end-users. Interestingly, some techniques derived from PolyMage crossed over from polyhedral representations into Halide's automatic scheduler [45]. Back to deep learning frameworks, TVM extends Halide with recurrent (parallel scan) operators, support for ML accelerators, and tight integration with ML frameworks [17]. It also provides autotuning capabilities [18] and shares several engineering goals with TC, such as transparent ML framework integration. Much like PolyMage, TC implements optimizations well suited to the long-distance, non-uniform reuse patterns of deep learning models; these heuristics are not available in general-purpose compilers such as LLVM/Polly, Pluto, or PPCG, or semi-automatic frameworks such as Halide and TVM. None of the aforementioned frameworks offer the complete transparency of TC's end-to-end compilation flow.
TVM involves some level of manual intervention and/or feedback-directed optimization even to produce the most basic GPU implementation, and it guarantees functional correctness for only a subset of the scheduling primitives and tensor operations: e.g., convolutions can only be fused at the expense of introducing redundant computations or involving lower-level transformations that cannot be verified at compilation time. In addition, the balance between analytical objective functions (profitability heuristics) and feedback-directed autotuning is completely different: Halide and TVM auto-schedulers expose all scheduling decisions to the autotuner and infer most performance-related information from execution profiles, while TC's polyhedral flow reduces the autotuning space to a narrow set of optimization options and tile sizes. TC also shares several motivations with Latte [63] and PlaidML [49], including a high-level domain-specific language and an end-to-end flow. TC provides elementwise access that is just as expressive when implementing custom layers, but unlike Latte it is more concise (thanks to type and shape inference), safer regarding static bound checking and graph connectivity, and more flexible by decoupling indexing from representation and layout choices. In addition, our framework implements more complex scheduling and mapping transformations than both Latte and PlaidML, some of which are essential to GPU targets with partitioned memory architectures. Unlike Latte, it is also designed as a JIT compilation library for seamless integration with deep learning frameworks. Unlike PlaidML, it is not limited to high-level patterns and rewrite rules, but captures complex affine transformations resulting from analytical modeling and autotuning. As a consequence, the TC compilation process generally takes more time than PlaidML's, a price to pay for the ability to implement a wider range of optimizations. Like TC, XLA [28] provides automatic shape and size inference, it may operate "in process" as a JIT compilation library, and it integrates into a production deep learning framework (TensorFlow, Caffe2 [29]). XLA shares many motivations with Latte, with a focus on integration and completeness of functionality rather than on the complexity of the optimizations and mapping strategies. Glow [58] is a recent domain-specific, retargetable compiler for PyTorch/Caffe2. It shares many of the motivations and capabilities of XLA, while emphasizing retargetability (CPUs as well as GPUs and ML accelerators from multiple vendors) and the ability to differentiate, optimize, and lower operations and sub-graphs of operations within its own hierarchy of intermediate representations. It can leverage black-box numerical libraries as well as generate custom vector processing kernels relying on LLVM. Our compiler design and algorithmic contributions would naturally fit XLA, Latte, or Glow, except for the following: TC remains independent from a specific computation graph while preserving tight integration with production frameworks; we did not use an embedded DSL approach, keeping C++ as an interface for implementing optimization strategies only, which isolates the user from the complexity and debugging hurdles of embedded DSLs; and we leverage polyhedral techniques to factor out most of the optimization heavy lifting, while XLA, Latte, and Glow resort to operation-specific emitters/lowering, optimization schemas, and heuristics.
Recently, R-Stream·TF [52] was presented as a proof-of-concept adaptation of the R-Stream polyhedral compiler to the automatic optimization of TensorFlow operators. Similarly to our approach, the generated code is wrapped as a custom operator of TensorFlow. The tool takes a computation graph as input and partitions it into sub-graphs amenable to tensor fusion, contraction, and layout optimization. R-Stream·TF also leverages the broadcast semantics of TensorFlow to maximize the operator's polymorphism w.r.t. input tensor dimensions and shapes. This makes R-Stream·TF very aggressive in terms of static memory management and kernel partitioning. We made the more pragmatic choice of leaving most of these decisions to the level of tensor algebra, allowing a domain-specific optimizer or ML expert to rewrite declarative comprehensions into capacity- and layout-optimized ones. However, TC is more ambitious in its domain-specialization of affine scheduling and mapping, aiming for the generation of a single accelerated kernel, with heuristics adapted to the high-dimensional, non-uniform, long-distance reuse patterns of neural networks. The lack of algorithmic detail in the R-Stream·TF paper prevents us from comparing those affine transformation heuristics. CONCLUSION We presented and evaluated the first fully automatic, end-to-end flow, mapping a high-level mathematical language to high-performance accelerated GPU kernels. TC resembles the mathematical notation of a deep neural network and makes it easy to reason about, communicate, and manually alter the computation and its storage/computation trade-offs. Our flow leverages decades of progress in polyhedral compilation to implement the heavy-duty program transformations, analytical modeling of profitable optimizations, and code synthesis. It also implements domain-specific optimizations, code generation, autotuning with a compilation cache, and lightweight integration within Caffe2 and PyTorch. This unique combination differs from alternative proposals relying mainly on autotuning such as TVM [18], or pattern-based transformations such as PlaidML [49]. TC is capable of quickly synthesizing solid accelerated implementations that effectively lift bottlenecks in large training runs. In practice, such bottlenecks slow down ML research significantly, requiring substantial engineering efforts to be mobilized. Our contribution addresses this productivity gap; it brings more expressive power and control into the hands of domain experts, relieving ML frameworks' dependence on highly tuned vendor libraries without compromising performance. TC automates boilerplate optimization that has been replicated across the numerous deep learning frameworks and builds on a generic polyhedral intermediate representation and libraries shared with other domains (image processing, linear algebra) and general-purpose compilers (LLVM/Polly). Future work includes additional model-based domain-specific optimizations, CPU code generation, learning best mapping configurations automatically, automatic differentiation, interaction with the graph-level optimizer, and providing a path to emit a series of calls to a native library or hardware acceleration blocks.
2019-10-16T13:00:36.327Z
2019-10-11T00:00:00.000
{ "year": 2019, "sha1": "065b316adc4589eb43cebca91faae2bbcea3d011", "oa_license": null, "oa_url": "https://dl.acm.org/doi/pdf/10.1145/3355606", "oa_status": "GOLD", "pdf_src": "ACM", "pdf_hash": "065b316adc4589eb43cebca91faae2bbcea3d011", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
220499210
pes2o/s2orc
v3-fos-license
Intra and inter: Alterations in functional brain resting-state networks after peripheral nerve injury Abstract. Introduction: Numerous studies suggest that brain plasticity changes after peripheral nerve injury (PNI), and most studies using functional magnetic resonance imaging have focused on abnormal changes in specific brain regions. However, it is the large-scale interaction of neuronal networks, rather than isolated brain regions, that contributes to functional recovery after PNI. In the present study, we examined the intra- and internetwork alterations between related functional resting-state networks (RSNs) in a sciatic nerve injury rat model. Methods: Ninety-six female rats were divided into a control and a model group. Unilateral sciatic nerve transection and direct anastomosis were performed in the latter group. We used an independent component analysis (ICA) algorithm to observe the changes in RSNs and assessed functional connectivity between different networks using the functional network connectivity (FNC) toolbox. Results: Six RSNs related to PNI were identified, including the basal ganglia network (BGN), sensorimotor network (SMN), salience network (SN), interoceptive network (IN), cerebellar network (CN), and default mode network (DMN). The model group showed significant whole-brain functional connectivity (FC) changes within these RSNs, and four of these RSNs exhibited a conspicuous decrease. The internetwork analysis showed that significantly decreased FNC existed between the BGN and SMN, the BGN and IN, and the BGN and DMN (p < .05, corrected). A significant increase in FNC existed between the DMN and CN and between the CN and SN (p < .05, corrected). Conclusion: The results showed large-scale functional reorganization at the network level after PNI. This evidence provides new insight into the pathophysiological mechanisms of brain plasticity after PNI. Significance Statement Brain plasticity contributes to sensorimotor recovery after peripheral nerve injury (PNI). Neuroimaging studies have often focused on abnormal changes in isolated brain regions but not on the functional reorganization of brain networks. The significance of this work is that it reveals intra- and internetwork alterations between related functional resting-state networks (RSNs) in PNI rat models. Our study suggests that activity declined within brain functional networks in the model group and that significant interactivity alterations existed between these RSNs, verifying the damage to motor-related functional neural circuits. Our results strengthen the theoretical basis for brain plasticity after PNI. Recent accumulating evidence suggests that brain plasticity contributes to the sensorimotor recovery after PNI (Alvarado, Szyf, & Millecamps, 2013; Qiu et al., 2013). Mohanty summarized that brain plasticity after nerve injury consists of two stages (Mohanty, Bhat, & Devi, 2015). The first stage is denervation, in which the cortical area represented by the damaged nerve is invaded by the surrounding area. The second is reinnervation, in which axonal reorientation at the site of PNI and the defined organizational cortical area exhibit ill-defined transformations. Hisham et al. used transcranial magnetic stimulation (TMS) to evaluate four brachial plexus injured patients who underwent intercosto-musculocutaneous nerve transfer (Hisham and Hollis, 2018). TMS showed that the original biceps cortical area regained control of the biceps muscle via the intercostal neurons after a period of time.
Functional magnetic resonance imaging (fMRI) is an important neuroimaging technique that is used for dynamic observations of spontaneous activity in the brain (Biswal, 2012; Logothetis, 2008). However, most neuroimaging studies have focused on abnormal changes in one or several specific brain regions. For example, Fornander, Nyman, Hansson, Brismar, and Engström (2016) found that activity in the ipsilateral primary sensory cortex increased significantly during tactile stimulation of the median nerve of the injured hand of patients. Onishi et al. (2018) reported that significant signal changes occurred in the amygdala, cingulate cortex, basal ganglia, and insular cortex in rats one week after sciatic nerve injury. Our group found that the activation mode of the supplementary motor area played a key role in brain remodeling and clinical functional recovery after PNI (Lu et al., 2016). Electroacupuncture intervention in rats after PNI showed different remodeling patterns in the somatosensory cortex from the model group (Wu, Yechen, Hua, Shujie, & Jianguang, 2018). The amplitudes of low-frequency fluctuation were significantly increased in the ipsilateral insula of facial synkinesis patients. 18F-FDG uptake was significantly increased in the contralateral anterodorsal hippocampus and ipsilateral dorsolateral thalamus after right brachial plexus avulsion (Shen et al., 2019). However, the hypothesis of these studies was that isolated brain regions contributed to brain plasticity. Actually, behavior-related brain activation patterns depend on the integration of brain networks, which consist of several homogenous brain regions. Similarly, the modifications in the central nervous system after PNI depend on the large-scale interaction of neuronal networks (Bhat, Indira, Bharti, & Panda, 2017; Feng et al., 2015), such as the DMN, the executive control network (ECN), and the salience network (SN). Independent component analysis (ICA) is commonly used for identifying latent networks and describing their characteristic spatial patterns and temporal dynamics in most situations (Fox & Raichle, 2007; Wang & Guo, 2019). It decomposes fMRI signals into potential spatial source signals, which correspond to various functional networks. These specific and highly reliable networks are called RSNs. ICA supplies an approach to investigate connectivity in the whole brain. It has been successfully applied to assess the synchronous fluctuations of intrinsic activity in the brain, which are important for communication and collaboration (Feng et al., 2015; Lin, Wu, Liu, Lv, & Yang, 2016). In this study, we used ICA to examine the intra- and internetwork alterations between related functional RSNs in sciatic nerve injury rat models, a typical and complete peripheral nerve injury model (Andersson, Oradd, Sultan, & Novikov, 2018). The results strengthen the theoretical basis for brain plasticity after peripheral nerve injury. Animals Ninety-six healthy adult female clean-grade Sprague Dawley (SD) rats were involved in our study. All rats were aged 6-8 weeks, weighed 180-240 g, and were provided by Shanghai Slack Laboratory Animal Limited Liability Company (Shanghai, China). There is no evidence confirming any significant difference between sexes in PNI and regeneration; according to a wide review of the available literature and our previous studies on peripheral nerve injury in rats, female rats were preferred (Afshari et al., 2018; Vergara, Romano, Stanca, La Pesa, & Maffia, 2018).
Using the same sex also eliminated possible differences in results due to sex. The rats were kept in a laboratory environment with a 12/12 hr light-dark cycle at 20-22°C and provided with adequate food and water. The rats were kept for 7 days for acclimation to the new situation before any experiments were started. The Animal Ethics Committee of Shanghai University of Traditional Chinese Medicine approved the study. All procedures and protocols were performed in accordance with the Guide for the Care and Use of Laboratory Animals described by the U.S. National Institutes of Health. PNI procedure A total of 96 rats were randomly assigned to two groups: controls (n = 24) and models (n = 72). In the PNI model group, sciatic nerve transection and direct anastomosis were applied to the right hindlimb of each rat. First, the rats were anesthetized with an intraperitoneal injection of sodium pentobarbital (40 mg/kg) and placed on a clean operating table in a prone position. The hair was shaved 5 mm below the sciatic tubercle of the right hip, and an incision was made along the route of the sciatic nerve. Under a 10-fold microscope, the right sciatic nerve trunk was exposed and separated from the gluteal muscle. It was transected with a blade 1 cm below the lower edge of the piriformis muscle. The sciatic nerve was repaired via epineurium suture with 11-0 single-strand nylon wires. The control group received no treatment. MRI image acquisition Functional magnetic resonance imaging scans of the brain were performed 4 weeks after surgery using a Bruker 7T magnetic resonance system (Bruker Corporation) with the same coil for both groups. After 2.5% isoflurane-induced anesthesia, rats were fixed on the scanner and maintained under continuous 1.5%-2% isoflurane anesthesia with ventilator support and respiratory monitoring. fMRI data preprocessing Data preprocessing procedures were performed using the Statistical Parametric Mapping 12 (SPM 12) toolbox (http://www.fil.ion.ucl.ac.uk/spm/) based on the MATLAB 2014a platform. First, we removed the first five time points from the data and expanded the images by 10 × 10 × 10 times to match the size of the human brain, which made it possible to apply processing algorithms originally designed for human data. The amplification procedure only changed the dimension descriptor fields in the file header, without interpolation. Second, non-brain tissue was stripped manually before further preprocessing. fMRI images were corrected for the temporal bias of slice acquisition using the slice-timing procedure. The images were spatially realigned with rigid-body transformations to correct the misplacement of voxels caused by in-scanner head motion. The standard brain template in Schwarz's study was adopted to achieve normalization to standard space, and the voxel size of the normalized images was 2.06 × 2.06 × 2 mm (Schwarz et al., 2006). Subsequently, the images were smoothed with a full width at half maximum of four times the voxel size (8.24 × 8.24 × 8 mm). Further preprocessing included temporal band-pass filtering (0.01-0.1 Hz) to decrease low-frequency drift. The intranetwork alteration of RSNs The preprocessed data of the two groups were combined into one group. Group spatial ICA was performed to analyze the combined data using the GIFT software (http://trendscenter.org/software/gift/) (Calhoun, Adali, Pearlson, & Pekar, 2001).
The procedures included three steps: dimension reduction by principal component analysis (PCA), ICA decomposition, and back reconstruction of individual-level components (Bell & Sejnowski, 1995). A two-level PCA was performed to reduce the dimensionality of the data. The information-maximization (infomax) algorithm was used for independent component estimation. The IC number was identified to be 20 according to the minimum description length criteria and previous studies (Calhoun et al., 2001; Hutchison, Mirsattari, Jones, Gati, & Leung, 2010). Then, the data were decomposed into 20 components using the infomax algorithm. This analysis was repeated 100 times to achieve robust and accurate results. Next, the ICs at the group level (both spatial maps and time courses) were back reconstructed for each subject, and the ICA-determined networks were converted to Z-maps before entering group statistics to obtain voxel values comparable across subjects. Six meaningful RSNs were identified as anatomically and functionally classical RSNs via visual inspection (Bajic, Craig, Mongerson, Borsook, & Becerra, 2017). The individual subject spatial maps for each selected RSN were converted to Z-values (Song et al., 2011). For each selected RSN, a voxel-wise one-sample t test was employed to determine the group spatial map across all subjects (p < .05, FDR [false discovery rate] corrected), and the statistically thresholded t-value map was used to define brain regions that belonged to the RSN. Differences between the control and model groups were then examined using a voxel-wise two-sample t test. The internetwork analysis of RSNs (FNC analysis) According to the ICA algorithm, the time courses of cortical areas within one IC are synchronous, and the time courses of each RSN were extracted and used to calculate the temporal correlation. Although the components were spatially independent, significant temporal correlations could exist between them. As an extension of the ICA, the functional network connectivity (FNC) toolbox (http://mialab.mrn.org/software/#fnc) was employed to examine the temporal relationships between brain networks. For the significant correlation combinations, the average time lags, which represent the amount of delay between the time courses of the two correlated RSNs, were calculated for each group. The maximum time lag was set to 6 s (a minimal sketch of this lagged-correlation computation appears at the end of this article). A one-sample t test (p < .05, FDR corrected) for each group and a two-sample t test (p < .05, FDR corrected) for group comparisons were performed on all possible combinations. RESULTS All the rats survived after sciatic nerve transection and repair surgery. Reddening and swelling around the wound were observed in only three rats within the first post-PNI week, and the wound completely recovered in all rats without obvious infection. No abnormal images were found after data acquisition and preprocessing. The intranetwork alterations in identified RSNs in PNI rats The results of two-sample t tests between the PNI and control groups are shown in Figure 2 and Table 1. The significance level of the t-value in the BGN, CN, IN, DMN, and SN was set at p < .05 (FDR corrected), and the SMN did not pass FDR correction. We found significant whole-brain FC changes in these RSNs, but four of these networks exhibited a conspicuous decrease in the model group. In the BGN, the activities of the bilateral caudate, putamen, and corpus callosum and the left cingulate cortex were significantly decreased.
In the DMN, the activities of the right cingulate cortex, motor cortex, caudate, and putamen were decreased. In the SN, the activity of the right insular cortex was significantly decreased, and that of the bilateral cingulate cortex was increased. In the IN, the activity of the left somatosensory cortex was significantly decreased, but that of the right somatosensory cortex was significantly increased. The interactive alterations between the RSNs The results of the FNC analyses between RSNs are shown in Figure 3. The significance level for the difference between the correlations of two RSNs was set at p < .05 (corrected). DISCUSSION The present study investigated the intranetwork and interactive alterations of brain networks in PNI rats based on rs-fMRI data using ICA and FNC algorithms. Our study confirmed decreased activity within RSNs in the PNI group and changed functional connectivity between RSNs. These alterations revealed that cortical remodeling was extensive within and across related RSNs after PNI. Although most previous studies focused on plastic changes in isolated brain regions, recent investigations have examined the relationship between behavioral recovery and homogenous brain RSNs, such as the SMN (Sammons & Keck, 2015; Taylor, Anastakis, & Davis, 2009). However, these studies primarily used FNC analysis to investigate cognitive dysfunction, such as dementia, attention deficit hyperactivity disorder, and schizophrenia (Fu et al., 2019; de Lacy & Calhoun, 2019; White, Joseph, Francis, & Liddle, 2010). Few studies have focused on whole-brain analyses of static FNC after PNI. Each RSN has a complex internal anatomical structure with a special function. The SMN is composed of the bilateral sensory, motor, and visual cortexes. The function of the SMN includes the integration of motor, sensory, emotional, and executive control. The IN is similar to the SMN in containing sensory cortices, and the two networks were classified as sensory and interoceptive networks, respectively. The IN handles information on physiological conditions in the body (Lino, Gautam, Pei-Ching, James, & David, 2011). The basal ganglia consist of the striatum, globus pallidus, substantia nigra, and subthalamic nucleus (Plenz & Kital, 1999). The striatum receives input from the sensorimotor cortex and cerebellum, and the pallidum sends inhibitory output to motor-related areas (Bernhard, 1997). (Figure 2: Results of the resting-state FC analysis between the control and model groups, showing altered FC in the basal ganglia network, sensorimotor network, cerebellar network, salience network, interoceptive network and default mode network; hot colors denote higher, and winter colors lower, functional activity in the model group compared with the control group.) Therefore, the BGN is inextricably linked to spontaneous movement. The CN is primarily composed of the cerebellum and other brainstem areas, such as the periaqueductal gray and raphe nuclei. The CN plays a critical role in sensory-motor integration, arousal, and protective processing (Berridge, 2008; Habas et al., 2009). The DMN is composed of prefrontal cortical regions, the cingulate cortex, and the retrosplenial cortex (Sierakowiak et al., 2015). The SN contains the insular and cingulate cortices (Valerie et al., 2012). The current study revealed reduced intranetwork activities in the model group in the SMN, BGN, IN, DMN, and CN. These results are consistent with motor and sensory dysfunction after peripheral nerve injury (Michal et al., 2011).
These results are also similar to the blood oxygen level-dependent signal findings in human and animal models in previous studies (Onishi et al., 2018; Wu et al., 2018). The extensive decline of activity within brain functional networks may have a direct correlation with functional loss after PNI. Notably, the BGN played a special role in our experiment. The FNC results showed that the connectivity between the BGN and SMN, the BGN and IN, and the BGN and DMN decreased significantly in the model group. It is generally accepted that there is a neural loop among the thalamus, cortex, and basal ganglia that contributes to motor execution, sensory integration, and sensory-motor feedback (Cole, Sudhir, & Walter, 2010), and this loop has great significance in the generation of autonomous motion. Habas et al. found that the cerebellum contributes to executive control, salience detection, memory, and self-reflection; in particular, it is also an important part of the ECN (Habas et al., 2009). These results may explain why the FC between the CN and DMN and between the CN and SN was increased. Coincidentally, the FC between the DMN and ECN was also significantly increased in patients with brachial plexus avulsion (Feng et al., 2015). These results may suggest that the cooperation between the cerebellum in the ECN and the DMN is more intimate during cognitive processes after PNI. We will set up an intervention group in subsequent experiments to make the study more convincing. A 7T magnetic resonance system was used to scan the brain, whereas a system with a higher scan resolution, such as an 11.2T system, would be needed to study functional connectivity in PNI further. Few researchers have focused on ICA and FNC in rodents, and the standards for RSNs are not perfect; therefore, we could not identify components with a specific function. In further research, we will examine the dynamic changes in the brain networks of patients after peripheral nerve injury. The current study clarified the large-scale functional reorganization at the network level, and whole-brain activities were significantly decreased after PNI. Alterations in connectivity between RSNs verified the damage to motor-related functional neural circuits. This evidence strengthens the theoretical basis for brain plasticity after peripheral nerve injury. CONFLICT OF INTEREST The authors declare no conflict of interest. None of the authors have a commercial interest in the material presented in this work. All authors had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. DATA AVAILABILITY STATEMENT We will share the data at the end of our project. De-identified data from this study will be shared upon reasonable request from a qualified investigator.
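As forward-referenced in the methods above, the core of the FNC computation reduces to finding the maximal Pearson correlation between pairs of RSN time courses over lags of up to 6 s. The Python sketch below illustrates that computation on simulated time courses; it is a simplified illustration, not the FNC toolbox implementation, and the TR and signals are made up.

```python
import numpy as np

def max_lagged_correlation(tc_a, tc_b, tr, max_lag_s=6.0):
    """Pearson correlation between two RSN time courses, maximized over
    integer-TR lags within +/- max_lag_s seconds."""
    max_shift = int(max_lag_s / tr)
    n = min(len(tc_a), len(tc_b))
    best_r, best_lag = 0.0, 0
    for lag in range(-max_shift, max_shift + 1):
        if lag >= 0:
            a, b = tc_a[lag:n], tc_b[:n - lag]
        else:
            a, b = tc_a[:n + lag], tc_b[-lag:n]
        r = np.corrcoef(a, b)[0, 1]
        if abs(r) > abs(best_r):
            best_r, best_lag = r, lag
    return best_r, best_lag * tr

# Simulated example: the second time course lags the first by 2 s (TR = 1 s).
rng = np.random.default_rng(1)
s = rng.standard_normal(200)
tc1 = s + 0.3 * rng.standard_normal(200)
tc2 = np.roll(s, 2) + 0.3 * rng.standard_normal(200)
r, lag_s = max_lagged_correlation(tc1, tc2, tr=1.0)
```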
2020-07-14T13:01:27.630Z
2020-07-12T00:00:00.000
{ "year": 2020, "sha1": "4d3f615f3f29e5fa6ba02757ee1321628fc9db20", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/brb3.1747", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fdba9a4c14cd84ec0aa92ee727dfdcb558d644b6", "s2fieldsofstudy": [ "Biology", "Psychology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
233256326
pes2o/s2orc
v3-fos-license
Cytotoxic T-lymphocyte infiltration and chemokine predict long-term patient survival independently of tumor mutational burden in triple-negative breast cancer Background: Cytotoxic T-lymphocyte (CTL) infiltration into the tumor is a positive prognostic factor in breast cancer. High tumor mutational burden (TMB) is also considered a predictor of tumor immunogenicity and response to immunotherapy. However, it is unclear whether the infiltration of functional CTLs simply reflects the TMB or carries independent prognostic value. Methods: Utilizing The Cancer Genome Atlas (TCGA) breast cancer cohort, we established the Functional Hotness Score (FHS). The associations of FHS with breast cancer patient prognosis as well as distinct immunity markers were analyzed in a total of 3011 breast cancer patients using TCGA, METABRIC and the metastatic breast cancer (MBC) cohort GSE110590. Results: We established FHS, based on the CD8A, GZMB and CXCL10 gene expression levels of bulk tumors, which delivered the best prognostic value among the gene combinations tested. Breast cancer patients with high-FHS tumors showed significantly better survival. FHS was lower in the MBCs. Triple-negative breast cancer (TNBC) showed the highest FHS among subtypes. FHS predicted patient survival in hormone receptor (HR)-negative breast cancer, especially in TNBC, but not in HR-positive breast cancer. FHS predicted patient prognosis independently in TNBC. The high-FHS TNBCs showed not only higher CD8+ T cell infiltration, but also enhanced broader type-1 anti-cancer immunity. The patients with high-FHS tumors showed better prognosis not only in high-TMB tumors but also in low-TMB TNBCs. The combination of high TMB with high FHS identified a unique subset of patients who do not recur over time in TNBC. Conclusion: TNBCs with high FHS based on the expression levels of CD8A, GZMB and CXCL10 showed improved prognosis with enhanced anti-cancer immunity regardless of TMB. FHS constitutes an independent prognostic marker of survival, particularly robustly when combined with TMB in TNBC. As a result, TNBC is more sensitive to immune checkpoint inhibitors (ICIs), such as PD1/PD-L1 blockade, which enhance CTL survival and cytolytic activity, 6,7 resulting in their Food and Drug Administration approval in TNBC. 8 However, PD-L1 blockade is effective in only a small portion of TNBC patients, and the predictive value of PD-L1 immunohistochemistry (IHC) is very limited, with some of the PD-L1-negative patients still responding to PD-L1 blockade. 9 Similarly, the predictive value of individual CTL markers on IHC is not reliable, being limited by their variation and spatial heterogeneity within individual tumors. 10 These considerations highlight the importance of identifying improved markers predicting the ability of the immune system to control breast cancer progression and responsiveness to treatments. CTLs are identified by the CD8 surface marker, which is encoded by the CD8A gene. 11 Granzyme B (GZMB) is a serine protease that is secreted by activated CTLs and natural killer (NK) cells to induce apoptosis of the target cells. 12,13 Chemokines, such as CXCL10 and CCL5, are key to the selective attraction of activated (effector, effector-memory and memory) CTLs into tumors, as shown in multiple cancers. 12,13 In addition to mutation-dependent neoantigens, CTLs can also recognize elevated levels of self-antigens, 14-16 raising the possibility that their influx may also be important in the control of weakly immunogenic cancers with limited TMB.
However, it remains unknown whether tumor-infiltrating functional CTL levels correlate with improved patient survival independently of TMB. To investigate this, we developed the Functional Hotness Score (FHS), combining the gene expression of markers and attractants of activated CTLs. Study design and patient cohorts A total of 3011 breast cancer patients were analyzed. We used the breast cancer cohort from The Cancer Genome Atlas (TCGA) 17 as a testing cohort to establish FHS and to characterize the high-FHS cohort. As a validation cohort, we used the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) cohort. 18,19 There are 1091 and 1904 primary breast cancer tissues with gene expression in TCGA and METABRIC, respectively, and patient demographics are shown in Table 1 (TCGA provisional). CIBERSORT The infiltrating immune cell fractions in tumors were estimated by the CIBERSORT algorithm. 25 The calculated data were downloaded through the TCIA website (https://tcia.at/home). 26 The TNBC patients were divided into high and low CD8+ T cell groups using the same percentages as the high- and low-FHS groups. Gene set enrichment analysis Gene set enrichment analysis (GSEA) was conducted comparing high- and low-FHS TNBCs among 50 hallmark gene sets 27 using software provided by the Broad Institute (http://software.broadinstitute.org/gsea/index.jsp) as previously described. 23,28,29 A false discovery rate <0.01 was considered significant. Statistical analysis Score and TMB differences between two groups were analyzed using Student's t-test, and one-way analysis of variance was used for the comparison of more than two groups. Pearson correlations were calculated based on the expression levels of the genes and plotted. The survival analyses were conducted by Kaplan-Meier curves with the log-rank test, and univariate and multivariate analyses were conducted with a Cox regression model. The data on infiltrating immune cell fractions were compared by the Wilcoxon test. All statistical analyses were performed using R software (http://www.r-project.org/) and Bioconductor (http://bioconductor.org/). Development of the FHS The FHS was developed using the combination of CD8A, GZMB and chemokine gene expression that delivers the best hazard ratio for overall survival (OS) in TCGA. The hazard ratio of CCL5 is 0.677, which was much lower than that of CXCL10 (0.890). However, CCL5 expression was highly correlated with CD8A and GZMB (R2 = 0.840 and R2 = 0.787, respectively) (Figure 1). Decreased FHS in MBC Since MBC is known to be particularly immunosuppressed, we investigated the association of FHS with MBC. Among the primary tumors, the stage IV tumors that have metastasis showed a trend towards lower FHS than stage I/II/III tumors in TCGA, although it did not reach statistical significance (p = 0.082) (Figure 3). Highest FHS in TNBC among breast cancer subtypes Since TNBC is the most immunogenic subtype, we hypothesized that FHS is higher in TNBC than in other subtypes. As expected, TNBC showed the highest FHS among all subtypes in TCGA. Anti-cancer immune signature in high-FHS TNBC Cell composition fraction estimation analysis revealed that the high-FHS TNBCs were significantly associated with higher infiltration of anti-cancer immune cells, such as CD8+ T cells (p < 0.001) and activated memory CD4+ T cells. These findings indicate that FHS reflects not only CD8+ T cell infiltration, but also a broader type-1 anti-cancer immunity.
Similar findings were shown in the METABRIC cohort; however, this is limited by mutation data covering only representative genes rather than the whole genome (Figure S2). The high-FHS group showed better survival only in the low-TMB group (p = 0.003) and not in the high-TMB group, most likely because the cohort includes targeted DNA mutation data for only 40 genes; thus, there were only 30 patients in total in the high-TMB group (Figure S2). These findings suggest that high FHS associates with better prognosis in TNBC regardless of TMB. Discussion In this study, we established FHS, combining the CD8A, GZMB and CXCL10 gene expression levels of a bulk tumor of TNBC, to identify "hot tumors" with improved prognosis despite the high-risk subtype. The FHS has stronger prognostic value than each individual gene. The FHS is lower in stage IV than in earlier-stage tumors, and lower in metastatic tumors than in primary tumors from the same patients. TNBC has the highest FHS among breast cancer subtypes, and high FHS predicted prolonged survival in TNBCs. High FHS is associated with not only CD8+ T cell infiltration, but also a broader type-1 anti-cancer immunity in TNBCs. Importantly, the prognostic value of the FHS is independent of the TMB. In fact, FHS used jointly with the TMB index allowed us to identify a unique subset of TNBC patients with particularly good prognosis. FHS combines the expression levels of the CTL lineage marker CD8A 12 and GZMB, an enzyme secreted by CTLs and NK cells to induce apoptosis of target cells. 12,13 In addition, it includes CXCL10. CXCL10 is a chemokine which attracts not only activated CTLs (effector, effector-memory and central memory, but not naïve or suppressed cells), but also multiple immune cells, including NK cells, dendritic cells, and macrophages, towards cancer lesions. 32 Consequently, CXCL10 is involved in modulating both innate and adaptive immunity, but selectively their desirable effector, rather than suppressive, components. 33 Indeed, our results demonstrated that high-FHS tumors showed not only enhanced markers of cellular immunity, such as higher CD8+ T cell infiltration, but also indications of enhanced humoral immunity, judging by activated memory CD4+ T cells. Further, high-FHS tumors are associated with M1 macrophages, which can produce CXCL10 and attract Th1 CD4+ helper T cells. This is in agreement with the commonly accepted notion that tumor-infiltrating immune cells are highly correlated with each other. 11 CCL5 is also a chemokine which attracts effector T cells, 34 but it is also produced by CTLs themselves in tumor tissues. 12 Accordingly, CCL5 alone predicts patient prognosis better than CXCL10 alone, but, since its expression is tightly correlated with CD8A and GZMB, it does not provide additional prognostic value as part of the composite FHS. Calculation of FHS requires only three genes in bulk, unseparated tumor tissue, which can be measured by quantitative polymerase chain reaction (qPCR). It is more time, cost and labor efficient than IHC and minimizes evaluation bias. It addresses complementary aspects of CTLs (numbers/expansion, effector function and migratory function) and differentiates between patients with good and poor prognosis within the same histological tumor cohorts. The prognostic value of FHS will be confirmed by qPCR in our upcoming prospective study. The number of TILs is a known prognostic biomarker in some cancers, including breast cancer and melanoma. 1-3,35
The relationship between TILs and PD-1/PD-L1 expression has been reported in multiple types of cancer. PD-1 expression in lymphocytes correlates with PD-L1 expression in cancer cells in breast cancer and melanoma. 36,37 The number of TILs correlates with PD-1 expression in lymphocytes and PD-L1 expression in cancer cells in breast cancer and melanoma, 36,37 whereas it also inversely correlates with plasma PD-1 levels in melanoma. 35 The combination of PD-L1 expression in cancer cells and the number of infiltrating CTLs predicts patient prognosis in gastric cancer. 38 PD-1-positive T cell characteristics, including cytokine and chemokine production, are different in the tumor (TIL) and in the peripheral blood in lung cancer. 39 Positive staining of PD-L1 in cancer cells and PD-1 in lymphocytes is associated with aggressive cancer biology; on the other hand, it is also associated with increased pathological complete response to neoadjuvant chemotherapy in breast cancer. 36 ICI is only effective in a small portion of TNBC patients. Although PD-1/PD-L1 expression can be utilized as a prognostic biomarker and predicts cancer aggressiveness in association with TIL levels as mentioned above, the predictive value of PD-L1 expression by IHC is limited because some PD-L1-negative patients still respond to PD-1 blockade. 9 Similarly, the predictive value of individual CTL markers is not reliable, partially due to huge variation and spatial heterogeneity within individual tumors. 10 We believe that FHS may provide a way to overcome these challenges and identify patients who respond to ICI treatment. Therefore, our follow-up study will evaluate the association of FHS with the response to immunotherapy. TMB has been proposed as a key factor in the generation of immunogenic tumor-associated antigenic epitopes, which act as primary targets for CTLs in many types of tumors. 40 Indeed, it has been shown that TMB and CD8+ T cell infiltration are correlated with each other in several types of cancers, including renal cell carcinoma, pancreatic, thyroid, skin and uterine cancers. 41 TMB is also known to be associated with higher sensitivity to ICIs in breast cancer, which is thought to be due to an enhanced anti-cancer immune response. 42 Thus, it was of interest to investigate the relationship of our FHS with TMB. Unexpectedly, we found that the TMB, although it is a prognostic biomarker by itself, does not determine the FHS, which can be used independently in a complementary fashion to predict prognosis in both the high- and low-TMB patient cohorts. What was particularly striking was that even the low-TMB tumors with high FHS showed improved prognosis, and the combination of high FHS with high TMB showed optimal prognostic value, identifying a unique subset of TNBC patients with uniformly long survival. This study has limitations. Since publicly available cohorts were analyzed using a bioinformatical approach alone, this is a retrospective study with its known biases. Future prospective studies to investigate the utility of qPCR to easily measure FHS and to investigate the association between FHS and the outcome of immunotherapy are necessary to confirm the findings. In summary, we demonstrated that high FHS based on the gene expression levels of CD8A, GZMB and CXCL10 predicts excellent long-term TNBC patient survival with enhanced anti-cancer immunity regardless of TMB. FHS constitutes an independent prognostic marker of survival that is particularly robust when combined with TMB in TNBCs.
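For illustration, the FHS construction described above combines three genes measured in bulk tumor tissue. The text does not spell out the exact combination rule, so the Python sketch below assumes a simple average of z-scored log2 expression values followed by a median split; both the rule and the cutoff are assumptions, not the authors' published formula.

```python
import numpy as np

def functional_hotness_score(expr):
    """expr: (n_samples, 3) array of CD8A, GZMB and CXCL10 expression.
    Assumed rule: mean of z-scored log2(x + 1) values per sample; the
    actual FHS formula is not given in the text above."""
    logged = np.log2(np.asarray(expr, dtype=float) + 1.0)
    z = (logged - logged.mean(axis=0)) / logged.std(axis=0)
    return z.mean(axis=1)

# Median split into high- and low-FHS groups (the cutoff is also assumed);
# survival of the two groups would then be compared with a log-rank test.
expr = np.random.default_rng(2).gamma(shape=2.0, scale=50.0, size=(300, 3))
fhs = functional_hotness_score(expr)
high_fhs = fhs >= np.median(fhs)
```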
2021-04-17T05:22:17.713Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "b2f94c276dbc0e63fe81332767ef0db50a85361f", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/17588359211006680", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b2f94c276dbc0e63fe81332767ef0db50a85361f", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
249129055
pes2o/s2orc
v3-fos-license
Microbiota Transplantation in an Antibiotic-Induced Bacterial Depletion Mouse Model: Reproducible Establishment, Analysis, and Application The fecal microbiota transplantation (FMT) technique is indispensable when exploring the pathogenesis and potential treatments for microbiota-related diseases. For FMT clinical treatments, there are already systematic guidelines for donor selection, fecal bacterial separation, FMT frequency, and infusion methods. However, only a few studies have demonstrated the use of standardized FMT procedures for animal models used in theoretical research, creating difficulties for many new researchers in this field. In the present paper, we provide a brief overview of FMT and discuss its contribution to the current understanding of disease mechanisms that relate to microbiota. This protocol can be used to generate a commonly used FMT mouse model and provides a literature reference of customizable steps. Introduction The animal body is inherently metagenomic, not only in relation to the eukaryotic genome that makes up the body, but also the genomes of the microbiomes colonizing the surface of the body, which include bacteria, archaea, fungi, protozoa, and viruses [1]. A growing body of research has shown that commensal microbial communities interact with almost all physiological aspects of the host in health and disease [2-7]. The microbiome within the gut is the most widely studied because its microbial biomass exceeds that of other bodily habitats by a large order of magnitude, and it is separated from the host only by a single layer of epithelial cells [5]. Numerous studies have shown that gut microbiota dysbiosis contributes to the onset of many diseases, from gastrointestinal and metabolic disorders to immune and neuropsychiatric diseases [8-11]. In this context, fecal microbiota transplantation (FMT), an important means of regulating the composition and functions of the gut microbiota [12], is often used in studies of the gut microbiota. Clinically, FMT, also known as fecal transplantation, is a procedure in which stool from a healthy donor is placed into another patient's intestine [13]. However, in experimental studies, its definition is broader. Common research modes include transplanting wild-type (WT) mouse or healthy human microbiota into disease-model recipients [14-16], transplanting disease-model microbiota into recipients [17-19], or even transplanting a customized combination of microbiota (selective microbiota transplantation, SMT) to achieve specific experimental purposes [20]. Some studies have also applied a combination of several modes, for example, the inclusion of a model group that acts as both the donor and recipient to control for handling and allow for the analysis of confounding factors that may affect the experimental groups [21]. In clinical situations, there are already systematic FMT treatment guidelines in place that are constantly being updated and improved [22-24]. There have been many reports on donor selection, the separation of fecal bacteria, the frequency of FMT, and infusion procedures [13,22]. Nevertheless, only a few studies have been designed to explore the methodology of FMT, and even fewer have provided standardized FMT procedures for use in animal models used in research [25], creating difficulties for many beginners in microbiota research.
Therefore, the aim of the present review is to provide a simple and repeatable FMT protocol, as well as a summary of the literature references for each adaptable step to aid in customizing microbiota. Moreover, necessary analyses related to FMT, as well as common patterns among studies that have developed this technique to investigate the disease mechanisms related to the intestinal ecosystem, are discussed. The Development and Overview of the Procedure The history of using stool from healthy people to treat human diseases dates back to the fourth century AD [26]. Hong Ge, a Chinese doctor during the Dong Jin Dynasty (AD 300-400), recorded the treatment of "Wen Bing" (febrile disease) and "Shang Han" (typhoid fever) by drinking a fecal suspension or fermented feces [27,28]. Later, in the Compendium of Materia Medica, which is the most comprehensive record of resolving diseases in traditional medicine, Shizhen Li described more than 20 indications that can be effectively treated with fecal suspension or fermented feces [29]. In 1958, Eiseman et al. successfully treated patients with severe pseudomembranous enteritis using a fecal suspension, which was the first recorded instance of such a treatment in the English literature [30]. In 2011, the method was officially termed fecal microbiota transplantation (FMT) [31], and, in 2013, it was included in the medical guidelines for the treatment of refractory Clostridioides difficile infection (CDI), which represented a milestone in the history of FMT application [32]. The most successful application of FMT, to date, has been in the treatment of refractory CDI [22,33], and there is growing evidence that FMT also has great potential for interventions in other enteric-related diseases and in neurological conditions [14,20,34-36]. FMT, as a strategy to modulate gut microbes, is not only a breakthrough medical technique, but also a breakthrough in technological and theoretical research. In theoretical studies, interest in FMT has not been limited to its use as a therapeutic method (in which the fecal microflora from healthy donors is transmitted to patients to restore a healthy microbial composition to the gut), because it has also involved the transfer of bacteria from one individual to another to induce a desired physiologic effect. Potential microbial material from donors is not limited to feces, and may also comprise intestinal contents or specially modified microbiota (such as SMT) [19,20]. Researchers have often applied FMT to studies of mouse models, which, because of their genetic proximity to humans, our ability to genetically manipulate their genomes, and the availability of many tools, mutants, and inbred strains, have become the mammalian model of choice [37,38]. Therefore, recipients are not limited to germ-free mice and may also be genetically engineered mouse models [15,39]. Of all the available research methods, the most basic is the transplantation of the microbiota of target mouse donors into antibiotic-induced bacterial-depletion mouse recipients, and other research models can be modified on this basis. In Sections 3 and 4 of this review, we provide a brief overview of the selected models of FMT (mouse donors; antibiotic-induced bacterial-depletion mouse recipients), along with the literature references and considerations for each step. In the following sections, we provide some of the necessary analyses related to FMT.
Finally, we discuss the contribution that FMT has made to the current understanding of disease mechanisms related to gut microbiota. Donors (1) All the donors are raised in separate cages. Place all the reagents and buffers on ice. (4) Add an appropriate amount of pre-cooled sterile PBS (V1) to produce a feces concentration of 50-100 mg/mL; reach as close to the upper limit as possible (a worked example of this preparation arithmetic appears at the end of this article). Recipients (1) Administer to the recipients the broad-spectrum antibiotic mix instead of drinking water for at least 14 days (see Note 8), and allow them to "rest" for 1-2 days (see Note 9). (2) Before intragastric administration, fast all recipients for 1 day, allowing free access to water (see Note 10). (3) Intragastrically administer 200-300 µL (see Note 6) of transplantation solution to each recipient once a day for 5 consecutive days (Table 2) (see Note 11). Notes (1) When the concentration of glycerol in the sample storage buffer is 50%, add the sample storage buffer to the resuspended solution at a ratio of 1:1 to prepare the transplantation solution and obtain a final glycerol concentration of 25%. The final glycerol concentration can be adjusted within the range of 10-30% according to different experimental purposes and conditions. (2) Use a placebo as a control for the transplantation solution. The final glycerol concentration in the placebo should be equal to that of the transplantation solution. (3) Metronidazole can be added to the broad-spectrum antibiotic mix, but it must be used with caution. This can achieve better bacterial depletion results, but may cause weight loss in mice [42,43]. You can try to introduce metronidazole into the solution gradually [42]. (4) The procedure is suitable for commonly used 6-to-8-week-old mice (25-30 g). If using other target animals, refer to Tables 1 and 2. (5) Compared to a tapered-bottom tube, using a round-bottom tube can achieve better homogenization effects. (6) A 1 mL syringe is suitable for commonly used 6-to-8-week-old mice (25-30 g). If using other target animals, refer to Tables 1 and 2. To determine the optimum conditions for a particular model, a pilot experiment is required. (7) Generally, each mouse can provide 50-100 mg of fresh feces (6-to-8-week-old mice, 25-30 g). However, if the donors are enteritis-mouse models, there will be less feces. Therefore, stool from mice of matched weight and sex can be mixed, depending on the experimental design. The number of animals used can be customized to the experiment. Fresh feces should be used for transplantation within 6 h [22,36,44], as oxygen exposure degrades the fecal bacterial communities [45]. If frozen feces are required for subsequent use, aim to complete the freezing operation within 15 min [46]. (8) The duration of the broad-spectrum antibiotic mix treatment of mice can be customized, but, generally, it lasts for at least 14 days. (9) Before FMT, a "rest" period of 12-48 h is required [22,36,42,44]. (10) Fasting should be started at least 4-8 h before gavage to prevent the gastric contents from hindering gavage injection and affecting the drug absorption rate. (11) Transplanting into recipients once a day for 5 consecutive days is suitable for commonly used 6-to-8-week-old mice (25-30 g). To determine the optimum conditions for a particular model, a pilot experiment is required. For the solutions to other common problems, see Table 3. Table 3. Troubleshooting table.
Table 3. Troubleshooting table.

Problem: High incidence of animal death
Possible reason: Unskilled intragastric operation, resulting in excessive stress
Solution: Gavage should be painless. If the animal persistently struggles, has difficulty breathing, or resists needle insertion, immediately stop the insertion and pull the needle out; try again after the animal has become calm. After the mice have been injected and released, observe them for any respiratory abnormalities to confirm the success of the gavage insertion.

Problem: Low colonization efficiency
Possible reasons: (1) low volume of feces collected; (2) short duration of intragastric administration of the bacterial liquid; (3) more than 6 h from solution preparation to intragastric administration; (4) inappropriate glycerin concentration

The Detection of Donors
The detection of the donor microbiota is a necessary stage. First, the homogeneity of the donor microbiota can be determined to avoid a great degree of variability within recipient groups after FMT. Second, the composition of the donor microbiota is used to verify the successful separation of fecal bacteria.

The Detection of the Transplantation Solution
First, the similarity between the microbiota of the transplantation solution and that of the donor is detected, because a low similarity indicates that the separation of the flora has failed. Second, the microbial composition of the transplantation solution is clarified, which provides a data reference for subsequent analysis and experiments. Most studies suggest that the microbiota in recipients after FMT tends towards the donor composition [35,[47][48][49]. Therefore, this test can also evaluate the success of FMT.

Baseline
Before fecal transplantation, baseline microbial composition testing is critical. Studies have shown that the recipient's microbial diversity at baseline predicts their response to FMT [50]. Generally, broad-spectrum antibiotic mix treatment reduces the intestinal microbiota of mice by a factor of hundreds, to a level similar to that observed in germ-free mice [42,43].

After FMT
After FMT, 16S rRNA gene sequencing is recommended for both the FMT and placebo groups to identify the microbiota. In general, a statistically significant change in the microbiota composition in the FMT group, compared to the placebo group, is found after treatment [34]. Most studies compare the changes in the flora of fecal samples to determine the success of FMT [44]. Ishikawa et al. stated that the feces, luminal contents, or mucosa of the target intestinal segment can be analyzed on the basis of the needs of the experiment, and the sampling sites can even be matched to the microbial donor sites. Different detection sites illustrate different mechanisms. Stool is almost identical to the contents of the rectum, and the contents of a particular intestinal site are more reflective of the physiological conditions at that location in the intestine than feces. The mucosa provides the best reflection of the colonization of microorganisms and, due to its proximity, may have an advantage in reflecting microbial interactions within the enteric nervous system or mucosal immune system.

Application
Research into FMT, combined with sequencing, bioinformatics techniques, and an up-to-date holistic understanding of the microbiome, provides new intuitive evidence for the treatment and mechanisms of microbiota-related diseases.
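As one concrete way to carry out the donor/solution comparison described above (the study does not prescribe a specific metric, and the abundance values below are made up), the Bray-Curtis dissimilarity between 16S-derived relative-abundance profiles can be computed in a few lines.

```python
import numpy as np
from scipy.spatial.distance import braycurtis

# Relative abundances over the same ordered set of taxa (illustrative numbers).
donor    = np.array([0.40, 0.25, 0.20, 0.10, 0.05])
solution = np.array([0.38, 0.27, 0.18, 0.12, 0.05])

similarity = 1.0 - braycurtis(donor, solution)
print(f"Bray-Curtis similarity: {similarity:.3f}")
# A low similarity would indicate that the separation of the flora failed.
```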
Healthy Individuals as Donors
In general, this method of applying FMT is used to transplant the microbiota of healthy individuals to regulate the microbiota of diseased recipients, thereby verifying the therapeutic effect of the microbiota on the disease. For example, Ishikawa et al. found that FMT, following antibiotic pretreatment with amoxicillin, fosfomycin and metronidazole, may be useful for the treatment of ulcerative colitis [44]. Claudia et al. transplanted microbiota from normal donors into a dextran sodium sulphate-induced colitis mouse model and found that restoring a normobiotic core ecology contributed to the resolution of inflammation [14]. Researchers often combine FMT with other techniques to further elucidate the underlying mechanisms of FMT treatment of diseases. Using the sorting and sequencing of immunoglobulin (Ig) A-coated microbiota (called IgA-seq), Lima et al. identified immune-reactive microbiota during FMT [49]. In recent years, active communication between the gut microbiota and the nervous system has been discovered [9,10,51]. Studies have shown that FMT treatment can improve abnormal gut microbiota and cognitive deficits, demonstrating its potential as a therapeutic strategy for cognitive dysfunction and Alzheimer's disease (AD) [15,16].

Disease Models as Donors
As the donor, the bacterial community of a disease model is generally significantly different from that of the healthy control, and is often used to study the disease's characteristic microbial impact on various aspects of the body's physiology. Studies have shown that donor mice display disease-related phenotypic alterations that can be transferred from donors to recipients by FMT [19,52,53]. Furthermore, other techniques, such as using genetically engineered mice or meta-analysis, can be combined to further understand the mechanisms of the microbial influence on disease development, such as core flora [19] or immune regulation [49,52]. Furthermore, human donors can be used to transplant microbiota to mice. After the colonization of germ-free mice with strains derived from hypertensive patients, elevated blood pressure was observed, illustrating a novel causal role for abnormal gut microbiota in the pathogenesis of hypertension [54].

Customized Microbiota as Donors
Once a substance or gene is known to have a positive effect on a disease, the substance-modified microbiota from donors can be used in FMT as a transplantation solution. For example, a ketogenic diet (KD) is known to be useful in the treatment of refractory epilepsy, but the mechanisms underlying its neuroprotective effects remain unclear. By transplanting the most abundant microbiota from the KD-fed mice into antibiotic-treated mice, Olson et al. revealed a potential mechanism by which the gut microbiota modulates the host's metabolism and susceptibility to seizures [20]. Similarly, phlorizin (PHZ), a phytonutrient in apples, can promote good health. Zhang et al. performed FMT by transplanting the feces of PHZ-fed mice to high-fat-diet (HFD)-fed mice, confirming that feeding HFD mice the gut contents of the PHZ-modulated mice attenuates HFD-induced metabolic disorders [55]. Similar studies have also used genetically engineered mice or known disease-tolerant strains as bacterial donors [56,57]. Another method is the application of a mixture of several bacteria from a donor.
These can be core microbiota found through meta-analysis in previous studies, and their function can be re-validated by FMT [2,58].

A Combination of the above Donors
It is more common to use combinations of multiple models than any of the above models alone. Compared with healthy individuals, using disease models as donors can reproduce a disease phenotype in recipient mice [39,[59][60][61]. Sharon et al. conducted further studies, including in vivo metabolome and validation tests, proposing that the gut microbiota regulates behaviors in mice via the production of neuroactive metabolites [61]. Similarly, Kundu et al. used metagenome analysis to select sodium butyrate as a candidate metabolite and, on in vivo re-validation, reproduced the phenotype of FMT [62]. Britton et al. combined the results of 16S rRNA gene sequencing with the detection of homeostatic intestinal T-cell responses to interpret a general mechanism for the microbial contribution to inflammatory bowel disease [63].

Conclusions and Perspectives
Strategically, FMT is the most direct method used to change the composition of the gut microbiota. In the present review, we provided a brief overview of the FMT protocol and summarized the research progress of FMT. However, this review has some limitations. One limitation is that we only included oral administration, the most common route. The protocol provided in a previous section may not be generalizable to other routes, such as rectal FMT. Additionally, we only provide the primary means of FMT and solutions to common problems, and some other difficulties and innovations are not included. These limitations may mean that the instructions are only informative for beginners. However, with the rapid progress of gut microbiology, it is hoped that more studies will be conducted in the future.
2022-04-29T15:23:14.347Z
2022-04-26T00:00:00.000
{ "year": 2022, "sha1": "b96ba25611b73a016332927df98945ae3c0a691a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2607/10/5/902/pdf?version=1650936543", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2923a9cfc59adb433e52384aeb0f9f11fac2b579", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
16719023
pes2o/s2orc
v3-fos-license
Intra-Chunk Dependency Annotation: Expanding Hindi Inter-Chunk Annotated Treebank
We present two approaches (rule-based and statistical) for automatically annotating intra-chunk dependencies in Hindi. The intra-chunk dependencies are added to the dependency trees for Hindi which are already annotated with inter-chunk dependencies. Thus, the intra-chunk annotator finally provides a fully parsed dependency tree for a Hindi sentence. In this paper, we first describe the guidelines for marking intra-chunk dependency relations. Although the guidelines are for Hindi, they can easily be extended to other Indian languages. These guidelines are used for framing the rules in the rule-based approach. For the statistical approach, we use MaltParser, a data-driven parser. A part of the ICON 2010 tools contest data for Hindi is used for training and testing the MaltParser. The same set is used for testing the rule-based approach.

Introduction
Treebanks are corpora in which each sentence is paired with a parse tree. These are linguistic resources in which the morphological, syntactic and lexical information for each sentence has been explicitly marked. Some notable efforts in this direction are the Penn Treebank (Marcus et al., 1993) for English and the Prague Dependency Treebank (Hajicova, 1998) for Czech. Lack of such treebanks has been a major bottleneck in various efforts in advanced research and development of NLP tools and applications for Indian languages. Treebanks can be created manually or semi-automatically. Manual creation of a treebank is costly in terms of both money and time. The annotators follow a set of prescribed guidelines for the annotation task. Semi-automatic creation of a treebank involves first running tools/parsers and then manually correcting the errors. An accurate annotating parser/tool saves cost and time for both the annotation and the validation tasks. A multi-layered Hindi treebank is in the process of being created. Dependency treebank forms the first layer in this annotation. To save annotation effort, manual annotation of the dependency relations for the Hindi dependency treebank is carried out at the inter-chunk level. The intra-chunk relations are marked automatically. The focus of this paper is the task of automatically marking intra-chunk relations. We present both a rule-based and a statistical approach for this expansion process. We call this process 'expansion' since the intra-chunk dependencies are made explicit by removing the chunk encapsulation; one could visualize this as expanding the chunk into sub-trees. The rest of the paper is organized as follows. Sections 2 & 3 give an overview of the Hindi treebank and the steps involved in its development. Section 4 describes the guidelines for annotating intra-chunk dependencies. Section 5 shows our approach to building an automatic intra-chunk annotator. Section 6 talks about issues with a couple of dependency relations and how these are handled by the automatic annotator. We conclude in Section 7 and present future work in Section 8.

Hindi Dependency Treebank
A multi-layered and multi-representational treebank for Hindi is currently being developed. The treebank will have dependency relations, verb-arguments (PropBank, Palmer et al., 2005) and phrase structure (PS) representations.
The dependency treebank contains information encoded at the morpho-syntactic (morphological, part-of-speech and chunk information) and syntactico-semantic (dependency) levels. The manual annotation of the dependency treebank entails the annotation of the part-of-speech (POS) tag and morphological information for each word, identification of the chunk boundary (and chunk tag), and marking of inter-chunk dependency relations between word pairs. The intra-chunk dependencies are left unannotated. The decision to leave intra-chunk relations unmarked is based on the understanding that their identification is quite deterministic and that they can be automatically annotated with a high degree of accuracy. The notion of chunk is, in essence, used as a device for modularity in the process of annotation. The relations among the words in a chunk are not marked in the initial phase of annotation and hence allow us to ignore local details while building the sentence-level dependency tree. An example of inter-chunk dependency annotation is given in Figure 1 below. Note how the two chunks (the noun chunk, NP, and the verb chunk, VGF) are related to each other using the attribute 'drel' (dependency relation); also note that the relations between the chunk-internal words (e.g. 'niilii' and 'kitaab' in the NP chunk) are not marked. The inter-chunk dependency annotation is done following the dependency guidelines in Bharati et al. (2009), which use a dependency framework inspired by Panini's grammar of Sanskrit (see Begum et al., 2008 for more details). Subsequent to inter-chunk dependency annotation, intra-chunk annotation is done automatically following the guidelines described in this paper. The final treebank for Hindi would have other layers of annotation such as PropBank and phrase structure. The conversion to phrase structure depends on the expanded version of the treebank (i.e. trees with inter-chunk, as well as intra-chunk, relations marked). Hence, it is important to have a high-quality complete dependency structure for each sentence, and since inter-chunk annotation is manual, this implies that the process of automatic expansion (i.e. the task of making intra-chunk relations explicit) should be very accurate.

Intra-Chunk Annotation
Showing intra-chunk relations, and thereby a fully parsed dependency tree, implies chunk removal from the inter-chunk dependency annotation. Once the intra-chunk dependencies are made explicit, every sentential token becomes part of the dependency tree. However, it can be useful to retain the chunk information which has been manually validated for inter-chunk dependency annotation. Indeed, previous parsing experiments for Hindi during the ICON 2010 tools contest (Husain et al., 2010) have shown that this information consistently improves performance. Thus, during the process of expansion, we introduce two attribute-value pairs for this purpose. This way we maintain chunk information after making the intra-chunk relations explicit. This makes it possible for the users of the treebank to select the chunk head and ignore the intra-chunk information if so desired. Alternatively, it is also possible to access the complete dependency tree. In Figure 1, the dependency relations are marked between chunk heads, i.e. 'kitaab' is seen related to 'gir' with a 'k1' relation. 'niilii' and 'gaii', on the other hand, are not shown related to any other word. Also note that the chunk boundaries are shown using brackets. Once we show all the tokens as part of the dependency tree, this information goes in the feature structure of individual nodes.
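As a concrete illustration of this bookkeeping, the following sketch (our own simplification; the exact attribute-value strings are assumptions based on the scheme described in the next paragraph) assigns 'chunkId' and 'chunkType' to the tokens of a chunk during expansion.

```python
def expand_chunk(chunk_name, tokens, head_index, used_names):
    """Add chunkId/chunkType attributes to the token dicts of one chunk.

    tokens: list of dicts with at least 'word' and 'pos'.
    used_names: running count of chunk names seen so far in the sentence,
    so that repeated names become NP, NP2, NP3, ...
    """
    count = used_names.get(chunk_name, 0) + 1
    used_names[chunk_name] = count
    chunk_id = chunk_name if count == 1 else f"{chunk_name}{count}"
    for i, tok in enumerate(tokens):
        if i == head_index:
            tok["chunkId"] = chunk_id            # only the head carries chunkId
            tok["chunkType"] = f"head:{chunk_id}"
        else:
            tok["chunkType"] = f"child:{chunk_id}"
    return tokens

used = {}
np_chunk = [{"word": "nIlI", "pos": "JJ"}, {"word": "kiwAba", "pos": "NN"}]
expand_chunk("NP", np_chunk, head_index=1, used_names=used)
# nIlI   -> chunkType child:NP (it attaches to kiwAba with nmod__adj)
# kiwAba -> chunkId NP, chunkType head:NP
```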
This can be seen in Figure 3. The attributes 'chunkId' and 'chunkType' substitute for the bracketing, as well as show the chunk members in the roles of head and child. The head node has a 'chunkId' that gives it a unique chunk name; note that this is the same as the value of 'name' for the original chunk. When multiple chunks with the same name occur in a sentence, we append a number to the name. For example, if there are multiple NPs then the chunk ids will be NP, NP2, NP3, etc. In addition, all the chunk members have a 'chunkType' that gives their membership type. In the example (Figure 3), the adjective 'nIlI' modifies the head noun 'kiwAba' with the 'nmod__adj' relation. The chunk membership is also shown for both these tokens: nIlI is the 'child of the chunk with chunkId=NP', shown by chunkType; kiwAba, on the other hand, is the 'head of the chunk with chunkId=NP', and it has both chunkType and chunkId.

Intra-Chunk Dependency Guidelines
Intra-chunk labels are used when the dependencies within a chunk are made explicit. There are a total of 12 major intra-chunk tags. The tags are of three types: (a) normal dependencies, e.g. nmod__adj, jjmod__intf, etc.; (b) local word group dependencies (lwg), e.g. lwg__psp, lwg__vaux, etc.; and (c) linking lwg dependencies, e.g. lwg__cont. Local word dependencies themselves can be broadly classified into two types: one that handles post-positions and auxiliary verbs, and another that handles negations, particles, etc. The following guidelines are used to annotate the intra-chunk dependencies.
1. nmod__adj: Various types of adjectival modifications are shown using this label. An adjective modifying a head noun is one such instance. The label also incorporates various other modifications such as a demonstrative or a quantifier modifying a noun. Chunk: In the above example NP is the chunk with words 'niilii' (blue) and 'kitaab' (book) with POS tags JJ and NN, respectively.
2. lwg__psp: This relation is used to attach post-positions/auxiliaries associated with a noun or a verb. 'lwg' in the label name stands for local word grouping and associates all the postpositions with the head noun. These relations are distinct from normal dependency relations as they are more morphological in nature.

Intra-Chunk Dependency Annotator
In this section we discuss our approach to building an intra-chunk dependency annotator/parser for Hindi. We describe three experiments; the first two are rule-based and statistical, while the third is hybrid in the sense that it adds a heuristic-based post-processing component on top of the statistical technique. We evaluate these approaches in Section 5.3 after describing the rule-based and statistical approaches in Sections 5.1 and 5.2, respectively.

Rule-Based Dependency Annotator
The rule-based approach identifies the modifier-modified (parent-child) relationship inside a chunk with the help of the rules provided in a rule template. The inter-chunk dependency annotated data is run through a head computation module (a rule-based tool), which marks the head of each chunk. After getting the heads for each chunk, we get the intra-chunk relations using a rule-base that has been manually created. The design of the rule template allows capturing all the information in an SSF representation. The rule template is a 5-column table with each row representing a rule. Table 1 shows a sample rule written using the rule template. The five columns are:
1. Chunk Name: Specifies the name of the chunk for which this expansion rule can be applied.
2. Parent Constraints: The lexical item that satisfies these constraints is identified as the parent. Constraints are designed to capture POS, chunk, word and morphological features. In Table 1 the constraint on the parent is specified using its POS category (NN: common noun).
3. Child Constraints: The lexical item satisfying these constraints becomes the child. Constraints are designed similarly to the parent constraints. In Table 1 the constraint on the child is specified using its POS category (JJ: adjective).
4. Contextual Constraints: Lexical items satisfying constraints 1, 2 & 3 become parent and child in a chunk. One can access the previous and next words of the parent and the child by applying arithmetic on the posn attribute. Information about a lexical item can be accessed by applying attributes like POS (for the part-of-speech tag), CAT (category), and LEMMA (for the root form of the lexical item). Here is an example of a contextual constraint taken from Table 1: posn(parent) > posn(child). Parent and child constraints look at the properties of a word, but there are cases where a constraint needs information beyond the word level. Such constraints involve capturing word-order information. In these cases we use the operator '>'. It can be used only with the 'posn' attribute. Here the constraint means that this rule is applicable only when the child occurs before the parent inside the chunk. One can also specify constraints of the form: POS__posn(parent)-1 == NN. Here the part of speech of the word preceding the parent is accessed and compared with NN: posn(parent)-1 retrieves the position of the word preceding the parent, and POS__ at this position gives us the part-of-speech tag of that lexical item.
5. Dependency Relation: If all the constraints are satisfied, then the dependency relation from this column is marked on the parent-child arc.

Sub-tree Parsing using MaltParser
We use MaltParser (Nivre et al., 2007) as an alternative method to identify the intra-chunk relations. It is well known in the literature that transition-based dependency parsing techniques (e.g. Nivre, 2003) work best for marking short-distance dependencies in a sentence. As must be clear by now, intra-chunk relations are in fact short-distance dependencies, and we basically use MaltParser to predict the internal structure of a chunk. So instead of using it to parse a sentence, we parse individual chunks. Each chunk is treated as a sub-tree. The training data contains sub-trees with intra-chunk relations marked between chunk-internal nodes; the head of the chunk becomes the root node of the sub-tree. The MaltParser is trained on these sub-trees and a model is created. We run the test data on this model for marking intra-chunk dependencies among the sub-trees and then post-process them to obtain the complete dependency tree for the data.

Results
In this section we evaluate the three approaches that were explored to build the automatic intra-chunk annotator. A total of 320 sentences extracted from the ICON 2010 tools contest data for Hindi (Husain et al., 2010) have been manually annotated for intra-chunk relations. Table 2 shows the statistics for this gold data, which has been used for evaluation (and training).

Table 2: Gold data
             Number of Sentences
Training     192
Development  64
Testing      64

Rule-Based Approach: As discussed in Section 5.1, the rule-based approach marks dependency relations mainly by using POS patterns in a chunk.
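To make this concrete, here is a minimal sketch (ours, not the authors' implementation) of how the sample rule from Table 1 — a JJ child preceding an NN parent inside an NP, yielding nmod__adj — could be matched against chunk-internal tokens.

```python
def first_match(tokens, rule):
    """tokens: chunk-internal dicts with 'posn' and 'POS'.
    rule: parent/child POS constraints, a contextual predicate, and a label.
    Returns (parent, child, drel) for the first satisfying pair, else None."""
    for parent in tokens:
        if parent["POS"] != rule["parent_pos"]:
            continue
        for child in tokens:
            if child is parent or child["POS"] != rule["child_pos"]:
                continue
            if rule["context"](parent, child):
                return parent, child, rule["drel"]
    return None

rule = {"chunk": "NP", "parent_pos": "NN", "child_pos": "JJ",
        "context": lambda p, c: p["posn"] > c["posn"],  # child before parent
        "drel": "nmod__adj"}
np_chunk = [{"word": "nIlI", "POS": "JJ", "posn": 1},
            {"word": "kiwAba", "POS": "NN", "posn": 2}]
print(first_match(np_chunk, rule))
# -> (kiwAba token, nIlI token, 'nmod__adj')
```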
Table 3: Parsing accuracies obtained using the rule-based tool.

Statistical/MaltParser-based approach: Table 2 shows the division of data into training, development and test sets. The experimentation procedure is similar to the one used in Kosaraju et al. (2010). We prepared a list of features with the aim of getting a better parse. A simple forward selector is used to prune the list and prepare the best feature template. The selector's task is to include a feature in the feature template if the resulting template improves the LAS score over the previous template. These feature optimization experiments were conducted over 5-fold cross-validation of the combined training and development data. The best feature template was used to get the final accuracies for the test data. The POS-based template scores can be compared with the rule-based results (Table 3), since the rules are formed using POS patterns. We see that both the rule-based and the statistical approach give very high accuracies on the test data. These results validate our initial intuition that identification of intra-chunk relations is quite deterministic. These results also support our annotation design choice of leaving the annotation of intra-chunk relations out of the initial manual phase. Table 5 shows the percentage error contribution of some major tags to the total error of their respective systems. Table 6 shows the precision (P) and recall (R) of some major tags. We made the statistical approach hybrid by post-processing the output of the MaltParser. This involves correcting some dependency relations based on heuristics framed from the rules of the rule-based tool. Heuristics are formed for those dependency relations that have higher recall in the rule-based approach compared to the statistical approach. The modification resulted in an improvement in parsing accuracies. This can be seen in Table 7.

Special Cases
The neat division between the task of inter-chunk parsing and intra-chunk parsing is based on the following assumption: 'Chunks are self-contained units. Intra-chunk dependencies are chunk internal and do not span outside a chunk.' However, there are two special cases where this constraint does not hold, i.e. a chunk-internal element that is not the head of the chunk has a relation with a lexical item outside its chunk; therefore, these two relations have to be handled separately. They are related to punctuation and coordination.
1. rsym__eos: The EOS (end-of-sentence) marker occurs in the last chunk of the sentence. It attaches to the head of the sentence (which may lie in the same chunk or another chunk) with this relation.
2. lwg__psp: As noted in Section 4, a PSP (postposition) attaches to the head of its chunk with a lwg__psp relation. However, if the rightmost child of a CCP (conjunction chunk) is a nominal (NP or VGNN), one needs to attach the PSP of this nominal child to the head of the CCP during expansion. If there are multiple PSPs, the first PSP gets the lwg__psp relation and the following ones get lwg__cont. Take the following example:

NP(raama_NNP) CCP(aur_CC) NP(siitaa_NNP ne_PSP)
'ram'         'and'       'sita'       'ERG'

In this case the PSP connects to the CC with the relation lwg__psp. The sub-tree after expansion is shown in Figure 6, where 'aur' heads 'raama' and 'siitaa' via ccof and 'ne' via lwg__psp (Figure 6: Expanded sub-tree with the PSP connected to the CC).
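A minimal sketch (ours) of this special-case reattachment: the trailing postpositions of the rightmost nominal child of a CCP are attached to the conjunction head, the first with lwg__psp and any further ones with lwg__cont.

```python
def attach_ccp_psps(cc_head, rightmost_nominal_tokens):
    """Return (child, head, label) arcs for the PSPs of the rightmost
    nominal child of a CCP chunk."""
    psps = [t for t in rightmost_nominal_tokens if t["POS"] == "PSP"]
    return [(p["word"], cc_head["word"],
             "lwg__psp" if i == 0 else "lwg__cont")
            for i, p in enumerate(psps)]

cc = {"word": "aur", "POS": "CC"}
sita_np = [{"word": "siitaa", "POS": "NNP"}, {"word": "ne", "POS": "PSP"}]
print(attach_ccp_psps(cc, sita_np))   # [('ne', 'aur', 'lwg__psp')]
```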
Conclusion
In this paper we described annotation guidelines for marking intra-chunk dependency relations. We then went on to show that these relations can be automatically identified with high accuracy. This was illustrated using (1) a rule-based approach that mainly used intra-chunk POS patterns, and (2) a statistical approach using MaltParser. We also showed that these two systems can be combined to achieve even higher accuracy. The error analysis shows that the remaining errors are concentrated in certain relations. This is good news, because one can then make very targeted manual corrections after the automatic tool is run.
2014-10-01T00:00:00.000Z
2012-07-12T00:00:00.000
{ "year": 2012, "sha1": "5740f4932e3cd2a4eef19e043973618dff64d87c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "cca9bff90640fd4036ea0bbf0bb37d72af62ae56", "s2fieldsofstudy": [ "Computer Science", "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
257304559
pes2o/s2orc
v3-fos-license
Determination of Fv/Fm from Chlorophyll a Fluorescence without Dark Adaptation by an LSSVM Model
Evaluation of photosynthetic quantum yield is important for analyzing the phenotype of plants. Chlorophyll a fluorescence (ChlF) has been widely used to estimate plant photosynthesis and its regulatory mechanisms. The ratio of variable to maximum fluorescence, Fv/Fm, obtained from a ChlF induction curve, is commonly used to reflect the maximum photochemical quantum yield of photosystem II (PSII), but it is measured after a sample is dark-adapted for a long time, which limits its practical use. In this research, a least-squares support vector machine (LSSVM) model was developed to explore whether Fv/Fm can be determined from ChlF induction curves measured without dark adaptation. A total of 7,231 samples from 8 different experiments, under diverse conditions, were used to train the LSSVM model. Model evaluation with different samples showed excellent performance in determining Fv/Fm from ChlF signals without dark adaptation. Computation time for each test sample was less than 4 ms. Further, the prediction performance on the test dataset was found to be very desirable: a high correlation coefficient (0.762 to 0.974), a low root mean squared error (0.005 to 0.021), and a residual prediction deviation of 1.254 to 4.933. These results clearly demonstrate that Fv/Fm, the widely used ChlF induction feature, can be determined from measurements without dark adaptation of samples. This will not only save experiment time but also make Fv/Fm useful in real-time and field applications. This work provides a high-throughput method to determine this important photosynthetic feature through ChlF for phenotyping plants.

Introduction
Photosynthesis is the source of food, energy, fiber, and oxygen for all living organisms including humans. Evaluation of photosynthetic quantum yield is important for analyzing plant phenotypes; however, current plant phenomics research is often limited to external geometric features. When the chloroplasts in plants and algae absorb sunlight, pigments, mainly chlorophyll molecules, in the light-harvesting pigment protein (antenna) complexes are excited and the absorbed energy is transferred to photosystem II (PSII) and photosystem I (PSI) reaction centers [1]. The absorbed light energy is used mostly for photosynthesis but is partly dissipated in the form of chlorophyll a fluorescence (ChlF) or heat [2]. Background on the various steps of photosynthesis is available in several publications [3,4]. Environmental or plant physiological changes that affect PSII lead to changes in ChlF, which can be used as a fast, sensitive, and nondestructive indicator of the status of PSII [5,6]. Analysis of ChlF changes is one of the most powerful and widely used techniques to study the effects of various types of stress on the photosynthetic process [7][8][9]. At present, ChlF is widely used as a probe for not only PSII but also overall photosynthesis [10], photosynthetic systems [11], photochemistry and heat dissipation [12], several photosynthetic reactions [13], and photoinhibition [14]. Furthermore, it is used to monitor different types of abiotic stress [15], including drought [16], heat [17,18], environmental pollution [19], nutrient status [20], and plant phenotyping [21]. ChlF measurement can serve as a plant physiological variable related to photosynthesis in phenotypic analysis. Advances in optical phenotyping (including that by ChlF) of cereal crops have been summarized by Sun et al. [22].
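Before going further, it helps to fix the definition of the feature itself: Fv/Fm is computed from the minimal fluorescence Fo and the maximal fluorescence Fm of a dark-adapted induction curve, as detailed in the next section. A one-line illustration with made-up values:

```python
f_o, f_m = 310.0, 1480.0     # minimal and maximal fluorescence (arbitrary units)
fv_fm = (f_m - f_o) / f_m    # ~0.79, in the range typical of unstressed leaves
print(f"Fv/Fm = {fv_fm:.3f}")
```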
Although ChlF has been used for many purposes, as mentioned above, the interpretation of ChlF measurements is quite complex. A very important feature derived from the ChlF induction curve is Fv/Fm [23], which provides information on the effects of carbon metabolism and has been successfully used as a sensitive indicator of the photosynthetic performance of plants [24]. To determine the Fv/Fm ratio, dark adaptation is needed to open all the PSII reaction centers, and only then can the minimal fluorescence (Fo) be measured. (For a discussion on the timing for measuring Fo, see the study by Padhi et al. [25].) After excitation with strong continuous light, most, if not all, of the reaction centers are closed, and thus ChlF reaches a maximum value (Fm). The difference, Fv = Fm − Fo, is referred to as the variable fluorescence. The ratio, Fv/Fm = (Fm − Fo)/Fm, reflects an intrinsic PSII efficiency and measures the quantum yield of the primary PSII photochemistry in dark-adapted photosynthetic samples [26,27]. Fv/Fm has been successfully used as an indicator of plant photosynthetic performance [28]. It has also been used to obtain information on photoinhibition induced by abiotic stress [29]. Fv/Fm can also reflect the severity of plant phenotypic diseases, and it is an important indicator of plant stress. Rousseau et al. [21] focused on phenotyping by analyzing Fv/Fm images, and their results showed that there was a clear, strong difference between the infected tissues and the healthy tissues. Zhou et al. [30] used ChlF in the phenotypic analysis of faba beans (Vicia faba L.) under both cold and heat stress and found that Fv/Fm is a very effective parameter in detecting the damage by low and high temperatures to PSII; further, they identified high-temperature-tolerant broad bean genotypes. Therefore, Fv/Fm can be used as a physiological marker for phenotyping. Before measuring Fo, it is necessary to dark-adapt a plant sample for 15 to 30 min [31] or even longer [32]. This dark-adaptation process is time-consuming. Far-red light, absorbed mainly by PSI, might be used to speed up the oxidation of the reduced plastoquinone (PQ) pool and thus suppress the increase of the measured Fo, i.e., Fo′ (the minimum ChlF intensity in the light-adapted state); this method is often applied following dark adaptation. It is thus desirable to find a method to determine Fv/Fm from ChlF measurements without dark adaptation. The exact relationship between ChlF with dark adaptation and that without dark adaptation is complex and has not yet been established. By using contemporary computational methods, this hidden relationship can be explored to determine accurate Fv/Fm from ChlF measurements without dark adaptation, but this has not yet been done by any research group. Artificial intelligence methods have been widely used to identify hidden relationships in many fields. Using these methods to analyze ChlF data can identify complex relationships in plant responses to stresses [33]. Tyystjärvi et al. [34] have identified species of crops and weeds by analyzing ChlF induction curves with an artificial neural network method. This method has been used to identify plant species by analyzing ChlF induced by different types of illumination [35]. Furthermore, Goltsev et al.
[36] have constructed and trained an artificial neural network using photoinduced prompt ChlF, delayed ChlF, and the 820-nm modulated reflection signal (measuring PSI) to identify changes in the photosynthetic activity in bean leaves during drying. Yao et al. [37] have applied kinetic ChlF and multi-color fluorescence imaging technology for phenotypic analysis of the Arabidopsis drought stress response and, from it, have successfully classified Arabidopsis under different drought stress levels by a support vector machine (SVM). Artificial intelligence methods may potentially be used to find the hidden relationship between Fv/Fm and ChlF measured without any dark adaptation of plants by using a general learning strategy (i.e., a mathematical method), so that Fv/Fm under dark adaptation can be predicted from ChlF measurements without dark adaptation. In our present study, a least-squares SVM (LSSVM), an artificial intelligence method, was used to determine Fv/Fm from ChlF measurements without dark adaptation for multiple plant species and conditions, which allows one to save a tremendous amount of experimental time and provides an important feature for plant phenomics.

Plant samples
Eight sets of experiments with a total of 7,231 samples were performed on 6 plant species (Oryza sativa L. [rice], Camellia japonica, Euonymus japonicus Thunb, Osmanthus sp., Cerasus lannesiana var. speciosa, and Capsicum annuum). These plant species were measured under different drought stress levels, ambient growth temperatures, growing seasons, and measurement environments. Details are described below, in the order in which the experiments were done, from the summer of 2019 until the winter of 2021.

Rice (Oryza sativa L.)
The first set of experiments was conducted on rice plants (Oryza sativa L.) under 4 different drought stress conditions. Rice plants were taken with roots from a production field in Jiangsu, China, in the early mornings, during the growing season in the summer of 2019, when the ambient temperature was ~28 °C. To reduce the effects of variations in moisture in different samples, before the ChlF measurements the roots of the plants were completely immersed in water for at least 2 h. Then, the roots were placed in 20% polyethylene glycol for different durations (0, 1, 2, and 4 h) of treatment to achieve different levels of drought stress or physiological state [38]. The number of samples of rice plants without drought or with drought treatment for 1, 2, and 4 h was 1,335, 1,093, 1,322, and 1,146, respectively. The temperature during ChlF measurement was between 30 and 36 °C, and the ambient photosynthetic photon flux density (PPFD) was between 3 and 7 μmol photons m−2 s−1.

Camellia japonica and Euonymus japonicus Thunb
The second set of experiments was carried out on Japanese camellia (Camellia japonica) leaves, using 314 samples. The third set of experiments was done on leaves of Euonymus japonicus Thunb, also using 314 samples. Both Camellia japonica and Euonymus japonicus Thunb were grown on the campus of Jiangnan University (Wuxi, China). Leaves from these 2 plants were picked in the mornings in April 2021 and were transferred immediately to the laboratory for measurements. To reduce the effect of variations in the water condition, the sampled leaves of the second and the third sets of experiments were floated on water for at least an hour. The temperature during ChlF measurement was ~23 °C and the ambient PPFD was ~5 μmol photons m−2 s−1.

Osmanthus sp.
and Cerasus lannesiana var. speciosa
The fourth and the fifth sets of experiments were carried out on intact plants in the field, using leaves of Osmanthus sp. with 237 samples and leaves of Cerasus lannesiana var. speciosa with 335 samples. The plants in the fourth and the fifth experiments were grown naturally on the campus of Jiangnan University (Wuxi, China). The ChlF data of the fourth and the fifth experiments were collected at the end of July 2021; the ambient temperature was ~33 °C, and the ambient PPFD was between 58 and 1,960 μmol photons m−2 s−1.

Capsicum annuum
The sixth set of experiments was performed on attached leaves of Capsicum annuum. Here, 356 samples were tested in the field; the plants were grown in a greenhouse in Wuxi, China. The ChlF data were collected at the beginning of August 2021. The temperature was between 36 and 40 °C, and the ambient PPFD was between 58 and 1,770 μmol photons m−2 s−1 during measurements in the greenhouse.

Camellia japonica and Osmanthus sp.
The seventh and eighth experiments were carried out on intact plants on the campus of Jiangnan University (Wuxi, China), which included leaves of Osmanthus sp. with 379 samples and of Camellia japonica with 400 samples. These experiments were done in December 2021; the ambient temperature was between 8 and 15 °C, and the ambient PPFD was between 78 and 1,380 μmol photons m−2 s−1. Table 1 shows all plant samples and experiment specifics.

Instrumentation and measurements
The ChlF parameter Fv/Fm (ratio of variable to maximum fluorescence) was measured under 2 conditions: with and without dark adaptation of the leaves. The condition without dark adaptation means that the plant leaves were not dark-adapted before the ChlF measurement. The leaves were first measured without dark adaptation, and then they were measured in the dark-adapted state after dark adaptation. Twenty-minute dark adaptation was applied through dark-adaptation clips [39]. A FluorPen ChlF measurement device (Photon Systems Instruments, Drásov, Czech Republic) was used to measure the ChlF induction transient of the leaves, where O is the minimum fluorescence, J and I are inflection steps, and P is the peak (the maximum). The illumination light intensity used to excite the ChlF of the leaves was set to 2,400 μmol photons m−2 s−1 for all samples. The ambient light intensities for all our experiments were measured by a light intensity meter (VC1010A, Victor, Shenzhen, China). The light intensity read in lux from the meter was converted to PPFD. The conversion relationships are 1 klux = 19.5 μmol photons m−2 s−1 for daylight PPFD [40] and 1 klux = 12 μmol photons m−2 s−1 for white fluorescent light [41]. The values of ambient light intensities in this work are only used to show that measurements were made on samples illuminated with a wide range of initial lighting conditions. Estimation errors of PPFD from lux have no effect on the conclusions of this work.
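The illuminance conversion quoted above reduces to a single multiplication; a minimal helper (ours) applying the two stated factors:

```python
def lux_to_ppfd(klux: float, source: str = "daylight") -> float:
    """Convert illuminance (klux) to PPFD (umol photons m-2 s-1)."""
    factors = {"daylight": 19.5, "white_fluorescent": 12.0}
    return klux * factors[source]

print(lux_to_ppfd(50.0))                        # 975.0 for daylight
print(lux_to_ppfd(50.0, "white_fluorescent"))   # 600.0
```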
Development of an LSSVM model
An SVM maps data from an input space to a high-dimensional feature space through a nonlinear mapping process. LSSVM is an extension of SVM; it uses equality constraints instead of inequality constraints and uses the sum-of-squared-error loss function as the empirical loss, which transforms training into solving a set of linear equations. In this work, an LSSVM model was employed to map the relationship between the ChlF signal measured without dark adaptation and the induction feature Fv/Fm measured with dark adaptation. The LSSVM regression equation is:

f(x) = w^T φ(x) + b,  (1)

where x is the ChlF response without dark adaptation, f(x) is the corresponding output, φ(x) is a nonlinear mapping function that maps x to a high-dimensional feature space, w is a weighting vector, and b is a bias variable. Based on the principle of structural risk minimization, the function becomes:

f(x) = Σ_{i=1}^{m} a_i K(x, x_i) + b,  (2)

where K is a kernel function, a_i is the Lagrangian multiplier, i is an integer, and m is the number of samples in the training dataset. According to the Mercer condition, the kernel function can be written as:

K(x_i, x_j) = φ(x_i)^T φ(x_j).  (3)

The following radial basis function was used as the kernel function in our research:

K(x, x_i) = exp(−||x − x_i||^2 / (2τ^2)),  (4)

where τ represents the parameter of the Gaussian radial basis kernel function. For the training dataset {(x_i, y_i), i = 1, 2, …, m}, x_i represents the input of the i-th training sample (ChlF measured without dark adaptation), y_i is the target value of the i-th training sample (Fv/Fm measured with dark adaptation), and m is the number of samples in the training dataset. For the testing dataset {(X_i, Y_i), i = 1, 2, …, n}, X_i is the input of the i-th test sample, Y_i is its measured target value, and n is the number of samples in the test dataset. X_i is fed to the trained LSSVM model (Eq. 2) to calculate the corresponding predicted Fv/Fm value, and the i-th predicted Fv/Fm value is expressed as YY_i (i = 1, 2, …, n).

Data normalization
To reduce the influence of differences in data magnitudes, the following zero-mean normalization method (Z-score normalization) was used to normalize both the ChlF signal data without dark adaptation and the Fv/Fm target values with dark adaptation so that both were of the same order of magnitude:

Z = (x − μ)/σ,  (5)

where μ denotes the mean and σ the SD of the original data x, and Z represents the standardized value. The predicted Fv/Fm values from the model were denormalized to their original scale for testing and evaluation.

Model testing and evaluation
To evaluate the performance and generalization ability of the model, the following metrics computed from the test samples were used to assess the predicted Fv/Fm: (a) root mean square error (RMSE); (b) correlation coefficient (CC); and (c) residual predictive deviation (RPD), as shown below in Eqs. 6 to 8:

RMSE = sqrt[(1/n) Σ_{i=1}^{n} (Y_i − YY_i)^2],  (6)

CC = Σ_{i=1}^{n} (Y_i − Ȳ)(YY_i − ȲY) / sqrt[Σ_{i=1}^{n} (Y_i − Ȳ)^2 · Σ_{i=1}^{n} (YY_i − ȲY)^2],  (7)

RPD = SD/RMSE,  (8)

where Ȳ is the mean value of the test samples, ȲY is the mean of the predicted values, SD is the standard deviation of the test samples, and n is the number of samples in the test dataset. All these metrics measure the deviation of the predicted Fv/Fm values from the true values. As is commonly known, the smaller the RMSE, or the closer the CC is to unity, the higher the prediction performance. For most applications, models with RPD values lower than 1.5 are considered insufficient, while models with values greater than 2.0 have good robustness [42]. In the training of the LSSVM model, 10-fold cross-validation and a grid search were used to optimize the 2 parameters (the regularization coefficient and the parameter of the Gaussian radial basis kernel function) that affect the accuracy and the complexity of the model. In each of the 10 runs, 10%, 20%, …, and 90% of each sample type was randomly selected as the training dataset, and the remainder was used as the testing dataset. The average values of RMSE, CC, and RPD obtained in the 10 runs were used to evaluate model performance. The LSSVM model was implemented in MATLAB 2019b (MathWorks, Inc., Natick, MA, USA).
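The authors implemented the model in MATLAB 2019b; purely as an illustration of the same mathematics, here is a compact NumPy sketch (ours; the kernel parameterization in Eq. 4 and the hyperparameter values are assumptions) in which LSSVM training reduces to solving one linear system in the bias b and the dual variables a, together with the metrics of Eqs. 6 and 8.

```python
import numpy as np

def rbf(A, B, tau):
    """Gaussian RBF kernel matrix between row-wise sample sets A and B (Eq. 4)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * tau ** 2))

def lssvm_fit(X, y, gamma=10.0, tau=1.0):
    """Solve [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y] for b and a."""
    m = len(y)
    A = np.zeros((m + 1, m + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, tau) + np.eye(m) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                       # bias b, multipliers a

def lssvm_predict(X_train, b, a, X_new, tau=1.0):
    return rbf(X_new, X_train, tau) @ a + b      # Eq. 2

def rmse(y, yy):                                 # Eq. 6
    return np.sqrt(np.mean((y - yy) ** 2))

def rpd(y, yy):                                  # Eq. 8
    return y.std(ddof=1) / rmse(y, yy)
# CC (Eq. 7) is the Pearson correlation: np.corrcoef(y, yy)[0, 1].

rng = np.random.default_rng(0)
X = rng.random((60, 8)); y = np.sin(X.sum(1))    # toy stand-in for ChlF signals
b, a = lssvm_fit(X, y)
yy = lssvm_predict(X, b, a, X)
print(rmse(y, yy), rpd(y, yy))
```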
Variations in Fv/Fm with dark adaptation and without dark adaptation
To explore the difference between different sample types in the Fv/Fm measured with and without dark adaptation, statistical comparisons of the Fv/Fm from different sample types are presented in Table 2. Values indicated with different letters in a column are significantly (P < 0.05) different from one another by the LSD (least significant difference) test. The Fv/Fm values measured with and without dark adaptation show statistical differences between most sample types and treatments, as shown in Table 2. The prediction results for the training dataset are presented in Tables 3 to 5. When the training dataset sample exceeds 70%, the CC of most sample types for the training dataset is greater than 0.80 in Table 3, and the RPD of most sample types is greater than 1.5 in Table 5. The RMSE of different sample types for the training dataset is less than 0.016 in Table 4.

Prediction of Fv/Fm using ChlF without dark adaptation on the test dataset
The test dataset results of using the LSSVM model to determine Fv/Fm from ChlF measured without dark adaptation under different training dataset sample numbers are presented in Tables 6 to 8. They show nearly negligible differences between the predicted and the real Fv/Fm; the RPD values of most sample types are much greater than 2, and all RPD values are greater than 1.5, which shows that the model has good robustness for the test dataset. Figure 1 shows a comparison of the Fv/Fm values predicted by the LSSVM model obtained from different training dataset sample numbers with the experimental values measured after dark adaptation for all the tested samples. It is obvious from the plots that the Fv/Fm values predicted by the LSSVM model match the real values of Fv/Fm well. To further evaluate model prediction performance, we computed a regression line to verify whether it is close to the 1:1 line. As shown in Figure 1, the fitted regression lines have small errors in slope and intercept; further, the predicted values of Fv/Fm almost coincide with the perfect 1:1 line for the sample types used. The data points are tightly distributed around the ideal straight line, which means that the predicted values are linearly related to the real values. The coefficient of determination (R²) between the predicted Fv/Fm and the Fv/Fm measured with dark adaptation is 0.970 for all plant samples, which is close to 1, and the P value of 0.000 is less than the default significance level of 0.05. We emphasize that a significant linear regression relationship exists between the Fv/Fm predicted from the ChlF signal without dark adaptation and the Fv/Fm measured with dark adaptation. Our data clearly show that the LSSVM model is highly effective in predicting Fv/Fm from ChlF measured without dark adaptation.

Discussion
Understanding the physiological mechanisms behind plant genetic phenotypes is of great significance for improving the growth and yield of crops. ChlF is a very useful tool for plant phenotyping and photosynthesis research, and Fv/Fm is subject to genetic control. The genetic phenotype of ChlF parameters is affected under stress conditions. It is very important to study the correlation between the varietal differences in Fv/Fm and the growth and yield of crops. Dark adaptation has been the usual treatment before ChlF induction measurement, and it can often be used as a reference for plant stress research. Papageorgiou et al. [43] reported that different dark adaptation times had an important impact on the ChlF results. In addition, dark adaptation needs additional equipment and is very time-consuming.
In this work, ChlF signals measured without dark adaptation have been used to successfully obtain the true Fv/Fm by using an LSSVM model. The experiments in this work involved the use of 6 different genetic varieties of plants, 4 levels of drought stress conditions, several different environmental temperatures (8 to 40 °C), 3 different growing seasons (spring, summer, and winter), a wide range of PPFD (between 3 and 1,960 μmol photons m−2 s−1), and 3 different measurement locations (field, greenhouse, and laboratory) (Table 1). All of the above led to enormous differences in the ChlF parameters among different plants under a large variety of physiological conditions (Table 2). As is well known, Fv/Fm is closely related to the physiological status of plants. Our results clearly show that the developed model predicts the Fv/Fm of different samples with only very small errors (Tables 3 to 5). These data clearly prove that the LSSVM model can indeed discern the hidden relationship between the ChlF signal without dark adaptation and Fv/Fm with good robustness. The computation time for each test sample is less than 4 ms (processor: Intel Core i5-9400F CPU @ 2.90 GHz), much less than the dark-adaptation time (almost 20 min) taken in the traditional experiments. The machine learning method proved effective in uncovering the hidden relationships between ChlF signals of plant leaves with and without dark adaptation. The ability to measure Fv/Fm without dark adaptation will save experimental time and cost. More importantly, this will allow Fv/Fm to be used in the field and in real time, which will make Fv/Fm a much more convenient measure for probing the physiological status of plants. This work provides a high-throughput method for determining the important photosynthetic feature through ChlF, which would provide plant physiological features for phenotyping. This work also implies that hidden nonlinear biological photosynthetic behavior can be discerned by artificial intelligence. The concept in this work is not limited to predicting Fv/Fm; it may also be used to predict other ChlF parameters, such as the effective photochemical quantum yield of PSII (Y[II]), the quantum yield of regulated energy dissipation in PSII (Y[NPQ]), and the quantum yield of nonregulated heat energy dissipation and fluorescence emission (Y[NO]), after model retraining. Recently, many updated machine learning methods have appeared in the literature [44], such as Extreme Gradient Boosting (XGBoost) [45] and Light Gradient Boosting Machine (LightGBM) [46]. The performance of XGBoost and LightGBM was tested for predicting Fv/Fm values from ChlF measurements without dark adaptation in this work for comparison, but their performance was similar to that of the LSSVM model, which implies that an LSSVM model is sufficient for this application. In this work, we thus report only the results from the simple LSSVM model, as its performance is already very promising. The LSSVM model used here has shown great promise with small prediction errors, but, as is the case for other machine-learning tools, more experiments are needed to build a much bigger public training and testing dataset, like the well-known ImageNet for image recognition [47], to drive improvements of the prediction model. Dark adaptation of photosynthetic samples has been essential in measuring the quantum yield of PSII via Fv/Fm through ChlF-based analysis of photosynthesis and plant responses.
We developed an LSSVM model that can obtain Fv/Fm from ChlF signals measured without dark adaptation. The model was validated with data collected from many different plants under varied conditions. Our results have established that the LSSVM model can indeed determine Fv/Fm from ChlF measurements without dark adaptation. We emphasize that this work demonstrates that Fv/Fm can be determined without dark adaptation of plants, which will make the measurement more convenient and enhance research in plant physiology and phenotyping.
2023-03-03T16:12:02.802Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "854a381f93a8a09b5d023e05e3b2f171ba885420", "oa_license": "CCBY", "oa_url": "https://doi.org/10.34133/plantphenomics.0034", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e9a09c160666749eaa270dcd383c739c8399b805", "s2fieldsofstudy": [ "Environmental Science", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
18650798
pes2o/s2orc
v3-fos-license
Cyclical and Patch-Like GDNF Distribution along the Basal Surface of Sertoli Cells in Mouse and Hamster Testes
Background and Aims
In mammalian spermatogenesis, glial cell line-derived neurotrophic factor (GDNF) is one of the major Sertoli cell-derived factors which regulates the maintenance of undifferentiated spermatogonia, including spermatogonial stem cells (SSCs), through GDNF family receptor α1 (GFRα1). It remains unclear as to when, where and how GDNF molecules are produced and exposed to the GFRα1-positive spermatogonia in vivo.
Methodology and Principal Findings
Here we show the cyclical and patch-like distribution of immunoreactive GDNF-positive signals and their close co-localization with a subpopulation of GFRα1-positive spermatogonia along the basal surface of Sertoli cells in mice and hamsters. Anti-GDNF section immunostaining revealed that GDNF-positive signals are mainly cytoplasmic and observed specifically in the Sertoli cells in a species-specific as well as a seminiferous cycle- and spermatogenic activity-dependent manner. In contrast to the ubiquitous GDNF signals in mouse testes, high levels of its signals were cyclically observed in hamster testes prior to spermiation. Whole-mount anti-GDNF staining of the seminiferous tubules successfully visualized the cyclical and patch-like extracellular distribution of GDNF-positive granular deposits along the basal surface of Sertoli cells in both species. Double-staining of GDNF and GFRα1 demonstrated the close co-localization of GDNF deposits and a subpopulation of GFRα1-positive spermatogonia. In both species, GFRα1-positive cells showed a slender bipolar shape as well as a tendency for increased cell numbers in the GDNF-enriched area, as compared with those in the GDNF-low/negative area of the seminiferous tubules.
Conclusion/Significance
Our data provide direct evidence of a regionally defined patch-like GDNF-positive signal site in which GFRα1-positive spermatogonia possibly interact with GDNF in the basal compartment of the seminiferous tubules.

Introduction
In mammalian testes, spermatogonial stem cells (SSCs) are continuously maintained by self-renewal in the basal compartment of seminiferous tubules. This compartment is defined as the area between the tight junctions of Sertoli cells and the continuous basal lamina [1][2][3][4]. SSCs are a small subset of spermatogonia that express GFRα1 (a GPI-linked cell surface protein) and Nanos2 (a zinc-finger RNA-binding protein), and are mostly A-single (singly isolated) and A-paired (two interconnected) cells [5][6][7]. GFRα1-positive cells then give rise to longer spermatogonial chains (A-aligned [chains of 4, 8, 16 and 32 cells, etc.]), which then differentiate into a larger number of advanced spermatogenic cells during the basal-to-adluminal translocation of the seminiferous epithelium [8][9][10]. These cells ultimately form spermatozoa at the luminal edge. In most mammals, it is likely that the balance between self-renewal and differentiation in the SSC pool is primarily regulated by glial cell line-derived neurotrophic factor (GDNF), which is produced by Sertoli cells [3,[11][12][13]. In Gdnf-heterozygote mice, the undifferentiated spermatogonia including SSCs are reduced in number in the basal compartment of the seminiferous tubules, resulting in a lack of spermatogenic cells from the basal to the apical side in older mice [11].
Moreover, in Gdnf-overexpressing mice, the SSC-like cells are clearly increased in number in the basal region, leading to defective spermatogonial differentiation without any spermatozoa [11]. It has also been shown that GDNF is essential for the maintenance of SSC self-renewal in vitro (germ line stem [GS] cells) [14][15][16]. SSCs, with both self-renewal and differentiation capabilities, can be maintained in serum-free conditions with GDNF and several other factors including bFGF and EGF [14][15][16][17][18]. Moreover, GDNF enhances the short-term proliferation and survival of bovine SSCs [19][20][21] and the long-term expansion of hamster SSCs [22] in vitro. These data indicate that, in mammalian spermatogenesis, GDNF is one of the major regulators which control the maintenance of the SSC pool in a dose-dependent manner. In A-single and A-paired spermatogonia, including the SSC pool, it has been shown that GFRα1/c-Ret mediates secreted GDNF signals involved in the regulation of their proliferation and survival [12,23,24], positively through Src and AKT signaling [25][26][27] and negatively through PLZF-derived mTOR signaling [28]. On the other hand, it is likely that Gdnf mRNA expression is also regulated in Sertoli cells in relation to their spermatogenic activities. For example, the Gdnf expression level is up-regulated in W/Wv testes, which lack spermatogenic cells due to a germ cell-autonomous defect, as compared with that in normal testes [29]. This upregulation is possibly a positive response to compensate for the reduction in the number of germ cells in the tubules. It has also been shown that Gdnf expression can be up-regulated by pituitary follicle-stimulating hormone (FSH) [29,30]. This finding suggests a possible mechanism for the control of SSC self-renewal at the hypothalamic-pituitary-gonadal axis, especially in seasonal breeders like hamsters and some domestic animals (e.g., horse, sheep, and goat). Moreover, these findings suggest that finely tuned control of the level of GDNF expression is crucial for the maintenance of a constant number of SSCs, which, in turn, leads to normal spermatogenesis and fertility. It remains unclear, however, as to when and where GDNF molecules are produced, secreted, and exposed to the GFRα1-positive spermatogonia in the basal compartment of seminiferous epithelia in vivo. In this study, we demonstrated the spatiotemporal patterns of immunoreactive GDNF molecules in mouse and hamster testes in active and inactive states to identify a potential interaction site between GDNF molecules and GFRα1-positive spermatogonia. Here, we showed the cyclical and patch-like distribution of immunoreactive granular GDNF deposits along the basal surface of Sertoli cells and their close co-localization with a subpopulation of GFRα1-positive spermatogonia in vivo.

Ethics statement
All animal experiments in this study were carried out in strict accordance with the Guidelines for Animal Use and Experimentation as set out by the University of Tokyo. The procedures were approved by the Institutional Animal Care and Use Committee of the Graduate School of Agricultural and Life Sciences at the University of Tokyo, and the approval IDs are P11-500 and P11-503.

Animals
ICR, C57BL6, W/Wv and Green (B6-Tg(CAG-EGFP)) mice (8-week-old; SLC, Japan) and Syrian hamsters (8-week-old; Nisseiken and SLC, Japan) were used in this study (more than four animals were examined for each experiment group).
Antibody specificity was confirmed using testes obtained from three Gdnf-null newborn pups, a mutant line known for postnatal lethality [31]. Gonadally inactive (i.e., photoinhibited and hibernating) testes were prepared by transferring male hamsters (8-week-old; 36 animals in total) from a long (conventional) photoperiod (14 h L, 10 h D) to a short photoperiod (6 h L, 18 h D), as described previously [32]. Then, at Week 13, at the peak of testicular photoregression, approximately half of the animals were transferred from an ambient temperature of 23°C to 5°C to induce hibernation 4 to 8 weeks later.

Preparation of W/Wv Testes Transplanted with Spermatogonial Stem Cells

For spermatogonial transplantation, cell suspensions including spermatogonia were prepared from 10-day-old wild-type and Green C57BL6 males and transplanted into the testes of 8-week-old recipient W/Wv mice (6 recipient males in total), as described previously [33][34][35]. At 3 months after transplantation, the recipient W/Wv testes were dissected and processed for immunohistochemical analyses.

Immunohistochemistry

For section immunohistochemical staining, testes were isolated at various stages and fixed in Bouin's solution or 4% PFA for 4 h. The specimens were dehydrated in ethanol, cleared in xylene, and then routinely embedded in paraffin. The deparaffinized sections were incubated with rabbit anti-GDNF antibody (1:200 dilution; sc-328, raised against the sequences within amino acids 155-205 of GDNF [protein accession P39905]; Santa Cruz Biotechnology) at 4°C for 12 h. The reaction was visualized with biotin-conjugated secondary antibody in combination with the Elite ABC kit (Vector Laboratories, CA). Some sections stained with anti-GDNF antibody were re-stained with periodic acid Schiff (PAS) reagent to accurately stage the seminiferous cycle. In the immunostained testicular samples of hamsters exposed to a short photoperiod/low ambient temperature, we counted the number of seminiferous tubules with or without GDNF-positive signals, and then estimated the relative incidence of GDNF-positive seminiferous epithelia at each stage. Immunohistochemical staining of each sample was conducted at least three times to confirm its reproducibility. For whole-mount immunohistochemistry without permeabilization, all testes (16 and 28 testes in hamsters and mice, respectively) were removed from the tunica albuginea and dispersed roughly in cold PBS. The tissues were then fixed in 4% PFA for 8 to 12 h at 4°C and washed with cold PBS several times. Without any permeabilization steps using methanol and detergent (Tween 20/Triton X-100), the seminiferous tubule fragments were incubated with rabbit anti-GDNF (1:200 dilution) and goat anti-GFRα1 (1:100 dilution; R&D Systems)/goat anti-mouse c-kit (1:100 dilution in mice, 1:20 in hamsters; R&D Systems) antibodies at 4°C for 12 h. After being washed with PBS, the samples were incubated with Alexa-488/594-conjugated secondary antibodies, together with DAPI, at room temperature for 2 h. After counter-staining with DAPI, the samples were analyzed under Olympus fluorescence microscope (BX51N-34-FL2) and stereomicroscope (SZX16 plus U-LH100HG) systems and an Olympus FluoView confocal laser microscope (FV10i; Olympus, Japan) in combination with Volocity software (Mitani Sangyo, Japan).
Whole-mount samples stained with anti-GDNF (green) and anti-GFRα1 (red) were photographed (×400) separately in the GDNF-high and GDNF-low/negative regions of the seminiferous tubule, and then the relative cell numbers of the A-single to A-aligned subpopulations and of the A-single subpopulation per mm² of seminiferous tubule were estimated in each region. Moreover, the lengths of cell processes at both the long (maximum) and short (minimum) axes were separately analyzed in each selected GFRα1-positive cell (i.e., only A-single cells located near the centerline, away from the shoulder of the whole-mount tubule samples) using a CV-9 Um pen-type map meter (Koizumi Sokki Mfg, Japan). In some whole-mount stained samples, Z-stack images on the X-Y plane were collected via confocal microscopy, and then three-dimensional reconstructions and their X-Z plane images were analyzed. For transmission electron microscopy, the PFA-fixed seminiferous tubules were stained with anti-GDNF antibody in combination with HRP-conjugated anti-rabbit antibody as described above. After development with DAB-H₂O₂, the samples were re-fixed in 2.5% glutaraldehyde-0.1 M phosphate buffer (PB) at 4°C for 4 h. The samples were post-fixed in 1% OsO₄ at 4°C for 2 h, dehydrated in ethanol, and then embedded in EPON 812. Ultrathin sections were cut and then observed under a JEM 1010 transmission electron microscope (JEOL, Japan) at 80 kV. As negative controls, anti-GDNF antibody was pre-incubated with GDNF peptide (sc-328P; Santa Cruz Biotechnology) prior to use in section and whole-mount immunostaining.

In situ Hybridization

Whole-mount and section in situ hybridization were performed as previously described [36][37][38]. The PFA-fixed seminiferous tubules were directly used for whole-mount in situ hybridization, while the deparaffinized sections of the testes fixed in Bouin's solution were used for section in situ hybridization. Hybridization of hamster Gdnf was carried out at 68°C for 16 h. Hamster Gdnf cDNA fragments were isolated by RT-PCR using mouse and hamster testis cDNA samples, and then subcloned into pGEM-T (Promega) to prepare RNA probes and determine the amino acid sequence of hamster GDNF. The section and whole-mount in situ hybridization staining was conducted four and three times, respectively, to confirm its reproducibility.

Statistical Analysis

Student's t-test was performed on the quantitative data for cell number and length of cell processes of GFRα1-positive cells in whole-mount immunostained samples (Table 1). For the relative numbers of the GDNF-positive seminiferous tubules in the immunostained section samples, Dunnett's test was performed to determine their statistical significance (see Table S1).

Results

A Spatiotemporal Pattern of Immunoreactive GDNF Distribution in the Seminiferous Tubules of Normal Wild-type Testes and SSC-transplanted W/Wv Testes in Adult Mice

First, GDNF expression in mouse and hamster testes was examined by conventional section immunohistochemistry (Figs. 1, 2). Of the various commercially available antibodies, we used a rabbit anti-GDNF antibody against the C-terminal sequences within amino acids 155-205 of human GDNF (protein accession P39905). We confirmed the presence of highly conserved amino acid sequences corresponding to the C-terminal region of hamster GDNF with those in human, mouse and rat GDNF, and the cross-reactivity of this antibody to recombinant GDNF among these species (see Fig. S1).
Moreover, anti-GDNF-positive signals were mostly cytoplasmic and observed specifically in Sertoli cells, a major population of GDNF-secreting cells [3,11,13], which are located along the basal compartment of seminiferous tubules in the newborn testes (Fig. 1A). In contrast, signals were greatly reduced in the basal compartment of the seminiferous epithelia of the Gdnf-null newborn testes [31] (Fig. 1B). These findings suggest that immunoreactive GDNF signals are detectable by this antibody in mouse testes, although non-specific background staining was found on the luminal side. In contrast to the high levels of GDNF expression observed in newborn testes, only weak signals were seen in Sertoli cells of the adult mouse testes (8-week-old) (Fig. 1C, 1D).

Table 1. Comparison of quantitative data showing cell morphological parameters (slender shape) and relative cell density of GFRα1-positive spermatogonia between immunoreactive GDNF-high and -low/negative areas of the seminiferous tubule in mice and hamsters 1). 1) Whole seminiferous tubules were double-stained with anti-GDNF (green) and anti-GFRα1 (red), and then photographed (×400). Both the length of the cell process (at the long [maximum radius] and short [minimum radius] axes) and the relative cell shape (long/short value) of GFRα1-positive cells (only the A-single type) in GDNF-high and -low/negative areas of the seminiferous tubule were separately calculated. Moreover, the relative number of GFRα1-positive cells (A-single to A-aligned types) was separately estimated in GDNF-high and -low/negative areas of the seminiferous tubules. Data are expressed as mean value ± SEM (Student's t-test, two-tailed). In both hamsters and mice, the cell shape of GFRα1-positive (A-single) cells is significantly (*p<0.05) more slender in the GDNF-high area than in the GDNF-low/negative area. In addition, cell processes at both the long axis and the long/short ratio (slender shape) were longer in hamsters than in mice (p<0.05). c,d The cell number of both A-single to A-aligned spermatogonia and A-single spermatogonia is significantly (*p<0.05 or **p<0.01) higher in mice than in hamsters. doi:10.1371/journal.pone.0028367.t001

Anti-GDNF and PAS (periodic acid Schiff) double-staining revealed that GDNF signals were weakly, but ubiquitously, detected in the Sertoli cells at all stages of the seminiferous epithelial cycle (Fig. 1E). This is clearly in contrast to the highly cyclical patterns of GDNF expression in hamster testes, as described below (see Fig. 2). In order to examine the influence of advanced spermatogenic cells on GDNF expression [29,39], we transplanted spermatogonial stem cells (SSCs) into W/Wv testes (a germ cell-less mutant due to germ cell-autonomous defects) [40] and, at 3 months post-transplant, examined immunoreactive GDNF expression in the SSC-transplanted testes (Fig. 1F, 1G). In the SSC transplantation experiment, the seminiferous tubules containing donor advanced germ cells and those lacking germ cells are located close to each other within the same testis (Fig. 1F). This allowed us to directly evaluate and compare the signal intensities of GDNF expression between the seminiferous epithelia supporting advanced germ cells and those lacking them within one focus area, excluding other influences of physiological and experimental conditions (e.g., interstitial environment, fixation, and all immunostaining procedures). Anti-GDNF staining showed high GDNF immunoreactivity in Sertoli cells of seminiferous tubules which lacked germ cells (Fig. 1F, 1G).
In contrast, in seminiferous tubules which contained spermatogenic cells, the level of immunoreactive GDNF expression was low (asterisks in Fig. 1F, 1G) and similar to that in normal adult testes (Fig. 1C, 1D). These findings are in agreement with the previous data indicating higher levels of Gdnf mRNA expression in W/Wv testes [29].

A Spatiotemporal Pattern of GDNF Expression in the Seminiferous Tubules of Normal "Active" Testes and Short Photoperiod/Low Ambient Temperature-induced "Inactive" Testes in Adult Hamsters

Next, immunoreactive GDNF expression in adult hamster testes (8-week-old) in the "active" state was examined. Interestingly, in contrast to the less-cyclical pattern of GDNF expression in mouse testes, dynamic cyclical changes in GDNF expression were observed in normal "active" testes of hamsters (Fig. 2A, 2B; also see "Cont" in Table S1). Briefly, in 50% of seminiferous tubules, nearly all Sertoli cells showed no signals. However, in 20.5% of tubules, some Sertoli cells were found to have strong positive signals in their cytoplasm while, in 29.5% of the tubules, almost all Sertoli cells were positive for anti-GDNF staining (Fig. 2A, 2B). Anti-GDNF and PAS double-staining revealed that no signals were detected in the seminiferous tubules at stages VIII-I (Fig. 2C, 2D). At stages II-IV, some Sertoli cells located within the same cross section were positive for anti-GDNF staining, and almost all Sertoli cells were positive at stages V-VII (Fig. 2C, 2D). Interestingly, at stage VIII, when spermiation (i.e., the release of matured spermatozoa from the apical area of Sertoli cells into the lumen) occurs, a rapid reduction in GDNF-positive signals was observed in all Sertoli cells, leading to the loss of GDNF expression during stages VIII-I. This immunostaining pattern is clearly in agreement with the present in situ hybridization data showing dynamic cyclical patterns of Gdnf expression at high levels before the spermiation stages (see Fig. S2). It is well known that Gdnf expression in Sertoli cells is positively up-regulated by FSH [29,30]. In seasonally breeding hamsters, it has been shown that a short photoperiod and a low ambient temperature can induce the most "inactive" state of spermatogenic activity with low levels of serum FSH/LH [41,42]. In order to confirm reduced GDNF expression in Sertoli cells in an "inactive" state, we examined immunoreactive GDNF expression in the hamster testes in the photoregressed and hibernating states and during subsequent spontaneous recrudescence under prolonged exposure to inhibitory photoperiods (Fig. 3, see Fig. S3). In adult hamsters (8-week-old) that were exposed to a short photoperiod (6 h light, 18 h dark) and an ambient temperature of 23°C, the testes photoregress to the most "inactive" state at Week 13 of treatment, showing atrophied seminiferous tubules with closed lumens [32]. As anticipated, anti-GDNF staining showed a marked reduction in GDNF-positive signals in the inactive testes at Week 13 ("D0" in Fig. 3A, 3B, also see Table S1), suggesting the down-regulation of GDNF expression in the gonadally inactive stages.

Figure 3. Changes in immunoreactive GDNF expression in hamster testes in photoregression, hibernating and subsequent recrudescent states. Adult hamsters (8-week-old) were exposed to a short photoperiod (6 h light, 18 h dark) and an ambient temperature of 23°C. The testes photoregressed to the most "inactive" state at 13 weeks of treatment (D0). Then, in half of the hamsters, the ambient temperature was reduced from 23°C to 5°C to induce a hibernated state (C6). It was shown that, after 13 weeks of exposure to a short photoperiod (D0), spermatogenic activity spontaneously recrudesced after 10 to 20 weeks in both the 5°C and 23°C groups ("D10" and "C20"), albeit with a 7- to 10-week delay in GDNF expression in the 5°C group. (A) A line graph, including small circle graphs at each stage, shows changes in testicular weight (Y axis in the line graph; error bars indicate ±SD) and the appearance ratio of GDNF-positive seminiferous tubules (small circle graph at each stage; also see Table S1) in adult testes exposed to a short photoperiod (solid lines in the line graph; D0, D6, and D10) in combination with a low ambient temperature (broken lines in the line graph; C6, C13, and C20). The X axis represents weeks before and after the most "inactive" state (D0) was reached in Week 13 of treatment. (B) Anti-GDNF immunostaining patterns show no appreciable positive signals in almost any of the seminiferous tubules in "inactive" testes at the D0 and C6 stages. A rapid recovery of immunoreactive GDNF-positive signals is ubiquitously observed, even at the C13 stage (arrows), when the testicular weight is at a similarly low level to that in the inactive state, D0 ("D0", "C13" in A, B).

When the hamsters were transferred from an ambient temperature of 23°C to 5°C at Week 13 and maintained in a short photoperiod (5°C group), most animals entered a hibernated state within 4 to 8 weeks of transfer. At this stage, no appreciable signals were detected in almost any of the seminiferous tubules in these testes ("C6" in Fig. 3A, 3B). This finding is in contrast to the constantly moderate levels of nuclear-positive signals for anti-GATA4 staining in Sertoli cells throughout all (i.e., inactive and active) stages examined in this study (inset of Fig. 3B). After exposure to a short photoperiod for 13 weeks, spermatogenic activity began to spontaneously recrudesce, with complete recovery observed within 6 to 10 weeks in the 23°C group ("D6" and "D10" in Fig. 3A, 3B, respectively) or within 13 to 20 weeks in the 5°C group ("C13" and "C20" in Fig. 3A, 3B, respectively). GDNF expression in the 5°C group, however, was delayed by 7 to 10 weeks. As anticipated, anti-GDNF staining showed a rapid recovery of GDNF-positive signals even at D6 and C13, before the recovery of testicular weight to the active level ("D6", "C13" in Fig. 3A, 3B). This finding is consistent with previous reports showing a rapid recovery in plasma FSH/LH levels within 1-3 weeks prior to the recrudescence of spermatogenic activity in adult photoinhibited hamsters [42][43][44].

Whole-mount Immunostaining Visualized Cyclical and Patch-like GDNF-positive Deposits along the Basal Surface of Sertoli Cells in Hamster and Mouse Testes

As described above, anti-GDNF section immunohistochemistry showed species-specific as well as seminiferous cycle- and spermatogenic activity-dependent changes in immunoreactive GDNF expression in Sertoli cells. It is unclear, however, which GDNF-positive signal sites correspond to the extracellular GDNF molecules that can be accessed by GFRα1-positive spermatogonia in vivo. Therefore, in order to selectively visualize GDNF molecules in the extracellular region of the basal compartment of seminiferous epithelia, we applied anti-GDNF staining to PFA-fixed whole seminiferous tubules without any permeabilization steps in the adult testes of hamsters (Fig. 4) and mice (Fig. 5).
Whole-mount anti-GDNF immunostaining showed seminiferous cycle-dependent patterns of immunoreactive granular GDNF signals along the basal wall of the seminiferous tubules in hamsters, albeit with only slightly cyclical patterns in mice (green fluorescence in Figs. 4A-D, 5A-C; also see negative controls in Fig. S4). Fluorescence microscopy of DAPI images of round and elongate spermatids showed that the border between immunoreactive GDNF-high and GDNF-negative areas roughly corresponded to stages VII/VIII, just at spermiation, in hamsters (Fig. 2; data not shown). In mice, the GDNF-high and -low/negative regions were also distinguishable along the basal wall of the seminiferous tubules (Fig. 5A-C), although such cyclical patterns were not evident in the immunostained sections (Fig. 1C-E). Interestingly, whole-mount immunostaining visualized a patch-like distribution pattern of granular GDNF deposits along the basal wall of the seminiferous tubules in hamsters (green in Fig. 4B, 4D-G). Even in the GDNF-positive area during the peak of its expression, GDNF deposits appeared to be restricted to a regionally defined, patch-like distribution pattern along the basal surface areas of Sertoli cells (Fig. 4F). This is in sharp contrast to almost all Sertoli cells becoming positive for anti-GDNF staining in transverse sections in hamsters ("V-VII" in Fig. 2). In mice, GDNF-positive deposits were more granular in shape and wider in distribution than those in hamsters (green in Fig. 5A-E). In some regions, these granular deposits appeared to be, at times, confined to a patch-like restricted area similar to the GDNF-positive patches seen in hamsters (green signals in Fig. 5E). Immunoelectron microscopy of whole-mount samples stained with the same non-permeabilization procedures revealed that the majority of GDNF-positive signals were located within the extracellular space adjacent to the spermatogonia and basal lamina along the basal surface of some Sertoli cells (arrows and arrowheads in Fig. 6). Moreover, certain weak signals were detectable in the outer tubular region between the basal lamina and peritubular myoid cells and in the transported vesicles in the cytoplasm of the peritubular myoid cells (double-arrowhead in Fig. 6F), suggesting a possible removal process of the GDNF molecules from the basal compartment of the seminiferous epithelium through the basal lamina and peritubular myoid cells. Taken together, these data suggest that patch-like GDNF deposits are formed in a seminiferous cycle-dependent manner in the extracellular space along the basal surface of Sertoli cells. The present non-permeabilized condition of whole-mount staining allowed us to mainly detect GDNF-positive signals within the extracellular spaces of the basal compartment in seminiferous tubules. However, we should consider the presence of cytoplasmic GDNF-positive signals near the plasma membrane of the Sertoli cells as a possible source of some GDNF-positive signals in this whole-mount staining.

Close Co-localization of Immunoreactive GDNF Deposits with a Subpopulation of GFRα1-positive Spermatogonia in the Basal Compartment of Seminiferous Tubules in both Hamster and Mouse Testes

Finally, we visualized a possible interaction between immunoreactive GDNF molecules and their GPI-linked cell surface receptor, GFRα1, on undifferentiated spermatogonia including SSCs, by whole-mount double-staining of seminiferous tubules (without permeabilization).
GFRα1/GDNF double-staining allowed us to quantitatively compare the number and shape of GFRα1-positive cells (A-single to A-aligned) in GDNF-high and -low/negative areas of seminiferous tubules (Table 1). In both hamster and mouse seminiferous tubules, the relative number of GFRα1-positive spermatogonia tended to be increased in the GDNF-high area as compared with that in the GDNF-low/negative area (Table 1). Moreover, we occasionally noticed a tilted distribution of GFRα1-positive cells toward the GDNF-high area in the border region between GDNF-high and -low/negative areas (see Fig. 5A; GDNF-high left-half area versus GDNF-low/negative right-half area). As for cell morphology, GFRα1-positive cells (A-single) in the GDNF-high area were significantly (p<0.05) more slender in shape, as compared with cells in the GDNF-low/negative area, in both mice and hamsters (Table 1). As for species differences, GFRα1-positive cells were significantly more slender in shape and lower in number in hamsters, as compared with GFRα1-positive cells in mice (Table 1; Fig. 5E). Confocal microscopy clearly revealed that the apical surface of some GFRα1-positive cells was colocalized with GDNF-positive signals, suggesting a possible interaction site between GDNF and GFRα1-positive spermatogonia. On the other hand, we also noticed that a considerable number of GFRα1-positive cells were not directly associated with any GDNF deposit, especially in hamsters (see Figs. 4C, 5B). In both mice and hamsters, the majority of GDNF-positive signals appear to correspond to the cell surface area of the c-kit-positive differentiated spermatogonia (Fig. 8) rather than the cell processes or cell bodies of GFRα1-positive cells in the basal compartment (Fig. 4E-G; Fig. 5D, E).

Discussion

This study was the first to visualize the changes in immunoreactive GDNF expression in adult testes in a species-specific as well as spermatogenic activity- and seminiferous cycle-dependent manner. As anticipated, GDNF expression was specifically observed in Sertoli cells of the seminiferous epithelia, and its expression levels appear to be tightly regulated by the spermatogenic activity of the testes in both mice and hamsters. In mice, higher levels of GDNF expression were observed in seminiferous epithelia lacking germ cells than in seminiferous tubules colonized by donor germ cells (Fig. 1F, 1G), which may reflect a positive response to compensate for the reduced germ cell number in the basal compartment. In contrast, lower levels of GDNF expression were noted in almost all Sertoli cells in photoregressed and hibernating hamster testes (Fig. 3). Moreover, during the subsequent testicular recrudescence, GDNF expression was shown to be clearly up-regulated at the initial phases, which coincide with the resumption of spermatogenesis (Fig. 3). Since a rapid recovery in serum FSH/LH levels occurs before that of testicular function in the adult photoinhibited hamster [42,44], this is clearly consistent with the previous data indicating that GDNF expression is tightly regulated immediately downstream of the gonadotropin-gonadal axis [29,30]. Moreover, in "active" testes of hamsters, the stages showing high levels of GDNF-positive signals in the Sertoli cells (stages II-VII; Fig. 2C, 2D) roughly coincide with those of the highest levels of FSH-induced cAMP production in the seminiferous epithelia (stages II-VI) [45,46]. Anti-GDNF immunostaining showed high levels of immunoreactive GDNF expression just before the timing of spermiation, in a seminiferous cycle-dependent pattern, in hamsters.
It is well known that, at the same seminiferous cycle stage as spermiation (stages VII/VIII in hamsters or stages VIII/IX in mice), preleptotene spermatocytes move across the blood-testis barrier from the basal to the adluminal compartment of the seminiferous epithelium [47,48]. It is likely that the transition of preleptotene spermatocytes from the basal to the adluminal compartment also leads to a transient increase in "spare room" for spermatogonia within the basal compartment of the seminiferous epithelium. Since higher levels of GDNF expression were observed in seminiferous epithelia lacking germ cells than in seminiferous epithelia colonized by germ cells in SSC-transplanted W/Wv testes, these data suggest that some other signals which are derived from the presence or absence of "spare room" and/or advanced spermatogenic cells within the basal compartment may partially contribute to the cyclical changes in GDNF expression in mammalian spermatogenesis. The present anti-GDNF staining of whole seminiferous tubules successfully visualized the cyclical and patch-like distribution patterns of GDNF-positive granular deposits along the basal surface of Sertoli cells in both species. Double-staining of GDNF and its receptor, GFRα1, showed close co-localization of GDNF deposits and a subpopulation of GFRα1-positive spermatogonia in the basal region. Moreover, the present quantitative analysis revealed that GFRα1-positive cells showed a slender bipolar shape as well as a tendency for increased cell numbers in the GDNF-enriched area, as compared with those in the GDNF-low/negative area of the seminiferous tubules. For morphometric determination, further studies are required to generate hard data which can be statistically verified by using more accurate quantification of the GDNF signal levels around each GFRα1-positive cell. However, the stem cell niche is known to form a regionally defined structure in several lower vertebrates and invertebrate species [49][50][51]. The cyclical and patch-like distribution of GDNF deposits along the basal surface of Sertoli cells possibly leads to the asymmetric interaction of GDNF signals with some A-paired and A-aligned GFRα1-positive cells (arrow in Fig. 4E; double-arrowhead in Fig. 5E), which may be consistent with a recent suggestion of asymmetric selection of SSCs from A-aligned spermatogonia after fragmentation in vivo [7]. Moreover, the present regionalized GDNF regulation in a small subpopulation of GFRα1-positive cells would explain the findings of a recent clone-fate study which showed that SSCs have an unexpectedly short life-span (average: ≤2 weeks) in the seminiferous epithelia [52]. This is because, in both hamsters and mice, many GFRα1-positive cells do not appear to physically associate and co-localize with GDNF deposits (see Figs. 4C, 5B), possibly leading to their eventual removal from a potential SSC pool. Taken together, it is reasonable to speculate that such regionalized GDNF regulation may define the size of the pool of GFRα1-positive spermatogonia, especially in hamsters, possibly leading to the finely-tuned control of the balance between self-renewal/survival and differentiation of the SSCs in the basal compartment of seminiferous tubules. Moreover, the present study demonstrated that the GDNF-positive signals accumulate largely on the c-kit-positive spermatogonia along the basal surface of the seminiferous tubules (Fig. 8).
This in turn suggests that the dynamics of the c-kit-positive population of A-aligned spermatogonia clearly affects the size of the pool of GFRα1-positive spermatogonia (mostly A-single and A-paired) in a positive feedback fashion. The components of the GDNF-positive granular deposits, their association with the blood vessels, interstitial cells, and peritubular myoid cells, and the molecular mechanisms underlying their distribution and turnover within the basal compartment of seminiferous tubules could be a focus for future studies. Kanatsu-Shinohara et al. (2008) [22] reported that the general characteristics of hamster germline stem (GS) cells are similar to those of mouse and rat GS cells, indicating a conserved GDNF action in the self-renewal and maintenance of the SSC pool between seasonal and non-seasonal breeding rodents [53]. Interestingly, we noticed the following species-specific differences in the expression profiles of GDNF and GFRα1 between mouse and hamster testes: 1) hamster GFRα1-positive spermatogonia are more slender in shape and lower in cell density than those in mice; 2) GDNF expression in hamsters is more cyclical, is restricted to a narrower area along the longitudinal seminiferous tubule (i.e., only at stages II-VII), and consists of patch-like deposits. In contrast, GDNF expression in mice is ubiquitous/less cyclical, with granular GDNF deposits in a wider area along the longitudinal seminiferous tubule. These findings imply that the less cyclical and ubiquitous GDNF distribution in mice is closely associated with the maintenance of a large number of GFRα1-positive cells. On the other hand, the more restricted GDNF distribution would explain the relatively small number of GFRα1-positive spermatogonia in hamsters, as compared with that in mice. Interestingly, hamster GFRα1-positive spermatogonia are significantly more slender in shape than those in mice, which might possibly reflect a high migratory activity in hamsters. This small number of GFRα1-positive cells with a high migratory activity may be advantageous for an SSC pool which rapidly changes in size during the transition between inactive and active states in seasonal breeding animals. This is because the up- and down-regulation of GDNF expression is directly transmitted to the rapid expansion of, and/or reduction in, the SSC pool throughout the longitudinal seminiferous tubule. This observation is consistent with the present data which demonstrated the ubiquitous and widespread nature of GDNF expression in most seminiferous tubules in the initial phases of spontaneous testicular recrudescence in hamsters ("D6", "C13" in Fig. 3). Both GDNF and GFRα1 may be highly conserved molecules between mice and hamsters [22], reflecting the successful maintenance and colonization of hamster SSCs in mouse testicular soma [53] and the high cross-species reactivity of anti-GDNF and anti-GFRα1 antibodies (this study). Taken together, these findings indicate that hamster testes in the photoregressed, hibernating and subsequent recrudescent states are very useful in a comparative animal approach to understand the seasonal regulation and evolution of SSCs and their niche in mammalian spermatogenesis. In conclusion, the present study was the first to demonstrate the dynamic changes in immunoreactive GDNF expression and its close association with a small subpopulation of GFRα1-positive spermatogonia in the basal compartment of seminiferous epithelia.
The unexpectedly cyclical and patch-like distribution of GDNF deposits suggests a novel hypothesis for the in vivo maintenance of SSCs based on a highly regionalized association between GFRα1-positive cells and extracellular GDNF signals in the basal compartment of the seminiferous epithelia of mammalian testes.

Supporting Information

Figure S2 (A-C) Whole-mount in situ hybridization analysis reveals seminiferous cycle-dependent expression of Gdnf mRNA in hamster seminiferous tubules (purple staining). In A and B, arrowheads indicate the border between high- and low-Gdnf-positive areas. In C, SBA lectin staining (red fluorescence for acrosome staining; DAPI, blue in the lower plate) using transverse sections of whole-mount stained seminiferous tubules (Gdnf signal, purple; upper plate) reveals the reduction in Gdnf expression between stages VII and VIII (inset indicates positive signals in Sertoli cells at stage VII). The changes are consistent with the immunohistochemical data (Fig. 2). (D-E) Section in situ hybridization analysis demonstrates high levels of Gdnf expression before spermiation in hamster testes (purple staining). Asterisks, non-specific signals in the acrosomes of round spermatids. Scale bars represent 100 µm in C and D, and 10 µm in E. (TIF)

Figure S3 Histological analysis of seminiferous tubules in short photoperiod/low ambient temperature-induced "inactive" testes in adult hamsters. Adult hamsters (8-week-old) were exposed to a short photoperiod (6 h light, 18 h dark) and an ambient temperature of 23°C. After the testes reached the most "inactive" state in Week 13 of treatment (D0), half of the hamsters were maintained in an environment with an ambient temperature of 5°C (5°C group) for 6 (C6), 13 (C13), or 20 weeks (C20), respectively. The remaining hamsters were maintained in an environment with a stable ambient temperature of 23°C (23°C group) for 6 (D6) or 10 weeks (D10), respectively. After exposure to a short photoperiod for 13 weeks (D0), spermatogenic activity began to recover autonomously, with complete recovery observed within 10 to 20 weeks in both the 5°C (C20) and 23°C (D10) groups. Scale bars represent 100 µm. (TIF)

Figure S4 Negative controls for whole-mount anti-GDNF immunostaining (without permeabilization) of seminiferous tubules in hamsters and mice. Anti-GDNF antibody was pre-incubated with GDNF peptides (sc-328P; Santa Cruz Biotechnology) prior to use for whole-mount immunostaining. The pre-treatment with GDNF peptides (+pep) greatly reduced GDNF-positive signals in both hamster (A, B) and mouse (C, D) samples. Each plate includes an inset panel showing a higher-magnification image of the upper panel. Scale bar represents 100 µm. (TIF)

Figure S5 Comparative GFRα1/GDNF double-staining images of the seminiferous tubules in hamsters and mice. Whole-mount immunostaining (without permeabilization) of seminiferous tubules showing GDNF-positive deposits (green) and GFRα1-positive spermatogonia (red) in the basal compartment of the seminiferous epithelia in hamsters (left) and mice (right). In each plate, the seminiferous tubule is shown at the same magnification. In the left plate, the lower edge of the tubule wall is missing due to the larger diameter of the seminiferous tubule in the hamster than in the mouse. Hamster GFRα1-positive cells are more slender in shape and lower in number than mouse GFRα1-positive cells. In both plates, dotted lines roughly indicate the border between GDNF-high and -low/negative areas of the seminiferous tubules. Scale bar represents 10 µm.
(TIF)

Movie S1 Rotating 3D reconstruction showing the close co-localization of a GFRα1-positive spermatogonial cell with immunoreactive GDNF-positive deposits in the basal compartment of a seminiferous tubule in hamster testes. PFA-fixed seminiferous tubule fragments were double-stained with anti-GDNF (green) and anti-GFRα1 (red) antibodies (DAPI, blue) without any permeabilization steps, and then analyzed to reconstruct a three-dimensional image using an Olympus FluoView confocal laser microscope (FV10i; Olympus, Japan) in combination with Volocity software (Mitani Sangyo, Japan) (see also Fig. 7). (MP4)

Movie S2 Rotating 3D reconstruction showing the close co-localization of a GFRα1-positive spermatogonial cell with immunoreactive GDNF-positive deposits in the basal compartment of a seminiferous tubule in mouse testes. PFA-fixed seminiferous tubule fragments were double-stained with anti-GDNF (green) and anti-GFRα1 (red) antibodies (DAPI, blue) without any permeabilization steps, and then analyzed to reconstruct a three-dimensional image using an Olympus FluoView confocal laser microscope (FV10i; Olympus, Japan) in combination with Volocity software (Mitani Sangyo, Japan) (see also Fig. 7). (MP4)
2016-05-03T22:56:22.947Z
2011-12-09T00:00:00.000
{ "year": 2011, "sha1": "42688ed60256783ce87a79bbfda7e309bd165186", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0028367&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "42688ed60256783ce87a79bbfda7e309bd165186", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
226202668
pes2o/s2orc
v3-fos-license
Use of Big Data Tools and Industrial Internet of Things: An Overview

Big data plays an ever more important role in industry as well as in many other organizations. With the passage of time, the volume of data is increasing. This increase will create a huge bulk of data which needs proper tools and techniques to handle its management and organization. Different techniques and tools are being used to properly handle the management of data. A detailed report of these techniques and tools is needed, which will help researchers to easily identify a tool for their data and to easily manage, organize, and extract meaningful information from it. The proposed study is an endeavour toward summarizing and identifying the tools and techniques for big data used in the Industrial Internet of Things. This report will certainly help researchers and practitioners to use the tools and techniques for their needs in an effective way.

Introduction

With the passage of time, the volume of data is increasing. In today's digital world, information surges with the extensive use of the Internet and global communication systems. This increase will create a huge bulk of data which needs proper tools and techniques to handle its management and organization. Big data plays an ever more important role in industry as well as in many other organizations. A huge bulk of data is produced from healthcare information systems, electronic records, wearables, smart devices, handheld devices, and so on. The recent increase in medical big data and the development of computational techniques in the field of information technology enable researchers and practitioners to extract and visualize big data in a new spectrum of use. Industry is moving toward the expansion and development of the IIoT through the incorporation of emerging technologies and applications of IoT. The aim of the IIoT is to achieve high efficiency of operations for the management of industrial assets and to increase the productivity of industries. More attention is being given to the applications of IoT and their integration into industry. The applications of IoT are evident in every field of life, from industry to education, healthcare, and beyond. A number of studies are available related to the applications, uses, and different approaches to handle big data [1][2][3][4][5][6][7][8]. Different techniques and tools are being used for extracting important information from big data. The data are mostly unstructured and need proper structure, shape, and management so that they can easily be accessed and processed. The role of visualization is to capture the important information from the data and to present it in a form that practitioners can easily interpret. Some of the programming tools which deal with big data are Informatica PowerCenter, Apache Hadoop, and Tableau, which analyze data extremely efficiently and enable the visualization of meaningful insights extracted from big data. To facilitate the management of data for easy access and operation, there should be a detailed report on the existing tools and techniques which can easily access, manage, operate, and extract useful information from the data for different purposes. Therefore, to facilitate this process, a detailed report of the existing literature is presented in this study. This detailed report will help researchers and scholars to devise new algorithms, techniques, and tools for the analysis and management of big data. The organization of the paper is as follows.
Section 2 presents the related work on big data tools and the support of the industry. Section 3 presents the existing approaches to support big data in IIoT. Section 4 shows the support of IIoT regarding big data tools and techniques. The paper is concluded in Section 5.

Big Data Tools and Support of the Industry

With the advancements in Industrial Internet of Things (IIoT) sensing, communication, technology characterizations, and high-throughput instrumentation, the level of data generation is expected to grow exponentially [9]. Lin et al. [10] presented an approach to integrating sensing data from diverse sources and equipment to apply to the IIoT. An industrial Micro Control Unit is connected to interface with actuators, data sources, and equipment. The experimental results show that the IIoT can reduce the problems of heterogeneous protocols and of database manufacturing data transmission. One article demonstrates the complexity and unique nature of multimedia big data (MMBD) computing for Internet of Things (IoT) applications and builds an inclusive taxonomy for MMBD, abstracted into a new process model reflecting MMBD over IoT. Many research challenges linked with MMBD, for example, quality of service requirements, heterogeneity, reliability, accessibility, and scalability, are addressed by the process model. The process model is discussed through a case study [11]. In another work, an architecture for flood forecasting and monitoring is proposed by means of convergence between HPC and big data. This architecture can collect, store, and analyze big data as well as help generate flood prediction results [12]. Mobile computing services can be used in IoT through the services of mobile phones, apps, or an M-Health care system [13]. Alexopoulos et al. [14] presented the IIoT architecture and its development details to support the industrial product service system life cycle. In another article, a novel model is developed from the perspective of manufacturing progression that reviews the key big data analytics (BDA) capabilities. The findings help companies understand the potential implications of big data, their analytics capabilities for their manufacturing processes, and the design of an efficient BDA-enabling infrastructure [15]. Boyes et al. [16] presented the concept of IIoT and its association with ideas such as cyber-physical systems and Industry 4.0. IoT-related taxonomies were analyzed, and an analysis framework was developed for IIoT that can be used to list and characterize IIoT devices when analyzing security vulnerabilities and threats. For big data sentiment analysis (BDSA) and for best or optimal decision selection, a framework was proposed and also applied as a mathematical algorithm [17]. In one study, a new architecture is proposed for big data and the Cognitive Internet of Things (CIoT). The proposed architecture supports computing systems by combining a data lake (DL) and a data warehouse (DWH), and a tool is defined for the collection of heterogeneous data [18]. Urquhart and Mcauley [19] presented an approach to the risks of IIoT drawing on both regulatory and technical perspectives. In another study, the functional and structural properties of cloud manufacturing (CMfg) were analyzed, and a business intelligence architecture was proposed that aims to enable the distribution of relevant KPIs derived from process data of interest, with a supporting layer of reliability [20].
An overview of big data in smart manufacturing was conducted, and an applied framework was proposed from the viewpoint of the product life cycle. This framework permits examining key advantages and potential applications, and the discussion of future research directions and current challenges gives essential insights for industry and academia [21]. Another paper examines the current big data analytics (BDA) technologies, strategies, and algorithms that can drive the development of intelligent Industrial Internet of Things (IIoT) frameworks. A taxonomy is devised by characterizing and classifying the literature based on essential factors (for example, analytics types, industrial analytics applications, requirements, analytics techniques, analytics tools, and data sources), and case studies and frameworks of different enterprises that have benefited from BDA are presented [22]. A further paper investigates how firms can capture value from big data to improve their green commitment by providing an applied model built on an exhaustive and all-encompassing literature review that relates big data sources to the adoption of various green strategies. The principal finding is that organizations that want to execute a clean technology strategy frequently turn to external partners to build the architecture needed to exploit big data potentialities [23]. Apart from these approaches, big data and IoT have several other applications for diverse real-world problems [24][25][26][27][28].

Existing Approaches to Support Big Data in IIoT

Humayun et al. [29] presented a comprehensive report of the evolution, prevention, and mitigation of ransomware in the context of IoT. For smart factories, a construction path and reference architecture were proposed by examining IIoT technology and its application in manufacturing workshops. Combined with an examination of the status quo and the requirements of discrete manufacturing enterprise workshops, the paper structures the overall conceptual model architecture of the framework [30]. In another work, a blockchain-based data sharing scheme was proposed that fully considers both the efficiency and the security of data sharing. In this scheme, a secure data sharing structure based on Hyperledger Fabric and identity authentication was designed for data sharing security. Additionally, a community detection algorithm was proposed to partition the customers into various data sharing communities according to the similarity of label data. The experimental outcomes demonstrate that the proposed collaboration scheme achieves efficient and secure data sharing among various customers [31]. A further paper discusses IoT data management concepts, surveys current solutions, highlights the most promising ones, and identifies important open research issues on the topic, giving guidelines to assist further contributions [32]. In another article, an architecture was proposed for a scalable pipeline to distribute and process data from a blend of shop-floor sources. The architecture was implemented in order to explore the feasibility of this methodology and to bring together ad hoc power data and MTConnect-compliant machine data to support analytics applications [33]. A further work presents a process data analytics platform built around the idea of Industry 4.0. The platform uses big data software tools, ML algorithms, and state-of-the-art IIoT platforms.
The results indicated that, in situations where available process knowledge is limited, data-driven soft sensors are helpful instruments for predictive data analysis [34]. For industrial data processing, an Industrial Internet of Things cloud-fog hybrid network (ITCFN) framework was proposed. The results have shown that the proposed framework effectively reduces the processing delay of industrial data [35]. In another study, a systematic strategy was used to review the strengths and weaknesses of open-source technologies for stream processing and big data in order to establish their usage for Industry 4.0 use cases [36]. A framework was developed for additive manufacturing enterprises by combining sustainable smart manufacturing technologies, additive manufacturing, and big data analytics. The proposed framework helps additive manufacturing industry leaders make the right decisions at the beginning of the product life cycle [37]. The big data characteristics of an in-house-developed IoT-enabled manufacturing testbed were studied [38]. A distributed service-oriented architecture was provided for solving the problem of product tracing [39]. The distributions of droplet size with high-velocity airblast atomization were examined [40]. In another article, an interactive data exploration framework was proposed, which takes a service-oriented perspective on the smart factory [23]. A further article investigates the potential of artificial intelligence (AI) and machine learning (ML) to leverage big data and the Internet of Things (IoT) for personalised service development in smart cities; IoT smart city applications that can benefit from this work are suggested [41]. Gierej [42] presented the idea of a business model for companies implementing IIoT technologies. The approach is developed to help traditional companies in the transition to the digital market. In the area of digital forensics, the challenge of evidence acquisition is examined: a case study of a smart city project with IoT services gathering big data that are stored in a cloud computing environment is presented, and the strategies can be generalized to other big data in the cloud environment [43]. A fault prediction technique based on industrial big data is presented, which directly mines the connection between the data (for example, status and sound data) and equipment faults using machine learning techniques [44]. A distributed growing self-organizing map (DGSOM) and a novel distributed self-adaptive neural network algorithm were presented to address the unsupervised machine learning needs of big data [45]. Younan et al. [46] presented a study with a comprehensive review of the existing challenges in the literature and recommended technologies for enabling data analysis and search in future IoT search engines. Two case studies are presented to show the promising growth in smartness and intelligence of IoT applications based on the integration of information and communication technologies. Smartphone applications enable patients to learn about their diseases after analysis in the fields of gynaecology and paediatrics [47]. In another article, an architecture based on the Internet of Things is proposed for big data used in diverse smart cities.
The results demonstrated that this kind of method is potentially applicable for providing beneficial smart city services, for example, detection of travel profiles in smart transport, comfort in smart buildings, and management of energy consumption [48]. Jiang [49] presented an approach which studies IoT developments and technologies related to cloud computing and smart cities and then focuses on IoT technologies and cloud computing. Dachyar et al. [50] conducted an in-depth analysis of 26,420 papers published in the area of IoT. Another article aims to detect and adapt to concept drift based on cognitive learning principles. The approach detects concept drift and determines the concept drift type in automated time windows [51]. Table 1 shows the existing approaches, methods, and tools to support big data.

Support of IIoT regarding Big Data Tools and Techniques

Several studies exist related to the applications of big data in IIoT. One study presented an enhanced industrial big data platform for reducing the time and data storage space of data processing [54]. The aim of that paper is to assess the impact of different serialization and compression methods on a big data platform and then attempt to select the most suitable method for an industrial platform. The aim of another study is to propose a fabric, which is a blockchain-based data transmission technique for IIoT [56]; the approach uses a secret sharing mechanism based on blockchain. A further paper presented a city geospatial dashboard approach for the collection, sharing, and visualization of data collected from different sources such as satellite data, IoT devices, and other big data [58]. The contribution of another paper is to present the concept of constructing a community-based platform for cross-IIoT services by utilizing the existing mobile and fixed facilities as wireless IoT gateways in a city, which facilitates the easy implementation of IoT gateways for local services and brings economic and social value [59]. One study focussed on spatiotemporal modeling to organize the data in temporal, attributive, and spatial dimensions [60]. To manage multisource manufacturing data, an ontology-based big data integration mechanism is presented. Other authors proposed an advanced distributed tensor-train decomposition (ADTT) approach along with a computational method for IIoT big data processing [64]. The existing literature was searched in order to identify the materials related to the proposed study. For this purpose, the popular libraries ACM, IEEE, ScienceDirect, and Springer were considered. These libraries were chosen because they publish peer-reviewed, quality materials. Figure 1 shows the number of papers published in the given years in the ScienceDirect library. The last five years were considered in order to cover the latest published research. Figure 2 shows the article types along with the number of publications in the given library. Figure 3 shows publication titles and the percentage of publications. Figure 4 shows the article types and number of publications in the IEEE library. Figure 5 shows the publication topics and percentages of the number of publications. Figure 6 shows the media format and number of publications in the ACM library. Figure 7 shows the publication types and number of papers published in the given library. Figure 8 shows the number of publications in the given years.
Figure 9 shows the article types and percentages of publication in the Springer library. Table 1: Existing approaches, methods, and tools to support big data. S.No Reference Title 1 [9] Big data analytics tool based on statistical process monitoring for smart manufacturing 2 [11] Multimedia big data computation and applications of IoT 3 [12] IoT, big data, and HPC-based smart flood management framework 4 [15] Big data analytics for manufacturing processes 5 [17] An algorithmic implementation of entropic ternary reduct soft sentiment set using soft computing technique on big data sentiment analysis for optimal selection of a decision based on real-time update in online reviews 6 [18] Architecture for Cognitive IoT and big data 7 [20] Challenges and opportunities for publishing IIoT data in manufacturing 8 [21] A comprehensive review of big data analytics throughout product life cycle to support sustainable smart manufacturing 9 [22] Role of big data analytics in IIoT 10 [23] Big data and natural environment 11 [30] Intelligent manufacturing production line data monitoring system for IIoT 12 [31] A secure and efficient data sharing scheme based on blockchain in IIoT 13 [32] Data management techniques for IoT 14 [33] Scalable data pipeline architecture to support the IIoT 15 [34] Industry 4.0-based process data analytics platform 16 [35] Optimization of IIoT data processing latency 17 [36] Big data and stream processing platforms for Industry 4.0 requirements mapping for a predictive maintenance use case 18 [37] Framework of big data for sustainable and smart additive manufacturing 19 [38] Feature engineering in big data analytics for IoT-enabled smart manufacturing 20 [39] An architecture for aggregating information from distributed data nodes for IIoT 21 [40] Application of big data analysis technique on high-velocity airblast atomization 22 [23] Interactive data exploration as a service for the smart factory 23 [41] Smart city services using machine learning, IoT, and big data 24 [43] Digital forensics challenges to big data in the cloud 25 [44] On fault prediction based on industrial big data 26 [45] Apache spark-based distributed self-organizing map algorithm for sensor data analysis 27 [48] Techniques of big data to smart city deployments 28 [51] A cognitive data stream mining technique for context-aware IoT systems 29 [52] Implementation of the FSO2 30 [53] An intelligent outlier detection method with one class support tucker machine and genetic algorithm toward big sensor data in IoT 31 [54] Big data-based improved data acquisition and storage system for designing industrial data platform 32 [55] Cybersecurity in an IIoT environment 33 [56] A secure fabric blockchain-based data transmission technique for IIoT 34 [57] Concept drift detection and adaption in big imbalance IIoT data using an ensemble learning method of offline classifiers 35 [58] City geospatial dashboard 36 [59] A community-based IoT service platform to locally disseminate socially valuable data 37 [60] e spatiotemporal modeling and integration of manufacturing big data in job shop 38 [61] A big data-enabled consolidated framework for energy efficient software defined data centers in IoT setups 39 [62] A parallel military dog-based algorithm for clustering big data in cognitive IIoT 40 [63] Big data cleaning based on mobile edge computing in industrial sensor cloud 41 [64] A highly efficient distributed tensor-train decomposition method for IIoT big data 42 [65] Big data-driven edge-cloud collaboration architecture for 
Conclusion

With the passage of time, the volume of data is increasing. This increase will create a huge bulk of data which needs proper tools and techniques to handle its management and organization. Big data plays an ever more important role in industry as well as in many other organizations. A huge bulk of data is produced from healthcare information systems, electronic records, wearables, smart devices, handheld devices, and so on. The recent increase in medical big data and the development of computational techniques in the field of information technology enable researchers and practitioners to extract and visualize big data in a new spectrum of use. Different techniques and tools are being used to properly handle the management of data. A detailed report of these techniques and tools is needed, which will help researchers to easily identify a tool for their data and to easily manage the data, organize the data, and extract meaningful information from it. The proposed study is an endeavour toward summarizing and identifying the tools and techniques for big data used in IIoT. This report will help researchers and practitioners to easily use the tools and techniques for their needs in an effective way and to devise new solutions for the industry of big data.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding this paper.

Acknowledgments

This study was sponsored in part by the Intelligent Manufacturing Project of Tianjin (20193155).
2020-10-29T09:08:56.209Z
2020-10-21T00:00:00.000
{ "year": 2020, "sha1": "72d0d6f5f4a4b9a1063c982a0ed3654aa5c42415", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/sp/2020/8810634.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f97fa7d7d664faa962def4d15e7090bbe0bd6d4e", "s2fieldsofstudy": [ "Computer Science", "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
271499102
pes2o/s2orc
v3-fos-license
OctoVision: A Smart System for Diabetic Retinopathy Disease Detection

Diabetic retinopathy (DR) is the major cause of vision impairment and blindness in diabetics. Early detection and treatment are critical in preventing irreparable retinal damage. Manual detection of diabetic retinopathy by an ophthalmologist takes a long time, and patients must suffer greatly during this time. This paper presents an automated approach for rapid DR detection using the DenseNet-121 architecture. Our model achieves an accuracy exceeding 80%, with a precision score of 81% and a recall score of 86%, indicating its high effectiveness in detecting DR. Additionally, we developed a server-based implementation where the trained model is deployed. Images captured by a camera are uploaded to a cloud server, which processes them and sends back a diagnostic response. This study contributes to continuing efforts to create efficient and reliable techniques for early DR identification, resulting in earlier management and better patient outcomes.

I. INTRODUCTION

Diabetic Retinopathy (DR) is a significant consequence of diabetes mellitus that causes damage to the retinal blood vessels as a result of persistently high blood glucose levels. In severe circumstances, DR can result in considerable vision loss and perhaps complete blindness. Early signs of DR include black spots, floaters, blurred vision, and difficulties differentiating colours. Early and correct identification of DR is critical for avoiding irreparable blindness.

Globally, approximately one-third of the estimated 285 million individuals with diabetes exhibit symptoms of DR. The overall number of cases of DR is expected to increase from 126.6 million in 2010 to 191.0 million by 2030. In the early stage of diabetes, known as Non-Proliferative Diabetic Retinopathy (NPDR), little red spots (microaneurysms) form on the retina, which might be indicative of hemorrhages. Blood vessel damage can cause exudates, which include fluid and fatty deposits, to leak into the retina.

As DR advances, abnormal blood vessel development (proliferative diabetic retinopathy) can occur, causing retinal scarring or bleeding and eventually leading to progressive vision loss and blindness. DR accounts for 2.6% of global blindness cases. Major risk factors for developing DR include the duration of diabetes, high levels of hemoglobin A1c, and hypertension. Regular screening is crucial for diabetic patients to ensure that DR is detected at an early stage. DR detection traditionally involves a physician's examination of retinal imaging for the shape and appearance of different types of lesions. Available physical tests to detect diabetic retinopathy include pupil dilation, the visual acuity test, optical coherence tomography, etc. However, these are time consuming and burdensome for patients. This paper focuses on the detection of diabetic retinopathy using deep learning techniques. Our trained model can process camera images in real time on a cloud server, providing a useful and effective way to screen for DR at an early stage.
II. LITERATURE REVIEW

The first study, conducted in 2019 to classify diabetic retinopathy using the APTOS dataset, involved a rigorous investigation. Models such as InceptionV3, VGG16, and ResNet50 were meticulously evaluated. The findings revealed that InceptionV3, with its exceptional accuracy of 96.18%, outperformed the other models. Nevertheless, the study stressed the importance of conducting additional research to evaluate the model's effectiveness across various demographic groups and external datasets to ensure its applicability in different contexts. Furthermore, improving interpretability by utilizing explainable AI techniques and mitigating the model's vulnerability to misclassifications are vital for ensuring transparency in medical decision-making and for fully harnessing the potential of artificial intelligence in preventive and personalized medicine.

In a study conducted in 2023 at the Sindh Institute of Ophthalmology and Visual Sciences, researchers employed fundus images to detect diabetic retinopathy using a modified convolutional neural network (CNN). The real-time testing incorporated an image quality rating evaluated by clinical experts. This research highlighted the importance of having a large dataset with labeled examples and introduced the area under the receiver operating characteristic curve (AUC) as a comprehensive metric for evaluating performance.

Using ultra-wide-field fundus pictures and deep learning, a 2021 project aimed to detect diabetic retinopathy early. In order to eliminate unwanted elements, the suggested approach automatically subdivided the ETDRS 7 Standard Fields (7SF). The ResNet-34 model, with the aid of optic disc and macula recognition for precise picture alignment, identified diabetic retinopathy using Ultra-Wide-Field (UWF) fundus images. For efficient diabetic retinopathy identification, the pre-trained ResNet-34 model showed strong performance in combining UWF fundus pictures with the ETDRS 7SF, even with a very small dataset. [2]

In 2020, machine learning made it easier to detect diabetic retinopathy by extracting yellow exudates from RGB photos through preprocessing. The abnormality segmentation process combined Random Forest, K-Nearest Neighbours, and Support Vector Machines into a hybrid classification technique. This approach greatly improved the accuracy of identifying retinal abnormalities, demonstrating its potential to advance ophthalmological diagnosis. [3]

The 2019 study investigated the identification and categorization of diabetic retinopathy using adaptive boosting, with colour space conversion and vessel removal to improve the quality of retinal images by resolving problems with illumination and contrast. Promising F1 score values, sensitivity, accuracy, and precision were displayed by the Adaptive Boosting Algorithm. With regard to diabetic retinopathy, the ANN approach, specifically Architecture III, rose to prominence for its exceptional precision and accuracy. [4]
III. DATASET & METHODS

The dataset used in this research comes from Kaggle, a reliable website well-known for its wide range of competitions and dataset collections. This particular dataset consists of 3,380 images assigned to training and 376 images assigned to testing. The training dataset was further split into training and validation subsets, keeping an 80:20 ratio, to ensure robust model evaluation. A variety of data augmentation methods were applied to the photos to enrich the dataset and strengthen the resilience of the model. These methods mitigated the effects of any unclear images and simulated real-world fluctuations through random flips, rotations, zooms, and brightness modifications.

The identification of diabetic retinopathy within the dataset is based on an analysis of various visual characteristics, including the appearance, number, spread, and size of exudates, microaneurysms, and hemorrhages. Exudates, characterized by bright yellowish areas, are distinguished from the optic disc by their color variance. The presence of lipids within ruptured blood vessels contributes to the formation of exudates. Similarly, the rupture of microaneurysms within blood vessels results in the formation of hemorrhages.

IV. IMAGE PREPROCESSING

The preprocessing of images is a critical phase in enhancing the quality and relevance of input data before integration into the deep learning model. The primary objective is to improve the model's capacity to extract meaningful features and make precise predictions.

To optimize the interpretability of captured images, a strategic combination of advanced techniques was employed. First, a circular mask was carefully applied to every picture, keeping the centre area and removing unnecessary parts from the edges. The goal of this process was to draw attention to and preserve the key elements of the photograph.

Moreover, a sophisticated enhancement process was implemented, involving the judicious amalgamation of the original image with a Gaussian-blurred counterpart. This method was carefully calibrated to emphasize intrinsic features while concurrently mitigating noise. The incorporation of the Gaussian-blur variant introduced a controlled smoothing effect, contributing to a more refined and visually comprehensible representation.
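A minimal OpenCV sketch of the two preprocessing steps described above (circular mask, then blending with a Gaussian-blurred copy). The image size, blur sigma, and blend weights are illustrative assumptions; the paper does not report its exact parameter values.

```python
import cv2
import numpy as np

def preprocess_fundus(path: str, size: int = 224, sigma: float = 10.0) -> np.ndarray:
    """Circular crop plus Gaussian-blur blending; size/sigma/weights are guesses."""
    img = cv2.imread(path)
    img = cv2.resize(img, (size, size))

    # Circular mask: keep the central retina, zero out the corners.
    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    cv2.circle(mask, (size // 2, size // 2), size // 2, 255, thickness=-1)
    img = cv2.bitwise_and(img, img, mask=mask)

    # Blend with a Gaussian-blurred copy to emphasise local structure
    # (vessels, lesions) and suppress slow illumination variation.
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma)
    return cv2.addWeighted(img, 4, blurred, -4, 128)
```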
V. METHODOLOGY

The DenseNet-121 model is used as the backbone architecture in the suggested methodology because of its high efficiency in feature extraction and classification tasks, making it ideal for image-based applications. DenseNet-121's unique structural qualities allow for the continuous flow of information across layers, encouraging feature reuse and addressing issues such as vanishing gradients. Transition blocks in the architecture control spatial dimensions and channel depth, whilst bottleneck layers improve computational efficiency. The network comprises an initial convolution layer, followed by multiple dense blocks interspersed with transition layers. These dense blocks are designed to concatenate outputs from preceding layers, thereby improving gradient flow and feature propagation.

The DenseNet-121 model architecture has several key components, including an initial convolution layer, batch normalization, and ReLU activation. This is followed by a maximum pooling layer. The model then moves through a succession of dense blocks, each containing multiple convolutional layers, with transition layers in between to reduce spatial dimensions and prevent overfitting. The bottleneck layers within these blocks use 1x1 and 3x3 convolutions to enhance computational efficiency. Following the final dense block, global average pooling is used to build a fixed-size feature vector, which is then passed through a fully connected layer with softmax activation to get a probability distribution over the target classes.

The DenseNet-121 model was built with the Keras package, which includes pre-trained models for transfer learning. The model was compiled using the Adamax optimizer and a categorical cross-entropy loss function. To improve performance and avoid overfitting, data augmentation and actions such as learning rate reduction, early stopping, and model checkpointing were used during training. The model's final output is a probability distribution over two classes, with the highest probability indicating the anticipated class. Class 0 denotes the normal class, whereas Class 1 represents the abnormal class. This architectural choice, as well as the subsequent output interpretation, contributes to the proposed method's resilience and utility for detecting diabetic retinopathy.
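To make the training setup above concrete, here is a minimal sketch. Only the DenseNet-121 backbone, global average pooling, softmax head, Adamax optimizer, categorical cross-entropy loss, and the three callbacks come from the text; the input size, learning rate, patience values, and file names are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

# ImageNet-pretrained backbone; the 224x224 input size is an assumption.
base = DenseNet121(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),        # fixed-size feature vector
    layers.Dense(2, activation="softmax"),  # {normal, abnormal} probabilities
])

model.compile(optimizer=tf.keras.optimizers.Adamax(learning_rate=1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Learning-rate reduction, early stopping, and checkpointing, as in the text;
# patience values and the checkpoint file name are illustrative.
callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(patience=3),
    tf.keras.callbacks.EarlyStopping(patience=6, restore_best_weights=True),
    tf.keras.callbacks.ModelCheckpoint("best_model.keras", save_best_only=True),
]
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=callbacks)
```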
VI. CLOUD DEPLOYMENT AND REAL-TIME IMAGE PROCESSING

The DenseNet-121 model, after being trained and validated, is deployed on a cloud server to leverage the computational power and scalability of cloud infrastructure. This configuration allows the system to process several requests at once and provide prompt, accurate replies. As soon as the photos are received by the cloud server, they go through the previously mentioned preprocessing pipeline. This includes applying a circular mask to focus on the relevant parts of the image and using Gaussian blurring to enhance key features. The preprocessed images are then fed into the deployed DenseNet-121 model.

The model processes the images and generates predictions in real time. The output, which includes a probability distribution across the predefined classes (normal and abnormal), is then sent back to the user's device.

VII. RESULTS AND DISCUSSION

Our developed model demonstrated a commendable accuracy of around 80%, indicating its capability to correctly predict outcomes across the dataset. This level of accuracy is promising for the intended application of diabetic retinopathy detection.

The precision measure reflects how many of the positively detected occurrences were truly correct. It is calculated as the ratio of true positives to the sum of true positives and false positives, precision = TP / (TP + FP). The precision for Class 0 (no diabetic retinopathy) is 0.74, meaning that 74% of the instances predicted as no diabetic retinopathy were indeed true negatives. For Class 1 (diabetic retinopathy), the precision is 0.81, highlighting the model's efficiency in correctly identifying positive cases. Thus, 81% of the instances predicted as diabetic retinopathy were accurate, indicating a high level of precision for identifying this condition. Recall, which reflects the model's ability to identify all actual positive instances, is another critical metric. It is calculated as the ratio of true positives to the sum of true positives and false negatives, recall = TP / (TP + FN). The recall for Class 0 is 0.86, indicating that the model successfully identified 86% of the actual instances of no diabetic retinopathy. Conversely, the recall for Class 1 is 0.66, meaning that the model correctly identified 66% of the actual instances of diabetic retinopathy.

VIII. CONCLUSION

The use of the DenseNet-121 architecture in this work is a big step forward in applying deep learning to healthcare, particularly diabetic retinopathy detection. When combined with a unique weighted Gaussian blur preprocessing approach, DenseNet-121 improves our capacity to distinguish subtle aspects of diabetic retinopathy in retinal pictures. Furthermore, the use of a cloud server architecture for real-time image processing and model inference demonstrates our approach's scalability and usability. By deploying the model on a cloud server, we offer seamless picture capture via networked cameras, allowing for quick diagnosis and action. The insights derived from this research expand our understanding of diabetic retinopathy and provide robust models and interpretability analyses. These outcomes establish a solid foundation for future advancements in automated screening for diabetic retinopathy, with the overarching goal of improving patient outcomes and alleviating healthcare burdens.

Early detection and diagnosis enabled by our methodology are critical in mitigating the risk of blindness and addressing the severe implications of diabetic retinopathy. This study emphasizes the social impact of advancements in automated screening processes, advocating for enhanced patient care and making a significant contribution to continuing efforts to successfully treat diabetic retinopathy.

Fig. 1: Image with no DR. Fig. 3: Flowchart.
2024-07-28T15:08:33.403Z
2024-07-26T00:00:00.000
{ "year": 2024, "sha1": "9a95e6cf94a0def4b1613a2b02b2101baeb6f869", "oa_license": null, "oa_url": "https://doi.org/10.38124/ijisrt/ijisrt24jul605", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "72ab59e2df69cb7deaaf399d4ee49f873862b082", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [] }
252399805
pes2o/s2orc
v3-fos-license
A Plasmid Carrying blaIMP-56 in Pseudomonas aeruginosa Belonging to a Novel Resistance Plasmid Family

blaIMP and blaVIM are the most detected plasmid-encoded carbapenemase genes in Pseudomonas aeruginosa. Previous studies have reported plasmid sequences carrying blaIMP variants, except blaIMP-56. In this study, we aimed to characterize a plasmid carrying blaIMP-56 in a P. aeruginosa strain isolated from a Mexican hospital. The whole genome of P. aeruginosa strain PE52 was sequenced using Illumina MiSeq 2 × 150 bp, with 5 million paired-end reads. We characterized a 27 kb plasmid (pPE52IMP) that carried blaIMP-56. The phylogenetic analysis of RepA in pPE52IMP and 33 P. aeruginosa plasmids carrying resistance genes reported in the GenBank revealed that pPE52IMP and four plasmids (pMATVIM-7, unnamed (FDAARGOS_570), pD5170990, and pMRVIM0713) were in the same clade. These closely related plasmids belonged to the MOBP11 subfamily and had similar backbones. Another plasmid (p4130-KPC) had a similar backbone to pPE52IMP; however, its RepA was truncated. In these plasmids, the resistance genes blaKPC-2, blaVIM variants, aac(6′)-Ib4, blaOXA variants, and blaIMP-56 were inserted between the phd and resolvase genes. This study describes a new family of plasmids carrying resistance genes, with a similar backbone, the same RepA, and belonging to the MOBP11 subfamily in P. aeruginosa. In addition, our characterized plasmid harboring blaIMP-56 (pPE52IMP) belongs to this family.

Introduction

Pseudomonas aeruginosa is an opportunistic pathogen causing nosocomial infections such as ventilator-associated pneumonia, urinary tract infections, blood-associated infections, and skin and soft tissue infections [1][2][3]. Infections with this microorganism are challenging to treat due to its natural resistance and the accelerated emergence of strains resistant to almost all antibiotics, including carbapenems (last-resort treatments) [1]. Therefore, the World Health Organization in 2017 included P. aeruginosa in the critical-level priority pathogens group, along with Acinetobacter baumannii and carbapenem-resistant Enterobacteriaceae [4].

Plasmid Characterization

The MOBScan web application [38] was used to identify relaxases and classify the plasmids into any of the nine MOB families. For in silico classification by the replicon method, we used PlasmidFinder [39]. To determine the phylogenetic relationship of the RepA protein in pPE52IMP and plasmids from P. aeruginosa, we analyzed the RepA of plasmids carrying resistance genes and constructed a phylogenetic tree. A total of 164 nucleotide sequences of complete and partial plasmids from the GenBank database were obtained (until October 2021). The plasmid sequences were annotated with Rapid Annotations using Subsystem Technology (RAST) [35], and ResFinder version 4.1 [40] was used for the detection of antibiotic resistance genes in all plasmids. It is essential to point out that plasmids that did not carry resistance genes were not included. RepA amino acid sequences of the plasmids carrying resistance genes were searched in the annotations using the keywords "replicase", "repA", and "helix-turn-helix domain-containing protein", and their replicase A domains were corroborated with Pfam [41]. In addition, RepA proteins with premature stop codons or ORF changes were discarded. Finally, 33 RepA proteins of plasmids (Table S1) (including pPE52IMP) were used to construct the phylogenetic tree.
The Molecular Evolutionary Genetics Analysis tool, MEGA version 11.0.10 [42], was used to infer the RepA proteins' phylogeny using the UPGMA method (the parameters used were the amino acid substitution type, the no. of differences method, and 100 bootstrap replicates).

Comparative Analysis of Plasmids Obtained from GenBank and pPE52IMP

For the comparative analysis of plasmids, we selected the complete sequences of the plasmids that shared 100% identity with the repA of pPE52IMP and were in the same clade in the phylogenetic tree. To align and compare the sequences, we used MAUVE version 20150226 [43] and CLC Sequence Viewer version 8.0 (CLC bio A/S, Aarhus N, Denmark). To represent the comparison of plasmids, EASYFIG 2.2.5 was used [44].

Structural Features of the pPE52IMP Plasmid

Whole-genome sequencing revealed the presence of a single plasmid, pPE52IMP, that carries the blaIMP-56 variant (GenBank accession no. CP102481.1). The pPE52IMP plasmid had a size of 27,635 bp, 39 open reading frames (ORFs), and a guanine-cytosine (G+C) content of 62.2%. Moreover, 32 of the 39 open reading frames had a predicted function: 1 of replication, 6 of stability, 7 of transfer, 13 of adaptation, and 5 transposon-related genes. We could not determine the functional domain of seven hypothetical proteins (Figure 1).

The transfer module consisted of the genes traJ, traK, trbL, trbK, trbJ, virB4, and a relaxase traI belonging to the MOBP11 subfamily. The oriT was located upstream of traK and consisted of 113 bp. The stability module involves the partitioning genes parA and parC; however, the parB gene was not found. In addition, the toxin-antitoxin genes phd/doc and the krfA gene were identified. The repA was part of the replication module, and no iterons or replication origins close to the repA gene were found (Figure 1). The adaptation module contained a class 1 integron carrying the blaIMP-56, aadA1, and blaOXA-2 genes. In addition, a Tn3 family transposon carrying a mercury resistance operon (merR, merT, merP, merA, merD, and merE genes) was located on the plasmid (Figure 1).

Figure 1. Structure of the pPE52IMP plasmid of P. aeruginosa strain PE52. Plasmid modules are represented with different colors. Blue: adaptation; yellow: replication; orange: mobilization; purple: stability; green: transposons; gray: hypothetical proteins. GC content, GC skew+ and GC skew− are represented in black, purple and green, respectively, on the inner map.

Phylogenetic Analysis of RepA

To infer a possible phylogenetic relationship between pPE52IMP and other plasmids from P. aeruginosa, we used RepA to build a phylogenetic tree. For the analysis, we included the amino acid sequences of 33 RepA from plasmids carrying antibiotic resistance genes (Table S1) (including the RepA of pPE52IMP). The analysis showed a wide diversity of replicases among P. aeruginosa plasmids, grouped in 11 clades (Figure 2). Furthermore, RepA proteins of plasmids with the same incompatibility group were clustered in the same clade, such as IncP-2 (pOZ176, pJB37, pPUV-1), IncP-6 (C79, p10265-KPC, pCOL-1), and IncP-7 (p1160-VIM and pNK546b); however, the incompatibility group of one plasmid within the IncP-7 clade (unnamed1 P8W) was not reported (Figure 2). On the other hand, it is important to note that the RepA proteins from pPE52IMP, pMATVIM-7, unnamed1 (FDAARGOS_570), pD5170990, and pMRVIM0713 were in the same clade (Figure 2).
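The tree above was inferred with MEGA's UPGMA implementation. As an illustration only, not the authors' pipeline, a comparable UPGMA tree can be scripted with Biopython from a multiple alignment of the RepA sequences; the input file name is hypothetical, and the identity distance only approximates MEGA's "no. of differences" metric.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical input: the 33 aligned RepA amino acid sequences (Table S1).
alignment = AlignIO.read("repA_aligned.fasta", "fasta")

# Pairwise identity distances, then UPGMA clustering of the distance matrix.
# Bio.Phylo.Consensus provides bootstrap support if replicates are wanted.
dm = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().upgma(dm)

Phylo.draw_ascii(tree)  # quick text rendering of the clades
```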
Comparative Analysis of pPE52IMP and Plasmids with the Same RepA and Similar Structure

The pPE52IMP structure was compared with closed plasmids from the GenBank, which clustered in the same clade of the RepA phylogenetic tree (Figure 2): pMATVIM-7 (GenBank accession no. AM778842.1), plasmid unnamed (GenBank accession no. CP033834.1), pD5170990 (GenBank accession no. KX169264.1), and pMRVIM0713 (GenBank accession no. KP975076.1). In addition, the plasmid p4130-KPC was not included in the phylogenetic analysis because its RepA was truncated, but it was incorporated in the comparative analysis of the structure (Figure 2). These plasmids ranged from 24 kb to approximately 58 kb, and were isolated in the USA, Brazil, and France. The characteristics of these plasmids are shown in Table S2. The comparative analysis showed that these six plasmids shared a similar backbone, including genes for replication (repA), partition (parA, parC), and transfer (tra and virB4); however, we found some differences. The traJ, traK, and kfrA genes were absent from pD5170990, the trbJ gene in p4130-KPC was interrupted by a transposon, the N-terminus of TraK in pMATVIM-7 is absent, and the RepA of p4130-KPC lacks the C-terminus (Figure 3).

Figure 3 illustrates that the variable region was found downstream of the phd gene and upstream of the resolvase gene and consisted of genes for adaptation, such as the carbapenemases blaIMP-56 (pPE52IMP) and blaVIM-6 (plasmid unnamed and pMRVIM0713) carried by a class 1 integron, blaVIM-7 (pMATVIM-7) carried by a partial class 1 integron, blaKPC-2 (pD5170990) carried by a transposon, and blaOXA-779, blaOXA-732, and blaKPC-2 carried by a class 1 integron and a transposon, respectively (p4130-KPC) (Figure 3).

As previously mentioned, pPE52IMP was classified into the MOBP11 subfamily [28] but was not classifiable by replicon typing. In silico analysis revealed that the plasmids pMATVIM-7, unnamed (FDAARGOS_570), pD5170990, pMRVIM0713, and p4130-KPC were also classified into the MOBP11 subfamily but were not classifiable according to the replicon typing scheme [26,45]. These plasmids shared some characteristics, such as having the same replicase and a similar backbone, and were classified as MOBP11, but were not classifiable by incompatibility group.
Plasmids with a Similar Backbone to pPE52IMP Present in Other Bacterial Genera

By searching GenBank using the repA gene from pPE52IMP, we found two plasmids with the same repA and similar backbones in Achromobacter ruhlandii (plasmid p138R) and Serratia marcescens (plasmid pSMC1). The sizes of the plasmids were 34 and 41.5 kb, and they were isolated from Argentina and Japan, respectively (Table S3). Comparing the complete sequences of the plasmids, we determined that these two plasmids shared a conserved backbone with pPE52IMP. Furthermore, we found that almost all backbone genes shared 100% identity and coverage, except for the repA of p138R, which was truncated, and the traI of pSMC1, which shared 97.65% nucleotide similarity and 100% coverage. In addition, the variable region of these plasmids carried different carbapenemases (blaIMP-1, blaCMY-8) and other resistance genes such as aac(6′)-Ib4 and aadA2, commonly found in enterobacteria (Figure S1).

Discussion

The emergence of beta-lactamases with activity against carbapenems has compromised the clinical utility of this class of antibiotics [46]. In P. aeruginosa, class A and B β-lactamases with carbapenemase activity have been reported, including VIM, IMP, SPM, NDM, GIM, GES, and KPC [47,48]. The IMP, VIM, NDM, and GES types comprise several variants, whereas only one variant each for SPM-1 and GIM-1 has been reported [49]. These enzymes are carried on plasmids, integrons, and transposons, which play an important role in their dissemination [49]. Recently, carbapenemases mobilized by mobile genetic elements in Pseudomonas aeruginosa were reviewed, and it was found that blaKPC-2, blaVIM-1, and blaIMP-45 are carried by plasmids belonging to different incompatibility groups [7]. In addition, other carbapenemases such as blaVIM-2, blaIMP-6, and blaIMP-9 are carried by plasmids [18,21,50,51]. Little is known about P. aeruginosa plasmids and their role in resistance gene dissemination; therefore, characterizing plasmids will help better understand this dissemination mechanism. In this work, we determined the structure of the plasmid pPE52IMP carrying blaIMP-56 (Figure 1), finding that it has a lower G+C content (62.2%) than the
P. aeruginosa chromosome (approximately 66.6%) [52]; however, it is consistent with the GC content reported in other P. aeruginosa plasmids (from 45.8% to 63.8%) [25]. A previous study revealed that the average GC content of plasmids was 10% lower than their host's chromosome, which suggests that plasmids with a very different GC content could not be maintained in their host [53]. The stability module comprises a partitioning system that contributes to the segregation of the plasmid, an addiction system that ensures the killing of plasmid-free cells, and multimer resolution systems that prevent the formation of plasmid multimers [54]. The partitioning system consists of an ATPase (parA), a centromere-like DNA sequence (parC), and a DNA-binding protein (parB) [55]; the latter is composed of a central HTH DNA-binding domain flanked by a C-terminal dimer domain and an N-terminal region necessary for protein oligomerization [56]. In the case of pPE52IMP, we found only the parA and parC genes, while the parB gene was absent, and none of the hypothetical proteins present in the plasmid had parB-like domains (Figure 1). On the other hand, the kfrA gene has been shown to act as a transcriptional autoregulator and participates in plasmid stability [57][58][59], suggesting that this gene could be involved in pPE52IMP stability; however, other studies are necessary to understand how the segregation process is carried out in this plasmid. In addition, the addiction system is composed of the Doc toxin (death on curing) and the Phd antitoxin (prevents host death) (Figure 1), which belongs to the type II systems, where the toxin is directly blocked by the antitoxin [60]; besides, this toxin/antitoxin system plays an important role in plasmid stability, persistence, programmed cell death, and the stress response [61]. Conjugative plasmids carry two sets of genes; the first allows DNA processing (DNA transfer and replication (Dtr) genes), and the second is a membrane-associated mating pair formation (Mpf) complex (a form of type 4 secretion system). In contrast, mobilizable plasmids use the Mpf of another genetic element in the same cell [62,63]. The transfer module of pPE52IMP consists of the IncP-like plasmid genes traK, traJ, and traI, which are essential for relaxosome formation, and the conjugative transfer genes trbJ, trbK, and trbL, which are involved in the formation of the Mpf system; however, the genes traH (chaperone activity), traG (coupling protein), traA, traB, traD, and traE (not essential for conjugation) and the genes trbBCDEFGHI (necessary for the formation of the Mpf system) are absent in pPE52IMP [64], suggesting that it could be a mobilizable plasmid. Furthermore, the lack of transconjugants in the conjugation experiment reinforces this analysis (data not shown). Mercury operons comprise mercury resistance-conferring genes (merEDAPTR) and are commonly located on transposons and integrons carried by plasmids [65]. pPE52IMP carries the mer operon located next to the tn21 and tnpR genes (Figure 1), which are part of transposable elements of the Tn3 family [66]. On the other hand, some authors have used features of the plasmid backbone to design classification schemes, such as PCR-based replicon typing (PBRT) [26] and degenerate primer MOB typing (DPMT) [28], based on plasmid replication and mobility functions, respectively [67]. Plasmids of P. aeruginosa with a similar backbone to pPE52IMP have a MOBP11 subfamily relaxase according to MOB typing [28]; this is consistent with the findings reported by Lopez-García [15].
The MOBP11 subfamily belongs to the MOBP superfamily, one of the most abundant in plasmids among gammaproteobacteria (including Pseudomonas) [68]. pPE52IMP and plasmids with a similar backbone could not be classified by PBRT [26], which may be related to the fact that this scheme is focused on classifying plasmids from Enterobacteriaceae but not from other bacterial families. pPE52IMP does not belong to any of the 14 incompatibility groups (IncP-1 to IncP-14) described in P. aeruginosa; this is consistent with Shintani et al., 2015 [25], who found that only 21 of 183 Pseudomonadales plasmids analyzed could be classified into an IncP group. The above reflects the need to develop a technique to classify P. aeruginosa plasmids; however, classifying plasmids using MOB typing could help in some cases. A classification based on replicase sequence homology was designed by Bertini for Acinetobacter baumannii plasmids, identifying 19 homology groups (GRs) [27]. Rep genes that shared at least 74% identity were placed in the same group. Other authors have added more groups using the same identity criteria, reporting, to date, 33 GRs [69]. Therefore, we used similar parameters to determine the distribution and behavior of RepA in pPE52IMP and the plasmids of Pseudomonas aeruginosa reported in the GenBank (Table S1 and Figure 2). It is important to highlight that we included only plasmids carrying resistance genes in the analysis. The RepA proteins of plasmids belonging to the same incompatibility group (IncP-2, IncP-7, IncP-6) were clustered in three clades, likely because plasmids belonging to the same incompatibility group have the same or a related replication/partitioning system [70]. On the other hand, the RepA of pPE52IMP and plasmids with a similar backbone were clustered together in a separate clade, indicating that they are closely related genetically and probably constitute a new family of plasmids. According to the information available in the GenBank, the strains of P. aeruginosa and the other genera that carried plasmids similar to pPE52IMP were isolated from the USA (mainly), Brazil, France, Argentina, and Japan (Tables S2 and S3), which would indicate that these plasmids are circulating in different countries and acting as vehicles for the dissemination of antibiotic resistance genes. Closely related plasmids commonly have a core called the "backbone" associated with plasmid-specific functions such as replication initiation, conjugation, and stability. In addition, the backbone can include virulence genes and antibiotic- and heavy metal-resistance genes that confer adaptive advantages to the bacterium [71]. In the analysis of the phylogenetic tree, we found four plasmids of P. aeruginosa strains, and one plasmid of a strain reported in the GenBank, with a backbone similar to pPE52IMP. In addition, the plasmids had a variable region with carbapenem resistance genes such as blaVIM-6, blaVIM-7, and blaKPC-2, and other beta-lactamase-encoding genes such as blaOXA-779, blaOXA-732, and blaOXA-10 (Table S2 and Figure 3), carried by class 1 integrons and transposons. Our working group previously reported that pPE52IMP carries blaIMP-56 in a class 1 integron (GenBank accession no. KY646161) [15]; nevertheless, in this study, we report the structure of the plasmid carrying blaIMP-56, which belongs to a new family of plasmids. Plasmids with a conserved backbone carrying resistance genes inserted into hotspot sites have been reported, and the repA gene serves as a hotspot in some of them [72][73][74].
However, in the plasmids analyzed, the resistance genes are inserted between phd and a resolvase gene, so this could be a potential hotspot for integrating resistance genes in these plasmids, but more studies are necessary. We also found two plasmids with a backbone similar to pPE52IMP in bacteria not closely related to P. aeruginosa, such as p138R from A. ruhlandii and pSMC1 from S. marcescens (Table S3 and Figure S1). These plasmids carried the aadA1, aac(6′)-Ib4 acetylase, blaCMY-8, and blaIMP-1 genes. These observations could indicate that plasmids of this type could be of a broad host range [73], allowing the dissemination of resistance genes between bacteria other than P. aeruginosa. However, transformation experiments with hosts of other bacterial genera are needed to confirm the host range of this plasmid.

Conclusions

In this study, we described a new family of plasmids carrying resistance genes with the same RepA, a similar backbone, and belonging to the MOBP11 subfamily in P. aeruginosa. In addition, we characterized the first plasmid harboring blaIMP-56 (pPE52IMP), isolated from a Mexican hospital, belonging to this family. This study contributes to understanding how these plasmids encoding carbapenemases spread among bacteria.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/microorganisms10091863/s1. Table S1: Characteristics of the plasmids included in the phylogenetic tree of the RepA proteins. Table S2: Characteristics of P. aeruginosa plasmids with a similar structure to pPE52IMP. Table S3: Characteristics of plasmids from other bacterial genera with a similar structure to pPE52IMP. Figure S1: Comparison of pPE52IMP with plasmids from other bacterial genera with a similar backbone.

Funding: This work was supported by 100031833/VIEP2019, VIEP/CONACyT2497/16, and 100031833/VIEP2020.

Data Availability Statement: The P. aeruginosa PE52 strain was recovered from routine culture and informed patient consent was not required. The protocol to perform this study was approved by the Ethical Committee of Hospital Regional del ISSSTE, Puebla, under number 188-2018.
2022-09-21T15:18:30.759Z
2022-09-01T00:00:00.000
{ "year": 2022, "sha1": "c0ff5a0e13750dc987b4bee3ca348f0e7d6afccb", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2607/10/9/1863/pdf?version=1663647689", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cf755c4ae97834fc03e82384a7aae944dfd81efa", "s2fieldsofstudy": [ "Biology", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
235478530
pes2o/s2orc
v3-fos-license
Magnetic nanoparticles for hydroxy-PAHs removal from synthetic urine

The study summarizes the use of newly synthesized core/shell nanoparticles to remove hydroxy-polycyclic aromatic hydrocarbons (PAHs). Fe2O3 was coated with (3-aminopropyl)triethoxysilane and tested for the removal of hydroxy-PAHs from synthetic urine. The synthesized core/shell nanoparticles were characterized using several techniques; XRD analysis showed that the synthesized nanoparticles have an amorphous structure. The results showed a high removal ratio, which indicates the strong capability of the synthesized nanoparticles for the removal of PAHs.

INTRODUCTION:

PAHs are hydrocarbons composed of fused aromatic rings, that is, rings that share one or more sides and contain delocalized electrons; they are molecules made by combining two or more benzene rings. Polycyclic aromatic hydrocarbons contain only carbon and hydrogen atoms, and they comprise more than 100 different chemicals that are formed during the incomplete combustion of coal, oil, gas, garbage, or other organic materials such as tobacco or charcoal-grilled meat. The majority of PAHs do not dissolve easily in water; they attach to solid particles and settle in the depths of lakes and rivers, although microorganisms can break down aromatic hydrocarbons.

There are 16 PAH compounds that have been identified as a priority by the EPA (Environmental Protection Agency), so their concentrations must be monitored continuously. Polycyclic compounds persist in soil or water for a period ranging between weeks and months. In soils, polycyclic aromatic hydrocarbons (PAHs) are very likely to be tightly attached to the particles, and some PAHs penetrate the soil and pollute the groundwater. The polycyclic aromatic hydrocarbon content in plants and animals may be much higher than the polycyclic aromatic hydrocarbon content in the soil and water in which these animals and plants live. PAHs are complex contaminants 1,2,3,4,5.

The most important problem directly facing the world is pollution of all kinds 6,7,8,9,10,11; according to the World Health Organization, a very high percentage of the world lives in a polluted environment, whether the pollution is of the air or of water. Some polycyclic aromatic hydrocarbons are known for their ability to cause cancer and mutations, and thus pose a serious threat to human health. Various physico-chemical methods have been used to remove these compounds from our environment 12,13,14,15.

Chemicals: All the materials used were of HPLC grade; the chemicals used were pure and needed no purification. 1-Hydroxypyrene was purchased from Sigma-Aldrich, and all other chemicals came from other commercial sources such as Merck.

Methods:

3.1. Synthetic urine

A medium similar to natural urine was prepared by simulating the optimal conditions of the medium, based on the known properties of natural urine and its content of dissolved substances. Characteristics of urine: urine is naturally formed from water as a basic component, and it also contains nitrogen compounds, including urea, in addition to creatinine and other metabolic waste components. The following are the normal proportions of the chemical composition of urine: water, more than 95%; urea, 9.3 g/L; chloride, 1.87 g/L; sodium, 1.17 g/L; potassium, 0.75 g/L; creatinine, 0.67 g/L; and other dissolved ions and organic and inorganic compounds, such as proteins, hormones, and receptors. The acidity of urine ranges between 5.5 and 7.5 and depends on the diet. The medium was therefore prepared based on the previous data, dissolving urea (0.58 g), chloride (0.11 g), sodium (0.075 g), potassium (0.045 g), ammonium (0.11 g), and phosphates and sulfates (0.0625 g) in 60 mL of aqueous medium.
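A quick arithmetic check (not from the paper) confirms that the 60 mL recipe above approximately reproduces the target concentrations of natural urine:

```python
# Masses from the recipe and target concentrations of natural urine, both
# taken from the text; the check itself is an illustration, not the authors'.
recipe_g = {"urea": 0.58, "chloride": 0.11, "sodium": 0.075, "potassium": 0.045}
target_g_per_L = {"urea": 9.3, "chloride": 1.87, "sodium": 1.17, "potassium": 0.75}
volume_L = 0.060  # 60 mL of aqueous medium

for solute, grams in recipe_g.items():
    conc = grams / volume_L  # resulting concentration in g/L
    print(f"{solute:9s}: {conc:5.2f} g/L (target {target_g_per_L[solute]:.2f} g/L)")
# urea 9.67, chloride 1.83, sodium 1.25, potassium 0.75 g/L: close to the targets
```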
Preparation of stock solution of PAHs

Stock solutions of 1-hydroxypyrene were prepared by dissolving 3 mg of the standard in 0.5 mL of methanol and completing the volume with distilled water to 10 mL. The stock solution was kept in a dark place at 4 °C. Prior to use, stock solutions were monitored via room-temperature fluorescence spectroscopy for possible photo-degradation of the PAHs. The stock solution was used for a period of less than 6 months. Working solutions of PAHs were prepared daily by serial dilution of the stock solutions.

Results and Discussion:

4.1. Fluorescence Spectroscopic Study

4.1.1. Excitation-Emission spectrum

Excitation and emission spectra were obtained using a commercial spectrometer (Shimadzu RF-5301pc). The excitation source is a 150-watt xenon lamp (220-900 nm), with a wavelength precision of ±1.5 nm. The wavelength scan was performed at 5500 nm/min. The excitation and emission spectra were measured for the compound 1-hydroxypyrene, dissolved in a certain amount of synthetic urine; the excitation spectrum was used to determine the excitation and emission wavelengths. Measurements were made at room temperature, monitoring the performance of the device using standard materials and determining the radiation intensity at the highest spectrum.

Using Fe2O3-SiO2-R Core/Shell NPs for extraction of PAHs from synthetic urine

The Fe2O3-SiO2-R core/shell nanoparticles were used to study the interaction with hydroxy-PAHs and carry out the extraction process. Different concentrations of 1-hydroxypyrene (20, 40, 60, 80, and 100 ppb) were prepared in synthetic urine. The extraction ratios were estimated and monitored by means of fluorescence spectroscopy, and the concentrations with the highest extraction ratios were identified; the best extraction was obtained at 40, 60, and 80 ppb, as shown in Figures 2, 3 and 4, where the extraction ratios were 96.30% ± 0.30, 86.00% ± 1.30, and 91.55% ± 2.50, respectively.

To allow extraction to be performed more than once, 3 mL of 2-propanol was added to the nanocomposite, which was then placed in a shaker for no more and no less than 20 minutes (this duration was chosen as the most suitable after evaluating experiments). It was then placed in the centrifuge, also for a period of 20 minutes. After separation, the emission intensity of the medium was measured with the fluorescence spectrometer, and it was found that 2-propanol could break the bond between the shell and R (Fe2O3-SiO2-R), as shown in Figure 6, so that the nanoparticles can then be reused in the extraction process more than once.
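The paper reports extraction (removal) ratios monitored by fluorescence but does not give the formula used. A common convention, shown here as an assumption rather than the authors' method, is to take the relative drop in the analyte's emission intensity:

```python
def removal_ratio(i_before: float, i_after: float) -> float:
    """Percent removal estimated from the drop in fluorescence emission intensity."""
    return (i_before - i_after) / i_before * 100.0

# Illustrative intensities, chosen only so the output matches the 96.30%
# figure reported above; they are not measured values from the paper.
print(f"{removal_ratio(1000.0, 37.0):.2f} %")  # -> 96.30 %
```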
Conclusion:

In this research, an advanced method for the synthesis and use of core/shell nanoparticles to remove hydroxy-PAHs from synthetic urine is presented; the results demonstrate the advantage of using (3-aminopropyl)triethoxysilane as the shell of the nanoparticles. The extraction ratios were around 90%, which gives an indication of the method's usefulness for the extraction of hydroxy-PAHs.
2021-06-19T20:03:22.517Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "5e3dbc194fd6eca3b0cb7d101c08682097e25dcc", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/790/1/012037", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "5e3dbc194fd6eca3b0cb7d101c08682097e25dcc", "s2fieldsofstudy": [ "Environmental Science", "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
6424239
pes2o/s2orc
v3-fos-license
System-level performance of LTE-Advanced with joint transmission and dynamic point selection schemes

In this article, we present a practical coordinated multipoint (CoMP) system for LTE-Advanced. In this CoMP system, cooperation is enabled for cell-edge users via dynamic switching between the normal single-cell operation and CoMP. We first formulate a general CoMP system model of several CoMP schemes. We then investigate a practical finite-rate feedback design that simultaneously supports interference coordination, joint transmission (JT), and dynamic point selection (DPS) with a varying number of cooperating transmission points while operating a single-cell transmission as a fallback mode. We provide both link-level and system-level results for the evaluation of different feedback options for general CoMP operation. The results show that there are substantial performance gains in cell-edge throughputs for both JT and DPS CoMP over the baseline Release 10 LTE-Advanced with practical feedback options. We also show that CoMP can enable improved mobility management in real networks.

Introduction

Multiple-input multiple-output (MIMO) systems have the potential to provide the capacity needed for future-generation wireless systems, and for this reason they have been adopted by 3GPP Long-Term Evolution (LTE) and LTE-Advanced (LTE-A) [1,2]. MIMO operation was already defined in the early stage of LTE specification work. In the downlink, 2 × 2 and 4 × 4 MIMO operation have been defined in Release 8 [3], and these have been further extended to 8 × 8 MIMO in Release 10 [2]. The main scenario is single-user (SU)-MIMO, where spatial multiplexing within individual time-frequency resource blocks is performed for a single user equipment (UE) at a time. In addition, multi-user (MU)-MIMO operation, where a time-frequency resource block is shared by multiple users in the spatial domain, has been possible since Release 8. In LTE Release 8, MU-MIMO is allowed only in a standard non-transparent manner, but in LTE Release 9 and 10 it can be enabled in a standard transparent manner. In Release 10, certain features have been included to improve the MU-MIMO performance compared to Release 8. One such feature is a user-specific reference signal (RS) that makes it possible to suppress MU interference with a linear receiver.

With a frequency re-use factor of 1, single-cell SU- and MU-MIMO network performance is highly interference limited, especially at the cell edge. Therefore, the introduction of coordinated multipoint (CoMP) transmission/reception was already considered in Release 10. In downlink CoMP, the transmission points co-operate in scheduling and transmission in order to strengthen the desired signal and mitigate inter-cell interference. In a typical homogeneous cellular system, one site has three macro cells/sectors. Each cell has its own identification number, which is determined, for example, by the RSs that are configured for the UEs. Because of the increasing use of heterogeneous networks (HetNets), where pico cells are placed inside macro cells in order to increase network capacity, the concept of cell identity is no longer as straightforward, since it is possible to assign to the picos the same cell identities as to the macro cells. Therefore, a definition of a point is needed. A point is defined as a transmission point having transmit antennas in a single geographical location [30]. Thus, one cell is formed by one or multiple points, meaning that one cell can comprise transmit antennas distributed in multiple geographical locations. In practice, the points may be base stations (evolved Node B, or eNB for short) or remote radio heads (RRHs). An RRH does not include a scheduling unit but is controlled by an eNB. Figure 1 shows an example of a HetNet deployment, which has received a considerable amount of attention from researchers, and which is one key scenario of interest for deploying CoMP in LTE systems.
With a frequency re-use factor of 1, single-cell SUand MU-MIMO network performance is highly interference limited, especially at the cell-edge.Therefore, the introduction of coordinated multipoint (CoMP) transmission/reception was already considered in Release 10.In downlink CoMP, the transmission points co-operate in scheduling and transmission in order to strengthen the desired signal and mitigate inter-cell interference.In a typical homogeneous cellular system, one site has three macro cells/sectors.Each cell has its own identification number, which is determined, for example, by the RSs that are configured for the UEs.Because of the increasing use of heterogeneous networks (HetNets), where pico cells are placed inside macro cells in order to increase network capacity, the concept of cell identity is no longer as straight forward since it is possible to assign to the picos the same cell identities as to the macro cells.Therefore, a definition of a point is needed.A point is defined as a transmission point having transmit antennas in a single geographical location [30].Thus, one cell is formed http://asp.eurasipjournals.com/content/2012/1/247by one or multiple points, meaning that one cell can comprise transmit antennas distributed in multiple geographical locations.In practice, the points may be base stations (evolved Node B or eNB for short) or remote radio heads (RRHs).An RRH does not include a scheduling unit but is controlled by an eNB. Figure 1 shows an example of a HetNet deployment, which has received considerable amount of attention from researchers, and which is one key scenario of interest for deploying CoMP in LTE systems. In general, CoMP techniques have received increasing interest within the 3GPP community during Release 11 [4].The primary focus has been on schemes called joint transmission (JT), dynamic point selection (DPS), dynamic point blanking (DPB), and coordinated scheduling/beamforming (CS/CB).In JT CoMP, two or more points transmit simultaneously to a CoMP user in a coherent or non-coherent manner.JT CoMP is depicted in Figure 2. Coherent JT means that the transmitted signals are phase aligned to achieve constructive combining of the signals at the receiver side, whereas in non-coherent JT such phase alignment is not performed.DPS refers to a scheme where the transmission point is varied according to changes in channel and interference conditions.A DPS scheme is shown in Figure 3.In CS/CB, the scheduling decisions of neighboring points are coordinated in order to reduce the interference, as in the scenario shown in Figure 4.In principle, all schemes may include point blanking/muting which means that one or more transmission points are turned off in order to decrease the interference.The overall objective of these schemes is to reduce interference and, as a result, to improve the LTE cell-edge performance.The schemes may be deployed independently or in the form of a hybrid scheme.For example, in a hybrid mode a UE may be scheduled to receive data from two points while a third point is muted, or a UE may be scheduled to receive data only from one point, but one or more points coordinate scheduling or are muted to reduce the interference. 
There are a number of studies in the literature of CoMP in the context of LTE.A discussion paper on CS/CB, JT CoMP, and relaying can be found in [5].In [6], JT CoMP is evaluated for increase of throughput and for energy efficiency when assuming that the channel quality indication (CQI) is derived from an accurate JT CoMP signal-to-interference-plus-noise ratio (SINR).The results show an increase of throughput at the cell edge and also 80% savings in energy efficiency per transmitted bit.In [7], a CS/CB scheme is studied for the case of full channel knowledge at the transmitter.The precoder design in this scheme exploits leakage of signal information to other cell.A similar approach has been used in [8], where JT CoMP is applied to cell-edge UEs and CS/CB to all users.In [9], interference coordination utilizing long-term channel covariance matrix information is studied.The use of long-term channelstate information (CSI) is reasonable when the cooperating points are not connected through a high-capacity and low-latency backhaul like optical fiber.Dynamic cell selection, in turn, has been studied in [10][11][12][13].In [10], a long-term channel quality measure is used for cell selection, and in [11] the cell selection metric is a wideband short-term channel quality, equal to the averaged SINR prior to receiver processing.System-level evaluation for dynamic cell selection based on post-processing SINR values can be found for homogeneous networks in [12] and for HetNets in [13].The system-level results of [14] show that CoMP techniques like JT and CS/CB meet the ITU global standard for international mobile telecommunications (IMT-Advanced) performance targets.In addition, the impact of network load on CoMP network performance is studied; however, the CQI feedback is not discussed. In [15], certain selected results from the 3GPP study item phase are shown.Some study item phase results are referred to in [16], where field test results of JT CoMP in the China 4G TDD mobile communication trial network are also presented.The results show prominent gains for JT CoMP in that TDD test network.An earlier field test for CS/CB and JT CoMP may be found in [17].Both schemes were found beneficial and possible to implement.As future challenges to be addressed they raise the issue of backhaul assumptions, clustering and multisite scheduling, downlink feedback design and synchronization between sites.During the study item phase, assumptions varied with regard to impairments modeling and feedback.For example, the CQI feedback was assumed ideal, and even when quantized, the post-scheduling CQI was assumed to be known by the network.Thus, the effect of different CQI feedback assumptions was not studied.Currently, in the Release 11 work item stage more specific evaluations are being conducted in order to extract gains under specific feedback assumptions.The CoMP work item addresses both frequency division duplexing (FDD) and time division duplexing (TDD), hence unified solutions should be targeted, as always in the case of LTE specifications. 
In this article, we look at CoMP transmission from an LTE downlink perspective, and focus in particular on the feedback signaling design and associated achievable system-level performance. Both closed-loop precoding and adaptive modulation and coding are applied to improve link performance. For closed-loop precoding, the base stations and the UEs share predefined codebooks [1]. The eNB selects the transmission weights and rates, and performs scheduling, in accordance with finite-rate user CSI feedback. The feedback consists of a CQI, a precoding matrix index (PMI), and a rank indication (RI). The CQI value represents the estimated post-processing SINR derived by the UE assuming the selected PMI. For SU single-cell transmission, the CQI estimation is straightforward, since the inter-cell interference is not coordinated, and therefore the level of interference estimated for CQI evaluation corresponds to the actual time of receiving the data signal. In CoMP operation, the CQI depends on the CoMP scheme and the interference hypothesis. For example, the interference level depends on CS/CB and on whether or not a cooperating point is muted. Also, there exist several tradeoffs when designing the feedback for CoMP. In addition to the traditional feedback load versus performance tradeoff, one may attempt to design a unified feedback that supports all available CoMP schemes or design a scheme-specific feedback, which then requires some higher-level control or other signaling to differentiate between different CoMP modes. There exists also a tradeoff between network- and UE-centric operation, which means that the decision or control of the cooperation level and the specific scheme is at the eNB or at the UE. Typically, the network has the control, but to some extent the UE is best aware of the current signal and interference conditions that it is experiencing. CQI accuracy and UE complexity also need to be taken into account. These are issues that have not so far been studied or reported systematically in the literature.
In this article, we examine the problem of feedback design and study the associated realistic system-level performance of CoMP in LTE. The higher-level starting point in this study is that different CoMP schemes require different CSI feedback. The minimum feedback needed for interference coordination is the precoder that would cause the worst interference if used at the interfering point. If that precoder is known, interference may be reduced by avoiding that spatial direction. For DPS, a metric for selecting the transmission point is needed. If a UE provides feedback per point, the selection may be made in accordance with the CQI. For JT, there exist several options ranging from per point feedback to aggregated feedback. Aggregated feedback means that the UE assumes JT transmission from N points and calculates the RI, PMI, and CQI for the aggregated channel. The main contributions of this article, addressing the above fundamental challenges in the practical deployment of CoMP in cellular mobile radio, are as follows:
• We present unified signal and system modeling to support a general hybrid CoMP scenario with varying numbers of transmission points in the JT.
• In an LTE-compliant model, we study and propose a practical CoMP feedback design for different CoMP modes.
• We evaluate the tradeoff between feedback load and complexity on the one hand and the achieved performance improvements on the other hand.
• We evaluate the realistic system-level performance of an LTE-Advanced network for different CoMP modes, covering various practical deployment scenarios, including intra-site coordination where multiple co-located sectors of an eNB cooperate, as well as cooperation within a sector, where RRHs operate within the coverage area of a high-power macro cell.
These simulation results with realistic UE feedback indicate that CoMP provides considerable cell-edge gains over the baseline Release 10 system. Further, when studying the CoMP schemes under biased handover conditions, it is seen that CoMP, and especially DPS, is a scheme that can aid with the mobility issues in real networks. This, in addition to improved cell-edge user performance, is seen as an important practical finding of this study.

The rest of the article is organized as follows. Section 2 presents the system model for LTE-Advanced and for hybrid CoMP. Section 3 describes CoMP in LTE, especially from the perspective of system and deployment scenarios, and Section 4 presents the feedback framework developed for CoMP. In Section 5, the system-level simulation results are presented. The conclusions are given in Section 6.

Notations: Throughout the article, an upper case bold letter A is used for matrices and a lower case bold letter a for column vectors. E(·) denotes expectation, Re(c) denotes the real part of a complex number c, Tr(·) denotes the trace of a matrix, ‖a‖ denotes the L2 norm of a vector a, and |a| denotes the absolute value of a scalar a.
System model
In this article, we consider the physical layer of the LTE-Advanced downlink for FDD operation, where the transmission scheme is orthogonal frequency division multiplexing (OFDM). In LTE-Advanced, the physical resource blocks (PRB) are defined as groups of 12 consecutive subcarriers in frequency, while the subframe/transmit time interval (TTI) duration is 1 ms and consists of 14 OFDM symbols. Thus, the minimum time-frequency resource allocation is 12 subcarriers over 14 OFDM symbols. More details on bandwidths and subcarrier spacings, for example, can be found in [1,18]. As inter-symbol interference may be removed using a cyclic prefix that is longer than the channel impulse response, we can consider the received signal per subcarrier in the frequency domain. To simplify notation, we omit the frequency and time domain indexing, and the signal model reflects subcarrier-level spatial samples within one multicarrier symbol, unless otherwise stated.

Signal model
We consider a downlink multi-cell system with a total of M transmission points, where each point has N_t transmit antennas and each user has N_r receive antennas. The signal y_k received by user k can be written as

y_k = H_{k,i} W_i x_i + \sum_{j=1, j \neq i}^{M} H_{k,j} W_j x_j + n_k,   (1)

where H_{k,i} is the N_r × N_t MIMO channel between the serving base station i and user k, and n_k denotes the scaled noise vector whose entries are i.i.d. complex Gaussian variables with zero mean and variance σ²/P, where σ² is the variance of the additive white Gaussian noise and P is the transmitted signal power. The precoding matrix W_i applied for the transmission has r_k columns, where r_k is the transmission rank for user k. The transmitted signal x_i is of length r_k × 1. Assuming spatially uncorrelated and equal-variance transmit signal elements, we have E(x_i x_i^H) = I_{r_k}, and the total transmission power is controlled by the precoding matrix by requiring Tr(W_i^H W_i) = 1. Each element of x_i, or each column of W_i, corresponds to a transmission layer for user k. The matrices H_{k,j}, with index j ∈ {1, ..., M}, j ≠ i, are the MIMO channels between the interfering transmission points and user k. The interfering transmission points transmit r_j layers, where each signal vector x_j is precoded by the precoding matrix W_j, j ≠ i.
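To make the bookkeeping of Equation (1) concrete, the following NumPy sketch generates one subcarrier-level received sample for user k. All dimensions, channel draws, and the noise level are illustrative assumptions, not values taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

M, Nt, Nr = 3, 2, 2        # transmission points, Tx/Rx antennas (assumed)
serving = 0                # index i of the serving point
r_k = 1                    # transmission rank for user k
sigma2_over_P = 0.1        # scaled noise variance sigma^2 / P (assumed)

# i.i.d. Rayleigh MIMO channels H_{k,j}, one per point
H = [(rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
     for _ in range(M)]

def unit_power_precoder(nt, r):
    """Random precoder normalized so that Tr(W^H W) = 1 (the power constraint)."""
    W = rng.standard_normal((nt, r)) + 1j * rng.standard_normal((nt, r))
    return W / np.sqrt(np.trace(W.conj().T @ W).real)

W = [unit_power_precoder(Nt, r_k) for _ in range(M)]
x = [rng.choice([1.0, -1.0], size=(r_k, 1)) + 0j for _ in range(M)]  # unit-power symbols
n = np.sqrt(sigma2_over_P / 2) * (rng.standard_normal((Nr, 1))
                                  + 1j * rng.standard_normal((Nr, 1)))

# Equation (1): desired term + inter-point interference + noise
y = H[serving] @ W[serving] @ x[serving] + n
for j in range(M):
    if j != serving:
        y += H[j] @ W[j] @ x[j]
print(y.ravel())
```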
If the transmission points cooperate, the interference conditions change. For example, a UE may be scheduled to receive data from two points while a third point is muted. Alternatively, a UE may be scheduled to receive data from only one point, while one or more points coordinate scheduling or mute to reduce the interference. A general signal model for hybrid CoMP, where M is the total number of points and N ≤ M points cooperate for user k, reads

y_k = \sum_{n=1}^{L} H_{k,n} W_n x_k + \sum_{n=L+1}^{N} \alpha_n H_{k,n} W_n x_n + \sum_{m=N+1}^{M} H_{k,m} W_m x_m + n_k.   (2)

Here, L ≤ N denotes the number of points that operate in JT. N is the total number of points that cooperate, which means that N − L points cooperate by reducing interference. M is the total number of points in the network; thus, M − N points operate in an uncoordinated way with respect to the other points. The term α_n describes the level by which the interference is reduced by the cooperation of the N − L points, and the subscript n is the point index. α_n = 0 means that point n is muted, and α_n = 1 that point n is in normal operation.

Single-cell operation in the LTE/LTE-Advanced system
The typical operation in LTE/LTE-Advanced is single-cell operation, which means that there is no cooperation between the eNBs. A UE selects the serving cell on the basis of received signal quality. In Release 10 LTE, different RSs are defined for channel estimation, namely CSI reference symbols (CSI-RS) and demodulation reference symbols (DM-RS). After cell selection, the eNB configures the CSI-RS and DM-RS configurations for the UE. From the CSI-RS configuration, the UE k measures the MIMO channel H_{k,i} and calculates the CSI feedback. The DM-RS is transmitted for demodulation purposes and enables the UE to measure the effective channel H_{k,i} W_i.

The UE feedback consists of a wideband RI and a wideband or subband PMI and CQI. The CQI may be seen as indicative of the post-processing SINR, i.e., the SINR per stream after receiver processing. It is possible to have fewer independently modulated and coded data streams N_s than there are transmitted layers r_k. In this case, one data stream is transmitted on several layers. In LTE, the maximum number of independently modulated and coded data streams N_s is two. This means that when the number of transmission layers, or equivalently the transmission rank, is higher than two, a so-called layer-to-codeword mapping procedure is applied [1]. In this context, a codeword means a block of channel-coded bits.

For the estimated MIMO channel, the UE selects a precoding matrix F_k^{(r_k)} of size N_t × r_k from a predefined codebook and feeds back the index, the PMI, as a recommendation to the serving eNB for the precoder W_i. Note that with these deliberately separate notations of F_k and W_i, we intend to point out that the precoder selection done by the UE is only a recommendation towards the eNB. For single-stream single-user transmission, the optimal choice of a precoding vector f_k for user k is known to be [19,20]

f_k = arg max_{g ∈ C(N_t, 1)} ‖H_{k,i} g‖,   (3)

where C(N_t, 1) is the predefined codebook. The nested property of a codebook containing codewords for different ranks means that the codewords of the higher-rank codebook include a codeword of the lower-rank codebook as columns. This kind of design has been introduced in order to aid rank override at the eNB. However, it depends on the codeword selection metric whether the selected codewords for higher and lower rank transmission options for the same channel realization follow the nested property.
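As a sketch of the single-stream selection rule in Equation (3), the snippet below picks, from a toy codebook, the vector maximizing the effective channel norm. The two-antenna codebook with QPSK co-phasing is an illustrative assumption, not the standardized LTE codebook.

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, Nr = 2, 2

# Toy rank-one codebook C(Nt, 1): four unit-norm 2-Tx vectors with QPSK
# co-phasing (illustrative only)
codebook = [np.array([[1.0], [p]]) / np.sqrt(2) for p in (1, 1j, -1, -1j)]

H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# f_k = argmax over the codebook of ||H g||, as in Equation (3)
norms = [np.linalg.norm(H @ g) for g in codebook]
pmi = int(np.argmax(norms))
print("selected PMI:", pmi, "effective channel norm:", norms[pmi])
```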
For multiple transmission layers, the optimal codeword selection criterion is a sum over the rates of the layers when the receiver processing is linear, and a codeword selected with this metric does not always contain the lower-rank codeword as columns [21].

In LTE-Advanced, the number of antennas at the base station may be two, four, or eight. For eight transmit antennas, the codebook has a double codeword structure [1,22]. One part of the codebook targets the wideband/long-term properties of the channel, and the second part targets the narrowband/short-term properties. Further details of the double codebook structure are beyond the scope of this article. The codebooks that support two and four downlink transmit antennas are single codebooks with separate codebooks for each transmission rank. In the 4-Tx (2-Tx) case, the UE selects one precoding matrix of size 4 × r_k (2 × r_k) for rank r_k transmission for each subband (i.e., a given number of PRBs).

The CSI feedback is derived at the UE on the basis of SU-MIMO transmission assumptions. However, MU-MIMO transmission is also possible in a standard-transparent manner, which means that an eNB may dynamically switch between SU and MU transmission strategies based on the available single-user feedback. In general, MU transmission has a CQI mismatch problem, since the post-processing SINR depends on the precoding matrix used for multiplexing the users, which depends, in turn, on the eNB scheduling decision [23-25]. Therefore, MU performance is greatly affected by the outer loop link adaptation (OLLA) algorithm [26], which tunes the link adaptation during the CQI reporting period based on the ACK/NACK received from the UE.

Similarly, MU CoMP can be considered in a standard-transparent way. For DPS and CS/CB, the MU scenario has issues similar to those of single-cell transmission. For JT CoMP, there is an additional power allocation problem if zero-forcing beamforming is used [27]. In this article, we consider SU single-cell MIMO operation as the baseline against which the SU CoMP methods are compared in terms of network performance.

CoMP in LTE-Advanced
Users in CoMP mode receive data from one or multiple points in the coordination area; hence, prior to receiving the data, they need to report the CSI feedback for these coordinated points. A CoMP measurement set is formed by the N cells/points for which the UE is measuring the CSI. For Release 11, the maximum CoMP measurement set size is N = 3. The point from which the UE would receive transmission in single-point mode is defined as the serving/fallback point.
In addition to the information exchange between the users and the transmission points, the cooperation requires information exchange between the cooperating points or a common scheduling entity that controls the set of cooperating points. The information that needs to be shared includes UE CSI feedback, scheduling decisions, and possibly user data. All delays in the information exchange affect the CoMP operation, and especially exchanging user data between the points may require extra capacity from the backhaul link. In addition, JT and DPS require that the user data is available and synchronized in the transmission points participating in JT or DPS for a particular UE. In particular, the synchronization of the user data requires a fairly ideal backhaul in both capacity and delay. Iterative CS/CB schemes are also prone to extra backhaul delays. The CoMP operation specified in Release 11 assumes an ideal fiber connection between the points that may cooperate. From the backhaul perspective, this enables JT and DPS as well as iterative CS/CB CoMP methods. The effects of a non-ideal backhaul and the X2 interface are to be evaluated in Release 12. The X2 interface is a protocol stack defined in the LTE standard for connecting eNBs [28]. The purpose of the X2 interface is to enable information exchange between different vendors' eNBs. The schemes that can be envisioned operating over a non-ideal backhaul and requiring information exchange over X2 are, for example, simple non-iterative CS/CB schemes, where eNBs simply avoid scheduling UEs that would likely cause strong interference to each other. These schemes need PMI feedback in the form of short-term feedback, or long-term interference covariance matrix CSI. The typical X2 backhaul average latency is 10 ms; however, the latency may also be around 20 ms [29]. For comparison, the subframe length is 1 ms, and CSI feedback may be triggered with 5 ms periodicity. Thus, the scheduling decisions, and consequently the interference conditions, may vary rapidly even if the channel is more stable, e.g., for low-mobility users. For these reasons, short-term feedback might not be convenient, due to the aging of the CSI report, if exchanged over the X2 backhaul.
CoMP network scenarios
The agreed CoMP work item targets specification of intra- and inter-cell DL CoMP schemes operating in homogeneous and HetNet deployments [30]. Four main scenarios have been studied so far:
• an intra-site scenario where multiple co-located sectors of the same eNB site are cooperating (Scenario 1), illustrated in Figure 5,
• an inter-site scenario with high-power RRHs where multiple non-co-located points having the same transmit power are cooperating (Scenario 2), illustrated in Figure 6,
• low-power RRHs within the coverage of the high-power macro cell, each operating its own cell ID (Scenario 3), illustrated in Figure 1, and
• low-power RRHs within the coverage of the high-power macro cell, each operating with the same cell ID (Scenario 4). In [31], Scenario 4 is discussed in more detail.

RSs for CoMP in LTE
In Release 11, it has been agreed that the UE may receive multiple CSI-RS configurations corresponding to the points in the measurement set. One CSI-RS configuration typically corresponds to transmission from one point, but it is possible to configure two transmission points under one CSI-RS configuration transparently to a UE. For example, two 2-Tx transmission points can be configured to a UE as two separate transmission points or as one virtual 4-Tx transmission point. In addition, the term CSI-RS resource is defined as a CSI-RS configuration together with an interference assumption, which provides a CQI assumption.

For selecting the points forming the CoMP measurement set, an eNB can monitor the uplink received signal powers, for example through sounding RSs. As multiple transmission points are connected to a centralized CoMP scheduler that receives the sounding RSs, the link qualities of the points involved in a CoMP cluster can be classified. After this, the best two or three points that are reliable for CoMP transmission are selected. The reliability of a point is defined such that its link power is within an X dB power window (usually 5-6 dB) of the serving point link power. Alternatively, the UEs may compute and report the received power value of the CSI-RS, that is, the received power of the CSI-RS transmissions from the points in the CoMP cluster. The eNB then selects the best points, which are the most suitable for CoMP transmission.

CSI feedback in CoMP
After measuring the channels of the cooperating points, the UE derives the RI, PMI, and CQI feedback. The feedback can be derived per CSI-RS configuration, that is, per point. In addition, it is possible to configure the CSI-RS over multiple points, with a UE configured to calculate feedback over geographically separated antennas in a standard-transparent manner. This feature is not evaluated in this article and is left for future work.
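A minimal sketch of the power-window rule described above, selecting a measurement set of up to three points within an assumed 6 dB window of the serving point; the function name and input values are hypothetical.

```python
def comp_measurement_set(rsrp_dbm, serving_idx=None, window_db=6.0, max_size=3):
    """Select up to max_size points whose long-term received power is within
    window_db of the serving point; the 6 dB window and set size of 3
    follow the text, everything else is an illustrative choice."""
    serving = (max(range(len(rsrp_dbm)), key=lambda p: rsrp_dbm[p])
               if serving_idx is None else serving_idx)
    eligible = [p for p in range(len(rsrp_dbm))
                if rsrp_dbm[serving] - rsrp_dbm[p] <= window_db]
    eligible.sort(key=lambda p: rsrp_dbm[p], reverse=True)
    # Keep the serving point first, then the strongest eligible neighbours
    ordered = [serving] + [p for p in eligible if p != serving]
    return ordered[:max_size]

# Example: four candidate points measured at the UE (illustrative powers, dBm)
print(comp_measurement_set([-85.0, -88.5, -93.0, -99.0]))  # -> [0, 1]
```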
Here, we select and feed back per point PMIs, because in this way the existing per point single-cell codebooks can be reused. In addition, we select the per point PMIs independently. Joint per point PMI selection for JT transmission has been proposed in [32]. While joint per point PMI selection improves the performance of JT transmission compared to independent per point PMI selection, such a joint selection increases the selection complexity and, moreover, is suboptimal for DPS and fallback transmission. In [33], Stiefel-Grassmannian per point codebooks have been proposed together with a Stiefel distance selection metric used for the second/weaker transmission point. The proposed Stiefel distance selection metric balances between maximizing the received power and maximizing the coherency of the transmission. The performance of JT transmission is improved; however, the selected codeword for the second/weaker point is no longer optimal for single-point transmission. With the per point independently selected PMIs, being a unified feedback, we study the need for additional inter-point PMI feedback for JT transmission and different CQI feedback options for JT and DPS CoMP. In CoMP operation, the CQI depends on the CoMP scheme and the interference hypothesis. That is, the CQI depends on L, N, and the interference assumption in Equation (2). The size of the measurement set, N, is known by the UE, as the network configures the CSI-RS resources for it.

CQI feedback options
Reducing the interference is beneficial for the selected transmission rate, because improved signal conditions increase the reliability of the link. However, from the link adaptation point of view, especially if there is a clear improvement in the interference conditions, for example due to muted points, full advantage can only be gained if the CQI feedback reflects the improved link quality. Therefore, precise CQI information capturing the interference conditions accurately is important from the performance point of view, even though OLLA can, to some extent, compensate for CQI inaccuracies.

From a feedback design point of view, the N = 2 case already results in several CQI options, as shown in Table 1, where S and I denote the respective signal and interference powers. Considering that the CSI-RS is configured per point and the UE selects one PMI per point, it is possible to derive several different CQIs to support different CoMP schemes simultaneously. The UE may derive an aggregated CQI for the JT transmission and multiple CQIs per point with different interference assumptions, thus making use of different α values. If N = 3, the CQI options are shown in Table 2: there are four different CQI options for the JT transmission, i.e., JT from all three points and JT from two out of three points, all with possibly different interference assumptions for the third point. In addition, there are per point CQIs with different interference assumption combinations for the two cooperating points. Note that if α < 1 for the CQI of the serving point, then an additional fallback CQI is needed for the serving point to secure the baseline single-cell transmission.
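The scalar sketch below illustrates how the per point interference hypotheses translate into different CQI values for N = 2, in the spirit of Table 1. The power values are hypothetical, and a real CQI derivation would also involve the PMI and receiver processing.

```python
import numpy as np

def sinr_hypotheses(sig_pow, coop_pow, other_pow, noise):
    """Per point SINR under the two interference hypotheses for N = 2:
    cooperating point transmitting normally (alpha = 1) or muted (alpha = 0).
    Powers are effective received powers; a simplified scalar sketch."""
    return {
        "CQI_DPS": sig_pow / (coop_pow + other_pow + noise),  # alpha = 1
        "CQI_DPB": sig_pow / (other_pow + noise),             # alpha = 0
    }

# Hypothetical effective powers (linear scale)
hyp = sinr_hypotheses(sig_pow=1.0, coop_pow=0.5, other_pow=0.2, noise=0.05)
for name, s in hyp.items():
    print(name, round(10 * np.log10(s), 1), "dB")
```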
It is clear that full CQI feedback supporting all transmission options is not feasible, as the number of CQIs may grow enormously. Note that the CQIs discussed above are per independently modulated and coded data stream; thus, a rank-two transmission assumption for one scheme would mean two CQIs for that scheme instead of one. In addition, the CQI may be per subband. Hence, the rank utilization, the feedback frequency granularity, and the number of points for which CSI feedback is computed all factor into the overall feedback overhead that needs to be sent from the receiver to the transmitter. In the following section, we conduct further analysis of these topics.

Tradeoffs in CoMP feedback design
The traditional tradeoff between feedback load and performance relates to the tradeoff between network-centric and UE-centric CoMP. UE-centric CoMP refers to operation where the UE selects the coordination set and the preferable CoMP scheme based on channel and interference measurements and sends the corresponding feedback. The advantage is that because the UE has instantaneous knowledge of the downlink channel and interference conditions, it may deduce the best CoMP feedback for these conditions. Thus, feedback savings are possible in principle because, for example, a UE could send feedback only when the channel conditions are good and only for specific CoMP schemes. From the network perspective, the richer the feedback the scheduler entity has, the better the expected network performance. If the network may receive information from every active UE and has, for example, information about the number of served UEs and the achieved transmission rates, it can more efficiently evaluate which CoMP schemes should be applied. This could be beneficial in enabling a flexible balance between transmission methods to the users. Thus, receiving feedback for multiple CoMP transmission hypotheses from one UE would be beneficial. When considering network-centric CoMP, which is the commonly supported method, higher-layer signaling should be considered as well. This means that the CoMP operation can be designed either transparently to the UE, meaning that the UE always feeds back certain CQIs based on the CSI-RS resources configured for it, or the UE may be configured by higher protocol layers to calculate a scheme-specific feedback.

CoMP scheduling
In 3GPP, the signaling and feedback between the network and the users are specified, but the packet scheduler is an eNB implementation-specific feature. The performance of an LTE/LTE-Advanced system largely depends on the packet scheduling algorithm applied at the network side. In the system-level evaluations of this article, a proportional fair (PF) packet scheduler with properly tuned scheduling parameters is used, with the aim of maximizing the baseline Release 10 performance. A single-point PF scheduler is analyzed and described in detail in [34].
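For reference, here is a minimal sketch of the classic PF metric used by such a scheduler (instantaneous achievable rate divided by an exponentially averaged served throughput). This is the textbook formulation, assumed here; the tuned scheduler of [34] may use a variant.

```python
def pf_metric(inst_rate, avg_tput):
    """Classic proportional fair metric for one user on one resource."""
    return inst_rate / max(avg_tput, 1e-9)   # guard against divide-by-zero

def update_avg_tput(avg_tput, served_rate, tc=100.0):
    """Exponential averaging of served throughput over a time constant of tc TTIs."""
    return (1.0 - 1.0 / tc) * avg_tput + served_rate / tc

# Example: a user with a good instantaneous channel but a high served average
# gets lower priority than a cell-edge user who has been served little.
print(pf_metric(10.0, 5.0), pf_metric(2.0, 0.2))   # 2.0 vs 10.0
```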
If CoMP is enabled, the same baseline PF scheduling with the same parametrization is used in the first stage to find the single-cell candidates to be scheduled, while in the second stage a CoMP-specific scheduling is performed. All the JT CoMP reporting UEs are sorted according to their PF metrics derived from the CoMP feedback. The highest JT CoMP PF metric in a given subband is compared against the sum of the single-cell users', also called the victim users', PF metrics. If the JT CoMP PF metric is higher than the sum of the victim UEs' metrics, the CoMP UE is scheduled and the victim UEs' allocations are altered accordingly. This scheduling algorithm is applied for each subband. DPS CoMP allocates resources to a UE from the point for which the UE reported the highest instantaneous wideband CQI. OLLA and the UEs' scheduling history are assumed to be shared between the points with no delay. In addition, the network is assumed to be fully synchronized.

Feedback to support DPS CoMP
The feedback to support DPS CoMP is per point feedback including RI, PMI, and CQI. PMIs are derived normally, as for single-cell transmission, and the CQI is derived from the SINR value. The SINR for user k from point i under a single-stream transmission assumption may be written as

SINR_{k,i} = |g_k^H H_{k,i} w_i|² / ( \sum_{j \neq i} \alpha_j ‖g_k^H H_{k,j} W_j‖² + σ² ),   (4)

where g_k is the normalized receiver combiner for user k and σ² is the noise variance. The CQI feedback options for DPS are relatively simple, since DPS refers to single-point transmission with possible muting assumptions for the cooperating points. For CoMP with two cooperating points, there are two CQI options for each point: the cooperating point may be muted or transmitting normally. We refer to these options as CQI^DPS_{k,i} when the cooperating point is not muted and CQI^DPB_{k,i} when it is muted. The DPS feedback can be network-centric or UE-centric. In the network-centric option, the UE feeds back per point feedback for all points, and in the UE-centric option only for the strongest point. Special care needs to be taken when considering fallback/single-cell performance, because single-cell operation is performed also in CoMP-eligible cells. A fallback point means that the serving point and the corresponding feedback should be Release 10 specific. A Release 10-specific CQI refers to the case where no muting or other form of cooperation is applied, that is, α_n = 1 for all n. The importance of always feeding back the fallback CQI is evaluated and illustrated in the results section.

Feedback to support JT CoMP
For JT CoMP, the comparison between aggregated feedback and per point feedback is highly relevant. JT transmission is possible with per point PMI and CQI feedback; in this case, the transmitter combines the PMIs and CQIs for the JT transmission. It is expected that inter-point feedback and an aggregated CQI would improve performance for JT CoMP. In the next sections, we present various precoding and CQI feedback options for JT CoMP.
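The second-stage decision described above, together with the DPS point selection, can be sketched as follows. The UE identifiers and metric values are hypothetical, and a real scheduler would also handle rank, subband bookkeeping, and the altered victim allocations.

```python
def schedule_subband(jt_candidates, victim_metrics):
    """Schedule the best JT CoMP user on a subband only if its PF metric
    exceeds the sum of the victim (single-cell) users' PF metrics."""
    if jt_candidates:
        best_ue = max(jt_candidates, key=jt_candidates.get)
        if jt_candidates[best_ue] > sum(victim_metrics.values()):
            return "jt-comp", best_ue
    return "single-cell", None

def dps_point(wideband_cqi_per_point):
    """DPS: serve the UE from the point with the highest reported wideband CQI."""
    return max(wideband_cqi_per_point, key=wideband_cqi_per_point.get)

print(schedule_subband({"ue7": 2.4}, {"ue1": 1.0, "ue3": 0.9}))  # ('jt-comp', 'ue7')
print(dps_point({"macro": 9, "rrh2": 12}))                       # 'rrh2'
```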
PMI feedback and inter-point combiner for JT
The simplest form of PMI feedback is per CSI-RS resource feedback. From a transmission perspective, each point then independently transmits the same data to the user; hence, coherent transmission is not possible without additional feedback. The additional feedback required for coherent transmission is an inter-point combiner describing the amplitude and phase of the transmission. The inter-point combiner for point n for single-stream transmission can be written as

c_n = a_n e^{jθ_n},   (5)

where θ_n is the inter-point phase combiner and a_n is the inter-point amplitude. The combiner phase is always a relative quantity; thus, without loss of generality, we may always select θ_1 = 0. For multi-stream transmission, the combiner can be defined per transmission layer or, in the most general form, as a matrix of dimension r_k × r_k, where the off-diagonal elements characterize the inter-layer effects. The transmission equation (2) for single-stream transmission, where all cooperating points perform JT (N = L), can be written as

y_k = ( \sum_{n=1}^{N} a_n e^{jθ_n} h^{eff}_{k,n} ) x_k + \sum_{m=N+1}^{M} H_{k,m} W_m x_m + n_k,   (6)

where h^{eff}_{k,n} = H_{k,n} w_n is the precoded channel between the kth user and the nth transmission point. For the two transmission points case, i.e., N = 2, the optimal amplitude combiners a_n can be selected as in [35]. In practice, however, power pooling between transmission points is not possible, because the total transmission power at a transmission point cannot be exceeded due to system specifications and regulatory issues. If the resources at both transmission points have been scheduled to a single user, it is, from the user perspective, always worth transmitting from both transmission points with full power rather than muting the weaker transmission point completely. Therefore, in the rest of the article, we set a_n = 1. For N = 2, which is the primary case in this article, we employ an optimal combiner phase θ_2 quantized uniformly with B bits. The optimal combiner phase θ_2 maximizes the norm of the sum of the two effective channels:

θ_2 = arg max_θ ‖h^{eff}_{k,1} + e^{jθ} h^{eff}_{k,2}‖.   (7)

While an aggregated PMI across all received CSI-RS resources may offer better feedback compression/performance compared to per CSI-RS resource feedback, it has several drawbacks. First, codebooks for various combinations of transmit points with different antenna configurations and types need to be designed. Second, the aggregated PMI selected with the JT hypothesis is not optimal for the DPS and CS/CB schemes. Unlike the aggregated PMI, the per point PMI feedback may be improved by the additional combiner (inter-CSI-RS resource) feedback. Although the separately coded inter-point feedback with combiner may require additional feedback compared to the aggregated PMI, it does not require new codebooks to be designed, and such a feedback is optimal for the DPS and CS/CB transmission schemes as well.
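A sketch of the B-bit quantized search implied by Equation (7); the random effective channels are illustrative.

```python
import numpy as np

def best_phase_combiner(h1, h2, bits):
    """Pick the B-bit uniformly quantized phase theta_2 maximizing
    ||h1 + e^{j theta} h2||, where h1 and h2 stand for the effective
    per point channels h_eff_{k,1} and h_eff_{k,2}."""
    phases = 2 * np.pi * np.arange(2 ** bits) / 2 ** bits
    norms = [np.linalg.norm(h1 + np.exp(1j * t) * h2) for t in phases]
    return phases[int(np.argmax(norms))]

rng = np.random.default_rng(2)
h1 = rng.standard_normal(2) + 1j * rng.standard_normal(2)
h2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)
for b in (1, 2, 4):  # BPSK, QPSK, and 16-point quantization
    theta = best_phase_combiner(h1, h2, b)
    print(b, "bits ->", np.linalg.norm(h1 + np.exp(1j * theta) * h2))
```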
CQI feedback for JT
The JT CQI may be estimated from per-cell CQIs, or an additional aggregated JT CQI (CQI^{JT,aggr.}) can be fed back. The aggregated SINR for JT for user k, for two transmission points and single-stream transmission, can be expressed as

SINR^{JT,aggr.}_k = |g_k^H (h^{eff}_{k,1} + h^{eff}_{k,2})|² / ( \sum_{m=3}^{M} ‖g_k^H H_{k,m} W_m‖² + σ² ).   (8)

From Equation (8), we note that SINR^{JT,aggr.}_k is a function of the channel gains. The channel gains, or the channels, are not available at the transmitter as such, but it is convenient to assume such availability in this discussion. For two transmission points and single-stream transmission, the channel gain G^{JT}_k for user k can be written as

G^{JT}_k = ‖h^{eff}_{k,1}‖² + ‖h^{eff}_{k,2}‖² + 2 Re( (h^{eff}_{k,1})^H h^{eff}_{k,2} ).   (9)

Plugging the first two channel gains into the numerator of the SINR equation (4) for DPB transmission, we may rewrite SINR^{JT,aggr.}_k as

SINR^{JT,aggr.}_k = SINR^{DPB}_{k,1} + SINR^{DPB}_{k,2} + ΔSINR,   (10)

where ΔSINR is a CQI mismatch which corresponds to the constructive/destructive addition of the channels from the two points. In other words, if the third term of Equation (9) is negative, the channel addition is destructive and ΔSINR is negative. When the term is positive, the addition is constructive and ΔSINR is positive. The constructiveness/destructiveness depends on the phase between the effective channel vectors and makes ΔSINR positive/negative with 50% probability, assuming no inter-point feedback information is used. In Equation (10), per-cell CQIs with the muting hypothesis are used. In order to investigate the impact of the CQI mismatch on the link performance, extended link simulations have been carried out under various CQI feedback hypotheses. The main simulation assumptions are summarized in Table 3. The simulation procedure is as follows: four RRHs are dropped into every sector of the hexagonal macro network. The users are dropped non-uniformly (Configuration 4b) into the middle site until a user satisfying the CoMP threshold is found. Network generation and user dropping are according to Scenario 3/4 in [18]. The found CoMP user is scheduled in JT CoMP mode and its feedback is computed. Finally, a pre-defined number of TTIs is simulated while OLLA is employed.

Figure 7 shows the performance of the estimated CQI for several settings of the muting hypothesis. In the case where the CQI^DPB are fed back, performance suffers only minor degradation. A similar investigation has been run with a QPSK combiner. Figure 8 shows that with the QPSK combiner, the CQI mismatch can be kept even smaller, and the performance of CQI^{JT,aggr.} can already be reached within 20 iterations of the OLLA algorithm. The CQI mismatch with CQI^DPB feedback can be minimized by the following approaches:
1. adapting the phase combiner (BPSK) with outer-loop phase adaptation (OLPA);
2. a cyclical phase shift at the time of transmission, i.e., a random/cyclical phase of the combiner;
3. scheduling a sufficiently large bandwidth, over which the SINR averages out due to the frequency-selective channel.
While the first approach always aims to keep the CQI mismatch positive, the other two approaches aim at setting E(ΔSINR) = 0. Figure 9 shows the impact of a BPSK cyclical phase shift per PRB on the CQI mismatch. A single frequency chunk of six PRBs has been scheduled in a round-robin manner. It can be seen that the cyclical phase shift efficiently averages out the above-mentioned CQI mismatch. While the LTE standard allows the phase shift per PRB, it might negatively impact the reliability of the dedicated channel estimation.
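The following Monte Carlo sketch illustrates the two properties used above: without phase feedback, the cross term of Equation (9) is destructive about half the time, while a BPSK cyclical phase shift (CPS) over the allocated PRBs drives its allocation average toward zero. A channel that is flat over the allocation is assumed for simplicity; all values are illustrative.

```python
import numpy as np
rng = np.random.default_rng(3)

trials, prbs = 10000, 6
h1 = rng.standard_normal(trials) + 1j * rng.standard_normal(trials)
h2 = rng.standard_normal(trials) + 1j * rng.standard_normal(trials)

# Per-PRB cross term of Equation (9) for a channel that is flat over the
# allocation: the same value repeats on every scheduled PRB
cross = 2 * np.real(np.conj(h1) * h2)
print("P(destructive addition):", np.mean(cross < 0))   # about 0.5

# BPSK CPS: alternate phases 0 and pi over the scheduled PRBs
cps = np.where(np.arange(prbs) % 2 == 0, 1.0, -1.0)
avg_no_cps = cross                       # allocation average without CPS
avg_cps = cross * cps.mean()             # alternating signs cancel pairwise
print("mean |avg| without CPS:", np.mean(np.abs(avg_no_cps)))
print("mean |avg| with CPS:   ", np.mean(np.abs(avg_cps)))  # 0 for even PRB counts
```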
Figure 10 shows the average throughputs as a function of simulated TTIs per user drop. Again, a single frequency chunk of six PRBs is scheduled. The impact of OLLA correcting the CQI mismatch is visible. While the cyclical phase shift improves the performance of the link with a small number of scheduled TTIs, after OLLA corrects the offset, the system without the cyclical phase shift performs better. In the case where the OLPA mechanism is applied, the performance of the link is significantly improved. The OLPA mechanism triggers the BPSK change of the phase combiner θ_2 between the two transmission points across all scheduled PRBs. In this way, the transmission is kept coherent most of the time. Figure 11 shows the impact of the allocated bandwidth on the CQI mismatch. The CQI mismatch decreases with the scheduled bandwidth, though not as much as with the cyclical phase shift. Moreover, scheduling 24 PRBs to a single user is very rare.

System-level CoMP simulation results
For the evaluation of the network-level downlink performance of the LTE-Advanced system, we simulate 19 sites, each having 3 sectors, as illustrated in Figure 5. In Scenario 3, four RRHs are randomly located in the geographical area of each sector of a site. All the transmit points located in one site are assumed to be connected to the eNB with a fiber connection. In these simulations, UEs are allowed to connect to center-site points only, and points located in the rest of the sites are considered as interfering points. This is done to achieve a realistic UE placement, so that the examined UEs are surrounded by interfering points, which is the case in real networks. Interfering points transmit using random ranks and PMIs. Two different UE dropping methods are used: uniform UE dropping (Configuration 1) and clustered dropping (Configuration 4b). After a UE is dropped, it selects its serving point. If the serving point is not located in the center-site area, the UE is discarded and a new UE is dropped. This is done until we have reached the total number of UEs. The simulation flow consists of several simulation drops, where each drop has randomly generated UE positions. The simulation parameters follow the 3GPP specification [30], while the UE dropping and the antenna radiation pattern are specified in [18]. In Table 4, we list the essential parameters and their values. All transmit points and UEs have two cross-polarized antenna elements; thus, we simulate 2 × 2 MIMO.
In the following, the performance of JT and DPS CoMP is analyzed at the system level. Normal operation in the simulations is single-cell SU transmission. The selection of the CoMP reporting UEs is based on the average signal levels of the serving point and the strongest interferer: CoMP is enabled for those cell-edge users that experience an average signal level difference between the serving point and the strongest interferer of less than 6 dB. We have utilized OLLA operation per UE, and for each UE the eNB updates a single OLLA value regardless of the transmission mode used. The major differences between the link-level studies presented in Section 4.5.2 and the system-level results presented in this section are the OLLA operation and the dynamic switching between the fallback single-point mode and the CoMP mode. For JT CoMP, the performance of different CQI options and of the phase combiner feedback are shown in Sections 5.1 and 5.2, respectively. In Section 5.3, we present a comparison of DPS and JT with different handover margins. The handover margin is described in [30]; it is used as a threshold to avoid repetitive UE handovers between cells. In the simulated network operation, the serving point selection is biased by the handover margin such that the serving point is a random selection among the points whose average signal strength is within the handover margin of the strongest point.

Non-coherent JT performance with different CQI options
Non-coherent JT CoMP is simulated at the system level to see the effect of the different CoMP CQI alternatives described in Table 5. Simulation results are shown in Tables 6 and 7 for HetNet Scenario 3 Configurations 1 and 4b, respectively. The average transmit point spectral efficiency is defined as the average transmit point downlink throughput divided by the system bandwidth. The coverage is defined as the 5th percentile UE spectral efficiency, that is, the cell-edge user throughput divided by the system bandwidth. The average transmit point spectral efficiencies of JT with different CQI assumptions are similar to the Release 10 SU-MIMO baseline. The minor performance degradation observed when CoMP is enabled is natural, as the normal operation in the cell is single-cell operation and CoMP is performed mainly for cell-edge users. Overall, the best coverage gain is achieved with JT CoMP and the aggregated CQI in both scenario configurations. Muted CQIs (CQI^DPB) without a correct fallback CQI show the worst performance, due to the approximated fallback CQI, in both configurations. Interestingly, the two-CQI feedback options, where one CQI is a non-muted CQI and the other is the muted CQI, perform better than the feedback option having three CQIs, i.e., two muted CQIs with the additional fallback CQI. It may be noted that this is not in line with the link-level results presented in Section 4.5.2, where the sum of two muted CQIs was shown to have the best performance. Note that in the link-level simulations the OLLA process was scheduled-band specific (round-robin scheduling) and no dynamic switching between fallback and JT CoMP was allowed. With the PF scheduling utilized here, different frequency sub-band resources can be assigned to users on a TTI basis. Thus, in the case of a frequency-selective channel, the CQI mismatch ΔSINR may vary, according to the results shown in Figure 9, by as much as 13 dB between frequency sub-bands within one TTI. In system-level simulations, the single wideband OLLA process, used both for JT CoMP and for fallback operation, works better if the estimated JT CQI is more pessimistic. The sum of two CQI^DPS, or the sum of a CQI^DPS and a CQI^DPB, gives a more pessimistic estimate of the CQI^JT than the sum of two CQI^DPB. The impact of an overly optimistic CQI estimate can be seen in Figure 12, where a higher OLLA backoff for the two-CQI^DPB option is observed. In contrast, the more pessimistic approaches show an OLLA backoff similar to the aggregated CQI, especially in Configuration 4b.
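Since the wideband OLLA loop absorbs much of the CQI mismatch in these results, here is a minimal sketch of a standard ACK/NACK-driven OLLA update. The step size and target BLER are common textbook assumptions; the article does not specify its exact parametrization.

```python
def olla_step(offset_db, ack, nack_step_db=0.5, target_bler=0.1):
    """One ACK/NACK-driven OLLA update: the ACK step is scaled so that the
    loop converges to the target BLER. The offset corrects the reported
    CQI/SINR before MCS selection (a common textbook rule, assumed here)."""
    ack_step_db = nack_step_db * target_bler / (1.0 - target_bler)
    return offset_db + ack_step_db if ack else offset_db - nack_step_db

# Example: nine ACKs roughly balance one NACK at a 10% BLER target
offset = 0.0
for ack in [True] * 9 + [False]:
    offset = olla_step(offset, ack)
print(round(offset, 3))   # close to 0
```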
Coherent JT performance with quantized phase combiner
System-level performance results for the phase combiner with different quantizations are shown in Tables 8 and 9 for HetNet Scenario 3 Configurations 1 and 4b, respectively. We used the aggregated CQI (CQI^{JT,aggr.}), since the aggregated CQI reflects the coherence gain estimated at the UE. Measurement errors and delays are modeled for the phase combiner in the same way as for the other feedback. In the case of single-stream transmission, one phase combiner is needed, but in the case where the UE reports rank 2, a phase combiner per layer is assumed to be signaled. As in the previous case, the average transmit point spectral efficiencies are close to each other and only coverage gains are observed. The phase combiner gives a maximum of 7.9% coverage gain over non-coherent JT in the case of Configuration 1 when 4 bits are used for the phase quantization. Based on these simulation results, simple 1-bit quantization captures the major part of the phase combiner gains and seems to be a balanced compromise between overhead and performance. However, one should note that the phase combiner only attempts to improve the JT CoMP scheme and has no use in the case of DPS or CS/CB CoMP.

DPS versus JT CoMP and the effect of handover margin
In addition to JT CoMP, other CoMP schemes are important in the LTE-Advanced evolution. In Tables 10 and 11, the performance of DPS CoMP and JT CoMP is shown with different handover (HO) margins. The handover margin biases the transmit point selection in the simulation modeling, i.e., any of the potential serving points providing the strongest links within the margin, according to the UE's measurements, may become the serving point. With a 0 dB handover margin, DPS CoMP provides an approximately 1% decrease in average transmit point spectral efficiency compared to the Release 10 SU-MIMO baseline, and over 16% and 9% coverage gains for the simulated CoMP HetNet Scenario 3 Configurations 1 and 4b, respectively. JT CoMP provides average spectral efficiency similar to the baseline, while the coverage gains over the baseline are 11% and 14% for HetNet Scenario 3 Configurations 1 and 4b, respectively. Based on these results, we conclude that DPS CoMP can outperform JT CoMP in Configuration 1; however, in Configuration 4b the situation changes. Overall, the gains of the DPS and JT CoMP schemes are quite similar. In terms of the UE signal quality, JT CoMP is superior to DPS, as shown in Figure 13, where the CoMP reporting UEs' SINRs are compared. However, the JT CoMP SINR gain comes at the cost of using the resources of two different points. Therefore, in terms of system performance, DPS CoMP can be a more efficient scheme than JT CoMP.
When comparing the performance shown in Tables 10 and 11, it can be seen that with higher handover margins, the overall performance degrades in both the baseline and the CoMP cases. For the SU-MIMO baseline, the point that is selected within the handover margin remains the serving point. Conversely, for DPS, the performance is partly recovered, as a change of the transmission point is possible, thereby boosting CoMP performance relative to the baseline. These results show substantial increases in the CoMP gains for both JT and DPS CoMP. In the case of JT CoMP, the 5th percentile throughput gain is roughly doubled, and in the case of DPS CoMP, the coverage gain of Configuration 4b increases from 9 to 26%. These simulation results indicate that CoMP provides the highest gains over the baseline Release 10 system when handover cannot be performed in an optimal way. Thus, CoMP, and especially DPS, can be seen as a scheme that aids with the mobility issues in real networks. This is an interesting and important practical finding of this study.

Conclusions
In this article, we have addressed the problem of feedback design and studied the associated link-level performance and the realistic system-level performance of CoMP in LTE-Advanced. We have studied practical finite-rate CSI feedback and CoMP feedback design, namely PMI and CQI feedback, for different CoMP modes, and we have evaluated the associated performance with both link-level and system-level simulations. The realistic system-level evaluations of LTE-Advanced CoMP were performed for different CoMP modes and for different practical deployment scenarios. These simulation results indicate that CoMP can provide considerable cell-edge gains over the baseline Release 10 system with realistic UE feedback. The results obtained and reported in this study also indicate that the nature of the deployment scenario has a clear impact on the relative performance of the JT and DPS type CoMP schemes. Relatively simple DPS schemes can outperform JT schemes in heterogeneous networks when the user distribution is not uniform but concentrated around the coverage areas of the RRHs. When studying the CoMP schemes under biased handover conditions, it was observed that the DPS CoMP scheme can clearly aid in the mobility management of real networks. This is a very important practical benefit, in addition to improved cell-edge performance, in cellular mobile radio systems.

Figure 1: Illustration of a heterogeneous network scenario with three base stations, each connected by an interface to three low-power nodes. Transmission is coordinated within the sectors of one base station as well as within its corresponding three low-power nodes.
Figure 3: Illustration of dynamic point selection, where the user is served by the single point with the better channel conditions.
Figure 5: Illustration of intra-site coordination, where transmission is coordinated within the sectors of one base station.
Figure 6: Illustration of inter-site coordination, where all three base stations are connected by fiber and controlled by one scheduling unit.
Figure 7: Extended link performance of non-coherent JT with several different CQI feedback hypotheses as a function of the scheduled link duration. The OLLA mechanism corrects the CQI mismatch at the transmitter.
Figure 9: Cumulative density function of the CQI mismatch with and without the BPSK cyclical phase shift.
Figure 12: Cumulative density function of the OLLA offset with different CQI feedback hypotheses.
Figure 13: Cumulative density function of the SINR with single-cell (point) transmission and two different multi-point schemes.
Table 11: DPS and JT CoMP performance in HetNet Scenario 3 Configuration 4b with different handover margins.
Self-efficacy and enjoyment of physical activity in children: factorial validity of two pictorial scales

Background
Self-efficacy and enjoyment are two main constructs proposed within many motivational theories in any human endeavor, sport and physical activity included.

Methods
The purpose of this study was to examine the factor structure of two pictorial scales measuring self-efficacy and enjoyment levels in a sample of 14,035 Italian schoolchildren (7,075 boys and 6,960 girls, 6- to 7-year-olds). An important feature of the two scales is their pictorial format, intended to prompt a straightforward understanding in children. The whole sample was randomly split into two subsamples according to gender and age, and the factor structure of the measures was examined across subsamples.

Results
Data were subjected to confirmatory factor analysis, which yielded satisfactory fit indices on the measures of both subsamples. Overall, the findings supported the single-factor structure of the scales, which can be easily administered to 6- to 7-year-old children to assess two relevant psychological constructs in physical education.

INTRODUCTION
Self-efficacy and enjoyment are central mechanisms underlying motivated behaviours, such as sport and physical activity (see Bandura, 1997; Ryan & Deci, 2017). Self-efficacy is defined as an individual's belief in their own capabilities to accomplish a task or succeed in specific situations; it is considered the cognitive mechanism that mediates information on personal capacities to successfully execute necessary courses of action in a specific domain (Bandura, 2001). In the context of physical education and sport, self-efficacy has been extensively examined (Feltz, Short & Sullivan, 2008) and identified as an important correlate of physical activity and fitness, mediating children's achievement striving (Feltz, 1992; McAuley & Blissmer, 2000; Barnett et al., 2011). In fact, research evidence has shown physical self-efficacy to be both a determinant and a consequence of physical activity (McAuley, Peña & Jerome, 2001). Efficacy beliefs refer to judgments about the ability to accomplish a task, while previous positive experiences of accomplishment can enhance self-efficacy beliefs. Hence, an individual's judgement of motor ability and skill levels can be considered an important factor in self-perception, because successful performance is associated with high self-efficacy (Moritz et al., 2000). Furthermore, physical performance and self-perceived physical fitness are positively related to perceived competence (Sollerhed et al., 2008), which has been shown to predict participation in physical activity (Bauman et al., 2012; Di Battista et al., 2018) and enjoyment (Cairney et al., 2012). In the physical activity and sport domains, enjoyment is conceptualized as a positive affective response resulting from participation that reflects generalized feelings typically described as pleasure, liking, and fun (Scanlan & Simons, 1992). The experience of enjoyment during physical education classes is associated with enhanced intrinsic motivation, increased physical activity participation, and the adoption of active and healthy lifestyles (Wallhead & Buckworth, 2004; Dishman et al., 2005; Jaakkola et al., 2017; Bortoli et al., 2018; Vitali et al., 2019).
Thus, understanding enjoyment motives and other variables known to influence physical activity levels, successful motor experiences, and the improvement of physical fitness can help researchers and practitioners design more effective intervention strategies to promote healthy lifestyles among school-aged children. In the health psychology literature, physical self-efficacy and enjoyment are currently considered important correlates and determinants of physical activity and healthy behaviors in adults (Trost et al., 2002), children, and adolescents (Lubans, Foster & Biddle, 2008). To broaden our understanding of the mechanisms underlying the antecedents and consequences of these two variables, there is a need for valid and reliable measures in the assessment of people of all ages, children included.

Different measures have been developed to gauge self-efficacy in 8- to 12-year-old children. Some of them are often used in health research and refer to coping self-efficacy, which is the confidence in performing physical activity despite encountering social or environmental barriers (e.g., "I can be physically active even if I have to stay at home"; Bartholomew et al., 2006). Other measures refer to task self-efficacy, namely, the confidence in using specific motor capabilities or performing skills (e.g., "I am able to do very difficult exercises"; Colella et al., 2008). Enjoyment in physical activity is often measured using the Physical Activity Enjoyment Scale (PACES; Kendzierski & DeCarlo, 1991). The PACES is a scale consisting of 16 bipolar items, originally developed to assess the extent to which individuals enjoy doing any given physical activity. Preliminary evidence of reliability and validity has been found with samples of university students. Motl et al. (2001) revised the PACES for use with young adolescent females. Moore et al. (2009) found the revised form of the PACES to be a valid measure of enjoyment of physical activity also in 8-year-old children. De Civita et al. (2005) highlighted the need to pay special attention to developing assessment instruments adequately validated for young children. To maximize reliability and validity, the items need to be adapted to the individual's developmental stage, level of emerging sense of self, cognitive capacity, and emotional awareness. Given the importance of assessing the self-efficacy and enjoyment constructs in children, the purpose of this study was to develop two new short measures typified by a pictorial response format adequate for 6- to 7-year-olds, and to examine their factor structure. We also examined possible differences in self-efficacy and enjoyment by gender and age.

Participants
The sample consisted of 14,035 children aged 6 to 7 years (7,075 boys and 6,960 girls). The participants were drawn from about 800 mixed-gender classes of primary schools located in a region in Central Italy. All classes were involved in a large physical activity project named "At school of health: Increase in physical activity in the I and II classes of Primary School". The main goal of the project was to prevent obesity and promote healthy lifestyles in children. Physical activity was conducted during customary lessons held by expert physical education teachers.

Self-efficacy
Colella et al. (2008) developed a 6-item physical self-efficacy scale to assess perceived speed, strength, coordination, and fatigue in girls and boys ranging in age from 8 to 10 years.
To render the scale more easily understandable by younger children (i.e., 6- to 7-year-olds) and to help them grasp the meaning of the items, we selected four statements and slightly modified the items by representing them with emoticons and pictograms (see Scales S1). Item scores ranged from 1, indicating low efficacy (e.g., "I run very slow"), to 4, representing high efficacy (e.g., "I run very fast").

Enjoyment
Individuals' enjoyment of physical activity was measured using four items selected from the 16-item Physical Activity Enjoyment Scale (PACES; Carraro, Young & Robazza, 2008), which was intended to gauge the enjoyment of 11- to 19-year-old students involved in physical education classes at school. The scale was slightly modified by anchoring the items to emoticons to render them easily understandable by children (see Scales S1). Item scores ranged from 1 (not at all) to 5 (very much). For both scales, participants are required to think of themselves when playing or performing physical education exercises. They are then asked to indicate, for each item, the response that best represents their personal feelings.

Procedure
Agreement to conduct the study was sought from the school headmasters after the purpose of the study had been explained to them. All the participants' parents provided written informed consent, with anonymity and confidentiality being assured for all the participants. Ethical approval for the study was obtained from the Health Department of the Abruzzo Region in reference to the Regional Prevention Plan 2014-2018, Program 2, Action 2. Assessment was conducted by a team of experts specifically instructed in the assessment procedure. The measures were administered in small groups of children at the end of physical education lessons and without the presence of the teachers. Children were presented with the scales, informed that there were no right or wrong responses, and assured that their answers were confidential. Before commencing the assessment, the researchers made sure that all children had a correct understanding of the instructions and items.

Data analysis
Data were preliminarily examined for missing values, and 124 cases with missing data were deleted. To examine the factor structure of the measures, the whole sample was randomly split into two subsamples, which were homogeneous in terms of gender and age (see Data S2). Confirmatory factor analysis (CFA) was then conducted to assess the factorial validity of the scales across subsamples. Given that the data distributions of both measures were negatively skewed, a robust diagonally weighted least squares (DWLS) estimation was used. In particular, model parameters, standard errors, and chi-square statistics robust to non-normality were estimated using the weighted least squares mean- and variance-adjusted method (WLSMV; Muthén & Muthén, 2017), which is appropriate for estimating CFA model parameters with ordered categorical variables (see Finney & DiStefano, 2013). Flora & Curran (2004) demonstrated that WLSMV produces accurate test statistics, parameter estimates, and standard errors of CFA models with sample sizes ranging from 100 to 1,000. Although WLSMV works well with relatively small sample sizes, according to Brown (2015) very skewed categorical indicators call for larger samples. This is a reason why we involved a very large sample of children in our study.
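As a companion to the fit evaluation reported below, the following sketch computes three of the chi-square-based fit indices from a fitted model's and the baseline (null) model's statistics, using the standard formulas from the general SEM literature (an assumption on our part; Mplus applies WLSMV-specific corrections on top of these). All input values are hypothetical.

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """CFI, TLI, and RMSEA from model (m) and baseline (b) chi-squares.
    Standard ML-based formulas; robust WLSMV versions add corrections."""
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, 0.0)
    cfi = 1.0 - d_m / max(d_b, d_m, 1e-12)
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)
    rmsea = math.sqrt(d_m / (df_m * (n - 1)))
    return cfi, tli, rmsea

# Hypothetical chi-square values for a single-factor, four-indicator model
print(fit_indices(chi2_m=45.0, df_m=2, chi2_b=5000.0, df_b=6, n=7000))
```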
The comparative fit index (CFI), the Tucker-Lewis index (TLI), the root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR) were examined. A good model fit is inferred when the CFI and TLI values are close to .95, the SRMR is lower than .08, and the RMSEA is lower than .06 (Browne & Cudeck, 1993; Hu & Bentler, 1999; Schumacker & Lomax, 2004). All data analyses were conducted in Mplus version 8.3 (Muthén & Muthén, 2017).

RESULTS
Descriptive statistics and fit indices of the self-efficacy scale and the enjoyment scale across subsamples, gender, and age are reported in Table 1. As can be seen, the CFA results yielded satisfactory fit indices for both measures. A CFA was also conducted on the measurement model (i.e., the two measures together) using the data of the whole sample. Good fit indices were found for this model: CFI = .987, TLI = .980, RMSEA = .039 (90% CI [.036-.042]), SRMR = .027, with standardized factor loadings ranging from .649 to .810. The correlation between the self-efficacy and enjoyment latent factors was .540. To examine possible gender and age differences in the total sample, the mean scores of the two scales were transformed using the NEWX = 1/(K − X) formula proposed to adjust negatively skewed data (Tabachnick & Fidell, 2013), where the constant K was the largest score + 1. A multivariate analysis of variance (MANOVA) by gender and age on the transformed mean scores of the scales yielded significant results for gender, Wilks' λ = .984, F(2, 14030) = 111.095, p < .001, η²p = .016. Univariate follow-up showed significant gender differences.

DISCUSSION
The aim of this study was to examine the factor structure of two pictorial scales measuring self-efficacy and enjoyment levels. The findings provided support for the internal consistency and factorial validity of a single-factor structure of the scales, which can be easily administered to 6- to 7-year-old children. Therefore, the results suggest that these scales can be used to assess the self-efficacy and enjoyment of children involved in physical activities in school settings. Their use could promote additional research to examine, in particular, concurrent and predictive validity. An important feature of the two scales is their pictorial format, which makes them easily understandable for children. The advantages of using pictorial scales initially emerged in clinical and experimental contexts interested in measuring the perception of physical exertion in children and adolescents (Robertson et al., 2000). Indeed, perceived exertion scales for adults proved to be unreliable, because they were not matched to the cognitive development of children under the age of 9 to 10 years. In particular, 6- to 7-year-old children are at the start of the middle childhood stage, a developmental phase of cognitive capacities, physical self-perception, and emotional awareness. At this stage, children are unable to reliably report their perceptions and feelings by assigning numbers to words or phrases. They may also find it difficult to understand words that do not belong to their current vocabulary (Williams, Eston & Furlong, 1994). For these reasons, Robertson et al. (2000) developed the Children's OMNI Scale of Perceived Exertion, a perceived exertion scale specifically designed for use with children. The scale contains both pictorial and verbal descriptors representing a cyclist on a slope, positioned along a numerical scale ranging from 0 (not tired at all) to 10 (very, very tired).
The term OMNI is a contraction of the word omnibus, indicating a scale with broadly generalizable measurement properties. Different OMNI Scales were later developed to assess the exertional perceptions of children engaged in other dynamic exercise modes, such as walking or running (Utter et al., 2002; for other examples of OMNI Scales, see Heyward & Gibson, 2014; Armstrong & Van Mechelen, 2017). In contrast to the single-item format of the OMNI Scales, Coulter & Woods (2011) used a pictorial-style self-report measure comprising six items to examine children's active behaviors out of school and their enjoyment of the physical activities they take part in. However, the factor structure of the scale was not examined. Taking a more detailed perspective, Barnett et al. (2016) developed a pictorial instrument (PMSC; Pictorial Scale for Perceived Movement Skill Competence) for children aged 4-5 years to gauge, with the help of an adult, their perception of six locomotor and six object control skills based on Ulrich's (2000) Test of Gross Motor Development (TGMD-2). Drawing on previous studies using pictorial scales, we developed two new short, self-assessment pictorial scales measuring self-efficacy and enjoyment, to be easily and quickly administered to children. These scales contained selected and adapted items from Colella et al.'s (2008) physical self-efficacy scale for children and Carraro, Young & Robazza's (2008) Physical Activity Enjoyment Scale. Item responses were then transformed into a pictorial format to make them more ''child friendly''. CFA yielded satisfactory fit indices across gender and age, thereby suggesting that the two scales can reliably gauge self-efficacy and enjoyment of physical activity in children. The negatively skewed distribution of scores on both measures indicates a general perception of high self-efficacy and enjoyment levels, which is desirable in children engaged in physical activity. Therefore, low scores on one or both scales may reveal individual issues related to participation in physical tasks and suggest a need for appropriate interventions. In a review of potential mediators of children's physical activity, Brown, Hume & ChinAPaw (2009) underlined the need for future research examining the psychometric properties of measures of potential mediators in different study samples, to ensure that appropriate, valid, and reliable instruments are used. More clearly identified associations between hypothesized mediators and physical activity can facilitate the development of more effective interventions. Physical activity is considered one of the most important factors for successfully preventing or treating childhood overweight and obesity (World Health Organization, 2016), and interventions often target young children, even preschool children (Ward et al., 2016). Our scales may contribute to this call for the use of appropriate instruments, adequately validated in the specific population of interest, in the assessment of relevant constructs in physical activity settings. For applied purposes, assessing self-efficacy and enjoyment can enable teachers to identify early those children with negative attitudes toward physical activities. Most children like moving and playing, which are essential components of their development. Through active play, children not only refine their physical abilities and learn motor skills, but also develop social relationships, self-confidence, and creativity (Truelove, Vanderloo & Tucker, 2017).
Therefore, teachers should pay special attention to less active children and find specific goals and strategies to enhance their intrinsic motivation. Moreover, valid measures would enable teachers to assess the effectiveness of physical activity programs aimed at enhancing children's interest and motivation. CONCLUSIONS Self-efficacy and enjoyment have important consequences for individuals' quality of life (Kahneman, Diener & Schwarz, 1999; Morano et al., 2016), which is a multidimensional construct that reflects one's perceptions of fitness, life satisfaction, and wellbeing (Bowling, 2001). Enhancing children's enjoyment of physical education and their actual and perceived physical abilities is expected to stimulate the adoption of an active lifestyle and improve health-related quality of life (Vitali et al., 2019). Given the importance of the two constructs, we have developed two new short pictorial scales that are very easy to administer and adequate for 6-7-year-olds. Of note, the two scales do not require direct assistance from an adult for the assessment of children. A limitation of this study is that we only examined the factor structure of the two scales. Additional research is necessary to establish the validity and reliability of these scales in physical education and in different physical activity domains, including sport and leisure. Further validation is also needed to take account of a range of variables, such as age, gender, physical abilities, body mass index, and culture (Crocker, Bouffard & Gessaroli, 1995). Taken together, our study findings provide initial evidence for the use of the two pictorial scales with 6-7-year-old children in physical education contexts.
2019-08-10T13:03:55.913Z
2019-07-29T00:00:00.000
{ "year": 2019, "sha1": "83162dbb30ddf6023c90cfefa8a10d10a5d77f23", "oa_license": "CCBY", "oa_url": "https://peerj.com/articles/7402.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "83162dbb30ddf6023c90cfefa8a10d10a5d77f23", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
234113975
pes2o/s2orc
v3-fos-license
Quadrature Integration Techniques for Random Hyperbolic PDE Problems. In this paper, we consider random hyperbolic partial differential equation (PDE) problems following the mean square approach and the Laplace transform technique. Randomness requires not only the computation of the approximating stochastic processes, but also of their statistical moments. Hence, appropriate numerical methods should allow for the efficient computation of the expectation and variance. Here, we analyse different numerical methods around the inverse Laplace transform and its evaluation by using several integration techniques, including the midpoint quadrature rule, the Gauss-Laguerre quadrature and its extensions, and the Talbot algorithm. Simulations, numerical convergence, and computational process times are shown for the experiments. Introduction Random hyperbolic partial differential equations (PDEs) are mathematical models that describe wave phenomena with applications in various fields: fluid mechanics [1,2], electromagnetic radiation [3], geosciences [4], and many others. The theory of hyperbolic problems has been well developed based on the assumption that the parameters of the model, such as the coefficients or initial values, are exactly known, which is not the case in the real world, where measurement errors and the unavailability of measurements occur. This has caused increasing interest in random models, which can estimate the impact of the uncertainty on the predicted solution. The solution is found numerically due to the complexity of random models. Following the mean square approach [5], we can extend existing numerical methods for deterministic problems to the random case by applying the Monte Carlo method [6,7] in order to approximate the statistical moments of the solution. Nevertheless, iterative numerical methods require the storage of preliminary results and a huge number of repetitions, which leads to the necessity of enormous computational resources and makes them inappropriate for dealing with random models. Thus, it becomes urgent to search for an accurate and fast numerical algorithm. Integral transforms are a good alternative, as they allow us to construct the solution at one fixed point, not necessarily in the whole domain as occurs in the case of finite difference methods, as shown in the literature [8]. Integral transform methods convert the original random PDE to an ordinary differential equation (ODE), which can be solved analytically, in some cases, or numerically. Once the solution of the random ODE is obtained, the inverse transform is applied in order to restore the solution of the original problem. This inverse transform can be done by the definition, i.e., integrating over the infinite domain, or by using some numerical technique [9]. There are several widely used methods: Fourier series, the Stehfest approach [10], and the Talbot inverse algorithm [11]. Because the inverse Laplace transform is an ill-posed problem, the regularization property of the numerical algorithm is necessary. In this sense, the Talbot inverse becomes the best option, since it guarantees the regularization property, while other numerical inversion schemes fail in dealing with noisy data [12]. In this work, we construct a numerical solution for random hyperbolic PDE models, not only constructing the approximating stochastic process solution, but also computing its expectation and variance.
Thinking of practical applications, we deal with random models where the uncertainty is described by stochastic processes (s.p.'s) having a finite degree of randomness ([5], p. 37); this means that the involved s.p.'s take the form h(t) = g(t, V_1, ..., V_m), (1) where V_i, 1 ≤ i ≤ m, are mutually independent random variables (r.v.'s). We propose an analytic-numerical approach based on the random integral transform technique combined with various numerical integration methods, such as the midpoint rule, Gauss quadratures, and the Talbot inverse [11]. The Monte Carlo method is used for the evaluation of the integrands involving the solution of random ordinary differential problems, and also for the computation of the expectation and variance of the approximating stochastic process solution. The oscillatory nature of the appearing integrands deserves careful attention, because not all of the quadrature rules are advisable [13][14][15]. The proposed analytical-numerical approach for solving the random hyperbolic PDE problems considered in this paper includes well-known, state-of-the-art numerical integration methods, which are compared among themselves in terms of accuracy and computational time: the midpoint quadrature rule, the Talbot algorithm for the Laplace inverse, the Gauss-Laguerre quadrature, the exponential-fitting Gauss-Laguerre quadrature, and adaptive quadrature. This comparison is provided to highlight the advantages and drawbacks of each method. Moreover, this combined approach is compared with standard finite-difference methods for solving the random hyperbolic PDE problem. In all cases, Monte Carlo simulations are used in order to calculate the statistical moments of the random solution process. The rest of the paper is organized as follows. In Section 2, the random hyperbolic PDE problem is formulated and the random Laplace transform method is briefly described. Section 3 proposes numerical integration methods for the Laplace inverse, while Section 4 gives an algorithm for the Monte Carlo simulations. All of the proposed methods are compared in a series of numerical tests in Section 5. Section 6 discusses the results. Preliminaries and Integral Transform for Random Hyperbolic PDE This section begins by recalling previous results and definitions [8,16]. Let us consider a complete probability space (Ω, F, P) and the set L_p(Ω) of real-valued random variables Y with the p-norm defined by ||Y||_p = (E[|Y|^p])^{1/p}, (2) where the expectation satisfies E[|Y|^p] < ∞; L_p(Ω) is a Banach space [17]. By using definition (2), the integrability, continuity, and differentiability of a function Y(t) ∈ L_p(Ω) can be defined straightforwardly. Note that, if p = 2, we have the mean square (m.s.) case. Let C be the class of all m.s. locally integrable 2-stochastic processes (2-s.p.'s) h(t) defined on R such that h(t) = 0 for all negative arguments and the 2-norm satisfies the exponential growth bound ||h(t)||_2 ≤ M e^{c_0 t}, t ≥ 0, for some constants M > 0 and c_0 ≥ 0. (3) Subsequently, for h(t) ∈ C, the m.s. integral H(s) = ∫_0^∞ e^{−st} h(t) dt (4) exists, where s is a complex number with real part Re(s) > c_0 ≥ 0, and it is called the random Laplace transform of the 2-s.p. h(t). The constant c_0 is chosen such that Re(s) > c_0 specifies the region where H(s) is analytic, H(s) having some form of singularity on the line Re(s) = c_0 [9]. If H(s) is known, then the random inverse transform for t > 0 is defined as h(t) = (1/(2πi)) ∫_{α−i∞}^{α+i∞} e^{st} H(s) ds, (5) where i stands for the imaginary unit and α > c_0 [16].
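To make the notion of a finite degree of randomness concrete, here is a small numpy sketch (our own illustration, not code from the paper) that estimates the 2-norm of a process h(t)(ξ) = V(ξ) e^{−t} driven by a single random variable:

    import numpy as np

    rng = np.random.default_rng(0)
    V = rng.uniform(0.0, 1.0, size=100_000)   # one r.v. => one degree of randomness

    def p_norm(samples, p=2):
        # ||Y||_p = (E[|Y|^p])^(1/p), estimated by Monte Carlo sampling
        return (np.abs(samples) ** p).mean() ** (1.0 / p)

    h = lambda t: V * np.exp(-t)              # h(t)(ξ) = V(ξ) e^{-t}
    print(p_norm(h(1.0)))                     # ≈ e^{-1}/sqrt(3), since E[V^2] = 1/3

For V uniform on (0, 1) the estimate should approach e^{−t}/√3, which also verifies that such a process satisfies the exponential growth bound (3).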
For the purposes of the present study, we recall some important properties of the random Laplace transform (4): if the s.p. h(t) is twice m.s. differentiable and h′(t), h″(t) belong to C, then L[h′(t)](s) = s H(s) − h(0) and L[h″(t)](s) = s² H(s) − s h(0) − h′(0). (6) In this paper, we consider a one-dimensional random hyperbolic PDE (7) modelling the s.p. of the vibrating string motion u(x, t), depending on the spatial variable x and time t, where a(x)(ξ) > 0 and b(x)(ξ) are m.s.-continuous stochastic processes with a finite degree of randomness, absolutely integrable with respect to the spatial variable on R, and c(ξ) is a random variable (r.v.). The s.p.'s f_0(x)(ξ), f_1(x)(ξ), g_0(t)(ξ), and g_1(t)(ξ) are functions depending on a finite number of r.v.'s that represent the random initial and boundary conditions with a finite degree of randomness. The random hyperbolic partial differential equation (PDE) (7) is solved using an analytic-numerical method based on the Laplace transform combined with an appropriate numerical integration technique. In this paper, we consider various quadratures for the inverse Laplace transform. Following the ideas of [8,18], let us define the random Laplace transform with respect to the temporal variable as U(x, s)(ξ) = ∫_0^∞ e^{−st} u(x, t)(ξ) dt. Because u(x, t)(ξ) is a twice m.s. differentiable s.p., the differentiation properties (6) apply, and (7) is transformed into the random non-homogeneous ordinary differential equation (ODE) (13) with respect to the spatial variable. Assuming a(x)(ξ) > 0 for each event ξ ∈ Ω, Equation (13) is a linear second-order ODE with respect to the spatial variable, which can be solved analytically in some cases, or numerically in others. Because the boundary conditions (9) for the PDE are functions of t, the boundary conditions for (13) are the corresponding Laplace transforms of (9): U(0, s)(ξ) = L[g_0(t)(ξ)](s), U(L, s)(ξ) = L[g_1(t)(ξ)](s). (14) Once the solution U(x, s)(ξ) is obtained, the real-valued u(x, t)(ξ) is restored by using the random inverse Laplace transform given by (5). Taking advantage of the relationship between the inverse Laplace transform and Fourier cosine integrals, see [9], the following formula is used: u(x, t)(ξ) = (2 e^{αt}/π) ∫_0^∞ Re[U(x, α + iw)(ξ)] cos(wt) dw, (15) where Re[·] stands for the real part of a complex number. Note that the integrand appearing in (15) has an oscillatory kernel that deserves special care in the numerical integration. Numerical Integration Methods This section briefly describes well-established integration methods for integrals of the type (15). The numerical solution of Equation (7) is constructed in the domain ∆ = [0, L] × [0, T] for each fixed event ξ. Let us introduce a uniform grid {x_j, t_n}, such that x_j = j ∆x, ∆x = L/N_x, 0 ≤ j ≤ N_x; t_n = n ∆t, ∆t = T/N_t, 0 ≤ n ≤ N_t. (16) At each node (x_j, t_n), the numerical solution is denoted by u_j^n(ξ) for each realization of ξ, and it is obtained by approximating the integral (15). Hence, at every fixed (x_j, t_n), the following integrand function is defined: f(w) = Re[U(x_j, α + iw)(ξ)] cos(w t_n), (17) where U(x_j, α + iw)(ξ) is the numerical solution of the ODE (13) at the point x_j for the fixed value s = α + iw. Now, we briefly describe all of the considered methods for numerical integration. Midpoint Quadrature Rule The midpoint quadrature rule is a method for approximating the integral (15) based on Riemann sums, the simplest case of the open Newton-Cotes formulas, on the truncated domain [0, R]. In the general case, the midpoint quadrature rule is written as ∫_0^R f(w) dw ≈ h_MP Σ_{k=1}^{N} f(w_k), where w_k = (k − 1/2) h_MP and h_MP = R/N. (18) It is well known that the main advantage of this method is its simplicity of implementation and its use of all the information about the integrand, which makes it applicable to a wider class of integrand functions [14]. However, high accuracy of the quadrature requires a large enough value of N, leading to increasing computational cost. In the case of an improper integral (over an infinite domain), the method can also be sensitive to the choice of R.
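As a quick self-contained check of this inversion route, the following sketch (ours; the test pair L{e^{−t}} = 1/(s + 1) is chosen only for convenience) applies the midpoint rule to the Fourier-cosine form (15):

    import numpy as np

    def ilt_midpoint(F, t, alpha=0.5, R=2000.0, N=200_000):
        # Inverse Laplace transform via the Fourier-cosine form (15),
        # approximated with the midpoint rule on the truncated domain [0, R].
        h = R / N
        w = (np.arange(N) + 0.5) * h                 # midpoints of the subintervals
        vals = np.real(F(alpha + 1j * w)) * np.cos(w * t)
        return 2.0 * np.exp(alpha * t) / np.pi * h * vals.sum()

    F = lambda s: 1.0 / (s + 1.0)                    # Laplace transform of e^{-t}
    print(ilt_midpoint(F, 1.0), np.exp(-1.0))        # the two values should be close

Both the truncation R and the step h = R/N control the error here, which matches the sensitivity to R noted above.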
Gauss-Laguerre Quadrature The idea of Gauss quadratures is to choose the nodes at which the integrand is evaluated so as to minimize the approximation error. This is a good alternative to Newton-Cotes formulas, especially when the evaluation of the function itself requires a lot of computational resources, because good accuracy can be reached with a small number of nodes, four or five, if the integrand is well conditioned. This is not the case when the integrand is of oscillatory type [19]. The improper integral is approximated by the Gauss-Laguerre (GL) quadrature with N_GL nodes by the following sum, see [20]: ∫_0^∞ e^{−w} f(w) dw ≈ Σ_{k=1}^{N_GL} γ_k f(w_k), (19) where w_k is the k-th root of the Laguerre polynomial L_{N_GL}(w) and γ_k is the weight of the quadrature given by γ_k = w_k / ((N_GL + 1)² [L_{N_GL+1}(w_k)]²). (20) Exponentially-Fitted Gauss-Laguerre Quadrature Exponential fitting is an approach used in numerical differentiation, interpolation, and integration for improving the accuracy of the methods. Because the integrand in (15) is oscillatory, the exponentially-fitted Gauss-Laguerre quadrature (EF-GL), as proposed in [21], could be a good option. For EF-GL, the nodes and weights depend on the integrand and cannot be defined a priori. The computation of these N_GL pairs of nodes and weights is based on the solution of a nonlinear system of N_GL equations, which leads to additional computational cost. In [21], the numerical algorithm is described in detail. Further, in Section 5, we compare the accuracy and computational time of the GL and EF-GL quadrature rules. Talbot Inverse The method of Talbot for the Laplace inversion problem [11] is based on numerical contour integration. Instead of formula (15), the Bromwich integral is used: u(x, t)(ξ) = (1/(2πi)) ∫_{α−i∞}^{α+i∞} e^{st} U(x, s)(ξ) ds. (21) A contour deformation is used in order to obtain the Hankel contour and exploit the exponential factor, which makes the integral suitable for the further application of a Newton-Cotes formula [22]. The Talbot inversion quadrature for N_TI nodes is written as u(x, t)(ξ) ≈ (r/N_TI) [ (1/2) e^{rt} U(x, r)(ξ) + Σ_{k=1}^{N_TI−1} Re( e^{w_k t} U(x, w_k)(ξ) γ_k ) ], (22) where w_k are the nodes and γ_k are the weights defined by w_k = r θ_k (cot θ_k + i), θ_k = kπ/N_TI, r = 2 N_TI/(5t), (23) γ_k = 1 + i [θ_k + (θ_k cot θ_k − 1) cot θ_k]. (24) Here, the number of nodes N_TI should be chosen in accordance with the desired accuracy: for n significant digits, N_TI = ⌈1.7 n⌉. This shows the flexibility of the method and its high degree of accuracy with fast convergence. Moreover, as stated in [12], the main advantage of the Talbot algorithm is its regularization property, which means the ability to handle noisy data. This is important for the inverse Laplace transform problem due to its ill-posedness, and it becomes even more pressing in the random case when dealing with perturbed initial conditions or parameters of the problem. Summarizing, a numerical solution is constructed following the steps of Algorithm 1 for all of the described methods.
Algorithm 1: Numerical solution of the deterministic vibrating string problem.
  Initialization: set the mesh {x_j, t_n} by (16); set the number of quadrature nodes N; set n = 0.
  while t_n < T do
    increment n;
    for j = 0, ..., N_x do
      compute the nodes and weights {w_k, γ_k} of the chosen quadrature:
        - midpoint rule: uniform grid with N nodes;
        - GL quadrature: nodes w_k are the roots of the Laguerre polynomial of order N, k = 1, ..., N;
        - EF-GL quadrature [21]: nodes w_k and weights γ_k are found by solving a nonlinear system of 2N equations;
        - Talbot inverse: nodes w_k and weights γ_k, k = 0, ..., N − 1, are defined by (23)-(24);
      get the approximated value u_j^n:
        - midpoint rule: the integral in (15) is approximated by (18);
        - GL and EF-GL quadratures: the integral in (15) is approximated by (19)-(20);
        - Talbot inverse: formula (22);
    end
  end
Monte Carlo Method for Random Hyperbolic PDE The coefficients of the random m.s. Equation (7) and the corresponding initial and boundary conditions (9) are stochastic processes (s.p.'s) defined in a complete probability space (Ω, F, P); i.e., the s.p.'s a(x), b(x), f_0(x), f_1(x), g_0(x) and g_1(x) are described as continuous s.p.'s with one degree of randomness. The solution of the random m.s. problem is approximated by using the Monte Carlo approach [6,7], in which the expectation E[u(x, t)] is approximated by the average over a sufficiently large number of realizations ξ ∈ Ω of the corresponding realized deterministic transformed ordinary differential problem. Algorithm 2 describes the steps of the numerical solution. Numerical Results This section deals with the comparison of the above-described methods of numerical integration and Laplace inversion for several test problems. Deterministic PDE Problem with Constant Coefficients We start with a simple one-dimensional deterministic problem with a known analytical solution in order to check the viability of the proposed numerical integration techniques. The deterministic example corresponds to one fixed event ξ ∈ Ω. Instead of the bounded spatial domain [0, L], the whole real axis R is considered; thus, no boundary conditions are needed. We also assume that a > 0, b, and c are constants; i.e., the wave Equation (25) is considered, subject to the initial conditions u(x, 0) = f_0(x) and u_t(x, 0) = f_1(x). This problem admits an analytical solution that can be written in terms of Bessel functions of the first kind, see [23], p. 574, Equation 6.1.5, given by (26)-(27), where I_0(z) and I_1(z) are the modified Bessel functions of the first kind and J_0(z) and J_1(z) are the Bessel functions of the first kind. In order to test the proposed numerical integration methods, we apply the Laplace transform, as described in Section 2, and obtain a deterministic version of Equation (13), namely Equation (28). Applying the non-unitary Fourier transform with angular frequency ω, Equation (28) takes the algebraic form (30). The algebraic Equation (30) is solved directly for the Fourier transform Û(ω, s) in (31). Hence, the solution U(x, s) of (28) can be obtained by applying the inverse Fourier transform to (31). In the next Example 1, we consider a particular case of Equation (25) with constant coefficients and trigonometric initial conditions. Example 1. Let us consider the deterministic problem (25) with coefficients a = 2, b = 1, c = 3, and initial conditions f_0(x) = cos(x) and f_1(x) = sin(x). Applying the inverse Fourier transform to (31), one obtains the closed-form expression (32) for U(x, s), where i is the imaginary unit. Once the solution of the ODE (28) is obtained, formula (15) is used to restore the solution of the PDE by means of the various numerical integration techniques. Note that Equation (25) admits the analytical solution described above. Because the function u(x, t) is close to zero, we compute the relative error of the discrete numerical solution at the mesh nodes in order to estimate the accuracy of the methods: RelErr(j, n) = |u_j^n − u_ref(x_j, t_n)| / |u_ref(x_j, t_n)|, (33) where U_num = {u_j^n}, j = 0, ..., N_x, n = 0, ..., N_t, is the matrix of the numerical solution computed by Algorithm 1, and u_ref(x_j, t_n) is the reference value at the point (x_j, t_n). In this example, as the exact solution is known, the reference value is equal to this exact solution.
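For comparison, here is a compact sketch (our own code, not the authors') of the two quadratures introduced above: the Gauss-Laguerre rule via numpy and a fixed-Talbot contour rule in the spirit of [11,22], checked on the same convenient test pair L{e^{−t}} = 1/(s + 1):

    import numpy as np
    from numpy.polynomial.laguerre import laggauss

    def ilt_gauss_laguerre(F, t, alpha=0.5, n=25):
        # Apply (19)-(20) to the integral in (15): fold the e^{-w}
        # Laguerre weight back into the integrand via exp(w).
        w, gamma = laggauss(n)
        vals = np.real(F(alpha + 1j * w)) * np.cos(w * t) * np.exp(w)
        return 2.0 * np.exp(alpha * t) / np.pi * np.sum(gamma * vals)

    def ilt_talbot(F, t, M=17):
        # Fixed-Talbot contour quadrature (Abate-Valko style parametrization).
        r = 2.0 * M / (5.0 * t)
        total = 0.5 * np.exp(r * t) * F(r)
        for k in range(1, M):
            th = k * np.pi / M
            s = r * th * (1.0 / np.tan(th) + 1j)
            sigma = th + (th / np.tan(th) - 1.0) / np.tan(th)
            total += np.real(np.exp(t * s) * F(s) * (1.0 + 1j * sigma))
        return (r / M) * total

    F = lambda s: 1.0 / (s + 1.0)
    print(ilt_gauss_laguerre(F, 1.0), ilt_talbot(F, 1.0), np.exp(-1.0))

Consistent with the observations reported below, the Gauss-Laguerre result with few nodes is noticeably less accurate for this slowly decaying oscillatory integrand, while the Talbot rule with N_TI = 17 is accurate to many digits.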
For other cases, where the exact solution is not available, a reference value is obtained using an accurate finite difference method (FDM) for solving the original PDE (7). The total computational times for the proposed methods are presented in Table 1, together with the maximum of RelErr(j, n). The adaptive quadrature (MATLAB function integral [24]) has the same order of accuracy as the midpoint rule, but it requires greater computational resources. Thus, it will not be considered in the further, more complicated examples. For the Talbot algorithm, N_TI = 17 is chosen to guarantee accuracy up to 10 significant digits [22]. Even in that case, the method performs much faster than the standard numerical integration methods for (15). Thus, the Talbot inverse method is found to be the most effective method for the deterministic case with constant coefficients. The relative errors for the midpoint rule and the Talbot inverse method are plotted in Figure 1. Because no boundary conditions are posed for the problem, the largest values of the relative errors are situated at the boundary x = L. Table 2 presents a comparison of the GL and EF-GL quadratures in terms of the maximum relative error and the CPU time, varying the number of nodes N_GL. It is important to note that the CPU time may vary from simulation to simulation; thus, only its order should be taken into account. In the case of the GL quadrature, we find that the computational time remains similar as the number of nodes increases, while the CPU time for the EF-GL method increases exponentially. The convergence of the GL quadrature can be seen from the results shown in Table 2: the error reduces significantly with an increasing number of nodes. The potential improvement of the GL method by exponential fitting expectedly has a higher computational cost, due to the solution of the nonlinear system at each point of the computational domain. However, the accuracy of the EF-GL quadrature for this example with an oscillatory integrand was not improved compared with the standard GL rule. Thus, it will not be considered in the further, more complicated examples. The accuracy of the midpoint rule depends on the truncation R and the step size h_MP. A bigger domain, as well as a smaller step size, leads to increased computational time. Figure 2 presents the plots of the errors and the CPU time for the fixed step size h_MP = 10^−1 with respect to an increasing domain. The dependence of the accuracy on the step size h_MP is also studied. In Table 3, the maximum relative error is reported for various h_MP and fixed R = 10^4. The maximum relative error decreases with the step size down to 4.4699 × 10^−7 (h_MP = 1/16); further refinement of the step size does not reduce the error for R = 10^4. Deterministic PDE with Non-Constant Coefficients In the case of non-constant coefficients in (13), the analytical solution is not always available; thus, an FDM is applied to construct a reference numerical solution. Note that the function U(x, s) used in expression (17) means the value of the numerical solution of the ODE (13) at the fixed point x for the fixed parameter s. Equation (13) is discretized by central differences on the same mesh {x_j}, j = 0, ..., N_x, yielding the scheme (34)-(35), where U_j stands for the approximate value of U(x, s) at the node x_j.
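A minimal sketch of such a central-difference solve follows, assuming for illustration the simplified form U''(x) = q(x) U(x) + g(x) with Dirichlet data; the actual coefficients of (13) depend on the realization, so this is a stand-in, not the paper's scheme:

    import numpy as np

    def solve_bvp_central(q, g, U0, UL, L, Nx):
        # Central differences: U_{j-1} - (2 + h^2 q_j) U_j + U_{j+1} = h^2 g_j,
        # with U(0) = U0 and U(L) = UL moved to the right-hand side.
        x = np.linspace(0.0, L, Nx + 1)
        h = x[1] - x[0]
        qi = np.asarray(q(x[1:-1]), dtype=complex)
        gi = np.asarray(g(x[1:-1]), dtype=complex)
        n = Nx - 1                                   # interior unknowns
        A = np.zeros((n, n), dtype=complex)
        idx = np.arange(n)
        A[idx, idx] = -(2.0 + h**2 * qi)             # main diagonal
        A[idx[:-1], idx[:-1] + 1] = 1.0              # super-diagonal
        A[idx[1:], idx[1:] - 1] = 1.0                # sub-diagonal
        rhs = h**2 * gi
        rhs[0] -= U0
        rhs[-1] -= UL
        U = np.empty(Nx + 1, dtype=complex)
        U[0], U[-1] = U0, UL
        U[1:-1] = np.linalg.solve(A, rhs)
        return x, U

Complex arithmetic is used throughout because the transformed problem is solved at complex values s = α + iw.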
The values at the boundaries are found from the boundary conditions by applying the Laplace transform to (9). Hence, the integrand (17) has to be evaluated at each fixed node of the computational grid in order to approximate the integral (15), which causes a significant increase in the CPU time. In the next example, we increase the complexity by considering a variable-coefficient deterministic problem. Because the analytical solution for the deterministic PDE problem in the general form (7) is not available, a numerical method has to be employed to obtain the reference numerical solution. We consider an explicit finite difference scheme, centred in time and space, for the mesh function u_j^n ≈ u(x_j, t_n), denoted (36), where j = 1, ..., N_x, n = 2, ..., N_t. The initial conditions (8) are used in order to obtain the solution at the first time levels t_0 and t_1. The derivative in (8) is approximated by a forward difference. Because the considered scheme is conditionally stable, the step sizes ∆t and ∆x are chosen to guarantee stability. In order to obtain a good approximation, which could be considered as the reference solution, the mesh should be chosen appropriately fine. The numerical solution is constructed by Algorithm 1, choosing N_x = 10, N_t = 5. For the midpoint rule, N = 100 and R = 100 are used. Table 4 presents the comparison of the methods in terms of the maximum relative error and the computational time. The reference solution is the numerical solution computed by the FDM (36) on a refined mesh (N_x = 100, N_t = 16,000), which preserves the stability of the scheme. Because an explicit method is used and no iterative procedures are needed for solving a nonlinear system at each time level, the total computational time is comparatively small: 0.15 s. Figure 3 plots the reference solution. Figure 4 plots the solution at the moment t = T. The midpoint rule and the Talbot inverse method perform more accurately than the GL quadrature with nine nodes, but they require more computational time due to the larger number of evaluations of the integrand (17). However, taking 25 nodes in the GL quadrature improves the accuracy significantly. Random PDE with Constant Coefficients In this subsection, we deal with random models whose constant coefficients are random variables. Remarkably, in this case we need not only the computation of the approximating s.p. solution, but also the computation of its statistical moments. Example 3. We consider a random version of problem (25), with a ∼ N(2, 0.25) and b, c ∼ Beta(2, 5). In order to approximate the mean and the variance of the solutions, the Monte Carlo method with N_MC simulations is used. The expectation and variance of the exact solution of the random hyperbolic PDE (25) are plotted in Figure 5. As in the previous examples, we compare the proposed methods of integration and Laplace inversion in terms of the maximum relative error and the computational time. Table 5 presents the results for various N_MC. The CPU time refers to the total computational time for all N_MC simulations. Note that, for 1000 simulations, the exact solution (26)-(27) requires 28.41 s to perform the simulations. Thus, the midpoint rule (R = 100, h = 0.1), the Talbot inverse, and the GL quadrature require less computational time than the calculation by the exact formula. As expected, the computational time increases linearly with the number of simulations, while the errors preserve their order in most cases.
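The Monte Carlo loop itself can be sketched as follows (our illustration; the distributions follow Example 3, and we read N(2, 0.25) as mean 2 and variance 0.25, which is an assumption, as is the generic solver interface):

    import numpy as np

    rng = np.random.default_rng(1)

    def mc_moments(solver, x, t, n_mc=1000):
        # Approximate E[u(x,t)] and Var[u(x,t)] by averaging over realizations.
        samples = np.empty(n_mc)
        for m in range(n_mc):
            a = rng.normal(2.0, 0.5)            # N(2, 0.25): std = sqrt(0.25) = 0.5
            b = rng.beta(2.0, 5.0)
            c = rng.beta(2.0, 5.0)
            samples[m] = solver(a, b, c, x, t)  # any deterministic solver, e.g. Talbot-based
        return samples.mean(), samples.var(ddof=1)

Each realization reuses the same deterministic pipeline (Algorithm 1), which is why a fast inner solver matters so much for the total CPU time.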
Random PDE with Non-Constant Coefficients To complete the study, a random variable-coefficient problem is considered. Example 4. The vibration of the string in [0, L] is described by Equation (7), subject to the initial conditions f_0(x) = x(x − L) and f_1(x) = 0, and the boundary conditions g_0(t) = g_1(t) = 0. We set up the parameters accordingly. Unlike the deterministic Example 2 with non-constant coefficients, where the FDM provides a reference solution, reference values are not directly available here, due to the computational complexity that arises in the evaluation of the statistical moments of the approximating stochastic process as the time stepping advances [18]. A workable reference FDM solution is obtained by combining the scheme with the Monte Carlo method over an appropriate set of realizations. In this case, the number of realizations is N_MC = 10^3 and the CPU time is 16,212 s. Figure 6 plots the numerical solution. The zero variance at the boundaries is caused by the boundary conditions. Similar plots are obtained for the considered methods. Thus, we compare them in terms of the maximum relative error; see Table 6. As expected from the previous examples, the most accurate solution is obtained by the midpoint rule and the Talbot inverse, although this advantage comes at the price of additional computational cost. Conclusions The solution of a random hyperbolic PDE problem is a challenging task demanded in many practical applications. Computing an expression of the approximating stochastic process makes the computation of its statistical moments possible. In this paper, we propose a combination of the random Laplace transform with numerical integration techniques for its inverse, and with the Monte Carlo method for the evaluation of the numerical solution of the transformed problem at a particular required point. The Monte Carlo simulations require a fast and efficient underlying numerical algorithm for solving the deterministic hyperbolic PDE problem for every fixed realization. The FDM is not an option here, due to its high computational cost and memory requirements. In order to avoid the numerical differentiation of the PDE, the Laplace transform is applied, which results in an ODE. In some cases, as shown in the present paper, the analytical solution of the ODE is known; thus, we only need numerical integration methods for the inverse Laplace transform. If the solution of the ODE is not available, then numerical techniques for the boundary value problem have to be employed. Several numerical integration methods have been considered: the midpoint rule and the GL quadrature for improper integrals. However, due to the oscillatory behaviour of the integrand function, the GL quadrature with a small number of nodes shows comparatively poor results, while the midpoint rule is comparable with Talbot's Laplace inverse for random hyperbolic PDEs. The proposed analytic-numerical approach has also been compared with the classical explicit FDM scheme for the original random PDE problem.
2021-05-11T00:03:58.567Z
2021-01-14T00:00:00.000
{ "year": 2021, "sha1": "685c08564a42e21e823ff9d81a9ad0af8dfb1a81", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-7390/9/2/160/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "c82485f54ed33d4653281c18d144606bd69a5276", "s2fieldsofstudy": [ "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Mathematics" ] }
15132531
pes2o/s2orc
v3-fos-license
Nogo receptor 1 is expressed in both primary cultured glial cells and neurons. The Nogo receptor (NgR) is the common receptor for the myelin-derived molecules Nogo, MAG, and OMgp, and plays important roles in both axon fasciculation and the inhibition of axonal regeneration. In contrast to NgR's roles in neurons, its roles in glial cells have been poorly explored. Here, we found a dynamic regulation of NgR1 expression during development and neuronal injury. NgR1 mRNA was consistently expressed in the brain from embryonic day 18 to postnatal day 25. In contrast, its expression significantly decreased in the spinal cord during development. Primary cultured neurons, microglia, and astrocytes expressed NgR1. Interestingly, a contusion injury in the spinal cord led to elevated NgR1 mRNA expression at the injury site, but not in the motor cortex, 14 days after injury. Consistent with this, astrocyte activation by TGF-β1 increased NgR1 expression, while microglia activation rather decreased NgR1 expression. These results collectively suggest that NgR1 expression is enhanced in a milieu of neural injury. Our findings may provide insight into the roles of NgR1 in glial cells. Therefore, the functions of NgR1 have been extensively studied in neurons. However, NgR1 may also be expressed in other cell types. Here, we highlight NgR1 expressed in glial cells. Primary culture Primary cultured neurons were prepared from the cerebellar granule neurons (CGNs) of postnatal day 7 Sprague Dawley WT rats. The meninges of the brain were carefully removed with fine forceps, and the remaining tissues were minced and digested using a Papain Dissociation System (Worthington, Lakewood, NJ, USA). Dissociated cells were applied to a 35/60% two-step Percoll gradient and centrifuged at 3000×g for 15 min. Cerebellar granule neurons at the interface were collected. Cells were suspended in Neurobasal medium (Invitrogen, Carlsbad, CA, USA) supplemented with 2% B27 (Invitrogen), 2 mM glutamine, an additional 20 mM KCl, 50 U/ml penicillin, and 50 μg/ml streptomycin. 13) Primary cultured glial cells were prepared from the cortexes of postnatal day 1 WT rats. The whole brain was removed aseptically from the skull, and the meninges were excised carefully under a dissecting microscope. The brain was strained through sterile mesh, and small pieces of tissue were cultured in flasks in Dulbecco's modified Eagle's medium (DMEM) containing 10% fetal bovine serum, then incubated at 37 °C in a humidified atmosphere containing 5% CO2. The culture medium was renewed every 7 days. The mixed glial culture grown for 3 weeks was shaken at 120 rpm on a gyratory shaker for 6 hours. More than 95% of the adhering cells were glial fibrillary acidic protein (GFAP)-positive astrocytes. The detached cells were reseeded in fresh culture dishes, and any contaminating oligodendrocyte progenitors were removed by changing the DMEM after a 15 min incubation. More than 95% of these cells were found to be CD11b-positive. 14) Astrocytes (7×10⁵ cells/ml) and microglia (7×10⁵ cells/ml) were additionally cultured for 24 h in serum-free DMEM. Astrocytes were stimulated with TGF-β1 (Peprotech, London, UK) at 5 ng/ml, 10 ng/ml, and 20 ng/ml, and with EGF (Peprotech) at 10 ng/ml and 20 ng/ml. After stimulation with the cytokines for 48 h, the cells were collected. Microglia were stimulated with LPS (100 ng/ml).
RT-PCR and Quantitative RT-PCR Total RNA was extracted from the rat brain, spinal cord, and primary cultured cells using an RNeasy Lipid Tissue kit and an RNeasy Mini kit (Qiagen, Valencia, CA, USA) according to the manufacturer's recommendations. cDNA was prepared from 1 μg of total RNA by using a Transcriptor First Strand cDNA Synthesis kit (Roche Diagnostics, Mannheim, Germany) following the standard protocols. The cDNA products were used for reverse-transcription polymerase chain reaction (PCR) and quantitative real-time PCR (QRT-PCR). QRT-PCR was performed on an Mx3000P instrument (Agilent Technologies, Santa Clara, CA, USA) using synthetic primers and SYBR Green (Agilent Technologies). Samples were subjected to 45 cycles of amplification at 95 °C for 15 s and 60 °C for 30 s, after holding at 68 °C for 15 s and 95 °C for 1 min. Relative expression was calculated by the 2^−ΔCt method, with ΔCt = Ct(experimental sample) − Ct(internal control, GAPDH). The primer sequences used are listed in Table 1. Animal Surgery Adult female Sprague Dawley (SD) rats weighing 200-230 g were used in the study of spinal cord injury (SCI). The animals were anesthetized with an intraperitoneal injection of Somnopentyl (Kyoritsu Seiyaku, Tokyo, Japan). After a Th9 laminectomy, we exposed the dura mater and induced injury with a force of 200 kdyn using a commercially available SCI device (Infinite Horizon Impactor; Precision Systems and Instrumentation, Lexington, KY, USA) that provides a consistent degree of spinal cord contusion injury. All injuries included the dorsal CST and dorsal gray matter. After SCI was induced, the muscles and skin were closed in layers. The bladder was compressed by manual abdominal pressure twice daily until bladder function was restored. Food was provided on the cage floor, and the rats had no difficulty reaching their water bottles. All animals were given antibiotics in their drinking water [1.0 ml of Bactrim (Roche, Basel, Switzerland) in 500 ml of acidified water] for 2 weeks after SCI. In all groups, we excluded rats without complete paraplegia on the day after the operation, as they were inappropriate for further evaluation. Animal care and experimental procedures were approved by the Animal Experimentation Committee of the Nagoya University Graduate School of Medicine and were conducted according to the Nagoya University Regulations for Experiments. Statistical analysis We performed statistical analysis using SPSS software (SPSS, Chicago, IL, USA). NgR1 expression during development NgR1 mRNA expression was estimated by quantitative RT-PCR. NgR1 mRNA was consistently expressed in the brain from embryonic day 18 to postnatal day 25 (Fig. 1A). In contrast, the spinal cord showed a striking decrease in NgR1 expression during development (Fig. 1B). Thus, NgR1 expression is differentially regulated in the brain and spinal cord during development. To identify the cellular sources of NgR1 expression, we prepared primary cultures of cerebellar granule neurons from P7 rats, and of astrocytes and microglia from the brain cortex of P1 rats. As shown in Fig. 2, all of these cells expressed NgR1. These results suggest that not only neurons but also glial cells could be sources of NgR1 expression. NgR1 expression after SCI To address the regulation of NgR1 expression in a pathological condition, we employed a model of SCI. A contusion injury was made at thoracic level 9.
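Returning to the quantification step above, a minimal sketch of the 2^−ΔCt calculation follows (our illustration; the triplicate Ct values shown are hypothetical):

    import numpy as np

    def relative_expression(ct_target, ct_gapdh):
        # 2^-dCt method as used above, with dCt = Ct(target) - Ct(GAPDH).
        return 2.0 ** -(np.asarray(ct_target) - np.asarray(ct_gapdh))

    # Hypothetical triplicate Ct values for one sample:
    print(relative_expression([27.1, 27.3, 27.0], [18.2, 18.4, 18.1]))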
Since NgR1 expressed in neurons plays an important role in the inhibition of axonal regeneration/sprouting, we expected that the SCI might influence its expression in the upper motor neurons. To address this question, the motor cortex area was removed after the SCI and subjected to analysis. However, the injury did not affect NgR1 expression in the motor cortex (Fig. 3A). [Fig. 1 caption: Temporal regulation of NgR1 mRNA as revealed by Q-PCR in the developing rat total brain and spinal cord. NgR1 mRNA expression in the developing rat spinal cord was significantly decreased relative to E18, although there was no remarkable change in the brain. The graphs represent NgR1 mRNA expression in (a) the total brain and (b) the spinal cord (n = 3). E, embryonic day; P, postnatal day.] We then examined the spinal cord, removing the injury site (5 mm length) for analysis. Interestingly, NgR1 mRNA expression at the injury site was elevated at 14 days after SCI (Fig. 3B). Enhanced NgR1 expression in activated astrocytes Elevated NgR1 expression at the injury site 14 days after SCI suggests that NgR1 expression may be regulated by glial activation. To address this hypothesis, we employed in vitro activation models of glial cells. TGF-β1 increased NgR1 expression in primary cultured astrocytes (Fig. 4A). The expression of neurocan, a hallmark of astrocyte activation, was also enhanced by TGF-β1 (Fig. 4B). In contrast, microglial activation induced by LPS rather decreased NgR1 expression, while the other markers, CD86, TNF-α, and IFN-γ, were increased upon this activation (Fig. 4C). DISCUSSION We found that NgR1 is differentially expressed in the brain and the spinal cord during development. Thus, while NgR1 is consistently expressed in the brain, its expression is significantly downregulated in the spinal cord as development proceeds. Therefore, NgR1 expression is at its lowest level in the adult spinal cord. However, SCI enhances its expression at the injury site 14 days after injury. In contrast, NgR1 mRNA expression is not affected in the motor cortex after SCI. These results collectively suggest that SCI influences NgR1 expression in neural cells, including glial cells, but not in upper motor neurons. Consistent with this idea, in vitro activation of astrocytes enhances NgR1 expression, whereas microglial activation rather decreases its expression. Therefore, our data collectively suggest that glial cells should be taken into consideration when addressing the roles of NgR1 in the nervous system. The axis of Nogo, MAG, OMgp, and NgR1 has been extensively studied with regard to the inhibition of axonal regeneration/sprouting. Also, NgR1 has been implicated in axon fasciculation. Thus, the functions of NgR1 have been attributed to NgR1 expressed in neurons. Our present study highlighted NgR1 expressed in glial cells, particularly astrocytes. Although its biological roles during neural injuries and/or their recovery processes remain to be further investigated, NgR1 expressed in activated astrocytes should be examined. In this context, it is noteworthy that NgR1 is expressed in immune cells, i.e., B cells, T cells, and monocytes, in multiple sclerosis. 15) Although Nogo does not influence the proliferation or the cytokine production of these cells, myelin containing Nogo reduces adhesion and enhances motility. In this way, NgR1 on immune cells may be involved in pathogeneses under the condition of immune cell activation, such as multiple sclerosis.
15) In addition, the slow induction of NgR1 (14 days after SCI) was unexpected. Although we could not investigate whether this expression was transient or persisted thereafter, this question is important to address in a future study. Our findings on NgR1 expression during development are consistent with previous reports. Thus, it was reported that, although NgR1 mRNA estimated by in situ hybridization is diffusely expressed in the fetal mouse brain and spinal cord, its expression is not detected in the adult mouse spinal cord. 11) NgR1 mRNA expression is also not detected in the spinal cord of adult rats and humans. 11,16) We also found that, although TGF-β1 increased NgR1 expression, EGF, another activator of astrocytes, did not enhance NgR1 expression in astrocytes (data not shown). This suggests that specific signaling mediated by some insults, such as TGF-β1, can activate astrocytes and enhance NgR1 expression. Regarding intracellular signaling through NgR1 in a milieu of neural injury, it is known that, upon binding of myelin-derived molecules (MAG, Nogo, and OMgp) or CSPG, NgR1 forms a complex with p75 and Lingo-1 on neurons. The NgR1 complex activates Rho-GTPase and consequently inhibits axon regeneration. 3) [Fig. 4c caption: NgR1 mRNA in activated microglia stimulated with LPS was expressed at a significantly lower level compared with that in unstimulated microglia.] In addition to this intracellular signaling in neurons, NgR1 also contributes to oligodendrocyte differentiation, where Myocilin, a glycoprotein secreted by astrocytes, binds to the NgR1/Lingo-1 complex and suppresses Rho-GTPase in oligodendrocytes. 17) However, little is known about the significance of NgR1 expression in astrocytes. Our study demonstrates that astrocytes express NgR1, at least in vitro. Considering that intercellular cross-talk between glial cells and neurons plays a pivotal role in neuronal network reconstitution and functional recovery after SCI, NgR1 expressed in astrocytes would be an important subject for future studies.
2018-04-03T01:40:09.578Z
2016-08-01T00:00:00.000
{ "year": 2016, "sha1": "5244c3e89762dd68bfa323720fd5dfc99c174cb4", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "5244c3e89762dd68bfa323720fd5dfc99c174cb4", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
52115381
pes2o/s2orc
v3-fos-license
Selective Oxidation of Veratryl Alcohol over Au-Pd/Ce0.62Zr0.38O2 Catalysts Synthesized by Sol-Immobilization: Effect of Au:Pd Molar Ratio. The selective oxidation of veratryl alcohol (VA), a model compound of lignin, with molecular oxygen to produce veratraldehyde (VAld) was studied for the first time over monometallic Au and Pd and bimetallic Au-Pd nanoparticles supported on a Ce0.62Zr0.38O2 mixed oxide. These bimetallic Au-Pd catalysts, with Au:Pd molar ratios from 0.4 to 4.3, were synthesized by the sol-immobilization method. Furthermore, all the catalysts were characterized by inductively coupled plasma-atomic emission spectroscopy (ICP-AES), N2 physisorption, X-ray photoelectron spectroscopy (XPS), scanning transmission electron microscopy-high angle annular dark field (STEM-HAADF) imaging, energy dispersive X-ray spectroscopy (EDXS), and temperature programmed reduction (TPR) techniques. A synergistic effect between gold and palladium was observed over all the bimetallic catalysts across the whole range of studied Au:Pd ratios. Remarkably, the optimum Au:Pd ratio for this reaction was 1.4, with a turnover frequency almost six times larger than those of the monometallic gold and palladium catalysts. Selectivity to veratraldehyde was higher than 99% for the monometallic Au and Pd and all the bimetallic Au-Pd catalysts, and stayed constant during the reaction time. Introduction The development of strategies for the valorization of lignin to high-value chemicals is becoming increasingly important due to their potential application as sustainable supplements or replacements of fossil-based chemicals or fuels. Lignin is the second most abundant resource in nature after cellulose and accounts for about 25% of the world's biomass. In addition, it is estimated that the pulp and paper industry produces around 70 million tons of lignin per year, which is burnt for heat and power generation [1]. Lignin is a three-dimensional polymer with methoxylated phenylpropane structures. The three primary monomers of lignin, p-coumaryl, coniferyl, and sinapyl alcohols, are combined through C-O (e.g., β-O-4, α-O-4, and 4-O-5) and C-C (e.g., β-5 and 5-5) bonds. The β-O-4 bond is the predominant linkage in lignin and accounts for 50-60% of all the C-O linkages. It is highly desirable to open the lignin molecules to produce other chemicals, such as aromatics. Catalytic processes are promising strategies for lignin transformations and can be divided into several routes, including cracking (pyrolysis, fast thermolysis). Catalyst Preparation The support used in all the preparations was a Ce0.62Zr0.38O2 (CZ) mixed oxide kindly provided by Grace Davison (Maryland, MD, USA). The Brunauer-Emmett-Teller (BET)-specific surface area of this ceria-zirconia mixed oxide was 67 m2 g−1. The monometallic and bimetallic catalysts were synthesized by the sol-immobilization method, as described in our previous work [21]. The nominal loadings of all the catalysts were 1 wt %. The Pd and Au precursors were Na2PdCl4 and NaAuCl4·2H2O, respectively. Pd and Au sols were prepared from their precursors with polyvinylpyrrolidone (PVP) and reduced separately with NaBH4. The colloids of Au or Pd (acidified to pH = 2 by addition of sulfuric acid) were immobilized by adding 2 g of the Ce0.62Zr0.38O2 mixed oxide support with vigorous stirring for 1 h. Then, the slurry was filtered, washed with distilled water, and dried at 80 °C for 2 h.
An oxidation treatment of the dried catalysts was performed in a flow of O2 at 250 °C for 1 h, followed by purging with a flow of N2 for 1 h at the same temperature. The monometallic catalysts were labeled according to their Pd and Au loadings by weight, as determined from inductively coupled plasma-atomic emission spectroscopy (ICP-AES) analysis. These catalysts were 0.82%PdCZ and 0.86%AuCZ, in which 0.82% and 0.86% stand for the actual metal loadings. The first step in the preparation of the bimetallic Au-Pd catalysts supported on ceria-zirconia was the synthesis of the Au sol. Firstly, 2.19 mL of NaAuCl4·2H2O solution (10 mg Au mL−1) and 2.19 mL of PVP solution (1%, w/w) were added to 219 mL of H2O. After stirring for 2 min, 0.1 M NaBH4 (4.38 mL) was added under vigorous magnetic stirring. The ruby-red Au sol formed immediately. After 3 min of sol generation, 3 g of the Ce0.62Zr0.38O2 support was added to the Au sol under vigorous stirring. After 1 h of stirring, the slurry was filtered, washed with distilled water, and dried for 15 min at 80 °C. The second step of the preparation was the deposition of Pd on the Au/Ce0.62Zr0.38O2 sample. For this purpose, the previously prepared monometallic Au sample was suspended in 250 mL of water at room temperature. H2 at a flow rate of 50 mL min−1 was bubbled into the suspension at atmospheric pressure and room temperature. Then, a mixture of 0.81 mL of Na2PdCl4 solution (10 mg Pd mL−1) and 0.81 mL of PVP solution (1%, w/w) was added to the suspension under magnetic stirring for 1 h. The resulting slurry was filtered, washed with distilled water, and dried in air at 80 °C for 2 h. Subsequently, the dried catalyst was oxidized in a flow of O2 for 1 h at 250 °C and then purged with a flow of nitrogen at the same temperature for 1 h. The catalyst was cooled to room temperature under the same flow and atmosphere. The obtained bimetallic catalyst was coded as 1.4AuPd-O, according to its nominal Au:Pd molar ratio. Catalyst Characterization The gold and palladium loadings were determined by ICP-AES (Thermo Scientific, Waltham, MA, USA) from diluted aqua regia extracts. Volumetric N2 adsorption at −196 °C using a Micromeritics ASAP-2020 instrument (Micromeritics, Norcross, GA, USA) was performed in order to determine the BET-specific surface areas of the catalysts. XRD patterns of the catalysts were obtained using a Bruker AXS diffractometer (Bruker, Germany), Model D8 Advance, operated at 40 kV and 40 mA with a Cu Kα radiation source (1.5418 Å). TPR experiments were carried out in a U-shaped quartz reactor filled with 100 mg of the catalyst. The samples were first pre-treated with 60 mL min−1 of helium for 30 min at room temperature. Then, a flow of 5% H2/Ar (60 mL min−1) was fed to the reactor. The temperature of the reactor was increased from room temperature to 900 °C at a heating rate of 10 °C min−1. A Pfeiffer Thermostar quadrupole mass spectrometer (Pfeiffer, Germany) was used to analyze the composition of the gases released from the outlet of the reactor. The morphology, metal particle size distribution, and composition of the catalysts were studied using scanning transmission electron microscopy (STEM) on a JEOL2010 microscope (JEOL, Tokyo, Japan) equipped with an energy dispersive X-ray spectroscopy (XEDS) spectrometer (Oxford INCA Energy 2000 system; Oxford Instruments, Abingdon, UK). High angle annular dark field (HAADF)-STEM images were taken with an electron probe of 0.5 nm diameter at a diffraction camera length of 8 cm.
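As a back-of-the-envelope check of the naming convention, the nominal Au:Pd molar ratio implied by the sol volumes above can be computed as follows (our sketch, not code from the study):

    # Nominal Au:Pd molar ratio implied by the precursor sol volumes.
    M_AU, M_PD = 196.97, 106.42              # molar masses, g/mol

    def au_pd_molar_ratio(ml_au, ml_pd, mg_per_ml=10.0):
        mol_au = ml_au * mg_per_ml / M_AU    # mmol of Au
        mol_pd = ml_pd * mg_per_ml / M_PD    # mmol of Pd
        return mol_au / mol_pd

    print(round(au_pd_molar_ratio(2.19, 0.81), 2))   # ~1.46, i.e., the 1.4AuPd-O catalyst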
More than 150 randomly selected metal particles were measured, and the corresponding metal particle size distributions were plotted. The average particle diameter (d) and the total metal dispersion were calculated using a truncated cuboctahedron particle model [22] and homemade software (GAUSS). The STEM-XEDS technique provided compositional information for around 60 individual particles in each bimetallic catalyst. XPS measurements were performed on a Kratos Axis Ultra DLD instrument (Kratos Analytical, Manchester, UK) with monochromatized Al Kα radiation (1486.6 eV). The spectrometer was operated in the constant analyzer energy mode. A pass energy of 160 eV was used for low-resolution, wide-range survey spectra, while 20 eV was used for high-resolution, narrow core-level spectra. The binding energy scale was calibrated with respect to the Zr 3d5/2 component of the mixed oxide support, fixed at 182.64 eV, as reported in our previous work [23]. CasaXPS software version 2.3.17dev6.3a, developed by Neal Fairley (Casa Software Ltd., UK), was employed for the XPS data analysis. Catalytic Activity for Veratryl Alcohol Oxidation Catalytic evaluation was carried out in a 30 mL thermally controlled glass reactor, equipped with an electronically controlled magnetic stirrer and connected to a large reservoir (5000 mL) containing oxygen [24]. A mass-flow controller was used to control the oxygen uptake. Veratryl alcohol and the catalyst (alcohol:total metal = 1000 mol:mol) were mixed in xylene (alcohol 0.3 M in xylene; total volume: 10 mL). The reactor was filled with 200 kPa of oxygen and then heated to 80 °C under stirring. Samples were periodically withdrawn from the reactor. The products were identified and quantified against externally calibrated standards using a gas chromatograph (HP 7820A) equipped with a capillary column (HP-5, 30 m, 0.32 mm, 0.25 µm film; Agilent Technologies, Santa Clara, CA, USA) and a thermal conductivity detector. Table 1 lists the BET-specific surface areas and elemental analysis results of the monometallic and bimetallic catalysts. The BET-specific surface areas of the catalysts were very close to that of the Ce0.62Zr0.38O2 support. Furthermore, it can be observed that the actual Au and Pd loadings were lower than the expected values, indicating that the sol-immobilization method did not lead to complete deposition of Au and Pd on the Ce0.62Zr0.38O2 support. The Au and Pd losses during the synthesis were due to the weak bonding between the Au and Pd sols and the support. Hence, part of the Au and Pd sols was removed during the filtering and washing steps [21]. Textural and Structural Properties The XRD patterns (Figure 1) of the bimetallic and monometallic catalysts showed diffraction peaks corresponding to the crystallographic planes of the fluorite-type cubic structure of the ceria-zirconia mixed oxide [20,25,26]. The metals were well dispersed on the ceria-zirconia support, because no metallic palladium and/or gold, PdO, or Au-Pd alloy diffraction peaks were observed (Figure 1b) [16,21]. STEM Results Representative STEM-HAADF images of the monometallic catalysts have been presented in our previous work [21]. Table 1 shows the average particle size and metal dispersion of these two catalysts. The 0.86%AuCZ catalyst possessed a wider particle size distribution than the 0.82%PdCZ catalyst: from 1 to 12 nm [21].
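A sketch of the particle-size statistics behind such numbers follows. Note that the paper uses a truncated cuboctahedron model, whereas this simplified version assumes spherical particles, and the volume-to-area constant for Au is an approximate literature value, so the output is only indicative:

    import numpy as np

    def particle_stats(diams_nm):
        # Number-mean and surface-volume (Sauter) mean diameters from STEM counts.
        d = np.asarray(diams_nm, dtype=float)
        d_mean = d.mean()
        d_sv = (d**3).sum() / (d**2).sum()
        return d_mean, d_sv

    def dispersion(d_sv_nm, vm_over_am_nm=0.191):
        # Spherical-particle estimate D = 6 (v_m/a_m) / d_sv; with the
        # approximate Au value v_m/a_m ~ 0.191 nm this gives D ~ 1.15/d(nm).
        return min(1.0, 6.0 * vm_over_am_nm / d_sv_nm)

    print(dispersion(6.0))   # ~0.19, in line with the ~20% reported for 0.86%AuCZ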
The 0.82%PdCZ catalyst presented a narrow particle size distribution, with most of the particle sizes ranging from 0.5 to 5 nm. The average particle sizes of the 0.82%PdCZ and 0.86%AuCZ catalysts were 2.7 and 6.0 nm, respectively. Moreover, the metal dispersions of the 0.82%PdCZ and 0.86%AuCZ catalysts were 37% and 20%, respectively. Figure 2 includes four types of graphs for each bimetallic catalyst: (1) a representative STEM-HAADF image of the catalyst; (2) the particle size distribution of the catalyst, including monometallic Pd and Au and bimetallic Au-Pd particles; (3) the composition of each analyzed nanoparticle obtained by the XEDS technique as a function of its size; and (4) the relative frequency of each type of particle. The metal particle sizes of the bimetallic catalysts fell in the 0.5-17 nm range, as shown in the particle size distributions of the catalysts. The average metal particle size and metal dispersion of the different bimetallic catalysts showed significant differences. The bimetallic 0.6AuPd-O and 1.4AuPd-O catalysts showed similar average particle sizes (3.6 ± 0.1 nm and 3.5 ± 0.1 nm) with similar metal dispersions (27% and 29%). The 0.4AuPd-O, 1.8AuPd-O, 3.7AuPd-O, and 4.3AuPd-O catalysts presented larger average particle sizes (between 4.5 ± 0.2 nm and 5.9 ± 0.2 nm) and metal dispersions from 18% to 25%. STEM-XEDS analyses were performed to obtain compositional information on the bimetallic catalysts, which was considered important in view of the catalytic activity data in Section 3.5. Around 60 individual particles were analyzed for each sample. In the plots of Au content versus particle size, the dashed line indicates the Au content of each catalyst determined by ICP-AES (Figure 2). Monometallic Au and bimetallic Au-Pd nanoparticles were observed simultaneously by the STEM-XEDS technique. The Au content of the bimetallic particles was in the 10-96 mol% range. The composition-size diagrams indicate that monometallic gold particles predominated in the 1.4AuPd-O and 4.3AuPd-O catalysts, while Au-Pd bimetallic particles predominated in the 0.6AuPd-O, 1.8AuPd-O, and 3.7AuPd-O catalysts. No monometallic Pd particles were detected. The low contrast between palladium and the heavy support (compared to gold) makes it more difficult to detect Pd when working in STEM mode, as shown in our previous work [16,21,25]. A recent paper from Hutchings' group also noted this difficulty in observing Pd and Pd-containing nanoparticles on a ceria-zirconia mixed oxide support [27]. This fact could contribute to underestimating the fraction of palladium-rich nanoparticles (including monometallic palladium). In addition, no correlation was found between the Au and Pd content and the formation of bimetallic particles. On the other hand, an estimate of the average Au:Pd ratio measured by STEM-XEDS was calculated from the size and composition of the individual nanoparticles (Table 2). All the values ranged from 3.8 to 57, much higher than those determined by ICP-AES. The differences could be due to the low contrast between palladium and the heavy support in the STEM technique [21]. XPS Results The monometallic and bimetallic catalysts were characterized using the XPS technique to determine the oxidation states and the surface chemical composition of these catalysts. Figures 3 and 4 show the Pd 3d and Au 4f X-ray photoelectron spectra of these catalysts. The Au 4f7/2 binding energy of about 84.1-84.4 eV corresponds either to a reduced phase of metallic Au0 (most likely Au0) or to larger Au particles [21-23,26]. Only metallic gold (Au0) was observed for the Au-containing catalysts.
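A minimal sketch of how per-particle XEDS compositions can be binned into Au, Pd, and Au-Pd classes and averaged follows (our illustration; the 5%/95% thresholds and the volume weighting are assumptions, not the authors' procedure):

    import numpy as np

    def classify_particles(au_frac, lo=0.05, hi=0.95):
        # Label each XEDS-analyzed particle from its Au molar fraction.
        au_frac = np.asarray(au_frac, dtype=float)
        return np.where(au_frac >= hi, "Au",
               np.where(au_frac <= lo, "Pd", "Au-Pd"))

    def mean_au_pd_ratio(au_frac, diam_nm):
        # Size-weighted Au:Pd ratio: weight each particle by its volume (d^3),
        # assuming comparable atomic densities of the two metals.
        au_frac, d3 = np.asarray(au_frac), np.asarray(diam_nm) ** 3
        au = (au_frac * d3).sum()
        pd = ((1.0 - au_frac) * d3).sum()
        return au / pd

Because undetected small Pd-rich particles drop out of the sum, such a per-particle average is biased upward, which is one way to rationalize the high XEDS ratios reported above.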
On the other hand, an estimate of the average Au:Pd ratio measured by STEM-XEDS was calculated from the size and composition of the individual nanoparticles (Table 2). All the values ranged from 3.8 to 57, much higher than those determined by ICP-AES. The differences could be due to the low contrast between palladium and the heavy support in the STEM technique [21]. XPS Results The monometallic and bimetallic catalysts were characterized by the XPS technique to determine the oxidation states and the chemical composition of the surface of these catalysts. Figures 3 and 4 show the Pd 3d and Au 4f X-ray photoelectron spectra of these catalysts. The Au 4f7/2 binding energy of about 84.1-84.4 eV corresponds either to a reduced metallic phase (most likely Au 0 ) or to larger Au particles [21-23,26]. Only metallic gold (Au 0 ) was observed for the Au-containing catalysts. Figure 2. Scanning transmission electron microscopy-high angle annular dark field (STEM-HAADF) images, particle size distribution, relationship between the particle composition and size, and frequencies of Au, Pd, and Au-Pd particles of bimetallic catalysts determined by energy dispersive X-ray spectroscopy (EDXS). The dashed line is the actual Au composition obtained by inductively coupled plasma (ICP) analysis. The XPS spectra of the Pd 3d region of the catalysts showed peaks at binding energies around 337.9 and 338.7 eV, corresponding to Pd δ+ species, and peaks between 335.8 and 337.2 eV, corresponding to metallic Pd 0 species [21,28-30]. Table 3 lists the percentages of Pd δ+ and metallic Pd 0 species of the catalysts. The 0.82%PdCZ catalyst showed 82% of Pd 0 , while the bimetallic catalysts presented lower percentages of Pd 0 (between 39% and 56%). There were more oxidized Pd δ+ and less metallic Pd 0 on the bimetallic catalysts compared with the monometallic 0.82%PdCZ catalyst. The differences in the Pd oxidation states between the monometallic 0.82%PdCZ catalyst and the bimetallic catalysts can be attributed to Pd being more easily oxidized on the bimetallic catalysts, whose Pd loadings are lower than that of the monometallic 0.82%PdCZ catalyst. On the other hand, the STEM-HAADF results confirmed that all the Pd particles in the 0.82%PdCZ catalyst were smaller than 10 nm in diameter (Figure 2). Therefore, all the Pd atoms in this catalyst would have been analyzed by XPS, since the particle sizes were within the XPS analysis range (radius less than 5.0 nm) [16,21]. For the XPS data analysis, zirconium was taken as a reference for the other elements on the catalysts due to its stability and homogeneous distribution in the mixed ceria-zirconia oxide support [21,25]. The Au:Zr and Pd:Zr molar ratios calculated from the XPS data reflect the gold and palladium available on the surface of the catalysts. Table 3 shows the Au:Zr, Pd:Zr, and Au:Pd molar ratios calculated from the XPS data. The Au:Zr molar ratio varied from 0.04 to 0.08. The Pd:Zr molar ratio decreased with decreasing Pd content in the catalysts.
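The surface molar ratios in Table 3 were derived from XPS peak areas (processed in CasaXPS). A common, simplified way to express that calculation is to normalize each core-level area by its relative sensitivity factor (RSF); the sketch below uses assumed peak areas and RSFs, not the actual CasaXPS workflow or values.

```python
# Hypothetical integrated core-level peak areas from the narrow scans.
areas = {"Au 4f": 1500.0, "Pd 3d": 800.0, "Zr 3d": 30000.0}

# Relative sensitivity factors (instrument/library dependent; placeholders).
rsf = {"Au 4f": 6.25, "Pd 3d": 5.36, "Zr 3d": 2.58}

def normalized(region: str) -> float:
    """Peak area divided by its RSF, proportional to the surface atomic amount."""
    return areas[region] / rsf[region]

au_zr = normalized("Au 4f") / normalized("Zr 3d")
pd_zr = normalized("Pd 3d") / normalized("Zr 3d")

print(f"Au:Zr = {au_zr:.3f}, Pd:Zr = {pd_zr:.3f}, Au:Pd = {au_zr / pd_zr:.2f}")
```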
The Au:Pd molar ratios calculated from the XPS results for 1.8AuPd-O, 3.7AuPd-O, and 4.3AuPd-O were 0.70, 1.41, and 2.20, respectively, which were lower than the ICP results but much closer to them than the values obtained by the XEDS technique. This result confirms that there were small Pd nanoparticles that could not be detected by STEM-HAADF. The Ce 3+ percentages of the total cerium content of all the catalysts were also calculated by XPS (Table 3) and were in the range of 10 to 30%. TPR Results H2-temperature programmed reduction (H2-TPR) was performed to investigate the redox properties of the catalysts, as well as the degree of interaction between metal and support. Figure 5 shows the TPR profiles of the catalysts and the support. The reduction peak at 550 °C of the ceria-zirconia support can be assigned to the reduction of Ce 4+ . The TPR profile of the 0.86%AuCZ catalyst indicates that the addition of Au to the support led to a much lower reduction temperature of 150 °C [31]. The profiles of the monometallic 0.82%PdCZ and bimetallic catalysts showed a first reduction peak around 120 °C, which can be attributed to the reduction of the support (Ce 4+ → Ce 3+ ) [32] and of oxidized Pd δ+ species weakly interacting with the support [32,33]. The reduction temperature in all the bimetallic catalysts was closer to that of the monometallic Pd catalyst, suggesting that even a small palladium content, as in the case of the 4.3AuPd-O catalyst, results in an enhancement of the reducibility of the catalyst.
The 0.82%PdCZ, 0.4AuPd-O, 0.6AuPd-O, 1.4AuPd-O, 1.8AuPd-O, and 3.7AuPd-O catalysts also exhibited H2 consumption peaks in the range of 300-600 °C, which are associated with the reduction of Pd δ+ species that interact strongly with the CZ support [32]. For the 4.3AuPd-O catalyst, a reduction peak at high temperatures was not detected, possibly because of the low Pd loading of this catalyst (<0.1 wt %). The TPR profiles of the Pd-containing catalysts, obtained from the evolution of H2 consumption, did not display the negative peak characteristic of the formation of Pd β-hydride in the temperature range 50-100 °C. The absence of this peak indicates that PdO was highly dispersed, which is in good accordance with the XRD results [32]. The existence of Pd δ+ species on the surface of the Pd-containing catalysts has been proven by the XPS results. Additionally, the addition of Au and Pd promoted the reduction of the CZ support, which could be owing to metal-CZ support interactions and H2 spillover from the metals to the CZ support. Catalytic Activity for Veratryl Alcohol Oxidation Veratryl alcohol conversion is presented as a function of time in Figure 6. In all cases, the monometallic and bimetallic catalysts showed an increase of conversion with reaction time. After 8 h of reaction, the monometallic 0.86%AuCZ catalyst exhibited the lowest veratryl alcohol conversion of 8%, while the 0.82%PdCZ catalyst showed 15.3%. A synergistic effect can be observed for all the bimetallic Au-Pd catalysts evaluated, which exhibited higher catalytic activity than both monometallic catalysts. In addition, the 1.4AuPd-O catalyst, with a conversion of 72.3%, was the most active catalyst among the monometallic and bimetallic catalysts. The possible reaction pathway and products of the selective oxidation of veratryl alcohol are shown in Scheme 1. The first step of the oxidation of veratryl alcohol is the formation of veratraldehyde. There are two reaction routes for veratraldehyde: on the one hand, further oxidation of veratraldehyde can produce veratric acid; on the other hand, the carbonyl group can be eliminated to form veratrole. In this work, no product other than veratraldehyde was detected by gas chromatography-mass spectrometry. All the catalysts showed a selectivity >99% to veratraldehyde during the reaction time of 8 h. The initial turnover frequencies (TOFs) at a reaction time of 0.5 h, shown in Table 1 and Figure 7, also confirm the enhanced catalytic activity of the 1.4AuPd-O catalyst, by a factor of 5.7 and 5.4 compared with the 0.86%AuCZ and 0.82%PdCZ catalysts, respectively. The highest TOF was obtained when the Au:Pd ratio was 1.4. It can also be observed that the 3.7AuPd-O and 4.3AuPd-O catalysts, with relatively lower Pd contents, exhibited slightly more activity than the monometallic 0.86%AuCZ and 0.82%PdCZ catalysts.
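The TOF values themselves come from Table 1, but with the stated alcohol:total metal ratio of 1000 mol:mol, an initial TOF normalized to exposed metal atoms would take the usual form below; treating the dispersion D from Table 1 as the exposed-atom fraction is our assumption about the normalization used.

```latex
\mathrm{TOF}(t)\;=\;\frac{n_{\text{alcohol converted}}}{n_{\text{surface metal}}\cdot t}
\;=\;\frac{X(t)\cdot 1000}{D\cdot t}
```

For illustration with hypothetical numbers, a conversion of X = 5% at t = 0.5 h over a catalyst with D = 0.27 would give TOF = 0.05 · 1000 / (0.27 · 0.5) ≈ 370 h⁻¹.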
As shown in Table 1, all catalysts presented a similar surface area (~66 m 2 g −1 ). With regard to the oxidation state of the two metals, the XPS data analysis indicates the coexistence of metallic Pd 0 and oxidized Pd δ+ species in all the catalysts, while gold was present only as metallic Au 0 . The catalyst with the highest conversion (1.4AuPd-O) presented similar proportions of metallic Pd 0 and oxidized Pd δ+ species as the 4.3AuPd-O catalyst with lower conversion. The influence of the Pd oxidation states could therefore not be established in this sense. It is well known that the Au:Pd ratio can affect the catalytic behavior in oxidation reactions [16,26,27,34].
The bimetallic Au-Pd catalysts supported on ceria-zirconia, with a wide Au:Pd ratio range of 0.4-4.3 and a total metal loading of 1 wt %, showed a synergistic effect. The 1.4AuPd-O catalyst, with an Au:Pd molar ratio of 1.4 and an average particle size around 3.6 nm, showed the best catalytic activity. The 1.8AuPd-O catalyst, with a higher average particle size (5.3 nm), exhibited a slightly lower TOF value than the best catalyst, 1.4AuPd-O, as shown in Table 1 and Figure 7. These results certify that the metal particle size is not the only determining parameter for catalytic activity. For this reason, it can be concluded that the Au:Pd molar ratio is one key factor that modulates the catalytic behavior for the oxidation of veratryl alcohol. STEM results showed a considerable change in particle size and in the percentage of bimetallic particles among the catalysts. Due to the difficulties in visualizing small Pd nanoparticles over a heavy ceria-zirconia support using the STEM-HAADF technique [14,19,24,26], the percentages of Au, Pd, and bimetallic Au-Pd particles can provide some information about the composition of the metal particles on these catalysts, but they should be interpreted with caution. This result suggests that the composition of the particles plays an important but not unique role in the conversion of veratryl alcohol to veratraldehyde. In addition, it is clear that the appearance of bimetallic particles leads to an increase in the conversion values, from 8% and 15.3% in 0.86%AuCZ and 0.82%PdCZ, respectively, to 20.1% in 4.3AuPd-O. Over all the monometallic and bimetallic catalysts studied in this work, the main product veratraldehyde was found with a selectivity higher than 99%. This result indicates that the selectivity to veratraldehyde is independent of the coexistence of monometallic and bimetallic particles. This synergy effect observed between Au and Pd over a ceria-zirconia mixed oxide support in the form of small nanoparticles provides interesting information.
Finally, the results make evident that the correlation between the catalytic activity for the selective oxidation of veratryl alcohol and factors such as the Au:Pd molar ratio, the frequency of bimetallic particles, the Pd content, and the Pd oxidation states is very complicated, and fully disentangling it is beyond the scope of this work. Conclusions Bimetallic Au:Pd catalysts supported on ceria-zirconia mixed oxide, prepared by the sol-immobilization method, have been employed for the first time for the selective oxidation of veratryl alcohol to produce veratraldehyde. The influence of the Au:Pd molar ratio on the catalytic activity for this reaction has been investigated. The optimal Au:Pd molar ratio found was 1.4, with a veratryl alcohol conversion of 72% and a selectivity toward veratraldehyde of 99%. Factors such as the Au:Pd molar ratio, the bimetallic particle content, and the coexistence of metallic Pd 0 and oxidized Pd δ+ could explain the enhanced catalytic activity for veratryl alcohol oxidation.
2018-09-15T21:18:11.737Z
2018-08-28T00:00:00.000
{ "year": 2018, "sha1": "620263a1ad88c33d887387ddef9bd8f4f814cd46", "oa_license": "CCBY", "oa_url": "https://res.mdpi.com/d_attachment/nanomaterials/nanomaterials-08-00669/article_deploy/nanomaterials-08-00669.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "620263a1ad88c33d887387ddef9bd8f4f814cd46", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
1819331
pes2o/s2orc
v3-fos-license
On the Capacity of Causal Cognitive Interference Channel With Delay In this paper, we introduce the Causal Cognitive Interference Channel With Delay (CC-IFC-WD), in which the cognitive user's transmission can depend on $L$ future received symbols as well as the past ones. Taking the effect of the link delays into account, CC-IFC-WD fills the gap between the genie-aided and causal cognitive radio channels. We study three special cases: 1) Classical CC-IFC (L=0), 2) CC-IFC without delay (L=1), and 3) CC-IFC with a block length delay (L=n). In each case, we obtain an inner bound on the capacity region. Our coding schemes make use of a cooperative strategy by generalized block Markov superposition coding, a collaborative strategy by rate splitting, and Gel'fand-Pinsker coding in order to pre-cancel part of the interference. Moreover, instantaneous relaying and non-causal partial Decode-and-Forward strategies are employed in the second and third cases, respectively. The derived regions, under special conditions, reduce to several previously known results. Moreover, we show that the coding strategy which we use to derive an achievable rate region for the classical CC-IFC achieves capacity for a special case of this channel. Furthermore, we extend our achievable rate regions to the Gaussian case. Providing a numerical example for the Gaussian CC-IFC-WD, we investigate the rate gain of the cognitive link for different delay values. I. INTRODUCTION Cognitive Interference Channel (C-IFC) refers to a two-user interference channel in which the cognitive user (secondary user) has the ability to obtain the message being transmitted by the other user (primary user), either in a non-causal or a causal manner. C-IFC was first introduced in [1], where an achievable rate region for the non-causal C-IFC was derived by combining Gel'fand-Pinsker (GP) binning [2] and the well-known simultaneous superposition coding scheme (rate splitting) applied to the Interference Channel (IFC) [3]. For the non-causal C-IFC, where the cognitive user has non-causal full or partial knowledge of the other user's transmitted message, several achievable rate regions and capacity results for some special cases have been established [4]-[7]. In the Causal C-IFC (CC-IFC), the cognitive user can exploit knowledge of the primary user's message from the causally received signals (information overheard through the feedback link from the channel, not sent back from the receivers). Due to the complex nature of the problem, the CC-IFC, which is a more realistic and appropriate model for practical applications than the non-causal C-IFC, has been far less investigated than the latter [8]. In [1], achievable rate regions for the CC-IFC based on non-cooperative causal transmission protocols were characterized. An improved rate region for the CC-IFC, employing a cooperative coding strategy based on block Markov superposition coding (full Decode-and-Forward (DF) [9]) and GP coding, was derived in [10]. A more general model in which both transmitters are causally cognitive, called the Interference Channel with Generalized Feedback (IFC-GF), was proposed in [11]. Different achievable rate regions for the IFC-GF were obtained in [11]-[13] by combining the methods of rate splitting, block Markov superposition coding, and GP binning.
In this paper, we define the Causal Cognitive Interference Channel With Delay (CC-IFC-WD) as an IFC where one of the transmitters can causally overhear the channel and its transmission can depend on the L future received symbols as well as the past ones. This can equivalently be seen as the classical CC-IFC with a −L unit delay on the cognitive user's received signal (or on the link between the transmitters). This channel model fits wireless networks in which the transmitters are close to each other. Moreover, CC-IFC-WD is a middle point between the unrealistic genie-aided (non-causal) C-IFC and the complex CC-IFC. In fact, a simple strategy such as Instantaneous Relaying (IR) could by itself be beneficial, as is the case in the Relay With Delay (RWD) channel [14]. Different upper and lower bounds and some capacity results have been derived for the RWD in [14]-[16], where the lower bounds are achieved based on combinations of cooperative strategies such as full or partial DF, IR (for L > 0), and non-causal DF (for L = n). It has been shown that the capacity of the RWD channel is strictly larger than that of the classic relay channel [14]. In this paper, after introducing the general CC-IFC-WD, we focus on three special cases: 1) L = 0, which corresponds to the classical CC-IFC; 2) CC-IFC without delay (L = 1), where the current received symbol (at the cognitive user) can also be utilized; and 3) CC-IFC with a block length delay (L = n), in which the cognitive user knows its entire received sequence non-causally. In each case, we obtain a new inner bound on the capacity region (achievable rate region) for the general discrete memoryless case. Our coding schemes combine a cooperative strategy based on generalized block Markov coding (partial DF [9]) and superposition coding with a collaborative strategy based on rate splitting, and use GP coding to mitigate part of the interference. For the first case (classic CC-IFC), we use a different strategy compared to the previous results: we use partial DF instead of full DF. Therefore, our achievable region improves on that of [10]. Moreover, since the common message should be decoded at both receivers, binning against the common message provides no improvement. Therefore, we use GP binning to pre-cancel part of the private message. A similar conclusion was drawn in [17] for the Cognitive Z-IFC. In the second and third cases, besides the approach we adopt for the first case, IR and non-causal partial DF strategies are employed, respectively. The derived achievable rate regions, under special conditions, reduce to several previously known rate regions, such as the ones in [3], [14]. Moreover, we derive the capacity region for a special case of the CC-IFC-WD, where achievability follows from our derived region. Furthermore, we consider the Gaussian CC-IFC-WD and extend the achievable rate regions for L = 0, L = 1, and L = n to the Gaussian case. Providing a numerical example for the Gaussian CC-IFC-WD, we investigate the rate gain of the cognitive link for different delay values. Thus, we compare the strategies used in our coding schemes and show that IR and non-causal DF improve the rate region noticeably. The rest of the paper is organized as follows. Section II introduces the general CC-IFC-WD channel model and the notation. In Section III, we consider three different scenarios and derive a new inner bound on the capacity region for each scenario. The capacity region for a special case of the CC-IFC-WD is derived in Section IV. In Section V, the Gaussian CC-IFC-WD is investigated.
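As a compact preview of the delay convention (this is simply a restatement of the encoding functions defined in Section II below), the cognitive encoder in the three special cases depends on its received sequence as

```latex
x_{2,i}=f_{2,i}\bigl(m_2,\,y_2^{\,i-1+L}\bigr),\qquad
\begin{aligned}
&L=0:\; x_{2,i}=f_{2,i}\bigl(m_2,\,y_2^{\,i-1}\bigr) &&\text{(classical CC-IFC)}\\
&L=1:\; x_{2,i}=f_{2,i}\bigl(m_2,\,y_2^{\,i}\bigr) &&\text{(CC-IFC without delay)}\\
&L=n:\; x_{2,i}=f_{2,i}\bigl(m_2,\,y_2^{\,n}\bigr) &&\text{(CC-IFC with a block length delay)}
\end{aligned}
```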
II. CHANNEL MODELS AND PRELIMINARIES Throughout the paper, upper case letters (e.g. X) are used to denote random variables (RVs) and lower case letters (e.g. x) show their realizations. The probability mass function (p.m.f) of a random variable (RV) X with alphabet set X is denoted by p_X(x), where the subscript X is occasionally omitted. |X| denotes the cardinality of a finite discrete set X. A^n_ε(X, Y) specifies the set of ε-strongly jointly typical sequences of length n, abbreviated as A^n_ε when clear. The notation X^j_i indicates a sequence of RVs (X_i, X_{i+1}, ..., X_j), where we use X^j instead of X^j_1 for brevity. Consider the CC-IFC-WD in Fig. 1, which is denoted by (X_1 × X_2, p(y_2, y_3, y_4 | x_1, x_2), Y_2 × Y_3 × Y_4), where X_1 ∈ X_1 and X_2 ∈ X_2 are the inputs of Transmitter 1 (Tx1) and Transmitter 2 (Tx2), respectively, Y_2 ∈ Y_2 is the secondary user output, Y_3 ∈ Y_3 and Y_4 ∈ Y_4 are the channel outputs at Receiver 1 (Rx1) and Receiver 2 (Rx2), respectively, and p(y_2, y_3, y_4 | x_1, x_2) is the channel transition probability distribution. In n channel uses, each Txu sends a message m_u to Rxu, where u ∈ {1, 2}. Definition 1: A (2^{nR_1}, 2^{nR_2}, n) code for the CC-IFC-WD consists of (i) two message sets M_1 = {1, ..., 2^{nR_1}} and M_2 = {1, ..., 2^{nR_2}} for the primary and secondary users, respectively, (ii) an encoding function at the primary user f_1 : M_1 → X_1^n, (iii) a set of encoding functions at the secondary user x_{2,i} = f_{2,i}(m_2, y_2^{i-1+L}), for 1 ≤ i ≤ n and m_2 ∈ M_2, and (iv) two decoding functions at Rx1 and Rx2, g_1 : Y_3^n → M_1 and g_2 : Y_4^n → M_2. We assume that the channel is memoryless. Thus, for m_1 ∈ M_1 and m_2 ∈ M_2, the joint p.m.f of (m_1, m_2, x_1^n, x_2^n, y_2^n, y_3^n, y_4^n) factors as p(m_1)p(m_2) ∏_{i=1}^{n} p(x_{1,i}|m_1) p(x_{2,i}|m_2, y_2^{i-1+L}) p(y_{2,i}, y_{3,i}, y_{4,i}|x_{1,i}, x_{2,i}) (1), where we avoid instantaneous feedback from X_2 to Y_2, which the delay may cause. The probability of error for this code is defined as P_e = max{P_{e,1}, P_{e,2}}, where for u ∈ {1, 2} we have P_{e,u} = 2^{-n(R_1+R_2)} Σ_{(m_1,m_2)} Pr{g_u ≠ m_u | (m_1, m_2) sent}. Definition 2: A rate pair (R_1, R_2) is achievable if there exists a sequence of (2^{nR_1}, 2^{nR_2}, n) codes with P_e → 0 as n → ∞. The capacity region C_L is the closure of the set of all achievable rate pairs. III. DISCRETE MEMORYLESS CC-IFC-WD In this section, we consider the discrete memoryless CC-IFC-WD and concentrate on three special cases: 1) Classical CC-IFC (L = 0); 2) CC-IFC without delay (L = 1), where the current received symbol (at the cognitive user) can be utilized too; and 3) CC-IFC with a block length delay (L = n), in which the cognitive user knows its entire received sequence non-causally. For all setups, new inner bounds on the capacity region are derived. We utilize a coding scheme based on combining generalized block Markov superposition coding, rate splitting, and GP binning against part of the interference. Furthermore, we apply IR in the second setup and non-causal partial DF in the last case. Outlines of the proofs are presented. A. Classical CC-IFC (L = 0) We present a new achievable rate region for this setup. Consider auxiliary RVs T_c, T_p, U_{1c}, U_{1p}, V_{1c}, V_{1p}, U_{2c}, U_{2p} and a time sharing RV Q defined on arbitrary finite sets T_c, T_p, U_{1c}, U_{1p}, V_{1c}, V_{1p}, U_{2c}, U_{2p} and Q, respectively. Let Z_1 = (Q, T_c, T_p, U_{1c}, U_{1p}, V_{1c}, V_{1p}, U_{2c}, U_{2p}, X_1, X_2, Y_2, Y_3, Y_4), and let P_1 denote the set of all joint p.m.fs p(.) on Z_1 that can be factored in the form of (2). Let R_1(Z_1) be the set of all nonnegative rate pairs (R_1, R_2), where R_1 = R_{10d} + R_{10n} + R_{11d} + R_{11n} and R_2 = R_{20} + R_{22}, such that there exist nonnegative (L_{20}, L_{22}) satisfying (3)-(23). Theorem 1: For any p(.) ∈ P_1, the region R_1(Z_1) is an achievable rate region for the discrete memoryless classical CC-IFC (CC-IFC-WD with L = 0), i.e., ∪_{Z_1 ∈ P_1} R_1(Z_1) ⊆ C_0. Remark 1: Consider the case where the cognitive user cannot overhear the channel, i.e., Y_2 = 0; in this case, the region reduces to the HK region for the IFC [3]. Remark 2: If we omit Receiver 2, i.e. Y_4 = 0, and the cognitive user does not have any message to transmit, i.e. R_2 = 0, the model reduces to the relay channel. By setting T_c = T_p = U_{1p} = V_{1p} = U_{2p} = ∅, L_{20} = L_{22} = R_{11n} = R_{11d} = R_2 = 0 and U_{2c} = X_2, the rate region reduces to the partial DF rate for the relay channel [9], which includes the capacity regions of the degraded [9] and semi-deterministic [18] relay channels.
Outline of the Proof: We propose the following random coding scheme, which contains regular generalized block Markov superposition coding, rate splitting, and GP coding in the encoding part. For decoding at the receivers we utilize backward decoding. The messages of the primary and cognitive users are split into four and two parts, respectively, i.e., m_1 = (m_{10d}, m_{10n}, m_{11d}, m_{11n}) and m_2 = (m_{20}, m_{22}), where the subscript d (or n) refers to the part of the primary user's message which can (or cannot) be decoded by the cognitive user. Moreover, (m_{10}, m_{20}) and (m_{11}, m_{22}) are common and private messages as in the HK scheme [3]. m_{10d} should be decoded at Rx2 (besides its intended receiver); therefore, binning against m_{10d} at the cognitive transmitter provides no improvement. Hence, the cognitive user cooperates with the primary user in sending m_{10d}, while using the GP binning method against m_{11d} to mitigate the effect of this known interference at Rx2. Now, consider a block Markov encoding scheme with B blocks of transmission, each of n symbols. Codebook Generation: Let q^n be a random sequence generated according to ∏_{i=1}^{n} p(q_i). For each t_c^n(m'_{10d}), generate 2^{nR_{11d}} i.i.d. t_p^n sequences, 2^{nR_{10d}} i.i.d. u_{1c}^n sequences, and 2^{nR_{10n}} i.i.d. v_{1c}^n sequences, according to the corresponding conditional p.m.fs in (2). Decoding of the primary user's message parts at the cognitive transmitter can be done with small enough probability of error, for sufficiently large n, if (22)-(23) hold. Backward decoding is used at the receivers; hence, they start decoding after all B blocks are received. Rx1: In block b, Rx1 looks for a unique quadruple (m_{11n,b}, m_{10n,b}, m_{11d,b−1}, m_{10d,b−1}) and some pair (m_{20,b}, l_{20,b}) such that the corresponding sequences are jointly typical. Rx2 proceeds similarly, where m_{10d,b} was decoded in the previous step. With arbitrarily high probability, no error occurs at Rx2 if n is large enough and (15)-(21) hold. Now, to understand the shape of the achievable region, we give a compact expression for R_1(Z_1) which is easier to compute. B. CC-IFC without delay (L = 1) In this case, the cognitive user can utilize the current received symbol as well as the past ones in order to cooperate with the primary user or to reduce the interference effect. In addition to the scheme used in Theorem 1, IR is employed to achieve higher rates using this additional information. Consider auxiliary RVs T_c, T_p, U_{1c}, U_{1p}, V_{1c}, V_{1p}, U_{2c}, U_{2p}, V_2 and Q defined on arbitrary finite sets T_c, T_p, U_{1c}, U_{1p}, V_{1c}, V_{1p}, U_{2c}, U_{2p}, V_2 and Q, respectively. Let Z_2 = (Z_1, V_2), and let P_2 be the set of all joint p.m.fs p(.) on Z_2 that can be factored in the form of (2), extended with the term p(v_2 | u_{2p}, u_{2c}, t_p, t_c, q). In fact, x_2 = f(v_2, y_2), where f(·) is an arbitrary deterministic function. Let R_2(Z_2) be the set of all nonnegative rate pairs (R_1, R_2), where R_1 = R_{10d} + R_{10n} + R_{11d} + R_{11n} and R_2 = R_{20} + R_{22}, such that there exist nonnegative real (L_{20}, L_{22}) which satisfy (3)-(23). Theorem 2: For any p(.) ∈ P_2, the region R_2(Z_2) is an achievable rate region for the discrete memoryless CC-IFC without delay (CC-IFC-WD with L = 1), i.e., ∪_{Z_2 ∈ P_2} R_2(Z_2) ⊆ C_1. Proof: The achievability proof follows by combining the scheme used in Theorem 1 with IR.
Encoding and decoding follow the same lines as in Theorem 1, except that during the codebook generation at the cognitive user (Tx2), v_2^n is generated according to ∏_{i=1}^{n} p(v_{2,i} | u_{2p,i}, u_{2c,i}, t_{p,i}, t_{c,i}, q_i), and in the encoding session, Tx2 at time i, upon receiving y_{2,i}, sends x_{2,i} = f(v_{2,i}, y_{2,i}). Remark 3: If we assume that V_2 has an extended alphabet of size |X_2|^{|Y_2|} (all mappings from Y_2 to X_2), this scheme is analogous to Shannon's strategy for cancelling causally known interference [19], [20]. C. CC-IFC with a block length delay (L = n) In this part, we investigate CC-IFC with a block length delay (L = n). This means that the cognitive user knows its entire received sequence non-causally. We derive an achievable rate region using a coding scheme based on combining non-causal partial DF, rate splitting, and GP binning against part of the interference. Consider auxiliary RVs U_{1c}, U_{1p}, V_{1c}, V_{1p}, U_{2c}, U_{2p} and a time sharing RV Q defined on arbitrary finite sets U_{1c}, U_{1p}, V_{1c}, V_{1p}, U_{2c}, U_{2p} and Q, respectively. Let Z_3 = (Q, U_{1c}, U_{1p}, V_{1c}, V_{1p}, U_{2c}, U_{2p}, X_1, X_2, Y_2, Y_3, Y_4), and let P_3 denote the set of all joint p.m.fs p(.) on Z_3 that can be factored in the form of (2) with (t_p, t_c) = (u_{1p}, u_{1c}). Let R_3(Z_3) be the set of all nonnegative rate pairs (R_1, R_2), where R_1 = R_{10d} + R_{10n} + R_{11d} + R_{11n} and R_2 = R_{20} + R_{22}, such that there exist nonnegative real (L_{20}, L_{22}) which satisfy (3)-(21) with (T_p, T_c) = (U_{1p}, U_{1c}). Theorem 3: For any p(.) ∈ P_3, the region R_3(Z_3) is achievable for the discrete memoryless CC-IFC with a block length delay (CC-IFC-WD with L = n), i.e., ∪_{Z_3 ∈ P_3} R_3(Z_3) ⊆ C_n. Proof: The proof is similar to that of Theorem 1, except that there is no dependence on previous block messages. Hence, simultaneous joint decoding is used instead of backward decoding. IV. CAPACITY OF DEGRADED CLASSICAL CC-IFC In this section, we investigate the classical CC-IFC (CC-IFC-WD with L = 0) with joint p.m.f p*, given by (1) with L = 0. Using the achievable region in Theorem 1, we find the capacity region for a special case. We define the degraded classical CC-IFC as a classical CC-IFC where the degradedness condition for the Tx1-Rx1 pair, with the cognitive user as a relay, holds for every p*: i.e., X_1 → (X_2, Y_2) → Y_3 forms a Markov chain. Next, we impose the strong interference conditions (32) and (33). In fact, under these conditions the interfering signals at Rx1 and Rx2 are strong enough to allow both messages to be decoded. Theorem 4: The capacity region of the degraded classical CC-IFC with joint p.m.f p*, satisfying (32) and (33), is given by (34). Proof: Achievability: For this part, we use the region R_1 in Theorem 1 (or Corollary 1) and ignore the time sharing RV Q. Let T_p = U_{1p} = V_{1p} = U_{2p} = ∅ and R_{22} = R_{11n} = R_{11d} = 0, which cross out the private parts of both messages, making the messages common to both receivers. Furthermore, assume that the cognitive user fully decodes the message of the primary user (m_1). Hence, set R_{10n} = 0 and V_{1c} = ∅. To omit the GP coding, we set L_{20} = L_{22} = 0. Redefining T_c = T, U_{2c} = X_2, U_{1c} = X_1 and applying condition (33) completes the proof of achievability. Converse: Consider a (2^{nR_1}, 2^{nR_2}, n) code with P_e → 0. Noting the joint p.m.f p*, we remark that X_1 → T → X_2 forms a Markov chain. First, we provide a useful lemma which we need in the proof of the converse. Lemma 1: If (32) and (33) hold for every distribution p*, then (35) holds. Proof: The proof follows the same lines as that of [5, Lemma 5].
Noting the independence of the messages and utilizing Fano's inequality, we obtain the first bound via a chain of inequalities in which (a) and (c) follow from the non-negativity of mutual information, (b) follows from the chain rule and the fact that X_{2,i} is a deterministic function of M_2 and Y_2^{i−1}, and (d) holds because the channel is memoryless. Using the standard time-sharing argument and condition (31), we obtain the first bound. Similarly, applying Fano's inequality, we bound R_2 via a chain in which (a) is based on the chain rule, (b) follows since X_{1,i} is a deterministic function of M_1 and mutual information is non-negative, and (c) holds because the channel is memoryless with joint p.m.f p*. Next, we utilize Fano's inequality to bound R_1 + R_2 via a chain in which (a) and (b) follow from the non-negativity of mutual information and the deterministic relation between X_1^n and M_1, (c) follows from the fact that conditioning does not increase entropy and the conditional independence of Y_4 and (M_1, M_2) given (X_1, X_2), and (d) holds due to (36). Finally, utilizing Fano's inequality, (38-c), and condition (35), the last bound can be shown. Using the standard time-sharing argument for these bounds completes the proof. V. GAUSSIAN CC-IFC-WD We consider the Gaussian CC-IFC-WD and extend the achievable rate regions R_1(Z_1), R_2(Z_2) and R_3(Z_3), derived for the discrete memoryless classical CC-IFC (L = 0), CC-IFC without delay (L = 1), and CC-IFC with a block length delay (L = n), respectively, to the Gaussian case. The Gaussian CC-IFC-WD at time i = 1, ..., n can be modeled as Y_{2,i} = h_{21} X_{1,i} + Z_{2,i}, Y_{3,i} = h_{31} X_{1,i} + h_{32} X_{2,i} + Z_{3,i}, and Y_{4,i} = h_{41} X_{1,i} + h_{42} X_{2,i} + Z_{4,i}, where h_{21}, h_{31}, h_{32}, h_{41} and h_{42} are known channel gains. X_{1,i} and X_{2,i} are the input signals with average power constraints P_1 and P_2, respectively. Z_{2,i}, Z_{3,i} and Z_{4,i} are i.i.d. and independent zero-mean Gaussian noise components with powers N_2, N_3 and N_4, respectively. Note that at the secondary user we have a set of encoding functions x_{2,i} = f_{2,i}(m_2, y_2^{i−1+L}) for 1 ≤ i ≤ n and m_2 ∈ M_2. First, we consider the Gaussian classical CC-IFC (L = 0). Using standard arguments, the region R_1 in Theorem 1 (or Corollary 1) can be extended to the discrete-time Gaussian memoryless case with continuous alphabets (R*_1). Hence, it is sufficient to evaluate (3)-(23) with an appropriate choice of input distribution. We constrain all the inputs to be Gaussian, set the time sharing RV to a constant and, for power allocation parameters with γ_1 + γ_2 + γ_3 ≤ 1, consider the following mapping (MAP_1) for the generated codebook in Theorem 1 with respect to the p.m.f (2), which contains rate splitting, generalized block Markov superposition, and GP coding. In fact, optimal values for α_1, α_2, S_1 and S_2 (used for GP coding) can be found by optimizing the rate region. However, this method is cumbersome, and we use a modified version of Costa's dirty paper coding (DPC) results [21]. Finally, we consider the Gaussian CC-IFC with a block length delay (L = n). We can use R*_1 to obtain the Gaussian counterpart of R_3, namely R*_3. The only difference is that, according to Theorem 3, there is no dependence on the previous block messages. Therefore, we can set T_c = U_{1c} and T_p = U_{1p}, or equivalently β'_2 = β_2 = 0 in (MAP_1), to obtain MAP_3. Fig. 2 compares R*_1, R*_2, R*_3 with the HK region in [3], where the overheard information is neglected. For L = 0, a rate improvement over the HK region can be seen, especially when the cognitive link is good enough (h_21 = 4). Due to IR, even when h_21 = 1, R*_2 outperforms both R*_1 and the HK region significantly.
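The Gaussian regions R*_1, R*_2 and R*_3 are evaluated from mutual-information expressions that, for Gaussian inputs, reduce to terms of the form C(x) = ½ log2(1 + x). A minimal sketch of evaluating such terms for the channel model above is given below; the quantities computed are illustrative ingredients (direct-link rates and the cognitive-link quality), not the full achievable regions, and the parameter values are assumptions.

```python
import math

def C(snr: float) -> float:
    """Gaussian capacity function C(x) = 0.5*log2(1+x), in bits/channel use."""
    return 0.5 * math.log2(1.0 + snr)

# Illustrative parameters (assumptions): unit powers and noise variances,
# with a strong cognitive link h21 = 4 as in the numerical example.
P1, P2 = 1.0, 1.0
N2, N3, N4 = 1.0, 1.0, 1.0
h21, h31, h32, h41, h42 = 4.0, 1.0, 1.0, 1.0, 1.0

# Point-to-point rates of the direct links. These are reference points only;
# they are not outer bounds here, since cooperation can beat the direct link.
R1_direct = C(h31**2 * P1 / N3)   # Tx1 -> Rx1 without interference
R2_direct = C(h42**2 * P2 / N4)   # Tx2 -> Rx2 without interference
# Quality of the overhearing (Tx1 -> cognitive user) link driving the DF terms.
R_overhear = C(h21**2 * P1 / N2)

print(f"R1_direct = {R1_direct:.3f}, R2_direct = {R2_direct:.3f}, "
      f"cognitive link rate = {R_overhear:.3f}")
```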
Considering R*_2 and R*_3, it is seen that when R_2 is small, IR can achieve higher rates than non-causal DF. However, when the cognitive user sends at higher rates, the condition of the cognitive link determines the better strategy. Note that, using a coding scheme based on the combination of IR and non-causal DF, the convex hull of R*_2 and R*_3 is achievable for the CC-IFC with L = n.
2012-02-02T21:17:34.000Z
2010-01-17T00:00:00.000
{ "year": 2010, "sha1": "a08befee697a5bf4ab80ea70ef5842d2aea62d4c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a08befee697a5bf4ab80ea70ef5842d2aea62d4c", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
108294542
pes2o/s2orc
v3-fos-license
An administrative model for benchmarking hospitals on their 30-day sepsis mortality Background Given the increased attention to sepsis at the population level there is a need to assess hospital performance in the care of sepsis patients using widely-available administrative data. The goal of this study was to develop an administrative risk-adjustment model suitable for profiling hospitals on their 30-day mortality rates for patients with sepsis. Methods We conducted a retrospective cohort study using hospital discharge data from general acute care hospitals in Pennsylvania in 2012 and 2013. We identified adult patients with sepsis as determined by validated diagnosis and procedure codes. We developed an administrative risk-adjustment model in 2012 data. We then validated this model in two ways: by examining the stability of performance assessments over time between 2012 and 2013, and by examining the stability of performance assessments in 2012 after the addition of laboratory variables measured on day one of hospital admission. Results In 2012 there were 115,213 sepsis encounters in 152 hospitals. The overall unadjusted mortality rate was 18.5%. The final risk-adjustment model had good discrimination (C-statistic = 0.78) and calibration (slope and intercept of the calibration curve = 0.960 and 0.007, respectively). Based on this model, hospital-specific risk-standardized mortality rates ranged from 12.2 to 24.5%. Comparing performance assessments between years, correlation in risk-adjusted mortality rates was good (Pearson’s correlation = 0.53) and only 19.7% of hospitals changed by more than one quintile in performance rankings. Comparing performance assessments after the addition of laboratory variables, correlation in risk-adjusted mortality rates was excellent (Pearson’s correlation = 0.93) and only 2.6% of hospitals changed by more than one quintile in performance rankings. Conclusions A novel claims-based risk-adjustment model demonstrated wide variation in risk-standardized 30-day sepsis mortality rates across hospitals. Individual hospitals’ performance rankings were stable across years and after the addition of laboratory data. This model provides a robust way to rank hospitals on sepsis mortality while adjusting for patient risk. Electronic supplementary material The online version of this article (10.1186/s12913-019-4037-x) contains supplementary material, which is available to authorized users. Background Sepsis is a leading cause of in-hospital mortality and a major driver of health care spending in developed nations [1]. Several evidence-based practices for sepsis exist, including adequate control of the infectious source, early administration of appropriate antibiotics, and early administration of intravenous fluids to support intravascular volume [2]. However, hospitals deliver these treatments inconsistently, leading to excess morbidity and mortality [3,4]. In response to this persistent quality gap, health systems and governments have developed large scale strategies to improve sepsis care both through traditional clinically-oriented quality improvement [5] and through health policies designed to incentivize quality improvement at the regional and national level [6,7]. Understanding the impact of these efforts and providing hospitals with feedback on their quality of care in patients with sepsis requires a robust method for assessing hospital-specific mortality rates. 
Such a method would ideally use widely available data that are readily accessible across hospital systems and must effectively account for individual patients' variation in risk of mortality. At the same time, mortality-based performance measures should not adjust for variation in treatment practices that may modify the risk of mortality, since these practices are reflective of hospital quality. To address this need, we used a state-wide Pennsylvania discharge database that captures administrative claims data along with a selection of laboratory data to create a novel method to adjust for individual patients' severity of illness on presentation, in order to meaningfully compare sepsis outcomes across hospitals. Study design and data We conducted a retrospective cohort study of patients with sepsis admitted to non-federal general acute care hospitals in the Commonwealth of Pennsylvania in the United States during calendar years 2012 and 2013. First, we developed a de novo risk-adjustment model using 2012 administrative data. Next, we examined the construct validity of our model by examining the stability of hospital rankings over time (comparing the 2012 administrative model to the 2013 administrative model) and after the addition of clinical laboratory variables (comparing the 2012 administrative model to a 2012 clinical model with both administrative and laboratory data). In this context, a valid administrative model would produce relatively stable performance estimates over time (i.e., with few exceptions, hospitals that are high performers one year would be high performers the next year). A valid administrative model would also yield performance estimates that are similar to those estimated from a more granular clinical model which better accounts for variation in risk. We used the Pennsylvania Health Care Cost Containment Council (PHC4) database. PHC4 collects administrative data on all hospital admissions in Pennsylvania and makes them available for research, including both demographic information and International Classification of Diseases-version 9.0-Clinical Modification (ICD-9-CM) diagnosis and procedure codes. Unlike most administrative claims-based data sets, these data also contain a selection of laboratory values obtained on the day of admission, enabling us to create a clinical model in addition to the standard administrative model [8]. We augmented these data with the Pennsylvania Department of Health vital status records to capture post-discharge mortality. Patients and hospitals All encounters for patients meeting the "Angus" definition of sepsis (either an explicit ICD-9-CM code for sepsis or co-documentation of ICD-9-CM codes for an infection and an organ dysfunction) were eligible for the study [9,10]. We chose the Angus definition because it is the broadest administrative definition of sepsis and has undergone rigorous clinical validation [10]. We excluded admissions to non-short-term and non-general acute care hospitals, as these hospitals were not the focus of our study. We also excluded admissions of patients less than 20 years of age, admissions for which gender or age was missing, and admissions at hospitals that were not continuously open and admitting patients for the duration of the study period. To maintain independence of observations, if a single patient had multiple encounters within a study year, then we randomly included a single encounter per year.
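As a sketch of the cohort-construction logic just described (column names and the example rows are hypothetical; the actual Angus ICD-9-CM code lists are in the cited references), the Angus flag and the one-random-encounter-per-patient-year rule might look like this in pandas:

```python
import pandas as pd

# Hypothetical discharge-level data; the code-set membership flags are assumed
# to have been derived from the Angus ICD-9-CM lists [9, 10].
df = pd.DataFrame({
    "patient_id":             [1, 1, 2, 3, 3],
    "year":                   [2012, 2012, 2012, 2012, 2013],
    "age":                    [67, 67, 81, 45, 46],
    "explicit_sepsis_code":   [True, False, False, True, False],
    "infection_code":         [True, True, True, True, True],
    "organ_dysfunction_code": [False, True, True, False, True],
})

# Angus definition: explicit sepsis code OR (infection AND organ dysfunction).
df["angus_sepsis"] = df["explicit_sepsis_code"] | (
    df["infection_code"] & df["organ_dysfunction_code"]
)

# Apply the age exclusion and keep only Angus-positive encounters.
cohort = df[df["angus_sepsis"] & (df["age"] >= 20)]

# Keep one randomly selected encounter per patient per study year.
cohort = (
    cohort.groupby(["patient_id", "year"], group_keys=False)
          .apply(lambda g: g.sample(n=1, random_state=42))
)
print(cohort)
```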
Base model for risk-adjusted mortality We first created a base logistic regression model for risk-adjusted mortality using exclusively risk-adjustment variables that are available in administrative data. The primary outcome variable for this model was all-cause mortality within 30 days of the admission date, as determined using the Pennsylvania vital status records. The model was based on five categories of risk-adjustment variables hypothesized to be associated with sepsis outcomes based on prior work [9,11,12]: demographics, admission source, comorbidities, organ failures present on admission, and infection source. Demographic variables were obtained directly from the claims and included age and gender. Gender was modeled as an indicator covariate, and age was modeled as a linear spline by age quintile. Admission source was obtained directly from the claims and modeled as an indicator covariate defined as admission through the emergency department versus admission from another source. Comorbidities were defined using ICD-9-CM codes in the manner of Elixhauser [13] and modeled as indicator covariates. Organ failures present on admission were defined in the manner of Elias [12] and modeled as indicator covariates. For comorbidities and organ failures present on admission, we excluded from the model any designation that had less than a 1% prevalence in our sample population. Infection source was modeled as hierarchical infection categories in which we assigned each patient an infectious source category identified using ICD-9-CM diagnosis codes (see Additional file 1: Table S1). We created the categories from the Angus sepsis definition [9], which we further divided into 12 groups: septicemia, bacteremia, fungal infection, peritoneal infection, heart infection, upper respiratory infection, lung infection, central nervous system infection, gastrointestinal infection, genitourinary infection, skin infection, and other infection source. For patients with multiple ICD-9-CM codes indicating multiple infection sources, we assigned the single infection source category associated with the highest unadjusted mortality. In ranking the infectious sources by their unadjusted mortality, we used 2011 data in order to avoid model overfitting. The final variable was modeled as a series of mutually exclusive indicator covariates with upper respiratory infection as the reference category. Augmented mortality model including laboratory variables We next created an augmented logistic regression model for risk-adjusted mortality using all of the variables from the base model plus selected laboratory values obtained on the day of admission. The list of available laboratory values, including their units, frequencies, averages, and ranges, is available in Additional file 1: Tables S2 and S3. Values outside the plausible range, such as negative data points for non-calculated laboratory values, were recoded as missing. We used a multi-step process to determine not only which lab variables to include in our model but also their functional forms. First, we used locally weighted scatterplot smoothing (LOWESS) to visually assess inflection points in the relationship between each numeric laboratory value and 30-day mortality [14].
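Producing such smoothed plots can be done with any LOWESS implementation; a sketch using statsmodels follows, with simulated data standing in for the real lab values (the variable names and the U-shaped risk relationship are assumptions for illustration).

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

# Hypothetical data: one numeric lab value and the 30-day mortality indicator.
rng = np.random.default_rng(0)
sodium = rng.normal(138, 6, size=5000)                 # serum sodium, mmol/L
# Assumed U-shaped risk: mortality rises at both low and high extremes.
p_death = 0.10 + 0.002 * (sodium - 138) ** 2 / 6
died_30d = rng.uniform(size=sodium.size) < np.clip(p_death, 0, 1)

# LOWESS of mortality on the lab value; frac controls the smoothing window.
smoothed = lowess(died_30d.astype(float), sodium, frac=0.3)

plt.plot(smoothed[:, 0], smoothed[:, 1])
plt.xlabel("Serum sodium (mmol/L)")
plt.ylabel("Smoothed 30-day mortality")
plt.show()  # visually pick cut points separating normal from extreme ranges
```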
Based on visual inspection of these plots and standard reference values from our hospital's laboratory, we categorized each variable into between two and five categories, with one category representing a normal result and the other categories representing non-normal extremes: very low, low, high, and very high. For arterial pH and arterial pCO2, which are interdependent, we performed an additional step in which we created a single combined variable whose categories were permutations of the non-normal categories defined for pH and pCO2, respectively, as previously performed [15]. For each patient, we assigned an appropriate category for every laboratory test based on the reported result. If the patient had more than one result available for a given laboratory test, we selected the value that fell into the category associated with the higher mortality rate. When a laboratory test result was missing, we assumed it to fall into the normal range and assigned the normal category, as is standard in physiological risk-adjustment models [15]. Next, we used Bayesian information criterion (BIC)-based stepwise logistic regression to identify the laboratory value covariates to be included in the model. This regression included all the covariates in the claims-based model. Laboratory values that did not contribute to an optimal BIC were excluded from the final model. Each laboratory value's categories were assessed in the BIC regression as a group and ultimately either included in or excluded from the model as a group, so as not to partially remove categories for a given laboratory value. Laboratory values deemed contributory by the BIC regression entered the final model as categorical variables with the normal category as the reference group. Risk-standardized mortality rates Based on these models, we used mixed-effects logistic regression to create risk-standardized hospital-specific 30-day mortality rates. These rates account for variation in both risk and reliability across hospitals: they account for variation in risk in that they control for the different baseline characteristics of sepsis patients across hospitals; they account for reliability in that the rates for small hospitals, which are more susceptible to random variation than the rates for large hospitals, are adjusted toward the state-wide mean [16]. We calculated hospital-specific risk-adjusted mortality rates by dividing each hospital's predicted mortality (using the base model plus a hospital-specific random effect) by each hospital's expected mortality (using the base model without a hospital-specific random effect), generating a risk-standardized mortality ratio. Multiplying the risk-standardized mortality ratio by the mean 30-day mortality of the state-wide sample yielded a hospital-specific risk-standardized mortality rate. We performed this process separately for 2012 and 2013 without laboratory data and then again for 2012 with laboratory data, resulting in three sets of hospital-specific mortality rates: 2012 administrative rates, 2013 administrative rates, and 2012 clinical rates. Analysis For all models we assessed discrimination, using the C-statistic, and calibration, using the slope and intercept of regression lines fit to the calibration plots. We assessed the validity of our administrative model by examining the consistency of hospital rankings over time and with the addition of laboratory data.
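Returning to the risk-standardization step described above, the predicted/expected construction can be written compactly. The sketch below assumes that patient-level predictions from the mixed-effects model (with and without the hospital random effect) are already in hand; the function and column names are hypothetical.

```python
import pandas as pd

def risk_standardized_rates(pred: pd.DataFrame, overall_mortality: float) -> pd.Series:
    """pred has one row per encounter with columns:
    hospital_id,
    p_with_re  - predicted P(death) including the hospital random effect,
    p_no_re    - expected P(death) from the fixed effects only.
    Returns each hospital's risk-standardized 30-day mortality rate."""
    by_hosp = pred.groupby("hospital_id")[["p_with_re", "p_no_re"]].sum()
    smr = by_hosp["p_with_re"] / by_hosp["p_no_re"]   # standardized mortality ratio
    return smr * overall_mortality                     # rescale to a rate

# Example using the study's 2012 overall unadjusted mortality of 18.5%.
pred = pd.DataFrame({
    "hospital_id": ["A", "A", "B", "B"],
    "p_with_re":   [0.20, 0.10, 0.30, 0.25],
    "p_no_re":     [0.18, 0.12, 0.33, 0.30],
})
print(risk_standardized_rates(pred, overall_mortality=0.185))
```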
As noted above, we assumed that a valid model would yield hospital rankings that did not markedly change between years or after the addition of laboratory values. We generated scatter plots to compare the hospital-specific risk-standardized mortality rates between the 2012 and 2013 administrative rates, and between the 2012 administrative and clinical rates, calculating a coefficient of determination. Additionally, for each of the three sets of hospital-specific mortality rates, we calculated performance quintiles, with the outer quintiles representing the highest- and lowest-performing 20% of hospitals, respectively. We compared the composition of the quintiles between the 2012 and 2013 administrative rates and then between the 2012 administrative and clinical rates. We considered hospital movement of one quintile or less between comparison groups to be a marker of stability. Data management and analysis were performed using Stata version 14.0 (StataCorp, College Station, Texas). All aspects of this work were reviewed and approved by the University of Pittsburgh institutional review board. Patients and model development A total of 236,154 patients met our final inclusion criteria: 115,213 in 2012 and 120,941 in 2013 (Fig. 1). These patients were admitted to 152 different acute care hospitals. Patient characteristics stratified by year are shown in Table 1. In both years the average age was over 70, and a large percentage of patients were admitted through the emergency department. The most common comorbidity was hypertension (58.2% in 2012 and 57.4% in 2013), followed by fluid and electrolyte disorders, renal disease, diabetes, congestive heart failure, and chronic pulmonary disease. The most common organ failure on admission was renal failure (48.2% in 2012 and 49.1% in 2013), followed by cardiovascular failure. Unadjusted 30-day mortality was 18.5% in 2012 and 18.2% in 2013. The hierarchical infection categories, along with each category's mortality rate and the number of patients who were placed into that infection category, are shown in Fig. 2. In both years, septicemia was the most prevalent category (30.9% in 2012 and 33.0% in 2013) and was associated with the highest mortality (30.5% in 2012 and 28.9% in 2013). The set of laboratory test results available from PHC4, along with their plausible ranges and final categorizations, is shown in Additional file 1: Tables S2 and S3. Based on BIC criteria, 19 of these laboratory test results were included in the final risk-adjustment model. These tests, along with the proportion of results that were normal, abnormal, or missing, are shown in Table 2. For individual laboratory values, the percent of patients with a reported value ranged from 4.4 to 72% in 2012 and from 5.2 to 74% in 2013. The most frequently reported lab value was serum glucose, and the least frequently reported was serum pro-B-type natriuretic peptide. The final base model results are shown in Additional file 1: Table S4. The factors most strongly associated with mortality included age, selected hierarchical infection categories (e.g., septicemia, heart infection, lung infection, and fungal infection), and selected comorbidities (e.g., metastatic cancer and neurologic decline). Regarding laboratory results, derangements in pro-BNP, albumin, troponin, bilirubin, BUN, and sodium were most strongly associated with mortality. All models showed good discrimination and calibration (for the 2012 administrative model, C-statistic = 0.78, with a calibration slope of 0.960 and intercept of 0.007).
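As an aside on mechanics, the quintile-stability comparison described in the Analysis section can be sketched with pd.qcut; the rates below are hypothetical.

```python
import pandas as pd

# Hypothetical risk-standardized rates for the same hospitals in two models.
rates = pd.DataFrame({
    "rsmr_2012_admin": [0.13, 0.15, 0.17, 0.18, 0.19, 0.20, 0.21, 0.22, 0.23, 0.24],
    "rsmr_2013_admin": [0.14, 0.16, 0.16, 0.19, 0.18, 0.21, 0.20, 0.23, 0.22, 0.24],
}, index=[f"H{i}" for i in range(10)])

# Performance quintiles (1 = lowest risk-adjusted mortality).
q1 = pd.qcut(rates["rsmr_2012_admin"], 5, labels=False) + 1
q2 = pd.qcut(rates["rsmr_2013_admin"], 5, labels=False) + 1

movement = (q1 - q2).abs()
stable = (movement <= 1).mean()     # share of hospitals moving one quintile or less
print(pd.crosstab(q1, q2))          # analogue of Table 3
print(f"{100 * stable:.0f}% of hospitals moved at most one quintile")
```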
Risk-adjusted mortality rates

Risk-adjusted mortality rates varied widely in all models, demonstrating their utility in identifying high performing and low performing hospitals. The range of hospital-specific risk-standardized mortality rates was 12.2 to 24.5% with a mean of 18.4% in the 2012 administrative model; 12.7 to 23.7% with a mean of 18.1% in the 2013 administrative model; and 12.9 to 23.9% with a mean of 18.4% in the 2012 clinical model that included both administrative variables and laboratory results. In the validation steps, the risk-standardized mortality rates for individual hospitals were relatively stable across years (Pearson's correlation = 0.53; Fig. 3a) and after the addition of laboratory values (Pearson's correlation = 0.93; Fig. 3b). When stratifying hospitals into quintiles by performance and comparing the 2012 and 2013 administrative models, of the 152 hospitals, 69 (45%) did not change quintile and only 19 (13%) moved by more than one quintile between the 2 years (Table 3). Comparing the 2012 administrative model to the 2012 clinical model, 113 (74%) hospitals stayed in the same quintile, and only 1 hospital (1%) moved by more than one quintile (Table 3).

Discussion

Using a large, state-wide sample of sepsis admissions to over 150 hospitals, we developed an administrative risk-adjustment model suitable for benchmarking hospitals on their 30-day sepsis mortality. This model showed very good discrimination and calibration. In addition, the model results were reasonably stable, yielding performance assessments that were similar when comparing multiple years and when comparing the administrative model to a model that contained more granular clinical risk adjustment variables. Our model can be used by health systems and governments to assess hospital performance in the care of patients with sepsis. Sepsis is increasingly recognized as a major public health problem, and there is increasing attention to implementing large-scale sepsis performance improvement initiatives in hospitals [17]. For example, in the United States, the federal government requires all hospitals participating in the Medicare program to report data on adherence to a sepsis care bundle [6]. In addition, several US localities require hospitals to implement protocols for sepsis recognition and treatment [7]. Our model can be used to assess the impact of those initiatives and others like them, providing a valuable tool for sepsis-focused health policy assessment and population-based comparative effectiveness research. Similarly, our model could allow researchers and policy makers to identify hospitals with outlying performance as candidates for targeted quality improvement efforts. For example, poor performing hospitals could benefit from dedicated resources to improve sepsis outcomes, and high performing hospitals could serve as laboratories to understand how to deliver high-quality sepsis care. This framework, known as "positive-negative deviance" [18], is an increasingly common quality improvement tool and has been useful in other analogous areas such as performance improvement in intensive care unit telemedicine [19]. The current study builds on prior work in this area, including related studies performed in Germany [20], in the United States Medicare population [21], and in patients with septic shock [22].
Our study adds to this literature in that it examined all hospitalized sepsis patients in a large US state and included patients with all insurance types instead of just Medicare, thus filling an important niche. Our study also extends related work which developed an administrative model for sepsis mortality but for which the time horizon was limited to the hospital [23] (i.e. patients were not followed for their outcome after discharge).

(Fig. 3 caption: Y and X axes are the model-derived risk-adjusted mortality rates. Blue dots represent a single hospital. Grey lines represent the linear correlation between the two performance estimates.)

In-hospital mortality as an outcome measure is known to be biased by discharge practices [24]. Benchmarking hospitals using in-hospital mortality might incentivize them to discharge patients more quickly to post-acute care hospitals, biasing the performance assessments [25,26]. This problem is overcome when using 30-day mortality as an outcome measure, as we do here, making our results particularly useful. Our study has several limitations. First, by using administrative data, we cannot rule out that we insufficiently accounted for variation in case-mix across hospitals. Although our comparison to a model that included lab values provides important construct validity, we did not have access to other key variables like vital signs or patients' preferences for limitations of life-sustaining treatment [27]. Including these values might demonstrate that a more accurate model would perform differently than our administrative model and result in more significant changes in hospital performance rankings. Second, in addition to administrative risk adjustment we used an administrative case-ascertainment strategy, which is only modestly accurate and may lead to different performance rankings than a different administrative strategy or a clinical strategy [28]. Third, we used data from only one US state; however, it is a large state with both urban and rural areas, supporting the generalizability of our results. Finally, we examined 30-day mortality but not other important outcome measures like sepsis readmission rates or long-term outcomes. Future work should be directed at understanding hospital-level variation in these outcome measures.

Conclusions

In conclusion, we developed a robust risk-adjustment model that may be implemented on existing data collection structures and can be used to benchmark hospitals on sepsis outcomes. Future work should be directed at using this model to develop and test large-scale sepsis performance improvement initiatives.

Table 3. Comparison of quintile rankings of individual hospitals by model. Hospitals along the diagonal did not change rankings in the different models, indicating that for these hospitals the performance rankings were stable across time or after the addition of laboratory values.
Short-wave vortex instability in stratified flow

In this paper we investigate a new instability of the Lamb-Chaplygin dipole in a stratified fluid. Through numerical linear stability analysis, a secondary peak in the growth rate emerges at vertical scales about an order of magnitude smaller than the buoyancy scale $L_{b}=U/N$, where $U$ is the characteristic velocity and $N$ is the Brunt-V\"{a}is\"{a}l\"{a} frequency. This new instability exhibits a growth rate that is similar to, and even exceeds, that of the zigzag instability, which has the characteristic length of the buoyancy scale. This instability is investigated for a wide range of Reynolds numbers $Re=2000-20000$ and horizontal Froude numbers $F_{h}=0.05-0.2$, where $F_{h}=U/NR$, $Re=UR/\nu$, $R$ is the characteristic length scale of the dipole, and $\nu$ is the viscosity. A range of vertical scales is explored, from above the buoyancy scale to the viscous damping scale. Additionally, evidence is presented that the dynamics of this new instability are partially determined by the buoyancy Reynolds number, $Re_{b}=F_{h}^{2}Re$.

I. INTRODUCTION

Vortices play a fundamental role in the transition to turbulence by providing the mechanism for the energy cascade from large to small scales. In the atmosphere and ocean, vortices are strongly influenced by density stratification and the rotation of the earth. However, stratification dominates at intermediate length scales - the atmospheric mesoscale and the oceanic submesoscale - which are small enough for the Coriolis effects to be weak, but large enough for the stable density stratification to be strong (e.g. Refs. [1-3]). There has recently been much work, using full direct numerical simulations of the Boussinesq equations with various initial configurations, to uncover the emergence and evolution of stratified turbulence from vortices [4-7]. Turbulence in this regime is governed by the Reynolds number Re = UR/ν as well as the horizontal Froude number F_h = U/NR, where U is the characteristic velocity, R is the characteristic horizontal length, N is the Brunt-Väisälä frequency, and ν is the kinematic viscosity. Because of this extra dependence on the Froude number, the underlying dynamics are not as well understood and a full picture of stratified turbulence is not complete [1-3,8,9]. In large-scale atmosphere and ocean simulations, it is difficult or impossible to resolve all possible processes. As a result, obtaining a proper parameterisation of small-scale phenomena is critical to correctly modelling the evolution. A useful approach to investigating these small-scale dynamics is to consider the transition problem in an idealised flow, which can elucidate the key features that govern the more comprehensive turbulence problem. One model that may be used to study the transition to stratified turbulence is that of a columnar counter-rotating vortex dipole. There is a large body of literature on the instability of vortex dipoles in unstratified fluids, including the Crow instability at large length scales (e.g. Refs. [10-12]) and the elliptic instability at smaller scales (e.g. Refs. [11,13-15]).
In stratified fluids, laboratory and numerical experiments on the stability of such dipoles have uncovered a unique instability, the zigzag instability, so named due to the zigzag-like structure exhibited by the flow 16,17. The zigzag instability has a dominant vertical wavelength of around U/N, which is known as the buoyancy scale 18. This instability has also been found in other flow configurations, including co-rotating vortices 19 and vortex arrays 20. The breakdown of this dipole into turbulence due to the growth and saturation of the zigzag instability has also been investigated 4-6. However, these studies mainly consider dipoles perturbed at the zigzag scale U/N, and do not investigate the growth of smaller vertical scale perturbations. Growth in such small-scale perturbations has been reported in nonlinear simulations 4,16. The work is presented as follows: in section 2 we present the numerical scheme and methodology; in section 3 we discuss the results of the numerical simulations and investigate some properties of the small-scale instability. Conclusions are discussed in the last section.

A. Equations and Initial Conditions

We consider the non-dimensional Boussinesq approximation to the Navier-Stokes equations in Cartesian coordinates, where D/Dt = ∂/∂t + u · ∇, u = (u, v, w) is the velocity, p is the pressure, and ρ' is the density perturbation. We have non-dimensionalised by the characteristic velocity U, length R, time-scale R/U, pressure ρ_0 U^2, and density ρ_0 U^2/gR, and defined Sc = ν/D as the Schmidt number, where D is the mass diffusivity, ρ_0 is the background density, and g is the gravitational constant. The Reynolds and horizontal Froude numbers are as defined above. The buoyancy frequency N, and hence the Froude number F_h, is assumed to be constant. As the basic state for linear stability analysis we use the Lamb-Chaplygin dipole in a comoving frame 22. This dipole is a solution to the 2D inviscid Euler equations. This basic state is motivated by laboratory experiments 16,23, which demonstrated that a vertically oriented Lamb-Chaplygin dipole is a good approximation to the vortex generated by two flaps closing in a tank of salt-stratified water. The dipole, in cylindrical coordinates (r, θ) and non-dimensional units, is given by the stream function

$$\psi_0(r,\theta) = \begin{cases} -\dfrac{2}{\mu_1 J_0(\mu_1)}\, J_1(\mu_1 r)\sin\theta, & r \le 1,\\[4pt] -\left(r - \dfrac{1}{r}\right)\sin\theta, & r > 1, \end{cases}$$

and the corresponding vertical vorticity $\omega_{z0} = \nabla_h^2 \psi_0$ (equal to $-\mu_1^2 \psi_0$ inside the dipole and zero outside), where J_0, J_1 are the zero and first order Bessel functions, μ_1 ≈ 3.8317 is the first root of J_1, and ∇_h^2 is the horizontal Laplacian. The basic state velocity is purely horizontal and is obtained from the stream function ψ_0. We now write the fields as a basic state plus perturbations, denoted by ~. Ignoring the viscous diffusion of the basic state 24 and neglecting products of the perturbations, we obtain a set of linear equations for the perturbations, including the incompressibility condition $\nabla \cdot \tilde{u} = 0$, where $\tilde{\omega} = \nabla \times \tilde{u}$. As stated above, the Lamb-Chaplygin dipole is oriented vertically. As a result, we can separate the perturbation into the vertical and horizontal directions as

$$\tilde{u}(x, y, z, t) = u(x, y, t)\, e^{i k_z z} + \mathrm{c.c.},$$

where c.c. is the complex conjugate. From here we can now take the 2D Fourier transform and define a projection operator P(k), with components $P_{ij}(k) = \delta_{ij} - k_i k_j/k^2$, to eliminate pressure (e.g. Lesieur 25), obtaining a set of equations, (10) and (11), for the Fourier coefficients, where k_z, Re, Sc, F_h are input parameters and k_h = (k_x, k_y) is the horizontal wavenumber.

B. Numerical Scheme

To numerically solve (10) and (11), we use a spectral transform method to evaluate derivatives, with 2/3-rule de-aliasing and second-order Adams-Bashforth for time-stepping.
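As an illustration only (not the authors' code; the grid size and all names here are ours), the main ingredients of such a scheme - the pressure-projection operator P_ij = δ_ij - k_i k_j/k^2, the 2/3-rule de-aliasing mask, and the Adams-Bashforth update - can be sketched as follows:

```python
# Minimal 2D spectral building blocks, assuming a unit-length periodic box.
import numpy as np

n = 64
k = np.fft.fftfreq(n, d=1.0 / n)               # integer wavenumbers
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                                 # avoid division by zero at k = 0

def project(uh, vh):
    """Apply P(k): remove the divergent part of the horizontal velocity."""
    div = kx * uh + ky * vh
    return uh - kx * div / k2, vh - ky * div / k2

# 2/3-rule mask: zero out modes with |k| >= n/3 before forming products
dealias = (np.abs(kx) < n / 3) & (np.abs(ky) < n / 3)

def ab2_step(f, rhs, rhs_old, dt):
    """Second-order Adams-Bashforth update for a field of Fourier coefficients."""
    return f + dt * (1.5 * rhs - 0.5 * rhs_old)
```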
Each simulation was initialised with a random field and integrated over an N × N grid for 100 time units to determine the behaviour of the fastest growing mode. After several time units, the leading eigenmodes for u, ρ behave exponentially (e.g. Billant and Chomaz 17) and we can obtain the largest growth rate from

$$\sigma = \frac{1}{2E}\frac{dE}{dt}, \qquad (13)$$

where σ is the real growth rate of the mode and E is the kinetic energy $\frac{1}{2}(u^2 + v^2 + w^2)$. To evaluate σ, we compute the average value of the growth rate from the time series of σ produced by (13), beginning at t = 20, after the initial transient behaviour has died out and the leading mode dominates, until the end time t = 100. In the case of an oscillatory growth rate, as considered in Ref. 26, we drop the assumption that σ is real and instead compute the growth rate from

$$\sigma = \frac{1}{2T}\ln\!\left(\frac{E(t+T)}{E(t)}\right), \qquad (14)$$

where T is the period of the oscillatory mode. The imaginary growth rate is given as σ_i = 2π/T. As above, we compute σ from the time series beginning at t = 20; however, we first measure the period T from roughly 10 oscillations, and then compute the average. To simulate higher Reynolds numbers, we use a hyperviscosity operator: the ν∇^2 diffusion term is replaced with a ν_4 ∇^4 diffusion term. The ν_4 coefficient is chosen so that ν k_max^2 = ν_4 k_max^4, where k_max is the maximum dealiased horizontal wavenumber. This allows us to define the hyperviscosity Reynolds number Re_h = Re k_max^2. The hyperviscosity simulation was run with F_h = 0.1 and Re = 20000, with the same numerical parameters as the regular viscosity simulation. The overall shapes of the growth-rate curves (Fig. 1) are very similar to one another. At small k_z, the growth rate reaches a local maximum, the zigzag peak, located at k_z F_h ≈ 0.6 as predicted by Billant and Chomaz 17. The growth rate then decreases with increasing k_z to a local minimum before increasing to a second local maximum. Continuing to even smaller vertical scales, viscous effects increase and may damp out the instability, and hence the growth rate decays with increasing k_z F_h in the limit of large k_z F_h. Oscillatory growth rates are observed for the smallest k_z F_h, as observed in Ref. 26. The imaginary part of the growth rate σ_i remains zero everywhere else except in a small region surrounding the local minimum between the zigzag and short-wave peaks. This oscillatory behaviour is not considered here.

A. Growth Rate

For F_h = 0.2 (Fig. 1a), the peak growth rate of the short-wave instability exceeds that of the zigzag instability for increasing Reynolds numbers. The growth rates at the second peak are smaller for F_h = 0.1 (Fig. 1b), but they continue to increase with increasing Re. For F_h = 0.05 (Fig. 1c), the second peak is weaker than the zigzag peak. Fig. 2 shows the growth rate for fixed Reynolds numbers with varying Froude numbers. Examining the case of Re = 20000 (Fig. 2a), the second peak increases with increasing Froude number. A similar result is observed for Re = 10000 and 5000 (Fig. 2b-c). Re = 2000 is not included because viscous effects have damped out the second peak in this case. Overall, the dependence of the short-wave growth rate on the Froude number is more pronounced than that on the Reynolds number. For example, the growth rate of the second peak at fixed Re = 20000 (Fig. 2a) doubles from F_h = 0.05 to F_h = 0.2. By contrast, at fixed F_h = 0.2 (Fig. 1a), the growth rate increases by only about 25% from Re = 5000 to Re = 20000.
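The growth-rate measurement above amounts to averaging (1/2) d(ln E)/dt after the transient has decayed; a minimal sketch (synthetic energy series and our own function names, not the authors' code):

```python
# Fit sigma from the perturbation kinetic energy E(t) ~ exp(2 sigma t),
# averaging only over t >= t_min to exclude the initial transient.
import numpy as np

def growth_rate(t, E, t_min=20.0):
    """sigma = (1/2) d(ln E)/dt, averaged over t >= t_min."""
    mask = t >= t_min
    sigma_t = 0.5 * np.gradient(np.log(E[mask]), t[mask])
    return sigma_t.mean()

# Synthetic test: energy growing at sigma = 0.3
t = np.linspace(0.0, 100.0, 2001)
E = 1e-8 * np.exp(2 * 0.3 * t)
print(growth_rate(t, E))   # ~0.3
```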
The above analysis demonstrates that the short-wave growth-rate peak moves to larger k_z F_h with increasing F_h and increasing Re, but has a stronger dependence on the Froude number than on the Reynolds number. Some of this joint dependence can be explained by examining the dependence on the buoyancy Reynolds number Re_b = F_h^2 Re 1,27,28. In stratified turbulence, the buoyancy Reynolds number is analogous to the Reynolds number in the viscous term due to the vertical gradients 28. As k_z increases, we move to smaller vertical scales where the vertical viscosity term, controlled by the buoyancy Reynolds number, dominates, so it follows that the second peak may be governed by Re_b. In Fig. 3 the location of the second peak from Fig. 1 is plotted as a function of the buoyancy Reynolds number. The peak location line is approximately linear and can be fitted with the curve k_z F_h = Re_b^{2/5}, which is plotted. This scaling implies that the vertical wavenumber k_z of the short-wave instability is approximately

$$k_z \approx F_h^{-1/5}\, Re^{2/5}. \qquad (15)$$

The dependence of the growth rate on k_z F_h appears to be similar in cases with different F_h and Re but the same Re_b. Fig. 4 demonstrates the similarity of the growth rate plotted against k_z F_h for two cases with Re_b = 500 and two cases with Re_b = 50. For both cases, the locations of the zigzag and second peaks line up quite well. The difference between the red and blue curves at the second peak is 4% for Re_b = 200 and 6% for Re_b = 50, a reasonable variation. In Fig. 1(b) the green curve corresponds to a hyperviscosity run with Re = 20000, which has Re_h = 2.8 × 10^8. The motivation for using hyperviscosity is to capture a higher-Reynolds-number regime by restricting dissipation to only the largest wavenumbers. As the hyperviscosity run demonstrates, the zigzag peak is independent of the Reynolds number, and the existence of the peak would be expected at higher Reynolds numbers. For the second peak, we note that the growth rate of the hyperviscosity run exceeds that of Re = 20000 for k_z F_h > 3 and reaches a maximum around k_z F_h = 7. The maximum growth rate in the hyperviscosity case is around 25% larger than in the regular viscosity case with Re = 20000. At k_z F_h = 12 the hyperviscosity and non-hyperviscosity curves cross. This intersection corresponds to the horizontal wavenumber at which the hyperviscosity damping rate equals the regular viscous damping rate for Re = 20000. For k_z greater than this maximum, the hyperviscosity operator experiences greater damping than the regular viscosity, which can be seen in the sudden drop-off of the growth rate. This simulation presents evidence that as Re → ∞, the growth rate of the second peak will be of the same order as, or larger than, the growth rate of the zigzag instability.

B. Structure

The full vorticity field of the short-wave instability has a much more dominant twist than the zigzag instability, and the bending of the dipole is reduced. As the stratification is increased, this behaviour continues, but there is a significant emergence of structure within the cores of the vortices, as observed in Fig. 5.

C. Scale Analysis

Motivated by the scale analysis of Refs. 8, 21, 28 and 29, we present a scaling analysis for small vertical scales as considered in the above numerical simulations. We consider the Boussinesq equations, where the primed notation denotes the dimensional variables in this section only.
Following Ref. 21, let U, W be the characteristic velocities in the horizontal and vertical directions, L_h, L_v the corresponding characteristic length scales, P the pressure scale, and R the density perturbation scale, not to be confused with the dipole radius R from above. We assume, differing from the analysis of Refs. 21 and 29, that in addition to U and L_h being imposed on the system, we also impose a separate vertical scale L_v. This scaling is motivated by the above numerical simulations, where we impose a vertical length scale through the vertical wavenumber k_z. The aspect ratio δ = L_v/L_h is assumed to be small, δ < 1. We define the horizontal Froude number to be F_h = U/(N L_h), which is also assumed to be small. Following the above numerical simulations, let δ < F_h, which we can also write as L_v < U/N, i.e. vertical scales are assumed to be smaller than the buoyancy scale. We now define the advective time scale T = L_h/U. To determine the characteristic scale of W, we are left with two choices: imposing the scaling from the continuity equation or from the density equation. Previous work 21 chose the latter and obtained a characteristic velocity W ∼ U F_h^2/δ. By contrast, we use the continuity equation (18), which implies W ∼ δU. This scaling for w is consistent with the assumption that δ < F_h. Using (21), the vertical momentum equation (17) gives a density scaling of R ∼ ρ_0 U^2/(g L_v). Plugging this result into (20), we obtain W ∼ U F_h^2/δ. Because δ < F_h, we have Uδ < U F_h^2/δ, so our assumptions are consistent. Setting W ∼ Uδ, the horizontal momentum equation (16) gives P ∼ ρ_0 U^2. Combining all of this, we obtain a scaling for the Boussinesq equations with L_v < U/N that holds when δ < F_h ≪ 1. This suggests that for very small vertical scales with δ ≪ F_h, the effects of stratification should be negligible. At such small vertical scales, density variation due to stratification would be negligible, and thus we would not expect stratification to play an important role in the overall evolution. Additionally, the presence of the factors of δ in the denominator of the vertical viscous terms suggests that the effects of viscosity become more dominant at very small vertical scales. As a result of this scaling analysis, we expect the nature of the instability at short vertical scales to become independent of F_h for large k_z. To test this hypothesis, Fig. 7 shows the growth rate as a function of k_z for four sets of simulations with Re = 10000: F_h = 0.2, 0.1, 0.05, and a new unstratified case with F_h = ∞ (note that, unlike in Fig. 2, we are not scaling k_z by F_h). The growth-rate curves appear to be converging for large k_z, where δ ≪ F_h, which agrees with the conclusion of the above scaling analysis. These large k_z are well into the viscous damping range and, as discussed above, the effects of viscosity become stronger and we observe a sharper decrease in the growth rate. For the short-wave instability examined above, δ/F_h = 1/(k_z F_h) ranges from ≈ 0.5 down to 0.1, which is < 1 but not ≪ 1. As a result, we do not necessarily expect the characteristics of this instability to be independent of F_h for the parameters considered here. Indeed, our stability analysis shows that the (unscaled) wavenumber k_z of the short-wave peak is weakly dependent on F_h, through the F_h^{1/5} factor in (15). However, by examining even larger k_z F_h (i.e.
even smaller δ/F_h), this scale analysis suggests that the nature of the short-wave instability will eventually become independent of F_h.

IV. CONCLUSIONS

In this paper, we have investigated the linear stability of the Lamb-Chaplygin dipole for perturbations with small vertical scales. In particular, we have considered vertical scales from around the buoyancy scale U/N, where the zigzag instability occurs 2,17,18,21, down to the dissipation scale. We have discovered a short-wave instability that emerges at scales much smaller than the buoyancy scale. This instability can exhibit a growth rate that is comparable to, and possibly even greater than, that of the zigzag instability. Despite having a similar growth rate in some cases, the structure of the instability is qualitatively different from that of the zigzag peak, suggesting that a different mechanism is governing the evolution. We have found that the location of the peak depends upon a combination of the Reynolds and Froude numbers, specifically the buoyancy Reynolds number Re_b, which plays an important role in stratified fluids. The wavenumber of maximum growth rate for the short-wave instability is found to scale like F_h k_z ∼ Re_b^{2/5} for the range of Re_b considered here. We expect this may change at even larger Re_b. By contrast, the maximum growth rate of the zigzag instability occurs for F_h k_z ∼ 1 30. As a result, these instabilities will be widely separated when Re_b ≫ 1, as in the case of strongly stratified turbulence 28. This new instability has implications for the numerical modelling of small scales in stratified turbulence, as it provides an additional mechanism for the transfer of energy to small vertical scales. In nature, perturbations are broad-band, and so short vertical scales will be excited. Our results show that such short scales may grow, at least initially, as fast as the zigzag instability. Important questions to be addressed in future work are how this short-wave instability evolves nonlinearly, and how it saturates. There is some suggestion that such perturbations may saturate at a relatively low level 4,31, but this question requires further study.

ACKNOWLEDGMENTS

Financial support for this work was provided by the Natural Sciences and Engineering Research Council of Canada.
DOMINANT MICROBIAL ASSOCIATIONS OF ORAL CAVITY AT PERIODONTITIS AND FEATURES OF THEIR SENSITIVITY TO ANTIBACTERIAL DRUGS

The bacterial isolates were screened for susceptibility to the following antibiotics: gatifloxacin, norfloxacin, tetracycline, azithromycin and clarithromycin. The fungal isolates were screened for susceptibility (in 6 mm diameter wells) to nystatin, itraconazole, fluconazole, ketoconazole, voriconazole, clotrimazole and miconazole.

Opportunistic microorganisms play a significant role in the development of inflammatory diseases, including generalized periodontitis. The growing tendency towards the formation of antibiotic-resistant microorganisms makes it relevant to monitor the microorganisms' sensitivity to antimicrobial drugs and to develop new approaches to antimicrobial therapy. The purpose of this work was to determine the dominant microbial associations of the oral cavity in the conditions of generalized periodontitis, and to study their sensitivity to antibiotics, antiseptics and phytopreparations. The study of the microbial associations at generalized periodontitis was performed by bacteriological monitoring of the pathological material from the foci of the inflammatory process. Determination of the isolates' susceptibility to antibiotics, antiseptics and phytopreparations was carried out by the disc diffusion method in agar. As test cultures, the following bacteria and yeast from the American Type Culture Collection were used: Candida albicans ATCC 885653; Staphylococcus aureus ATCC 25923; Escherichia coli ATCC 25922; Enterococcus faecalis ATCC 29212; Streptococcus pyogenes ATCC 19615; and Pseudomonas aeruginosa ATCC 27853. We also used clinical strains of bacteria and yeasts (S. aureus, K. rhinoscleromatis, H. alvei, E. coli, S. pyogenes, S. pneumoniae, C. albicans, C. glabrata) isolated from the oral cavities of patients suffering from inflammatory periodontium.
We selected clinical strains with multiple resistance to antibiotics. The results of the studies have shown that opportunistic microorganisms dominated in 100 % of cases within the oral cavity microbiota in patients with generalized periodontitis. The most pronounced clinical symptoms of the disease and recurrent inflammation were observed with the persistence in the oral cavity of associations of Staphylococcus genus bacteria with bacteria of the Enterobacteriaceae family and the Pseudomonas genus, and with microscopic fungi of the Candida genus. It was found that fluoroquinolones and cefoperazone/sulbactam were the most effective against the isolates. Among the commercial preparations, Sangviritrin showed the most pronounced antimicrobial activity, and its antibacterial effect was registered on the clinical isolates of S. aureus. It is worth noting that Sangviritrin showed an antimicrobial effect against extra antibiotic-resistant isolates resistant to all other phytopreparations and disinfectants used in the test. A high antimicrobial activity of Decasanum and Sangviritrin against the multi-antibiotic-resistant strains, mostly against Gram-positive bacteria, was established. The established regularities determine the relevance of antibiotic therapy that takes into account the antibiotic susceptibility of the inflammatory process pathogens, and of the development of a comprehensive approach for correction of the opportunistic microbiota at generalized periodontitis.

INTRODUCTION

The inflammatory diseases of the periodontal tissues, including generalized periodontitis, are multifactorial diseases in whose etiology the microbial factor occupies the key position [2,6,13]. At the first stages of periodontitis formation, significant disorders of the local immunity of the periodontal tissues and anaerobic Gram-negative bacteria (Actinobacillus actinomycetemcomitans, Porphyromonas gingivalis, Prevotella intermedia and Tannerella forsythensis) play an important role, being involved in the formation of periodontal pockets, the destruction of connective tissue, and the resorption of alveolar bone. The formation of the inflammatory infiltrate is accompanied by a variety of immunopathologies. Along with this, the cause of complications and recurrences of inflammatory periodontal disease is the restructuring of the microbial cenosis and the domination of opportunistic microorganisms within the microbial associations of the oral cavity. The colonization of the mucous membrane by transient and allochthonous representatives of the microbiota, frequently manifesting multiple resistance to antibiotics, leads to permanent recurrence and a chronic inflammatory process [4,15]. For this particular reason, some authors consider periodontitis to be an opportunistic infection, accompanied by the presence of opportunistic and pathogenic bacteria in the oral cavity [7,12,13]. It is also established that bacteria causing inflammatory periodontal processes can be a separate risk factor for cardiovascular and cerebrovascular diseases and premature delivery [8,11]. That is why antibacterial drugs are widely used in the treatment of the periodontal tissues [4,13,14]. At the same time, the ever-increasing trend towards the formation of antibiotic resistance, especially among representatives of the opportunistic microbiota, including in biofilms [8,18], requires new approaches to both local and systemic treatment.
Under such conditions, permanent monitoring of the circulation of poly-antibiotic-resistant microorganisms, the study of their sensitivity to antibacterial drugs, and the development of new integrated approaches and means for correction of the oral microbiocenosis in the inflammatory process are relevant. The purpose of the present study was to determine the dominant microbial associations of the oral cavity at generalized periodontitis and to clarify their sensitivity to antibiotics, antiseptics and phytopreparations.

MATERIALS AND METHODS

The isolates that caused periodontal inflammatory processes were isolated on the basis of the Dental Polyclinic at Uzhhorod National University; the antimicrobial activity was studied at the Microbiology Laboratory of the Department of Genetics, Plant Physiology and Microbiology, Uzhhorod National University. We examined 180 patients diagnosed with generalized periodontitis; the control group comprised 50 persons without such a diagnosis. The biological material was collected from the mucous membrane of the inflammatory site using a sterile transport system (a test tube with gel and an applicator for biological fluids produced by FLmedical, Italy). The material was plated according to Gold on nutrient media: Sabouraud Dextrose Agar and HiCrome™ Candida Differential Agar (HiMedia) for the cultivation of microscopic fungi; blood agar for the hemolytic microflora, namely the Streptococcus and Neisseria genera bacteria; Endo and Ploskirev agar (Farmaktiv, Ukraine) for the Enterobacteriaceae; and Mannitol Salt Agar (Biolife Italiana) for the Staphylococcus genus bacteria; enterococci were identified with Bile Esculin Agar (Biolife Italiana). We identified the bacteria and fungi based on macromorphology, micromorphology, and physiological and biochemical tests using the ENTEROtest, STREPTOtest and STAPHYLOtest kits produced by Erba Lachema. The antibiotic sensitivity of bacteria and microscopic fungi was identified by the disc diffusion method according to the accepted procedure (Order No. 167 of the MOH of Ukraine dated 05/04/2007; EUCAST, the European Committee on Antimicrobial Susceptibility Testing). The microorganisms' sensitivity to plant-based materials and disinfectants was determined by the standard agar diffusion test (with 8 mm diameter wells) [3]. The antibacterial properties were assessed according to the following criteria: a growth retardation zone below 10 mm indicates that the microorganism is not sensitive to the sample inserted into the well; a zone of 10-15 mm indicates weak sensitivity; 15-25 mm means sensitivity; and more than 25 mm, high sensitivity. The data obtained were expressed as the mean ± standard deviation (SD) of three measurements. Tukey's test was applied for comparisons of mean values; differences were considered reliable at p < 0.05. Statistical analysis and comparisons among the means were carried out using Microsoft Excel 2013. The parameters calculated alongside the basic variation were: average and standard deviation; minimum and maximum; coefficients of variation; and the frequency of inhibition zone sizes.

RESULTS AND DISCUSSION

A total of 389 opportunistic microorganism strains were isolated from 180 patients with a persistent inflammatory process at generalized periodontitis. The Staphylococcus genus bacteria were isolated from the inflammatory site in 73 % (131/180) of cases and were represented by four species: S. aureus, S. haemolyticus, S. saprophyticus and S. epidermidis.
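Before the results continue, the inhibition-zone criteria from the Materials and Methods translate directly into a classification rule; the sketch below (our function name, thresholds exactly as stated above) illustrates it:

```python
# Classify a disc/well diffusion inhibition zone (mm) per the stated criteria.
def classify_zone(diameter_mm: float) -> str:
    if diameter_mm < 10:
        return "not sensitive"
    elif diameter_mm <= 15:
        return "weakly sensitive"
    elif diameter_mm <= 25:
        return "sensitive"
    else:
        return "highly sensitive"

for d in (8, 12, 20, 30):
    print(d, "mm ->", classify_zone(d))
```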
In 53 patients, S. aureus was isolated, 20 of the strains being methicillin-resistant. In 44 patients, S. haemolyticus was isolated. A total of 45 microscopic fungi strains of the Candida genus were isolated, 38 of them belonging to C. albicans, 5 strains to C. glabrata, and 2 to C. krusei. In patients with generalized periodontitis, the following representatives of the indigenous microbiota were isolated: the non-pathogenic streptococci S. sanguis, S. mitis and S. salivarius. In 57 % of cases the Lactobacillus genus bacteria were isolated. By studying the microbiota of the mouth cavity in control group patients, we ascertained the presence of non-pathogenic Streptococcus spp.; lactobacilli were identified in 91.5 % of the reviewed material, but in only 57 % of patients suffering from periodontitis. The microbiota of the healthy gum tissues was characterized by a low population level of opportunistic microorganisms (S. saprophyticus, S. epidermidis) - 22 %. However, at chronic generalized periodontitis, the opportunistic associations prevailed. A high degree of antibiotic resistance in the microorganisms isolated from the foci of the inflammatory process at generalized periodontitis was shown: 48 % of isolates were resistant to at least 7 antibiotics. Thus, 3 isolates of K. rhinoscleromatis were resistant to 30 antibiotics. All the isolates were resistant to ampicillin, erythromycin and tetracycline. The study results on the isolates' antibiotic susceptibility showed that 57.65 % of isolates were susceptible to amoxicillin/clavulanate, 5.3 % were moderately sensitive and 37.05 % were resistant. 85.29 % of isolates showed resistance to cephalosporins of the 1st generation. Sensitivity to cephalosporins of the second generation was established: to ceftriaxone - 61.76 % and to cefuroxime - 55.29 % of cultures. It was shown that 97 % of isolates were susceptible to cefoperazone/sulbactam. It was established that 73 % of all isolates were susceptible to fluoroquinolones, in particular to fluoroquinolones of the second generation: to ofloxacin - 41.76 % of isolates; to norfloxacin - 68.23 %; to lomefloxacin - 32.35 %; to ciprofloxacin - 68.82 %; to the third-generation fluoroquinolone levofloxacin - 75.29 %; and to the fourth-generation fluoroquinolone gatifloxacin - 87 %. The opportunistic microorganisms were susceptible to the carbapenems: 80 % to meropenem and 58.23 % to imipenem. Of the 170 isolates, 15.29 % were sensitive to azithromycin, 30 % were moderately susceptible and 55.29 % were resistant. 30 % of cultures were sensitive to the semisynthetic macrolide clarithromycin. The microscopic fungi of the Candida genus were resistant to fluconazole; 6 strains showed sensitivity to itraconazole and 10 to clotrimazole. The conducted studies have shown that the Decasanum antiseptic exhibited a wide spectrum of antimicrobial activity (see Figure). In particular, the sensitivity of all bacteria involved in the experiment, both clinical isolates and type cultures, was established. The highest levels of antimicrobial activity were recorded for the Staphylococcus genus bacteria, including methicillin-resistant strains. However, an antimycotic action of Decasanum on Candida species at the drug dose selected by us was not detected. A high antibacterial effect was observed as a result of the Dioxydine action, but the growth retardation zones varied greatly, from 30.33 ± 0.58 mm for the clinical strain of S. aureus to 17.33 ± 0.33 mm for S. aureus MRSA.
High antibacterial activity of Dioxydine against S. pneumoniae has also been established. Bactericidal activity of the drug against E. faecalis and K. rhinoscleromatis was not detected. Moderate activity of Dioxydine against E. coli was established. Moderate sensitivity of the Staphylococcus genus bacteria to chlorhexidine was shown, but it is much lower than that to Dioxydine and Decasanum. Chlorhexidine did not affect methicillin-resistant S. aureus, and no antibacterial effect of chlorhexidine on the Streptococcus genus bacteria was detected. The studies have shown a high level of antimicrobial activity of the Sangviritrin drug, which possesses a high antibacterial effect on Gram-positive microorganisms, a moderate effect on Gram-negative bacteria and a weak antimycotic activity (Table 1, Table 2). The Chlorophyllipt phytopreparation demonstrated high in vitro efficacy towards the Staphylococcus genus bacteria, more towards type strains than clinical ones, and towards E. faecalis, with weak to moderate activity towards E. coli. The Eucalyptus tincture showed high to moderate activity towards the isolates. No antimicrobial activity of the Mint rinse was detected, while low to moderate activity of the Salvia tincture towards the Staphylococcus and Streptococcus genera bacteria was observed. The Rotocanum drug produced a moderate to low antimicrobial effect on the isolates involved in the experiment, except for E. coli and S. aureus (MRSA). At chronic persistent inflammatory process, opportunistic microorganisms dominate in the oral cavity of patients with generalized periodontitis. The obtained results are consistent with the data of other researchers [8], who established the domination of the Staphylococcus genus bacteria in the microbiota of the oral cavity in patients with generalized periodontitis. The authors also identify S. aureus, S. epidermidis, Bacteroides spp., Actinomyces spp., Candida spp., Pseudomonas aeruginosa and Streptococcus spp. as the most important agents causing the development of generalized periodontitis, and note the role of these microorganisms' proteolytic enzymes, including collagenolytic enzymes, hyaluronidase and chondroitin sulfatase, in the disintegration of collagen, the main protein of the periodontal tissue. The authors also report high sensitivity of the opportunistic microbiota to clindamycin and ofloxacin.

Table 1. (Note: data in a column marked with different letters differ significantly at P < 0.05 according to the Tukey test; "-" means no growth retardation.)

Colonization of the oral cavity with pathogenic bacteria can be an antecedent and, in its turn, can lead to other diseases [1]. Diseases of the periodontal tissues contribute to oropharyngeal colonization by potential respiratory pathogens, including the Enterobacteriaceae (K. pneumoniae, E. coli, Enterobacter sp.), P. aeruginosa and S. aureus. The persistence of opportunistic microorganisms in the oral cavity creates preconditions for the development of systemic diseases; this correlation is especially established in elderly persons [1,2]. The domination of opportunistic microorganisms in the oral cavity in conditions of generalized periodontitis causes the need for antibacterial therapy.
The literature data report that, according to a survey of dental practitioners, amoxicillin (73.58 %) is most frequently prescribed as a systemic antibiotic treatment, 36.47 % of dentists recommend lincomycin to patients, fluoroquinolone preparations (ciprofloxacin) are prescribed by 30.18 % of dental practitioners, doxycycline by 17.61 %, and clarithromycin by 5.03 % [1]. Our studies showed high sensitivity of the opportunistic microorganisms isolated from the inflammatory foci to fluoroquinolones; however, their high resistance to penicillins, macrolides, lincomycin and doxycycline was established. It is shown that most microorganisms causing complications of dental implantation are resistant to penicillins, macrolides and lincosamides [2]. A significant increase in the level of antibiotic-resistant strains of microorganisms in conditions of periodontal inflammatory diseases may be due to the presence of the bacteria in the composition of biofilms. It has been proved that the resistance of bacteria to antibiotics within a biofilm is several orders of magnitude greater [4,6,11,17]. At the same time, an auxiliary or alternative means of correcting the opportunistic microbiota is the use of local antiseptics and phytopreparations to potentiate the action of antibiotics and, in benign cases, as an alternative to their application [5,8]. We have established low efficacy of the antimycotic drugs, particularly in their local effects. Meanwhile, in our previous studies, high activity of essential oils [9,10,16] and phytopreparations against isolates of opportunistic infection pathogens, including microscopic fungi of the Candida genus, was established.

CONCLUSION

At generalized periodontitis, the domination of conditionally pathogenic microorganism associations in the microbiota of the oral cavity was observed. Associations of bacteria of the Staphylococcus genus and the Enterobacteriaceae family accompanied the most complex recurrent inflammatory processes. A high percentage of antibiotic-resistant isolates was shown. It was found that the most effective drugs against the isolates were fluoroquinolones and cefoperazone/sulbactam. Among antiseptics, Decasanum demonstrated a wide range of antibacterial activity; a high antimicrobial effect was also demonstrated by Sangviritrin. The application of a comprehensive, differential approach to the correction of the oral cavity microbiota at generalized periodontitis is thus grounded, taking into account the dominant associations and their sensitivity to antibacterial drugs, including phytopreparations, which, in addition to their antimicrobial activity, do not disturb the composition of the indigenous microbiota.

ACKNOWLEDGMENTS

The present study is a fragment of the research project at the Department of Genetics, Plant Physiology and Microbiology of Uzhhorod National University, "Research of genetic, physiological and biochemical mechanisms of various organization level biological systems adaptation in the anthropogenic loading conditions", No. 0115U003902. Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
KARYOLOGIC SURVEY OF NON-FLYING SMALL MAMMALS FROM TOCANTINS, BRAZIL

[a] Graduated in Biological Sciences, Universidade Estadual Paulista Júlio de Mesquita Filho (IB/Unesp), Botucatu, SP, Brazil. [b] Department of Natural Resources, Universidade Estadual Paulista Júlio de Mesquita Filho (FCA/Unesp), Botucatu, SP, Brazil, e-mail: jfslima@hotmail.com [c] Museum of Zoology José Hidasi, Fundação Universidade Tocantins (Unitins), Porto Nacional, TO, Brazil, e-mail: jose.fs@unitins.br

INTRODUCTION

According to Gastal (1), small mammals can change the floristic composition through their activities and habits; they act as energy and biomass stores, mediating the producers-decomposers cycle; they probably act as regulators of invertebrate populations, especially of insects; and they can even act as pollinating agents. Small mammals are important in ecosystem dynamics, affecting at least three components: soil, vegetation and predators. The rodents stand out for their abundance, diversity and great taxonomic complexity (2). Among the marsupials, the family Didelphidae is the only one found in South America and includes species that are difficult to identify (2,3). Survey studies have used cytogenetics (cytotaxonomy) as a basic tool for species identification, complementing the analysis of systematic, morphological and geographic distribution knowledge. This approach has contributed greatly to elucidating taxonomic problems, including the identification of new species (4-8) and others, as listed by Silva (9). This work presents the karyologic study of non-flying small mammals sampled in a survey carried out in 2001, mainly in the municipality of Ipueiras, together with data on two marsupials from two other municipalities (Lajeado and Pequizeiro). The rodents were obtained during a practical class of the discipline Coleta e Preservação Animal, offered in the Biological Sciences course (Unitins, now UFT, Porto Nacional). Given the interest in developing studies on the diversity, cytotaxonomy and conservation of terrestrial small mammals at FCA/Unesp, our objective is to present the results of the cytotaxonomic analysis carried out for the small rodents and marsupials from Tocantins State, Brazil, together with comments on the distribution and vegetation type of the collection area. Slides with cell material from each animal were prepared and stained with Giemsa for analysis under an optical microscope. Slides with metaphases were prioritized for counting the diploid number (2n) and determining the number of autosomal arms (NA). According to their quality, the slides were classified as: "Good" (metaphases with no or rare overlaps and easy identification of chromosome morphology); "Reasonable" (metaphases with overlaps, but still allowing identification of chromosome morphology); and "Bad" (two situations: 1. incomplete metaphases, with variation in the diploid number and grouped chromosomes, but with identifiable morphology; 2. separated chromosomes with very close arms, hindering identification of morphology). Conclusive cytotaxonomic identifications were obtained through the complete analysis of all metaphases of each animal, especially for the "Reasonable" and "Bad" ones, and through consultation of the specialist literature, especially for Tocantins State. The identified species are presented in this work respecting the most recent taxonomic denominations (4,11), followed by the older name in parentheses, and the slides for each species.
Table 2 presents the names of the studied and identified species, the cell sample quality and the cytogenetic data obtained (2n, NA, autosomal and sex chromosome types). The majority of the analyzed species are already known in the literature for Tocantins State, but not for the studied municipality and other states, including São Paulo (Table 2). The good-quality slides of Necromys lasiurus allowed easy verification of 2n = 34 and NA = 34. The autosomal pairs 1 to 15 are acrocentric, gradually decreasing from large to small, and pair 16 consists of very small meta- or submetacentric chromosomes. The X is a medium acrocentric and the Y a small acrocentric. The second sample belongs to Calomys tener; despite the poor quality of the metaphases, it was possible to detect variation of 2n from 56 to 66 chromosomes, with the acrocentric form predominant and one large submetacentric present (Table 2). These data are most similar to those of C. tener (2n = 66 and NA = 66), with occurrence in São Sebastião and in the municipality of Pequizeiro, Tocantins (6,7). The karyotype is described as 31 pairs of acrocentric chromosomes and one small metacentric pair; the X is a large submetacentric and the Y a small acrocentric. The N. rattus metaphases have 2n = 52 and NA = 52; the species was already described by Lima (6) and Lima and Kasahara (7) for the municipalities of São Sebastião, Couto Magalhães (northwestern), Lajeado and Porto Nacional, Tocantins State. The autosomal pairs 1 to 10 are acrocentric, ranging from large to medium; pairs 11 to 24 are small acrocentrics with gradual size variation; and pair 25 is a small submetacentric. In the sex pair, the X is a large submetacentric and the Y a small acrocentric (Table 2). The Oryzomys sp. slides revealed the species to be Hylaeamys megacephalus (O. megacephalus), with 2n = 54 and NA = 62; this species was already described in Tocantins for the municipalities of Lajeado and Porto Nacional (6,7). The karyotype has pairs 1 and 3 as large subtelocentrics, pair 1 being the largest of them. Pairs 2 and 4 to 21 are acrocentrics, varying from large to small. The sex pair has a large acrocentric X and a medium submetacentric Y (Table 2). Analyzing the Oligoryzomys sp. samples, which have metaphases of reasonable quality, it was possible to obtain data consistent with Oligoryzomys microtis (O. flavescens), despite the quality. It was not possible to define the diploid number with precision, but 73 % of the analyzed metaphases displayed 2n of 62 chromosomes. The majority of the chromosomes are acrocentric and four are small metacentrics or submetacentrics (Table 2). In the literature, two karyotypes are found for the genus Oligoryzomys in Tocantins: 2n = 64 and NA = 66 for O. flavescens (6,7), and 2n = 70 and NA = 76 for Oligoryzomys sp. n. (6,8). Oligoryzomys microtis (O. flavescens) has only two small metacentric pairs, while the other species has four meta- and submetacentric pairs. The latter species occurs in São Sebastião and Couto Magalhães, near Pequizeiro (northeast). For R. macrurus, the metaphases showed the same karyotype described by Lima (6) and Lima & Kasahara (7) for the municipality of Lajeado, with 2n = 44 and NA = 48. The karyotype shows pairs 1 to 18 as acrocentrics, pairs 19 to 21 as meta- or submetacentrics, and a sex pair with a large submetacentric X and a small metacentric Y.
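As an aside on how NA follows from these pair descriptions: each acrocentric autosome contributes one arm and each bi-armed autosome two (conventions for counting subtelocentrics vary between authors; here we count them as bi-armed). A small sketch with hypothetical function names:

```python
# Compute the autosomal arm number (NA) from a karyotype description.
ARMS = {"acrocentric": 1, "subtelocentric": 2, "submetacentric": 2, "metacentric": 2}

def autosomal_arms(pairs):
    """pairs: list of (morphology, number_of_pairs) for the autosomes only."""
    return sum(2 * n * ARMS[m] for m, n in pairs)  # 2 chromosomes per pair

# Necromys lasiurus: pairs 1-15 acrocentric, pair 16 meta/submetacentric
print(autosomal_arms([("acrocentric", 15), ("metacentric", 1)]))  # 34, matching NA = 34
```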
The P. roberti metaphases show 2n = 30 and NA = 54. The result equals that found by Lima (6) and Lima & Kasahara (7) for Tocantins State, in São Sebastião, Lajeado and Porto Nacional. The autosomal pairs 1 to 4 are large submetacentrics with gradual size change; pairs 5 to 12 are metacentrics or submetacentrics, varying from small to medium. Pair 9 has a secondary constriction on the long arms; pair 13 is a small subtelocentric and pair 14 is a very small acrocentric. The X is a medium metacentric (Table 2). Thrichomys sp. showed 2n = 26 and NA = 48. This karyotype was described by Leal-Mesquita et al. (12) for São Paulo State. Pair 1 is a large metacentric and pair 2 is a metacentric with a secondary constriction on the short arm. Pair 3 is metacentric and pairs 4 to 12 are meta- or submetacentrics, with gradual size change. The sex pair consists of a large subtelocentric X and a small metacentric Y. This karyotype was discussed by Lima (6) and Lima & Kasahara (13) as being T. a. inermis and was later recognized as T. inermis (14). Our results disagree only for the sex pair: the Y chromosome of our sample is a small subtelocentric or submetacentric and not a metacentric (Figure 1 and Table 2). Carvalho and Fagundes (15) and Silva et al. (16) mention 2n = 26 occurring in Jalapão-TO and Ipueiras-TO, respectively, and recently for Rio do Sono (17). However, the Y variation is reported here for the first time, and no variation for the X was found, as in Rio do Sono. The species R. rattus, the domestic rat, is easy to identify morphologically, and its karyotype is well known and studied, having been found in Tocantins with a wide distribution (6,7). Pairs 1 to 9 are subtelocentric, pair 1 being large and pair 9 medium. Pairs 2, 3, 5, 6, 8, 10 and 13 are small to large acrocentrics, as are pairs 14 to 20. The sex pair has an acrocentric X and a medium acrocentric Y (Table 2). The specimen identified in the field as Micoureus sp. has 14 chromosomes and NA = 22, but the chromosomal forms are equal to those of Marmosa murina from Porto Nacional, Tocantins, studied by Lima (18). Pairs 1, 2 and 3 are submetacentric; pair 4 is metacentric, pair 5 is subtelocentric and pair 6 is acrocentric. The X is a small acrocentric and the Y an acrocentric smaller than the X (Table 2). The species with 2n = 14 are morphologically very similar and until then were all considered to belong to the genus Marmosa. After a review (3), many species of Marmosa were transferred to the genera Micoureus (e.g. M. cinereus, M. elegans and M. incanus), Marmosops (e.g. M. fuscatus) and Thylamys (e.g. M. pusilla). In the material of P. opossum, 2n = 22 and NA = 20 were obtained. All chromosomes are acrocentric, of small to medium size. The sex pair is the smallest (Table 1). The karyotype is very similar to those of other species of the genera Didelphis and Chironectes, both with 2n = 22 and NA = 20 (18,19). However, P. opossum is easy to identify morphologically, with a uniform grey colour (dorsally and laterally), fur extending 5-8 cm onto the tail, and a cream spot above the eyes (2). Studies on small mammals from Tocantins are still few and records of habits are scarce. However, the data shown in Table 1 agree with the literature for some species (2,20) and, particularly, with the information of Lima (6) and Silva et al. (16) relating to Tocantins.

FIGURE 1 - Metaphase of T. inermis from our sample. The arrows highlight the X (arrow a) and Y (arrow b) chromosomes. Source: picture from the authors' research results.

TABLE 2 - Cytogenetic data from rodents and marsupials
Transfer Learning to Learn with Multitask Neural Model Search

Deep learning models require extensive architecture design exploration and hyperparameter optimization to perform well on a given task. The exploration of the model design space is often made by a human expert, and optimized using a combination of grid search and search heuristics over a large space of possible choices. Neural Architecture Search (NAS) is a Reinforcement Learning approach that has been proposed to automate architecture design. NAS has been successfully applied to generate Neural Networks that rival the best human-designed architectures. However, NAS requires sampling, constructing, and training hundreds to thousands of models to achieve well-performing architectures. This procedure needs to be executed from scratch for each new task. The application of NAS to a wide set of tasks currently lacks a way to transfer generalizable knowledge across tasks. In this paper, we present the Multitask Neural Model Search (MNMS) controller. Our goal is to learn a generalizable framework that can condition model construction on successful model searches for previously seen tasks, thus significantly speeding up the search for new tasks. We demonstrate that MNMS can conduct an automated architecture search for multiple tasks simultaneously while still learning well-performing, specialized models for each task. We then show that pre-trained MNMS controllers can transfer learning to new tasks. By leveraging knowledge from previous searches, we find that pre-trained MNMS models start from a better location in the search space and reduce search time on unseen tasks, while still discovering models that outperform published human-designed models.

INTRODUCTION

Designing deep learning models that work well for a task requires an extensive process of iterative architecture engineering and tuning. These design decisions are largely made by human experts guided by a combination of intuition, grid search, and search heuristics. Meta-learning aims to automate model design by using machine learning to discover good architecture and hyperparameter choices. Recent advances in meta-learning using Reinforcement Learning (RL) have made promising strides towards accelerating or even eliminating the manual parameter search. For example, Neural Architecture Search (NAS) has successfully discovered novel network architectures that rival or surpass the best human-designed architectures on challenging benchmark image recognition tasks. However, naively applying reinforcement learning to each new task for automated model construction requires sampling, constructing, and training hundreds to thousands of networks to relearn how to generate models from scratch. Human experts, on the other hand, can design and tune networks based on knowledge about underlying dependencies in the search space and experience with prior tasks. We therefore aim to automatically learn and leverage the same information.

In this paper, we present Multitask Neural Model Search (MNMS), an automated model construction framework that finds the best performing models in the search space for multiple tasks simultaneously. We then show that a MNMS framework that has been pre-trained on previous tasks can construct the best performing model for entirely new tasks in significantly less time.
RELATED WORK

The Neural Architecture Search (NAS) method was introduced in , where it was applied to construct Convolutional Neural Networks (CNNs) for the CIFAR-10 task and Recurrent Neural Networks (RNNs) for the Penn Treebank tasks. Later work by the same authors attempted to address the computational cost of using Neural Architecture Search for more challenging tasks. To engineer a convolutional architecture for ImageNet classification, this paper demonstrated that it was possible to train the NAS controller on the simpler, proxy CIFAR-10 task and then transfer the architecture to ImageNet classification by stacking it. However, this work did not attempt to transfer learn the NAS controller itself across multiple tasks, relying instead on the human expert intuition that additional network depth was necessary for the more challenging classification task. Additionally, the final generated architectures required additional tuning, to choose hyperparameters such as the learning rate, before evaluation on the test set.

The complexity of model engineering in machine learning is widely recognized. Optimization methods have been proposed, ranging from random search over the space of possible architectures (Bergstra & Bengio, 2012) to parameter modeling (Bergstra et al., 2013). Recent publications apply RL to automate architecture generation. These include MetaQNN, a Q-learning algorithm that sequentially chooses CNN layers (Baker et al., 2016). MetaQNN uses an aggressive exploration to reduce search time, though it can cause the resulting architectures to underperform. Separately, Cai et al. (2017) propose an RL agent that transforms existing architectures incrementally to avoid generating entire networks from scratch.

Our work also draws on prior research in transfer learning and simultaneous multitask training. Transfer learning has been shown to achieve excellent results as an initialization method for deep networks, including for models trained using RL (Yosinski et al., 2014; Sharif Razavian et al., 2014; Zhan & Taylor, 2015). Simultaneous multitask training can also facilitate learning between tasks with a common structure, though effectively retaining knowledge across tasks is still an active area of research (Teh et al., 2017).

NEURAL ARCHITECTURE SEARCH OVERVIEW

[Figure 1: The controller (RNN) samples hyperparameters H with probability P; a child network trained with H yields accuracy R; the gradient of P is scaled by R to update the controller.]

Neural Architecture Search uses an RNN to generate model designs that maximize expected performance on a given task (Figure 1). Specifically, an RNN controller iteratively samples architectures as a sequence of actions. Every action is a discretized design choice, such as CNN filter heights, widths, and strides. Child networks are then constructed with these architectures and trained to convergence. The performance metric of the child network is used as a reward to update the controller through a policy gradient algorithm. The controller learns a distribution over the architecture search space that is updated to increase the probability of the best performing architectures, allowing it to sample better architectures over time.
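To make the policy-gradient update concrete, the following is a minimal Python/PyTorch sketch of a NAS-style controller. It is illustrative only: the controller dimensions, the search space, and the `train_child` function are hypothetical stand-ins, not the implementation used in this paper.

```python
import torch

class Controller(torch.nn.Module):
    """Toy NAS controller: one categorical design decision per RNN timestep."""
    def __init__(self, num_choices, hidden=50):
        super().__init__()
        self.hidden = hidden
        self.cell = torch.nn.LSTMCell(hidden, hidden)
        self.embed = torch.nn.ModuleList(
            torch.nn.Embedding(n, hidden) for n in num_choices)
        self.heads = torch.nn.ModuleList(
            torch.nn.Linear(hidden, n) for n in num_choices)

    def sample(self):
        h = torch.zeros(1, self.hidden)
        c = torch.zeros(1, self.hidden)
        x = torch.zeros(1, self.hidden)        # initial RNN input
        actions, log_probs = [], []
        for emb, head in zip(self.embed, self.heads):
            h, c = self.cell(x, (h, c))
            dist = torch.distributions.Categorical(logits=head(h))
            a = dist.sample()                  # one sampled design choice
            actions.append(a.item())
            log_probs.append(dist.log_prob(a))
            x = emb(a)                         # feed the action embedding back in
        return actions, torch.stack(log_probs).sum()

def reinforce_step(controller, optimizer, train_child):
    """One policy-gradient step: reward R scales the sampled log-probability."""
    actions, log_prob = controller.sample()
    R = train_child(actions)                   # build/train child, return accuracy
    loss = -R * log_prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```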
While the original Neural Architecture Search framework sampled models over a search space of strictly architectural parameters, later work has shown that this framework can be extended to automatically search over other model design parameters and domains, such as update rules for network optimizers (Bello et al., 2017).

SIMULTANEOUS MULTITASK TRAINING IN NEURAL ARCHITECTURE SEARCH

In this section, we describe the Multitask Neural Model Search (MNMS) controller, which allows simultaneous model search over multiple different tasks. Many deep learning models require the same common design decisions, such as choice of network depth, learning rate, and number of training iterations; using a generally defined search space of widely applicable architecture and hyperparameter choices, the controller can therefore engineer a wide range of models applicable to many common machine learning tasks. Multitask training over this space can then allow the controller to learn more broadly applicable relationships between search space actions, by leveraging shared behavior across tasks. We implement a controller capable of simultaneous multitask training through three key modifications:

1. Learned task representation and task conditioning. The MNMS controller can be trained synchronously on a set of N tasks. The controller learns to build differentiated architectures for each task. This is achieved by sampling a task uniformly at the beginning of each controller training iteration. The task is then mapped to a unique embedding vector. The task embeddings are randomly initialized and are trained jointly with the controller. The task embedding is then used to condition the model construction on the task. This is achieved by concatenating the task embedding to every input that is fed to the controller RNN. Specifically, in single-task NAS, the controller RNN generates an output at each timestep that determines the distribution over the current set of actions. An action is sampled according to this action distribution and then embedded. The action embedding is then passed back into the RNN as input to the next timestep. In multitask training for MNMS, the task embedding is now concatenated with the action embedding to form the RNN input, allowing the controller to condition each action on a specific task (Figure 2).

[Figure 2: Overview of the multitask controller RNN. (1) A task embedding table is maintained and updated with controller gradients to learn differentiated task embeddings over time. (2) At each iteration of the multitask training, a task is randomly sampled. The task embedding is passed into the controller RNN along with the sampled action embedding at each RNN timestep. The full sequence of outputted actions defines the child architecture trained on the chosen task.]
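As a sketch of the task conditioning just described, assuming (as stated later in the paper) 25-dimensional action and task embeddings concatenated into a 50-dimensional RNN input; the names and task count are illustrative:

```python
import torch

num_tasks, emb_dim = 2, 25
task_table = torch.nn.Embedding(num_tasks, emb_dim)   # trained jointly with the controller

def rnn_input(action_emb, task_id):
    """Concatenate the previous action embedding with the task embedding."""
    t = task_table(torch.tensor([task_id]))            # (1, 25) task embedding
    return torch.cat([action_emb, t], dim=-1)          # (1, 50) RNN input
```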
2. Off-policy training using multitask replay. Previous works train the NAS controller using the REINFORCE policy gradient algorithm and, more recently, the PPO algorithm (Bello et al., 2017). Here, we train using off-policy PPO, an actor-critic algorithm in which an actor controller generates sampled models and a critic controller trains on a replay bank of the sampled models and rewards (Schulman et al., 2017). Preliminary experiments with on-policy training found that the controller shows a reduced ability to learn a differentiated model for each task. Specifically, the controller is prone to premature convergence to a single model design that works generally well for all tasks but is not the best model for some of the tasks. On-policy sampling is biased toward more recent predictions of the optimal parameter distribution. Our hypothesis is that in multitask training, on-policy sampling can prematurely reduce exploration of better parameters for each individual task, while off-policy training allows the actor controller to continue to explore separate parameter choices for each task, and better learn a differentiated distribution over the parameter search space to maximize expected performance for each.

3. Per-task baseline and reward distribution normalization. Each task can define a different performance metric to be used as a reward. The rewards affect the amplitude of the updates on the controller, so we need to make sure that the reward distributions of the tasks are aligned to have the same mean and similar variance. The mean of each task's reward distribution is aligned to 0 by scaling the gradients with the advantage instead of the reward. The advantage, A(a, t), of a given model, a, applied to a task, t, is defined as the difference between the reward, R(a, t), and the expected reward for the given task, b(t):

A(a, t) = R(a, t) − b(t)

b(t) is often referred to as the baseline. This is a standard RL technique that is usually applied with the aim of increasing the training stability. During multitask training, the baseline is conditioned on the sampled task: we keep track of a separate baseline for each task, computed as an exponential moving average of the rewards recorded for that task. The range of each task's reward distribution is then normalized by dividing the advantage by the baseline:

Â(a, t) = A(a, t) / b(t)

We refer to Â as the normalized advantage. Notice that the division by the baseline does not compromise the convergence criteria, as it can be seen as using a distinct adaptive learning rate for each task. Using the normalized advantage to scale the gradients instead of the raw reward allows MNMS to use any performance metric as a reward, even when training on multiple tasks.

TRANSFER LEARNING FOR AUTOMATED MODEL SEARCH

Using the multitask framework, we can transfer learn pretrained controllers by simply reusing the weights of the pretrained controller and adding a randomly initialized task embedding for each new task. The controller weights and the new task embedding are then updated with standard policy gradient steps. In our experiments, we also restart the experience replay bank used by the off-policy critic, so that only rewards obtained on the new task are sampled. However, future work could retain and continue to sample from previously seen tasks in order to better retain controller memory of the former tasks.

EXPERIMENTS AND RESULTS

We apply MNMS to the NLP setting, demonstrating that the framework can be trained simultaneously to design models for two separate text classification tasks. We then transfer learn the MNMS model to two new text classification tasks, and demonstrate that the pre-trained framework achieves significant speedups in model search. Additional details about the experimental procedures and results follow.
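The per-task baseline and normalized advantage described in point 3 above can be sketched as follows; the EMA decay constant is an assumption, as the paper does not state one.

```python
class PerTaskBaseline:
    """Per-task EMA baseline and normalized advantage (decay is assumed)."""
    def __init__(self, decay=0.95):
        self.decay = decay
        self.baseline = {}                     # task_id -> EMA of rewards

    def normalized_advantage(self, task_id, reward):
        b = self.baseline.get(task_id, reward) # initialize with the first reward
        b = self.decay * b + (1.0 - self.decay) * reward
        self.baseline[task_id] = b
        return (reward - b) / b                # A_hat(a, t) = A(a, t) / b(t)
```

With accuracy-style rewards the baseline stays positive, so the division is well defined; the normalized advantage is then used in place of the raw reward when scaling the controller gradients.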
Tasks. For multitask training, we trained the MNMS framework simultaneously on two text classification tasks: 1. Binary sentiment classification on the Stanford Sentiment Treebank (SST) dataset (Socher et al., 2013). 2. Binary Spanish language identification on a dataset consisting of each of the 5,000 highest frequency Wikipedia tokens in English, Spanish, German, and Japanese. The example label is a binary label denoting whether the token is Spanish or not Spanish. These tasks were chosen specifically for their differences in task complexity, language, and potential for overfitting. This would require a controller capable of true multitask model search to differentiate between the tasks when choosing optimal model parameters for each task. For transfer learning, we then trained the pre-trained MNMS framework on two new text classification tasks, the IMDB and Corpus Cine sentiment classification tasks. These tasks were chosen so that an effectively transfer learned framework could conceivably leverage knowledge from previous searches. As a baseline to compare search convergence rates, we also trained MNMS models from scratch on the transfer learning tasks.

Search Space. For all four tasks, we define a single general search space consisting of 7 common model parameters, with 2-6 discrete parameter choices specified for each (Table 1). A naive grid search over all parameters would therefore need to try 15,360 parameter combinations to search over all possible models. These parameters represent general architectural and training design choices applicable to any text classification task. Child networks are then constructed as feed-forward neural networks using the sampled parameters, as in the sketch below. Specifically, for a sampled parameter sequence consisting of word embedding W, word embedding trainability T, number of neural network layers N_layers, number of nodes per layer N_nodes, learning rate L, number of training iterations I, and L2 regularization weight w, we construct a feed-forward network with N_layers RELU-activated layers and N_nodes per layer. For each task, the network receives tokens embedded using W, where we continue to gradient update the entire word embedding table if T is true. The child model is trained for I iterations using learning rate L and L2 regularization weight w. All child models end with a final fully-connected softmax layer, and are trained using the Proximal Adagrad optimizer on batches of 100 training examples at each iteration.

Training Details. The actor and critic controller RNNs used in off-policy PPO training are 2-layer LSTMs with hidden layer size 50. At each RNN timestep, both action and task embeddings have size 25, resulting in an RNN input of size 50 after concatenation. Both controller and embedding weights are initialized uniformly at random between -0.08 and 0.08. When training, the controller that receives gradient updates is trained on batches of size 20 with learning rate 5 × 10⁻⁴, and updated for 25 gradient steps before the weights between the two controllers are averaged with Polyak Average weight 0.9. The reward used for updating the controller is the cubed accuracy on a validation set.
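As a rough illustration of the child-network construction described under Search Space, the sketch below builds the feed-forward classifier from a sampled parameter sequence. PyTorch's plain Adagrad with weight decay stands in for the Proximal Adagrad optimizer, and the two-class output and averaged word-vector input are assumed from the rest of the paper.

```python
import torch

class ChildNet(torch.nn.Module):
    """Feed-forward text classifier built from a sampled parameter sequence."""
    def __init__(self, W, T, n_layers, n_nodes, n_classes=2):
        super().__init__()
        # W: pre-trained word embedding table; T: trainability flag
        self.emb = torch.nn.Embedding.from_pretrained(W, freeze=not T)
        dims = [W.shape[1]] + [n_nodes] * n_layers
        self.hidden = torch.nn.ModuleList(
            torch.nn.Linear(d_in, d_out) for d_in, d_out in zip(dims, dims[1:]))
        self.out = torch.nn.Linear(dims[-1], n_classes)  # final softmax layer

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.emb(token_ids).mean(dim=1)        # averaged word vectors
        for layer in self.hidden:
            x = torch.relu(layer(x))               # RELU-activated layers
        return self.out(x)                         # logits; softmax in the loss

def train_child(model, batches, L, I, w):
    """Train for I iterations with learning rate L and L2 weight w."""
    opt = torch.optim.Adagrad(model.parameters(), lr=L, weight_decay=w)
    loss_fn = torch.nn.CrossEntropyLoss()
    for step, (tokens, labels) in zip(range(I), batches):
        opt.zero_grad()
        loss_fn(model(tokens), labels).backward()
        opt.step()
```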
SIMULTANEOUS MULTITASK TRAINING RESULTS

We train n=3 MNMS models simultaneously on the SST and Spanish language identification tasks. Accuracies achieved by the sampled child models over time are shown in Figure 3, as well as the validation accuracy achieved by the best sampled models for each task. In each model, the accuracy of sampled models improves over time on a per-task basis, even while the tasks clearly have different baseline accuracies. Additionally, we find that the best discovered model design outperforms the hand-tuned state-of-the-art model within the subset of models that use a similar BOW approach (Socher et al., 2013). The best performance on the task is obtained by more complex architectures that are not within the scope of our search space (Le & Mikolov, 2014).

We also find that the MNMS framework can differentiate between the tasks to choose optimal parameters for each. In Figure 4, we show that MNMS learns differentiated distributions over the parameter search space for the separate tasks. For example, MNMS learns to choose a word embedding pre-trained on Spanish documents for the Spanish language identification task, while choosing word embeddings pre-trained on an English dataset for the Stanford Sentiment Treebank task. Finally, we find that MNMS learns that for the trivial language identification task, there is no significant difference between continuing to train the word embedding vectors or simply using the fixed, pre-trained word embeddings. For the SST task, which contains longer and more complex examples, the model learns that it must continue training the word embeddings to achieve better performance. Similarly, the search converges to favor higher hidden layer dimensions and more training iterations for the more complex SST task.

TRANSFER LEARNING RESULTS

Figure 5 compares the smoothed validation accuracy curves of baseline MNMS models trained from scratch on the IMDB and Corpus Cine tasks with MNMS models pre-trained on SST and Spanish language identification. We observe that transfer learning allows MNMS to start from a better initial location in the parameter search space, train more consistently and stably, and converge much more quickly to finding good parameters for the tasks. Additionally, we find that the best learned models discovered by MNMS perform essentially identically regardless of whether the search is started from scratch or transfer learned from a pre-trained model, demonstrating that the search is not so biased towards pre-training that it converges prematurely to local optima. When compared against other hand-tuned, state-of-the-art benchmarks also using averaged word vector inputs, we find that MNMS discovers models that outperform documented benchmarks on both tasks (Maas et al., 2011; Calvo, 2017).

[Figure 5: Smoothed sampled model accuracy curves for n=3 MNMS models trained on IMDB and Corpus Cine, comparing models trained from scratch without transfer learning, and models transfer learned after pre-training. Curves smoothed using Savitzky-Golay filtering (n=101) for clarity.]

We also find that MNMS learns task embeddings that encode expected relationships between the tasks (Figure 6). For example, we see a strong learned correlation between the IMDB and SST task embeddings, and separately between the Spanish language identification and Corpus Cine task embeddings.
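For reference, the smoothing described in the Figure 5 caption can be reproduced along the following lines; the polynomial order is not stated in the paper, so order 3 is an assumption, and the reward history here is a placeholder.

```python
import numpy as np
from scipy.signal import savgol_filter

accuracies = np.random.rand(2000)   # placeholder sampled-model accuracy history
# Savitzky-Golay filtering with a 101-sample window, as in the caption.
smoothed = savgol_filter(accuracies, window_length=101, polyorder=3)
```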
CONCLUSION

Machine learning model design choices do not exist in a vacuum. Human experts design good models by leveraging significant prior knowledge about the intuitive relationships between these model parameters, and the performance obtained by different model designs on similar tasks. Automated model design algorithms, too, can and should learn from the models they have discovered for prior tasks. This paper demonstrates that Multitask Neural Model Search can discover good, differentiated model designs for multiple tasks simultaneously, while learning task embeddings that encode meaningful relationships between tasks. We then show that multitask training provides a good baseline for transfer learning to future tasks, allowing the MNMS framework to start from a better location in the search space and converge more quickly to high-performing designs.
Proteomic Analysis of Testicular Ischemia-Reperfusion Injury in Rats

ABSTRACT Testicular torsion is a urological emergency that leads to serious testicular damage and male infertility. We performed this study to identify specific proteins that are differentially expressed in response to testicular torsion and detorsion-induced ischemia-reperfusion (I-R) injury. Adult male rats were divided into two groups: a sham-operated group and a testicular I-R group. Testicular torsion was induced by rotating the left testis 720° in a clockwise direction for 1 hr, and then detorsion was performed for 24 hr. After this, testicular tissues were collected and protein analysis was performed using two-dimensional gel electrophoresis and Western blot analyses. Testicular I-R injury resulted in serious histopathologic damage to the germinal cells in the seminiferous tubules and increased the number of TUNEL-positive cells in testicular tissue. Specific protein spots with a greater than 2.5-fold change in intensity between the sham-operated and testicular I-R groups were identified by mass spectrometry. Among these proteins, levels of peroxiredoxin 6, thioredoxin, heterogeneous nuclear ribonucleoproteins, ubiquitin carboxyl terminal hydrolase isozyme L5 and zinc finger AN1-type domain 3 were decreased in the testicular I-R group compared to the sham-operated group. Moreover, Western blot analysis clearly showed the decrease of these proteins in the testicular I-R group. These proteins have spermatogenesis and anti-oxidative functions. These findings suggest that testicular I-R results in cell death due to altered expression of several proteins with spermatogenesis and anti-oxidation functions.

Testicular torsion is a urologic emergency that mainly affects newborns, children, adolescents and young adults [7, 11]. The most common type of testicular torsion is prenatal testicular torsion, which happens prenatally or within one month of life [8, 32]. Testicular torsion causes testis dysfunction, including male infertility. Infertility results from a serious defect in spermatogenesis and affects about 5% of human males. Testicular injury caused by spermatic cord torsion causes edema and testicular ischemia [2, 4, 6]. Torsion reduces the oxygen supply to the testes, and reperfusion leads to the formation of nitrogen and reactive oxygen species (ROS) [2, 15]. Highly toxic metabolites of oxygen induce the overproduction of ROS and the activation of oxidizing enzymes, consequently leading to cytoskeletal, cell membrane and mitochondrial damage [30]. However, antioxidant agents can eliminate ROS by functioning as free radical scavengers [3]. The mechanisms of testicular I-R are unclear. We hypothesized that various proteins may contribute to the process of testicular I-R; however, little is known about changes in protein expression during testicular I-R. Thus, we identified proteins that were differentially regulated in response to testicular I-R injury.

MATERIALS AND METHODS

Experimental animals: Male Sprague-Dawley rats (230-250 g, 10 weeks, n=20) were purchased from Samtako Co. (Animal Breeding Center, Osan, Korea) and were randomly divided into 2 groups, a sham-operated group and a testicular ischemia-reperfusion (I-R) group (n=10 per group). Rats were used for the morphological study (n=5 per group) and the molecular biological study (n=5 per group). Animals were maintained under controlled temperature (25°C) and lighting (14:10 light/dark cycle) and were allowed free access to water and food.
All animal experiments were carried out in accordance with the guidelines approved by the ethics committee concerning animal research at Gyeongsang National University.

Testicular ischemia-reperfusion: Testicular ischemia and reperfusion injury was induced as previously described [37]. Rats were anesthetized with sodium pentobarbital (100 mg/kg) and were kept in a supine position. The left testis was exposed through a left-sided longitudinal incision and rotated 720° in a clockwise direction, and this torsion position was maintained by fixing the testis to the scrotum with 4-0 silk suture [37]. The incision was sutured and was reopened after 1 hr of torsion. The testis was counter-rotated to its natural position, and the testicular tissues were removed after 24 hr. In the sham-operated group, the left testis was brought out by a left-sided longitudinal incision, and then a 4-0 silk suture was placed through the tunica albuginea. After the left testis was replaced into the scrotum, the incision was closed. The sham-operated group was constituted to investigate the effect of surgical stress on spermatogenesis. The testis was frozen in liquid nitrogen and stored at −70°C until use for proteomic and Western blot analyses.

Histological analysis: Testis tissues were fixed in 4% neutral buffered paraformaldehyde, embedded in paraffin and cut into 4 µm thick slices. The sections were deparaffinized in xylene and rehydrated in gradient ethanol from 100% to 70%. The sections were stained using hematoxylin and eosin solution. The morphological changes of testis tissues were observed using light microscopy.

TUNEL histochemistry: Terminal deoxynucleotidyl transferase (TdT) dUTP nick end labeling (TUNEL) histochemistry was carried out using the DNA Fragmentation Detection Kit (Oncogene Research Products, Cambridge, MA, U.S.A.). Briefly, paraffin sections were deparaffinized in xylene, dehydrated through graded alcohol and washed with PBS. The sections were subjected to proteinase K digestion (20 µg/ml) for 20 min and blocked with 0.3% hydrogen peroxide in methyl alcohol for 10 min. The sections were washed in PBS and incubated in equilibration buffer for 30 min, and then TdT labeling reaction mixture was applied to each specimen and incubated at 37°C for 1 hr. The reaction was stopped with stop solution for 5 min, and the sections were incubated with blocking buffer for 10 min. The sections were labeled with digoxigenin peroxidase and visualized with diaminobenzidine (DAB) substrate. The sections were counterstained with hematoxylin, dehydrated in graded alcohol, cleared and coverslipped with Permount. To quantitate the incidence of apoptosis, the number of seminiferous tubules containing three or more TUNEL-positive apoptotic cells was counted [18]. The apoptotic percentage was calculated as the ratio of apoptosis-positive seminiferous tubules to the total number of seminiferous tubules in cross sections.

Two-dimensional gel electrophoresis and silver staining: Testis tissues were homogenized on ice in lysis buffer [8 M urea, 4% CHAPS, ampholytes and 40 mM Tris-HCl (pH 7.2)] and centrifuged at 16,000 g for 20 min at 4°C. The samples were kept at −70°C until use. The total protein concentration was determined using the Bradford method (Bio-Rad, Hercules, CA, U.S.A.) according to the manufacturer's protocol.
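As a simple illustration of the apoptotic-index computation described in the TUNEL paragraph above (a tubule is scored positive if it contains three or more labeled cells; the function and parameter names are ours):

```python
def apoptotic_index(cells_per_tubule, threshold=3):
    """Percentage of tubules with >= threshold TUNEL-positive cells."""
    positive = sum(1 for n in cells_per_tubule if n >= threshold)
    return 100.0 * positive / len(cells_per_tubule)

# e.g. apoptotic_index([0, 1, 5, 4, 0, 7]) -> 50.0 (3 of 6 tubules positive)
```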
Protein samples (50 µg) were applied to immobilized pH gradient gel strips (17 cm, pH 4-7, pH 6-9, Bio-Rad) with sample buffer (8 M urea, 2% CHAPS, 20 mM DTT, 0.5% IPG buffer and bromophenol blue). Isoelectric focusing (IEF) was performed using the Protean IEF Cell (Bio-Rad). After rehydration for 13 hr at 20°C, IEF was carried out in three steps at 20°C: 250 V (15 min), 10,000 V (3 hr) and then 10,000 to 50,000 V. For the second dimension analysis, strips were equilibrated in equilibration buffer [6 M urea, 1% dithiothreitol (DTT), 30% (v/v) glycerol, 2% (w/v) SDS, 50 mM Tris-HCl (pH 8.8) and bromophenol blue] for 10 min and in the same buffer containing 2.5% iodoacetamide for 10 min. Separation was performed on 7.5-17.5% gradient gels followed by electrophoresis in Protean II XI electrophoresis equipment (Bio-Rad) at 10°C. Current conditions were 5 mA/gel for 2 hr and 10 mA/gel for 10 hr. The gels were fixed in solution (50% methanol and 12% acetic acid) for 2 hr, washed with 50% ethanol for 20 min and incubated in 0.02% sodium thiosulfate for 1 min. After washing with distilled water, the gels were reacted with silver stain solution for 20 min, developed in developing solution, and the reactions were stopped with stop solution (1% acetic acid).

Image analysis and protein identification: The silver stained gels were scanned using an Agfa ARCUS 1200™ scanner (Agfa-Gevaert, Mortsel, Belgium). The scanned gel images were used to measure differentially expressed proteins between groups using PDQuest software (Bio-Rad). The selected spots were cut from the gels, destained using 50% acetonitrile solution and dried for 20 min using a vacuum centrifuge. The gel particles were incubated with reduction solution (10 mM DTT in 0.1 M NH4HCO3) at 56°C for 45 min and alkylation solution (55 mM iodoacetamide in 0.1 M NH4HCO3) for 30 min. The gel particles were washed with 0.1 M NH4HCO3 for 15 min, and the same volume of acetonitrile was added. The gel spots were then incubated with trypsin-containing digestion buffer. Matrix solution was made using ACHC solution (α-cyano-4-hydroxycinnamic acid in acetone) and nitrocellulose solution (nitrocellulose in acetone and isopropanol) at a ratio of one to four. After the preparation of the matrix solution, calibrants (angiotensin and neurotensin) were added. The samples were dissolved in the matrix solution by pipetting, loaded on a MALDI plate, dried completely and then washed with 0.1% trifluoroacetic acid. MALDI-TOF MS was carried out using a Voyager-

Western blot analysis: Total protein (30 µg) was applied to each lane on 10% SDS-polyacrylamide gels. Electrophoresis and immunoblotting were performed, and the polyvinylidene fluoride (PVDF) membranes (Millipore, Billerica, MA, U.S.A.) were washed in Tris-buffered saline containing 0.1% Tween-20 (TBST) and then incubated with anti-zinc finger AN1-type domain 3 (Sigma), anti-heterogeneous nuclear ribonucleoproteins, anti-ubiquitin carboxyl terminal hydrolase isozyme L5, anti-peroxiredoxin-6, anti-thioredoxin and actin antibody (diluted 1:1,000, Santa Cruz Biotechnology, Santa Cruz, CA, U.S.A.) as the primary antibody. The membrane was then incubated with horseradish peroxidase-conjugated rabbit IgG or mouse IgG as the secondary antibody (diluted 1:5,000, Pierce, Rockford, IL, U.S.A.), and signals were detected with the ECL Western blot analysis system (Amersham Pharmacia Biotech, Piscataway, NJ, U.S.A.) according to the manufacturer's protocol.

Data analysis: All data are expressed as mean ± SEM.
The intensity analysis of protein spots was carried out using SigmaGel 1.0 (Jandel Scientific, San Rafael, CA, U.S.A.) and SigmaPlot 4.0 (SPSS Inc., Point Richmond, CA, U.S.A.). The results in each group were compared by Student's t-test, and differences were considered significant at P<0.05.

RESULTS

Testes from sham-operated animals had a normal testicular architecture and seminiferous tubular morphology with normal spermatogenesis, including primary and secondary spermatocytes, spermatids and spermatozoa (Fig. 1A and 1C). However, in testicular I-R animals, marked morphological changes were evident, with severe distortion of tubules. Some tubules contained a few primary and secondary spermatocytes, while other tubules had noncohesive germinal cells with pyknotic nuclei and extensive disorganization (Fig. 1B and 1D). TUNEL histochemical staining was performed to evaluate apoptotic cell death. The number of TUNEL-positive cells was significantly higher in testicular I-R animals than in sham-operated animals (Fig. 2A-2D). TUNEL-positive cells were observed especially in spermatogonia and spermatocytes, whereas only a few seminiferous tubules in sham-operated animals had TUNEL-positive cells. The apoptotic index was 4.3 ± 1.3% and 43.8 ± 6.8% in sham-operated and testicular I-R animals, respectively (Fig. 2E).

Figure 3 shows the two-dimensional electrophoresis maps in the pH ranges of 4-7 and 6-9 for testis proteins from sham-operated and testicular I-R injured animals. Approximately 900 protein spots were present in the pH 4-7 map and 200 protein spots in the pH 6-9 map. We detected thirty-one protein spots with more than a 2.5-fold change in intensity between sham-operated and testicular I-R injured animals. Among these, twenty-seven proteins were identified by MALDI-TOF analysis with protein sequence coverage of 10-69% (Table 1). However, four proteins were not identified by MALDI-TOF analysis and were designated as unknown proteins. Among the identified proteins, levels of ubiquitin carboxyl terminal hydrolase isozyme L5, zinc finger AN1-type domain 3, heterogeneous nuclear ribonucleoproteins, peroxiredoxin 6 (Prdx-6) and thioredoxin (Trx) were decreased in testicular I-R injury animals compared to sham-operated animals. In contrast, levels of Rab GDP dissociation inhibitor beta, guanidinoacetate N-methyltransferase, proteasome subunit beta type-4, hydroxymethylglutaryl CoA synthase and one unknown protein were increased in testicular I-R injury animals relative to sham-operated animals.

Western blot analysis demonstrated that ubiquitin carboxyl terminal hydrolase isozyme L5, zinc finger AN1-type domain 3 and heterogeneous nuclear ribonucleoprotein levels were significantly decreased in testicular I-R injury animals compared to sham-operated animals (Fig. 4). Protein levels are presented as the ratio of the intensity of the protein to that of actin. Ubiquitin carboxyl terminal hydrolase isozyme L5 levels were 0.85 ± 0.02 and 0.65 ± 0.03 in sham-operated and testicular I-R animals, respectively (Fig. 4A). Heterogeneous nuclear ribonucleoprotein levels were 0.81 ± 0.03 and 0.66 ± 0.04 in sham-operated and testicular I-R animals (Fig. 4B). Zinc finger AN1-type domain 3 levels were 0.77 ± 0.04 and 0.63 ± 0.03 in sham-operated and testicular I-R animals (Fig. 4C). Moreover, peroxiredoxin-6 and thioredoxin levels were significantly decreased in testicular I-R injury animals compared to sham-operated animals.
Peroxiredoxin-6 levels were 0.78 ± 0.02 and 0.53 ± 0.02 in sham-operated and testicular I-R animals, respectively (Fig. 4D). Thioredoxin levels were 0.82 ± 0.02 and 0.47 ± 0.03 in sham-operated and testicular I-R animals (Fig. 4E).

Fig. 3. Two-dimensional SDS-PAGE analysis of proteins in the testis from sham-operated (A and C) and testicular I-R (B and D) groups. Isoelectric focusing was performed at pH 4-7 and pH 6-9 using IPG strips, followed by second-dimensional separation on 7.5-17.5% gradient SDS gels stained with silver. Squares indicate the protein spots that were differentially expressed between sham-operated and testicular I-R groups.

DISCUSSION

This study clearly demonstrates that testicular I-R injury leads to serious histopathologic damage of the testis, including distortion of the seminiferous tubules and separation of germinal cells, as well as an increase in the number of apoptotic cells. Testicular I-R injury leads to the generation of ROS, and excessive ROS generation overwhelms the capacity of endogenous free radical scavengers. The accumulation of toxic oxides causes apoptosis in germ cells [14, 20]. Moreover, it has been shown that apoptotic cells are observed in the spermatogonia and other spermatogenic cells in testicular I-R [21, 34]. We confirmed that apoptotic cells were observed especially in spermatogonia and spermatocytes. Moreover, using a proteomics approach, we identified thirty-one differentially expressed proteins following testicular I-R injury. Among these proteins, we focus in the discussion on specific proteins that have spermatogenesis and anti-oxidative functions.

Ubiquitin thioesterase is a ubiquitin carboxyl terminal hydrolase (Uch). Ubiquitin plays a critical role in various cellular processes, including cell differentiation, cell protection under oxidative stress, signal transduction and apoptosis [31]. Uch isoenzymes (Uch-L) mainly affect spermatogenesis in the testis. Among the Uch-L, Uch-L1 and Uch-L4 mRNAs are expressed in spermatogonia, while Uch-L3 and Uch-L5 mRNAs are expressed in spermatids and spermatocytes [22]. Moreover, Uch-L1 has been shown to be downregulated in germ cells during testicular ischemia and reperfusion, and Uch-L1 deficiency results in infertility [33]. A decrease in Uch-L1 expression leads to a decline in ubiquitination [33]. However, overexpression of UCH-L1 also induces germ cell apoptosis and inhibits spermatogenesis, leading to male sterility [35]. Thus, the proper level of UCH-L1 expression in the testis is very important for normal spermatogenesis. This study showed a decrease of Uch-L5 in testicular I-R injury, which can contribute to spermatogenic dysfunction. We therefore speculate that a decrease in expression of Uch-L5 mediates testicular I-R-induced apoptotic cell death and defects in spermatogenesis.

Zinc finger AN1-type domain 3 (ZFAND3), also known as testis-expressed sequence 27 (Tex27), was originally detected in mouse testis [24]. ZFAND3 is present in post-meiotic cells during spermatogenesis [9]. Moreover, ZFAND3 mRNA is expressed primarily in spermatids in the testis and oocytes in the ovary [29]. It is accepted that ZFAND3 is a useful marker of spermatogenesis, because ZFAND3 has a critical physiological function related to germ cell maturation [29]. Thus, the regulation of ZFAND3 expression in the testis is critical for spermatogenesis, and a decrease in ZFAND3 expression may lead to the inactivation of spermatogenesis [9].
We found that the level of ZFAND3 was decreased in the testis as a result of testicular I-R injury, and a decrease in ZFAND3 expression reduces spermatogenic ability. Thus, our data demonstrate that testicular I-R injury leads to a decrease of ZFAND3 and consequently results in a serious defect in spermatogenesis.

Heterogeneous nuclear ribonucleoproteins (HnRNPs) are a family of proteins that share common structural domains. HnRNPs play important roles in DNA repair, telomere biogenesis, and cell signaling during gene transcription and translation [5]. Moreover, HnRNPs have multiple roles in tumor development, including angiogenesis and cell invasion [5]. Among the HnRNP family proteins, HnRNP G-T is a germ cell-specific nuclear protein that is expressed mostly in pachytene spermatocytes [38]. The presence of HnRNP G-T is important for normal germ cell development [13]. HnRNP-L acts as a key regulator of spermatogenic cell apoptosis and growth [23]. Knockout of the HnRNP-L gene leads to inhibition of proliferation and an increase in apoptosis of spermatogenic cells [23]. We found that testicular I-R injury resulted in increased apoptotic cell death in germ cells and serious testis tissue damage. In particular, we observed a significant decrease in HnRNPs in response to testicular I-R injury, which may explain the increase in apoptosis of germ cells and the resultant dysfunction of spermatogenesis.

The peroxiredoxin (Prdx) family proteins are involved in cell differentiation, proliferation and gene expression. Among these family proteins, Prdx-6 is also known as antioxidant protein 2. Prdx-6 protects liver tissue against mitochondrial dysfunction during hepatic ischemia-reperfusion and contributes to mitochondrial trafficking [12]. Prdx-6 knockout mice were more susceptible to injury, with increased tissue damage in liver and heart [12, 26]. Prdx-6 is highly expressed in epithelial cells and the Sertoli cells of the testis [14]. Prdx-6 protects Leydig cells against oxidative stress [39]. Moreover, overexpression of Prdx-6 results in resistance to cytotoxicity induced by chemical agents and promotes cell proliferation [10, 16]. We showed that testicular I-R injury induces a decrease in Prdx-6 levels. Moreover, Western blot analysis and RT-PCR analysis clearly demonstrated that Prdx-6 levels are markedly decreased in rats with testicular I-R injury, which would decrease anti-oxidant activity in the tissues of the testis, resulting in testicular damage.

Thioredoxin (Trx) is a small redox protein that suppresses apoptosis and protects cells against oxidative stress. Trx contributes to several cellular processes, including redox signaling and oxidative stress responses [36, 38]. Redox regulation is an essential step in the normal spermatogenesis process; thus, oxidative stress is one of the major causative factors of male infertility [1]. Moreover, sperm-specific Trx is expressed in spermatozoa and in developing testicular germ cells [28]. The sperm redox system plays a key role in protecting spermatozoa from ROS until fertilization [27]. Thus, a decrease in Trx expression indicates a decline in anti-oxidative ability and spermatogenesis. We found decreased Trx expression in testicular ischemic injury. A previous study demonstrated that the expression of thioredoxin-1 and thioredoxin-2 was significantly decreased in the cerebrums of rats with ischemia and reperfusion injury [19].
During ischemia and reperfusion injury, excessive free radicals are produced, leading to protein oxidation and DNA damage [17]. Thioredoxin may reduce free radical production and remove oxygen free radicals. In this study, we identified the decrease of Trx in testicular I-R injury using a proteomic approach, and we confirmed this decrease using Western blot and RT-PCR analyses. Our results indicate that the testicular I-R injury-induced reduction in Trx leads to testicular cell death. In the present study, we obtained these results at 24 hr after testicular I-R injury. However, the expression of apoptosis-related proteins is correlated with the time of reperfusion after testicular I-R [25]. Thus, we propose that several additional proteins could be identified at other time points after testicular I-R injury. In conclusion, this study showed that levels of peroxiredoxin 6, thioredoxin, heterogeneous nuclear ribonucleoproteins, ubiquitin carboxyl terminal hydrolase isozyme L5 and zinc finger AN1-type domain 3 proteins decreased significantly in response to testicular I-R injury. These proteins have anti-oxidative and spermatogenesis functions. Thus, these findings suggest that testicular I-R injury causes testicular damage due to changes in the expression of several proteins.
Negative symptoms and negative schizophrenia.

This study determines the frequency distribution of prominent negative symptoms in a group of chronic, hospitalised schizophrenics. Thirty chronic schizophrenic (DSM-III) patients were rated on the Scale for the Assessment of Negative Symptoms (SANS), and the prominent negative symptoms were correlated with age, sex and certain illness variables. The majority (80%) of patients had one or another negative symptom, except thought blocking, which was found in none. Subjective awareness of the symptoms was poor. Most negative symptoms were present to a severe degree in about 40% of cases. However, no significant correlation was found between severe negative symptoms and age or sex. Similarly, duration of illness, duration of hospitalisation and current medications did not influence negative symptoms to any appreciable degree. The implications are discussed.

Schizophrenia researchers have recently expressed renewed interest in the assessment, etiology and treatment of negative symptoms (Andreasen and Olsen, 1982; Strauss and Carpenter, 1974; Crow, 1980; Lewine et al., 1983). Negative symptoms, defined as deficits or losses in function, are emphasised as important features of the schizophrenic syndrome. They are common in chronic schizophrenia. Current thinking about the phenomenology of schizophrenia stresses the importance of distinguishing between positive and negative symptoms (Crow, 1980; Strauss and Carpenter, 1974). Schizophrenics presenting with predominant negative symptoms have been variously described as having Negative Schizophrenia (Andreasen and Olsen, 1982), the Type II Syndrome (Crow, 1980), the Clinical Poverty Syndrome (Wing, 1978) or the Defect State (Crow et al., 1979). These so-called negative or unproductive symptoms are described as typical of schizophrenic deterioration (Ciompi, 1983). The exact etiopathogenesis of negative symptoms is unclear. Negative symptoms might be a consequence of positive symptoms that occur over extended periods, a result of social or institutional responses to these symptoms, or relatively intrinsic to the individual personality structure (Strauss and Docherty, 1979). Biological and organic factors such as structural brain lesions are also implicated as being responsible for these negative symptoms (Johnstone et al., 1976; Crow, 1980). Social understimulation and institutionalisation are other factors considered to influence the production of negative symptoms (Wing, 1978; Bhaskaran et al., 1972). Alternatively, it could be a multi-determined state which occurs mainly after, but sometimes before, the acute productive schizophrenic manifestations (Ciompi, 1983). Preliminary research suggests negative symptoms may be useful in predicting long-term outcome and response to treatment, in distinguishing between mania and schizophrenia, and in identifying patients with structural brain lesions (Johnstone et al., 1976; Johnstone et al., 1978). Although these various studies are of interest and are certainly important, they raise further questions, principally about the frequency of these symptoms and about the extent to which they are associated with factors such as duration of illness, duration of hospitalisation and treatment given. The present study attempts to provide some new information on this important aspect. The aim is to determine the prevalence of various negative symptoms in chronic hospitalised schizophrenics.
The second objective is to study the relationship between patients with marked or severe negative symptoms on more than two subscales and certain demographic as well as illness variables. Such patients with marked negative symptoms on more than two subscales can be considered to have Negative Schizophrenia (Andreasen and Olsen, 1982).

MATERIAL AND METHOD

This study was conducted at the National Institute of Mental Health and Neuro Sciences, Bangalore. Thirty patients diagnosed with Chronic Schizophrenia as per DSM-III (A.P.A., 1980) were selected at random as the sample. For the purpose of this study, patients between the ages of 20 and 55 years who were long-stay inpatients (more than one year) of this hospital were selected. Patients with epilepsy, mental retardation, organic mental disorders and major physical diseases were excluded from the study. To remove any bias in selection, a list of cases meeting the above criteria was prepared, from which thirty cases were selected randomly. The sample consisted of 7 males and 23 females. The negative symptoms in these patients were rated using the Scale for the Assessment of Negative Symptoms (SANS) (Andreasen, 1981) by two clinical psychiatrists. The scale has undergone tests for reliability, internal consistency and validation (Andreasen and Olsen, 1982). We have also evaluated the inter-rater and test-retest reliability of the scale and found it applicable in our setting (Mathai et al., 1984). The rating of each of the components was made based on multiple sources of information, including direct observation by the investigators and the nurse in charge of the ward, and reports of the patients. Patients were rated on all five subscales: Affective flattening, Alogia, Avolition-apathy, Anhedonia-asociability and Attentional impairment. For this study, patients having a marked to severe degree of negative symptoms on more than two subscales have been correlated with the patients' age, sex and certain illness variables such as duration of illness, duration of hospitalisation and status of current medication.

Table I gives the distribution of the sample studied. The patient percentage distribution on the various scales and subscales according to severity is given in Table II. The majority of patients had some negative symptoms definitely present, except blocking, which we found in none of our patients. The common negative symptoms were unchanging facial expression, impersistence at work, inability to feel intimacy and closeness, and asociability, found in about 80% of patients. Lack of sexual interest and activity and poverty of speech were reported in 35-40% of cases. However, subjective complaints or awareness of the various negative symptoms was very poor in most cases. Twenty cases (66.7%) had marked to severe negative symptoms on more than two subscales. The comparison between patients with negative symptoms on more than two subscales and those with minimal or absent negative symptoms in relation to age, sex and certain illness variables is given in Table III. There is no significant statistical association between severe negative symptoms and any of the variables.

DISCUSSION

This report shows that a large majority (84%) of patients had definite negative symptoms, and in about two-thirds these were present to a marked or severe degree, fulfilling the criteria of negative schizophrenia as described by Andreasen and Olsen (1982).
None of the patients had positive symptoms dominating the clinical picture. The extensive prevalence of these symptoms justifies the subtyping as negative schizophrenia, as has been done by certain researchers (Crow, 1980). The exact frequency of negative symptoms in schizophrenia is not reported in the literature; hence, a comparison with the present results cannot be attempted. Symptoms like flatness of affect and other negative and non-specific symptoms were reported from all centres of the International Pilot Study of Schizophrenia and its 2-year follow-up (WHO, 1974; WHO, 1979). Certain disabling negative symptoms such as inattentiveness, inability to feel intimacy, inadequate social relations, poverty of speech and affective flattening, present at a severe intensity in almost 25% of the patients in this study, were noticed in many subjects in Owens and Johnstone's (1980) study. Negative symptoms such as apathy and withdrawal were difficult to treat in another report (Strauss and Docherty, 1979). Interestingly, there is no significant correlation between negative schizophrenia and age or sex. Bhaskaran et al. (1972) reported more severe deficits in females, and a positive relationship between age and deficits was reported by Owens and Johnstone (1980) and Johnstone et al. (1981). The difference is probably due to the nature of the sample; Owens and Johnstone examined mainly elderly patients with a mean age of 60 years. Similarly, duration of illness and duration of hospitalisation have no significant association with severe negative symptoms in this study, as was also noticed by Bhaskaran et al. (1972). Some clinicians believe that negative symptoms increase as illness becomes more chronic (Strauss and Docherty, 1979; Johnstone et al., 1981), and some implicate hospitalisation and the hospital environment as being responsible for causing deficits and negative symptoms such as avolition or apathy (Wing and Brown, 1961; Wing and Brown, 1970; Bhaskaran, 1970; Bhaskaran et al., 1972; Strauss and Docherty, 1979). The present study does not lend support to this opinion, as there is no significant differential distribution of patients with negative schizophrenia in relation to duration of illness or hospitalisation. Similar results were reported by Andreasen and Olsen (1982). The role of continuing medications in chronic schizophrenics has been a controversial area. Some surveys found no difference in outcome between patients discontinuing medications and those continuing (Johnson, 1976; Johnson, 1979; Johnson et al., 1983). Andreasen and Olsen (1982), Wing and Brown (1970) and Owens and Johnstone (1980) reported that neuroleptic medication as such does not contribute to the development of deficits or negative symptoms. This study also found no difference in severity of negative symptoms between patients continuing or not continuing medications. These patients had not been on any medication for more than two years, as their clinical status had been stable and the symptoms were not showing even the slightest improvement. Other patients were receiving long-term depot preparations (fluphenazine decanoate). Probably, additional investigation is required to determine the frequency of negative symptoms in a relatively larger sample of schizophrenic patients, their possible predictive validity and possible remedial measures. This study certainly shows that negative symptoms are very common in chronic schizophrenia, as has been reported previously.
It also clarifies the absence of any role of duration of illness, hospitalisation, medication, age or sex in producing negative symptoms. This area of research is interesting, and more work from the Indian culture and background, as well as cross-cultural studies, would prove illuminating.
Photoemission study of the SiO2 conversion mechanism to magnesium silicate

The objective of this work is to investigate interface chemistries which minimize the interfacial silicon oxide transition region at Si/high-k dielectric interfaces. We report on the mechanism by which a silicon native oxide layer is converted into magnesium silicate. The deposition of metal Mg onto a SiO2 native oxide surface resulted in the formation of a magnesium silicide in addition to substoichiometric silicon oxides and a significant decrease in the oxidised silicon signal. Annealing to 300 °C resulted in the decomposition of the magnesium silicide, oxidation of the Mg, and the desorption of excess metallic Mg. Subsequent annealing to 500 °C resulted in converting the SiO2 into magnesium silicate. The results suggest that the decomposition of the Mg silicide in the presence of the residual native oxide facilitates silicate formation at 500 °C. Due to the reported thermal stability of Mg silicate, it is suggested that this process may be beneficial in modifying the interface characteristics of the Si/high-k dielectric interface, which has potentially significant implications for future semiconductor device generations. © 2010 American Institute of Physics. doi:10.1063/1.3357392

I. INTRODUCTION

Controlling the interfacial properties of silicon is one of the biggest challenges facing the integration of high-k materials in future silicon based transistor technology. The thermal stability of the interfacial region must be addressed in order to prevent the growth of interfacial SiO2, which can adversely affect the equivalent oxide thickness of the device [1]. The formation of thermally and chemically stable metal-silicates at the silicon surface may prevent the growth of SiO2. Previous studies have shown that silicate layers can be formed by deposition of various metals including yttrium [2], lanthanum [3], and erbium [4], and have been shown to produce promising electrical and physical characteristics. In this study the formation of a magnesium silicate interfacial layer is investigated as an alternative to SiO2 due to its high reported thermal stability [5]. Previous studies have shown that MgO deposition onto Si results in the growth of a thin (<1 nm) interfacial Mg silicate region [6,7]. Further studies also indicate that these films lead to favorable interface characteristics including low interface state densities [8]. However, it has also been reported that the strong tendency for MgO to crystallize results in the film displaying a high density of columnar grains within the oxide films, independent of growth method [9,10]. The grain boundaries associated with these columnar structures may result in the rapid formation of breakdown paths, dramatically reducing the lifetime of the devices [11]. Therefore the focus of this work is to investigate the potential benefits of using magnesium silicate as a chemically stable interlayer between the silicon substrate and a high-k material in order to control the interfacial properties. The results suggest that it is possible to convert a SiO2 native oxide layer into Mg silicate, and the mechanism by which this occurs is discussed. Also, due to the thermal instability of Mg silicide, the formation of Mg silicate can be achieved in situ in ultrahigh vacuum (UHV) conditions. This allows for greater understanding of the reaction mechanisms involved in metal-silicate formation, specifically the role of silicide formation.

II. EXPERIMENTAL PROCEDURE
II. EXPERIMENTAL PROCEDURE Si(111) native oxide surfaces were prepared using a standard degreasing procedure of successive dips in acetone, methanol, and isopropyl alcohol before being loaded into a UHV deposition and analysis system. Pure magnesium metal (99.9%) was deposited at room temperature at a pressure of 1 × 10⁻⁹ mbar onto the native oxide surface using thermal evaporation. The x-ray photoelectron spectroscopy (XPS) analysis was carried out using a Vacuum Generators (VG) Microtech electron spectrometer at a base pressure of 1 × 10⁻⁹ mbar. The photoelectrons were excited with a conventional Mg Kα (hν = 1253.6 eV) x-ray source and an electron energy analyzer operating at a 20 eV pass energy, yielding an overall resolution of 1.2 eV. High temperature annealing studies were carried out in vacuum at a pressure of 1 × 10⁻⁹ mbar, with samples kept at the target temperature for 20 min. Information on the species which desorb from the surface during thermal annealing was acquired with an Ametek Process Instruments Dycor mass spectrometer, set to monitor atomic masses of 24 (Mg) and 40 (MgO). XPS core level spectra were curve fitted using Voigt profiles composed of Gaussian and Lorentzian line shapes in a 3:1 ratio and using a Shirley-type background. The full width at half maximum (FWHM) of the Si 2p substrate peak was 0.80 eV, and the oxide, silicide, and silicate component peaks were in the range 1.2 to 1.5 eV. The corresponding O 1s core level spectra have FWHM values in the 1.75 to 1.9 eV range. All spectra were charge referenced to the Si 2p bulk signal at 99.3 eV, and there was no evidence of differential charging effects for the native oxide covered surfaces.
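As an illustration of the curve-fitting procedure just described, the sketch below fits a two-component Si 2p region with pseudo-Voigt profiles (a fixed 3:1 Gaussian:Lorentzian mix) on top of an iterative Shirley-type background. It runs on synthetic data; the peak positions, amplitudes, and the simple two-peak model are illustrative assumptions, not the fitting code actually used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(x, amp, center, fwhm):
    """Pseudo-Voigt line shape: 3:1 Gaussian:Lorentzian mix sharing one FWHM,
    approximating the Voigt profiles described in the text."""
    eta = 0.25  # Lorentzian fraction for a 3:1 Gaussian:Lorentzian ratio
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-0.5 * ((x - center) / sigma) ** 2)
    lorentz = 1.0 / (1.0 + ((x - center) / (fwhm / 2.0)) ** 2)
    return amp * ((1.0 - eta) * gauss + eta * lorentz)

def shirley_background(y, max_iter=50, tol=1e-6):
    """Iterative Shirley-type background anchored at the spectrum end points."""
    y0, y1 = float(y[0]), float(y[-1])
    b = np.zeros_like(y, dtype=float)
    for _ in range(max_iter):
        prev = b.copy()
        area = np.cumsum((y - y1 - b)[::-1])[::-1]  # integral from each point to the end
        b = (y0 - y1) * area / area[0]
        if np.max(np.abs(b - prev)) < tol:
            break
    return y1 + b

def two_peak_model(x, a1, c1, w1, a2, c2, w2):
    return pseudo_voigt(x, a1, c1, w1) + pseudo_voigt(x, a2, c2, w2)

# Synthetic Si 2p region: bulk peak at 99.3 eV plus a silicide component 1.1 eV below
be = np.linspace(96.0, 103.0, 400)
rng = np.random.default_rng(0)
raw = (two_peak_model(be, 1000, 99.3, 0.8, 300, 98.2, 1.3)
       + 50.0 + rng.normal(0, 5, be.size))

signal = raw - shirley_background(raw)
p0 = [900, 99.2, 0.9, 200, 98.3, 1.2]  # initial guesses near the expected peaks
popt, _ = curve_fit(two_peak_model, be, signal, p0=p0)
print("fitted centers (eV):", popt[1], popt[4])
```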
III. EXPERIMENTAL RESULTS AND DISCUSSION The curve fitted Si 2p spectra taken from a SiO2 native oxide covered silicon surface before and after ~2 nm Mg deposition, shown in Fig. 1, illustrate that Mg deposition has resulted in the growth of peaks on both the higher binding energy (HBE) and lower binding energy (LBE) sides of the Si bulk peak. The HBE peaks have been attributed to substoichiometric silicon oxide species which appear as the result of Mg incorporation within the SiO2 layer. The LBE peak, separated from the Si substrate peak by 1.1 eV, has been attributed to the presence of magnesium silicide, which is consistent with the results reported by Brause et al. 12 The formation of a metal silicide resulting from the deposition of a reactive metal onto a thin silicon surface oxide is usually associated with a reaction between the metal and the silicon substrate. This is because the normal thermodynamic trend for metal oxides used as high-k dielectrics is that the metal oxide is thermodynamically more stable than the silicide. Therefore, depositing these metals on thin Si oxide layers usually results in the formation of a metal oxide by the reduction of SiO2, 13 releasing silicon which can subsequently oxidize and contribute to the formation of an increased silicon oxide interlayer. In this case, the Mg preferentially reacts with the silicon atoms within the native oxide layer. This has been conclusively illustrated in a separate study where Mg was deposited on a SiO2 layer of effectively infinite thickness (700 nm) and, as shown in Fig. 1, the metal silicide was clearly observed. However, interface reactions are not necessarily predictable from bulk thermodynamic data, as the nonequilibrium nature of the deposition process can have an impact on interface composition. 14,15 The growth of Mg silicide in the absence of bulk silicon suggests that the Mg atoms have disrupted the SiO2 structure, resulting in Mg atoms taking the place of the O atoms and creating Mg-Si bonds. Following the deposition of Mg on the native oxide surface, the samples were annealed in UHV to 300 °C, and the changes in the Si 2p spectrum shown in Fig. 2 suggest the complete removal of Mg silicide, along with the removal of the Si suboxide groups. The thermal instability of Mg silicide has previously been reported. 12,16 The reduction in the SiO2 peak area from 17% to 10% of the total signal is consistent with the loss of oxygen from the surface during Mg silicide formation, but there is no discernible shift in the peak position of the silicon oxide, indicating that after this anneal, the silicon within the oxide is predominantly in the Si 4+ oxidation state. The Mg 2p spectrum in Fig. 3 relating to the deposition of Mg shows an asymmetric peak indicative of metallic Mg. 17 Annealing to 300 °C results in a 1.8 eV shift in the Mg 2p peak toward HBE, which is indicative of the oxidation of the remaining Mg present on the SiO2 surface. The integrated area under the Mg 2p peak was also reduced by a factor of 2.5 as a result of the anneal, suggesting desorption of excess Mg from the surface. This is substantiated by mass spectrometry data taken during annealing, shown in the inset of Fig. 3, where the predominant species desorbing from the surface at 300 °C is Mg, with a smaller contribution from MgO. The desorption of metallic Mg from the surface is in agreement with the work of Galkin et al., 16 which suggests that room temperature deposition of Mg onto Si initially results in the formation of silicide islands, while continued deposition results in the growth of metallic Mg on top of these island structures. Other studies 18 have shown that metallic Mg will desorb from the Si surface at temperatures above 200 °C. Subsequent annealing of the sample to 500 °C induces a significant shift in the Si oxide component of the Si 2p peak profile in Fig. 2. The Si oxide peak now shows a separation from the bulk peak equal to 3.25 eV, which is indicative of Mg silicate and not SiO2. The peak position of Mg silicate with respect to the silicon substrate peak has previously been found during MgO deposition onto the Si(111) surface, 7 and verified using Mg silicate reference materials. Peak fitting of the Si 2p spectrum in Fig. 2 suggests that the contribution of the Si 4+ oxidation state has been reduced below the level of detection for standard XPS, which indicates a considerable reduction in the presence of the SiO2 native oxide. Mg 2p spectra (not shown) taken after the 500 °C anneal show a 0.2 eV shift to LBE, which is consistent with Mg silicate formation. It should also be noted that there is no discernible change in the Mg 2p peak area, indicating that the oxidised Mg formed after the 300 °C anneal does not desorb during the 500 °C anneal. Further evidence that the SiO2 native oxide has been converted into Mg silicate comes from the evolution of the O 1s spectrum illustrated in Fig. 4.
The deposition of Mg on the SiO2 native oxide surface leaves the O 1s peak profile unchanged, which supports the observation from the Si 2p core level that the room temperature interaction of Mg with the SiO2 occurs preferentially with the silicon atoms in the SiO2. Curve fitting of the broadened O 1s peak observed after 300 °C annealing reveals the presence of an oxidised Mg component shifted by 1.0 eV to LBE in addition to the SiO2 peak, which is consistent with the oxidation shown in the Mg 2p profile. This energy separation between the two oxide components of the O 1s spectrum is less than that seen between SiO2 and fully stoichiometric MgO, which has been measured in separate experiments to be 1.54 eV. Therefore, although the Mg is in an oxidized state, it is not in the fully oxidized species MgO. It can also be said that the Mg atoms are not in a silicate bonding environment, as the silicon oxide state shown in the Si 2p spectra (Fig. 2) is still indicative of the SiO2 native oxide. After annealing to 500 °C there is a significant decrease in the presence of the SiO2 related component within the O 1s core level profile. Given that there is no change in the total integrated area of the peak, this again indicates that the SiO2 has been converted into Mg silicate, consistent with the observations on the Si 2p core level. While the Mg 2p core level spectrum shifts by 0.2 eV between the suboxidized Mg and the Mg silicate, no corresponding binding energy shift is observed between the Mg suboxide and Mg silicate components of the O 1s spectra. This would indicate that the binding energy position of the O 1s spectrum is insensitive to chemical changes between oxidized magnesium and magnesium silicate. The chemical shifts shown by the Si 2p, Mg 2p, and O 1s spectra after Mg silicate formation are in accordance with the electronegativities of magnesium, silicon, and oxygen (1.31, 1.90, and 3.44 on the Pauling scale). These values would also suggest that the O 1s binding energy position of both suboxidized magnesium and magnesium silicate would lie between those of SiO2 (BE = 533.3 eV) and MgO (BE = 531.8 eV), which agrees with the above observations. Subsequent studies have shown that the determining factor in silicate growth is the amount of Mg silicide initially formed on the SiO2 surface. The importance of silicide formation as an intermediate step to silicate formation has previously been suggested by Chambers and Parsons 2 and can be conclusively shown here. The spectra in Fig. 1 showed the effect of ~2 nm Mg deposition onto SiO2, which caused the growth of Mg silicide and eventually led to the growth of Mg silicate. However, if only 1 nm of Mg is deposited, an insufficient amount of Mg silicide is formed, and annealing to 500 °C results in only partial transformation of the SiO2 into Mg silicate, as can be seen in the Si 2p spectra in Fig. 5. Further evidence for the importance of silicide formation comes from analyzing the chemical species present on the surface after annealing to 300 °C. The Si 2p spectrum in Fig. 2 taken after the 300 °C anneal shows no evidence of Mg silicide, while the silicon oxide present on the surface remains in the form of SiO2.
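For reference, the Si 2p component assignments used throughout this discussion can be collected into a small lookup. The silicide (-1.1 eV) and silicate (+3.25 eV) separations from the bulk peak are the values quoted above; the suboxide window and the ~+4 eV position for Si 4+ (SiO2) are typical literature figures, assumed here purely for illustration.

```python
def assign_si2p(shift_ev, tol=0.3):
    """Map a Si 2p component's shift (eV, relative to the bulk peak at 99.3 eV)
    to a chemical species. The silicide and silicate shifts are quoted in the
    text; the suboxide window and the ~+4 eV Si4+ position are assumed
    literature values used only for this sketch."""
    if abs(shift_ev - (-1.1)) <= tol:
        return "Mg silicide"
    if abs(shift_ev - 3.25) <= tol:
        return "Mg silicate"
    if abs(shift_ev - 4.0) <= tol:
        return "SiO2 (Si4+)"
    if 0.5 <= shift_ev <= 3.0:
        return "Si suboxide (Si1+..Si3+)"
    return "unassigned"

for s in (-1.1, 1.8, 3.25, 4.0):
    print(f"{s:+.2f} eV -> {assign_si2p(s)}")
```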
Also, Fig. 3 shows that the Mg present on the surface after the 300 °C anneal is in the form of oxidised Mg, but not fully oxidised MgO. Therefore, while the only chemical species present are oxidized Mg and SiO2, annealing the sample to 500 °C still results in silicate formation. This is in direct contrast to the result seen after the deposition of stoichiometric MgO onto the SiO2 native oxide surface (not shown), where the spectra taken after annealing to 500 °C show no evidence for the transformation of SiO2 into Mg silicate. This would indicate that the presence of SiO2 and stoichiometric MgO will not lead to silicate formation at 500 °C without the effect of the intermediate silicide formation at room temperature and its subsequent decomposition at 300 °C prior to the 500 °C anneal. Given the importance of silicide in silicate formation, it is suggested that the thermal instability of Mg silicide offers an advantage over other metal silicides. The thermal removal of Mg silicide prior to Mg silicate formation is in contrast to the silicide oxidation process developed for the formation of other metal silicates such as Y silicate 2,19 and Hf silicate. 20 Previous studies have shown that the oxidation of Y silicide can only be achieved at high temperature (600-900 °C) and high pressures of N2O (1 atm), while high pressure ozone oxidation has been used to form hafnium silicate from deposited hafnium silicide. The presence of excess oxygen in the high temperature annealing of rare earth silicates has been shown to result in the growth of interfacial Si-O-Si bonds. 21 However, it should also be noted from our studies that the desorption of excess metallic Mg from the silicon surface at 300 °C means that it is difficult to promote additional silicide formation above that which forms initially at room temperature. The thermal instability of Mg silicide therefore means that only those Si atoms which are involved in silicide formation after Mg deposition can be incorporated into Mg silicate, placing a limit on the achievable thickness of the silicate film. This contrasts with the experimental observations reported by Baglin et al. 22 for the promotion of yttrium silicide growth at elevated temperature. The results of this study may have relevance for the low temperature modification of silicon interlayers on III-V materials which have been deposited prior to high-k dielectric deposition, in an attempt to improve the interfacial electrical characteristics. 23 While Jiang et al. 24 have shown that thin amorphous silicon interlayers can be successfully converted into ytterbium silicide, high temperature annealing could detrimentally impact the interface quality, as III-V materials are unstable at high temperature. 25 The Mg silicate formation procedure at 500 °C outlined in this work may therefore be more suitable for silicon interlayer modification on III-V semiconductor substrates than other metal silicates.
IV. CONCLUSIONS The results presented here suggest that a SiO2 native oxide layer can be converted into Mg silicate. Initial deposition of Mg onto the SiO2 surface resulted in the growth of Mg silicide due to disruption of the SiO2 and loss of oxygen from the surface. Annealing the sample to 300 °C resulted in the removal of the thermally unstable Mg silicide and the desorption of excess Mg from the surface. It is believed that the decomposition of Mg silicide creates under-coordinated silicon atoms which react with the oxidised Mg, resulting in the formation of Mg silicate upon annealing to 500 °C. The study has also shown that the initial formation of Mg silicide is a necessary intermediate step in Mg silicate growth and can only be achieved by deposition of metallic Mg.
FIG. 1. Curve fitted Si 2p spectra showing the growth of Mg silicide and substoichiometric silicon oxide species after Mg deposition (2 nm) onto Si native oxide surfaces. The growth of Mg silicide on 700 nm SiO2 surfaces suggests disruption of the SiO2 structure, resulting in Mg atoms taking the place of the O atoms.
FIG. 2. Curve fitted Si 2p spectra taken after UHV annealing to 300 °C show the complete removal of Mg silicide and substoichiometric Si oxide species, along with a reduction in SiO2 from 17% to 10% of the signal. Further annealing to 500 °C results in a chemical shift in the Si oxide peak, which is indicative of the transformation of SiO2 to Mg silicate.
FIG. 3. Mg 2p core level spectra following Mg deposition and the subsequent 300 °C anneal, which results in the oxidation of the Mg. A reduction in the integrated area of the peak by a factor of 2.5, along with mass spectrometry data (inset), suggests desorption of excess Mg from the surface during annealing.
FIG. 4. Normalized O 1s core level spectra show the presence of both SiO2 and partially oxidised Mg species after a 300 °C anneal. The reduction in the SiO2 peak after a 500 °C anneal provides further evidence for the transformation of SiO2 into Mg silicate.
FIG. 5. Si 2p spectra taken after 1 nm Mg deposition onto a Si native oxide surface show the growth of less Mg silicide than that seen in Fig. 1. Subsequent UHV annealing to 500 °C resulted in only partial transformation of SiO2 into Mg silicate, indicating the importance of silicide formation as an intermediate step in Mg silicate growth.
2018-12-11T12:16:44.670Z
2010-04-12T00:00:00.000
{ "year": 2010, "sha1": "15539fae5df9c2562393ffb350c2f95ce24ab813", "oa_license": "CCBYNCSA", "oa_url": "http://doras.dcu.ie/15580/1/hughes3.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "15539fae5df9c2562393ffb350c2f95ce24ab813", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
201643827
pes2o/s2orc
v3-fos-license
Fabrication of sharp silicon hollow microneedles by deep-reactive ion etching towards minimally invasive diagnostics Microneedle technologies have the potential for expanding the capabilities of wearable health monitoring from physiology to biochemistry. This paper presents the fabrication of silicon hollow microneedles by a deep-reactive ion etching (DRIE) process, with the aim of exploring the feasibility of microneedle-based in-vivo monitoring of biomarkers in skin fluid. Such devices shall have the ability to allow the sensing elements to be integrated either within the needle borehole or on the backside of the device, relying on capillary filling of the borehole with dermal interstitial fluid (ISF) for transporting clinically relevant biomarkers to the sensor sites. The modified DRIE process was utilized for the anisotropic etching of circular holes with diameters as small as 30 μm to a depth of >300 μm by enhancing ion bombardment to efficiently remove the fluorocarbon passivation polymer. Afterward, isotropic wet and/or dry etching was utilized to sharpen the needle, exploiting the faster etching at the pillar top and achieving tip radii as small as 5 μm. Such sharp microneedles have been demonstrated to be sufficiently robust to penetrate porcine skin without needing any aids such as an impact-insertion applicator, with the needles remaining mechanically intact after repetitive penetrations. The capillary filling of DRIE-etched through-wafer holes with water has also been demonstrated, showing the feasibility of using them to transport the analyte to the target sites. Introduction Wearable healthcare monitoring technologies have the potential for dramatically expanding the capability of data acquisition regarding an individual's health 1,2 . Recently, microneedle technologies have attracted increasing interest for their potential to interact minimally invasively with an individual's biochemistry, such as for drug delivery 3,4 , interstitial fluid sampling 5,6 , and diagnostics 7,8 . Microneedles require only a small area of skin to be penetrated at a limited depth, resulting in minimal irritation of the dermal layers associated with pain and tissue damage. Silicon microneedles are desirable due to their excellent biocompatibility and, in particular, mechanical properties superior to those of polymers and metals, such as a nonductile nature, high Young's modulus, and an indentation hardness enabling skin penetration without breakage in the skin 9 . Despite some concerns, silicon material revealed biocompatibility in a baseline battery of ISO 10993 physicochemical and biocompatibility tests 10 . With silicon implantation, comprehensive studies of the immunohistochemistry of brain tissues demonstrated that silicon devices and the byproducts of their dissolution in the intracranial space are biocompatible 11,12 . In particular, the Food and Drug Administration (FDA) has granted clearance for silicon devices, such as silicon microneedles (NanoPass Technologies Ltd., https://www.nanopass.com/) 13 and silicon Utah array electrodes (Blackrock Microsystems LLC, https://blackrockmicro.com/electrode-main/) 14 .
In comparison with other microneedle materials, e.g., polymers 15 and metals 16 , silicon has the advantage of being compatible with well-established micro/nanofabrication technologies for enabling added functionalities, such as through the monolithic integration of microneedles and complementary metal-oxide-semiconductor (CMOS) circuitry for continuous and real-time diagnostics [17][18][19][20] . Interstitial fluid (ISF) holds great promise as an alternative to blood plasma as a source of biomarkers 21,22 . ISF, formed by blood transcapillary filtration, has a composition comparable to that of plasma, indicating significant untapped potential for a wide range of diagnostics. Proteomic and metabolomic analysis indicates that ISF is highly similar to both plasma and serum 21,22 . It has also been shown that certain biomarkers (e.g., glucose) in ISF at equilibrium have concentration levels that correlate well with those in clinically relevant blood plasma 23,24 . By withdrawing ISF, biomarkers of clinical interest are measured either off-line by standard commercial methods 22 or in-line by integrated biosensors 7 . In the latter configuration, ISF is transported through the needle lumen to the biosensor integrated on the backside of the device 7,23 . Both methods necessitate a large ISF volume to improve the diagnostic consistency. The epidermis largely comprises keratinocytes, and ISF-filled compartments are sparsely distributed in the upper region of the dermis (papillary), enveloped by the structural molecules of the interstitium matrix such as collagen frameworks [24][25][26] . The sampling of ISF using microneedles involves penetration through a deformable, elastic skin barrier, which is a challenge often resulting in incomplete needle penetration 27,28 . The inhomogeneity of the ISF population results in inconsistent recovery and limited volume (typically submicroliter scale), and enhanced recovery necessitates fluid extraction mechanisms such as vacuum suction 5 . In addition, millimeter-long needles have enabled the recovery of up to tens of microliters of ISF from the deeper dermal region (i.e., the reticular dermis) 22 . However, ISF interrogation in the reticular dermis likely results in inevitable contact between the needle and sensory nerve endings, as well as blood capillaries. On the other hand, bringing the sensor closer to the microneedle tip enables consistent in-vivo monitoring of biomarkers of clinical interest with reduced ISF volumes, down to subnanoliter amounts 8,16,[29][30][31] . The close proximity between the sensor and the interstitium (containing ISF) ensures attaining an equilibrium protein concentration, as the biomarkers of clinical interest surrounding the sensor have a good correlation with the free ISF (and plasma) 32 . Such a sensor configuration also enables reducing the lag time with which ISF responds to changes in the blood glucose level, which is associated with the capillary-to-sensor diffusion time 30 . The device architecture mainly relies on an assembly process to place the biosensor compartment (i.e., electrodes of electrochemical transducers) inside the borehole of the hollow microneedle. By leveraging CMOS/MEMS technologies (e.g., through-silicon via (TSV) filling of copper or doped poly-silicon), metallization inside the hole for electrode fabrication can therefore be accomplished with wafer-level processes amenable to mass production 33,34 .
The state-of-the-art fabrication process for making silicon hollow microneedles typically relies on deep-reactive ion etching (DRIE) of holes with small diameters from both sides because of the otherwise limited hole etching depth from a single side associated with so-called aspect ratio-dependent etching (ARDE) [35][36][37][38][39][40][41] . The challenge in this process is the need for precise double-sided alignment, especially with the ARDE-induced tapering hole profile 41,42 , and the significant hole widening during the needle sharpening process using isotropic wet etching. Alternatively, relatively large through-wafer holes are etched from one side through a combination of tapered and straight profiles 43,44 , whereas a smoothly tapered profile is more desirable 45 . This paper presents the DRIE of silicon hollow microneedles, which resemble elongated cones with smooth tapering from the shank to extreme sharpness. A triple-phase modified Bosch process was developed to enable the production of sufficiently deep holes with small diameters from a single side of the Si wafer. As such, more processing flexibility was provided for optimal pillar etching without the compromise of simultaneous pillar and hole etching. Afterward, the isotropic etching process was managed to selectively remove the unwanted silicon to open the holes and to achieve needle sharpening. This process also prevents undesirable etching of the holes by the aggressive etchant. Design of the microneedle chip Microneedle-enabled transdermal minimally invasive platforms typically utilize miniaturized needles of several hundred micrometers, resulting in a limited skin penetration depth to minimize the patient's discomfort. It is highly desirable to interrogate ISF in the superficial papillary dermis, which lies directly beneath the epidermis-dermis junction (at a variable depth of 100-200 μm from the skin surface), without triggering the sensory nerve endings and blood capillaries in the deeper dermis (deeper papillary and reticular) 24,46 . Such pain-free ISF interrogation can be enabled by an array of ~150 µm-long microneedles, which penetrate just through the thinner epidermis (and stratum corneum), touching only the ISF-filled compartment. It is noteworthy that optimization of the needle shape (e.g., via a smaller needle base) can mitigate the incomplete penetration resulting from the so-called bed-of-nails effect and the viscoelastic nature of skin. Furthermore, fluid sampling approaches are prone to clogging associated with tissue coring at the insertion sites, which necessitates accurate placement of the borehole off-center from the needle axis 44 . This also enables improving the sharpness of the apex (essentially solid, rather than hollow) after the sharpening process. Therefore, our microneedle array was designed to have silicon pillars 200-300 μm long, to achieve a needle length of ~150 µm after vertical etching along with lateral sharpening (see details in Fig. 1). Second, the diameter of the pillars was chosen as 100 µm to provide sufficient mechanical strength for supporting skin penetration, as well as to enhance the penetration via a relatively small base in combination with the smoothly tapered profile 7,44 . Furthermore, the needle pitch of 300 μm enables mitigation of the bed-of-nails effect, while maintaining the shear stress distribution during skin penetration to minimize needle breakage.
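The geometry choices above can be captured in a small configuration object, a convenient way to keep the design rules and derived quantities such as the borehole aspect ratio in one place. The values mirror those given in the text; the class itself is just an illustrative sketch.

```python
from dataclasses import dataclass

@dataclass
class NeedleDesign:
    """Design parameters for the hollow microneedle array (values from the text)."""
    pillar_height_um: float = 300.0   # etched pillar height (200-300 um range)
    pillar_diameter_um: float = 100.0
    needle_length_um: float = 150.0   # target length after sharpening
    pitch_um: float = 300.0
    hole_diameter_um: float = 30.0
    hole_depth_um: float = 300.0
    hole_offset_um: float = 30.0      # borehole shifted off the needle axis

    @property
    def hole_aspect_ratio(self) -> float:
        # depth-to-width ratio of the backside borehole (~10:1 here)
        return self.hole_depth_um / self.hole_diameter_um

d = NeedleDesign()
print(f"borehole aspect ratio: {d.hole_aspect_ratio:.0f}:1")
```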
For the hole-side etching, the 30 μm-diameter holes are designed to be anisotropically etched to 300 μm deep (an aspect ratio of 10) to provide sufficient overlap between the pillars and holes, which actually determines the opening location of the holes on the sidewalls of the sharpened needles. The borehole was intentionally shifted 30 μm from the center of the pillar to mitigate tissue coring and improve the sharpness. It is also noteworthy that a single needle or a small number of needles frequently has clogging issues, wherein the microneedle bore is blocked by skin tissue. As such, an adequately large number of microneedles is necessitated to maximize the ISF access volume and consistency, given the requirement for a shallow penetration depth. The two-dimensional array design is also desirable to accomplish a multiplexed sensing system within a single chip for the simultaneous and selective screening of target biomarkers in ISF. Micromachining process Deep-reactive ion etching enables highly anisotropic silicon etching with high selectivity relative to photoresists, making it feasible to fabricate structures with high aspect ratios (i.e., the ratio of depth to width) 41 . However, the etching rate rapidly decreases with increasing aspect ratio of the structures being etched, which is termed aspect ratio-dependent etching (ARDE) 37,41,47 . Above a certain critical point of the aspect ratio, the etching rate reaches a constant, extremely low value 48 . The proposed mechanism is that the ion flux to the bottom of the structure decreases with its aspect ratio, resulting in insufficient passivation layer removal 47 . As such, holes with higher aspect ratios would start to pinch off at the bottom, and above the critical aspect ratio the etching effectively stops. To mitigate the effects of ARDE, an independent depassivation step was inserted into the standard dual-phase Bosch process (i.e., passivation and etching) 49 . In this step, energetic ions (e.g., argon), accelerated toward the wafer surface by a strong electric field, directionally bombard the bottoms of the etched holes, which are conformally deposited with a Teflon-type polymer during the preceding passivation step. Such a triple-phase DRIE (i.e., passivation, depassivation, etching) can efficiently remove the passivating polymers, enabling anisotropic etching with very high aspect ratios and without undesirable tapered sidewall profiles. In our experiment, the anisotropic etching of silicon structures (i.e., pillars, holes) was conducted in an inductively coupled plasma reactive ion etcher (ICP-RIE). Specifically, the pillars were etched with a standard dual-phase Bosch DRIE recipe (i.e., passivation, etching) using an Oxford Instruments PlasmaLab 100 etcher. The holes were etched with a triple-phase modified Bosch recipe (i.e., passivation, depassivation, etching) using an Oxford Instruments PlasmaPro Estrelas100 etcher. The parameters of the "standard Bosch" recipe and the "modified Bosch" recipe are listed in Table 1. (Note that the etch rates quoted were determined from pillar structure etching with a large open area (~500 μm); the etch rate can vary for different structures.) The DRIE system is configured for 4-inch wafers, and smaller samples (e.g., square shapes with side lengths of 3-5 cm) can be accommodated by mounting to a 4-inch silicon carrier wafer using Fomblin oil as a thermally conductive adhesive.
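A rough sketch of how such a three-phase cycle could be sequenced is shown below. The phase names and gases mirror the recipe description above, but all numeric set-points and the run_step callback are illustrative placeholders for a hypothetical tool interface; the actual parameters are those of Table 1, which is not reproduced here.

```python
from typing import Dict, List

# Illustrative three-phase "modified Bosch" cycle. Gas names are standard
# for Bosch processing; the times and bias powers below are placeholders.
MODIFIED_BOSCH_CYCLE: List[Dict] = [
    {"phase": "passivation",   "gas": "C4F8", "time_s": 2.0, "bias_w": 0},
    {"phase": "depassivation", "gas": "Ar",   "time_s": 1.0, "bias_w": 80},  # directional ion bombardment
    {"phase": "etch",          "gas": "SF6",  "time_s": 2.5, "bias_w": 30},
]

def run_recipe(cycle: List[Dict], n_cycles: int, run_step) -> None:
    """Repeat the three-phase cycle n_cycles times; run_step is the
    tool-specific callback that would execute one phase."""
    for i in range(n_cycles):
        for step in cycle:
            run_step(i, step)

# Example with a stub executor that just logs each phase:
run_recipe(MODIFIED_BOSCH_CYCLE, n_cycles=2,
           run_step=lambda i, s: print(f"cycle {i}: {s['phase']} ({s['gas']}, {s['time_s']} s)"))
```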
The wafer is clamped with continuous helium backside cooling to ensure a constant wafer temperature, i.e., 15 °C for the "standard Bosch" and 5 °C for the "modified Bosch" processing. The microneedle pattern was fabricated using standard photolithography techniques on a (100) silicon wafer, with the major process steps schematically illustrated in Fig. 1. The Si wafers were cleaned by immersion in hydrofluoric acid (HF:deionized water = 1:10) for 30 s, followed by deionized (DI) water rinsing and nitrogen blow drying. First, a bilayer of AZ 4620 photoresist with a total thickness of ~24 μm was spun onto one side (termed the 'backside') of a 4-inch wafer (double-side polished, Fig. 1a). Afterwards, ultraviolet (UV) light exposure was carried out with a mask aligner (Karl Suss MA6) at a dosage of 1600 mJ/cm2, followed by immersing the exposed sample in a developer solution (AZ 400K 4:1 diluted developer). The photolithography formed an array of holes in the photoresist of ~30 μm in diameter (Fig. 1a). Then, DRIE (i.e., the "modified Bosch" process) was performed to etch 300 μm-deep boreholes into the backside of the wafer, defining a high-aspect ratio (HAR) structure (~1:10, Fig. 1b). Note that at this stage, the other side of the wafer (termed the 'frontside') was still flat. The DRIE process was halted before the boreholes were etched through the wafer to its frontside. Similar to the backside patterning, the frontside pillar pattern was defined (Fig. 1c) with alignment to the holes on the backside. The AZ 4620 photoresist was patterned to create cylindrical pillars aligned to the boreholes. This alignment also enabled accurate patterning of the holes such that their centers were offset from the needle axis. This offset was to address the tissue coring issue within the needle bore during insertion 7 . Pillars of ~300 μm in height and 100 μm in diameter were then etched by DRIE (Fig. 1d), making an overlap of 100 μm between the pillars and holes. At this point, the boreholes were still not exposed on the frontside; essentially, they were still buried channels. Afterwards, using a mixed solution of hydrofluoric acid and nitric acid, the circular pillars were sharpened into conical needles, and the through-wafer holes were fully opened (Fig. 1e). This sharpening was realized by taking advantage of the isotropic etching nature of the chemical mixture, in which the etching rate decreases from the needle tip to the base 50 . Sharpening was also demonstrated using isotropic plasma dry etching and the combination of "wet" and "dry" isotropic etching. Holes were exposed on the sidewall of the needles, creating channels from the needles to the wafer's backside (Fig. 1f). Pillar etching The pillars were etched on a 4-inch Si wafer using dual-phase "standard Bosch" processing with the parameters shown in Table 1. The standard Bosch DRIE process was carried out for 300 cycles to etch 102 μm-high pillars with straight and smooth sidewall profiles (Fig. 2a). The pillar diameter was measured as 103 μm with negligible pattern erosion from the designed 105 μm, showing high-fidelity pattern replication. Upon extending the etching for another 600 cycles, the pillar height increased to 297 μm, whereas the pillar top decreased to 87 μm, which was caused by resist erosion (Fig. 2b).
The pillar base decreased to 55 μm, creating a reentrant (negatively tapered) profile (i.e., the top is wider than the base) as a result of ARDE on large open areas 51 . It is believed to be very challenging to control the profile at heights >100 μm for features with large gaps. This is because the plasma sheath starts to follow the etched features, resulting in some nonvertical ions and, hence, increased undercutting. Figure 3 compares the dependency of the etch depth and etch rate on the etch cycles in the etching of pillars and holes using dual-phase "standard Bosch" processing; the error bars represent the standard deviation from the average values. The etch depth in the pillar etching increases linearly with the etch cycles, showing an almost constant etch rate of 1.8 μm/min. In contrast, the hole etching shows that the correlation between the etch depth and etch cycles gradually decreases (see Supplementary Fig. 1), and hence the etch rate decreases from 1.8 μm/min to 1.1 μm/min in the etching of deeper holes (>200 μm). The deceleration of the etching rate for high-aspect ratio structures is attributed to ARDE: the physical ion bombardment during the etching step is insufficient to remove the fluoropolymers on the bottom of the hole structures 37 . Moreover, a positively tapered profile (opposite to the reentrant sidewall) was also observed in the etching of the 400 μm-deep holes, resulting from severe mask erosion (even with a double-layered photoresist) associated with the poor selectivity of the resist to Si (i.e., 1:18). Hole etching The dual-phase "standard Bosch" process consists of a passivation step and an etching step, and suffers from ARDE when etching high-aspect ratio structures with well-controlled sidewall profiles. The triple-phase "modified Bosch" process has the distinctive feature of adding a depassivation step between the two steps of the standard Bosch process. The depassivation step utilizes energetic Ar ions to efficiently remove the fluorocarbon passivating polymers (see details in Table 1). As such, the DRIE process provided highly directional (almost completely anisotropic) etching of holes of 200 μm in depth and 30 μm in diameter. The etching mask on the wafer surface is faithfully replicated in the underlying silicon (Fig. 4a). The thicknesses of the photoresist (single layer) before and after the DRIE were 10.65 μm and 9.10 μm, respectively. The selectivity of the photoresist with respect to Si was thus ~1:200, with a Si etching rate of 12 μm/min. Note that this etching also has the advantage of smaller scalloping due to the shorter etch cycle, resulting in a smoother surface that is beneficial to potential processing for the integration of sensing elements. The etching of deeper holes (>200 μm) was also observed to decelerate as a result of insufficient passivation layer removal, similar to the case of etching with the dual-phase "standard Bosch" process. However, the triple-phase "modified Bosch" process enables adjusting the parameters of the independent depassivation (Ar ion bombardment) step to mitigate ARDE. To promote ion bombardment, the RF power (and hence the acceleration voltage of the argon ions, Fig. 4b) and the Ar ion bombardment time (Fig. 4c) were respectively increased to enhance the removal of the passivating polymers at the bottoms of the holes (illustrated in Fig. 1b).
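The ARDE-limited hole etching can be caricatured with a toy model in which the instantaneous etch rate falls off with the current aspect ratio. The functional form, the per-cycle time, and the decay constant k are hand-tuned assumptions chosen so that the rate starts near the measured 1.8 μm/min and approaches ~1.1 μm/min for depths beyond 200 μm; this is a qualitative illustration, not a fitted process model.

```python
import numpy as np

def etch_depth_vs_cycles(n_cycles, rate0_um_per_min=1.8, t_cycle_min=0.25,
                         diameter_um=30.0, k=0.064):
    """Toy ARDE model: the instantaneous rate falls off with the current
    aspect ratio, rate = rate0 / (1 + k * depth/diameter). rate0 matches the
    measured shallow-hole rate; k gives ~1.1 um/min near 300 um depth;
    t_cycle_min is an assumed per-cycle time, not a measured value."""
    depth = 0.0
    depths = []
    for _ in range(n_cycles):
        aspect_ratio = depth / diameter_um
        rate = rate0_um_per_min / (1.0 + k * aspect_ratio)
        depth += rate * t_cycle_min
        depths.append(depth)
    return np.array(depths)

d = etch_depth_vs_cycles(1000)
print(f"depth after 1000 cycles: {d[-1]:.0f} um")  # roughly 330 um with these assumptions
```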
As such, the etch depth increased without degrading the desired feature, especially upon using the combination of the original recipe and the HAR recipe (Fig. 4d). In another experiment, etching with a depassivation step of 1 s was carried out for 1000 cycles (Fig. 4e). The remaining photoresist was still sufficiently thick under the extended ion bombardment (Fig. 4f), showing negligible pattern erosion (Fig. 4g). (Fig. 4: optimization of hole DRIE etching using triple-phase "modified Bosch" processing.) Note that the etching depth with a specific recipe has a standard deviation of <5 μm, showing high reproducibility. Needle sharpening The silicon micropillars were then sharpened into conical needles using an isotropic etching process (Fig. 5). This wet chemical etching of silicon utilizes a so-called HNA system, consisting of hydrofluoric acid (HF), nitric acid (HNO3), and a comparatively weak acetic acid (CH3COOH), which can be replaced with water 50,52 . The overall reaction involves the oxidation of silicon to SiO2 by HNO3 and subsequent SiO2 dissolution in HF. Importantly, the overall reaction is limited by the diffusion of HF, and as a result, a ratio of 19:1 between HNO3 and HF (adopted in this work) was chosen to assure that oxide formation dominates its removal. By placing the silicon micropillar sample at the bottom of a static solution, the HF diffusion to the silicon surface was made significantly slower than the dissolution reaction at the surface, and hence it was the rate-limiting factor. HF reacts with SiO2 on contact, rapidly consuming HF in the process. Owing to the large amount of exposed Si (and SiO2) between the pillars, the reactive species are significantly consumed at the pillar bottom rather than at the pillar top. As the reaction proceeds in a static solution, the bottom has less replenishment than the top, especially when the solution depth is well controlled, which results in faster etching (shrinking) of the pillar top. Another factor also contributes to the faster etch rate at the top: the pillar-top edge is essentially exposed to HF from both the vertical and lateral directions, and such etching does not stop until a rounded (rather than sharp) shape is achieved. In the isotropic wet etching experiment, square samples (1-2 cm side length) were used, and 600 cycles of "standard Bosch" DRIE etching was performed to fabricate solid micropillars. A 5 min wet etch resulted in a nearly vertical pillar 94 μm wide and 217 μm high (Fig. 5a). Upon extending the etching time to 20 min, the pillar shrunk to 58 μm in diameter at the top and 200 μm in height (Fig. 5b), with an etching rate of approximately 1 μm/min, in good agreement with the literature 50,52 . With another 20 min of etching, the blunt tip was sharpened into a single point of <1 μm, and simultaneously the pillar height shrunk to 152 μm (Fig. 5c). By keeping the solution under static conditions (i.e., without agitation), as the etching proceeds, the etching species only diffuse from the bulk solution surrounding the silicon sample to replenish the solution contained within the spaces between the pillars. As a result, the sharpening shows high uniformity over the entire sample, with a percentage standard deviation of <5% for needle length, except in rows 1-3 on the sample edge, where lateral diffusion from the adjacent open space becomes as important as vertical diffusion (Fig. 5d).
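A quick consistency check on the sharpening data quoted above (top diameter and height at 5, 20, and 40 min): the implied radial etch rate at the pillar top comes out near the ~1 μm/min figure, while the height loss accelerates as the tip sharpens.

```python
# Back-of-envelope rates from the measured (time, top diameter, height) points
# quoted above; a simple sanity check against the ~1 um/min HNA etch rate.
data = [  # (minutes, top diameter um, height um)
    (5, 94, 217),
    (20, 58, 200),
    (40, None, 152),  # tip sharpened to <1 um, diameter no longer meaningful
]
(t1, d1, h1), (t2, d2, h2), (t3, _, h3) = data
print(f"radial etch at pillar top: {(d1 - d2) / (2 * (t2 - t1)):.1f} um/min")
print(f"height loss 5-20 min:  {(h1 - h2) / (t2 - t1):.1f} um/min")
print(f"height loss 20-40 min: {(h2 - h3) / (t3 - t2):.1f} um/min")
```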
To achieve a higher degree of spatial uniformity across an entire wafer (including the edge), it is suggested to incorporate sacrificial structures (e.g., borders, extra rows of lines on the edge) to adjust the local concentration of etchant 50 . Simultaneous needle sharpening and hole opening Afterwards, the micropillars incorporating holes were sharpened using a mixed solution of HNO3 and HF under stationary conditions. The isotropic etching, in particular the etching from the lateral direction, simultaneously sharpened the needle tip and exposed the buried channel enclosed within the pillar. In this manner, 160 μm-high hollow microneedles were fabricated after 40 min of wet etching, with a percentage standard deviation of 2.7%, indicating high uniformity (Fig. 6a). Additionally, the 30 μm-diameter holes were positioned off-center to mitigate the tissue coring issue during needle penetration, leading to so-called snake-fang needles 7 (Fig. 6b). A similar shape was also fabricated using the combination of isotropic "wet" and plasma "dry" etching, with the needle height showing a percentage standard deviation of 1.5% (Fig. 6c). Here, the micropillars were etched in the HNA solution for 30 min to obtain a blunt tip (inset, Fig. 6c), followed by a 15-min SF6 plasma etch in a reactive-ion etching system (Phantom II, Trion Technology Inc.). An SF6 plasma etching process alone for 35 min (without wet etching) with the PlasmaLab 100 etcher, in which the C4F8 gas was deactivated from the "standard Bosch" recipe, was also demonstrated to be capable of needle sharpening, with the needle height showing a percentage standard deviation of 1.3% (Fig. 6d). (Fig. 6: the hollow microneedle sharpening process using wet etching (a, b), a combination of wet etching and plasma etching (c), and plasma etching only (d); the inset in (c) shows the microneedle after wet etching and before plasma etching, scale bar 100 μm.) The aggressive plasma etching resulted in a nearly straight sidewall with a smoothly tapered profile and a narrow base to mitigate the incomplete needle penetration associated with skin elasticity 27,28 . Some of the major advantages of using plasma etching for needle sharpening include better process control and easy automation, as well as the elimination of handling strong oxidants. Capillary filling The capillary filling of DRIE-etched through-silicon holes (40 μm diameter, without needle features) has been demonstrated using a 1% weight/volume Allura red dye solution (0.1 g dye + 10 mL water, Fig. 7a). Oxygen plasma treatment is imperative to assure a hydrophilic surface with a water contact angle of <20 degrees 53 by removing the passivating polymer (i.e., C4F8) left over from the previous Bosch DRIE process. The authors also observed that the lateral spreading of a water droplet was significantly larger on the Si surface after the oxygen plasma treatment. The sample with through-wafer holes was placed on the polished side of a clean silicon wafer (treated with the O2 plasma, Fig. 7a), creating a micrometer-level gap between the two surfaces to ensure effective capillary uptake. By placing a drop of the red dye solution in proximity to the hole chip (as indicated by the dashed circle in Fig. 7a), water uptake by the DRIE-etched through-wafer holes was accomplished in ~1-2 s, changing the color from white (empty holes, Fig. 7b inset) to red (Fig. 7b). The top surface with the polymer coating (without O2 plasma treatment) is hydrophobic, so the droplet on the top cannot spread out laterally.
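As a back-of-envelope plausibility check on the capillary uptake, the Washburn equation bounds the transit time through a single borehole. The water properties are standard values and the contact angle is the <20 degree upper bound quoted above; the predicted in-hole fill time is far below the observed ~1-2 s, consistent with the overall uptake being dominated by lateral spreading in the wafer-to-wafer gap rather than by the hole itself.

```python
import math

# Order-of-magnitude Washburn estimate for capillary filling of one
# DRIE-etched borehole with water after O2 plasma treatment.
gamma = 0.072               # N/m, water surface tension
eta = 1.0e-3                # Pa*s, water viscosity
theta = math.radians(20.0)  # contact angle upper bound from the text
r = 20e-6                   # m, hole radius (40 um diameter)
L = 400e-6                  # m, assumed wafer thickness / hole length

# Washburn: L^2 = (gamma * cos(theta) * r / (2 * eta)) * t, solved for t
t = 2 * eta * L**2 / (gamma * math.cos(theta) * r)
print(f"estimated in-hole fill time: {t*1e3:.2f} ms")  # well under a millisecond
```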
Upon placing a nitrocellulose paper on top of the hole chip with a gentle finger push, the nitrocellulose paper was able to wick liquid through its capillary structure (Fig. 7a). This result shows the feasibility of direct integration between Si microneedles and paper microfluidics for the purpose of lateral-flow assay assembly. Skin penetration Afterwards, the skin penetrability was assessed with excised porcine skin 54 . Porcine skin is often used as a model of human skin due to similarities in anatomy, such as the thickness of the stratum corneum. The insertion force is suggested to range from 0.1 to 3 N, such that a thumb-push is sufficient for skin penetration, rather than requiring aid from impact-insertion applicators 7,28 . A razor was used to remove the subcutaneous fat on the back of the skin, making 3-4 mm-thick porcine skin samples. Then, skin with good surface conditions (without hair or skin disorders such as scarring) was cut into square pieces (2-3 cm length) for experimentation. Following IPA soaking, the microneedle chip was gently thumb-pushed onto the porcine skin sample for a few seconds, and the needles were capable of holding the skin sample (Fig. 7c), indicating successful skin penetration. The successful skin penetration is largely attributed to the extreme sharpness of the microneedles, as the insertion force decreases linearly with the interfacial area of the needle tip, according to a study 28 . Following separation from the skin, the microneedle chip was baked at 250 °C for 30 min on a hotplate for disinfection and sterilization by a burning (oxidation) process, while retaining the residual body tissues that had adsorbed onto the needle shanks during skin penetration. This step also dried the sample to avoid contaminating the SEM chamber. Afterward, we carried out a scanning electron microscopy (SEM) inspection, in which we did not observe any mechanical failures. As such, the microneedles were concluded to be mechanically intact after repetitive skin penetration (Fig. 7d). This agrees well with the theoretical prediction for similar Si structures, in which the Si pillar fracture force was an order of magnitude greater than the insertion force necessary for skin penetration 28,44 . Note that the black-colored substances on the needle shanks are likely the burning products of residual body tissues. Conclusion A DRIE process for fabricating silicon hollow microneedle arrays has been presented, aiming to explore the feasibility of microneedle-enabled in-vivo biosensors, in which the sensing elements can be incorporated on the inner surface of the boreholes. The insertion of a depassivation process using directional Ar ion bombardment between the two standard Bosch half-cycles resulted in efficient removal of the passivating fluorocarbon layer, enabling highly anisotropic etching of circular holes with diameters as small as 30 μm to a depth of >300 μm. The needle tips were sharpened to single points with radii as small as 5 μm using wet etching, dry plasma etching, or a combination of both. This isotropic etching step also opened the holes originally embedded within the pillars.
Such sharp microneedles have been demonstrated to be sufficiently robust to penetrate porcine skin without needing a mechanical applicator, with the needles remaining mechanically intact after repetitive penetrations. Capillary filling of the DRIE-etched through-silicon holes has also been demonstrated, showing their feasibility for transporting biomarkers of clinical interest to the sensing sites using capillary action.
2019-08-26T13:52:08.628Z
2019-08-26T00:00:00.000
{ "year": 2019, "sha1": "b29081fd624829bdde9d63107840556b79719f3d", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41378-019-0077-y.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "34066143ab4b825c0344722c36ab5f0056e76337", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
7992093
pes2o/s2orc
v3-fos-license
Testing whether barriers to a hypothetical screening test affect unrelated perceived benefits and vice versa: A randomised, experimental study Highlights • Perceptions of test barriers were more negative when unrelated screening benefits were low.• Perceptions of screening benefits were less positive when unrelated test barriers were high.• Screening intentions were markedly lower for a test with both high barriers and low benefits. Introduction Screening is an important public health strategy for reducing cancer mortality and incidence. There is potential to improve population health by increasing uptake of available screening tests, but people's willingness to undergo them typically requires them to accept some short-term burden and some level of risk in exchange for a degree of potential health benefit in the relatively distant future. Much informative research has been carried out on how invitees perceive benefits and barriers of screening in order to address the policy goal of improving uptake (and satisfaction with screening services in general). Studies in this area have often been guided by psychological theories which assume implicitly that perceptions of barriers and benefits are independent. For example, the Health Belief Model [1] includes benefits and barriers as discrete 'constructs' that are often analysed separately (e.g. [2,3]). Similar conceptual and analytical approaches are also apparent in less theoretically-oriented research (e.g. where perceived barriers are examined without assessment of perceived benefits [4]). However, this assumption may not be true; appraisals of barriers may be less negative when benefits are high vs. when they are low (and likewise for perceptions of benefits when barriers are low vs. when they are high), even when those benefits are objectively unrelated. Previous research provides several theoretical bases for this hypothesis. Most notably, much research has found evidence that perceptions can be systematically 'irrational' in the context of evaluating whether to carry out a given health-related behaviour. For example, one cognitive shortcut known as the 'affect heuristic' suggests that individuals do not necessarily carry out separate appraisals of the favourable and unfavourable characteristics of a behaviour and evaluate the balance. Instead, both aspects are evaluated together, in the context of a shared 'pool' of feeling or emotion (i.e. 'affect'). That is, where an affective response towards a behaviour is positive, desirable characteristics (e.g. health benefits) are judged to be high and aversive characteristics (e.g. risks or barriers) are judged to be low, whereas the opposite applies if the affective response is negative [5,6]. Affect may also lead to interrelatedness in other ways, such as through directing attention to particular information: positive feelings towards screening may increase the extent to which benefits are focused on and decrease the extent to which barriers are considered [7]. There are various other rationales for this hypothesised interaction, some of which are more cognitive in nature, such as halo effects (in which characteristics of a behaviour are evaluated in terms of general attitudes towards it) and efforts to maintain cognitive consistency (i.e. people may attempt to avoid 'incompatible' views in which favourable aspects of a behaviour would be seen as positive while unfavourable aspects would simultaneously be seen as negative) [8].
Irrespective of the psychological underpinnings, empirical evidence provides some support for this hypothesis; cross-sectional studies have often found that perceptions of screening test benefits and barriers are negatively correlated [9][10][11][12]. However, to our knowledge, no experimental studies have tested this hypothesis of interrelatedness directly, meaning that the applicability of existing findings to screening policy is limited. It is important to investigate this relationship because efforts to improve screening uptake based on addressing invitees' stated barriers will have limited success if those barriers are proxies for negative perceptions regarding other aspects of screening. This study used an experimental design to test whether modifying test barriers affected perceptions of conceptually unrelated benefits, and vice versa. Participants were allocated at random to receive information regarding a screening test with high or low benefits, and high or low barriers, in the context of a hypothetical disease with similarities to cancer. Perceived benefits and barriers were then compared between conditions in order to test i) whether perceptions of benefits were lower when barriers were higher, and ii) whether perceptions of barriers were higher when benefits were lower. Intention to have the hypothetical test was also compared between conditions as an exploratory analysis of how the manipulation might affect actual screening behaviour. Participants Recruitment was through Survey Sampling International (SSI, London, UK), a company which curates a panel of members of the UK general population who are offered small rewards (such as air miles) to participate in online surveys. Respondents to the initial email invitation from SSI were asked their age at the start of the survey and excluded if they were younger than 25 or older than 75 years (i.e. ineligible for cancer screening in the UK). A software algorithm applied stratified sampling to ensure that the sample resembled the general adult UK population in terms of age; one third of the sample were aged 25-39 years, one third were aged 40-54 years, and one third were aged 55-75 years. Manipulations This study consisted of a 2 × 2 between-subjects experimental design. Participants were invited to complete one of four versions of a survey, randomly determined by a software algorithm. After confirming eligibility, they were shown a vignette consisting of information on the high incidence (33%) of a hypothetical illness that was amenable to screening ('Rogan's disease'), the rationale for screening, and the extremely high mortality risk in the absence of a screening test (only 100 in 1000 would survive). Participants were also given a description of a set of practicalities for a hypothetical hospital-based screening test, designed to resemble computed tomography (a screening test based on x-rays). This test can include an intravenous dye that carries a small risk of an adverse reaction, consisting of nausea and vomiting [13], the severity of which was manipulated as a screening test barrier ("severe nausea and regular vomiting for 3 days"; "mild nausea and occasional vomiting for 5 minutes"). The specific types of benefits and barriers were selected with the aim of being realistic, understandable, and plausible to participants, and potentially influential on their intentions to have the test, as well as being fundamentally unrelated (as opposed to e.g. false positive and false negative results).
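The eligibility screen, age stratification, and random allocation to the four vignette conditions described above could be sketched as follows. This is a simplified stand-in for the survey platform's software algorithm (whose implementation is not described here), and the quota logic that balanced age bands to one third each is reduced to a simple band lookup.

```python
import random

CONDITIONS = [("high", "high"), ("high", "low"), ("low", "high"), ("low", "low")]
AGE_BANDS = [(25, 39), (40, 54), (55, 75)]

def allocate(age: int, rng: random.Random):
    """Screen on the trial's age window, then assign one of the four
    benefit x barrier vignette conditions at random."""
    if not 25 <= age <= 75:
        return None  # ineligible for cancer screening in the UK
    band = next(b for b in AGE_BANDS if b[0] <= age <= b[1])
    benefit, barrier = rng.choice(CONDITIONS)
    return {"age_band": band, "benefit": benefit, "barrier": barrier}

rng = random.Random(42)
print(allocate(30, rng))
```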
Information on the degree of benefit was provided in terms of the mortality risk after undergoing a screening test. This was manipulated to equal either a large or a comparatively small reduction in mortality risk (900 per 1000 with Rogan's disease who underwent screening would survive; 105 per 1000 would survive). Fig. 1 contains an example of a complete vignette. Levels of barriers and benefits were designated "high" and "low" for convenience. Comprehension checks Participants were asked three multiple choice questions with four response options to assess whether they correctly recalled the relevant information on mortality risk in the presence or absence of screening (e.g. "If 1000 people with Rogan's disease are not screened and only treated once they feel unwell, how many people will be successfully treated and survive?": 100 people; 105 people; 500 people; 900 people), and the information on the severity of the adverse reaction. Responses were coded as either correct or incorrect based on the allocated condition. Perceived benefits and barriers scales Primary outcomes were assessed by seven items measuring perceived benefits (e.g. "Having the screening test would increase my chances of surviving Rogan's disease") and seven items assessing perceived barriers of the screening test (e.g. "the side-effects would be too uncomfortable"). Response options consisted of a five-point Likert scale ranging from "strongly disagree" to "strongly agree". Items were adapted from existing measures [14][15][16] and demonstrated high internal consistency (Cronbach's α: 0.89 and 0.96, respectively). Responses were scored from one to five, with higher scores representing more positive perceptions of benefits and more negative perceptions of barriers, as applicable. Scores for individual items were summed to create two overall scale scores for each participant (each out of 35). Perceived risk Participants were asked about perceived risk using an adaptation of a previously designed measure [17] with six response options ("If I didn't have the screening test, I think my chances of dying from Rogan's disease would be...": almost zero; very small; moderate; large; very large; almost certain). Self-efficacy A five-item assessment of self-efficacy (e.g. "How confident are you that... You could find the time to have the screening test?"), with four response options ranging from "very confident" to "not at all confident", was adapted from a previous measure [18], and this also had high internal consistency (Cronbach's α = 0.93). Responses were scored from one to four; higher scores represented greater self-efficacy and were summed to create an overall scale score for each participant (out of 20). Screening intention Intention to participate in screening was assessed using an ad-hoc item: "Imagine the NHS just sent you a letter, inviting you to be screened for Rogan's disease. Would you attend the screening test?". Response options consisted of "yes", "no", and "don't know". Demographics The survey ended with items assessing demographic characteristics, including gender, first language, and markers of socioeconomic status. A previously used method was applied to derive an overall measure of socioeconomic status, based on responses to questions on home and vehicle ownership, and education [19]: one point was counted for living in rented accommodation, no vehicle ownership, and no formal qualifications; higher scores indicated greater deprivation.
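Scale scoring and the reported internal-consistency statistic are straightforward to reproduce in outline. The sketch below computes Cronbach's alpha from an item matrix and sums items to the 7-35 scale score; the responses are synthetic, so the printed alpha will not match the study's 0.89/0.96 values.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) array of item scores:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Synthetic 5-point Likert responses for a 7-item scale, summed to a 7-35 score
rng = np.random.default_rng(1)
latent = rng.normal(3, 0.8, size=(100, 1))  # shared trait driving all items
items = np.clip(np.rint(latent + rng.normal(0, 0.6, size=(100, 7))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")
print("first five scale scores:", items.sum(axis=1)[:5])
```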
Previous participation in the three cancer screening programmes that exist in the UK was also assessed (cervical, breast and colorectal; questions were tailored by age and gender so that ineligible participants did not see irrelevant questions). At the end of the survey, participants were able to request a summary copy of the study results. An example of the full survey is included in Appendix A.[1]

Piloting

Prior to data collection, the manipulations for high and low benefits and barriers were tested in two waves, consisting of 32 and 26 participants, respectively. Each wave aimed to ensure that participants in the main study would discriminate between high and low levels of the two independent variables. In particular, it was assumed that participants would perceive high benefits from even very few lives saved through screening, which would have led to ceiling effects that reduced the perceived differences between high and low levels. Perceived benefits and barriers of several possible manipulations were assessed using two ad-hoc items, and the results were used to select levels that were likely to generate the largest possible differences while still being believable to participants. As an example, the first wave of piloting compared perceived benefits of 800 vs. 200 people surviving following screening (relative to 100 people surviving without screening). Notwithstanding the small sample size, scores differed in the predicted direction but only by a small amount. Consequently, the second wave of piloting amended the number of lives saved to 900 vs. 105, which was associated with a larger apparent difference in perceived benefits scores. The first wave also compared perceived barriers of an alternative to test side-effects (travel time to the hospital: 20 min vs. 2 h). Similar to perceived benefits, scores differed in the predicted direction but to a smaller degree than with the side-effect manipulation. The second wave of piloting also assessed performance of the items adapted from previously used measures of perceived benefits and barriers, in order to gauge reliability prior to administering the survey to a larger sample.

Fig. 1. An example of a complete information vignette (low benefit; high barrier).

[1] Three single items were devised to assess i) perceived barriers, ii) perceived benefits, and iii) "intention certainty" as part of piloting the survey. These were also included in the main study but, after preliminary analysis of the data, it was unclear how to interpret the distributions of the data. Hence, these ad-hoc items were considered to lack sufficient face validity and were not used for further analysis (particularly since superior measures of perceived barriers and benefits were available).

Analysis

Data were analysed using SPSS version 21 for Windows (IBM, Armonk, NY, USA). Participants answering one or more comprehension questions incorrectly were assumed to be insufficiently engaged with the survey and excluded from the analysis (Fig. 2). Descriptive statistics were used to illustrate frequencies and proportions for sample characteristics. Parametric assumptions of data relating to perceived benefits and barriers (normally distributed residuals and homogeneity of variance) were tested and met. Hence, the primary analysis comprised two-way ANOVAs, one in which the dependent variable consisted of overall perceived benefit score and one in which the dependent variable was overall barrier score.
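The analysis plan just described (two-way factorial ANOVAs plus a χ² test with standardized residuals) can be sketched in Python as below. This mirrors the stated plan rather than reproducing the authors' SPSS syntax; the DataFrame `df` and its column names are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy.stats import chi2_contingency

# df is assumed to hold one row per participant, with columns:
# benefit_cond / barrier_cond ('high'/'low'), age_band, benefit_score, intention
def two_way_anova(df: pd.DataFrame, outcome: str) -> pd.DataFrame:
    model = ols(f"{outcome} ~ C(benefit_cond) * C(barrier_cond) + C(age_band)",
                data=df).fit()
    return sm.stats.anova_lm(model, typ=2)   # Type II sums of squares

def std_residuals(table: pd.DataFrame) -> np.ndarray:
    """z-scores from observed vs. expected cell counts of a contingency table."""
    chi2, p, dof, expected = chi2_contingency(table)
    return (table.to_numpy() - expected) / np.sqrt(expected)

# Usage (hypothetical):
#   anova_benefits = two_way_anova(df, "benefit_score")
#   z = std_residuals(pd.crosstab(df["condition"], df["intention"]))
```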
In each ANOVA, independent variables consisted of benefit condition (high or low), barrier condition (high or low) and a benefit × barrier interaction term. Age band (25-39, 40-54, and 55-75 years) was included to account for any effects of stratified sampling. A sensitivity analysis was carried out in which the age band variable was omitted; results did not differ meaningfully and so are not reported here. An exploratory analysis of screening intentions (proportions intending to be screened vs. not intending vs. did not know) compared responses across the four conditions using a Pearson's χ² test. Standardised residuals (i.e. z-scores based on the difference between observed and expected frequencies) were used to test for differences in proportions between any given pair of conditions.

Required sample size and hypotheses

The survey was 'soft-launched' and recruitment paused after 138 participants had completed the study in order to generate a preliminary estimate of mean square error for the dependent variables (necessary to calculate effect size). Since there is a direct conceptual link between perceptions of benefits and the actual magnitude of benefits, but not the actual magnitude of barriers, it was assumed that there would be a larger effect of manipulating benefits on perceived benefits than that of manipulating barriers. Likewise, manipulating barriers was expected to have a larger effect on perceived barriers than manipulating benefits. Calculations were based on a five-point difference for the effects of conceptually linked manipulations and a three-point difference for conceptually unrelated manipulations. Based on the initially observed mean square errors, it was estimated that a total of 204 participants would be required (51 participants per condition; 80% power, α = 0.05).

Sample characteristics

The flow of participants through the study is presented in Fig. 2. After exclusions, 218 participants were included in the main analysis. Across the whole sample, participants had a mean age of 48.6 years (standard deviation: 13.6), 52.8% were female (n = 115), 86.7% (n = 189) were white British, and 96.3% (n = 210) spoke English as a first language. The majority of screening-eligible participants reported previous experience of testing, ranging from 73.6% for CRC screening to 87.9% for breast screening. All demographic and other background characteristics are presented in Table 1.

Effects of manipulating barriers (and manipulating benefits) on perceived benefits

As expected, manipulating benefits had an effect on perceived benefits, and in the predicted direction (F(1,212) = 55.25, p < 0.0005), providing an indication that the manipulation was successful (mean: 30.0, standard deviation: 4.0 vs. M: 25.6, SD: 5.1 for high vs. low benefits, respectively). The primary hypothesis that increasing barriers also reduced perceived benefits was also supported (F(1,212) = 6.81, p = 0.010; M: 28.5, SD: 4.8 vs. M: 27.5, SD: 5.3 for low vs. high barriers, respectively). As predicted, the effect of manipulating barriers was smaller (partial η² = 0.031) than that of manipulating benefits (partial η² = 0.207). In terms of the effects of the interaction term, there was only weak evidence against the null hypothesis (p = 0.137).
Effects of manipulating benefits (and manipulating barriers) on perceived barriers

The manipulation of barriers was also successful; perceived barrier scores were higher when barriers were high. Table 2 reports means and standard deviations for perceived benefit and barrier scores for each of the four conditions.

Discussion

These findings provide evidence that screening attributes are not appraised independently but jointly, and manipulating one affects evaluations of the other. Our results build on cross-sectional studies that have demonstrated a negative correlation between benefits and barriers of cancer screening tests [9][10][11][12] by showing that to some extent these correlations are likely to be due to a degree of interrelatedness between the two characteristics. This study also found that the large majority of participants stated that they would have the test in three of the four conditions. However, there was a marked difference in the worst condition (low benefits, high barriers), with a greater proportion stating that they would not have the test. This exploratory analysis offers some indication that barriers and benefits might interact in a way that influences screening uptake. Further research would also be necessary to understand how intentions (and ultimately actual uptake) relate to the observed interaction. Although there is evidence that barriers and benefits are good predictors of behaviour when assessed individually [1,20], there has been extensive criticism of the assumption made by the Health Belief Model that they have simple additive effects [1,21]. In this respect, the present findings support researchers' recommendations for alternative approaches that examine moderation among variables [1]. Our results further suggest that a degree of caution is warranted regarding research that aims to identify specific barriers to cancer screening without simultaneously addressing perceptions of benefits [4]: The issues that participants raise as important barriers to screening may be proxies for being unconvinced about the benefits [22,23]. One further implication of these results is that screening tests with greater barriers might also elicit less positive perceptions of benefits. For example, flexible sigmoidoscopy screening for colorectal cancer (CRC) involves an invasive, internal examination and an inconvenient bowel preparation, which might diminish the effectiveness of interventions to improve uptake that aim solely to communicate its efficacy in terms of reducing CRC incidence and mortality. Conversely, these findings suggest that there may be potential to improve perceptions of screening test benefits by reducing barriers (and vice versa). As a practical example, as the Bowel Cancer Screening Programme in England replaces one method of stool testing with a less inconvenient alternative [24], this reduced inconvenience may lead to more favourable appraisals of the test's capability to reduce mortality. This study has limitations. The context of screening for a hypothetical illness allowed benefits and barriers to be manipulated freely, to the point that participants could discriminate between the two levels of each independent variable. However, the implications for practice with respect to real screening contexts are undetermined. It is notable that pilot work found similar benefit scores for even large differences in mortality reduction. The small observed effects may not apply to real screening contexts in which differences between tests are subtler.
In addition, participants were excluded if they answered one of the three 'comprehension check' questions incorrectly, despite assistance offered to help them respond correctly. This approach aimed to exclude participants from the analysis if they had not read the relevant information and so were not sufficiently engaged with the study. However, it might have also resulted in a sample that was more numerate or literate than the general population. The proportion of exclusions was also greater in the low benefit-high barrier condition. This study tested whether barriers affected perceptions of benefits and vice versa. However, it did not aim to test whether any particular psychological mechanism underpinned this relationship. The findings are consistent with the presence of an affect heuristic [5], which has been used to explain similar effects in appraisals of other technologies [6], but they are also consistent with various alternative explanations such as directing attention towards particular kinds of information [7], a halo effect, and attempts to avoid cognitive dissonance [8]. Further studies would be necessary to explore these possibilities. For example, subsequent studies could use a similar design but include measures of emotion in order to test for affective explanations. In the first instance, it would be important to test whether the effects of absolute barriers and benefits on unrelated outcomes were mediated via perceived barriers and benefits, respectively. Further research that uses these approaches would make a greater contribution to psychological theory. Other areas for further research relate to the specific manipulations used: The set of benefits and barriers manipulated in the present study were selected following pilot work that aimed to maximise the chances of observing the hypothesised effect while still being believable to participants. This effect may not necessarily have been apparent with other benefits or barriers (e.g. one of our originally piloted barriers, travel time to the hospital, appeared to elicit smaller differences in perceived barriers between longer and shorter journey times than the side-effects attribute did). However, characteristics of real screening tests are complex and multifactorial. Benefits can be medical and psychological; barriers can also be psychological as well as practical [21]. It may be particularly valuable to policy makers to determine the effects of manipulating specific characteristics of screening tests. For example, the risk of overdiagnosis in the case of breast cancer screening is the subject of intense debate [25] since it results in unnecessary treatment and the psychological harms of a cancer diagnosis. Overdiagnosis may be perceived more negatively by screening invitees than the practical barriers described in this study. Furthermore, it is often unfeasible to change real characteristics of screening tests, but it is much easier to alter information in screening invitations. For example, different degrees of emphasis can be placed on information about barriers or benefits (e.g. by giving them greater prominence within an invitation leaflet, or by reiterating them in a leaflet summary). Manipulating these characteristics may increase or decrease some of the effects observed here. Moreover, the results of this study suggest that manipulating both attributes would have more than just an additive effect.
Conclusion

We found evidence that manipulating barriers of a screening test influenced perceived benefits and that manipulating benefits influenced perceived barriers. Future research should test the possible underlying psychological mechanisms and investigate the extent to which these findings generalise to real screening contexts. This would inform policy makers in their efforts to improve the balance of screening barriers and benefits in order to increase uptake.
Design and Synthesis of a New Class of Pyridine-Based N-Sulfonamides Exhibiting Antiviral, Antimicrobial, and Enzyme Inhibition Characteristics

A new strategy for designing and assembling a novel class of functionalized pyridine-based benzothiazole and benzimidazole incorporating sulfonamide moieties was developed. The synthesis was carried out by reacting N-cyanoacetoarylsulfonylhydrazide with various electrophiles such as 2-(benzo[d]thiazol-2-yl)-3,3-bis(alkylthio)acrylonitriles and 2-(benzo[d]imidazol-2-yl)-3,3-bis(methylthio)-acrylonitriles, as well as 2-ethoxy acrylonitrile derivatives. The synthesized compounds were tested for their antiviral and antimicrobial potency. Two of the synthesized compounds, 15c and 15d, showed more than 50% viral reduction against HSV-1 and CBV4, with significant IC50 and CC50 values. The two potent compounds 15c and 15d have also shown inhibitory activity against the Hsp90α protein, with IC50 values of 10.24 and 4.48 μg/mL, respectively. A combination of 15c and 15d with acyclovir has led to IC50 values that are lower than that of acyclovir alone. Molecular modeling studies were used to identify the interactions between the 15c and 15d compounds and the active site of the Hsp90α enzyme. The antimicrobial investigation of the new compounds has also shown that 8b and 15d exhibited a higher inhibition zone (IZ) than sulfadiazine and gentamicin against Klebsiella pneumonia, whereas 9a showed a higher IZ than ampicillin against Staphylococcus aureus. According to the enzyme assay study on dihydrofolate reductase, 9a was shown to be the most potent compound among all examined compounds.

INTRODUCTION

Recently, we conducted numerous research investigations to develop different innovative synthetic methods for the preparation of N-sulfonylamino and N-sulfonyl-based heterocyclic compounds that have come into application as new forms of antiviral and antimicrobial agents. 1−4 Series of N-sulfonylpyrazoles, synthesized by our group, 5,6 were evaluated against the enzyme cathepsin B 16 and the breast adenocarcinoma MCF-7 cell line. 7,8 Similarly, the N-arylsulfonylpyrazole series was also identified to be active as inhibitors of the NS2B-NS3 enzyme. 9 These promising results led our research team to further investigate new approaches for the synthesis of alternative scaffolds for use as promising chemotherapeutics. In 1990, we reported the first synthesis of structurally simple 2-arylbenzothiazoles (A), by the reaction of o-aminothiophenol with arylmethylenecyanothioacetamide, 10 whose derivatives thereafter displayed interesting pharmacological properties suggesting their potential as anticancer, antiviral, and antimicrobial agents, Figure 1. The aniline derivative, compound (B), had excellent in vitro cytotoxicity at nanomolar concentrations against a breast cancer cell line. 11 Compound (C) demonstrated superior in vivo efficacy against a breast cancer cell line; however, metabolic instability prevented its development as a chemotherapeutic agent, Figure 2. To overcome this problem, the fluorinated analog (D) was developed. 12 Compound (E) showed excellent growth inhibition of the kidney cancer cell line A498, 13,14 whereas compound (F), a naphthalimide-arylbenzothiazole, was found to possess potent cytotoxicity against the human hepatocarcinoma cell line SMMC-7721. 15 These findings were later filed as patents.
As a result of these interesting findings, we plan in this study to complement our previous work on 2-arylbenzothiazoles by replacing the benzene ring with a pyridine ring tagged with a sulfonamide moiety. The synthesis was carried out by reacting N-cyanoacetoarylsulfonylhydrazide with various electrophiles such as 2-(benzo[d]thiazol-2-yl)-3,3-bis(alkylthio)-acrylonitriles and 2-(benzo[d]imidazol-2-yl)-3,3-bis(methylthio)-acrylonitriles, as well as 2-ethoxy acrylonitrile derivatives. Antiviral and antimicrobial activities, as well as the toxicity effects of the synthesized compounds, have all been evaluated. Many of the synthesized compounds showed interesting activities when compared to those of current antimicrobial drugs.

RESULTS AND DISCUSSION

2.1. Chemistry. The synthetic strategies adopted for producing the target pyridine-based benzothiazole and benzimidazole compounds are represented in Schemes 1−4. This work focused on a key compound bearing the sulfonamide moiety, which was used as a potential precursor for the synthesis of the novel series of N-sulfonylaminopyridones tagged with either a benzothiazole or benzimidazole ring. The key compound, N-cyanoacetoarylsulfonylhydrazide, was afforded via a convenient reaction, developed earlier, of cyanoacetohydrazide with arylsulfonyl chloride. 6 The reaction of N-cyanoacetoarylsulfonylhydrazide 4a,b with 2-(benzo[d]thiazol-2-yl)-3,3-bis(alkylthio)-acrylonitrile 3 in the presence of potassium hydroxide in dry dimethylformamide (DMF) produced an adduct for which four possible isomeric structures were considered, as shown in Scheme 1. The X-ray crystal structure of 8b confirmed the structure of the compound in the solid state. 16 Addition of the active methylene carbon atom of 4a,b to the double bond of 3a,b, followed by the elimination of RSH and the subsequent cyclization via the addition of the NH group to the cyano group of benzothiazole, led to the formation of the thermodynamically controlled and kinetically favored products 8a−d. The 1H NMR spectrum of 8b revealed the presence of an amino group at 8.83 ppm and a pyridine methylthio group at 2.45 ppm. In addition, the NH2 stretching band was confirmed in the infrared (IR) spectra in the range of 3200−3208 cm−1. Furthermore, treatment of thioalkylpyridone derivatives 8a−d with hydrazine hydrate in the presence of a catalytic amount of piperidine in DMF resulted in the formation of the corresponding 1H-pyrazolo[4,3-c]pyrid-2-ones 9a,b, Scheme 2. This reaction occurred with the loss of the alkylthio group by nucleophilic attack of hydrazine and the subsequent cyclization through attack on the cyano group in the pyridone ring to afford pyrazolopyridones 9a,b. The structure of compounds 9a,b was confirmed by their basic analysis and spectroscopic data [Fourier transform infrared spectroscopy (FTIR), 1H NMR, and mass]. The absence of the cyano stretching band in the IR spectra as well as the disappearance of the SCH3 protons in the 1H NMR validated the conversion of 8a−d to 9a,b. In addition, the 1H NMR spectra showed two peaks at 11.92−11.95 and 6.10−6.11 ppm, indicating the presence of NH and NH2, respectively. The target benzimidazole derivatives 13a,b were synthesized using the reaction sequence illustrated in Scheme 3. They were formed by the reaction of N-cyanoacetoarylsulfonylhydrazide 4a,b with 2-(1H-benzo[d]imidazole-2-yl)-3,3-bis(methylthio)acrylonitrile derivatives 12 at room temperature.
The structure of 13a,b was characterized by their basic analysis and spectroscopic data (1H NMR and IR). For example, the 1H NMR spectrum of 13a revealed singlet peaks at 2.35 and 2.90 ppm, which are assigned to the CH3 and SCH3 protons, respectively, and multiplet peaks in the range of 8.30−7.40 ppm for the aromatic protons. Moreover, our investigation was extended to include reacting N-cyanoacetoarylsulfonylhydrazide 4a,b with various acrylonitrile derivatives, Scheme 4. The reaction of N-cyanoacetoarylsulfonylhydrazide 4a,b with 2-ethoxy acrylonitrile derivatives 14a,b in ethanolic sodium ethoxide afforded the corresponding N-arylsulfonylpyridones 15a−d. The synthetic route to target compounds 15a−d is assumed to occur via the addition of the active methylene group of 4a,b to the ylidene bond in 14a,b, followed by the elimination of one ethanol molecule and cyclization via the addition of the NH group to the cyano group. The elemental analysis and spectral data validated the proposed individual structures of compounds 15a−d. IR showed the NH2 absorption band in the range of 3308−3286 cm−1. In addition, a singlet peak in the 1H NMR at 7.95−7.94 ppm confirmed the presence of the pyridine-H. In the case of compounds 15c,d, the 1H NMR spectrum showed a triplet peak at 1.28 ppm and a quartet at 4.24 ppm for the CH3 and CH2 protons, respectively, of the COOCH2CH3 group. Furthermore, the 1H NMR spectra revealed the presence of an amino group in the range of 7.68−8.02 ppm.

2.2. Biological Results. 2.2.1. Antiviral Evaluation. The antiviral activities of the newly synthesized compounds were evaluated in vitro against a wide variety of viruses such as herpes simplex virus type 1 (HSV-1), coxsackievirus B4 (CBV4), hepatitis A virus HM 175 (HAV HM 175), the ED-43/SG-Feo (VYG) replicon of hepatitis C virus genotype 4a (HCVcc), and adenovirus type 7 (HAdV7). As reported, no specific medications are available for the HCV genotype 4, CBV4, and HAdV7 viruses, and the commercial drugs are only used to treat the symptoms but not the illness itself; the exception is HSV-1, for which acyclovir is used. 17−19 In order to study their antiviral activities, the newly synthesized compounds were first subjected to a cytotoxicity evaluation, as shown in the Supporting Information document, using the cell lines FRHK-4, Hep2, BGM, Vero, and Huh 7.5 as the specific hosts of the various studied viruses. For comparison, acyclovir was used as the standard drug against HSV-1. No significant difference was observed between the nontoxic doses of the different synthesized compounds, which ranged between 90 and 120 μg/mL. The synthesized compounds showed an apparent effect on viruses having different types of genome, that is, either RNA such as CBV4, HAV, and HCV or DNA such as HAdV7 and HSV-1, Table 1. Two compounds in particular, 15c and 15d, showed interesting antiviral effects that exceed 50% reduction against three of the studied viruses, Figure 3.

Scheme 1. Synthesis of N-(6-Amino-5-(benzo[d]thiazol-2-yl)-4-(alkylthio)-2-oxopyridin-1(2H)-yl)arylsulfonamide

Figure 3. Error bars in Figure 3 represent the standard deviation of the measured data.
The 50% maximum cytotoxicity concentration (CC50), defined as the concentration in μg/mL required to reach 50% cytotoxicity of the treated uninfected cells, and the 50% maximal inhibitory concentration (IC50), defined as the concentration in μg/mL required to inhibit 50% of the tested viruses, were both evaluated for the two compounds, 15c and 15d, that exhibited greater than 50% viral reduction against the aforementioned tested viruses. In addition, the selectivity index (SI) of the promising compounds was calculated by evaluating the CC50/IC50 ratio, Table 2. The two compounds, 15c and 15d, showed, however, moderate levels of activity against HSV-1, CBV4, and HAV. In the case of HSV-1, both compounds 15c and 15d exhibited IC50 values of 90 μg/mL, whereas their CC50 values were 250 and 220 μg/mL, respectively. Both compounds showed 50% viral reduction against HSV-1, while acyclovir showed 99.6% viral reduction. Additionally, compound 15c showed a 50% reduction for CBV4 and a 53% reduction for HAV, with IC50 88 μg/mL and SI 2.95 for both viruses, while compound 15d showed a 50% reduction for CBV4 alone, with IC50 90 μg/mL and SI 2.66.

2.2.1.1. Structure−Activity Relationships. Based on the results of the tested activities against HSV-1, CBV4, and HAV, as shown in Tables 1 and 2, the structure−activity relationships (SARs) have been established. We can observe that both compounds 15c and 15d, possessing the ethoxycarbonyl group at C5 of the pyridine ring, have almost equal activities toward HSV-1 and CBV4. However, replacement of the ethoxycarbonyl group with a cyano group led to the other two compounds, 15a and 15b, which showed lower to no activity against the aforementioned viruses. Alternatively, the presence of a benzothiazole ring at C5 and an alkylthio group at C4 of the pyridine ring, compounds 8a−d, resulted in lower activities for these compounds than those of the compounds containing the ethoxycarbonyl group at C5, 15c and 15d.

2.2.1.2. Hsp90α Inhibition Assay. Enzyme Assay: It is well-known that the protein "heat shock protein 90" (Hsp90α), which exists in almost all cell types, is quite important in viral protein processes such as protein folding, protein assembly, and protein replication. 20 During infection, HSV-1 uses the protein Hsp90α as part of the viral polymerase process. 21 Therefore, an inhibitor of Hsp90α would naturally be an inhibitor of the HSV-1 infection. This has encouraged us to examine the possibility that our newly synthesized compounds 15c and 15d, which showed high potency against HSV-1, might act as novel Hsp90α inhibitors. The Hsp90α (C-terminal) inhibitor screening assay kit was used to examine the effect of 15c and 15d against Hsp90α. Combinations of 15c and 15d with the well-known drug acyclovir (standard drug), in a 1:1 ratio, were also investigated. The IC50, the concentration of the tested compound that is required to inhibit 50% of the virus cell population, was evaluated, Table 3. It is interesting to report that both compounds had a promising inhibitory effect against Hsp90α as measured in μg/mL. According to the calculated IC50 values, compound 15d with an IC50 value of 4.48 μg/mL showed more potency as an Hsp90α inhibitor than both acyclovir with an IC50 value of 4.78 μg/mL and 15c with an IC50 value of 10.24 μg/mL. Combining 15c as well as 15d with acyclovir resulted in IC50 values of 5.22 and 2.26 μg/mL, respectively, which is a major reduction from those of the individual compounds.
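The paper does not state the fitting procedure behind these IC50 and SI values; such values are typically obtained by fitting a dose−response curve to the measured inhibition. A minimal sketch follows, using a four-parameter logistic model with made-up concentration and inhibition data for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ic50, hill):
    """Four-parameter logistic: % inhibition rising with concentration c (ug/mL)."""
    return bottom + (top - bottom) / (1.0 + (ic50 / c) ** hill)

# Hypothetical % inhibition data for one compound (concentrations in ug/mL)
conc = np.array([10., 30., 60., 90., 150., 250.])
inhib = np.array([8., 20., 38., 50., 68., 85.])

popt, _ = curve_fit(four_pl, conc, inhib, p0=[0., 100., 90., 1.])
ic50 = popt[2]

cc50 = 250.0   # from the parallel cytotoxicity assay (illustrative value)
print(f"IC50 ~ {ic50:.1f} ug/mL, SI = CC50/IC50 ~ {cc50 / ic50:.2f}")
```

The selectivity index then follows directly as the CC50/IC50 ratio, exactly as defined in the text.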
It is clear that the combination of both compounds 15c and 15d with acyclovir has increased their potency. The resultant data clearly indicate that compound 15c and its combination with acyclovir act as potential inhibitors of Hsp90α and, accordingly, inhibitors of HSV-1, as was clearly illustrated by the antiviral reduction investigation, and are, therefore, highly recommended.

2.2.1.3. Molecular Modeling and Docking Study. To further understand the underlying mechanism of the anti-HSV-1 action of the potent compounds with viral reduction of 50% or more, 15c and 15d were tested by docking them onto the crystal structure of the Hsp90α protein (PDB ID: 3B25) obtained from the protein data bank (PDB). To prepare the models, the bound ligand, B2K (4-methyl-6-(toluene-4-sulfonyl)-pyrimidin-2-ylamine), was removed from the active site, as detailed in the Experimental Part (Section 4.4).

2.2.2. Antimicrobial Evaluation. The newly synthesized compounds were evaluated for their in vitro antibacterial activity against E. coli, K. pneumonia, P. aeruginosa, S. aureus, and S. mutans. They were also evaluated for their in vitro antifungal potential against the Candida albicans fungal strain. The agar-diffusion method was used for the determination of the preliminary antibacterial and antifungal activities. Gentamicin, ampicillin, and nystatin were used as standard drugs against Gram-negative bacterial, Gram-positive bacterial, and fungal strains, respectively. Sulfadiazine, one of the sulfonamide antibiotics with a pyrimidine ring bearing the sulfamide group at position 2, was also tested. The results were recorded for each tested compound as the average diameter of the inhibition zones (IZs) of the microbial growth around the disks in mm ± SD, as summarized in Table 4. The minimum inhibitory concentration (MIC) measurements were determined for the most active compounds 8b, 9a, and 15d by using the twofold serial dilution method (as explained later), and the results are all summarized in Table 5.

Scheme 4. Synthesis of 2-Oxopyridin-1(2H)-ylbenzenesulfonamide

In general, thiomethylpyridone derivatives 8a and 8b showed improved inhibitory effectiveness against the tested Gram-positive bacteria. Compound 8b was identified as the most potent agent among the thiomethylpyridone series. It was equipotent to gentamicin against the K. pneumonia strain (IZ 32.2 ± 2.5 mm, MIC 125 μg/mL). On the other hand, 1H-pyrazolopyridone 9a (IZ 34 ± 1 mm, MIC 125 μg/mL) was shown to be the most potent derivative among all synthesized compounds against S. aureus, Table 4. Pyrazolopyridone 9b had the highest antifungal activity among all of the synthesized compounds investigated in this study. Pyridone derivatives 13a,b revealed moderate antifungal activities against C. albicans and strong antibacterial activities against S. aureus. Pyridone derivative 13a was more active than 13b against both S. aureus and C. albicans. Regarding the antibacterial activities of sulfonylpyridones 15a−d, pyridone 15d was the only compound that showed some activity against most of the tested bacterial strains. Compound 15d exhibited higher potency than gentamicin against K. pneumonia (IZ 35.3 ± 1.5 mm, MIC 1000 μg/mL). Compound 15a showed a moderate antifungal activity against C. albicans. Both 15b and 15c showed no activity against any of the tested bacterial and fungal strains. In addition, all of the synthesized compounds showed no activity against P. aeruginosa.

2.2.2.1. Structure−Activity Relationships. The study of the SAR of the synthesized compounds demonstrated that the unsubstituted benzenesulfonamide pyridone derivatives 8a, 8c, and 9a were more potent than the p-tolyl pyridone derivatives 8b, 8d, and 9b against S. mutans, while the p-tolyl pyridone derivatives 8b, 9b, and 15d were more active against Gram-negative bacteria, especially K.
pneumonia, as compared to the unsubstituted benzenesulfonamide pyridone derivatives 8a, 9a, and 15a. Moreover, the unsubstituted benzenesulfonamide pyridone derivatives 8a, 9a, and 13a were more potent than the p-tolyl pyridone derivatives 8b, 9b, and 13b against S. aureus. In addition, both 8b and 15d, bearing the p-tolyl group, showed a higher potency than the reference drug, gentamicin, against K. pneumonia. Compound 9a showed higher potency than ampicillin against S. aureus. Comparing the antimicrobial activity of the synthesized compounds (as reported above in terms of IZ) with sulfadiazine, which is considered to be a well-known antibacterial sulfonamide drug, indicated that some of the synthesized compounds showed superior activities. For example, the p-tolyl pyridone derivatives 8b, 9b, and 15d showed a much higher potency against the Gram-negative bacterium K. pneumonia than sulfadiazine (IZ 20.4 ± 0.9 mm). In addition, both 8a and 8c, unsubstituted benzenesulfonamide pyridone derivatives, were more potent than sulfadiazine (IZ 16.3 ± 1.1 mm) against S. mutans. Furthermore, 1H-pyrazolopyridone 9a was highly effective, while pyridone derivatives 8d, 13a, and 13b showed a relatively lower activity than sulfadiazine (IZ 31.3 ± 2.1 mm) against S. aureus. Although sulfadiazine had no activity against C. albicans, some of the synthesized compounds, such as 9b, 13a, 13b, and 15a, showed some activity. These remarkable findings indicate the influence of the synthesized compounds and the effectiveness of these newly synthesized chemical structures in inhibiting and combating the mentioned hazardous bacterial strains, confirming the appropriateness of this work's approach in envisaging and generating new effective antimicrobial drugs.

2.2.2.2. Dihydrofolate Reductase Activity Assay. Folate inhibitors are well-known agents for blocking the synthesis of folic acid and are therefore usually used for treating protozoal, bacterial, and fungal infections. This class of antimicrobial drugs may contain compounds from the subclasses of sulfonamides or dihydrofolate reductase (DHFR) inhibitors, or a combination of both drugs. 22 Generally, sulfonamide drugs inhibit the conversion of p-aminobenzoic acid (PABA) to dihydrofolate (DHF) through the action of the enzyme dihydropteroate synthase; DHF can then be converted to tetrahydrofolic acid through the action of the enzyme DHFR. DHFR inhibitors are commonly used for fighting malaria and other protozoal infections, as well as for treating fungal, bacterial, and mycobacterial infections. 23 Trimethoprim is used as the DHFR inhibitor in the case of bacterial infections because it competes with the pteridine moiety of DHF in binding to the DHFR enzyme. It has been shown that drugs containing a pyrimidine ring, such as trimethoprim, in association with a sulfonamide drug are commonly used as antibacterial drugs because of their ability to bind to the active site of the enzyme. 24 Moreover, it has also been reported that drugs such as trimetrexate, containing pyridopyrimidines, inhibit the DHFR enzyme's function during the development of cancer cells. 25 Therefore, it would be quite interesting to perform an enzyme assay on the DHFR enzyme. In this regard, the highly potent antimicrobial compounds 8b, 9a, and 15d were tested in terms of their inhibitory activities toward the enzyme using the DHFR inhibitor screening kit. For the purpose of comparison, both sulfadiazine, as one of the sulfonamide antibiotics, and trimethoprim were used as standard drugs, Table 6.
The inhibitory potency IC50 values of the tested compounds, 8b, 9a, and 15d, were compared to that of sulfadiazine. All tested compounds showed IC50 values higher than those of sulfadiazine (IC50 = 0.201 μg/mL) and trimethoprim (IC50 = 0.170 μg/mL), Figure 6. It was determined that compound 9a had an IC50 value of 0.699 μg/mL, followed by 8b with an IC50 value of 1.368 μg/mL and 15d with an IC50 value of 4.671 μg/mL, making 9a the most potent agent against DHFR among the newly synthesized compounds.

CONCLUSIONS

The work in this article is focused on synthesizing a new class of functionalized benzothiazole- and benzimidazole-based pyridines incorporating sulfonamide moieties, with remarkable antiviral and antimicrobial potency. The synthesis was carried out by reacting N-cyanoacetoarylsulfonylhydrazide with various electrophiles such as 2-(benzo[d]thiazol-2-yl)-3,3-bis(alkylthio)acrylonitriles and 2-(benzo[d]imidazol-2-yl)-3,3-bis(methylthio)-acrylonitriles, as well as 2-ethoxy acrylonitrile derivatives. The structures of the different synthesized compounds were confirmed by basic spectroscopic data and elemental analysis. The antiviral activities of the newly synthesized compounds were evaluated in vitro against a wide variety of viruses such as HSV-1, CBV4, HAV HM 175, HCVcc genotype 4a, and HAdV7. The CC50, IC50, and SI were also evaluated for the promising compounds. Two of the synthesized compounds, 15c and 15d, showed more than 50% viral reduction against HSV-1 and CBV4, with significant IC50 and CC50 values. These compounds have also shown inhibitory activity against the Hsp90α protein, and their combination with the known drug acyclovir resulted in a very high potency against Hsp90α and consequently against the HSV-1 virus. To evaluate the underlying principles behind the action of these new compounds in inhibiting HSV-1, a molecular docking study was performed with a focus on the compounds that showed the greatest potency against the HSV-1 virus. The study showed that the investigated compounds occupied the protein binding pockets of Hsp90α with strong binding interactions. The newly synthesized target compounds were also evaluated for their in vitro antimicrobial activity against E. coli, K. pneumonia, P. aeruginosa, S. aureus, and S. mutans, as well as C. albicans. In general, the thiomethylpyridone derivatives showed improved inhibitory effectiveness against the tested Gram-positive bacteria, while the 1H-pyrazolopyridones showed greater potency than ampicillin against S. aureus. According to the enzyme assay study on DHFR, compound 9a was the most potent among the tested potent compounds.

4. EXPERIMENTAL PART

4.1. Chemistry. Melting points were determined on a digital melting point apparatus, SMP3, using one-end-open capillary tubes, and are uncorrected. IR spectra were recorded on an FTIR Plus 460 using KBr pellets. 1H NMR and 13C NMR were carried out in the Center of Drug Discovery Research at Ain Shams University, and spectra were recorded on a Bruker ADVANCE (III) model (400 MHz) spectrometer in DMSO-d6 as a solvent using tetramethylsilane as an internal standard; chemical shifts are reported in δ ppm units. The elemental analyses were done at the microanalytical data unit at Cairo University and performed on a Vario EL III Elemental CHNS Analyzer. Progress of the reactions was monitored by thin-layer chromatography (TLC) using aluminum sheets coated with silica gel Merck 60F and was visualized by a UV lamp.
The reagents and solvents were purchased in commercially available grade purity.

4.1.1. General Procedure for the Synthesis of 3b. A mixture of sodium ethoxide (0.08 mol) and 2-cyanomethylbenzothiazole (0.04 mol) in absolute ethanol (100 mL) was refluxed for 20 min. After cooling, carbon disulfide (0.04 mol) was added gradually and the solution was warmed for 20 min. The reaction mixture was stirred overnight at room temperature after the addition of methyl iodide (0.08 mol). The solution was poured onto ice water and the solid product formed was filtered. After drying, the solid product was dissolved in hot petroleum ether and then filtered. The precipitate formed after evaporation of the solvent was recrystallized from DMF. Anal. Calcd for C14H14N2S3: C, 54.87; H, 4.60; N, 9.14. Found: C, 54.85; H, 4.58; N, 9.16.

4.1.2. General Procedure for the Synthesis of 8a−d. 2-(Benzo[d]thiazol-2-yl)-3,3-bis(alkylthio)acrylonitrile (0.01 mol) was added to a solution of N-cyanoacetoarylsulfonylhydrazide (0.01 mol) in dry DMF (30 mL) containing pulverized potassium hydroxide (0.01 mol). The reaction mixture was refluxed with stirring for 2 h (TLC monitoring). After cooling, the reaction mixture was poured onto ice-cold water and neutralized with HCl. The solid product was filtered, washed with water, and dried. Further purification was done using a hot mixture of petroleum ether/ethyl acetate (50:50; v/v). The remaining solid compound was crystallized from DMF.

N-(6-Amino-5-(benzo[d]thiazol-2-yl)-3-cyano-4-(methylthio)-2-oxopyridin-1(2H)-yl)benzenesulfonamide.

4.2. Cytotoxicity Evaluation. The cytotoxicity evaluation was done according to the literature. 3,26,27 First, 50 mg of each sample was dissolved in 1 mL of DMSO. Decontamination of the samples was done by adding 0.024 mL of a 100× antimycotic−antibiotic mixture to 1 mL of each sample. To estimate the nontoxic dose of the tested samples, bi-fold dilutions were made from 0.1 mL of the original dissolved samples, and 0.1 mL of each dilution was inoculated in Hep-2, Vero, BGM, FRHK4, and Huh 7.5 cell lines, which were obtained from the Holding Company for Biological Products & Vaccines (VACSERA), Egypt, and previously cultured in 96 multiwell plates (Greiner Bio-One, Germany). The cytotoxicity assay was done using cell morphology evaluation by inverted light microscopy and a cell viability test applying the trypan blue dye exclusion method.

4.2.2. Cell Morphology Evaluation by Inverted Light Microscopy. Vero, FRHK4, BGM, Hep-2, and Huh 7.5 cell cultures (2 × 10^5 cells/mL) were prepared individually in 96-well tissue culture plates (Greiner Bio-One, Germany). The cell cultures were then incubated for 24 h at 37°C in a humidified 5% (v/v) CO2 atmosphere to form cell monolayers. The medium was then removed from each well and replenished with 0.1 mL of bi-fold dilutions of the different tested samples prepared in Dulbecco's modified Eagle's medium (DMEM) (GIBCO BRL); 0.1 mL of DMEM without samples was added as a cell control. Subsequently, all cultures were incubated for 72 h at 37°C in a humidified 5% (v/v) CO2 atmosphere. Cell morphology was observed daily for microscopically detectable morphological alterations, such as cell rounding, loss of confluence, shrinking, vacuolization, and cytoplasm granulation. Morphological deviations were recorded. 26

4.2.3. Cell Viability Assay. This assay was done by the trypan blue dye exclusion method. 27 Vero, FRHK4, BGM, Hep-2, and Huh 7.5 cell cultures (2 × 10^5 cells/mL) were grown in 12-well tissue culture plates (Greiner Bio-One, Germany).
After 24 h of incubation, the same procedure described above for tested-sample cytotoxicity was carried out by applying 0.1 mL of tested sample dilutions (bi-fold dilutions) per well. The medium was removed after 72 h, the cells were trypsinized, and an equal volume of 0.4% (w/v) trypan blue dye aqueous solution was added to the cell suspension. Viable cells were counted using a phase-contrast microscope.

4.2.4. Determination of Adenovirus 7, HAV HM 175, CBV4, and HSV-1 Titers Using Plaque Assay. Nontoxic dilutions (0.1 mL) were mixed with 0.1 mL of different doses of HSV-1, HAdV7, HAV HM 175, and CBV4 (1 × 10^5, 1 × 10^6, 1 × 10^7). The mixture was incubated at 37°C for 30 min. Ten-fold dilutions (0.1 mL) of treated and untreated CBV4, adenovirus 7, HSV-1, and HAV HM 175 were then inoculated separately onto BGM, Hep-2, Vero, and FRHK4 cell lines, respectively, in 12 multiwell plates. The plates were left for 1 h of incubation for adsorption at 37°C in a 5% CO2−water vapor atmosphere without constant rocking. Afterward, the plates were rocked occasionally to keep the cells from drying. After adsorption, 1 mL of 2× DMEM (Gibco-BRL) and 1 mL of 1% agarose were added to each well. The plates were incubated at 37°C in a 5% CO2−water vapor atmosphere. After incubation, the cells were fixed with formalin and stained with 0.4% crystal violet, and the number of plaques was counted. The viral titers were then calculated and expressed as plaque-forming units per milliliter (pfu/mL). 28 CC50 and IC50 determinations were done for the promising compounds that showed viral reduction of 50% or more. The 50% cytotoxic concentration (CC50) of the tested compound was defined as the concentration that reduces the OD492 of treated uninfected cells to 50% of that of untreated uninfected cells. The IC50 is the concentration at which the compound's plaque reduction rate reaches halfway between the baseline and the maximum. All data were taken as the average of three measurements (triplicate).

4.2.5. Antiviral Bioassay of Tested Compounds against the ED-43/SG-Feo (VYG) Replicon of HCV Genotype 4a. The ED-43/SG-Feo (VYG) replicon of HCV genotype 4a was treated with the nontoxic dose of the tested compounds. HCV RNA was quantified in compound-treated, infected Huh 7.5 cells using qRT-PCR (TaqMan probe kit, QIAGEN) according to the manufacturer's instructions, to show a dose-dependent decrease in subgenomic RNA copies, according to the literature. 29

4.3. Hsp90α (C-Terminal) Inhibitor Screening Assay. The Hsp90α (C-terminal) inhibitor screening assay was carried out using the Hsp90α (C-terminal) Inhibitor Screening Assay Kit (catalog 50317, size: 384 reactions), according to the method described in the catalog and the literature. 3 Enzyme assay experiments were carried out at the Tissue Culture Unit, the Egyptian Company for Production of Vaccines, Sera and Drugs (VACSERA), Giza, Egypt. All samples and controls were tested in triplicate. First, both Hsp90α and PPID were thawed on ice. Aliquot the proteins into single-use aliquots. Store the remaining undiluted protein in aliquots at −80°C immediately. Note: Hsp90α and PPID proteins are very sensitive to freeze/thaw cycles. Dilute the 3× Hsp90α assay buffer 2 with water to 1× Hsp90α assay buffer 2. Dilute Hsp90α in 1× Hsp90α assay buffer 2 at 1.5 ng/μL. Add 4 μL of the diluted Hsp90α protein to each well designated for the "Positive Control", "Test Inhibitor", and "Blank". To the wells labeled "Substrate Control", add 4 μL of 1× Hsp90α assay buffer 2.
Discard any unused diluted protein after use. Add 2 μL of inhibitor solution to each well designated "Test Inhibitor". For the "Positive Control", "Substrate Control", and "Blank", add 2 μL of the same solution without inhibitor (inhibitor buffer). Dilute PPID in 1× Hsp90α assay buffer 2 at 10 ng/μL. Keep the diluted protein on ice until use. Add 4 μL of 1× Hsp90α assay buffer 2 to the well designated "Blank". Initiate the reaction by adding 4 μL of diluted PPID to each well designated for the "Substrate Control", "Positive Control", and "Test Inhibitor". Incubate at room temperature for 30 min. Dilute the 3× detection buffer with water to 1× detection buffer. Dilute the glutathione acceptor beads 250-fold with 1× detection buffer. Add 10 μL per well. Shake the plate briefly. Incubate at room temperature for 30 min. Dilute the streptavidin-conjugated donor beads 125-fold with 1× detection buffer. Add 10 μL per well. Incubate at room temperature for 1 h. Read the Alpha counts. The percentage inhibition was calculated for the different concentrations tested against the control, and the IC50 values against the Hsp90α protein were calculated from the concentration−inhibition response curve.

4.4. Molecular Modeling. The docking study was performed using the crystal structure of Hsp90α (PDB code: 3B25) in complex with B2K (4-methyl-6-(toluene-4-sulfonyl)-pyrimidin-2-ylamine). 30 The PDB file was downloaded from the PDB. The structure of chain A was processed using the structure preparation application in MOE (Molecular Operating Environment, 2014). The ligand molecule was removed from the protein active site. Then, the missing hydrogens were added using the Protonate3D application of MOE, and the ionization states were properly assigned. To discover the favorable binding conformation, the default procedure in the MOE Dock application was used. Primary placement poses created by the alpha triangle matcher were rescored and filtered using the London dG scoring method to pick poses exhibiting maximal hydrophobic, ionic, and hydrogen-bond contacts to the protein. This was followed by a refinement stage. The generated poses were energy-minimized using the MMFF94x force field. Finally, the optimized poses were ranked using the GBVI/WSA dG free-energy estimates. Docking poses were inspected, and interactions with binding pocket residues were analyzed.

4.5. Antimicrobial Activity. The antimicrobial assessment and the MIC determination were performed at the Microbiology Unit in the Biochemistry Central Laboratory, Faculty of Science, Cairo University, Cairo, Egypt. The synthesized compounds were individually tested against Gram-positive and Gram-negative bacterial pathogens and fungi. The agar well diffusion method was used to determine the activity of the tested compounds. 31,32 The compounds were tested at a concentration of 15 mg/mL against both bacterial and fungal strains. A microbial suspension was prepared in sterilized saline equivalent to a McFarland 0.5 standard solution (1.5 × 10^8 CFU mL−1). Its turbidity was adjusted to an optical density of 0.13 using a spectrophotometer at 625 nm. Within 15 min of adjusting the turbidity of the inoculum suspension, a sterile cotton swab was dipped into the adjusted suspension and spread over the dried agar surface. The agar was then allowed to dry for 15 min. Wells of 6 mm diameter were made in the solidified media with the help of a sterile borer. The solution of the tested compound (100 μL) was added to each well with a micropipette. The plates were then incubated at 37°C.
The IZ was measured in millimeters (mm) after 24 h of incubation at 30°C in the case of the bacterial plates, while in the case of the fungal plates, the incubation was for 48 h. This experiment was carried out in triplicate, and the results were recorded for each tested compound as mm ± SD.

4.6. MIC Measurement. Stock solutions of the tested compounds, ampicillin, gentamicin, and nystatin were prepared in DMSO at a concentration of 1000 μg/mL, followed by serial twofold dilution to concentrations of 500, 250, 125, 62.5, and 31.25 μg/mL. Each dilution was then mixed with sterile nutrient agar (Sigma-Aldrich, USA) in a sterile plate, followed by the inoculation of a defined microbial inoculum onto the agar plate surface. The plates were then left to incubate at 37°C in a humid chamber. After 24 h, the MIC endpoints were read and recorded as the lowest concentration of an antimicrobial agent that completely inhibits growth under suitable incubation conditions. 32

4.7. DHFR Inhibitor Screening Assay. Enzyme assay experiments were carried out at the Tissue Culture Unit, the Egyptian Company for Production of Vaccines, Sera and Drugs (VACSERA), Giza, Egypt. The assay for the inhibitory effect was applied as indicated in the BioVision manufacturer's protocol. A 100-fold dilution of methotrexate was prepared by diluting 2 μL of methotrexate with 198 μL of DHFR assay buffer. Each tested sample was then dissolved at 100× in an appropriate solvent. 2 μL of the tested sample, diluted methotrexate, or DHFR assay buffer was added into wells assigned as sample screening (S), inhibitor control (IC), or enzyme control (EC), respectively. A 400-fold dilution of DHFR was then prepared by diluting 2 μL of DHFR with 798 μL of DHFR buffer, and enough enzyme mixture was prepared for the number of wells to be analyzed. Diluted DHFR (98 μL) was added into the desired well(s) containing the tested samples, EC, or IC, to a volume equivalent to 100 μL. A background control (BC) was prepared by adding 100 μL of DHFR assay buffer to the desired well(s). From a stock solution of nicotinamide adenine dinucleotide phosphate (NADPH), a 40-fold dilution was prepared by diluting 10 μL of NADPH stock solution with 390 μL of DHFR assay buffer; this was vortexed briefly and kept at 0°C. Diluted NADPH (40 μL) was added to each well containing the tested samples, EC, IC, or BC. Samples were then mixed well, incubated at room temperature for 10−15 min, and kept away from light. A 15-fold dilution of the DHFR substrate was prepared by diluting 40 μL of DHFR stock substrate with 560 μL of DHFR assay buffer, vortexed briefly, and kept at 0°C. Diluted DHFR substrate (60 μL) was then added to each well containing the tested samples, EC, IC, or BC. After the samples were mixed well, the total volume reached 200 μL. The absorbance was then measured at 340 nm in kinetic mode for 10−20 min at room temperature. Two time points (t1 and t2) were chosen in the linear range of the plot to obtain the corresponding values of the absorbance (OD1 and OD2).
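The protocol above ends at the two kinetic time points; the remainder of the calculation is not preserved in the text. A minimal sketch of the rate and percent-inhibition arithmetic that such kinetic kits typically prescribe follows; the exact formula used by the kit is an assumption, and all readings shown are hypothetical.

```python
def dhfr_rate(od1: float, od2: float, t1: float, t2: float) -> float:
    """Reaction rate as the decrease in A340 per minute (NADPH consumption)."""
    return (od1 - od2) / (t2 - t1)

def percent_inhibition(rate_sample: float, rate_enzyme_ctrl: float) -> float:
    """Slower NADPH consumption relative to the enzyme control means inhibition."""
    return 100.0 * (1.0 - rate_sample / rate_enzyme_ctrl)

# Hypothetical readings: enzyme control (EC) vs. a compound well (S)
ec = dhfr_rate(od1=0.90, od2=0.60, t1=2.0, t2=10.0)   # 0.0375 OD/min
s  = dhfr_rate(od1=0.90, od2=0.78, t1=2.0, t2=10.0)   # 0.0150 OD/min
print(f"inhibition = {percent_inhibition(s, ec):.0f}%")  # 60%
```

Repeating this calculation over a concentration series and fitting the resulting inhibition curve would yield IC50 values of the kind reported in Table 6.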
Tubulointerstitial nephritis without glomerular crescent formation as an underestimated subgroup of renal involvement among microscopic polyangiitis patients: A case report

Abstract: Although extra-glomerular involvement of microscopic polyangiitis is not regarded as a characteristic finding of the disease, tubulointerstitial nephritis should be considered as a new subclass of renal involvement.

| INTRODUCTION

Microscopic polyangiitis (MPA) is an anti-neutrophilic cytoplasmic antibody (ANCA)-associated vasculitis (AAV), which mainly involves the small vessels without granulomatous changes. More than 70% of MPA patients have renal involvement. 1 Renal biopsy is the gold standard to confirm the renal manifestation of MPA. Rapidly progressive glomerulonephritis (RPGN) with crescent formation in the glomeruli and pauci-immune nephritis are typically found in glomerulonephritis associated with MPA. However, several case reports have documented MPA patients without these typical glomerular pathologies. These suggested that tubulointerstitial nephritis without crescentic glomerulonephritis pertains to a new subgroup of renal manifestation in MPA. We describe a case of MPA with tubulointerstitial nephritis without a crescentic change in the glomeruli.

| CASE REPORT

A 70-year-old woman with underlying hypertension was admitted for complaints of intermittent fever, backache, bilateral shoulder and leg pain, and left-sided foot drop for 2 months. On admission, her blood pressure was 136/61 mm Hg, and her heart rate was 99 bpm. On the next day, she had a fever of 40°C. On neurologic examination, the deep tendon reflex of the left ankle was negative. The initial laboratory findings are presented in Table 1. Chest computed tomography showed multiple ground-glass opacities and mosaic attenuation in both lungs. The patient experienced difficulty urinating, and urodynamic studies revealed bladder muscle dysfunction. Magnetic resonance imaging (MRI) of the whole spine showed mild disk protrusion at the C4-C5 level without spondylitis. A nerve conduction study (NCS) showed left peroneal neuropathy, bilateral lumbosacral radiculopathy (L4-S1), and bilateral distal median neuropathy at the wrist level. The MRI findings did not correlate with the patient's clinical symptoms or NCS findings. These findings, along with ANCA positivity, elevated acute phase reactants, polyneuropathy, and hematuria with proteinuria, suggested AAV. Thus, a renal biopsy was performed. The renal biopsy included 14 glomeruli and showed no immune complex deposits or crescent formation. However, segmental mesangial cell proliferation, focal atrophy and loss of tubules, and mononuclear cell infiltration with fibrosis were noted in the interstitium (Figure 1A). On electron microscopy, foot process effacement was observed; however, electron-dense deposits were not found (Figure 1B). On immunohistochemical examination, staining for immunoglobulin G (IgG), IgM, IgA, complement 3 (C3), C4, C1q, kappa light chain, or lambda light chain was not noted. The patient did not have any allergic disorders, peripheral eosinophilia, upper respiratory involvement, or granulomatous changes. Hence, granulomatosis with polyangiitis (GPA) and eosinophilic granulomatosis with polyangiitis (EGPA) were excluded. 2 The clinical and laboratory findings were compatible with a diagnosis of MPA. Ceftriaxone, azithromycin, and piperacillin/tazobactam were administered. Urine analysis showed no protein or RBCs (Figure 2).
There were no adverse events during glucocorticoid therapy, and the patient was satisfied with the current therapy because the previous symptoms improved.

| DISCUSSION

Rapidly progressive glomerulonephritis is a typical renal manifestation of MPA. 3 Typical pathological findings of AAV nephritis include glomerular necrosis and crescent formation with no immune complex deposition. This is also known as pauci-immune glomerulonephritis. 4 In 2010, Berden et al proposed a pathologic classification for ANCA-associated glomerulonephritis. The classification consisted of four categories: focal (≥50% normal glomeruli), crescentic (≥50% of glomeruli with cellular crescents), sclerotic (≥50% globally sclerotic glomeruli), and mixed, which was defined as a combination of <50% normal, crescentic, and sclerotic glomeruli. The extent of crescentic glomerulonephritis in MPA is essential because it predicts the prognosis of renal manifestations in AAV. 5 The renal biopsy of our patient had pauci-immune findings. However, contrary to the typical presentation, glomerular crescent formation was absent. Only foot process effacement and tubulointerstitial nephritis were noted in the present case. The foot processes of podocytes form the filter of the glomerular basement membrane, and changes in foot processes can be observed in various kidney diseases at an early stage. 6 In a typical MPA renal biopsy, the changes in podocytes are not apparent because MPA features advanced crescentic glomerulonephritis. Tubulointerstitial changes are also observed in the renal biopsies of patients with MPA. However, this histologic feature is considered a secondary change following crescent formation and the consequent rupture of Bowman's capsule. 7 A recent study discussed the importance of tubulointerstitial damage in the long-term prognosis of AAV-associated nephritis. It associated a specific tubulointerstitial biomarker (EGF mRNA expression) with the severity of renal function loss. 8

FIGURE 2. Clinical course, treatment schedule, and laboratory findings. On the day of initiating methylprednisolone 62.5 mg intravenously, the fever subsided, and antibiotics were stopped. After administration of methylprednisolone for 8 d, it was switched to prednisolone tablets 50 mg, and azathioprine 100 mg was added. On D14, a kidney biopsy was done. BT, body temperature; P/C, protein/creatinine.

In our case, renal histology showed foot process effacement without glomerular crescent formation. This indicated that podocyte changes may be an early-stage pathologic finding in MPA. Furthermore, tubulointerstitial nephritis may occur in MPA independently of crescent formation, and these cases may form a unique subgroup of MPA-associated nephritis. Distinguishing MPA nephritis from other alternative causes is essential because hypertension and diabetes mellitus can cause chronic kidney disease. The present case's findings differed from those seen in hypertensive nephropathy (characterized by arterial nephrosclerosis, renal arterial hyalinization, and thickening of the arterial wall). 9 In addition, the patient was newly diagnosed with diabetes mellitus after admission. Diabetic retinopathy, which occurs before diabetic nephropathy, was absent. Pathologic findings of diabetic nephropathy include glomerulosclerosis and Kimmelstiel-Wilson lesions, 10 which were not noted in the present case. Other possible causes of nephritis, such as allergic reactions or nephrotoxic drugs, were excluded.
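The Berden scheme summarized above is effectively a decision rule on glomerular counts, and can be sketched as below. The ordering of the checks follows the four categories as listed in the text, and the counts in the usage example are hypothetical, not this patient's biopsy.

```python
def berden_class(n_normal: int, n_crescentic: int, n_sclerotic: int,
                 n_total: int) -> str:
    """Berden et al. (2010) class from glomerular counts in a biopsy."""
    if n_normal / n_total >= 0.5:
        return "focal"
    if n_crescentic / n_total >= 0.5:
        return "crescentic"
    if n_sclerotic / n_total >= 0.5:
        return "sclerotic"
    return "mixed"   # <50% normal, crescentic, and sclerotic glomeruli

# Hypothetical biopsy: 14 glomeruli, 8 with cellular crescents
print(berden_class(n_normal=3, n_crescentic=8, n_sclerotic=1, n_total=14))
# -> "crescentic"
```

A biopsy such as the present one, with no crescents at all but prominent tubulointerstitial injury, sits awkwardly within this glomerulus-centred rule, which is precisely the gap in the classification that the case report highlights.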
The absence of a history of allergy and systemic eosinophilia excluded the diagnosis of EGPA. GPA usually involves the upper and lower respiratory tracts. Moreover, cavitary lesions or nodules are typically seen on chest radiography. In contrast, a diffuse ground-glass appearance without cavitary lesions is more common in patients with MPA. 11 Distinguishing between GPA and MPA is sometimes difficult. However, the clinical findings in the present case were closer to MPA than GPA.

There have been few similar documented cases of MPA and nephritis without crescentic glomerulonephritis. 12,13 Nakabayashi et al revealed that patients with intact cluster of differentiation (CD)34, the surface marker for glomerular endothelial cells, had tubulointerstitial nephritis-dominant pathology rather than crescentic glomerulonephritis. This suggested that the loss of CD34 is related to the glomerular damage found in MPA. 13 Gou et al found that the MPO antibody was able to bind to several epitopes of MPO. Moreover, a higher proportion of normal glomeruli and a lower proportion of cellular crescents were found in MPA patients whose MPO antibodies bound to the H4 fragment (a part of the C-terminal epitope of MPO) than in those with MPO antibodies unresponsive to the H4 fragment. 14 In addition, levels of interleukin-1β, toll-like receptor-4, and NOD-like receptor family pyrin domain-containing-3 in the tubulointerstitium correlated with the severity of tubulointerstitial injury in ANCA-associated nephritis. This indicated that the mechanism behind tubulointerstitial injury is independent of glomerular injury. 15 Previous studies have thus suggested that specific cell surface markers, MPO epitopes, or receptors are involved in the pathogenesis of MPA-associated tubulointerstitial nephritis by controlling the binding affinity of the MPO antibody.

| CONCLUSIONS

In conclusion, the present report described a patient with MPA and atypical nephritis, which predominantly demonstrated tubulointerstitial nephritis with podocyte changes. These histologic changes suggest a specific subcategory or an early change in MPA-associated nephritis. Physicians should consider MPA as a possible diagnosis even if crescentic glomerulonephritis is absent on renal biopsy.
Thin film instability with thermal noise

We study the effects of stochastic thermal fluctuations on the instability of the free surface of a flat liquid film upon a solid substrate. These fluctuations are represented as a standard Brownian motion that can be added to the deterministic equation for the film thickness within the lubrication approximation. Here, we consider that while the noise term is white in time, it is coloured in space. This allows for the introduction of a finite correlation length in the description of the randomized intermolecular interaction. Together with the expected spatial periodicity of the flow, we find a dimensionless parameter, $\beta$, that accounts for the relative importance of the spatial correlation. We perform here the linear stability analysis (LSA) of the film under the influence of both terms, and find the corresponding power spectra for the amplitudes of the normal modes of the instability. We compare this theoretical result with numerical simulations of the complete non-linear problem, and find good agreement for early times. For late times, we find that the stochastic LSA predictions for the dominant wavelength remain basically valid. We also use the theoretical spectra to fit experimental data from a nanometric melted copper film, and find the corresponding times of the evolution as well as the values of the parameter, $\beta$.

I. INTRODUCTION

A basic problem in the study of free-surface instabilities is the breakup of a flat thin liquid film on a solid substrate. Up to now, the description based on the hydrodynamic and deterministic Navier-Stokes equations has proven to be valid even down to the nanometric scale [1]. This has been accomplished by introducing the intermolecular interaction between the liquid and the substrate. However, it is known that at these scales the thermal agitation of molecules is relevant when describing the behavior of matter [2][3][4]. Thus, it is still necessary to investigate what role thermal fluctuations can play in the hydrodynamic description of these instabilities. The consequences of this additional effect can be of interest when designing microfluidic devices or electronic components whose function relies on thin film properties. In particular, we are interested in the effects that thermal noise may cause on films that are laterally much larger (up to microns) than their thickness.

The study of the effects of thermal noise in the hydrodynamical equations was first introduced phenomenologically many years ago by Landau [5] and Uhlenbeck [6]. This inclusion can also be derived from the deterministic Boltzmann equation by a long-wave approximation, which justifies its microscopic origin. These equations have been used in the study of turbulence in randomly stirred fluids [7], the onset of instabilities in Rayleigh-Benard convection [8], and Taylor-Couette flow [9]. The subject is of current interest because one of the issues to be considered when discussing the differences between the Navier-Stokes equations and molecular dynamics simulations is the effect of thermally triggered fluctuations in classical hydrodynamic continuum modeling. In particular, the application to unstable polymeric thin films has been the object of several theoretical and experimental studies [10][11][12].
In this problem, stochasticity has been analyzed using several techniques (such as Minkowski invariants [13]) to contrast theoretical predictions with experimental results [14], where stochasticity is mostly considered as a spatial white noise. On the other hand, the problem of unstable liquid metal films with thermal noise has not been the object of such thorough study. In this case, the solid coating is melted by a laser, and this introduces aspects that require the consideration of new factors, such as the spatial correlation. Since the deposition of energy is not strictly uniform throughout the illuminated spot, the liquid lifetimes of different regions are not the same. In this context, there is a mix of factors to be considered when looking at samples from different regions. In fact, the time evolution of the sample at a certain region, set by the corresponding liquid lifetime, is compounded with the possibility that the laser illumination induces thermal fluctuations that might not be the same for all regions. One of the aims of the present paper is to consider how different spatial correlations of the ensuing fluctuations could influence the final spectra of the unstable modes.

In this work we study the thin film instability by using a stochastic version of the thin-film equation based on the lubrication approximation for the incompressible hydrodynamic equations (see Section II). In Section III we perform the linear stability analysis of the thin film under perturbation with normal modes, and in Section IV we solve the stochastic thin-film equation numerically and compare the results with the linear solution obtained previously. A comparison of theoretical predictions with experimental Fourier spectra obtained from SEM images of the instability of a melted copper film is presented in Section V, and finally we summarize and discuss the results in Section VI.

II. THIN FILM EQUATIONS WITH STOCHASTIC NOISE

In the framework of continuum mechanics, the thermal agitation of the film molecules modifies the surface forces that describe the interaction between the fluid inside a volume element and its surroundings. Thus, an additional term, S, has to be considered in the expression of the Newtonian stress tensor in order to account for the effect of molecular thermal motion [2,15]. Within the lubrication approximation, the most relevant component of S is S_iz, where i can be either x or y and indicates a direction parallel to the substrate, while z stands for the normal one. Due to its randomness, S_iz has zero mean, and its correlator is given by Eq. (2), where i, j = x, y, μ is the fluid viscosity, and x = (x, y). Here, k_B and T are the Boltzmann constant and the fluid temperature, respectively. This property is a consequence of the fluctuation-dissipation theorem of statistical mechanics, which relates the fluctuations of physical quantities to the dissipative properties of the system. From a physical point of view, the hydrodynamical equations are only valid at scales large compared with the molecular scale. Strictly speaking, since thermodynamic equilibrium is characterized by a Gaussian spatial velocity distribution, uncorrelated noise is required. Therefore, the correlation function F(x − x′) in Eq. (2) must have a small width.
In the same approximation, the pressure terms in the isotropic part of the stress for a film of local thickness h(x, t) are given, as usual, by the capillary pressure, −γ∇²h (where γ is the surface tension), and the disjoining-conjoining pressure (van der Waals force), Π(h). Thus, the reduction of the Navier-Stokes equations under the lubrication approximation leads to Eq. (3) [15], where S_∥z = (S_xz, S_yz). Note that the new noise term in Eq. (3) enters as a rather complicated integral. It sums up z-uncorrelated noise terms over the film thickness, but it has the advantage that it maintains the conservative form of the equation. Thus, we now have a random current which acts as another driving force. It can be shown [15] that the Fokker-Planck equation derived from Eq. (3) leads to the same time evolution of the thickness distribution function as that of the Langevin equation with a single multiplicative conserved noise vector ξ(x, t) (Eq. (4)), whose amplitude satisfies ⟨ξ(x, t)⟩ = 0 together with the correlations given in Eq. (5) [6,15]. In general, ξ is a correlated noise in space, but a white noise in time.

Assuming symmetry along the y-axis, the one-dimensional version of Eq. (4) for h(x, t) is Eq. (6), where, for brevity, ξ(x, t) stands for ξ_x(x, t). Since the only characteristic length scale of an infinite film is its thickness, h0, we define dimensionless variables, denoted by a tilde and scaled with h0, where the scales of time, t0, and noise, ξ0, are to be determined in terms of the characteristic parameters of the problem. Note that we take the capillary pressure, γ/h0, as the scale for the disjoining pressure. Thus, the dimensionless version of Eq. (6) is Eq. (8). The temperature scale, T0, can now be obtained from Eq. (5). In fact, by defining the dimensionless temperature T̃ = T/T0, we obtain the corresponding dimensionless correlation, with T0 given by Eq. (11). Moreover, it is convenient to define the dimensionless noise amplitude as Θ = ξ̃/T̃, since the correlation is now normalized to one. Finally, the governing Eq. (8) becomes Eq. (13), where σ is defined by using Eqs. (9) and (11). As a result, we obtain a meaningful interpretation of the dimensionless constant σ: it gives the relative importance of the stochastic term (thermal noise) with respect to the deterministic part of the equation, in the form of the ratio between the thermal and surface energies of the system. Since typical experimental data yield σ of the order of 10⁻⁴ (or even less), we consider this parameter within that range of values in order to look for effects on the film instability.

As regards the form of Π, we take into account both the attractive and repulsive intermolecular liquid-solid forces, so that it includes both the disjoining and conjoining pressure terms, where h* is the dimensional equilibrium thickness and κ (with units of pressure) is given in terms of the Hamaker constant, A. In dimensionless variables, κ becomes K = κh0/γ, and the final version of Eq. (13) then follows, where we omit the tilde (˜) from now on for brevity. For the stochastic term, we consider, as usual, that Θ(x, t) is related to a standard Brownian motion which satisfies the increment property of Eq. (18), where N(0, Δ) is a normal distribution with zero mean and variance Δ. Here, the notation "∼" means an equality of distributions.
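Because the display equations of this section were lost in extraction, the following LaTeX block gives a plausible reconstruction, under standard stochastic lubrication theory, of the dimensionless governing equation and of σ; the form of σ is an assumption, but it reproduces the value σ = 2.48 × 10⁻⁴ quoted in Section V for γ = 1.304 N/m, h0 = 8 nm, and T = 1500 K.

```latex
% Sketch (assumption): standard dimensionless stochastic thin-film equation
% with conserved multiplicative noise and power-law disjoining-conjoining pressure.
\begin{align}
  \frac{\partial h}{\partial t}
    &= -\frac{\partial}{\partial x}\left[ h^{3}\,\frac{\partial}{\partial x}
       \left( \frac{\partial^{2} h}{\partial x^{2}} + \Pi(h) \right) \right]
       + \sqrt{\sigma}\,\frac{\partial}{\partial x}\left[ h^{3/2}\,\Theta(x,t) \right], \\
  \Pi(h) &= K\left[ \left(\frac{h^{*}}{h}\right)^{n}
       - \left(\frac{h^{*}}{h}\right)^{m} \right],
  \qquad
  \sigma = \frac{k_{B}\,T}{\gamma\, h_{0}^{2}} .
\end{align}
```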
III. LINEAR STABILITY ANALYSIS (LSA) OF THE STOCHASTIC THIN FILM EQUATION

At the beginning of the instability process the deviations, δh(x, t) = h(x, t) − h̄0, from the initial average film height are small (even if h̄0 = 1, we keep this notation for clarity). By expanding Eq. (8) up to first order in δh and Θ (assuming that the noise amplitude is small as well), we obtain the linear stochastic equation, Eq. (21). It is convenient to look for its solution in Fourier space, where ω(q) is the deterministic dispersion relation. The latter is given in [16] in terms of q_c and ω_m, the critical (marginal) wavenumber and the maximum growth rate, respectively. The wavenumber of maximum growth rate is q_m = q_c/√2. Since Eq. (21) is an equation of the Langevin type, its solution is given by the usual stochastic integral [17,18].

In order to study the instability evolution in spectral space, we calculate the autocorrelation, Eq. (26), whose terms A1-A4 on the r.h.s. are defined in Eq. (27). In order to calculate these terms, let us first consider the autocorrelation of the Fourier-transformed noise, Θ̃, where F̃(q) (Eq. (29)) is the Fourier transform of the correlation function F(u), with u = x − x′, and we have used ∫ e^{−iqx} dx = 2π δ(q) (integration over the whole real line). Note that only in the case of non-correlated noise do we have F̃(q) = 1; otherwise this transform has to be calculated (see Section III A). Since Θ is a white noise in time, its Fourier transform satisfies Eq. (30). Then, the autocorrelation of W(q, t) follows, where t ∧ t′ stands for the minimum of t and t′. On the other hand, the height-height correlation for the initial condition is given in Eq. (32), where F0(u) is the spatial correlation function of the initial condition, and F̃0(q) its Fourier transform. Consequently, we can write A1 in Eq. (27) accordingly (Eq. (33)), where we have considered the parity ω(−q) = ω(q). Regarding A2 and A3 in Eq. (27), we note that they vanish because the randomness of the initial condition is independent of the Brownian motion W. For the term A4 in Eq. (27), we note that, since the Brownian increments in different time intervals t and t′ are not correlated, only the common interval [0, t ∧ t′] contributes to the correlation of the product of the integrals. Besides, due to Eq. (30), only the terms with q′ = −q have nonzero correlation. Thus, we obtain Eq. (34); the last line is a consequence of one of the lemmas of Itô's integral [17,18], since the stochastic process in time is a white noise. Performing the integral yields Eq. (35). Finally, by replacing Eqs. (32), (33), and (35) into Eq. (26), we obtain the power spectrum, Eq. (37), with the definitions of Eq. (38). For the case of non-correlated noise, we have F̃(q) = 1, which reduces this equation to that obtained in [2].

Note that the first term of Eq. (37) corresponds to the spectrum predicted by the deterministic model (σ = 0). Here, we aim to compare the evolution of films with (σ > 0) and without (σ = 0) the stochastic term. To study the corresponding spectra separately, the film has to be perturbed at t = 0; otherwise, the film does not evolve in the deterministic case. Thus, we consider that the originally flat free surface of the film is slightly modified by a small-amplitude perturbation whose amplitudes B_k are random numbers with |B_k| < B_max = 10⁻³ h̄0. As a typical case, in the following calculations we choose a film with h* = 0.1 and θ = 30°, which yields [16] q_m = 0.151, q_c = 0.213, and ω_m = 5.19 × 10⁻⁴. The quantities λ_m = 2π/q_m = 41.6 and the characteristic time 1/ω_m give a rough idea of the spatial extension and time duration of the film breakup process. We find that L = 500 ≈ 12λ_m is large enough to produce results that are practically domain-size independent. The consequences for the stochastic process of using a correlated noise on a finite domain are analyzed in the next section.
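As an illustration of how the LSA spectrum can be evaluated, the sketch below (in Python; the functional form of Eq. (37) is assumed here, since the displayed equations were lost in extraction) combines the deterministic growth of the initial spectrum with the accumulated noise contribution. Setting Fq = 1 recovers the white-noise case, and the dispersion relation matches the quoted values q_c = 0.213, q_m = 0.151, and ω_m ≈ 5.19 × 10⁻⁴.

```python
import numpy as np

q_c = 0.213            # critical wavenumber for h* = 0.1, theta = 30 deg (quoted in text)
sigma, T = 2.5e-4, 1.0 # noise strength and dimensionless temperature (illustrative)

def omega(q):
    # Deterministic dispersion relation; maximum at q_m = q_c / sqrt(2).
    return q**2 * (q_c**2 - q**2)

def spectrum(q, t, F0=2e-4, Fq=1.0):
    # Assumed form of Eq. (37): S = F0 e^{2wt} + sigma*T*q^2*Fq*(e^{2wt} - 1)/(2w).
    q = np.asarray(q, dtype=float)
    w = omega(q)
    w_safe = np.where(np.abs(w) > 1e-12, w, 1.0)            # avoid 0/0 at w = 0
    growth = np.where(np.abs(w) > 1e-12,
                      np.expm1(2.0 * w_safe * t) / (2.0 * w_safe), t)
    return F0 * np.exp(2.0 * w * t) + sigma * T * q**2 * Fq * growth
```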
A. Correlated stochastic noise in a finite domain

Here, we will assume that the correlation function F(x − x′) in Eq. (5) is L-periodic. Moreover, we give meaning to the stochastic process Θ(x, t) in terms of a Q-Wiener process (see Eq. (18)), where the β_k (with k integer) form a family of mutually independent Brownian motions with respect to time, and the dot stands for the time derivative. The constants χ_k are the eigenvalues of the Hilbert-Schmidt operator Q, with a corresponding complete system of orthonormal eigenfunctions, g_k(x). In fact, this can easily be verified by considering the complex exponential eigenfunctions, with the eigenvalue given by Eq. (44). Here, we used the parity property F(u) = F(−u), and the x-dependence at the limits of integration has been omitted due to the assumed periodicity over a distance L (which lets us take x = 0 in both limits without loss of generality). Thus, Eq. (44) allows us to obtain all the eigenvalues for a given correlation function, F(x − x′). Note that this equation is the finite-domain version of Eq. (29) for a discrete spectrum, so that the correlated-noise effect is embedded in the discrete spectrum of the Hilbert-Schmidt operator Q.

Now, we choose the particular correlation function F(u, ℓ_c) of [15], where ℓ_c is the correlation length and Z is a normalization constant such that ∫₀ᴸ F(u, ℓ_c) du = 1. As shown in Appendix A, we find that the eigenvalue in Eq. (44) then takes the form of Eq. (46), where β = L/(2ℓ_c); the resulting eigenvalue spectra are shown in Fig. 1.

The actual effect of ℓ_c on the evolution of the instability is clearly observed in the power spectrum of the perturbation, S(q, t), as predicted by the linear stability analysis in Section III. Figure 2 shows S versus q at t = 200 and t = 2000, as given by Eqs. (37) (t = t′) and (38). As expected from the analysis of Fig. 1, the inclusion of stochastic noise increases the amplitude of the modes q > q_c (dotted vertical line), which are otherwise stable in the deterministic case. Note that β = 0 (i.e., ℓ_c = ∞) is coincident with the case σ = 0. This increment grows with β, that is, as the noise becomes closer to white noise (ℓ_c → 0). Taking very large values of ℓ_c, i.e., of β (e.g., β = 50), is equivalent to disregarding the noise altogether (σ = 0), at least for short wavelengths, since both spectra are practically coincident for early times and only differ at later times for smaller q's.

In Fig. 4 we show the time evolution of the wavenumber of the maximum of the spectrum, q_max(t), for different values of β. Note that for small β (say, β < 5), we find q_max ≈ q_m, in agreement with the deterministic prediction. As β increases up to β ≈ 20, we find that q_max < q_m and that it approaches q_m from below. For β ≳ 20, the initial behaviour of q_max becomes closer to q_m. Finally, for β ≳ 27, q_max > q_m for all times, and it approaches q_m from above.
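For concreteness, the sketch below assumes an exponential correlation kernel F(u) ∝ exp(−|u|/ℓc) (the printed form of Eq. (45) was lost in extraction), for which the periodic Fourier coefficients entering Eq. (44) take a Lorentzian-like form; the identification β = L/(2ℓc) follows the usage in Section V.

```python
import numpy as np

def chi_k(k, L, ell_c):
    # Eigenvalue of Q for the k-th mode (q_k = 2*pi*k/L), assuming an exponential
    # kernel normalized so that chi_k -> 1 for all k in the white-noise limit
    # (ell_c -> 0) and chi_k -> 0 for k != 0 as ell_c -> infinity (beta -> 0).
    q_k = 2.0 * np.pi * k / L
    return 1.0 / (1.0 + (q_k * ell_c) ** 2)

# e.g., the case of Fig. 7: L = 500, ell_c = 10, so beta = L / (2 * ell_c) = 25
```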
IV. NUMERICAL IMPLEMENTATION IN A FINITE DOMAIN

In order to understand the nonlinear effects in the film instability, we perform numerical simulations of the evolution of the film governed by the nonlinear Eq. (8). The calculations are carried out in a computational domain defined by 0 ≤ x ≤ L, which is divided into cells of size Δx (typically, we use Δx = 0.1 = h*, which assures convergence of the numerical scheme [19]). Equation (8) is discretized in space using a central finite-difference scheme. Regarding the spatial dependence of the noise term, we use here only the sinusoidal modes in Eq. (42), since no-flow boundary conditions are imposed at x = 0, L. Time discretization is performed using an implicit Crank-Nicolson scheme with a relaxation factor equal to 1/2. Thus, the time evolution of the stochastic term is performed according to Stratonovich rules. We note that all the results presented in this paper are fully converged, as verified by grid refinement; more details about numerical issues can be found in [20]. Due to the discretization of the equations, the minimum possible value of the correlation length is ℓ_c = Δx (= 0.1 in our case), since the discretized equations cannot represent any correlation below this limit.

To discretize the Wiener processes in time within the framework of Itô calculus, we replace β̇_k(t_n) at a time step t_n by the corresponding finite difference. The difference Δβ is normally distributed, and its variance is given by the time increment Δt_n. Thus, we approximate Eq. (48) by Eq. (49), where N_k^n is a computer-generated random number that is approximately N(0, 1)-distributed, i.e., its histogram is close to a Gaussian with zero mean and unit standard deviation (we used the GASDEV routine from Numerical Recipes [21]). Altogether, the space-time discrete noise term, Eq. (39), is given by Eq. (50), where χ_k is given by Eq. (46) and g_k(x) by Eq. (42). Thus, Eq. (50) is used to calculate the noise term in Eq. (8). Each realization of the stochastic process requires a given seed for N. Some of the numerical results presented below correspond to a single realization, and others to the average of 20 realizations (different seeds).

A typical example of the evolution of a film with and without noise effects for a single realization (i.e., a given seed) is shown in Fig. 5. Note that, for the same time of the evolution, the amplitudes of the corrugations are much larger for σ > 0 (Fig. 5e-h) than for σ = 0 (Fig. 5a-d). Thus, one of the effects of the noise is to decrease the duration of the breakup process. In order to study how the correlated noise affects the time evolution of the instability, we first concentrate on the time it takes for the first rupture of the film to appear. By first-rupture time we mean the moment when the film first reaches its smallest possible value, which is h*. Figure 6a shows the time evolution of the minimum of h(x, t), namely h_min(t). Clearly, as ℓ_c increases the breakup time, t_b, increases, such that β = 2.5 (ℓ_c = 100) is practically coincident with the case without noise (σ = 0), which has the largest time. For σ > 0, this time decreases with increasing σ.

A parameter of interest for the drop formation problem after the first breakup is the evolution of the maximum thickness as the final static configuration is reached. In Fig. 6b we show the average of h_max(t) over 20 realizations for different values of β. We also plot h_min(t) for reference, and define the corresponding breakup times, t_b, by h(t_b) = 1.05 h* = 0.105. Figure 6b shows that the evolution of h_max(t) is in fact very weakly dependent on β (i.e., ℓ_c), since the curves of h_max versus t − t_b are practically superimposed. This result implies that the noise does not have any effect on the drop formation process after the breakup of the film, that is, during the dewetting stage following the pinch-off.
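A minimal sketch of the space-time discrete noise term of Eq. (50) follows; the sinusoidal mode normalization is an assumption, and chi can be supplied by the eigenvalue routine sketched in the previous section.

```python
import numpy as np

def noise_term(x, L, dt, chi, rng):
    # Theta(x, t_n) ~ sum_k sqrt(chi_k) g_k(x) N_k^n / sqrt(dt)  (cf. Eq. (50)),
    # with sinusoidal modes g_k (Eq. (42)) and one N(0,1) draw per mode and step.
    out = np.zeros_like(x, dtype=float)
    for k, c in enumerate(chi, start=1):
        g_k = np.sqrt(2.0 / L) * np.sin(2.0 * np.pi * k * x / L)  # assumed normalization
        out += np.sqrt(c) * g_k * rng.standard_normal()
    return out / np.sqrt(dt)

# Each realization fixes the seed: rng = np.random.default_rng(seed)
```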
Now we aim to study the effects of the correlation length in both the linear (early) and nonlinear (late) stages of the instability. To do so, we calculate the Fourier spectra of the thickness profiles for different times. In Fig. 7 we show the evolution of the spectra with ℓ_c = 10 (β = 25) for both early and late times. All spectra correspond to an average of 20 realizations, and no adjustable parameter has been used (the scales for S are different from those used in previous sections because a different normalization was employed in the Fourier transform of the numerical results). For early times, the agreement between the numerics and the linear stability prediction, Eq. (37), is very good if one considers that some initial noise is introduced in the numerics. For larger times, the peaks of both spectra approach q_m, though the numerics show higher and somewhat wider spectra than those predicted by the LSA. A similar situation is observed for smaller and larger values of β, as shown in Fig. 8. The main difference is that the LSA overestimates the amplitude of the peaks with respect to the numerical ones for small β (Fig. 8a), while the contrary occurs for large β (Fig. 8b).

V. COMPARISON WITH EXPERIMENTS

Previous comparisons between experiments and stochastic models have studied the instability of polymeric films on silicon oxide substrates [12,14]. However, these comparisons were made without considering spatial correlation, i.e., assuming both spatial and temporal white noise. Also, they mainly employed the integration of the spectra S(q) over all possible values of q, and quantities derived from it. Here, instead, we apply the theoretical model described above to experimental results for unstable liquid metal films to evaluate the importance of spatial correlations when considering stochastic instabilities. In order to do this, we do not restrict ourselves to some integrals of the spectra, but employ their complete profiles as a function of the wavenumber, q.

Our experimental data correspond to copper thin films a few nanometers thick that are melted by illumination with pulses of an excimer laser lasting some tens of nanoseconds. During these pulses, the metal is in a liquid state, and thus the present hydrodynamic model can be applied. In this configuration, the liquid lifetime of the melted copper is related to the local temperature of the film, i.e., to the spatial distribution of the laser intensity, which follows a radially symmetric Gaussian profile. After the pulse, the metal solidifies, leaving a distinct pattern of holes, drops, and/or ridges depending on how long the metal has been in the liquid state. More information about this setup and details of the technique can be found elsewhere [22][23][24][25][26]. Since the outer regions of the laser spot have shorter liquid lifetimes, one can associate these regions with earlier times of the evolution and, consequently, central regions with later times. Since the laser spot is relatively large, the SEM images of these experiments have the advantage of offering more spatial information than other setups [12]. Nevertheless, they have the drawback that the times corresponding to every stage of the evolution are unknown, even if it is possible to order the time sequence according to the distance of the image with respect to the center of the laser spot [27].

The goal of the following comparison is to show that the experimental observations represented by the spectra require not only a stochastic temporal evolution, but also some spatial correlation in the thermal noise, in order to reproduce the full results. In particular, we will concentrate here on the data reported in [27], where the SEM images of the evolving melted metal were analyzed by using a bidimensional (2D) discrete Fourier transform (DFT).
Since the 2D spectra turned out to be radially symmetric in wavenumber space, (q_x, q_y), the results in Fig. 5 of [27] were reported as amplitudes, A_2D, versus k = (q_x² + q_y²)^{1/2}. These amplitudes were in fact averaged on circles of radius k, and therefore the corresponding 1D amplitude is obtained as A_1D = k A_2D² (see symbols in Fig. 9). The symbols at both small k and small amplitudes (S < 0.15) are an artifact of the finite length of the sample in the Fourier calculation. The parameters for liquid copper are γ = 1.304 N/m and μ = 4.38 mPa s. Assuming T = 1500 K as a typical temperature of the film, with thickness h0 = 8 nm, we have σ = 2.48 × 10⁻⁴ and t0 = 0.08 ns. Regarding the intermolecular interaction with SiO2, we use (n, m) = (3, 2), h* = 0.1 nm, and A = 2.58 × 10⁻¹⁸ J (as suggested in [27]). Thus, we have q_c = 63.4 μm⁻¹ and q_m = 44.8 μm⁻¹ (dotted and dashed lines in Fig. 9).

In order to perform the comparison of the experimental and theoretical spectra (see Eq. (37)), we choose a constant value for the unknown F̃0(q), namely F̃0(q) = 2 × 10⁻⁴, and use the same normalization factor for the DFT as in [27]. Thus, we are left only with t and β as adjustable parameters. The fitting values for the spectra in Fig. 9 are given in Table I. The low local maximum at k ≈ 100 μm⁻¹ is related to the size of the drops, which is smaller than the distance between them [27].

Interestingly, we find not only increasing values of time as one moves from the outer to the inner regions (as expected), but a decrease of the corresponding values of β is also required for the fitting. This implies that the stochastic noise is somehow different in the sampled regions, which, in turn, correspond to distinct liquid lifetimes. However, the relatively large values of β for the first three images suggest that the noise is practically white at the beginning, and that spatial correlation becomes important only at larger times, when β decreases significantly. In general, it is then expected that the spectra for earlier times (i.e., near the outer borders of the laser spot) correspond to a quasi-white noise, but the noise becomes more and more spatially correlated as one goes toward the center of the spot (i.e., as the liquid lifetimes increase). In fact, the correlation length, ℓ_c, can be estimated considering the value of β and the length of the image, which can be taken as the periodicity length, L. For the images corresponding to Fig. 9 we have L = 2.13 μm, so that we obtain ℓ_c = L/(2β), as shown in Table I. Moreover, note that ℓ_c finally approaches λ_m (= 144 nm), which is also very close to λ_m^exp (= 165 nm). Thus, ℓ_c turns out to be very close to the average distance between the drops.
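The correlation lengths in Table I follow directly from the fitted β values; a small sketch (with illustrative β values, since the actual Table I entries are not reproduced here) shows the conversion. Note that β ≈ 7.4 would give ℓ_c ≈ 144 nm, the value of λ_m quoted above.

```python
# ell_c = L / (2 * beta), with the image size L = 2.13 um taken as the
# periodicity length; the beta values below are illustrative, not Table I entries.
L_nm = 2130.0
for beta in (50.0, 25.0, 10.0, 7.4):
    print(f"beta = {beta:5.1f}  ->  ell_c = {L_nm / (2.0 * beta):6.1f} nm")
```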
VI. SUMMARY AND CONCLUSIONS

In this work we have considered the effect of correlated thermal noise on the instability of a thin liquid film under the action of viscous, capillary, and intermolecular forces, by adding a stochastic term to the lubrication-approximation equation for the film thickness. This term depends on the noise amplitude, which is spatially self-correlated within a characteristic microscopic distance, ℓ_c. The linear stability analysis (LSA) of the resulting equation shows that this yields a new factor in the stochastic part of the instability spectrum, given by the Fourier transform of the correlation function, which can be expressed in terms of the eigenvalues of the associated Hilbert-Schmidt operator.

In order to observe the nonlinear effects on the evolution of the instability, we also perform numerical simulations of the full lubrication equation using different seeds to generate the random sequence of amplitudes for the stochastic term (so that a realization corresponds to each seed), and we average the resulting power spectra to obtain a representative spectrum to be compared with the one predicted by the LSA. As expected, we find good agreement with the LSA for early times. Interestingly, for late times we find that the wavenumber of the maximum of the spectra tends to approach the deterministic value, q_m, corresponding to the LSA without stochasticity. Since the LSA with stochasticity also tends to q_m, we conclude that the typical lengths of the patterns in advanced stages of the instability with stochasticity remain close to the wavelength of maximum growth rate of the linear deterministic modes.

Therefore, encouraged by this result, we also compare the LSA prediction with the experimental data from the instability of melted copper films on a silicon oxide substrate. These data correspond to the early stages, where the holes start to grow, as well as to the stages of drop formation, i.e., after the film has passed through the processes of breakup and dewetting. A special feature of these data is that they come from different spatial regions of the laser spot, which therefore received distinct illuminations. Thus, different times of a single evolution can be attributed to each region. These times were estimated here by fitting the LSA power spectrum to each experimental one, with its corresponding value of β. As a result, we found that the early stages of this experiment evolved with a practically white noise in space, while a strong spatial correlation appeared in the spectra at late times. This shows that the explanation of experimental results at the nanometric scale requires the inclusion of some thermal noise in the modeling. In particular, correlated noise seems to be an important factor in the central regions of the laser spot, i.e., those with larger liquid lifetimes. We believe that our results justify further testing with more detailed experimental data.
Accelerated versus delayed initiation of renal-replacement strategies following cardiac surgery

Graphical abstract: Visual representation of the impact of, and controversy surrounding, the timing of initiation of renal-replacement therapy for acute kidney injury after cardiac surgery. On the left side of the image is the impact of acute kidney injury on postoperative heart surgery patients. On the right are the considerations a provider must undertake when determining the appropriate timing of renal-replacement therapy for individual patients.

Feature Editor Note: Cardiac surgery-associated acute kidney injury (AKI) is common and has significant prognostic implications, including the substantial long-term effects of persistent renal failure and a clear impact on mortality. Moreover, fluid overload is a frequent consequence of perioperative resuscitation in the early stages of recovery after cardiac surgery, specifically after complex procedures with massive fluid shifts. An important component of the approach to managing severe AKI is the use of renal-replacement therapy (RRT). The timing of initiation of RRT for AKI, with or without volume overload, is a controversial dilemma that we encounter frequently in perioperative cardiothoracic care. Broadly speaking, there is no difference in mortality when evaluating accelerated or early RRT initiation versus delayed initiation (starting RRT only when absolutely indicated) in critically ill patients, and data in cardiac surgery are conflicting at best. There may be benefit from early RRT in subsets of cardiac surgical patients, but there also appears to be a significant number of patients who recover renal function and in whom initiation of RRT could be detrimental or counterproductive. In this invited expert opinion paper, Dr Merritt-Genore and colleagues review this controversial and timely topic. The authors start by acknowledging the different definitions of AKI and the fact that there are currently no clear tools to determine, at the time of diagnosis, whether AKI is likely to be transient or persistent, data that would help determine when a careful watchful approach would be best versus early RRT. The authors go on to review the data for the timing of RRT in critically ill patients in general, followed by segments specific to cardiac surgery and special patient populations, such as those with left ventricular assist devices, heart transplant recipients, and those receiving extracorporeal membrane oxygenation complicated by AKI. The debate is not settled. It is clear that having more tools to determine the probability of persistent AKI would help solve the controversy, ideally allowing RRT to be provided at the right time to the right patient.

Juan N. Pulido, MD

Acute kidney injury (AKI) is known to increase length of hospital stay, morbidity, and mortality following cardiac surgery. Multiple definitions exist for the designation and classification of AKI. For instance, the Society of Thoracic Surgeons establishes acute renal failure as a 2-fold rise in serum creatinine (SCr), SCr >4.0 mg/dL (with a minimum rise of 0.5 mg/dL), or new hemodialysis. 1 The definition of AKI from Kidney Disease: Improving Global Outcomes, in contrast, includes an increase in SCr of >0.3 mg/dL within 48 hours, an increase to >1.5 times baseline SCr within 7 days, or urine output <0.5 mL/kg/h for 6 hours, and it further characterizes AKI into more granular stages.
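As a concrete reading of the KDIGO definition just quoted, the following sketch (in Python; the function name and input conventions are hypothetical, and the staging thresholds are deliberately omitted) screens a patient for AKI.

```python
def meets_kdigo_aki(scr_now, scr_48h_ago, scr_baseline_7d, urine_ml_kg_h_6h):
    """KDIGO AKI screen as summarized above: SCr rise >0.3 mg/dL within 48 h,
    or SCr >1.5x baseline within 7 days, or urine output <0.5 mL/kg/h for 6 h."""
    return ((scr_now - scr_48h_ago) > 0.3
            or scr_now > 1.5 * scr_baseline_7d
            or urine_ml_kg_h_6h < 0.5)

# e.g., meets_kdigo_aki(1.9, 1.4, 1.0, 0.8) -> True (rise of 0.5 mg/dL in 48 h)
```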
While the criteria used to define AKI vary, 2 the incidence of kidney injury remains high in cardiac surgery, with between 4% and 35% of patients experiencing some form of AKI 3-5 and between 2% and 20% of patients requiring renal-replacement therapy (RRT) in the postoperative period. In this population, RRT is independently associated with an up to 8-fold increase in mortality. 4

The causes of AKI after cardiac surgery are many. In addition to known preoperative risk factors (pre-existing renal dysfunction, recent contrast exposure, diabetes, advanced age), perioperative myocardial injury and fluctuations in cardiac output may lead to increased sympathetic activity, increased production of vasopressin, and activation of the renin-angiotensin-aldosterone system. Added to this are the effects of the cardiopulmonary bypass circuit on arterial resistance and subsequent volume retention postoperatively. All of this may create a vicious cycle of repeated kidney injury with renal sodium and water retention and extracellular fluid expansion. Perioperative fluid overload is associated with worse outcomes and is a primary risk factor for multiorgan failure, including acute renal failure, 6 yet the timing and intensity of RRT in the postoperative cardiac patient remain controversial, with many factors playing into the decision for initiation, such as acidosis, urine output, fluid balance, and the overall clinical picture (Figure 1). In addition, there are few data to guide the clinician's determination as to whether the AKI is transient and retains capacity for early recovery or whether the injury will progress to azotemia and volume overload, perhaps despite a lower SCr.

In this Expert Review, we will examine and summarize contemporary studies focusing on accelerated versus delayed strategies for RRT following cardiac surgery. As the bulk of high-quality studies come from critically ill populations (and not specifically cardiac surgery patients), we will examine the data separately and discuss what can be generalized to individual populations.

CRITICALLY ILL PATIENTS

Contemporary studies examining the impact of the timing of RRT have tended to focus on critically ill patients, which may or may not include cardiac surgical patients specifically. These studies have generally concentrated on survival benefit and recovery of renal function as primary outcomes. A large 2020 meta-analysis including 10 randomized controlled trials (RCTs) and 2143 critically ill patients with severe AKI found that, in the absence of urgent indications for RRT, there was no survival benefit to early RRT initiation. Somewhat surprisingly, this study also showed that up to 42% of patients in the late RRT group recovered renal function and never went on to require dialysis. 7 A second recent large meta-analysis of 18 RCTs confirmed similar findings and reinforced the idea that a delayed strategy may obviate the need for RRT in critically ill populations. 8 The recently published STandard versus Accelerated initiation of Renal Replacement Therapy in Acute Kidney Injury (STARRT-AKI) multinational RCT randomized nearly 3000 critically ill patients (including 230 cardiac surgery patients) to early RRT or standard management of AKI, in which RRT was discouraged unless standard criteria for initiation were met.
9 The primary end point, death at 90 days, was similar between groups (43.9% vs 43.7%, P = .92). Prolonged RRT was observed at greater rates among 90-day survivors in the accelerated-strategy group (10.4% vs 6%; relative risk, 1.74; 95% confidence interval [CI], 1.24-2.43). While the STARRT-AKI trial did include 230 patients who underwent cardiac surgery, there has not yet been a subgroup analysis of the cardiac surgery group. Significant heterogeneity in the population may limit the applicability of the results; thus, it is difficult to discern outcome differences in cardiac surgery patients specifically based on the results of the STARRT-AKI trial.

Selected studies have also examined the effect of fluid balance in critically ill populations in relation to the timing of RRT initiation. In a large retrospective study of 18,084 critically ill patients with AKI from various etiologies, 10 positive fluid balance was associated with increased short- and long-term mortality (adjusted hazard ratio, 1.3-1.92) compared with even fluid balance. Negative fluid balance did show increased mortality risk using Gray's statistical model, but not using logistic regression analysis. Interestingly, recovery of renal function was similar between all groups. A contrasting prospective study of 618 critically ill patients demonstrated that fluid overload (>10% of body weight) at the time of RRT initiation was associated with an odds ratio of death of 2.07, and that patients with fluid overload at the time of peak creatinine were less likely to recover kidney function long-term. 11

CARDIAC SURGERY PATIENTS

There are fewer studies that specifically focus on the timing of RRT in cardiac surgery patients, and their conclusions are somewhat conflicting. For instance, a 2016 retrospective study by Yang and colleagues 12 included intraoperative and postoperative factors in the selection of patients for pre-emptive and standard RRT and found that a pre-emptive strategy had reduced rates of mortality. These findings were attributed to the reduction of fluid overload with early RRT compared with the standard treatment group. A 2014 meta-analysis of 841 cardiac surgery patients also reported lower mortality in patients who received earlier RRT compared with standard therapy (odds ratio [OR], 0.29; 95% CI, 0.16-0.52; P < .0001) and suggested a trend toward shorter length of stay in the intensive care unit (ICU). This meta-analysis included studies dating back as far as 1950, however, and a large degree of heterogeneity was observed, making generalization difficult. 13 Another large contemporary meta-analysis 14 included 1479 cardiac surgery patients in 15 different studies and concluded that AKI treated with early RRT was associated with decreased 28-day mortality (OR, 0.36; 95% CI, 0.23-0.57) and shortened ICU and hospital lengths of stay. A subgroup analysis was performed comparing the outcomes of the cohort studies with those of the 5 RCTs. While the cohort studies supported the benefit of early RRT, the RCT analysis did not show a statistically significant decrease in mortality (OR, 0.41; 95% CI, 0.14-1.24).
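The >10% fluid-overload threshold used in the prospective study above is conventionally computed as cumulative fluid balance relative to admission weight; a hedged sketch of that standard calculation follows (assuming 1 L of net balance corresponds to roughly 1 kg).

```python
def percent_fluid_overload(total_in_l, total_out_l, admission_weight_kg):
    # Cumulative fluid balance as a percentage of admission body weight;
    # values >10% correspond to the threshold cited in the study above.
    return 100.0 * (total_in_l - total_out_l) / admission_weight_kg

# e.g., percent_fluid_overload(18.0, 9.5, 80.0) -> ~10.6 (% fluid overload)
```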
There was also an earlier meta-analysis 15 in 2008 that performed a subgroup analysis of RCTs and came to the similar conclusion that there was no survival benefit with earlier RRT initiation (RR, 0.64; 95% CI, 0.64-1.05; P = .08). How do we reconcile this difference and apply these data to our own patients? Dropout bias, frequently seen in cohort studies, can negatively impact studies looking at the timing of an intervention, because patients likely have a better prognosis if their disease process improves before initiation of the intervention. This is illustrated in the largest RCT on cardiac surgical patients to date, where more than one third of patients in the delayed RRT arm survived without ever needing RRT. 16

In 2015, Crescenzi and colleagues 17 attempted to avoid this dropout bias by prospectively enrolling 1658 cardiac surgery patients at the time of ICU admission, before any signs of postoperative renal insult. A total of 56 patients (3.6%) required RRT after cardiac surgery within this study. Patients in the "late" group (oliguria for >12 hours) required significantly less RRT than those in the "early" group (oliguria for >6 hours), highlighting again the possibility that patients may recover renal function before requiring RRT. There was no significant difference in mortality or in ICU or hospital length of stay between groups. Importantly, the authors looked at the "late" group and determined that 21 patients who did not start RRT would actually have received RRT had they been in the "early" group, as the duration of their oliguria would have met criteria. Although overall mortality was similar between groups, a survival benefit was observed in the "late" RRT group for a subset of patients with pre-existing renal dysfunction. This may suggest a benefit of "watchful waiting" for patients undergoing cardiac surgery with preoperative renal dysfunction, but more work is needed in this area.

Each of these trials uses different criteria, such as urine output, SCr, and blood urea nitrogen, for initiation of RRT. Even the definition of "early" versus "late" initiation of RRT varies significantly between studies, with some trials using time from decreased urine output and others time after acute renal failure was diagnosed. Use of these parameters does not allow many other complex factors to be considered. Importantly, and specifically in the cardiac postoperative period, these factors often drive AKI but are not traditional indicators for RRT. In patients undergoing cardiac surgery, volume shifts are common in postoperative recovery and are traditionally managed expectantly with diuretics. When renal function is compromised, excessive volume retention may have deleterious effects on a depressed left or right ventricle, and thus some have advocated earlier initiation of RRT to diminish these effects.

SPECIAL CIRCUMSTANCES

The development of AKI in postoperative heart transplant patients is also a serious concern, with up to 12% of patients requiring peritransplant RRT, which is associated with greater hospital and 1-year mortality in these patients. 18 Shen and colleagues 19 presented a retrospective study of 76 patients over a 10-year period to determine the optimal timing for initiation of RRT in cardiac transplant patients with postoperative AKI. Patients with earlier RRT had significantly lower mortality (39.1% vs 63.3%, P = .039), shorter ICU and hospital lengths of stay, and lower overall cost.
Contrary to some of the previous studies, recovery of renal function was more common in the accelerated RRT group than in the delayed cohort in this study. Likewise, a 2020 retrospective study by Liu and colleagues 20 used Kidney Disease: Improving Global Outcomes staging for AKI to determine the need for RRT in 184 patients with a left ventricular assist device. In this cohort, an "early" strategy was associated with a trend toward shorter ICU and hospital lengths of stay, as well as a lower need for permanent RRT, with similar mortality between groups. Further work is needed in the area of transplantation and left ventricular assist devices, including high-quality prospective studies to examine the appropriate timing of initiating RRT to reduce preload in patients with an acutely or chronically failing right ventricle.

There has also been the suggestion that RRT may help improve the prognosis of critically ill patients by filtering inflammatory markers in addition to managing volume overload. Several RCTs have examined RRT and immunomodulation in critically ill patients with sepsis. Targeted cytokine size is important, with many filtration membranes still allowing passage of macromolecules such as tumor necrosis factor-alpha, interleukin-6, and interleukin-1. Heterogeneity in patient populations and lack of uniformity of study designs have limited the applicability of individual studies, and the results remain somewhat contradictory. 21,22 One RCT examining cardiac surgery patients with severe shock and renal dysfunction targeted RRT to remove proinflammatory mediators and reduce vasopressor requirements but found no significant differences between the RRT and standard groups for mortality or renal recovery. 16 These theories of cytokine reduction have recently been debated in the context of the coronavirus disease 2019 pandemic, and data are still emerging at this time.

Lastly, it is well established that renal dysfunction occurs commonly in patients supported on extracorporeal membrane oxygenation (ECMO). A large recent meta-analysis that included 5896 adult patients on ECMO demonstrated an 81% increased risk of death (RR, 1.81; 95% CI, 1.56-2.08; P < .001) for patients who required new RRT while on support. 23 Fluid balance has been identified as a predictor of mortality in patients receiving ECMO/RRT. 24 Although there has been a trend toward improving survival in patients receiving ECMO/RRT over the past 20 years, further work is certainly needed in this realm to define indications for initiation of RRT and methods to decrease renal dysfunction in this complex population.

CONCLUSIONS

It is clear that the long-term effect of AKI after cardiac surgery is substantial. While the use of RRT can substitute for key portions of renal function and help maintain euvolemia in the immediate postoperative phase, mortality is greater in patients who suffer AKI after cardiac surgery regardless of whether RRT is used. However, patients who do not recover to their baseline renal function may have twice the long-term mortality risk of those who do recover. 25 Increasing evidence from RCTs in critical care populations suggests that a delayed strategy may allow renal recovery and a high rate of avoidance of long-term RRT dependence, 7,8,11,16 without an increased risk of mortality. Further high-quality studies within this specific cohort are needed before this debate can be settled.
Future studies should look not only at the timing and modality of the RRT used but also at patient factors (ventricular and pulmonary function, in addition to standard preoperative risk factors) and perioperative events (longer pump time, coagulopathy requiring blood transfusions, fluid balance, hypotension requiring inotropic support, expected postoperative course, and mechanical support) that may lead to the development of AKI and the need for RRT. Presently, the timing of RRT after cardiac surgery continues to generate debate owing to the lack of consensus, and it requires an individualized, multidisciplinary approach.
Formation and fragmentation of doubly and triply charged ions in the negative ion spectra of neutral N-glycans from viral and other glycoproteins

Abstract

Structural determination of N-glycans by mass spectrometry is ideally performed by negative ion collision-induced dissociation because the spectra are dominated by cross-ring fragments leading to ions that reveal structural details not available by many other methods. Most glycans form [M − H]− or [M + adduct]− ions, but larger ones (above approx. m/z 2000) typically form doubly charged ions. Differences have been reported between the fragmentation of singly and doubly charged ions, but a detailed comparison does not appear to have been reported. In addition to [M + adduct]− ions (this paper uses phosphate as the adduct), other doubly, triply, and quadruply charged ions of composition [Mn + (H2PO4)n]n− have been observed in mixtures of N-glycans released from viral and other glycoproteins. This paper explores the formation and fragmentation of these different types of multiply charged ions, with particular reference to the presence of diagnostic fragments in the CID spectra, and comments on how these ions can be used to characterize these glycans.

Introduction

N-Linked glycans are those attached to proteins in an Asn-Xxx-Ser/Thr motif, where Xxx is any amino acid except proline. They are commonly examined by matrix-assisted laser desorption/ionization (MALDI) or electrospray ionization (ESI), the latter method being conveniently interfaced with high-performance liquid chromatography (HPLC). A potential problem with ESI is the preferential production of ions in several charge states, thus inhibiting acquisition of a quantitative glycan profile from mixtures. MALDI produces essentially only singly charged ions, allowing better profiling of mixtures, but suffers from the disadvantage that sialylated glycans tend to eliminate sialic acids. This latter problem, however, can be overcome by permethylation or, better, by ester [16] or amide formation from the acid groups of the sialic acids. This latter derivatization method has been developed into a linkage-specific technique [17][18][19] and incorporated into many HPLC/MS analyses.

Although positive ion fragmentation mass spectrometry, used by most of the above techniques, provides much structural information, better information can be obtained by negative ion methods [20]. Neutral glycans (those not containing acid groups such as Neu5Ac) are best examined with tandem instruments by forming adducts of the type [M + An]n− (where A is the anion) with anions such as chloride, nitrate, or phosphate, in order to inhibit fragmentation in the mass spectrometer ion source. Their fragmentation spectra, usually acquired by collision-induced dissociation (CID), although generally containing fewer fragments than their positive ion counterparts, are dominated by cross-ring fragments that provide specific information on, for example, the location of fucose residues and the presence or absence of bisecting GlcNAc residues, information that is often difficult to obtain by traditional positive ion methods or by exoglycosidase digestions. Another advantage of negative ion fragmentation is that isomeric glycans usually fragment to give mass-different ions rather than the predominantly abundance-different ions commonly seen in positive ion spectra. Consequently, their presence is usually obvious from the CID spectra.

Scheme 1: Top, biosynthesis of high-mannose glycans. Glycan 1 is attached to the protein at the GlcNAc terminus. Path A: if glucose removal is blocked with drugs such as NBDNJ, then only the outer mannose residues are removed (glycans 2 and 3) by the enzymes of the normal pathway (Path B, glycans 4-12). Pathway C operates under these conditions, with use of an endomannosidase to produce an isomer of Man8GlcNAc2 (13), which then enters the normal pathway. The two outer mannose residues are removed from glycan 11, leading to the synthesis of complex glycans such as glycans 14-30 that are discussed in this paper. The galactose residues of the hybrid (12) and complex glycans are frequently capped with sialic acid. Symbols used for the glycans (see the key in the original scheme) denote mannose, GlcNAc, glucose, fucose, and galactose. Solid lines connecting the symbols are β-linkages; broken lines are α-linkages. The angle of the lines shows the linkage position. For more information see [2].
Deprotonated ions can also be formed under ESI conditions, but there is a tendency for double charging ([M − 2H]2− ions). Fragmentation of these doubly charged ions often differs somewhat from that of the [M − H]− ions [21] and the phosphate adducts, even though the first stage of the fragmentation of the latter ions is deprotonation of the molecular ion. Reasons for this difference have been discussed, but without a satisfactory conclusion, mainly because of differences in the experimental conditions (collision gas, collision energy, etc.) that prevent a strict comparison [22]. Sialylated (non-derivatized) glycans under ESI conditions tend to deprotonate rather than form adducts, but their fragmentation spectra are less informative than those of the neutral glycans because they are dominated by fragment ions formed by charge localization on the acid group rather than on a hydroxyl group, as is the case with neutral glycans. The diagnostic ions seen from adducted neutral glycans are generally missing, but they can be restored by derivatization as described above.

The recent introduction of ion mobility to commercial instruments has provided another dimension to the analysis of these glycans [23][24][25]. Although its resolution is currently inferior to that provided by HPLC, the technique is much more rapid (ms time scale), provides the ability to separate ions into groups of different charge states and, because of its sensitivity to molecular shape, enables separation of some isomers, a property not directly available to mass spectrometry. The physical property associated with ion mobility is a molecule's collision cross section, which is instrument-independent and provides another parameter for compound identification. Together with negative ion fragmentation, this combined technique provides one of the most powerful analytical methods for N-glycan analysis available today. When examined by ESI, many glycans, particularly the larger ones, produce doubly and triply charged ions, and the ability of ion mobility to separate glycans on the basis of charge is used here to extract these ions from mixtures and to allow many of the minor ions, often hidden by "noise" and not seen before, to be identified.

One of the applications of the ion mobility/negative ion CID technique employed in this laboratory is the structural identification of N-glycans from viruses. Enveloped viruses, such as the human immunodeficiency virus (HIV), contain heavily glycosylated proteins in their surface layer or spike (as is the case with COVID-19), and these are possible targets for vaccine development [26].
Negative ion fragmentation, usually combined with ion mobility, has been applied to glycoproteins from a number of viruses such as influenza (hemagglutinin and neuraminidase) [27], Ebola (transmembrane glycoprotein (GP1) and the soluble glycoprotein (sGP) [28]), SARS (spike glycoprotein [29]), HIV (gp120 and gp41 glycoproteins [30-37]), Nipah virus [38], Hendra virus attachment glycoprotein [39], Machupo virus attachment protein [40], swine fever virus (E2 glycoprotein [41]), Lassa virus [42], Uukuniemi phlebovirus [43], and Semliki Forest virus (E1 and E2 glycoproteins [44]), mainly of recombinant origin from systems such as human embryonic kidney (HEK) 293 and Chinese hamster ovary (CHO) cells. These studies have shown the occurrence of high-mannose, hybrid, and complex glycans, with high-mannose glycans being particularly abundant in heavily glycosylated glycoproteins such as gp120 from HIV [33]. Both singly and multiply charged ions are produced, with some of the doubly charged ions from gp120 being of a type not observed previously from negative ions. Because, as yet, there does not appear to have been a systematic comparison of the formation and fragmentation of negative ions produced in multiple charge states, this paper utilizes the power of ion mobility to extract these ions and discusses their formation, fragmentation, and uses. In addition to the common [M + Aₙ]ⁿ⁻ and [M − Hₙ]ⁿ⁻ ions, it also reports other novel types of multiply charged ions that can be formed and which have been revealed by ion mobility.

Materials

Reference glycans were purchased from Dextra Laboratories (Reading, UK). Glycans from HIV gp120 and gp41 were released in-gel with peptide N-glycosidase F (PNGase F, from New England Biolabs (UK), Hitchin, UK) [45] as described earlier [33]. Fucosylated glycans from human parotid gland glycoproteins, obtained from banked deidentified human tissue, were released with hydrazine and re-acetylated, also as described earlier [46]. Human α1-acid glycoprotein (AGP) was obtained from Oxford GlycoSystems (Abingdon, UK). Glycans were released by PNGase F and desialylated by heating with 1 M acetic acid for 10 min at 80°C.

Mass spectrometry

[M + adduct]⁻ ions: Released glycans in 2 μL of water were cleaned with a Nafion membrane [47] prior to analysis. All glycans were then dispersed into 1:1 (v:v) water:methanol (~6 μL), to which 0.2 μL of a 0.5 mM solution of ammonium phosphate had been added (to form phosphate adducts of the glycans). Travelling wave ion mobility mass spectrometry (TWIMS) measurements were performed with a Synapt G2Si travelling wave ion mobility mass spectrometer (Waters, Manchester, UK) [48] fitted with a nano-ESI (nESI) ion source. Gold-coated borosilicate capillaries, prepared in-house [49], were used for introducing the samples. Infusions lasted from 2 to 3 h (50-33 nL/min). Ion source conditions were as follows: ESI capillary voltage, 1.0-1.2 kV; cone voltage, 100-180 V; ion source temperature, 80°C. The T-wave velocity and peak height voltages were 450 m/s and 40 V, respectively, with nitrogen in the TWIMS cell. Fragmentation was performed after mobility separation, in the transfer cell, with argon as the collision gas. The collision cell voltage (60-130 V) was adjusted manually according to the precursor ion mass to give an even distribution of fragment ions across the mass range. The instrument was externally mass calibrated with dextran oligomers (Glc₂₋₁₃) from Leuconostoc mesenteroides.
Data acquisition and processing were carried out using the Waters DriftScope (version 2.8) software and MassLynx (version 4.1). The scheme devised by Domon and Costello [50] was used to name the fragment ions, with the addition that the subscript R sometimes replaced the numerical subscript on the reducing-terminal GlcNAc residue in the following discussion, to avoid the number changing with glycans of different chain lengths. R-1 replaced the corresponding subscripts for the penultimate GlcNAc residue and for the B ion separating them. Interpretation of the fragmentation data followed the rules established earlier [51-55].

[M − H₂]²⁻ ions: Released glycans were cleaned as above, but spectra were recorded mainly with a Waters Ultima Global Q-TOF instrument. Where fragmentation spectra were recorded on both instruments, they were identical. Samples (~50 pmol/μL) in 1:1 (vol:vol) methanol:water containing 0.1 M ammonium hydroxide were infused at 5 μL/min with a syringe pump and with a potential of 3.0 kV on the ESI needle. The ion source was maintained at 120°C, the nebulizer gas was at 100°C, and the cone and desolvation nitrogen flows were 50 and 450 L/h, respectively. The cone voltage was 100 V, and the RF-1 voltage was set at 180 and 80 V for singly and doubly charged ions, respectively. Spectra (2-s scans) were acquired with a digitization rate of 4 GHz and accumulated until a satisfactory signal:noise ratio had been obtained (noise level less than about 1%). For MS/MS data acquisition, the precursor ion was selected at low resolution (5 m/z mass window) to allow transmission of isotope peaks and fragmented with argon at a pressure (recorded on the instrument's pressure gauge) of 0.5 bar. The voltage on the collision cell was adjusted with mass and charge to optimize formation of the diagnostic fragment ions. Typical values were 80-120 V for the singly charged ions and 30-50 V for the doubly charged ions. Other voltages were as recommended by the manufacturer. Instrument control and data acquisition were performed with a MassLynx data system (version 4.0), with processing as above.

Doubly charged ions and ions in higher charge states are preferentially produced from the larger glycans in a mixture (e.g., [56]), probably because the charges can be better distributed, thus reducing Coulombic repulsion [57]. Figure 1c shows the preferential formation of doubly charged ions of the type [M + (H₂PO₄)₂]²⁻ from the larger high-mannose glycans from gp120 produced in HEK293F cells, recorded with a Waters Synapt G2Si instrument at a relatively high cone voltage. Mobility-extracted singly charged ions from neutral glycans are shown in Fig. 1b, and those from the doubly charged ions are displayed in Fig. 1c. In addition to the presence of doubly charged high-mannose glycans, Fig. 1c reveals prominent ions from the fucosylated triantennary glycan (14, m/z 1172.9), which is about three times as abundant as the corresponding singly charged ion (m/z 2248.8). The tetraantennary glycan (15) also produced a prominent doubly charged ion at m/z 1355.4, but the singly charged ion is hardly detectable, emphasizing the need to investigate ions in both charge states to deduce the total glycan profile.
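As an illustration of the charge-state arithmetic underlying these assignments, the short sketch below computes the m/z of [M + (H₂PO₄)ₙ]ⁿ⁻ ions from a neutral monoisotopic mass (neglecting the electron mass), and makes explicit why a homo-multimer [Mₖ + (H₂PO₄)ₖ]ᵏ⁻ appears at the same nominal m/z as the singly charged monomer adduct. The triantennary glycan values are taken from the figure annotations quoted above; everything else is generic.

    H2PO4 = 96.96907  # monoisotopic mass of the H2PO4 adduct (Da)

    def adduct_mz(neutral_mass, n):
        """m/z of [M + (H2PO4)n]n- for one glycan molecule carrying
        n phosphate adducts (electron mass neglected)."""
        return (neutral_mass + n * H2PO4) / n

    def multimer_mz(neutral_mass, k):
        """m/z of [Mk + (H2PO4)k]k-: identical to the singly charged
        adduct, so only isotope spacing or mobility distinguishes them."""
        return (k * neutral_mass + k * H2PO4) / k

    # Fucosylated triantennary glycan: [M + H2PO4]- observed at m/z 2248.8
    M = 2248.8 - H2PO4
    print(round(adduct_mz(M, 2), 1))    # ~1172.9, matching the 2- ion
    print(round(multimer_mz(M, 2), 1))  # 2248.8 again, as for the monomer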
Another example where [M₂ + (H₂PO₄)₂]²⁻ ions were observed from viral glycans was in a sample of N-glycans released from HIV gp120 produced in the presence of N-butyldeoxynojirimycin (NBDNJ), a compound that blocks removal of the glucose residues from the Glc₃Man₉GlcNAc₂ high-mannose glycan (1) originally attached to proteins during the biosynthesis of glycoproteins (Scheme 1). Consequent biotransformation by α-mannosidase removes the mannose residues from the d2 and d3 antennae to give Glc₃Man₈GlcNAc₂ (2) and Glc₃Man₇GlcNAc₂ (3, Scheme 1, Path A). The profile of released glycans from this sample is shown in Fig. 2; the doubly charged profile is shown in Fig. 2b (10). Peaks marked with an asterisk are analogues of these ions with an additional HPO₄Na, an adduct often seen in the spectra of phosphate adducts.

Ions of type [M − Hₙ]ⁿ⁻ and [M + (H₂PO₄)ₙ]ⁿ⁻ from other glycoproteins: The ability of ion mobility to extract multiply charged ions, often from a "noisy" region of the spectrum, provided a method for identification of large glycans, often present in low abundance, and was investigated with several other glycan mixtures. Figure 3 shows the profile of the N-glycans released from desialylated AGP; Figure 3a shows the total profile. A similar pattern of doubly charged ions was seen with the fucosylated glycans released from glycoproteins obtained from human parotid glands (Fig. 4). The profile of the singly charged glycans (Fig. 4b) was dominated by bi-antennary and hybrid glycans with varying numbers of fucose residues and contained relatively weak signals from the more highly fucosylated glycans. By contrast, the penta-fucosylated bi-antennary glycan (16) dominated the spectrum of the doubly charged ions (Fig. 4c), with additional sets of polyfucosylated tri- and tetra-antennary glycans, a few of which, such as the nona-fucosylated tetra-antennary glycan (17), do not appear to have been reported before from salivary glycoproteins [46,58,59]. The inset to panel c shows a vertically enlarged portion of the upper mass range. Also marked, with broken lines, is a series of ions attributable to hexose oligomers, but the specific compounds producing these ions were not identified.

Ions of type [Mₙ + (H₂PO₄)ₙ]ⁿ⁻: Most reference glycans were observed to form [M₂ + (H₂PO₄)₂]²⁻ ions when examined at concentrations greater than about 1 μg/μL. In addition, the larger ones, such as Glc₁Man₉GlcNAc₂ (4), were found to form even larger clusters at higher charge states, such as [M₃ + (H₂PO₄)₃]³⁻ and [M₄ + (H₂PO₄)₄]⁴⁻, that were easily separable by ion mobility. Glycans from AGP were also found to form doubly, triply, and quadruply charged ions, as shown above in Fig. 3c, d, and e. The dimeric ion formed from two triantennary glycans (19, m/z 2102) from AGP was detected by its isotope peaks, which were clearly visible among the peaks from the singly charged ion (inset, Figs. 2a and 3a). Combinations with the larger glycans greatly increased the relative abundance of these ions, making the presence of these larger glycans easier to detect than in the spectra of the singly charged glycans. Three types of triply charged ions were observed. The choice of adducting anion, including H₂PO₄⁻, was found to make a negligible difference to the appearance of the spectra, although larger anions such as Br⁻ and I⁻ produced less fragmentation. Full details of the diagnostic ions that can be formed in these negative ion spectra can be found in earlier publications [20,52-55,60].
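Because the multimeric [Mₖ + (H₂PO₄)ₖ]ᵏ⁻ ions fall at the same nominal m/z as the monomer adduct, their presence is betrayed only by the isotope spacing (approximately 1/z) or by ion mobility. A minimal sketch of charge-state assignment from isotope spacing is given below; the peak lists are hypothetical.

    def charge_from_isotope_spacing(mz_peaks):
        """Estimate charge state z from consecutive isotope peaks:
        adjacent isotopologues differ by ~1.0033 Da, so their m/z
        spacing is ~1.0033/z."""
        spacings = [b - a for a, b in zip(mz_peaks, mz_peaks[1:])]
        mean_spacing = sum(spacings) / len(spacings)
        return round(1.0033 / mean_spacing)

    # A doubly charged dimer hidden under a singly charged monomer at the
    # same nominal m/z is revealed by its 0.5-spaced isotope peaks:
    print(charge_from_isotope_spacing([2102.0, 2102.5, 2103.0]))  # -> 2
    print(charge_from_isotope_spacing([2102.0, 2103.0, 2104.0]))  # -> 1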
Briefly, referring to Fig. 5a, the structure of the N,N'-diacetylchitobiose core region, including the presence or absence of fucose, was defined by 2,4A_R, B_R-1, and 2,4A_R-1 ions (m/z 1720.5, 1660.5, and 1517.4, respectively); the non-reducing residues produced C_1 fragments (m/z 179), and the composition of the 6-antenna was defined by D and D-18-type ions (m/z 971.3 and 953.3, respectively), formed by loss of the N,N'-diacetylchitobiose core and the d1 antenna (also called the 3-antenna), accompanied by 0,4A_3 and 0,3A_3 cross-ring fragments at m/z 899.3 and 869.3, respectively, appearing in the center region of the spectra (terms such as 3-antenna and d1 antenna are defined in Fig. 5a).

Doubly charged ions of type [M + (H₂PO₄)₂]²⁻: Doubly charged [M + (H₂PO₄)₂]²⁻ ions fragmented to give almost entirely singly charged products, such that the spectra were almost indistinguishable from those of the singly charged [M + H₂PO₄]⁻ ions, as illustrated with the high-mannose glycan Man₉GlcNAc₂ (5, Fig. 5a, b). A few very minor (<2% relative intensity) doubly charged fragments, such as the 2,4A_R ion, were sometimes seen at lower cone voltages. The mechanism would appear to be loss of H₂PO₄⁻, leaving the [M + H₂PO₄]⁻ ion, which fragmented as the singly charged ion. Thus, structural interpretations of the larger glycans are often more appropriately made with the doubly charged [M + (H₂PO₄)₂]²⁻ ions when their relative abundance exceeds that of the corresponding singly charged ones. These [M + (H₂PO₄)₂]²⁻ ions were not totally stable in the Waters G2Si instrument. Although the doubly charged ion was selected in the quadrupole, ion mobility after the trap cell showed both this ion and the singly charged [M + H₂PO₄]⁻ species, as shown by the arrival time distribution (ATD) profile (inset to Fig. 5b). The CID spectrum of this [M + H₂PO₄]⁻ ion was identical to that of the singly charged [M + H₂PO₄]⁻ ion from a reference sample. The asymmetry of the ATD peak appeared to be caused by anomers, because all fragment ions exhibited the same profile [61] and reducing-end reduction produced a symmetrical profile [62].

Doubly charged ions of type [M − H₂]²⁻: The CID spectra of these ions differed considerably from those of the [M + H₂PO₄]⁻ and [M − H]⁻ ions, as can be seen from Figs. 5c and d and 6b and c. Doubly charged ions dominated the spectra. In particular, the 2,4A_R, B_R-1, and 2,4A_R-1 ions appeared mainly as doubly rather than singly charged ions (m/z 859.8, 829.8, and 758.3 in Fig. 5d and m/z 616.7, 586.7, and 515.2, respectively, in Fig. 6c). In the spectra of the high-mannose glycans, successive losses of mannose residues from the 2,4A_R ion produced major singly charged fragments, but no such loss was observed from the molecular ion. In contrast to this observation, the spectrum of reduced Man₆GlcNAc₂ reported by Tjondro et al. [63], where formation of the 2,4A_R ion was blocked by the open ring, showed a very prominent loss of mannose directly from the doubly charged molecular ion. The other main diagnostic ions tended to be singly charged. Thus, in the CID spectrum of the [M − H₂]²⁻ ion from Man₆GlcNAc₂ (9, Fig. 6c), the D, D-18, and 0,3A_3 ions appeared singly charged (Fig. 7g). Diagnostic ions from the biantennary glycan (22, Fig. 7a) included the C_1 fragment at m/z 179 (non-reducing terminal galactose) (Fig. 7b), although the 2,4A_6, B_5, and 2,4A_5 ions appear as doubly charged fragments, as they did in the spectra of the high-mannose glycans above.
This pattern of fragmentation is similar to that of the spectrum reported by Ni et al. [64] using an Agilent 6520 Q-TOF instrument, although the relative abundances of some of the ions differed.

2-Aminobenzamide (2-AB) derivatives: 2-AB derivatives, prepared by reductive amination, are commonly used as fluorescent analogues of these glycans for detection in HPLC experiments [65]. They are often encountered in MS work even though the fluorescent tag is not needed. Figure 6d shows the CID spectrum of the singly charged phosphate adduct of Man₆GlcNAc₂ (9) derivatized in this manner. The lower mass region of the spectrum mirrors that of the underivatized glycan (Fig. 6a) but, because of the open-ring nature of the reducing-terminal GlcNAc residue (a consequence of the reductive amination reaction during labelling), the 2,4A_4 ion, which is important in identifying the presence of fucose attached to this residue (see below), was missing. The CID spectrum of the [M − H₂]²⁻ ion (Fig. 6e) contained the same diagnostic singly charged ions in the low mass region and, like the spectrum of the reduced glycan [63], showed prominent successive losses of mannose residues (m/z 676.2, 595.1, 514.1) from the molecular ion. In the spectra of the [M − H₂]²⁻ ions from two isomers of Man₇GlcNAc₂ as 2-aminopyridine (2-AP) derivatives reported by Yan et al. [21], the most abundant fragment was formed by loss of mannose from the (doubly charged) molecular ion. However, the spectrum of the d1,d3-isomer appeared to lack the D and D-18 fragment ions. In the spectra of these two isomers, and in that of the 2-AB derivative of Man₆GlcNAc₂ (9), the 2,4A_R-1 ion (2,4A_4 at m/z 1031.4 in Fig. 6e) was observed mainly as a singly charged fragment. The most abundant ion was the Y_1 fragment (GlcNAc-2-AB) at m/z 322.1.

2-Aminobenzoic acid (2-AA) derivatives: 2-AA derivatives [66] have gained increasing popularity in recent years [67] because of their ease of preparation and high fluorescence. The incorporation of an acidic group potentiates formation of deprotonated molecules and localizes the negative charge. Although providing high negative ion sensitivity, localization of the charge on the derivative considerably alters the fragmentation from that discussed above. Thus, the singly charged spectrum of Man₆GlcNAc₂ (9, Fig. 6f) contained mainly Y-type fragments produced by losses from the non-reducing terminus. There was very little information on the topology of the antennae, except a moderate increase in the relative abundance of the Y_3α ion (loss of the d2 and d3 antennae) from the larger high-mannose glycans. The diagnostic 2,4A ions from the reducing terminus and the D-type ions were missing. Fragmentation of the [M − H₂]²⁻ ions from the high-mannose glycans was even less informative (Fig. 6g). Although prominent singly charged ions were produced from successive losses of mannose residues, there was no information available regarding the glycan topology. For more information on the fragmentation of 2-AA-derivatized glycans, see Harvey [68].

Multiply charged ions of type [Mₙ + (H₂PO₄)ₙ]ⁿ⁻ from single glycans: The spectra of several reference glycans were examined. These included biantennary glycans with zero (26), one (27), and two (18) galactose residues, with (28-30) and without (26, 27, 18) a bisecting GlcNAc residue and with or without core fucose, a hybrid glycan (23), and a series of high-mannose glycans.
The relative amount of the dimeric ions in the total glycan profile tended to increase with increasing molecular weight, but the fragmentation spectra of all dimers were similar to those of the monomers. Glc₁Man₉GlcNAc₂ (4) formed dimers, trimers, and tetramers that were well separated by ion mobility, as shown by the ATD profile (inset, Fig. 8a). CID spectra of the dimer and monomer were identical, but the trimer and tetramer showed progressively increasing relative abundances of the lower mass fragments.

The negative ion CID spectrum of the fucosylated triantennary glycan from AGP (m/z 2248.7, singly charged) is shown in Fig. 9d. Location of the fucose residue on the 3-antenna is shown by the masses of the 2,4A_6, B_5, and 2,4A_5 ions, together with m/z 570 (1,3A_3, [Gal-(Fuc)GlcNAc-O-CH=CH₂-OH]⁻) and m/z 977 (E-ion + fucose). The ATD profile of m/z 2248.7 (Fig. 9b) showed three additional constituents with charge states 2, 3, and 4. CID spectra are shown in Fig. 9e, f, and g. These spectra show an increasing contribution of the core-fucosylated isomer (24), such that the quadruply charged ion (Fig. 9g, [M₄ + (H₂PO₄)₄]⁴⁻) consisted almost entirely of this isomer. Fragment ions are color coded to match the glycan number. Ions in these higher charge states thus possibly provide a means to confirm the presence of minor isomers in mixtures. The situation becomes rather complicated when ions contain two or more constituents, as shown in Fig. 9h, which shows the CID spectrum of the doubly charged ion at m/z 2175.2 from AGP with a composition of Hex₁₂GlcNAc₁₀Fuc₁. Molecular ions in the spectra at m/z 1737.6, 1883.7, 2102.7, 2248.7, and 2613.9 show the presence of glycans 18, 22, 19, 21, and 20, respectively. The 2,4A_6, B_5, and 2,4A_5 ions for each glycan are shown in color. Thus, the ion consists of a mixture of the glycans 20/22, 19/24, and 18, probably with a fucosylated tetra-antennary glycan such as 25, although its molecular ion was absent from the spectrum. Figure 3d shows the even more complex profile of the triply charged ions from trimers, where there are more opportunities for the formation of mixed trimers. Constituents are labelled in the figure. The relative abundance of trimers containing fucose was much greater than in the spectra of the singly charged ions, making their presence easier to detect.

Conclusions

Under negative ESI conditions, most N-glycans form singly charged ions of the type [M − H]⁻ or [M + adduct]⁻, where the adduct can be an anion such as a halide, phosphate, or nitrate. Higher mass glycans tend to form doubly charged ions. In many samples of very large glycans, such as tetra-antennary compounds bearing N-acetyllactosamine extensions, only doubly charged ions were seen. Although not reported in this paper, it has been found that very large high-mannose N-glycans found in some fungi produce triply and quadruply charged ions and that no singly charged ions from these larger glycans are present in the spectra. In addition to these ions, small quantities of dimeric doubly charged ions of structure [M₂ + (H₂PO₄)₂]²⁻ can also be produced and, in some cases, tri- and tetrameric ions in the corresponding charge states. CID spectra of the [M − H]⁻ and [M + adduct]⁻ ions, where the adduct is phosphate, chloride, or nitrate, tend to be almost identical because the first stage of fragmentation of the adducted glycans is removal of a proton (loss of HA, where A is the adduct) to leave the [M − H]⁻ ion, which is then further fragmented.
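For mixed dimers such as those in Fig. 9h, candidate constituent pairs can be enumerated by searching for pairs of neutral masses consistent with the observed doubly charged m/z. The sketch below illustrates the idea with hypothetical masses and a hypothetical observed value; note that different pairs can match the same dimer m/z, which is why the fragment ions, rather than the precursor mass alone, are needed to assign the constituents.

    from itertools import combinations_with_replacement

    H2PO4 = 96.96907  # monoisotopic mass of the phosphate adduct (Da)

    def dimer_candidates(observed_mz, neutral_masses, tol=0.3):
        """Enumerate glycan pairs (i, j) whose mixed dimer
        [Mi + Mj + (H2PO4)2]2- matches an observed doubly charged m/z."""
        hits = []
        for (name_i, m_i), (name_j, m_j) in combinations_with_replacement(
                sorted(neutral_masses.items()), 2):
            mz = (m_i + m_j + 2 * H2PO4) / 2
            if abs(mz - observed_mz) <= tol:
                hits.append((name_i, name_j, round(mz, 2)))
        return hits

    # Hypothetical neutral monoisotopic masses for candidate glycans
    masses = {"A": 1640.59, "B": 1786.65, "C": 2005.72, "D": 2151.78}
    print(dimer_candidates(1993.2, masses))  # both A/D and B/C match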
Fragmentation of the doubly charged [M + (H₂PO₄)₂]²⁻ ion is similar, but the [M − H₂]²⁻ ions produce large amounts of doubly charged fragment ions (2,4A_R, B_R-1, and 2,4A_R-1) from the reducing end of the molecule. Most other structurally diagnostic ions, such as the D and D-18 ions, remain singly charged and appear in the spectra of the doubly charged precursors. In addition to these ions, small quantities of dimeric doubly charged ions of structure [M₂ + (H₂PO₄)₂]²⁻ can also be produced from the phosphate adducts; M can be two molecules of a single glycan or one each of two different molecules. Although the CID spectra of the singly and doubly charged ions are virtually identical, those of the larger charge states show an increase in the relative abundance of the lower mass fragments, consistent with their being more sensitive to the collision energy; in mixed dimers, it appeared that there is a fairly random mixture of the two constituents. In conclusion, ion mobility enables multiply charged ions to be extracted from the spectra of glycan mixtures, often enabling new compounds to be identified, particularly when they are present at low concentration. The CID spectra contain the same diagnostic ions as found in the spectra of the singly charged ions, thus providing a useful addition to the ion mobility/negative ion technique reported earlier.

Data availability: Not applicable. Code availability: Not applicable. Conflict of interest: The authors declare no competing interests.
2021-08-03T13:50:09.541Z
2021-08-03T00:00:00.000
{ "year": 2021, "sha1": "b27916dc1d7085c8157e322bcb00757530ca85fe", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00216-021-03480-8.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b27916dc1d7085c8157e322bcb00757530ca85fe", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
5555692
pes2o/s2orc
v3-fos-license
Preconditioning Stimuli Induce Autophagy via Sphingosine Kinase 2 in Mouse Cortical Neurons*

Background: Preconditioning provides insights into endogenous mechanisms that could be used to protect brain from injury. Results: Preconditioning stimuli up-regulate sphingosine kinase 2, leading to autophagy. Conclusion: Sphingosine kinase 2 mediates autophagy and preconditioning, possibly by disrupting the Beclin 1/Bcl-2 interaction. Significance: The discovery of new signaling independent of SPK2 catalytic activity provides medicinal chemists with novel "druggable" targets important for neuroprotection.

Sphingosine kinase 2 (SPK2) and autophagy are both involved in brain preconditioning, but whether preconditioning-induced SPK2 up-regulation and autophagy activation are linked mechanistically remains to be elucidated. In this study, we used in vitro and in vivo models to explore the role of SPK2-mediated autophagy in isoflurane and hypoxic preconditioning. In primary mouse cortical neurons, both isoflurane and hypoxic preconditioning induced autophagy. Isoflurane and hypoxic preconditioning protected against subsequent oxygen glucose deprivation or glutamate injury, whereas pretreatment with autophagy inhibitors (3-methyladenine or KU55933) abolished preconditioning-induced tolerance. Pretreatment with SPK2 inhibitors (ABC294640 and SKI-II) or SPK2 knockdown prevented preconditioning-induced autophagy. Isoflurane also induced autophagy in mice in vivo, as shown by Western blots for LC3 and p62, LC3 immunostaining, and electron microscopy. Isoflurane induced autophagy in mice lacking the SPK1 isoform (SPK1−/−), but not in SPK2−/− mice. Sphingosine 1-phosphate and the sphingosine 1-phosphate receptor agonist FTY720 did not protect against oxygen glucose deprivation in cultured neurons and did not alter the expression of LC3 and p62, suggesting that SPK2-mediated autophagy and protection are not S1P-dependent. Beclin 1 knockdown abolished preconditioning-induced autophagy, and SPK2 inhibitors abolished isoflurane-induced disruption of the Beclin 1/Bcl-2 association. These results strongly indicate that autophagy is involved in isoflurane preconditioning both in vivo and in vitro and that SPK2 contributes to preconditioning-induced autophagy, possibly by disrupting the Beclin 1/Bcl-2 interaction.

Preconditioning is a procedure by which a noxious stimulus is applied to a tissue or organ below the threshold of damage, inducing tolerance to the same or a different noxious stimulus subsequently given above the threshold of damage (1,2). Studying cerebral preconditioning may provide insight into endogenous protective mechanisms that could be exploited therapeutically. Known preconditioning stimuli include inhalational anesthetics, hypoxia, brief ischemia, cortical spreading depression, and proinflammatory agents. Isoflurane, used widely and safely in surgical procedures, induces tolerance to ischemia in many organs, including brain (1). In the central nervous system, sphingosine 1-phosphate (S1P) regulates multiple cellular processes, including proliferation, survival, and migration of neurons (3). Intracellular S1P levels are regulated by the expression and activity of sphingosine kinases (SPKs), which have been shown to play a role in preconditioning of the heart (4-7), kidney (8), and brain (9). We previously found that SPK2, but not SPK1, mediates hypoxia- and isoflurane-induced brain preconditioning, possibly via hypoxia-inducible factor-1α (9), but the mechanisms involved were not elucidated.
Autophagy is a regulated process for the removal of cellular proteins and damaged organelles (10,11). Autophagy is induced during preconditioning in heart (12,13) and is involved in ischemic preconditioning of neurons and rat brain (14,15). We thus hypothesized that isoflurane and hypoxic preconditioning might also induce autophagy in an SPK2-dependent manner to protect neurons.

EXPERIMENTAL PROCEDURES

The experiments were conducted according to protocols approved by the Animal Research Committee of Massachusetts General Hospital and the National Institutes of Health Guide for the Care and Use of Laboratory Animals.

In Vitro Isoflurane Preconditioning (ISO) and Hypoxic Preconditioning (HP) Model-After 7 days in culture, neurons were exposed to 2% isoflurane (Abbott Laboratories; 26675-46-2) for 30 min in an airtight chamber and harvested 6, 12, 24, and 48 h later. For HP, neurons were exposed to 4% oxygen for 8 h in an airtight chamber and harvested 12, 24, 48, and 72 h later. These conditions were based on previous reports and did not induce significant neuronal toxicity (17-19).

Cell Viability Analysis-Cell death was induced by OGD or exposure to glutamate 24 h after exposure to ISO, or 48 h after exposure to hypoxia. To induce Glu toxicity, neurons were treated with 100 μM L-glutamic acid (Sigma; 49449) for 5 min (drugs prepared in medium), washed, and placed in fresh prewarmed Neurobasal medium. For OGD, cultures were washed three times with N₂-bubbled Hanks' balanced salt solution and placed in an airtight chamber aerated with 95% N₂/5% CO₂ for 4 h. Cells were then removed from the anaerobic chamber, washed, and placed in Neurobasal medium. Cell viability was quantified by MTT assay 24 h after OGD or Glu exposure. Neurons were incubated in 200 μg/ml thiazolyl blue tetrazolium bromide (MTT; Sigma; M2128) at 37°C for 2 h. Culture medium was aspirated, and cells were lysed in 200 μl of DMSO. Color intensity was measured at 570 nm using a Victor3V plate reader (PerkinElmer Life Sciences). The results are expressed as a percentage of the absorbance of control wells. Separate cultures of neurons were fixed with 4% paraformaldehyde for 10 min, and the nuclei were stained with Hoechst 33342; cells undergoing cell death were characterized by condensed nuclei, and the percentages of healthy-looking cells were counted in a blinded fashion in four random fields.

Isoflurane Preconditioning in Mice-Male C57BL/6J mice (23-28 g, 6-8 weeks of age; Charles River, Wilmington, MA) and age-matched wild-type, SPK1−/−, and SPK2−/− mice were maintained on a 12-h light/12-h dark cycle and fed ad libitum. The mice were randomly allocated to treatment groups: they were exposed to 1% isoflurane (in 70% N₂ and 30% O₂) for 3 h in an airtight chamber, recovered in an incubator (at 28°C) for ~30 min, and then returned to their cage (9,20), whereas control mice were placed in the airtight chamber flushed with air for the same duration of time. For Western blot analysis, 6, 24, or 48 h after isoflurane exposure, mice were euthanized and perfused transcardially with cold PBS. The cortex, striatum, and hippocampus were harvested and frozen immediately.

Transmission Electron Microscopic Examination-Twenty-four hours after ISO, mice were perfused with PBS followed by PBS containing 2% paraformaldehyde/2% glutaraldehyde. The brains were postfixed overnight in PBS containing 2% paraformaldehyde/2% glutaraldehyde. Fifty-micron-thick coronal sections were cut with a vibratome.
The sections were incubated in 1% osmium tetroxide for 1 h, dehydrated in graded ethanol, incubated in 1% uranyl acetate for 1 h, dehydrated further in graded ethanol, and embedded in Epon. Polymerization was performed at 60°C for 24 h. Based on our immunohistochemistry results, layer V (internal pyramidal layer) of the parietal cortex was selected for analysis. Blocks were cut on an ultramicrotome (50 nm) and examined using a JEOL 1011 electron microscope. To quantify the number of double-membrane vacuolar structures, four mice in each group and 25 neurons from each block were examined in a blinded manner. The number of large double-membrane vacuolar structures (typical of autophagosomes) was counted in lower magnification images of randomly selected neurons, and the autophagosomal nature of the structures was confirmed using higher magnification images. Cortical neurons were identified by their large, round, and light nucleus with an obvious nucleolus; they often contained randomly scattered rosettes of RNA particles and dispersed profiles of endoplasmic reticulum and could be recognized by the presence of neural filaments.

Co-immunoprecipitation-Twenty-four hours after ISO, neurons were harvested and lysed in radioimmune precipitation assay buffer. The lysates were precleared with protein A/G-agarose (Santa Cruz; sc-2003) for 1 h, incubated with anti-Bcl-2 antibody (Santa Cruz; sc-7382) overnight, and then subjected to immunoprecipitation with protein A/G-agarose for 3 h. The immunoprecipitates were analyzed by immunoblot with anti-Beclin antibody (Santa Cruz; sc-11427).

Statistical Analysis-All assessments were performed in a blinded fashion. For in vivo experiments, mice were randomly allocated. The number of mice in each group was based on power analysis assuming a treatment effect of 30% and an S.D. of 25%. The data are expressed as means ± S.D. Statistical analysis was carried out by one-way analysis of variance, followed by Newman-Keuls multiple-comparison tests. p < 0.05 was considered significant.

RESULTS

Autophagy Contributes to the Neuroprotection Elicited by ISO and HP in Cortical Neurons-Activation of autophagy was first examined in primary cultured mouse cortical neurons by immunoblotting LC3 and p62 (28,29). The LC3II/LC3I ratio was increased after ISO (Fig. 1A), whereas p62 was down-regulated (Fig. 1B), with maximal effects observed at 24 h. SPK2 was also up-regulated after ISO, and peak SPK2 levels were seen 12-24 h after ISO (Fig. 1C). LC3 and SPK2 up-regulation was confirmed by immunofluorescence (Fig. 2). Hypoxia, the other preconditioning stimulus, also increased the LC3II/LC3I ratio and down-regulated p62 in neurons (Fig. 1, D and E), but maximal effects were seen at 48 h after HP, with a corresponding peak in SPK2 expression at 24-48 h (Fig. 1F). Either 4-h oxygen/glucose deprivation (OGD) or 5-min exposure to Glu decreased cell viability (Fig. 3, A and B). ISO greatly attenuated OGD- or Glu-induced cell death. Pretreatment with 3-MA or KU55933, at concentrations known to effectively block autophagy (10 mM and 2 μM) (22,23), abolished ISO-induced protection in both the OGD and the Glu models. Hypoxia also induced tolerance to OGD or Glu (Fig. 3, C and D), in a 3-MA- and KU55933-sensitive manner. The degree of cell death was also quantified by Hoechst 33342 staining, providing results similar to the MTT (thiazolyl blue tetrazolium bromide) measurements (data not shown).
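A minimal sketch of the viability calculation and the omnibus one-way analysis of variance described above is given below, using hypothetical absorbance readings; SciPy does not provide the Newman-Keuls post hoc test used in the study, so only the omnibus ANOVA is shown.

    import numpy as np
    from scipy import stats

    def viability_percent(abs_treated, abs_control):
        """MTT viability as a percentage of the mean control absorbance."""
        return 100.0 * np.asarray(abs_treated) / np.mean(abs_control)

    # Hypothetical absorbance readings (570 nm) from replicate wells
    control = [0.82, 0.79, 0.85, 0.80]
    ogd     = [0.31, 0.35, 0.29, 0.33]
    iso_ogd = [0.61, 0.58, 0.64, 0.60]

    groups = [viability_percent(g, control) for g in (control, ogd, iso_ogd)]
    f_stat, p_value = stats.f_oneway(*groups)  # one-way ANOVA across groups
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")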
Although 3-MA and KU55933 both abolished HP-mediated neuroprotection against OGD, only KU55933 significantly inhibited HP-mediated tolerance against glutamate, whereas the inhibition seen in the presence of 3-MA did not reach statistical significance. In contrast, both 3-MA and KU55933 abolished preconditioning by isoflurane against the effects of OGD and glutamate toxicity. This could be due to the fact that HP induces higher levels of SPK2 (Fig. 1) and induces a more robust neuroprotection (9), which might therefore be more difficult to inhibit using autophagy inhibitors. In control experiments (not shown), we established that cortical neurons were unaffected by either 10 mM 3-MA or 2 μM KU55933 added alone; we also ruled out possible neuroprotective effects of these agents (in the absence of preconditioning), finding similar cell viability when neurons were treated with 3-MA, KU55933, or their vehicle 24 h before exposure to OGD or Glu.

SPK2 Inhibition Prevents Preconditioning-induced Autophagy in Cortical Neurons-To explore whether SPK2 is involved in preconditioning-induced autophagy, we used two SPK2 inhibitors, SKI-II (4-[4-(4-chlorophenyl)-thiazol-2-ylamino]-phenol) and ABC294640 (3-(4-chlorophenyl)-adamantane-1-carboxylic acid (pyridin-4-yl-methyl) amide), on cortical neurons. SKI-II is a specific SPK inhibitor but does not discriminate between isoforms, whereas ABC294640 is an SPK2-selective inhibitor (24,25). We have previously shown that these inhibitors abolish ISO-induced tolerance both in vivo and in vitro (9). In the present study, isoflurane significantly increased the LC3II/LC3I ratio and decreased p62 levels (Fig. 4, A and B), whereas pretreatment with 1 μM SKI-II or 10 μM ABC294640 reduced the LC3II/LC3I ratio and restored p62 levels. Because high concentrations of ABC294640 (50 μM) or the SPK inhibitor SKI-I have been reported to activate autophagy in tumor cells or mouse embryonic fibroblasts, resulting in autophagic or apoptotic cell death (30,31), we also treated neurons with SPK inhibitors alone. The LC3II/LC3I ratio and p62 levels were not altered by 10 μM ABC294640 or 1 μM SKI-II, suggesting that they have no direct effect on autophagy under our experimental conditions. As with ISO, pretreatment with ABC294640 or SKI-II abolished the changes in LC3II/LC3I ratio and p62 induced by HP (Fig. 4, C and D). To confirm these data obtained with drug inhibitors, we also transfected neurons with SPK2 siRNA and found that SPK2 siRNA prevented ISO-mediated increases in the LC3II/LC3I ratio (Fig. 5). Taken together, these results suggest that SPK2 mediates preconditioning-induced autophagy.

Isoflurane Preconditioning (ISO) Induces Autophagy in Vivo-To ascertain the in vivo significance of our findings, we examined LC3 and p62 expression in the cortex, striatum, and hippocampus of C57 mice 6, 24, and 48 h after exposure to isoflurane. The LC3II/LC3I ratio was significantly increased in cortex at 24 h, whereas p62 was down-regulated, with peak effects observed at 24 h in cortex and striatum (Fig. 6, A and B). Other changes in immunoblots did not reach statistical significance (Fig. 6), but the fact that these time-related increases in LC3II/LC3I ratios were consistently observed in the three brain regions examined and were mirrored by time-related decreases in p62 levels (also consistent between brain regions) strongly suggests that isoflurane induces autophagy in vivo.
We also evaluated autophagy by visualizing LC3 immunoreactivity with immunofluorescence and diaminobenzidine staining in cortex 24 h after ISO. In control mice, LC3 immunoreactivity in cortex was low. Strong LC3 staining in cortical neurons was observed in mice exposed to ISO (Fig. 6C). Many LC3-positive neurons showed a punctate pattern of immunofluorescence (data not shown), suggesting induction of autophagy. We then used electron microscopy to evaluate ultrastructural changes and autophagosome formation in cortical neurons. Neurons in control cortex appeared normal, with relatively healthy-looking organelles and nuclei (Fig. 6D). Twenty-four hours after ISO, neuronal organelles and nuclei also appeared normal without appreciable injury, but some engulfment of cytoplasmic material by double-membrane vacuolar structures was found, suggesting possible autophagy induction after ISO. Quantitative analysis showed that 32.5 ± 6.8% of cortical neurons had double-membrane vacuolar structures in the control group, whereas 62.0 ± 4.8% of neurons showed these structures in the ISO group (p = 0.011; Fig. 6E), confirming that ISO induces autophagy not only in primary neurons but also in vivo.

Preconditioning-induced Autophagy Activation Is Absent in SPK2 Knock-out Mice-To expand on our in vivo data and confirm that SPK2 is involved in preconditioning-induced autophagy in vivo, we used SPK1−/− (32) and SPK2−/− (33) mice. Because of the limited number of available mice, in some cases we only observed trends without reaching statistical significance, but we did observe that, in WT mice, ISO significantly increased the LC3II/LC3I ratio and decreased p62 in cortex or striatum at 24 h (Fig. 7, C and D), whereas these changes were not seen in SPK2 knock-out mice. In contrast, LC3II/LC3I ratio and p62 expression in WT and SPK1−/− mice did not differ at 24 h after ISO (Fig. 7, A and B). These results suggest that the SPK2, but not the SPK1, isoform is involved in ISO-induced autophagy.

FIGURE 2. Isoflurane preconditioning up-regulates LC3 and SPK2 in primary cortical neurons. Neurons were exposed to 2% ISO for 30 min and 24 h later were fixed with 4% paraformaldehyde and processed for immunofluorescence. Representative images of cortical neurons were stained with 4,6-diamidino-2-phenylindole (DAPI, blue) and antibodies against LC3 (A, red; bar, 100 μm) or SPK2 (B, red; bar, 50 μm). Microphotographs are shown as representative results from three independent experiments. CON, control.

SPK2 Inhibition Abolishes ISO-induced Disruption of Beclin 1/Bcl-2-To determine whether the preconditioning effect of SPK2 depends on its catalytic activity, we examined whether S1P or the S1P receptor agonist FTY720 protects neurons against OGD-induced cell death. OGD induced significant cell injury (Fig. 8A), which neither S1P (1 or 3 μM) nor FTY720 (30 or 100 nM) was able to prevent, indicating a lack of direct neuroprotective effect of these agents. S1P and FTY720 did not alter the LC3II/LC3I ratio or p62 levels, suggesting that neither S1P nor FTY720 has direct effects on autophagy (Fig. 8, B and C). Pretreatment with SKI-II and ABC294640 had no effect on basal SPK2 levels but significantly reduced preconditioning-induced SPK2 up-regulation in cortical neurons (Fig. 8D). Taken together, these results suggest that autophagy activation mediated by SPK2 during preconditioning may be independent of its catalytic activity. To determine the role of Beclin 1, we knocked it down in neurons using two siRNA sequences (Fig. 9A).
Both siRNAs prevented ISO-mediated increases in the LC3II/LC3I ratio (Fig. 9B), suggesting that ISO preconditioning induces autophagy via Beclin 1. Considering that SPK2 is a BH3-only protein that induces cell death when overexpressed in different cell types (34), we hypothesized that SPK2 might disrupt the interaction between Bcl-2 and Beclin 1 by a mechanism previously described for the atypical BH3-only proteins BNIP3/BNIP3L (35). We therefore quantified the Bcl-2/Beclin 1 association by co-immunoprecipitation in lysates of cortical neurons. ISO decreased the amount of co-immunoprecipitated Bcl-2/Beclin 1, whereas ABC294640 and SKI-II increased co-immunoprecipitation of Bcl-2 and Beclin 1 (Fig. 9C), indicating that ISO might disrupt the interaction between Bcl-2 and Beclin 1 and initiate autophagy, whereas SPK2 inhibitors abolish the preconditioning-induced disruption of Bcl-2/Beclin 1.

DISCUSSION

We used two preconditioning stimuli to explore the role of SPK2 in preconditioning-induced autophagy. In primary neurons, both ISO and HP induced autophagy and tolerance to subsequent OGD- or Glu-induced injury, whereas pretreatment with autophagy inhibitors abolished this tolerance, suggesting that autophagy is involved in the preconditioning process. Pretreatment with SPK2 inhibitors abolished preconditioning-induced autophagy. ISO also increased autophagy in the cortex of wild-type C57 mice, but it induced autophagy only in SPK1−/− mice, not in SPK2−/− mice. Our data show increased SPK2 levels, an increased LC3II/LC3I ratio, and down-regulation of p62 in primary neurons after preconditioning. In agreement with our in vitro data, in mice exposed to isoflurane, the LC3II/LC3I ratio is increased in cortex, whereas p62 is down-regulated in both cortex and striatum. The occurrence of autophagy was further confirmed in vivo using both LC3 immunostaining and electron microscopy. We have previously observed up-regulated SPK2 protein expression after ISO in vivo (9); those results and the current data strongly implicate both autophagy and SPK2 in the mechanism of preconditioning. Indeed, we have found that ISO and HP protect against OGD- or Glu-induced injury, whereas pretreatment with the autophagy inhibitors 3-MA or KU55933 blocks preconditioning-induced tolerance in primary neurons. We thus conclude that activation of autophagy is essential in preconditioning and protects against cell death. These results add to previous reports indicating that autophagy is induced by hypoxia and ischemic preconditioning in heart (12,13) and in neurons or brain (14,15,24), and they now point to SPK2 as a potential key mediator of these effects. To explore whether SPK2 is involved in preconditioning-induced autophagy, we used SPK2 inhibitors in cultured neurons. Although SKI-II is not thought to be isoform-specific, ABC294640 preferentially inhibits SPK2 (24,25). In our study, both preconditioning paradigms increased the LC3II/LC3I ratio and decreased p62, and pretreatment with SKI-II and ABC294640 reduced the LC3II/LC3I ratio and restored p62 levels. We then applied genetic approaches in vivo, using SPK1−/− and SPK2−/− mice (32,33). We have previously observed that SPK2 predominates in different regions and cell types of the mouse brain (36). Both neuronal (9) and microvascular SPK2 (37-39) might play a role in brain preconditioning. In the present study, we showed that knocking out SPK2, but not SPK1, abolished preconditioning-induced autophagy.
These data, combined with our observations in primary neurons, suggest that neuronal SPK2 plays a key role in preconditioning-induced autophagy; the role of similar pathways in other brain cell types, in particular the vasculature, remains to be investigated. We cannot rule out that increased SPK2 activity might reduce sphingosine levels and indirectly decrease ceramide levels (because sphingosine can be converted to ceramide in the ER). To the best of our knowledge, however, ceramide induces autophagy (40,41); it is therefore unlikely that autophagy activation via SPK2 would be related to decreased levels of ceramide. Conflicting findings have been published on the effects of S1P on autophagy in different tumor cell lines (42-44).

FIGURE 6. ISO induced autophagy activation in vivo. C57 mice were exposed to 1% isoflurane for 3 h to induce ISO. Cortex, striatum, and hippocampus were dissected 6, 24, and 48 h after ISO. A and B, the protein levels of LC3 (A) and p62 (B) were detected by immunoblotting. β-Actin levels were used as loading control. The data are shown as means ± S.D. (n = 6 mice). *, p < 0.05 versus control group. In a separate series of experiments, mice were exposed to 1% isoflurane for 3 h and decapitated 24 h later. Layer V (internal pyramidal layer) of the parietal cortex was selected for observation and analysis. C, brain sections were labeled with the anti-LC3 antibody and processed with diaminobenzidine (DAB) staining. Scale bars, 100 μm. Note that LC3 expression was relatively low in the sham group, whereas LC3 immunoreactivity was increased in the ISO group. D and E, electron microscopy images show an increased number of double-membrane vacuolar structures in cortical neurons of ISO mice. Scale bars, 500 nm. Arrows indicate nascent autophagosomes. N, nucleus. The data are shown as percentages of neurons displaying typical features of autophagosomes (double-membrane vacuolar structures; n = 4 mice). *, p < 0.05 versus control group. Con or CON, control.

S1P has anti-apoptotic properties in many cell types (45), whereas the agonist FTY720, which acts on four of the five known S1P
The lack of effect of S1P on autophagy and neuroprotection suggests that the effect of SPK2 may not depend on its catalytic activity, suggesting an alternative, possibly BH3 domain-dependent, mechanism by which SPK2-mediated preconditioning might be linked to autophagy. Indeed, ISO decreased the interaction between Bcl-2 and Beclin-1, suggesting that autophagy is involved in isoflurane preconditioning both in vivo and in vitro and that preconditioningassociated SPK2 up-regulation may promote Beclin 1-dependent autophagy by disrupting association between Bcl-2 and Beclin 1. The fact that SPK2 inhibitors prevented the preconditioning-induced disruption of Beclin 1/Bcl-2 interaction would seem to invalidate this hypothesis. However, SPK inhibitors, at least for the SPK1 isoform, can also lead to proteasomal degradation of the enzyme, in addition to blocking its catalytic activity (48,49). Indeed, in the current study, SKI-II or ABC294640 had no effect on basal SPK2 levels, but they significantly reduced preconditioning-induced SPK2 up-regulation, suggesting that these inhibitors not only block SPK2 catalytic activity but also act at the level FIGURE 7. Isoflurane-induced autophagy activation was seen in SPK1, but not SPK2 knock-out mice. The mice were exposed to 1% isoflurane for 3 h. Cortex, striatum, and hippocampus were dissected 24 h later. Levels of LC3 and p62 were measured by immunoblotting. LC3 (A) and p62 (B) expression in SPK1 Ϫ/Ϫ mice after ISO (n ϭ 4). LC3 (C) and p62 (D) expression in SPK2 Ϫ/Ϫ mice after ISO (n ϭ 5). *, p Ͻ 0.05 versus control group. #, p Ͻ 0.05 versus ISO group. CON, control. of SPK2 expression in neurons. Taken together, all these results indicate that SPK2-mediated autophagy activation in preconditioning may not depend on its catalytic activity. S1P-independent actions of SPK2 are not unprecedented: SPK2 regulates IL-2 pathways in T cells independently of S1P (50), and previous studies have shown that SPK2 is a BH3-only protein that induces apoptosis when overexpressed in different cell types (34,35). BNIP3 is another BH3 domain protein that is up-regulated by hypoxia via hypoxiainducible factor-1␣; up-regulated BNIP3 displaces Beclin 1 from Bcl-2/Beclin 1 or Bcl-XL/Beclin 1 complexes, releasing Beclin 1, thereby initiating mitochondrial autophagy and decreasing reactive oxygen species production (51,52). The literature suggests that although hypoxia-induced upregulation of SPK2 is protective (53), SPK2 overexpression induces apoptosis (34). Interestingly, such dual effects have similarly been reported for BNIP3/BNIP3L (35,54). It is therefore tempting to speculate that the effect of SPK2 on cell fate might 1) be critically dependent on its levels, on the levels of interacting molecules or on the cellular environment and 2) involve a mechanism similar to that described for BNIP3/BNIP3L. Our co-immunoprecipitation experiments indeed support the notion that SPK2 is another BH3only protein up-regulated by preconditioning that can displace Beclin 1 from Bcl-2/Beclin 1 complexes, release Beclin 1, and initiate autophagy. In addition, we found that cortical neurons transfected with Beclin 1 siRNA did not show preconditioning-mediated autophagy activation, suggesting that that ISO is associated with Beclin 1-dependent autophagy. Taken together, our results suggest that autophagy is involved in preconditioning in cortical neurons both in vivo and in vitro and that SPK2 contributes to preconditioning-induced autophagy by disrupting Bcl-2/Beclin 1 complexes. 
Although most current drugs act either on receptors or on enzymes, usually interacting with their ligand binding or catalytic sites (55), the discovery of new signaling properties independent of SPK2 FIGURE 8. SPK2-mediated autophagy activation in preconditioning does not depend on its catalytic activity. A, S1P and FTY720 did not protect against OGD injury. Cortical neurons were incubated with S1P (0.3-3 M) or FTY720 (0.03-1 M) 24 h before the onset of 4-h OGD. OGD reduced cell viability; neither FTY720 nor S1P treatment was able to prevent cell death (n ϭ 3 independent experiments). **, p Ͻ 0.01 compared with the control group. B and C, cortical neurons were then incubated with S1P (1 or 3 M) or FTY720 (1 M) for 28 h. S1P and FTY720 had no effect on LC3 (B) or p62 (C) expression (200 nM rapamycin was used as a positive control). D, neurons were preincubated with ABC294640 (10 M) and SKI-II (1 M) 30 min before the onset of ISO. The levels of SPK2 were measured by immunoblotting (n ϭ 3 independent experiments). *, p Ͻ 0.05; **, p Ͻ 0.01 versus control group. #, p Ͻ 0.05; ##, p Ͻ 0.01 versus ISO. CON, control; FTY, FTY720.
2018-04-03T04:07:43.942Z
2014-06-13T00:00:00.000
{ "year": 2014, "sha1": "6e1e271256c95a471df69a3e9b76ccb99c21d3c6", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/289/30/20845.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "fd168a275ff8ab28c1e2d9b5c48083f09566914d", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
4710138
pes2o/s2orc
v3-fos-license
The Correlation between Changes in Biochemical Parameters and Central Macular Thickness in Patients with Non-Proliferative Diabetic Retinopathy.

This study aimed at evaluating the correlation between changes in hemoglobin A1c (HbA1c) and fasting serum lipids, and Central Macular Thickness (CMT), in patients with Non-Proliferative Diabetic Retinopathy (NPDR). In the current research, both eyes of 68 patients with mild or moderate NPDR, without clinically significant macular edema, were studied. Levels of fasting serum lipids, HbA1c, and CMT were measured during the first visit and at the end of the follow-up period (3 months). For statistical analysis, the CMT of each eye was studied and the correlation of changes was investigated. Additionally, the direction of changes in CMT for each eye was determined, and whether the changes in both eyes were symmetrical was investigated. Out of 68 patients, 24 were male and 44 were female. The mean CMT of all eyes was 290.05 ± 48.90 µm during the first visit and 286.80 ± 37.57 µm at the 3rd-month follow-up. The mean HbA1c was 8.71 ± 1.82% at the first visit to the hospital and 8.39 ± 1.65% at the final visit. Although the changes in HbA1c and CMT during the follow-up period were statistically insignificant, the correlation between these two values was statistically significant (p=0.01). However, in 13 patients, the CMTs of the two eyes changed asymmetrically during the follow-up period. To the best of the authors' knowledge, this was the first study to indicate a significant correlation between changes in CMT and HbA1c, even amongst patients with low-grade diabetic retinopathy. Demonstration of asymmetric changes in CMT of each treatment-naive eye of the same patient, during changes in systemic conditions, was another important finding of this study.

INTRODUCTION

Diabetes Mellitus (DM) is a chronic metabolic syndrome characterized by hyperglycemia due to insulin resistance. Long-standing DM affects many organs and tissues, leading to several complications, such as Diabetic Retinopathy (DR) and Diabetic Macular Edema (DME) [1,2]. The pathogenesis of DME remains unclear, as complex processes with various contributing factors seem to be involved. Increasing diabetes duration with chronic hyperglycemia, advanced glycation end-products, levels of glycosylated hemoglobin (HbA1c), free oxygen radicals, protein kinase C, hypercholesterolemia, and blood pressure play important roles in such progressive metabolic diseases [3-5]. Hemoglobin A1c is an index of average glycemic control over the previous 2 to 3 months and indicates the level of diabetic control; therefore, increased HbA1c concentration is the most important risk factor for the development of DM complications, mainly DME [6]. Several conflicting reports regarding the effect of the serum lipid profile on macular edema have been published; a number of these did not show any statistical correlation between serum lipid parameters and CMT, while others showed that high serum lipid levels indicate a risk of hard exudate and macular edema development [3-6]. The purpose of the current study was to evaluate and compare the correlation between changes in HbA1c, fasting serum lipids, and Central Macular Thickness (CMT) in patients with low-grade DR. Additionally, the researchers aimed at examining whether the changes in CMT of each eye were symmetrical under the changing systemic conditions of the same subject.
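For readers less familiar with HbA1c, it maps onto an estimated average glucose via the widely used ADAG regression (eAG in mg/dL = 28.7 × HbA1c% − 46.7). A one-line sketch is given below, applied to the mean baseline HbA1c reported in this study purely for illustration.

    def estimated_average_glucose(hba1c_percent):
        """ADAG study regression: eAG (mg/dL) = 28.7 * HbA1c(%) - 46.7."""
        return 28.7 * hba1c_percent - 46.7

    print(round(estimated_average_glucose(8.71), 1))  # ~203.3 mg/dL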
MATERIALS AND METHODS

The current study was carried out at the Ulucanlar Eye Education and Research Hospital. All procedures were designed in accordance with ethical standards and the principles of the Declaration of Helsinki for human subjects. The Medical Ethics Committee of Diskapi Training and Research Hospital reviewed and approved the study protocol, and informed consent forms were obtained from all participants. Both eyes of 68 patients with type 2 DM and mild or moderate Non-Proliferative DR (NPDR), without Clinically Significant Macular Edema (CSME) according to the criteria of the International Clinical Diabetic Retinopathy Disease Severity Scale [7] and the Early Treatment Diabetic Retinopathy Study (ETDRS) [8], were studied in the current research. The presence of type 2 DM in each patient was confirmed by an internal medicine consultant. For systemic evaluation, the presence of other systemic diseases associated with type 2 DM, including hypertension, diabetic nephropathy, and diabetic neuropathy, as well as the type of diabetic treatment and history of other medications, such as antihypertensives, were recorded for each patient. Patients with a history of other retinal diseases, glaucoma, uveitis, ocular trauma, or any type of ocular surgery, and eyes with proliferative DR, laser photocoagulation, or intravitreal injections, were excluded from the current study. All patients had undergone detailed ophthalmic evaluation, including Visual Acuity (VA) and intraocular pressure measurement, detailed slit lamp examination, and dilated fundus examination. All measurements and evaluations were performed at the patients' first visit and at the 3rd-month follow-up. None of the patients had ocular risk factors, such as cataract extraction, trauma, inflammation, or topical treatment, which might affect CMT during this period. To evaluate the correlation between changes in biochemical parameters, fasting serum lipids, including levels of High-Density Lipoprotein (HDL), Low-Density Lipoprotein (LDL), and Triglyceride (TG), as well as plasma HbA1c, were recorded for each subject through peripheral blood sampling. Samples were evaluated with standard methods, including a Roche Modular-P 800 device (Roche Diagnostic, GmbH, Germany) for fasting serum lipids. Low-Density Lipoprotein levels were calculated from the available lipid data using the Friedewald formula. Hemoglobin A1c was measured by high-performance liquid chromatography with ultraviolet detection [9]. Central Macular Thickness was measured on the same day as the serum parameter evaluation. With Spectral Domain Optical Coherence Tomography (SD-OCT) (Spectralis; Heidelberg Engineering, Heidelberg, Germany), 20×20-degree volume scans consisting of 49 horizontal high-resolution line sections, together with scanning laser ophthalmoscopy en face and fundus autofluorescence images of the macula, were obtained with the HRA2 (Heidelberg Retina Angiograph-Optical Coherence Tomography, Heidelberg Engineering, Heidelberg, Germany). The morphology of the central retina, such as macular cystic changes or diffuse macular thickening, was evaluated by the same observer. Central Macular Thickness was measured from the same area each time with the aid of an eye-tracker system, and to obtain high-quality images without pupil dilation, motion artifacts were eliminated by the SD-OCT device. To evaluate changes in CMT during the follow-up period, differences in macular thickness at the same location of the fovea were calculated with the progression mode of the SD-OCT device during each observation period.
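The Friedewald estimate used for LDL is a simple arithmetic relation; a minimal sketch (all values in mg/dL) is shown below. The 400 mg/dL triglyceride cutoff reflects the formula's well-known limitation and is not a statement about this study's data.

    def friedewald_ldl(total_chol, hdl, triglycerides):
        """Friedewald estimate of LDL cholesterol (mg/dL):
        LDL = TC - HDL - TG/5. Unreliable above ~400 mg/dL TG."""
        if triglycerides > 400:
            raise ValueError("Friedewald formula is unreliable above 400 mg/dL TG")
        return total_chol - hdl - triglycerides / 5.0

    print(friedewald_ldl(200, 45, 150))  # 125.0 mg/dL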
The direction of CMT changes for each eye was determined by the different analysis systems in the OCT. Statistical analyses of the data were performed using the Statistical Package for the Social Sciences software (SPSS Inc., Chicago, IL, USA), version 24. The normality of variables was assessed by the Kolmogorov-Smirnov test and the variables were evaluated with the appropriate statistical methods. The Wilcoxon test was used for comparison of each parameter measured during the different examination periods, as the variables did not have a normal distribution. In order to evaluate the correlations between non-normally distributed variables, the correlation coefficients and their significance were calculated using Spearman's correlation test. The Mann-Whitney U test was used to evaluate the significance of correlations between non-normally distributed numerical variables of different groups. To evaluate the correlation between changes in HbA1c and CMT, the chi-square test was performed. P values < 0.05 were considered significant.

RESULTS

Amongst the 68 patients, 24 (35.30%) were male and 44 (64.70%) were female. The number of female subjects was significantly greater than that of male subjects (P = 0.03). The mean age was 57.5 years (range of 38 to 80 years). The mean age in the female group was 60 years (range of 39 to 74 years), while it was 53 years (range of 38 to 80 years) in male patients. The difference was statistically insignificant (P = 0.10). Hypertension was observed in 28 (41.2%) of the 68 cases, being the most frequent systemic disease associated with DM in the current study. There was also no significant difference between genders regarding the presence of hypertension (p=0.14). Other complications of DM, such as polyneuropathy, nephropathy, history of dialysis, diabetic foot, or any other systemic disease, were not present in any of the patients. A total of 43 patients were under insulin treatment, while 23 were receiving oral anti-diabetic medication, 1 patient was under combined treatment with oral anti-diabetic medications and insulin injections, and only 1 patient did not require any systemic treatment for DM. A summary of the demographic data, presence of hypertension, and treatment protocol of diabetes mellitus, according to gender, is presented in Table 1. The mean HbA1c was 8.71 ± 1.82% when the patients first presented to the hospital, while it changed to 8.39 ± 1.65% at the 3rd-month follow-up. The mean HbA1c was significantly higher than the recommended upper limits for patients with diabetes, according to the American Diabetes Association (ADA) recommendations [10]. In 22 patients, the HbA1c level increased, while it decreased in 39 patients, and in only 7 subjects the HbA1c level remained the same. Both during the initial visit and at the 3rd month, HbA1c levels were normally distributed, as evaluated by the Kolmogorov-Smirnov test, and there was an insignificant decrease at the 3rd-month follow-up visit (P = 0.06). Furthermore, the differences between fasting serum lipid measurements were not statistically significant (p values were 0.25, 0.62, and 0.09 when comparing HDL, LDL, and TG levels, respectively). All of these serum parameters were also normally distributed during each period.
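A minimal sketch of the statistical pipeline described above (normality check, Wilcoxon signed-rank comparison, Spearman correlation of the changes, and a chi-square test on the direction of change), assuming NumPy/SciPy; all arrays and counts below are hypothetical placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements: first visit vs. 3rd-month follow-up
cmt_first = np.array([295.0, 310.0, 280.0, 301.0, 288.0, 292.0])
cmt_final = np.array([290.0, 305.0, 284.0, 296.0, 287.0, 294.0])
hba1c_first = np.array([8.9, 9.4, 7.8, 8.2, 8.6, 9.0])
hba1c_final = np.array([8.5, 9.0, 8.0, 7.9, 8.4, 8.8])

# Normality (Kolmogorov-Smirnov against a fitted normal distribution)
ks_stat, ks_p = stats.kstest(cmt_first, "norm",
                             args=(cmt_first.mean(), cmt_first.std(ddof=1)))

# Paired comparison of the two visits (Wilcoxon signed-rank, non-parametric)
w_stat, w_p = stats.wilcoxon(cmt_first, cmt_final)

# Correlation between the *changes* in HbA1c and CMT (Spearman)
rho, rho_p = stats.spearmanr(hba1c_final - hba1c_first, cmt_final - cmt_first)

# Chi-square on the direction of change (2x2 contingency, hypothetical counts)
chi2, chi_p, _, _ = stats.chi2_contingency([[12, 5], [6, 14]])

print(f"KS p={ks_p:.3f}  Wilcoxon p={w_p:.3f}  "
      f"Spearman rho={rho:.2f} (p={rho_p:.3f})  chi2 p={chi_p:.3f}")
```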
When the patients first presented, the mean CMT of all eye examinations was 290.05 ± 48.90 µm, while it became 286.80 ± 37.57 µm at the 3rd-month follow-up. The difference between the CMTs during the 2 observation periods was statistically insignificant (P = 0.11), although a slight decrease was observed at the 3rd-month follow-up visit. In contrast to the serum parameters, the CMTs were not normally distributed in either observation period. The mean value ± Standard Deviation (SD) of HbA1c, serum lipids, and CMT during the first and final visits are presented in Table 2; the directions of CMT change are summarized in Table 3. A total of 55 subjects (80.88%) had CMT changes in the same direction in both eyes of the same patient. In 28 subjects, the CMTs of both eyes decreased, while they increased in both eyes of 27 subjects. During the follow-up period, in 13 subjects (19.11%) the CMTs changed asymmetrically: in 3 subjects, the CMT increased in one eye while it remained the same in the other eye; similarly, in 3 subjects, the CMT decreased in one eye while it did not change in the other; and in 7 subjects, the CMT increased in one eye while it decreased in the other eye.

DISCUSSION

In the current study, a correlation was found between macular thickness and both HbA1c and fasting serum lipids, as indirect evidence of metabolic control. Type 2 DM is a complex disease, and the risk of developing DR has been found to be associated with several factors, such as diabetes duration, cardiovascular disease, and blood pressure [2,5,6]. Thapa et al. [6] found that concurrent hypertension was observed in 55.76% and an abnormal lipid profile in 52.56% of their subjects. In the current study, results similar to those of Thapa et al. [6] were found. In the mentioned study, hypertension was observed in 58.82% of patients, being the most frequent systemic factor associated with diabetes. In clinical practice, the decision to initiate treatment is based on retinal findings in biomicroscopy and SD-OCT changes rather than VA. The findings of Pieramici et al. [11] reaffirm the discordance between retinal thickness and VA, which had been widely accepted and demonstrated previously. In accordance with previous studies, the current evaluation also focused on the anatomic correlation with both fasting blood lipids and HbA1c rather than VA. Thapa et al. [6] observed that poor glycemic control (HbA1c > 7%) was found in 73.97% of cases of newly diagnosed proliferative DR among patients with type 2 DM. Even though the current study only evaluated NPDR, high HbA1c (HbA1c > 7%) was seen in 79.4% of the subjects during both visits, which is very similar to the results of Thapa et al. This finding indicates that uncontrolled blood sugar is very common in patients with DR of any stage. According to several studies, reduction of HbA1c values decreases the risk of development or progression of any stage of DR among patients with type 2 DM [6,[12][13][14]. In addition, Benarous et al. [13] reported that HbA1c had a positive correlation with DR stage: it was 7.3% in patients without DR, 8.0% in eyes with DR, 8.1% in eyes with DME, and increased up to 8.3% in eyes with CSME. Various studies have indicated that a higher HbA1c level is associated with the occurrence of DME [14,15]. Jew et al. [5] published their work on the correlation between HbA1c and DME, in which the HbA1c was 7.8% in eyes without DME while it was 10.3% in eyes with DME. Such studies suggest a significant correlation between HbA1c and different stages of DR, yet the range of systemic and local conditions between mild NPDR and severe proliferative DR is very large.
Additionally, DME is an important outcome that can occur at any stage of DR, being a major cause of visual impairment and blindness. Several SD-OCT patterns of DME, such as diffuse retinal thickening, serous retinal detachment, tractional retinal detachment due to posterior hyaloid traction, and cystoid macular edema, have been described, with all of these patterns generally co-existing with one another [16][17][18]. To evaluate the real effect of systemic factors on CMT and exclude local factors, the current study did not include patients with vitreomacular/vitreopapillary traction. To enhance the homogeneity and validity of the study group, only treatment-naive eyes with mild and moderate NPDR, which did not need laser photocoagulation or intravitreal anti-Vascular Endothelial Growth Factor (anti-VEGF) injection, were evaluated. In the current study, the mean HbA1c was 8.71% and the CMT was 290.05 µm at the initial visit, while the mean HbA1c decreased to 8.39% and the CMT decreased to 286.8 µm in eyes with NPDR. A total of 60.29% of patients had parallel changes in HbA1c and CMT, which were statistically correlated. Ozturk et al. [19] found similar results, where the serum HbA1c values were found to correlate with the change in CMT during anti-VEGF treatment. The mean HbA1c was 8.25 ± 1.74% (range of 5.7% to 12.7%) in their sample, which was very similar to that of the current study, while the mean CMT was 468 µm (range of 255 to 964 µm) in their study, which was higher than in the current subjects. This may be because patients with advanced DR, who needed anti-VEGF treatment, were enrolled in their study. Suwal et al. [2] showed that none of the serum lipid associations with DME were statistically significant, and serum lipid profiles, including total cholesterol, HDL, LDL, and TG, had no effect on CMT. Similarly, Benarous et al. [13] found that serum lipid levels were not correlated with the development of DME or increased macular thickness. On the other hand, Sasaki et al. (20) observed that total cholesterol, HDL, and TG levels were not significantly associated with CMT; however, LDL was positively associated with CMT. The current study did not evaluate subjects with DME, and each of the subjects had low-grade DR compared with the studies of Suwal et al. [2] and Benarous et al. [13]. No one developed DME during the follow-up period in the current study. In this study, while there was no association between any type of fasting serum lipid and CMT changes, a significant correlation was observed between HbA1c and CMT changes during the observation period. This result shows that the blood sugar level may be an essential determining factor for CMT. Altintas et al. [20] reported that the mean CMT of both eyes with NPDR was 297.12 μm, as evaluated by SD-OCT. The CMT of each eye was not symmetrical in most of the patients, being 304.40 μm in the worse eye and 273.28 μm in the better eye. In the current study, 13 patients' CMTs changed asymmetrically in the two eyes of the same patient. This means that under changing systemic conditions, that is, as the HbA1c and fasting serum lipid levels changed during the follow-up period in the same subjects, each retina could behave differently. Therefore, other local factors may influence each macula differently in the same subject.
There is an upregulation of growth factors and cytokines, including angiopoietins, tumor necrosis factor, interleukins, and matrix metalloproteinases, that contributes to the breakdown of the blood-retinal barrier with consequent vascular leakage, ultimately being responsible for DME. Therefore, due to several sources of variability in each patient, and even in each eye of the same patient, different responses to the same type of therapy could be expected. To the best of the authors' knowledge, the current study was the first to reveal a significant correlation between changes in CMT and HbA1c, even for slight alterations in HbA1c, in eyes with treatment-naive NPDR. Demonstration of asymmetric changes in the CMT of both eyes of the same patient under changing systemic conditions is another important outcome of the current research. The limitation of this study was the short follow-up period, and further studies are required to evaluate the local factors that cause asymmetric involvement of each macula in the same patient.

DISCLOSURE

No funding or sponsorship was received for this study. All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this manuscript, take responsibility for the integrity of the work as a whole, and have given final approval for the version to be published.
The influence of vegetable-oil based polyols on physico-mechanical and thermal properties of polyurethane foams

During the last 50 years, polyurethanes have become one of the most rapidly developing groups of polymers, and it is almost impossible to find an industrial field where they are not used. The term covers a wide range of materials, both expanded and non-expanded products. PUs are widely used in many applications as foams (flexible, semi-flexible and rigid foams), elastomers, adhesives and fibers, and are obtained by the exothermic reaction of an oligomeric polyol (a substance which contains at least two hydroxyl groups) and polyfunctional isocyanates. PU foams are considered to be among the most efficient insulation materials, with many desirable properties (very low conductivity, low density and water absorption, dimensional stability and a high eco-efficiency index for saving energy). Nowadays, rigid polyurethane foams are also synthesized using vegetable-oil based polyols, owing to their abundance and economy. What is more, materials synthesized from renewable resources can almost fully replace their petrochemical analogs. Several types of vegetable oils have already been used, such as soybean oil, palm oil, linseed oil and sunflower oil. Such oils are characterized by a low number of functional groups; however, the unsaturated bonds present in their structure can be successfully converted into hydroxyl groups. A promising possibility is using waste cooking oil to synthesize polyols, but the biggest problems are its low hydroxyl value and contamination from food. It is necessary to check the influence of various polyol systems on the physical, mechanical and thermal insulation properties, as well as on the cellular structure.

Introduction

Polyurethanes (PUs) actually have a very short history, but they quickly became one of the most rapidly developing groups of polymers. Polyurethane is a material widely used in many different applications, such as foams, elastomers, adhesives and fibers. Additionally, they are used in some specialty applications (biomedical surgery). They are clearly a great research subject due to their mechanical, physical and chemical properties. Polyurethanes are obtained by the reaction of an oligomeric polyol and a diisocyanate. The structure and properties of the polyol have a great influence on the properties of the resulting polymer [1]. In addition, to control the reactions and properties of PU foams, many additives, such as catalysts, surfactants, fillers, chain extenders and flame retardants, are added [2].

Nowadays, mostly petrochemical raw materials are used to produce polyurethanes. This has been changing due to uncertainty in the price and availability of petroleum. What is more, there are tendencies to develop sustainable chemistry and the use of renewable resources. This is the reason why the chemical industry has turned its attention to the production of biobased polyols, mainly synthesized from vegetable oils [3]. It is important to realize that a biobased product is not necessarily a biodegradable material, but only a material made from renewable resources [1][2][3].
Most of the polyols used for polyurethane synthesis are polyether polyols (75%), which are obtained by the reaction between a 'starter' polyol and an alkylene oxide. Typical industrial starters are ethylene glycol, glycerol, sorbitol, sucrose, water, ethylenediamine and diaminotoluene. The most important starter for the synthesis of polyether polyols is glycerol, which is produced by the hydrolysis of natural triglycerides from vegetable or animal resources [3]. Long-chain polyether polyols usually have an OH number below 100 mg KOH/g, around 2-3 hydroxyl groups per molecule, and molecular weights of 2000-10000 daltons. They are mainly used for flexible foams. By contrast, short-chain polyether polyols usually have an OH number of about 200 mg KOH/g, high functionality (3-8 hydroxyl groups/mol) and molecular weights of about 300-1000 daltons. Using short-chain polyether polyols leads to rigid, crosslinked polyurethanes. Polyester polyols are also used (25%), resulting from step-growth polycondensation between a dicarboxylic acid and a polyol in excess [3][4]. Unfortunately, a 100% biobased polyurethane is not available yet because of unsatisfactory properties. So far, researchers have developed materials containing about 50% biobased components with properties no worse than those of petroleum-based polyurethanes [5].

Depending on the application, the renewable content of commercially available bio-based polyols varies between 30-100%. As a consequence, the renewable content of the bio-based substrate in the PU also varies, around 8-70% (Table 1) [6].

Synthesis of bio-polyols

Vegetable oils come from many different plants, such as soybean, palm and rapeseed, and contain mainly triglyceride molecules, in which the three hydroxyl functions of glycerin are esterified with fatty acids, which can be saturated or unsaturated [3]. The most important are unsaturated oils, whose double bonds can be transformed into hydroxyl groups [3,7].

There are four methods that can be used to prepare polyols from vegetable oils (Table 2) [8]: oxidation and epoxidation; esterification; hydroformylation; and ozonolysis. [Table 2 lists the name and reaction scheme of each route.] By the direct action of hydrogen peroxide on double bonds and subsequent ring opening using alcohols or inorganic acids, a high degree of hydroxylation may be incorporated into polyols [3][4]. Vegetable oils characterized by a higher degree of unsaturation always produce polyols with much higher hydroxyl functionalities, which results in PUs with higher crosslink density [4]. Castor oil, derived from the castor bean, contains 87-90% ricinoleic acid (a fatty acid triglyceride) and can be transesterified with a polyhydroxylated compound such as glycerine in order to obtain higher hydroxyl functionality. The hydroformylation route consists of two steps. In the first, vegetable oils react with syngas (CO and H2) in the presence of rhodium, which is the better catalyst (almost 100% efficiency), or cobalt carbonyls (about 60% efficiency). This reaction introduces aldehyde groups, which are then converted to hydroxyls in the second step (hydrogenation). Polyols obtained by this route have longer network chains than the polyols prepared by epoxidation, due to an extra carbon at every double bond. Ozonolysis of vegetable oil typically involves two main steps: the first is the formation of an ozonide at the unsaturation sites of the oil and its decomposition into an aldehyde and a carboxylic acid; the second is the reduction of the aldehyde into an alcohol (with a catalyst). Ozonolysis of soybean oil results in triols, triglyceride diols and some mono-ols [1,[7][8].
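To make the relationship between hydroxyl number, functionality and molecular weight quoted above concrete, the sketch below evaluates the standard definition OH number = 56100·f/M (in mg KOH/g, where 56100 mg/mol is the molar mass of KOH); the example polyols are illustrative, not specific commercial products.

```python
KOH_MG_PER_MOL = 56100.0  # molar mass of KOH, in mg/mol

def hydroxyl_number(functionality, molecular_weight):
    """OH number (mg KOH/g) of a polyol with the given average functionality
    (hydroxyl groups per molecule) and number-average molecular weight (g/mol)."""
    return KOH_MG_PER_MOL * functionality / molecular_weight

# Long-chain polyether polyol for flexible foams:
print(round(hydroxyl_number(3, 3000)))  # ~56 mg KOH/g (below 100, as stated)
# Short-chain polyol for rigid, crosslinked foams:
print(round(hydroxyl_number(3, 800)))   # ~210 mg KOH/g
```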
A special vegetable oil is castor oil, extracted from the seeds of the plant Ricinus communis; it is a triglyceride of ricinoleic acid and contains naturally occurring hydroxyl groups (Figure 1). Natural castor oil has a functionality of around 2.7 OH groups/mol and a hydroxyl number of around 160 mg KOH/g, and is used in many applications, such as coatings, rigid foams, adhesives, thermoplastic elastomers and flexible foams. On the other hand, castor oil has some major disadvantages: low functionality and low reactivity due to its secondary hydroxyl groups. Two major modifications can be used to improve the properties and applicability of castor oil-based polyols for producing PUs. The first is transesterification/amidation and the other is alkoxylation of its hydroxyl groups; both lead to new polyols which can be used to obtain rigid polyurethane foams with good physico-mechanical properties. Castor oil used alone leads to semi-flexible or semi-rigid PU foams, but mixing it with polyols such as glycerol gives a higher hydroxyl number [4,7].

Properties of PU foams obtained by using bio-based polyols

Most of the research conducted so far has concerned soybean oil, which has become one of the most popular and widely used oils of recent years.

A.A. Beltran et al. have conducted interesting research on using soy-based polyols to obtain rigid polyurethane foams. As the isocyanate (A side) they used polymeric MDI, and as the B side they mixed polyols constituting 100 parts of the formulation together with the remaining compounds (catalyst, surfactant, foaming agent). The polyol mix included Voranol 446, Voranol 640 and soy-based polyols with hydroxyl numbers of 120 mg KOH/g (from ethanol) and 331 mg KOH/g (from ethylene glycol). Both polyols were prepared through in situ epoxidation of the soybean oil. The soybean-oil polyol percentage in the mix varied between 20 and 30% for a total of 4 blocks of rigid foam, two for each type of oleochemical polyol. Rigid PU foams are mainly used for thermal insulation, with an optimal density of 40-60 kg/m³. As for compressive strength, for densities of 30 kg/m³ it is in the range of 100-150 kPa. The obtained values (Table 3) validate the use of the produced foams in the polymer industry. The authors note that the produced foams can be used especially for insulation in refrigeration and cooling appliances [5].

Yusuf A.K. et al. have also examined the influence of castor oil polyols on the mechanical properties of rigid PU foams. They used two formulations, with isocyanate/polyol (NCO/OH) ratios of 1/2 and 1/1, and a one-shot method. The glycerolized castor oil contained varying concentrations (10-60%) of the modifier, and the other ingredients were a mixture of 2,4- and 2,6-toluene diisocyanate at room temperature, a catalyst, a physical blowing agent (methylene chloride) and a surfactant (silicone oil). The hydroxyl number range of the modified castor oil polyols was 168-320 mg KOH/g. The compressive strength of the obtained PU foams increased steadily up to about 30 wt% modifier concentration, followed by a sharp rise up to 60% (for foams prepared using formulation II). The highest compressive strength reached values up to 450 kN/m². Higher hydroxyl functionality in the polyol results in more crosslinking reactions and therefore in the formation of an inter-chain network. This structure can result in increased density and rigidity of the foam structure, which makes the foam harder to compress [9].
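As a worked illustration of how such NCO/OH formulation ratios translate into parts by weight, the following Python sketch computes the isocyanate demand from the polyol hydroxyl number and the NCO content of the isocyanate, using the standard equivalent-weight calculation; all numeric inputs are illustrative assumptions rather than values from the cited studies.

```python
def isocyanate_parts(oh_number, nco_percent, index=1.0, polyol_parts=100.0,
                     water_parts=0.0):
    """Parts by weight of isocyanate per `polyol_parts` of polyol.

    oh_number:   hydroxyl number of the polyol blend, mg KOH/g
    nco_percent: NCO content of the isocyanate, wt% (e.g. ~31 for polymeric MDI)
    index:       NCO/OH ratio (1.0 = stoichiometric, 0.5 = a 1/2 formulation)
    water_parts: water acts as a chemical blowing agent; 1 g of water
                 consumes 2/18 mol of NCO groups
    """
    oh_equivalents = polyol_parts * oh_number / 56100.0   # mol of OH groups
    water_equivalents = water_parts * 2.0 / 18.0          # mol of NCO consumed
    nco_equivalent_weight = 4200.0 / nco_percent          # g per mol of NCO
    return (oh_equivalents + water_equivalents) * nco_equivalent_weight * index

# Illustrative: a polyol blend with OH number 331 mg KOH/g reacted with
# polymeric MDI (assumed 31% NCO) at a stoichiometric index of 1.0
print(round(isocyanate_parts(oh_number=331, nco_percent=31.0, index=1.0), 1))
```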
Researchers from the Cracow University of Technology used a bio-polyol based on rapeseed oil to produce flexible PU foams. One of the aims was to check the influence of the bio-polyol production scale on selected mechanical and physical properties. It was observed that the production scale has no significant impact on the flexible PU foam properties (density, hardness, hysteresis, support factor and resilience), but it turned out that different bio-polyol contents have a great influence on the foam properties compared with the reference foams, which were not modified with the bio-polyols. The PU formulations for the periodical synthesis are shown in Table 4 [10]. Figure 2 shows the effect of polyol type on the apparent density and hardness at 40% deformation of foams obtained by the periodical method. It was observed that the bio-polyol addition reduces the density of the obtained materials, which is connected with the increased amount of catalyst [10]. The addition of a bio-polyol to the PUR formulation increases the hardness at 40% deformation in the majority of foams, in comparison with the reference foam, despite the reduced apparent density; this can be caused by the higher content of hard segments and the difference in crosslink density. The results of the conducted research confirmed that the modification of polyurethane systems with rapeseed oil-based polyols can give beneficial effects [10].

Syuhada Mohd Tahir et al. carried out studies to determine the potential of waste cooking oil in the preparation of rigid PU foam. First, the raw waste cooking oil was filtered and treated with sugarcane bagasse activated carbon to purify the oil. Next, a transesterification reaction was used to synthesize the polyol, and the obtained polyol was then mixed with other chemicals to form a rigid PU foam. The density and compressive strength of the 60:54:90:40 glycerol:water:polyol:amine polyurethane foam are 277.7 kg/m³ and 0.10 MPa, respectively. The study shows that waste cooking oils can be used in the production of rigid PU foams with satisfactory properties. The similarity of the organic structure of waste cooking oil to that of vegetable oils makes it a perfect starting material [11].

Future perspectives

Vegetable oils are becoming more and more popular in polyol synthesis due to properties that allow them to be converted into valuable polyols, which can be used to produce not only PU foams but also elastomers, coatings and rigid plastics. Each material is characterized by the same or even better properties than products made from petroleum-based substrates. It is possible to obtain polyols with different reactivities, functionalities, molecular weights and other important properties. There is great potential to change the world of PU foams entirely, but vegetable oils and their derivatives still face challenges that are hard to solve, including the costs of production. Nevertheless, the advanced technologies are promising and the future of bio-based polyols looks very bright [1,12].

Figure 1. Chemical structure of the major fatty acid in castor oil.
Figure 2. Apparent density and hardness at 40% deformation of foams obtained using different bio-polyols.
Table 1. Renewable content of commercially available bio-based polyols and PURs.
Table 3. Properties of the obtained PU foams.
Table 4. Characteristics of the applied bio-polyols and the PUR formulation.
Dark Matter Direct Search Rates in Simulations of the Milky Way and Sagittarius Stream

We analyze self-consistent N-body simulations of the Milky Way disk and the ongoing disruption of the Sagittarius dwarf satellite to study the effect of Sagittarius tidal debris on dark matter detection experiments. In agreement with significant previous work, we reiterate that the standard halo model is insufficient to describe the non-Maxwellian velocity distribution of the Milky Way halo in our equilibrium halo-only and halo/galaxy models, and offer suggestions for correcting for this discrepancy. More importantly, we emphasize that the dark matter component of the leading tidal arm of the Sagittarius dwarf is significantly more extended than the stellar component of the arm, since the dark matter and stellar streams are not necessarily coaxial and may be offset by several kpc at the point at which they impact the Galactic disk. This suggests that the dark matter component of the Sagittarius debris is likely to have a non-negligible influence on dark matter detection experiments even when the stellar debris is centered several kpc from the solar neighborhood. Relative to models without an infalling Sagittarius dwarf, the Sagittarius dark matter debris in our models induces an energy-dependent enhancement of direct search event rates of as much as ~20-45%, an energy-dependent reduction in the amplitude of the annual modulation of the event rate by as much as a factor of two, a shift in the phase of the annual modulation by as much as ~20 days, and a shift in the recoil energy at which the modulation reverses phase. These influences of Sagittarius are of general interest in the interpretation of dark matter searches, but may be particularly important in the case of relatively light (m_X < 20 GeV) dark matter because the Sagittarius stream impacts the solar system at high speed compared to the primary halo dark matter.

Introduction

Numerous experiments aim to detect particle dark matter directly by measuring the rate of extremely rare nuclear recoil events from the elastic scattering of weakly-interacting massive particles (WIMPs; for context on this process, see [1][2][3]). To interpret properly any putative direct detection signal or the limits on particle properties implied by any null searches, a detailed understanding of the phase space distribution of dark matter in the solar neighborhood is required ([4][5][6]; see also [7] for a general review of the topic). Predictions for the rate of scattering events in direct search experiments are usually made under standard assumptions of a canonical local WIMP density, ρ_0 ∼ 0.3-0.4 GeV/cm³, and a WIMP velocity distribution described by a Maxwellian function with a three-dimensional dispersion, σ_3D ∼ 270 ± 70 km/s [8]. Current experiments probe the parameter space of WIMP mass and scattering cross-section addressing many interesting dark matter candidates, including the lightest superpartner particles in supersymmetric theories [9]. Tentative signals suggest that the WIMPs comprising the cosmological dark matter may be particles with mass on the order of m_χ ∼ 5-20 GeV/c² (e.g., the CRESST, DAMA/LIBRA, and CoGeNT experiments in [10][11][12], respectively). In this mass range, nuclear recoils with sufficient energy to be detectable require larger relative velocities than dark matter particle candidates in the higher mass range of m_χ ∼ 100-1000 GeV/c² that has been more widely explored in recent years (as demonstrated by Figure 1 in ref. [13]).
The requirement of larger relative velocities renders scattering rates significantly more sensitive to the high-velocity tail of the dark matter velocity distribution than has been considered by many recent studies [14,15]. The slight velocity anisotropy and significant spatiotemporal variation of the velocity distribution [16][17][18] in a Milky Way-sized dark matter halo have been shown to affect significantly the predicted direct detection event rate compared to that obtained from a standard Maxwellian velocity distribution [19,20]. The possibility of dark matter streams in the solar neighborhood has motivated theoretical investigations of the effect of a coherent population of WIMPs on the annual modulation of the event rate observed by direct detection experiments [21][22][23][24][25][26]. Meanwhile, high-resolution cosmological numerical simulations of halo formation have revealed that there are many kinematically-cold streams comprising the halo, as well as myriad debris inflows from stripped subhalos [27,28]. These streams are associated with past mergers that built the Galactic mass to date, and are almost all of insufficient mass density to affect event rates measurably at the Earth's location in the Milky Way halo [29,30]. Previous numerical studies that have addressed the issue of direct detection and velocity distributions in high-resolution N-body simulations (e.g., refs. [18,19]) have largely been limited to analyses of simulations including only dark matter. These simulations do not account for the Milky Way disk and its cosmological growth and evolution to the present day. Such experiments represent only individual samples of the broad statistical ensemble of merger histories that lead to the formation of a Milky Way-sized dark matter halo. Consequently, these simulated halos lack specific structures known to exist in the Milky Way, such as the ongoing Sagittarius accretion and tidal stream, or other features that may have yet to be discovered. Therefore, the problem of mapping the results of cosmological numerical simulations onto specific predictions for the actual Milky Way halo, especially at the solar neighborhood, is a distinct challenge. Likewise, cosmological simulations designed to model the growth of a disk galaxy are not yet capable of accounting for the fundamental structures in a Milky Way-like galaxy. Such simulations generally do not produce thin and dynamically-cold stellar disks analogous to that of the Milky Way (as demonstrated by ref. [31] for a variety of computational techniques and energy feedback algorithms). Moreover, the computational expense of these campaigns imposes limits on the degree to which they can account for constraints on the unique mass assembly history of the Milky Way itself [32,33]. As an example, several hydrodynamical simulations have produced a Galaxy-sized host dark matter halo in a cosmological box, in concert with a significant, coherently-rotating "dark disk" component due to large accretions at late times [34][35][36]; although these components are widely found in a ΛCDM universe, the thin and cold Milky Way has had a quiescent accretion history that is incompatible with such recent minor mergers [37]. Modeling the dark matter and luminous components of the Milky Way in equilibrium with each other, in an isolated simulation calibrated to specific characteristics of the Milky Way (as in the formalism of [38]), complements cosmological simulations in a number of ways.
Isolated simulations are presently rather well-constrained by observational data on Galactic structure and substructure, and therefore may represent many aspects of the Milky Way more faithfully than an individual sample from a highly-stochastic ensemble of merger and accretion histories of Milky Way-sized halos in dark matter simulations that neglect baryons. Such isolated calculations represent one method for estimating the influences of significant baryonic structures, such as the Galactic disk, and controlling for the unique merger history of the Milky Way. Isolated simulations also provide a framework for incorporating important accidental features such as the Sagittarius dwarf galaxy (Sgr) and its associated tidal stream. The disadvantage of this approach is that it does not preserve the cosmological construction of the host halo, in which virialized streams associated with high-redshift merger activity would be remnants of galaxies with so few stars that their present-day spatial distribution cannot be determined; these mergers are not self-consistently treated in an approach explicitly designed to model the current structure of the Milky Way, in equilibrium with a smooth dark matter host halo. In the absence of good constraints on the late-time accretion history of the Galaxy, as determined observationally by mapping surveys, treating the Milky Way halo as an isolated and equilibrated disk/bulge/halo model at redshift z = 0 is a complementary method to high-resolution cosmological simulations. In this paper, we analyze a set of isolated simulations designed to represent the Milky Way disk along with the Sagittarius galaxy impact on the disk and the associated Sagittarius tidal debris.

[Figure 1. Upper panels: distributions of stars and dark matter from the disrupting Sagittarius dwarf satellite galaxy (green and gray particles, respectively), in our light Sgr and heavy Sgr models. In this perspective, the Milky Way disk plane is denoted by concentric blue rings at 5-kpc radial intervals, to a Galactocentric distance of 25 kpc. Lower panels: surface density maps of Sgr stars and dark matter through the disk mid-plane for each model, computed in a slice with a depth of 2 kpc, similar to the width of the Galactic disk. Note that the leading stellar stream does not pass directly through the solar neighborhood, although a significant amount of dark matter belonging to that stream is found near the Sun (as shown by red particles in the upper inset panels, and by grayscale shading in the lower panels). Indeed, the projected contours of stellar density and dark matter density are not concentric, particularly in the more heavily-stripped light Sgr model.]

We demonstrate that neglecting important modifications to N-body predictions, such as the presence of a large stellar Galactic disk and the ongoing Sagittarius dwarf merger, both structures that are known to exist, may well result in systematic misestimations of the expected event rates and annual modulation signatures in direct detection experiments. The stellar disk of the Milky Way itself, being in approximate equilibrium with its dark matter host halo, has concomitantly drawn the near-midplane region of the dark matter halo into a phase-space distribution peaked at higher velocity and with larger deviations from the Maxwellian form than found in dark matter-only simulations. These results follow from models of disk-halo equilibria formulated in ref.
[38] and are related to numerous previous statements of uncertainty in the local dark matter density and velocity distribution in Milky Way models, as in the reconstructions of [39,40] among many other efforts. Our most interesting results address the influence of the Sagittarius debris on dark matter detection experiments. Some observational measurements of the tidally-stripped stars suggest that the leading stellar arm of the Sgr tidal stream may fall several kiloparsecs away from the solar position in the Galactic plane (e.g., [41,42]); although recent detections of coherently-moving stellar populations in the solar neighborhood seem to rule out large-scale flows with Sagittarius-like vertical velocities [43,44], percent-level streams outside one or two kiloparsecs from the Sun are poorly constrained and difficult to disentangle using tracer-sampling techniques. Observationally-viable models for the Sgr infall exhibit several interesting features relevant to contemporary and future direct search experiments, because the models we present here treat the Sgr dwarf in a cosmological context, in the sense that we assume that the Sagittarius galaxy is itself embedded in a dark matter halo as it merges with the Milky Way. Utilizing simulations of the Sagittarius infall, we emphasize that although the primary stellar component of the Sagittarius debris stream may be several kiloparsecs away, the associated dark matter stream is always significantly more spatially extended than the stellar stream (as shown in Figure 1) and more coherent in velocity space. In fact, our models illustrate that the stellar and dark matter streams need not even be spatially concentric in realistic models. In other words, the peak of the stellar density of the stream may be displaced from the peak in dark matter density associated with the stream by several kiloparsecs. Both of these properties of simulated Sagittarius infall models suggest that it is not necessary for the solar neighborhood to be closer than several kiloparsecs to the primary stellar stream of Sagittarius debris in order for the stream to affect significantly dark matter detection rates. We use the dark matter streams realized in our simulations to model the dark matter velocity distribution in the solar neighborhood. Indeed, we find that the near coincidence of the solar neighborhood and the debris of the Sagittarius satellite's dark matter halo creates a peak in the high-velocity end of the dark matter velocity distribution. These Sgr particles can lead to significant effects on the energy dependence of dark matter detection rates, as well as the amplitude and phase of the annual modulation of event rates, particularly for relatively low-mass dark matter candidates (m_χ ≲ 20 GeV). Our detailed calculations involve velocity distributions drawn from high-resolution numerical experiments, involving an isolated Galaxy-analog stellar disk/bulge/halo model and a self-consistent treatment of the Sagittarius infall. In §2, we discuss the relevant features of the simulations introduced by [45] and describe the analysis implying that significant alterations may be necessary to the predictions of event rates in direct detection experiments. Our results are presented in §3, reserving §4 for discussion of their application and potential future prospects regarding detectors with low recoil-energy thresholds and more sensitive annual modulation constraints.
Simulations of the Galactic Environment

All analyses in this work are based on simulations described in [45], and we refer the reader there for technical details. In brief, we describe four high-resolution experiments based on self-consistent halo/galaxy models, using throughout the notation "host halo" to refer to a standard NFW dark matter halo [46] with a mass profile roughly consistent with that of the Milky Way, and "halo + disk" for the model with an identical halo generated in equilibrium with a Galactic-analogue disk and central bulge with a global density structure consistent with that of the observed Milky Way stellar disk [38], and a total-mass surface density at the solar location that is constrained to match the observations of ref. [47] and is therefore consistent with recent refinements to this value [48]. For the purposes of the present paper, in the context of phase-space behavior on small scales, we note that this three-component model represents an approximate, self-consistent solution to the coupled collisionless Boltzmann and Poisson equations, and that all N-body simulations were performed using many millions of particles with a force-softening resolution equal to one parsec. The disk is stable against large-scale perturbations, and develops only a weak bar in isolated evolution over a timescale of 3-5 Gyr. The disk remains globally static in density and velocity structure outside a few kiloparsecs from the Galactic center, ensuring that the solar neighborhood in this model remains unaffected by artificial baryonic asymmetries related to spiral-arm evolution. Here and throughout, we define the solar neighborhood as a wedge in the radial range 8 < R_⊙ < 9 kpc, extending ±2.5 (1.5) kpc in the vertical (tangential) direction. Wedges of this size are used to define the kinematic properties of the halo and stream dark matter particles because a large number of particles is needed to probe the velocity structure. The relative densities of the dark matter and stream components can be determined within significantly smaller volumes. In each case, we have verified that we derive consistent results when we vary the size of the wedge by ∼1-2 kpc in the tangential direction and up to ∼10 kpc in the vertical direction. The insensitivity of our results follows from the relatively mild spatial variations in stream density exhibited in Figure 1. Following the example of ref. [14], the rest frame of the Earth is taken to include the Galactic rotation (a purely tangential v_c = 220 km/s in the host halo model, and calculated according to the stellar disk's rotation in the halo + disk and related models), as well as the peculiar solar motion (U_⊙, V_⊙, W_⊙) = (11.1, 12.2, 7.3) km/s according to [49], and finally the Earth's orbital motion around the Sun as prescribed by [50]. The position of the impact of the Sagittarius stream on the Galactic disk is somewhat uncertain, so there is some arbitrariness in our identification of the solar neighborhood. We identify the solar neighborhood such that the Sun has approximately the correct position relative to the Sagittarius dwarf remnant, and we return to the uncertainty in the relative position of the Sun with respect to the stream in §3.2. As in [45], we examine two numerical experiments in which the Galactic disk is impacted by cosmologically-realistic progenitors of the Sagittarius dwarf galaxy, resulting in significant spirality and ring-like structure in the host Milky Way.
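As a schematic illustration of the wedge selection and Earth-frame boost just described, consider the following Python sketch; the coordinate conventions, the cosine approximation to the Earth's orbit with its ~0.49 projection factor (the paper uses the full prescription of ref. [50]), and the particle arrays are all simplifying assumptions for illustration.

```python
import numpy as np

def solar_wedge_mask(pos):
    """Select the solar-neighborhood wedge: 8 < R < 9 kpc, |z| < 2.5 kpc,
    |tangential offset| < 1.5 kpc. pos: (N, 3) Galactocentric positions
    (kpc), with the wedge centered on the +x axis."""
    R = np.hypot(pos[:, 0], pos[:, 1])
    return (R > 8.0) & (R < 9.0) & \
           (np.abs(pos[:, 2]) < 2.5) & (np.abs(pos[:, 1]) < 1.5)

def earth_frame_speeds(vel, day, v_c=220.0):
    """Boost (N, 3) Galactocentric velocities (km/s; x radial, y tangential,
    z vertical) into the Earth rest frame: disk rotation v_c, the peculiar
    solar motion (U, V, W) = (11.1, 12.2, 7.3) km/s, and a crude cosine
    approximation to the Earth's 29.8 km/s orbital motion projected onto
    the direction of solar motion, peaking near June 2 (day ~152)."""
    v_orb = 29.8 * 0.49 * np.cos(2.0 * np.pi * (day - 152.5) / 365.25)
    v_earth = np.array([11.1, v_c + 12.2 + v_orb, 7.3])
    return np.linalg.norm(vel - v_earth, axis=1)

# Usage on hypothetical particle arrays `pos` (kpc) and `vel` (km/s):
# speeds = earth_frame_speeds(vel[solar_wedge_mask(pos)], day=152.5)
```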
The dark matter halo masses of the two satellite models roughly bracket the expected range motivated by cosmological abundance-matching techniques [e.g., 51]: "light Sgr" assumes a Sagittarius progenitor mass of M_Sgr = 10^10.5 M_⊙, while "heavy Sgr" has a progenitor mass of M_Sgr = 10^11 M_⊙. This cosmologically-plausible range is supported by the dynamical reconstruction of the progenitor reported in ref. [52], in which the mass immediately prior to tidal disruption is estimated to be at least ∼10^10 M_⊙, a value consistent with the total masses contained within (and truncated at) the two model subhalos' initial Jacobi tidal radii when our simulations begin. As reported in [45], both models are consistent with the approximate characteristics of the Sagittarius tidal debris as mapped by the Two-Micron All Sky Survey and the Sloan Digital Sky Survey, among other efforts [53][54][55]. In particular, these evolved debris distributions reproduce the spatial trends in observed radial velocity and heliocentric distance, although we note that our goal in this work is not to model the Sagittarius stellar stream in precise detail. The massive dark matter component is disrupted on scales much larger than that of the systematic errors involved in both measuring and modeling the luminous debris, such that the general behavior of Sgr dark matter is grossly similar in both experiments, as we now address. The presence of a coherent stream of dark matter in the solar neighborhood is a robust prediction of both Sagittarius models, and would be a feature of any putative progenitor with cosmologically-motivated and observationally-consistent properties as discussed above. The material is stripped from the infalling satellite, and in both models the dark component of the leading tidal arm contributes a non-negligible fraction of dark matter to the solar neighborhood. We emphasize that the contribution of dark matter from Sgr is non-negligible despite the fact that the stellar arm of the tidal debris is displaced from the solar neighborhood by several kiloparsecs in the Milky Way disk plane in our baseline models. This occurs for two reasons. First, in any model in which the Sagittarius progenitor has a cosmologically-motivated progenitor halo, the progenitor halo is significantly larger than the stellar component of the progenitor galaxy and spawns a tidal stream that is significantly more spatially extended than the stellar debris stream. Indeed, the dark matter streams in our models are more than 10 kpc in cross-sectional diameter. Second, the bulk of the dwarf's dark matter is stripped from the Sagittarius progenitor prior to the onset of stellar mass loss. The dark matter and stellar debris material do not follow precisely identical orbits, so a cross-section of the dark matter stream at the plane of the disk is not necessarily concentric with a cross-section of the stellar debris. These features are both evident in Fig. 1, with the offset between the dark matter and stellar arms most prominent in the light Sgr model, since the progenitor in that case has been more heavily disrupted. Our two Sagittarius models shown in Fig. 1 could in principle underestimate the amount of Sgr dark matter near the Sun, because the axis of the leading stellar stream misses the solar neighborhood entirely in each case, with only the outskirts of the dark matter in that tidal arm contributing to a potential WIMP detection signal.
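For a rough sense of scale, the initial Jacobi tidal radius mentioned above can be estimated with the standard point-mass approximation r_J ≈ d·[M_sat/(3 M_host(<d))]^(1/3); the orbital distance and enclosed host mass used below are illustrative assumptions, not values from the simulations.

```python
def jacobi_radius_kpc(d_kpc, m_sat_msun, m_host_enclosed_msun):
    """Approximate Jacobi (tidal) radius of a satellite of mass m_sat
    orbiting at distance d inside a host of enclosed mass M_host(<d)."""
    return d_kpc * (m_sat_msun / (3.0 * m_host_enclosed_msun)) ** (1.0 / 3.0)

# Illustrative: the 10**10.5 Msun "light Sgr" progenitor at an assumed
# d ~ 80 kpc inside an assumed enclosed Milky Way mass of ~8e11 Msun
print(round(jacobi_radius_kpc(80.0, 10**10.5, 8e11), 1))  # ~18.9 kpc
```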
In fact, previous models for the Sagittarius leading arm have indicated that it may impact the disk plane within a few kpc of the Sun [56], although observational explorations of the solar neighborhood have failed to detect any sub-population kinematically similar to streaming substructure with Sagittarius-like geometry within one or two kiloparsecs of the Sun [43,44], and constraints on the descending portion of the leading arm are presently very poor (e.g., [55]; see also the "field of streams" depicted in ref. [41]). We note that the more recent investigation in ref. [57] of the Sgr stream in a triaxial halo suffers from a number of flaws, including the lack of a dark matter component in the progenitor as well as the adoption of a static Galactic potential. In contrast, our modeling techniques and initial conditions self-consistently resolve these issues in a numerical context that correctly treats the evolution of tidal disruption characteristics in the stellar debris. As a further point of distinction, we also emphasize that the preferred model of ref. [57] (in which the leading arm passes ∼10 kpc from the Sun) requires a nearly-oblate Milky Way halo (at odds with the prolate shape preferred by long-lived warps in the atomic-hydrogen gas layer in the Milky Way [58], as well as the Holmberg effect of satellites being clustered around a plane normal to that of the central galaxy [59]) with an unusual orientation compared to the cosmological findings of refs. [60][61][62]. In any case, the orbital shape of the satellite's infall is sufficiently well-constrained that any debris model with a cosmologically-realistic dark subhalo will necessarily result in a wide stream of WIMPs raining into the solar system.

Dark Matter Direct Detection Event Rates

After obtaining the speed distributions characterized by f(v) in the left panel of Figure 2, for each of the four models we calculate differential event rates as a function of recoil energy as in [2]:

dR/dE_r = [ρ_0 σ_χ A² / (2 m_χ μ²)] F²(E_r) g(v_min),

where σ_χ is the WIMP cross-section for scattering on a proton (assuming here that the WIMP couples nearly equally to protons and neutrons), ρ_0 is the WIMP density in the solar neighborhood, μ is the reduced mass of the proton and a WIMP with mass m_χ, A is the atomic mass number of the detector nuclei, F(E_r) is the form factor of nuclear scattering as a function of recoil energy E_r, and the quantity g(v_min) is the integral in velocity space of the velocity distribution divided by the WIMP speed,

g(v_min) = ∫_{v_min}^{∞} [f(v)/v] dv, with v_min = (M_a E_r / 2μ_A²)^{1/2},

where M_a and μ_A are the atomic mass and WIMP-nucleus reduced mass, and v_min is the minimum relative speed necessary for a nuclear recoil to yield an energy E_r (lower WIMP masses require higher values of v_min at fixed recoil energy). In our analysis, we calculate g(v_min) using Earth rest-frame velocity distributions and the local WIMP density ρ_0 directly from our simulations. We adopt the Helm form factor F(E_r) (as in [63]; see [64] for details and fitted parameters). Throughout, we choose an arbitrary value of σ_χ = 10^-40 cm² (= 10^-4 pb) for absolute event rates. Predicted event rates can be scaled linearly for different values of σ_χ. We frame our results in terms of the unquenched nuclear recoil energy E_r, using the standard unit notation keV_r, reminding the reader that the quenching factor for a particular detector material must be used to convert this to electron-equivalent recoil energy in keVee, i.e. E_keVee = q E_keVr^x.
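To make the formulas above concrete, the following self-contained Python sketch evaluates g(v_min), the Helm form factor, and the resulting spectral shape dR/dE_r ∝ A² F²(E_r) g(v_min) for a truncated Maxwellian halo at rest; the Earth-frame boost, the absolute normalization in ρ_0 and σ_χ, and all parameter values are omitted or assumed for illustration (the paper instead uses simulation-derived distributions).

```python
import numpy as np
from scipy.integrate import quad

V0, V_ESC, C_KMS = 220.0, 544.0, 2.998e5   # assumed halo parameters (km/s)

def g_vmin(v_min):
    """g(v_min) = ∫ f(v)/v dv for a Maxwellian speed distribution truncated
    at V_ESC, in the halo rest frame (Earth motion neglected here)."""
    if v_min >= V_ESC:
        return 0.0
    norm = quad(lambda v: v**2 * np.exp(-(v / V0)**2), 0.0, V_ESC)[0]
    return quad(lambda v: v * np.exp(-(v / V0)**2), v_min, V_ESC)[0] / norm

def helm_F2(e_r_kev, a_mass):
    """Squared Helm nuclear form factor, with the conventional parameters
    c = 1.23 A^(1/3) - 0.60 fm, a = 0.52 fm, s = 0.9 fm."""
    m_n = 0.931494 * a_mass                              # nuclear mass, GeV
    q = np.sqrt(2.0 * m_n * e_r_kev * 1e-6) / 0.1973     # momentum, fm^-1
    s, a = 0.9, 0.52
    c = 1.23 * a_mass**(1.0 / 3.0) - 0.60
    r_n = np.sqrt(c * c + (7.0 / 3.0) * np.pi**2 * a * a - 5.0 * s * s)
    x = q * r_n
    j1 = (np.sin(x) - x * np.cos(x)) / x**2              # spherical Bessel j1
    return (3.0 * j1 / x) ** 2 * np.exp(-(q * s) ** 2)

def spectrum_shape(e_r_kev, m_chi_gev=10.0, a_mass=72.6):
    """dR/dE_r up to the overall (rho_0, sigma_chi, unit-conversion) constant."""
    m_n = 0.931494 * a_mass
    mu_a = m_chi_gev * m_n / (m_chi_gev + m_n)   # WIMP-nucleus reduced mass
    v_min = C_KMS * np.sqrt(m_n * e_r_kev * 1e-6 / (2.0 * mu_a**2))  # km/s
    return a_mass**2 * helm_F2(e_r_kev, a_mass) * g_vmin(v_min)

print(spectrum_shape(5.0))   # relative rate at E_r = 5 keV_r on germanium
```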
For the detector examples we investigate in this work, (q, x) ≃ (0.199, 1.12) for germanium [65], and (q, x) ≃ (0.3, 1.0) for sodium [11]. Note that we do not model the finite energy resolution of detectors.

General Deviations from the Standard Halo Model

Our primary results concern the non-trivial amount of dark matter donated to the solar neighborhood by the leading tidal arm of the Sagittarius dwarf galaxy (shown in Figure 1), but prior to this discussion, we briefly itemize pertinent features of the dark matter component of the primary halo, emphasizing deviations from a Maxwellian distribution. Many of the gross features that characterize deviations from a Maxwellian have been pointed out in previous work [16][17][18][19][20]. We note that the particular systems we study have been tuned to equilibrium host halo or halo + disk configurations that represent many of the gross features of the Milky Way galaxy, but that these are not unique solutions for equilibrium models of the Milky Way. Dark matter direct search rates are usually interpreted in the context of a standard halo model (SHM) [14,15] with ρ(r) ∝ r^-2, a local density of ρ_0 = 0.3 GeV/cm³, and a velocity distribution described by a Maxwellian form with a three-dimensional dispersion σ_3D = √(3/2) v_c, where v_c is the circular speed at the solar radius, which is assumed to be identical to the peak speed of the Maxwellian distribution. Fig. 2 shows that the equilibrium models we consider result in direct search rates significantly different from the specific Maxwellian assumed to derive SHM predictions (the shaded bands in the upper right panel). Furthermore, the velocity distributions deviate significantly not only from the SHM, but from the general form of a Maxwellian speed distribution. Fig. 2 shows comparisons between velocity distributions in our equilibrium galaxy models and Maxwellian distributions with the mean and dispersion that best fit the simulation data. The isolated host halo exhibits significant deviations from its best-fit Maxwellian distribution, as shown in Fig. 2, as noted in previous studies, and in agreement with the cosmological N-body results presented by ref. [18]. The same is true for the halo + disk and Sgr-infall models. In both cases, best-fit Maxwellians underestimate the value of g(v_min) by ∼20-40% over a wide range of recoil energies relevant to direct detection experiments. Relative to an isolated halo, the addition of the Galactic disk causes the host halo to contract globally as well as vertically toward the disk mid-plane.

[Figure 3. Examples of predicted direct search event rates. We show the time-averaged differential scattering event rate dR/dE as a function of recoil energy (keV_r, unquenched), for scattering off of germanium (left panels) and sodium (right panels). In each detector, we show example spectra for light WIMPs of mass m_χ = 5 and 10 GeV/c². We show the fractional deviation in the event rate from the host halo model in the center panels, and the fractional deviation of the Sagittarius model event rates from the halo + disk values in the lower panels.]

The global contraction of the host halo in response to a disk shifts the velocity distribution to larger velocities (Fig. 2), resulting in relatively higher event rates (Figure 3). In addition to this gross shift, equilibrium Galaxy models including a disk exhibit speed distributions with significantly broader, flatter peaks.
The kurtosis of the host halo speed distribution is K ≈ -0.3, while the galaxy models have K ≈ -0.5 (by definition K = 0 for a Gaussian distribution, and K ≃ 0.1 for a Maxwellian). Scattering event rates are directly proportional to the local WIMP density ρ_0, which in all cases is higher for simulations including the Milky Way stellar disk than for the dark-matter-only host halo model. The solar neighborhood in this original halo resides in a region with WIMP density ρ_0 ∼ 0.53 GeV/cm³, while the three models including the stellar disk of the Milky Way have ρ_0 ∼ 0.61-0.63 GeV/cm³, a ∼20% increase due to the overall contraction of the halo in response to the Galactic potential and the compression of the halo in the vertical direction due to the planarity of the disk. We note that our models have been tuned to Milky Way structural properties [38] (and also respond to variously-sized satellite impacts in a fashion consistent with that of the quiescent Galactic mass accretion history and the global spiral structure of the disk; see refs. [45,66]), so these values of ρ_0 illustrate the potential error latent in typical choices of this normalization. We present event rates calculated self-consistently with respect to the WIMP density ρ_0 in each model; however, the changes in the speed distribution and, therefore, the integrated quantity g(v_min), represent far more significant alterations to predicted event rates. Figure 2 and Figure 3 show that the shift in the speed distribution increases g(v_min), and thus event rates, by a factor of several at high v_min, a boost that is particularly important for the light WIMP masses m_χ ∼ 5-20 GeV/c² suggested by several recent experiments [10][11][12]. Unsurprisingly, the gross effect of the Galactic component is to deepen the gravitational potential, resulting in generally larger relative speeds of dark matter particles. It is interesting to assess whether this offset can be modeled simply, so that it may be incorporated into future analyses, such as those proposed by ref. [67] among others. A simple proposal would be to employ a model for adiabatic halo contraction [68,69] on the original host halo, which is well described by the standard NFW profile form. Specifying a velocity distribution would still be a challenge, but a simple proposal would be to employ the standard Eddington relation on the contracted halo (see the relevant exercises in ref. [70], for example). Such an approach would certainly not be self-consistent, as both formalisms assume spherical symmetry and the Eddington relation does not yield a unique speed distribution; however, it is interesting to explore such an option as a simple, practical alternative for performing a gross correction to account for contraction without the expense of constructing equilibrium models of the Galaxy and its halo. We find that our models can be described in this way at a similar level of precision as using the best-fit Maxwellian for each model. In particular, we find residuals similar to the halo + disk residuals for the best-fit Maxwellian in Figure 2, with adiabatic contraction modeling resulting in residuals of ∼20-50% for v_min ≲ 300 km/s. At this level of precision, a simple analytical correction may be able to map results from a host halo identified in a dark-matter-only simulation to event rates in the solar neighborhood.
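The best-fit Maxwellian and kurtosis diagnostics quoted above can be reproduced schematically as follows; the stand-in speed array and fit details are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import kurtosis

def maxwellian(v, v0):
    """Normalized Maxwellian speed distribution with most-probable speed v0."""
    return (4.0 / np.sqrt(np.pi)) * v**2 / v0**3 * np.exp(-(v / v0) ** 2)

# Stand-in for wedge particle speeds (km/s); in practice these would come
# from the simulation snapshot
speeds = np.random.default_rng(0).normal(330.0, 90.0, 20000).clip(1.0, 600.0)

hist, edges = np.histogram(speeds, bins=60, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
(v0_fit,), _ = curve_fit(maxwellian, centers, hist, p0=[250.0])

k_excess = kurtosis(speeds, fisher=True)  # 0 for a Gaussian, ~0.1 Maxwellian
print(f"best-fit v0 = {v0_fit:.0f} km/s, excess kurtosis K = {k_excess:.2f}")
```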
An even more parsimonious exercise recovers the halo/galaxy speed distribution by scaling each particle's velocity in the host halo by an amount equal to the increase in the distribution's peak speed; for our models, this simple adjustment produces event rates within ∼ 5% of those yielded by the halo + disk analysis.

Dark Matter from Sagittarius at Earth

Our most novel results pertain to the effects of the Sagittarius debris on direct search rates. Before exploring the influence of the Sagittarius tidal debris in detail, several points are worth reiterating here. Our Sagittarius models are cosmologically motivated and are designed to bracket the range of halo masses that a Sgr-like galaxy would be expected to have as it merged with the Milky Way. Both Sagittarius models produce debris streams that are in broad agreement with the known morphological and kinematical properties of the observed Sagittarius stream stars. Moreover, as noted already in Fig. 1, in these self-consistent models of Sagittarius evolution, the dark matter stream is significantly wider spatially than the stellar stream, spanning 10 kpc in the direction transverse to the stream. Lastly, the dark matter and stellar debris streams are not necessarily coincident; in fact, at the point where the Sagittarius stream penetrates the disk closest to the solar neighborhood, the dark matter and stellar streams are not coaxial. The offset between the dark matter and stellar streams stems from the fact that the progenitor Sagittarius dark matter halo is significantly more extended than the stellar component of the progenitor dwarf galaxy, so that dark matter is typically liberated prior to stellar material. Consequently, the dark matter and stellar streams do not follow the same orbits. These considerations suggest that the dark matter component of the Sagittarius stream may well be a very important contributor to scattering in earthbound direct detection experiments even if the stellar stream penetrates the Milky Way disk more than two kpc away from the Sun, as is currently suspected (for example, in the modeling of ref. [57], which incorporates the general observational results of [42, 55], among others). These broad points represent an important addition to the literature on the influence of Sagittarius on dark matter direct detection experiments.

Figure 4 (lower panels): The dependence of the peak day-number on the minimum velocity v_min, with equivalent recoil energies for germanium- and sodium-based detectors plotted on alternate upper axes for m_χ = 10 GeV/c². Note that the peak day-number shifts phase to Northern winter for small unquenched recoil energies E_r ≲ 1 (2) keV_r for germanium (sodium), corresponding to v_min ≃ 0.89 v_c, in agreement with ref. [73]. In the lower right panel, the shaded regions represent the ±1σ ranges in the modulation peaks determined by DAMA/LIBRA for scattering on sodium, i.e., peak day-numbers of 136, 142, and 146 ± 7 for the recoil energy bins E_r = 2-4, 5, 6 keVee [11]. Where present, thin colored lines represent the maximum possible signal induced by each Sgr model, as discussed in the text.

The overall fraction of dark matter in the solar neighborhood contributed by the Sagittarius stream is relatively small in all of our models, varying from ∼ 1-2% of ρ_0, and may reasonably be as high as ∼ 5% of ρ_0 if ρ_0 = 0.3 GeV/cm³ rather than the somewhat higher values in our models. Consequently, integrated event rates are altered by only relatively small amounts by the stream.
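To see why a percent-level cold stream can still matter, consider the toy Python estimate below (our illustration; the stream speed, width, and density fraction are assumed round numbers, and the Earth-frame boost is neglected). A component contributing only 2% of the local density, but concentrated near 450 km/s, appreciably boosts the halo integral at v_min near the stream speed, where the smooth halo's own support is dying off:

```python
import numpy as np

rng = np.random.default_rng(2)
v_c, v_stream, f_stream = 220.0, 450.0, 0.02   # assumed values [km/s], density fraction

halo = np.linalg.norm(rng.normal(0.0, v_c / np.sqrt(2), size=(400_000, 3)), axis=1)
stream = rng.normal(v_stream, 20.0, size=400_000)   # cold stream, 20 km/s spread (assumed)

def g_vmin(speeds, v_min):
    return np.where(speeds > v_min, 1.0 / speeds, 0.0).mean()

# The boost is negligible at low v_min and largest near the stream speed.
for v_min in (200.0, 350.0, 420.0, 470.0):
    g_halo = g_vmin(halo, v_min)
    g_total = (1 - f_stream) * g_halo + f_stream * g_vmin(stream, v_min)
    print(f"v_min={v_min:3.0f} km/s: fractional boost = {100 * (g_total / g_halo - 1):6.1f}%")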
Figure 2 and Figure 3 show that the stream debris can alter event rates by up to ∼ 10-20% at relative speeds greater than the typical relative stream speed, v_min ≳ 400 km/s (if the local density from the host halo is as low as ρ_0 = 0.3 GeV/cm³, as in the SHM, this boost can be as large as ∼ 25%). Note that the Sagittarius debris gives a relative enhancement to the speed distribution at speeds near v ∼ 400-500 km/s in Fig. 2. The best-fit Maxwellian is a better description of the models that include Sagittarius debris than it is of the halo + disk models because the high-speed particles in light and heavy Sgr influence the fitted kurtosis sufficiently that some of the halo's natural non-Maxwellianity is accounted for more closely. Rather than representing large modifications to the overall annually-averaged event rate, the speed distribution features around 400-500 km/s due to the Sagittarius debris represent potentially interesting peculiarities that may be explored with the annual modulation of the event rate by current-generation experiments, as well as by directional detection efforts addressing the North Galactic Pole (latitude b ∼ +90° with respect to the solar neighborhood) in the long-term future. The stripped dark matter from Sgr falls coherently toward the solar system from the North Galactic Pole at a speed of order ∼ 400-500 km/s. The speed relative to the Earth peaks during Northern winter, nearly opposite in phase to the relative speed between the Earth and the dark matter in the primary host halo of the Milky Way, due to the geometry of the Sgr orbit with respect to the Milky Way disk plane.

We present results regarding the annual modulation of dark matter direct search rates in Fig. 4. We show the amplitude as the difference between the maximum and minimum event rates achieved within a yearly cycle, divided by the sum of the maximum and minimum rates. If the modulation is a simple sinusoid superimposed upon a constant background, this yields the amplitude of the sinusoid. The amplitude of the annual modulation signal is generically an increasing function of v_min because the number of particles in the tail of the speed distribution is very sensitive to small shifts in the central position of the speed distribution. The SHM predicts a much shallower rise in the fractional modulation amplitude at relatively high speeds than that yielded by all four of our models, as shown in the upper right panel of Fig. 4. Each of the models containing a stellar disk results in a fractional amplitude that is more than 20% lower than that of the host halo model alone. The Sagittarius debris alters predictions for the amplitude of the annual modulation markedly. In particular, both of our fiducial models of Sagittarius (dashed lines) exhibit an annual modulation amplitude as much as ∼ 20-30% less than the modulation amplitude in the Galaxy models with no Sagittarius debris (and a factor of ∼ 2 smaller than in the host halo) in the range 400 km/s ≲ v_min ≲ 500 km/s. The modulation amplitude is reduced in the Sagittarius models because the dark matter stream of Sagittarius rains down upon the Galactic disk from the North Galactic direction, so that this component is distinctly out of phase with the background "wind" of dark matter particles from the primary Galactic halo [21-24].
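The amplitude definition above is easy to pin down in code. This small sketch (ours, with an arbitrary toy rate) confirms that, for a sinusoidal modulation on a constant background, the (max - min)/(max + min) statistic recovers the sinusoid's fractional amplitude and the peak day:

```python
import numpy as np

days = np.arange(1, 366)
a_true, t_peak = 0.05, 152.5          # toy 5% modulation peaking on day 152.5
rate = 1.0 * (1.0 + a_true * np.cos(2 * np.pi * (days - t_peak) / 365.25))

amp = (rate.max() - rate.min()) / (rate.max() + rate.min())
print(f"recovered amplitude = {amp:.3f}, peak day = {days[rate.argmax()]}")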
As we have alluded to previously, the location of the impact of the leading stellar stream of the Sagittarius dwarf on the Milky Way disk relative to the solar neighborhood remains uncertain, partly because high-latitude data must be extrapolated to the disk plane in order to estimate the impact position [55], and partly due to uncertainty in the Sun's position. Consequently, we have estimated the possible range of annual modulation effects that may be attributable to the Sagittarius debris by artificially shifting the position of the Sagittarius impact with respect to the Sun in our models by amounts consistent with observational constraints on the relative positions of the Sagittarius impact, the disk, and the Sun. As shown in the lower panels of Figure 1, the solar neighborhood in the light Sgr model is serendipitously near the peak of the Sagittarius dark matter density (despite the stellar stream being several kpc away, because the central axes of the dark and luminous tidal arms are not coincident), while the heavy Sgr model's solar position is ∼ 2-3 kpc farther from the Galactic Center than the axis of the dark stream. Our models suggest that if the Sun is ∼ 10 kpc or more from the center of the stellar stream impact on the Galactic plane, the influence of Sagittarius on direct search experiments may be quite small; however, current modeling and observational constraints indicate that the leading arm is probably significantly closer to the solar neighborhood [55]. To probe this uncertainty, we estimate a range of possible Sgr-related effects by taking the maximal local dark matter density commensurate with observational determinations of the stellar stream position. To a good approximation, the maximal influence of Sagittarius occurs when the leading dark matter arm falls directly onto the solar position. This can be achieved in both models even when the peak density of the stellar arm is more than a kiloparsec from the Sun (see Fig. 1). Moreover, we note that although observations indicate that the solar neighborhood is not significantly contaminated by former Sagittarius stars [43, 44], kinematically-streaming sub-populations that may be associated with Sagittarius would contribute only on the order of ∼ 1% to the local stellar density, making them very difficult to rule out on heliocentric-distance scales larger than a kiloparsec. We refer to the cases in which the relative position of the Sun maximizes the annual modulation influence of Sagittarius as our maximal models for the heavy Sgr and light Sgr simulations, respectively. The maximal Sagittarius cases are shown by thin, solid lines in the upper right panel of Fig. 4. The additional decrease in the annual modulation amplitude is significant. In particular, the annual modulation amplitude can be decreased by as much as a factor of two in the maximal Sagittarius models compared to the halo + disk models in the speed range 400 km/s ≲ v_min ≲ 500 km/s. The Sagittarius stream dark matter has an important influence on the phase of the annual modulation of scattering rates. The phase of the annual modulation as a function of v_min, specified by the day of the peak signal (with day-number 1 set to noon on December 31st in the J2000.0 epoch), is shown in both lower panels of Fig. 4. The peak days in the host halo and halo + disk models are close to the canonical value of day 152.5 for v_min ≳ 190 km/s.
At lower values of v_min, the phase switches to a peak day in the Northern winter, close to day 335 (or equivalently day -30, as shown in Fig. 4). The shift in the phase of the oscillation at low energies occurs because the shift to higher relative speeds, caused by the greater relative speed of the Earth compared to the primary host halo dark matter particles during Northern summer, augments the high-speed portion of the speed distribution but depletes the low-speed portion, so that low-energy scattering rates are reduced during Northern summer. Conversely, low-energy scattering rates are enhanced during Northern winter. For the SHM, the peak day at high v_min occurs on day 152.5 (June 2nd), while the peak day at low v_min occurs on day 335 (December 1st). The shift of the peak day from Northern winter to Northern summer occurs at v_min ≃ 0.89 v_c in the SHM [73]. All of our models show a similar phase reversal in the peak day, though the value of v_min at which the shift occurs varies slightly with respect to the SHM value, as we discuss shortly in the context of the Sagittarius models.

Figure 5. The phase reversal of the annual modulation signal at relatively low recoil energies, for each of our four models. Note that the peak occurs in Northern winter for v_min ≲ 0.89 v_c, as expected [73], for the host halo and halo + disk models. The Sagittarius models reverse phase at lower v_min. This discrepancy could potentially manifest itself following a large number of scattering events observed by the next generation of direct detection experiments, which may feature sub-keV recoil energy thresholds and improved energy resolution. As in other figures, equivalent recoil energies for germanium- and sodium-based detectors are plotted on alternate upper axes for m_χ = 10 GeV/c². Heavier WIMPs, near the canonical mass range of m_χ ∼ 10²-10³ GeV, imply higher recoil energies than those shown on the upper axes here.

The Sagittarius streams in our simulations alter the annual modulation signals in non-negligible ways. The differences between the halo + disk or host halo models as compared to the Sagittarius models depend upon v_min. The most significant deviations are for v_min ≲ 250 km/s and for the range of speeds typical of the relative speed of the Earth with respect to the Sagittarius stream, 400 km/s ≲ v_min ≲ 500 km/s. In the higher speed range, the maximal shifts in peak day bring the peak between 5 and 12 days earlier in the year for the fiducial heavy Sgr and light Sgr models, respectively. In the maximal models, the shift can be larger, bringing the peak day as much as 12 and 18 days earlier in narrow energy ranges in the heavy Sgr and light Sgr models, respectively. The shifts in peak day are comparable in magnitude to the errors on the DAMA/LIBRA measurement of annual modulation [11]. Moreover, the shifts in our models bring the peak day earlier by amounts, and with an energy dependence, comparable to the offset between the SHM peak day and the peak day quoted by the DAMA/LIBRA experiment, for which the signal peaks at day-numbers 136, 142, and 146 ± 7 in the recoil-energy bins 2-4, 5, and 6 keVee, as indicated by the shaded regions in the lower right panel of Figure 4.
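The phase-reversal mechanics described above are easy to reproduce in a toy Monte Carlo. The sketch below (ours; it assumes a simplified geometry with the Earth's velocity modulation projected onto a single axis, v_c = 220 km/s, a 15 km/s orbital projection, and approximate day-of-year numbering) shows the peak day flipping from Northern winter to Northern summer as v_min crosses roughly 0.89 v_c ≈ 196 km/s. Because the same velocity sample is reused for every day, Monte Carlo noise largely cancels in the day-to-day comparison:

```python
import numpy as np

rng = np.random.default_rng(3)
v_c, v_earth = 220.0, 15.0            # assumed peak speed; Earth's orbital projection [km/s]
vel = rng.normal(0.0, v_c / np.sqrt(2), size=(200_000, 3))  # halo-frame velocities

days = np.arange(0, 365, 5)
t_june2 = 152.5                       # observer speed maximal near June 2

def g_vmin_on_day(v_min, day):
    # Boost along z: observer speed = v_c + v_earth * cos(phase) (sketch geometry).
    v_obs = v_c + v_earth * np.cos(2 * np.pi * (day - t_june2) / 365.25)
    speed = np.sqrt(vel[:, 0]**2 + vel[:, 1]**2 + (vel[:, 2] - v_obs)**2)
    return np.where(speed > v_min, 1.0 / speed, 0.0).mean()

for v_min in (50.0, 150.0, 250.0, 350.0):
    g_series = np.array([g_vmin_on_day(v_min, d) for d in days])
    print(f"v_min = {v_min:3.0f} km/s -> peak day ~ {days[g_series.argmax()]}")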
Additionally, the energy dependence of the annual modulation amplitude in our models may partially mitigate the discrepancy between the low amplitude observed by DAMA/LIBRA at energies 2-6 keVee (A ∼ 2% of the mean rate of 0.0116 cpd/kg/keVee) and the somewhat higher amplitude A ∼ 13% quoted by CoGeNT at energies 0.5-3.0 keVee. Conversely, the peak-day shift induced by the Sgr debris becomes indistinguishable from noise if the solar position lies as much as ∼ 5 kpc from the center of the stellar debris arm, a distance somewhat greater than contemporary estimates. It is tempting to suggest that experiments such as DAMA/LIBRA or CoGeNT may be probing the stream of the Sagittarius galaxy, but any such statement is subject to numerous important caveats. First, the DAMA/LIBRA results are controversial and are challenged by other experiments. Second, the DAMA/LIBRA peak-day estimates deviate from the SHM (and halo + disk model) predictions by only 2σ or less. Another caveat to any such statement strikes at a limitation of our simulation analysis. As we have already mentioned, the position of the Sagittarius stream impact on the disk remains uncertain. Further, the angle at which the Sagittarius stream impacts the disk is also uncertain. Finally, despite the fact that these simulations are the most cosmologically complete of such efforts, having accounted for realistic dark matter halos in each galaxy, some small discrepancies remain between our results and observations of the sky position and radial velocity of the stellar debris. Unfortunately, a parameter search using numerical simulations is not yet practicable. As an example of the concrete effects of this uncertainty on our predictions, the amount of the phase shift induced by the Sagittarius stream, and indeed whether the shift is toward earlier in Northern spring (as shown in Fig. 4) or, perhaps, later in Northern summer, is sensitive to the angle at which Sagittarius impacts the disk. Contemporary data and techniques do not suffice to specify this impact direction, so it is not possible to interpret our results as a firm statement that the Sagittarius stream causes the peak day to occur earlier by some precise amount (the amplitude of the annual modulation is not subject to this caveat). A proper interpretation of our simulation results is that we have shown that it is possible, in self-consistent models designed to mimic the Milky Way with the Sagittarius impact, for the Sagittarius debris to induce significant, energy-dependent shifts in the annual modulation phase even when the stellar stream of the Sagittarius debris remains several kiloparsecs from the Sun.

We now turn to the phase reversal at low velocities/recoil energies. As we have already mentioned, the phase reversal of the annual modulation at low energies is a well-understood effect, occurring at v_min ≃ 0.89 v_c in the SHM [73] and at similar values in our host halo and halo + disk models. As depicted in Figure 5, the presence of Sgr dark matter in the solar neighborhood affects not only the phase of the overall modulation, but also the value of v_min, and thus the recoil energy, at which the phase reversal occurs. In particular, the Sagittarius models reverse phase at speeds ∼ 10-15 km/s lower than the host halo and halo + disk models. For relatively low-mass WIMPs with m_χ = 10 GeV, this corresponds to a shift in recoil energy at the phase-reversal point of approximately ΔE_r ∼ 0.1-0.2 keV_r.
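The mapping between a shift in v_min and a shift in recoil energy follows from elastic-scattering kinematics, E_r = 2 μ² v_min² / m_N, with μ the WIMP-nucleus reduced mass. The short Python check below (ours; a natural-abundance germanium target mass is assumed) reproduces the ∼ 0.1-0.2 keV_r figure for m_χ = 10 GeV, and shows the much larger shift at higher WIMP mass discussed next:

```python
AMU = 0.9315                      # GeV per atomic mass unit
C_KMS = 2.998e5                   # speed of light [km/s]

def e_recoil_keV(v_min_kms, m_chi_GeV, m_N_GeV):
    """Elastic-scattering kinematics: E_r = 2 mu^2 v_min^2 / m_N."""
    mu = m_chi_GeV * m_N_GeV / (m_chi_GeV + m_N_GeV)   # reduced mass [GeV]
    beta = v_min_kms / C_KMS
    return 2.0 * mu**2 * beta**2 / m_N_GeV * 1e6       # GeV -> keV

m_Ge = 72.6 * AMU                 # mean germanium nuclear mass
for m_chi in (10.0, 100.0):
    e1 = e_recoil_keV(185.0, m_chi, m_Ge)
    e2 = e_recoil_keV(200.0, m_chi, m_Ge)
    print(f"m_chi = {m_chi:5.1f} GeV: E_r(185 km/s) = {e1:.2f} keV_r, "
          f"shift for +15 km/s = {e2 - e1:.2f} keV_r")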
For larger WIMP masses, in the regime m_χ ≳ 100 GeV/c², this shift in the reversal energy would be larger by a factor of 10 or more. This Sagittarius-induced feature may be probed by future detectors with low energy thresholds and improved energy resolution, and may be one of the distinguishing features of Sagittarius if WIMP astronomy can ever be undertaken in an era with very large direct search event rates (as explored by ref. [67]). Exploring this particular signature is well suited to efforts to develop very low-threshold direct search detectors with greatly improved energy resolution, examples of which include extensions of germanium-based experiments like MAJORANA [74, 75] and advanced threshold-lowering and background-eliminating technologies like those proposed by the CDEX-TEXONO and SuperCDMS/CDMSLite collaborations (as in refs. [76-78], respectively). This avenue of exploration may be particularly fruitful in the future as the local circular speed v_c of the Milky Way is refined by next-generation astrometric surveys, so that a similarly tight constraint can be placed on the phase-reversal energy predicted by the SHM and similar models without Sagittarius debris.

There is a final addendum to our model results that is worth stating explicitly. In our models, the potential interior to the solar position is dominated by stellar mass, rather than dark matter. In the event that the local dark matter density contributed by the primary host halo of the Milky Way (as opposed to the Sagittarius stream) is as low as the SHM value of ρ_0 ∼ 0.3 GeV/cm³, we expect viable models of Sagittarius evolution to remain broadly similar. The implication is that the relative influence of Sagittarius on direct search scattering rates could be yet larger than we have estimated here. Although we have not performed a self-consistent model that results in such parameters for the Milky Way halo, it is interesting to comment on how our results would change in such a scenario. If the local density contributed by the host halo of the Milky Way were as low as ρ_0 ∼ 0.3 GeV/cm³ (rather than roughly twice this value, as in our equilibrium halo/galaxy models), the Sagittarius debris could contribute as much as ∼ 5% of the local dark matter density, and the direct search signatures we have explored would change as follows: the phase shift in the annual modulation signal could be as large as ∼ 20-25 days, compared to the ∼ 10-day shift yielded by our fiducial models, and the fractional amplitude of the annual modulation signal could remain as low as ∼ 5% even at relatively high speeds near v_min ∼ 500 km/s (well below the SHM prediction as well as each of the cases we examine here). In addition, the change in the recoil energy at which the modulation undergoes phase reversal could be as much as twice the fiducial change we show in Fig. 5. These estimates, based on a scenario in which the solar neighborhood is near the peak of the dark matter stream and the local density contributed by the parent halo is as low as ρ_0 = 0.3 GeV/cm³, likely represent the maximum plausible influences that Sagittarius stream material could have on direct search rates without fine-tuning.

Discussion

We have studied predictions for dark matter direct search scattering rates within the context of isolated numerical models of a Milky Way-like system designed to reproduce the basic properties of the Galaxy, including models of the infall, merger, and tidal disruption of the Sagittarius dwarf system.
In modeling such specific features, isolated simulations of this kind complement large-scale cosmological simulations. In accord with previous high-resolution studies of cosmological dark matter halos [16, 18-20], we find that deviations from standard halo model (SHM) assumptions in observationally viable model Milky Way systems can significantly alter direct search rates relative to SHM predictions. In agreement with these studies, equilibrated host halo systems and halo systems in equilibrium with a Galaxy exhibit speed distributions that differ markedly from the Maxwellian form, in particular being significantly platykurtic. Not surprisingly, in all three models that include an equilibrated Galactic stellar disk, we find that the increased relative speeds of dark matter particles, caused by the additional acceleration provided by the disk and the contracted halo, result in significantly larger scattering rates. The precise value of this enhancement can be large (a factor of several) and is energy dependent (Fig. 2 and Fig. 3). Studies based upon N-body realizations of Galaxy-analog halos in cosmological simulations make the implicit assumption that a Galaxy-sized halo in a cosmological numerical experiment involving only dark matter will faithfully reflect the solar neighborhood in the real Galaxy [19, 20]. For practical purposes, mapping cosmological N-body results onto equilibrium models containing a galaxy is non-trivial. We have shown that our equilibrium halo + disk models can be used to predict rates to within 50% by contracting the dark matter halo using standard adiabatic contraction techniques [68-70]. Meanwhile, simply scaling the speed distribution of the N-body host halo by the mean velocity can reproduce the halo/galaxy distribution to within ∼ 5% precision, signifying that dark-matter-only predictions for speed distributions can be mapped to models that include a Milky Way Galaxy by scaling speeds up to the rotation speed in the solar neighborhood. These results are broadly commensurable with previous N-body work, and a significant caveat is that our equilibrium models represent only one possible equilibrium solution for the halo/galaxy system; that solution is not unique and does not result from self-consistent cosmological evolution.

Our most novel results pertain to the influence of Sagittarius debris material on predicted direct search event rates. Our models demonstrate that the Sagittarius stream debris can have an important influence on direct search scattering rates even when the stellar stream of the Sagittarius debris is centered several kiloparsecs from the solar neighborhood, as is thought most likely based on contemporary analyses [42, 55, 57]. The reasons for this are twofold. First, the Sagittarius dark matter stream is significantly broader than the stellar stream in our models. The Sagittarius stream is a non-negligible contribution to the nearby dark matter content over an area many kiloparsecs in diameter in the plane of the Galactic disk (Fig. 1). Second, the Sagittarius stellar and dark matter streams are not spatially coincident in our models, having drifted away from co-axiality during evolution in the Milky Way's tidal field. The peak of the dark matter density contributed by the stream impacts the Galactic disk several kiloparsecs from the peak of the stellar density, and the spread in surface density demonstrated by Fig. 1 indicates that expected event-rate boosts for detection experiments should be important out to this distance, in our adopted formalism. We reiterate an important caveat before continuing: although the spatial and kinematic distributions of Sagittarius stellar debris in our models are generally good matches to the observational properties of that debris, as elaborated in [45], some discrepancies do remain. Moreover, the relative position of the solar neighborhood with respect to the Sagittarius stream's impact point on the Galactic disk is still poorly constrained. The simulations we analyze are among the most complete descriptions of the Sagittarius debris in the literature, and an exhaustive search of the initial parameter space is not computationally feasible, so these uncertainties cannot be explored in detail. A proper interpretation of our results would be that the Sagittarius dark matter debris may give rise to significant signals in direct-search experiments even if the Sagittarius stellar debris is confined to a distance of several kiloparsecs from the solar neighborhood. Tuning the stellar stream more finely would not impact our results, insofar as variation in the Sgr WIMP surface density at the Sun is less important to the event-rate calculation than is the vertical velocity distribution of those WIMPs. The important and general implication of this result is that the stellar stream of Sagittarius being several kiloparsecs from the solar neighborhood does not preclude Sagittarius from having a significant effect on dark matter experiments. Furthermore, near-future constraints on the location of the stellar debris may not suffice to preclude significant dark matter from Sagittarius in the solar neighborhood. The Sagittarius stream gives rise to several important effects on dark matter experiments. The high velocity of the Sagittarius dark matter stream relative to the Earth boosts the rate of high-energy recoil events by ∼ 20% in our models, and perhaps by as much as ∼ 40-45% depending upon the local dark matter density contributed by the primary halo of the Milky Way (Fig. 2 and Fig. 3). Sagittarius reduces the annual modulation amplitude by an energy-dependent factor that may be as large as a factor of two at v_min ∼ 420 km/s (Fig. 4) relative to models with no Sagittarius debris. This energy-dependent suppression could help explain the disparate values found by DAMA/LIBRA (where the modulation amplitude is A ∼ 2% of the mean low-energy rate dR/dE ≃ 0.0116 cpd/kg/keVee in the energy bin 2-6 keVee = 6.7-20 keV_r) and CoGeNT (having an amplitude A ∼ 13% of the mean rate for recoil energies ∼ 0.5-3.0 keVee = 2.3-11.3 keV_r), as well as their discrepancies compared to the amplitudes expected within the SHM [11, 12], because our Sagittarius models result in amplitudes that increase much more sharply with recoil energy in this range than the SHM formalism, as well as the host halo and halo + disk models. The geometry of the Sagittarius impact on the solar system causes the signal from the Sagittarius stream to peak during Northern winter, in agreement with previous studies [21-25] (as well as the general study of debris flows in ref. [79]). Our models of Sagittarius yield recoil-energy-dependent shifts in peak day-number of between ∼ 5 and 25 days earlier in the year than the SHM peak day-number of 152.5.
Both DAMA/LIBRA and CoGeNT have indicated similar behavior, with both experiments finding trends between peak day-number and recoil energy [11, 12]; however, the observational error in the peak remains on the order of a few days, and the uncertainties in simulation programs that aim to model Sagittarius remain significant. Nevertheless, our models suggest such a shift is reasonable given contemporary knowledge of Sagittarius debris structure. The Sagittarius streams in our models also cause the phase-reversal energy to be lowered (Fig. 5). Exploiting this signature in particular, to help identify dark matter or to use dark matter searches to perform WIMP astronomy, will benefit greatly from future low-threshold detectors with improved energy resolution, such as those being considered by refs. [76-78]. There have been a number of previous studies of the influence of Sagittarius debris on direct dark matter searches, including refs. [21-26]. By and large, these studies have utilized the SHM formalism and contrived approximations for the local distribution of Sagittarius debris based on observational limits on the Sagittarius stellar debris contribution to the solar neighborhood. Further, the prospects for Sagittarius to significantly influence direct search experiments have been challenged, as contemporary data suggest that the stellar component of the Sagittarius debris is centered a few kpc from the solar neighborhood [55, 57] (and here we reiterate that although refs. [43, 44] find no evidence of large coherent sub-populations in the solar vicinity, percent-level debris flows are presently unconstrained beyond one or two kiloparsecs from the Sun). Our study complements previous work in many respects. Most importantly, we have analyzed a self-consistent simulation of Sagittarius dwarf galaxy accretion involving a realistic dark matter component, doing so in a controlled and isolated simulation such that event-rate implications can be clearly discerned. Crucially, we have shown that the proximity of the Sun to the stellar stream alone cannot necessarily be used as an indicator of the local dark matter contribution from Sagittarius, as the dark matter flow accompanying the luminous debris is much more widely spread across the Milky Way disk and not necessarily coincident with the stellar material. Comparing to previous work in more detail, we present the first analysis of the influence of Sagittarius dark matter based on self-consistent models of the Sagittarius infall that describe the observed Sagittarius debris within observational uncertainty. Unlike previous studies, we emphasize that the Sagittarius debris induces an increase in the event rate of ∼ 10-20% in our fiducial models (as much as ∼ 40% in our maximal models), and that annual modulation fractional amplitudes are diminished by ∼ 20-50% in the presence of that debris at Earth. The phase shifts in the annual modulation that we find are somewhat larger than those presented by refs. [21, 22] (for comparable dark matter density contributions), due to the relatively higher speed attained by Sgr particles in our modeling. We find qualitatively similar phase behaviors to those of refs. [23, 24] near the reversal point at which the modulation amplitude changes sign.
Generally, our work agrees well with past commentary on the detectability of WIMP streams in current- and next-generation detection efforts [25-27], and specifically we identify the Sagittarius dark matter stream as an achievable target for direct-search science over the next decade. The effects of Sagittarius that we describe in this manuscript may be relevant to dark matter searches generally. However, if the dark matter is indeed relatively light (m_χ ≲ 20 GeV, as we have assumed in our illustrative examples), the effects of Sagittarius debris on scattering rates are particularly important because of the large relative speed of the debris stream at the Earth. Future direct search experiments may probe such signatures, though a future generation of low-threshold detectors with fine energy resolution [76-78, 80] may be necessary in the event that the dark matter mass falls in this lower range. In any case, our analysis suggests that the effects of Sagittarius debris on direct search experiments will not be negligible given contemporary limits on the position of the Sagittarius stellar stream. In the far future, the features induced by Sagittarius debris may be among the early measurements to be made in an era of WIMP astronomy with large direct search rates [67].
Synthesis, characterization and photoluminescence properties of group I-III-VI2 CuInS2 nanocrystals

We report the synthesis, characterization, and photoluminescence (PL) properties of colloidal I-III-VI2 CuInS2 and CuInS2/ZnS nanocrystals (NCs). The absorption shoulder and PL bands of the NCs are located at higher energy than the band gap energy of the bulk crystals due to a quantum-confinement effect. The PL band has a relatively large Stokes shift, broad linewidth, and long decay time, which suggests that the PL originates from recombination of confined excitons associated with donor(s) and/or acceptor(s). We found that the quantum yield of the PL depends strongly on the photon energy of the excitation light and that it reaches 40-50% under resonant excitation at the energy positions corresponding to the absorption shoulder; this maximum under resonant excitation is comparable to that of CdSe/ZnS NCs. These results suggest that non-radiative recombination tends to occur when photo-generated carriers relax from higher to lower states, or that there is a pathway from higher states to something like a dark state. Detailed properties and possible dynamics are described. We also present preliminary results on the PL properties of single NCs. There exist highly luminescent NCs exhibiting so-called PL blinking, as with II-VI NCs, while the others are dark NCs.

Introduction

In recent years, colloidal II-VI CdSe/ZnS nanocrystals (NCs) [1,2] have been used in the fields of chemistry, biology, and life science as fluorescent labels [3] for proteins, DNA, and cells, due to their high photostability, wide absorption bands, and narrow and tunable photoluminescence (PL). The NCs possess sufficiently high PL quantum yields (> 50% at room temperature), and so the PL from single NCs can also be observed by using a standard microscope system with a high-sensitivity CCD camera. It has been reported that single NCs exhibit some remarkable PL properties, e.g., PL blinking [4,5] and polarized PL [6,7]. Single NCs are expected to be used as new types of functionalized probes by utilizing these remarkable properties. For example, PL blinking, i.e., on-off intermittency of the PL intensity, is quite sensitive to the state of the NC surface [8], and thus it can be used as a sensitive probe of the nano-environment. PL polarization measurements of single CdSe/ZnS NCs enable us to detect the direction of the c-axis of each NC because the NCs possess two-dimensional transition dipoles [6,7]. Thus real-time measurements of the PL polarization make it possible to investigate the nanoscale dynamics of labeled molecules, e.g., torsional and rotational motion. While the PL properties of the NCs are quite attractive, the NCs have a problem, especially for biological use: they contain the toxic element Cd. Group I-III-VI2 NCs are expected to be one of the candidate Cd-free NC materials. These NCs are also fascinating because they span a significantly large range of band gap energies, from 1.23 eV (AgInSe2) to 3.49 eV (CuAlS2). In particular, NCs with band gap energies in the near-infrared region have a great advantage not only for biological use but also for solar-cell applications.
However, the PL properties of these NCs have not been sufficiently elucidated; in particular, there has been no investigation of the PL properties of single NCs as far as we know at present. One of the reasons is that the synthesis of high-quality NCs is not so easy in the case of I-III-VI2 NCs. Recently, we have synthesized relatively high-quality I-III-VI2 CuInS2 NCs based on previous reports [9,10], and have formed a ZnS shell around the CuInS2 NCs. In this Letter, we first describe the synthesis and characterization of the CuInS2 NCs and CuInS2/ZnS core-shell NCs, and then present detailed PL properties of the NCs, including new results on single NCs.

Synthesis. The synthesis method of the CuInS2 core NCs is based on those in refs. [10,11]. First, 24.4 mg (0.2 mmol Cu) copper acetate, 58.2 mg (0.2 mmol In) indium acetate, and 0.5 mL dodecanethiol (2 mmol S) are mixed with 10 mL ODE in a 50 mL two-necked flask, and then 0.13 mL oleic acid (0.4 mmol) is added to prevent aggregation of the NCs during the synthesis. The mixture is degassed under nitrogen gas flow for 30 minutes, and is subsequently heated to 240 °C at a rate of 8 °C per minute. The temperature is then kept at 240 °C. In this paper, the duration at 240 °C is defined as the reaction time. The heating and holding result in nucleation and successive growth of the core NCs. The ZnS shells are formed around the CuInS2 cores as follows. First, 0.13 mL Me2Zn/TOP solution (0.06 mmol Zn) is mixed with 1 mL ODE, and the mixture is charged into a syringe. The temperature of the flask containing the NC solution is cooled to 180 °C. Next, the mixed solution in the syringe is added dropwise at intervals of 30 seconds. After the addition, the temperature is cooled to 100 °C and held there for 2 hours for annealing. It is noted that using the purchased pre-diluted Me2Zn/TOP solution reduces the risk due to the pyrophoric nature of Me2Zn during the synthesis. The prepared NCs are purified by precipitation with 5 mL acetone to rinse away residual dodecanethiol, oleic acid, and ODE, and finally they are dispersed in anhydrous toluene.

Measurements. Absorption spectra were measured with a spectrometer (V-570, Jasco). PL and excitation spectra were recorded using a modified spectrophotometer (FP-6500, Jasco). Transmission electron microscopy (TEM) images were taken with a transmission electron microscope (HF-2200, Hitachi). PL decay profiles were measured by a standard time-correlated single-photon-counting method. A pulsed diode laser (LDH-PC405, PicoQuant) was used as the excitation source; the light wavelength and pulse duration were 407.5 nm and 50 ps, respectively. We used a single-photon-counting module (SPCM-AQRH, PerkinElmer), a time-to-amplitude converter (Ortec 567, EG&G), and a multi-channel analyser (Ortec TRUMP-PCI-2K, EG&G) for the time-resolved measurements. The time resolution of the system is 600 ps. PL imaging of single NCs was performed under a total internal reflection fluorescence (TIRF) excitation configuration (see Fig. 4(a)) [12] using an inverted optical microscope (TE-2000U, Nikon) with a ×100, N.A. = 0.90 dry objective. We used a 2D-CCD detector (Cascade-512B, Roper Scientific) for image acquisition; series of continuous images are obtained almost seamlessly (transfer time: 1.024 ms/frame) with an exposure time of 100 ms per frame. All the measurements were performed at room temperature.

Results and Discussion

CuInS2 NCs.
First, we characterize the optical and structural properties of the as-prepared CuInS2 NCs. Figure 1(a) shows absorption spectra of the NCs dispersed in toluene. Each spectrum is normalized at 3.0 eV. The reaction times of the NCs were 5, 10, 15, 30, 60, and 120 minutes, respectively. A broad shoulder appears on the low-energy side of the spectra, ranging from 1.7 to 2.2 eV. The energy position of the shoulder shifts to the red until 30 min, and then hardly shifts. The inset of Fig. 1(a) shows a TEM image of CuInS2 NCs with a reaction time of 30 min. Individual particles show clear lattice patterns whose lattice spacing equals that of bulk CuInS2 crystals. This confirms that the particles are crystallized in the chalcopyrite structure. The crystal shape looks angular, but it can be regarded as roughly spherical. The average diameter of the NCs was estimated to be 5.9 ± 1.4 nm. The band gap energy due to the lowest state of the confined exciton in a spherical NC is expressed as [13]

E = E_g + (ħ²π²/2R²)(1/m_e* + 1/m_h*) − 1.786 e²/(4πεR),   (1)

where E_g, ħ, R, m_e*, m_h*, e, and ε are the band gap energy of the bulk CuInS2 crystal (E_g = 1.54 eV), the Dirac constant, the radius of the NC, the effective mass of the electron (m_e* = 0.16 m_0), the effective mass of the hole (m_h* = 1.3 m_0), the elementary charge, and the dielectric constant of the NC (ε = 11 ε_0), respectively. It is noted that donors and acceptors are ignored in this model. For an average diameter of 5.9 ± 1.4 nm, E is calculated to be 1.68-1.96 eV; the calculated value seems slightly small but roughly equals the energy of the absorption shoulder. The broadness of the absorption shoulder can be associated with the size and shape distributions of the NCs. It is also confirmed from a series of TEM observations that the particles grow monotonically until 30 min, and that after 30 min the NCs hardly grow but their shape transforms from angular sphere to spheroid. Thus the redshift until 30 min in Fig. 1(a) can be explained basically in terms of the quantum size effect. As for the lattice pattern of the NCs, it is ambiguous within reaction times of 20 min, after which it quickly becomes clear. This result suggests that the particles are rather amorphous at the first stage, and then in turn become crystalline.

Figure 1(b) shows the corresponding PL spectra of the CuInS2 NCs. The photon energy of the excitation light was 2.58 eV. Each NC solution was diluted so that its absorbance at 2.58 eV equalled 0.04. The PL intensity increases gradually, with a redshift, as the reaction time increases until 30 min, and then the intensity decreases. Here, we describe the PL properties based on the results of PL decay measurements. PL decay profiles of the NCs with reaction times of 5 and 30 min are shown as black and red lines in the inset of Fig. 1(b), respectively. The main component of the profiles (green lines) has a time constant of several hundred nanoseconds; this is one order of magnitude longer than that of CdSe NCs (15-30 ns). The long decay time, together with the broad PL band and large Stokes shift, suggests that the PL dynamics of the confined excitons in the CuInS2 NCs are associated with donors and/or acceptors in the NCs, as in bulk CuInS2 crystals. The intensity ratio of the main component increases monotonically, with a corresponding decrease of a fast component (tens of nanoseconds), until 30 min.
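Before moving on, a quick numerical cross-check of Eq. (1) is instructive. The short Python sketch below (our illustration, using standard SI constants) evaluates the confinement and Coulomb terms over the TEM diameter range and reproduces the 1.68-1.96 eV window quoted above to within about 0.01 eV:

```python
import numpy as np

# Physical constants (SI)
HBAR = 1.0546e-34      # Dirac constant [J s]
M0   = 9.109e-31       # free electron mass [kg]
E_CH = 1.602e-19       # elementary charge [C]
EPS0 = 8.854e-12       # vacuum permittivity [F/m]

E_g = 1.54             # bulk CuInS2 band gap [eV]
m_e = 0.16 * M0        # effective electron mass
m_h = 1.30 * M0        # effective hole mass
eps = 11.0 * EPS0      # dielectric constant of the NC

def brus_energy_eV(radius_m):
    """Eq. (1): confinement blueshift minus the exciton Coulomb correction."""
    confinement = (HBAR * np.pi) ** 2 / (2 * radius_m ** 2) * (1 / m_e + 1 / m_h)
    coulomb = 1.786 * E_CH ** 2 / (4 * np.pi * eps * radius_m)
    return E_g + (confinement - coulomb) / E_CH

for d_nm in (5.9 - 1.4, 5.9, 5.9 + 1.4):     # TEM diameter range
    print(f"d = {d_nm:.1f} nm -> E = {brus_energy_eV(d_nm / 2 * 1e-9):.2f} eV")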
Assuming that the fast component arises from a competition between radiative and non-radiative recombination processes of carriers, this result can be described by a reduction of non-radiative recombination center(s) inside the NCs and/or on the NC surface. This interpretation is also consistent with the results in Fig. 1(b). On the other hand, the PL decay profiles hardly change after 30 min, though the PL intensity decreases drastically. This may suggest that part of the NCs become dark ones, i.e., NCs that emit fewer photons for some reason. To investigate further details of the optical properties of the ensemble CuInS2 NCs, we measured the excitation-energy (E_ex) dependence of the PL spectra using the NCs with a reaction time of 30 min, as shown in Fig. 2. The excitation energies corresponding to the individual spectra are indicated on the left side and also by arrows in Fig. 2. The PL band shifts to the low-energy side and the PL bandwidth decreases slightly as the excitation energy decreases in the resonant region E_ex < 2.0 eV, while the PL spectra show no dependence on excitation energy in the nonresonant region 2.0 eV < E_ex < 2.9 eV. When the NCs are excited at the edge energy of the absorption shoulder (E_ex = 1.8 eV), the PL band shifts by 47 meV and the bandwidth is narrowed by 20% as compared to the spectra at E_ex > 2.0 eV. The shift and narrowing suggest that the excitons in the lowest state described by Eq. (1) are generated size-selectively in the region E_ex < 2.0 eV, as in the case of CdSe NCs. It is considered that almost all the NCs can be excited in the region E_ex > 2.0 eV by generating not only the excitons in the lowest state but also those in the second or higher states.

CuInS2/ZnS core-shell NCs. Next, we describe the optical properties of the CuInS2/ZnS core-shell NCs. It is known that the PL quantum yield of CdSe NCs increases upon formation of a ZnS shell [11]. The band gap of CdSe lies energetically within that of ZnS, so both electrons and holes are confined mainly inside the CdSe core. This confinement can prevent the carriers from approaching non-radiative centers on the NC surface. A similar effect can be expected for the CuInS2 NCs because their band gap also lies within that of ZnS. Red, green, and blue lines in Fig. 3(a) show PL spectra of CuInS2 NCs and of CuInS2/ZnS NCs without and with 1 hour of heat-annealing, respectively. The PL intensity roughly doubles upon formation of the ZnS shell, as expected. The PL quantum yields were estimated to be 4.8, 6.6, and 7.7% at E_ex = 2.34 eV (530 nm), respectively, and these values were maintained for at least two weeks. It is noted that we also tried to form the ZnS shell using other starting materials, i.e., zinc stearate and zinc acetate, instead of Me2Zn. In these cases, a similar increase of the PL intensity could be achieved, but the PL quantum yields decreased drastically after a few days. The PL peak energy shifts to the high-energy side by 37 meV upon shell formation, as shown in Fig. 3, whereas in the case of CdSe NCs it shifts slightly to the low-energy side because the confinement effect is reduced by the penetration of the carrier wavefunctions from core to shell. The blueshift can be described as follows. In our previous study on CdSe/ZnS NCs, we found that NCs which possess local charges on the NC surface show a redshift of tens of meV, and that the redshift disappears due to neutralization of the charges by photo-adsorption of polar molecules, e.g., water and ammonia molecules [14], onto the NC surface.
Assuming that there are local charges on the surface of the CuInS2 NCs, the observed blueshift can be interpreted as a neutralization of the charges by the shell formation. The origin of the charges is considered to be dangling bonds on the NC surface. Red and blue lines in the inset of Fig. 2 show PL decay profiles of the CuInS2 and the annealed CuInS2/ZnS NCs. The intensity ratio of the main component increases due to the shell formation, with a corresponding decrease of the fast component. The decay time of the main component shows no change, as shown by the green lines. These results can be explained by a reduction of the non-radiative recombination centers corresponding to the dangling bonds on the NC surface. Black solid, broken, and blue lines in Fig. 3(a) show the absorption, PL, and excitation spectra of the CuInS2/ZnS NCs, respectively. The PL spectrum was measured under excitation at E_ex = 2.34 eV, and the excitation spectrum was monitored at the peak energy of the PL band. It is notable that the excitation spectrum shows a sharp peak at the energy of the absorption shoulder, and that its spectral shape is quite different from that of the absorption spectrum, especially at higher energies. Here, we consider the excitation-energy dependence of the PL quantum yield of the CuInS2/ZnS NCs. The PL quantum yield Q(E_ex) was calculated from the measured quantum yield at E_ex = 2.34 eV (≡ E_0) using

Q(E_ex) = Q(E_0) × [N(E_ex)/N(E_0)] × [A(E_0)/A(E_ex)],   (2)

where A(E_ex) and N(E_ex) are the absorbance at E_ex and the total photon number of the PL per unit time under excitation at E_ex, respectively. We used the approximation N(E_ex)/N(E_0) ≈ I(E_ex)/I(E_0), where I(E_ex) is the PL intensity per unit time at the PL peak energy shown in Fig. 3(b) (1.82 eV). This approximation is exact when the shape of the PL spectrum has no excitation-energy dependence; thus it holds under nonresonant excitation, where no dependence appears, as in the case of the CuInS2 NCs (Fig. 2). It is noted that the error is estimated to be at most 10% (totally a 10% overestimation: a 20% overestimation due to the PL narrowing and a 10% underestimation due to the spectral redshift) even under resonant excitation, because the spectral change is not so large. Red circles in Fig. 3(b) show the quantum yield Q(E_ex) calculated using Eq. (2). We found that the quantum yield of the CuInS2/ZnS NCs depends strongly on the photon energy of the excitation light; the quantum yield increases gradually with decreasing photon energy, and finally reaches 40-50%. The maximum value in the resonant excitation region is comparable to that of luminescent CdSe/ZnS NCs. These results may suggest that non-radiative recombination tends to occur when photo-generated carriers relax from higher to lower states, or that there is a pathway from higher states to something like a dark state. Note that the quantum yields of non-coated CuInS2 NCs also show a similarly strong dependence, and thus its origin may be associated with donors and/or acceptors in the NC cores.

PL properties of single NCs. To get further insight into the optical properties of the CuInS2/ZnS NCs, and also to explore the specific PL properties of single NCs, we have been measuring the PL from single NCs by utilizing the total internal reflection fluorescence (TIRF) microscopy method (Fig. 4(a)) [12]. For the measurements, the NCs were first dispersed in toluene and then spread onto a silica glass substrate with a hydrophobic surface by spin-casting.
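As an aside on Eq. (2), the bookkeeping is compact enough to state in code. The sketch below (our illustration) uses the measured reference value Q(E_0) = 7.7% from the text, but the intensity and absorbance ratios are purely hypothetical numbers chosen to show how a resonant excitation point can yield the 40-50% values quoted above:

```python
def quantum_yield(Q0, I_ratio, A_ratio):
    """Eq. (2): Q(E_ex) = Q(E_0) * [N(E_ex)/N(E_0)] * [A(E_0)/A(E_ex)],
    with the photon-number ratio approximated by the PL peak-intensity
    ratio I_ratio = I(E_ex)/I(E_0); A_ratio = A(E_ex)/A(E_0)."""
    return Q0 * I_ratio / A_ratio

Q0 = 0.077   # measured QY at E_0 = 2.34 eV (from the text)
# Hypothetical resonant point: 3x the PL per unit time, half the absorbance.
print(f"Q(E_ex) = {quantum_yield(Q0, I_ratio=3.0, A_ratio=0.5):.2f}")   # -> 0.46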
We adjusted the concentration of the NC solution to 1×10^-7 mol/L. It is noted that this concentration is one to two orders of magnitude higher than that in our study on single CdSe/ZnS NCs, because far fewer particles could be detected in the PL measurements of the CuInS2/ZnS NCs. A PL image of the sample is shown in Fig. 4(b). It turned out that PL blinking, i.e., on-off intermittency of the PL intensity, appears also in the I-III-VI2 QDs, as shown in Figs. 4 c1-c4. The blinking behaviour is associated with occasional events of capturing and releasing of photogenerated carriers at trap sites on or near the NC surface [4,5]. On the other hand, in the time trajectory shown in Fig. 4 c4, the PL intensity doubles at around 27 seconds, and so this PL is considered to come from two (or more) NCs. In the case of spot B, the spot size seems to be slightly larger than the diffraction-limited size (ca. 470 nm), and its PL trajectory (c5) shows no on-off intermittency. Thus, this trajectory is considered to be composed of PL from several NCs. From such careful examinations of the time trajectories for each bright spot, we found that the PL intensity per unit time during the on-events of the single NCs ranges roughly from one half to one third of that of CdSe/ZnS NCs, and that the occurrence of the on-off intermittency seems to be basically similar (stochastic analysis is needed for a more quantitative comparison). Though these NCs are sufficiently luminescent, there are also many dark NCs whose PL intensity is difficult to distinguish from the background signal in the PL image. Two possible reasons can be considered at present. The first is that a few NCs are resonantly excited by the 532 nm light irradiation and these NCs emit photons quite effectively, as can be seen in Fig. 3(b), while nonresonant NCs hardly emit photons. The second is that all of the NCs are excited nonresonantly, but there are two types of NCs: one type is highly luminescent, and the other is dark. We will measure the excitation-energy dependence of the PL blinking of single CuInS2/ZnS NCs to elucidate the reason. Finally, we present a preliminary result of a polarization modulation measurement of single NCs, though further investigation is needed to discuss the polarization properties of the NCs. The PL from single NCs was analysed using a rotating linear polarizer with a frequency of 1/3 Hz. Figure 4 d1 shows one of the trajectories in this measurement. PL modulation at the rotation frequency can be observed in the time range between 20 s and 25 s, i.e., during the duration of a PL on-event. This result indicates that the PL of single CuInS2/ZnS NCs is polarized. Detailed properties of the PL polarization of the CuInS2/ZnS NCs will be described elsewhere with quantitative analysis.

Summary

In summary, we have synthesized group I-III-VI2 CuInS2 NCs and CuInS2/ZnS core-shell NCs, and characterized the optical properties of the NCs. In the absorption spectra, a broad shoulder appears on the low-energy side. Based on the results of TEM observations, the shoulder is assigned to absorption by excitons confined in the NCs, and the origin of its broadness is the large size and shape distributions of the NCs. As for the PL intensity, a strong dependence on the reaction time was observed. The PL process in the CuInS2 NCs is considered to be dominated by the relaxation process associated with donors/acceptors inside the NCs.
The PL decay measurements confirm that the exciton PL is indeed affected by the donors/acceptors, as in bulk CuInS2 crystals. We have investigated the effects of ZnS shell formation on the PL properties of the NCs. As a result, the PL quantum yield increases from 4.8% to 7.7%, and this value is maintained for at least two weeks. We found that the PL quantum yield of the NCs depends strongly on the photon energy of the excitation light, reaching 40-50%. In addition, we have succeeded in detecting PL from single NCs. It turned out that there are two types of CuInS2/ZnS NCs: one is sufficiently luminescent, as with II-VI NCs, and the other is dark.
Cerebral Small Vessel Disease Is Associated with Motor, Cognitive, and Emotional Dysfunction in Multiple System Atrophy

Background: Cerebral small vessel disease (CSVD) has not been systematically studied in patients with multiple system atrophy (MSA). Objective: We sought to explore whether MSA patients suffer from a heavier CSVD burden relative to healthy individuals and whether CSVD has a relationship with motor, cognitive, and emotional dysfunction in patients with MSA. Methods: This study consecutively recruited 190 MSA patients and 190 matched healthy controls, whose overall CSVD burden and single CSVD imaging markers (including white matter hyperintensity (WMH), microbleeds, lacunes, and enlarged perivascular spaces (EPVS)) were measured. Of the MSA patients, 118 completed multi-dimensional outcome assessments. Spearman's correlations and multivariable linear regressions were performed. Results: We observed a greater burden of overall CSVD, WMH, and EPVS in MSA patients compared with controls, but not of microbleeds and lacunes. Motor dysfunction and cognitive impairment were significantly worse in subjects with severe CSVD than in those with none-to-mild CSVD. In patients with MSA, the severity of CSVD burden was positively associated with motor impairments as measured by the Unified Multiple System Atrophy Rating Scale-II (β = 2.430, p = 0.039) and the Scale for the Assessment and Rating of Ataxia (β = 1.882, p = 0.015). The CSVD imaging markers displayed different associations with MSA outcomes. WMH was associated with motor, cognitive, and emotional deficits, while EPVS in the centrum semiovale, basal ganglia, and hippocampus regions was correlated only with motor severity, anxiety, and cognition, respectively. Similar findings were noted in MSA-cerebellar and MSA-parkinsonian patients. Conclusions: Concomitant CSVD may be correlated with worse multi-dimensional dysfunction in patients with MSA.

INTRODUCTION

Multiple system atrophy (MSA) is an adult-onset, devastating, and fatal neurodegenerative disease characterized by progressive autonomic dysfunction, cerebellar impairment, and parkinsonism [1, 2]. The pathologic hallmark is the cytoplasmic inclusions formed in oligodendrocytes by the accumulation of aggregated α-synuclein [1, 2], which qualifies MSA as an α-synucleinopathy. Emerging evidence has suggested that non-motor symptoms including cognitive impairment, anxiety, and depression may be under-recognized features of MSA [3-6]. Multiple symptoms of cerebral small vessel disease (CSVD) are documented to overlap with these symptoms of MSA. The mechanisms underlying CSVD may include destruction of the blood-brain barrier (BBB) caused by endothelial cell dysfunction, and cerebral hypoperfusion resulting from autoregulation disorders, which may also be involved in the pathophysiology of MSA [7-10]. Therefore, due to the additive effect of the symptoms and a shared possible mechanism, it is reasonable to speculate that CSVD may be closely related to MSA.
Recent studies have mainly focused on single CSVD markers and may have overlooked the effects of the overall CSVD burden and other neuroimaging markers. Previous studies reported a greater white matter hyperintensity (WMH) burden in patients with MSA compared with controls, but no significant differences in lacunes [11,12]. Microstructural abnormalities of WMH in MSA were also confirmed by studies based on diffusion-tensor imaging (DTI) [13,14] and diffusion magnetic resonance imaging (dMRI) [15,16]. In addition to WMH and lacunes, CSVD imaging markers also include enlarged perivascular spaces (EPVS) and microbleeds [17]. However, these have not been characterized in MSA patients, nor has the overall CSVD burden. Moreover, few studies have explored the relationship between CSVD and MSA outcomes; only one study from Japan, which recruited 16 MSA patients, explored microstructural white matter abnormalities and clinical symptoms [18]. The authors found that DTI may be useful for assessing the severity of motor dysfunction, but they did not consider non-motor symptoms including cognitive impairment, anxiety, and depression [18]. Accordingly, a comprehensive investigation into whether and how various CSVD neuroimaging features influence multiple functional domains of MSA is an important issue that warrants attention. The total CSVD score integrating four imaging manifestations was created to capture the overall effect of CSVD on the brain and has been used extensively in clinical studies [19]. Herein, the first purpose of this study is to explore whether patients with MSA and healthy individuals differ in CSVD burden. The second is to investigate the impact of the overall CSVD burden and four separate CSVD imaging markers (WMH, lacunes, EPVS, and microbleeds) on multidimensional functional domains of MSA.
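For concreteness, the total CSVD score cited above [19] is commonly operationalized as a 0-4 sum of four binary items. The following R sketch illustrates that convention; the function name and the exact cut-offs shown are our illustration of the commonly cited scale, not code from this study:

# Hypothetical sketch of the total CSVD score (0-4) as commonly
# operationalized in the literature; all argument names are illustrative.
csvd_score <- function(n_lacunes, n_microbleeds, bg_epvs_grade,
                       fazekas_deep, fazekas_periventricular) {
  pt_lacune <- as.integer(n_lacunes >= 1)                 # any lacune
  pt_cmb    <- as.integer(n_microbleeds >= 1)             # any microbleed
  pt_epvs   <- as.integer(bg_epvs_grade >= 2)             # >10 BG-EPVS
  pt_wmh    <- as.integer(fazekas_deep >= 2 |
                          fazekas_periventricular == 3)   # substantial WMH
  pt_lacune + pt_cmb + pt_epvs + pt_wmh
}

# Example: one lacune, no microbleeds, grade 2 BG-EPVS, Fazekas 2/1
csvd_score(1, 0, 2, 2, 1)   # returns 3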
Study population We consecutively recruited MSA patients at the Department of Neurology of Huashan Hospital from August 2019 to February 2022. All patients were diagnosed with clinically probable MSA according to the Movement Disorder Society criteria [20], including a combination of core clinical features (rigorously defined autonomic dysfunction, poorly levodopa-responsive parkinsonism or cerebellar ataxia) and no unsupporting features. Patients were further classified as MSA-parkinsonian type (MSA-P) and MSA-cerebellar type (MSA-C) based on the predominant motor phenotype. Importantly, FDG-PET and DAT-PET were performed to further discriminate MSA from Parkinson's disease (PD) and Lewy body dementia. On FDG-PET, MSA patients show low metabolism in the putamen (posterior), pons, and cerebellum, which can be used to distinguish MSA from PD and Lewy body dementia [21]. A normal DAT-PET can help to rule out PD and MSA-P [22]. Genetic testing was administered to differentiate MSA from spinocerebellar ataxia when necessary. All the MSA patients included in this study had their clinical diagnosis verified by follow-up every 6 months and by the recently updated diagnostic criteria [2]; if not, the patient was excluded. For assessing CSVD neuroimaging manifestations, only patients who underwent cranial MRI examinations were included in our analysis. For the control group, we randomly selected age- and sex-matched healthy subjects who visited the Medical Examination Center of Shanghai Fifth People's Hospital for routine medical evaluation with available cranial MRI and blood examinations. All subjects with central nervous system infections, head trauma, other neurodegenerative disorders (e.g., PD, Lewy body dementia, spinocerebellar ataxia, Alzheimer's disease), other major neurological disorders, major psychological diseases, severe systemic diseases (e.g., cancer), or a family history of genetic diseases were excluded. This study was approved by the regional ethical committees of Huashan Hospital and Shanghai Fifth People's Hospital (approval numbers: KY2020-116 and KY2020-065 for Huashan Hospital, and 2021-211 for Shanghai Fifth People's Hospital). Written informed consent was obtained from all participants or authorized representatives. All research procedures adhered to the tenets of the Declaration of Helsinki. Clinical assessment Fasting venous blood samples were used to measure triglyceride, total cholesterol, low-density lipoprotein (LDL), high-density lipoprotein (HDL), and homocysteine. Baseline demographic characteristics including age, sex, smoking status, and comorbid conditions such as diabetes and supine hypertension were obtained after admission through face-to-face interviews by the neurologists. Supine hypertension was defined as a supine systolic blood pressure ≥140 mmHg or a diastolic blood pressure ≥90 mmHg. Orthostatic blood pressure was evaluated at the bedside, and orthostatic hypotension was defined as a sustained reduction of systolic blood pressure of at least 30 mmHg or of diastolic blood pressure of 15 mmHg within 3 minutes of standing. Sleep disorders were evaluated by interview, including rapid eye movement sleep behavior disorder (RBD) and sleep-related breathing disorders (nocturnal stridor and obstructive sleep apnea).
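As a worked illustration of the blood-pressure criteria just stated, a minimal R helper might look as follows; the function and variable names are hypothetical and not from the study's code:

# Encode the supine-hypertension and orthostatic-hypotension criteria above.
# standing_* values are assumed to be measured within 3 minutes of standing.
classify_bp <- function(supine_sbp, supine_dbp, standing_sbp, standing_dbp) {
  list(
    supine_hypertension     = supine_sbp >= 140 | supine_dbp >= 90,
    orthostatic_hypotension = (supine_sbp - standing_sbp) >= 30 |
                              (supine_dbp - standing_dbp) >= 15
  )
}

classify_bp(150, 85, 110, 75)
# $supine_hypertension: TRUE (150 >= 140)
# $orthostatic_hypotension: TRUE (systolic drop of 40 mmHg)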
MSA outcomes assessment Motor dysfunction duration was uniformly defined as the time interval from the onset of motor symptoms to enrollment. The Unified Multiple System Atrophy Rating Scale-II (UMSARS-II), Scale for the Assessment and Rating of Ataxia (SARA), and International Cooperative Ataxia Rating Scale (ICARS) were used to measure motor function. The Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA) were performed to assess cognitive function. The Hamilton Anxiety Rating Scale (HAMA) and Hamilton Depression Rating Scale (HAMD) were used to assess anxiety and depression, respectively. The Composite Autonomic Symptom Score (COMPASS) 31 questionnaire was used to assess the severity of autonomic dysfunction. The evaluation of the scales for multi-dimensional dysfunction in patients with MSA was completed independently by one physician specializing in movement disorders. Neuroimaging assessment The maximum interval between clinical assessments and imaging scans was 7 days. Neuroimaging examinations were performed on a 3.0 T MRI scanner (GE, USA) with a standard 8-channel HRBRAIN coil. The MRI protocol was the same for patients with MSA and healthy controls, including the full sequence of cranial MRI such as axial T2-weighted sequences, fluid-attenuated inversion recovery (FLAIR), T1-weighted sequences, diffusion-weighted imaging (DWI), and axial susceptibility-weighted imaging (SWI). The specific parameter information for MRI acquisition is given in the Supplementary Methods. Individual imaging features of CSVD were rated strictly according to neuroimaging standards [17]. The Fazekas scale was used for periventricular and deep WMH evaluation. Lacunes were defined as lesions with a diameter of 3-15 mm and a signal similar to cerebrospinal fluid but with a surrounding rim of hyperintensity on FLAIR images. Microbleeds were defined as rounded or ovoid hypointensity foci with a diameter of 2-5 mm on SWI images (less than 10 mm maximally in diameter). EPVS were differentiated from lacunes by size, because EPVS were smaller than 3 mm and had no surrounding rim on FLAIR images. In addition, EPVS were rated manually in the basal ganglia (BG-EPVS), centrum semiovale (CS-EPVS), hippocampus (H-EPVS), and midbrain (M-EPVS) based on different semi-quantitative scales. BG-EPVS and CS-EPVS were rated as grade 1 (0-10 EPVS), grade 2 (11-25 EPVS), and grade 3 (>25 EPVS) [23]. H-EPVS were rated based on the slices where the midbrain and parahippocampal gyrus were visible [24]. A sum of the left and right hippocampal EPVS ≥7 was scored as 1, representing an extensive burden, while a score of 0 represented a non-extensive burden [25]. M-EPVS was rated according to presence or absence; a score of 1 represented presence, and 0 represented absence. Statistical analysis Statistical analyses were performed using R version 4.1.2. For demographic and clinical features, continuous variables with normal distribution were presented as mean ± standard deviation, and those with non-normal distribution were presented as median (interquartile range). The Student t-test or ANOVA for normally distributed continuous parameters and the Wilcoxon or Kruskal-Wallis test for non-normally distributed continuous parametric variables were used as appropriate. Categorical variables were expressed as frequency (percentage), and the χ2 test or Fisher exact test was used.
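To make the rating and comparison rules above concrete, a minimal R sketch follows. The EPVS cut-offs mirror the text, while the data-frame and column names (df, umsars2, csvd_group, sex) are illustrative assumptions rather than the study's actual code:

# Semi-quantitative EPVS grading described above (BG and CS regions)
grade_bg_cs_epvs <- function(n) cut(n, breaks = c(-Inf, 10, 25, Inf),
                                    labels = 1:3)
# Hippocampal EPVS: extensive burden if left + right count >= 7
score_h_epvs <- function(n_left, n_right) as.integer((n_left + n_right) >= 7)

grade_bg_cs_epvs(c(4, 18, 30))   # grades 1, 2, 3

# Typical group comparisons across CSVD severity strata:
# Kruskal-Wallis for a non-normal continuous outcome,
# chi-square for a categorical variable
kruskal.test(umsars2 ~ csvd_group, data = df)
chisq.test(table(df$csvd_group, df$sex))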
Potential confounders affecting the motor or non-motor outcomes of MSA were evaluated under the univariate linear regression model, and variables significantly correlated with the outcome (p < 0.05) were further added as covariates in the corresponding multivariable linear regression models to examine the associations of overall and separate CSVD burden with MSA multi-dimensional dysfunction. Spearman's correlation was applied to assess the relevance of CSVD burden to MSA outcomes and motor dysfunction duration. Subgroup analysis stratified by age was further performed. Heatmaps and forest plots were used to visualize the distribution of correlation effects via the 'heatmap' and 'forestplot' R packages, respectively. A two-tailed p < 0.05 was considered statistically significant. To account for multiple testing, a more conservative significance level based on Bonferroni correction was applied when necessary. There was no statistical difference in the distribution of overall CSVD burden and separate imaging features between MSA-P and MSA-C, except for CS-EPVS and microbleeds. Compared with MSA-C, MSA-P patients tended to have a heavier CS-EPVS load and a higher frequency of microbleeds (Supplementary Table 2). Group comparisons of demographic profiles and MSA outcomes stratified by CSVD severity Table 2 compares the demographic and outcome features stratified by CSVD severity. Of the 118 MSA patients, 58 had a none-to-mild overall CSVD burden, 39 had a moderate overall CSVD burden, and 21 had a severe overall CSVD burden. The median motor dysfunction duration (months) was 17, 24, and 24, respectively. The education level in our study was relatively low; the median was 9 years for all 3 groups. With increasing severity of CSVD burden, the UMSARS-II (p < 0.001), SARA (p < 0.001), and ICARS (p = 0.002) scores ascended gradually (Table 2, Fig. 1B, and Fig. 2). The same trend was also found for MSA-C (Supplementary Table 3) and MSA-P (Supplementary Table 4). Regarding cognitive function, the MMSE score and MoCA score descended gradually for all MSA patients (Table 2) and MSA-C patients (Supplementary Table 3), but only the p-value for the MoCA score reached statistical significance in all MSA patients (p = 0.033, Table 2). Associations between CSVD neuroimaging markers and multi-dimensional outcomes in MSA We further explored the associations of CSVD neuroimaging features with the severity of motor and non-motor outcomes in patients with MSA. EPVS burden in different regions was related to different multi-dimensional outcomes (Fig. 3B and Supplementary Tables 8-11). Specifically, in the centrum semiovale, a higher EPVS burden was significantly associated with a higher SARA score (β = 1.558, p = 0.046). In the hippocampus, a higher EPVS burden tended to be associated with lower MMSE (p = 0.023) and MoCA (p = 0.004) scores. In the basal ganglia, a negative correlation was found between EPVS and HAMA score (p = 0.023). The WMH scores were associated with motor impairment, cognitive dysfunction, and mood disorders in the multivariable linear regression models. Specifically, a higher WMH burden was positively correlated with UMSARS-II, SARA, and HAMA scores (p = 0.003, p = 0.049, and p = 0.013, respectively), but negatively correlated with MoCA score (β = -0.676, p = 0.027) (Fig. 3B and Supplementary Table 12). As for lacunes and microbleeds, their correlations with MSA outcomes were weak, and no statistical difference was found (Fig. 3B and Supplementary Tables 13 and 14).
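The two-step modeling strategy described above (univariate screening of candidate confounders, then multivariable regression of each outcome on CSVD burden plus the retained covariates) can be sketched in R as follows. This is an illustrative reconstruction, not the study's actual code; the data-frame and column names (df, umsars2, csvd, age, ...) are assumptions:

# Step 1: univariate screen; keep candidates with p < 0.05.
# Row 2 of the coefficient table is the single predictor's p-value.
candidates <- c("age", "sex", "motor_duration", "education")
keep <- Filter(function(v) {
  p <- summary(lm(reformulate(v, response = "umsars2"),
                  data = df))$coefficients[2, 4]
  p < 0.05
}, candidates)

# Step 2: multivariable model of the outcome on CSVD burden + covariates
fit <- lm(reformulate(c("csvd", keep), response = "umsars2"), data = df)
summary(fit)    # coefficient on csvd estimates the adjusted association

# Spearman's correlation between CSVD burden and the outcome
cor.test(df$csvd, df$umsars2, method = "spearman")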
Subgroup and complementary analysis Next, we explored the relationship between CSVD burden and multiple outcomes in different age subgroups (Supplementary Table 15). Similar but slightly weakened results were obtained, indicating that the observed associations in the primary analysis were not driven by age. Moreover, no association was found between CSVD neuroimaging features and motor dysfunction duration (Supplementary Table 16). DISCUSSION By thoroughly exploring the relationship between CSVD and MSA, we had several main findings. First, patients with MSA had a greater CSVD burden relative to healthy individuals, and no statistical difference was found in the distribution of overall CSVD burden between MSA-P and MSA-C. Second, motor dysfunction and cognitive impairment were significantly worse in subjects with severe CSVD than in those with a none-to-mild CSVD burden. Third, the overall CSVD severity was closely related to motor dysfunction in patients with MSA, whether MSA-C or MSA-P. Among CSVD imaging markers, WMH had a close relationship with motor, cognitive, and emotional deficits, while EPVS in the centrum semiovale, basal ganglia, and hippocampus regions was correlated only with motor severity, anxiety, and cognition, respectively. Taken together, concomitant CSVD may be correlated with worse multi-dimensional dysfunction in patients with MSA, and different CSVD imaging markers may play distinct roles in MSA outcomes. This study ascertained the distinctions in overall CSVD burden and CSVD imaging markers that exist between MSA patients and healthy subjects. In our findings, 18.66% of MSA patients had a severe CSVD burden, while in patients with PD (another α-synucleinopathy), the prevalence was 21.35% [26]. Both prevalences were higher than that in healthy controls (9.70%), indirectly verifying that these two α-synuclein diseases are frequently complicated by CSVD neuroimaging manifestations. This view was also confirmed in clinicopathological and autopsy studies [27][28][29]. In terms of CSVD imaging markers, we found that patients with MSA had more severe WMH, but no significant differences were found in lacunes compared with healthy people. This result is in line with previous studies, which had relatively small sample sizes and conducted cerebral MRI using a 1.5 T device [11,12]. Several studies based on DTI [13,14] and dMRI [15,16] have indeed demonstrated the microstructural abnormalities of WMH in MSA. It is worth noting that in our study, 39.47% of MSA patients had an extensive EPVS burden in the hippocampus, higher than the previously reported 26.14% in patients with hypertension [30]. In the midbrain, the burden of EPVS accounted for 41.05%, more severe than that in the healthy controls. Interestingly, no statistical difference was found in the distribution of overall CSVD burden between MSA-P patients and MSA-C patients. Only 35 MSA-P patients (18.42%) were included in our study; this ratio is similar to that in a study from Japan (18.18%) [11,12], so future studies need to include more MSA-P patients to verify our findings. In short, given that these CSVD markers have rarely been investigated in MSA patients, their prominent CSVD burden needs to be validated in more studies.
To our knowledge, we are the first to show that motor dysfunction and cognitive impairment were significantly worse in MSA patients with severe CSVD than in those with none-to-mild CSVD. In addition, the overall CSVD severity may be related to worse motor dysfunction in patients with MSA. These findings were largely consistent with those of previous studies in the PD field [31,32], so we speculate that CSVD may affect motor dysfunction in α-synucleinopathies by extensively destroying multiple brain pathways. Future studies with functional imaging or neurotransmitter biomarkers are needed to test this hypothesis. We found no correlation between the overall CSVD severity and the non-motor domains (cognitive function, depression, and anxiety), which differed from the results in patients with PD [26,32]. This indicated that the default network dysfunction of MSA is different from that of PD [13]. In addition, the fact that the enrolled MSA patients were relatively young may partially explain the results. Nevertheless, we presented the first preliminary evidence of the relationship between CSVD severity and MSA outcomes, a future direction that warrants deeper exploration. Several mechanisms may underlie the correlation between CSVD and MSA. The first is dysfunction of the neuro-glia-vascular unit. Capillary endothelial cells and pericytes interact with astrocytes, oligodendrocytes, and microglia, which together form the neuro-glia-vascular unit. Research in humans has identified that BBB destruction mediated by endothelial dysfunction may trigger reduced cerebrovascular reactivity, impaired blood flow, and impaired interstitial fluid drainage, thereby initiating or exacerbating the development of CSVD [33][34][35]. BBB impairment is simultaneously involved in the pathogenesis of MSA [36]. Evidence has confirmed that in patients with MSA, BBB impairment was correlated with clinical severity [10] and the rate of disease progression [9]. Therefore, MSA and CSVD share the BBB impairment mechanism. The second is hypoperfusion resulting from cerebral autoregulation disorders [8], which is one of the main pathogeneses of CSVD [33]. Patients with MSA and PD had significant cerebral autoregulation disorders [37], which were thought to be related to neurogenic orthostatic hypotension [37]. Previous studies have shown that decreased orthostatic blood pressure was significantly and independently correlated with CSVD in patients with MSA [11,12] and patients with PD [11,38]. Our study confirmed these results by showing that postural hypotension was independently associated with CSVD burden after adjusting for various traditional risk factors (Supplementary Table 17), which suggested that the dynamic changes in cerebral perfusion caused by orthostatic hypotension may be predisposing factors for CSVD in patients with α-synucleinopathies [11,12,38]. Furthermore, supine hypertension, as a risk factor for CSVD, can cause complex pathological alterations to the cerebral small vessels, compromising the structural and functional integrity of the cerebral microcirculation, for example by promoting microvascular rarefaction, BBB impairment, and neuro-glia-vascular unit uncoupling, eventually impairing the cerebral blood supply [39]. These pathological changes may in turn impair the structure and function of multiple brain pathways in MSA. Third, oligodendrocytes in MSA may be vulnerable to cerebral hypoperfusion, as basic research has shown that oligodendrocytes are indeed very sensitive to ischemic insults [40,41]. Therefore, the two-hit hypothesis for vascular dysfunction or CSVD may
also apply to MSA; that is, vascular dysfunction or CSVD may have a "double hit" effect on brain health by accelerating neurodegeneration (hit two) in addition to impairing brain perfusion, function, and pathways mediated by multiple focal ischemic or hypoxic micro-injuries (hit one). This hypothesis requires testing in future studies. Our results highlighted the different roles of the four single markers of CSVD in MSA. A previous study reported that microstructural abnormalities of WMH may be associated with motor dysfunction in MSA patients [18]. In contrast, it was unclear whether comorbid WMH affects non-motor functions and whether other CSVD markers affect adverse outcomes in patients with MSA. In a relatively large MSA cohort, we extended former studies by showing close associations between WMH and motor, cognitive, and emotional dysfunction, in agreement with the findings in PD [26]. Secondly, consistent with previous WMH studies [11,12], no association between CSVD neuroimaging features and motor dysfunction duration was observed, implying that motor dysfunction duration may not aggravate the CSVD burden. Besides, we found that EPVS burden in different regions affected different functional domains. EPVS in the hippocampus tended to be correlated with cognitive impairment, as previously revealed in Alzheimer's disease and hypertension populations [30]. Patients with a higher EPVS count in the hippocampus were found to show worse performance in verbal reasoning [30], which may be related to the functional connection between the hippocampus and the medial prefrontal lobe [42][43][44]. In addition, in the centrum semiovale, EPVS was associated with motor function exclusively, while in the basal ganglia, EPVS was significantly associated with anxiety. These results differed from previous publications on PD, in which EPVS in these two regions (centrum semiovale and basal ganglia) was associated with cognitive function [26,45]. Relevant large-scale research is necessary. It is noteworthy that the perivascular space has been proposed to be part of the glial lymphatic (glymphatic) system in the brain and has been proven to play a key role in the material exchange between cerebrospinal fluid and brain parenchyma; therefore, EPVS is considered to be a pathological feature of glymphatic system dysfunction [46]. We found that EPVS burden in different regions affected different functional domains in MSA patients, suggesting that fluid and toxin clearance in different regions may be involved in different brain functions. In the future, multicenter studies with large samples are needed to confirm this point. WMH has been proposed to be associated with vascular pathologies such as arterial sclerosis and regional hypoperfusion, and EPVS may be associated with neuro-glia-vascular unit dysfunction and BBB destruction [33]. Studying the specific pathophysiological mechanisms underlying the course of MSA will be a daunting task ahead.
The strengths of this study include the relatively large sample size, the comprehensive cerebral imaging evaluations, and the multidimensional outcome assessments. However, some limitations exist. First, the cross-sectional study design limited our ability to explore the effects of CSVD on disease progression in MSA. Well-characterized longitudinal studies are required in the future. Second, we only carried out semi-quantitative scoring of CSVD imaging markers and lacked quantitative evaluation such as WMH volumes. Third, the diagnosis of MSA was made according to the consensus criteria for probable MSA, whereas a "definite" MSA diagnosis requires a postmortem autopsy. Collectively, our findings suggest clinicians should be aware that comorbid CSVD is correlated with worse multidimensional dysfunction in patients with MSA and that CSVD imaging markers play distinct roles in MSA outcomes. In the meantime, the early appearance of WMH and an extensive H-EPVS burden may reflect the onset of cognitive decline in the course of MSA, and in-depth neuropsychological assessments should be considered for these patients. Fig. 3. Associations of CSVD burden with multi-dimensional motor and non-motor scales in MSA. A) Results of Spearman's correlation are displayed in the heatmap, with colors representing correlation coefficients. Significance: ***p < 0.001, **p < 0.01, *p < 0.05 (Bonferroni-adjusted p values). B) Results of multivariable linear regressions are shown in the forest plot. BG-EPVS, enlarged perivascular spaces in the basal ganglia; CI, confidence interval; COMPASS31, Composite Autonomic Symptom Score 31; CS-EPVS, enlarged perivascular spaces in the centrum semiovale; CSVD, cerebral small vessel disease; HAMA, Hamilton Anxiety Rating Scale; HAMD, Hamilton Depression Rating Scale; H-EPVS, enlarged perivascular spaces in the hippocampus; ICARS, International Cooperative Ataxia Rating Scale; M-EPVS, enlarged perivascular spaces in the midbrain; MMSE, Mini-Mental State Examination; MoCA, Montreal Cognitive Assessment; SARA, Scale for the Assessment and Rating of Ataxia; UMSARS-II, Unified Multiple System Atrophy Rating Scale-II; WMH, white matter hyperintensity. Table 1 Demographic characteristics and CSVD neuroimaging features of MSA patients and healthy controls. The p-value is for the comparison of total MSA patients and the healthy control population. For continuous variables, the Student t-test for normally distributed parameters (age and LDL) and the Wilcoxon test for non-normally distributed parametric variables (the remaining continuous variables other than age and LDL) were used. For categorical variables, the χ2 test or Fisher exact test was used. BG-EPVS, enlarged perivascular spaces in the basal ganglia; CS-EPVS, enlarged perivascular spaces in the centrum semiovale; CSVD, cerebral small vessel disease; HDL, high-density lipoprotein; H-EPVS, enlarged perivascular spaces in the hippocampus; LDL, low-density lipoprotein; M-EPVS, enlarged perivascular spaces in the midbrain; MRI, magnetic resonance imaging; MSA, multiple system atrophy; WMH, white matter hyperintensity.
Table 2 Group comparisons of demographic profiles and MSA outcomes stratified by the severity of CSVD burden. A total of 118 MSA patients with complete data on overall CSVD burden and outcome assessments were compared here. For continuous variables, ANOVA for normally distributed parameters (age, total cholesterol, HDL, LDL, and MoCA) and the Kruskal-Wallis test for non-normally distributed parametric variables (motor dysfunction duration, triglyceride, homocysteine, UMSARS-II, SARA, ICARS, COMPASS31, MMSE, HAMA, and HAMD) were used. For categorical variables, the χ2 test or Fisher exact test was used. COMPASS31, Composite Autonomic Symptom Score 31; CSVD, cerebral small vessel disease; HAMA, Hamilton Anxiety Rating Scale; HAMD, Hamilton Depression Rating Scale; HDL, high-density lipoprotein; ICARS, International Cooperative Ataxia Rating Scale; LDL, low-density lipoprotein; MMSE, Mini-Mental State Examination; MoCA, Montreal Cognitive Assessment; MSA, multiple system atrophy; RBD, rapid eye movement sleep behavior disorder; SARA, Scale for the Assessment and Rating of Ataxia; UMSARS-II, Unified Multiple System Atrophy Rating Scale-II.
A report of two cases of bulbospinal form Alexander disease and preliminary exploration of the disease Alexander disease (AxD) is a cerebral white matter disease affecting a wide range of ages, from infants to adults. In the present study, two cases of bulbospinal form AxD were reported, and a preliminary exploration of AxD was conducted through clinical, functional magnetic resonance imaging (fMRI) and functional analyses. In total, two de novo mutations in the glial fibrillary acidic protein (GFAP) gene (c.214G>A and c.1235C>T) were identified in unrelated patients (one in each patient). Both patients showed increased regional neural activity and functional connectivity in the cerebellum and posterior parietal cortex according to fMRI analysis. Notably, grey matter atrophy was discovered in the patient with the c.214G>A variant. Functional experiments revealed aberrant accumulation of mutant GFAP and decreased solubility of the c.1235C>T variant. Under pathological conditions, autophagic flux was activated for GFAP aggregate degradation. Moreover, transcriptional data of AxD and healthy human brain samples were obtained from the Gene Expression Omnibus database. Gene set enrichment analysis revealed an upregulation of immune-related responses and a downregulation of ion transport, synaptic transmission and neurotransmitter homeostasis. Enrichment analysis of cell-specific differentially expressed genes also indicated a marked inflammatory environment in AxD. Overall, the clinical features of the two patients with bulbospinal form AxD were thoroughly described. To the best of our knowledge, the brain atrophy pattern and spontaneous brain functional network activity of patients with AxD were explored for the first time. Cytological experiments provided evidence of the pathogenicity of the identified variants. Furthermore, bioinformatics analysis found that inflammatory immune-related reactions may play a critical role in AxD, which may be conducive to the understanding of this disease. Introduction Alexander disease (AxD) is a progressive and fatal neurological disorder characterized by astrocytic cytoplasmic inclusions (1). These inclusions, namely Rosenthal fibers, contain glial fibrillary acidic protein (GFAP) along with several stress proteins, such as small heat shock protein 27 and αB-crystallin (2). In 2001, GFAP was identified as a candidate gene for AxD; it encodes the major intermediate filament (IF) protein in astrocytes (3). GFAP plays an important role in cell migration, motility and mitosis, and has also been implicated in the mechanical integrity of cells and cell signaling (4). Unlike most variants of other IF disorders, which act in a loss-of-function manner, all known GFAP mutations in AxD are genetically dominant and appear to produce a toxic gain-of-function effect (5). The downstream consequences include the sequestration of protein chaperones, abnormal IF network assembly and the predisposition of the GFAP protein to form aggregates, as well as the hyperactivation of cellular stress (6). The combination of these consequences then induces numerous dysfunctions, from intracellular vesicle regulation to ion homeostasis, synapse formation and cellular communication, ultimately causing neurological disorders (7). However, the specific mechanism of AxD pathogenesis remains unclear.
In terms of clinical characteristics, AxD is typically classified into three forms according to age at onset: The infantile (<2 years old), juvenile (2-12 years old) and adult (≥13 years old) forms (8). However, the heterogeneity of neurological manifestations and the wide variety in onset age render diagnosis challenging. Based on neurological analysis and magnetic resonance imaging (MRI), Yoshida et al (9) proposed new guidelines for diagnosing AxD in 2011. Under these guidelines, AxD can be classified into three types: The cerebral (type 1), bulbospinal (type 2) and intermediate (type 3) forms. The cerebral (type 1) form of AxD is characterized by delayed psychomotor development, convulsions, macrocephaly and leukoencephalopathy, appearing with frontal lobe predominance on brain imaging scans. Patients with the bulbospinal (type 2) form present with muscle weakness, hyperreflexia and distinct bulbar dysfunction, typically appearing as medulla oblongata or cervical cord atrophy on MRI scans. The intermediate (type 3) form is characterized by several of the symptoms of the cerebral and bulbospinal forms. Although characteristic neurological and neuroradiological findings can assist the diagnosis of AxD, the definitive diagnosis currently relies on a genetic test or pathological examination (10). The present study reported two cases of bulbospinal form AxD, and clinical, functional (f)MRI and functional analyses were conducted. In addition, bioinformatics analysis of published data was performed to explore the potential pathogenic mechanisms of AxD. Materials and methods Participants. In total, two identified probands (P3433 and P4288) with AxD from two unrelated families and 500 healthy subjects as controls for genetic analysis, as well as 15 normal individuals as controls for imaging analysis, were enrolled from the Department of Neurology, Rui Jin Hospital, Shanghai Jiao Tong University School of Medicine (Shanghai, China). Diagnostic workups of patients with AxD included history taking, physical examination and brain imaging according to the diagnosis guidelines (9). Genetic analysis. Genomic DNA was extracted from peripheral blood using the phenol-chloroform method (11). EDTA-anticoagulated blood and red blood cell lysis buffer (10 mmol/l NaCl, 10 mmol/l Tris-HCl, 5 mmol/l MgCl2) were mixed and incubated at 4˚C for 20 min, then centrifuged at 1,811 x g for 10 min at 4˚C. The cell lysates were digested with nuclei lysis buffer (5 mmol/l NaCl, 10 mmol/l EDTA, 10 mmol/l Tris-HCl), 10% SDS and proteinase K solution (20 mg/ml; Thermo Fisher Scientific, Inc.) at 37˚C overnight. After digestion, equal volumes of Tris-saturated phenol and a chloroform/isopropanol mixture (24:1) were added to each tube. Following centrifugation at 1,811 x g for 10 min at 4˚C, the upper aqueous phase was transferred to new tubes and mixed gently with 2 volumes of room-temperature absolute ethanol to precipitate the DNA. After rinsing the DNA pellet with 75% ethanol twice (3,220 x g, 1 min), the supernatant was discarded and the DNA pellet was air dried. Then, the DNA was dissolved in TE buffer for 2 h at 37˚C and stored at -80˚C. Whole-exome sequencing (WES) was performed in the two probands. DNA quality was verified by the 2200 TapeStation system (Agilent Technologies, Inc.). A total of 3 µg DNA per sample was utilized for WES using SureSelectXT Human All Exon V6 kits (cat. no. 5190-8864; Agilent Technologies, Inc.) according to the manufacturer's protocol.
The concentration and quality of the DNA libraries were assessed with a Qubit 3.0 Fluorometer (Thermo Fisher Scientific, Inc.) and the 2200 TapeStation system (Agilent Technologies, Inc.). Using a loading concentration of 2 nM, data were generated as 150-base paired-end reads on a HiSeq X Ten platform (Illumina, Inc.) using the HiSeq X HD Reagent V2.5 kit (cat. no. FC-501-2501; Illumina, Inc.). The sequence reads were aligned to the human genome reference sequence (GRCh37/hg19) with BWA-MEM software version 0.7.17 (12). Variant calling and annotation were performed with Genome Analysis Toolkit (GATK version 4.1.9.0) software and Annotate Variation (ANNOVAR version 20191024) software, respectively (13,14). Variants in which the minor allele frequency was >1% were filtered out using public databases, including 1000 Genomes (1000 g; internationalgenome.org), The Exome Aggregation Consortium (ExAC; gnomad.broadinstitute.org) and The Genome Aggregation Database (gnomAD; gnomad.broadinstitute.org). PolyPhen-2 (http://genetics.bwh.harvard.edu/pph2), Sorting Intolerant From Tolerant (SIFT; http://sift.jcvi.org) and MutationTaster (http://www.mutationtaster.org) were used for pathogenicity prediction. The variants were further interpreted and classified according to the American College of Medical Genetics and Genomics (ACMG) guidelines (15). Putative pathogenic variants were subsequently confirmed by Sanger sequencing, and a total of 500 healthy subjects were enrolled as controls. All GFAP variants were denoted according to RefSeq NM_002055.5. In addition, all previously reported mutations of the GFAP gene were summarized and labeled in the diagram of the GFAP protein domain structure according to the Human Gene Mutation Database (hgmd.cf.ac.uk/). MRI acquisition, preprocessing and statistical analysis. The two probands were scanned on an MR system (Ingenia, Philips 3T MR system; Philips Healthcare) with an 8-channel head coil array, using 15 normal individuals as the controls. The protocol included a three-dimensional high-resolution turbo field echo T1-weighted sequence for neuroanatomy (sagittal slice orientation; matrix = 256x256; repetition time = 7.2 msec; echo time = 3.3 msec; flip angle = 7˚; slice thickness = 1 mm; slice number = 192). Resting-state blood oxygen level-dependent MRI used a T2*-weighted echo-planar imaging sequence (240 functional images; sagittal slice orientation; 39 slices; slice thickness = 3.5 mm; matrix = 64x64; repetition time = 2,000 msec; echo time = 30 msec; flip angle = 90˚). The two patients also underwent T2-weighted fluid-attenuated inversion recovery imaging to obtain a more accurate image of the white matter lesions. The T1-weighted anatomical image was first segmented into grey matter, white matter and cerebrospinal fluid using the computational anatomy toolbox (CAT)12 (http://www.neuro.uni-jena.de) in Statistical Parametric Mapping (SPM)12 software (v.6685; http://fil.ion.ucl.ac.uk/spm) in a MATLAB 2014b environment (https://www.mathworks.com), with reference to tissue probabilistic maps in Montreal Neurological Institute (MNI) space. White matter hyperintensity was also estimated using a grey matter-white matter tissue probability map in CAT12. Voxel-based morphometry was performed between each patient and the normal controls to analyze grey and white matter, using unpaired Student's t-tests with total intracranial volume (TIV) as a covariate. False discovery rate (FDR) correction was used to correct for multiple comparisons at FDR-adj.P<0.05.
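As an illustration of the frequency-based filtering step described above, a hedged R sketch follows. The ANNOVAR-style column names (af_1000g, af_exac, af_gnomad, sift_score, polyphen2_score, mutationtaster) and the pathogenicity thresholds are assumptions for demonstration only, not the study's pipeline:

# 'variants' is assumed to be a data frame of annotated variant calls.
# Keep variants whose highest reported minor allele frequency is <= 1%.
rare <- subset(variants,
               pmax(af_1000g, af_exac, af_gnomad, na.rm = TRUE) <= 0.01)

# Retain candidates flagged by at least one in-silico predictor
# (illustrative cut-offs: SIFT < 0.05 damaging, PolyPhen-2 > 0.5 possibly
# damaging, MutationTaster "disease_causing").
candidates <- subset(rare, sift_score < 0.05 |
                           polyphen2_score > 0.5 |
                           mutationtaster == "disease_causing")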
With regard to the fMRI data, the first 40 fMRI images of each individual were discarded. The remaining 200 images were realigned to adjust for head motion, co-registered to the anatomical image and normalized to MNI space using a modified MATLAB toolbox [Data Processing & Analysis of Brain Imaging (DPABI); version 3.0] (16). The mean amplitude of low-frequency fluctuation (ALFF) was computed to represent the regional neural activity of the individuals, and regional homogeneity (ReHo) and degree centrality (DC) were computed to represent the quantity of functional connections of a region (17)(18)(19). (Displaced fragment of the Fig. 1 legend: red/orange, significantly larger volume of grey matter in normal controls than in the patient; yellow, significantly larger volume of white matter in normal controls than in the patient. Analysis of neural activity and functional connectivity of patients (G) P3433 and (H) P4288: red/orange, voxels with significantly higher degree centrality; blue, voxels with significantly higher amplitude of low-frequency fluctuation; pink, voxels with significantly higher regional homogeneity.) Individual participants' weighted DC values were obtained from all voxels in standard space using DPABI. Mean ALFF and voxel-wise centrality values were also compared between each patient and the controls using unpaired Student's t-tests at FDR-adj.P<0.05, with voxel-based morphometric grey matter volume as a covariate. Cell culture and transfection. All plasmids were purchased from GeneCreate. cDNA of wild-type (WT) or mutant (MUT) GFAP (NM_002055.5) was inserted into pcDNA3.1-green fluorescent protein (GFP) plasmids to express GFP-tagged fusion proteins. The 293T cell line was obtained from The Cell Bank of Type Culture Collection of The Chinese Academy of Sciences. 293T cells were grown in DMEM (Gibco; Thermo Fisher Scientific, Inc.) supplemented with 10% FBS (Gibco; Thermo Fisher Scientific, Inc.) and 1% penicillin-streptomycin (Invitrogen; Thermo Fisher Scientific, Inc.) at 37˚C in a humidified incubator with 5% CO2. Next, 2x10⁵ cells/well were seeded into 6-well plates or 5x10⁴ cells/well were seeded into 24-well plates for transfection. Then, 24 h after plating, 293T cells were transiently transfected with WT or MUT GFAP-GFP (c.214G>A and c.1235C>T) plasmids using Lipofectamine® 3000 transfection reagent (Invitrogen; Thermo Fisher Scientific, Inc.) at room temperature. A hot-spot mutation (c.715C>T, p.R239C) was used as the positive control (8). All experiments were independently repeated three times. Western blotting and immunofluorescence. A total of 48 h after transfection, 293T cells in 6-well plates were collected for protein extraction for western blot analysis. For lysosomal inhibitor treatment, bafilomycin A1 (BafA1; 5 nM; Merck KGaA) was added to 293T cells 24 h post-transfection, with DMSO as the vehicle control. The 293T cells were then incubated for 12 h before protein extraction (20). Protein concentrations were quantified using the Pierce BCA Protein Assay kit (cat. no. 23225; Thermo Fisher Scientific, Inc.). RIPA buffer (Beyotime Institute of Biotechnology) with protease inhibitors was used for protein extraction. Following centrifugation at 13,000 x g for 20 min at 4˚C, cell lysates were separated into two parts: the supernatant as the soluble fraction and the sediment as the insoluble fraction. The insoluble fraction was dissolved with denaturing protein solubilization reagents (Invent Biotechnologies, Inc.). A total of 20 µg protein was loaded per lane.
Proteins were separated via 10% SDS-PAGE and subsequently transferred to a PVDF membrane. The membrane was blocked with 5% BSA (Sangon Biotech Co., Ltd.) for 60 min at room temperature. Anti-GFP (1:2,500; cat. no. GFP-1010; Aves Labs, Inc.), anti-autophagy light chain 3 (LC3; 1:1,000; cat. no. 3868; Cell Signaling Technology, Inc.) and anti-lysosomal-associated membrane protein 1 (LAMP-1; 1:1,000; cat. no. 9091; Cell Signaling Technology, Inc.) antibodies were used to detect relative protein expression levels. GAPDH antibodies (1:1,000; cat. no. 2118; Cell Signaling Technology, Inc.) were used for sample loading and transfer normalization. PVDF membranes with transferred proteins were incubated with the primary antibodies at 4˚C overnight. Then, the blots were incubated with secondary HRP-conjugated antibodies (1:5,000; cat. no. A0208; Beyotime Institute of Biotechnology; cat. no. D110203; Sangon Biotech Co., Ltd.) for 60 min at room temperature and detected with SuperSignal™ Western Blot Enhancer (cat. no. 46641; Thermo Fisher Scientific, Inc.). Densitometric analysis of protein bands was performed using ImageJ software version 1.52p (National Institutes of Health). Bioinformatics analysis. The GSE116327 expression profile dataset (21), sequenced on GPL16791 (Illumina HiSeq 2500; Illumina, Inc.), was downloaded from the Gene Expression Omnibus (GEO) database (https://www.ncbi.nlm.nih.gov/geo/). A total of five AxD and three normal post-mortem human brain samples were selected (Table SI). All tissues were frontal cortex tissues. Rosenthal fiber accumulation could be detected in the tissues of patients with AxD, but not in those of normal controls (21). Differential gene expression analysis was performed using the DESeq2 package (22). Adjusted (adj.)P<0.05 and an absolute log2 fold-change >1.00 were set as the thresholds for differentially expressed genes (DEGs). The ggplot2 package was used to construct the volcano plot (23). The clusterProfiler package was used to perform Gene Set Enrichment Analysis (GSEA) based on the 'Biological Process' terms of the Gene Ontology (GO-BP) database (24)(25)(26). The list of the top 100 cell type-specific genes for microglia, neurons, oligodendrocytes, oligodendrocyte precursor cells, astrocytes and endothelial cells was obtained from the study of McKenzie et al (27) (Table SII). Based on this list, cell type-specific DEGs were extracted from the selected data. The circlize package was used to visualize the expression levels of the cell type-specific DEGs (28). Cell type-specific DEG lists were uploaded to Metascape (metascape.org/; November 2020) (29) for functional and pathway enrichment analysis. Based on the Metascape online tool, different genes were linked in the circos plot if both were associated with the same function or pathway term. To demonstrate the associations between these terms, a subset of significant representative terms from each of the 20 top-score clusters was selected (≤15 terms/cluster; ≤250 terms in total). The enriched terms were then converted into a network layout by Metascape (29). Statistical analysis. Results of the cellular experiments are presented as the mean ± SD and were statistically analyzed using GraphPad Prism version 8.0.1 software (GraphPad Software, Inc.). One-way ANOVA was used to compare the expression levels of GFAP protein in the different groups, followed by Tukey's post hoc test. A paired Student's t-test was used to analyze the relative GFAP levels of each group after the two different treatments (DMSO or BafA1). P<0.05 was considered to indicate a statistically significant difference.
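A minimal R sketch of the differential-expression and GSEA steps described above is given below, assuming a raw count matrix 'counts' (genes x samples) with five AxD columns followed by three control columns; the thresholds follow the text, but this is an illustrative reconstruction, not the study's code:

# Differential expression with DESeq2, then GSEA over GO Biological Process
library(DESeq2)
library(clusterProfiler)
library(org.Hs.eg.db)

coldata <- data.frame(condition = factor(c(rep("AxD", 5), rep("control", 3))))
dds <- DESeqDataSetFromMatrix(countData = counts, colData = coldata,
                              design = ~ condition)
dds <- DESeq(dds)
res <- results(dds, contrast = c("condition", "AxD", "control"))

# DEGs: adj. P < 0.05 and |log2 fold-change| > 1, as in the text
degs <- subset(as.data.frame(res), padj < 0.05 & abs(log2FoldChange) > 1)

# GSEA: rank all genes by log2 fold-change (NAs removed, decreasing order)
ranks <- na.omit(setNames(res$log2FoldChange, rownames(res)))
ranks <- sort(ranks, decreasing = TRUE)
gsea  <- gseGO(geneList = ranks, OrgDb = org.Hs.eg.db, ont = "BP",
               keyType = "SYMBOL", pvalueCutoff = 0.05)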
Results Clinical findings. Patient P3433, a 29-year-old woman with no family history of AxD, had been suffering from gait disturbance since the age of 10. No issues with developmental retardation or psychomotor abnormalities had been recorded. She had been diagnosed with scoliosis at 14 years old (Fig. S1A), for which she had received corrective surgery (Fig. S1B). At the age of 20, she was completely wheelchair-bound and began to suffer dysarthria, urinary dysfunction and aspiration pneumonia. Neurological examination revealed strabismus, movement disorders of the extraocular muscles and bilateral horizontal nystagmus. Muscle tension and strength of the lower extremities were notably decreased. The bilateral pathological reflexes and finger-to-nose tests were positive. Cognitive function was normal. Laboratory examinations yielded unremarkable findings. A documented de novo mutation was identified in the GFAP gene (c.214G>A), which was absent in the 1000 g, ExAC and gnomAD data and in the controls (30). This mutation was also not detected in her healthy parents; the family pedigree is presented in Fig. 1A. c.214G>A was predicted to be damaging by PolyPhen-2 (probability score, 0.945), damaging by SIFT (score, 0.001) and disease-causing by MutationTaster (probability score, 1.000; data not shown). According to the ACMG guidelines, this variant was predicted to be likely pathogenic. Patient P4288 was a 33-year-old man without a family history of AxD. He began having gait disorders and external rotation of the left foot at the age of 31. Imaging examinations revealed thoracic spinal cord thinning and tethered cord syndrome. Surgical decompression of the cauda equina nerve was performed and symptoms improved slightly following surgery. However, ~1 year later, the patient presented with poor coordination, spasticity and dysphagia, subsequently relying on a wheelchair. Physical examinations revealed speech disfluency and horizontal nystagmus. Muscle strength in the lower limbs was decreased, while muscle tension was significantly increased. Bilateral pathological reflexes were positive. Coordinated movement tests could not be completed. Laboratory investigations were almost normal. The c.1235C>T variant was detected in the patient, which was absent in the 1000 g, ExAC and gnomAD data, as well as in the controls. In addition, this mutation was not identified in the patient's parents (Fig. 1B). SIFT (score, 0.001), PolyPhen-2 (score, 0.519) and MutationTaster (probability score, 1.000) predicted the variant to be damaging, possibly damaging and disease-causing, respectively (data not shown). According to the ACMG guidelines, the c.1235C>T variant was classified as likely pathogenic. To investigate the association between mutation sites and protein domains, previously reported GFAP mutations were summarized according to the Human Gene Mutation Database. A total of 135 mutations in the GFAP gene (NM_002055.5) had been previously reported, including 121 missense, one nonsense, one splicing, two regulatory, three small deletion, five small insertion and three small indel mutations (Fig. 2). Neuroimaging findings. The brain MRI scans of the two probands displayed white matter lesions in the bilateral corona radiata, centrum semiovale and the regions surrounding the 4th ventricle (Fig. 1C and D).
Volume estimation based on a white matter tissue probability map in CAT12 suggested 17.33 ml of white matter hyperintensity in patient P3433, whose TIV was 1,218.15 ml (data not shown). Comparatively minor white matter lesions were observed in patient P4288. The average TIV of the subjects in the control group was 1,481.34 ml, with a standard deviation of 113.83 ml (data not shown). After adjusting for TIV, patient P3433 exhibited atrophy on grey matter analysis, mainly in the bilateral putamen, thalamus and cerebellum. Atrophic white matter was observed in the corona radiata, centrum semiovale, cerebellopontine angle and medulla of this patient, which was consistent with the leukodystrophy in AxD. Patient P4288 exhibited a similar pattern of atrophy in the white matter, while no significant grey matter differences were observed (Fig. 1E and F). Regarding neural activity, a higher ALFF in the cerebellar vermis, cerebellopontine angles, occipital and posterior parietal cortex was observed in patient P3433. An increased DC distribution was observed in both the frontal and posterior parietal cortex, overlapping with ReHo mainly in the cerebellum and posterior cortex. In P4288, higher ALFF, DC and ReHo overlapped in similar regions. An increased DC was also observed in the bilateral insula (Fig. 1G and H). Functional analysis. To explore the effects of the GFAP variants on protein level and localization, WT or MUT GFAP-GFP plasmids were transiently transfected into 293T cells. In the course of the experiment, it was noted that the levels of soluble GFAP in the c.1235C>T group were lower than those in the WT group. Correspondingly, the c.1235C>T group exhibited a relatively higher level in the insoluble fraction. No significant difference in the soluble or insoluble fractions was observed among the WT, c.214G>A and positive control (c.715C>T) groups (Fig. 3A). Immunostaining results showed that the WT group exhibited diffuse distribution of GFAP proteins throughout the cytoplasm with a few aggregates, while MUT GFAP proteins appeared as punctate aggregations in perinuclear areas (Fig. 3B). It was also examined whether the aberrant GFAP accumulation was associated with the autophagy-lysosome pathway. As shown by immunofluorescence, MUT GFAP was clearly co-expressed with LC3 and the lysosome (labeled by LAMP1; Fig. 3C and D). Next, the autophagic flux was assessed using BafA1. Increased levels of LC3-II were observed in the soluble MUT groups when they were treated with BafA1 (Fig. 3E). However, the WT group also exhibited mild autophagy, since GFAP-WT overexpression could partly contribute to aggregate formation. Of note, under BafA1 treatment, the soluble GFAP levels of the MUT groups exhibited an increasing trend, particularly in the c.1235C>T and c.214G>A groups (Fig. 3F). Bioinformatics analysis. To investigate potential pathway changes in the pathology of AxD, the RNA-sequencing data of patients with AxD and healthy controls were downloaded from the GEO database. Compared with the control brains, a total of 2,100 DEGs were detected in the AxD brains (Fig. 4A). The overall expression data were analyzed by GSEA based on the GO-BP gene sets. A total of 1,315 up- and 125 downregulated 'Biological Process' pathways were identified in AxD samples compared with control samples (adj.P<0.05). The top five up- and downregulated gene sets are shown in Fig.
4B and revealed the upregulation of 'adaptive immune response', 'adaptive immune response based on somatic recombination of immune receptors built from immunoglobulin superfamily domains', 'regulation of T cell proliferation', 'T cell activation involved in immune response' and 'T cell proliferation', and the downregulation of 'regulation of calcium ion-dependent exocytosis', 'regulation of neuronal synaptic plasticity', 'regulation of neurotransmitter secretion', 'regulation of synaptic vesicle exocytosis' and 'regulation of synaptic vesicle transport' in AxD. Furthermore, a set of brain cell consensus signatures was used to screen for cell-specific changes in the AxD brain transcripts (27). Based on the cell-specific gene lists, the expression levels of significant DEGs are shown in Fig. 4C. The overlap of functional terms among specific cell types is shown in a circos plot (Fig. 4D). A subset of the representative enriched terms from each of the top 20 clusters, including 'myeloid leukocyte activation', 'lymphocyte activation', 'phagocytosis' and 'signaling by interleukins', was converted into a network layout (Fig. 4E). Most terms of this network were associated with inflammatory-immune responses and formed closely connected functional networks. The details are shown in Table SIII. In addition, it was found that microglia may play a crucial role in the development of inflammatory processes, since the upregulated genes in microglia were mainly associated with 'leukocyte activation involved in immune response', 'leukocyte migration' and 'lymphocyte activation', as well as the 'regulation of cytokine production' (Figs. 4F and S2A). Considering that astrocytes are mainly involved in AxD pathogenesis (31), astrocyte gene expression changes were compared between AxD states and healthy states. GO enrichment results for astrocytes showed that the GRM3 gene, the only downregulated cell-specific gene in astrocytes, was involved in 'synaptic signaling' (GO:0099536; data not shown). The upregulated genes in astrocytes were mainly associated with the 'regulation of protein catabolic process', 'spinal cord injury' and 'myeloid leukocyte activation' (Fig. S2B). Discussion In the present study, two cases of bulbospinal form AxD due to de novo GFAP mutations were described. It has been reported that 98% of patients with a clinical diagnosis of AxD carry a variant of GFAP, while the cause of AxD in the remaining 2% of patients remains unknown (32). GFAPα is the predominant isoform; this 432-amino-acid protein accounts for 90-95% of the total GFAP protein in the human central nervous system (1,33). The other GFAP isoforms, such as GFAPβ, δ and κ, derive from alternative RNA start sites (4). To date, GFAPα has been the subject of most published studies (4,33). The GFAPα protein comprises a central α-helical rod domain flanked by the non-helical N-terminal head and C-terminal tail domains, which are important for assembly into the cellular IF (diameter, 10 nm) (4). The rod domain is divided into four α-helical segments (1A and 1B, and 2A and 2B) and exhibits higher conservation (8). Pathogenic mutations are scattered all over the GFAP protein domains, but are more abundant in the 1A and 2B segments of the rod domain. However, the clinical severity of AxD varies markedly and the genotype-phenotype correlation is complicated.
The variants affecting the hot-spot amino acids R79, R88, R239 and R416 account for >50% of the mutations identified in patients with AxD, and R79, R88 and R239 mutations are common in the infantile and juvenile forms of AxD (8,30). In contrast to these typical relations, the phenotype correlations of numerous other mutations are poorly understood. There exists a variety of clinical presentations in AxD, even among individuals carrying the same mutation. For example, the R416W variant can be found in all three forms of AxD (8). In addition, it has been found that individuals carrying the same mutation, such as D78E, S247P, L331P and D417A, show clinical variability, with mixed infantile-adult or juvenile-adult manifestations (1,34,35). Patient P3433 with the c.214G>A mutation in the present study exhibited symptoms similar to those of the adult-onset form, but at a juvenile onset age, which may be linked to the fact that the variant is located near R79 and D78. Patient P4288, carrying the c.1235C>T mutation in the C-terminal domain, exhibited typical adult-onset symptoms, which rapidly progressed to severe manifestations. Compared with R416 mutations, these findings were consistent with previous reports that mutations in the tail domain can have varied clinical courses and severities (8,36). The cause of these variations remains unclear. Genetic modifiers or environmental impactors may affect clinical phenotypes (1). A novel analysis method based on fMRI data was used in the present study to explore the atrophic pattern and spontaneous brain functional network of AxD. A similar pattern of white matter atrophy was found in the two patients, with involvement of the medulla and periventricular regions. The subventricular region has been reported to be the most vulnerable to the pathogenesis of AxD (37). In total, ~1/3 of patients with AxD display abnormal signals in the periventricular rim, which may be associated with the abnormal aggregation of Rosenthal fibers in subependymal regions (38,39). Of note, in the current study, patient P3433, who had major white matter damage, also presented grey matter atrophy, mainly in the bilateral putamen and thalamus, which is reported here for the first time in AxD. Grey matter volume loss may be linked to long-term disability (40,41), which could explain the grey matter atrophy in patient P3433. Furthermore, several mechanisms may underlie grey matter damage, including iron deposition, mitochondrial failure, white matter lesion-induced retrograde degeneration and meningeal inflammation (42,43). To the best of our knowledge, the present study was the first to evaluate the neural activities and regional connections in these patients through three different types of data-driven analysis: ReHo, DC and ALFF. The results showed increased ReHo, DC and ALFF overlapping in the cerebellum and posterior parietal cortex, indicating a higher amount of neural communication among these regions than in controls. The cerebellum, as part of certain large-scale networks, participates in communication with association areas, such as the frontal lobe and posterior parietal cortex (44). Severe atrophy of the white or grey matter could result in the overload and collapse of brain networks (40). Increased connectivity in these regions may be a type of compensatory mechanism or reorganization of the brain network in this disease. Considering the very small sample size of AxD in the present study, more clinical studies are required to reach correlational conclusions and explore the underlying mechanisms.
The best-known pathology in AxD is the accumulation of mutant GFAP (31). It is noteworthy that the solubility of the GFAP variant c.1235C>T was significantly decreased in the present study, similar to the behavior of variants c.1178G>T and c.1246C>T (45,46). These mutations are located in the tail domain, which is highly conserved and important for stabilizing filament-filament interactions (47). Filament disorganization may enhance the stability of the assembled protein, which could result in increased resistance to salt extraction and a decline in the solubility of GFAP (48). Further studies are needed to examine the detailed mechanism by which the tail domain facilitates the assembly of GFAP. Furthermore, the overlap of LC3 with abnormal GFAP accumulation was clearly observed in the present study. Aggregate-prone proteins not successfully corrected by chaperones are generally ubiquitylated and subsequently recognized by protein-degrading pathways, such as the ubiquitin-proteasome system and the autophagy-lysosomal pathway (49). It has been demonstrated that mutant GFAP strongly inhibits proteasome activity and leads to decreased protein turnover rates (50,51). In the present study, the degradation of GFAP aggregates was accompanied by LC3-II upregulation, suggesting that the autophagy pathway may act as a compensatory mechanism for degrading aggregates in AxD (52). Nevertheless, it remains to be explored whether any other potential pathways are associated with GFAP degradation. Collectively, these findings verified the disease-causing nature of the variants studied herein and supported the view that GFAP mutations can be distinguished by mutant aggregates and the upregulation of autophagy. To further explore the potential pathogenic mechanisms of AxD, transcriptional alterations in AxD brains were investigated. Bioinformatics analysis of gene expression profiles revealed the involvement of inflammatory immune-related reactions in AxD. It has been demonstrated that AxD astrocytes sustain a state of cellular stress caused by abnormal aggregates and act as origins of pathology (31). Microglial changes may directly result from chemokines released by activated astrocytes, while damage-associated molecular patterns, such as the small heat shock proteins that markedly accumulate in AxD astrocytes, could also play a role in microglial alterations (2,53). It is possible that the inflammatory responses are due not only to astrocyte stress, but also to the reactions of dysfunctional astrocytes to external stimulation from other cells, particularly activated microglia (31). Consistent with the present findings, dysfunctional astrocytes are less able to maintain the ion transport, synaptic transmission and neurotransmitter homeostasis required for normal cell-cell communication (31), thereby playing an important role in inflammation alongside microglia. In AxD, astrocyte-derived molecules also inhibit oligodendrocyte progenitor cell function and myelination (21). These disruptions of brain homeostasis, in turn, influence astrocyte phenotypes and contribute to inflammation, creating a vicious circle. However, the nature of these interactions and their consequences are unclear, and future studies are required to provide novel insights into the mechanistic investigation of AxD. Although AxD has not been acknowledged as an inflammatory disease, several studies have revealed a marked inflammatory environment in both mice and patients with AxD (21,54,55).
Mice transgenic for WT human GFAP (GFAP Tg) and heterozygous R236H knock-in lines crossed with the GFAP Tg lines (GFAP Tg/R236H +/-) exhibit Rosenthal fibers, particularly in the hippocampus, corpus callosum, olfactory bulbs, subpial tissues and periventricular regions, more closely resembling adult-onset AxD than infantile AxD (56,57). Studies have reported clearly upregulated inflammatory processes in the hippocampus and spinal cord of GFAP Tg/R236H +/- mice, and in the olfactory bulb of GFAP Tg mice (54,58). Furthermore, activated inflammatory responses in the brainstem and spinal cord have also been reported in patients with the infantile and juvenile forms (54). Accordingly, inflammation may be associated with all three forms of the disease. We hypothesized that inflammatory responses may also occur in these central nervous system tissues in patients with the bulbospinal form, particularly in the brainstem and spinal cord, as these regions are particularly affected in these patients. However, no data from the brainstem or cerebellar regions of patients with the bulbospinal form were available to be analyzed, and the transcriptional data in the present study were derived from the frontal lobe cortex of patients with the infantile form. Future research should focus on the specific brain regions of the different subtypes to obtain more detailed findings. In conclusion, two de novo variants of GFAP (c.214G>A and c.1235C>T) were identified in patients with AxD from unrelated families. The functional analysis provided essential evidence revealing the pathogenicity of the identified variants. Increased brain functional connectivity in the cerebellum and posterior parietal cortex was observed in the two probands, and grey matter atrophy in the patient with the more severe white matter damage. It was concluded that these changes might be a type of compensatory mechanism or reorganization of the collapsed brain network in AxD. Bioinformatics analysis further indicated that inflammatory immune-related responses play a critical role in AxD. These findings not only broaden the clinical and genetic spectra of AxD, but also provide an important basis for the study of its pathogenic mechanism.
Exponential energy harvesting through repetitive reconfigurations of a system of capacitors In conventional energy harvesting systems, energy can be extracted from a fixed-level source at a constant rate at best. The resulting growth of harvested energy is bound by a linear function. Here we show that exponential energy harvesting can be achieved in a system of reconfigurable energy storage elements. The exponential extraction results from the positive feedback of the system potential energy due to repetitive system reconfigurations. The concept is studied theoretically and validated with results from systems of droplet capacitors. A device with three 300 μL mercury drops can generate an exponentially growing voltage that reaches 168 V within a few cycles of a low-level and low-frequency mechanical excitation. The same device with water drops can generate a similarly growing voltage that reaches 56 V. This concept holds potential in DC power generation and may be applied in other energy domains. In a world in constant need of electricity, energy harvesting techniques from the ambient environment can provide in-situ power generation. The authors demonstrate and design a system to efficiently extract energy from an external source at an exponentially increasing rate. Rapid technological advancements have made the world inextricably dependent on energy. Extensive efforts have been devoted to improving energy-related science and technology, ranging from the development of advanced storage devices [1][2][3][4] and effective harvesting schemes [5][6][7][8][9][10] to the establishment of modernized policies and regulations 11,12 . Of particular interest to this study is the energy harvesting methodology, which dates back to ancient times, as demonstrated by antique watermills that utilized hydropower to sustain the functions of the mills. Because electricity has become the primary form of power supply in the modern world, energy harvesting often refers to electricity generation. More recently, the need for reliable, in situ power generation in distributed and autonomous systems has spawned tremendous research in harvesting energy from the ambient environment [13][14][15][16][17][18] . In principle, electricity generation involves cross-domain energy transfer. Table 1 shows a list of conventional energy harvesting technologies. Except in the photovoltaics-based technologies, the energy that can be extracted and converted to electricity is determined by certain macroscopic variations of the states of an energy source, such as mechanical oscillations in electromechanical technologies, temperature differences in thermoelectric technologies, and concentration gradients in electrochemical technologies. To sustain a continuous conversion process, energy from an environmental source must be coupled continuously into a harvesting system, resulting in a repetitive response from the harvester, e.g., the vibrations of a cantilever beam in an electromechanical energy harvester or the rotations of turbine blades on a windmill. In these conventional technologies, the energy coupled into the harvester in a cycle is converted to electricity and then exits the harvesting system-either delivered to storage or consumed by electric loads. Removal of the harvested energy from the system is necessary because the harvester needs to return to its original state to continue with the next harvesting cycle. Therefore, the harvested energy does not assist with the harvesting process, i.e. no energy feedback is established.
An energy source with a fixed level creates a fixed response for the harvester, leading to a constant energy extraction rate at best. The growth of the harvested energy is bound by a linear function [19][20][21][22] . With low-level ambient sources, the extraction rates of conventional methods usually become too low to be of practical use because the generated electricity is not sufficient to satisfy the conditioning requirements for storage or consumption. In this study, a method is developed so that the normally destabilizing nature of positive feedback is utilized to extract energy from an external source at an exponentially increasing rate. It is shown that in a system of reconfigurable energy storage elements, a positive feedback mechanism can be created through an appropriate, repetitive reconfiguration process of the system. The external energy source, which enables the reconfiguration, is both harvested exponentially and stored without rectification in the system. Because of the exponentially increasing rate of energy extraction, this method is particularly effective for distributed devices to scavenge energy from low-level ambient sources, i.e. the local environment, thus enabling self-powered operation. If applied to systems with elements of high energy and power densities, this method may become a viable way of large-scale power generation. Results Concept. Consider a reconfigurable system of interconnected energy storage elements, in which environmental energy is harvested through positive work done on the system and then stored in the system as potential energy. Figure 1a schematically shows the exponential growth of the harvested energy when the system is repetitively switched between two configurations and positive external work is performed only in one configuration. Assume without loss of generality that an energy harvesting cycle starts from the equilibrium state of configuration 1, energy is harvested in configuration 2, and the cycle completes when the system is switched from configuration 2 back to the equilibrium state of configuration 1. The system energies U^(1) and U^(2), i.e. the harvested energy, evolve as U^(w)(i + 1) = γ^2_{w,i} U^(w)(i), where w = 1, 2 indicates the configuration, γ^2_{1,i} = η_{12,i} Γ_i η_{21,i} and γ^2_{2,i} = Γ_i η_{21,i} η_{12,i+1}. The gain on the system energy due to external work is represented by Γ_i. Because a system cannot move to an equilibrium state of a higher energy level without positive work from an external source, η_{12,i}, η_{12,i+1}, η_{21,i} ≤ 1, representing the energy loss during reconfiguration. The system energy will grow exponentially if γ_{1,i}, γ_{2,i} > 1, i.e. Γ_i > max{(1/η_{12,i} × 1/η_{21,i}), (1/η_{12,i+1} × 1/η_{21,i})}, which implies that the positive external work is sufficient to compensate the loss.
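To make this bookkeeping concrete, the recursion can be iterated numerically. The following minimal Python sketch (illustrative only; the efficiency and gain values are arbitrary choices, not measured device parameters) shows that the system energy grows geometrically whenever the per-cycle work gain outweighs the combined reconfiguration losses.

```python
# Minimal sketch of the per-cycle energy feedback U(i+1) = eta12 * Gamma * eta21 * U(i).
# The values below are arbitrary illustrative choices, not device parameters.
eta12, eta21 = 0.9, 0.9   # reconfiguration efficiencies (<= 1)
Gamma = 1.5               # energy gain from positive external work in Config. 2

U = 1.0                   # initial system energy, arbitrary units
for i in range(10):
    U *= eta12 * Gamma * eta21   # one full cycle: Config. 1 -> 2 (work done) -> 1
    print(f"cycle {i + 1}: U = {U:.3f}")

# Growth requires Gamma > 1/(eta12 * eta21) = 1.235; here the energy grows by a
# factor of gamma^2 = 0.9 * 1.5 * 0.9 = 1.215 per cycle.
```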
Consider the two-configuration system of n + 1 capacitors shown in Fig. 1b. (Fig. 1: Schematic diagrams of the proposed concept. a Exponential growth of energy when a system is repetitively switched between two configurations. b A reconfigurable system of variable capacitors.) The two possible configurations are: Config. 1, in which n capacitors (sinks) with capacitances C_1, C_2, …, C_n are connected in series and then in parallel to the source capacitor (C_0), and Config. 2, in which all capacitors are connected in parallel with capacitances changed to C′_0, C′_1, C′_2, …, C′_n. When the system is switched repetitively between the two configurations and positive work of the external source is done in Config. 2 to change the capacitances while keeping the same amount of electrical charge carried by individual capacitors, the total summation of the charge in the system follows Q(i + 1) = γQ(i) (Eq. (1)), where r_l = C′_l/C_l, l = 0, 1, 2, …, n, C′_eqv = ∑_{k=0}^{n} C′_k and s = C_0/C_sr, in which C_sr = 1/∑_{k=1}^{n} (1/C_k). When n > 1 and r_0 > ∑_{k=1}^{n} r_k, so that γ > 1, the total amount of charge grows exponentially. The corresponding electrostatic energy stored in the system grows exponentially with a base of γ^2. When C_1 = C_2 = … = C_n = C and C′_1 = C′_2 = … = C′_n = C′, the base reduces to γ = 1 + (n − 1)(αβ − n)/[(n + α)(n + β)] (Eq. (2)), where α = C′_0/C′ and β = C/C_0. Therefore, any arbitrary amount of initial charge in the system will start an exponential growth of charge if γ > 1. A generalized result can be obtained for any reconfigurable system composed of one-port, two-terminal energy storage elements with generalized across and through variables. The across variable of an element is a monotonic, single-valued function of the generalized through variable, which describes the constitutive law of the element 23 . One element is referred to as the source element and all others as sinks. One of the configurations is defined as the duplicative configuration, in which the change of the through variable of every sink is the negation of that of the source element. The other is defined as the distributive configuration, in which the total amount of through variables in the system is conserved. According to the principle of minimum potential energy, the summation of the across variables of the sinks at equilibrium in the duplicative state is equivalent to the across variable of the source element, whereas the across variable of every sink at equilibrium in the distributive state is that of the source element. Assume that the system is repetitively switched between the two configurations. Denote the total amount of the through variables as Q^(1)(i) and Q^(2)(i) for the duplicative and the distributive states in the ith cycle, respectively; ΔQ^(12)(i) and ΔQ^(21)(i) represent the changes of the total amount of through variables corresponding to the transition from the duplicative state to the distributive state and vice versa. The relationships of Eq. (3) are then obtained. When n ≥ 1 and ζ^(12)(i), ζ^(21)(i) > 0 for all cycles, γ_1(i), γ_2(i) > 1, leading to the exponential growth of not only the through variables, but ultimately the across variables and the harvested energy. Note that the same result applies for systems in which the roles of the across and the through variables are switched. Prototype device. The results from generators fabricated with droplet-based variable capacitors are presented in this section. The simplest generator utilizes one source capacitor and two sink capacitors. A 3D rendered model of a device is shown in Fig. 2a. A commercial ceramic capacitor is used here as the source capacitor C_0 for simplicity. An equivalent circuit for the generator is shown in Fig. 2b. The working principle of the device is illustrated in Fig. 2c-f. A typical sink capacitor can be fabricated on a doped silicon wafer, one side of which is covered by a layer of silicon dioxide. An amorphous fluoropolymer, CYTOP, is applied to the silicon dioxide such that the resulting hydrophobic surface contains two regions of equal area but different thicknesses. When a free-standing conductive liquid drop is placed on the surface, a variable capacitor is formed. The drop and the doped silicon substrate function as the electrodes of the capacitor.
The capacitance will change if the drop moves across the junction because of the thickness variation. More specifically, C > C′, where C and C′ represent the capacitances associated with the thinner and the thicker sides, respectively. Metal contacts that do not chemically interact with the drop are used at both sides as passive switches to facilitate the connectivity required for the reconfigurations. When both drops touch the metal contacts on the thinner side of the CYTOP coating, the device enters the duplicative state (Fig. 2c), which corresponds to closing SW1 while keeping SW2 and SW3 open in the equivalent circuit. Charge flows from the source capacitor to the sinks. The drops then move to the thicker side so that the device enters the distributive state (Fig. 2d), which corresponds to opening SW1 while closing SW2 and SW3. Charge then flows back to the source capacitor because of the reduction in sink capacitances. If the capacitance variation satisfies αβ = C/C′ > 2 (Eq. (2)), the subsequent motions of the drops back to the thinner side (Fig. 2e) and then to the thicker side (Fig. 2f) will create a geometric growth of the total charge in the system. More sink capacitors can be used to increase the base of the exponential growth. Additional liquid drops have been used in this study as passive switches to facilitate the connectivity required by 3 or more sink capacitors (Supplementary Fig. 1). Because the length scale of the contact area is much larger than that of the thicknesses of the dielectric materials, the droplet capacitor can be reasonably modeled as a parallel-plate capacitor. The effect of the thickness difference of the CYTOP layers on the electric output is shown in Fig. 3. The results correspond to devices with fixed source capacitors, i.e. C_0 = C′_0. Both the silicon dioxide layer and the thinner layer of CYTOP have been assumed to be 200 nm thick. The condition αβ > 2 requires a minimal value of 2.57 for the ratio between the thicknesses of the CYTOP layers. It is seen that the base γ increases monotonically and converges as the thickness ratio increases. A smaller capacitance of the source capacitor (i.e. a larger β) leads to a larger limit value of γ with a lower converging rate. Therefore, it may not be practical to achieve the limit values corresponding to very low source capacitances. For example, if the capacitance of the source capacitor is one-tenth of that of the droplet capacitor when the drop is on the thinner side, i.e. β = 10, a thickness ratio larger than 5000 is required to achieve the limit γ = 1.833. The size of the liquid drops also affects the amount of electrical energy harvested. A larger liquid drop creates a larger contact area that leads to a higher capacitance. However, there will be no size effect on the base of the exponential growth of the charge in the system or on the voltages across the capacitors if the ratios between the capacitances (α and β) are fixed. While it may be favorable to use larger drops because more charge will be collected due to higher capacitances, the largest size of the drops is limited by the physical constraints of the device and ultimately by the surface energy of the liquid.
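The parallel-plate model can be sketched directly. In the following minimal Python sketch the droplet capacitor is treated as two dielectric layers (SiO2 and CYTOP) in series under the drop's contact area; the relative permittivities and the contact area are nominal assumed values for illustration, not the authors' calibrated parameters.

```python
# Sketch (assumed nominal values, not the authors' calibration): the droplet
# capacitor as a parallel-plate stack of SiO2 and CYTOP in series, so
# C = eps0 * A / (d_SiO2/eps_SiO2 + d_CYTOP/eps_CYTOP).
EPS0 = 8.854e-12                 # vacuum permittivity, F/m
EPS_SIO2, EPS_CYTOP = 3.9, 2.1   # assumed relative permittivities

def stack_capacitance(area_m2, d_sio2, d_cytop):
    """Series combination of the two dielectric layers under the contact area."""
    return EPS0 * area_m2 / (d_sio2 / EPS_SIO2 + d_cytop / EPS_CYTOP)

A = 25e-6                                        # assumed ~25 mm^2 contact area
C_thin = stack_capacitance(A, 200e-9, 200e-9)    # drop on the 200 nm CYTOP side
C_thick = stack_capacitance(A, 200e-9, 3e-6)     # drop on the 3 um CYTOP side
print(f"C = {C_thin * 1e9:.2f} nF, C' = {C_thick * 1e9:.2f} nF, "
      f"C/C' = {C_thin / C_thick:.1f}")
# With these assumptions C/C' is about 10, consistent with the reported 10.15.
```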
It is noted that the equivalent circuits shown are RC circuits in reality. However, the effect of resistances can be ignored when the frequency of switching is low compared to the time constants of the RC circuits, so that the equilibrium state of each configuration is established before subsequent switching. Therefore, the conductivity of the liquid drops does not affect the base of the exponential growth of the electric output in low-frequency applications, which implies that the mechanism depends only on the position of the drops; thus, the speed of the drops does not affect the electric output. Contact electrification and charge trapping. If a liquid drop is brought into contact with an initially uncharged CYTOP surface and then separated for the first time, the surface will be electrified due to contact electrification. Subsequent oscillatory motions of the drop on the surface will cause the surface charge to increase to a saturated value, which will be different for the two sides because of the molecular-scale fluctuations of surface properties [24][25][26][27][28][29] . The two sides will consequently behave as electrets possessing different amounts of negative surface charge, as illustrated in Fig. 2. Electrostatic induction will then become the dominating mechanism that determines the charge distribution on the liquid drop 24,28,29 . The effect of electrostatic induction can be modeled by a fixed amount of charge, Q_c, which is transferred to or removed from the drop when it moves from one side to the other (Supplementary Fig. 2). Because of variations of surface properties, this charge is in general different for droplet capacitors fabricated with an identical procedure. The charge can be estimated 30 as Q_c(i) = sgn(q(i))[σ_tk(i) − σ_tn(i)]A(i), where q(i) represents the charge carried by the ith drop, σ_tn(i) and σ_tk(i) the surface charge densities of the thinner and the thicker CYTOP sides for the ith capacitor, respectively, and A(i) represents the contact area. The contacts between a charged liquid drop and the CYTOP surface will also induce charge trapping at the surface, which limits the amount of charge that can move with the drop. However, the fact that the trapped charge can be annihilated by grounding the drop suggests that trapping occurs on the surface rather than in the insulator [31][32][33] . Because of the internal connectivity of the system, the drop is charged when it is on the thinner side and almost completely discharged when moving to the thicker side. Therefore, charge trapping is negligible for the thicker side. The trapping on the thinner side can be accurately modeled with a parasite capacitor, C_p, connected in parallel to the effective capacitor (C̃) associated with the thinner side. In this study, the parasite capacitance was determined experimentally (Supplementary Fig. 3). The total capacitance of an individual capacitor in the duplicative state is then C = C̃ + C_p. Therefore, the summations of the charge of all capacitors in the ith cycle for the duplicative and the distributive states are different; they follow Eq. (4), where w = 1, 2 represents the duplicative and the distributive state, respectively. The effect of charge trapping on the growth of the total charge is represented by γ_p = (n − 1)δ/[(n + α)(n + β)], so that the effective growth base becomes γ̄ = γ − γ_p, where δ = C_p/C′, Q̄_c = (1/n)∑_{i=1}^{n} Q_c(i), and ξ^(1) = [nα(n − 1)/(n + α)]·[(1 + β)/(n + β)] for the duplicative state and ξ^(2) = n(n − 1)/(n + β) for the distributive state.
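The switching cycle described above can also be checked with a direct circuit-level simulation. The following minimal Python sketch (not the authors' code) implements the idealized, lossless cycle for a fixed source capacitor and n identical sinks, ignoring contact electrification and charge trapping, and compares the resulting per-cycle charge growth with the closed-form base quoted for Eq. (2).

```python
# Minimal sketch (not the authors' code): idealized, lossless simulation of the
# fixed-source charge-pumping cycle for n identical sink capacitors switching
# between C (thin side, Config. 1) and Cq (thick side, Config. 2).
def growth_base(n, C, Cq, C0, cycles=30):
    Q0, q = 1.0, 0.0            # arbitrary initial source charge; empty sinks
    totals = []
    for _ in range(cycles):
        # Config. 1 (duplicative): sinks in series, the string in parallel with
        # C0. Each sink gains x and the source loses x; equilibrium when the
        # string voltage n*(q + x)/C equals the source voltage (Q0 - x)/C0.
        x = (Q0 / C0 - n * q / C) / (n / C + 1.0 / C0)
        q, Q0 = q + x, Q0 - x
        # Drops move to the thick side at fixed per-capacitor charge (this is
        # where the external work enters), then Config. 2 (distributive): all
        # capacitors in parallel, total charge conserved.
        V = (Q0 + n * q) / (C0 + n * Cq)
        Q0, q = C0 * V, Cq * V
        totals.append(Q0 + n * q)
    return totals[-1] / totals[-2]      # per-cycle growth base gamma

C, Cq, C0 = 2.74e-9, 0.27e-9, 0.94e-9   # reported mercury-device values (F)
n = 2
alpha, beta = C0 / Cq, C / C0
closed_form = 1 + (n - 1) * (alpha * beta - n) / ((n + alpha) * (n + beta))
print(growth_base(n, C, Cq, C0), closed_form)   # the two values agree
```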
Performance of prototype generators. Figure 4 shows the results obtained from generators involving multiple 150 μL mercury drops. Commercial capacitors with fixed capacitances were used in different devices as the source capacitors. The devices were manually rocked at a frequency of approximately 0.25 Hz and the tilting angle was within ±5°, so that the drops were synchronously driven to touch the metal contacts. The growth of the voltages across the source capacitors is shown in Fig. 4a-c for cases corresponding to two, three, and four sink capacitors, respectively. Four devices were evaluated in every case, each with a different source capacitor. The experimental results agree very well with the theoretical predictions from Eq. (4). The ratio between the maximum and minimum capacitances of the sinks was kept unchanged in all cases considered, i.e. αβ = C/C′ = 10.15. Under this condition, α_opt = β_opt = 3.18 will lead to the maximum γ̄. For a 150 μL mercury drop, the maximum capacitance was measured to be approximately C = 2.74 nF, corresponding to an optimal source capacitor of C_0 = 0.86 nF. Therefore, out of the four source capacitors used, C_0 = 0.94 nF provided the largest γ̄. It is worth noting that for any device in which αβ is fixed, there exists an optimal number of sinks that will result in the maximum base of the exponential growth, γ̄. The optimal number of sinks was found theoretically (Eq. (4)) and verified experimentally to be n_opt = 3 (Fig. 4d).
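The existence of an optimal number of sinks can be seen from the lossless closed-form base alone. The short sketch below (using the reconstructed lossless form of Eq. (2), without the trapping correction) scans n while holding α = β = √10.15, the optimally matched condition stated above.

```python
# Sketch: scan the number of sinks n for the lossless growth base
# gamma(n) = 1 + (n - 1)(alpha*beta - n) / ((n + alpha)(n + beta)),
# holding alpha = beta = sqrt(10.15) as in the optimally matched device.
ab = 10.15
a = b = ab ** 0.5
for n in range(2, 7):
    g = 1 + (n - 1) * (ab - n) / ((n + a) * (n + b))
    print(n, round(g, 4))
# n = 3 yields the largest base, matching the reported n_opt = 3.
```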
The results obtained for devices with water drops, 300 μL each, are presented in Fig. 5. The ratio of the maximum and the minimum capacitances was kept at αβ = 7.82 for all relevant experiments. The voltages across the source capacitors are shown in Fig. 5a-c. The theoretical and experimental results showing the optimal number of the sink capacitors are presented in Fig. 5d. The optimal number of sink capacitors for this case was also three. In addition to plain water, deionized water with a conductivity of 0.055 μS cm^−1 and a 1 mol L^−1 (1 M) sodium chloride solution were also used in this study. Very close values of γ̄ were obtained for the three cases, indicating a negligible effect of the ion concentration on the base of the exponential growth. This is expected for low-frequency vibrations. The ion concentration, however, has been shown to influence the charge due to electrostatic induction 28,34 . The time histories of the output voltages were thus different for the three cases due to the different charge (Q_c) resulting from electrostatic induction (Supplementary Fig. 4). The method has also been applied to generators fabricated with parallel-plate variable capacitors that use air as the dielectric. Passive switching has been realized by metal-metal contacts. While charge trapping is negligible when air is used as the dielectric, the metal-metal contacts play a similar role to that of the induction process due to different work functions 35 . Three two-sink devices have been fabricated using three different source capacitors. The theoretical results obtained from Eq. (4) agree excellently with those from experiments (Supplementary Fig. 5). Finally, generators made from three identical droplet-based variable capacitors (one source and two sinks) have been used to drive commercial light-emitting diodes (LEDs) under low-frequency mechanical vibrations. A schematic diagram of the three-drop generators is shown in Fig. 6a. Vibrations of 2.5 Hz were used in the experiment to simulate the vibrations induced by human walking. After a few initial cycles of energy accumulation, the voltage output of the device with three 300 μL mercury drops reached 168 V when the system was in the distributive state. The energy extracted per cycle was sufficient for illuminating 60 green LEDs connected in series (Fig. 6b, Supplementary Movie 1). Under the same condition, the device with water drops of the same size could generate 56 V, sufficient to illuminate 20 green LEDs connected in series (Fig. 6c, Supplementary Movie 2). Because both the source and sink capacitors had the same position-dependent capacitances, the resulting base of the exponential growth (γ̄) was much higher than those of the cases shown in Figs. 4 and 5, in which the source capacitors had fixed capacitances. More specifically, in this case, γ̄ was 1.674 for mercury drops and 1.669 for water drops. It is noted that αβ = 10.15^2 in this case, which is an order of magnitude higher than in the cases of a fixed source capacitor. The effects of charge trapping for mercury and water drops were considerably reduced because of the dramatically decreased γ_p. The actual resulting values of γ̄ for mercury and water drops were thus very close. While mercury and water are both liquids at room temperature, the charge carriers are different [36][37][38] . The voltages due to contact electrification were 4.69 and 2.72 V for mercury and water, respectively. Therefore, the voltage across the source capacitor of the device with mercury drops reached 168 V after 7 cycles, while it took the device with water drops 6 cycles to reach 56 V. Discussion It has been shown that if energy storage elements are used as the primary energy harvesting components of a system, appropriate reconfigurations of the system can create a positive feedback of the potential energy in the elements, leading to an exponential growth of the energy that is harvested as the elemental potential energy. The efficacy of this method has been demonstrated with droplet generators. Consider the device with a fixed source capacitor of 0.94 nF and three 150 μL mercury drops and the device with a fixed source capacitor of 1.26 nF and three 300 μL water drops. In the experiments conducted, the tilting angle of the wafer was within ±5°. The total available energy was calculated to be 83.2 μJ for the mercury drops and 15.4 μJ for the water drops by adding the two peak values of the potential energy of the drops in a cycle. It has been shown that devices with mercury drops and those with water drops can sustain continuous operation at 168 and 56 V, respectively. If operated at these voltages, the device with mercury drops can harvest energy at a rate of 10.2 μJ per cycle, corresponding to a harvesting efficiency of 12.2%, and the values for the device with three water drops will be 1.2 μJ per cycle and 7.9%. Because energy is harvested as electric potential energy in this method, the harvesting efficiency is independent of the electrical load to which the energy can be delivered. The efficiencies of the prototype devices fabricated in this study far exceed those of the droplet generators reported in the literature, which are on the order of 0.01% under the optimal condition 29 . The devices can be operated at a higher energy level with a higher efficiency. However, since energy is harvested only in the distributive state in this study, the efficiency is limited to 50% per cycle when the total potential energy in both states is considered. Although the experimental study has been limited to scavenging low-level vibration energy, this method can be applied at a larger scale.
If applied to high-capacitance devices 1,3,6,39-41 driven by abundant environmental sources [42][43][44] , this concept may lead to efficient, large-scale and possibly grid-level DC power supply systems. In this regard, it is envisioned that this report will stimulate the emergence of new research areas, e.g., supercapacitors with a wide range of adjustable capacitances. Because the concept of exponential energy harvesting is not domain specific, it may lead to new research in directional energy transfer systems in various energy domains. Methods Device fabrication. The capacitors were fabricated on 2-in doped silicon wafers (with resistivity of 1-10 Ω ⋅ cm), one capacitor per wafer. The doped silicon was used as the back electrode. A layer of 200 nm silicon dioxide was grown on one side of the wafer. CYTOP was spin-coated on the silicon dioxide to create a hydrophobic layer. Additional CYTOP was manually applied to the spin-coated layer to create an area with a thicker CYTOP layer. The thicknesses were 200 nm and 3 μm for devices using mercury drops and 400 nm and 4 μm for devices using water drops. For every capacitor in which a water drop of approximately 300 μL was used as the moving electrode, the maximum and the minimum capacitances were measured to be 3.52 and 0.45 nF, respectively. The parasite capacitance accounting for charge trapping was experimentally determined to be approximately 0.76 nF. For capacitors in which a mercury drop of 150 μL was used, the maximum and the minimum capacitances were 2.74 and 0.27 nF, respectively. The parasite capacitance was approximately 0.40 nF. Measurements were obtained using a Tektronix electrometer (6517B). Tap water was used to form the droplets used in the experiments. The ion concentration of the water was measured to be 220 p.p.m. The experiments were conducted in an ambient atmosphere of 1 atm, 25°C with a moisture content of 50-70%. The LEDs had a nominal forward voltage of 3.0 V. Derivation of Eq. (1). Assume without loss of generality that a cycle starts from the equilibrium state when the system is in Config. 1. For the ith cycle, the charge is distributed such that q^(1)_k = q^(1)(i), k = 1, 2, …, n, on the sinks and q^(1)_0(i) on the source, where the superscripts indicate the configurations. The total electrical potential energy of the system is U^(1)(i) = (1/2)[(q^(1)_0(i))^2/C_0 + ∑_{k=1}^{n} (q^(1)(i))^2/C_k], where Q(i) = q^(1)_0(i) + n q^(1)(i) is the total charge. If the external energy is coupled into the system parametrically, i.e. the capacitances are changed from C_k to C′_k, which leads to an increase in the system energy, the total system energy becomes Û^(2)(i) = Q(i)^2/(2C′_eqv), where C′_eqv = ∑_{l=0}^{n} C′_l. The charge is redistributed as q̂^(2)_k(i) = η̂^(2)_k Q(i) (8), with η̂^(2)_k = C′_k/C′_eqv. The system is then switched back to Config. 1 and the capacitances revert to the original values. The total charge after equilibrium is reached is the total charge for the start of the (i + 1)th cycle, and the charge is distributed according to the Config. 1 equilibrium, where r_l = C′_l/C_l, l = 0, 1, 2, …, n. Therefore, Q(i + 1) = Q(i) + (n − 1)ΔQ(i) = γQ(i). Derivation of Eq. (3). Assume without loss of generality that the ith cycle starts from the duplicative state after equilibrium is reached. For the kth element, the energy and the through variable are denoted as U^(1)_k(i) and q^(1)_k(i), respectively. The total system energy and the summation of all through variables are denoted as U^(1)(i) = ∑_{l=0}^{n} U^(1)_l(i) and Q^(1)(i) = ∑_{l=0}^{n} q^(1)_l(i), respectively. The cycle completes when equilibrium is established after the system is switched to the distributive state.
The energy and the through variable of the kth element for the distributive state are denoted as U^(2)_k(i) and q^(2)_k(i), respectively. The change of the total through variables in the transition from the duplicative state to the distributive state is denoted by ΔQ^(12)(i). The through variable of each element can be obtained as q^(2)_k(i) = η_k(i) Q^(2)(i). When the system is subsequently switched to the duplicative state, which is the start of the (i + 1)th cycle, a new Lagrangian is defined, whose stationarity conditions require λ_l(i + 1) + λ_0(i + 1) = 0. The total through variables in the (i + 1)th cycle are then obtained. Derivation of Eq. (4). Assume that the variable capacitors are identical, that the capacitances of the parasite capacitors are the same and that they are connected in parallel to the sink capacitors in Config. 1, so that the total capacitance of every sink capacitor is C = C̃ + C_p, where C̃ denotes the effective capacitance. It is further assumed that a cycle starts when the system reaches equilibrium in Config. 1. At the start of the ith cycle every drop carries the same amount of charge, q^(1)(i), and the sum of the charge on all capacitors is Q^(1)(i) = q^(1)_0(i) + n q^(1)(i). When the drops move to the other sides of the surface, and immediately before the connectivity is established such that the system is in Config. 2, due to contact electrification the total charge becomes Q^(2)(i) = Q^(1)(i) + ∑_{k=1}^{n} Q_c(k). After the capacitances change from C̃ to C′ and the capacitors are connected so that the system is in Config. 2, the charge is distributed as q̂^(2)_0(i) = [C_0/(C_0 + nC′)] Q^(2)(i) and q̂^(2)(i) = [C′/(C_0 + nC′)] Q^(2)(i). The system is then switched back to Config. 1 and the capacitances revert to the original values. At equilibrium, the summation of the total charge in the system becomes Q^(1)(i + 1), from which Eq. (4) follows, where α = C_0/C′ and β = C/C_0. Define δ = C_p/C′. The following relationship is obtained, in which ξ^(2) = n(n − 1)/(n + β) (27). Data availability. All data generated or analysed during this study are included in this published article and its Supplementary Information files.
The Impact on Businesses and Public Health Using Lock Down as a Tool against Covid-19 Pandemic in Italy: A Global Perspective Since the spread of Corona Virus Disease 19 (COVID-19), most Italian regions, on the indication of the central government, have embarked on a system of either total or partial lockdown and have used it as a tool for curbing the spread of COVID-19. This study examines whether lockdown can be of help, whether it is a sound public health policy, and whether it can bring massive and tremendous change to the health system and general economy of the Italian regions that have used it as an intervention against the spread of COVID-19. The research reviewed literature on world economies, with Google as the main search tool. Press conferences, editorial reviews from Italian newspapers, and publications of the Bank of Italy, the World Bank, the International Monetary Fund and the World Health Organization were also consulted. Interviews were conducted through phone calls and questions asked via email to some of Italy's leading epidemiologists and infectious disease specialists, carefully reading their scientific works and those of the scientists they cited. The research also draws on individual experience and observations of the COVID-19 pandemic in Italy and of the measures that policy makers have laid down to mitigate the global health crisis of COVID-19 and its effect on the general economy. A careful study and analysis of various countries (Germany, Spain, France, Italy) that embarked on either partial or complete lockdown shows plummeting inflation, declining gross domestic product, loss of capital for business groups, loss of jobs especially in the informal sectors, and negative growth due to disruption of the world economy through global value chains, an abrupt fall in commodity prices and fiscal revenues, and the enforcement of travel and social restrictions. The research found that a national lockdown is no cure, has never been a cure and is not a cure for any pandemic, previous or recent, from the Spanish flu and influenza to the ongoing COVID-19. The research also found that lockdown seems to be more a politically enforced measure than a public health policy and should not be the main weapon of Italian governments in fighting the COVID-19 pandemic, as demonstrated and proven by the Asian giants (Japan, Hong Kong, Singapore, South Korea, Taiwan), who never embarked on draconian lockdown methods and intensive restriction orders (Gordis, 2017; Edgerg, 2017). The research shows that it is time the public health systems of the Italian regions were strengthened with befitting budgets and human resource development, and that more public health educators were recruited to use the mass media, be it local radio stations, television stations or the internet, to educate the populace on the effects of pandemics on the country and on the precautions, preventive measures and safety steps to follow. The research identifies an urgent need for Italian political leaders to put in much effort and develop science and technology, especially in the fields of health and biomedical sciences.
More research laboratories with modern equipment and instruments should be set up in many regions of the country, coupled with continuous sponsorship to train many research scientists in the fields of biomedical science, biomedical engineering, biotechnology, molecular medicine and biochemistry, laboratory medicine, public health and epidemiology, infectious disease and so on; such capacity can be the solid ground on which Italy stands whenever there is an outbreak of disease. Furthermore, the M3T, that is, mass testing, tracing and treating, should be mandatory in this pandemic era, even once the figures flatten, until total control is achieved or herd immunity is acquired. In the M3T, M stands for mass, the first T for testing, the second T for tracing and the last T for treating or treatment. In addition, the research shows that the 2SQ, that is, social distancing, self-isolation and quarantine, is an indispensable tool in this pandemic season and should be strictly enforced to help in the management of COVID-19. In the 2SQ, the 2S stands for social distancing and self-isolation, while the Q stands for quarantine. Introduction The purpose of a lockdown is to stop people from moving between places; it can also mean putting data collection in place to record the movement of people from place to place in a particular location. Lockdowns can involve cancelling both domestic and international flights, closing borders, and closing shops, restaurants, schools, churches and so on, with the main aim of governments being to reduce the movement of people so as to stop the transmission of a disease. Almost all countries of the world are inadequately prepared to deal with the exponential spread of COVID-19, with its pseudo-recovery and recovery features, and Italy is no exception, with a weak health system, scanty resources, and economic and spatial inadequacies surrounding lockdown and the massive restriction orders from government; to most Italian regions, lockdown seems to be the major tool and intervention for stopping COVID-19. Basic Epidemiology Trend The number of COVID-19 cases in Italy has quickly risen to more than 200,000 and the death toll exceeds 27,000 (as of April 30th, 2020), which can be termed above the global average (except for the USA). This steadily increasing and high death rate in Italy, combined with a lower number of confirmed cases, suggests that Italy is carrying out COVID-19 testing at a very low rate, and this can be fatal and deadly if proper public health interventions are not put in place; to achieve best-practice public health interventions, a country needs capital or income, trained public health personnel, and infrastructure such as treatment centers, isolation centers and special hospitals for this purpose, with the necessary equipment and tools. The World Health Organization says that Italy has 6.1 nurses and midwives and 3.8 doctors for every 1,000 inhabitants. This shows how porous and weak the health system is. Since the spread began, many countries of the world have put in place measures to respond quickly to the pandemic, and despite all this Italy falls within the best practices of such responses (World Health Organization, 2020; Adhikari et al., 2020).
Intrinsic Socio-Psycho-Economic Dynamics Bothering the Italian Government in this Pandemic The inability to produce the basics needed to support life makes life unbearable, and it is an undeniable fact that many Italian regions have many deprived and vulnerable social groups that struggle to feed and care for their immediate and extended families even when all conditions are favorable. So in this pandemic era, with total and partial lockdowns and restriction orders on trade, industry and movement, life becomes dire and extra difficult for such groups; apart from the socio-economic impact, the adverse psychological and emotional effects are heavier, especially when everyone is at home and the little money saved as an emergency life fund or life insurance is being used up, leaving such people even weaker when exposed to life's eventualities such as sudden debilitating illness, road traffic accidents, flooding and the like. Research conducted in the UK by researchers at Imperial College estimated that, under the most optimistic circumstances, the coronavirus would kill 40,000 people in Italy, and such findings should certainly trouble Italian political leaders and the government (Fletcher et al., 1983; de Wit et al., 2016). Economic Woes of Italian Regions During this COVID-19 Pandemic The government of Italy has imposed restrictions on imports and exports, closed both international and local borders with neighboring and distant countries, and disrupted supply and procurement chains, and most industries and production companies have halted operations; such measures will certainly have a serious and dire effect on general inflation and gross domestic product, and Italy is no exception. According to the Italian government, up to 30,000 jobs in the formal and informal sectors in Italy could be lost because of COVID-19, and Italy had a high unemployment rate before the COVID-19 pandemic, making the situation worse for the Italian workforce now. A study released in April 2020 found that foreign direct investment (FDI), tourism receipts and remittance flows will suffer significant declines as the country tackles this pandemic. If this pandemic lasts 6 months or more, the tourism industry in Italy will suffer the worst, and Calabria, Sicily and Emilia Romagna are regions that depend heavily on tourism, especially revenues from tourist sites, hotels and the like. Research titled 'The impact of coronavirus on the Italian economy', conducted in early April 2020, shows that if the pandemic lasts 5 to 7 months there will be negative growth due to disruption of the world economy through global value chains, an abrupt fall in commodity prices and fiscal revenues, and the enforcement of travel and social restrictions. Further to the study, Italy has also stated that a 35% dip in exports and imports would be worth $270 billion, and Italy will require $80 billion from the European Union to fight the spread of the virus and fund medical treatment. Many Italian regions have taken bold steps to protect their people and economies, but the approach of the government of Italy during this COVID-19 pandemic surpasses most and is the most action-oriented.
The Italian government has been taking expert advice and opinions, collaborating with the World Health Organization, the country's ministry of health, the Italian health service and other agencies, and has urgently designed and implemented ambitious, well-informed policies to help the citizenry and the general economy, with steps such as suspending bank loan repayments for 6 months, tax exemptions for frontline health workers of the COVID-19 pandemic, life insurance for frontline health workers who may be harmed by COVID-19 in their line of service, and an increased pay raise and special incentive allowance for three months for some health workers starting in March 2020; if most Italian political leaders exhibit such a leadership style, then Italy can stand firm and tall in these hard times of the pandemic (Ministero della Salute, Dipartimento della qualità, direzione generale della programmazione sanitaria, dei livelli di assistenza e dei principi etici di sistema, 2018; Comite, 2018). How proactive are Italian political leaders and government towards science and technology Italian leaders should have learned effectively from the African Ebola epidemic in 2014, designed strong policies and set up budgets for emergencies, set up well-equipped research laboratory centers for infectious diseases in many parts of the country, employed more medical research scientists and created budgetary funding for such purposes. Had these been done, Italy would at this time have the human resources, testing capacity and the means of creating immediate interim methods to help deal with this pandemic. In Italy, governments tend to concentrate more on political goals and points than on science and technology, and this is the downfall of Italy and the deficit it exposes when natural disasters happen. All is not lost, and as the Italian regions and government suffer, most may, can and should learn from this COVID-19 pandemic and give more room for science and technology to prevail in order to develop Italy as a whole, and especially the regions of southern Italy, which are the poorest parts of the country, infected and nearly all affected by the pandemic. Italy, per se, is one of the luckiest European nations now, especially in the fight against the COVID-19 pandemic, since the current government has formed a supplementary team to support the citizenry through public education and the sharing and distribution of protective equipment (DPI), medical consumables, cash donations, food and so on to accredited facilities and deprived communities. Italy's help to deprived communities, especially now, is a trait worth emulating and a precedent for European and world leaders (Dawes, 2020). Global Health Dynamics and Italy Lifting its Internal Partial Lockdown Italy has opened her local and internal borders between cities, towns and suburbs for free and easy trade, access to health care, access to banks, access to bill payment and so on, while still restricting churches, mosques and schools, which are known places of close contact and crowding. Supermarkets are open, but with scrutiny to ensure buyers and sellers keep the approved spacing interval of 2 meters.
According to Owusu Nyarko et al., 2020, this is not new and can be effective in curbing the continuous spread and improving the local economy when people respect and adhere to the preventive guidelines of the World Health Organization, the ministry of health of Italy, the ministry of information of Italy and the Italian health services: staying at home unless an outing is urgent or an emergency, washing hands under running water with soap, applying alcohol-based hand sanitizers, and using a face mask when going out and even in the comfort of their homes, especially for those living in commercial houses, apartments and flats that share car parks, gymnasiums and so on. The research sees Italy's lifting of the lockdown as a bold and pragmatic step to open up domestic trade; it will improve the local economy and also help contain the spread of COVID-19 if Italy learns from other countries that aggressively tackled the COVID-19 pandemic without any lockdown or heavy restrictive orders on the citizenry. There are pace-setting countries such as Japan, Taiwan, Singapore, Hong Kong, Greece, Sweden, Switzerland, South Korea and Thailand that have still not locked down domestically because of the pandemic, and Italy can learn from them. Although these are big economies and developed countries, Italy can still perfect the basics of their interventions and policies and implement the same to beat the COVID-19 pandemic (Changjiang Daily, 2020). The Asian giants' (South Korea, Singapore, Japan, Hong Kong) style of containing COVID-19 As of March 25, 2020, South Korea had over 9,000 confirmed cases of COVID-19, which placed it among the top 10 countries for total cases, but the country recently managed to significantly slow the number of new cases without adopting strict lockdown measures and draconian orders. These countries have been able to make tactical decisions on schools, movement and so on: they embarked on mass testing, tested widely for the virus, isolated cases and quarantined suspected cases; Singapore did the same, and these two countries managed to suppress transmission of the virus. South Korea developed testing kits for the coronavirus even before it had a significant number of cases. The health authorities of South Korea conferred with research institutions to develop a test kit, and once it was done it was shared with competent pharmaceutical companies to develop and produce the reagents and equipment needed for testing. Such practice shows that testing is central to the outbreak response; without testing there is no early detection, and catastrophe can set in. After seeing the success of South Korea and Singapore, Hong Kong also joined and implemented the same strategies and policies to defeat Corona Virus Disease 19. The approach of these Asian giants (South Korea, Hong Kong, Singapore, Japan) led the World Health Organization's Director-General to refer to the strategy as cutting the virus off at the bud, meaning basically stopping the virus from spreading further and preventing community transmission. These countries have been able to keep most of their factories, malls and restaurants open, and Singapore has even kept schools open at a time when nations around the world are shutting down classrooms. There is little evidence to show that schools should be closed down in these Asian countries, since the claim that the young are vectors or spreaders of COVID-19 is not evidence-based.
South Korea has used data from surveillance cameras, cell phones and credit card transactions to map the social connections of suspected cases. Hong Kong does not give out the names of those infected; health officials release each person's age, gender, street address, medical symptoms and often the exact location of where the person works. This allows other residents to determine whether they might have been in contact with the infected individual. The health department of Hong Kong also releases the license plate numbers of taxi drivers who test positive and the flight numbers of infected travelers who arrive, so members of the public can determine whether they might have had contact. In Singapore, the police force works with the ministry of health to trace connections between cases and to track chains of transmission. Singapore also makes details of these infections public in the hope that other residents will come forward if they may have come in contact with a confirmed case (Beaubien, 2020). Swedish Style of Containing COVID-19 Sweden, a Nordic country, has during this pandemic kept playgrounds and schools open and restaurants working, and the government has relied on voluntary action to stem the spread of COVID-19. Sweden relied much on people taking responsibility for themselves and protecting themselves. As of April 9, 2020, Sweden had 9,141 cases of the COVID-19 virus with 793 deaths (Johns Hopkins University figures). The Swedish approach encouraged and recommended rather than compelled, as was done in most countries, and promoted awareness of washing hands with running water and soap and staying at home. Gatherings of up to 50 people were still permitted, together with a social distancing approach; the main focus was to protect the elderly, and anyone older than 70 years was told to stay at home and limit any social contact (Beaubien, 2020). Taiwan Style, the Asian Country that is not a Member of the World Health Organization Taiwan has a population of about 24 million people, almost the same as Australia, and yet as of April 30th, 2020 it had fewer than 400 cases of COVID-19 while Australia had in excess of 500. Taiwan has been able to keep the virus under control when other parts of the world have not. This is a matter of hard lessons learned during the severe acute respiratory syndrome (SARS) outbreak of 2003, in which Taiwan was among the most affected worldwide, along with Hong Kong and southern China. Asia has had the most preventive and secure response to the coronavirus, through border controls and the wearing of face masks, made routine as early as January 2020. Taiwan has a world-class health care system with universal health coverage. As news of COVID-19 began to emerge, Taiwan's national health command center (NHCC) moved in to respond quickly to the potential threat (Journal of the American Medical Association). Taiwan rapidly produced and implemented a list of at least 12 action items in 5 weeks to protect public health, and its policies and actions went beyond border control because, to Taiwan, that alone was not enough, even though Taiwan was at greater risk of COVID-19 due to its close proximity, ties and transport links with mainland China. Among Taiwan's early decisions were to ban travel from many parts of China, stop cruise ships docking at the island's ports and introduce strict punishment for anyone found breaching home quarantine orders. Further to that, Taiwanese officials moved to ramp up domestic face mask production to ensure local supply.
Taiwan rolled out island-wide testing for coronavirus, including retesting people who had previously had unexplained pneumonia; in addition, it announced punishment for those who spread false information about the virus and COVID-19. A platform and avenue was created for well-trained and experienced health care teams and cadres to address the emerging outbreak; however, the strict lockdowns that characterized the response in China and many other countries were largely avoided. Taiwan is not a member of the World Health Organization, hence it is always planning, making policies and implementing strategies that go beyond established world health care practices. Now that Italy has opened up its domestic trade in this pandemic, it is time the government collaborated effectively, opened up to expert advice, continued to mobilize experts locally and from abroad to find solutions, and also drew closer to the World Health Organization. This is the time for political leaders and the government in power to do all they can to avoid using excessive power, abuse or force on the citizenry, even if some go contrary to regulations, and rather to use the time to educate such people, with persistent defaulters or recalcitrants brought to book under the laid-down orders. Political leaders should know that this COVID-19 pandemic is a public health crisis and all should come together to help find proper and better solutions and remedies, and incumbent governments should occasionally submit themselves to checks and balances. European countries will have to borrow from the World Bank, the International Monetary Fund, the European Central Bank and others to augment their reserves and stabilize their shaking economies at this critical point of the COVID-19 pandemic. These basics of accountability and loyalty to the ordinary citizenry and the donor agencies can be delivered through state-of-the-nation addresses, press briefings, and a complete breakdown and dissemination of information to the ordinary citizen through the various ministries of information or communications. Critical observation, analysis and trend monitoring have so far shown that the current government of Italy has been doing well, and all must encourage, back and support the incumbent government's good initiative of letting the ordinary citizenry and Italy know how much is being spent and how dire the financial crisis is that this pandemic has brought upon Italy and the European countries. Further to Italy lifting the lockdown on the domestic or internal market and economy, the country cannot stand firm on such a measure alone, and its economy will continue to fall and dwindle in this period, since Italy's main source of income rests on export commodities, which bring in foreign exchange, serve as a benchmark for foreign exchange liquidity that can ease balance-of-payment constraints, and act as a source of employment for the many youth. Such an intervention or initiative would help keep ordinary Italians and their families stable even though the macro economy may suffer; it brings peace and stability within the country and averts any form of tension and aggression among the citizenry and towards the government of the day.
This is a good start for Italy and a clarion call for other European countries, especially those in the west, to emulate Italy and adapt most of the policies and interventions the current government has put in place for the benefit of citizens, together with its steady tactics for fighting the COVID-19 pandemic, although Italy still has a long way to go and much to put in place to achieve the optimum (Changjiang Daily, 2020).

The M3T (Mass Testing, Tracing and Treatment)

In the M3T, M stands for mass, the first T for testing, the second T for tracing and the last T for treating.

Mass Testing

Although the domestic or internal lockdown has been lifted, there is still a strong need for mass testing, especially as the available data show levels of community spread. This will yield results since the external borders with neighboring countries are closed. As much as the government is helping, it would be better if rapid test kits that can deliver results within 10 to 15 minutes were procured from world-renowned laboratories and bioscience centers. Kits from Abbott and Cellex of the United States have passed the approvals of the United States Food and Drug Administration and cost around $6 per kit on average (New Rutgers, 2020). A research institute at Rutgers, the State University of New Jersey, has also gained approval from the United States Food and Drug Administration for a new biomaterial collection approach, RUCDR Infinite Biologics, to detect the SARS-CoV-2 coronavirus. This is a new saliva method that allows broader population screening than the nose and throat swabs used by most methods. Most testing is done by nasopharyngeal or oropharyngeal collection, which usually puts health care professionals at risk even though they wear personal protective equipment, so the new saliva-testing approach may go a long way to help. In developing economies (such as in Africa, where the virus is spreading) with huge and ever-increasing populations, procuring such fast and modern COVID-19 testing methods becomes a major financial constraint for governments because of their unstable economies, yet it is vital for such a time and season. Furthermore, African governments need to recruit more public health and laboratory personnel to carry out such a continuous process given the population size; this becomes another unplanned budget cost, yet it is vital for success in the fight against the COVID-19 pandemic.

Tracing

Tracing of contact persons in the population is highly relevant once there is community spread; without it, the number of cases may grow uncontrolled and the country's health system can collapse. The social care systems of Italy and of most European Union countries differ, which makes contact tracing all the more important, since infected individuals unknowingly keep spreading the virus in their day-to-day interactions with loved ones, friends and families, who in turn may spread it to other friends, relatives and business contacts. This also requires human resources, given how varied such practice is, and calls for another unbudgeted government cost.

Treatment

Treatment becomes key once infected persons are identified. It can only be provided in centers accredited for that purpose. The fewer the cases, the better for the treatment centers and for the efficient management of infected persons by health care professionals.
If the number of cases rises, more accredited centers must be established and more health care professionals recruited, with an expanding need for medical consumables; this becomes another headache for the Italian government, since it is a financial burden and an unplanned budget item.

The 2SQ (Social Distancing, Self-Isolation, Quarantine)

In the 2SQ, the two S's stand for social distancing and self-isolation, while the Q stands for quarantine.

Social Distancing

The purpose of social distancing is to decrease transmission risk. It usually starts with larger groups of people, such as churches, mosques, schools and football games; ultimately it requires avoiding all non-essential activities. One must avoid contact with other persons and maintain a distance of 2 meters at all times when near people. The very point of social distancing is to prevent the case numbers from going up, that is, to flatten the curve of infections over time. Nonetheless, since COVID-19 is an airborne disease, it will still infect some people despite social distancing.

Self-Isolation

Self-isolation is basically for people who have come into contact with infected persons or who have lived in or returned from a country with many COVID-19 cases. Such individuals usually show no symptoms; it is a precautionary measure, and they should stay at home, limit physical contact with others and avoid public transport. They should be educated about the symptoms and should monitor themselves for respiratory signs such as sneezing, coughing, fever and shortness of breath.

Quarantine

Quarantine comes into operation when persons have been tested for COVID-19 and are awaiting results, or have tested positive. Such persons are usually confined to accredited centers or kept at home in a prepared place, and everything is done to prevent them from coming into contact with others as much as possible (Gupta, 2020; Zeni, 2020; Fletcher et al., 1983).

Conclusions

As much as COVID-19 is fatal and a global health crisis, it presents an opportunity for Italy and European governments to take bolder, more pragmatic steps: to secure and shield supply chains of essential products, strive to contain the health crisis, maintain the stability of their financial systems, help businesses survive the crisis, support households' economic welfare, and provide compensating stimulus packages to reverse the economic damage the crisis is still causing. Research shows how fast action in the face of the COVID-19 problem can avoid even more critical situations. Indeed, the probability that a person affected by the virus and hospitalized worsens depends on the quality of the care provided, and the model of care must be multidisciplinary. We can say that we are writing a book of which we currently know only the title (COVID-19): the disease can leave consequences that require monitoring over time. The reconstruction of the sanitary and economic rubble cannot ignore the psychological wounds, without which the social fabric will be greatly weakened, with an avoidable risk of increasing poverty.

Recommendations

As Italy and many European countries have ailing economies, the International Monetary Fund, the World Bank and the European Central Bank should, as a matter of relief, give long-term interest-free loans to the European countries affected by the COVID-19 pandemic, considering its devastating effect on the economy.
The developed world, though affected by the pandemic, will certainly stabilize and recover from any economic decline COVID-19 has brought upon it, and should consider waiving the sovereign debts of the European countries hit severely by the pandemic, since at this time debt relief is indispensable, especially when imports and exports are hard hit by severe restrictions in most European countries. This research underscores the extreme importance of social distancing, self-isolation, quarantine and, when needed, international boundary lockdowns. It is also essential at this stage not to neglect mass testing, tracing and treatment, which have proven an effective and result-oriented public health intervention; border closures or international boundary lockdowns without the public health intervention of mass testing, tracing and treating amount to a null and defective effort. Some of Italy's leading epidemiologists and infectious disease specialists think it is time European countries did as much as they can to emulate the aggressive approach that the Asian leaders (Singapore, South Korea, Japan, Hong Kong, Taiwan) followed through their health care systems, and the processes, policies, strategies and implementation they used to conquer COVID-19 ahead of the rest of the world.

In summary, it is necessary to:
- increase the qualitative and quantitative level of the health service;
- develop a database that includes all those who have contracted the virus;
- gradually resume economic activities to revive the economy;
- create a virtual network of collaborative medicine that becomes a model of care for the future.
Spinal myoclonus following a peripheral nerve injury: a case report

Spinal myoclonus is a rare disorder characterized by myoclonic movements in muscles that originate from several segments of the spinal cord; it is usually associated with laminectomy, spinal cord injury, post-operative states, lumbosacral radiculopathy, spinal extradural block, myelopathy due to demyelination, cervical spondylosis and many other conditions. On rare occasions, it can originate from peripheral nerve lesions and be mistaken for peripheral myoclonus. Careful history taking and electrophysiological evaluation are important in the differential diagnosis. The aim of this report is to evaluate the clinical and electrophysiological characteristics and treatment results of a case of spinal myoclonus following a peripheral nerve injury without any structural lesion.

Background

Myoclonus is defined as a sudden muscular contraction that usually indicates disease of the central nervous system and may be cortical, subcortical, or spinal in origin [1]. Spinal myoclonus is a rare disorder characterized by myoclonic movements in muscles that originate from several segments of the spinal cord. Though structural lesions are usually found in spinal myoclonus, the pathophysiology remains speculative. There is evidence that several mechanisms may be involved: loss of inhibitory function of local dorsal horn interneurons, abnormal hyperactivity of local anterior horn neurons, aberrant local axon re-excitation, and loss of inhibition from suprasegmental descending pathways [2]. This report describes a case of spinal myoclonus following a peripheral nerve injury; the clinical and electrophysiological characteristics and the treatment results are discussed.

Case presentation

A 33-year-old female was admitted to the Neurology Department with complaints of weakness, hypoesthesia, paresis and painless, constant, involuntary muscle spasms of the left upper extremity. Her complaints had started 4 months earlier, after she fell on her left arm. At that time a fluid collection and oedema appeared at the left elbow joint. Within a month, she experienced weakness, sensory deficits and minimal muscle spasms in the territory of the left ulnar nerve. Cervical magnetic resonance imaging (MRI) was normal. Electromyographic evaluation (EMG) revealed a conduction delay and/or a conduction block with neurogenic involvement displaying partial denervation in muscles innervated by the ulnar nerve. The collection was evacuated by decompression surgery and the ulnar nerve was released. After the operation, the weakness and sensory deficits did not improve. The involuntary movements in the muscles innervated by the left ulnar nerve then increased and spread to the whole arm, and she was referred to our clinic. Her family history was unremarkable. She was not on any medication and did not smoke or drink alcohol. Neurological examination revealed spontaneous, synchronized, involuntary myoclonic jerks in the proximal part of the left upper extremity during action and at rest (see Additional file 1). The myoclonus, seen in agonist and antagonist muscles, persisted during sleep, as her parents noted. It was provoked by movements of the affected muscle groups but showed no response to tactile stimuli. Minimal muscle weakness and a sensory deficit in the biceps, triceps and brachioradialis muscles were noted. Routine biochemical laboratory investigations were within normal limits.
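The work-up above relied on nerve conduction studies to demonstrate the ulnar conduction delay. As a brief aside, the motor conduction velocity of a nerve segment is derived from the distance between two stimulation sites and the difference between their latencies; the sketch below illustrates the arithmetic with purely illustrative values, since the patient's actual measurements are not given in the report.

```python
def motor_conduction_velocity(distance_mm, proximal_latency_ms, distal_latency_ms):
    """Conduction velocity (m/s) of a nerve segment: the distance between
    the two stimulation sites divided by the latency difference.
    Note that mm/ms is numerically identical to m/s."""
    dt = proximal_latency_ms - distal_latency_ms
    if dt <= 0:
        raise ValueError("proximal latency must exceed distal latency")
    return distance_mm / dt

# Illustrative ulnar values: roughly >50 m/s is typically normal in the
# forearm; a markedly lower value suggests focal slowing, e.g. at the elbow.
print(motor_conduction_velocity(240, 8.1, 3.1))  # 48.0 m/s
```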
Secondary causes of myoclonus such as infectious diseases (HIV, VDRL, HSV, hepatitis B and C, syphilis) were excluded. Blood calcium, copper and ceruloplasmin levels, hepatic and renal function tests, thyroid hormone levels, sedimentation rates, cerebrospinal fluid findings, routine EEG and cranial MRI scanning were all normal. Computerized tomography (CT) of the left arm, performed because of the trauma to the left upper extremity, revealed a fissure 1 cm above the humero-radial joint at the level of the lateral epicondyle. MRI of the forearm revealed a partial rupture of the collateral ligament that provides stabilization of the wrist, a strain in the distal part of the triceps muscle, and articular effusion. Needle EMG findings and motor and sensory nerve conduction studies of the upper extremity muscles were within normal limits. Somatosensory evoked potentials (SEP) were normal. Surface EMG showed rhythmic, irregular discharges of 1-3 Hz in motor units of muscles extending from the fifth to the eighth cervical segment of the left upper extremity in a segmented fashion (Figure 1). Agonist and antagonist muscle contractions and discharges were synchronized. The myoclonic activity started synchronously in the whole segment, and there was no startle response to supraorbital, median or ulnar nerve electrical stimulation or to auditory stimulation, suggesting that it was not stimulus-sensitive. On the basis of the clinical, laboratory, radiological and electrophysiological evaluations, the patient was diagnosed with non-proprioceptive spinal myoclonus. Various drugs were used (carbamazepine 800 mg/day, Na valproate 1000 mg/day, piracetam 4.8 g/day, clonazepam 6 mg/day) but none was effective. Since there was no response to medical treatment, botulinum toxin type A (Botox®) was applied to the triceps and biceps muscles of the left extremity. A week after the botulinum toxin injection, a temporary improvement was noted, but it was not considered satisfactory.

Discussion

The label of spinal segmental myoclonus is appropriate when there is pathology in the spinal cord and the movements correspond to those segments. In our patient, both clinical and electromyographic findings pointed to the C5 to C8 segments as the site of segmental spinal myoclonus. The collection was evacuated and decompression was performed at the outset, since electrophysiological evaluation showed ulnar nerve compression, but her symptoms did not subside. Cervical MRI taken after the trauma was normal. The findings were widespread and not limited to the ulnar nerve tract as would be expected. The movements started following a trauma, suggesting that the disease might have been triggered by peripheral nerve damage. Clinical and electrophysiological evaluations showed that the pathology progressed to segments above the territory of the affected peripheral nerve. Propriospinal myoclonus affects multiple neighbouring segments, but in our case the movement was observed synchronously in the whole segment. Spinal myoclonus may also be stimulus-sensitive, but we did not observe any such feature, for example a startle induced by peripheral nerve or supraorbital stimulation; therefore, we concluded that the pathology was not of the stimulus-sensitive type. The diagnosis of psychogenic myoclonus was considered, but a psychiatric consultation was completely normal. Furthermore, the myoclonus continued during sleep and occurred synchronously in agonist and antagonist muscles.
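The surface EMG finding of rhythmic 1-3 Hz discharges can be quantified by estimating the burst rate of the rectified, smoothed signal. The following Python sketch illustrates one plausible approach on a synthetic 1 kHz recording; the signal, thresholds and window lengths are illustrative assumptions, not the parameters used in this case.

```python
import numpy as np
from scipy.signal import find_peaks

def myoclonic_burst_rate(emg, fs):
    """Estimate the discharge rate (Hz) of rhythmic EMG bursts:
    rectify, smooth with a 50 ms moving average, detect burst peaks
    separated by at least 250 ms, and divide by the recording length."""
    rectified = np.abs(emg - np.mean(emg))
    window = int(0.05 * fs)
    envelope = np.convolve(rectified, np.ones(window) / window, mode="same")
    peaks, _ = find_peaks(envelope,
                          height=3 * np.median(envelope),
                          distance=int(0.25 * fs))
    return len(peaks) / (len(emg) / fs)

# Synthetic 10 s surface EMG at 1 kHz with ~2 Hz bursts riding on noise.
rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 10, 1 / fs)
bursts = (np.sin(2 * np.pi * 2 * t) > 0.95).astype(float)  # ~50 ms bursts
emg = bursts * rng.normal(0, 1, t.size) + rng.normal(0, 0.05, t.size)
print(f"{myoclonic_burst_rate(emg, fs):.1f} bursts/s")  # close to 2.0
```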
Figure 1. EMG recordings with surface electrodes.

Spinal myoclonus has been associated with laminectomy, the remote effect of cancer, spinal cord injury, post-operative pseudomeningocele, laparotomy, thoracic sympathectomy, poliomyelitis, herpes myelitis, lumbosacral radiculopathy, spinal extradural block, myelopathy due to demyelination, electrical injury, acquired immunodeficiency syndrome, and cervical spondylosis [3]. On rare occasions, spinal myoclonus can be observed after peripheral nerve lesions. A peripheral nerve lesion as a cause of spinal myoclonus is still the subject of debate. There is evidence that various pathological mechanisms could be involved, e.g., loss of inhibitory function of local dorsal horn interneurons, abnormal hyperactivity of local anterior horn neurons, aberrant local axon re-excitation, and loss of inhibition from suprasegmental descending pathways [2]. The following findings support why the present case was considered spinal rather than peripheral myoclonus: the complaints started after a peripheral trauma, persisted and even increased although decompression surgery was performed; the movements did not affect only the ulnar nerve tract, as in peripheral myoclonus, but also involved the upper segments and were widespread; and they had a rhythmic, synchronous presentation, continued during sleep, and were not stimulus-sensitive. Clonazepam is the treatment of choice; besides this, carbamazepine, diazepam and levetiracetam have been tried in a few cases. In our patient, various medical treatments were applied (clonazepam 6 mg/day, carbamazepine 800 mg/day, Na valproate 1000 mg/day, piracetam 4.8 g/day), but no response was observed. There are suggestions that botulinum toxin type A could be beneficial in cases resistant to medical treatment [4]. In our case, botulinum toxin was injected locally but was not effective.

Conclusion

In conclusion, spinal myoclonus can originate from a peripheral nerve lesion and be mistaken for peripheral myoclonus. While the underlying lesion is usually treatable and reversible in peripheral myoclonus, spinal myoclonus usually persists despite various treatments. Careful history taking and electrophysiological evaluation are important in the differential diagnosis.
Ways to improve physical and thermal performance of refractory lining materials

Refractory lining materials, which include ceramic refractories and non-fired heat-resistant concretes, have a very short lifespan, with turnaround times measured in years and sometimes months. Therefore, increasing the service life of thermal generating units by 1.5-2 times would bring significant economic benefits. The main factor that determines the durability of refractory lining materials is their thermal resistance, which can be increased by improving physical and mechanical properties such as strength and density. As regards the improvement of ceramic refractory performance, technological methods such as structural and chemical modification by phosphate binder impregnation, and the introduction of phosphate components into the ceramic batches during molding, increase their thermal stability in particular. The use of aluminous and high-alumina cements contributes to a significant increase not only in strength but also in the physical and thermal performance of heat-resistant concretes with different fillers. Switching to chemical binders in heat-resistant concrete compositions (liquid glass with effective hardeners; silicate-block and phosphate binders) enables the development of highly heat-resistant materials that do not soften over a wide range of heating temperatures, from 400 °C to 1600 °C. Positive results in increasing the thermal resistance of heat-resistant composites can also be obtained by reinforcing them with high-temperature fibers.

Introduction

Traditional ceramic refractories, including fired ceramic concretes, as well as non-fired heat-resistant concretes with different binders, belong to the class of refractory lining materials. Piece ceramic refractories made by the traditional ceramic technology (molding → drying → firing) have various chemical and mineralogical compositions that ultimately affect their physical and thermal operation factors (strength properties, deformation temperature under load, heat and chemical resistance, etc.). For example, the thermal resistance of piece refractories, i.e., the main property determining their durability, varies from 10 water thermal cycles for fireclay refractories to 100 for corundum ceramics, while the compressive strength limit ranges from 15 to 100 MPa, respectively. However, lining constructions made of piece ceramic refractories have a large number of joints, which form the "bottleneck" between piece refractories in the brick masonry where failure of the entire lining begins. Since the amount of refractory lining work increases annually, heat-resistant concretes have recently been used to reduce the number of these joints. Heat-resistant concretes are modern refractory lining materials made with well-known hydraulic binders (Portland cement, alumina cements) and special chemical binders (liquid glass, silicate-block, phosphates). Crushed heavy and light piece refractories, as well as artificial fired porous aggregates (expanded clay gravel, haydite, perlite, vermiculite, etc.), serve as aggregates for heat-resistant concretes. On the basis of the binders listed above, it is also possible to produce heat-resistant cellular concretes.
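Since thermal resistance is named above as the main durability factor, it is worth recalling how it is commonly related to strength: the classical first thermal-stress resistance parameter R = σ(1−ν)/(Eα) rises with tensile strength σ and falls with stiffness E and thermal expansion α. The sketch below illustrates this relation with purely illustrative property values; it is a textbook figure of merit, not a calculation from this paper's data, and other parameters (e.g., R′ including thermal conductivity) are used alongside it in practice.

```python
def thermal_shock_parameter(strength_mpa, poisson, youngs_gpa, alpha_per_k):
    """First thermal-stress resistance parameter R = sigma*(1-nu)/(E*alpha),
    in kelvin: the temperature drop a brittle body can sustain before the
    induced thermal stress reaches its tensile strength. Raising strength
    raises R, which matches the paper's rationale for improving strength."""
    return strength_mpa * 1e6 * (1 - poisson) / (youngs_gpa * 1e9 * alpha_per_k)

# Purely illustrative property values for a fireclay-type refractory:
print(f"R ~ {thermal_shock_parameter(15, 0.2, 60, 5.5e-6):.0f} K")  # ~36 K
# Doubling the tensile strength at unchanged E and alpha doubles R:
print(f"R ~ {thermal_shock_parameter(30, 0.2, 60, 5.5e-6):.0f} K")  # ~73 K
```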
Heat-resistant concretes, like conventional ones, can be used for thermal unit linings in both precast and monolithic variants. These variants enable thermal unit linings to be produced with a minimum of joints. Reducing the number of joints in lining constructions not only improves their properties and durability but also saves material resources, because it reduces the number of routine, medium and general repairs required in refractory practice. In view of the above, the development of ways to improve the physical and thermal performance of refractory lining materials, including their thermal stability, is regarded as a priority [1-3].

Materials and methods

Portland cement is one of the most widely used binders for chemically bonded refractory composites. As a hydraulic binder, Portland cement contains 75-80% highly basic calcium silicates, 5-15% highly basic calcium aluminate and 10-20% tetracalcium aluminoferrite. The major feature of Portland cement as a binder for heat-resistant composites is the formation of significant quantities of calcium hydroxide Ca(OH)2 (up to 15% by weight) in the cement stone as a result of hardening. During drying and the first heating, at a temperature of about 500 °C, the calcium hydroxide decomposes into calcium oxide and water. Re-slaking of the newly formed calcium oxide by the water vapor contained in the air leads to destruction of the cement stone, and hence of the concrete, through re-formation of Ca(OH)2 from CaO. To prevent this process, agents capable of binding the calcium oxide into more heat-stable compounds are added to the Portland cement. Traditional fine-ground refractory additives are used for this purpose: granulated blast-furnace slag, alumina-chromium waste in the form of spent petrochemical catalyst, chamotte, cordierite, volcanic ash, fly ash, alumina cement and other substances capable of reacting with calcium oxide. Positive results, in particular increased strength, are obtained by adding sodium tripolyphosphate to Portland cement binders, as well as classical plasticizers of the C-3 type or nanotechnogenic high-alumina sludge waste from non-ferrous metallurgy [3-5]. The maximum operating temperature for the widely used fireclay refractory concrete on Portland cement does not exceed 1200 °C. When a heat-resistant concrete on Portland cement uses more highly refractory materials, such as chromomagnesite or periclase products, as the fine-ground refractory additive and fillers, its operating temperature rises to 1600 °C [3].

Aluminate binders, including alumina and high-alumina cements, consist of low-basic calcium aluminates. The total content of aluminum oxide in aluminous cement is not more than 50%, and in high-alumina cement it is not less than 70%. These cements are quick-hardening and very active, i.e.,
high-strength, binders. When mixed with water, alumina cements do not form calcium hydroxide Ca(OH)2 or other compounds that reduce the performance of heat-resistant composites. Cement stone based on alumina cement is highly refractory compared with Portland cement, owing to its increased aluminum oxide (Al2O3) content [6]. Heat-resistant concrete on aluminous cement has higher performance characteristics than concrete on Portland cement; this type of concrete has been tested at temperatures of 1300-1400 °C in neutral and weakly reducing environments. However, when placing this type of heat-resistant concrete on the installation site, the ambient temperature must be taken into account.

Heat-resistant concrete on high-alumina cement with a high-alumina aggregate (e.g., white electrofused corundum) can be used not only in oxidizing but also in reducing environments at ambient temperatures of up to 1700 °C. After heating to temperatures above 800 °C, the strength of concretes on high-alumina cement decreases; however, their residual strength is higher than that of concretes on Portland and alumina cements, averaging 40-45% of the grade value [2; 6]. Dense concrete on high-alumina cement and corundum can have a grade of 600-650 and a residual strength of 25-30 MPa when the average density of the normally compacted concrete mix is 2900-3100 kg/m3. This type of concrete is sufficiently stable in both oxidizing and reducing environments and is resistant to abrasive particles. It is also possible to increase the physical and thermal characteristics of heat-resistant composites (concretes, mortars, shotcrete masses) by using liquid-glass and phosphate binders.

Experiments and results

The liquid glass used in heat-resistant concrete compositions is an aqueous solution of sodium silicate with a density of 1.3-1.5 g/cm3. The ratio of silica to sodium oxide is usually 2-3.5. The hardening of the liquid glass, and of concretes based on it, results from dehydration or from the action of hardening initiators. Solidification by drying is possible only if the material contains a large number of pores and the concrete surface is exposed; in dense concretes, a vapor-tight surface film of liquid glass forms rapidly and prevents drying and hardening of the deeper sections of the concrete. In this regard, solidification of liquid-glass materials by introducing chemical hardeners that ensure volume curing is more promising. Sodium silicofluoride, self-disintegrating slag from ferrochromium production, nepheline sludge, Portland cement, aluminous cement and nepheline flame retardant can be used as hardeners [7]. The melting point of sodium and potassium silicates is relatively low (about 900 °C), and the liquid glass in a lining material acts as a fluxing agent; however, in combination with aluminosilicates, corundum, magnesite and other compounds of high refractoriness, it allows concretes and protective coatings to be obtained that withstand temperatures of about 1600 °C [8; 9]. Concretes and mortars based on liquid glass retain for a long time a ductility under loads much smaller than their strength limit; this defect can be reduced and completely eliminated by high-temperature drying (at temperatures above 120 °C).
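A small side calculation that often accompanies liquid-glass formulations is the conversion between the weight ratio of SiO2 to Na2O and the molar (silicate) modulus, using the molecular weights of the two oxides. The sketch below assumes the 2-3.5 ratio quoted above is a weight ratio; the text does not state this explicitly, so the assumption is flagged here.

```python
M_SIO2, M_NA2O = 60.08, 61.98  # molecular weights, g/mol

def molar_modulus(weight_ratio):
    """Silicate (molar) modulus n = (SiO2/Na2O by weight) * (M_Na2O / M_SiO2)."""
    return weight_ratio * (M_NA2O / M_SIO2)

for wr in (2.0, 3.5):  # the weight-ratio range quoted for the binder (assumed)
    print(f"weight ratio {wr} -> molar modulus {molar_modulus(wr):.2f}")
# weight ratio 2.0 -> molar modulus 2.06
# weight ratio 3.5 -> molar modulus 3.61
```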
Heat-resistant concrete on liquid glass with a fireclay filler is used at temperatures of not more than 1300 °C. This type of concrete resists the action of acidic media well, provided that sodium silicofluoride is used as the hardener for the liquid glass; concretes on liquid glass with sodium silicofluoride can therefore be used for linings of thermal units operating in acidic corrosive media. Concrete on liquid glass can be placed both by vibration and by shotcreting. When the sodium silicofluoride is replaced by self-disintegrating ferrochrome slag together with a fine-ground aluminosilicate powder, the physical and mechanical properties of the concrete improve: its residual strength increases by about 10%, and before the first heating to the operating temperature it becomes less sensitive to moisture.

One of the main advantages of heat-resistant concrete on liquid glass is that, unlike concrete on hydraulic binders, it shows practically no loss of strength when heated. In certain cases, for example when ferrochrome slag or aluminous cement is used as the liquid glass hardener and a finely ground aluminosilicate additive is introduced into the concrete mix, the heat-resistant concrete gains mechanical strength after heating to 800 °C, so that its residual strength becomes higher than the grade strength.

Heat-resistant concrete on the silicate-block, i.e., on the same sodium metasilicate but in the form of a dry fine powder, can be considered an analogue of heat-resistant concrete on liquid glass. The approximate composition of heat-resistant concrete on the silicate-block is as follows, % by weight: silicate-block 6.0; technical barium chloride 0.6; finely ground fireclay additive 34.4; fireclay sand 30; crushed fireclay 30; with 300 liters of mixing water per 1 m3 of the concrete mix. The concrete mix is prepared in fixed-drum concrete mixers by mixing the dry components for 1.0-1.5 min, and it is placed by vibration. The concrete takes 4-6 hours to harden at 100-150 °C [8; 9]. The basic physical and technical properties of heat-resistant concrete on the silicate-block are: usage temperature limit 1200-1600 °C; grade strength 20-30 MPa; residual strength 100%; average density 1900 kg/m3; thermal shock resistance 23 water thermal cycles (heating to 800 °C with water quenching). The linear shrinkage after heating to 1200 °C does not exceed 0.5%. A characteristic feature of this type of concrete is its increased high-temperature strength under acidic gases. Heat-resistant concrete on the silicate-block differs from other heat-resistant concretes by its increased strength and thermal stability. In addition, the silicate-block is cheaper than liquid glass, and its consumption per 1 m3 of concrete mix, calculated on a dry basis, is approximately 20% lower. However, heat-resistant concrete on the silicate-block can be used effectively only for the manufacture of blocks and individual elements.
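To make the quoted silicate-block composition concrete, the percentages can be scaled to batch weights. A minimal sketch follows; note that the source percentages sum to 101 (presumably a rounding artifact), so they are normalized here, and the 1900 kg/m3 average density of the hardened concrete is used only as a rough assumption for the dry-batch mass per cubic meter.

```python
# Dry-component proportions quoted for the silicate-block concrete
# (the source percentages sum to 101, so they are normalized below):
composition_pct = {
    "silicate-block": 6.0,
    "technical barium chloride": 0.6,
    "fireclay additive (fine-ground)": 34.4,
    "fireclay sand": 30.0,
    "crushed fireclay": 30.0,
}

def batch_weights(total_dry_kg):
    """Scale the normalized weight percentages to a dry batch of given mass."""
    total_pct = sum(composition_pct.values())
    return {name: total_dry_kg * pct / total_pct
            for name, pct in composition_pct.items()}

# Assuming ~1900 kg of dry mix per m^3 (the quoted average density of the
# hardened concrete, used here only as a rough batching estimate):
for name, kg in batch_weights(1900).items():
    print(f"{name:32s} {kg:7.1f} kg")
# ... plus the quoted 300 L of mixing water per m^3 of concrete mix.
```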
Heat-resistant compositions on phosphate cements and binders have very high physical and thermal parameters. Phosphate cements are formed by the reaction of orthophosphoric acid with metal oxides or metal-mineral materials (including ashes, slags, natural and artificial silicates, etc.). Depending on the activity of the material towards the phosphoric acid, the binding properties of the resulting cements appear at normal, elevated or high temperatures. At temperatures of about 20 °C, cold-hardening phosphate compositions can be prepared on the basis of magnesium and zinc oxides, or of titanium, aluminum and zirconium hydroxides previously passivated at 1100-1200 °C. The binding properties of phosphate cements based on oxides of titanium, aluminum, zirconium, chromium and a number of other metals appear only when heated to 200-500 °C [10; 11]. The setting time of cold-hardening phosphate compositions can be controlled by changing the specific surface of the solid component (e.g., by changing its grinding fineness or the reactivity of the grain surface through thermal or chemical treatment), or by pre-neutralizing the liquid component.

Industrial aqueous solutions of aluminum phosphate (aluminophosphate) are the most common liquid phosphate binders; the density of the aluminophosphate binder is about 1.5 g/cm3. Aluminum chromophosphates have approximately the same density and are a sticky liquid of green color. In addition, partially or completely dehydrated powders of aluminophosphate and aluminochromophosphate are sometimes used; the binding properties of the partially dehydrated powders and of the aqueous solutions appear at the same temperatures, while those of the completely dehydrated powders appear at elevated and high temperatures. Solidification of liquid phosphate binders occurs through drying followed by polymerization and condensation, or through the introduction of hardening initiators that react with the acid groups present in the binder, forming hardly soluble compounds. A characteristic feature of phosphate binders is that their strength increases with the heating temperature. The mechanical characteristics of materials based on them are exploited in the production of wear-resistant, highly plastic and corrosion-resistant (including slag-resistant) coatings.

Heat-resistant concrete with an alumochromophosphate binder (AChPhB) and zircon aggregate can be used at temperatures of not more than 1500 °C. A characteristic feature of this type of concrete is its repellency to basic and weakly acidic slags; it can therefore be used effectively to line the bottoms of boiler furnaces with liquid ash removal, non-ferrous melting furnaces and other thermal units. Unlike heat-resistant concretes on orthophosphoric acid, concretes on the alumochromophosphate binder are more technologically convenient, since no heat treatment is needed to give structural strength to constructions manufactured from them. They can harden at normal temperatures above 15 °C if certain chemical additives are introduced into the concrete mix as hardening initiators, provided these do not impair the operational properties of the concrete (i.e., they do not cause corrosion of the reinforcement, a decrease of concrete strength when heated, etc.). One such additive is the waste of the spent chromia-alumina catalyst IM-2201, which initiates hardening of the AChPhB and simultaneously acts as a fine-ground additive. Concrete on the alumochromophosphate binder finally gains strength during drying and the first heating of the structure made from it; it can thus be used to form large blocks and shields, as well as for monolithic concreting and semi-dry shotcreting of the lining of the structure where the boiler is installed [10; 11].
Discussion

The use of alumina-containing components in concretes and concrete grouts increases the chemical resistance of refractory composites. It is also possible to increase the strength characteristics of heat-resistant composites by producing individual products from refractory ramming mixtures. Previously, ramming mixtures were used only for relining thermal units. In contrast to the heat-resistant concretes developed for manufacturing articles and for relining, ramming mixtures have a number of specific features arising from the technology of their compaction during the molding of individual blocks and during maintenance work in furnaces and other thermal units. The consolidation of ramming mixtures (monolithic concretes) in these operations is carried out only by tamping, which reduces the amount of grouting fluid needed for high-quality placing of the mix.

To improve the technological parameters necessary for high-quality compaction of refractory ramming mixtures, refractory clay or kaolin is introduced into their composition. Since these components are scarce and costly (practically most of the deposits of refractory clays and kaolins are concentrated on the territory of Ukraine), we have attempted to replace the plastic components with certain wastes from the ceramsite (expanded clay) industry. At a number of sintering plants in the Samara region and elsewhere in the Russian Federation, where double-drum rotary kilns are used for claydite production, a by-product in the form of ceramsite dust falls out of the firing unit at the junction of the two drums. It was found that this by-product contains about 20-30% clay bond along with fired components. The use of phosphate binders (orthophosphoric acid or liquid alumochromophosphate binder) as grouting fluids for refractory ramming mixtures with ceramsite dust allowed, as a result of chemical reactions between the active liquid phosphates and the components (oxides) of the mineral fillers (including the dust), high-temperature compounds to be obtained in the form of ferric phosphate FePO4, aluminum phosphate AlPO4, calcium phosphate Ca3(PO4)2 and others. The grouting fluids and the ramming-mixture consistency are as in composition 1. Such ramming-mixture compositions are recommended for the manufacture of both large blocks and small critical piece products required for the installation and repair of thermal unit linings. When individual blocks, parts and products are manufactured from ramming mixtures, the method of immediate demolding is quite acceptable. These heat-resistant composites gain their grade strength directly in the thermal unit after the first heating of the lining to the operating temperature. For further growth of the strength and other physical and thermal parameters of phosphate-hardening heat-resistant composites, it is recommended to treat the products, i.e., to carry out structural and chemical modification by impregnation with a solution of the liquid aluminophosphate binder. The same technology can be used to improve the physical and thermal parameters of fireclay and high-alumina refractories [3]. The major physical and thermal characteristics of the developed refractory ramming mixtures, suitable both for the production of individual products and blocks and for thermal unit relining, are given in Table 1.
Conclusion

Thus, the use of industrial waste as fine-grained additives and fillers significantly reduces the cost of heat-resistant concretes and extends their range of use. The strength, and correspondingly the thermal resistance, of small heat-resistant parts and products based on phosphate compositions can also be increased by pressing at 5.0-15.0 MPa. In addition to refractory clay, the mixtures may include finely ground high-alumina fillers and fine aggregates of aluminosilicate or high-alumina composition. Almost all phosphate binders can serve as grouting fluids, but preference is given to water-soluble aluminophosphate compounds [3; 5; 10]. A quite simple method of increasing the thermal stability of heat-resistant concretes on practically all the binder types considered consists of introducing refractory high-temperature fibers into the concrete mixes, producing fiber-reinforced heat-resistant concretes. Kaolin wool, basalt fibers and also metallic fibers made from high-temperature alloys, e.g., nichrome, can be used as the fibrous refractory materials [12]. It has been established that the introduction of the above-mentioned refractory inorganic fibers and metallic fibers in the amount of 3-4% of the total weight of the concrete mix increases its thermal stability by 1.2-1.5 times.

Note to Table 1: above the bar are the properties of the ramming mixtures; below the bar are the properties of ramming mixtures modified with acidic aluminophosphates after heat treatment at t = 200 °C.
Klebsiella pneumoniae exhibiting a phenotypic hyper-splitting phenomenon including the formation of small colony variants

In this study, we characterized a Klebsiella pneumoniae strain from a patient with a shrapnel hip injury, which resulted in multiple phenotypic changes, including the formation of a small colony variant (SCV) phenotype. Although described since the 1960s, little is known about SCV phenotypes in Enterobacteriaceae. The formation of SCVs has been recognized as a bacterial strategy to evade host immune responses and compromise the efficacy of antimicrobial therapies, leading to persistent and recurrent courses of infection. In this case, 14 isolates with different resisto- and morphotypes were distinguished from the patient's urine and tissue samples. Whole-genome sequencing revealed that all isolates were clonally identical, belonging to the K. pneumoniae high-risk sequence type 147. Subculturing the SCV colonies consistently resulted in the reappearance of the initial SCV phenotype and three stable normal-sized phenotypes with distinct morphological characteristics. Additionally, an increase in resistance was observed over time in isolates that shared the same colony appearance. Our findings highlight the complexity of bacterial behavior by revealing a case of phenotypic "hyper-splitting" in a K. pneumoniae SCV and its potential clinical significance.

Introduction

Klebsiella pneumoniae, an opportunistic pathogen known for its ability to cause a wide range of nosocomial and community-acquired infections, has emerged as a significant public health threat due to its strain-specific, extensive arsenal of resistance and virulence factors (Wyres et al., 2020; Antimicrobial Resistance Collaborators, 2022). Infections caused by multi-, extensively and pandrug-resistant strains result in high mortality owing to their limited response to antibiotic therapy, and they pose an increasing threat (Ventola, 2015; Navon-Venezia et al., 2017; Avgoulea et al., 2018). Apart from classic strains, a hypervirulent K. pneumoniae (hvKp) pathotype occurs, characterized by invasive, often life-threatening, multiple-site infections, typically in healthy patients from the general population (Russo and Marr, 2019). In addition, convergent types that successfully combine resistance and hypervirulence represent a "perfect storm" and have been increasingly reported in recent years (Heiden et al., 2020; Lan et al., 2021; Eger et al., 2022).
Beyond typical resistance mechanisms against various antimicrobials, functional resistance mechanisms have been elucidated that lead to antimicrobial treatment failure and foster the development of relapses and persistent infections (Ster et al., 2017). The formation of a biofilm matrix is one such mechanism, facilitating antibiotic tolerance and the generation of bacterial persister cells (Ster et al., 2017). Interestingly, it has been demonstrated that a decrease in capsule biosynthesis, which is crucial for hypervirulent phenotypes, leads to increased in vitro biofilm formation and intracellular persistence (Ernst et al., 2020). Another non-classical mechanism leading to antibiotic tolerance is the formation of the small colony variant (SCV) phenotype. SCVs are subpopulations of bacteria that exhibit slow growth, reduced colony size and altered phenotypic properties compared with their normally growing counterparts, making them difficult to detect and treat effectively (Proctor et al., 2006; Becker, 2023). Their ability to evade the host's immune surveillance and to undermine the effectiveness of antimicrobial interventions through host cell internalization results in intracellular persistence, which contributes significantly to the recurrence and chronicity of infection (Tuchscherr et al., 2011; Kahl et al., 2016). Intracellular persistence has been shown for different human and animal cell types, including endothelial and epithelial cells such as keratinocytes and osteoblasts (von Eiff et al., 2001; Strobel et al., 2016). Another pivotal attribute facilitating this phenomenon is their capability to modulate metabolic processes and virulence characteristics (Kriegeskorte et al., 2014; Proctor et al., 2014). Hypermutator SCVs, characterized by higher mutation frequencies than wild-type strains and isolated especially from cystic fibrosis (CF) patients (Oliver et al., 2000; Prunier et al., 2003), have also been associated with antibiotic resistance (Schaaff et al., 2003; Besier et al., 2008) and biofilm formation (Morelli et al., 2015).

To date, research has focused on staphylococcal SCVs, while SCVs of Gram-negative bacteria have been investigated in only a few studies and case reports (Proctor et al., 2006). Although the formation of small colonies in K. pneumoniae was noticed during resistance studies on cephalosporins in the mid-1960s (Benner et al., 1965), this issue has not received sufficient attention and has not been studied in detail. The first clearly defined SCV of K. pneumoniae (SCV-Kp) in the literature was obtained by in vitro exposure to gentamicin (Musher et al., 1979). SCV-Kp were also isolated from a patient treated with aminoglycoside antibiotics (Murray and Moellering, 1982). Smaller, non-mucoid colonies were obtained as the result of a conjugation-induced mutation in an outer membrane protein of a hypervirulent K. pneumoniae isolate (Srinivasan et al., 2012). Another study showed that biofilm-forming K. pneumoniae developed heteroresistance to colistin by presenting slow-growing SCV-Kp (Silva et al., 2016).

Here, we report on K. pneumoniae isolates displaying 14 different resisto- and morphotypes obtained from an immunocompetent male patient who had sustained a traumatic injury caused by shrapnel shell fragments. The isolates comprise an initial, largely susceptible K. pneumoniae isolate with typical morphological characteristics from the patient's urine specimen, together with 13 additional phenotypes from urine and tissue samples showing different combinations of resistance and morphological characteristics, including K. pneumoniae SCV phenotypes.

Patient data

Only limited information could be obtained about the period between the patient's acetabular and femoral head shrapnel injury, sustained in Ukraine in March 2022, and his transfer to our orthopedic service in July 2022; he had undergone hip prosthesis surgery at an external center in the meantime. Treatment of the fracture-related joint infection in our hospital continued through November 2022. Antibiotic administration during this period included piperacillin/tazobactam from July to October 2022, trimethoprim/sulfamethoxazole from July to August 2022, cefiderocol from August to November 2022, and colistin from October to November 2022. Daptomycin was introduced into the treatment protocol in October 2022 upon detection of Staphylococcus epidermidis in intraoperatively obtained hip tissue samples and on a central venous catheter tip, and was continued until the patient's discharge. No other bacteria were isolated from clinical samples during this period. Subsequently, a planned course of post-discharge antibiotic suppression therapy with doxycycline for three months was initiated. The first identification of carbapenem-resistant K. pneumoniae (CRKP) occurred in July 2022, followed by the initial detection of SCV-Kp in September 2022. We therefore decided to aggregate and systematically assess the entirety of the K. pneumoniae strains isolated from the patient.

Strain identification

The urine sample obtained from the patient was quantitatively inoculated onto a Columbia agar plate with 5% sheep blood (BD Diagnostics, Heidelberg, Germany) and a MacConkey II agar plate (BD Diagnostics) using a 10 µl disposable sterile loop; the plates were incubated for 48 hours. Tissue samples collected during surgery were inoculated onto Columbia agar plates with 5% sheep blood, MacConkey II agar plates, and Mueller Hinton Chocolate agar plates (all from BD Diagnostics), which were incubated under capnophilic conditions for up to seven days. The remaining tissue material was inoculated onto Schaedler agar and into BBL Fluid Thioglycollate media (both from BD Diagnostics) and incubated for up to 14 days under anaerobic and capnophilic conditions, respectively. Preliminary characterization of each phenotype was based on colony morphology and on minimal inhibitory concentration (MIC) results for the antibiotics included in the VITEK® 2 AST card for Enterobacterales (bioMérieux SA, Marcy l'Étoile, France), interpreted according to EUCAST criteria. All K. pneumoniae strains isolated from the patient's various specimens between July and December 2022 were identified by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) using the MALDI Biotyper® sirius system (Bruker Daltonics, Bremen, Germany) with MBT Biotargets 96 (Bruker Daltonics). The presence of carbapenemase-encoding genes was verified by a loop-mediated isothermal amplification (LAMP)-based assay (eazyplex®, AmplexDiagnostics, Gars-Bahnhof, Germany).
pneumoniae isolate with typical morphological characteristics isolated from the patient's urinary specimen.From the urine and tissue samples, 13 additional phenotypes with different combinations of resistance and morphological characteristics including K. pneumoniae SCV phenotypes were isolated. Patient data Sufficient information could not be obtained regarding the period from the patient's first acetabular and femoral head shrapnel-caused war injury in Ukraine in March 2022, where he underwent hip prosthesis at an external center before his transfer to our orthopedic service in July 2022.Fracture-related joint infection treatment in our hospital continued through November 2022.The administration of antibiotics during this period included piperacillin/tazobactam from July to October, 2022, trimethoprim/sulfamethoxazole from July to August, 2022, cefiderocol from August to November, 2022, and colistin from October to November, 2022.Daptomycin was introduced into the treatment protocol starting from October 2022 upon detection of Staphylococcus epidermidis from intraoperatively obtained hip tissue samples and central venous catheter tip, and continued until the patient's discharge.No other bacteria were isolated from clinical samples during this period.Subsequently, a planned course of post-discharge antibiotic suppression therapy with doxycycline for three months was initiated.The first identification of carbapenem-resistant K. pneumoniae (CRKP) occurred in July 2022, followed by the initial detection of SCV-Kp in September 2022.Therefore, we decided to aggregate and systematically assess the entirety of K. pneumoniae strains isolated from the patient. Strain identification The urine sample obtained from the patient was quantitatively inoculated onto a Columbia agar plate with 5% sheep blood (BD Diagnostics, Heidelberg, Germany) and a MacConkey II-Agar plate (BD Diagnostics) using a 10 µl disposable sterile loop.The plates were then incubated for 48 hours.Tissue samples collected during surgery were inoculated onto Columbia agar plates with 5% sheep blood, MacConkey II-Agar plates, and Mueller Hinton Chocolate agar plates (all from BD Diagnostics).These plates were incubated under capnophilic conditions for up to seven days.The remaining tissue material was inoculated onto Schaedler agar and into BBL Fluid Thioglycollate media (both from BD Diagnostics) and incubated for up to 14 days under anaerobic and capnophilic conditions, respectively.Preliminary characterization of each phenotype was grounded in colony morphology and minimal inhibitory concentration (MIC) results for antibiotics encompassed within the VITEK® 2 AST card specific to Enterobacterales (bioMeŕieux SA, Marcy l'E ́toile, France) according to EUCAST criteria.All K. pneumoniae strains, isolated from various patient's specimens during the period from July to December 2022, were identified by matrix-assisted laser desorption/ ionization time-of-flight mass spectrometry (MALDI-TOF MS) utilizing the MALDI Biotyper® sirius system (Bruker Daltonics, Bremen, Germany) with MBT Biotargets 96 (Bruker Daltonics).The presence of carbapenemase-encoding genes was verified by a loop-mediated isothermal amplification (LAMP)-based assay (eazyplex®, AmplexDiagnostics, Gars-Bahnhof, Germany). 
Characterization of the phenotypes @Sequential subcultures of all phenotypic variants were carried out on various agar plates (including Columbia agar + 5% sheep blood, MacConkey agar from BD, and CHROMID® CPS® Elite agar from bioMeŕieux) to observe whether changes in colony morphology occurred and SCVs remained stable, followed by meticulous analysis of generated phenotypic profiles. In order to determine colony sizes, each phenotype was inoculated onto 5% sheep blood agar plates in triplicate on different days.After overnight incubation at 35 ± 1°C in ambient air, the diameters of ten colonies of each phenotype were measured and mean values were determined.Additionally, colony morphology in different phenotypes was assessed using the stereo zoom microscope Axio Zoom.V16, equipped with the objective Plan Z 1.0x/0.25 and the Axiocam 305 camera (Zeiss, Oberkochen, Germany).After Gram staining, single cells from different phenotypes were observed in transmission light by the Axio Imager.Z2m microscope with the oil immersion objective Plan-APOCHROMAT 100x/1.4 and Axiocam 305 camera (Zeiss). Antimicrobial susceptibility testing In addition to the initial VITEK® 2 AST, the MICs of a standardized set of antibiotics (Table 1) were determined by the broth microdilution (BMD) method using cation-adjusted Mueller-Hinton broth (CAMHB; Micronaut-S 96-well microtiter plates, Merlin, Bornheim-Hersel, Germany), and for cefiderocol using iron-depleted CAMHB (UMIC®, Merlin, Bornheim-Hersel, Germany), as recommended by ISO 20776-1, the European Committee on Antimicrobial Susceptibility Testing (EUCAST), and the Clinical and Laboratory Standards Institute (CLSI) guidelines (CLSI, 2018;Standardization, 2019;EUCAST, 2023b).The results were observed following 18 ± 2 hours of incubation at 35 ± 1°C in ambient air.All tests were conducted in triplicate on different days, and median MIC values were computed for analysis.Escherichia coli ATCC 25922, E. coli ATCC 35218, K. pneumoniae ATCC 700603, and Pseudomonas aeruginosa ATCC 27853 were used as quality control (QC) strains, and their results were within the QC range throughout the study.EUCAST Clinical Breakpoint Tables v. 13.1 were used for MIC interpretation (EUCAST, 2023a). DNA isolation and sequencing After overnight growth on blood agar plates at 37 °C, ten colonies were randomly selected and suspended in 1.5 mL tubes (Carl Roth, Karlsruhe, Germany) with 1 mL of phosphate buffered saline.Total DNA was extracted using the MasterPure DNA Purification kit for Blood, v. 2 (Lucigen, Middleton, WI, USA) according to the manufacturer's instructions.Quantification of isolated DNA was performed with the Qubit 4 fluorometer and the dsDNA HS Assay kit (Thermo Fisher Scientific, Waltham, MA, USA).DNA was sent to SeqCenter (Pittsburgh, PA, USA), where sample library preparation using the Illumina DNA Prep kit and IDT 10bp UDI indices was performed.Subsequently, libraries were sequenced on an I l l u m i n a N e x t S e q 2 0 0 0 , p r o d u c i n g 2 x 1 5 1 b p r e a d s .Demultiplexing, quality control and adapter trimming at the sequencing center was performed with bcl-convert v. 3.9.3(https://support-docs.illumina.com/SW/BCL_Convert/Content/SW/FrontPages/BCL_Convert.htm). Confirmation of clonality Trimmed sequencing reads of all isolates were mapped against isolate 1-A with snippy v. 4.6.0(https://github.com/tseemann/snippy) and the SNP distance matrix calculated with snp-dists v. 
Results

Overall, 14 distinct phenotypes were determined (Table 1). From the urine, two phenotypes (1-A and 1-B) were isolated; both exhibited normal colony size and a glistening surface but differed in colony color, whitish versus grey. All other phenotypes (n = 12) were isolated from tissue specimens. Isolates 1-A, 2-A, 3-A, 4-B and 5-B; isolates 1-B, 2-B, 3-B, 4-C and 5-C; and isolates 4-D and 5-D displayed identical morphological attributes within each group, characterized by whitish, glistening, smooth colonies (Figure 1B), grey, glistening, smooth colonies (Figure 1C), and grey, dry, rough colonies (Figure 1D), respectively (Supplementary Figure S1). These strains showed a normal colony size of 2.4 mm on average (range, 1-5.5 mm). The isolates displaying the SCV phenotype, 4-A and 5-A, exhibited similar morphological characteristics, with colony sizes smaller than 0.5 mm (Figure 1, Supplementary Figure S1). No discernible variation in colony clustering was observed among the various agar plates. There were no obvious differences in cell size or shape between the phenotypes, except for the grey, dry, rough phenotype 5-D, whose cells were clearly elongated (Supplementary Figure S2).

Initially, largely antibiotic-susceptible K. pneumoniae phenotypes exhibiting whitish and grey colony morphologies on Columbia agar plates were isolated from the urine sample. Following antibiotic treatment, MDR K. pneumoniae strains displaying normal colony size were isolated from tissue samples, again characterized by whitish or grey colony formation. Subsequently, SCVs of K. pneumoniae were isolated from tissue samples. Subcultivation of different SCV colonies consistently yielded a division into four distinct colony morphotypes: one SCV phenotype that resembled the initial SCV, along with three normal-sized phenotypes distinguished by variations in colony color and visual attributes. While the normal-sized phenotypes were stable upon each round of re-cultivation, the SCV isolates were unstable and recurrently diverged into the four phenotypes described above. We have designated the emergence of these multiple phenotypes as "hyper-splitting". Despite minor variations in MIC values, these "hyper-splitting" phenotypes exhibited multidrug resistance (Table 1).

Except for isolates 1-A and 1-B, all isolates were resistant to the tested carbapenems. During routine diagnostics, isolate 2-B was initially found to be carbapenem-resistant by VITEK® 2 AST and to harbor the blaOXA-48 gene by LAMP. After subcultivation of this isolate for MIC determination, this resistance disappeared and the isolate became susceptible to all tested beta-lactam antibiotics except piperacillin; we assume that a mobile genetic element harboring the blaOXA-48 gene was lost upon subcultivation. Only isolates 1-A and 1-B were susceptible to piperacillin, and only isolate 4-B was not resistant to the cephalosporins tested. Interestingly, only isolates 4-A and 5-A, which demonstrated the SCV phenotype, were resistant to amikacin and trimethoprim-sulfamethoxazole. Another remarkable finding was the observed increase in the MIC values of cefiderocol and trimethoprim-sulfamethoxazole over time (Table 1).
Whole-genome sequence (WGS) analysis revealed that all isolates belonged to sequence type (ST) 147. The lipopolysaccharide antigen (O) locus was O1/O2v1 and the capsule biosynthesis (KL) locus was KL64 for all isolates except isolate 4-D, which could not be assigned because it lacked most genes of this locus. Isolates 1-A, 1-B and 2-B showed a lower Kleborate resistance score than the other isolates (0 vs. 2). A resistance score of 0 indicates that the isolate carries no genes for extended-spectrum beta-lactamases (ESBL) or carbapenemases, whereas a score of 2 correlates with the presence of carbapenemase genes without colistin resistance genes (Lam et al., 2021). In accordance with the resistance scores, we detected several beta-lactamase genes, such as blaSHV-11, blaTEM-1 and blaOXA-9, the ESBL genes blaCTX-M-15 and blaOXA-1, and the carbapenemase genes blaNDM-1 and blaOXA-48. blaSHV-11 was found in all isolates, whereas blaTEM-1 and blaOXA-9 were present in all isolates except 1-A and 1-B. However, blaCTX-M-15 was not found in isolate 4-A. In isolate 4-B, the blaCTX-M-15 and blaNDM-1 genes were initially detected by WGS; after subcultivation, however, a discrepancy between AST and WGS results was observed, and re-testing by LAMP at this later time point revealed the loss of both genes (Table 1). Genes associated with sulphonamide (sul1) and chloramphenicol (catB3) resistance were also detected in all isolates except 1-A, 1-B and 2-B. Note that we did not detect any common cefiderocol resistance genes. The isolates exhibited clonality, as emphasized by the low number of SNPs among them (Supplementary Tables S1, S2). In particular, isolates from the same time point showed no difference in the core genome alignment (5,360,988 bp), with the exception of 2-A and 2-B (six SNPs) and 5-D (one additional SNP compared with 5-A to 5-C). The largest distance, 17 SNPs, was between 2-A and 5-D (Supplementary Table S1).
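The SNP distances underpinning the clonality statement are, in essence, pairwise Hamming distances over the core-genome alignment. The toy sketch below illustrates the computation on invented sequences; dedicated tools such as snp-dists additionally handle ambiguous bases, gaps and megabase-scale alignments efficiently.

```python
from itertools import combinations

def snp_distance(a, b):
    """Hamming distance between two equal-length aligned sequences,
    counting only positions where both carry an unambiguous base."""
    return sum(1 for x, y in zip(a, b)
               if x != y and x in "ACGT" and y in "ACGT")

# Toy core-genome alignment (real alignments are megabases long):
alignment = {
    "1-A": "ACGTACGTAA",
    "2-A": "ACGTACGTAT",
    "5-D": "ACGAACGTAT",
}
for (n1, s1), (n2, s2) in combinations(alignment.items(), 2):
    print(n1, n2, snp_distance(s1, s2))
# 1-A 2-A 1 / 1-A 5-D 2 / 2-A 5-D 1
```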
Basically, SCVs have been defined as a subpopulation characterized by distinct phenotypic properties, such as atypical colony morphologies including a reduced colony size (Proctor et al., 1995). Their decreased growth rate is thought to contribute to their inherent resistance, given that the decelerated growth dynamics potentially hinder the effectiveness of antibiotics geared towards rapidly proliferating cell populations (Proctor and von Humboldt, 1998). Furthermore, this phenomenon concurrently signifies decreased metabolic activity, which may engender modifications in cell wall permeability, drug uptake, and the modulation of efflux pump expression (Mitsuyama et al., 1997).

For electron transport chain-defective staphylococcal SCVs, a lower efficacy of aminoglycosides, which are known to be taken up through the electrical potential across the cytoplasmic membrane (ΔΨ), was demonstrated and is attributable to a low ΔΨ (Baumert et al., 2002). These alterations could collectively contribute to enhanced resistance patterns. In this study, we observed an increase in the MIC values of amikacin, cefiderocol, and trimethoprim-sulfamethoxazole in the isolates recovered over time. This MIC increase was especially pronounced for amikacin in SCV phenotypes. Moreover, most antibiotics penetrate host cells poorly, so the concentrations required to kill intracellularly persistent SCVs cannot be achieved (Proctor et al., 2006).

SCVs, known for their inducible formation through in vitro processes involving various agents, including antibiotics (Benner et al., 1965), have exhibited a propensity for increased persistence and adaptability when confronted with challenging environments (Li et al., 2016). An enhanced ability to form biofilms on biotic and abiotic surfaces has been shown for SCVs of different bacterial species (Haussler et al., 2003b; Webb et al., 2004; Al Laham et al., 2007; Allegrucci and Sauer, 2007; Millette et al., 2023). The substantial implication of SCVs extends to their involvement in biofilm development, as biofilms effectively shield bacteria from harsh host environments, thereby complicating the elucidation of drug resistance mechanisms within biofilm structures (Craft et al., 2019). Biofilms not only confer protection against host immune defenses but also serve as reservoirs for persistent infections and recurrent episodes (Mirani et al., 2015). The impact of the SCV phenotype on biofilm formation in Klebsiella remains to be elucidated in further studies.

Furthermore, the emergence of SCVs could plausibly be due to selection pressure from antibiotic regimens or other host-associated factors, e.g., host cationic peptides. Consistent with the case that was the subject of our study, the higher frequency of SCVs in isolates from chronic and recurrent infections compared to acute infections suggests a potential role for these variants in evading host immune responses and antimicrobial treatments (Proctor et al., 2006). In the context of our study, the emergence of SCVs after the initiation of cefiderocol treatment, while the patient was already undergoing antibiotic therapy, could be construed as a form of in vivo or in-host induction.
The third noteworthy finding from our study underscores the inherent instability of SCVs. This dynamic interplay between stable and unstable SCVs is still poorly understood, and its elucidation may contribute to a deeper understanding of their role in infection in general and in persistence phenomena in particular (Becker et al., 2006). Despite comprehensive explorations largely focusing on staphylococci, a lack of investigations concerning Klebsiella spp. persists and requires attention.

The observed instability among SCVs, combined with distinct antibiotic susceptibility profiles across phenotypes, increases the significance of investigating SCV plasticity (Proctor et al., 1995). Stable SCVs represent a long-term adaptation strategy, whereas their unstable counterparts may arise as stress-induced variants that result from rapid adaptation to fluctuating environments (Tuchscherr et al., 2010; Tuchscherr et al., 2011; Tuchscherr et al., 2015). This inherent instability potentially serves as a mechanism for evading host immune responses and circumventing antibiotic interventions (Tuchscherr et al., 2015). Furthermore, the involvement of epigenetic modifications, including alterations in DNA methylation patterns, could significantly influence SCV stability (Guerillot et al., 2019). In addition, regulatory systems, such as two-component systems and quorum sensing, play a crucial role in SCV formation by modulating bacterial behavior and adaptation. Disruption or dysregulation of these systems could lead to the emergence of SCVs with altered phenotypic properties (Pader et al., 2014). Due to their instability, slow-growing SCVs may generate mutants that exhibit a faster growth rate than usual (Brandis et al., 2017). In instances of reversion to the wild type, rapidly growing mutant revertants may demonstrate either the loss or preservation of antibiotic resistance (Brandis et al., 2017).

A high mutation rate might favor the emergence of SCVs (Schaaff et al., 2003) and may also explain the emergence of antibiotic resistance as a result of antibiotic selective pressure and the adaptation of hypermutable strains in patients, especially CF patients (Prunier et al., 2003). CF-like chronic infections have been shown to specifically contribute to the development of bacterial mutations (Smith et al., 2006). Hypermutation could result in a subpopulation of bacteria that temporarily does not grow, thus leading to persistence (Witzany et al., 2022). Additionally, an increased prevalence of mutator bacterial strains with a deficient DNA mismatch repair (MMR) system has been detected in CF patients, in whom these strains may serve as a reservoir for mutations (Mena et al., 2008). To the best of our knowledge, we were unable to identify any instance in the available literature wherein a solitary SCV colony has given rise to four distinct colonies exhibiting disparate morphologies. Accordingly, we suggest the designation "phenotypic hyper-splitting" for this distinctive phenomenon.

In this study, we described unprecedented phenotypic attributes and focused primarily on in vitro experiments. Therefore, the clinical relevance of our findings necessitates validation through animal models and clinical sample analyses. In this context, macrophage and neutrophil assays would be valuable for assessing both the extent of the immune response and the presence of persistent cells. Moreover, the determination of the auxotrophism (Kriegeskorte et al., 2014; Becker, 2023) of K.
pneumoniae SCVs and of the molecular mechanisms that drive SCV formation and the resulting antibiotic resistance in this species requires further investigation. Integrating a comprehensive range of approaches encompassing genomics, transcriptomics, and proteomics, the utilization of experimental evolutionary models can yield valuable insights into the genetic determinants and regulatory networks orchestrating SCV phenotypes.

The genomic analysis conducted in this study revealed clonality among all 14 isolates. Further exploration is warranted to uncover the intricate molecular mechanisms underlying phenotypic hyper-splitting and to elucidate the potential pathogenic implications of this phenomenon. To better understand the formation of the SCV phenotype, especially in Gram-negative pathogens, efforts need to be intensified (i) to improve the detection and characterization of SCVs recovered from samples and (ii) to elucidate their clinical impact.

FIGURE 1 Columbia blood agar plates showing the different colony morphotypes of the K. pneumoniae isolates, comprising regular-sized colonies (wild type) with glistening whitish (B) and grey (C) appearance, and dry and rough grey colonies (D), respectively, as well as tiny grey and whitish colonies displaying the SCV phenotype (A). Panel (A) also shows the hyper-splitting phenomenon of the SCV phenotype into the colony morphotypes shown in panels (B-D).

TABLE 1 Colony morphology and antimicrobial susceptibility characteristics of the 14 phenotypes of the Klebsiella pneumoniae strain.
Progress in methods for rare variant association

Empirical studies and evolutionary theory support a role for rare variants in the etiology of complex traits. Given this motivation and the increasing affordability of whole-exome and whole-genome sequencing, methods for rare variant association have been an active area of research for the past decade. Here, we provide a survey of the current literature and developments from the Genetic Analysis Workshop 19 (GAW19) Collapsing Rare Variants working group. In particular, we present the generalized linear regression framework and associated score statistic for the 2 major types of methods: burden and variance components methods. We further show that by simply modifying weights within these frameworks we arrive at many of the popular existing methods, for example, the cohort allelic sums test and the sequence kernel association test. Meta-analysis techniques are also described. Next, we describe the 6 contributions from the GAW19 Collapsing Rare Variants working group. These included the development of new methods, such as a retrospective likelihood for family data, a method using genomic structure to compare cases and controls, a haplotype-based meta-analysis, and a permutation-based method for combining different statistical tests. In addition, one contribution compared a mega-analysis of family-based and population-based data to meta-analysis. Finally, the power of existing family-based methods for binary traits was compared. We conclude with suggestions for open research questions.

Background

Rare variants have increasingly become a focus in studies of complex traits. There are many reasons for this increasing interest. Accessibility, both in cost and technology, of next-generation sequencing has led to the discovery of a plethora of rare variants. Nelson et al. [1] estimated that 95 % of variants were rare, with a minor allele frequency (MAF) of less than 0.5 %. This is in stark contrast to previous research suggesting that nearly one-third of variants have a frequency below 5 % [2]. Furthermore, evolutionary theory suggests that deleterious variants are selected against and thus should be rare [3]. Recent research has supported this theory, observing that a large proportion of deleterious variants are indeed rare [4, 5]. Despite the effects of this purifying selection, the 1000 Genomes Project estimates that individuals carry 76 to 190 rare nonsynonymous variants predicted to be deleterious [6].

A more contentious argument for focusing genetic research on rare variants pertains to the so-called phenomenon of "missing heritability". Genome-wide association studies (GWAS) have successfully identified numerous common variants associated with complex traits; however, the common variants tend to have relatively small effects and explain only a fraction of the overall heritability [7]. Human height serves as an excellent example, with estimates of heritability near 80 %. GWAS variants with genome-wide significant associations explain only approximately 5 % of the overall variation in height, whereas models that use all high-quality GWAS variants with a MAF of more than 1 % explain approximately 45 % of the variation [8, 9]. Even though the latter is a substantial improvement, it is still well shy of 80 %. There is emerging evidence that rare variants are involved in complex disease, including Alzheimer disease [10], lipids and coronary artery disease [11], inflammatory bowel disease [12], prostate cancer [13], and many others [11, 14-16].
Despite these encouraging results, many studies continue to be underpowered to detect association to disease-associated rare variants. Continued development of methods is needed to help increase the power to detect these associations. This is particularly true given that tests of individual rare variants are underpowered without exceptionally large sample sizes [14]. Combining variants based on a gene or region is a popular strategy. Other strategies for improving power for detecting rare variants include using family samples or isolated populations to increase the frequency of a variant that is rare in the general or nonisolate population [17, 18]. Ascertaining phenotypic extremes can also increase the likelihood of sampling individuals with disease-associated rare variants, thus increasing the power of rare variant tests [19, 20]. Finally, incorporating biological knowledge and genomic annotation to exclude, or downweight, variants in analyses is also an effective strategy, focusing tests on variants more likely to be deleterious [21].

Here, we provide a summary of the current literature with respect to the association of rare variants and ways to increase the power of these tests. We then provide results from the Collapsing Rare Variants Working Group of Genetic Analysis Workshop 19 (GAW19) and conclude with recommendations and open problems.

Current literature

Although there is no formal definition for a rare variant, variants with a MAF between 5 % and 50 % are generally considered common. Variants with a MAF in the range of 1 % to 5 % [15] or 0.5 % to 5 % [22] are considered low frequency or less common. Rare variants have a MAF falling below these ranges, whereas a private variant is specific to probands and their relatives.

Basic association models for collapsing rare variants

The 2 major types of methods for collapsing rare variants within a meaningful genetic region, such as a gene, consist of burden tests and variance component tests. A general formula for the burden of rare variants within a region is shown in Eq. (1):

B_i = Σ_{m=1}^{M} w_m G_{i,m}    (1)

where G_{i,m} is the genotype (minor allele count) of variant m for individual i and w_m is the weight given to variant m over the M variants in the region. Madsen and Browning [27] proposed a weight that increases as MAF decreases, while Asimit et al. [24] weighted genotypes by their quality. Even though several tests were first developed outside of the regression framework [23, 25, 27], nearly all can be easily implemented in a generalized regression framework (Eq. 2) by incorporating B_i as a covariate in the regression model. This greatly generalizes the statistical framework, allowing for many types of outcome variables (eg, continuous, binary, survival, etc.) and the incorporation of additional possible confounders and covariates:

f(μ_i) = γ_0 + γ'X_i + βB_i    (2)

where f(μ) is a function that links a linear combination of the predictors and the mean, μ, of the outcome (eg, disease or trait); γ_0 is the intercept; γ' is a vector of parameters for the covariates, X; β is the regression parameter for the burden of rare variants within a region, B; and bolded symbols denote a vector. For a quantitative trait, f(μ) = μ is used within a linear regression framework, and for a qualitative trait, f(μ) = logit(μ) is typically used within a logistic regression framework. Although several test statistics can be implemented within the generalized regression format, we focus on the score statistic, U, testing whether β = 0. The burden score statistic is shown in Equation (3):

U_B = Σ_{m=1}^{M} w_m S_m    (3)

which, after standardization by its variance, has a chi-square distribution with 1 degree of freedom (df) under the null hypothesis of no association.
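To make Eqs. (1)-(3) concrete, the following is a minimal numerical sketch (not from the original paper) for a quantitative trait with no covariates, so that the fitted means reduce to the trait mean; all data are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, M = 1000, 20                     # individuals and variants in the region
maf = rng.uniform(0.001, 0.05, M)   # rare-variant allele frequencies
G = rng.binomial(2, maf, (n, M))    # genotype matrix of minor-allele counts
y = rng.normal(size=n)              # quantitative trait (null model here)
w = np.ones(M)                      # equal weights for simplicity

B = G @ w                           # Eq. (1): per-individual burden
resid = y - y.mean()                # y_i - mu_hat_i with no covariates
S = G.T @ resid                     # marginal score statistics S_m
U_B = w @ S                         # Eq. (3): weighted sum of marginal scores

# Standardized form compared to a chi-square(1) distribution:
var_UB = resid.var() * np.sum((B - B.mean()) ** 2)
print(U_B**2 / var_UB)
```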
Note that the burden score statistic can be written as a weighted sum of the marginal score statistics, S_m, for each genetic variant, where

S_m = Σ_{i=1}^{n} G_{i,m} (y_i − μ̂_i)

for n individuals, with μ̂_i being the estimated mean for individual i, which includes the effects of covariates as estimated through generalized regression. As marginal score statistics can first be calculated on each variant, this alternative form lends itself nicely to extensions such as meta-analysis, as described later.

Instead of calculating the burden of variants within a genetic region, variance component tests (eg, the sequence kernel association test [SKAT] [29], C-alpha [30] and SumSqU [31]) evaluate the similarity of the variants within the region. Simply, we expect the distribution of variants to be more similar for subjects with similar trait values than for subjects with different trait values. As with the burden test, a general equation for the score statistic of the variance component test can be written and is shown in Eq. (4):

U_VC = Σ_{m=1}^{M} w_m S_m²    (4)

where S_m is the previously defined marginal score statistic. U_VC follows a mixture of chi-square distributions. Because the marginal score statistic is squared, both negative and positive effects can be included in the statistic. This is a notable advantage of variance component tests over burden tests, for which effects in different directions can cancel each other out. For both C-alpha [30] and SumSqU [31], the weights equal 1. C-alpha is further restricted to scenarios where the phenotype is dichotomous and there are no covariates. The SKAT statistic [29] is identical to U_VC, accommodating a variety of weights; as such, C-alpha and SumSqU are special cases of SKAT.

Burden tests tend to be most powerful when the majority of variants have an effect in the same direction [25, 29, 32]. Variance component tests are more powerful when the variants have different effects (ie, many variants with no effect or effects in opposite directions) [29, 32]. To combine the different strengths of the burden and variance component tests, Lee et al. [32] developed an optimal unified approach called SKAT-O, where the burden and SKAT tests are combined with a weighting parameter, ρ (Eq. 5):

U_ρ = ρ U_B² + (1 − ρ) U_VC, 0 ≤ ρ ≤ 1    (5)

Note that the optimal test is equivalent to the burden test and SKAT (ie, the variance component test) when ρ is 1 or 0, respectively. Others have explored combining the burden and variance component tests as well [33, 34]. Finally, more recently, the EC test [35] was developed under a Bayesian framework with an alternative hypothesis prior that gives a higher probability to only 1 causal variant per genetic region.

Here, we provide a basic overview of general methods; others have done this as well in more detail [22]. In addition, Derkach et al. [36] have provided an excellent review and comparison (both empirical and theoretical) of existing methods. Important conclusions and results include: weighting variants inversely to the MAF does not always increase power, even under scenarios where rare variants were simulated to have a larger effect; as the sample size increases, the variance component statistic tends to have higher power than the burden statistic; and uniformly optimal tests are difficult to achieve in practice.
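Continuing the numerical sketch above, Eq. (4) and the ρ-combination in Eq. (5) can be written as follows; computing p-values would require the mixture chi-square null distribution, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, M = 1000, 20
G = rng.binomial(2, rng.uniform(0.001, 0.05, M), (n, M))
y = rng.normal(size=n)
w = np.ones(M)

S = G.T @ (y - y.mean())            # marginal scores, as defined above
U_B = w @ S                         # burden component (Eq. 3)
U_VC = w @ (S ** 2)                 # Eq. (4): variance component statistic

def U_rho(rho):
    """Eq. (5): rho = 1 recovers the burden test, rho = 0 recovers SKAT."""
    return rho * U_B**2 + (1.0 - rho) * U_VC

# SKAT-O style: evaluate the combined statistic over a grid of rho values.
stats = [U_rho(r) for r in np.linspace(0.0, 1.0, 11)]
```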
Incorporating additional information

There have been many extensions of the basic frameworks and models to include and account for additional information. Various weights can be defined based on the MAF [27], the quality of genotype calls [24], previous evidence for association, direction of effect (eg, aSum [28]), evolutionary conservation (eg, phastCons [phylogenetic analysis with space/time models conservation] [37] and GERP [genomic evolutionary rate profiling] [38]), probability of being functional, and likelihood of being deleterious. There exist several algorithms/software packages that predict whether a variant is likely to be deleterious, including CONDEL (consensus deleteriousness) [39], SIFT (sorting intolerant from tolerant) [40], PolyPhen (polymorphism phenotyping) [41], CADD (combined annotation-dependent depletion) [42], and several others (see Castellana and Mazza [43]). Although the predictions of these programs can differ greatly [39, 43, 44], variants that have consistent predictions of being either benign or deleterious across all programs may be more likely to be truly benign or deleterious. Variants can be removed entirely from the model by using a weight of 1 for variants fulfilling a requirement or threshold and a weight of 0 for those not fulfilling it. It is often difficult to know the true or best threshold to use when determining which variants to include in the model. Adaptive methods implement the region-based methods over a variety of thresholds (such as various MAF thresholds) and then adjust for multiple comparisons using permutation [28, 45].

It is worth emphasizing that the proportion of variants in the collapsing test with association to the outcome is directly related to the power [46, 47]. As such, choosing which variants to include is extremely important. When choosing variants, various factors should be considered, such as the likely penetrance of the variants, the prevalence of the disease or trait, and the predicted deleteriousness of the variants. As discussed in the previous paragraph, instead of weighting variants, only a subset of variants can be kept, such as those predicted to be deleterious or to result in loss of function. Once a gene or region has been identified as being associated with a disease or trait, an important next step is to identify the causal variants within the region. Experimental studies to determine the functional effects are often costly, both in effort and money. In a recent paper, Ionita-Laza et al. [48] proposed and compared 2 methods to identify likely causal variants within gene regions.

Unlike rare variants, the parameter estimation for common variants is generally stable. Including disease-associated common variants within a gene region could help to identify genetic regions associated with a trait, as well as to help determine whether a collapsed set of rare variants produces an independent signal above that from the common variants. Determining which common variants to include in the model is not always straightforward, as too many variants will dilute the signal and decrease the power by using up valuable degrees of freedom, while including too few variants may miss a signal altogether. Penalized regression methods, such as LASSO (least absolute shrinkage and selection operator), have been proposed [49], as well as an extension to the SKAT framework that incorporates common variants [50]. More recently, methods have been developed to compare the observed number of filtered variants within a genetic region to that expected genome- or exome-wide [51] or expected by an estimated mutation rate [52]. These methods are most often implemented in a case-only framework and are thus sensitive to the estimates of comparison (eg, genome-/exome-wide averages, mutation rates, etc.). These methods are discussed further below.
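As a concrete illustration of the weighting schemes discussed in this section, the snippet below computes a MAF-based weight from a Beta density (the Beta(1, 25) parameterization is the common SKAT default [29]), an inverse-MAF weight in the spirit of Madsen and Browning [27], and a hard 0/1 inclusion weight from a hypothetical deleteriousness prediction; the annotation values are invented for illustration.

```python
import numpy as np
from scipy.stats import beta

maf = np.array([0.001, 0.005, 0.01, 0.03, 0.05])

# Beta-density weight: largest for the rarest variants.
w_beta = beta.pdf(maf, 1, 25)

# Inverse-MAF-style weight in the spirit of Madsen and Browning [27].
w_mb = 1.0 / np.sqrt(maf * (1.0 - maf))

# Hard 0/1 filter from a (hypothetical) consensus deleteriousness call;
# a weight of 0 removes the variant from the collapsed statistic.
predicted_deleterious = np.array([True, False, True, True, False])
w_filter = predicted_deleterious.astype(float)
```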
Study design considerations

Although sequencing costs continue to decline, the cost of sequencing continues to impose a limit on the number of samples that can be sequenced. There is increasing evidence that the power to detect an association to rare variants is low regardless of the type of test or statistic used [46, 47]. As such, study design is of utmost importance and includes, among others, family-based, trio, case-only, case-control, and population cohort designs. Study design affects the power and generalizability of the study. Certain study designs may increase the power to detect an association in certain situations while decreasing the ability to detect other genetic associations. For instance, sampling families with a particular rare disease increases the likelihood of observing multiple copies of the causal rare variant, thus increasing the power to detect an association to that particular variant [53]. However, this sampling framework may reduce the number of detected genetic variants, making it more difficult to discover the variety of genetic variants that would be seen when sampling the general population. Recently, several methods have been developed or extended to accommodate related samples [54, 55]. Probably the most widely used of these is famSKAT (family-based sequence kernel association test), developed by Chen et al. [56], which extends SKAT by using a linear mixed effects model to account for the family structure in tests of quantitative traits. For GAW19, Wang et al. [57] studied the type 1 error and power of current family-based methods for rare variant association tests with dichotomous phenotypes. It is also important to note that valid permutation to assess significance in the context of dependent samples (such as with related samples or population stratification) is not straightforward. Others have explored permutation in this setting and have proposed modified permutation procedures [58].

For extremely rare, highly penetrant disorders, researchers have had success sequencing a set of cases [59, 60] or trios where the offspring has an extremely rare disorder and the parents are unaffected [61-63]. Specific software exists for detecting de novo mutations within trio designs [64]. For more complex and common diseases or traits, study designs such as case-control or population-cohort designs are often used [21, 65, 66]. Although many case-control studies are retrospective, few incorporate the retrospective ascertainment of the sampling design into the statistical framework. Such methods were included in GAW19 contributions [57, 67]. Unfortunately, detecting rare genetic associations in complex diseases has continued to prove difficult, and much larger sample sizes are needed to achieve adequate power. Some study designs use extreme sampling, either of cases [68] or of quantitative phenotypes [21], to increase power. For complex traits, extreme sampling can lead to an increase in the number of rare variants detected and subsequently an increase in power [69]. However, not accounting for the trait-dependent sampling when analyzing quantitative traits can lead to biased estimates, inflated type 1 error, and even a decrease in power [70]. In 2013, Barnett et al. [69] and Lin et al.
[70] each developed novel statistical methods to appropriately analyze quantitative traits with extreme sampling study designs. Within the study design of sequencing a unique and homogeneous set of cases, case-only statistical frameworks exist for detecting exceedingly rare or de novo and highly penetrant variants [51, 52]. Statistical frameworks also exist to incorporate external population controls with the unique set of cases in a case-control analysis [71], although more research in this area is needed.

As previously discussed, most methods can be expressed within a regression framework. Many of the burden methods fall within a generalized linear regression framework, while the variance component methods, such as SKAT, are implemented within a mixed effects regression model. The original regression frameworks of these methods required sample sizes large enough to reach an asymptotic distribution of the test statistics, and independent observations. Few methods have been developed specifically for small samples, although Lee et al. [72] extended SKAT for use with small sample sizes.

Meta-analysis

Meta-analysis of test statistics across multiple studies is widely used in GWAS and other genetic studies of common variants to replicate, confirm, and find new associations. Meta-analysis is arguably even more important for studies of rare variants, where extremely large sample sizes are important for achieving adequate power. Many simple meta-analysis frameworks that combine information about the test statistic or p value (such as Fisher's and Z-score methods [73]) can be applied to test statistics from current region-based methods. (Although it should be noted that, as there is no direction of effect for variance component tests, only weights based on sample size, and not direction of effect, can be incorporated into Z-score meta-analysis for variance component tests.) Although simple and easy to implement, these meta-analysis methods do not account for the different variants that may be included in the region-based statistics for each study.

Lee et al. [74] developed a meta-analysis framework for rare variants that achieves nearly identical empirical power as analyses based on combined individual-level data (sometimes called mega-analysis). This framework uses single-variant score statistics and the corresponding between-variant covariance matrix. Importantly, the framework allows for variants to be monomorphic (ie, the alternate allele is not seen) in some of the individual studies. To be included in the meta-analysis statistic, a variant only has to be polymorphic in at least 1 study. Further, meta-analysis has other advantages, such as easier sharing of data (given consent or computational barriers to sharing raw data) and controlling for potential confounders or population stratification specific to each study. For instance, one study may adjust for 5 principal components, whereas another study may adjust for 3 principal components and recruitment center. In addition to being able to include different study-specific covariates, one can also further account for possible heterogeneity in study statistics in the meta-analysis statistic itself, as described below. Here, we briefly outline Lee et al.'s [74] meta-analysis framework.
If we define the single-variant (ie, marginal) score statistics as S_{k,m} for study k and variant m, we can then rewrite the burden score statistic as a combination of the single-variant statistics over all K studies:

U_B = Σ_{m=1}^{M} w_m Σ_{k=1}^{K} S_{k,m}

We can also square the single-variant score statistics summed over the studies and then sum over variants to produce a meta-analysis score statistic for the variance component region test:

U_VC = Σ_{m=1}^{M} w_m ( Σ_{k=1}^{K} S_{k,m} )²

The above variance component statistic requires the additional assumption of homogeneous genetic effects across all studies. If we believe that the genetic effects are instead heterogeneous, the meta-analysis score statistic for the variance component region test can be written as follows:

U_VC-het = Σ_{m=1}^{M} w_m Σ_{k=1}^{K} S_{k,m}²

If we believe the heterogeneity can be isolated to clusters of studies, such as by ethnicity, the statistics can be combined first over the studies in each cluster and then over each cluster and marker. Note that the burden and variance component meta-analysis test statistics can be combined in an optimal way, similar to that shown for single studies in Equation (5). More details are in Lee et al. [74]. Others have explored meta-analysis for rare variants as well [75].
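A minimal sketch of these meta-analysis statistics follows, with per-study marginal scores stored in a K x M array; the scores here are simulated stand-ins, and variants monomorphic in a study simply contribute zero scores for that study.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M = 3, 20                         # studies and variants
S = rng.normal(size=(K, M))          # stand-ins for the scores S_{k,m}
S[0, :5] = 0.0                       # variants monomorphic in study 1
w = np.ones(M)

U_B_meta = w @ S.sum(axis=0)             # burden: sum over studies, then variants
U_VC_hom = w @ (S.sum(axis=0) ** 2)      # homogeneous-effects variance component
U_VC_het = w @ ((S ** 2).sum(axis=0))    # heterogeneous-effects variance component
```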
Contributions from the Collapsing Rare Variants working group

GAW19 provided real human sequence and phenotype data for data sets of Mexican American families and unrelated individuals. In addition, 200 simulated data sets were provided, based on the real sequence data, for phenotypes with true underlying genetic associations as well as for a null polygenic trait. Family data (for 959 individuals in 20 pedigrees) consisted of whole genome sequencing calls and GWAS single-nucleotide polymorphisms (SNPs) for odd-numbered chromosomes, as well as longitudinal real phenotype data for systolic and diastolic blood pressure, age, sex and indicators of hypertension, antihypertensive medication use, and cigarette smoking, collected at up to 4 time points. Family data also included genome-wide measures of gene expression for a smaller set of individuals; however, no contribution in our group utilized these data, nor did any contribution utilize the longitudinal nature of the data. The data set of 1943 unrelated individuals contained exome sequence calls and the same phenotypes as the family data, at a single time point. More detailed information on the GAW19 data sets is available in Blangero et al. [76].

The 6 contributions from the Collapsing Rare Variants Working Group of GAW19 extend upon the current literature and reflect varied goals, including the creation of new statistical tests, developments of meta-analytic techniques, and a comparison of existing statistical tests. Table 1 provides the overall characteristics of each contribution.

New statistics

Green et al. [77] developed a general framework for combining different statistical tests of association of rare variants with a continuous trait in family-based studies. A linear mixed model was used to derive residuals by adjusting for covariates as well as a random effect for familial correlation. These residuals were then permuted to create data sets reflective of the null hypothesis of no association, allowing for the derivation of empirical p values that combine information over a set of rare variant tests, yielding a single overall test of association. In the Green et al. formulation [77], evidence was combined over 4 burden tests and 4 variance-component tests representing different powers of the marginal score statistics (U, U², U³ and U⁴), as well as over 2 weight functions, one based on the Beta distribution [29] and the other based on the inverse standard deviation of the allele count [45]. With increasing powers of the marginal score statistics, the contribution of noncausal variants to the overall statistic is lessened, and the use of the Beta distribution more severely downweights common variants compared to weights based on the inverse standard deviation of the allele count. By utilizing all combinations of weight function with powers of the score statistic, a variety of models are included within the test. However, given the permutation framework, their method can be generalized to any set of statistical tests; a sketch of this scheme is given below. In evaluating their method, Green et al. [77] focused on the GAW19 simulated data set of 30 genes on chromosome 3 that have at least 1 causal variant; type 1 error and power were estimated for the combined approach, as well as for each of the 4 burden tests and 4 variance-component tests. Type 1 error was controlled at the 0.05 level based on the null trait, Q1, provided with the simulated data. The combined approach consistently yielded intermediate power relative to the power of the 4 burden and 4 variance-component tests. Given that there is no single best test and that the optimal statistic is unknown a priori, the combined approach allows for proper control of type 1 error and is robust to differing genetic architectures. Further research is needed to determine an optimal combination of tests: ones that are uncorrelated, reflect different patterns of association, and maximize power.
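The following hedged sketch illustrates the flavor of such a permutation scheme, using plain residuals rather than the mixed-model residuals of Green et al. and a small battery of score-based statistics: per-test empirical p values are computed from permuted residuals, and the minimum p value across tests is itself calibrated against the permutation distribution. All data are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
n, M, n_perm = 500, 15, 999
G = rng.binomial(2, 0.01, (n, M))     # rare-variant genotypes (placeholder)
resid = rng.normal(size=n)            # stand-in for covariate-adjusted residuals

def region_stats(r, G):
    """A small battery of tests built from powers of the marginal scores."""
    S = G.T @ r
    return np.array([abs(S.sum()), (S**2).sum(), (S**4).sum()])

obs = region_stats(resid, G)
null = np.array([region_stats(rng.permutation(resid), G)
                 for _ in range(n_perm)])

# Per-test empirical p values.
p_each = (1 + (null >= obs).sum(axis=0)) / (n_perm + 1)

# Min-p combination: each permuted replicate is also converted to p values
# against the permutation distribution, and the minima are compared.
null_p = (null[:, None, :] >= null[None, :, :]).mean(axis=0)
p_comb = (1 + (null_p.min(axis=1) <= p_each.min()).sum()) / (n_perm + 1)
print(p_each, p_comb)
```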
Zhu et al. [67] derived a score test, the OW-score, based on the retrospective likelihood for a continuous trait formed by conditioning on observed phenotypes. The resulting test is a function of a weighted combination of genotypes over the variants included in the test, where the weighting is derived to maximize the score statistic. Power for the OW-score method was compared to that of famSKAT [56] for the 14 genes with the largest simulated signal for diastolic blood pressure in the GAW19 data, at a significance level of 0.05. Only 4 genes yielded power greater than 40 % for either method; of these, the OW-score test was more powerful for 3 genes and famSKAT was more powerful for 1 gene. It is important to note that the distribution of the weights by MAF differs between the OW-score test and famSKAT, with famSKAT more highly weighting variants with a MAF within (0.01, 0.05). This aligned with the simulation results and the performance of the OW-score test compared to famSKAT: when causal variants fell within this MAF range, famSKAT was more powerful than the OW-score method. Thus, more research is needed to determine whether the retrospective nature of the OW-score test or the varied weighting structure is leading to increased power in certain scenarios.

Jadhav et al. [78] used a method from the branch of statistics called functional data analysis, which is based on the analysis of curves, surfaces, or functions [79]. Specifically, a functional analysis of variance (ANOVA) model compared the difference in the genetic structure of a genomic region between cases and controls. To do so, a continuous function was fit to each individual's genotype using cubic B-splines over a 30-kb region, and the resulting mean function was compared between cases and controls using an ANOVA test. Results were compared with a burden test that weighted minor allele counts by the inverse standard deviation of the minor allele count in controls [27], as well as with a burden test that incorporates linkage disequilibrium through a genetic covariance matrix [80]. Simulations were conducted for a 1.4-Mb region of chromosome 3, where causal variants were randomly selected to be 1 % to 50 % of the region, and phenotypes were simulated using both unidirectional and bidirectional effects. The functional ANOVA test had greater power, up to 0.135 higher, compared to the burden test (with or without incorporation of linkage disequilibrium) over all but 1 scenario, in which 50 % of the variants in the region were causal and had a unidirectional effect. In this scenario, power was comparable.

Developments in meta-analysis

Katsumata and Fardo [81] applied the famSKAT statistic to each of the GAW19 family- and population-based data sets, as well as to the combined set of data, resulting in a mega-analysis. These 3 analyses were compared against a meta-analysis of the family- and population-based data sets for the 15 most causal genes influencing each of diastolic and systolic blood pressure in the GAW19 simulation model (23 genes in total, given overlapping causal genes). They found that mega-analysis could be substantially more powerful than meta-analysis (NRF1, LEPR, LRP8, and GAB2 with systolic blood pressure [SBP]), with meta-analysis resulting in discernibly higher power compared to mega-analysis for only one of the top genes (TNN with both SBP and diastolic blood pressure [DBP]). However, when the power to detect association to a gene region was considerably lower within the family-based sample than within the population-based sample, the power of the mega-analysis was much lower than that of the analysis based on the population-based sample alone, while the meta-analysis had a less severe power loss. This suggests that mega-analysis may be better when there is sufficient power to detect an association in both samples, but meta-analysis might be more suited to situations where one study is underpowered and/or there is heterogeneity in the genetic associations between study samples. Both meta-analysis and mega-analysis indicated elevated type 1 error, with estimates based on the 200 simulated data sets of the null trait Q1 ranging from 0.055 to 0.130 and 0.050 to 0.135, respectively, for the 23 genes.

Wang et al. [82] also considered a meta-analysis of the famSKAT statistic applied to the family- and population-based data sets for DBP. These results were compared to a meta-analysis of results from a haplotype-based association model. For the haplotype analysis, a mixed linear model was fit, allowing for covariates, fixed effects of haplotypes (with haplotypes with a frequency of less than 0.5 % collapsed into 1 group), and random components for family structure and error. Haplotypes were coded using dosages estimated from genotypes using the expectation-maximization algorithm. Models were fit separately for the family-based and population-based samples, and the weighted least squares method of meta-analysis was followed by a Wald test of equal haplotype effects.
Type 1 error for the haplotype model was found to be elevated for genes with more than 14 haplotypes; hence, results on the real data set were given only for genes with fewer than 14 haplotypes. None of the genes were significant for famSKAT after correcting for multiple testing; however, multiple genes did achieve statistical significance using the haplotype model, indicating a potentially more powerful method for association testing. As these results are from real data, further study is needed to understand the relative performance of the 2 methods over a range of models.

Method comparison

Finally, Wang et al. [57] compared existing family-based methods for binary traits, including the rare variant transmission disequilibrium test (RV-TDT) [55], the generalized estimating equations-based kernel association (GEE-KM) test [83], an extended CMC test for pedigree data known as PedCMC [84], gene-level kernel and burden association tests for pedigree data (PedGene) [80], and the family-based rare variant association test (FARVAT) [85]. Through simulation based on the 6 genes with the largest effects on both simulated SBP and DBP, they found that the FARVAT method based on optimal weights (which adaptively use the data to combine burden and variance component tests) was more powerful than PedCMC, GEE-KM, or any of the RV-TDT tests. The power of the PedGene method was comparable with that of FARVAT; however, FARVAT required substantially less computing time. Based on dichotomization of the simulated null trait Q1 to correspond to a prevalence of 22.6 %, type 1 error was demonstrated to be deflated for the RV-TDT and inflated for the GEE-KM test, while PedCMC, PedGene, and FARVAT had reasonable control of type 1 error across a range of significance levels.

Discussion and conclusions

Over the last 10 years, there has been considerable methods development for association tests of rare variants. Tests have been proposed that are ideal for unidirectional and bidirectional effects, as well as an optimized combination of the 2 types of effects. Methods have been proposed for binary as well as normally distributed traits, and for population-based and family studies. Most tests allow for the use of different weighting schemes (eg, based on MAF or genomic annotation), and meta-analysis procedures have also been developed.

Contributions to the GAW19 Collapsing Rare Variants group expanded upon the literature in several ways. Green et al. [77] provided a method that can be used to combine any collection of statistics for rare variant association. This is particularly important given that there are numerous types of annotation that could be used as weights, and these weights could be implemented in a burden model, a variance component model, or a combination of the 2 models. While an oft-used strategy is to conduct all tests separately, the method proposed by Green et al. allows for an empirical combination in a statistically rigorous framework while controlling the total type 1 error. New statistical tests were developed to allow for a retrospective likelihood based on optimized variant weights [67] and to incorporate genomic structure into the test of rare variants [78]. Katsumata and Fardo [81] provided guidance regarding design and meta-analysis.
Based on GAW19 simulated data, they found that mega-analysis generally led to higher power than meta-analysis; however, if there were large differences in power between the family-based and population-based studies, a mega-analysis could have power less than that of the studies being combined, whereas meta-analysis was less affected by this scenario. Wang et al. [82] compared meta-analysis based on haplotypes to meta-analysis based on famSKAT statistics, demonstrating the 2 approaches to be complementary by detecting associations to different genes. Finally, Wang et al. [57] compared existing family-based methods for rare variant association to binary traits and demonstrated PedGene and FARVAT to be powerful methods for rare variant association, with FARVAT being more computationally efficient.

Although much work has been done, there are still many open research areas pertaining to the analysis of rare variants. We mention a number of these areas; however, this list is by no means exhaustive. For example, great care has been taken in studies of common variants to control for population substructure. Approaches have included the use of genetic principal components, genomic matching, and linear mixed models; see, eg, the review by Price et al. [86]. Given that rare variants are confounded with population ancestry, it is not clear how best to control for this substructure. Although there has been some work in this area showing that population substructure is indeed different for common and rare variants [87, 88], more work, especially in method development, is needed.

It often makes sense to focus on the gene region as the unit for collapsing methods, especially given analyses within the coding regions of the genome. However, GWAS associations are often in intergenic regions, and there is building evidence that much of the noncoding region of the genome is indeed functional [89, 90]. Thus, there is an interest in testing noncoding regions of the genome for association with rare variants, and how best to define the regions is an open question. A sliding-window-based approach is often used to group regions of the genome for testing. There are many additional questions when using sliding windows, such as the number of windows, the size of the window, and the size of the overlap between windows. Genomic windows need to be large enough to capture the causal region without being so large as to include too much noise. There is likely to be a tradeoff between the multiple testing adjustments necessary to account for many small windows and the potential power loss from using fewer windows that are too large. In addition, as the functionality of the noncoding regions continues to be discovered and defined, it is likely that there will be useful information for building or defining the windows or meaningful genetic units within the noncoding regions.

As we have detailed, methods exist for incorporating genomic annotation as weights in region-based methods. The choice of the best weight and, in fact, which information to consider at all remain somewhat open questions. Currently, MAF, functionality, consequence, evolutionary conservation, and many other metrics can be used as weights, and the list continues to grow, especially in noncoding regions as functional research continues at a rapid pace. Thus, there is a need to further develop efficient methods for deriving the most appropriate weight. This can be done to some extent through the adaptive methods discussed previously.
However, the adaptive methods, which often rely on permutation, may become computationally infeasible given the increasing amount of information on which to weight, increasing sample sizes, and analysis of the entire genome. Thus, there will continue to be a need for computationally efficient methods of determining the weights while retaining the appropriate type 1 error.

To date, most collapsing of rare variants is done on a contiguous region of the genome, whether it is a gene or a genomic window. Alternative approaches include the use of pathways or gene sets developed, for example, from expression studies or protein-protein interaction studies. Recent studies have found some success with this approach [91], but more research is needed.

Finally, given the continued struggle to adequately power studies of rare variants, more work is needed on ways to improve power. One approach is to continually increase the sample size of the studies, perhaps by including publicly available population controls. Another, perhaps more feasible, approach may require refocusing the phenotype through the use of multidimensional phenotypes or homogeneous subphenotypes. Given this relatively brief discussion of remaining areas of research for the association of rare variants, there is little doubt that this will continue to be an active area of research for several more years.
Morphology and chemical composition of polymetallic nodules from the Clarion-Clipperton Zone, the Indian Ocean and the Rio Grande Rise: a comparative study

BENITES, M. Morphology and chemical composition of polymetallic nodules from the Clarion-Clipperton Zone, the Indian Ocean and Rio Grande Rise, a comparative study. 2017.

Deep sea polymetallic nodules are concretions of manganese and iron oxides formed around a nucleus. They accrete either hydrogenetically, where metals precipitate from the seawater, or diagenetically, where metals precipitate from the sediment pore water. The accretion process affects both the nodules' morphology and geochemistry. In this study, fourteen polymetallic nodules from four ocean regions, namely the Clarion-Clipperton Zone (Northeast Pacific Ocean), the Central Indian Basin (Central Indian Ocean), the Mascarene Basin (West Indian Ocean), and the Rio Grande Rise (Southwest Atlantic Ocean), were used to compare morphological and geochemical aspects between the different oceanic regions. Computed Tomography (CT) was applied to study the nodules' internal structure. Scanning Electron Microscopy (SEM) was used to describe the micro layers within the nodules. The chemical composition of growth layers and nuclei was determined by both Micro X-ray Fluorescence (μ-XRF) and Laser Ablation Inductively Coupled Plasma Mass Spectroscopy (LA-ICP-MS). Finally, X-ray Absorption Near Edge Spectroscopy (XANES) was performed in order to determine the speciation (i.e., the oxidation state) of Mn and Fe. Polymetallic nodules from the Central Indian Basin are diagenetic and the ones from the Mascarene Basin and the Rio Grande Rise are hydrogenetic, while nodules from the Clarion-Clipperton Zone are of mixed type. However, the dominant accretion process varies across the nodules, resulting in inhomogeneous layer textures and chemical composition. Strong Mn and Fe fractionation occurs in the diagenetic and mixed-type nodules, accompanied by fractionation of the trace elements Ni, Cu, Co and Ti. Mn and Fe are present in the nodules mainly as the oxidized species Mn 4+ and Fe 3+, independently of the degree of fractionation. Schematic models of the nodules' environment of formation are proposed, in which the fractionation of Mn and Fe is possibly the result of the variation of the redox front depth through time.

From the Excel files, curves of relative elemental concentration were plotted and the Mn/Fe ratio was calculated. Metal composition diagrams also show that Ni and Cu follow the behavior of Mn, while Co and Ti follow the behavior of Fe. The Mn/Fe ratio varies between 2 and 40, with a mean value of 10. The highest Mn/Fe ratios (up to 40) are found in the massive layers in the internal portion of the nodule. A correspondence of thick, massive layers with high Mn content and of porous texture with high Fe content is observed.

I am thankful to Christian Millo for his incomparable dedication in an informal co-supervision, whose tireless disposition for teaching and learning with me about a new field of study was so important for my personal development. Finally, my whole Master degree experience would never have been possible without the enthusiasm and trust of my supervisor, Luigi Jovane, to whom I owe incessant motivation and endless patience. From here on, I would like to address a few personal thanks in my maternal language.
INTRODUCTION

Marine polymetallic nodules, also known as manganese nodules or ferromanganese nodules, are mineral concretions of manganese and iron oxides that form upon the seafloor about a nucleus at a rate of the order of 1 mm per 10⁴ to 10⁶ years (CRONAN, 1977; KOSCHINSKY, 2014). Because of this extremely low growth rate, polymetallic nodules absorb high quantities of rare earth elements and trace elements of relevant economic interest (CRONAN, 1978; GLASBY, 2002; HEIN; PETERSEN, 2013). Together with the marine cobalt-rich crusts, the nodules compose the so-called marine ferromanganese deposits (HEIN; KOSCHINSKY, 2014).

Deep sea polymetallic nodules were first reported by Murray and Irvine (1895), who described the first nodules dredged from the seafloor in the Pacific during the Challenger Deep Sea Exploring Expedition (1872-1876). Since then, marine polymetallic nodule deposits came to be more systematically studied only in the 70s and 80s, when a more in-depth scientific investigation, driven by economic interests, addressed the question of how they are formed (TOOMS, 1969; BONATTI; KRAEMER; RYDELL, 1972; GLASBY, 1977; CRONAN, 1978; BATURIN, 1988).

Although polymetallic nodules have been reported from a variety of marine environments, namely abyssal plains, seamounts, plateaus, mid-ocean ridges and continental margins (CRONAN, 1977), they are more concentrated in deep ocean basins. The largest polymetallic nodule fields known to date are located in the Pacific abyssal plains (up to 5000 m depth) of the Clarion-Clipperton Zone (CCZ), in the Peru Basin (PB) and in the Central Indian Basin (CIB). The CCZ hosts the largest and most extensively studied deposit, where nodule density is about 15 kg per m² and reaches 75 kg per m² in some areas (HEIN; PETERSEN, 2013).

Polymetallic nodules are more concentrated in abyssal plains where the sedimentation rate is low, of the order of a few mm per 10³ years (GLASBY, 2006). Where sedimentation is inhibited by bottom currents, oxygenation at the seafloor is constant, which promotes the oxidation of Mn and Fe (PUTEANUS; HALBACH, 1988; HEIN; PETERSEN, 2013). Deep ocean basins are mostly washed by deep water masses enriched in dissolved oxygen, for example the Antarctic Bottom Water (AABW) bathing the nodule deposits in the Pacific Ocean (GLASBY, 2006), in the Indian Ocean (VINEESH et al., 2009) and in the Atlantic Ocean (KASTEN et al., 1998).

Hydrogenetic accretion occurs by oxidation and precipitation of colloidal metals from seawater at a rate of 1 to 10 mm Myr⁻¹ (HEIN; PETERSEN, 2013). On the other hand, diagenetic growth occurs by oxidation and precipitation of metals from sediment pore water, which become remobilized due to the decay of sedimentary organic matter. Diagenetic accretion takes place at rates of the order of 100 mm Myr⁻¹ or more (DYMOND et al., 1984; HEIN; PETERSEN, 2013).

The morphology and geochemistry of polymetallic nodules are extremely dependent on their genetic process. A classification of polymetallic nodules in terms of their genetic mechanism was proposed by Halbach et al. (1981) and is still widely used. This classification divides nodules into type A (diagenetic), type B (hydrogenetic) and type AB (mixed type). Most polymetallic nodules are classified as type AB and form at intermediate growth rates of tens of mm Myr⁻¹ (HEIN; PETERSEN, 2013). This is the case for the nodules from the CCZ and the Central Indian Basin (HEIN et al., 1997; VINEESH et al., 2009; GONZÁLEZ et al., 2010; MAYUMY AMPARO et al., 2013).
Type A nodules are found in the Peru Basin (WEGORZEWSKI; KUHN, 2014), while type B nodules are found in the Cook Islands (HEIN et al., 2015). Bacterial activity may play a role in the mineralization process of nodules, as suggested by the finding of Mn-cycling bacteria within nodule layers (WANG et al., 2009; GAN; MÜLLER, 2009; BLÖTHE et al., 2015). However, the details of bacterially induced mineralization are still to be understood.

Environmental conditions determine which genetic type is dominant in a given polymetallic deposit (figure 1) (HEIN; PETERSEN, 2013). Upon consolidated sediment, and even on hard rock substrate, polymetallic nodules grow in total exposure to seawater. In these conditions nodule accretion is dominantly hydrogenetic. In contrast, diagenetic accretion dominates in unconsolidated and porous sediment, where metals are remobilized in interstitial water. Mixed-type nodules, formed by a combination of hydrogenetic and diagenetic growth, are more common on seamounts and in deep sedimentary basins. The type of substrate is not the only factor governing the accretion process of nodules. The availability of nuclei and the sediment composition, together with the organic matter input to the sediments, are fundamental in determining which accretion process prevails. Typical nuclei for nodule accretion are shells, shark teeth, whale ear-bones, weathered volcanic rocks, pumice, hardened sediment, and fragments of previously formed nodules (GLASBY, 2006). The main factor controlling the intensity of diagenesis is the sedimentation rate of organic matter, which in turn is a result of biological productivity in surface waters (HALBACH, 1986). The regions of the ocean where surface primary productivity is higher correspond to those in which the seafloor hosts diagenetic polymetallic nodules, as for example the Peru Basin (DYMOND et al., 1984) and the eastern South Atlantic (KASTEN et al., 1998). Low-productivity surface waters, on the other hand, correspond to settings where hydrogenetic nodules are found, as in the southwestern Pacific Basin (GLASBY, 2006) and in the Brazilian Basin (KASTEN et al., 1998).

Regarding the composition of polymetallic nodules, nodules tend to be enriched in Mn, Fe, Ti, Mg, P, Ni, Cu, Mo, Zn, Co, Pb, Sr, V, Y, Li and REEs relative to the surrounding sediments, while they tend to be depleted in Si, Al and Ba, indicating that these elements have a terrigenous origin (PATTAN; PARTHIBAN, 2011).

The morphology of nodules reflects the conditions of their formation (BATURIN, 1988). Morphological studies relying on a high number of samples have been performed (VINEESH et al., 2009; MAYUMY AMPARO et al., 2013; VALSANGKAR; REBELLO, 2015). Nevertheless, none of them considered the internal structure of the nodules in detail. Nowadays, thanks to X-ray computed tomography (CT), it is possible to obtain high-resolution imaging of the interior of the nodules in a non-destructive way, without the need to cut the nodules, as performed in this study. Regarding their internal structure, polymetallic nodules present individual concentric layers which may be inhomogeneous in composition and texture. This heterogeneity is considered to be due to changes in environmental conditions during nodule accretion. Hydrogenetic and diagenetic growth have been found to alternate between individual nodule layers (WEGORZEWSKI; KUHN, 2014), revealing that nodule growth is not in a steady state and does record changes in environmental conditions during nodule formation.
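As an illustration of how this genetic-type discrimination is often applied in practice, the sketch below classifies growth layers by their Mn/Fe ratio along a compositional transect. The element values are placeholders, and the cut-offs (2.5 and 5) are illustrative thresholds of the kind commonly used in the literature, not values fixed by this text.

```python
import numpy as np

mn = np.array([18.0, 25.0, 30.1, 9.5, 6.2])   # wt% Mn per layer (placeholder)
fe = np.array([12.0,  4.8,  1.1, 8.9, 6.5])   # wt% Fe per layer (placeholder)
ratio = mn / fe

def genetic_type(r, hydro_max=2.5, diag_min=5.0):
    """Illustrative thresholds only; exact cut-offs vary in the literature."""
    if r < hydro_max:
        return "hydrogenetic"
    if r > diag_min:
        return "diagenetic"
    return "mixed"

for i, r in enumerate(ratio, start=1):
    print(f"layer {i}: Mn/Fe = {r:.1f} -> {genetic_type(r)}")
```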
Despite the extensive literature on the genesis of deep sea polymetallic nodules, the link between formation mechanisms, internal structure, external morphology and geochemical composition of nodules is still poorly documented. Moreover, even though the occurrence of polymetallic nodules on the flank of the Rio Grande Rise was reported by Milliman and Amaral (1974), no scientific work exists on their morphological and geochemical characterization, or on how these deposits were formed. This is the first study in which the morphology and the geochemical composition of a nodule from this region are studied in detail, which is of great importance for marine research in Brazil. This Master's thesis aims to test the hypothesis that polymetallic nodules from different ocean regions may exhibit distinct morphologies and chemical compositions even though they were formed by the same process (diagenetic or hydrogenetic).

OBJECTIVES

The main goal of this work is to link the mechanisms of nodule formation with nodule morphology and chemical composition in four ocean regions. For this purpose, polymetallic nodules from the Clarion-Clipperton Zone (Northeast Pacific Ocean), the Central Indian Basin (Central Indian Ocean), the Mascarene Basin (Western Indian Ocean) and the Rio Grande Rise (Southwest Atlantic) were studied, aiming to attain the following objectives:
- Describe and compare nodules from different locations, focusing on their external morphology (size, shape and surficial texture);
- Describe and compare nodules from different locations, focusing on their internal structure (thickness and texture of layers and nuclei);
- Determine the major, trace and rare earth element composition across the nodules;
- Reveal the geochemical processes that might have acted in the different ocean basins.

The fundamental questions that this work addresses are the following:
- Do the samples used in this work match the genetic type classification described in the literature for the regions under study?
- Does the Mn/Fe ratio vary in the same way in all the basins?
- Why does this variation happen, and why does it not?

STUDY AREAS AND CORRESPONDING NODULE DEPOSITS

Next, thin sections approximately 100 µm thick were prepared from nodules JC120-104B, AAS21-DR19, SS4-280G, SK35/24B and SK35/26A. These thin sections were used for Scanning Electron Microscopy and for two kinds of synchrotron analyses, namely micro X-ray Fluorescence and X-ray Absorption Near Edge Structure.

X-ray Computed Tomography

Three-dimensional Computed Tomography (CT) scans were run using a Versa XRM-510 Xradia instrument from Zeiss at the Technological Characterization Laboratory of Escola Politécnica, University of Sao Paulo. All the nodules were scanned for 2 hours while rotating under a 160 kV, 10 W X-ray source. Pixel size was 55 µm, detector resolution 1024 × 1024 pixels, and transmission 8-19%.
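As a quick sanity check, the arithmetic below (an illustration, not from the thesis) makes the field of view and voxel size implied by these scan settings explicit, assuming the stated 55 µm pixel size applies isotropically across the detector.

```python
# Scan geometry implied by the CT settings above (assumes the 55 um pixel
# size is isotropic across the 1024 x 1024 detector).

pixel_um = 55.0
detector_px = 1024

field_of_view_mm = pixel_um * detector_px / 1000.0  # ~56.3 mm across
voxel_volume_mm3 = (pixel_um / 1000.0) ** 3         # ~1.7e-4 mm^3 per voxel

print(f"Field of view: {field_of_view_mm:.1f} mm")
print(f"Voxel volume:  {voxel_volume_mm3:.2e} mm^3")
```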
Preparation of thin sections and polished sections

Thin sections (100 µm thick) were prepared at the Institute of Geoscience of the University of Sao Paulo. First, the nodules were embedded in a 6:1 solution of epoxy resin and hardener and kept under vacuum (-25 mPa) for six hours in order to ensure resin penetration. Samples were then dried in an oven for two days. Once the resin was completely dry, the nodules were cut in two halves. A second cut was made parallel to the first one, in order to obtain a slab approximately 5 mm thick. The first cut was done using a metal jaw and the second one using a diamond wire, both with water cooling. The surface of the slabs was impregnated with epoxy resin to avoid material loss during grinding: the slabs were placed on a hot plate and a solution of epoxy resin, hardener and acetone was poured onto the slab surface. Once the resin was dry, the slabs were ground on a 320 diamond wheel with SiC 600 grit. The slabs were then mounted on glass slides and cut into 100 µm sections. Next, they were polished using an 8″ DiaMat polishing cloth from Allied High Tech Products at 180 RPM, adding a mixture of 1 µm alumina and ethanediol oil. Finally, the thin sections were coated with carbon to provide the electrical conductivity necessary for SEM analysis.

Polished sections were also prepared for LA-ICP-MS analyses at the National Oceanography Centre. The nodule sections embedded in resin were cut into 7-8 mm thick slabs and their surfaces were ground using a 37 µm fixed diamond wheel, taking care to keep the sample flat. The samples were then lapped using 9 µm SiC on flat glass to remove grinding marks and to prepare the surface for polishing. Polishing was done on a flat wheel with 15 µm, 9 µm, 3 µm, and finally 1 µm diamond at 80 RPM for around 10 minutes each. The samples were cleaned between stages and polished in different orientations to prevent striations.

Scanning Electron Microscopy

Scanning Electron Microscopy (SEM) micrographs were acquired using a Leo scanning electron microscope.

Synchrotron Radiation Analyses

Synchrotron radiation analyses applied in this work included micro X-ray Fluorescence and X-ray Absorption Near Edge Structure. Both point analyses and maps, including transects, were acquired at 10 keV.

Laser Ablation - Inductively Coupled Plasma - Mass Spectrometry

Elemental analyses were performed by LA-ICP-MS at the National Oceanography Centre, Southampton, using a New Wave UP213 laser ablation system coupled to a Thermo X-Series II quadrupole ICP-MS. The transfer of ablated material into the ICP-MS occurs through He flow via a three-port mixing bulb. All ICP-MS and laser settings were optimized for sensitivity and stability. The New Wave laser system software was used to map across each sample and standard and to set the shot positions, which were aligned along a transect across the nodule sample with a shot spacing of 0.25 mm.

Statistical analysis

Principal Component Analyses of chemical composition data from µ-XRF and LA-ICP-MS were performed using the R language and RStudio Version 0.99.903, downloaded from https://www.rstudio.com/products/rstudio/download/.

Clarion-Clipperton Zone, Northeast Pacific Ocean

Nodules from the CCZ (JC120-104A, JC120-104B, JC120-104C and JC120-104D) are on average 8 cm long, discoidal, and exhibit rough surface textures (figure 3A-D). The surface texture is more botryoidal than rough on the top side (figure 4A) and purely rough on the bottom side (figure 4B), where the nodule surface is more friable. The CCZ nodules also present a pronounced rim marking the transition between rough and rough-botryoidal texture. This rim corresponds to the limit between the buried and exposed portions of the nodules (i.e., the sediment-water interface). Biological structures like whitish-beige worm tubes and radiolarian tests are found attached to the nodule surface on both sides, but dominantly on the botryoidal one (figure 4C and D). The SEM micrographs revealed in detail the texture of layers inside the nodules.
The thin layers encountered within the external portion are mostly botryoidal, with variable grey tone and variable thickness, much thinner than the layers of the internal portion (figure 6A). The micrographs confirm that the bright thick layers are massive and alternate with porous layers in which the oxide grew following a cauliflower-dendritic pattern, as described by Halbach et al. (1981) (figures 6B and 6C).

Central Indian Basin

Nodules from the Central Indian Ocean (AAS40, AAS21-17, AAS21-19, SS4-280 and F8-398) are 3-6 cm long (figure 11), with spheroidal to elongate shapes, except for nodule F8-398a, which is faceted. These nodules present a rough to smooth surface texture over their entire surfaces, as exemplified by AAS21-19 (figure 12), with the exception of nodule F8-398a, which is smooth. Biological structures, like hard worm tubes, are also found on the nodule surfaces, but to a lesser degree than on the CCZ nodules. Three of the four nodules from the Central Indian Basin are polynucleated (figure 13), the result of oxide growth around more than one nucleus. In these nodules, the nuclei correspond to 40-50% of the nodule volume. Nodule SS4-280, on the other hand, is mononucleated and exhibits a small nucleus relative to its size (10%). Layer texture varies between dendritic and botryoidal, with a high density contrast between the two textures (figure 14). The dendritic layers present the highest porosity. The morphological aspects of the nodules from the CIB are summarized in table 5.

Mascarene Basin

The chemical composition of the nodules from the MB is also highly variable within each nodule, as indicated by the high standard deviations of most of the chemical elements measured by both µ-XRF and LA-ICP-MS (tables 9 and 10). However, the nodules from the MB differ clearly from those of the CCZ and CIB in that they exhibit Mn/Fe ratios close to 1 over most of their length (figures 24 and 25). They also differ from the CCZ and CIB nodules in that they are significantly enriched in Ti and Co which, instead of Cu and Ni, are the most abundant elements after Mn and Fe.

Rio Grande Rise

The nodule from the Rio Grande Rise is 2 cm long, spheroidal, with a micro-botryoidal surface texture surrounded by orange material (figure 29). One of its sides is covered by fine calcareous tests (figure 29D). The RGR coated pebble consists of a thin oxide layer formed around the nucleus, which represents almost 90% of the nodule's volume. LA-ICP-MS results reveal that the coated pebble from the RGR has the lowest Mn content of all the nodules studied (table 10). In contrast, its REE content is up to five times higher than in the nodules from the CCZ and CIB, and two to three times higher than in the MB nodules. Transects 1 and 2, from the nodule's edge to the nucleus, show opposite behavior for the Mn and Si concentration curves, and similar behavior for the Ti, Co, Cu and REE curves, although in the first 0.4 mm of transect 1 the Ti curve diverges from the others (figure 33). Also, REE concentrations in transect 1 are higher than in transect 2.

Statistical analysis

The Principal Component Analysis performed on the µ-XRF average concentrations of chemical elements shows that most of the chemical elements group together, except for Mn and Fe, which are separated (figure 35). In the space of the first two principal components, the samples from the CCZ and the CIB are more influenced by Mn, Cu and Ni, whereas the samples from the MB and the RGR are more influenced by Fe, Ca, Co and Ti.
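The PCA workflow itself is straightforward to reproduce. The thesis used R and RStudio; the sketch below outlines an equivalent pipeline in Python, with the file name, column layout (one row per sample, one column per element) and element list assumed for illustration.

```python
# A minimal sketch of the PCA described above (the original analysis used R;
# this is a Python equivalent; "nodule_chemistry.csv" is a hypothetical file).

import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("nodule_chemistry.csv", index_col="sample")
elements = ["Mn", "Fe", "Ni", "Cu", "Co", "Ti", "Ca"]  # assumed column names

# Standardize so that abundant elements (Mn, Fe) do not dominate the variance.
scaled = StandardScaler().fit_transform(data[elements])

pca = PCA(n_components=2)
scores = pca.fit_transform(scaled)

print("Explained variance ratios:", pca.explained_variance_ratio_)

# Loadings show which elements pull samples apart along each component,
# e.g. a Mn-Ni-Cu grouping vs. a Fe-Ca-Co-Ti grouping, as reported above.
loadings = pd.DataFrame(pca.components_.T, index=elements, columns=["PC1", "PC2"])
print(loadings)
```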
Mechanisms of formation of polymetallic nodules

The nodules studied present similarities in morphology and geochemistry which give insights into the mechanisms of formation common to all of them. Distinguishable concentric layers and the high standard deviation of element concentrations reveal that the accretion process does not always happen in the same way or at the same rate. The high variance of the Mn/Fe ratio indicates that the conditions of nodule formation changed many times. Dymond et al. (1984) first showed that accretion in nodules from the Equatorial Pacific Ocean is a non-steady-state process.

The chemical composition is basically the same in all the nodules. Mn²⁺ has also been found incorporated into lattice vacancies of phyllomanganates when the upward diffusion of Mn²⁺ across the redox front is too high, but this chemical species was not found to be relevant in any of the samples measured in this study. The presence of Mn species in the sediments derives from organic matter diagenesis, in which Mn is reduced to Mn²⁺ and released to the pore water in the suboxic sediment layer, together with Ni²⁺, Cu²⁺ and Zn²⁺ (CHESTER, 2003). Mn²⁺ then migrates to the oxic layer by diffusion and is oxidized to Mn³⁺ and Mn⁴⁺, precipitating as phyllomanganates, which incorporate Ni²⁺, Cu²⁺ and Zn²⁺ in their lattice vacancies (GLASBY, 2006).

Suboxic diagenetic precipitation of Mn may also happen and results in a high flux of Mn²⁺ into the Mn oxide structure (DYMOND et al., 1984; WEGORZEWSKI; KUHN, 2014). Suboxic diagenetic nodule accretion occurs when the redox front is closer to the sediment-water interface, where the nodules are generally found. Suboxic diagenesis results in Mn/Fe ratios up to 500 and low Ni and Cu contents (BLÖTHE et al., 2015). Such conditions were not observed in the nodules studied, so this process does not seem to be relevant for the formation of the nodules considered in this study.

Fe is generally present in nodules as oxyhydroxides (FeOOH) (MARCUS et al., 2015). However, hematite (Fe₂O₃) has been found in small amounts in nodules (SHIRAISHI et al., 2016), and oxyhydroxides can change into hematite by dissolution and reprecipitation or by solid-state transformation (LU et al., 2011). Therefore, the presence of Fe₂O₃ in some of the nodules may be explained by transformation of oxyhydroxides into hematite. Based on the above considerations, oxic diagenesis and hydrogenetic accretion are possibly the dominant processes of formation for the nodules studied.

Characterization of the formation process in the different ocean basins

The morphological and geochemical features of the nodules from the CCZ are similar to those of the nodules from the CIB, while the nodules from the MB are similar to the coated pebble from the RGR. Nodules from the CCZ fall into the mixed-type category, according to the classification of Halbach et al. (1981). The discoidal shape and the different surface textures on the top and bottom sides agree with the growth model by which diagenetic formation occurs on the buried side of the nodule, while hydrogenetic formation occurs on the exposed side. Besides, the thicker recent oxide layer corresponds to the rough side and the thinner one to the smooth side. The diagenetic accretion process is known to be at least one order of magnitude faster than hydrogenetic accretion (DYMOND et al., 1984), resulting in thicker layers on the buried side of nodules.
The chemical composition also reflects some asymmetry between the top and bottom sides of the CCZ nodules. Slight differences can be observed in the profiles of the Mn-Cu and Ti-REE pairs, which clearly vary in opposite ways on the bottom side of the nodule but not on the top side. Wegorzewski and Kuhn (2014) describe the Mn-Fe fractionation as a result of metal release into the oxic pore water due to the decomposition of organic matter within the sediments. Ti and REE abundances are of the same order of magnitude on both the top and the bottom sides of the CCZ nodules, which can be explained by diffusion of seawater into the pore water of the first centimeters of sediment. The metals released during Mn reduction in the suboxic layer, however, may not reach the seawater when migrating upward, or, if they do, the dilution effect will be greater than that expected for metals diffusing downward as seawater penetrates into surficial sediment pore water. This mechanism explains why the Mn/Fe ratio variation occurs on the bottom side but not on the top side of the nodule exposed to seawater. Thus, we can assume that the bottom side of the nodules, in contact with the sediment, undergoes metal precipitation under oxic diagenesis, while the top side, in contact with the seawater, undergoes hydrogenetic metal precipitation, as illustrated by figure 38.

The overall geochemical composition of the CCZ nodules is characterized by higher Ni and Cu contents in comparison to Co and Ti, as well as a higher Mn content relative to Fe. Also, the REE concentration is < 500 ppm, much lower than that in the nodules from the MB and the coated pebble from the RGR. This geochemical fingerprint indicates a weak hydrogenetic component.

The existence of nodules with no apparent nucleus was reported by Halbach et al. (1981), who explained that the internal part of these nodules is a former nodule functioning as an adsorption surface for more recent concretions. They associate these occurrences with mixed-type nodules in which nuclei are predominantly composed of broken fragments of old nodules, mainly debris of hydrogenetic nodules, which seems to be the case for the CCZ nodules studied. However, the inner portion of the CCZ nodules is much more likely to result from diagenetic accretion, which can be deduced from both chemical and morphological aspects: in these layers, the Mn/Fe ratio varies from 3 to 40 but is never < 1, Ni and Cu are highly abundant, and bright, thick Mn-rich layers alternate with dendritic, Mn-poorer layers. Fragments of marine plankton are very common inside the diagenetic massive layers and are not observed in the other layers, as also reported by Halbach et al. (1981).

In fact, the Mn/Fe ratio of the nodular nuclei from the CCZ nodules, generally greater than 10 (and as high as 70), is characteristic of the suboxic diagenetic accretion described by Dymond et al. (1984). However, these authors associate the high Mn content with a high flux of Mn²⁺ from organic matter regeneration within the sediment, such that Ni²⁺ and Cu²⁺ would not be incorporated into the Mn oxide structure, resulting in low Ni and Cu contents. This is not what is found in this study, because high Ni and Cu contents are associated with high Mn/Fe ratios. Moreover, the prevailing Mn species are Mn⁴⁺ and Mn³⁺, not Mn²⁺. Therefore, oxic diagenesis is more likely to have occurred.
The nodules from the CIB do not present the same external asymmetry observed for the nodules from the CCZ. They are not discoidal, nor do they present a rim separating two different surface textures between exposed and buried sides. Massive and dendritic layers are the main textures observed in the CIB nodules, which, together with their spherical to spheroidal shapes and rough surface texture, indicates a common diagenetic origin (HALBACH et al., 1981). Besides, Mn/Fe ratios > 5 and high Ni and Cu abundances relative to Co and Ti confirm that they belong to the diagenetic type category (HALBACH et al., 1981). Still, the REE content in these nodules is < 500 ppm, low in comparison to the MB nodules and the RGR coated pebble and close to that of the CCZ ones.

Hydrogenetic accretion is not likely to have occurred in the CIB nodules, since Mn/Fe ratios do not reach 1 in the oxide layer. Also, the columnar layer texture typical of this type of accretion (HALBACH et al., 1981) is not present. The presence of low-Mn/Fe layers is therefore explained by exposure to oxic pore water rather than by direct exposure to seawater (hydrogenetic accretion), resulting in oxic-diagenetically precipitated Fe-rich Mn layers (DYMOND et al., 1984). This work suggests that the nodules from the CIB formed buried within the first centimeters of oxic sediment, where moments of higher flux of seawater-sourced metals diffusing into the pore water alternated with moments when higher organic matter input released more diagenetically sourced metals (figure 39).

The nodules from the Mascarene Basin are very different from the ones discussed so far. Their morphology is very consistent with that of nodules governed solely by hydrogenetic concretion as described by Halbach et al. (1981). Looking into the interior of the nodules, the microstructures also seem compatible with the Halbach et al. (1981) classification. The MB nodules, hydrogenetically formed, present a regular and closely packed fabric with columnar patterns, which is reflected at the exterior surface as the finely grained smooth texture. The variable reflectivity is due to the variable abundance of Mn in the hydrogenetic material.

The geochemical aspects of the nodules from the Mascarene Basin are very distinct from those of the CCZ and CIB. They are typical polynodules from slopes and seamount vicinities (HEIN et al., 2015). The relative elemental composition also reflects hydrogenetic formation, as the Fe content of these nodules is higher and the Mn/Fe ratio is generally < 1. The low Ni and Cu contents and higher Co and Ti contents agree with those expected for hydrogenetic accretion. Also, the rare earth element content is > 1000 ppm, twice that of the previous nodules. Despite the hydrogenetic signature of nodule SK35-24, its shape is not spheroidal because of the shape of its nucleus.

Nodules from the MB have been studied before by Nath and Prasad (1991). However, neither of the two studies investigated the chemical composition with spatial resolution as the present study does. The high similarity all around the nodules indicates that the accretion, i.e., the metal source and the rate of accretion, did not vary much over time. The fractionation between Mn-rich and Fe-rich layers does not occur in the MB nodules, or occurs on such a small scale that it cannot be identified. Their geochemical composition is more homogeneous with respect to all the chemical elements.
The small size may be related to their generally lower growth rate in comparison to the other types, which also explains the low thickness of the internal layering. Their roundness is still little understood: some authors hypothesize a rolling movement driven by bottom currents, but no evidence exists that bottom currents would be able to move the nodules constantly. Also, the rolling movement would be expected to occur throughout the entire nodule accretion time in order to produce such perfectly concentric layers. From our observations, however, nodules attach together, and this would have required an interval without movement long enough for the attachment to happen (figure 40). Nodules from red clay sediments have been found to be hydrogenetic in origin (PATTAN; PARTHIBAN, 2011). However, the shape of the nodules from the MB points to an exposed growth setting where they can eventually roll.

Finally, the investigation of the coated pebble from the RGR, even though it is a single sample, reveals interesting facts about the environmental conditions of that region. Relying on its morphological and geochemical aspects, it is possible to infer environmental conditions closer to those of the MB than of the CCZ and the CIB. In fact, as mentioned above, small, spheroidal nodules with smooth surface textures are typical of slopes and seamount vicinities and of the hydrogenetic genetic type. The abundance of well-preserved carbonate microfossils on the nodule surface is a reliable hint that it indeed comes from a rise, as carbonate shells would not be found well preserved below the Carbonate Compensation Depth. The planktonic foraminiferal lysocline, the depth below which the tests begin to suffer dissolution, is around 4050 m at the RGR (MELGUEN; THIEDE, 1974). The RGR fits the environmental setting expected for this type of nodule. Besides, the sole nodule studied also reflects a hydrogenetically ruled accretion type geochemically: its Mn/Fe ratio is < 1 and its Ni and Cu abundances are low in comparison to Co and Ti. Also, its REE content is the highest found among the nodules studied, being > 2800 ppm. Similar hydrogenetic signatures have been reported elsewhere in the South Atlantic and even in the Walvis Ridge system, which is also a seamount (XAVIER, 1982; KASTEN et al., 1998).

As a final consideration of the discussion section, this work achieved its objectives. The nodules studied agree with previous characterizations of the deposits already described (CCZ, CIB and MB), and further aspects studied in this work add to those descriptions. A first characterization of an Fe-Mn oxide coated pebble from the RGR is presented. The variable Mn/Fe ratio previously observed in the nodules from the CCZ was found to occur also in the CIB, but not in the MB and RGR. This allows the generalization that this process happens only in the diagenetic and mixed-type nodules and is not detected in the hydrogenetic nodules. Since both diagenetic and mixed-type nodules form in unconsolidated sediment basins, the presence of the redox front seems to play the major role in the fractionation process, whereas the redox front is absent from the environment of formation of hydrogenetic nodules. The nodules in which the Mn-Fe fractionation happens correspond to the ones with the most complex alternation of layer textures (dendritic, massive and botryoidal), so morphology is a good indicator of nodule genesis.
CONCLUSIONS

The polymetallic nodules from the four ocean regions studied exhibit common characteristics and are formed fundamentally by the same two processes, oxic diagenetic and hydrogenetic accretion of Mn and Fe oxides. However, the nodules differ in which of these processes rules their growth, as revealed by their morphological and geochemical features. Polymetallic nodules from the Clarion-Clipperton Zone and the Central Indian Basin are dominantly formed by oxic diagenesis, whereas the ones from the Mascarene Basin and the Rio Grande Rise are hydrogenetic. Also, the accretion process was not constant during nodule growth, as revealed by the heterogeneous morphology and geochemistry of layers, mainly in the diagenetic and mixed-type nodules. In these nodules, the Mn/Fe ratio is highly variable, as are the metals associated with each of these oxides. The redox front influences unconsolidated sediments, where this type of nodule forms, and variation in its depth is suggested as the main factor responsible. Hydrogenetic nodules form on hard rock or oxic unconsolidated sediment, and their metals come solely from the water column. The morphology itself reflects the Mn and Fe fractionation and may be used as an indicator of the environment of nodule formation, whether within the sediment layer or exposed to seawater.
Lactate-Mediated Signaling in the Brain—An Update

Lactate is a universal metabolite produced and released by all cells in the body. Traditionally it was viewed as an energy currency that is generated from pyruvate at the end of the glycolytic pathway and sent into the extracellular space for other cells to take up and consume. In the brain, such a mechanism was postulated to operate between astrocytes and neurons many years ago. Later, the discovery of lactate receptors opened yet another chapter in the quest to understand lactate actions. Other ideas, such as modulation of NMDA receptors, were also proposed. Up to this day, we still do not have a consensus view on the relevance of any of these mechanisms to brain functions or their contribution to human or animal physiology. While the field develops new ideas, in this brief review we analyze some recently published studies in order to focus on some unresolved controversies and highlight the limitations that need to be addressed in future work. Clearly, only by using similar and overlapping methods, cross-referencing experiments, and perhaps collaborative efforts can we finally understand what the role of lactate in the brain is and why this ubiquitous molecule is so important.

Introduction

Our appreciation of the diversity of the actions and interactions of L-lactate (Lac) in the brain continues to expand, and with it, our understanding of the potential mechanisms by which Lac fulfils its diverse roles. While several recent reviews have discussed specific aspects of central Lac effects [1][2][3][4], the field is still full of controversies. Here, we present only a handful of studies put forward by different groups, in order to highlight potential new avenues of viewing previous findings and mechanistic interpretations. The selectivity of tools to study Lac actions is gradually improving, but their limitations should be considered when interpreting effects on pathways with complex interconnectivity such as those in which Lac is involved.

Following glycolysis, Lac is generated from pyruvate by lactate dehydrogenase (LDH), an enzyme that operates in both directions, using NADH to reduce pyruvate into Lac (Figure 1). Even though LDH isoforms differ in their interconversion rates, both LDHA and LDHB are thought to establish an equivalent pyruvate-Lac equilibrium under steady-state conditions [5]. As such, Lac cannot be metabolized any further. It is either released to the extracellular space and shared with other cells, or is reconverted by LDH to pyruvate, which can then enter the TCA cycle in the mitochondria.

Figure 1. Interconversion of lactic and pyruvic acid is mediated by LDH and recovers NAD⁺ used in glycolysis. Glucose for glycolysis is imported from the periphery or/and recruited from glycogen under conditions that stimulate glycogenolysis.
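Because LDH keeps the lactate-pyruvate pool close to equilibrium, the direction of net flux follows directly from mass action. A standard near-equilibrium formulation (added here for illustration; K_eq denotes the equilibrium constant of the reduction) is:

```latex
\begin{align}
\mathrm{pyruvate} + \mathrm{NADH} + \mathrm{H^{+}}
  \;&\rightleftharpoons\; \text{L-lactate} + \mathrm{NAD^{+}}\\[4pt]
\frac{[\text{lactate}]}{[\text{pyruvate}]}
  \;&\approx\; K_{\mathrm{eq}}\,
  \frac{[\mathrm{NADH}]\,[\mathrm{H^{+}}]}{[\mathrm{NAD^{+}}]}
\end{align}
```

Read in either direction, this relation makes a point that recurs below concrete: importing Lac pushes the pool towards pyruvate and NADH, lowering the cytosolic NAD⁺/NADH ratio, whereas importing pyruvate has the opposite effect.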
All Lac release mechanisms identified so far are gradient-dependent. These involve monocarboxylate transporters (MCT), of which isoforms 1 and 4 are predominantly expressed by astrocytes while MCT2 is mainly found in neurons [6]. In addition, Lac release from astrocytes through connexin hemichannels has been implied by a number of publications [7,8]. A Lac-permeable ion channel which could be activated by depolarization and positively modulated by Lac has also been described [9], and further routes may potentially exist. In the brain, Lac is released by both neurons and astrocytes, but it is well established that astrocytes produce and release more Lac than neurons [10]. This creates a gradient of Lac from astrocytes towards the extracellular space and neurons [11,12]. As can be seen from the following, some of the proposed mechanisms require Lac entry into neurons in the vicinity, while others postulate that Lac acts on receptors located on the cell surface. Our aim here is to highlight the remaining questions, discuss the usefulness of commonly used tools, and suggest interesting avenues to explore further.

Mechanisms Which Are Attributed to Lac Entry into Target Neurons

The concept of Lac being passed on between different cell types is well established for peripheral tissues [13]. For the brain, the hypothesis of an analogous Lac shuttle operating between astrocytes and neurons was proposed decades ago [14], originally as a mechanism to subsidize neurons with energy under conditions of high metabolic demand, such as periods of active firing of action potentials.
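The energetic argument behind the shuttle can be made explicit with textbook stoichiometry (approximate yields, added here for illustration; exact ATP counts depend on shuttle and proton stoichiometries):

```latex
\begin{align}
\text{glucose} &\xrightarrow{\ \text{glycolysis (astrocyte)}\ }
  2\,\text{lactate} + 2\,\mathrm{ATP}\\[4pt]
2\,\text{lactate} + 6\,\mathrm{O_2} &\xrightarrow{\ \text{oxidation (neuron)}\ }
  6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + {\sim}28\,\mathrm{ATP}
\end{align}
```

On these numbers the astrocyte retains only a small glycolytic dividend while the receiving neuron captures the bulk of the oxidative yield, which is why the caloric-value interpretation is attractive; whether neurons actually use Lac in this way is the contested point.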
The supporting evidence for the Lac shuttle hypothesis, including why Lac may be a preferred substrate to glucose, can be found in publications from P.J. Magistretti and colleagues [1,4]. Up to this day, the hypothesis remains a matter of debate, with strong arguments for [12] as well as against it [15][16][17][18]. Perhaps one of the more contentious aspects of the shuttle hypothesis is the question of why astrocytic Lac, once transferred into neurons, should be used in preference to glucose for ATP generation. Tracing experiments with ¹³C-labelled glucose and other compounds have indicated that in the brain, in contrast to peripheral tissues, glucose rather than Lac is the primary source of TCA metabolites [19]. Other studies also argued that glucose is, in fact, the preferred neuronal source of energy, at least under physiological conditions and normal glucose concentrations [15,16,20]. Hence, the relative importance of astrocytic Lac as a source of neuronal ATP is still controversial.

Mechanisms Primarily Linked to Increased ATP Production in Neurons

The hypothesis that astrocytic Lac is the preferred substrate for ATP generation in neurons goes back to the initial Lac shuttle concept mentioned above [14,21,22]. In some studies, this mechanism is implied rather than demonstrated directly. Typically, conclusions are based on the sensitivity of the recorded effects to LDH blockade (as, without conversion into pyruvate, Lac cannot be used for ATP production) or to inhibition of Lac transport through MCT, for example by 4-CIN or, in some cases, antisense oligonucleotides (AS-ODN) targeted at the expression of MCT. Often in such studies it is not possible to confidently exclude contributions of NAD⁺/NADH ratio changes (see next section) or of intracellular acidification. Studies from two laboratories are discussed in this section as examples of Lac-dependent processes that are attributed to Lac's caloric value for neurons.

In 2011, the group of C. Alberini published a paper implicating the Lac shuttle in memory at the level of the hippocampus [23]. More recently, this group extended their arguments and reported that astrocyte-derived Lac may affect excitatory and inhibitory neurons in the hippocampus via modulation of mRNA translation [24]. They showed that microinjections of 1,4-dideoxy-1,4-imino-D-arabinitol (DAB) into rat hippocampus disrupted memory, which was rescued by co-injection of pyruvate but not by an equicaloric quantity of glucose [24]. Of note, sensitivity to DAB, an inhibitor of glycogen phosphorylase and synthase, can point to the involvement of astrocytic Lac production in a process; however, it does not identify the actual Lac targets mediating the effect. Rescue by pyruvate implies import via MCT and a preference for monocarboxylates over glucose. Interestingly, while pyruvate can be transported by the same MCT as Lac and has an equivalent caloric value, it will have the opposite effect on the NAD⁺/NADH ratio (see section below), since LDH will use NADH to re-establish the Lac-pyruvate equilibrium.

In order to selectively inhibit Lac export from astrocytes, AS-ODN were used to suppress the expression of MCT1 and MCT4 [24]. AS-ODN interfere with the translation of specific mRNAs and thus with the synthesis of the respective proteins. Long-term memory was disrupted within only one hour of AS-ODN injection. This effect was also rescued by pyruvate, which can be explained by the fact that neurons mainly express a different type of MCT, MCT2 [24].
Given that the half-life of most proteins in mammalian cells, including in neurons and glia, by far exceeds one hour, the reduction of MCT protein levels here is a surprising observation [23][24][25] and may potentially point to off-target effects of this treatment which merit further study. The authors also assessed the impact of the Lac shuttle on mRNA translation, which is an important step in long-term memory and, as they argue, requires a significant amount of ATP. They found that the training protocol significantly increased the incorporation of puromycin into newly made proteins (the so-called SUnSET protocol) as tested 2 h after training. This increase was blocked by DAB and is, therefore, dependent on glycogenolysis in astrocytes. Moreover, both Lac and pyruvate were able to rescue increased translation after DAB administration. Additionally, expression of the immediate early gene Arc/Arg3.1 was also dependent on Lac supply. The take-home message from this work is that Lac derived from glycolysis in astrocytes is important to energetically support the translation of proteins required for memory formation.

Another study that argues for the importance of the caloric value of astrocyte-derived Lac looked at nociceptive transmission at the level of the dorsal horn of the spinal cord in mice [26]. The authors used a chemogenetic approach to activate astrocytes in the dorsal horn, employing Gq-coupled Designer Receptors Exclusively Activated by Designer Drugs (DREADD). The DREADD was expressed unilaterally using an adeno-associated virus with a GFAP promoter. After activation of the DREADD by clozapine-N-oxide (CNO) administration, the extracellular concentration of Lac was found to rise from ~1.2 mM to ~2.0 mM. Mechanosensitivity of the hind paw greatly increased after CNO injections and stayed high for several hours. This effect only occurred on the side which expressed the DREADD. The authors used 4-CIN to block MCTs and showed that it prevented the sensitization caused by CNO. Conversely, intrathecal injections of Lac caused pain sensitization and induction of several immediate early genes, markers of neuronal activation. These effects, too, could be blocked by 4-CIN, implying the need for transfer of Lac into neurons. Finally, the study looked at mechanical allodynia caused by partial nerve ligation. Several treatments targeting astrocytic Lac production had profound antinociceptive effects, including LDH inhibition by isosafrole, inhibition of the astrocytic TCA cycle by fluorocitrate (FC), and MCT blockade by 4-CIN, mentioned above. In some cases, the threshold of mechanosensitivity almost returned to the level of the contralateral control paw.

While the paper is consistent with a Lac shuttle mechanism between astrocytes and neurons in nociception, it does not provide direct evidence that it is the caloric value of Lac which is important in these processes, and not, for example, a shift in the NAD⁺/NADH ratio or in pH. A further complication is that 4-CIN, while an effective inhibitor of cellular Lac ex- and import, may also effectively deprive neurons (and astrocytes) of ATP generated by the mitochondria (see Section 2.3 below). Finally, the effects of FC are somewhat difficult to interpret as, in addition to affecting the mitochondrial TCA cycle, it depolarizes astrocytes [27], thereby affecting a host of potential-dependent transporters (see further comments below).
Overall, the body of studies which support the use of Lac for energy generation in preference to glucose is substantial, for example [21,28], but is Lac always preferred to glucose, and why? A recently published study from the L. Venance group offers an essential clue which may explain the existing controversies [29]. Here, experiments in vitro and in vivo are combined with mathematical modelling to carefully dissect which conditions favor the utilization of glucose vs. Lac. The authors used two types of protocols: in vitro, a high-frequency (100 Hz) 5× theta burst stimulation, which should require more energy for the generation of LTP, and, for comparison, spike timing-dependent plasticity (STDP), where the frequency of stimulation is relatively low. Both forms of plasticity are dependent on NMDA receptors. It was shown that while the high-frequency LTP requires Lac provision, STDP does not. In vivo, the authors used a simple novel object recognition task, where the rat needs to detect one new object, and compared it with a more complex test (object in place), where several objects have been moved in the arena. Here, the simple test is not sensitive to oxamate (an LDH inhibitor), while the more challenging test is, again pointing to the preferential use of Lac in situations of high energy demand. These experiments are matched by mathematical modelling. This study demonstrates that, while Lac (provided largely by astrocytes) is required to support synaptic activity when energy consumption is high, the conditions of the experiment are paramount.

Mechanisms Attributed to NAD⁺/NADH Ratio Changes

Apart from serving as a potential energy substrate, the import of Lac by MCT has several other effects on receiving cells. First of all, LDH converts excess Lac into pyruvate (Figure 1), increasing NADH and decreasing the NAD⁺/NADH ratio as a result.

Potentiation of NMDA Receptor Activity in Memory Formation or Retention

The hypothesis that a change in the NAD⁺/NADH ratio can affect NMDA receptors originates from the laboratory of P.J. Magistretti [30,31]. Initially, it was demonstrated that high concentrations of Lac (2.5 mM-20 mM) can trigger the expression of several immediate early genes and potentiate NMDA currents in cultured neurons. As mentioned above, the entry of Lac into the cells raises NADH levels. The study inferred that this increase in NADH is ultimately responsible for the potentiation of NMDA currents and the increased expression of immediate early genes. Both events may be expected to affect memory formation. The actual mechanism by which the NAD⁺/NADH ratio changes the function of NMDA receptors was not unequivocally identified in this paper. Bearing in mind that a strong potentiation of NMDA currents could potentially lead to excitotoxicity, later work by the same group suggested that Lac may counteract any neurotoxic effects through neuroprotective actions, by supplying neurons with energy when converted into pyruvate [32]. The balance of the two actions was suggested to depend on the strength of the glutamatergic stimulation.

The concept that NMDA receptors mediate the effects of Lac was later developed in a study that used bPAC, a light-sensitive adenylate cyclase from Beggiatoa, to optogenetically stimulate cAMP production in astrocytes [33,34]. In many cells, including astrocytes, cAMP elevation is a trigger for energy mobilization and glycogenolysis.
Since astrocytes are essentially the only glycogen-containing cells in the brain, they can mobilize this store upon PKA activation by cAMP and rapidly deliver Lac into the extracellular space [35,36]. Although the link to the NAD⁺/NADH ratio was not specifically proven, this study reported that raising cAMP in astrocytes disrupted mouse behavior in the object location memory paradigm (Figure 2). Interestingly, cAMP increased memory scores when the light was applied during training (consistent with the general concept of Lac acting on canonical NMDA-dependent mechanisms of memory formation). However, the same protocol inhibited memory if light stimulation was performed on day two after training and before memory was tested on day three (Figure 2). The authors discuss these effects as Lac having a positive impact on memory formation and a negative impact on memory retention. The arguments made in favor of the role of the Lac shuttle were that the negative effect of cAMP on memory retention 24 h after training could be antagonized by 4-CIN, as well as by an NMDA receptor blocker (MK 801; Figure 2).

Figure 2. (A) Astrocyte-derived Lac is suggested to be involved in memory formation at multiple stages. (B) Mice were trained on a memory paradigm in which they spent more time around an object that had been moved from its previous location. Astrocytes were stimulated either during training or 24 h later using bPAC. Discrimination was assessed on day 3. (C) Memory test on day 3 demonstrated that optogenetic stimulation of cAMP on day 2 blocked the ability of mice to discriminate object location (second sets of data points). However, both MK 801 and 4-CIN prevented the effect of bPAC activation (fourth sets of data points), and mice performed similarly to animals which were not light stimulated (first sets of data points). Without light stimulation, MK 801 and 4-CIN did not change the discrimination scores on day 3 (third sets of data points). ** p < 0.01 vs. the Mlc1-bPAC + no light + vehicle group; ## p < 0.01 vs. the Mlc1-bPAC + light + vehicle group by Tukey's test after one-way ANOVA. Modified from [34].

Altogether, while the effects of these drugs allow a clear-cut interpretation and the authors focus their discussion on the Lac shuttle hypothesis [34], several questions remain open. It seems that when cAMP is raised in astrocytes, it can have opposite effects on memory acquisition and storage, but in both cases the Lac shuttle and NMDA receptors are involved. Unfortunately, the effects of NMDA or MCT blockade on memory acquisition during or immediately after training, which may be expected to be the most critically NMDA receptor-dependent steps in memory formation, were not demonstrated in this paper. Additionally, the mechanism of memory disruption when astrocytes were activated 24 h after training is not immediately obvious: how should NMDA activation or Lac shuttling inhibit memory storage at that stage?

Another study confirmed that activation of astrocytes, in this case using a Gq-coupled DREADD or a Gq-coupled opsin (OptoGq), increased memory when performed during acquisition [37], but these stimuli were ineffective when applied during the test for memory recall. This study well illustrates one of the great challenges in elucidating the role of astrocytes in memory formation, which is to understand the specific localization of Lac action with respect to the "memory trace": How can an activated astrocyte or a group of astrocytes modulate a specific memory trace, given that each astrocyte contacts numerous neurons, potentially many thousands of synapses? Moreover, Lac almost certainly can spread between astrocytes via gap junctions, which would further diffuse the signal within the network. Can astrocytic modulation be targeted to individual synapses which are contacted by distinct endfeet of the same astrocyte? Or is the role of astrocytes to provide a wide-scale change in the extracellular concentration of metabolites such as Lac, ATP, glutamate, etc., resulting in more general network modulation? Under which physiological conditions could one expect the activation of large pools of astrocytes in the hippocampus, and what would be the mechanism? If, as per [34], such activation occurs, would it elicit amnesia covering the preceding 24 h? These are exciting questions and, clearly, a lot of work still needs to be done to explain how the stimulation of astrocytes affects memory formation and retention, and whether there may be specific mechanisms for compartmentalization of intra- and inter-cellular Lac signaling in astrocytes.

An interesting mechanism related to the NAD⁺/NADH ratio was found in carotid body glomus cells [38].
Although not located in the central nervous system, these cells have all the classical features of neurons and deserve mention here. The paper proposes another hypothesis which also requires Lac import and conversion by LDH. It demonstrated that the change in the cytosolic NAD⁺/NADH ratio may activate non-selective cation channels that are sensitive to the non-selective cation channel blocker 2-APB and belong to the transient receptor potential (TRP) channel family. As a result, the cells depolarize and start firing action potentials, leading to Ca²⁺ entry and elevation of the intracellular Ca²⁺ concentration. Entry of Lac is mediated by co-transport with protons and therefore decreases intracellular pH. The authors argue that acidification increases mitochondrial reactive oxygen species production, which further activates glomus cells [38]. This is a rare example of a study where Lac-induced acidification was actually taken into account.

Methodological Considerations of Commonly Employed Tools in the Study of Lac Transfer from Astrocytes into Neurons

In order to prove the involvement of the Lac shuttle, any effect of Lac needs to be primarily sensitive to the blockade of Lac entry into neurons. Inhibition of Lac-pyruvate interconversion by LDH enzymes is also used as an argument to support the notion of the Lac shuttle. However, block of the transporters by 4-CIN is bound to inhibit Lac release from astrocytes, while block of LDH should also block the generation of Lac in astrocytes. Currently used drugs are not selective for a specific location of any of these processes. The MCT blocker 4-CIN (α-cyano-4-hydroxycinnamate, or CHC) is used in many studies on this topic and will block astrocytic release as well as neuronal uptake of Lac. However, it was established long ago that 4-CIN is two orders of magnitude more potent at inhibiting pyruvate transport into mitochondria than at inhibiting Lac transport across the plasma membrane (reviewed in [39]). Therefore, while 4-CIN may well be an effective inhibitor of Lac import into cells, blockade of pyruvate uptake by the mitochondria could shift the LDH reaction towards Lac formation and lead to increased levels of intracellular Lac. Moreover, 4-CIN could also prevent astrocytes and neurons from using the resultant pyruvate for oxidative phosphorylation in the mitochondria, thus additionally depriving cells of energy. In addition, it has been shown that 4-CIN application, probably by inhibiting the mitochondrial pyruvate transporter, can cause profound acidification in neurons, which is an additional complication [40]. Newer, and potentially more selective, AR-C compounds are available for the inhibition of MCT [41].

Knock-down of astrocytic or neuronal MCT expression using AS-ODN is a potentially more selective approach which has been used in some studies [24]. However, the reported rapidity of the AS-ODN action, which lowered the expression of MCT proteins within one hour, is rather surprising. The half-life of MCT proteins must be taken into account to further validate this approach.

The effects of FC are very difficult to predict and interpret. Fluoroacetate, which is converted into FC in the cells, is listed as a highly toxic compound and is even considered among chemical warfare agents [42]. FC application results in the accumulation of citrate on one hand and a reduction of glutamine, the precursor of GABA, on the other [43,44]. Obstruction of the TCA cycle leads to a rapid increase in extracellular lactate and pyruvate levels [42].
At the same time, because of the failure of the ATP-driven Na⁺/K⁺ exchanger, astrocytes depolarize, and this can cause the release of various signaling molecules, such as purines or possibly glutamate, which would result in the activation of adjacent neurons [27]. It follows that, because of its multiple effects, the interpretation of FC experiments can be problematic.

Finally, as mentioned earlier, we should not disregard the fact that entry of Lac into cells through MCT must necessarily lead to acidification of the cytoplasm by the co-imported protons. These effects may become particularly prominent when the Lac concentrations used exceed the physiological range, which is thought to be below 2 mM (reviewed in [45]). Indeed, in some of the studies mentioned above, a 40 mM solution of Lac was microinjected locally into the tissue, or concentrations of 10-20 mM Lac were applied to the cells [26,31]. Numerous studies where Lac was expected to enter neurons disregarded changes in pH, and we think this might be more important than is currently appreciated.

Cell Surface Receptor-Mediated Signaling by Lac in the Brain

It is well established that Lac can affect cell function via cell surface G-protein coupled receptors (GPCR) without the need to access the cytoplasm (Figure 3). Lac has its cognate receptor, registered by IUPHAR (https://www.guidetopharmacology.org/ (accessed on 1 December 2022)) as HCA1 (previously known as Lactate receptor 1, LACR1, GPR81, GPR104). It is encoded by the gene HCAR1. The physiological role of HCA1 is best established in adipocytes, where it inhibits lipolysis via the Gi-protein signaling cascade [46]. In neurons, Gi signaling is characteristically associated with the inhibition of action potential activity and transmitter release.

Lac has very low potency at HCA1. In the original publication, EC50 values of Lac for HCA1 from different species are listed between 3.7 and 6.9 mM ([47], Table 1). This is not surprising because, in the periphery, Lac levels in plasma are typically at several millimoles and increase prominently during exercise. Hence, HCA1 sensitivity matches peripheral physiological Lac levels (for further discussion see [48]). In the brain, however, according to most sources, average extracellular Lac concentrations do not exceed 1.5-2 mM. While it cannot be excluded that Lac may be more concentrated in microdomains, higher average concentrations of Lac have been reported in pathophysiological situations such as hypoxia or seizures [48].

The group of J.-Y. Chatton has published several studies reporting inhibitory effects of Lac on mouse neurons via HCA1 [49][50][51][52]. Using both wild-type and HCA1 knockout mice, the authors report inhibitory effects of Lac and of 3-chloro-5-hydroxybenzoic acid (3Cl-HBA), an agonist of HCA1, on neurons in patch-clamp and calcium imaging experiments. Decreases in miniature excitatory postsynaptic current (EPSC) frequency were also observed. These inhibitory effects of Lac on neurons are explained by the activation of canonical Gi-protein signaling pathways [52]. Moreover, the hippocampal neurons that are modulated by HCA1 were suggested to be excitatory, not inhibitory, since they did not counter-stain for GAD67 [50]. In the latter study, additional experiments were carried out on acute slices from human patients, where a reduction in spontaneous EPSC frequency after the application of 3Cl-HBA was found.
Overall, the take-home message from this body of work is that neurons in various parts of the rodent and human brain express HCA1, and HCA1 activation by Lac inhibits neuronal activity by reducing excitability and via presynaptic mechanisms. While these results are interesting and potentially very significant, Lac-mediated inhibition of excitatory neurons in many areas of the brain is quite difficult to reconcile with many of the experiments where the astrocytic release of Lac is seen to facilitate learning and memory (see Sections 2.1 and 2.2). The balance in a physiological context between the support of actively firing neurons metabolically or via potentiation of NMDA currents by Lac and their inhibition via HCA1 needs to be considered further. During strenuous physical exercise, plasma concentrations of Lac rise dramatically and Lac can travel from plasma into the brain, thus increasing central Lac levels [17]. Brain Lac concentrations also rise during arousal [53]. If HCA1-mediated inhibition was operational within the physiological range of Lac concentrations, this should result in a shutdown of cortical and hippocampal networks, which clearly does not occur. This suggests that the key questions relate to the relevant concentrations of both Lac and 3Cl-HBA in modulating neuronal activity.

Figure 3. Summary of putative Lac receptor-mediated signaling mechanisms in brain cells. Lac transported into the cell can be metabolised and/or influence gene expression, e.g., via NMDA receptor modulation or ERK pathway activation. Increased ATP levels may inhibit KATP channel activity and decrease cell excitability. Lac may also act via surface GPCR to stimulate or inhibit neurones. The effects of acidification caused by protons co-imported via MCT require clarification. We also hypothesise that Lac can be converted into Lac-Phe in the brain, the implications of which have yet to be discovered. Black arrows: stimulatory action; red lines: inhibitory. AC: adenylate cyclase; NA: noradrenaline; NDRG3; ERK(P): phosphorylation of extracellular signal-regulated kinases; mit: mitochondria. Modified from [48].
A study by Ordenes and colleagues offers a completely different view of the potential mechanism by which HCA1 can modulate neurons [54]. They studied the arcuate nucleus (ARC), where proopiomelanocortin (POMC) neurons synthesize the anorexigenic neuropeptide α-MSH derived from the POMC transcript. Brain slices used in this study were perfused with ACSF containing 1 mM glucose. This factor (the concentration of glucose in the media) can be quite important in many analyses on cultured cells and slices but is not often discussed or considered. Of note, the vast majority of slice studies use solutions with 5 or even 10 mM glucose. This specific study found that ~60% of POMC neurons were activated by 15 mM Lac. Surprisingly, 15 mM D-lactate and 15 mM glucose also activated POMC neurons, but 4-CIN did not prevent the Lac effect, suggesting an extracellular target. The HCA1 agonist 3Cl-HBA (40 µM) also depolarized POMC neurons, and its action could be blocked by pertussis toxin, confirming the involvement of Gi-protein signaling. Altogether these results point to a role of HCA1; however, the authors could not find HCAR1 transcripts in single-cell transcriptomes of POMC neurons. Instead, using immunohistochemistry, they demonstrate that HCA1 is expressed by local astrocytes. According to the paper, activation of HCA1 on astrocytes leads to a paradoxical increase in astrocytic intracellular Ca2+. While a Ca2+ increase seems an unexpected effect following activation of a Gi-coupled receptor, it has been reported for astrocytes in other studies [55][56][57]. The authors speculate that this could lead to the release of excitatory gliotransmitters, possibly glutamate, and by this mechanism, POMC neurons may be stimulated by Lac and 3Cl-HBA. With respect to the coupling of HCA1, the current situation is not entirely clear and it is possible that some of the effects are mediated by the βγ complex or another type of Gα subunits, which is not uncommon for G-protein coupled receptors. In addition, the authors argue that if Lac is taken up by POMC neurons, it would lead to increased ATP production, closure of ATP-sensitive K+ channels (KATP), and depolarization, although the paper does not appear to contain direct evidence in support of this suggestion [54]. Finally, we consider studies from our own group which also indicate that the brain operates with Lac-sensitive GPCR, but ones distinct from HCA1. We reviewed some of the relevant studies in [48] and now only briefly summarize the current state of play. In 2014, our group demonstrated a link between astrocytes and a specific subset of noradrenergic neurons in the Locus coeruleus (LC; [58]). We concluded that astrocyte-derived Lac stimulates the release of noradrenaline from LC neurons and activates these neurons via a cAMP-dependent signaling pathway.
Multiple experiments in that study indicated that Lac does not need to enter LC neurons and that the most logical explanation for these effects was the existence of another, yet uncharacterized, GPCR-mediated signaling pathway which can be recruited by Lac. LC neurons are specialized and different from the glutamatergic neurons studied in most other papers. They are rather unique in their morphology and physiology and project all across the central nervous system. Activation of LC is associated with central arousal and active brain states [59][60][61]. Hence, the excitatory effect of Lac on these cells could possibly provide a link between overall brain activity and attention or positive motivation responses and reflect a mechanism by which LC activation by salient stimuli may be amplified to orchestrate generalized cortical desynchronization. Interestingly, we later observed that Lac can also activate another group of noradrenergic neurons in the rostral ventro-lateral medulla that is responsible for activating the sympathetic nervous system, consistent with an autonomic arousal response [62]. We postulated that this effect is mediated by a yet unknown GPCR, which has been termed Lac receptor x (or "LLRx") and is expected to operate via a cAMP-mediated mechanism. Over the years, we have made multiple attempts to identify LLRx and learn more about it, but with limited success. For instance, we characterized in a later study its activation by compounds derived from Lac [48]. Apart from HCA1, which is a Gi-protein coupled receptor and inhibits cAMP production, there are at least two GPCRs sensitive to Lac and known to couple to Gs-proteins, thus being able to raise cAMP. One of these is the olfactory receptor OR51E2, which is expressed in several other tissues outside of the olfactory epithelium, such as the prostate. We could not confirm the expression of OR51E2 in LC neurons and its pharmacological characteristics do not match what we know about LLRx [48]. By serendipity, we found that the proton receptor GPR4, previously known as GPR19 [63], can be modulated by Lac [64]. GPR4 can probably couple to various G-proteins, but its main signaling partner is Gs and, when stimulated, GPR4 leads to profound increases in cAMP [63,64]. GPR4 is expressed by endothelium around the body, including the brain, and also by some subsets of neurons, but not the LC neurons. We found that Lac negatively modulates GPR4 and reduces proton-induced cAMP responses [64]. Hence, the characteristics of this GPCR are not consistent with the elusive LLRx but, nevertheless, modulation of GPR4 by Lac could be important for some aspects of brain function. We undertook a screening effort, analyzed a range of orphan GPCRs that are expressed by LC neurons and found one receptor that could be a viable candidate for further experimentation [48]. In a luminescence assay, the application of Lac within the range of concentrations which we consider physiological (less than 5 mM) to HEK cells expressing GPR137 resulted in highly significant increases in cAMP. 5 mM of Lac elevated luminescence to ~175% relative to control. Moreover, 0.4 mM D-lactate antagonized the effect of 2 mM Lac, consistent with our previously reported observations [58]. In terms of its sequence and splicing pattern, GPR137 is not a typical GPCR. While the typical number of 7 transmembrane regions has been predicted for its amino acid sequence
(www.proteinatlas.org (accessed on 1 December 2022)), it shares little homology with other GPCRs and its coupling to G-proteins has not been confirmed (IUPHAR, guidetopharmacology.org). Nevertheless, its expression is verified by multiple databases, and it has been preferentially localized to the lysosomal compartment (Ensembl, GeneCards, NCBI). The potential physiological roles of GPR137 are still little understood. According to in situ hybridization data in the Allen Brain Atlas, there is a widespread expression of the GPR137 transcript in the mouse brain (http://mouse.brain-map.org/experiment/show/75651149 (accessed on 1 December 2022)). While apparently present in human and mouse, in rat it may not even exist as a full-length protein (https://www.guidetopharmacology.org/ (accessed on 1 December 2022)). We believe that the effect of Lac via this receptor should be investigated further. Similarly, we also found that in cells transfected to express GPR180, Lac could reduce cAMP [48]. While recently it has been shown that GPR180 is not a GPCR but, instead, a component of the TGFβ signaling complex, it may still be an interesting candidate to mediate some Lac effects in the brain [65]. An interesting and entirely unexpected avenue might have been opened by a recent discovery of biological activity of the Lac metabolite N-lactoyl-phenylalanine (Lac-Phe). This product of conjugation of Lac with phenylalanine is formed in the periphery when Lac levels are increased by exercise in mice, humans and racehorses, and can suppress food consumption [66]. This effect strongly suggests a central site of action, although the study does not address this possibility. A speculation that may merit further investigation is that, if Lac in the brain could also be converted into Lac-Phe, some of the central effects previously associated with Lac could actually be mediated by Lac-Phe or a similar metabolite. To summarize this section, in order to eventually gain a better understanding of the full scale of Lac actions in the brain, there is scope for searching for additional receptors which can be either specifically activated by Lac or modulated by it in a biologically relevant manner. For the sake of completeness, we draw the reader's attention to yet another possible mode of Lac action, i.e., via lactylation of histone lysines [67,68]. Via this route Lac can potentially induce long-lasting epigenetic modifications of gene expression. The relevance of this process for brain physiology is still to be discovered.

How Do Activated Neurons Engage Local Astrocytes to Release Lac?

As of today, there is no generally accepted theory which explains how astrocytes "know" that the neighboring neurons are active and require metabolic support, or that modulation via Lac release is indicated. Moreover, it may be that such a general mechanism does not exist, or that there is more than one acting together, as is so often the case in nature. So, what are the main proposed coupling mechanisms between neurons and astrocytes? The group of L.F. Barros focused on this topic for several years and explains such coupling by the action of elevated extracellular K+, NO, glutamate and possibly NH4+ [69,70]. Elevation of extracellular K+ is a result of neuronal activity and the subsequent opening of voltage-gated K+ channels required to restore and maintain the neuronal resting membrane potential.
An increase in extracellular K+ concentration has a depolarizing effect on the membrane potential of astrocytes, and this could be the signal to activate glycolysis, resulting in Lac output. One concept proposes Na+ as a link between extracellular signals, astrocytic energy metabolism and Lac production [71]. Large quantities of glutamate released from excitatory neurons are taken up by astrocytes in a Na+ gradient-dependent manner, raising intra-astrocytic Na+ that then needs to be extruded by the Na+/K+-ATPase. The resulting drop in ATP/ADP ratio is proposed as one of the triggers of glucose uptake and anaerobic glycolysis. Hence, according to this view, Na+ is the "energy currency" and a "mediator of metabolic signals in the context of neuron-glia interaction" [71], see also [72]. Depolarization of astrocytes could also cause Lac release via opening of the channel described by [9]. An earlier study by M. Nedergaard's group emphasizes the role of Ca2+ as a link between neuronal activation and astrocytic signaling [73]. It was reported that, when neurons are actively discharging, the opening of voltage-gated Ca2+ channels results in rapid Ca2+ influx into the neurons and thus a drop in its local extracellular concentration. This opens astrocytic connexin Cx43 channels which may release ATP. While that paper does not touch upon the Lac shuttle concept, the release of ATP by an astrocyte can be postulated to feed back to the astrocyte via an autocrine loop, since astrocytes express numerous P2Y receptors and are extremely sensitive to ATP and vigorously respond to it in vivo and in vitro [74], see the review in [75]. ATP-triggered Ca2+ waves in astrocytes are a well-known phenomenon [62]. In addition, hemichannels in astrocytic membranes can release Lac directly [8]. Hence, coupling via a drop in extracellular Ca2+ with the resultant release of ATP could link neuronal activity, astrocytic metabolism, and the release of gliotransmitters together. Finally, regarding the communication between noradrenergic neurons and glia, the expression of adrenoceptors on astrocytes is well described and confirmed by RNA sequencing [76]. Astrocytes respond to noradrenaline with distinct patterns of Ca2+ and cAMP responses that are implicated in gliotransmitter release in general, and Lac release in particular [77,78]. The latter may establish a reciprocal positive feedback mechanism for central noradrenergic transmission (see [58]). To summarize, currently we do not have a unifying concept which could explain how the activity of neuronal networks engages astrocytes to release Lac. Development of such a concept is an important goal that requires further evidence linking the various mechanisms that have been established to date.

Future Perspectives

What are the most obvious controversies surrounding the role of astrocyte-generated Lac, and where should we move next? The analysis presented here shows that the experimental conditions, the concentrations of Lac which are seen as acceptable, the tools, and the interpretation of the results are so different between individual groups that in many cases it is hard to compare them or come to any consensus. We believe that the following questions require a concerted effort of various laboratories, possibly via directly coordinated efforts and sharing of the same tools between them.
• Under which conditions are the effects of Lac at concentrations exceeding 3-5 mM physiologically relevant, especially if the impacts of intracellular acidification were not monitored? In that context, when are the changes in the NAD+/NADH ratio physiologically relevant?
• What is the role of the HCA1 receptor in the brain: is Lac actually inhibitory to many neurons in the cortex and hippocampus, and how could this tie together with the proposed role of Lac in the process of memory formation?
• What are the local dynamics of Lac in the extracellular space? Can local Lac concentrations significantly exceed the "average" extracellular concentrations reported in the literature? This would require measurements with a new type of genetically encoded biosensor.
• Are there any other receptors that are responsive to Lac in the brain, with sensitivity better suited to the reported physiological Lac concentrations (<2 mM)?
• By what mechanism does Lac excite LC and RVLM catecholaminergic neurons?
• What is the trigger for the production of Lac by astrocytes in response to the activation of the neuronal networks? Are there several mechanisms, and are they brain area-specific?

This list may be continued. We hope that this review will stimulate and facilitate further collaborative efforts to resolve some of the long-standing mysteries surrounding the roles of Lac in the brain.
Mollaret's Meningitis due to Herpes Simplex Virus 2: A Case Report and Review of the Literature

Mollaret's meningitis is a rare neurological disorder characterized by recurrent episodes of aseptic lymphocytic meningitis, often associated with herpes simplex virus 2 (HSV-2) infection. We report the case of a 39 y.o. Italian woman who experienced four episodes of aseptic lymphocytic meningitis between 2004 and 2023, diagnosed as Mollaret's meningitis. In each episode, the patient presented with fever, severe headache and photophobia. In two episodes, cutaneous vesicles in the left gluteal area preceding the meningitis symptoms were also reported. The diagnostic evaluation included a physical-chemical analysis and a real-time PCR of the cerebrospinal fluid (CSF). The CSF presented pleocytosis with lymphocytic predominance and a positive HSV-2 load, with a peak of 1234 copies/mL. The patient was treated successfully with acyclovir, and the symptoms resolved without neurological sequelae. This case highlights the importance of comprehensive diagnostic testing and vigilant monitoring to manage Mollaret's syndrome effectively.

Introduction

Mollaret's meningitis (MM) is a neurological syndrome characterized by recurrent aseptic lymphocytic meningitis episodes that usually last for 2-7 days with a favorable outcome but unpredictable recurrences, often associated with herpes simplex virus 2 (HSV-2) infection. It was first mentioned by Pierre Mollaret in 1944, who described recurrent aseptic meningitis in three patients with similar neurological symptoms and an absence of a bacterial etiology [1]. The French neurologist observed the presence of "fantomes cellulaires" (cell ghosts), a type of large endothelial-like cell, in the cerebrospinal fluid (CSF). The increasing use of molecular diagnostic techniques has highlighted HSV-2 as the most commonly isolated pathogen in this condition. Other viruses that can rarely lead to the syndrome are HSV-1 [2][3][4], varicella zoster virus [5], Epstein-Barr virus [6], human herpesvirus-6 [7] and enterovirus [8]. MM is also called benign recurrent aseptic meningitis, benign recurrent endothelial meningitis and benign recurrent endothelial-leukocytic meningitis. Using electron microscopy, de Chadarévian and Becker [9] demonstrated how Mollaret's endothelial-like cells were monocytic in origin. Typically, patients present with headache, meningism, and photosensitivity. Once the meningitis resolves, the patient has no neurologic sequelae until the next episode. Symptom-free intervals last from a few weeks to many years, with some patients experiencing only three episodes and others reporting over 30 episodes [10]. The definition of MM varies among studies; in a recent cohort study, a minimum of only two episodes of meningitis was applied to diagnose MM instead of three, as usually required. The authors, only considering HSV-2-associated MM, reported an annual incidence of 1.2 cases per 1,000,000 adults [10]. This present paper describes clinical features and laboratory findings in a patient who presented with four episodes of meningitis over a twenty-year period; a literature review of the last decade is also included.
Case Presentation

In October 2023, a 39 y.o. Italian woman was admitted to the Emergency Department of IRCCS Azienda Ospedaliero-Universitaria of Bologna, reporting, from the day before, intense headache, neck stiffness and photophobia. Her temperature was 37.2 °C, her blood pressure was 116/75 mmHg, her heart rate measured 84/min, and she had a pulse oxygen saturation of 98% on room air. A physical examination revealed an erythematous cutaneous rash over the neck and in the thoracic and abdominal region, without signs of herpetic lesions, although the patient reported the presence of cutaneous vesicles in the left gluteal area 10 days before. The patient's medical history included hypothyroidism, latex and penicillin allergies, allergic asthma and gastro-esophageal reflux disease. In addition, the patient reported recurrent meningitis episodes that occurred in 2004, 2013 and 2016. An urgent lumbar puncture was performed, revealing limpid CSF with normal pressure. The CSF profile showed pleocytosis with a predominance of lymphocytes and a normal glucose concentration (Table 1). A syndromic multiplex PCR on CSF was performed (FilmArray ME Panel, BioFire Diagnostics, Salt Lake City, UT, USA), confirming the clinical suspicion of HSV-2 meningitis in the absence of a bacterial etiology. Antiviral therapy was promptly started (acyclovir 750 mg i.v. every 8 h) in association with an antihistaminic for the neck rash, and the patient was transferred to the Infectious Disease Unit. The quantitative PCR (HSV-2 ELITe MGB® kit, ELITech Group, Torino, Italy) on CSF revealed a viral load of 1234 copies/mL, whereas HSV-2 PCR on blood was negative. Imaging findings were normal, in the absence of intracranial hypertension and ischemic or hemorrhagic areas. A progressive reduction in photophobia and neck rigidity, in conjunction with the resolution of the headache, was described in the following 7 days of hospitalization. Subsequently, the patient reported mild hyposthenia in the left side of the body, drowsiness and dizziness. The symptoms disappeared after suspension of the antihistaminic therapy. Laboratory parameters revealed mild leukocytosis (10.6 K/µL; range 3.6-10.5) with a normal differential, hemoglobin (15 g/dL) and platelet count (263 K/µL). During the medical work-up, the patient reported that similar symptoms had occurred during the previous three hospitalizations, specifically mild fever (37.2-37.5 °C), headache, neck stiffness and photophobia. In the first episode, which occurred in 2004, the clinical examination showed the presence of Brudzinski and Kernig signs in the absence of cutaneous vesicles. Although no molecular testing was performed in CSF, Pandy's test was positive, indicating hyperproteinorrachia. The whole blood count showed increased white blood cells (9.29 K/µL; range 4.8-8.5). Serum antibodies for the coxsackie virus, echovirus, poliovirus and adenovirus were negative. Serological tests for EBV, cytomegalovirus and Borrelia burgdorferi revealed IgG-positive and IgM-negative results. The electroencephalography was normal. The patient was discharged with a diagnosis of lymphocytic meningitis. The patient returned to the emergency department in 2013 and in 2016 presenting intense headache, back pain and photophobia persisting from the day before. At admission, the patient's temperature was 37.2 °C.
No cutaneous vesicles were reported in the episode of 2013, while in 2016 they were present in the left gluteal area 7 days before the onset of the symptoms. On both occasions, an urgent lumbar puncture was performed, and a positive result for HSV-2 was detected, with a low viral load (<500 copies/mL), as reported in Table 1. Blood tests showed leukocytosis in both episodes (11.71 K/µL; range 4.8-8.5, and 12.94 K/µL; range 3.6-10.5, respectively) with a normal differential count. Several diagnostic tests were performed during the four hospitalizations, showing positive HSV-2 IgG antibodies and negative hepatitis C virus and human immunodeficiency virus antibodies. Imaging findings were normal through all the episodes, in the absence of intracranial hypertension, hypodensity areas and ischemic or hemorrhagic areas. During the first episode, in 2004, ceftriaxone and corticosteroids were administered, and the patient was discharged after 9 days without any symptoms. Antiviral therapy was administered (acyclovir i.v. 750 mg every 8 h) during the other episodes. In 2013 and 2016, the positive HSV-2 PCR result on CSF allowed for an interruption of the antibiotic therapy, which is empirically administered by protocol when signs of meningitis are present. No antibiotic therapy was administered in the last episode. The CSF analysis during the entire study period is outlined in Table 1, including pleocytosis, elevated protein levels and normal or decreased glucose levels consistent with aseptic meningitis.

Discussion

In 1962, Bruyn et al. published the criteria for the diagnosis of MM as recurring episodes presenting with severe headache, meningismus and fever in the absence of a detectable etiological agent [11]. After the introduction of molecular methods, many studies reported HSV-2 as a frequent etiological agent of MM and analyzed its characteristic clinical features in depth. Thereafter, Bruyn's criteria were revised and modified. In 2020, Gadhiya et al. [12] defined its main characteristic features, in particular (1) recurrent episodes of aseptic meningitis; (2) absence of symptoms between episodes; (3) spontaneous remission of symptoms; (4) transient neurological symptoms in 50% of patients; (5) absence of neurological sequelae; (6) HSV-2 as the main etiological agent; and (7) genital herpes in 50% of cases. The case herein described met the diagnostic criteria specified above. In particular, the patient experienced three episodes of HSV-2 meningitis and one episode of presumed, but not microbiologically confirmed, viral meningitis in a period of 20 years. The CSF profile was typical for aseptic meningitis, with mildly elevated protein levels, low glucose and a leukocyte count >5 cells/mm3 with a lymphocyte prevalence. To our knowledge, this is the first time that the CSF viral load in the different MM episodes has been reported. The HSV-2 load was very low, ranging from below the lower limit of quantification of the RT-PCR assay (500 copies/mL) to 1234 copies/mL. The occurrence of HSV-2 MM was described by several authors, and in Table 2, 16 case reports of immunocompetent patients published in the last ten years are summarized [12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27]. In these patients, the CSF parameters presented a similar pattern characterized by pleocytosis and, in most cases, hypoglycorrhachia and hyperproteinorrachia as well. The number of episodes described (Table 2) ranged from 2 to 12. In this regard, Petersen et al.
reported that the risk of recurrence is higher for patients who experienced more than three previous episodes. A higher prevalence in female (70%) than in male patients and a mean age of 40 years was also reported [10]. An association between HSV-1 and MM has been described in rare cases; the first report was by Steel et al. [2], who identified HSV-1 in the CSF of a patient with four episodes of aseptic meningitis. However, viral meningitis in immunocompetent adults is usually caused by HSV-2, while HSV-1 is more commonly associated with encephalitis [28]. The basis for the susceptibility of some individuals to develop MM is still unknown. Franzen-Röhl et al. observed that patients with recurrent HSV-2 meningitis have increased in vitro HSV-specific adaptive and innate immune responses compared to healthy HSV-2 seropositive blood donors, raising the possibility of immune-mediated pathology in the development of MM [29]. The role of genetic factors in developing MM was also investigated. Genetic mutations in the TLR3 and UNC-93B genes have been observed in some patients. These genes are involved in regulating inflammation, and their mutation may cause an overactive immune response and increased inflammation in the brain, causing recurrent meningitis episodes [13]. Recently, it was suggested that single-gene inborn errors of immunity could contribute to the inability to maintain HSV-2 latency in sensory ganglia, thereby predisposing certain people to recurrent meningitis. In particular, the gene variants involved were related to the autophagy pathway [21]. In addition, autoimmune disorders like systemic lupus erythematosus have been linked to MM [30]. In our patient, cutaneous vesicles in the left gluteal area occurred 10 days before the onset of meningitis in two episodes. This is in agreement with the observation that a history of genital herpes occurs in less than 50% of patients [12]. Overall, latency in dorsal root ganglia is presumed to be the source of recurrences leading to meningitis [16]. However, HSV-2 is currently being seen less often in genital herpes in favor of HSV-1, due to recent changes in herpes infection epidemiology [31]. Recently, a nationwide cohort study compared the clinical features of 47 adults hospitalized for HSV-2 MM with 118 patients with single-episode HSV-2 meningitis. The clinical findings of headache, neck stiffness and photophobia were similar, but functional outcomes, evaluated by the Glasgow Outcome Scale, were more favorable in HSV-2 MM patients [12]. The same triad of symptoms was reported in our patient. Taking into consideration neurological sequelae, our patient had a favorable outcome, with symptom-free intervals between the episodes and without permanent neurological deficit, according to Bruyn's criteria. Although a benign course and spontaneous recovery are described in MM in the absence of antiviral therapy, our patient was treated with acyclovir, as commonly administered in aseptic meningitis. Prophylaxis with oral antiviral therapy (0.5 g of valacyclovir twice daily) was suggested for patients with recurrent HSV-2 meningitis, but no effect in terms of meningitis recurrence was observed [32].

Conclusions

This case underscores the importance of comprehensive diagnostic testing and a vigilant clinical approach to recognize this benign syndrome associated with a good prognosis. Further research is needed to evaluate the appropriate treatment or long-term prophylaxis for MM and the predisposing environmental or genetic factors in the development of the disease.

Table 1.
CSF findings during the patient's hospital admissions.

Table 2. Case reports of Mollaret's meningitis in immunocompetent patients (publication period: 2014-2023). Patients with hematologic malignancies, immunodeficiencies, autoinflammatory genetic disorders and spinal tumors were not included. Meningitis caused by viruses other than HSV-2 was not included. Only reports in English were included from the literature. The age refers to the last episode/hospitalization of the patient.
Modelling of Language Syntax and Semantics: The Case of the Assembler Compiler

Application of software language technologies, whether analytical, transformational, or generational, in an industrial context is usually a taxing endeavour, with high demands on the qualification levels of the developers involved in it. Yet, if applied successfully, in the right places and with the right amount of effort, they promise high returns in terms of optimisation, effectiveness, validity and verifiability. In this paper, we report on our experience of writing a compiler for a complex second generation legacy programming language originally intended to be used on a mainframe. The business case for this product deals with companies migrating their software systems off the mainframe to cloud native platforms or PC. Leveraging the documentation, available domain knowledge, several sample projects and a test suite, as well as several proprietary DSLs, we successfully modelled the syntax and semantics of hundreds of instructions of that language, to the point of producing a compiler with a very limited group of compiler developers in limited time. The compiler is currently deployed at some of our customers and has received a top technology award from Microsoft. This report is meant to serve as a sample snapshot of how compilers can be built in the industry with software language engineering techniques. Traditional problems of compiler construction such as parsing or code optimisation either did not present a noticeable challenge or did not manifest themselves altogether in the course of this project, but MDE matters such as model transformation, modular design, and the use of DSLs and meta-tools were a constant concern. The focus of the report is on a truthful representation of the domain as well as the details of the project, on reflection on the choices that were taken or could have been taken in the meantime, and on lessons learnt during the project.

Introduction

The Raincode ASM370 compiler [Rai16] is a product that can be used to run programs written in the IBM mainframe assembler on PCs or servers with .NET Framework or .NET Core. It is a proper compiler, since its input is a program text written in assembler, and it treats this assembler program as a normal compiler would have treated a program in any other high level language: parses it, constructs an intermediate model of it, annotates it, transforms the model and finally produces an executable file. (This goes against the industrial state of the art in modernisation of second generation languages, which usually entails mapping assembler instructions to statements in some other low level language such as C [Mic, War13, WZH04, B+, Sou, War01, JSW99, War99] or semi-automated extraction of some sort of abstract models that could potentially guide system redevelopment [LB96, War00]). The technologies used to implement it are proprietary metaprogramming DSLs [Bla95, Bla01], as well as native .NET languages like C# and framework-specific software languages like the LINQ language extension [MBB06], the Roslyn framework [NPC+19], C# libraries, the WPF event handling API [Mic06], XAML user interface definitions [Mic08], etc. The development of the compiler involved overcoming many challenges like reimplementing proprietary macros, solving performance issues and dealing with self-modifying code [Zay17b], but ultimately led to a product that satisfies customers [Rai16] and wins awards [Pre16].
In this paper we focus on the modelling aspect and follow how the syntax and semantics of ASM370 were formulated, extracted, transformed and finally evolved to enhance the already working compiler to target yet another platform. The paper is structured as follows: subsection 1.1 goes deeper into the background of the issue, explains what ASM370 is, how it is used, why we have embarked on the journey of building a compiler for it in the first place, and what were the main abstract challenges of this endeavour; section 2 motivates the metamodel(s) for modelling the syntax and semantics of the elements of the instruction set of the chosen language and explains how different elements of the models manifested themselves and were used at various stages of development and execution; section 3 explains the first model transformation step, where the syntax models conforming to the desired metamodel were extracted from faulty and only semi-structured documentation, debugged and corrected; section 4 switches to the semantic models and provides details on how their metamodel was designed and what is its link to the artefacts that eventually needed to be generated from the models; section 5 revisits a number of related research endeavours that either were used in this project or go in parallel to it but could be profitably combined into it. Finally, section 6 summarises the entire project in a somewhat verbose way in an attempt to provide valuable lessons to be learnt for future industrial projects of a similar nature, or some tension points and open problems for future academic projects to solve and address.

Background and Problem Statement

High Level Assembler (HLASM from this point on) is a second generation language available on IBM mainframes (Figure 1). The other classical "generations" are the first (raw machine code), the fourth (DSLs) and the third (a catch-all category for all other general purpose software languages for programming, modelling, data definition, action description, screen specification, batch processing, and so on) [BJZ16, Zay17c]. Notwithstanding widespread third generation alternatives like COBOL or PL/I, HLASM is being used for a number of reasons, some being typical arguments for a lower level language (fine-grained or bespoke memory management, error handling, tailoring and optimisations, bit-level interoperability), others being legacy consequences of the software development process (e.g., avoiding the costs of a 3GL/4GL compiler) [BJZ16]. The main reason for us to develop an HLASM compiler was to provide the option for our customers to migrate their existing HLASM codebase in the scope of a global migration off the mainframe onto PC, Azure, Cloud-Native, etc. In such cases this option would allow them to ignore the presence of HLASM assets in their codebase until better times, and use a modern IDE later to help with reverse engineering and rewriting them with modern technologies. These HLASM assets are not meant to be kept operational forever: usually we see barely several thousand lines of HLASM within a typical portfolio of 20-200 million lines of code in higher level languages (there are known cases of up to 343 MLOC at Bank of New York Mellon [Mit12], but the largest portfolio to have been migrated by Raincode Labs is 250 MLOC [Rai19]). HLASM code typically covers basic features like date and format conversions, with a great fan-in (many other components relying on them extensively, up to millions of calls per program).
Such a setup means that replacement of HLASM components is feasible but extremely dangerous without proper tool support for testing and refactoring, and is undesirable to undertake during a massive migration where resources are inevitably running thin anyway. In rare cases with enough inner administrative support it is possible to take the quality-first approach and work on improving the quality of the existing production codebase prior to migration [WPP+19], but it is more common to postpone such activities until after the migration is complete [Fea04].

Main Challenges and Approaches

In the grand scheme of things, the two main engineering complications in implementing HLASM as a compiler are its massive instruction set and its equally massive and wickedly flexible macro system. Good sources of information about both of them can be found in IBM's official documentation: Principles of Operation [IBM17] for the instruction set (and a general overview of the z/Architecture), and General Information [IBM13] for the macro language (and other HLASM extensions). For the purpose of limiting the scope of this report, we focus here on the former. The most noticeable challenges faced by the project were the following:

• Dealing with legacy ecosystems: e.g., language design choices were motivated by punchcard-era technology and are hard to link to anything explainable nowadays.
• Customer inflexibility: given the extremely fragile nature of assembler code and the significance of this code to its owner, there were no compromises to be made in technical choices within the system as well as within the migration process.
• Efficiently executing low-level code written with the use of peculiar idioms, up to and including self-modification at runtime.
• The massive scale of the instruction set: while typical high-level programming languages have 20-50 major constructs, HLASM was treated as a high-level language but contained almost a thousand instructions, each of which had to be examined and implemented individually.
• Non-orthogonality of the instruction set: conceptually similar instructions tend to have subtly and counter-intuitively varying semantics.
• Artefact unavailability: we had no direct access to the original compiler for which our product was meant to be a replacement, and for legal reasons were not allowed to consult its source code, its other components (such as macro definitions) or existing alternative reimplementations.
• Lack of automation for initial steps: instead of tangible verifiable executable models as a starting point, we had to rely on documentation which was manually written, incomplete, contained errors and explicitly (legally) prohibited automatic derivation of commercial artefacts from it, even when it was possible.

In order to face them, we:

• Combined compilation (code generation) and interpretation (emulation) in one language processor [Zay17b].
• Inferred models of syntax and semantics of HLASM [IBM13, IBM17] from available sources and generated compiler components from them [BJZ16, GZ19].
• Designed several tool-supported domain-specific languages [MHS05] to express both commonalities and peculiarities in syntax and semantics.

Our first attempt at modelling all the instructions was to classify each manually, according to its syntax and semantics, into one or more general classes like "addition" or "floating point" and to infer the final picture by composing known fragments.
Unfortunately, this endeavour relatively quickly came to a halt due to our overestimation of the orthogonality of the language and underestimation of the scale. We have seen cases where each combination of conceptually different classes had to be uniquely implemented, effectively nullifying the non-explosion contribution of model composition. We have seen cases where there would be an addition instruction of a particular subkind but not a corresponding subtraction instruction, and the opcode that would have been logical for such a subtraction instruction to have was devoted to something entirely unrelated. We have seen cases where, within the same group, different instructions were assigning a "condition code" (the result code of an operation) with different strategies or not assigning it at all. This all prevented the straightforward "top-down" remodelling of the instruction set and forced us to look more individually at the most relevant instructions, slowly broadening our scope whenever possible and in general working our way upwards from the classification and properties already listed in the documentation or apparently required for the artefacts that were to be generated. Despite the fact that we mostly treat HLASM as a high level language, it is an assembler with all the consequences attached to it: in particular, it allows treating code as data and data as code. Hence, a program can read its own parts as data (so we need to imitate the memory model at runtime) and can alter its own code by simply writing over it (so we have to retain the knowledge of how to parse bytes into instructions at runtime as well). It is important to know this at this point of the story because it means that besides the traditional source-to-code compiler we have to generate an emulator capable of parsing and executing any instruction given its address in memory. The actual compiler will then resort to calling the emulator when the analysis of the compiled code indicates this necessity (i.e., when the code is modifying itself). As an example, consider the INSERT CHARACTER UNDER MASK instruction, whose structure is depicted in Figure 2. It has three variants, distinguished by programmers with the use of the right mnemonic: "ICM" is used for the basic version, "ICMY" for the version that uses a longer 20-bit displacement, and "ICMH" for the ASM370 version that "inserts" its "characters" into the higher 32 bits of the 64-bit register R1 (otherwise all 32-bit operations are assumed to operate on the lower 32 bits of 64-bit registers). The formula "R1,M3,D2(B2)" after the mnemonic also refers to the encoding used by the programmers: it means that if a programmer writes "ICM 1,2,3(4)", it means the first register, the fourth base, a displacement of 3 and a mask equal to 2. Note that the order of operands is different for the programmer, for the emulator and for the description (we have no explanation for this difference, but learnt to accept legacy systems as they are and not as they should be; there was probably a very good reason for it at design time). So when the text of the description of the instruction's behaviour mentions its "second argument", it means D2(B2), which is the third of the arguments that the programmer writes down. Also, the programmer writes it down as a displacement with the base following it in parentheses, while the memory layout puts the base before the displacement (which in turn can have its higher bits positioned even lower for the ICMY and ICMH variants).
Handling three numbering schemes is too error-prone to be done manually, so we extract the knowledge of them, save it in the models and generate artefacts from it automatically, as a way to avoid any mistakes when handling all 952 instructions. To the right of the programmer's notation, we see "[RS-b]" or "[RSY-b]", which is the so-called "format" of the instruction. All instructions of the format RS-b are structured similarly: they start with eight bits of the fixed opcode (different per instruction variant), have the next four bits for the register, then the mask, then the base and then the displacement. Dealing with 60 reusable formats is easier than dealing with 952 individual formats, even though the documentation contains errors and slight inconsistencies about them (as will be elaborated in section 3). The bit positioning scheme shows, for each format, which bits correspond to which argument. These arguments are still too low level for modelling instruction behaviours: for example, the base and the displacement are never used independently; they are two parts of one conceptual entity representing an address in the memory. Hence, if we just implement the correct mapping between the format's bitwise parts and the conceptual runtime entities, we will be able to let ICM and ICMY have exactly the same semantics. (The only difference is how the displacement is calculated, which is irrelevant when we already think on the level of addresses, registers and masks.)
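To give the flavour of what the generated mapping between bit-level formats and conceptual runtime entities looks like, here is a minimal C# sketch of decoding an RS-b instruction such as ICM; the type and member names are our own invention for illustration and do not reflect the product's actual generated code.

// Illustrative sketch: decoding an RS-b instruction (e.g., ICM, opcode 0xBF)
// from four bytes of emulated memory. All names here are hypothetical.
struct RsB
{
    public byte Opcode;  // bits 0-7
    public byte R1;      // bits 8-11
    public byte M3;      // bits 12-15
    public byte B2;      // bits 16-19
    public ushort D2;    // bits 20-31

    public static RsB Decode(byte[] mem, int pc) => new RsB
    {
        Opcode = mem[pc],
        R1 = (byte)(mem[pc + 1] >> 4),
        M3 = (byte)(mem[pc + 1] & 0x0F),
        B2 = (byte)(mem[pc + 2] >> 4),
        D2 = (ushort)(((mem[pc + 2] & 0x0F) << 8) | mem[pc + 3])
    };

    // The base and the displacement are immediately combined into the
    // conceptual entity actually used by the semantics: an address. By
    // convention, base register 0 contributes zero, not its contents.
    // (Addressing-mode subtleties are elided in this sketch.)
    public long EffectiveAddress(ulong[] regs) =>
        (long)((B2 == 0 ? 0UL : regs[B2] & 0xFFFFFFFFUL) + D2);
}

An ICMY decoder would differ only in how it assembles its 20-bit displacement (whose high bits are stored lower in memory, as noted above); the semantics built on top of the resulting addresses, registers and masks can then be shared.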
Figure 3 can help us see the information flows through the solution. The Principles of Operation [IBM17], broadly speaking, contains four sources of information of interest: chapter 5 contains its definition of formats, from which we extract intermediate format models to be tested, validated, completed and finally used to build a model of each instruction's syntax; chapters 7-20 contain many natural text sections, 381 in the fourth edition we mostly used in our project, which had to be consulted regularly to resolve ambiguities as well as to build models of each instruction's behaviour; appendix B contains several tables (called "lists") with basic information such as the instruction's name, mnemonic, opcode and format, all ending up in each instruction's syntax model; and appendix C contains another table concerning different strategies to calculate condition codes (the two-bit result codes), which will be explained in more detail in section 3. The condition code (CC) models, together with the models of the emulator's desired behaviour, lead to generating the emulator code, which forms a part of the runtime that both the compiler and the compiled program will use. Since the emulator needs to parse the instruction from memory bytes, its code also incorporates the knowledge about syntax. The model of the instruction set (in XML), together with the definitions of all the used macros (in HLASM), is provided to the compiler together with the source program. This is done to increase configurability at the customer's site, where they might want to turn off support for certain instructions without recompiling the compiler. Not shown on the diagram is the documentation, derived from all the models put together and rendered in a human-readable form to simplify debugging and to provide visual aid to our compiler developers. The final model of the instruction set contains, for each instruction, the following components. Name, such as "insert character under mask" or "branch and link", as a short description of the instruction, intended for human comprehension. Besides naming a group of conceptually (but not always implementationally!) related instructions, it is essentially useless for the HLASM compiler, except for the runtime if logging is turned on (this was used to test previously unseen customer code at customer premises outside our usual development environment). Mnemonic, such as "ICM" or "BAL", is intended for the HLASM programmers and used in the parser that reads an HLASM program in order to recognise instructions, macros and commands it uses. Please note that the compiler frontend uses a parser model and does not encode the syntactic structure explicitly in the parsing algorithm as some other compilers do: as indicated earlier, we need to retain the possibility of tweaking the instruction set at the customer's side without recompiling the compiler. Next is the operation code (or "opcode" for short), such as 0xBF for ICM or 0xEB81 for ICMY. The opcode is a bytecode-level representation of the instruction that identifies it for the CPU (as well as for our emulator) and thus must also be used as a part of the code generation. This is known as "instruction decoding" [Kli19b]. "Inlining" is the name we use to refer to the "true" compilation, where an input instruction is compiled to one or more individual atomic bytecode instructions of the target platform. For instructions that allow inlining, their model in the instruction set includes the explicit inlining conditions and the code template to be generated. This is required to enable customisation at the user side to switch inlining on or off per instruction for each particular customer. The microcode DSL will be explained in section 4.
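Putting these components together, the shape of one entry in the instruction set model can be illustrated with the following hypothetical C# record; the real model is stored as XML, and the property names below are our assumptions for the sake of the example, not the actual schema.

// Hypothetical shape of one entry in the instruction set model.
record InstructionModel(
    string Name,               // "INSERT CHARACTER UNDER MASK": logging/documentation only
    string Mnemonic,           // "ICM": used by the parser
    int Opcode,                // 0xBF: used for decoding and code generation
    string Format,             // "RS-b": links to the shared syntactic model
    string CcStrategy,         // one of the 88 condition code strategies
    string Microcode,          // the behaviour, as a tree of semantic steps
    string InliningCondition,  // e.g., "M3 == 0b1111"; null if never inlined
    string InlineTemplate);    // the code template emitted when inlining

static class Example
{
    // A simplified, hypothetical instance:
    public static readonly InstructionModel Icm = new(
        Name: "INSERT CHARACTER UNDER MASK",
        Mnemonic: "ICM",
        Opcode: 0xBF,
        Format: "RS-b",
        CcStrategy: "insert-under-mask",
        Microcode: "...",           // elided
        InliningCondition: "M3 == 0b1111",
        InlineTemplate: "...");     // elided
}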
Model Extraction

Now that we know what kind of information we need in our models and where we can get it from, let us focus on the information extraction process itself. We extracted the initial models directly from the HLASM manual PDF [IBM17] with an ad hoc developed technique similar to grammar extraction [LZ11, Zay12]: tolerant error-correcting heuristic-based semiparsing of unstructured data with its subsequent curation [Zay14]. As shown in Figure 3, of particular interest for that process are several multi-page tables in appendix B of the Principles of Operation [IBM17] that contain instruction summaries, coupling the name (intended for human comprehension, such as "branch and link" or "compute message authenticating code") with the mnemonic (intended for programmers and the parser, like BAL or KMAC), the opcode (hexadecimal identification of an instruction, spanning one or more bytes) and "characteristics" (97 flags describing frequently occurring behaviour like raising particular kinds of exceptions or having a particular bitwise format). Appendix C contains another useful 6-page table that defines per instruction the strategy used to set the condition code depending on the result of the local computation (an example strategy could be "0 if zero, 1 if negative, 2 if positive, 3 if overflow"). There are 88 such different strategies in total. Each of the tables contained a small number of mistakes and inconsistencies, so we cannot honestly claim to have derived the resulting models from the documentation, but rather to have reconstructed them by heavily relying on several official information sources. We found and (manually) fixed some errors and inconsistencies in the documentation, most commonly related to the formats of instructions. The problems mostly fell into one of five categories:

• lacking formats that use only the first 8 bits for the opcode instead of the 16 announced (three cases of the S format);
• referencing non-existing formats (e.g., RRF instead of RRF-c; or RSL on multiple occasions instead of RSL-a or RSL-b) which were used in earlier versions of the same document and have not been properly replaced [IBM04];
• slightly misstating a different format within the same group, such as RRF-e instead of RRF-c (19 cases), RRF-b instead of RRF-a (4 cases), RRF-e instead of RRF-b (2 cases) or RRF-a instead of RRF-b (1 case), possibly related to plans of finding some uses for extra arguments but ending up not using them in the final version;
• lacking formats with varying uses of a register field (RR and RRE have variants with just one register instead of two for SPM and IPM; RR has a variant with a mask instead of a register for BCR);
• lacking formats for having an immediate value occupy the space normally allocated to an address field (10 cases of RSY-a) or to several fields (8 cases of RS-a).

Of these five, the first two are the most severe and cannot be tolerably ignored: the first one will yield undefined behaviour depending on the implementation of the final model-to-code transformation; the second one will even leave the models non-well-formed. The third and the fourth categories would lead to performance problems: in the third case, data will be fetched without need, and in the fourth case, data will be fetched in a wrong format (and will need to be refined explicitly as an additional semantic step, adding to the modeller's manual effort). The last case is the most dangerous of all, since it can lead not only to unnecessary data fetches, but also to fetches from incorrect addresses, occasionally leading to access violations at runtime in a fashion that is very difficult to reproduce. One of the useful model properties that was not coded explicitly in any of the tables in the original documentation, but was possible to extract manually after reading the textual descriptions of all instructions, was the bitness of their arguments: i.e., remembering for each of the arguments whether it is 8 bits long, 16 bits, 32 bits or 64 bits; whether it is signed or unsigned; whether the argument is in a binary coded decimal form; and whether the argument designates a pair of registers instead of just one (e.g., in MR R4,R8 the first argument is a pair of registers R4 and R5 which are virtually concatenated and used as one 64-bit number). All these examples serve to demonstrate problems that are typical in dealing with documentation of large legacy languages: information is partly missing, partly incomplete, partly subtly wrong. In retrospect, HLASM documentation was on the higher side of the quality spectrum: there were no legal reasons to stop us from using it, it was reasonably complete and quite thorough. All the bugs we found in it were just manifestations of its manual nature and of the differences between a manually written 2000-page text and an explicit executable verifiable model.
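To make the bitness annotations tangible, consider how the register-pair case from the MR example above could translate into emulator code. This is a minimal sketch, assuming a plain ulong[16] register file where 32-bit values occupy the lower halves of the 64-bit registers; the names are ours, not the product's.

static class RegisterPairs
{
    // Virtually concatenate an even-odd pair into one 64-bit value.
    static long ReadPair(ulong[] regs, int even) =>
        (long)(((regs[even] & 0xFFFFFFFFUL) << 32) | (regs[even + 1] & 0xFFFFFFFFUL));

    static void WritePair(ulong[] regs, int even, long value)
    {
        // 32-bit operations leave the upper halves of the registers untouched.
        regs[even]     = (regs[even]     & 0xFFFFFFFF_00000000UL) | ((ulong)value >> 32);
        regs[even + 1] = (regs[even + 1] & 0xFFFFFFFF_00000000UL) | ((ulong)value & 0xFFFFFFFFUL);
    }

    // MR multiplies the odd register of the pair by the second operand and
    // stores the 64-bit product across the even-odd pair.
    static void MR(ulong[] regs, int r1, int r2)
    {
        long product = (long)(int)(uint)regs[r1 + 1] * (int)(uint)regs[r2];
        WritePair(regs, r1, product);
    }
}

MR is a good instance of why these flags had to live in the model: the fact that its first operand is a pair is invisible in the bit-level format and only recoverable from the prose descriptions.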
Executing a HLASM instruction through the emulator is the safest way, since it works in all circumstances, but it is also the slowest. Thus, only if the compiler determines that more optimal ways of executing an instruction are unsafe, or that there is no alternative (as for EXECUTE, explained below), does it compile the instruction to a call to the emulator. This emulator has to perform six basic tasks:

• determining which instruction is the next one to be executed;
• parsing its arguments according to the syntactic model;
• fetching the required input data;
• actually executing the steps of core behaviour;
• possibly modifying the condition code;
• determining the program counter for the next instruction.

HLASM, since no low-level language would be complete without an eval-like construct [RHBV11], also has an instruction called EXECUTE whose actual execution means emulating another instruction from an arbitrary memory location, so one part of the emulator must be able to connect to the others as well. Determining which instruction to execute next ("instruction decoding" [Kli19b]) is, in a broad sense, just parsing [ZB14]. Having the knowledge about the general syntactic structure of each instruction (its length in bytes, the positions of instruction-defining opcode bytes and their values) makes this a trivial generative task, with some handling of corner cases: e.g., the EXECUTE instruction mentioned above does not return to the original call location if the executed instruction is a branching one.

Within the individual instruction execution part, there is also a syntactic part that fetches the required number of bytes from memory, reconstructs actual values out of them (for instance, a 12-bit value would be constructed by masking and bitshifting one byte and disjuncting it with the other), and prepares them for use (for example, a memory address is composed out of base, index and displacement, which are never used individually). The next part is truly individual: for example, an arithmetic instruction actually adds, subtracts, bit-manipulates or otherwise transforms its input values and stores the result at the expected location.

While the first parts (instruction identification and argument preparation) were fairly straightforward to infer from our models of syntax, the individual semantic part demanded more work. In order to model the behaviour of each instruction explicitly, we defined "microcode", a DSL for modelling typical atomic semantic steps such as:

• fetch a value from a register or from an address in memory.

The idea of semantic steps, and of modelling the behaviour of each of the instructions as a sequence (in fact, a tree) of such steps, works to our satisfaction, with two adjustments. The first of them was dealing with condition codes: in HLASM there are many instructions that change a special "magic" two-bit flag called "the condition code" as a result of their execution. For example, any addition instruction on machine words (AR, AGR, AGFR, A, AY, AG, AGF) or halfwords (AH, AHY, AHI, AGHI) assigns a condition code 0 when the result of the addition is zero and no overflow occurs, a condition code 1 when the result is negative without overflowing, a condition code 2 when the result is positive without overflowing, and finally a condition code 3 in the case of an overflow outside the expected bit length.
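For concreteness, a naive eager implementation of this strategy for a 32-bit addition could look as follows (a sketch in C#, with a helper name of our own choosing); it already follows the "compute on a wider type, then check the fit" approach described next:

```csharp
// Eager condition-code computation for a 32-bit add (our sketch).
public static byte AddConditionCode(int a, int b, out int result)
{
    long wide = (long)a + b;                       // .NET raises no flag on int overflow
    bool overflow = wide != unchecked((int)wide);  // would the result fit in 32 bits?
    result = unchecked((int)wide);
    if (overflow) return 3;                        // CC 3: overflow
    if (wide == 0) return 0;                       // CC 0: zero, no overflow
    return wide < 0 ? (byte)1 : (byte)2;           // CC 1: negative, CC 2: positive
}
```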
However, actually assigning such condition codes is a costly operation, mostly due to the fact that the .NET Framework does not detect arithmetic overflows natively, and no perfectly reliable algorithm is known for making such predictions, so one needs to perform the operation (addition in this example) on larger data types and then check whether the result would have fit in the smaller type. This by itself would not have been such a problem and would have been seen as a necessary evil, but in our observation these condition codes are not always checked right after they are assigned, and it is not uncommon to have several consecutive instructions overwriting condition codes without reading them. Hence, it made sense (and had a measurable impact on performance) to explicitly model the condition code computation instead, and delay the actual computation until (if ever) it is actually required. Once the condition code has been accessed, it is computed and cached for possible repetitive uses later. Evolving our system to switch to this lazy condition code evaluation was done by tweaking the model transformation only, without any change to the models themselves.

Having witnessed the impact of this optimisation, we engaged in a separate performance analysis project [Mje17] to investigate other possible bottlenecks in the system and ways to overcome them. The HLASM compiler has never been performance-focused, but it could not afford to be entirely performance-ignorant, for both technical and marketing reasons. The findings included a list of hotspots: instructions that were both used often in the sample code of our customer and took significant time to execute. For some of them we have proposed alternative implementations, coded directly in IL (the bytecode-level language of the .NET Framework and .NET Core).

One of the biggest performance gains was the ICM instruction we have seen above. Normally, it takes a register, a mask and a memory address, and fetches consecutive bytes from the memory location into the register according to the mask. For example, if the mask is 1001 binary, then two bytes are fetched: one ends up as the highest byte of the target register, the other one as the lowest byte, and the two bytes in between remain unchanged. As it turns out, it is a common idiom among HLASM programmers to use ICM with a mask of 1111, which just straightforwardly fetches four consecutive bytes from a memory location into the four consecutive bytes of a target register. The reason to use ICM instead of the L ("load") instruction, which performs the same procedure, is that L leaves the condition code unchanged while ICM assigns a value to it, allowing the programmer to fetch four bytes with one instruction and, for instance, check whether they yielded zero with the next instruction. The general implementation of ICM easily fills up the entire screen (see Figure 4: all comments except the one on the first line are added manually for readability; the rest of the program is fully generated), but its full-mask-simplified form is far simpler and consists of two IL-level instructions for the actual semantics and barely a dozen more for computing the condition code. This is how we arrived at the idea of conditional inlining: if the code is safe (not self-modifying) and if the inlining conditions are met (in this case, if the mask is 1111 binary), then the instruction is compiled to its shorter optimised form; otherwise the emulator is called with the right arguments.
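The shape of that decision and of the optimised form can be sketched as follows. This is our C# illustration, not the generated IL; the register file, memory and lazy condition-code cell are simplified stand-ins, and the mask test, shown here at runtime, is made at compile time in the real compiler:

```csharp
using System;

// Sketch (ours) of conditional inlining for ICM with its full-mask fast path.
public static void CompiledIcm(uint[] regs, int r, int mask, byte[] mem, int addr,
                               ref Lazy<byte> cc, Action emulatorFallback)
{
    if (mask != 0b1111) { emulatorFallback(); return; } // inlining condition not met

    // Fast path: load four consecutive bytes big-endian into the register.
    uint v = unchecked((uint)(mem[addr] << 24 | mem[addr + 1] << 16
                            | mem[addr + 2] << 8 | mem[addr + 3]));
    regs[r] = v;

    // Lazy condition code, computed only if a later instruction reads it:
    // 0 if all inserted bits are zero, 1 if the leftmost bit is one, 2 otherwise.
    cc = new Lazy<byte>(() => v == 0 ? (byte)0
                            : (v & 0x80000000u) != 0 ? (byte)1 : (byte)2);
}
```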
This technique has previously been clunkily named "compilepretation" [Zay17b], since it combines aspects of traditional compilation (model-to-code) and interpretation (operational semantics). Introducing concurrency into the mix bears another clunky name, "interpretisation" [Kli19a]. To verify the equivalence of the general operational semantics of the emulator and the partial optimised inlining semantics, we used an old program transformation technique called supercompilation (supervision + compilation), based on the works of Lombardi [Lom67], Futamura [Fut71], Ershov [Ers77] and Turchin [Tur80]. It was designed specifically to transform executable models by observing/supervising their behaviour and compiling them to self-sufficient models that achieve the same effect while being smaller, thanks to the utilisation of additional (meta)data.

For the example from Figure 4, if the mask is known to be 1111 binary, then all the ifs checking for individual bits on lines 12, 14, 16 and 18 succeed, so we reach the same effect by immediately executing the positive branches of each conditional statement. Furthermore, the counter t1 reliably increases by 1, so it does not need to be kept, since all the instances of reading it become constants, and then it does not even have to be defined on line 11. Then, since all bytes of v1 will be overwritten, there is no need to mask-carve the right ones with bit conjunctions and bit disjunctions. In fact, there is no need to read its original value into v1 on line 5 at all; it will be overwritten in any case. In a similar series of near-trivial simplification steps we can prove that constructing t2 in lines 25-27 and checking it for being equal to 0 is the same as checking v1 directly for being equal to 0. Theoretically we could have implemented such supercompilation steps to infer the inlining code based on the model of the emulator semantics and the known condition, but for technical reasons we decided to write the inlining code manually and then use supercompilation to verify its correctness.

The microcode language had to be evolved in a significant way twice. The first change was to accommodate the inlining abstractions, due to the differences between the domains. For instance, assignment statements in the emulator assume some high level language that they will have to generate at some point down the toolchain (such as C#) and thus rely on things like variables and even their automatic type inference, while the inlining semantic steps are meant to produce very low level bytecode constructs. Thus, they have to integrate well with their context of gaining input values and producing output values. For example, AssignProgramCounter is a microcode command that makes sense in both contexts (emulating and inlining), so it requires no special attention. However, DeclareValue, when used in the emulator, allows an optional initialiser that can be used to assign the value right after declaring it (since it is extremely easy to do on the level of C#, and quite useful at times), but there are too many complications for inlining, so we disallow initialisers there. Similarly, the Argument expression is an approximate equivalent of the Variable expression and its variants (Value, Address, etc.), but requires an explicit type each time it is used, due to the low-levelness of IL.
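To illustrate the difference between the two contexts, here is one way such constraints could be encoded. Apart from the command and expression names quoted above, everything in this sketch, including the shape of the types, is invented for illustration:

```csharp
// Our sketch of context-dependent constraints in the microcode metamodel.
public abstract record MicroExpr;
public record Variable(string Name) : MicroExpr;              // emulator: type inferred by C#
public record Argument(int Index, string IlType) : MicroExpr; // inlining: explicit IL type

public abstract record MicroStep;
public record AssignProgramCounter(MicroExpr Target) : MicroStep; // valid in both contexts

public record DeclareValue(string Name, MicroExpr? Initialiser) : MicroStep
{
    // Initialisers are trivial to emit in C#, but complicated at IL level,
    // so the inlining context forbids them.
    public bool IsValidFor(bool inlining) => !inlining || Initialiser is null;
}
```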
The second change was related to the current activities around the HLASM compiler, where we aim to use LLVM as a backend instead of the .NET Framework; it covered a polishing pass over the microcode commands to remove the C# bias and simplify possible generation of C code, such that our customers can execute their HLASM programs on arbitrary Linux machines without relying on .NET Core. These adjustments were relatively minor and concerned details like the explicit and implicit type conversion rules in C and C# when dealing with signed and unsigned integers.

In section 2 and Figure 3 we mentioned that some models are kept until compile time or even runtime. To be more precise, in this case there is a model of inlining semantics used at compile time, but it conforms to a different metamodel, better suited to the architecture of our compiler. The transformation from models conforming to the microcode metamodel to models conforming to this compile-time-specific metamodel is only slightly beyond trivial in complexity, and does not challenge the state of the art in model transformation.

Related Work

Extracting fully structured curated data with heuristics from a semi-structured source, as we did in section 3, is related to many things: we have already mentioned grammar extraction based on textual cues [LZ11] and on known properties of anchor symbols [Zay12]. The bibliography of [Zay14] provided us with a comprehensive view on the topic of using all sorts of tolerant, permissive and error-correcting parsing. The research area of mining unstructured data has been active for at least two decades, and has produced quite a number of techniques [Fel99], mostly based on heuristics and/or data mining.

Optimising a compiler by specifically targeting code idioms [AS14] is not a new idea and has been successfully employed for almost three decades in FORTRAN compilers [HSVF08,PP91,PE95] and later even on the mainframe [KKM+06]. In a contemporaneous project we are trying to find ways to identify such idioms automatically with graph mining [PNM+19,FZM+19,PBM+19,NPF+19], since their manual construction for each language is rather labour-intensive.

One of the substantial recent contributions to research on execution semantics of software languages was made by Tikhonova [Tik19]. In her terms, our microcode (section 4) defines a semantic domain, and what we call models of instruction semantics together form a semantic mapping as specification templates (possibly with less formal rigour on our side). Conceptually, Tikhonova's work on Constelle rhymes with our experience and is well aligned with it; however, technically, even though Constelle was released before the start of our project, it is unlikely that we would have chosen to use it directly, for fear of relying on third party technology with unknown and unpredictable lifespan and maintainability status. On the other hand, the component-based executable semantics of funcons [vBMS19,M+19] served as a major inspiration for the design of the microcode in this project, toned down by the fact that we needed it for one very specific language in the scope of one project, while Mosses initially planned funcons to serve as a playground for creating all kinds of different DSLs. Besides that, the approach, obviously aimed at experimental forward engineering of small software languages, was successfully applicable to this project of reverse engineering the semantics of a relatively large legacy software language.
Similarly, our own model transformation framework was developed in-house and was never meant to cover the entire domain of model transformation. There are much better general purpose academic frameworks like MOMENT, which formalised a model transformation language in the term rewriting framework Maude [BCR06] and has also seen applications in transforming legacy software [BCR05].

One of the unmissable references in the field of assembler modelling is the work of Kennedy et al. [KBJD13], who managed to model the Intel x86 assembler with type classes and dependent types in Coq. They never reached complete coverage of the language, but for the covered subset they provided auto-proven theorems on correctness (relating in-memory code to a verifiable formula). There are similar projects, such as the one by Schmaltz and Shadrin modelling the joint semantics of C with macro assembler, also for the purpose of verification [SS12]; notably, the authors recognised later that the definition was leaky with respect to some stack manipulations and needed to cover the basic assembler as well [PSS12]. Even simpler methods of semantic modelling, which see programs as collections of execution paths with weak preconditions [WF03], are inherently incapable of modelling self-modifying code and similar well-used HLASM features omnipresent in industrial codebases. At the current point it does not seem possible for us to make the step from having constructed models of semantics for individual instructions to inferring a full system specification suitable for verification and proving useful properties.

On a more technical side, Klimiankou recently published an interesting story centred primarily on the parsing of instructions for IA-32 (which is the Intel assembler as opposed to our IBM assembler, but the two are very much alike) [Kli19b]. In our work, his "instruction decoding" corresponds to the emulator figuring out at runtime which instruction to execute next and how to turn its bits into meaningful entities corresponding to its arguments, in terms of which its core semantics is expressed. Klimiankou managed to build the fastest decoder for IA-32 commands [Kli19b] and was able to leverage it to migrate from switch-based dispatch (which we also use) to concurrent threaded code [Kli19a].

Conclusion and Lessons Learnt

In this document we have reported on a project that focused on extracting and refining models of the syntax and semantics of the instructions of High Level Assembler (HLASM) [IBM13,IBM17], with the final goal of building a compiler for that language [BJZ16,Rai16]. The project was seen as successful from our side, since it met the expectations of customers and was completed within a very limited time frame by a small team of people, even though the language consisted of hundreds of instructions and macros. git statistics show a total of 817 commits from May 2015 till May 2020 concerning the folders with the HLASM compiler, after filtering out non-human committers like the nightly build system: 423 were made by this paper's author, 244 by Ynès Jaradin, the lead architect of the project and a co-author of the original report [BJZ16], and 150 commits were made by 13 other senior software developers occasionally contributing to the project.

Knowledge extraction was done ad hoc, yet according to state-of-the-art methodology [LZ11,Zay12,Zay14,Bav16,HKLM16].
Mining and extracting semistructured data is an active field of research [Bav16,HKLM16], but there was no ready-to-use tool for this particular text-to-model transformation, and the cost of developing one was not that high, given the prior experience and expertise of our developers. Model fixing was a labour-intensive process due to its manual nature, but cross-checking different parts of the models with one another, as well as models extracted from different sources even within the same original document, was useful. Conceptually this was a straightforward application of abstract model repair [CBSK12]. Enriching the models with new information was, again, done with bespoke technology, and then redone to refactor away idiosyncrasies. The technology was not the bottleneck, but the metamodel was, in the sense that we needed to make sure the models contained all the information that can be properly expressed and that will be useful later at the code generation stage. Given the context of the project and the policies within our company, it seems unlikely that we would have used any available tools if they added technical dependencies of their own. However, it was crucial to employ as much automation as possible, to avoid introducing or propagating hard-to-catch bit-level errors. Existing model transformation frameworks were not used, and new ones were not developed; that seemed like overengineering, since we did not need any intricate expressiveness.

Models of the syntax of the instructions were more straightforward than the models of their semantics, and they were fairly structured already in the original documentation, so our part was limited to extracting them in a form suitable for automated processing, fixing inconsistencies and imperfections (possibly introduced by manual processing and typesetting instead of relying on generative techniques), augmenting the models with additional information that was not present in the source explicitly (even though it could have been, but it probably just never occurred to the documentation writers to summarise it), and generating the desired artefacts. In MDE terms, this was metamodelling in a low complexity domain.

To model semantics, we had to read through thousands of pages of descriptions given in natural language and encode them in a specially designed DSL (microcode, see section 4). The language design was challenging, as it always is [Zay17a]. After further analysis of the performance of the compiled code of the HLASM emulator [Mje17] and the structure of our customers' source code, we came to the conclusion that our existing models of instruction semantics were insufficient. We had to invest significantly in enhancing them to cover not only the behaviour of the emulator, but also rules for conditional inlining that can be used if the code is safe (i.e., is not modifying itself). Later, they were enhanced to be rid of C# idiosyncrasies, in order for us to be able to use them with an alternative backend such as LLVM. This step did not use any model-level profiling, so it needed manual lifting of the numbers crunched at the code level to the level of the inlining semantic models.

The documentation was round-tripped: our tools try to produce the documentation inferred from our models, in a form that is as close as possible to the original, for easier comparison and visual verification.
The main obvious change is that the original natural language description of the semantics of each instruction is replaced, in our case, with microcode or with semi-structured prose generated from it. Again, we followed the state of the art in what is desirable and advisable for generated executable language documentation, and obtained the expected results [ZL11]. As the framework to implement our model-to-text transformations we used T4, a Microsoft template language [MSD]. Several templates were developed: one for the documentation, one for several versions of the emulator, etc. The chosen technology turned out to be satisfactory, but did not contribute to the project in any overly significant way. The main reason for choosing it was its integration into the IDE that we were already using (Visual Studio .NET).

Some of the models had to be kept at hand beside the compiler (and be delivered as a part of the product) to make it possible to tailor the product to each specific customer, by supporting older versions of the HLASM language, different sets of macros, etc. (cf. Figure 3). The architecture of the compiler had to take this modularity into account on many levels. We are unaware of other industrial or academic compilers that go this far towards full configurability and allow the end user of the shipped compiler to dramatically alter the definition of the language being compiled. At the frontend side (websites and mobile apps), comparable approaches are called "low-code" [RRM+14].

Given the context of the problem, implementing a massive low-level language from scratch, without looking, for legal reasons, at the baseline IBM assembler or at its existing open-source partial replacements, in a team of very limited size within a rigid timeframe, this project was, subjectively for us, a very successful application of software language engineering, software modelling and model transformation. Determining the right level of abstraction, and identifying the right elements to put in the metamodel in order to automatically refine the models of both the syntax and the semantics of each of the instructions in the set and produce the final components of the compiler in a reliable and testable [GZ19] way, was a winning strategy that allowed us to avoid burnout, produce a viable product and deploy it to our customers' satisfaction.
Long-Term Results of Adjunct Autologous Platelet-Rich Plasma in Lamellar Macular Hole Surgery Showing Lasting Restoration of Foveal Anatomy

The aim of this study was to evaluate the long-term results of highly concentrated autologous platelet-rich plasma (PRP) used as an adjunct in lamellar macular hole (LMH) surgery. Nineteen eyes of nineteen patients with progressive LMH were enrolled in this interventional case series, on which 23-/25-gauge pars plana vitrectomy was performed and 0.1 mL of highly concentrated autologous platelet-rich plasma was applied under air tamponade. Posterior vitreous detachment was induced, and peeling of tractive epiretinal membranes, whenever present, was performed. In cases of phakic lens status, combined surgery was carried out. Postoperatively, all patients were instructed to remain in a supine position for the first two postoperative hours. Best-corrected visual acuity (BCVA) testing, microperimetry, and spectral domain optical coherence tomography (SD-OCT) were carried out preoperatively and at minimum 6 months (median 12 months) postoperatively. Foveal configuration was postoperatively restored in 19 of 19 patients. Two patients who had not undergone ILM peeling showed a recurring defect at the 6-month follow-up. Best-corrected visual acuity improved significantly from 0.29 ± 0.08 to 0.14 ± 0.13 logMAR (p = 0.028, Wilcoxon signed-rank test). Microperimetry remained unchanged (23.38 ± 2.53 dB preoperatively; 23.0 ± 2.49 dB postoperatively; p = 0.67). No patients experienced vision loss after surgery, and no significant intra- or postoperative complications were observed. Using PRP as an adjunct in macular hole surgery significantly improves morphological and functional outcomes. Additionally, it might be an effective prophylaxis against further progression and against the formation of a secondary full-thickness macular hole. The results of this study might contribute to a paradigm shift in macular hole surgery towards early intervention.

Introduction

Any macular lesion can impair visual acuity, lead to metamorphopsia, and further result in reduced vision and quality of life. This also applies to lamellar macular holes (LMH), which were first described in 1976 by Gass et al., using slit lamp biomicroscopy, as oval reddish macular lesions without subjective hole formation in the gap of light presented to the patient [1]. Over time, and with the introduction of high-resolution spectral domain optical coherence tomography (SD-OCT), partial-thickness macular defects were redefined by Hubschman et al. and the international vitreomacular traction study group and subdivided into LMH, ERM foveoschisis (ERM-FS), and pseudoholes. The diagnosis of LMH is based on the following OCT criteria: (1) irregular foveal contour, (2) foveal cavity with undermined edges, and (3) signs of tissue loss. In many cases, additional OCT signs may be present, such as epiretinal proliferation (EP), a foveal bump, and ellipsoid zone disruption [2,3]. To avoid misunderstandings, we use the term ERM-FS as synonymous with tractional lamellar macular hole (TLMH), which was proposed by Govetto et al., as the pathogenesis of both is identical [4,5]. ERM-FS appears as a "moustache"-like lesion with a sharp-edged split between the outer plexiform layer (OPL) and the outer nuclear layer (ONL), a premacular membrane, and an intact ellipsoid zone. In contrast, the LMH presents as "top hat"-shaped, with round-edged cavitations, foveal bumps, epiretinal proliferation (EP; synonymous
with lamellar hole-associated epiretinal proliferation (LHEP)), and ellipsoid zone defects. The latter was first described by Pang et al. and manifests as a mid-reflective layer on OCT [6,7]. EP seems to be composed mostly of proliferating and/or hypertrophied Mueller cells of the foveal walls that were disrupted and have migrated to the retinal surface [8]. In LMH without degenerative cavitations, EP is connected to the Mueller cell conus of the foveola. This tissue of medium reflectivity covers the whole inner surface of the LMH (non-elevated foveal walls) and connects the cell conus of the foveola with EP at the vitreous surface of the walls [5]. The current OCT classification distinguishes between the subentities that seem to be relevant in clinical routine with regard to progression [9]. The implications for morphological and functional outcomes after surgery are controversial and still being debated [10–14].

Whereas the diagnostic criteria for LMH are precisely defined, there still exists no clear guideline for standardized treatment. Similarly, the benefits and especially the optimal timing of surgical intervention are still matters being resolved through discussion. While some studies only cautiously recommend surgical intervention, others show promising results with regard to visual and morphological outcomes [14–17].

Platelet-rich plasma (PRP) was first used in the 1990s for macular hole surgery. Promising results have been documented especially regarding its use for treating refractory, traumatic, or full-thickness macular holes [18–20]. Platelets are a natural reservoir of growth factors, e.g., epidermal growth factor (EGF), vascular endothelial growth factor (VEGF), and platelet-derived growth factor (PDGF) [21]. These are secreted when platelets come into contact with disintegrated tissue, such as after ILM peeling of lamellar macular holes, and therefore play a pivotal role in the regeneration of macular defects [22]. This has led to the use of platelets as an adjuvant in macular hole surgery to modulate wound healing processes and tissue remodeling, thus improving anatomical and visual outcomes. To date, only few clinical data are available for LMH surgery, especially for modifications such as highly concentrated autologous platelet-rich plasma [23,24], which was used in our study. The aim of our study is to add to the knowledge of the morphological and functional outcomes of lamellar macular holes undergoing vitrectomy with ILM peeling in combination with PRP.

Results

In total, 19 eyes from 19 patients with a symptomatic and progressive degenerative lamellar macular hole were enrolled in this interventional case study (Table 1). All patients fulfilled the SD-OCT-based main diagnostic criteria of degenerative lamellar macular holes. Lens status was almost evenly distributed, with 8 pseudophakic and 11 phakic patients. All phakic patients underwent combined phacovitrectomy with phacoemulsification and implantation of an intraocular lens.

Morphological Findings

Preoperatively, all patients fulfilled the mandatory criteria for LMH on SD-OCT. In addition to the irregular foveal contour, the foveal cavity with undermined edges, and signs of foveal tissue loss, associated alterations at the vitreoretinal interface were present on SD-OCT, as shown in Table 1. Hyperreflective epiretinal tissue was used as an umbrella term to refer to (tractional) epiretinal membranes as well as the vitreous cortex, which are very challenging to distinguish using only SD-OCT.
Initially, restoration of the foveal contour with no signs of tissue loss remaining was observed in all cases (Figures 1 and 2). Ellipsoid zone defects improved in 6 of 11 cases (55%). This morphology was stable during the whole follow-up period, except in three patients, whose cases are described below. Subgroup analysis of only the pseudophakic patients showed an improvement of BCVA from 0.34 ± 0.11 logMAR (median 0.35 logMAR, range 0.50-0.20 logMAR) preoperatively to 0.21 ± 0.12 logMAR (median 0.20 logMAR; range 0.40-0.10 logMAR) at the last documented follow-up, which was also statistically significant (p = 0.047, Wilcoxon signed-rank test). Microperimetry ranged from a preoperative mean threshold of 23 […].

In two of three patients, a recurrent foveal defect was present at 6 months postoperatively. These were the only two patients who had not received ILM peeling. The foveal defect was stable, with no functional decline over a follow-up of 12 months. Due to these findings of stability, a re-vitrectomy has not yet been performed. The third patient disregarded the recommendation to postoperatively remain in a supine position, which presumably led to PRP dislocation. A secondary vitrectomy with reapplication of PRP was performed after resorption of the endotamponade. After 3 months, the foveal morphology was restored, and the functional parameters indicated improvement.

Postoperative cystoid macular edema was seen in 4 of 19 eyes (21%). These were treated with nonsteroidal anti-inflammatory eye drops (0.3% Nepafenac) and/or a parabulbar injection of 40 mg triamcinolone. Resolution of the macular edema was achieved in all cases.

Discussion

In this interventional case study of 19 patients with progressive LMH treated by pars plana vitrectomy with ILM peeling and highly concentrated autologous platelet-rich plasma, we observed morphological and functional improvement at long-term follow-up. The prevention of further progression, even into stages that are more visually limiting, thus also seems to argue for earlier surgical intervention. Until now, no national or international guidelines have been established for the management of partial-thickness macular holes. Therefore, whether the correct approach is to treat or not to treat lamellar macular holes is still a matter of discussion. A very important step toward shedding light on this question is the new classification of Hubschman et al., which allows differentiating between the distinct entities of partial defects [2]. Thus, the results of previous studies have to be interpreted with caution, as the terminology was not clearly defined, and partial-thickness defects must also be individually addressed. Considering the results of our research, two different findings have to be taken into account: on the one hand, the morphological improvement of the foveal structure and the prevention of progression; on the other hand, the measured functional improvement in visual acuity.

Morphological Improvement

The pathogenesis of LMH is not yet fully understood, nor are the exact mechanisms of regeneration after macular surgery known, especially in combination with PRP. While, in most cases, degenerative LMH seems to remain stable or may even close spontaneously over time [25], in others, a progressive degenerative natural course is observed, with the development of ellipsoid zone defects and possible conversion into a full-thickness macular hole (FTMH) [26].
Two mechanisms seem to be important for foveal restoration after vitreomacular surgery:

(1) Release of the vitreous adhesion/traction: Peeling of the vitreous cortex together with the ILM results in the closure of the cavitations, but retinal layers still appear disrupted [9]. The microstructure continues to be disorganized, and cavitations are replaced by mid-reflective material. Improving structural defects and halting the degenerative process thus seems to require additional surgical modifications in the form of, e.g., PRP.

(2) Activation of Mueller cells and stabilization of the foveal microenvironment: Mueller cells account for 90% of the retinal glia and play a pivotal role in retinal wound healing [27]. ILM peeling leads to the shaving of the basal membrane of Mueller cells, which acts as a stimulus for proliferation [27]. PRP acts as an important factor in further supporting the healing process. PRP is composed of platelets that are activated through contact with disintegrated neuroretinal tissue. They are known to be rich in growth factors and cytokines such as vascular endothelial growth factor (VEGF), platelet-derived growth factor (PDGF), epidermal growth factor (EGF), fibroblast growth factor (FGF), insulin-like growth factor 1 and 2 (IGF-1, IGF-2), and transforming growth factor beta 1 (TGFβ1) [21]. Of these, PDGF, EGF, IGF-1, and FGF seem to be the most relevant [22]. Thereupon, signal transduction pathways are activated in Mueller cells and regulate migration, proliferation, and tissue remodeling [27].

As we did not observe any secondary FTMH in our study, such an optimized microenvironment may well preclude the development of FTMH. PRP seems to additionally improve the success rate of complete defect closure, increasing the likelihood of superior foveal architectural restoration and, consequently, functional improvement. Regarding defects in the ellipsoid zone, which are a sign of chronicity, we observed restoration over time after vitrectomy with PRP in 6 of 11 cases (54.5%). The status of the foveal external limiting membrane (ELM) and the ellipsoid zone (EZ) is correlated with central retinal sensitivity and BCVA. Therefore, some authors have proposed that restoration of the foveal configuration is not the only important factor for BCVA improvement but, rather, that continuity of the ellipsoid zone seems to be more essential [28,29]. The number of patients in our study is unfortunately too low to allow the evaluation of these statements on a statistically convincing basis. The results are, however, consistent with the observations of Holland et al., who described improved preoperative to postoperative visual acuity associated with fewer ellipsoid zone defects. Based on this finding, one should consider earlier surgical intervention in LMH patients, before the development of ellipsoid zone defects [30].

Functional Improvement

One of the main reasons why vitrectomy in partial-thickness macular holes is still controversial is the reduced functional benefit found in a few prior studies [10,11]. In accordance with most of the available publications, this study demonstrates a significant postoperative increase in the visual acuity of phakic (0.15 logMAR) and pseudophakic (0.13 logMAR) patients [12,14,31]. A recent meta-analysis by Parisi et al. reported the surgical outcomes for 463 eyes with tractive or degenerative LMH from 13 studies [32]. In these studies, the increase in visual acuity after surgical intervention ranged from 0.1 to 0.21 logMAR.
Taking a closer look at the studies with the largest improvements in visual acuity, such as Obata et al. with an increase of 0.21 logMAR, it must be considered that the functional improvement may be due to cataract surgery alone, as 12 of the 13 included patients received combined phacovitrectomy [33]. Coassin et al. studied 106 symptomatic LMH patients who underwent either simple PPV or phacovitrectomy and experienced significant improvements in postoperative BCVA (p < 0.001) [34]. When phacovitrectomized patients were excluded from the analysis, there was still a significant improvement in postoperative BCVA (p = 0.0036), as was the case with the pseudophakic subgroup in our study. Subgroup analyses of pseudophakic patients or, even better, prospective trials with a homogeneous pseudophakic cohort will be very important to eliminate this confounder.

In terms of safety, the use of PRP as an adjunct therapy did not cause any additional complications and, in particular, did not lead to the loss of visual function. However, there are two factors that have to be considered when using PRP. The widely discussed ILM peeling seems to be mandatory, because it was not performed in the two cases in which we saw a recurrent defect. This has led to the hypothesis that PRP needs to come into direct contact with disintegrated tissue to be activated [22,27]. The second important factor is the postoperative supine positioning of the patient for 1-2 h; ignoring this might lead to PRP dislocation. The different endotamponades do not seem to have a significant influence on the results; air tamponade seems to be sufficient and the preferable choice due to its short resorption time.

Another promising method is the EP embedding technique, where the EP material is placed in the foveal defect [35]. Considering the hypothesis that EP is formed as an attempt by Mueller cells to regenerate the foveal tissue defect, there are similarities here to the hypothesized mode of action of PRP; the common factor is the activation of Mueller cells. While our results, as well as those of other groups, demonstrate a rationale for surgical intervention, the exact surgical procedure, with any necessary modifications or use of adjuvants, is still a matter of discussion. One possibility is the use of highly concentrated autologous platelet-rich plasma. To date, our study has the largest cohort with the longest follow-up of degenerative lamellar macular holes undergoing vitrectomy with peeling and the use of PRP. The additional use of PRP as an adjuvant might further enhance the morphological and functional outcomes and, even more importantly, is able to prevent the progression of LMH to stages of high vision impairment. Therefore, early surgical intervention seems reasonable. Our study is limited by its small sample size, lack of a control group, and inhomogeneous lens status. Further studies are needed to compare the advantages of the different techniques and approaches and to determine the most efficient method.

Study Design

We included 19 eyes from 19 patients with progressive and symptomatic lamellar macular holes in this prospective, interventional case series.
All eyes underwent 23-/25-gauge vitrectomy in combination with an endotamponade (SF6, C2F6) and with the application of autologous, highly concentrated platelet-rich plasma. Surgery was performed by highly experienced vitreoretinal surgeons (SGP, TCK, and WJM) at the Department of Ophthalmology, Ludwig Maximilian University of Munich, Germany, between December 2019 and November 2022. The study was approved by the institutional review board of the University Eye Hospital of the Ludwig Maximilian University of Munich and was conducted in accordance with the tenets outlined in the Declaration of Helsinki. All subjects provided written informed consent before undergoing the interventions described below. The literature research was carried out via PubMed® of the National Library of Medicine, and relevant scientific publications were selected.

Patient Selection

Clinical examination and multimodal imaging, including SD-OCT, were performed on all patients. The SD-OCT-based diagnostic criteria of LMH were met when the fovea showed (1) an irregular contour, (2) undermined edges, and (3) signs of tissue loss [2]. Patients with concomitant retinal pathologies such as diabetic retinopathy, vitreous hemorrhage, retinal detachment, age-related macular degeneration, inflammatory disease, vascular occlusion, high myopia (≤ −6.00 dpt), or trauma were excluded. Surgery was recommended when at least two of the following findings occurred during the preoperative follow-up period: (1) significant reduction in visual acuity, (2) progression of the foveal morphology, and/or (3) significant impairment of quality of life caused by metamorphopsia. All patients were evaluated preoperatively and at minimum 6 months or longer after surgery, with an identical work-up. Potential postoperative complications, e.g., macular edema, were recorded at any time point during the follow-up period.

PRP Preparation

PRP preparation was performed as described in previous publications [23,24]. Whole blood (105 mL) was drawn and anticoagulated at a ratio of 1:7. Separation into platelet-rich plasma, red blood cells, and platelet-deficient plasma was conducted using a special closed-circuit centrifugation method (Angel System™, Arthrex, Naples, FL, USA). Highly concentrated PRP is characterized by a low fraction of pro-inflammatory leucocytes and an 8.8× higher concentration of platelets than in whole blood.

Surgical Procedure

The 23-/25-gauge pars plana vitrectomy was performed by highly experienced surgeons, with induction of posterior vitreous detachment and peeling of epiretinal tissue, if present, and of the ILM, except in two patients, as described in Table 1. After staining with MembraneBlue-Dual Dye (0.125 mg Brilliant Blue G and 0.75 mg Trypan Blue; D.O.R.C., Zuidland, The Netherlands), peeling was conducted, followed by a second control staining. All phakic patients underwent combined phacovitrectomy with implantation of a previously calculated intraocular lens. After gas (SF6, C2F6) or air tamponade, highly concentrated PRP (0.1 mL) was applied to the posterior pole. Patients were strongly recommended to postoperatively remain in a supine position for 2 h.

Main Outcome Measures

Primary anatomical success was defined as hole closure and postoperative morphology on SD-OCT, such as integrity of the inner and outer retinal layers and of the inner foveal contour, during all follow-up scans.
Secondary endpoints were functional results and included best-corrected visual acuity, microperimetry, and appraisal of metamorphopsia.

Statistical Analysis

Statistical analysis was performed using IBM SPSS Statistics Version 26 (IBM Corporation, New York, NY, USA). All data are presented as means ± SD unless otherwise stated. The Wilcoxon signed-rank test was used to compare two related groups (BCVA, central retinal thickness, microperimetry data). Values of p ≤ 0.05 were considered to indicate statistically significant differences.

Conclusions

This is an interventional case study with, to the best of our knowledge, the largest cohort of patients with progressive and symptomatic LMH undergoing vitrectomy with ILM peeling and the use of highly concentrated autologous platelet-rich plasma (PRP). Using PRP as an adjunct was shown to improve morphological and functional outcomes, as well as to prevent further progression, as assessed at long-term follow-up. For the treatment of LMH, the use of PRP seems to be more effective than conventional surgery. Most importantly, the results show that the intervention can be seen as prophylaxis against secondary full-thickness macular hole formation and, thus, against further vision loss. The data support early surgical intervention, which could lead to a paradigm shift in macular hole surgery.
Viability and Desiccation Resistance of Bartonella henselae in Biological and Non-Biological Fluids: Evidence for Pathogen Environmental Stability

Pathogen environmental stability is an often-neglected research priority for pathogens that are known to be vector-transmitted. Bartonella henselae, the etiologic agent of Cat Scratch Disease, has become a "pathogen of interest" in several serious human illnesses, which include neoplastic, cardiovascular, neurocognitive, and rheumatologic conditions. Survival in the flea gut and feces, as well as the association with a biofilm in culture-negative endocarditis, provides insight into this organism's ability to adjust to environmental extremes. The detection of B. henselae DNA in blood and tissues from marine mammals also raises questions about environmental stability and modes of pathogen transmission. We investigated the ability of B. henselae to survive in fluid matrices chosen to mimic potential environmental sources of infective materials. Feline whole blood, serum and urine, bovine milk, and physiologic saline inoculated with a laboratory strain of B. henselae San Antonio 2 were subsequently evaluated by culture and qPCR at specified time intervals. Bacterial viability was also assessed following desiccation and reconstitution of each inoculated fluid matrix. Bartonella henselae SA2 was cultured from feline urine up to 24 h after inoculation, and from blood, serum, cow's milk, and physiologic saline for up to 7 days after inoculation. Of potential medical importance, bacteria were cultured following air-desiccation of all fluid inoculates. The viability and stability of Bartonella within biological and non-biological fluids in the environment may represent a previously unrecognized source of infection for animals and human beings.

Introduction

Bartonella henselae is an increasingly important emerging zoonotic vector-borne pathogen, with a worldwide distribution among cats, other mammals, and Ctenocephalides felis fleas [1–4]. Currently, 75% of emerging infectious diseases are considered zoonotic, and 28% of these infections are transmitted by one or more vectors [5–7]. To protect human and animal health, it is critical to determine the potential exposure risks and infectivity of these bacteria in the environment, in addition to ongoing efforts to elucidate each pathogen's zoonotic and vector potential. It is increasingly clear that members of the genus Bartonella, all of which are proven or suspected to be vector-borne endotheliotropic [8,9] and intra-erythrocytic pathogens [10,11], are responsible for a variety of emergent or re-emergent diseases worldwide, including recent outbreaks of urban "Trench Fever" (Bartonella quintana) in Denver, Colorado [12], and bacillary angiomatosis (Bartonella henselae, Bartonella quintana) in immune-competent patients associated with skin trauma or following solid organ transplantation in immunocompromised patients [13,14]. Of the sixteen Bartonella species reported to cause disease in humans, B. henselae, the etiologic agent of Cat Scratch Disease (CSD), has become a primary pathogen of interest. Compared to other Bartonella species, B. henselae has been associated with numerous novel chronic medical conditions, collectively termed "Bartonellosis", in deference to the […]

Other than a study that documented B. henselae survival in flea feces for up to 12 days [1], we are unaware of other research that has addressed the environmental stability of this bacterial species. To further investigate environmental stability, we evaluated the ability of B.
henselae to survive in various fluids, chosen to mimic fluids from an infected host, which could be a potential source of environmental spillover into terrestrial and aquatic environments. We also questioned whether B. henselae could survive desiccation following culture in various fluid matrices. Given that cats are the primary reservoir host and sustain long-standing bacteremia [78], feline whole blood, serum, and urine were chosen as fluid matrices for this study. Bovine milk was also tested, as B. henselae bacteremia has been infrequently documented in cows [52]. Physiologic saline was selected to approximate the salinity of the coastal marine environment. Brugge, a liquid mammalian cell culture medium, was chosen as a control fluid [79]. We hypothesized that B. henselae would not remain viable in any of the fluid matrices except feline blood, and that the bacteria would not remain viable following desiccation in any fluid matrix.

Study Design

To address these hypotheses, we first evaluated the ability of B. henselae strain San Antonio 2 (Bh SA2) to survive in the various fluids for a period of up to 7 days, through direct culture and qPCR amplification of DNA to detect trends in bacterial genome equivalents (GE) over time. To assess the ability to survive desiccation, sequential fluid inoculates were allowed to air-desiccate for seven days, reconstituted in Brugge medium, and then assessed for viability through agar plate culture. The overall study design is depicted in Figure 1.

Figure 1. […] physiologic saline were inoculated with Bartonella henselae strain San Antonio 2 to reach a concentration of 10⁹ bacteria per µL.
Samples were obtained from each of the inoculated fluids at time 0 h, 24 h, 48 h, 96 h, and 7 days, as follows: (a) Paired 250 µL aliquots were placed into 1.8 mL cryovials for DNA extraction and qPCR amplification, and 100 µL was plated onto Trypticase Soy Agar (TSA) with 5% sheep blood, incubated, and monitored for colony development. (b) A total of 100 µL from each fluid inoculum was incubated in 5 mL of Brugge medium for bacterial culture enrichment. After 7, 14, and 21 days of incubation, samples were obtained for DNA extraction and blood agar plate inoculation as described in (a). (c) Paired 250 µL aliquots were placed into individual wells of paired 6-well plates, desiccated overnight in a biosafety level 3 (BSL-3) vented biosecurity cabinet, then fitted with lids and transferred to an enclosed benchtop container at ambient temperature for the remainder of the 7 days. On day 7, the desiccated material was reconstituted using 2.5 mL of Brugge medium, and the 6-well plates were placed under incubation. After 7, 14, and 21 days of incubation, each well of the paired 6-well plates was sampled as outlined in (a). The negative control fluid, uninoculated Brugge medium, was sampled and stored alongside the test fluids. Note: Samples for manual DNA extraction were stored at −20 °C pending extraction, and all incubated samples were kept in a dedicated incubator at 35 °C with 5% CO2. Figure created in BioRender.com (https://app.biorender.com/, accessed on 29 August 2022).

Type and Source of Fluid Matrices

Feline whole blood, serum, and urine were obtained commercially (pooled samples from healthy male and female cats) from Biochemed Services (Winchester, VA, USA). Whole, ultra-pasteurized, organic cow's milk was purchased from a local retail grocery store, and laboratory-grade 0.9% physiologic saline was obtained from Intermountain Lifesciences (Salt Lake City, UT, USA, cat.# Z1376). Brugge medium, dedicated for experimental use only and sterilely prepared in-house, was used as a positive control culture matrix, to provide culture enrichment following inoculation of the test fluid matrices, and for reconstitution of the fluid matrices following desiccation.

Pre-Inoculation Evaluation of Fluid Matrices for Bartonella Species DNA and Bacterial Growth

Prior to inoculation with Bh SA2, 100 µL of each fluid was plated on TSA with 5% sheep blood (Thermo Scientific, Raleigh, NC, USA, cat.# R01200) and incubated at 35 °C/5% CO2. In addition, paired 250 µL samples were interrogated for the presence of B. henselae DNA by qPCR amplification targeting the Bartonella 16S-23S intergenic spacer (ITS) region (see Section 2.4). As cats are the known reservoir of B. henselae, antibody screening was performed to assess for the presence of anti-Bartonella antibodies that could impact bacterial survival in serum. Serum was assessed by immunofluorescence antibody testing (IFA) for the presence of Bartonella species-specific antibodies through the Vector Borne Disease Diagnostic Laboratory (VBDDL) at the North Carolina State University College of Veterinary Medicine. All samples were screened for B. henselae, B. koehlerae, and B. vinsonii subspecies berkhoffii, with titer dilutions tested between 1:16 and 1:8192. IFA antigens were grown in vitro by personnel at the VBDDL in DH82 cells (a canine macrophage line) used for the fluorescent antibody assays. Slides were prepared and assessed in-house using a Zeiss Axio Lab A1 ultraviolet microscope (Fisher Scientific, Waltham, MA, USA, cat.# 12-071-321) under a 40× objective.
End-point titers ≥1:64 were considered positive, in order to account for a potential dilution effect resulting from the use of pooled serum samples from cats with unknown Bartonella exposure, without overinterpretation.

Preparation of Bacterial Stock for Inoculation of Fluid Matrices

A low passage (passage #5) of Bh SA2 was grown on TSA with 5% sheep blood (Thermo Scientific, Raleigh, NC, USA, cat.# R01200) incubated at 35 °C/5% CO2. Five to six colonies, removed via a sterile culture loop, were inoculated into 10 mL of freshly prepared Brugge culture medium in a T-25 tissue flask and allowed to grow in an incubator at 35 °C/5% CO2 for 4 days. The negative control, an un-inoculated flask containing 10 mL of Brugge medium, was prepared simultaneously and incubated alongside the Bh SA2-inoculated stock preparation. Manual DNA extraction (Qiagen DNeasy Blood and Tissue Kit, Germantown, MD, USA, cat.# 69504) following the manufacturer's protocol was performed on paired 250 µL aliquots obtained from the Bh SA2 stock and the negative control flasks. The stock bacterial concentration was determined through qPCR amplification of the Bartonella 16S-23S ITS region using the primers BsppITS325s (5′-CCTCAGATGATGATCCCAAGCCTTCTGGCG-3′) and BsppITS543as (5′-AATTGGTGGGCCTGGGAGGACTTG-3′). A BioRad CFX Opus 96 Real-Time PCR System (BioRad, Hercules, CA, USA, cat.# 12011319) was used for all qPCR testing under the following conditions: 95 °C × 5 min for enzyme activation, followed by 45 cycles of 94 °C × 10 s for denaturation, 68 °C × 10 s for annealing, and 72 °C × 15 s for elongation. PowerUp SYBR Green Master Mix (Thermo Fisher, Raleigh, NC, USA, cat.# A25741) was utilized for all experiments in a total reaction volume of 25 µL. GE were plotted against known bacterial DNA stock dilutions to determine the Bartonella concentration in the inoculum culture.

Culture, DNA Extraction, and qPCR Evaluation of the Inoculated Fluid Matrices

Fluid matrices were stored at temperatures recommended for stability prior to use; feline blood, cow's milk, and Brugge medium were refrigerated at 4 °C, feline serum and urine were frozen at −20 °C, and saline was stored at ambient room temperature (20-22 °C). All fluids were brought to ambient temperature before being aliquoted into 10 mL volumes and instilled into sterile T-25 tissue flasks. Each experimental fluid was inoculated with 500 µL of the bacterial stock concentration to result in 10⁹ bacteria per µL of fluid matrix. As the positive control, 10 mL of Brugge medium was inoculated, and the original uninoculated Brugge flask was maintained as the negative control. Following inoculation, the flasks were gently agitated to allow for bacterial distribution, and time 0 h samples were obtained as follows: paired 250 µL aliquots were placed into 1.8 mL cryovials for storage at −20 °C pending DNA extraction, and 100 µL was plated onto TSA with 5% sheep blood (Figure 1a). Then, 100 µL was placed into 5 mL of fresh Brugge medium, used for bacterial culture enrichment, in a sterile T-25 tissue flask (Figure 1b). Paired 250 µL aliquots were instilled into individual wells of paired 6-well plates for desiccation (Figure 1c). All timed collections followed the above protocol (Figure 1). Brugge culture enrichment flasks and inoculated agar plates were placed under incubation at 35 °C/5% CO2.
Agar plates were evaluated for colony formation every four to seven days, and colonies, removed via a sterile inoculating loop, were placed into 100 µL of Buffer AL (Qiagen, Germantown, MD, USA, cat.# 19075) for subsequent DNA extraction, qPCR, and DNA sequencing. Brugge culture enrichment flasks were sampled at 7, 14, and 21 days post-inoculation: paired 250 µL aliquots were collected for manual DNA extraction and qPCR, and 100 µL was instilled onto TSA with 5% sheep blood (Figure 1b). Subculturing original fluid inoculates into Brugge medium was performed to assess bacterial viability in a known growth medium following incubation in an experimental fluid matrix [79]. As all experiments were run concurrently, the viability of original fluid inoculates was unknown at the onset of the experiment.

Desiccation of the Inoculated and Un-Inoculated Fluid Matrices
Sterile lidless 6-well plates were placed into a biosecurity cabinet with positive airflow overnight to allow for fluid desiccation prior to being fitted with lids and placed into an enclosed benchtop container maintained at ambient temperature for seven days. Each well was then reconstituted with 2.5 mL of Brugge medium, covered, and placed into incubation as previously outlined. On days 7, 14, and 21 post-reconstitution, paired 250 µL samples were obtained from each well for DNA extraction and qPCR amplification, and 100 µL from each well was plated onto TSA with 5% sheep blood (Figure 1c). Colony growth was monitored every four to seven days, and visible colonies were removed via a sterile inoculating loop for DNA extraction, qPCR, and sequencing.

Statistical Analysis
Data analysis of bacterial concentration (GE) based on qPCR Ct value, including mean and standard error from paired sample evaluation, was performed using Microsoft Excel 2019 (version 16). Paired-sample t-tests were used to determine whether there was a significant change (p < 0.05) in bacterial DNA concentration over inoculation time within each fluid, and unpaired t-tests were used to assess for differences between fluid types (GraphPad Prism 9, San Diego, CA, USA). Fold change ratios were also calculated within fluid inoculates over time and are reported as the difference between the measured and original inoculate concentration divided by the original inoculate concentration.

Pre-Inoculation Evaluation of the Fluid Matrices for Bh SA2 DNA and Bacterial Growth
Prior to Bh SA2 inoculation, screening for the presence of Bartonella species DNA by qPCR amplification targeting the Bartonella 16S-23S ITS region was negative for all fluid matrices. No colony growth was detected one week after feline urine, cow's milk, physiologic saline solution, or sterile Brugge medium was inoculated onto blood agar plates; however, feline blood and serum grew contaminant bacteria when plated onto blood agar. These colonies were qPCR-negative for Bartonella DNA. Contaminant bacterial growth was attributed to a lack of sterile technique utilized by the commercial distributor during blood collection or subsequent sample pooling. The feline serum was IFA reactive for antibodies to B. henselae (1:512), B. vinsonii subsp. berkhoffii (1:512), and B. koehlerae (1:256), suggesting prior exposure to Bartonella spp.

Bartonella henselae SA2 Viability and Stability in Six Fluid Matrices
Bacterial colonies were observed from all fluid inoculates at all culture time points, except for B. henselae SA2 growing in urine, where colony formation was observed only on the 0 h and 24 h blood agar cultures (Table 1).
Table 1. Growth of Bartonella henselae colonies on blood agar following incubation in feline blood, serum, or urine; cow's milk; physiologic saline; or Brugge medium for up to 7 days. Colonies identified as Bh SA2 through DNA sequencing are denoted as (+). No colonies were cultured from the feline urine after 24 h of bacterial incubation, and no colonies developed from the negative control. Blood agar plates were incubated for 4 weeks to allow sufficient time for bacterial growth.

These results documented bacterial viability within all fluid matrices, indicating that the Bh SA2 organisms remained viable at the time of desiccation for all fluids other than urine. Bacterial isolate identity was confirmed by PCR amplification and DNA sequencing for all inoculated matrices (Eton Bioscience, Research Triangle Park, NC, USA). Despite the growth of contaminant bacteria from blood and serum at the time of inoculation, small colonies, confirmed as Bh SA2 by DNA sequencing, were visible on the respective blood agar plates from these two matrices at all time points. Blood agar plate colonies were never visualized from the negative control Brugge medium flask, co-incubated alongside the Bh SA2-inoculated matrices.

To assess the growth or stability of Bh SA2 in each inoculated fluid, bacterial concentration was measured by DNA amplification using qPCR, as described in Section 2.4. The results are depicted in Figure 2.

Figure 2. Concentration of Bartonella henselae SA2 DNA identified following incubation in feline blood, serum, or urine; cow's milk; physiological saline; and Brugge enrichment medium over time. Paired measurements were averaged and are depicted as GE per µL with the standard error of that fluid's mean concentration. The concentration of Bh SA2 in inoculated feline serum was lower between 24 h and 7 d (#) than that measured in the other fluids (* p ≤ 0.001), but no significant difference was found for the 0 h GE across fluids. Note: the Brugge positive control bacterial GE at 0 h was not determined due to sample loss.

Variation was anticipated based on whether the fluid supported bacterial growth and stability or caused damage resulting in bacterial death. Although not a statistically significant decrease (p = 0.198), serum bacterial DNA concentration dropped 9.8-fold from time 0 h to 24 h and did not subsequently rebound. Unpaired t-tests (Table 2), however, demonstrated a statistically higher bacterial concentration between 24 h and 7 d of incubation in each of the other matrices compared to serum, with no difference between any of the fluids when GE were compared at time 0 h. There was no amplification of Bh SA2 DNA from the negative control.
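The fold-change definition and paired-sample testing from the Statistical Analysis section can be made concrete with a short sketch. The following Python snippet is illustrative only: the GE values are invented placeholders, scipy stands in for the Excel/Prism workflow actually used, and with only two paired aliquots per time point the test is shown purely to demonstrate the mechanics.

```python
import numpy as np
from scipy import stats

# Placeholder paired GE measurements (GE/uL) for one fluid at two time
# points; two values per time point mirror the paired-aliquot design.
ge_0h = np.array([2.1e6, 1.9e6])
ge_24h = np.array([2.4e5, 1.8e5])

# Fold change ratio as defined in the text: (measured - original)
# divided by the original inoculate concentration.
fold_change = (ge_24h.mean() - ge_0h.mean()) / ge_0h.mean()

# Paired-samples t-test for a change in DNA concentration over time
# (significance threshold p < 0.05).
t_stat, p_value = stats.ttest_rel(ge_0h, ge_24h)

print(f"fold change 0 h -> 24 h: {fold_change:+.2f}")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```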
Colony formation following Brugge medium enrichment varied across the fluid matrices (Table 3). From the inoculated blood matrix, colonies developed from all time points following Brugge medium enrichment, with the exception of Bh SA2 incubated in feline blood for 7 days, which produced colonies only at the 7-day evaluation. Milk and the Brugge positive culture control had identical colony growth patterns: both had colony development from all initial incubation times when evaluated at 7 and 14 days following inoculation into Brugge medium, while colonies developed at 21 days only after Bh SA2 had incubated in the respective fluids for 96 h and 7 days. Bh SA2 incubated in serum and saline solution had variable colony formation following incubation in Brugge medium. For serum, colony formation was observed in all but the 0 h culture after 7 days of Brugge enrichment, whereas only the 96 h and 24 h cultures developed colonies following Brugge enrichment for 14 and 21 days, respectively. Bh SA2 incubated in saline for 0 h through 48 h developed colonies after 7 and 14 days of Brugge support, while colonies were observed from the 7-day saline inoculate following 7 and 21 days of Brugge enrichment. In contrast, urine cultures only yielded colony formation directly following inoculation (0 h). No colonies were visualized from the Brugge medium flask not inoculated with Bh SA2 (negative control).

Bacterial concentration (GE) was measured in each of the Brugge-enriched fluid inoculates to assess for growth, decline, or stability. The results are depicted in Figure 3. Bh SA2-inoculated blood displayed the largest range of amplified DNA concentration after being sub-cultured in Brugge medium, with a positive trend in the 0 h, 96 h, and 7 d inoculates evaluated after 7–21 days of Brugge enrichment (Figure 3a). DNA concentration of Bh SA2 incubated in feline blood for 24–48 h remained relatively stable across all timed measurements. When averaged over the three weeks of Brugge enrichment, the blood culture Bh SA2 concentrations incubated for 24 h through 96 h were significantly higher than the measurable DNA from the 7 d Bh SA2 inoculate over the same time period (24 h p = 0.0167, 48 h p = 0.0002, 96 h p = 0.0335) (Figure 3a). Bh SA2 incubated in serum with Brugge enrichment displayed the lowest measured bacterial concentration, with a decline in detectable DNA between 0 h and 7 d, similar to what was observed in Figure 2 (Figure 3b). Average bacterial concentration in serum across the 21-day Brugge supplementation was higher for the 0 h inoculate compared to each other timed inoculate using a paired-samples t-test, although not significant for the 24 h measurement (24 h p = 0.466, 48 h p = 0.0368, 96 h p = 0.0300, 7 d p = 0.0231) (Figure 3b). Aside from a positive trend seen in the 0 h urine culture between 7 and 14 days of Brugge enrichment in Figure 3c, detectable DNA remained low and without variation across the timed interrogations. When Bh SA2 was cultured in milk, there was no discernible pattern in bacterial concentration between the length of incubation and Brugge enrichment (Figure 3d).
Average bacterial concentrations across the 21 days of Brugge enrichment showed that Bh SA2 from the 0 h milk inoculation was higher than the subsequent timed measurements, but only significantly higher than the 96 h milk incubation time point (p = 0.0208).

Bh SA2 Viability and Stability after Desiccation of Inoculated Fluid Matrices and Reconstitution with Brugge Growth Medium
Following air-desiccation and incubation in Brugge medium, colony growth was obtained from all reconstituted fluid matrices (Table 4). As paired 6-well plates were utilized for the desiccation component of the experiment, a blood agar culture was established from each paired well. Compared to the other reconstituted matrices, colonies grew most consistently from blood and serum cultures plated following desiccation and reconstitution with Brugge medium, regardless of the inoculate incubation duration. For milk, saline, and the positive Brugge control, no colonies grew from the 0 h desiccated cultures, but growth was obtained from these three matrices following Brugge reconstitution from the longer incubation periods (Table 4). For urine, only a single timepoint post-desiccation and reconstitution resulted in colony growth: the 24 h inoculate incubated in Brugge medium for 14 days. From a representative sample group of positive blood agar cultures (n = 23), DNA was extracted, amplified as previously described, and submitted to Eton Bioscience (Research Triangle Park, NC, USA) for genomic sequencing. Based on qPCR melting curves and the DNA sequences obtained, colonies growing post-desiccation and reconstitution were Bh SA2, the species and strain used as the inoculum for the experiment. There were no colonies observed from the negative control. Amplification of Bh SA2 DNA by 16S-23S qPCR was attempted for all inoculates at three time points: 7, 14, and 21 days following Brugge medium reconstitution (Figure 4). For the desiccated feline blood culture, there was a trend towards increased Bh SA2 DNA following Brugge reconstitution between the 7- and 14-day measurements for most of the original culture testing time points; however, only the 0 h culture time point approached statistical significance (p = 0.0509). Compared to the concentration of Bh SA2 measured prior to desiccation, the 0 h and 24 h inoculates were higher, although not significantly (p = 0.0720 and 0.2006, respectively). However, the average concentration of the 0 h and 24 h inoculates following desiccation and reconstitution compared to their pre-desiccation concentration was significantly higher (p = 0.0061) (Figure 4a). The 14-day blood incubation period yielded the highest DNA level in the 0 h inoculate, which was significantly higher than that attained at 14 days from the 48 h–7 d incubation times (p = 0.0480). For the other fluid matrices, DNA concentrations remained lower than those detected in the pre-desiccation inoculates.
Bh SA2-inoculated serum had the lowest residual DNA concentration among the six matrices, with no discernibly consistent DNA amplification pattern (Figure 4b). Average Bh SA2 DNA concentrations measured across all the time points were not significantly different for the serum or milk inoculates (Figure 4d). There was a significant increase in average GE for the 7 d saline inoculate between the 7- and 21-day interrogations compared to the other timed incubations in that matrix (p = 0.0254) (Figure 4e). For the Bh SA2 desiccated urine, the highest DNA concentration was measured after 14 days of Brugge enrichment for the 0 h urine inoculate, followed by the 21-day measurement for the 7 d desiccated urine culture. The average Bh SA2 DNA in both the 0 h and 7 d urine cultures exceeded that obtained from the 24–96 h cultures (p = 0.0020 and 0.0022, respectively) (Figure 4c). Of interest, the desiccated 24 h urine inoculate after 14 days of Brugge enrichment was the only time point at which urine colony growth was visualized (Table 4). This growth occurred from only one of the two desiccated inoculates and, when compared against the genome equivalents obtained from each of these wells, correlated with the higher DNA concentration (9.54 × 10⁵ vs. 4.38 × 10⁵ GE/µL). In general, the GE graphs for milk and saline had similar patterns. Both reached their highest GE levels when the matrix-inoculated bacteria were incubated longest prior to desiccation (the 7 d inoculate). Although small, both inoculates also trended towards increasing GE over the 21-day Brugge enrichment period. For milk, there was a percent variance increase of 7.73% between the 7- and 21-day incubation periods, compared to a change of 2.47% for saline (Figure 4d,e).

Discussion
In this study, B. henselae strain San Antonio 2 was successfully cultured in cow's milk, sterile physiological saline solution, and three body fluids derived from cats, i.e., blood, serum, and urine. Following desiccation by air-drying and reconstitution in Brugge medium for culture enrichment, viable Bh SA2 grew from each fluid matrix. In the context of environmental stability, our results support bacterial persistence in various physiological fluids, as well as the ability to survive desiccation by air-drying. Following inoculation of Bh SA2 into the five fluid matrices and the Brugge positive control flask, Bh SA2 DNA was amplified and sequenced from all six liquid cultures at time points spanning 0 h to 7 days (Figure 2). During this period, there was no denaturation of bacterial DNA in five of the six fluid matrices. However, when cultured in cat serum, amplifiable Bh SA2 DNA remained below the inoculated concentration following 24 h of incubation, possibly related to the presence of anti-Bartonella species antibodies in the pooled serum samples utilized, as the comparable times measured across the other test fluids did not result in a similar pattern. In contrast, all other inoculated matrices resulted in significantly higher Bh SA2 concentrations during the 7-day incubation period (Table 2). Contrary to our hypothesis, Bh SA2 colonies were isolated from inoculated milk, saline solution, and serum, as well as from blood, at all time points between inoculation (0 h) and 7 days of incubation, and from urine at 0 h and 24 h, underscoring bacterial viability within these matrices (Table 1).
Although some loss of bacterial viability occurs when transitioning from a liquid environment (growth medium) to a solid medium, we did not confirm viability by isolation in feline urine past 24 h; however, since Bh SA2 colonies were also obtained from the 24 h desiccated urine culture after 14 days of Brugge enrichment, it seems likely that these bacteria are more viable in urine than has been previously reported [80]. Future research could include an evaluation of RNA (through reverse transcription qPCR using 16S rRNA as a target) to help further determine the extent and duration of B. henselae viability in urine. Bacterial viability in each fluid matrix was also assessed by sub-culture using Brugge medium (specifically used to support and enrich Bartonella growth) [79], followed by attempted blood agar plate isolation. Colonies of B. henselae SA2 were obtained from Brugge sub-cultured blood, milk, saline solution, and serum. Bartonella colony formation was similar between these inoculated matrices and, to a lesser extent, saline, when compared to colony isolation following Brugge medium enrichment. Bh SA2 inoculated into blood resulted in a higher bacterial DNA concentration across the 21-day Brugge supplementation period for the 24 h–96 h blood cultures compared to the 7 d blood culture (Figure 3), where less colony growth was visualized at this time point (Table 3). It is anticipated that Bh SA2 cultured in feline blood for 7 days may have experienced nutritional deficits, resulting in this observation. For milk, the 0 h culture tested at 7, 14, and 21 days had significantly higher bacterial concentrations than the 96 h culture, but in contrast to blood, the 96 h milk culture produced colonies at all measured time points, emphasizing a disconnect between amplifiable DNA and the development of viable colonies. Serum inoculated with Bh SA2 resulted in colony formation in all but the 0 h culture after 7 days of Brugge enrichment (Table 3). Selective pressure from anti-Bartonella antibodies may have afforded those bacteria surviving in serum for longer periods a temporary growth advantage. Evaluation of differential gene expression in B. henselae cultured in the presence and absence of antibodies represents another interesting area for future endeavors, with the possibility of identifying diagnostic or therapeutic targets. With the exception of Bh SA2-inoculated blood, increased variability in colony formation was observed across cultured matrices enriched with Brugge medium for 21 days. This finding may be attributed to bacterial death secondary to nutrient limitations, as the medium was not refreshed throughout the experimental time span. Similar to the matrix fluid cultures, Bh SA2 DNA was amplified from all Brugge-enriched cultures (Figure 3), despite the lack of bacterial isolation for some enriched fractions, denoting DNA stability among the Brugge-supplemented matrices. These in vitro findings reiterate the comparative value of DNA detection via PCR when considering diagnostic relevance. From serum, colonies were visualized from the 96 h culture after 14 days of Brugge culture enrichment (Table 3), yet the DNA concentration measured at this point was 4.0 × 10⁴ genome equivalents per µL, one of the lowest measured concentrations among the serum series (Figure 3b), a scenario that was replicated when comparing colony growth to extracted bacterial DNA concentration in other fluids.
Colony formation was not obtained when Bh SA2 incubated in feline urine was inoculated into Brugge medium, with the exception of colony growth from the 0 h inoculate (Table 3), despite 21 days of culture enrichment. Our results suggest that either feline urine does not support the presence of viable Bh SA2 over prolonged periods, or that the method of analysis used in this study was inadequate to assess viability in urine. Unexpectedly, Bh SA2 was viable after bacterial cultures in blood, serum, urine, milk, saline solution, and the Brugge (+) control were allowed to air-desiccate. When Brugge medium was used to recover dried cultures, colonies were observed at variable testing time points from almost all of the desiccated culture fluids (Table 4), with blood being the most supportive of bacterial recovery. For the urine culture, colony development following desiccation occurred from only the 24 h inoculation. As direct culture onto blood agar from inoculated urine over time is more suggestive of the inability of urine to support Bh SA2 (Table 1), it is anticipated that colony development, in this case, was due to the short length of time the Bartonella inoculate spent in that matrix, combined with the additional support of Brugge medium for culture enrichment. It is interesting, however, that live colonies were attained from urine following desiccation: unlike Bartonella quintana, which was deemed infectious through cutaneous inoculation of patient urine, Bartonella henselae has not been determined to be transmitted through the urine of infected individuals [80]. Assessment of Bartonella's ability to persist in urine, along with its mechanisms of survival, outlines another area for future research. For saline and milk, colony development following desiccation was more successful the longer the original inoculum was exposed to those matrices (Table 4). Here, it is possible that bacteria were able to acclimate to these environments long enough to allow for the development of protective mechanisms. Future research directed at determining the mechanism(s) of desiccation resistance in this and potentially other Bartonella species is warranted. As previously observed in the other experimental conditions in this study, Bartonella DNA was amplified from all post-desiccation/reconstitution time points (Figure 4), again pointing to the stability of bacterial DNA despite air-desiccation. Interestingly, for the 0 h and 24 h blood cultures, there was an increase in bacterial genome equivalents between the 7- and 14-day interrogations, with averaged genome equivalents between these times reaching statistical significance; yet again, colony development on blood agar did not strongly correlate with the measured DNA concentrations (Table 4). Similarly, amplifiable DNA from Bh SA2 incubated in feline urine for 0 h and 7 d prior to desiccation and reconstitution was higher than at the other interrogation times, yet colonies developed from only the 24 h inoculate. In saline, the average DNA concentration was greatest when Bh SA2 was incubated for a period of 7 days prior to desiccation and reconstitution, and here, the 7 d inoculate more consistently resulted in colony development compared to the other interrogation times.
Following desiccation and reconstitution, the 96 h and 7 d Brugge control inoculates had significant increases in DNA between the 7-day and 21-day measurements (Figure 4), more indicative of bacterial growth, which better correlated with the colony observations obtained at these time points (Table 4). Traditionally, and for diagnostic purposes, blood has been the primary target sample type for the detection of Bartonella species in reservoir hosts and incidentally infected patients. Although a few reports describe the detection of Bartonella DNA from other body fluids such as serum [56,81], cerebrospinal fluid (CSF) [82,83], lymph node aspirates [84], aqueous humor [85], urine [86], and saliva [64,65,86,87], to our knowledge, a systematic assessment of the viability of this or other Bartonella species in these diagnostic patient fluid specimens has not been undertaken. Historically, case reports describing people acquiring CSD from rose thorn injuries [88], as well as cat or dog salivary transmission from bites and scratches [89–91], point to this organism's ability to tolerate a wide range of environmental conditions. The fact that B. henselae can be revived from several fluid matrices following desiccation not only disproved our hypothesis but also supports the possibility of previously unrecognized infectious sources and alternative modes of transmission. In the context of occupational risk, these findings are of particular concern for veterinary medical professionals and other individuals or professions with extensive animal exposure [92]. Whether exposure to desiccated bacteria presents a medical risk for humans and other animals remains unknown; however, our findings warrant consideration of non-vectorial modes of transmission in future laboratory, clinical, and epidemiological studies. B. henselae DNA, along with DNA from other human pathogenic Bartonella species, has been amplified from dust mites (Dermatophagoides farinae) and their feces [93,94], and Bartonella koehlerae, a cat-associated species related to blood culture-negative endocarditis in humans [95], has been associated with respiratory symptoms [96]; therefore, a potential role for the inhalation of dead or viable Bartonella spp. in asthma or other respiratory conditions should be investigated. Although dust mites and their excrement have not been evaluated for potential vector capacity or as an environmental source of Bartonella infection, if B. henselae can survive desiccation in a natural setting, this may represent an unexplored repository of infectious material. Importantly, viability following desiccation could provide the bacterium with extended time for environmental transmission to a mammalian reservoir host, insect vector, or incidental host, accentuating the need to further assess the environmental transmissibility of this bacterium. The ability of B. henselae to survive as planktonic microorganisms in fluids other than blood has identified the possible existence of an unexplored environmental niche for this bacterium in nature. Multiple worldwide studies have elucidated coinfections of Toxoplasma gondii and B. henselae in domestic and wild felids, with B. henselae commonly being the pathogen with the higher prevalence [47,97–99].
Like T. gondii, following the shedding of B. henselae into terrestrial and aquatic environments, either via fleas, flea feces from infected cats, or infected bodily fluids from cats or other terrestrial animals [73], the bacteria might be capable of surviving in regional watersheds or ocean water, as evidenced by bacterial viability within physiologic saline. It remains unknown whether ingestion of one or more intermediary aquatic species, vector transmission, or inoculation of wounds with contaminated water might be associated with the acquisition of B. henselae infection by cetaceans [66,67,70], and this may be an exciting avenue of future study. Also, the viability of B. henselae in feline blood and serum and bovine milk for up to seven days could represent sources of environmental contamination. In this study, we examined the ability of Bh SA2 to remain viable in several fluid matrices for up to seven days, independent of its flea vector or primary mammalian reservoir host, and to survive desiccation in physiological fluids when reconstituted using an enrichment culture medium. Comparison of agar plate isolation with PCR amplification of B. henselae DNA illustrated a lack of correlation between these two methodologies, which has been noted in previous studies [42,46,62]. Despite these important preliminary findings, there were limitations associated with this study. First, only a single strain of Bartonella henselae (San Antonio 2 strain type) was used for all experiments. The inoculum bacteria were accustomed to growing in Brugge medium under laboratory conditions, which may have influenced their ability to survive in the other chosen fluid matrices and may have impacted bacterial desiccation tolerance [100,101]. For comparison to the results from this study, future studies should be performed using multiple strains of B. henselae as well as different species of Bartonella, including recently isolated wild-type bacteria. This could provide not only confirmation of our observed results, but also potentially delineate the differential responses of other Bartonella strains and species to the experimental conditions. Replicates were collected in duplicate, most likely negatively impacting statistical power for the comparison of results. Although hypothesis-based, this study was observational: we did not ascertain growth kinetic curves based on colony-forming unit (CFU) enumeration, due in part to the difficulty of assessing singular colonies after plating from liquid media to blood agar, and due to the slow rate of growth of this organism impacting colony visualization [42,46,61]. Though the mechanisms that support the viability of Bh SA2 in the various fluid matrices and allow it to survive desiccation are suspected to be related to biofilm production, this was not investigated here and remains a topic for future investigation [102]. To assess bacterial persistence and viability, a large inoculation concentration was utilized, much higher than would be present during natural infection of a flea or feline reservoir host. To approximate more closely what might happen in a natural setting, lower inoculum concentrations should be evaluated. Bacterial contamination was present in the commercially purchased feline blood and serum, and although this did not prevent the growth of Bh SA2, we cannot rule out a synergistic or antagonistic effect on bacterial viability. Potentially, heat inactivation of blood and serum could be used in future experiments to decrease the risk of bacterial contaminants.
Lastly, these experiments were completed in a highly controlled in vitro laboratory setting that does not replicate numerous factors that would likely impact the potential for bacterial environmental contamination and stability. Despite these limitations, the information gained from these experiments provides a starting point for future endeavors to clarify the stability and viability of B. henselae outside of a mammalian host or arthropod vector, in terrestrial and aquatic environments, including the assessment of virulence genes and biofilm formation following inoculation into liquid matrices and after desiccation.

Conclusions
As biomedical research publications continue to increase the collective understanding of this insidious pathogen, B. henselae is proving to have previously unrecognized environmental survival capabilities [103,104]. There remains a substantial need to better understand the breadth and depth of Bartonella species' zoonotic disease ecology as it pertains to animal and human health in diverse terrestrial and marine environments. Additionally, still substantially unexplored is the potential role that Bartonella species may play in diseases affecting wildlife, or their impact on biodiversity [105–108]. Research documenting the presence of B. henselae DNA in cetacean blood and tissue may represent only the tip of the proverbial iceberg for marine environments. Also, knowledge of its environmental stability should add Bartonella to the list of pathogens that need to be investigated in association with sylvatic disease outbreaks. The impact of global climate change on vector range, in combination with urbanization and loss of wildlife habitat, may equate to people more routinely coming into contact with Bartonella species [109–111]. If indeed this pathogen can switch between an indirect means of infection through its vector and a direct means of infection, then researchers need to remain vigilant to its potential to cause disease in animals and human patients in conjunction with a new epidemiologic transmission paradigm. In considering the potential for Bartonella henselae to exhibit a higher level of environmental stability, as well as the possibility of the bacteria remaining viable outside of their mammalian host or insect vector for prolonged periods of time, it is important to utilize a One Health approach that addresses the comparative aspects of human and animal infections, as well as the impacts of various vectors and the environment, to achieve a better understanding of B. henselae transmission in nature.

Patents
A Bioinformatics Study of Immune Infiltration-Associated Genes in Sciatica

Sciatica has been widely studied, but the association of sciatica with immune infiltration has not been. We aimed to screen key genes and to further investigate the impact of immune infiltration in patients with sciatica. The bioinformatics analyses were performed based on the GSE150408 dataset. Subsequently, we used CIBERSORT to study the immune infiltration in the disease group. Results showed that 13 genes were differentially expressed in the sciatica group compared to healthy participants, including 8 up-regulated and 5 down-regulated genes. Through the LASSO model and SVM-RFE analysis, a total of 6 genes were found in the intersection, namely SLED1, CHRNB3, BEGAIN, SPTBN2, HRASLS2, and OSR2. The area under the ROC curve also confirmed the reliability of this method. CIBERSORT analysis showed that T cell gamma delta infiltration decreased and neutrophil infiltration increased in the disease group. The association of these six key genes with immune infiltration was then further verified. We found six overlapping genes and found that they were closely associated with the total immune infiltration in the sciatica group. These findings may provide new ideas for the diagnosis and treatment of patients with sciatica.

Introduction
Sciatica is commonly caused by lumbar disc herniation involving peripheral neuropathy [1,2]. According to statistics, the lifetime incidence of sciatica is as high as 40%. The common treatment methods for sciatica include nonsurgical conservative treatment and surgical treatment; 90% of acute sciatica cases can be effectively relieved by nonsurgical treatment [3]. Proteomic analysis has identified proteins related to sciatica or intervertebral disc degeneration, which may be involved in the pathophysiological process of sciatica [4]. It is generally believed that mechanical compression combined with immunity and inflammation can lead to sciatica during lumbar disc herniation. Many cytokines related to immunity and inflammation are activated in lumbar disc herniation [5,6]. In this study, we used two machine learning methods to explore and identify the key genes of patients with sciatica and preliminarily analyzed the immune cell infiltration. We then evaluated the correlation between immune cell infiltration and the key genes in sciatica, so as to provide new research ideas for the treatment and early detection of sciatica.

Screening of Differentially Expressed Genes (DEGs)
We downloaded GSE150408 from the GEO database (https://www.ncbi.nlm.nih.gov/geo/). The platform of the GSE150408 mRNA microarray is GPL21185, which was used for the following analyses.

Identification of Feature Genes
The feature genes were screened by two machine learning algorithms, the least absolute shrinkage and selection operator (LASSO) and support vector machine-recursive feature elimination (SVM-RFE), and validated in the validation dataset. Machine learning is an emerging class of analysis tools; this study used machine learning to identify feature genes. LASSO is a regularized regression algorithm, implemented through the "glmnet" package in R. SVM-RFE is a supervised learning technique that ranks features recursively. We adopted the "e1071" package to implement the SVM algorithm; a minimal analogue of this two-algorithm screen is sketched below.
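The study implements this screen with the R packages glmnet (LASSO) and e1071 (SVM-RFE). Below is a hedged Python analogue using scikit-learn rather than the authors' R code; the expression matrix, labels, and gene names are random placeholders standing in for the normalized GSE150408 data, so the selected sets are meaningless except as a demonstration of the intersection step.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegressionCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder expression matrix: 30 samples x 13 DEGs, with labels
# 0 = control, 1 = sciatica. Real input would be GSE150408 values.
X = rng.normal(size=(30, 13))
y = rng.integers(0, 2, size=30)
genes = [f"gene_{i}" for i in range(13)]

# LASSO-style selection: L1-penalized logistic regression with
# cross-validated regularization (analogous to glmnet in R).
lasso = LogisticRegressionCV(penalty="l1", solver="liblinear",
                             Cs=10, cv=5).fit(X, y)
lasso_sel = {g for g, w in zip(genes, lasso.coef_.ravel()) if w != 0.0}

# SVM-RFE: recursively eliminate features ranked by a linear SVM
# (analogous to the e1071-based workflow).
rfe = RFE(SVC(kernel="linear"), n_features_to_select=6).fit(X, y)
svm_sel = {g for g, keep in zip(genes, rfe.support_) if keep}

# The feature genes are the intersection of the two selections.
print("overlapping feature genes:", sorted(lasso_sel & svm_sel))
```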
Analyses of Immune Infiltration
The CIBERSORT deconvolution algorithm was adopted for the estimation of the proportions of different immune cells; in total, twenty-two types of immune cells were obtained. CIBERSORT filters data with p < 0.05. We then calculated each immune cell type's percentage and displayed it as a bar graph. The "pheatmap" package was adopted for the construction of the heat map of the twenty-two types of immune cells. Comparisons of the levels of the twenty-two types of immune cells were performed in R.

Statistical Analysis
Analyses of the association of immune cells with feature genes were performed using Spearman's rank correlation via R software. We used the "ggplot2" package for the visualization of the plots. P < 0.05 indicated statistical significance.

Diagnostic Feature Biomarker Screening
After removing the batch effects, thirteen DEGs were screened out: 8 significantly up-regulated and 5 significantly down-regulated (Figures 1(a) and 1(b)). Using the LASSO regression algorithm, we found 8 potential variables for the disease group (Figure 2(a)). A total of thirteen features were determined in Figure 2(b). SLED1, CHRNB3, BEGAIN, SPTBN2, HRASLS2, and OSR2 were finally selected (Figure 2(c)). Then, ROC analysis was performed to evaluate the predictive value of the 6 characteristic genes. The AUCs for all 6 genes were greater than 0.8 (Figure 3(a)), showing that the characteristic biomarkers have a high diagnostic ability (Figure 3(f)).

Analyses of Immune Infiltration
Immune infiltration in the control and sciatica groups was explored with the twenty-two subpopulations of immune cells. The percentage of the twenty-two types of immune cells is visually displayed in Figure 4(a). CIBERSORT analysis showed that T cell gamma delta infiltration decreased and neutrophil infiltration increased in the sciatica group (Figure 4(b)). The correlations between the six key genes and immune cell infiltration are shown in Figure 5.

Discussion
So far, there is no specific diagnostic method for sciatica. Combining medical history with a physical examination is the most common diagnostic method [7]. As a common clinical syndrome, sciatica is caused by two kinds of factors: internal and external [8,9]. When sciatica occurs, it often causes pain in the legs, the back, and below the knee, usually accompanied by tingling in the legs, numbness, or muscle weakness [10,11]. This study showed that 13 genes were differentially expressed in patients with sciatica. Through two methods, we identified six key genes: SLED1, CHRNB3, BEGAIN, SPTBN2, HRASLS2, and OSR2. We determined the association of these differentially expressed genes with immune infiltration in patients with sciatica. CIBERSORT analysis showed that T cell gamma delta infiltration decreased and neutrophil infiltration increased in the sciatica group. Up to now, there has been a lack of research on brain-enriched guanylate kinase-associated protein (BEGAIN). Studies have shown that BEGAIN participates in chronic pain: in the SNI model, mechanical allodynia, an abnormal pain condition caused by innocuous stimuli, was significantly attenuated in BEGAIN-deficient mice [12,13]. Another key gene found is SPTBN2; at present, research on SPTBN2 mainly concerns congenital cerebellar ataxia and various cancers. In the study of cancer, miR-424-5p was found to accelerate the development of endometrial cancer through regulating SPTBN2 and, in turn, the CLDN4/PI3K/Akt axis [14–16].
Combined with bioinformatics and cell experiments, SPTBN2 may become a novel target in lung adenocarcinoma (LUAD). SPTBN2 is highly expressed in LUAD and might indicate poor prognosis; cell experiments confirmed that SPTBN2 could promote the proliferative, migratory, and invasive abilities of LUAD cells [17]. Researchers have found that glia are significantly activated in the brains of patients who experience chronic pain, indicating that immune cells can spread and maintain disease states, including neuropathic pain, through communication with neurons, rather than being regarded as bystanders [18]. During nerve injury, neuronal activity is triggered, resulting in the recruitment of peripheral monocytes/macrophages to the injured site. At the same time, microglia release inflammation-related mediators after activation, resulting in neuronal sensitization [19]. Cytokines are the signaling molecules of the immune system. An increase in proinflammatory cytokines is related to the presence of pain after nerve injury, while anti-inflammatory cytokines are related to the down-regulation of the immune system and the relief of neuropathic pain [20,21]. Immune system activation has been shown to promote and increase neuropathic pain [22]. Immune cells play an important role in different pathophysiological processes in the state of neuropathic pain. This brings the pain field in new directions and provides opportunities for new methods for the treatment of chronic pain. However, there are still some limitations to the present study. This is a purely bioinformatics study without further experiments for validation, which weakens the evidence level of our results. In the future, we will conduct in vivo and in vitro assays to further explore the exact effects of the above-mentioned immune-related genes and the potential underlying mechanisms in sciatica.

Conclusions
In summary, we systematically discussed the functions of immune-related genes in sciatica and provided new ideas for new methods for the treatment of chronic pain.

Data Availability
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
Cost function for low-dimensional manifold topology assessment

In reduced-order modeling, complex systems that exhibit high state-space dimensionality are described and evolved using a small number of parameters. These parameters can be obtained in a data-driven way, where a high-dimensional dataset is projected onto a lower-dimensional basis. A complex system is then restricted to states on a low-dimensional manifold where it can be efficiently modeled. While this approach brings computational benefits, obtaining a good quality of the manifold topology becomes a crucial aspect when models, such as nonlinear regression, are built on top of the manifold. Here, we present a quantitative metric for characterizing manifold topologies. Our metric pays attention to non-uniqueness and spatial gradients in physical quantities of interest, and can be applied to manifolds of arbitrary dimensionality. Using the metric as a cost function in optimization algorithms, we show that optimized low-dimensional projections can be found. We delineate a few applications of the cost function to datasets representing argon plasma, reacting flows and atmospheric pollutant dispersion. We demonstrate how the cost function can assess various dimensionality reduction and manifold learning techniques as well as data preprocessing strategies in their capacity to yield quality low-dimensional projections. We show that improved manifold topologies can facilitate building nonlinear regression models.

In the era of big data, numerous science and engineering disciplines use dimensionality reduction to obtain lower-dimensional representations of complex physical systems with many degrees of freedom [1–8]. Large data coming from system measurements or simulations is frequently the starting point of reduced-order modeling. These high-dimensional datasets, ubiquitous in areas such as plasma physics, chemically reacting flows, neuroscience, genomics and transcriptomics, electrochemistry or atmospheric physics, often exhibit strongly attracting low-dimensional manifolds [9–20]. Describing the system evolution on those manifolds alone can thus be a viable modeling strategy [21–24]. After projecting the original variables onto a lower-dimensional basis, system dynamics can be tracked on a lower-dimensional manifold, embedded in the original state-space. This approach provides a substantial reduction in the number of parameters needed to visualize, describe and predict complex systems. To date, linear and nonlinear dimensionality reduction techniques have been used to find lower-dimensional spaces to represent multivariate datasets and build reduced-order models in those spaces. Some topological properties of low-dimensional data representations can make reduced-order modeling difficult. A particularly undesired behavior is overlapping states on a manifold, which can result in non-uniqueness in dependent variable values. For instance, with the manifold parameters used as regressors in nonlinear regression tasks, any ambiguity in dependent variable values can hinder successful modeling. Another characteristic of a problematic manifold is large gradients in the dependent variable values. These can appear when observations on manifolds in important regions are compressed with respect to other, less important regions. If dependent variable values change rapidly over such compressed regions, features of small sizes are formed that can pose modeling difficulty [25].
Maintaining moderate gradients on manifolds is thus a desired characteristic. In this paper, we are motivated by the emerging discussion on the need to characterize the quality of low-dimensional manifolds [26–30]. Quantitative tools are needed in areas where researchers tune the hyper-parameters of dimensionality reduction or manifold learning techniques to obtain improved manifolds of particular types of data [31–35]. When manifold learning is used for efficient data visualization, good quality manifolds can help uncover important multivariate relationships in complex datasets such as genomics or transcriptomics data [34,36–38]. In artificial intelligence, there is a need to determine the quality of manifold representations in neural networks, where the recent work on predictive learning demonstrated how those representations can vary throughout the learning stages [19]. Detecting intersection between several manifolds can be of interest in learning object manifolds.

Results

Cost function formulation
We base our discussion on the observation that regions of poor manifold topology (such as non-uniqueness) will only affect a dependent variable if there is variation in the variable's values over those regions (see Fig. 1). With this premise, our cost function is derived from variance in dependent variable values happening across different length scales on a manifold. As recently proposed in Ref. [30], we compute the normalized variance of a dependent variable, N(σ), for selected length scales, σ, on a manifold. We then analyze the derivative of N(σ) with respect to σ, denoted as D(σ). This derivative captures the information about how the normalized variance changes as the manifold is scanned at varying length scales given by σ ∈ [σ_min, σ_max]. The detailed mathematical description of N(σ) and D(σ) can be found in the "Methods" section and in Ref. [30]. Given q manifold parameters, η = [η_1, η_2, ..., η_q], the resulting normalized variance derivative, D(σ), can be computed for each dependent variable in a relevant set, φ = [φ_1, φ_2, ..., φ_m]. Figure 2a shows a visual example of how the normalized variance derivative assesses information content at various manifold length scales for one dependent variable, φ. The length scales at which peaks occur in a variable's D(σ) profile indicate feature sizes.

Figure 1. Various 2D projections of a 3D synthetic dataset can be formed by looking at the dataset at various angles to the z-axis and collapsing all observations onto a plane of sight. We demonstrate two example projections that can be formed: a top-down projection and a projection resulting from looking at an angle to the z-axis. In the new projected coordinates, [η_1, η_2], the top-down projection is unique everywhere, while the projection at an angle introduces regions of non-uniqueness with overlapping observations.

Each scale at which a dependent variable shows variation over the manifold, including variation due to non-uniqueness, will create its own imprint in D(σ). The largest spatial scale at which the dependent variable exhibits variation (the largest feature size) on a manifold is reflected in the rightmost peak location in the D(σ) profile, and we define it as σ_peak (see Fig. 2a). In the "Methods" section, we present a more detailed discussion of how σ_peak is obtained. Any smaller feature sizes appear as additional peaks for σ < σ_peak.
The 2D projection seen in Fig. 2a exhibits severe non-uniqueness which manifests itself in variance occurring at very small scales, σ ≪ σ_peak. Two particularly appealing characteristics of the normalized variance derivative are taken into account for the design of our cost function. First, the locations of peaks in the D(σ) curve convey feature sizes on manifolds. With an appropriate penalty with respect to the largest feature size, σ_peak, we favor manifolds that maintain large feature sizes, as those should facilitate modeling. Second, with multiple scales of variation present on manifolds, the area under the D(σ) curve will increase due to the additional peak(s) for σ < σ_peak. The cost function proposed herein, L = L(η, φ), is computed from a penalized area under the D(σ) curve(s). By integrating the penalized D(σ) curve(s), we sum over the effects that multiple scales of variation have on the D(σ) profile. A visualization of the penalty function and its effect on the D(σ) curve for the manifold introduced in Fig. 2a is shown in Fig. 2b. We see how the area under D(σ) weighted by the penalty function is amplified at scales far from σ_peak, therefore penalizing the variance occurring at smaller scales more heavily. Furthermore, the cost function gives a single number representing the parameterization "cost" for a given manifold. A larger cost indicates a worse manifold topology and a lower cost indicates an improved manifold topology in terms of modeling. If the cost is computed for m dependent variables, a norm over all L_i costs can be taken to yield a single cost value for the entire manifold: L = ||L_i||, ∀i ∈ [1, 2, ..., m]. Figure 2c demonstrates costs computed for a single dependent variable, φ, over two different 2D projections. The first projection is the same as analyzed in Fig. 2a. The second projection has an improved topology and non-uniqueness is significantly reduced. The corresponding D(σ) curve for the improved projection exhibits a single dominant peak which indicates a single scale of variation in φ values over the entire manifold. We report the cost, L = L_φ, for each projection, with the cost for the projection with non-uniqueness being greater. The mathematical description of the proposed cost function and additional details are provided in the "Methods" section. There are three key advantages of our proposed cost function. First, manifolds obtained using any technique can be assessed. This includes any ad hoc selected manifold parameters or empirical manifolds obtained directly from training data using dimensionality reduction or manifold learning techniques. Second, manifolds of any dimensionality can be assessed. Finally, manifolds can be assessed with respect to an arbitrary set of m relevant dependent variables, φ. Dependent variables that we are most interested in accurately modeling can be selected, which can include any of the original state variables and any functions of them.

Figure 2. (a) The location of the rightmost peak, σ_peak, denotes the largest feature size on a manifold. Variance occurring at scales σ ≪ σ_peak is a strong indicator of non-uniqueness. (b) The cost function, L, is computed by integrating the penalized D(σ) curve. The introduced penalty amplifies the area under D(σ) occurring at σ < σ_peak and especially at σ ≪ σ_peak. The contributions from all scales of variation are thus summed up and embedded in the cost value. (c) Assessing example 2D projections with the proposed cost function. The first projection exhibits non-uniqueness, with regions where the smallest and highest values of the dependent variable, φ, overlap each other. The non-uniqueness is significantly reduced on the improved projection and the corresponding D(σ) curve exhibits a single rise indicating a single feature size over the whole manifold. We show the associated costs for the two projections, with the cost for the projection with non-uniqueness being greater.

Cost function response to feature size and non-uniqueness
To demonstrate the behavior of the proposed cost function to an increasing feature size, we generate bivariate functions on a uniform square xy grid, centered around (0, 0). The dependent variable, φ, is calculated from a multivariate Gaussian normal distribution, φ = exp(−(x² + y²)/(2s²)), where the standard deviation, s, is gradually increased to imitate an increasing feature size. The smallest feature is selected with s = 0.05 and the largest feature with s = 0.6.
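These quantities can be prototyped compactly. The sketch below is a simplified stand-in, not the paper's implementation: it assumes that N(σ) is obtained from a Gaussian-kernel (Nadaraya-Watson) regression at bandwidth σ, in the spirit of Ref. [30], takes D(σ) as a finite-difference derivative of N with respect to log10(σ), locates σ_peak as the rightmost local maximum, and uses an illustrative penalty that grows for scales below σ_peak; the exact definitions of N(σ), D(σ) and the penalty are those given in the "Methods" section.

```python
import numpy as np

def kernel_regression(eta, phi, sigma):
    """Nadaraya-Watson estimate of phi at every observation, bandwidth sigma."""
    d2 = ((eta[:, None, :] - eta[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ phi) / w.sum(axis=1)

def normalized_variance(eta, phi, sigmas):
    """N(sigma): variance of phi left unexplained after smoothing at scale sigma."""
    var = np.var(phi)
    return np.array([np.mean((phi - kernel_regression(eta, phi, s)) ** 2) / var
                     for s in sigmas])

def variance_derivative(sigmas, N):
    """D(sigma): derivative of N with respect to log10(sigma)."""
    return np.gradient(N, np.log10(sigmas))

def rightmost_peak(sigmas, D):
    """sigma_peak: the rightmost local maximum of D (largest feature size)."""
    idx = [i for i in range(1, len(D) - 1) if D[i] >= D[i - 1] and D[i] >= D[i + 1]]
    return sigmas[idx[-1]] if idx else sigmas[np.argmax(D)]

def cost(sigmas, D):
    """Integrate D weighted by an illustrative penalty growing below sigma_peak."""
    s_peak = rightmost_peak(sigmas, D)
    penalty = np.maximum(1.0, s_peak / sigmas)
    return np.trapz(penalty * D, np.log10(sigmas))

# Gaussian test features on a uniform 2D grid, as in the feature-size study
# (a coarser grid than the paper's is used to keep the demo fast).
x = np.linspace(-1.0, 1.0, 30)
xx, yy = np.meshgrid(x, x)
eta = np.column_stack([xx.ravel(), yy.ravel()])
sig = np.logspace(-3, 1, 30)
for s in (0.05, 0.3, 0.6):
    phi = np.exp(-(eta[:, 0] ** 2 + eta[:, 1] ** 2) / (2.0 * s ** 2))
    D = variance_derivative(sig, normalized_variance(eta, phi, sig))
    print(f"s = {s:.2f}: sigma_peak = {rightmost_peak(sig, D):.3f}, "
          f"cost = {cost(sig, D):.3f}")
```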
Figure 3a shows these functions corresponding to ten Gaussians with increasing s. Below each Gaussian, we plot the corresponding cost, L = L_φ, and we observe a decreasing trend in L with increasing feature size. This is a desired behavior, since larger features should facilitate modeling. A very small feature, like the one obtained from s = 0.05, introduces relatively steep gradients, which can be more challenging to model. The response of the D(σ) curves to an increasing feature size is shown in Fig. 3b. The location of σ_peak gradually shifts to the right with increasing feature size and the area under the D(σ) curves decreases. The decrease in cost with an increasing feature size seen in Fig. 3a is thus a result of rewarding the increasing σ_peak location and the slightly decreasing area under the D(σ) curve. The latter is the effect of a decreasing gradient of φ with increasing feature size, leaving less variance in φ present at scales away from σ_peak. To further test the response of the cost function to multiple feature sizes, we generate another set of functions, such that the dependent variable, φ, is now computed from a superposition of sine functions with various frequencies. Multiple feature sizes are generated from the general formula φ = Σ_{k=1}^{n} sin(2^k x) for n = 1, 2, ..., 5. Thus, for n = 1, the function has only one feature size. In Fig. 3c, we observe increasing cost values with each added feature size. The reason for this increase becomes clear once we look at the corresponding D(σ) curves in Fig. 3d, where each new feature generates an additional rise in D(σ) at length scales smaller than σ_peak. Next, we perform a similar test using functions which introduce increasing levels of non-uniqueness. The dependent variable, φ, is now calculated as a linear function of the independent variable, φ = x, with x being the x-axis coordinate. This results in a constant gradient along the x-axis. We introduce non-uniqueness in φ by adding overlapping observations whose φ values are set to zero. The number of observations added to increase the non-uniqueness is measured with the overlap depth, d, and is increased from no overlapping observations (d = 0 observations) to a maximum number of overlapping observations (d = 90 observations). Figure 3e presents ten such functions with an increasing depth of non-uniqueness. Here, we observe an increasing trend in L as we increase the non-uniqueness depth; this construction is reproduced in the short sketch below.
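Reusing normalized_variance(), variance_derivative() and cost() from the previous sketch, the overlap construction just described (φ = x with d zero-valued overlapping observations placed in-between every other unique observation) can be generated as follows; the grid size and offsets are chosen to mimic the length scales discussed next, and the numbers it prints are illustrative rather than the paper's results.

```python
import numpy as np

# Assumes normalized_variance(), variance_derivative() and cost() from
# the previous sketch are in scope.
def overlap_dataset(n_unique=1000, d=0):
    """phi = x on [0, 1], plus d overlapping observations with phi = 0."""
    x = np.linspace(0.0, 1.0, n_unique)
    phi = x.copy()
    if d > 0:
        # overlapping points sit ~1e-4 from their nearest unique neighbor,
        # in-between every other unique observation
        x_over = x[1:2 * d + 1:2] + 1.0e-4
        x = np.concatenate([x, x_over])
        phi = np.concatenate([phi, np.zeros(d)])
    return x[:, None], phi  # manifold coordinates as a column vector

sig = np.logspace(-5, 0, 40)
for d in (0, 30, 60, 90):
    eta, phi = overlap_dataset(d=d)
    D = variance_derivative(sig, normalized_variance(eta, phi, sig))
    print(f"overlap depth d = {d:2d}: cost = {cost(sig, D):.3f}")
```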
In order to observe how severe the increase in area under the D(σ) curves associated with manifold non-uniqueness is, Fig. 3f shows the D(σ) curves corresponding to the ten functions with non-uniqueness. The first interesting observation is that the location of σ_peak is almost identical for all D(σ) curves. This is understandable, since the size of the main feature (the constant gradient) stays the same. The only reason for the increased area under the D(σ) curves is the appearance of additional peaks at length scales σ ≪ σ_peak. Only in the case d = 0 (no overlap) does the D(σ) curve exhibit a single rise. As soon as any overlapping observations are introduced, additional area shows up under the D(σ) curve at σ ≪ σ_peak. This additional area is most pronounced for the largest non-uniqueness depth, d = 90.

Finally, it is instructive to discuss the scales, σ, at which we detect overlap in Fig. 3f. The highest rise in D(σ) linked to non-uniqueness happens at σ ≈ 5 × 10⁻⁴ for all cases d > 0. This exact value can be linked to the sample density of the ten toy functions with increasing overlap depth. For a normalized manifold, where x ∈ [0, 1], the distance along the x-axis between data points in the unique region is σ ≈ 10⁻³. The overlapping observations are located in between every other unique observation, such that the distance along the x-axis between any observation from the unique region and its nearest overlapping observation is of the order of 10⁻⁴. Thus, as the manifold is scanned with varying length scales, σ, once σ ≈ 10⁻⁴, the variation in dependent variable values is captured between a single point from the unique region (φ → 1) and a single point from the overlapping region (φ = 0). This creates a sudden increase in the captured variance and shows up as the peak at σ ≈ 5 × 10⁻⁴ in the D(σ) curves in Fig. 3f. A more in-depth discussion of how scales of variation can be linked to data density on a manifold grid can be found in Ref. 30.

Assessing data preprocessing strategies. We now turn our attention to multivariate datasets and their lower-dimensional projections. Each dataset considered next is represented by a matrix X ∈ ℝ^{N×Q}, where N is the number of observations and Q is the number of state variables. Since most often N ≫ Q, Q determines the data dimensionality. We consider three disciplines that deal with datasets that are notoriously high-dimensional: plasma physics, reacting flows, and atmospheric physics. For example, the state-space of a reacting flow is described by the temperature, pressure, and chemical composition; there, the high state-space dimensionality originates from the many chemical species involved, which can easily reach the order of hundreds. In this and the following sections, we demonstrate a few practical applications of the proposed cost function using those datasets. Reduced-order models leverage the fact that multivariate datasets can often be successfully re-parameterized by a reduced set of low-dimensional parameters. To date, numerous techniques have been employed for dimensionality reduction of multivariate data.
Those include linear techniques such as principal component analysis (PCA), independent component analysis (ICA), distance metric learning (DML), or linear discriminant analysis (LDA), and nonlinear techniques such as kernel PCA (KPCA), Isomap [57], locally linear embedding (LLE) and its variants [58], or autoencoders [59]. For the purpose of data visualization, t-distributed stochastic neighbor embedding (t-SNE) [55] and uniform manifold approximation and projection (UMAP) [60] are gaining popularity in various research disciplines [7,36-38,61-64]. A short summary of the dimensionality reduction and manifold learning techniques explored in this work can be found in the "Methods" section.

Prior to applying dimensionality reduction, training datasets are often preprocessed. The most straightforward strategy is data normalization by centering and scaling each state variable. Other preprocessing approaches involve data sampling to mitigate imbalance in observation density, or feature selection. Data preprocessing alone can have a large impact on the resulting low-dimensional manifold topology [53,65]. To demonstrate how significant those changes can be, Fig. 4a shows 2D PCA projections of a dataset describing argon plasma (N = 100,700, Q = 36) [66,67], where the state-space is spanned by the following variables: the temperature of heavy species, T_h, the temperature of electrons, T_e, and 34 species mass fractions that comprise 31 electronic states of argon, two levels of ionized argon, Ar and Ar⁺, and electrons. The projections are generated with various scaling techniques applied to the dataset (see the "Methods" section for a summary of all scaling techniques used in this work). In the top row of Fig. 4a, the projections are colored by T_e and in the bottom row by the electron mass fraction, Y_e. Our visual analysis shows that preprocessing can affect manifold topologies significantly. Various topologies are expected to perform differently in reduced-order modeling due to changes in feature sizes or in the level of non-uniqueness. We quantitatively assess those changes and rank the preprocessing strategies in their capacity to generate quality manifold topologies. In Fig. 4b, we show costs, L, for the various 2D PCA projections visualized above and, for comparison, for the analogous 3D PCA projections (although these are not visualized). Here, L is computed as the L1-norm over the individual costs for the relevant dependent variables: T_h, T_e, Y_e, and Y_Ar. Thus, in Fig. 4a, the projections are colored by the two target dependent variables to allow us to make a visual connection between how the projection topologies look and how the cost function assesses a given projection. We note that VAST scaling results in the lowest cost for 2D projections. Looking at the corresponding visualization in Fig. 4a, we can see that T_e and Y_e values change relatively smoothly across the manifold, even though the VAST projection introduces the "spike" region, where many data observations are compressed. For comparison, the projections resulting from the remaining scalings introduce relatively steep gradients and, for a few of them, overlap in some region of the projection, either for the T_e or the Y_e variable. Finally, we note that costs drop across all scaling strategies explored when the PCA projection dimensionality is increased to 3D.
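A rough sketch of the workflow behind Fig. 4a,b could look as follows; the scaling formulas shown and the cost_of placeholder are illustrative assumptions rather than the authors' exact pipeline.

```python
# A rough sketch of the Fig. 4a,b workflow: scale, project with 2D PCA,
# aggregate per-variable costs with an L1-norm. The scalings shown and the
# cost_of placeholder are illustrative assumptions, not the authors' code.
import numpy as np
from sklearn.decomposition import PCA

def scale(X, method):
    c, s = X.mean(axis=0), X.std(axis=0)
    d = {"auto": s, "pareto": np.sqrt(s), "vast": s**2 / c}[method]
    return (X - c) / d

def aggregate_cost(eta, dependent_vars, cost_of):
    # L1-norm over the costs of all relevant dependent variables
    return sum(abs(cost_of(eta, phi)) for phi in dependent_vars.values())

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(5000, 36))) + 1.0      # stand-in for the plasma data

for method in ["auto", "pareto", "vast"]:
    eta = PCA(n_components=2).fit_transform(scale(X, method))
    # color eta by T_e or Y_e, or rank the scaling via
    # aggregate_cost(eta, {"T_e": Te, "Y_e": Ye, ...}, cost_of)
```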
We additionally explore feature selection (otherwise known as variable subset selection) as another strategy in the data preprocessing pipeline. Feature selection involves finding a meaningful subset, X_S ∈ ℝ^{N×S}, of the original Q state variables (S < Q), and using this subset in data science or machine learning algorithms instead of the full set of state variables [68]. This time, we use a reacting flow dataset describing combustion of syngas in air. In Fig. 4c, we show 3D PCA projections of an 11-dimensional dataset (N = 14,550, Q = 11), where the eleven dimensions are spanned by the temperature, T, and ten mass fractions of chemical species, denoted Y_i for species i. The projections in the top row of Fig. 4c result from different scalings applied to the original dataset (see "Methods"). The bottom row shows 3D projections resulting from different scalings combined with feature selection applied to the original training data. All PCA projections are colored by temperature. For the purpose of this demonstration, we use a newly developed feature selection algorithm that uses the cost function to guide the optimal selection of variables from the original training data [65]. By minimizing the cost of the resulting data projection, we optimize the feature selection process from the point of view of manifold topology. A detailed description of the feature selection algorithm can be found in the "Methods" section. In Fig. 4d, we show costs, L, tied to the two preprocessing strategies explored: scaling the data alone (circles) versus scaling with feature selection (triangles). Here, L is computed as the L1-norm over the individual costs for the relevant dependent variables, which were selected manually as the most important candidates in modeling. Black markers show costs corresponding to the 3D PCA projections visualized in Fig. 4c. For comparison, gray markers show costs for the analogous 2D PCA projections. Three of these 2D projections corresponding to scaling only are visualized in Fig. 4e: the two worst projections (largest L) and the best projection (smallest L). We note that the worst 2D projection, corresponding to ⟨−1, 1⟩ scaling, is severely folded over itself, which is likely the reason for its high cost. The Level projection also introduces significant non-uniqueness, with the low-temperature regions being represented over a narrow geometry on the manifold. The best 2D projection corresponds to VAST scaling. As seen in Fig. 4e, this projection is much more unique compared to the ⟨−1, 1⟩ or Level scaling 2D projections. However, there is a visible "twist" in the VAST 2D projection, which is untangled when a 3D projection is considered instead. This is a possible factor in the decreased cost between the 2D and the 3D PCA projection. The lowest costs for 3D projections occurred for VAST (for scaling only) and Auto (for scaling with feature selection). The corresponding visualized projections are marked with thicker axes in Fig. 4c. These two projections are characterized by relatively well-spaced feature sizes and reduced non-uniqueness. Finally, similarly to what we observed for the argon plasma data, costs drop across all scaling strategies when the PCA projection dimensionality is increased from 2D to 3D.

[Figure 4 caption (fragment): We also compare costs corresponding to the 3D projections visualized above (black markers) with the analogous costs of 2D projections (gray markers). The optimal manifold topology corresponding to the lowest L for each preprocessing strategy is highlighted with thicker axes in (c). (e) Visualization of 2D projections corresponding to three selected cases of only scaling the original data: ⟨−1, 1⟩ and Level scaling (corresponding to the two highest L) and VAST scaling (corresponding to the lowest L).]
Our analysis from Fig. 4 reveals that appropriate data preprocessing can help in generating a better quality low-dimensional manifold. In particular, optimized feature selection can be beneficial, as it decreases costs over the relevant dependent variables with respect to the costs associated with only scaling the data. With the many possible data preprocessing techniques encountered in the data science community, the cost function allows for quantitative rankings without the need to analyze manifold topologies manually. This, in turn, can help fine-tune data preprocessing to a given dataset and a desired manifold dimensionality.

Detecting large gradients on manifolds. Using the proposed cost function, manifolds can be assessed with respect to different relevant dependent variables separately. If the regions where the selected variables vary provide good coverage of the entire manifold, problematic regions on manifolds can be detected with greater reliability. Figure 5a demonstrates a 3D manifold embedded in the 9-dimensional state-space of a reacting flow dataset describing combustion of hydrogen in air (N = 13,468, Q = 9). The topology of this manifold is curved such that the two regions where fuel and oxidizer originate are brought closely together. This region is better visualized in the zoomed-in dashed box. When the manifold is colored by the hydrogen (fuel) mass fraction in Fig. 5a, we see a step change in the φ = Y_H2 values over the considered region. This potentially undesired behavior can be detected with the cost function, as it increases the area under the D(σ) curve at length scales smaller than σ_peak. The cost associated with the hydrogen mass fraction is L_H2 = 1.8. For comparison, the temperature variable does not experience large variation within the considered region (Fig. 5b) and the cost associated with temperature is lower (L_T = 1.0). We observe a single peak in the D(σ) profile corresponding to the temperature variable.

Regions where opposing physical phenomena meet over small length scales on a manifold (in this case, the fuel and oxidizer streams) can prove difficult to model accurately. For instance, some regression models are known to struggle in the presence of large gradients in dependent variable values [25]. Such problematic regions of compressed observations on manifolds can be detected by analyzing costs across several dependent variables. If there exists an important dependent variable that varies across the problematic region (such as the Y_H2 variable in Fig. 5a), it can help discard the problematic manifold topology. We note that the choice of the relevant dependent variables is important in assessing whether a given manifold topology is appropriate from the modeling perspective. Some variables, such as the temperature variable alone in the example shown in Fig. 5b, might not be effective at exposing regions of non-uniqueness in data projections that affect other relevant variables. We also note that the dependent variables for which the cost is computed can be arbitrary and do not have to be selected from the set of the original state variables, as we have done here; they can equally be functions of the state variables.
In reacting flow applications, these can, for instance, be the production rates of chemical species, related to the state-space through nonlinear Arrhenius expressions.

Manifold assessment across dimensionality. We now demonstrate how the cost function applies to manifolds of arbitrary dimensionality. This can be useful in determining an appropriate dimensionality for representing the variables of interest when using a reduction technique. The demonstration is first done using an experimental combustion dataset known as the Sandia flame D dataset [69]. This dataset contains approximately N = 57,000 observations each for temperature and various species mass fractions (Q = 10) over six different heights in a methane-air piloted jet flame. The experimental dataset also presents the opportunity to demonstrate the cost function behavior on manifolds containing noise. Figure 6a shows an example 2D PCA projection with Max scaling of the Sandia flame D data colored by temperature. This manifold is less structured than those seen in Figs. 4 and 5 for the numerically simulated data. With noisy data, such as that seen in experiments, it can be difficult to visually assess the quality of manifold topologies, especially in higher dimensions. The proposed cost function should adequately assess such topologies, since the D(σ) curves remain smooth even for noisy data [30]. Figure 6b demonstrates how the cost function behaves with increasing dimensionality of a PCA projection, up to the original dimensionality of the system. The two opaque curves correspond to the scalings that resulted in the highest and lowest costs for the projected experimental data. Also included is the cost for the original ten-dimensional manifold. For the most part, we see a decrease in the cost with increasing dimensionality. Increasing the dimensionality beyond four dimensions appears to have negligible effects on the feature sizes of the optimizing variables, since the cost function flattens out. At this point, the computational cost of adding dimensions to a model most likely outweighs any benefit seen from slight increases in feature sizes. Any differences between the various scalings also become less noticeable as dimensionality increases. We also see that rotating the manifold along new principal component axes, even without reducing dimensionality, can slightly reduce the cost compared to the original manifold. This is due to variance being measured along new coordinates. For comparison, in Fig. 6b we also include costs with increasing dimensionality for the Sandia flame F dataset (transparent curves), which is another experimental case that exhibits stronger effects of flow turbulence than flame D. As a result, flame F experiences conditions with local flame extinction, absent in the flame D dataset. Thus, flame F includes many more states of methane combustion than flame D and the manifold corresponding to flame F covers a wider range of attainable thermo-chemical states. For instance, for a specific stoichiometric condition, the flame can be either in an ignited or in an extinguished state. We thus expect an increased required manifold dimensionality for flame F to achieve the same parameterization quality as for flame D. This is indeed what we see in Fig. 6b, where costs are generally higher for flame F than for flame D. The proposed cost function could be used in this way to determine an appropriate minimum dimensionality for the projection of data, with the goal of facilitating modeling of the optimizing variables [70,71].
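Schematically, such a dimensionality sweep can be set up as below, assuming a user-supplied callable manifold_cost(eta, phi) that implements L (for instance, the NumPy sketch given in the "Methods" section); the stub shown here is only a placeholder.

```python
# A schematic cost-versus-dimensionality sweep; manifold_cost is a
# placeholder for a real implementation of the cost function L.
import numpy as np
from sklearn.decomposition import PCA

def manifold_cost(eta, phi):
    return float(np.var(phi))          # placeholder; substitute a real L

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 10))        # stand-in for the flame D state-space
phi = X[:, 0]                          # e.g., temperature

costs = []
for q in range(1, X.shape[1] + 1):
    eta = PCA(n_components=q).fit_transform(X)
    eta = (eta - eta.min(axis=0)) / (eta.max(axis=0) - eta.min(axis=0))  # unit box
    costs.append(manifold_cost(eta, phi))
# Plotting q against costs should flatten out once extra dimensions stop
# improving the parameterization of phi.
```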
Manifold assessment across various dimensionality reduction and manifold learning techniques. We now revisit a recent PCA-based reduction of a high-fidelity dataset obtained from a delayed detached eddy simulation in atmospheric physics [46]. This data simulates the Cedval A1-5 wind tunnel measurements of atmospheric dispersion of a pollutant in the vicinity of a rectangular building [72]. The numerical data that we use here represents a planar slice through the computational domain, perpendicular to the downstream wind direction. The dataset has N = 18,540 and Q = 16, and the original state-space is composed of variables common to atmospheric physics applications, such as the three velocity components, pressure, Reynolds number, or the turbulent viscosity, kinetic energy, and dissipation rate. All parameters in the data have been averaged over a 3.5 s period. We use an outlier detection algorithm to remove outliers from the data (see "Methods"). The aim of the original study [46] was to predict the turbulent Schmidt number, Sc_t, from the PCA-derived manifold parameters. This demonstrates an application where regression is the end goal for obtaining a data-driven correlation of a physical quantity with the manifold parameters, η: we seek a regression function f such that Sc_t ≈ f(η). While in the original work PCA was used to reduce the state-space dimensionality, here we benchmark three additional techniques: UMAP, Isomap, and t-SNE. Following a methodology similar to that demonstrated in Fig. 4, we first searched for the best scaling option for each reduction technique. In addition, we introduce three novel scaling methods inspired by variable stability (VAST) scaling [49]; we denote them S1, S2, and S3 (see Table 1). S1 scaling is an extension of VAST, where the effect of non-normality is considered by multiplying the standard deviation by the data kurtosis. S2 and S3 are variations of S1 obtained by replacing the mean value in the coefficient of variation by the maximum and the range of each variable, respectively. In Fig. 7a, we compare the qualities of 2D projections corresponding to the best scaling, both visually and using our cost function. We report costs for φ = Sc_t only, since that was the modeled dependent variable. An appropriate scaling allowed us, in many cases, to find significantly better projections.

[Figure 7 caption (fragment): ...for ANN prediction of Sc_t across all four projection techniques (blue diamonds) using 3D data projections. The minimum L and the minimum MAE for each technique are marked with a shaded outline. Scalings that generally exhibit the lowest costs (VAST, S1, S2, S3) also result in the smallest MAE. (d) Example 3D PCA projections resulting from applying two scaling options to the original data: S1 and Pareto scaling. For this dataset, S1 scaling allowed for generating a manifold where the dependent variable of interest, Sc_t, has a smooth gradient across one of the manifold dimensions. Pareto scaling, exhibiting high cost, collapses most of the data observations onto a planar structure. (e) The D(σ) curves corresponding to the 3D PCA projections with S1 and Pareto scaling. (f-i) Scatter plots of L versus MAE from ANN and kernel regression predictions of Sc_t. We show predictions based on 600 different 2D PCA projections (f, g) and on 600 different 2D t-SNE projections (h, i) as the independent manifold parameters. We test a few selected scaling techniques applied to the atmospheric dispersion data; the legend applies to all panels (f-i).]
For instance, the worst 2D PCA projection is associated with L = 5.2, while the best (the one visualized) with L = 1.2. Figure 7b shows the D(σ) curves corresponding to the 2D projections visualized in Fig. 7a for φ = Sc_t. The D(σ) curves are mostly composed of a single rise, suggesting that the non-uniqueness in the 2D projections has been remedied by selecting an appropriate data scaling.

Improved manifold topologies yield more accurate regression. Nonlinear regression can be used in combination with dimensionality reduction to provide mapping functions, f, between the manifold parameters and physical quantities of interest. Regression can help discover robust relationships between manifold parameters and physical quantities that can then be injected into computational models and simulations [18,44,46,73]. Techniques such as artificial neural networks (ANNs), Gaussian process regression (GPR), or kernel regression are commonly used in this context. Below, we continue the example of the atmospheric dispersion dataset and demonstrate improvements in nonlinear regression performance when the parameters of an improved manifold topology are used as regressors.

We first train an ANN model to predict Sc_t based on 3D PCA, UMAP, Isomap, and t-SNE projections resulting from different data scaling options. The details of the ANN model used here are provided in the "Methods" section. The scaling rankings for these 3D projections, along with the mean absolute errors (MAE) for Sc_t predictions using the ANN, are reported in Fig. 7c. We note that, generally, the scalings which resulted in the lowest costs (VAST, S1, S2, and S3) across all four reduction techniques also exhibit the lowest MAE. The minimum L and MAE for each technique are marked with a shaded outline. With the exception of t-SNE, the minimum L and the minimum MAE occurred for the same scaling option. The reason for the good ANN performance on those four scalings can be further understood by visualizing example projections. In Fig. 7d, we show 3D PCA projections of the atmospheric dispersion data corresponding to the best scaling option (S1, with L = 1.3) and the worst scaling option (Pareto, with L = 4.2), as per the ranking shown in Fig. 7c. The quality of the projections changes visibly with the change of data scaling. For the S1 scaling projection, we observe a clear gradient of Sc_t throughout the manifold, while for the Pareto scaling case, the 3D projection introduces significant scatter in Sc_t values, with many observations compressed onto a nearly planar structure in the reduced 3D space. The latter projection thus introduces significant non-uniqueness in Sc_t values and is more difficult to regress over accurately. The difference in costs for the two 3D PCA projections from Fig. 7d can be further understood by looking at the comparison of D(σ) curves in Fig. 7e. The Pareto scaling case creates much more variance at smaller length scales, which is consistent with our visual inspection of the projection. More generally, we observe a correlation between the verdict given by L and the nonlinear regression performance.
In Fig. 7f-i, we show scatter plots of L versus MAE for varying manifold topologies obtained from the atmospheric dispersion dataset. Here, we focus on PCA and t-SNE as the two projection techniques. Each scatter plot takes into account six selected scaling techniques: Auto, Pareto, VAST, ⟨0, 1⟩, Level, and S1. For each scaling technique, we generate 100 distinct 2D projections by creating random variable subsets (feature selections) of the full dataset before applying PCA or t-SNE. This gives us sufficiently many distinct manifold topologies (600 for each scatter plot) for a trend to emerge in Fig. 7f-i. In addition to ANN regression, we apply kernel regression of Sc_t to observe whether the correlation between L and MAE is still present for a different nonlinear regression technique. The details of the kernel regression model used here are provided in the "Methods" section. Although the trends of L versus MAE differ between ANN and kernel regression, the correlation measured with the Spearman coefficient is high in each case. For ANN regression, we observe 96% Spearman correlation for both 2D PCA and t-SNE projections. The correlation is lower for kernel regression than for ANN regression, with Spearman coefficients of 90% and 93% for PCA and t-SNE, respectively. In Fig. 7f-i, manifolds corresponding to Level scaling are clustered in the region of high L and high MAE. This result is consistent with Fig. 7c, where Level scaling is seen to yield high costs and high regression errors. Conversely, the region of smallest L and smallest MAE in Fig. 7f-i is occupied by topologies resulting from S1 scaling, although some random subsets can result in poorer manifold topologies even with S1 scaling. We observe similar correlation trends for 3D PCA and t-SNE projections; these are shown in the Supplementary material. With the regression hyper-parameters kept constant throughout this exercise, the results presented in Fig. 7f-i suggest that better regression performance can be achieved when manifold topologies are improved. The cost function can thus be used to find optimized regressors for nonlinear regression techniques such as ANN or kernel regression. Other regression techniques are expected to be similarly affected.

Detecting overlap between classes in categorical data. So far, we have shown examples where the cost was computed using continuous dependent variables. Here, we briefly explore the application of the cost function to categorical data. The dependent variable, φ, can now be formed from the numerical values of the class labels. Figure 8a illustrates the capability of the D(σ) metric to detect overlap between classes in a projection. The overlap between the two clouds of points, each representing one class, is reflected in an additional peak in the D(σ) profile. When the clouds become sufficiently separated in space, D(σ) exhibits a single rise. This behavior is then translated into the cost value, with the cost for the case with overlapping classes being greater. Next, we generate 2D projections of the MNIST handwritten digits dataset [74] using PCA and t-SNE. The data observations are divided into ten classes, each representing one digit. We sample the full MNIST dataset by selecting only 1500 random samples from each class. The projections are visualized in Fig. 8b. The PCA projection introduces a significant amount of overlap between classes and the cost associated with this projection is L = 4.4.
The cost for the much more unique t-SNE map, with clearly separated classes, is lower, L = 1.7. Figure 8c additionally shows a comparison of the D(σ) curves corresponding to the 2D PCA projection and the 2D t-SNE map, showing that the D(σ) metric detected the significant class overlap in PCA. With some amount of scattered observations still present in the t-SNE map, its D(σ) curve is not composed of a single rise, as was the case in the fully non-overlapping classes example in Fig. 8a. This behavior of D(σ) can be exploited to inform hyper-parameter tuning to improve t-SNE maps. In the Supplementary material, we further demonstrate the potential of the cost function to guide the selection of an important hyper-parameter of t-SNE called perplexity.

Discussion

Many factors can affect the quality of low-dimensional data parameterizations and there is a need to quantitatively assess those factors from the reduced-order modeling perspective. We propose a cost function that reduces the low-dimensional manifold topology to a single number. The two topological properties that the cost function pays attention to are uniqueness and feature sizes in the relevant dependent variable(s). There are two main strengths of the proposed cost function. First, the manifold topology can be optimized for any target dimensionality. Second, the manifold topology can be optimized with respect to any user-selected dependent variables. Only the most important dependent variables need to be included in the manifold topology optimization. This can become particularly helpful in approaches where large state-spaces are compressed to a smaller number of parameters and regression is used to predict a physical quantity from the compressed representation. Optimal manifolds can be found specifically from the perspective of the physical quantities of interest. We demonstrate applications on numerically generated datasets and on experimental data containing noise.

Our cost function can have useful applications in searching for the best data preprocessing strategies. This can include benchmarking existing strategies against newly invented ones. Often, those settings need to be tailored to a specific dataset and even to the target manifold dimensionality. In addition, various dimensionality reduction and manifold learning techniques can be assessed from the point of view of generating quality manifolds. A possible application might be to optimize hyper-parameters to yield improved manifold topologies, especially for techniques such as t-SNE or UMAP, which are known to depend strongly on hyper-parameter settings. While visual inspection of low-dimensional manifolds can be helpful, our cost function is a quantitative metric that can complement qualitative assessments, especially when the manifold dimensionality exceeds 3D and visualization is no longer possible. Good manifold topology is crucial when nonlinear regression is employed on manifolds; we show that improved projections can bring modeling benefits in regression tasks. We also briefly delineate possible applications of the cost function in dealing with categorical data, where the cost function is computed from a dependent variable containing discrete values (class labels). For categorical data, the numerical values chosen for the class labels and the distances between those values will affect the cost.
Future work can investigate the application to categorical data further, especially for datasets where the number of classes becomes large or where classes exhibit a meaningful hierarchical structure, such as genomics or transcriptomics data. Future work can also include incorporating the cost function directly as an objective function in dimensionality reduction.

There remain some limitations of our proposed cost function. First, the computational cost of the normalized variance derivative, D(σ), becomes high for large datasets. A potential solution to this restriction is to sample the dataset prior to computing the cost function. The Supplementary material sheds some light on how the cost function responds to data sampling. Second, the current definition of the cost function cannot distinguish between multiple scales of variation on a unique manifold and non-uniqueness alone. This is due to the fact that additional peaks in D(σ) will show up in both scenarios, and both will increase the area under the D(σ) curve. In our experience, however, the rise in D(σ) due to non-uniqueness usually happens at σ ≪ σ_peak, while the rise in D(σ) due to varying feature sizes on an otherwise unique manifold happens at length scales closer to σ_peak (cf. Fig. 3d,f). The work in Ref. 30 demonstrates that peak locations in D(σ) due to non-uniqueness show a sensitivity to data sampling that peak locations due to unique features do not. Future work can consider incorporating such information into the cost function to better distinguish manifolds with non-uniqueness from those with small features. Finally, the resulting cost for a given manifold can only be interpreted in relation to the cost(s) of other manifold(s). Thus, the cost function can help identify the best manifold among a set of manifolds, but, as of yet, no objective judgment can be made from a single cost value obtained for any one manifold. The same applies to the cost associated with any one dependent variable on a single manifold: it should be interpreted in relation to the costs for other dependent variables. Nevertheless, we believe that this is a timely contribution that can help researchers across various disciplines and across numerous applications where low-dimensional data projections play an important role. Although we focused the demonstrations in this paper on four main datasets (argon plasma, reacting flows, atmospheric pollutant dispersion, and categorical data), the proposed cost function can have broad applications in virtually any domain of science. We particularly look forward to exploring other applications, for instance on biological or medical data. We argue that further improvements in parameterization quality can be achieved in many areas of research if the low-dimensional parameter space is thoroughly explored and then assessed using the proposed cost function.

Methods

Data normalization. Prior to applying dimensionality reduction, datasets are often normalized. This can be especially beneficial if the dataset is composed of variables that have very different numerical ranges. Given a dataset composed of Q variables, X = [X_1, X_2, …, X_Q], we normalize each variable X_j by subtracting its center, c_j, and dividing it by a scaling factor, d_j.
In matrix form, we can write the normalized dataset, X̃, as:

$$\tilde{\mathbf{X}} = (\mathbf{X} - \mathbf{C})\,\mathbf{D}^{-1},$$

where C ∈ ℝ^{N×Q} is a matrix of centers with the jth column populated with the value c_j, and D ∈ ℝ^{Q×Q} is a diagonal matrix of scales with the jth diagonal element equal to d_j. In this work, we adopt the data scaling criteria collected in Table 1, where s_j is the standard deviation and k_j is the kurtosis of X_j. Techniques S1-S3 are newly introduced scaling techniques that we explore in this work. Throughout the main text, we refer to a particular scaling by its name as per Table 1. The name "None" is equivalent to no scaling applied to the dataset.

Outlier removal. For the atmospheric pollutant dispersion dataset, we perform outlier detection and removal using the principal component classifier method [53,75]. Outliers are detected based on major and minor principal components (PCs). Observation i is classified as an outlier if the first PC classifier, based on the q first (major) PCs, satisfies

$$\sum_{j=1}^{q} \frac{z_{ij}^2}{L_j} > c_1,$$

or if the second PC classifier, based on the (Q − k + 1) last (minor) PCs, satisfies

$$\sum_{j=Q-k+1}^{Q} \frac{z_{ij}^2}{L_j} > c_2,$$

where z_ij is the (i, j)th element of the PCA-transformed data matrix and L_j is the jth eigenvalue from PCA. Major PCs are selected such that the total variance they explain is 50% (this determines the number q). Minor PCs are selected such that the remaining variance they explain is 20% (this determines the number k). PCA is performed with Auto scaling. The coefficients c_1 and c_2 are found such that they represent a quantile of the empirical distributions of the first and second PC classifier, respectively. Here, we set the quantile to 98%, which allowed us to find 509 outlying observations out of 18,540 total observations.

Dimensionality reduction. A linear projection of a dataset onto a basis defined by A ∈ ℝ^{Q×q} can be performed as:

$$\boldsymbol{\eta} = \tilde{\mathbf{X}}\,\mathbf{A},$$

where η = [η_1, η_2, …, η_q] defines the q-dimensional manifold parameters. In PCA, which is frequently used in this work, the basis matrix A is computed from the eigenvectors of the data covariance matrix. In the equation above, we assume that the dataset has already been appropriately preprocessed. A summary of the linear and nonlinear dimensionality reduction techniques explored in this work is given in Table 2, in the last column of which we list the most important hyper-parameters of each method; the cost function can help fine-tune these hyper-parameters to achieve quality manifold topologies. For reproducibility of results, a random seed of 100 is used in all techniques that rely on randomness.

PCA. We use the PCA implementation from the PCAfold Python package [77] developed by the authors.

LDA. We use the LDA implementation from the scikit-learn library. For the swiss roll dataset example, all parameters are set to default.

DML. We use the DML implementation from the pyDML Python package [76]. For the swiss roll dataset example, all parameters are set to default.

MDS. We use the MDS implementation from the scikit-learn library. For the swiss roll dataset example, all parameters are set to default.

Autoencoder. We use the Keras Python library to set up the autoencoder for the swiss roll dataset example. For reproducibility of results, a random seed of 100 is used in TensorFlow (tf.random.set_seed(100)). The model architecture is 3-2-3 with hyperbolic tangent activations in all layers. The network weights are initialized from the Glorot uniform distribution and the biases are initially set to zeros. We use a batch size of 50 and 200 epochs. The Adam optimizer is used with learning rate 0.001 and the loss function is the mean squared error. We train the autoencoder model on 80% of the data.
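As a hedged illustration of the normalization step described above, the sketch below implements X̃ = (X − C)D⁻¹ with a catalog of scaling factors. The S1-S3 formulas follow the verbal definitions given earlier (VAST extended by kurtosis, with the mean in the coefficient of variation replaced by the maximum or the range) and should be checked against Table 1 of the original paper before being relied upon.

```python
# A hedged sketch of X_tilde = (X - C) D^{-1}; the S1-S3 factors are
# reconstructed from the verbal definitions in the text and are assumptions.
import numpy as np
from scipy.stats import kurtosis

def scaling_factor(X, method):
    c, s = X.mean(axis=0), X.std(axis=0)
    k = kurtosis(X, axis=0, fisher=False)          # non-excess kurtosis (assumption)
    span = X.max(axis=0) - X.min(axis=0)
    return {"auto":   s,
            "pareto": np.sqrt(s),
            "vast":   s**2 / c,                    # std times coefficient of variation
            "range":  span,
            "level":  c,
            "max":    X.max(axis=0),
            "s1":     k * s**2 / c,
            "s2":     k * s**2 / X.max(axis=0),
            "s3":     k * s**2 / span}[method]

def normalize(X, method):
    return (X - X.mean(axis=0)) / scaling_factor(X, method)
```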
The proposed cost function. The starting point for formulating our cost function is computing the normalized variance proposed by Armstrong and Sutherland [30]. The goal is to assess the quality of a low-dimensional parameterization defined by q manifold parameters, η ∈ ℝ^{N×q}, obtained using any dimensionality reduction or manifold learning technique. For length scales on a manifold given by the parameter σ ∈ ⟨σ_min, σ_max⟩, the normalized variance, N(σ), is computed for the ith dependent variable, φ_i, as:

$$\mathcal{N}_i(\sigma) = \frac{\sum_{j=1}^{N} \left( \phi_{i,j} - \mathcal{K}_i(\boldsymbol{\eta}_j; \sigma) \right)^2}{\sum_{j=1}^{N} \left( \phi_{i,j} - \bar{\phi}_i \right)^2}, \tag{5}$$

where N is the number of observations in a dataset, φ̄_i is the arithmetic average of φ_i, and K_i is computed as a weighted average of the observations of φ_i:

$$\mathcal{K}_i(\boldsymbol{\eta}; \sigma) = \frac{\sum_{j=1}^{N} w_j(\boldsymbol{\eta}, \sigma)\, \phi_{i,j}}{\sum_{j=1}^{N} w_j(\boldsymbol{\eta}, \sigma)}, \tag{6}$$

where the weights, w_j, are determined using a Gaussian kernel:

$$w_j(\boldsymbol{\eta}, \sigma) = \exp\!\left( -\frac{\lVert \boldsymbol{\eta}_j - \boldsymbol{\eta} \rVert_2^2}{\sigma^2} \right). \tag{7}$$

In Eq. (7), the quantity ‖η_j − η‖₂² is the squared Euclidean distance between the current location on the manifold, η, and any jth point on the manifold, η_j. We then construct the normalized variance derivative as per Ref. 30:

$$\hat{\mathcal{D}}_i(\sigma) = \frac{\mathrm{d}\,\mathcal{N}_i(\sigma)}{\mathrm{d}\,\log_{10}\sigma},$$

and normalize it by its maximum value:

$$\mathcal{D}_i(\sigma) = \frac{\hat{\mathcal{D}}_i(\sigma)}{\max_{\sigma} \hat{\mathcal{D}}_i(\sigma)}.$$

The basis for the cost function proposed in this work is integration of the penalized normalized variance derivative, D(σ), over the length scales given by σ. For the ith dependent variable, φ_i, the area under the D_i(σ) curve can be computed as:

$$A_i = \int_{\tilde{\sigma}_{\min}}^{\tilde{\sigma}_{\max}} \mathcal{D}_i(\tilde{\sigma})\, \mathrm{d}\tilde{\sigma},$$

where the tilde denotes a log₁₀-transformed quantity (e.g., σ̃ = log₁₀ σ). We compute the area in the log₁₀-space of the length scales σ so that all scales of variation with different orders of magnitude are treated equally. We further introduce two penalties when computing the area:

1. A penalty for peak locations relative to the rightmost peak, σ_peak. This favors large feature sizes on a manifold over small ones.
2. A penalty for the area under D(σ) occurring at σ < σ_peak, and especially at σ ≪ σ_peak. This penalizes multiple scales of variation that might show up as additional peaks in D(σ) and becomes particularly useful when these additional peaks can be linked to non-uniqueness.

The cost function proposed herein takes these two penalties into account and is defined for φ_i as:

$$\mathcal{L}_i = \int_{\tilde{\sigma}_{\min}}^{\tilde{\sigma}_{\max}} P_i(\tilde{\sigma}, \tilde{\sigma}_{\mathrm{peak},i})\, \mathcal{D}_i(\tilde{\sigma})\, \mathrm{d}\tilde{\sigma},$$

where P_i(σ̃, σ̃_peak,i) is a penalty function composed of two terms. The first term in P_i penalizes non-zero values of D_i(σ) at length scales σ < σ_peak. It especially amplifies any area under the D_i(σ) curve occurring at σ ≪ σ_peak. By increasing the power, r, the user can increase the amount of penalty for the variance occurring at σ ≪ σ_peak. The second term in P_i introduces a gentle penalty for the location of the rightmost peak and essentially increases or decreases the entire penalty function by a constant value. This second term acts to reward larger feature sizes, as those should be easier to model. The parameter b is another hyper-parameter, which controls the amount of the constant vertical shift of the entire penalty function. By increasing b, the user can increase the amount of penalty for the rightmost peak location, σ_peak. Throughout this work, we use r = 1 and b = 1. For completeness, in the next section, we illustrate the effect of setting r and b to values other than unity. A Python implementation of N(σ), D(σ), and the proposed cost function, L, has been developed by the authors and is available in the PCAfold software package [77].
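For illustration, a from-scratch NumPy sketch of these quantities is given below. It mirrors the published PCAfold implementation only in spirit: in particular, the penalty form used here (a power-r ramp below the peak scale plus a b-weighted constant shift tied to the peak location) is an assumption reconstructed from the verbal description above, not the authors' exact Eq. (12).

```python
# A from-scratch NumPy sketch of N(sigma), D(sigma) and L; the penalty form
# below is an assumption, not the authors' exact formula.
import numpy as np
from scipy.signal import find_peaks

def normalized_variance(eta, phi, sigmas):
    # eta: (N, q) manifold parameters scaled to a unit box; phi: (N,)
    # note: the pairwise distances take O(N^2) memory; subsample large data
    sq_dist = ((eta[:, None, :] - eta[None, :, :]) ** 2).sum(axis=2)
    total_var = ((phi - phi.mean()) ** 2).sum()
    out = []
    for s in sigmas:
        w = np.exp(-sq_dist / s**2)                # Gaussian kernel weights, Eq. (7)
        kernel_avg = (w @ phi) / w.sum(axis=1)     # weighted average K, Eq. (6)
        out.append(((phi - kernel_avg) ** 2).sum() / total_var)
    return np.array(out)

def cost(eta, phi, r=1.0, b=1.0, sigmas=np.logspace(-7, 3, 200)):
    log_s = np.log10(sigmas)
    D_hat = np.gradient(normalized_variance(eta, phi, sigmas), log_s)
    D = D_hat / D_hat.max()                        # normalized derivative
    peaks, _ = find_peaks(D)
    log_peak = log_s[peaks[-1]] if len(peaks) else log_s[np.argmax(D)]
    penalty = np.clip(log_peak - log_s, 0.0, None) ** r \
              + b * (log_s[-1] - log_peak)         # assumed two-term penalty
    return np.trapz(penalty * D, log_s)            # integrate in log10 space
```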
In practice, numeric integration of D(σ) is performed using a composite trapezoid rule. The peak values in D(σ) are computed using the scipy.signal.find_peaks function, which finds all local maxima by comparing the neighborhoods of the discrete values of D(σ). The value for σ_peak is taken as the rightmost peak found by scipy.signal.find_peaks, but the user can also shift σ_peak by a percentage, p, of the remaining range of scales (such that the rightmost peak used in the penalty becomes shifted by p(σ̃_max − σ̃_peak)). Our recommended range for the parameter σ is ⟨10⁻⁷, 10³⟩, with logarithmically spaced in-between values. For instance, throughout this work we typically set sigma = numpy.logspace(-7, 3, 200). Throughout this work, the manifold parameters η are scaled to a unit box (each η_i is scaled to a [0, 1] range) before computing the cost. This is done so that the length scale, σ, has the same meaning in each manifold dimension.

Effect of the hyper-parameters r and b on the cost function. The hyper-parameters r and b of the penalty function P_i may be changed to emphasize the penalties applied to non-uniqueness and/or small feature sizes when computing overall costs. Figure 9 illustrates the effect of setting r and b to non-unity values. In Fig. 9a-c, we utilize the toy functions from Fig. 3 and show how L behaves for various r and b. For clearer analysis, whenever r is varied, b is set to unity, and vice versa. For comparison, red dashed lines mark the costs corresponding to r = 1 and b = 1. In principle, increasing b increases L more severely for manifolds where a dependent variable exhibits small feature sizes (see Fig. 9a). Conversely, increasing r for unique manifolds does not affect the L values significantly. As can be understood from Fig. 9c, increasing r increases L more severely for manifolds with non-uniqueness. Even if non-uniqueness is present on a manifold with a fixed largest feature size, changing b does not affect the L values significantly. For a unique manifold with multiple feature sizes, increasing r or b can increase L by a similar amount (Fig. 9b). Due to the blending of multiple scales of variation in the D(σ) profile seen in Fig. 3d, we have also applied a p = 70% shift in σ_peak in Fig. 9b, as per the discussion above. In Fig. 9a-c, the overall trend of the cost function is preserved when changing r and b.

We further test the impact of r and b on the verdict given by the cost function across various dimensionality reduction and manifold learning techniques. We use the classic 3D swiss roll dataset and generate various 2D projections. The original 3D topology of this dataset can be seen in Fig. 9d, colored by a dependent variable, φ. The cost corresponding to the parameterization in the original 3D space is L = 0.98 with r = 1 and b = 1. Figure 9e shows twelve different 2D projections of that dataset generated with various linear and nonlinear techniques. The cost value, L, is reported for each projection taking r = 1 and b = 1. Among all the reduction techniques selected for this demonstration, PCA and a linear autoencoder (AE) introduce the most significant non-uniqueness. The costs for the AE and PCA projections are the highest, while SE, LDA, and UMAP show the lowest costs. The small cost for the SE, LDA, and UMAP projections (comparable to the cost in the original 3D space) is likely due to a combination of two factors: projection uniqueness and large feature sizes created through sufficient separation of distinct φ values in space.
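A comparison in the spirit of Fig. 9e can be reproduced with scikit-learn's swiss roll generator and a few of the techniques named above; the hyper-parameters used in the paper are not assumed here.

```python
# Generating 2D swiss roll projections with several manifold learning
# techniques; hyper-parameters are left at scikit-learn defaults.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, LocallyLinearEmbedding, SpectralEmbedding, TSNE

X, phi = make_swiss_roll(n_samples=2000, random_state=100)  # phi: position along the roll

projections = {
    "PCA":    PCA(n_components=2).fit_transform(X),
    "Isomap": Isomap(n_components=2).fit_transform(X),
    "LLE":    LocallyLinearEmbedding(n_components=2).fit_transform(X),
    "SE":     SpectralEmbedding(n_components=2).fit_transform(X),
    "t-SNE":  TSNE(n_components=2, random_state=100).fit_transform(X),
}
# Scale each projection to a unit box and score it with the cost function,
# using phi as the dependent variable, to rank the techniques.
```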
Figure 9f shows the D(σ) curves for the original 3D data (red dashed lines) and for the 2D projections, in groups of four. The location of σ_peak is shifted to the right for the SE and UMAP projections with respect to the original 3D data parameters. A similar observation holds for the LLE, H-LLE, LTSA, and Isomap projections. This can be due to a relatively large distance between the smallest and the largest φ values on these projections. In the original 3D space, the high and low values of φ are relatively close to each other due to manifold curvature. In Fig. 9g,h, we show the effect of changing the hyper-parameters r and b on the cost function evaluations of the various swiss roll data projections. In Fig. 9g, costs increase with increasing r only for the PCA and AE projections. This is understandable, since those are the only projections with significant levels of non-uniqueness. Increasing the hyper-parameter r thus increasingly amplifies the "secondary" rise seen in the D(σ) profile at σ < σ_peak for PCA and AE. Interestingly, this behavior with r can potentially be used to detect manifolds with non-uniqueness among a set of parameterizations. In Fig. 9h, increasing b increases costs for all parameterizations (3D and 2D). Costs increase more rapidly for the PCA and AE projections than for any other explored projection (note the logarithmic scale of the vertical axis). This is also due to emphasizing the "secondary" rise seen in the D(σ) profile when computing the penalized area, but this time through increasing the vertical shift of the entire penalty function. With the circled outlines in Fig. 9g,h, we mark the lowest cost obtained for any of the 2D projections across the explored values of r and b. Among the r and b values explored, the smallest cost consistently occurs for the SE projection. We also note that the ratio between L for the worst projection (AE) and L for the best projection (SE) is generally higher when r is set above unity than when b is set above unity. Thus, increasing r above unity can help create clearer separations in L values when ranking manifolds with varying levels of non-uniqueness.

On the computational impact of evaluating the cost function. We note that the computational time required to compute L for a dataset with N observations and m dependent variables scales as O(mN²), assuming the same number of discrete values of σ. The code for computing the cost function, which we provide in the PCAfold library, is parallelized with respect to σ, since the normalized variance computations are entirely independent of one another for different values of σ. Thus, our code can readily be run on multiple CPUs, which we generally recommend for N > 10⁵. In the Supplementary material, we show the effect of data sampling on the cost function. We note that, while subsampling the data can ease the computational cost of L, it has an enhanced effect on dependent variables which exhibit variation at multiple scales (e.g., from non-uniqueness). A possible future implementation that could reduce the computational time of evaluating Eq. (6) is to approximate the weighted average with a sum over points within or near the currently considered length scale, σ. This implementation would make the computational time scale approximately as O(mN) for small σ and only as O(mN²) once σ encompasses all points on a manifold.
There is a second aspect to the question of computational cost that is more elusive to quantify: the time required for the researcher to find a good manifold topology through trial-and-error exploration, if no quantitative tools are applied to guide the choice of a manifold. We argue that this second aspect can become a bottleneck in the effective application of reduced-order models. In this regard, one may find that our cost function allows for "quick" assessments of projections resulting from a range of dimensionality reduction and manifold learning techniques, as well as from a variety of data preprocessing strategies applied to the training data. Moreover, since L can be implemented in optimization tasks, an optimum in manifold topologies is likely to be found in an automated way, compared to a trial-and-error approach.

L-informed feature selection. We developed a feature selection algorithm that iteratively eliminates state variables from the dataset based on minimizing the cost, L, of PCA projections [65]. There are three inputs to the algorithm: the original dataset, X ∈ ℝ^{N×Q}, the target dependent variables, φ ∈ ℝ^{N×m}, and the target manifold dimensionality, q. At each iteration, the algorithm computes the PCA projections resulting from removing each variable, one at a time. At the end of the iteration, the variable whose removal decreased the cost value the most is discarded from the dataset. At the next iteration, the process repeats, but now on a dataset with one less variable. We only allow Q − q iterations, so that we never reduce the original data dimensionality below the q requested by the user. Once all iterations have finished, the algorithm looks back at the final costs from all iterations and returns the optimized subset, X_S, corresponding to the iteration that showed the minimum cost value. Such a subset should then generate an optimized manifold topology. The algorithm is available in the PCAfold software package [77]; a sketch of the elimination loop is given below, after the regression details.

Nonlinear regression using artificial neural networks (ANNs). In the atmospheric pollutant dispersion example, we perform nonlinear regression using ANNs. We use the Keras Python library to set up the ANN regression. For reproducibility of results, a random seed of 100 is used in TensorFlow (tf.random.set_seed(100)). The inputs of the network are the manifold parameters, η = [η_1, η_2, η_3], from either PCA, UMAP, Isomap, or t-SNE. The output is the turbulent Schmidt number, φ = Sc_t. The model architecture is 2-5-5-1 for 2D projections and 3-5-5-1 for 3D projections, with sigmoid activations in all layers except the output layer, where we use a linear activation. The network weights are initialized from the Glorot uniform distribution and the biases are initially set to zeros. We use a batch size of 100, a validation split of 0.2, and 500 epochs. The Adam optimizer is used with learning rate 0.001 and the loss function is the mean squared error. We train the regression model on 80% of the data and measure the MAE on the remaining 20% of test data not seen by the ANN model. The results reported in Fig. 7 are for the 20% test data.
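Since the original pseudocode did not survive extraction, the following sketch reconstructs the elimination loop from the description above; cost_of_subset stands in for "scale, project with q-dimensional PCA, and evaluate L over the target dependent variables" and must be supplied by the user.

```python
# A reconstruction of the backward-elimination loop described above;
# cost_of_subset(X_sub, phi, q) is a user-supplied callable returning L.
import numpy as np

def select_features(X, phi, q, cost_of_subset):
    remaining = list(range(X.shape[1]))
    history = [(cost_of_subset(X[:, remaining], phi, q), list(remaining))]
    for _ in range(X.shape[1] - q):                # never drop below q variables
        trial = {j: cost_of_subset(X[:, [v for v in remaining if v != j]], phi, q)
                 for j in remaining}
        worst = min(trial, key=trial.get)          # removal that lowers L the most
        remaining.remove(worst)
        history.append((trial[worst], list(remaining)))
    best_cost, best_subset = min(history, key=lambda t: t[0])
    return best_subset, best_cost
```

The ANN configuration described above can be reconstructed in Keras roughly as follows; the synthetic eta and sc_t arrays are stand-ins for the actual manifold parameters and turbulent Schmidt number.

```python
# A hedged Keras reconstruction of the ANN described above (sigmoid hidden
# layers of width 5, linear output, Adam, MSE, 80/20 split); eta and sc_t
# here are synthetic stand-ins, not the actual data.
import numpy as np
import tensorflow as tf
from tensorflow import keras

tf.random.set_seed(100)
rng = np.random.default_rng(100)
eta = rng.uniform(size=(18031, 3))                    # stand-in 3D projection
sc_t = eta @ np.array([0.5, -0.2, 0.1]) + 0.7         # stand-in Sc_t values

n = int(0.8 * len(eta))
model = keras.Sequential([
    keras.Input(shape=(eta.shape[1],)),
    keras.layers.Dense(5, activation="sigmoid"),      # Glorot uniform weights,
    keras.layers.Dense(5, activation="sigmoid"),      # zero biases (Keras defaults)
    keras.layers.Dense(1, activation="linear"),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="mse")
model.fit(eta[:n], sc_t[:n], batch_size=100, validation_split=0.2,
          epochs=500, verbose=0)
mae = np.mean(np.abs(model.predict(eta[n:], verbose=0).ravel() - sc_t[n:]))
```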
Nonlinear regression using kernel regression. In the atmospheric pollutant dispersion example, we also use kernel regression from the PCAfold software package [77]. We use the Nadaraya-Watson estimator with a Gaussian kernel with a varying bandwidth. The bandwidth is determined locally based on the 50 nearest neighbors of the query point. We train the regression model on 80% of the data and measure the MAE on the remaining 20% of test data not seen by the kernel regression model. The results reported in Fig. 7 are for the 20% test data.

Reacting flow data generation. The reacting flow datasets for combustion of syngas in air and hydrogen in air were generated using the Spitfire Python package [78], available at github.com/sandialabs/Spitfire. The datasets were generated using a steady laminar flamelet model for a range of dissipation rates from chemical equilibrium to extinction and a range of mixture fractions between 0 and 1. For the syngas/air dataset, the fuel stream is composed of a mixture of carbon monoxide and hydrogen in 10:1 molar proportion [79]; the oxidizer stream is air. Both streams have an initial temperature of 300 K and a pressure of 101,325 Pa. For the hydrogen/air dataset, the fuel stream is composed of hydrogen and the oxidizer stream is air [80]. Both streams have an initial temperature of 300 K and a pressure of 101,325 Pa. For both fuels, the mixture fraction grid is denser closer to the stoichiometric conditions.
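As a rough illustration of the kernel regression step described above, a minimal Nadaraya-Watson sketch with a locally varying bandwidth is given below; taking the bandwidth as the distance to the 50th nearest neighbor is an assumption consistent with the description, though the exact rule used in PCAfold may differ.

```python
# A minimal Nadaraya-Watson sketch with a locally varying bandwidth; the
# kth-nearest-neighbor bandwidth rule is an assumption.
import numpy as np
from scipy.spatial import cKDTree

def kernel_regress(eta_train, phi_train, eta_query, k=50):
    tree = cKDTree(eta_train)
    dist, _ = tree.query(eta_query, k=k)
    bandwidth = dist[:, -1]                       # distance to the kth neighbor
    preds = np.empty(len(eta_query))
    for i, (point, h) in enumerate(zip(eta_query, bandwidth)):
        w = np.exp(-((eta_train - point) ** 2).sum(axis=1) / h**2)
        preds[i] = (w @ phi_train) / w.sum()      # Nadaraya-Watson estimate
    return preds
```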
The computational foundations of dynamic coding in working memory

Working memory (WM) is a fundamental aspect of cognition. WM maintenance is classically thought to rely on stable patterns of neural activities. However, recent evidence shows that neural population activities during WM maintenance undergo dynamic variations before settling into a stable pattern. Although this has been difficult to explain theoretically, neural network models optimized for WM typically also exhibit such dynamics. Here, we examine stable versus dynamic coding in neural data, classical models…

…different task periods (Figure 1C). They found low correlation between cue and late delay period selectivities (Figure 1C, left) and high positive correlation between mid and late delay period selectivities (Figure 1C, right). Related to this, another study measured the raw correlation (i.e., not mean-centered across stimulus conditions) between neural activities taken from the cue and late delay periods in two WM tasks [16]. They found overall high correlations between activities at any two time points. However, during the cue period, the correlation of neural activities with late delay activity fell below that expected by chance (i.e., correlations with activities during the fixation period, ≃0.9; Figure 1D, orange). Similarly, the correlation of activities with cue period activity fell to chance levels throughout the delay period (Figure 1D, purple). In another, more recent study, the authors built on these earlier results and measured the mean-centered correlation (rather than raw correlations) of PFC neural activities between all pairs of time points while monkeys performed a WM task [12]. This removed the effect of overall firing rate differences between neurons, which can dominate raw correlations, and resulted in low (and even negative) correlations between cue and delay period activities and positive correlations between pairs of time points taken within the delay period (Figure 1E, asterisks).

[Figure 1 caption (fragment): (A) …(and accompanying raster plots) of neurons recorded in monkey prefrontal cortex (PFC) while subjects performed a memory-guided saccade task [27,29]. Vertical lines (from left to right) show the beginning of the cue, delay, and response periods, respectively. Three example neurons are shown displaying cue- (top), delay- (middle), and response-selective activity (bottom), respectively. Adapted with permission from [29]. (B) Firing rate over time of an example neuron in monkey PFC during performance of a parametric working memory task that consisted of seven different frequencies of haptic stimuli (colors), two of which were presented sequentially in a given trial [31,32]. The vertical gray bar shows the time period of the first stimulus presentation. Adapted with permission from [32]. (C) For the same data as in panel (B), the selectivity of each neuron (dots) to the frequency of the first stimulus was extracted during the cue period, mid delay period, and the late delay period [33]. Scatter plots show these selectivities plotted across different periods. Adapted with permission from [33]. (D) Raw correlation of neural population activity patterns in monkey PFC with either cue period activities (purple, 'Sensory') or late delay period activities (orange, 'Late memory'), while subjects performed a memory-guided saccade task [16]. Correlations are normalized to peak at 1. The gray bar shows the time period of stimulus presentation. Adapted with permission from [16]. (E) Mean-centered correlations between neural population activity patterns in monkey PFC for any pair of time points in a trial while subjects performed a memory-guided saccade task [12]. The light gray bar shows stimulus presentation and the dark gray bar shows reward presentation. Asterisks show strongly negative correlations. Adapted with permission from [12]. (F) Cross-temporal decoding of neural population activity in monkey PFC while subjects performed a memory-guided saccade task with a variable delay period [13]. Vertical yellow lines (from left to right) show the beginning of the cue, delay, and response periods, respectively. Adapted with permission from [13].]
Glossary
Attractor neural network: a recurrent neural network whose state space has stable fixed points ('attractors'), such that the network will always settle into an activity pattern that corresponds to one of those fixed points.
Coding subspace: a subspace within the full state space of a network that contains (most) stimulus information at a given time during the trial (typically, the late delay period).

To clearly measure how stimulus coding changes over time in a neural population, cross-temporal decoding has become a well-established analysis tool [7–13,34–36]. For this, a linear decoder is typically trained to classify neural population activity patterns according to the stimulus condition to which they belong. This algorithm can then be used for cross-temporal decoding, such that it is trained to correctly classify neural activity patterns appearing at a given time point in the trial (e.g., late delay period), but then applied to neural activity patterns recorded at a different time (e.g., the cue period). If cross-temporal decoding performance is high, it means that the encoding of the stimulus by neural activities remains stable across the two time points (Figure 1F, dark gray colours in the top right quadrant, i.e., within the late delay period). Conversely, a clear indication of dynamic coding is low cross-temporal decoding performance (Figure 1F, pale gray colours in the top left and bottom right quadrants, i.e., between the cue and delay periods).

Neural recordings from the PFC during WM typically exhibit two key features of cross-temporal decoding: (i) cross-temporal decoding is reciprocally poor between cue and delay period activities, and (ii) it is relatively high between pairs of time points within the delay period [7–13,16,34] (Figure 1F). Despite poor cross-temporal decoding, it is often possible to train a fixed decoder on activities throughout the delay period (rather than in a short interval at a time, as in cross-temporal decoding) that is able to read out stimulus information from even such time-varying activities [16,37]. Thus, in contrast to cross-temporal decoding, such fixed decoders cannot be used as reliable indicators of the presence or absence of dynamic coding.
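As a minimal sketch of the cross-temporal decoding procedure described above, the following code trains a linear classifier at one time point of synthetic, dynamically coded data and tests it at another. The data-generating model, sizes, and noise level are illustrative assumptions rather than the setup of any cited study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_neurons, n_stim, T = 200, 40, 4, 20  # hypothetical sizes

# Synthetic trials with dynamic coding: the pattern carrying stimulus
# identity rotates from an "early" to an orthogonal "late" direction.
early = rng.normal(size=(n_stim, n_neurons))
late = rng.normal(size=(n_stim, n_neurons))
labels = rng.integers(0, n_stim, size=n_trials)
X = np.empty((n_trials, T, n_neurons))
for t in range(T):
    w = t / (T - 1)
    pattern = (1 - w) * early + w * late
    X[:, t, :] = pattern[labels] + 0.5 * rng.normal(size=(n_trials, n_neurons))

def cross_temporal_decoding(X, labels, t_train, t_test):
    """Train a linear decoder at t_train, test it at t_test on held-out trials."""
    split = n_trials // 2
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[:split, t_train], labels[:split])
    return clf.score(X[split:, t_test], labels[split:])

print("within-time decoding (late):  ", cross_temporal_decoding(X, labels, T - 1, T - 1))
print("cross-time decoding (cue→late):", cross_temporal_decoding(X, labels, 0, T - 1))
```

On this toy data, within-time decoding is near perfect while cue-trained decoders fall to chance on late-delay activity, reproducing the signature pattern in Figure 1F.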
Taken together, the results reviewed in this section reveal a consistent picture of the dynamics of PFC across many different WM tasks: the way in which the neural population encodes the stimulus changes markedly during the cue and early delay period, but the dynamics ultimately settle into a relatively stable pattern of activity during the late delay period (Figure 1A–F). Importantly, such dynamics have been observed in tasks ranging in complexity from simple short-term memory tasks, in which information needs to be simply maintained during a delay period [5,6,10–13,27–29], to more complex WM tasks, which also require manipulation of maintained information [7,9,10,16,30–33,36]. In fact, more complex tasks can display even more dynamic activities compared with simpler tasks [7,36].

Dynamics of hand-crafted models
Neural network models of WM are some of the most widely used models in computational neuroscience [38,39]. These models perform computations, such as WM maintenance, by their dynamics (i.e., the way neural activities change in them over time as a function of network parameters and the inputs they receive) [40] (Box 1).

Classical models of WM were constructed by careful considerations about their architecture and parameters. In particular, they have typically relied on attractor neural network dynamics to maintain stimuli during delay periods [30,38,41–49] (Box 1). Recurrent connections between neurons in these models are specifically structured (Figure 2A, top) to create stimulus-specific attractors in state space. (Note that although the two top rows in Figure 2A show illustrations for only two neurons, as in Figure I in Box 1, these insights generalize to larger networks, and the two bottom rows in Figure 2A show results from actual simulations of 100-neuron networks; see also Outstanding questions.) The stimulus inputs are then designed to drive neural activity close to an attractor (Figure 2A, top and upper middle, purple arrows). During the delay period (Figure 2A, upper middle, dark green dots), neural activity quickly approaches and then remains in the appropriate stimulus-specific attractor (Figure 2A, upper middle, purple lines). As a result, neurons show stimulus-specific persistent activity corresponding to this attractor and they reach their steady state firing rates almost instantaneously following stimulus onset (Figure 2A, lower middle). These dynamics thus allow for the robust maintenance of WM contents. Importantly, cross-temporal decoding of stimulus identity is high throughout the cue and delay periods (Figure 2A, bottom) precisely because neural activities hardly change over time and, even if they do change, neural activities remain highly distinguishable along the 'coding subspace' (i.e., the subspace in which the attractors lie; Figure 2A, upper middle, thin yellow line connecting black crosses). Such stable coding of stimuli during delay periods occurs in various different types of attractor models, even if their detailed architectures and dynamics show substantial differences [13,16,24,33,45,46].

In order to better account for the strongly dynamic neural activities observed in neural recordings, a fundamentally different class of models has been developed that produces time-varying activities [33,37,50–54]. These models often rely on highly non-normal, feedforward, or effectively feedforward network connectivity to produce strongly time-varying activities, resulting in dynamic coding [37,50,55] (see Outstanding questions).

Box 1.
A primer on recurrent neural network dynamics
In recurrent neural network models, the change in the activity x_i of neuron i over time, t, is typically governed by the following dynamical equation:

$$\tau \frac{dx_i}{dt} = -x_i + \sum_j W_{ij}\, f(x_j) + b_i + h_i(t) \qquad \text{(I)}$$

where τ is the effective single neuron time constant, W_ij is the strength of the recurrent synaptic connection from neuron j to neuron i, f(·) is a single neuron input-output nonlinearity mapping from subthreshold activity (e.g., total somatic input) to the output (instantaneous firing rate) of the neuron, b_i controls the baseline activity of neuron i, and h_i(t) is its external time-dependent input (e.g., conveying information about the stimulus). (Several models use more realistic extensions of these dynamical equations to spiking neurons [17,46,71,88], but for the phenomena we are focusing on here, simpler firing rate-based descriptions of neural activity, as in Equation I, proved sufficient.)

The most straightforward way to track how neural activities change in the network is to plot them as a function of time (Figure IA). However, there is also another way of visualizing the activity of neurons that can give important insights into the principles of network dynamics: showing the state space of the network (Figure IB). This is a space whose axes are the activities of individual neurons (i.e., the number of dimensions is the number of neurons) and each point in the space corresponds to a distinct pattern of population activity that could be expressed by the network. (Note that Figure I shows results for a two-neuron network for illustration, but all the concepts discussed here readily generalize to networks of arbitrary size; see Outstanding questions.) Equation I dictates how neural activities change at a given time as a function of the state of the network (i.e., the current set of neural activities). As such, it defines a flow field in the state space (Figure IB, gray arrows) that the network will follow over time, thus tracing a trajectory in this space (Figure IB, purple curve), corresponding to a sequence of population activity patterns.

In general, changes in neural activities, and thus the flow field, also depend on input to the network, which can itself change over time [h_i(t) in Equation I], but it is often [and in neural network models of working memory (WM), typically] assumed that inputs are only non-zero for a brief initial period (the cue period of a WM trial) [13,41,85]. In such cases, inputs essentially determine the point in state space from which the network is started at the beginning of the delay period (the initial condition; Figure IB, green circle) and the trajectory that unfolds from there is governed by the input-free (autonomous) dynamics of the network. Different initial conditions will result in different trajectories in state space.

Of particular relevance to WM are attractor neural networks. These are recurrent networks of which the state space includes a number of special points: the attractors (i.e., stable fixed points; Figure IB, black cross) [13,16,30,38,41–49]. Any initial condition in the 'vicinity' of an attractor produces a trajectory that converges to the attractor and, asymptotically, stays there. When multiple attractors exist in the state space, then sufficiently different initial conditions will make the network converge to different attractors. Thus, the network can robustly maintain information about the stimulus during the delay period, even in the absence of external inputs, by the attractor state it occupies.

Whether and how many attractor states exist in a network's state space, or if instead a network exhibits attractor-free (e.g., chaotic [52,54,89]) dynamics, depends on the single neuron nonlinearity [f(·) in Equation I] and, importantly, the setting of its synaptic connection strengths (W_ij in Equation I).
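A minimal simulation of Equation I, using simple Euler integration for a two-neuron attractor network; the weights, time constant, and input pulse below are illustrative choices, not parameters from any of the models cited in Box 1.

```python
import numpy as np

# Euler integration of Equation I for a two-neuron network with symmetric,
# mutually excitatory weights; all parameter values are illustrative.
tau, dt, t_end = 0.1, 0.001, 2.0
W = np.array([[0.0, 1.2],
              [1.2, 0.0]])      # symmetric recurrent weights (W_ij)
b = np.zeros(2)                 # baseline inputs (b_i)
f = np.tanh                     # single-neuron nonlinearity f(.)

def h(t):
    # External input h_i(t): a brief cue pulse to neuron 0, zero during the delay.
    return np.array([2.0, 0.0]) if t < 0.2 else np.zeros(2)

x = np.zeros(2)                 # subthreshold activities x_i
states = []
for step in range(int(t_end / dt)):
    t = step * dt
    dxdt = (-x + W @ f(x) + b + h(t)) / tau
    x = x + dt * dxdt
    states.append(x.copy())

print("state at cue offset:", states[int(0.2 / dt) - 1])
print("state at end of delay (near an attractor):", states[-1])
```

Because the recurrent gain exceeds one along the symmetric direction, the cue pushes the state off the unstable origin and the autonomous dynamics then settle onto a stable fixed point, the attractor-based maintenance mechanism described in Box 1.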
In such models, a chain of effective feedforward connections exists between neurons (Figure 2B, top) or, more generally, between different patterns of neural activity that are orthogonal to one another [37]. As a result, the stimulus representation changes over time as these different patterns along the chain become active during different time periods (Figure 2B, upper and lower middle). If each pattern along the chain is relatively short-lived (because the effective feedforward connections between patterns dominate over their self-connections), then activities expressed due to stimulus inputs will be determined only by patterns early in the chain, while late delay activities will reflect only patterns at the end of the chain and therefore they will be nearly orthogonal in state space (Figure 2B, upper middle, compare dark green dots and open black circles). Interestingly, despite the complete absence of attractor states, stimulus information is still highly decodable by a fixed decoder in these networks, demonstrating that, as we discussed earlier, such fixed decoders cannot distinguish stable from dynamic coding. Importantly, cross-temporal decoding of the neural activities of these models reveals their strongly time-varying dynamics, such that decoding is only high between neighboring time points (Figure 2B, bottom). Therefore, these models lack the characteristic stable dynamics often seen in experimental recordings during the late delay period (Figure 1).
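The following sketch illustrates this feedforward-chain mechanism with linear dynamics: an input delivered at the top of the chain propagates along it, so that early and late population patterns become nearly orthogonal despite the absence of any attractor. The chain length, decay, and gain values are illustrative assumptions.

```python
import numpy as np

# Linear dynamics x' = A x with a purely feedforward chain: each pattern
# feeds the next, with leaky decay and no attractors; values are illustrative.
n, dt, t_end = 8, 0.01, 6.0
A = -np.eye(n)                                # leaky decay of each pattern
A[np.arange(1, n), np.arange(n - 1)] = 2.5    # feedforward links j -> j+1

x = np.zeros(n)
x[0] = 1.0          # stimulus input enters at the top of the chain
states = []
for step in range(int(t_end / dt)):
    x = x + dt * (A @ x)
    states.append(x.copy())
states = np.array(states)

early = states[10] / np.linalg.norm(states[10])
late = states[-1] / np.linalg.norm(states[-1])
print("overlap of early and late population patterns:", early @ late)
print("most active pattern early vs late:", states[10].argmax(), states[-1].argmax())
```

Activity is concentrated on the first pattern early and on the last pattern late, so the two population patterns are almost orthogonal, which is exactly why cross-temporal decoding fails between distant time points in such models.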
In an attempt to account for the combination of both sequential and persistent dynamics observed in data, variations of a classical attractor model (specifically, linear integrator models [42,43,49]; Figure 2C) have been proposed. In these models, network dynamics exhibit stimulus-dependent persistent states, as in attractor networks. However, unlike in classical attractor networks, stimulus inputs do not drive neural activity directly into the persistent state (Figure 2C, upper middle, purple arrows). Instead, cue-driven dynamics are the sum of changes along two directions in state space: one that is aligned with the coding subspace, as in classical attractor networks (Figure 2C, upper middle, thin yellow line connecting black crosses), and one that is orthogonal to it and, therefore, aligned with the coding 'nullspace' instead [16,56]. Changes along the coding nullspace create relatively lower raw correlations between cue and late delay period activities. Importantly, because the intrinsic dynamics of these networks are constructed such that neural activities do not change along their coding subspace, the transient changes following stimulus offset only change activities along the coding nullspace, such that eventually any activity in the nullspace diminishes (Figure 2C, upper middle, purple lines going from dark green dots to black crosses are orthogonal to the yellow line). As a result, the stimulus representation remains completely stable in the coding subspace during the delay period. This explains the higher raw correlations during the late delay period observed in experiments (Figure 1D). However, because their intrinsic dynamics can only change the coding-orthogonal component of activities, these models still necessarily rely on a large overlap of the stimulus input with the coding subspace, and thus predict unrealistically high mean-centered correlations. Cross-temporal decoding also reveals strongly stable coding of stimuli throughout the trial (Figure 2C, bottom), much like in classical attractor models (Figure 2A, bottom). Indeed, because of this, the dynamics of such models have been shown to be incompatible with PFC recordings in which low (or even negative) mean-centered correlations have been found between cue and delay period activities [12,16,33] (Figure 1C–E) and, correspondingly, a near-chance level of cross-temporal decoding accuracy was achieved when decoding cue-related activity using a late delay-trained decoder [7–13,16,34] (Figure 1F).

In summary, hand-crafted models of WM dynamics can display various different dynamics (Figure 2A–C). However, none are compatible with the fundamental neural dynamics typically seen in experimental data. In particular, these consist of stimulus inputs that drive neural activity in directions strongly orthogonal to (i.e., only weakly aligned with) late delay activity (Figure 2D, upper middle), persistent activity that only emerges during the delay period (Figure 2D, lower middle), and poor cross-temporal decoding between cue and late delay periods together with stable late delay decoding (Figure 2D, bottom). Even models including purpose-built mechanisms to account for dynamic coding do not fully capture these phenomena.
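For comparison, here is a two-dimensional linear sketch of the integrator-style dynamics described above, in which the cue drives activity partly along the coding subspace and partly along the coding nullspace, and only the nullspace component decays. All directions and magnitudes are illustrative assumptions.

```python
import numpy as np

# Linear integrator with a coding subspace and an orthogonal "nullspace":
# activity along the coding direction persists (zero eigenvalue), while
# activity along the orthogonal direction decays. Values are illustrative.
coding = np.array([1.0, 0.0])             # line-attractor (coding) direction
null_dir = np.array([0.0, 1.0])           # coding-nullspace direction
A = -4.0 * np.outer(null_dir, null_dir)   # decay only off the coding line

stim_input = 1.0 * coding + 2.0 * null_dir  # cue drives both components

dt, t_end = 0.01, 3.0
x = stim_input.copy()                     # state at stimulus offset
for _ in range(int(t_end / dt)):
    x = x + dt * (A @ x)

print("coding component (unchanged):", x @ coding)
print("nullspace component (decayed):", x @ null_dir)
```

The coding component is untouched by the intrinsic dynamics while the orthogonal transient dies away, which reproduces the stable coding subspace of these models and also shows why their stimulus inputs must overlap strongly with that subspace.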
Dynamics of task-optimized neural networks
Instead of using hand-crafted models to study WM, in recent years it has become increasingly common to study networks with connection weights that are optimized to perform WM tasks [13,18,20,21,33,57–62]. In such studies, typically all connections in the network are optimized during training, including the recurrent connections between neurons in the network, the input connections that determine how stimuli (usually represented by abstract activation patterns rather than as actual visual images) affect neural activities, and the readout connections that allow the decoding of stimulus information from the network. As these models are optimized for task performance, rather than fitted to neural (or behavioral) data, the dynamics they discover over the course of optimization provide readily testable predictions without overfitting those data. When these predictions are confirmed, these models thus offer mechanistic insights into how the brain might solve similar tasks [58,63,64].

Analyses of task-optimized networks have revealed that their dynamics appear to be much more in line with experimental data compared with classical models. A recent study offered a direct comparison of networks that were optimized for a memory-guided saccade task to previously suggested hand-crafted networks performing the same task [13] (compare Figure 3A and B–D, left, with Figure 2). After training, the task-optimized networks exhibited a combination of features characterizing both hand-crafted attractor and feedforward models. First, as seen in feedforward networks, they had non-normal connectivity [13,20,58] (Figure 3A) and the optimized stimulus inputs drove neural activity strongly orthogonal to late delay activity [13,33,58] (Figure 3B, left, purple arrows). Second, similarly to hand-crafted attractor networks, they had attractors in their state space towards which activities converged by the late delay period (Figure 3B, left, purple lines and black crosses). Due to the orthogonality of inputs to the subspace of attractors, these networks exhibited sequential activity during the cue and early delay followed by more persistent activities during the late delay period (Figure 3C, left). A pattern of low cross-temporal decoding between the late delay and the cue and early delay periods is also observed in these networks (Figure 3D, left), similar to that seen in data (Figures 1 and 2D).

The signatures of these dynamical motifs have also been found consistently across several earlier studies using networks optimized on a variety of WM tasks. For example, in a landmark study in
which a network was optimized to perform a context-dependent decision-making task [58] (Figure 3B, right), stimulus inputs initially drove neural activity (Figure 3B, right, black and gray lines at 'Dots on') strongly orthogonal to a line of attractors (Figure 3B, right, red line and crosses) onto which the dynamics eventually fell during the delay period (Figure 3B, right, 'Dots off'). In another study, networks were optimized to perform a parametric WM task [33]. The activities of stimulus-selective neurons changed dynamically during the stimulus and early delay periods and then became relatively stable during the late delay period (Figure 3C, right, colored lines). Finally, in a network that was optimized to perform a context-dependent trace conditioning task, an analysis related to cross-temporal decoding also revealed analogous results [57]. In this analysis, a decoder was trained to discriminate neural activities taken from two different time points (Figure 3D, right). This resulted in high discriminability between early delay times and all other times (indicating strongly sequential neural activities during the early delay period) and near-chance discriminability between any pair of time points during the late delay period (indicating strongly static neural activities). Thus, when using this temporal discrimination analysis, dynamic coding results in the near-inverse of the pattern of decodability obtained with traditional cross-temporal decoding.

Although we have only shown a few examples here, evidence for such dynamics (Figure 3) has been consistently found across studies, covering a wide range of WM tasks, training protocols, and cost functions, and using various different analysis techniques [13,18,20,21,33,57–62]. In fact, it has also been shown that the strength of dynamic coding scales with task complexity [20,21], in line with several empirical observations [7,36].
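As a schematic of how such task-optimized networks are obtained in practice, the sketch below trains a vanilla recurrent network on a toy delayed-report task. The architecture, fixed delay length, and hyperparameters are illustrative simplifications (published studies typically use variable delays and task-specific cost functions), not the setup of any particular study cited above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A minimal task-optimized network: a vanilla RNN trained to report, after a
# delay, which of n_stim cues was shown. All sizes and settings are illustrative.
n_stim, n_hidden, T_cue, T_delay = 4, 64, 5, 20

class WMNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(n_stim, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_stim)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.readout(h[:, -1])   # decode at the end of the delay

def make_batch(batch_size=128):
    stim = torch.randint(0, n_stim, (batch_size,))
    x = torch.zeros(batch_size, T_cue + T_delay, n_stim)
    # one-hot cue presented during the cue period only, then zero input (delay)
    x[:, :T_cue, :] = F.one_hot(stim, n_stim).float().unsqueeze(1)
    return x, stim

net = WMNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    x, stim = make_batch()
    loss = F.cross_entropy(net(x), stim)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    x, stim = make_batch(1000)
    acc = (net(x).argmax(dim=1) == stim).float().mean()
print("accuracy after training:", acc.item())
```

Once trained, the recurrent weight matrix and the hidden-state trajectories of such a network can be analyzed with exactly the tools discussed in this review (non-normality measures, state-space plots, and cross-temporal decoding).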
A framework for understanding when circuits exhibit stable or dynamic coding
The ubiquity of dynamic coding in task-optimized networks suggests that dynamic coding results from an optimality principle. Conversely, the lack of dynamic coding in classical attractor models suggests that specific modeling choices must have led to purely stable dynamics there. We examine here two key factors that together determine whether a neural circuit exhibits dynamic coding: the connectivity of the network (Figure 4A, columns), and the inputs that it receives (Figure 4A, rows). While the mathematical results we review below formally describe networks with linear dynamics, many of their insights generalize to the more realistic case of non-linear dynamics (but see Outstanding questions).

Effective feedforward connectivity provides a greater stimulus signal-to-noise ratio
If a network has effective feedforward (i.e., mathematically non-normal) connectivity, then it is able to amplify stimulus inputs and generate more noise-robust dynamics compared with a network with mathematically normal (e.g., symmetric) connectivity [13,37,50,55,65–69]. In particular, each step along an embedded excitatory feedforward chain presents an opportunity for amplification. Thus, inputs delivered to the top of such a chain will be more amplified, as they will pass through a longer section of the chain than inputs delivered to the end of the chain [37,50,68]. Noise does not systematically discriminate between different parts of these embedded chains. In contrast, by choosing stimulus inputs strategically, it is possible to focus the signal specifically toward the early parts of these chains. Thus, with the right choice of inputs, amplification can be made to affect the signal more than the noise. As a consequence, such strongly non-normal networks allow more robust maintenance of information during delay periods [13,50,65,66,69].

Many classical attractor models use normal or nearly normal recurrent connectivity. This includes networks with symmetric connectivity, as in some of the most classical attractor models of WM [38,41–44] (Figure 4A, left column). Even networks with quasi-symmetric connectivity still lack non-negligible feedforward connections, such as when a single inhibitory neuron is included in
an otherwise perfectly symmetric network of excitatory neurons [25,30,45–48,70], or when the actual connectivity has a random independent component in each connection, but such that it is a realization of underlying connection probabilities that are still perfectly symmetric [71,72]. In such networks, there are no (or only very weak) effective feedforward connections and so the signal-to-noise ratio in the decoded stimulus cannot be increased over time using non-normal amplification.

Importantly, even the existence of effective feedforward connections alone does not necessarily mean that the stimulus inputs will utilize them. This leads us to the next key factor.

Optimal inputs utilize effective feedforward connections, leading to sequential activities
The same mathematical analysis that predicts effective feedforward connectivity to be optimal for information maintenance also makes predictions about the nature of optimal inputs that the network should use. Specifically, to make use of the feedforward connections, the inputs must be delivered to neurons (or patterns) near the top of the feedforward chain [13,37,50,55] (Figure 2B, top).

There are various different input directions that make optimal use of the effective feedforward connectivity under various different definitions of optimality. These input directions include: the 'most amplifying mode' [13,65,66,73] (mathematically, the leading eigenvector of the 'observability Gramian' of the network [74]), the 'selection vector' [58,75–77] (the leading left eigenvector of the recurrent weight matrix [58]), or the 'Fisher-optimal' direction (the leading eigenvector of the Fisher memory matrix [50]). Although, in general, each of these directions is distinct from the others, they all make strong use of effective feedforward connectivity to amplify neural responses over time and are therefore usually strongly correlated with each other. Indeed, in certain cases, these directions can be identical to one another [13,50,58,66,75]. When these inputs are applied to networks with effective feedforward connectivity, different patterns of activity are generated over time as population activity propagates along the feedforward pathways. These sequential dynamics thus give rise to dynamic coding.

Interestingly, however, in classical attractor networks with normal connectivity, the optimal stimulus input simply drives neural activity directly into an attractor (or persistent activity) state [13,50] and thus the dynamics do not change during the delay period (Figure 4A, left column). Even if their connectivity is not symmetric [16,56], as we mentioned earlier, these models still often use stimulus inputs that rely on a large overlap with the desired attractor state [16,45,47,56,71,72] and the coding of the stimulus hardly changes during the delay period (Figure 4A, top row).
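Two of these candidate directions can be computed directly for a stable linear system x' = Ax, as sketched below. The example matrix is random and illustrative, so the two directions need not align as closely as they typically do in networks optimized for WM; treating the real part of a possibly complex eigenvector is also a simplification.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)

# Illustrative stable linear dynamics x' = A x for a small random network.
n = 20
A = -1.5 * np.eye(n) + 0.8 * rng.normal(size=(n, n)) / np.sqrt(n)

# Most amplifying mode: leading eigenvector of the observability Gramian Q,
# which solves the Lyapunov equation A^T Q + Q A = -I for stable A.
Q = solve_continuous_lyapunov(A.T, -np.eye(n))
Q = (Q + Q.T) / 2                      # symmetrize against numerical error
evals, evecs = np.linalg.eigh(Q)
most_amplifying = evecs[:, -1]         # eigenvector of the largest eigenvalue

# Selection vector: leading left eigenvector of A (i.e., eigenvector of A^T);
# taking the real part is a simplification for complex eigenpairs.
evals_l, evecs_l = np.linalg.eig(A.T)
selection = np.real(evecs_l[:, np.argmax(np.real(evals_l))])
selection = selection / np.linalg.norm(selection)

print("overlap between the two candidate input directions:",
      abs(most_amplifying @ selection))
```

Delivering the stimulus along the most amplifying mode maximizes the total energy of the evoked response, which is one precise sense in which inputs can "make use of" effective feedforward connectivity.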
The generality of dynamic coding
As we saw earlier, networks with normal (or quasi-normal) connectivity do not have effective feedforward connections. However, neural circuits, either real or artificial, are virtually guaranteed to contain effective feedforward connections by chance alone. For example, if the connectivity matrix is randomly generated (as is common in the field [13,23,33,52,54,58,65,78]), the matrix will be non-normal with probability 1. Furthermore, as more biological details are incorporated into network models, they typically lead to more strongly non-normal connectivities. For example, if the connectivity matrix distinguishes between excitatory and inhibitory neurons (known as Dale's principle), this creates an asymmetric architecture, which typically means that the network will be strongly non-normal [13,55,66,67,69,73,79]. Additionally, spike-timing-dependent plasticity (STDP) rules are typically anti-symmetric in terms of whether a pre-synaptic spike arrives before or after a post-synaptic spike [80], thus further encouraging asymmetric connections and sequential activities to form [81] (although see [82]).

For optimized networks, the degree to which the network exhibits dynamic coding with sequential activities (due to non-normal connectivity) and/or stable coding with persistent activities (due to attractor dynamics) depends on the details of the functional objective for which the network has been optimized (Figure 4B). If the network was optimized to retain information over long and variable intervals (as is typical in WM experiments), and this information needs to be decodable with the same decoder for any interval, then the optimal solution is to develop attractor dynamics that generate stable coding with persistent activities over the period when these retention intervals might end (Figure 4B, left two columns). If even the shortest retention interval has a considerable length (again, as is typical in experiments), then the network will develop non-normal attractor dynamics, with inputs that optimally make use of non-normal amplification. This means that a later period of stable coding will be preceded by an early period of dynamic coding [13] (Figure 4B, left). If, however, some retention intervals end immediately, or very shortly, after stimulus offset, then there will be no time for the sequential propagation of activities along effective feedforward connections and hence the network will remain normal, or even if it develops non-normal connectivity, its inputs will not make use of this. Hence, such networks will not exhibit dynamic coding, only stable coding throughout the whole range of retention intervals [13] (Figure 4B, center left). In the other extreme, when retention intervals are not variable, then there is no need for stable coding and so network dynamics will be non-normal and attractor-free, giving rise to purely dynamic coding with strongly sequential activities [13,50] (Figure 4B, center right).
Finally, if, in a setting relevant for sensory processing, stimulus information needs to be decodable immediately after stimulus onset without a need for retaining it, then network connectivity will not need to either form stimulus-specific attractor states (because there is no need for stable coding), or be strongly non-normal [13] (because there is no time to make use of effective feedforward connections; Figure 4B, right). In line with these predictions, cortical dynamics in sensory areas were described as lacking such attractors [69,83] and effective connectivities (as estimated from neural activities) had a much lower degree of non-normality in mouse visual cortex in a perceptual decision-making task (without a delay) than in monkey PFC during WM tasks [13,76,77] (see Outstanding questions).

To summarize, there are two intertwined aspects of classical attractor models that reduce their propensity to exhibit dynamic coding: (i) many use symmetric (or quasi-symmetric) recurrent connectivity [25,30,38,41–48,70–72]; and (ii) they typically use stimulus inputs that drive neural activity directly into an attractor state (Figure 4A, top left, black box). Satisfying these two conditions has the benefit of strong mathematical guarantees on the performance of such networks [39], but it also restricts classical models to live in a very small part of parameter space (Figure 4A, top left, black box). However, given the highly specific part of parameter space that a neural circuit must be in so that it does not exhibit dynamic coding (Figure 4A, left column and top row), most neural circuits will exhibit dynamic coding. When these restrictive conditions are dropped, networks exhibit dynamic coding in general (Figure 4A, shaded blue area, 'Dynamic coding'). Moreover, as we argued earlier, if the connectivity and/or the stimulus inputs are optimized to perform a broad range of WM tasks, we expect more extreme levels of dynamic coding as the stimulus inputs make greater use of the strong non-normal connectivity that results from optimization (Figure 4A,B, red box). Indeed, a recent study found specific evidence for this dynamical regime in the lateral PFC during a memory-guided saccade task [13].

Concluding remarks
Ultimately, WM is a cognitive function implemented by the concerted dynamics of large neural populations. The study of several other cognitive functions, most prominently motor control, has greatly benefited recently from such a dynamical systems perspective [40]. In particular, optimal neural dynamics for movement preparation and control provided a parsimonious
explanation for orthogonal pre-movement to movement dynamics in motor cortex [66,84–87], a close analog of the phenomenon of dynamic coding in WM. Although the notion of memory maintenance might conjure ideas based on static representations, the results we discussed here suggest that it is time to update computational models of WM and fully embrace the role of (non-normal) dynamics in optimal memory maintenance.

Outstanding questions
We have focused on relatively low-dimensional dynamics, partly to provide basic intuitions but also because many of the tasks we have discussed only require either one- or two-dimensional dynamics. In reality, both neural recordings and task-optimized networks typically consist of many neurons and can exhibit higher dimensional dynamics. What is the relationship between the dimensionality of the dynamics of a network and the strength of dynamic coding it exhibits?
Dynamic coding appears to result from an optimality principle, but how much do other factors contribute to dynamic coding? Some of these factors may include short-term plasticity, different neurons being active in different task epochs, oscillations at the single trial level, or the network performing a coordinate transformation from the cue to delay period.
How do we generalize insights from linear dynamics to non-linear dynamics? Although many of the models we have discussed have non-linear dynamics (i.e., the single-neuron input-output function is non-linear), the main intuitions, and their underlying mathematical derivations, for why dynamic coding is useful come from networks with purely linear dynamics. For example, the consequences of non-normal versus normal connectivities on network dynamics are only known for linear network dynamics.
When fitting dynamical systems models to neural recordings, why do the fitted model dynamics appear to be more non-normal in PFC than in visual cortex? Is this because information only needs to be retrieved from PFC at the end of a delay period, compared with needing the information straight away in the visual cortex (e.g., during perceptual decision-making tasks)?
How does task complexity control the strength of dynamic coding? For example, tasks that require manipulation of information (i.e., more formally, working memory tasks) may lead to more dynamic changes in neural activities than tasks that require simple short-term memory.

Figure 1. Widespread evidence for dynamic coding in frontal cortical recordings. (A) Peristimulus time histograms (and accompanying raster plots) of neurons recorded in monkey prefrontal cortex (PFC) while subjects performed a memory-guided saccade task [27,29]. Vertical lines (from left to right) show beginning of cue, delay, and response periods, respectively. Three example neurons are shown displaying cue- (top), delay- (middle), and response-selective activity (bottom), respectively. Adapted with permission from [29]. (B) Firing rate over time of an example neuron in monkey PFC during performance of a parametric working memory task that consisted of seven different frequencies of haptic stimuli (colors), two of which were presented sequentially in a given trial [31,32]. The vertical gray bar shows the time period of the first stimulus presentation. Adapted with permission from [32]. (C) For the same data as in panel (B), the selectivity of each neuron (dots) to the frequency of the first stimulus was extracted during the cue period, mid delay period, and the late delay period [33]. Scatter plots show these selectivities plotted across different periods. Adapted with permission from [33]. (D) Raw correlation of neural population activity patterns in monkey PFC with either cue period activities (purple, 'Sensory') or late delay period activities (orange, 'Late memory'), while subjects performed a memory-guided saccade task [16]. Correlations are normalized to peak at 1. The gray bar shows the time period of stimulus presentation. Adapted with permission from [16]. (E) Mean-centered correlations between neural population activity patterns in monkey PFC for any pair of time points in a trial while subjects performed a memory-guided saccade task [12]. The light gray bar shows stimulus presentation and the dark gray bar shows reward presentation. Asterisks show strongly negative correlations. Adapted with permission from [12]. (F) Cross-temporal decoding of neural population activity in monkey PFC while subjects performed a memory-guided saccade task with a variable delay period [13]. Vertical yellow lines (from left to right) show beginning of cue, delay, and response periods, respectively. Adapted with permission from [13].

Figure I. A primer on recurrent neural network dynamics. (A) Firing rates over time for a two-neuron recurrent attractor neural network. (B) State space plot for the network simulated in panel (A). Purple curve shows firing rate trajectory corresponding to the simulation shown in panel (A) (note persistent activities once the network reaches the attractor state). Gray arrows show flow field dynamics (direction and magnitude of movement in the state space as a function of the current firing rates), green dot shows initial condition of the dynamics (e.g., the state of the dynamics following stimulus offset), and the black cross shows an attractor state.
Figure 2. Dynamics of hand-crafted models. (A–C) Top row: illustration of network architectures for two neurons where reversed triangle-shaped line endings illustrate connections (either excitatory or inhibitory) between neurons. Colored arrows next to the neurons indicate the magnitude (arrow length) and sign (up, positive; down, negative) of the two stimulus inputs (dark and pale purple). Upper middle: schematic of firing rate trajectories in state space for two stimulus inputs (dark and pale purple lines, with arrows showing direction of travel in time). Pale and dark green dots show time of stimulus onset and offset, respectively; black crosses show attractor locations with a thin yellow line connecting them, showing the coding subspace. Lower middle: mean-centered (across stimulus conditions) firing rate over time for two stimulus conditions (purple curves) of an example neuron (neuron i shown on the horizontal axis of the panel above) in large classical attractor networks [13]. Vertical green lines indicate stimulus onset and offset times. Bottom: cross-temporal decoding of neural population activity resulting from large classical attractor networks [13]. Vertical and horizontal green lines indicate stimulus onset and offset times. (A) Classical attractor network with symmetric connections between neurons. (B) Feedforward network [37,50]. Notice the black dots (instead of crosses) in the upper middle panel indicating the lack of attractor states in this model. (C) Linear integrator models with strong transient dynamics [16,56]. (D) Same as panels (A–C) but showing schematics of neural recordings from prefrontal cortex during WM tasks. The bottom two rows in panels (A–C) were adapted with permission from [13].

Figure 3. Dynamics of task-optimized neural networks. Panel (A) and the left panels of (B–D) can be directly compared with the four rows of Figure 2.
(A) Illustration of the fundamental architecture of networks optimized on working memory tasks. Same notation as in Figure 2 (top row). (B) Left: schematic of firing rate trajectories in the state space of a task-optimized network for two stimulus inputs (purple lines with arrows). Same notation as in Figure 2 (upper middle row). Right: state space dynamics projected into a two-dimensional subspace of a network optimized to perform a context-dependent decision-making task [58]. Circles connected with line segments show firing rate trajectories corresponding to six stimulus conditions (black to gray filled and open circles). Red crosses with a red line connecting them show attractor states. Compare with left panel. Adapted with permission from [58]. (C) Left: mean-centered (across stimulus conditions) firing rate over time for two stimulus conditions (purple curves) of an example neuron in a 100-neuron task-optimized network [13]. Vertical green lines indicate stimulus onset and offset times. Adapted with permission from [13]. Right: firing rate over time of an example neuron in a network optimized to perform a parametric working memory task using haptic stimuli [33] for all seven vibration frequencies of the first stimulus (blue to red colors). Compare with left panel and Figure 1B in the main text. Adapted with permission from [33]. (D) Left: cross-temporal decoding for the same network as in panel (C), left. Compare with Figure 1F in the main text. Adapted with permission from [13]. Right: two-interval decoding of time in a network optimized to perform a context-dependent trace conditioning task [57]. A decoder is trained to distinguish neural activities taken from two different time points (off-diagonal elements) in a cross-validated way. Compare with left panel. Adapted with permission from [57].

Figure 4. (A) Schematics of network dynamics shown in a two-dimensional state space spanned by the (mean-centered) firing rates of two neurons (neuron i and neuron j; cf. Figure I in Box 1) for three different types of network connectivity (columns), combined with three different stimulus input directions (rows). For simplicity, linear dynamics are shown. Gray arrows show flow field dynamics, black line shows a line attractor (i.e., a continuous line of persistent activity states along which neural activities do not change), purple arrows show network activities at the time of stimulus onset (pale green dots) and offset (dark green dots) following two stimuli with corresponding inputs of equal magnitude and opposite sign, purple lines show firing rate trajectories unfolding after stimulus offset. Black box on the top left indicates classical attractor model dynamics and the red box in the bottom right indicates task-optimized network dynamics (cf. red box in (B)). The shaded blue box indicates networks with dynamic coding in which a stimulus period trained decoder does not generalize perfectly to delay period activity and vice versa. Note that although random input directions for classical (symmetric) connectivity lead to changes in activities, these changes are orthogonal to the attractor (the coding subspace) and thus such networks still exhibit stable coding. (B) Task-optimized networks: dependence of delay period coding (upper middle), attractor dynamics (lower middle), and non-normal connectivity with inputs utilizing it (bottom) on task (top). Schematics in top row show task events (stimulus: green line with abutting dots, retention interval: black line) and decoding time windows (gray) as a function of time (horizontal axis); parallel lines represent example trials. Red box shows task optimization for which dynamics are shown in (A).
2024-04-05T13:47:45.271Z
2024-04-01T00:00:00.000
{ "year": 2024, "sha1": "21ec8abb3b19f485d7c7337d0467147042ef2c69", "oa_license": "CCBY", "oa_url": "http://www.cell.com/article/S1364661324000536/pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "019f6f21ce8f293001c6468cfc6f7855e247ed09", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
27584498
pes2o/s2orc
v3-fos-license
High-resolution complementary chemical imaging of bio-elements in Caenorhabditis elegans †

Here, we present a sub-μm multimodal approach to image essential elements in Caenorhabditis elegans. A combination of chemical imaging technologies reveals total metal concentration, chemical state and the protein to which an element is associated. This application of distinct yet complementary chemical imaging techniques provided unique insight into essential and trace elements at the subcellular level.

Caenorhabditis elegans is a model system that displays highly compartmentalised elemental distribution, ranging from high abundance species to ultra-trace elements. When imaging these elements, abundance within the sample does not necessarily equate to high sensitivity or the capacity for high spatial resolution. 1 Application of synchrotron-based X-ray fluorescence microscopy (XFM) is no exception; a number of factors determine whether an analyte can be detected and spatially mapped at subcellular resolution. These factors relate to both technical limitations and the nature of the sample itself. Considerations for imaging elements in C. elegans include: the energy of the incident beam, duration of exposure, the atomic mass of the element, the energy of the fluorescence, the characteristics of the detector used, the composition and thickness of the sample, and the environmental conditions in which the analysis takes place. 2 Fluorescence profiles of specific elements must be discerned from elastic (Rayleigh) and inelastic (Compton) scattering; 3 i.e. element-specific fluorescence must not be obscured by scatter peak tails. The incident X-ray energy determines which excitation events can occur, but also positions the scattering peaks within the X-ray spectrum. Lower mass (Z) elements have reduced cross-sections and fluorescent yields, and are easily absorbed by the sample matrix, all limiting sensitivity. While self-absorption may be negligible for heavier elements in thinner specimens, it places real limitations on low-Z elements, even within single cells. 4,5 In addition, for specimens measured in air, argon (Ar) fluorescence causes a major interfering peak in the collected spectra (with associated tail) that can overwhelm the signal derived from lighter bio-elements, making their detection impossible. 6 Previously, within whole animals, biologically important mid-Z elements, such as calcium (Ca), zinc (Zn) and redox-active metals, have often been analysed without corresponding data on low-Z elements (e.g. phosphorus (P) and sulfur (S)).

C. elegans have been a successful test bed for pushing the boundaries of microscopy, 7,8 and are particularly well suited for whole-organism imaging of fundamental biochemistry. Examples span from Raman vibrational spectroscopy for imaging lipid metabolism 9 to scanning electron microscopy for profiling the C. elegans connectome. 10,11 We have used this nematode to extensively study metal metabolism via XFM, conducting population studies, 12 tomography 13 and X-ray absorption near-edge structure (XANES) spectroscopy 14 to appraise the complex biochemistry of metals in vivo. C. elegans are highly resistant to ionising radiation, 15 which permits analysis of hydrated and anesthetised samples for mid-Z elements using hard X-rays (>10 keV).
While hydrated imaging of anesthetised samples is preferred in principle, the water content increases the absorption of low energy fluorescence, and therefore dehydration aids detection of elements with atomic masses below potassium (K; Z < 19). Preserving subcellular distribution of elements is challenging, particularly following chemical fixation. Even brief (<30 s) formalin fixation of thin tissue sections can cause redistribution and leaching of transition metals and electrolytes. 16 However, we previously demonstrated that cryofixation of C. elegans in liquid N2-cooled propane followed by lyophilisation does not cause significant variation in elemental content or subcellular distribution. 12

Evolving detector capabilities, such as those of the 384-channel Maia, reduce sampling overheads by collecting spectra 'on-the-fly' (i.e. continuously during a transit across the specimen). 18 The reduced overheads allow scanning of larger samples, including whole C. elegans, in a practical timeframe. Furthermore, this enables spatial oversampling, where data is collected at intervals less than the full width at half maximum (FWHM) of the beam profile. Although the modulation transfer function (MTF) supports the utility of sampling at half of the beam size, in most practical cases this oversampling is not performed due to detector time limitations. Recent upgrades to the Maia detector have improved low-energy sensitivity, thus mitigating this limitation.

Here, we have combined the Maia detector (Rev C), a N2 environment for reduced Ar fluorescence, sample dehydration and incident energy selections to simultaneously image a broad range of endogenous biological elements from P to strontium (Sr) in wild type C. elegans. Specimens were mapped via XFM using 12.9 keV and 18.5 keV incident beams. Fluorescence emission was collected by the Maia detector mounted in the backscatter geometry (see ESI † for Experimental Methods). We present highly spatially resolved and correlated images of low-Z elements previously considered as 'difficult' analytes with respect to XFM in biology, whilst also using different modes of imaging to demonstrate the high complementarity possible using a unified imaging approach. 1

Mapping biological samples in a frozen hydrated state can reduce incident beam 'damage', permitting radiation exposure of up to 10^10 Gy, allowing the extended dwell time necessary for the collection of multiple spectra as in XANES analysis 19 or oversampling. However, cryogenic conditions can be prone to artifacts 4 and retain problematic water content. 20 At room temperature, cryofixed and lyophilised Vicia faba (fava bean) chromosomes showed no morphological change at radiation doses of up to 10^7 Gy, 21 suggesting cryogenic measurements may not always be necessary. Our dehydrated samples received a combined radiation dose of ~10^6 Gy with no apparent morphological changes in Compton scattering used to identify microstructural features, 22 or in known elemental distributions previously described. 12 Derived Compton maps were used to relate C. elegans anatomy to all other measured elements in the hyperspectral image stack (Fig. 1a; full resolution maps available as ESI †). Sulfur was ubiquitously distributed and best recapitulated the majority of anatomical structures (Fig. 1b). Phosphorus also reproduced structural definition of the specimen, though was found at comparatively lower concentrations in the head and full extent of the tail.
Consistent with expectations, both elements showed high concentrations within embryos known to be rich in lipids, phospholipids and yolk proteins required for development. As we have previously shown, Ca and manganese (Mn) were highly compartmentalised along the intestinal lumen, 13 as was Zn, in addition to being rich within the gonad and embryos, consistent with Zn finger transcription factors necessary for early development. Using the higher incident energy of 18.5 keV, we also found subcellular concentrations of Sr in the most anterior intestine. Strontium commonly substitutes for Ca in biological systems at a greatly reduced concentration; 23 higher sensitivity of XFM is needed to map and quantify its distribution further along the intestinal tract.

Due to minimised time penalties from oversampling, the spatial resolution achieved approached that of light microscopy (approximately 200 nm 24), allowing application of statistical approaches to objectively determine if true co-localisation was occurring between specific elements that are both low-Z and highly mobile in the previously hydrated specimen. Potassium and chlorine (presumably as K+ and Cl−) appeared to have highly correlated cytoplasmic distribution (Fig. 2a). Pearson and Mander's correlation measures of the merged images showed strong association (Pearson's r = 0.708; Mander's R = 0.926). Using Li's method for intensity correlation analysis (ICA), 25 which overcomes several limitations of both Pearson and Mander's overlay comparisons, we determined that the ICA quotient (Q) for the entire organism was 0.234 (+0.5 = perfect correlation; −0.5 = no correlation). Examination of both merged elements and the mapped product of difference from the mean (PDM) used to calculate the overall ICA Q showed distinct regions of K-enrichment within the gonad, as well as marked positive correlation in embryos. Interrogation of the pixel (concentration) histograms for each element showed a bimodal distribution indicative of highly enriched K in the gonad. Frequency distribution and PDM versus signal intensity (areal concentration) plots further demonstrated a skew towards areas of high K. Li's ICA Q measure is particularly useful for visualising the degree of spatial co-localisation; to demonstrate this we assessed the correlation between Ca and Sr, which within the whole organism was less distinct due to low Sr concentration (ICA Q = 0.185). In the anterior intestinal cells where Sr was detectable, we observed high correlation between Ca and Sr in the resulting PDM image (Fig. 2b), consistent with intestinal co-localisation observed in other taxa. 26 These results illustrate that PDM imaging allows spatial correlation both at a sub-μm level of detail and within the whole organism.

The XFM methods used here significantly improved spatial resolution for in vivo mapping. Previously we have shown a distribution of iron (Fe) about the intestine at approximately 2 μm resolution. 12 Our sub-μm imaging approach permitted assessment of Fe revealing a level of detail comparable to histological staining and light microscopy. We found that punctate Fe deposits (Fig. 3a) resembled Fe distributions in formalin-fixed, paraffin-embedded sections stained using the Perls method for non-heme Fe (Fig. 3b). 27 In addition to localised Fe deposits, XFM mapping also showed a more generalised Fe distribution not seen in Perls staining, indicative of heme.
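A minimal sketch of the PDM and ICA computations used in the correlation analyses above, applied here to synthetic co-registered element maps rather than to the measured data; Li's intensity correlation quotient is taken as the fraction of pixels with positive PDM minus 0.5, and the map sizes and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def li_ica(a, b):
    """Li's intensity correlation analysis for two co-registered element maps.

    Returns the per-pixel product of the differences from the mean (PDM) and
    the intensity correlation quotient (ICQ): the fraction of pixels with
    positive PDM, minus 0.5.
    """
    pdm = (a - a.mean()) * (b - b.mean())
    icq = (pdm > 0).mean() - 0.5
    return pdm, icq

# Synthetic 'K' and 'Cl' maps: a shared anatomical component plus
# element-specific noise (purely illustrative, not measured data).
shared = rng.gamma(2.0, 1.0, size=(200, 200))
k_map = shared + 0.3 * rng.normal(size=shared.shape)
cl_map = shared + 0.3 * rng.normal(size=shared.shape)

pdm, icq = li_ica(k_map, cl_map)
pearson = np.corrcoef(k_map.ravel(), cl_map.ravel())[0, 1]
print(f"Pearson's r = {pearson:.3f}, ICA Q = {icq:.3f}")
```

The PDM array itself can be displayed as an image to visualise co-localisation on a pixel-by-pixel basis, as done for the element maps in Fig. 2.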
The correspondence between these two diverse imaging modalities suggests that both approaches accurately report on in vivo Fe. Although Perls staining is not quantitative, these results differ from those reported by Hackett et al., 16 who suggested formalin fixation alone alters Fe distribution in biological tissue. We suggest that neither radiation damage from XFM, nor extensive chemical processing for histological staining, necessarily disturbs the distribution of non-heme Fe in C. elegans.

To further explore consistency between complementary imaging methods, we examined the in vivo localisation of the dominant Fe storage protein ferritin. 28 Using a green fluorescent protein (GFP) fusion to ferritin we compared high-resolution confocal fluorescence in vivo microscopy to XFM mapping and Perls staining (Fig. 3c and d). Distribution of GFP fluorescence, and thereby ferritin localisation, was again remarkably similar to the Fe puncta previously imaged (see Movie, ESI †). Ferritin accounts for almost half of the Fe content of C. elegans and therefore represents a good proxy for non-heme Fe. The multiple imaging methods used serve as validation of each respective technique, providing the first consistent representation of subcellular Fe within a whole organism. Future directions of this complementary imaging approach could employ C. elegans with mutated genes that affect Fe metabolism, as well as ageing studies (such as those described in James et al. 14) to exploit the higher resolution mapping protocol described here. Here we focused on Fe to demonstrate complementary imaging of metal distribution; similar studies could employ the genetically encoded fluorescent calcium sensor GCaMP, 29 which has been used in the C. elegans model system, 30 to compare total body Ca concentration with cell-specific Ca2+ content.

In summary, we have demonstrated sub-μm XFM mapping of bio-elements, both rare and ubiquitous, ranging from low-Z to highly abundant transition metals in a model organism ideally suited for studying metal metabolism. Mapping low-Z elements with confidence will facilitate new experimental paradigms.

Fig. 2 (a) High abundance and low-Z elements K and Cl were analyzed for co-localisation in an individual C. elegans specimen. Images were merged and underwent whole organism correlation analysis using Pearson's, Mander's and Li's intensity correlation analysis (ICA; inset table). 25 Visualisation of the product of difference from the mean (PDM; presented on a black background) at high resolution improved interpretation by presenting the ICA quotient (Q) on a pixel-by-pixel basis. Using this method, both elements showed marked correlation in embryos (0 < Q < +0.5; white arrowheads), as well as distinct potassium-rich regions within the distal and proximal gonad (−0.5 < Q < 0; white arrows). Scale bar = 50 μm. Histograms of pixel values revealed a bimodal distribution for K, consistent with K enrichment in the gonad (black arrow). (b) The advantages of visualising correlation are clear when comparing co-localisation of high abundance Ca with low abundance Sr, which shares similar biochemistry but is close to the XFM limit of detection. Co-localisation is less robust (lower Pearson's r, Mander's R and ICA Q) across the whole organism (inset table); though the anterior intestine, where both Ca and Sr are most concentrated, shows high spatial correlation (0 < Q < +0.5).
2017-06-01T09:28:00.768Z
2016-02-17T00:00:00.000
{ "year": 2016, "sha1": "d146c51ecda45b7fa29c29cfdfca503821fdf209", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2016/mt/c5mt00288e", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "6ec35f4e96c4d836894e23e598d4ce8fa9e0d47b", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Biology", "Chemistry" ] }
253158817
pes2o/s2orc
v3-fos-license
Paramagnetic Sensors for the Determination of Oxygen Concentration in Gas Mixtures

One of the most important methods of measuring the concentration of gaseous oxygen uses its paramagnetic properties, thanks to which oxygen molecules are drawn into the area of a strong magnetic field. This Review presents the current state of knowledge, achievements, and development prospects in the field of magnetic oxygen sensors using this phenomenon. We present the theoretical basis of the physical phenomena used in paramagnetic oxygen sensors. The principles of operation of individual types of paramagnetic oxygen sensors, including the well-established and widely used magnetoacoustic and magnetopneumatic devices as well as the Pauling cells, are also described. In addition, this Review presents the existing and conceptual innovative sensors known mainly from the scientific and patent literature, including refractometric, interferometric, and ultrasonic sensors. This Review also discusses the advantages and limitations of individual devices, indicating the potential areas of their application.

The need to measure oxygen concentration occurs in many branches of industry, science, medicine, and everyday life. Areas in which the monitoring of oxygen concentration is of particular importance include the control of combustion processes (energy, automotive, chemical industry, etc.); the monitoring of metabolic processes (medicine, biology, agriculture, etc.); air quality monitoring in confined spaces or households; atmosphere monitoring in fruit storages, greenhouses, fermentation silos, and warehouses; the monitoring of potentially explosive areas; and in the cement industry.

There are a number of analytical methods used to determine the gaseous oxygen concentration, the most important of which are electrochemical and magnetic methods. 1 Electrochemical methods are based on the measurement of current or voltage during a chemical oxidation reaction. There are several distinct types of electroanalytical methods used for oxygen monitoring; the most important of them are polarography and electrocatalysis. The advantages of electrochemical sensors include the simplicity of their design, their high sensitivity, and their low unit cost. However, they possess a number of limitations related to the very strong temperature dependence of the signal, their short service life, the significant influence of interferents, and the possibility of poisoning by various chemicals. For this reason, magnetic rather than electrochemical methods are used in applications where high durability and accuracy of measurements are crucial (medicine, industrial process control, etc.). These methods take advantage of the fact that oxygen molecules are paramagnetic and are therefore drawn by the magnetic field into the area of greatest field strength.

A separate group of magnetic methods that can be used to determine the oxygen content are methods based on EPR (electron paramagnetic resonance) spectroscopy. These methods use the splitting of the energy levels of paramagnetic atoms in a magnetic field. EPR spectroscopy is commonly used to determine the structure of chemical compounds, and to study the mechanisms of chemical reactions and biochemical processes. 2,3 However, it requires sophisticated measuring equipment and is mostly used for condensed-phase studies. The theory of EPR spectroscopy and the design solutions of EPR spectrometers constitute a separate, broad field of science and are not discussed in this work.
Paramagnetic oxygen analyzers are used in many fields of science, industry, and medicine, but the design of the sensors themselves is similar in all of these instruments. Often the same oxygen sensor is used in different instruments for many applications. The differences between oxygen analyzers usually come down to the method of sampling, the gas mixture flow rate, the measuring range, the response time constant, or the method of visualizing the measurement results. For this reason, this work focuses on the design solutions of the oxygen sensors themselves, giving examples of their use in commercial analyzers. Because of their widespread use, these are mainly analyzers used for fuel combustion control and respiratory monitoring in intensive care.

■ PHYSICAL BASIS OF PARAMAGNETIC SENSOR OPERATION

Paramagnetic oxygen sensors use several physical mechanisms related to the effect of a magnetic field on oxygen molecules. These include the following: (1) the action of forces drawing oxygen into the area of the magnetic field; (2) changes in the magnetic susceptibility of oxygen as a function of temperature, resulting in a change in the magnetic forces acting on oxygen at different temperatures; (3) differences in paramagnetic gas pressures between areas with different magnetic field strengths; (4) the generation of gas flow or a change in its direction under the influence of a magnetic field; and (5) a change in other physicochemical properties of the fluid (density, refractive index, etc.). This section describes the most important physical relationships used in the design of paramagnetic oxygen sensors and presents them in a mathematically consistent form.

Paramagnetism. Magnetism arises from the movement of electrons in atoms. Hund's rule states that degenerate molecular orbitals are first occupied singly by electrons with parallel magnetic moments (spins) before electrons with opposite spins pair up. Orbitals that are completely filled with electrons do not exhibit magnetic properties; however, within orbitals with an unpaired electron, the magnetic effect is not balanced, and these electrons align with an externally applied magnetic field. Figure 1 shows the electron configuration of the oxygen molecule. 4 In the case of the two highest occupied orbitals (π*), the magnetic fields are not compensated; hence, there is a paramagnetic effect. O2 molecules strengthen the magnetic field, and in an external nonhomogeneous magnetic field they are drawn into the area of greater field strength.

One of the most important properties of paramagnets is that, although their atoms have a constant, nonzero resultant magnetic moment, the interactions between these moments are very weak. Consequently, in the absence of an externally applied magnetic field, the resultant magnetization of the material is zero due to thermal fluctuations. Only when an external magnetic field is applied are the magnetic moments of individual atoms partially oriented, and a resultant magnetic moment appears along the external field. Thus, the magnetic moment induced by the field is directed parallel to this field, and, hence, the paramagnetic susceptibility is positive, although relatively small in value. Magnetic susceptibility is a measure of the paramagnetic properties of a medium. It is a dimensionless proportionality constant that represents the degree of magnetization of a material caused by an external magnetic field.
The volumetric magnetic susceptibility (χ) is given by the following relationship:

χ = M/H (1)

where M [A/m] is the magnetization of the material (the magnetic dipole moment per unit volume) and H [A/m] is the magnetic field strength. The magnetic induction, B, is related to H by the following formula:

B = μ0(1 + χ)H = μH (2)

where μ0 = 4π × 10−7 H/m is the vacuum permeability and μ = μ0(1 + χ) is the permeability of the material. According to the Langevin theory, 5 the following is given:

χ = μ0 N m²/(3kT) (3)

where N is the number of atoms per unit volume, m is the permanent magnetic dipole moment of a molecule, k is Boltzmann's constant, and T is the absolute temperature. Equation 3 results in Curie's law, which has the following form:

χ = C/T (4)

where C is the material-specific Curie constant.

In addition to the volumetric magnetic susceptibility, two other measures of magnetic susceptibility are distinguished: the mass magnetic susceptibility (χm) and the molar magnetic susceptibility (χM). The two quantities are defined by the following:

χm = χ/ρ (5)

χM = χ M/ρ (6)

where ρ is the density in kg/m³ and M is the molar mass in kg/mol. The above definitions (eqs 3−6) follow the International System of Units (SI) conventions. However, many tables give values of magnetic susceptibility expressed in the centimeter-gram-second system of units (CGS). The dimensionless volumetric magnetic susceptibility in the CGS system can be converted into the SI value by multiplying the CGS value by 4π.

The vast majority of the gases that make up air are diamagnets (nitrogen, argon, carbon dioxide, and water vapor), and they have a slightly negative magnetic susceptibility. Among the components of air, only oxygen and nitrogen oxides (which may be present as air pollutants, but usually at negligible levels) are paramagnetic. For this reason, the magnetic properties of oxygen can be successfully used for its determination both in air and in mixtures where other paramagnetic gases are not present in significant concentrations. The molar magnetic susceptibilities of the main components of air and its most common pollutants are presented in Table 1.

Magnetic Forces and Their Influence on the Flow of Paramagnetic Fluids. Considering the force, Fm, acting on a dipole of moment m in an externally applied magnetic field, H, the Kelvin dependence is obtained: 7

Fm = μ0(m·∇)H (7)

By treating the medium as a set of dipoles, this formula can be used to determine the magnetic force acting on a single element of the medium. This dependence does not take into account the modification of the applied magnetic field by the dipoles of the medium, but in the case of para- and diamagnetic media, the changes in the external field are negligible. For paramagnetic fluids, after substituting formulas 1 and 5, relation 7 takes the following form (per unit volume):

fm = μ0 χm ρ (H·∇)H = (μ0/2) χm ρ ∇H² (8)

As a mixture of gases, air has a volumetric susceptibility of the following:

χ = Σi=1..n yi χi (9)

where yi is the volume fraction of substance i in air and χi is the volumetric magnetic susceptibility of substance i. Because the volumetric susceptibility of oxygen is much greater than that of the other air components, the formula for the magnetic susceptibility of air simplifies to the following:

χ ≈ yO2 χO2 (10)

where the O2 index denotes gaseous oxygen.
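Equations 5−10 and the CGS-to-SI conversion lend themselves to a short worked example. The sketch below is illustrative only: the tabulated susceptibility values are typical literature figures assumed for this example, not the entries of Table 1.

```python
import math

R = 8.314         # J/(mol K), gas constant
P_ATM = 101325.0  # Pa

def chi_molar_cgs_to_si(chi_cgs_cm3_per_mol):
    """Convert a molar susceptibility from CGS (cm^3/mol) to SI (m^3/mol):
    multiply by 4*pi and convert cm^3 to m^3."""
    return 4 * math.pi * chi_cgs_cm3_per_mol * 1e-6

def chi_volumetric(chi_molar_si, p=P_ATM, T=298.0):
    """Dimensionless volumetric susceptibility of an ideal gas:
    chi = chi_M * n/V = chi_M * p / (R*T) (inverting eq 6)."""
    return chi_molar_si * p / (R * T)

# Molar susceptibilities in CGS units (cm^3/mol); assumed typical values.
CHI_CGS = {"O2": 3449e-6, "N2": -12e-6, "Ar": -19.3e-6}

def chi_mixture(volume_fractions, p=P_ATM, T=298.0):
    """Mixture rule of eq 9: chi = sum_i y_i * chi_i."""
    return sum(y * chi_volumetric(chi_molar_cgs_to_si(CHI_CGS[g]), p, T)
               for g, y in volume_fractions.items())

print(f"chi(O2)  = {chi_mixture({'O2': 1.0}):.2e}")  # ~1.8e-6 at 298 K
print(f"chi(air) = {chi_mixture({'O2': 0.209, 'N2': 0.781, 'Ar': 0.01}):.2e}")
# chi(air) is ~20% of chi(O2), dominated by the oxygen term (eq 10).
```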
Using the relationship for the strength of the magnetic field,

H = B/μ0 (11)

the magnetic mass force can be approximated by the following relationship, assuming that χm ρ ≪ 1:

fm = (χm ρ/2μ0) ∇B² (12)

By comparing the molar magnetic susceptibilities of the main components of air, measured at room temperature (Table 1), it can be seen that this assumption is fully justified. The Kelvin force (eq 12) can be introduced into the Navier−Stokes momentum equation as the external force acting on the differential element of the fluid, 8 as follows:

ρ Du/Dt = −∇p + η∇²u + (χm ρ/2μ0) ∇B² (13)

where Du/Dt is the material derivative of the flow velocity vector u with respect to time t, p is the pressure, and η is the fluid viscosity. If χm and ρ are constant and there are no forces acting on the fluid other than the magnetic force, it remains at rest (u = 0), and eq 13 takes the form

0 = −∇p0 + (χm0 ρ0/2μ0) ∇B² (14)

where the subscript 0 in p0, χm0, and ρ0 indicates the static state. If there is a temperature difference in the medium, the magnetic permeability and density of the fluid take different values depending on the local temperature. The pressure, p, can then be expressed as the sum of the static pressure, p0, and its disturbance, p′:

p = p0 + p′ (15)

By subtracting eq 14 from eq 13, we obtain the following:

ρ Du/Dt = −∇p′ + η∇²u + [(χm ρ − χm0 ρ0)/2μ0] ∇B² (16)

For slight temperature differences, the magnetic susceptibility can be expanded in a Taylor series in (T − T0) as

χm = χm0 + (∂χm/∂T)(T − T0) (17)

where, according to eq 4,

∂χm/∂T = −χm0/T0 (18)

The gas density can be approximated as follows:

ρ = ρ0[1 − (T − T0)/T0] (19)

Hence, the momentum eq 16 takes the following form:

ρ Du/Dt = −∇p′ + η∇²u − [χm0 ρ0 (T − T0)/μ0 T0] ∇B² (20)

This formula shows that the magnetic buoyancy term depends on the gradient of the square of the magnetic induction and on the temperature difference.

Pressure Difference. The basis of the operation of many paramagnetic oxygen analyzers is the creation of a pressure difference between regions of different magnetic field strength. To determine the value of the pressure difference, let us consider an elementary volume of gas, dV = dx dy dz, placed in an inhomogeneous magnetic field of intensity H (Figure 2). According to the ideal gas law, the number of moles of oxygen in the volume dV is equal to

dn = pO2 dV/(RT)

where pO2 is the oxygen partial pressure and R is the gas constant. Let us perform a balance of the forces acting on this dV element along the x-axis. The pressures p(x) and p(x + dx) = p(x) + (∂p/∂x) dx are exerted on the surfaces A1 and A2, respectively. Hence, the force resulting from the pressure difference is equal to

dFp = −(∂p/∂x) dx dy dz (21)

At equilibrium, this force is counterbalanced by the magnetic force:

dFm = [μ0 χM pO2/(2RT)] (∂H²/∂x) dx dy dz (22)

By comparing eq 21 and eq 22, we get the following differential equation:

∂p/∂x = [μ0 χM pO2/(2RT)] ∂H²/∂x (23)

Hence, integrating between a field-free point (H = 0) and a point where the field strength is H0 (eqs 24 and 25), we obtain

Δp = μ0 χM pO2 H0²/(2RT) = χM pO2 B0²/(2μ0 RT) (26)

It should be emphasized that the pressure difference is independent of the spatial distribution of the field between 0 and H0. Formula 26 allows for estimation of the pressure increase in the magnetic field under typical operating conditions of an oxygen sensor. Assuming B = 1 T, at a pressure of 1 atm and a temperature of 298 K, the increase in oxygen pressure is 0.70 Pa. In turn, an increase of 100 K in temperature under these conditions causes a change in oxygen pressure of −0.18 Pa.
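The two numerical estimates above can be reproduced in a few lines of code. A minimal sketch, not from the Review: the molar susceptibility of O2 is an assumed literature value and, as in the estimate in the text, it is held constant over the 100 K step, so only the explicit 1/T factor of eq 26 acts.

```python
import math

MU0 = 4 * math.pi * 1e-7   # H/m, vacuum permeability
R = 8.314                  # J/(mol K), gas constant
CHI_M_O2 = 4.3e-8          # m^3/mol, molar susceptibility of O2 (assumed literature value)

def delta_p(p_o2, T, B):
    """Pressure rise of oxygen drawn into a field of induction B (eq 26):
    delta_p = chi_M * p_O2 * B^2 / (2 * mu0 * R * T)."""
    return CHI_M_O2 * p_o2 * B**2 / (2 * MU0 * R * T)

dp_298 = delta_p(101325.0, 298.0, 1.0)   # pure O2, 1 atm, 298 K, B = 1 T
dp_398 = delta_p(101325.0, 398.0, 1.0)   # same conditions, 100 K warmer
print(f"dp(298 K) = {dp_298:.2f} Pa")            # ~0.70 Pa, as in the text
print(f"+100 K    = {dp_398 - dp_298:+.2f} Pa")  # ~ -0.18 Pa, as in the text
```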
■ OXYGEN SENSORS

There are different criteria for the classification of paramagnetic oxygen sensors. The most general division takes into account the type of magnetic field used. Devices equipped with permanent magnets are called magnetostatic, while devices equipped with electromagnets generating an alternating magnetic field are called magnetodynamic.

Sintered rare earth oxide magnets with a volume of several milliliters can create a magnetic field of up to 1 T; therefore, magnetostatic sensors can be small and have low power consumption. Usually, measurements made with such instruments are slow, with time constants ranging from a few to several seconds. However, this is not so much due to their operating principle as to their design solutions, because their correct operation requires a small and stable flow of the measured gas. Significant increases in measurement speed are observed in microelectromechanical system (MEMS) designs, where the miniature size and small dead volumes ensure rapid gas exchange even at very low flow rates.

Magnetodynamic oxygen sensors use a strong, fast-changing electromagnetic field in the air gap of an electromagnet, in which oxygen molecules vibrate at a frequency corresponding to the frequency of the current driving the coil. The intensity of the particle vibrations can be measured with a microphone or another sensor. Because of the energy losses in the core of the electromagnet, the typical operating frequency corresponds to the lower range of acoustic frequencies, between 100 and 200 Hz. Because of this principle of operation, these sensors are also called magnetoacoustic. The great advantage of magnetoacoustic oxygen sensors is their very short response time. The disadvantage, however, is the large size resulting from the presence of the electromagnet and the high power consumption. These types of sensors are now widely used in intensive care monitors.

Measurement with magnetoacoustic sensors requires a continuous flow of the gas mixture through the measuring chamber. Their operation requires a reference gas, which must be mixed with the measured gas within the area of the homogeneous magnetic field. In most applications, ambient air can be used as the reference gas, and the differential principle of operation is especially useful in cases where the difference in oxygen concentration between two gases needs to be determined. In practice, the need to use a reference gas necessitates a more complex structure, larger dimensions, and the use of a pneumatic balance. In addition, the microphone used must be characterized by high sensitivity, stability, and tightness.

Both magnetostatic and magnetodynamic devices use various physical phenomena occurring under the influence of a magnetic field in a gas mixture containing paramagnetic oxygen. There are practically no oxygen sensors that measure the absolute values of the physical parameters of the gas, that is, pressure, magnetic susceptibility, viscosity, or refractive index. This is due to the difficulty of measuring very small changes in the signal against a large DC component. Therefore, differential measurements are made with respect to a gas not subjected to a magnetic field, or the magnetic forces acting on the gas are measured, because in the absence of oxygen they are equal to zero.
The physical phenomena most commonly used to measure oxygen concentration are as follows: (1) the buoyancy force acting on a diamagnetic gas in a closed vessel surrounded by a paramagnetic gas, placed in a constant magnetic field (Pauling cell); (2) the periodic changes in pressure or flow in a pneumatic bridge, which arise as a result of applying an alternating magnetic field to one of the bridge arms (magnetoacoustic and magnetopneumatic sensors, respectively; these systems require the use of a reference gas, usually air); (3) the "thermomagnetic wind" generated by differences in the magnetic susceptibility of different regions of the gas, which result from its unequal heating (in a constant magnetic field, regions of gas with greater susceptibility are then subjected to a greater force); (4) the deviation of the gas flow direction from a straight line under the influence of a constant magnetic field; (5) changes in magnetic field strength induced by paramagnetic oxygen; and (6) changes in other physicochemical properties of the paramagnetic gas, such as the refractive index. The following sections present the design solutions of sensors using the individual types of physical phenomena to measure oxygen concentration, as well as new concepts appearing in the scientific and patent literature.

Sensors Measuring Buoyancy Force. The first designs of devices measuring the magnetic buoyancy force were developed in the late 1930s and early 1940s. 9 The oldest solution of this type is the Pauling cell. In this type of sensor, two nitrogen-filled glass spheres are connected to each other like a dumbbell and mounted on a rotating suspension (a quartz thread). This assembly is placed in the field of a strong permanent magnet (Figures 3 and 4), and a stream of the analyzed gas mixture (several milliliters per minute) flows around it. Because of the paramagnetic properties of oxygen, its molecules tend to accumulate between the poles of the magnet, which increases the gas pressure locally. As a result, the nonparamagnetic dumbbell is displaced from its equilibrium position. The movement of the assembly is detected optically: a small mirror attached to the quartz thread (and rotating with the assembly) reflects a light beam onto a scale calibrated in units of oxygen concentration or partial pressure. A drying agent, silica gel, is placed at the inlet to the chamber. A description of the design and application of the direct-reading analyzer can be found in the work by Woolmer. 10

To obtain greater accuracy, modifications are made to the Pauling cell enabling measurement by the null balance method. The deflection of the light spot in the analyzer is compensated by a current flowing in a coil surrounding the quartz dumbbell suspension. A multiturn potentiometer, which controls the compensation current, is calibrated in oxygen concentration units. An example of such an analyzer is the DLC.101 produced by Servomex Controls Ltd. 11 It has a measuring range of 0−100% oxygen and a potentiometer graduated in 0.1% steps. The device provides an accuracy of ±0.1% for a 5 K change in ambient temperature. The operating temperature range is 263−313 K, and the optimal sample flow rate is 100 mL/min. The response is exponential, and the 90% response time is less than 8 s. The contemporary design of the Pauling cell, used in ABB Advance Optima and EasyLine 12,13 oxygen analyzers, is shown in Figure 4. The cell has an internal volume of approximately 6 mm³, and the diameter of the glass probe body is 2 mm.
Such classic paramagnetic sensors have a number of significant advantages. They have a short response time (3−10 s), limited mainly by the gas exchange rate in the relatively large volume of the sensor, and interference from diamagnetic gases is negligibly low. Currently, most of the commercial devices using oxygen paramagnetism are dumbbell constructions. They are produced by, among other companies, ABB, 14 Servomex, 15 Ankersmid, 16 Teledyne Analytical Instruments, 17 Sigas, 18 LFE Process Analytical Instrumentation, 19 Systech Illinois, 20 and Fuji Electric. 21 The improvement of this classic design is also ongoing, in terms of both mechanics and signal processing, as reflected in the patent literature. 22−26 Sensors of this type allow for very precise and accurate measurement of the oxygen content in gas mixtures. High-quality dumbbell sensors can achieve a resolution and accuracy better than 100 ppm in the oxygen concentration range of 10−100%. 27

In recent years, developments in materials and electronics engineering have led to miniaturization and further significant improvements in cell performance, while maintaining the original operating principle. A response time of 1 s has been achieved, which extends the range of applications of this sensor to fast-changing processes. In addition, the degree of complexity has been reduced, which has shortened the production time of the devices. An example is the MEMS sensor described in the literature, 12 which consists of the same functional elements as the classic structure shown in Figure 4. The sensor has a rotating body probe in the shape of a paddle with dimensions of 3 × 7 mm, suspended on folded flat springs (Figure 5). The compensation coil is structured on the edge of the body probe. On the spring surface, there are conductive tracks connecting the compensation coil with the pads (ohmic contacts) on the bond frame. The gas channels and the inlet to the sensor have been designed in such a way as to minimize the gas exchange time in the sensor, while not causing the signal disturbances that may occur with a fast gas flow. As in classic solutions, the position of the probe is determined optically through a window located in the sensor housing. The above sensor has a detection limit of 50 ppm of O2 at 1 Hz and a response time of 1.3 s. The influence of the flow rate was determined to be ±70 ppm of O2 in the flow range of 6−7 mL/min. The obtained results show that this sensor has appropriate parameters for a number of industrial applications; however, similar to the classic solutions, the sensor is sensitive to vibrations, and its use is therefore limited to stationary applications.

Another sensor design based on measuring the force exerted on a nonmagnetic body placed in a magnetic field is described in other work. 28 This is achieved by the use of a piezoelectric bimorph, composed of two piezoelectric plates placed one on top of the other and attached at one end to a wall of the casing (Figure 6). The other end is inserted into a nonmagnetic body and placed in a magnetic field between the poles of a permanent magnet. An alternating voltage induces deflection of the bimorph, generating a pressure variation. This pressure variation induces a variable load, which can be processed by dedicated electronics. The signal frequency is matched to the mechanical resonance of the transducer.
Experimental studies have shown that the response time for oxygen concentrations in the range of 10−90% is 95 ms, and the concentration determination error is less than 3%. 28

Magnetoacoustic and Magnetopneumatic Sensors. The first magnetoacoustic sensor was developed by Hummel, 29 who built a cell in which two gases mix in a homogeneous magnetic field (Figure 7). In addition to the gas to be measured, a reference gas, usually air or nitrogen, is fed into the gap. The alternating magnetic field causes periodic pressure changes in the gap and in the gas lines to which a differential microphone is connected (Figure 8). Figure 8 shows the operation diagram of the magnetoacoustic sensor and the pressure distributions in the gas lines for different oxygen contents in the sample and in the reference gas. The pressure measured by the microphone corresponds to the pressure difference at inlets A and C. Apart from the pressure drop due to flow resistance, the pressure at point B, where the sample and reference gas lines connect, is the same for both gases. However, along the AB and BC lines, an increase in pressure occurs that is proportional to the partial pressure of oxygen. It follows that if the oxygen contents of the two gases differ, the pressure difference when the magnetic field is turned on will be proportional to the difference in oxygen concentration. Hence, the amplitude of the output signal at a constant amplitude of the alternating magnetic field and a constant temperature is given by the following formula:

U = k(pO2,A − pO2,C) (27)

where k is a constant depending on the design of the microphone, and pO2,A and pO2,C are the oxygen partial pressures at inlets A and C. The small volume of the measuring cell ensures a short response time, which makes the system suitable for medical applications, for example, in anesthesia. With an appropriate design, noise levels of less than 0.03% oxygen can be achieved. Magnetoacoustic sensors were used in metabolic and respiratory care monitors manufactured in the 1970s and 1980s by Godard, 31 Hartmann and Braun, 32 and Datex Instrumentarium Corp. 30,33 This principle of operation is also used in analyzers for combustion control and industrial processes. 34 Although magnetoacoustic instruments were among the first used to measure oxygen concentration, this technique is constantly being improved. 35−37

An analyzer containing a microphone is inherently sensitive to mechanical vibrations and sounds; therefore, it is important that the design isolate the microphone from the environment as much as possible. The differential measuring principle used in magnetoacoustic sensors is an advantage in applications such as measuring oxygen consumption. However, the need for a continuous reference gas supply is a significant disadvantage in closed anesthetic circuits, where air cannot be used as a reference gas because this would result in a slow accumulation of nitrogen in the respiratory system. Therefore, other designs of magnetoacoustic sensors are being developed, in which the acoustic wave generated directly in the electromagnet gap is measured. The alternating magnetic field causes the oxygen molecules within the electromagnet gap to vibrate synchronously with the current fed to its winding. The resulting acoustic wave is detected by a microphone located near the aperture. Such a system does not require a reference gas or a complex gas system; however, so far, descriptions of such systems can be found mainly in patent applications, 38−40 and there is therefore no reliable information on their metrological parameters.
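To illustrate the differential principle behind eq 27, a minimal sketch under stated assumptions: the eq 26 pressure rise is applied to each bridge arm, k = 1, B = 0.5 T, and the susceptibility of pure O2 is a typical assumed value; none of these figures come from the Review.

```python
import math

MU0 = 4 * math.pi * 1e-7
CHI_O2 = 1.8e-6   # volumetric susceptibility of pure O2 at 1 atm, 298 K (assumed)

def arm_pressure_rise(y_o2, B):
    """Pressure rise in one bridge arm; only the O2 fraction contributes (eqs 10 and 26)."""
    return y_o2 * CHI_O2 * B**2 / (2 * MU0)

def microphone_signal(y_sample, y_reference, B=0.5):
    """Differential pressure seen by the microphone, in the spirit of eq 27 (k = 1)."""
    return arm_pressure_rise(y_sample, B) - arm_pressure_rise(y_reference, B)

# A 60% O2 anesthetic mixture measured against ambient air (20.9% O2):
print(f"{microphone_signal(0.60, 0.209) * 1000:.0f} mPa")  # ~70 mPa at B = 0.5 T
```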
The structure of magnetoacoustic sensors is similar to that of magnetopneumatic sensors. The only design difference is that, instead of a microphone measuring the pressure difference in the pneumatic bridge, microflowmeters are used to measure the flow between the bridge arms. The advantage of this type of construction is the elimination of a microphone that is sensitive to vibrations and temperature changes. The differential operation of the thermoanemometric flowmeters also ensures very good compensation of temperature and power fluctuations. The disadvantage is that the signal is smaller and the time constant is many times greater than that of a microphone. This limits the frequency of the magnetic field modulation to several hertz, as compared to 100−200 Hz in devices with a microphone. Magnetopneumatic sensors are widely used in modern designs of oxygen analyzers. Examples are instruments from Fuji Electric, 41 Siemens, 42 and Horiba, 43 which provide a measurement error of less than 1% O2 over a measuring range of 0−100% O2, with a time constant of 1.5−3 s.

Magnetic Wind Oxygen Sensors. The movement of paramagnetic oxygen molecules toward the magnetic field is called a magnetic wind. A detailed description of the early use of this phenomenon can be found in the literature. 44−46 A schematic diagram of such an analyzer is shown in Figure 9. The inlet gas stream flows through a ring-shaped measuring chamber. A thin-walled glass tube passes through the center of the ring, providing a direct gas connection between the left and right sides of the measuring chamber. A heating wire is wound around the glass tube to form the two arms, P1 and P2, of an AC bridge. The left part of the tube (at the point indicated in Figure 9) is placed between the poles of a permanent magnet in such a way that the magnetic field lines are perpendicular to the plane of the drawing. If there are regions with different temperatures, T1 and T2, in a homogeneous magnetic field, the force, F, attracting the cold element of the oxygen volume is proportional to the following:

F ∝ p H (dH/dx) (1/T1² − 1/T2²) (28)

where H is the magnetic field strength, dH/dx is the field gradient, and p is the oxygen partial pressure. Cold, oxygen-containing gas is therefore drawn into the glass tube from the left side. Once the gas is drawn into this part of the tube, it is heated and loses its magnetization. It is then pushed out by the cold gas entering the tube from the left, which creates a flow of gas (the so-called magnetic wind) through the tube. Furthermore, the flow cools the P1 winding relative to P2 and unbalances the AC bridge. The resulting temperature difference between the arms is influenced by the specific heat of the gas and its flow rate through the tube. The bridge signal therefore depends not only on the oxygen concentration but also on the specific heat and viscosity of the gas mixture. In turn, the analyzer output voltage, V, depends on many factors, including the bridge current, the magnetic field strength of the permanent magnet, the ambient temperature, the absolute gas pressure, the type of carrier gas, and the oxygen concentration. In practice, the output voltage usually decreases by about 1.5% per 1 K change in temperature and increases by 1.8% for a pressure increase of 1 kPa. 47 Therefore, the analyzers are usually thermostated and often pressure compensated. The bridge output is nearly linear up to 10% O2.
The zero settings depend on, among other things, the position of the tube, due to the influence of natural convection. A numerical analysis of the magnetic wind phenomenon in a cylindrical pipe is presented in ref 48. The study considered a pipe with a diameter of 6 mm and a length of 28 mm, with a heated zone (10 mm long) placed in a constant magnetic field (Figure 10). The study assumed steady-state, laminar, incompressible gas flow through the pipe. Moreover, it was assumed that with the temperature change, only the magnetic susceptibility of the gas changes, while its other physical parameters remain unchanged; the effect of natural convection was also ignored. Magnetic thermal convection was found to significantly increase both the average velocity of gas flow through the pipe and the heat transfer coefficient. For a magnetic induction of 1.32 T, a heat flux of 200 W/m², and an inlet-to-outlet pressure difference of 0.008 Pa, when the oxygen concentration in the gas changes from zero to 100%, the average gas flow velocity increases by 70.7% and the temperature of the tube wall changes by 15 °C. As the pressure difference between the inlet and outlet increases, the thermal-magnetic convection weakens; its influence can be observed only when the pressure difference is less than 0.014 Pa. The resolution of the tested system was estimated at approximately 0.0067% oxygen concentration. In other work, 49 one can find theoretical considerations on the possibility of miniaturizing this type of sensor and implementing it in low-temperature cofired ceramics (LTCC) technology.

A contemporary instrument using the magnetic wind method is the XMO2 analyzer from General Electric. 50 Figure 11 shows the structure of the sensor used and its principle of operation. The sensor contains a permanent magnet located in the center of the cell. Two pairs of thermistors are placed above one of its poles in such a way that one thermistor of each pair is in the strong magnetic field and the other is beyond it. The thermistors are electrically heated, and the entire cell is thermostated at 45 °C. Figure 12 shows the arrangement of both pairs of thermistors. A small portion of the measured gas diffuses from the lower to the upper part of the measuring chamber. The presence of paramagnetic gas causes an increase in pressure in the center of the chamber, where the magnetic field strength is greatest. At the same time, the pressure of the measured gas is somewhat lower near the thermistors, because their high temperature reduces the magnetic susceptibility of oxygen. This slight pressure difference causes gas to flow from the center of the magnetic field outward over the thermistors. As a result, the internal thermistors cool, while the external thermistors, bathed in the warm gas, heat up. Both pairs of thermistors are placed in the arms of an electric bridge that measures the asymmetry in resistance induced by the temperature difference between the thermistors. The signal from the bridge is proportional to the oxygen concentration in the measured gas. This sensor has an accuracy of 1% in the range of 1−100% oxygen, with a linearity of ±0.5% of the measuring range and a response time of not more than 5 s. The influence of flow rate in the range of 50−1000 mL/min is less than 1% of the scale. The effect of pressure is ±1.5% per kPa (without compensation). 50

Sensors with Deflection of the Gas Stream.
Another method of measuring oxygen involves changing the direction of its flow in a magnetic field, which causes a partial separation of oxygen from the remaining components of the gas mixture. An example of such a sensor is described in the literature. 4,51 The schematic diagram of this sensor is shown in Figure 13. Behind the gas inlet, there is an area where the stream of oxygen molecules is deflected in a nonuniform magnetic field generated by a suitably shaped permanent magnet placed at the side of the sensor. The gas stream is distributed into three channels: a reference channel, a central (main) channel, and a measurement channel. Thermoanemometers in the reference and measurement channels measure the gas flow velocity. The measured gas introduced into the sensor is subjected to the inhomogeneous magnetic field. A diamagnetic gas interacts very weakly with the magnetic field and flows mainly through the central channel; in the measurement and reference channels, the flow velocity then assumes its minimum value, and, because of the symmetry of the sensor, the flows in the two channels are the same. If the gas contains paramagnetic oxygen, it is deflected toward the measurement channel, increasing the flow velocity there and, at the same time, reducing the flow velocity in the reference channel. The change in flow velocity depends on the oxygen concentration in the measured gas. Thermoanemometers incorporated into a Wheatstone bridge measure the difference in gas flow velocity between the measurement and reference channels, which allows the oxygen concentration in the measured gas to be determined.

The main advantages of this type of sensor result from the division of the measured gas stream and the lateral arrangement of the permanent magnet. Because of the geometry of the sensor channels, the diamagnetic gas flows mainly through the central channel, which results in a low gas flow velocity through the reference and measurement channels. Low flow velocities make it possible to accurately measure very small flow changes. The operation of the thermoanemometers in a bridge system enables compensation of the influence of temperature and pressure. The lateral position of the permanent magnet makes it possible to increase its size, unlike in other sensors operating on the principle of changing flow velocity, where the size of the magnet is limited by the size of the channel in which the measured gas is subjected to the magnetic field. By selecting a larger, stronger magnet, the accuracy of the sensor is increased. The sensor described in previous work 4,51 was fabricated using MEMS technology; a scanning electron microscope (SEM) image of this sensor is shown in Figure 14. Figure 15 shows the sensor response as a function of the oxygen concentration in nitrogen. In Figure 15, a sensor response noise of 1% O2 and a signal drift due to changes in gas flow rate are observed. Because thermoanemometers are used to measure the gas flow velocity, the response depends on the thermal conductivity and heat capacity of the gas mixture. However, after appropriate signal processing, the sensor may be sufficient for applications that do not require high measurement precision.

Oxygen Sensors with Magnetic Field Strength Measurements. Paramagnetic substances increase the magnetic field induction, and, although the change is small, this phenomenon can be used to determine the oxygen concentration in the presence of diamagnetic gases.
The greatest difficulty with such measurements is determining the very small change ΔB against the large constant component B. To estimate ΔB, let us assume that the sensor has a strong magnetic field with an induction of B = 1 T, which can be produced by large magnets made of rare earth oxides. If we introduce pure oxygen into the field area at a pressure of 1 atm and 298 K, then, according to eq 2, the change in induction with respect to nitrogen will be ΔB ≈ χO2 B ≈ 1.8 μT.

Many types of sensors can be used to measure the strength of the magnetic field, based on, among other effects, the Hall effect, giant magnetoresistance (GMR), anisotropic magnetoresistance (AMR), tunneling magnetoresistance (TMR), and giant magnetoimpedance (GMI). A good review of new materials and possible mechanisms of giant magnetoresistance is given in ref 52. Moreover, interesting two-dimensional magnetic materials have recently been developed; they offer unique capabilities such as electric-field control of magnetic phases and the anomalous spin Hall effect. 53 All of these sensors are based on the change in the electrical properties of a material when an external magnetic field is applied. According to other work, 54 the lower measurement limit of GMI, TMR, and AMR sensors is below 1 nT, while for GMR and Hall sensors it is about 1 μT or more. Hall sensors are preferably used at higher magnetic field values because, in contrast to magnetoresistors (MRs), they show no saturation effects. However, the detection of the changes in magnetic field induction caused by oxygen is at the limit of the measurement capabilities of Hall sensors. For this reason, oxygen sensors based on the absolute measurement of the magnetic field by means of Hall sensors have not been put into practice, although there are patents describing their operation. 55−57
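The 1.8 μT estimate above, together with the detection limits quoted from ref 54, can be checked with a short script. A minimal sketch; the susceptibility value and the order-of-magnitude limits are assumptions restated from the text, not device specifications.

```python
CHI_O2 = 1.8e-6   # volumetric susceptibility of pure O2, 1 atm, 298 K (assumed)

def delta_B_tesla(y_o2, B):
    """Induction change caused by the paramagnetic gas: dB ~ y_O2 * chi_O2 * B."""
    return y_o2 * CHI_O2 * B

print(f"pure O2, B = 1 T: dB = {delta_B_tesla(1.0, 1.0) * 1e6:.1f} uT")   # ~1.8 uT
print(f"1% O2,   B = 1 T: dB = {delta_B_tesla(0.01, 1.0) * 1e9:.0f} nT")  # ~18 nT

# Order-of-magnitude lower limits from ref 54, as quoted in the text:
limits_nT = {"GMI/TMR/AMR": 1.0, "GMR/Hall": 1000.0}
for family, limit in limits_nT.items():
    resolvable = delta_B_tesla(0.01, 1.0) * 1e9 > limit
    print(f"{family:12s} can resolve a 1% O2 step: {resolvable}")
```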
The relative orientation of the measured magnetic field vector with respect to a Hall sensor chip is perpendicular, whereas for MR sensors it is parallel. If an MR sensor is positioned perpendicular to the magnetic field lines, its indication will be zero. This fact can be used to measure the slight transverse fluctuations in field strength caused by the interaction of paramagnetic oxygen molecules with an external magnetic field. One way of implementing this idea in practice is presented in another paper. 58 The principle of operation of the micro paramagnetic oxygen sensor described in the cited paper is based on the deflection of the magnetic field in the vicinity of a gas channel. The sensor (Figure 16) consists of a silicon body placed on a glass substrate. The gas channel is etched into the silicon, and an AMR sensor is placed on one of its side walls. The whole body is located between the poles of a permanent magnet producing a flat magnetic field in the z direction. If a diamagnetic gas (e.g., nitrogen) is present in the channel, the Bz component of the magnetic field is constant and the Bx component is zero. The orientation-sensitive AMR device is set up to measure only the Bx component of the magnetic field, so the signal is also zero. In the presence of oxygen, the field in the vicinity of the channel is locally distorted, and a nonzero Bx component appears, which the AMR device detects.

A two-dimensional simulation of the sensor operation was conducted with FEMM (Finite Element Method Magnetics) software. 58 A constant magnetic field of 20 kA/m was applied to an area including the measurement channel, with a geometry of 200 × 400 μm, and the perpendicular magnetic field at a distance of 100 μm outside the measurement channel was evaluated. The results of the simulation showed that the expected signal was approximately 2 nT. The oxygen sensor was tested with oxygen/nitrogen mixtures. With a magnetic field (0.6 T) applied, 20% oxygen-in-nitrogen concentration steps from 0% to 100% were observed, with a change in the output signal of about 150 nV.

A similar measurement concept can be found in a patent, 59 but the design solution itself is different (Figure 17). In this case, the oxygen sensor comprises a GMR device, a magnetic field generator arranged to generate a magnetic field overlapping the GMR device, and an examination region. A component, Bx, of the magnetic field, dependent on the oxygen concentration in the examination region, is detected by the GMR device. In the absence of oxygen in the examination region, the symmetry of the system causes the magnetic field to be oriented transversely to the planar magnetic field sensor. The spin-valve type of GMR device is sensitive to the magnetic field component Bx and is insensitive to the other magnetic field components. As the oxygen concentration in the examination region increases, oxygen molecules align (in a statistical sense) with the magnetic field and strengthen it. This introduces an asymmetry in the magnetic field that includes a perturbation component, Bx, oriented along the x-direction, as shown in Figure 17. The GMR device detects and measures this perturbation component, and the measured in-plane component Bx is proportional to, or at least monotonically increasing with, the oxygen concentration in the examination region.

Another physical mechanism that makes it possible to measure changes in B with the required accuracy is laser interferometry. However, this method is difficult to apply because of the enormous sensitivity of the system to mechanical disturbances. In another study, 60 the authors presented a conceptual oxygen analyzer based on a phase modulator detecting the change in the optical path length of a light flux in the signal arm of a fiber-optic interferometer (FOI). The change in the optical path is due to the distortion of a magnetostrictive material that is attached to the FOI signal fiber. An example of such a sensor design is shown in Figure 18. The sensor consists of a toroidal measuring vessel (V), to which a strip of magnetostrictive material (MS) is attached together with a closely adjacent optical fiber (OF) loop. The OF loop, together with the mirrors, R1 and R2, at its ends, forms the FOI. Dielectric coatings are used on the ends of the optical fiber to increase the reflectance (R) and thus ensure the required FOI quality factor. An electric coil, powered by a voltage source E2, is wound on the outer surface of the measuring chamber to form a constant magnetic field, H0, inside the chamber. A piezo-corrector is used to adjust the FOI to the operating point (a phase φ0) that corresponds to the maximum value of dR/dφ. The other structural elements marked in Figure 18, such as the lens (L), beam splitter (S), photodetector (PD), and laser (Ls), are standard for fiber interferometry in sensing applications.
The authors carried out a numerical evaluation of the sensor's operation and found that, while the upper boundary of the measured concentration is determined mainly by the mechanical stability of the sensor construction, its lower boundary depends to a considerable extent on the choice of the method for measuring the small photodetector currents. They suggest that a bridge measurement method provides a comparatively simple measurement of O2 concentration at a level of ≤370 ppm, which corresponds to relative O2 contents in gas mixtures at the level of fractions of a percent.

Sensors Measuring the Change in the Physical Properties of a Gas. The drawing of oxygen molecules into the magnetic field causes not only an increase in pressure but also a local change in many other physical properties of the gas, such as its density and thermal conductivity, as well as changes in the speed of sound or the refractive index. There have also been attempts to use these effects to measure oxygen concentration. In one patent application, 61 a magneto-optical measurement method was described. The idea is to group the oxygen molecules present in the gas mixture in the immediate vicinity of the sensor surface by means of a periodic magnetic field, so that a diffraction grating is formed in the gas layer at the sensor surface. When the gas diffraction grating is illuminated, diffraction occurs, and by placing a light detector at a location corresponding to the diffraction angle, the intensity of the incident light can be measured. The intensity is a function of the refractive index change, which depends on the local magnetic field strength and the partial pressure of the paramagnetic gas. The advantages of this kind of sensor are its short response time and long service life. In addition, this sensor works without a reference gas and does not require a pump. The sensor is small and of simple design and is not very sensitive to environmental disturbances. The principle of operation of such a sensor is shown in Figure 19.

The device (Figure 19) is made of a matrix of elongated magnetic elements placed periodically under the outer surface of the sensor. The magnetic elements are magnetized in such a way that their magnetic poles are located on the longer edges of the elements. The elements are arranged with defined gaps, and neighboring magnets face each other with opposite poles. In each gap between the magnets, a magnetic field is created that extends over the outer surface of the sensor, affecting the gas mixture there. Paramagnetic gas molecules present in the gas mixture above the sensor surface move toward the longitudinal regions of the magnetic field from the magnetic elements below the surface. As a result, long, narrow, and shallow areas with a high concentration of paramagnetic gas are formed on the outer surface of the sensor. As the gas density in these regions increases, the refractive index also increases, and a phase difference arises in the monochromatic light reflected from these regions, causing diffraction. The patent application 61 describes many configurations of this apparatus, including reflection and transmission diffraction gratings composed of magnetic elements, various grating patterns, and methods of generating the magnetic field. Again, however, there are no experimental results confirming the practical usefulness of this method.

Patent applications 62,63 describe oxygen sensors based on measuring changes in the speed of sound and in the thermal conductivity.
However, both of these methods are, by definition, nonselective, because both the thermal conductivity and the speed of sound depend strongly on the gas composition. As a result, the effects arising from changes in the concentrations of the components accompanying oxygen may be many times greater than those resulting from oxygen paramagnetism. Moreover, these methods also face the problem of measuring very small changes in a physical quantity in the presence of a large constant component.

■ SUMMARY

Magnetic sensors are an important group of oxygen sensors, characterized by high measurement accuracy and durability. They are used mainly in areas where the credibility and reliability of the measurement are paramount, such as industrial process control or medicine. There are many types of sensors that use the paramagnetic properties of oxygen, but the most common are the "dumbbell" type, magnetoacoustic and magnetopneumatic sensors, and sensors using the thermomagnetic wind principle. In industrial and laboratory research, dumbbell and magnetic wind sensors are mainly used. However, where a very small time constant and differential measurement against a reference gas are required, for example, in medicine, magnetoacoustic and magnetopneumatic analyzers dominate.

All paramagnetic sensors measure the number of oxygen molecules per unit volume. Therefore, when calculating the oxygen volumetric concentration according to the gas state equation, the influence of temperature and pressure must be taken into account; in addition, the change in magnetic susceptibility with temperature must be considered. For this reason, the measurement error of these instruments largely depends on ensuring stable measurement conditions, that is, temperature and flow control, as well as isolation of the measuring chamber from mechanical disturbances, and is usually below 1% of the measurement range. The detection limit of paramagnetic analyzers is at the level of single ppm of O2. Table 2 lists the most important features of each type of oxygen analyzer.

From the reports presented in the scientific literature and the latest patent applications, two major trends in the development of paramagnetic oxygen sensors can be distinguished. The first is the miniaturization of known types of sensor designs; examples include microphone sensors and inert gas displacement sensors manufactured using MEMS technology. The miniature analyzers produced so far have slightly worse parameters than their classic counterparts, but the very rapid development of MEMS technology allows for continuous improvement of their design. In the future, miniature, cheap, commercially manufactured oxygen sensors will probably find applications in areas where they are currently used only sporadically, for example, in household ventilation systems. In addition, sensors produced in MEMS technology, owing to their miniature size and low energy consumption, could be mounted in everyday objects such as mobile phones or watches. The second visible direction of development is the construction of sensors with a principle of operation different from that of commonly used devices. Novel ideas concerning the measurement of oxygen concentration by means of various physical effects resulting from its paramagnetic properties are constantly appearing in the literature. These include sensors that use gas flux deflection and changes in magnetic field strength, refractive index, thermal conductivity, and speed of sound.
The greatest advantages of this type of solution may be the simplicity of construction, reliability, and insensitivity to shocks resulting from the lack of moving mechanical parts. These sensors are currently at the stage of laboratory research, and their continued development and the systematic improvement of their metrological parameters should be expected. In the future, these devices will probably find application in measurements carried out in harsh environments, with exposure to vibrations, shocks, and noise, where the use of traditional designs is problematic.

[Notes to Table 2: (a) The time constant of the analyzers depends on the measuring cell volume and the gas flow rate; in many types of sensors, the measurement of the physical effect itself is instantaneous, and the time constant results solely from the gas exchange rate in the measuring cell. (b) The detection limit applies to experimental or commercial instruments and not to the physical effect itself.]
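The temperature and pressure correction mentioned in the Summary can be made concrete. A minimal sketch, assuming ideal gas behavior and Curie's law, so that the raw paramagnetic signal scales as p/T² (one factor of 1/T from the number density, one from the susceptibility); the calibration conditions are hypothetical, and real analyzers apply instrument-specific corrections.

```python
def corrected_o2_fraction(y_indicated, p_pa, T_k, p_cal=101325.0, T_cal=298.0):
    """Rescale a paramagnetic O2 reading from ambient to calibration conditions.

    The raw signal tracks the volumetric susceptibility of O2, which for an
    ideal paramagnetic gas scales as p/T^2 (gas law ~ p/T, Curie law ~ 1/T).
    """
    return y_indicated * (p_cal / p_pa) * (T_k / T_cal) ** 2

# An analyzer calibrated at 101.325 kPa and 298 K, read at 95 kPa and 310 K:
print(f"{corrected_o2_fraction(0.195, 95000.0, 310.0):.3f}")  # ~0.225
```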
Naa20, the catalytic subunit of NatB complex, contributes to hepatocellular carcinoma by regulating the LKB1–AMPK–mTOR axis

N-α-acetyltransferase 20 (Naa20), which is the catalytic subunit of the N-terminal acetyltransferase B (NatB) complex, has recently been reported to be implicated in hepatocellular carcinoma (HCC) progression and autophagy, but the underlying mechanism remains unclear. Here, we report that, based on bioinformatic analysis of Gene Expression Omnibus and The Cancer Genome Atlas data sets, Naa20 expression is much higher in HCC tumors than in normal tissues, promoting oncogenic properties in HCC cells. Mechanistically, Naa20 inhibits the activity of AMP-activated protein kinase (AMPK) to promote the mammalian target of rapamycin (mTOR) signaling pathway, which contributes to cell proliferation, as well as autophagy, through its N-terminal acetyltransferase (NAT) activity. We further show that liver kinase B1 (LKB1), a major regulator of AMPK activity, can be N-terminally acetylated by NatB in vitro, but also probably by NatB and/or other members of the NAT family in vivo, which may have a negative effect on AMPK activity through downregulation of LKB1 phosphorylation at S428. Indeed, p-LKB1 (S428) and p-AMPK levels are enhanced in Naa20-deficient cells, as well as in cells expressing the nonacetylated LKB1-MPE mutant; moreover, importantly, LKB1 deficiency reverses the molecular and cellular events driven by Naa20 knockdown. Taken together, our findings suggest that N-terminal acetylation of LKB1 by Naa20 may inhibit the LKB1–AMPK signaling pathway, which contributes to tumorigenesis and autophagy in HCC.

Introduction

N-terminal acetylation (Nt-acetylation) is one of the most prevalent cotranslational modifications of eukaryotic proteins, although some recent studies have shown that it also occurs posttranslationally 1 . This reaction is catalyzed by a set of enzyme complexes called N-terminal acetyltransferases (NATs), of which seven (NatA−NatH) have recently been reported in eukaryotes [1][2][3] . Each NAT complex appears to be specific for the first and second N-terminal amino acid residues of the nascent protein 1 . Among these complexes, N-terminal acetyltransferase B (NatB), which is composed of the catalytic subunit N-α-acetyltransferase 20 (Naa20) and the auxiliary subunit Naa25, exhibits a strong preference for proteins starting with a methionine-acidic/hydrophilic amino acid motif at their N-termini (i.e., MD-, MN-, ME-, and MQ-) 1 . Thus, NatB can presumably acetylate the N-termini of 15% and 18% of all yeast and human proteins, respectively 4 ; however, a recent study identified only 180 human and 110 yeast NatB substrates through a combined quantitative N-terminomic approach 4 , probably indicating the existence of substrate redundancy among NATs.

Accumulating studies have reported that Nt-acetylation is implicated in a wide range of pathological processes, including tumorigenesis, developmental defects, and neurodegeneration [1][2][3] . Notably, many NATs may play an oncogenic role in tumorigenesis in diverse cancers [1][2][3]5 . At the protein level, Nt-acetylation has been found to regulate various protein properties, such as degradation, protein-protein interaction, and localization [1][2][3] . For the more typical NATs, which Nt-acetylate many thousands of proteins, it can be challenging to connect effects at the protein level to specific cellular and organismal phenotypes.
In yeast, the naa20-Δ and naa25-Δ (mdm20-Δ) deletion mutants exhibit a variety of cellular defects, including reduced mating; aberrant morphology; defective mitochondrial division and vacuolar segregation; elevated sensitivity to several stresses, such as high temperature, caffeine, and DNA damage; and, as reported most recently, abnormal NAD+ homeostasis [6][7][8][9] . According to previous reports, some of these defects in yeast may result primarily from abnormal cytoskeletal functions caused by the lack of Nt-acetylation of either or both of two critical cytoskeletal proteins, actin and tropomyosin, or of an unknown protein 7,8 . In addition, Nt-acetylation of the nicotinamide mononucleotide adenylyltransferases (Nma1 and Nma2) was reported to be essential for maintaining NAD+ homeostasis in yeast 9,10 . In mammals, NatB has been shown to participate in several cellular processes, including growth 11,12 , autophagy [13][14][15] , and viral infection 16 , by altering the levels of cell cycle-related genes, mammalian target of rapamycin complex 2 (mTORC2) signaling, and the Hippo/YAP and ERK1/2 pathways [11][12][13][14][15][16][17][18] , respectively, but the underlying mechanisms connecting Nt-acetylation of specific proteins to these outcomes are still unknown.

Regarding tumorigenesis, several studies have revealed that both subunits of NatB are upregulated in hepatocellular carcinoma (HCC) tumor tissues compared with nontumor tissues 11,12 , suggesting that NatB may promote tumorigenesis. Moreover, silencing Naa20 or Naa25 in HCC cells leads to dysfunction of cyclin-dependent kinase 2 or tropomyosin and to subsequent impairment of several proliferative signals or pathways, resulting in significant growth retardation 11,12 . However, the underlying mechanisms by which NatB-mediated Nt-acetylation affects cell proliferation need to be further elucidated.

It has been well documented that liver kinase B1 (LKB1) regulates various cellular processes, such as metabolism, proliferation, and migration, by phosphorylating and activating several kinases, including AMP-activated protein kinase (AMPK) 19 . Moreover, mutation and dysregulation of LKB1 have been reported in most types of tumors, and LKB1 is thus considered a tumor suppressor in a wide variety of organs 19,20 . However, some recent studies have revealed that LKB1 is upregulated in animal models of HCC and in tumor tissues of HCC patients [20][21][22] , indicating that it may have a dual role in tumorigenesis. LKB1 forms a complex with the pseudokinase STE20-related adaptor (STRADα) and the scaffolding protein mouse protein 25 (MO25), which induces the cytoplasmic localization and promotes the activity of LKB1 (refs. 19,20 ). In addition, accumulating studies have revealed that the activity of LKB1 is also controlled by several types of posttranslational modifications (PTMs), such as phosphorylation, lysine acetylation, ubiquitination, and methylation 20 . Thus, it has been proposed that the comprehensive PTMs of LKB1 in HCC may contribute to its dual role in tumorigenesis.

Interestingly, one previous study showed that naa20 deletion in yeast caused elevated protein phosphorylation levels, and the kinase Snf1p, the yeast homolog of AMPK, was predicted to be responsible 23 . This report provided insight into how Naa20 depletion in mammalian cells causes autophagy activation and cell growth retardation, because AMPK is a well-known major regulator of autophagy and cell growth 24 .
Given this possibility, we investigated whether AMPK is responsible for the autophagy activation and growth delay caused by Naa20 depletion in HCC cell lines. Consistent with previous reports, we found that Naa20 silencing led to significant growth retardation and increased autophagy in several HCC cell lines. Importantly, Naa20 negatively regulated the LKB1-AMPK axis to promote the mTOR signaling pathway through Nt-acetylation of LKB1, which contributes to tumor progression and autophagy in HCC. Our results indicate that Nt-acetylation by Naa20 is implicated in the regulation of the LKB1-AMPK-mTOR signaling pathway, which may impact tumorigenesis and autophagy in HCC.

Materials and methods

Cell culture

SK-Hep1, Hep3B, and HepG2 cells were purchased from the American Type Culture Collection (ATCC, VA, USA) and maintained under the conditions recommended by the supplier. The Hep3B-GFP-LC3 stable cell line was kindly provided by Professor Yong-Keun Jung (Seoul National University, Korea). All cells were cultured at 37 °C in humidified air with 5% CO2 in Dulbecco's modified Eagle's medium (DMEM) or Roswell Park Memorial Institute (RPMI) medium (Corning Incorporated, Corning, NY, USA) containing 10% fetal bovine serum (Corning Incorporated) and 1% penicillin-streptomycin (Corning Incorporated). The stable cell lines were maintained in DMEM or RPMI medium containing puromycin (2-6 μg/mL).

Lentivirus production and generation of stable cell lines

For lentivirus production, the lentiviral vector pLKO.1 containing sh-Naa20 (Sigma-Aldrich, St. Louis, MO, USA) was cotransfected with packaging vectors into HEK293T cells using Lipofectamine 3000 (Invitrogen, Thermo Fisher Scientific, Carlsbad, CA, USA). Transfected HEK293T cells were incubated at 37 °C for 24 h, and the medium was then replaced with fresh medium. The supernatant was harvested after incubation for an additional 24 h and filtered through a 0.45-μm filter. To generate stable cell lines, cells were treated with 6 μg/mL polybrene and infected with sh-Naa20-expressing lentivirus for 48 h, after which stable cells were selected using puromycin (2-6 μg/mL).

Colony forming assay

Stable SK-Hep1 and Hep3B cells were plated in six-well plates (1 × 10³ cells/well) and grown in fresh medium at 37 °C for 10-12 days. Colonies were fixed with 80% methanol and stained with 0.5% crystal violet.

Cell proliferation assay

Stable SK-Hep1 and Hep3B cells were plated in 35-mm dishes (2 × 10⁴ cells/dish). Cells were harvested every 24 h after transfection, and 10-μL aliquots of the cell suspensions were counted using a hemocytometer.

LC-MS/MS analysis

Protein samples were processed with Thermo Scientific Pierce Detergent Removal Resin to remove any residual NP-40. Amicon Ultra-0.5 mL 10-kDa centrifugal filters were then used to remove the Flag peptide and to exchange the buffer from PBS to 0.1% RapiGest in 50 mM Tris-HCl (pH 8.0). The protein concentration was then determined by a bicinchoninic acid protein assay (Thermo Fisher Scientific, Carlsbad, CA, USA). Proteins in the sample were subsequently reduced with 5 mM dithiothreitol at 45 °C for 30 min and alkylated with 15 mM iodoacetamide at 45 °C in the dark for 30 min. The protein samples were then digested with sequencing-grade trypsin (Promega, Madison, WI, USA) at a ratio of 1:50 (micrograms of enzyme:micrograms of protein) at 37 °C overnight.
After digestion, the sample was acidified using 1% TFA, incubated at 37°C for 15 min to cleave the RapiGest surfactant, and centrifuged at 20,000 × g for 15 min. We transferred only the supernatant into a new tube, and the peptide sample was then purified, as previously described 26. Briefly, the peptide sample was desalted over a C18 column and eluted in 0.1% formic acid in 60% acetonitrile. The eluted peptide sample was dried via vacuum centrifugation and reconstituted with 10 µL of 0.1% formic acid. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis was performed as previously described 27. Mass spectrometry data were acquired with an LTQ-Orbitrap XL mass spectrometer (Thermo Fisher Scientific, Carlsbad, CA, USA) coupled to an Eksigent nanoLC 2D LC system. In this system, purified peptides were separated on a microcapillary column (15 cm × 75 µm I.D., packed in-house with ReproSil Gold 120 C18, 5 µm resin (Dr. Maisch GmbH)) at a flow rate of 300 nL/min. Peptides were eluted with a linear gradient from 5 to 40% buffer B (0.1% formic acid in acetonitrile) over 90 min and from 40 to 70% buffer B over 15 min. This elution step was followed by re-equilibration of the column with 5% buffer B for 15 min, giving a total run time of 120 min. The eluted peptides were ionized under a spray voltage of 1.9 kV. The mass spectrometer was operated in data-dependent acquisition mode. In each data collection cycle, one full survey scan (300-2000 m/z) was acquired in the Orbitrap at a resolution of 60,000. Then, the ten most abundant ions were selected for fragmentation by collision-induced dissociation in the ion trap with a precursor isolation window width of 2 m/z, an AGC target of 1 × 10⁵, and a maximum ion injection time of 500 ms. The acquired data were searched against the UniProt human reference database (released in June 2020) using SEQUEST (Proteome Discoverer 2.2) with a target-decoy strategy. The search parameters included a precursor mass tolerance of 20 ppm, a fragment ion tolerance of 0.6 Da, and trypsin digestion with up to three missed cleavages. Carbamidomethylation of cysteine residues (+57.02146 Da) was set as the static modification, and methionine oxidation (+15.9949 Da) and protein Nt-acetylation (+42.0106 Da) were set as dynamic modifications. To validate the peptide identifications, a peptide-level false discovery rate of <1% was used as the threshold. The annotated spectrum figure was generated using the open-access PDV software.

Analysis of TCGA-LIHC and GEO data

To compare the difference in Naa20 expression between HCC and normal tissues, publicly available data were used, including three HCC microarray data sets and data from a TCGA-LIHC (The Cancer Genome Atlas-Liver Hepatocellular Carcinoma) study. We retrieved the normalized expression data sets (GSE36411, GSE36376, and GSE54236) from the NCBI Gene Expression Omnibus (GEO) using the GEOquery R package. RNA-Seq gene expression data (Illumina HiSeq, FPKM normalization) from the TCGA-LIHC study were downloaded using the R/Bioconductor tool GenomicDataCommons (https://gdc.cancer.gov/), and the expression values were log2 transformed. Statistically significant differences were identified using the Wilcoxon test, and P values were adjusted using the Benjamini-Hochberg procedure.

Statistical analysis

Data are expressed as the mean ± SEM of three or more independent experiments. Significant differences between groups were analyzed using Student's t test. Statistical significance was accepted at *P < 0.05 and **P < 0.01.
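As a concrete illustration of the group comparison described above, a Wilcoxon rank-sum test on log2-transformed FPKM values followed by Benjamini-Hochberg adjustment, the minimal Python sketch below reproduces the logic. The original analysis was performed in R; the data here are synthetic and the gene list is hypothetical.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

# Synthetic FPKM matrices standing in for real TCGA-LIHC values:
# rows = genes, columns = samples (374 tumors and 50 normals, as in the text).
genes = ["NAA20", "GENE_A", "GENE_B"]  # hypothetical gene list
tumor = rng.lognormal(mean=2.0, sigma=0.7, size=(len(genes), 374))
normal = rng.lognormal(mean=1.5, sigma=0.7, size=(len(genes), 50))

# log2-transform the FPKM values (a pseudocount avoids log2 of zero)
log_tumor = np.log2(tumor + 1)
log_normal = np.log2(normal + 1)

# Wilcoxon rank-sum (Mann-Whitney U) test, one P value per gene
pvals = [mannwhitneyu(t, n, alternative="two-sided").pvalue
         for t, n in zip(log_tumor, log_normal)]

# Benjamini-Hochberg adjustment across all tested genes
_, p_adj, _, _ = multipletests(pvals, method="fdr_bh")
for gene, p, q in zip(genes, pvals, p_adj):
    print(f"{gene}: P = {p:.3g}, adjusted P = {q:.3g}")
```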
Results

Naa20 is upregulated in tumors of HCC patients and promotes proliferation in HCC cell lines

According to previous studies, the expression level of Naa20 is higher in HCC tumors than in nontumor tissues, and Naa20 silencing leads to retarded cell growth in HCC cell lines 11,12, suggesting that Naa20 may act as an oncogenic factor in tumorigenesis. To further validate the clinical relevance of Naa20 in HCC patients, we analyzed microarray data from patients with HCC in GEO data sets and RNA-Seq data from the TCGA-LIHC data set. Bioinformatic analysis of several GEO data sets revealed that Naa20 expression levels were markedly higher in HCC tumors (GSE36411, n = 42; GSE36376, n = 240; and GSE54236, n = 81) than in nontumor tissues (GSE36411, n = 21; GSE36376, n = 193; and GSE54236, n = 80; Fig. 1a and Supplementary Fig. S1a, b). Consistent with this finding, bioinformatic analysis of TCGA-LIHC data also showed that Naa20 expression levels in tumors from HCC patients (n = 374) were significantly higher than those in normal tissues (n = 50; Fig. 1b). This analysis further indicates that Naa20 may be implicated in promoting HCC tumor progression. Next, to explore the role of Naa20 in tumorigenic features, several HCC cell lines, including Hep3B, SK-Hep1, and HepG2, were transiently transfected with expression vectors for wild-type human Naa20 or the catalytically inactive mutant Naa20 YF (Y123F; Supplementary Fig. S2a, b) 28, followed by cell counting and MTS assays to determine the cell growth rate and cell viability, respectively, at different times. Overexpression of Naa20 WT, but not Naa20 YF, led to increased cell growth and viability compared with the mock control in SK-Hep1 and Hep3B cells (Fig. 1c-f), indicating that Naa20 may promote tumorigenesis through its enzymatic activity. To further support this finding, Naa20 was silenced in cells by lentiviral transduction of two different sh-Naa20 vectors (#3 and #5) (Supplementary Fig. S2c, d), and colony formation, cell counting, and MTS assays were conducted to analyze tumorigenic features. In agreement with previous reports 11,12, Naa20 depletion significantly reduced cell proliferation and viability compared with control cells (Fig. 1g-l). Importantly, reexpression of Naa20 WT in Naa20-silenced cells (Supplementary Fig. S2e, f) rescued these phenotypes, but expression of the catalytically inactive mutant Naa20 YF did not (Fig. 1i-l), further validating the oncogenic role of Naa20 in tumorigenesis.

Naa20 deficiency stimulates AMPK activity to suppress the mTOR signaling pathway

According to a previous report 23, the naa20-Δ deletion mutant in yeast displayed increased protein phosphorylation levels, and the kinase Snf1p, the yeast homolog of AMPK, was most prominently involved in this process. Moreover, recent studies showed that silencing Naa20 in cells led to increased autophagy 14,15. Because the AMPK-mTOR signaling pathway has been well documented to play an important role in autophagy and cell growth in mammals 24,29, we investigated whether the Naa20 level is correlated with AMPK-mTOR signaling pathway activity. First, we examined the expression and phosphorylation (T172) levels of AMPKα in HCC cell lines transduced with sh-con or sh-Naa20. As expected, Naa20 depletion led to enhanced phosphorylation of AMPK (T172; Fig. 2a).
Since AMPK has been well documented as a major inhibitor of mTOR 24, we next investigated the mTOR signaling pathway by monitoring the phosphorylation of mTOR at S2448, as well as the phosphorylation of the well-characterized mTOR downstream molecules S6K (ribosomal protein S6 kinase) at T389 and 4E-BP (eukaryotic translation initiation factor 4E-binding protein) at S65 (ref. 24). Consistent with the above finding, Naa20-silenced cells also displayed a marked reduction in mTOR signaling compared with control cells (Fig. 2a). To exclude the possibility that this reduction might result from long-term adaptation of Naa20-depleted cells, endogenous Naa20 was transiently depleted in HCC cells by RNA interference (RNAi), followed by western blot analysis of AMPKα and mTOR levels. Consistent with the previous finding, these results showed that Naa20 deficiency led not only to greatly increased p-AMPK levels but also to markedly reduced mTOR signaling in all HCC cell lines analyzed (Fig. 2b and Supplementary Fig. S3). Importantly, reconstitution of Naa20 WT, but not Naa20 YF, in Naa20-depleted cells sufficiently reversed the molecular events caused by Naa20 deficiency (Fig. 2c), further indicating that Naa20 may regulate the AMPK-mTOR axis through its catalytic activity.

Naa20 contributes to cell growth and autophagy by regulating AMPK activity

The AMPK-mTOR axis reportedly plays a critical role in autophagy 24, and previous studies showed that Naa20 knockdown resulted in increased autophagy activation 14,15. Hence, we next investigated whether Naa20 might be involved in autophagy activation. To this end, Naa20 was knocked down by RNAi in Hep3B and SK-Hep1 cells, and V5-Naa20 WT or YF was reexpressed in these cells, followed by western blot analysis using antibodies against autophagy markers, such as p62 and LC3. Indeed, increased levels of LC3-II and reduced levels of p62 were detected in extracts of Naa20-deficient cells compared with control cells, and the original levels were restored by reconstitution of Naa20 WT but not Naa20 YF (Fig. 3a, b), indicating that Naa20 regulates autophagy. Consistent with this finding, silencing of Naa20 in cells stably expressing GFP-LC3 also led to greatly increased GFP-LC3 distribution in cytoplasmic puncta (Fig. 3c). Collectively, these data indicate that Naa20 regulates autophagy in an Nt-acetylation-dependent manner. Next, to determine whether AMPK plays a critical role in the growth retardation and autophagy upregulation driven by Naa20 deficiency, Naa20-deficient Hep3B cells were treated with the AMPK inhibitor compound C or transiently transfected with siRNA specific for AMPKα, followed by western blot, cell counting, and autophagy marker analysis.

Fig. 1 Naa20 is upregulated in HCC tumors and enhances oncogenic properties in HCC cell lines. a, b Naa20 mRNA expression levels in HCC tissues and adjacent normal tissues from the GEO data set GSE36376 (193 normal and 240 tumor tissues) (a) and TCGA-LIHC data (50 normal and 374 tumor tissues) (b) were analyzed, as described in the "Materials and methods" section. c-f SK-Hep1 (c, e) and Hep3B (d, f) cells were transiently transfected with V5-Naa20 WT or YF, followed by cell counting (c, d) and MTS assays (e, f) at the indicated times to determine the cell proliferation rates and cell viability, respectively. g, h Naa20 was stably silenced by a lentiviral system (sh-Naa20 #3 or #5) in SK-Hep1 (g) and Hep3B (h) cells, which were grown for 12 d and subsequently stained and counted.
i-l V5-Naa20 WT or YF was reexpressed in Naa20-depleted SK-Hep1 (i, k) and Hep3B (j, l) cells, and cell counting (i, j) and MTS assays (k, l) were conducted at the indicated times to determine the cell proliferation rates and cell viability, respectively. All data are presented as the mean ± SEM of three independent experiments. *P < 0.05, **P < 0.01.

Western blot analysis revealed that AMPK inhibition reversed the alterations in p-mTOR (S2448) and autophagic marker levels elicited by Naa20 depletion in Hep3B cells (Fig. 3d). Furthermore, AMPK inhibition was sufficient to rescue the delayed cell growth, as well as the increased autophagy, observed in Naa20-depleted cells (Fig. 3e, f). Collectively, these results demonstrate that AMPK may be essential for the phenotypes observed in Naa20-deficient cells.

Fig. 2 Naa20 deficiency activates AMPK to suppress the mTOR signaling pathway. a Naa20 was stably knocked down with the lentiviral system (sh-Naa20 #3 or #5) in SK-Hep1 and Hep3B cells, as analyzed by western blotting using the indicated antibodies. b Naa20 was transiently silenced by transfection of si-Naa20 #1 or #2 into SK-Hep1 and Hep3B cells, and western blot analysis was then performed using the indicated antibodies. c V5-Naa20 WT or YF was reexpressed in SK-Hep1 and Hep3B cells with stable Naa20 silencing, and western blot analysis was then conducted using the indicated antibodies.

Nt-acetylation by Naa20 is implicated in LKB1 activity toward AMPK

Next, we addressed the question of how the enzymatic function of Naa20 affects AMPK activity. Because it has been reported that the substrate specificity of NatB is determined by the first two N-terminal residues, a methionine followed by an acidic/hydrophilic amino acid (an MD-, MN-, ME-, or MQ- motif) 1-4, we focused on the N-terminal amino acids of AMPK and its regulators. Among these proteins, LKB1 contains Met-Glu (ME) and Met-Asp (MD) as the first two N-terminal residues in humans and other species, respectively (Fig. 4a), indicating that LKB1 may be a possible substrate of Naa20. This finding encouraged us to investigate whether LKB1 is subject to Nt-acetylation by NatB. To first determine whether Naa20 directly Nt-acetylates LKB1 at the N-terminal methionine residue in vitro, we performed an in vitro Nt-acetylation assay using DTNB-based quantification, as indicated in Fig. 4b; this method is a simple, fast, and nonisotopic method for in vitro quantification of Nt-acetylation 25. The in vitro Nt-acetylation assay revealed that Naa20 WT significantly increased the Nt-acetylation of wild-type LKB1 peptides carrying a nonacetylated methionine at the N-terminus, whereas it had no effect on the Nt-acetylation of two mutant LKB1 peptides in which the second amino acid was either substituted from E to V (LKB1-E2V), a sequence recognized by NatA but not NatB, or in which a proline was inserted after the initiator methionine (LKB1-MPE), which should be unacetylated based on a previous report (Fig. 4c). However, the catalytically inactive Naa20 mutant had only minimal or no activity toward LKB1 WT (Fig. 4c), suggesting that Naa20 can directly Nt-acetylate LKB1 with high specificity. Next, to investigate whether Naa20-mediated Nt-acetylation of LKB1 occurs at the cellular level, LKB1 WT with a C-terminal Flag tag was expressed in HEK293T cells transduced with sh-con or sh-Naa20 lentiviral vectors, purified by IP with an anti-Flag antibody, and analyzed for N-terminal modification by nanoLC-MS/MS.
Unexpectedly, the results showed that all detected N-terminal peptides of LKB1 were N-terminally acetylated not only in control cells but also in Naa20-deficient cells (Fig. 4d and Supplementary Fig. S4a, b), indicating that the N-terminus of LKB1 is indeed modified by Nt-acetylation, but that this Nt-acetylation may be catalyzed by NatB as well as by other members of the NAT family. Furthermore, reciprocal coimmunoprecipitation analysis using exogenous or endogenous proteins revealed that Naa20 and LKB1 interact with each other (Supplementary Fig. S5a-c), indicating that Naa20 may be closely correlated with LKB1. Taking these results together, we presumed that Naa20 may modulate AMPK activity through Nt-acetylation of LKB1, although we could not exclude other possible mechanisms related or unrelated to LKB1 activity. LKB1 activity is known to be regulated in diverse ways, such as through the formation of a complex with the pseudokinase STRADα and the scaffolding protein MO25, and via PTMs, such as phosphorylation, ubiquitination, neddylation, and lysine acetylation 19,20. To gain further insight into the role of Naa20 in modulating LKB1 activity, we first determined whether Naa20 regulates the protein stability or phosphorylation (S428) levels of LKB1. For this, Naa20 was silenced or overexpressed in SK-Hep1 and Hep3B cells, followed by immunoblotting. Interestingly, p-LKB1 (S428) and p-AMPK levels were markedly elevated in Naa20-silenced cells and, in contrast, greatly reduced in Naa20-overexpressing cells compared with control cells (Fig. 4e-g and Supplementary Fig. S6a, b), suggesting that Naa20 may negatively affect the phosphorylation level of LKB1 and its activity toward AMPK. Next, to further validate whether Nt-acetylation of LKB1 regulates its activity toward AMPK through modulation of its phosphorylation, Flag-tagged LKB1 WT or the LKB1-MPE mutant was expressed in Hep3B cells, which were subjected to IP with an anti-Flag antibody and western blot analysis with the antibodies indicated in Fig. 4h. Importantly, cells expressing LKB1-MPE, which should not be Nt-acetylated by NATs, showed significantly increased p-LKB1 levels compared with cells expressing LKB1 WT (Fig. 4h). However, there were no differences in the levels of the complexes formed by LKB1 WT or LKB1-MPE with STRADα/MO25, a major regulator of LKB1 (Fig. 4h). These findings indicate that Nt-acetylation of LKB1 can inversely affect p-LKB1 levels through as yet unknown mechanisms. Moreover, expression of the LKB1-MPE mutant led to notably increased p-AMPK levels compared with those in cells expressing LKB1 WT or LKB1-E2V, which is expected to be Nt-acetylated by NatA but not NatB (Fig. 4i). Most importantly, coexpression of Naa20 in cells transfected with LKB1 WT, but not in cells transfected with LKB1-MPE, provoked an obvious reduction in the p-AMPK level compared with that in control cells (Fig. 4j). These results indicate that Naa20 may suppress LKB1 activity toward AMPK at least partially through Nt-acetylation of LKB1, although we cannot exclude other possible mechanisms unrelated to LKB1.

Naa20 regulates cell growth and autophagy through the LKB1-mediated AMPK-mTOR signaling pathway

To confirm our hypothesis that Naa20 regulates the AMPK-mTOR axis through LKB1, thereby contributing to cell growth and autophagy, Naa20 was transiently silenced by RNAi in HCC cell lines, and the AMPK-mTOR signaling pathway, cell proliferation rate, and autophagy activation levels were then assessed in these cell lines.
Fig. 3 Loss of Naa20 promotes autophagy and inhibits cell proliferation through AMPK-dependent inhibition of the mTOR signaling pathway. a, b Naa20-silenced SK-Hep1 (a) or Hep3B (b) cells were transfected with V5-Naa20 WT or YF, and western blot analysis was then performed using the indicated antibodies. c V5-Naa20 WT or YF was reexpressed in Hep3B-GFP-LC3 cells with stable Naa20 silencing, and fluorescence microscopy analysis was conducted for quantification of GFP-LC3B puncta. d-f For genetic or pharmacologic inhibition of AMPK, Hep3B (d, e) or Hep3B-GFP-LC3 (f) cells were cotransfected with si-Naa20 #1 and si-AMPKα or transfected with si-Naa20 #1 alone and then treated with compound C (20 μM), followed by western blot analysis (d), cell counting (e), and fluorescence microscopy analysis (f). All data are presented as the mean ± SEM of three independent experiments. *P < 0.05, **P < 0.01. All GFP-LC3B puncta quantification data were acquired with Zeiss LSM880 Airyscan microscopes at the Ewha Fluorescence Core Imaging Center, Ewha Womans University.

In support of our findings, the results showed that LKB1 deficiency in Naa20-silenced cells reversed the alterations in the AMPK-mTOR signaling pathway caused by Naa20 depletion (Fig. 5a, b). Consistent with this result, LKB1 deficiency also reversed the cellular phenotypes, such as autophagy activation and delayed cell growth, elicited by Naa20 depletion (Fig. 5c-g). Taken together, these results strongly indicate that LKB1 may play a crucial role in the growth retardation and autophagy activation observed in Naa20-deficient HCC cells (Fig. 5h).

Discussion

The NatB complex has recently been reported to act as an oncogenic effector in HCC and to be involved in autophagy [11][12][13][14][15]. However, the molecular mechanism underlying the involvement of Naa20 in tumorigenesis and autophagy remains elusive. In this study, we provide evidence further supporting previous reports that Naa20 acts as an oncogenic factor, as well as an autophagy suppressor, in HCC cell lines [11][12][13][14][15] and, importantly, propose a novel and plausible mechanism responsible for this activity: Naa20 inhibits AMPK activity to promote the mTOR signaling pathway, which contributes to tumorigenesis and suppresses autophagy. Naa20-mediated inhibition of AMPK may be largely controlled by LKB1 activity. Importantly, LKB1 may undergo Nt-acetylation by NatB and/or, likely, other members of the NAT family in vitro and in vivo, which may negatively influence both its phosphorylation and its activity toward AMPK (Fig. 5h). Although previous studies have revealed the implication of Naa20 in tumorigenesis and autophagy in HCC [11][12][13][14][15], the underlying mechanism remains unclear. In this respect, our data show that Naa20 negatively regulates the phosphorylation (T172) and activity of AMPK to promote the mTOR signaling pathway in a manner dependent on Naa20 catalytic activity, leading to the induction of oncogenic features and suppression of autophagy in HCC cells. AMPK has been consistently reported to suppress oncogenic features and activate autophagy in several types of cancer, including HCC 24,29. Notably, a recent study revealed that Naa20 depletion in yeast led to elevated protein phosphorylation, possibly owing to Snf1, the yeast homolog of AMPK 23. Collectively, these results suggest that AMPK plays an important role in Naa20-mediated regulation of tumorigenesis and autophagy.
How does Naa20 regulate the activity of AMPK? The activity of AMPK is largely regulated by the ratio of cellular AMP or ADP to ATP, or by phosphorylation (T172) mediated by several protein kinases or phosphatases, such as LKB1, calcium/calmodulin-dependent kinase kinase 2, protein phosphatase 2A, or protein phosphatase 2C, in a cellular context- or cell type-dependent manner 24,29. Among these proteins, LKB1 attracted our attention as a possible Naa20 substrate because it has an N-terminus that starts with a Met-Glu motif, one of the determinants of the substrate specificity of NatB. Consequently, we showed here that LKB1 is N-terminally acetylated within cells in vivo and by NatB in vitro. However, we also found that all N-terminal peptides of LKB1 detected in Naa20-silenced cells were still N-terminally acetylated, indicating that LKB1 may either not be a substrate of NatB or may also be a substrate of other NATs. This discrepancy between the in vitro and in vivo results might be explained by the cell types used, the efficiency of Naa20 knockdown, or substrate redundancy among NATs. Indeed, a previous report showed that the successful identification of NatB substrates in cells can be largely affected by the knockdown efficiency or by substrate redundancy among NATs 4. Although we could not determine whether LKB1 is a target of NatB in vivo, we propose, based on a previous report, that the acetylation status of LKB1 at its N-terminus can affect its activity toward AMPK. Indeed, overexpression of the LKB1 mutant (LKB1-MPE), which is not expected to be Nt-acetylated by NATs, led to greatly enhanced LKB1 and AMPK phosphorylation.

Fig. 4 Naa20 acetylates the N-terminus of LKB1 in vitro and reduces its activity toward AMPK. a Sequence alignment of LKB1 N-termini from several species. b Scheme of the DTNB-based in vitro Nt-acetylation assay. c To determine whether LKB1 is a substrate of the NatB complex, a DTNB-based in vitro Nt-acetylation assay was performed as described in the "Materials and methods" section. Ac-LKB1, LKB1-MPE, and LKB1-E2V peptides were used as negative control substrates; *P < 0.05, **P < 0.01. d Assessment of the N-terminal acetylation of LKB1. N-terminal acetylation of Flag-LKB1 overexpressed in sh-Con or sh-Naa20 HEK293T cells was analyzed by mass spectrometry. In the right panel, sh-Con or sh-Naa20 #3 HEK293T cells were transfected with Flag-LKB1, and western blot analysis was then conducted using the indicated antibodies. e SK-Hep1 and Hep3B cells were infected with lentiviruses expressing sh-Naa20 #3 or #5 and were then analyzed by western blotting using the indicated antibodies. f, g Wild-type Naa20 was overexpressed in SK-Hep1 (f) and Hep3B (g) cells, followed by western blot analysis using the indicated antibodies. h Flag-tagged LKB1 WT or MPE was overexpressed in Hep3B cells. Flag-LKB1 WT and MPE were immunoprecipitated separately with an anti-Flag antibody, followed by western blot analysis using the indicated antibodies. i Flag-LKB1 WT or MPE was overexpressed in Hep3B cells, followed by western blot analysis using the indicated antibodies. j Flag-LKB1 WT or MPE was overexpressed with or without V5-Naa20 in Hep3B cells, followed by western blot analysis using the indicated antibodies. The bands were quantified using image analysis software, and the relative band intensities are expressed as p-AMPKα/AMPKα ratios.

Fig. 5 Naa20-mediated cell proliferation and autophagy are dependent on LKB1 in HCC cells.
a-g si-Naa20 was transfected alone or cotransfected with si-LKB1 into SK-Hep1 (a, d, f), Hep3B (b, e, g), or stable Hep3B-GFP-LC3 (c) cells. Western blot analysis (a, b) was then conducted using the indicated antibodies; cell counting (d, e) and MTS assays (f, g) were performed to evaluate the cell proliferation rates and cell viability, respectively; and fluorescence microscopy analysis (c) was conducted for quantification of LC3B puncta. h Proposed model showing how Naa20 contributes to cell proliferation and autophagy through the LKB1-AMPK-mTOR signaling pathway in HCC cells. All data are presented as the mean ± SEM of three independent experiments. *P < 0.05, **P < 0.01.
The Brainarium: An Interactive Immersive Tool for Brain Education, Art, and Neurotherapy

Recent theoretical and technological advances in neuroimaging techniques now allow brain electrical activity to be recorded using affordable and user-friendly equipment by nonscientist end-users. An increasing number of educators and artists have begun using the electroencephalogram (EEG) to control multimedia and live artistic content. In this paper, we introduce a new concept based on brain computer interface (BCI) technologies: the Brainarium. The Brainarium is a new pedagogical and artistic tool, which can deliver and illustrate scientific knowledge, as well as a new framework for scientific exploration. The Brainarium consists of a portable planetarium device that is used as a brain metaphor. This is done by projecting multimedia content on the planetarium dome and displaying EEG data recorded from a subject in real time using Brain Machine Interface (BMI) technologies. The system has been demonstrated through several performances involving an interaction between the subject controlling the BMI, a musician, and the audience during a series of exhibitions and workshops in schools. We report here feedback from 134 participants who filled in questionnaires to rate their experiences. Our results show improved subjective learning compared to conventional methods, improved entertainment value, improved absorption into the material being presented, and little discomfort.

Introduction

This century has been marked by the development of new brain imaging techniques, which have allowed us to better understand how our brain functions when we experience different mental states. The brain appears as a key integrative organ where a variety of inputs are simultaneously processed and combined: exteroceptive stimuli, that is, stimulations coming from the external world; proprioceptive inputs that provide body state information; and interoceptive inputs such as thoughts, emotions, and other inner experiences [1]. This processing is the result of lifelong learning, shaping, and adaptation of our neural system through our interaction with the world [2]. Thanks to advances in neuroscience, signal processing, and computer science, we are now able to record different electrical brain rhythms with millisecond precision and process this activity in real time by placing electrodes on a subject's head. The technique of electroencephalography (EEG) is now widely used in both fundamental and clinical research, as well as a diagnostic tool in the clinical environment. In addition to basic research and clinical applications, EEG rhythms have recently been used to control computers in real time. In a Brain Machine Interface (BMI) or Brain Computer Interface (BCI) [3][4][5], characteristic patterns of EEG activity during specific mental activity are mapped to given computer commands. Some BCI systems allow controlling a mechanical device, a graphical interface, or a video game using thoughts only. Subjects may also voluntarily learn to retrain specific EEG patterns in order to correct pathological brain activity. This specific range of applications is called neurofeedback or neurotherapy [6][7][8]. When EEG began to be recorded in the 1930s, researchers realized that several typical rhythms could be distinguished in the brain electrical activity recorded at the surface of the scalp. The first "brainwave" was identified by the father of electroencephalography, Berger [9].
It was named after the first Greek letter, alpha, and became the "alpha" rhythm, a brain rhythm that oscillates at about 10 cycles per second (10 Hz). This rhythm is particularly prominent when a person is awake and resting with his/her eyes closed, or while relaxing [10,11]. Using alpha brainwaves to create or modulate sound and/or music was pioneered by Lucier [12] as early as 1965. Later, in 1969, Kamiya showed that it was possible to voluntarily control the alpha brain rhythm and modulate audio feedback in real time [13]. Following technical and theoretical progress in neuroscience, computer science, and signal processing, EEG signals have recently been used in new ways [14,15]. With the development of affordable and user-friendly EEG systems, the last few years have seen an increasing number of art projects using brain electrical activity as an input to produce or modulate artistic content such as computer graphics, animations, music, and choreography. Several performances have been created around the concept of music generation using brainwaves [16][17][18][19]. The Global Mind Project (http://www.globalmindproject.com/) is an example of such an artistic project. This system allowed for audiovideo rendering of brain data, which, when combined with live interactive performance, has helped further develop new interactive artistic productions. According to Clarke, an Honorary Fellow in the Department of Culture and Communication at the University of Melbourne, "drawn together in a coalescence of self and technology, the artists connected to the EEG headsets are presented as both automata - self-operating machines - and intentional, self-activating beings, that have the ability to affect and be affected by the on-screen imagery generated" [20]. Another recent realization, developed by a team of Rensselaer Polytechnic Institute students, is Yehuda Duenyas' Infinity Simulator, which involves control of a 3D automated rigging system using specific brainwave patterns [21]. This device led to the creation of the Ascent project (http://theascent.co/), a live-action, participatory theatrical experience that combines mind control and levitation via an automated custom-built lifting platform system. Our system uses similar ideas with the important addition of an immersive environment. To our knowledge, this is the first time that real time EEG recordings have been displayed in a full dome immersive environment, allowing direct spatialization (spatial transposition) of brainwave data. Another key strength and original feature of our system is the brain metaphor suggested by the shape of the device. Among the applications developed, one projects EEG topographic activity directly onto the dome surface of a planetarium, as if viewers were standing at the centre of the brain, looking up at electrical brain activity projected on the scalp. The "Brainarium" (originally "Cerveaurium" in French) was initially designed to present neuroscience concepts in a fun, attractive, and interactive way for educational and entertainment purposes, by mixing art and science. In the Methods section of this paper, we will first present the general concept and architecture of the system in order to outline and illustrate its general functioning, describe a first performance that was designed for the Brainarium, and detail the specific implementation of the performance using open source software platforms.
In the Results and Critical Reception section, we then describe the contexts in which our device has been used, during exhibitions at museums and during "The Brain's Awareness Week," and present data on the audience's experience in the Brainarium. In the Discussion, we finally introduce potential extensions and further developments of our system in the fields of education, entertainment, and the arts, and more specifically its possible benefits in clinical applications such as neurotherapy.

Methods

Figures 1 and 2 summarize the architecture of the system and the hardware it comprises. The EEG signal is acquired from a person present in the dome. As shown in these figures, we used the Emotiv Epoc headset (Emotiv, Inc.), which includes 14 metal electrodes recording electrical brainwaves on the surface of the scalp at a frequency of 240 Hz (240 samples per second), but any EEG system compatible with BCI software can potentially be used. The signal is then transmitted, over a wireless connection, to a computer. This processing unit handles the signal processing and calculates the control signals, which are used to drive the multimedia content. Visual representations are finally projected onto a planetarium dome via a video projector equipped with a hemispherical lens (the system can be adapted to project onto a hemispheric mirror, which reflects the image onto the dome surface, rather than projecting directly on the dome through a hemispheric lens). The system can be upgraded to multiprojector full dome systems, but the main advantage of using a transportable inflatable dome and a monoprojector hemispheric projection system is that it decreases the overall cost and allows itinerant use. The computer display adapter should have two video outputs in order to allow simultaneous control of the different software on one screen and output to the video projector for the dome. In addition, a video splitter was used to send the video signal to a second screen so that the person driving the performance could see what was being projected.

Figure 1: General principle of the Brainarium. EEG is recorded using the Emotiv headset and sent to a computer that computes brain rhythm activity in real time and projects it on the planetarium dome.

If the system is used in the context of an art and science performance, brain electrical activity may be recorded from a member of the audience, an organizer of the projection, or an artist who participates in the event. Our device opens a wide range of possibilities, among which we have integrated and used the following for a first performance:

(i) Interaction with computer graphics animations through electrical brain activity.
(ii) Visualization of a brain rhythm (the alpha rhythm) associated with the suppression of visual input when the subject closes his eyes or relaxes.
(iii) Real time presentation of topographies of brain electrical activity.
(iv) Interactive presentation of brain structures on a 3D brain model.

After presenting the general implementation of the system, the next sections describe each of these applications.

General Implementation

Our set-up is based on combining a hemispheric projection system such as the one used in a planetarium, a hemispheric projection surface, and a brain computer interface system. Since every functional block of the system is modular, various solutions may be developed depending on budgetary constraints and available material.
At the time of writing, the cost of building such a system ranges from about US$5,000 to about a hundred thousand dollars when using research grade apparatus; the intermediate set-up we present here costs about US$40,000, although we also provide suggestions on how to build a similar system for a smaller amount. For projection, we used a transportable planetarium system, which comprised a Digitarium Delta Portable Digital Planetarium System [22] and a Digitalis Portable Dome [23] with a diameter of 7 meters. However, both hemispheric projection systems and projection surfaces may be made at a lower cost using custom made tools [24][25][26]. We implemented a low cost solution to replace the Digitarium Delta Portable Digital Planetarium System. This solution is composed of four parts: a full HD video projector (Acer H7531D), a condenser lens (a Rodenstock TV Heligon 75 mm, F/D = 1.1, which can also be replaced by a classical 50 mm lens with F/D = 1.4 combined with a +4 diopter lens), a 50 mm 45° mirror mount (Skywatcher), and a fisheye lens (Peleng 8 mm f/3.5). The dome we used was made of a thick fabric inflated by a powerful fan. This solution makes it more convenient to transport and set up the system compared to a rigid dome solution. However, this method has the drawback of having to leave the fan turned on in order to keep the dome inflated. Although the sound of the fan does not cover the sounds played inside the dome, it still creates a distracting background noise. The control console was composed of a classic personal computer equipped with a dual screen graphics card powerful enough to handle HD projection, and two LCD monitors. One of the LCD monitors was used to control the demonstration. On the second graphical output, a video splitter was used to send the display signal to both a control LCD monitor and the video projector. For EEG signal acquisition, the research edition package of the Emotiv Epoc headset was used [27]. Emotiv Epoc is a wearable EEG "headset" composed of 14 gold-plated electrodes. In order to record the electrical signals generated by the brain, each electrode is covered by a small felt pellet that acts as a bridge between the electrode and the scalp. These pellets have to be soaked in a saline solution of water mixed with salt, which allows electrical conduction from the skin to the metal electrode through the pellet. The advantages of this system are that it is relatively low in cost compared to clinical or research oriented devices. It is also wireless, fast, and easy to set up, and it provides some level of spatial resolution since it has 14 electrodes. However, clinical or research EEG systems with better signal quality can be used if available. Dry active electrodes would be the best suited for such a system, as they provide acceptable signal quality with a minimum preparation time, but to date they are still expensive compared to the Emotiv Epoc solution. The complete list of software used to run the system is given in Table 1. Except for the Emotiv software suite (the basic software package provided with the Epoc headset by Emotiv), all of the software used for signal processing and visualization comes from the open source community. For the fractal application, the software package "Mind Your OSC" was used to collect data from the Emotiv Control Panel software and send it as an Open Sound Control (OSC) [28] stream to the visualization software.
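To make this data flow concrete, the sketch below shows a minimal OSC listener in Python that could stand in for the visualization side of the chain. It assumes the python-osc package; the OSC address and port are hypothetical placeholders, since the exact address emitted by Mind Your OSC is not specified here. In the actual system this receiving role is played by vvvv.

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_meditation(address, value):
    # value is the normalized "Meditation" index streamed over OSC
    print(f"{address}: {value:.3f}")

dispatcher = Dispatcher()
# Hypothetical address; check what your OSC bridge actually emits
dispatcher.map("/EMO/affectiv/meditation", on_meditation)

# Port 7400 is an assumption; configure it to match the sender
server = BlockingOSCUDPServer(("127.0.0.1", 7400), dispatcher)
server.serve_forever()
```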
The interactive fractal video was displayed using the vvvv software (https://vvvv.org/), a graphical programming environment for easy prototyping and development. The vvvv application is designed to facilitate the handling of large media environments with physical interfaces, real time motion graphics, audio, and video that can simultaneously interact with many users. The freely available OpenVibe software [29] was used for signal acquisition, signal processing, and visualization of the EEG.

Performance Design and Implementation

In this section we detail the technical implementation of each application used for the different phases of the original performance designed for the Brainarium.

Brain-Controlled Animation of Fractals

This application is an example of live interaction. Figure 3 depicts the general architecture of the hardware and software for this application. We first placed the EEG cap on the subject's head. Ideally, the subject's alpha brainwaves should be large compared to the overall electromagnetic noise. Since individual brains show different electrical rhythmic activities, some subjects exhibit low amplitude alpha oscillations, and this can make it more difficult to process the signal without the use of advanced artifact rejection techniques. Due to the limited time between sessions, we often asked a preselected person with a known high amplitude alpha rhythm (i.e., easily observable on the signal trace) to be the subject. After checking electrode contact quality and signal quality, a calibration step lasting approximately two minutes is performed in order to evaluate some statistical features of the amplitude of the signal of interest for the selected subject, such as its mean and standard deviation. We used the "Meditation" index provided by the Emotiv Control Panel as the control signal. Since it has not been made public by Emotiv, we do not have the exact formula used to compute this index from the raw EEG signal. However, it is known to be positively correlated with the alpha rhythm and relaxation. Emotiv indexes result from a statistical analysis based on a large normative database collected from many subjects and are therefore already normalized. However, a calibration procedure is still used in order to adapt the system to the subject's specific statistics. In our case, we used the standard deviation and the mean value of the "Meditation" index over the calibration period as reference values to tune the feedback set-up. During the first minute of calibration we asked the subject to keep his eyes open, and during the second minute we asked him to keep his eyes closed. Even if the subject has already performed the experiment, it is important to repeat the calibration step, since EEG features vary widely throughout the day and from one day to another. "Meditation" values are calculated for both the eyes-closed and the eyes-open periods and are used to calibrate the system to allow balanced behaviour of the visual feedback animation (a sketch of this step is given below).
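The following minimal Python sketch illustrates one way to derive the reference statistics from the two calibration minutes. The choice of the midpoint between the eyes-open and eyes-closed means as the neutral set point is an assumption made for illustration; the paper only states that the mean and standard deviation over the calibration period are used.

```python
import numpy as np

def calibrate(med_eyes_open, med_eyes_closed):
    """Compute reference statistics from the two one-minute calibration periods.

    Both arguments are 1-D arrays of "Meditation" samples. The midpoint between
    the eyes-open and eyes-closed means serves as a neutral set point so that
    the feedback can move symmetrically in both directions (an assumption, see
    the lead-in above); the pooled standard deviation scales the response.
    """
    set_point = 0.5 * (np.mean(med_eyes_open) + np.mean(med_eyes_closed))
    spread = np.std(np.concatenate([med_eyes_open, med_eyes_closed]))
    return set_point, spread

# Example with synthetic calibration data (60 s at an assumed ~4 samples/s)
open_vals = np.random.uniform(0.3, 0.5, size=240)
closed_vals = np.random.uniform(0.5, 0.8, size=240)
set_point, spread = calibrate(open_vals, closed_vals)
```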
Once calibration is performed, the session starts with the video feedback being projected on the dome, and the audience enters the dome. In the meantime, a professional musician improvises based on the visual display. This creates a complete interactive feedback loop between the subject wearing the EEG device and the musician (Figure 4(a)). The musician draws inspiration from what he sees on the dome to play music and adapt it. Furthermore, he can engage in an interplay with the wearer of the EEG and can try to induce changes in what is displayed. The "Meditation" measure controls the display projected on the dome. We used a zoom into the Mandelbrot set fractal as visual feedback. More details about the video used are given in the following paragraphs; we focus here on the interaction configuration. The speed and direction (forward or backward) of the zoom depend on the brainwaves of the subject wearing the EEG cap. The system was set up so that the animation was played forward, as if diving or moving forward into the fractal, when the current alpha wave amplitude generated by the subject was above its mean level. By contrast, when the current alpha wave amplitude was below its mean level, the animation was played backward, as if travelling away from the fractal. The speed of the animation was modulated by the difference between the current "Meditation" value and its mean; that is, zooming becomes faster as the current value moves further away from the mean. As a result, a "Meditation" value equal to the mean results in a static image (see the sketch below). The shapes projected on the dome are fractals. A 2D fractal is a mathematical object, which may be represented as a 2D image. We chose to use fractals because, in addition to their aesthetic dimension, more and more research is showing that certain aspects of brain activity, and even its structure, share some features with fractals [32][33][34][35]. Because fractals are based on mathematical expressions, there is no theoretical limit to the resolution of fractal images, making it possible to zoom in on a small portion of the image and expand it indefinitely. Another feature of fractal images is that their structure is preserved regardless of the "zoom." Finally, fractal images are self-similar: if the appropriate "zoom" is applied to a fractal image, the same image may be found again. An interesting feature resulting from the use of a fractal animation is that it produces an immersive tunnelling effect. Fractal images presented in the Brainarium were made dynamic by zooming in or out on the fractal image. The animation used in the Brainarium was "a precalculated journey into the heart of the Mandelbrot fractal set" (http://www.hd-fractals.com/), which is named after Benoit Mandelbrot, the mathematician who studied and popularized it [36]. The video used in our demonstration features a 2^760 zoom into the Mandelbrot fractal set, and it was produced by Teamfresh (http://www.hd-fractals.com/), an independent production company specialized in rendering fractal animations. We used a commercially available High Definition version of the animation. The fractal video control application was specifically implemented for this project using vvvv, a graphical programming environment for easy prototyping and development (https://vvvv.org/). We have made the vvvv patches developed for this application freely available [31].
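The mapping just described can be summarized in a few lines of Python. This is a sketch of the control law, not the actual vvvv patch; the gain constant and the normalization by the calibration spread are illustrative assumptions.

```python
def zoom_velocity(meditation, set_point, spread, gain=1.0):
    """Map the current "Meditation" value to fractal playback velocity.

    Positive output plays the zoom forward (diving into the fractal),
    negative output plays it backward, and a value exactly at the set point
    freezes the image. Speed grows with the distance from the set point,
    as described in the text. The gain constant is an illustrative assumption.
    """
    if spread == 0:
        return 0.0
    return gain * (meditation - set_point) / spread

# Example: a relaxed subject (value above the set point) dives forward
velocity = zoom_velocity(meditation=0.72, set_point=0.55, spread=0.1)
```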
EEG Real Time Topography Application

Specific software for this application can be seen in Figure 3. During the second part of the performance, participants observe raw EEG brainwaves, followed by their representation as a topography, that is, how EEG brainwaves are distributed over the surface of the scalp. These EEG topographies may be likened to the topographies used in elevation maps for hiking. Instead of representing the terrain elevation on the Earth's surface, colors represent the strength of a specific brainwave at different locations on the head surface. In our case, we focused on brainwaves in a frequency band ranging from 8 to 12 Hz, called the alpha band. Alpha brainwave amplitudes vary quickly in time and space, and these dynamics may be rendered as animated colored maps on the dome. The topography is represented using either a classic 2D spherical projection or an interactive 3D head model from the OpenVibe software [29]. Using this set-up, participants may observe that when the subject closes his eyes, alpha wave amplitudes increase on the part of the dome that represents the back of the head. The corresponding brain area is called the occipital region, which is largely devoted to visual processing. When this region does not process visual information, that is, when the subject closes his eyes, alpha waves tend to increase in this brain area. Another way to increase alpha wave amplitude over the entire brain is to ask the subject to enter a deep relaxation state, but this requires more training from the subject and is more difficult to achieve in a single session: we have succeeded in performing this second part of the demonstration with only a few subjects. While the brain dynamics are shown on the dome, a musician simultaneously plays his instrument, trying to help the subject reach deeper relaxation states and giving him auditory feedback about his relaxation state based on the musician's own interpretation of the ongoing EEG patterns (Figure 4(b)). We have made the OpenVibe software scenario we developed to display alpha wave topography available under an open source license [31].
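As an illustration of the quantity being mapped to colors, the sketch below estimates per-electrode alpha-band amplitude in Python with SciPy. It is a simplified stand-in for the OpenVibe processing chain: the sampling rate follows the figure reported above for the Epoc headset, the filter order is arbitrary, and no artifact rejection is included.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 240          # sampling rate reported above for the Emotiv Epoc set-up
N_CHANNELS = 14   # one value per electrode drives one region of the color map

def alpha_amplitude(eeg):
    """Mean alpha-band (8-12 Hz) envelope per channel.

    eeg: array of shape (N_CHANNELS, n_samples). A 4th-order Butterworth
    bandpass isolates the alpha band; the Hilbert envelope then gives the
    instantaneous alpha amplitude, averaged over the buffer.
    """
    b, a = butter(4, [8 / (FS / 2), 12 / (FS / 2)], btype="bandpass")
    filtered = filtfilt(b, a, eeg, axis=1)
    envelope = np.abs(hilbert(filtered, axis=1))
    return envelope.mean(axis=1)

# One second of synthetic data standing in for a real EEG buffer
values = alpha_amplitude(np.random.randn(N_CHANNELS, FS))
```

These per-channel values would then be interpolated across the head model to produce the animated colored map projected on the dome.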
Neuroanatomy Using a 3D Interactive Brain Model

After the two interactive real time EEG sessions, the last part of our demonstration interactively showed the different parts of the cortex in the human brain volume. Although the BCI is not involved in this part, we describe it briefly to keep the description of the system's features complete. On the basis of gross topographical conventions, the cortex can be divided into four lobes: the temporal, occipital, parietal, and frontal lobes. The system, developed using the Blender Game Engine (https://www.blender.org/), allows manipulating the 3D model in order to show the different lobes and introduce some basic neuroanatomy concepts. We implemented rotation around different axes and zooming in and out for the projection of these 3D models on the dome. The 3D models are rendered using the Blender embedded full dome plugin to compensate for the deformation due to the dome-specific projection lens and surface. We have made the Blender file we developed publicly available [31].

Results and Critical Reception

The innovative aspect of our project was to combine real time brain electrical activity visualization tools with an immersive full dome environment. Participants were seated inside the space enclosed by the projection dome, which induces a special atmosphere and feeling. In addition, the interactive display of scientific and artistic content exploited the analogy between the shape of the projection space and the near spherical shape of the brain (see Methods). What participants heard was not necessarily limited to what was being played inside the dome, as the material used for the projection surface was not soundproof. Nevertheless, the acoustic properties of the dome were specific to its hemispheric shape, and this tended to enhance participants' experiences. The Brainarium was inaugurated during "The Brain's Awareness Week," an event organized every year in all large European cities. For a week, a series of exhibits is set up to present to the general public the latest advances in brain research. During "The Brain's Awareness Week" 2013, we performed more than 17 sessions demonstrating the Brainarium to more than 200 visitors. Following this encouraging start, our demo was also presented in Paris during the Cognitive Sciences Forum in the "Couvent des Cordeliers," at the Medical School of Paris, where it proved to be a very popular animation, with more than 180 visitors in one day. Our project was also featured in the most popular newspaper of South-West France (6 million readers), "La Dépêche," and mentioned on local radio stations. It is now regularly requested for performances in more and more cities across France and Belgium, for workshops in primary and secondary schools, and for various national events such as the French National Science Week. A questionnaire was filled in by participants after the performance to collect their impressions and how their experience in the Brainarium compared to the traditional conferences and lectures they had attended. This questionnaire allowed us to collect demographic data about participants, answers to four closed questions with Likert scales, and an open text field where subjects could give us their feedback freely. The first question asked participants whether they felt this type of demonstration promotes learning and memory compared to a conventional conference. Answers were given on a 5-point Likert scale ranging from 1 ("not at all") to 5 ("a lot"). The second question asked participants whether it was more entertaining than a traditional conference or course. Answers were given on a 5-point Likert scale ranging from 1 ("less entertaining") to 5 ("more entertaining"). Question three addressed whether participants were more or less absorbed by the presentation on the 3D dome compared to a presentation on a conventional rectangular screen. Answers were given on a 5-point Likert scale ranging from 1 ("less absorbed") to 5 ("more absorbed"). Finally, the fourth question asked whether participants felt discomfort (i.e., whether they felt dizzy) due to the presentation on the 3D dome. Answers were given on a 5-point Likert scale ranging from 1 ("not at all") to 5 ("a lot"). We collected data from a total of 134 participants in two distinct performance venues, over four different days. 52 participants were men and 82 were women, with an average age of 30.4 ± 17.3 years across all participants (minimum age 7; maximum age 80). Results from the questionnaire are shown in Figure 5. Our results show improved subjective learning compared to conventional methods, improved entertainment value, improved absorption into the material being presented, and little discomfort, with no participant experiencing strong discomfort.
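To make the reporting explicit, the short sketch below computes the per-question mean and standard error that a summary figure like Figure 5 would display. The response arrays are synthetic placeholders, not the actual questionnaire data.

```python
import numpy as np

questions = ["learning", "entertainment", "absorption", "discomfort"]
rng = np.random.default_rng(1)
# Synthetic 5-point Likert responses for 134 participants (placeholders)
responses = {q: rng.integers(1, 6, size=134) for q in questions}

for q, scores in responses.items():
    mean = scores.mean()
    sem = scores.std(ddof=1) / np.sqrt(len(scores))  # standard error of the mean
    print(f"{q}: {mean:.2f} ± {sem:.2f}")
```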
Discussion

Planetarium domes have previously been used to display various contents. However, to our knowledge, this is the first time that real time EEG data has been shown in such an environment. Our demonstration aroused some level of popular success and seemed to provide participants with a new type of interactive experience. Thus, we have made all the tools we developed available in the public domain for anyone interested in reproducing our demonstration. In the following sections, we will focus on four domains of application in which the Brainarium may potentially be used and further developed: education applications, entertainment applications, art applications, and immersive neurofeedback applications.

Education and Training Applications

The current Brainarium set-up already provides educational material to explain some basic concepts in Cognitive Sciences. We are currently exploring the possibility of showing content using stereoscopic projection methods, with the goal of providing an even more intense immersive experience to the public. We currently focus on porting two classical BCI applications to the dome environment and developing pedagogical materials. The first application involves visualizing brain electrical activity related to emotion. Recent studies have reported that it is possible to differentiate emotional reactions and states using EEG in real time [14,37]. When the participant wearing the EEG headset experiences a given emotion, an appropriate dynamical pattern reflecting the subject's emotion would be shown on the dome. The second application involves visualizing brain electrical activity associated with real and imagined body movements. Execution or mental visualization of body movements gives rise to typical brain rhythms [38]. These rhythms are recorded at the scalp surface and may be used to control a visual display or even robotic devices. Moreover, results from fMRI studies in these domains can be shown to complement the explanations, illustrating the brain areas and processes involved. The ultimate goal is to use the interactive and immersive dimensions to create and stimulate curiosity, attention, and interest in order to serve pedagogical purposes.

Entertainment Applications

The Brainarium could potentially be used as an immersive environment for BCI based games. BCIs appear as a potential new way to gain control over a video game or a virtual world [39,40]. Several EEG products specifically developed for BCI games have recently been made available to the general public in the form of commercial games (Star Wars Force Trainer and Mindflex by Mattel, Inc.) and video games (Mindout: http://www.mindoutgame.com/, Free [41]). Several game studios have even specialized solely in designing BCI games (MindGames: http://mindgames.is/, Dreams of Danu: http://www.dreamsofdanu.com/). Immersive environments such as hemispheric projection surfaces have already been used for video games (e.g., with the Blender full dome compatible Game Engine) [30], but never in conjunction with BCI systems. Moreover, a previous study by Lalor et al. [42] pointed out that subjects find multimodal feedback, such as the visuo-auditory feedback delivered by the Brainarium, useful in learning to control the game, suggesting that immersion increases sensation and therefore provides a more enjoyable game experience. However, the engagement required by the task of controlling the game using brainwaves might be too demanding and might degrade the game experience. Nelson et al. [43] showed that concentration on the BCI task interacted with the feeling of presence in a virtual reality environment.
However, they also report that over time BCI control became more automatic for subjects as their brain adapted to the device, which allowed them to be gradually more absorbed by the virtual reality environment and to feel more present. This description differs from what most subjects who experience the same virtual environment without BCI report: initially, participants feel a high sense of presence, which gradually drops as they realize the limitations of the virtual environment [44]. But what does the dome bring compared to a classic head mounted virtual reality device such as 3D goggles? An experiment studied the experience of users in an immersive device called the Cave [45], a room in which the user is presented with high-resolution stereo-pair images projected in real time on three walls and the floor, which provides an experience similar to a dome environment. The authors compared the experience of users in several environments: no immersion, head mounted 3D goggles, and the Cave. Subjects rated the Cave as providing a more immersive experience than all other conditions. Subjects also reported that the Cave was more comfortable than the head mounted goggles. There are numerous potential causes of visual discomfort when viewing stereo displays [46]. One of them is the vergence-accommodation conflict, along with small left/right asymmetries, which are potentially present in all conventional stereo displays [47]. These results argue in favour of dome or room based systems for producing highly immersive environments.

Art Applications

More and more exploratory work using digital media and interactive devices is emerging on the art scene, leading to the relatively new field of interactive art. In this developing genre of art, the public usually provides input that determines some parts or characteristics of the created content. Interactive art provides a ground for dialogue between the artist and the public through the potential of actions or reactions, introducing either intentional or passive ways to act upon the artwork. The Brainarium is specific in the sense that the participant's brainwaves are the source of interactivity. The artist may modulate the multimedia artwork projected on the dome based on the participant's brainwaves. As mentioned in the education applications section, the artist may be able to extract the subject's emotion and adapt the art forms being shown on the dome. Our system finally opens up the possibility of live co-participation involving one or several participants wearing EEG headsets.

Medical Applications Using Immersive Neurofeedback

Neurofeedback is a type of brain computer interface application used in clinical environments to help treat pathological traits [48][49][50][51]. Neurofeedback is being used to treat neuropsychological pathologies, epilepsy, ADHD, addiction, and depression [6,52,53], and to improve performance (stress management, creativity, attention and focus, and control of impulsivity) [7,8,[54][55][56][57]]. The idea behind neurofeedback is that pathological mental states generate abnormal brain rhythms. By training patients to control their brain rhythms and suppress the pathological ones, it might be possible to treat specific mental pathologies. Note that neurofeedback is not yet widely accepted in the scientific and medical communities, although recent neuroscientific works indicate some level of clinical efficacy and a bright future for this discipline [58][59][60][61][62].
Recent research results brought evidence that, in the context of neurofeedback training, immersion tends to improve training efficiency compared to classic feedback on a 2D screen [63]. As stated by Lécuyer et al. [64], virtual reality (VR) technologies provide motivating, safe, and controlled conditions that enable improvement of BCI learning. As reported in a recent review by Pfurtscheller et al. [65], a realistic virtual and immersive environment enhances the feeling of presence, task performance, and also cortical activation [66-68]. Studies indicate that more game-like and engaging neurofeedback applications often result in better performance [69,70]. Subjects report that the games are more stimulating and that multimodal immersive feedback is useful [42]. Previous studies have used virtual reality goggles with neurofeedback [63], but neurofeedback has never been performed in immersive environments like the one we are presenting here. Immersive environments could potentially offer numerous other benefits to patients, such as reduced training time, improved classification accuracy, increased sense of immersion and presence in an artificial setting, and reduced boredom or fatigue [71]. Finally, in the context of a therapeutic neurofeedback session, the dome provides a unique environment for enhanced intimacy between the patient and the therapist. In the specific field of emotion regulation, fMRI neurofeedback recently brought very promising results [72-76]. However, despite the difficulty of recording subcortical regions of the brain involved in emotion generation, results obtained with EEG recordings [14,15,37] could be extended and refined in order to benefit from the high temporal resolution of the EEG and target, in particular, cortical areas involved in emotion monitoring and regulation [77]. Independent component analysis and source reconstruction methods could potentially be used to improve EEG spatial resolution and signal-to-noise ratio. Cannon et al. [78] showed that limbic lobe and hippocampal activity can be recorded and visualized using LORETA during affective memory recall. In another study, Cannon et al. [79] showed that it was possible to learn to self-regulate activity in the anterior cingulate gyrus, an area of the brain known to be involved in both cognitive and affective processes. ICA neurofeedback and LORETA neurofeedback are indeed possible in an immersive set-up such as the Brainarium. Following recent developments in the field of virtual reality technology, several studies have argued in favour of the benefits of using virtual reality in the treatment of various pathologies or disorders related to emotions, such as anxiety disorders (for a review, see [80]). Bringing together BCI and VR could potentially help not only to better monitor and therefore optimize the therapy, but also to give rise to new therapeutic techniques. Conclusion We described the first interactive system allowing real-time spatialized visualization of electrical brain activity in a brain-like shaped immersive environment. This device was initially intended to deliver scientific knowledge using a pedagogical medium at the crossing between art, science, and technology. Its modular architecture allows it to be extended and adapted to various implementation solutions, adjusting costs to different contexts of deployment. This innovative concept can be further developed into a rich variety of applications in the educational, entertainment, art, and medical domains.
Chemical and Biological Profiling of Fish and Seaweed Residues to Be Applied for Plant Fertilization: Brown algae and fish waste contain high-value compounds with potentially beneficial effects on plant growth. Several commercial fertilizer products are currently available, but the characteristics of the materials are usually not well described. Fish and seaweed residues originating from the Norwegian coast are available after industrial processing, and these may be combined into complete fertilizers exerting additional effects on crop plants (biostimulants). In this study, raw samples of fish and seaweed residues were investigated using ecofriendly technologies (drying, leaching), targeting the search for and isolation of potential biostimulants, followed by physicochemical characterization (elemental analysis, UV-visible, FT-IR, ICP-MS, ICP-OES, electrical conductivity, pH, etc.). Organic solvent extractions were employed to determine the available mineral content, micro- and macro-nutrients, antioxidant compounds, and amino acid content by chemical hydrolysis. The in vitro biotoxicity profile (cell viability, morphology, migration) of the generated extracts was also perused, employing Gram-positive (Staphylococcus aureus) and Gram-negative bacteria (Escherichia coli) along with the sensitive neuronal eukaryotic cell lines N2a58 and SH-SY5Y, to assess their time- and concentration-dependent efficacy as antimicrobials and agents counteracting oxidative stress. The analytical composition of all raw materials showed that they contain important nutrients (K, P, Ca, N) as well as organic compounds and amino acids (Gly, Asp, Glu, Leu, Phe) capable of acting as plant biostimulants. Concurrently, the inherently high conductivity values and salt content necessitated leaching processes, which resulted in Na+ and K+ decreasing by more than ~60%, further justifying their use in soil treatment formulations. The aforementioned results and assertions, combined with physical measurements (pH, electrical conductivity, etc.) on naturally occurring and dried samples as well as green solvent extracts, formulated a physicochemical profile reflecting well-defined inorganic-organic species that might function as biostimulants. The collective physicochemical and biological properties support the notion that appropriate mixtures of marine organism residues may be efficient fertilizers for crop plants and concurrently possess biostimulant characteristics. Introduction As mineral reserves of phosphorus (P) [1], potassium (K) [2], and other essential plant nutrients on the planet are progressively getting scarce, there is increasing interest in natural sources of minerals and other valuable compounds to be applied as fertilizers. This work targets (a) the establishment of a well-defined physicochemical profile, and (b) the establishment of a well-defined biological activity profile reflecting the biotoxicity (cell viability, morphology, chemotactic migration, proliferation) and antioxidant potential of the raw materials and/or green extracts thereof in in vitro bacterial (E. coli, S. aureus) and eukaryotic (N2a58, SH-SY5Y) cell line cultures. For the first time, a multilateral investigation of variable-nature marine organism residual materials provides a novel comprehensive global (bio)chemical profile, based on which well-defined hybrid composites can emerge that are capable of acting as ecofriendly fertilizers/biostimulants in crop plant agriculture.
Materials and Methods The work included both physicochemical and biological investigation of the marine residual materials at hand. In that framework, an entire series of physical and analytical chemical methodologies was evoked to discover the properties of the natural-origin samples that would ensure their identity and justify further biological perusal of their potential fertilizing and biostimulant characteristics. In line with that logic, the ensuing biological investigation included a plethora of biotoxicity and antioxidant activity assays in bacterial and eukaryotic cultures, collectively providing a complete picture of the scientific background, further supporting the formulation of naturally emerging hybrid fertilizers capable of stimulating crop plant growth. The individual methods and instrumentation used in that effort are presented below. Description of the Tested Materials The production of liquid fertilizers/biostimulants is a well-established industry in Norway, where wild rockweed (Ascophyllum nodosum) is harvested along the coast, dried, ground, and extracted with acid and alkaline solutions to produce a liquid with significant positive effects on plant growth. The producing company is Algea AS, located in Kristiansund, NW Norway. The residues after such extractions are a sludge with about 30% dry matter (DM), which is still rich in important minerals, such as K, and could be applied as a soil amendment, although the amounts per hectare and year would have to be restricted because of the concentrations of some potentially toxic elements, especially cadmium [19]. The industry (Algea AS) uses two main types of extraction, resulting in two main types of sludge, both chemically quite comparable, except for their total nitrogen (N) content. Nitric acid (HNO3) is applied in the extraction process of the sludge that is herein dubbed high-N seaweed (HNSW), whereas another acid is applied in the extraction of the second sludge type, which is herein dubbed low-N seaweed (LNSW). Sludge was transported from Algea AS to the Norwegian Centre for Organic Agriculture (NORSØK) in open IBC tanks, and representative samples of HNSW and LNSW were frozen and brought to our labs for investigation. Clip fish is a typical fish product in Norway, made by salting and drying fillets from captured white fish. The material analyzed in the present study was ground backbones from cod (Gadus morhua), cusk (Brosme brosme), and common ling (Molva molva). The backbones contain some other tissues, but the main part is fishbone. The material is hereby dubbed ground fish bones (GFB). The material was produced from thawed backbones (Sigurd Folland AS, Averøy, Norway) and sent to our labs for investigation. Upon arrival, the samples were still frozen and as such they were stored frozen (−20 °C) until further use. The two conductivity standards used in this work were supplied by PanReac AppliChem ITW and had values of 147 and 1413 µS/cm, respectively. The calibration standards used for ICP analyses were purchased from CPA Chem (Bogomilovo, Bulgaria), including (a) one standard containing a 100 mg/L mixture of 32 metals, and (b) the internal standards at 1000 mg/L, specifically containing 6Li, 72Ge, 89Y, 103Rh, 232Th, 59Co, 140Ce, and 205Tl. Restek FAME Mix, Food Industry FAME Mix, and neat methyl heptadecanoate standard were obtained from Restek (Restek Co., Bellefonte, PA, USA). An AS14a Dionex column (Thermo Fisher, Waltham, MA, USA) was used as an anion separation column.
A 100 m HP-88 capillary chromatographic column (0.25 mm i.d. × 0.25 µm film) was used for the separation of fatty acid methyl esters (F.A.M.E.). A citrate buffer solution (pH 2.2) was prepared using sodium citrate dihydrate (1.967 g), thioglycolic acid (10 mL), and HCl 37% until the pH reached a value of 2.2. Physical Measurements As a general rule, all physical, chemical, and biological parameters investigated in this work are the result of triplicate experiments carried out according to the protocols, methods, and instrumental measurements described below. Due mention of the multiplicity of experiments run over the course of the present research is provided in the ensuing description of experimental methods and techniques employed. pH and Conductivity An automated robotic pH-conductivity system model AR-2 (Seal Minilab, Mequon, WI, USA) was employed to conduct reliable measurements of pH and conductivity on the materials at hand. For pH and conductivity measurements, a 2:1 water (Type 1 ASTM water, ultrapure) to sample ratio was used. Subsequently, the mixture was stirred for 10 s with a robotic extraction system and measurements ensued automatically using the pH and conductivity probes. Total Carbon, Total Organic Carbon (TOC), and Total Nitrogen Determination A CN elemental analyzer (Leco Truspec, St. Joseph, MI, USA) was employed for the analysis of carbon and organic carbon content as well as total nitrogen content. To that end, a sample of known weight was placed into a high-temperature furnace. The ensuing combustion converts carbon to CO2 and nitrogen to N2 for carbon and nitrogen measurement, respectively. The respective gases are swept through scrubbers into detection systems. The same method was employed for organic C by determining the carbon content after removal of calcium carbonate with HCl (37%). For total carbon, total nitrogen, and total organic carbon (TOC) determination, 100% silver cups of 8 × 4 mm dimensions (Elemental Microanalysis, Devon, UK) were used for the preparation of samples. In order to measure TOC using the elemental analyzer, it was necessary to effectively remove all inorganic carbon; after such removal, the carbon remaining in the sample is only the organic portion. In that respect, 100 mg of every sample was weighed into a pure silver cup and 100 µL of concentrated HCl was added three times every 8 h, for a total of 24 h, to remove all carbonates, i.e., inorganic carbon. Subsequently, the remaining sample was placed into the elemental analysis system for carbon analysis. The amount of carbon measured is the organic carbon (OC %) content. All experiments were run in multiple sets of three independent measurements, with each individual group involving three repeated measurements. FT-IR Spectral Measurements FT infrared spectral measurements were taken in the solid state, on a Thermo Finnigan IR-200 FT-IR spectrometer (Thermo Fisher, Waltham, MA, USA) using KBr pellets. FT-IR spectroscopy provides the vibrational imprint of a material (solid, liquid, gas), through which its identity as well as its properties can be revealed. The materials were dried, mixed with KBr in a ratio of 99:1 w/w, and pellets were produced with a press. The samples were subsequently introduced into the spectrometer and scanned in the range from 4000 to 400 cm−1.
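As a worked illustration of the carbon bookkeeping underlying the TOC procedure described above, the short sketch below computes the inorganic fraction of total carbon from a total-C measurement (before carbonate removal) and an organic-C measurement (after removal). The function name and example values are illustrative, not measured data.

```python
# Minimal sketch of the TOC bookkeeping described above; the example
# values are illustrative, not measured data from this study.
def inorganic_carbon_fraction(total_c_pct: float, organic_c_pct: float) -> float:
    """Inorganic C as a fraction of total C (TC measured before, OC after
    carbonate removal with HCl)."""
    if total_c_pct <= 0:
        raise ValueError("total carbon must be positive")
    return (total_c_pct - organic_c_pct) / total_c_pct

# e.g., a hypothetical sample with 40% total C and 20% organic C:
print(f"{inorganic_carbon_fraction(40.0, 20.0):.0%} of total C is inorganic")
```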
Microwave Digestion A CEM digestion unit (Mars 6, Matthews, NC, USA) was used with the appropriate Mars XP-1500 carousel starter set (Mars 6, Matthews, NC, USA). Teflon high-pressure liners were used. Practically, a quantity of 500 mg of sample was weighed, with 0.1 mg accuracy, into a Teflon liner. A volume of 4 mL of nitric acid (HNO3, 65%) and 1 mL of hydrogen peroxide (H2O2, 30%) were added to the liner containing the sample under investigation. Subsequently, a pressure ramp program up to 200 psi (25 min ramp, 10 min hold at 200 psi) was used to fully digest the sample. Then, the digest was diluted in a 50 mL volumetric flask and taken for ICP-MS and ICP-OES analyses. ICP-MS Spectrometry ICP-MS spectrometry was run on a 7500 Series ICP-MS (Agilent Technologies, Santa Clara, CA, USA) facility. It is an inductively coupled plasma mass spectrometer (ICP-MS), which can measure trace elements as low as one part per trillion (ppt) or quickly scan more than seventy elements to determine the composition of an unknown sample. The system consists of an RF-generated plasma system coupled with a single-quadrupole MS detector. The octopole reaction system (ORS) before the detector quadrupole was an octopole ion guide contained in a stainless steel vessel pressurized with helium gas. The ORS eliminates any interference coming from the sample matrix. For liquid handling, an A-IS autosampler by Agilent Technologies was used. In addition, the system incorporates a high-throughput sample system for faster sample analysis. The MS system was under vacuum, with an Edwards 18 roughing pump and a high-vacuum region in the vacuum manifold maintained by a turbomolecular pump. For the ICP-MS unit, the isotope selection was as follows: for Pb, all isotope masses were used, i.e., 206, 207, and 208, with an internal standard (ISTD); for Th, the isotope with mass 232 was employed; for Cd, the isotope was that with mass 111 and ISTD was used; for Y, the isotope with mass 89 was used; for Ni, the isotope mass was 60 and ISTD was used; for Ge, the isotope with mass 72 was employed; for Cr, the isotope mass 52 was selected with ISTD; finally, for As, the isotope used was the one having mass 75, with ISTD. All measurements were performed in triplicate with median values recorded (vide infra). ICP-OES Analysis The same digests used for ICP-MS analysis were also used for ICP-OES analysis. To that end, a 5110 ICP-OES (Agilent Technologies, Santa Clara, CA, USA) inductively coupled plasma optical emission spectrometer was employed, with technology enabling synchronous radial and axial measurements. An SPS-3 autosampler was used for running the standard solutions and the unknown samples. The ICP-OES was used in radial plasma viewing mode, with element-specific emission lines selected accordingly. Total Fat Analysis An automated Soxtherm 6-position system (Gerhardt, Bonn, Germany) was used and an extraction cycle at 150 °C was programmed for 1.5 h. The extracted solution was subsequently evaporated to remove the extraction solvent. Then, it was placed at 105 °C for 45 min to remove any solvent residues. The difference in weight provided the extractable content, expressed as total fat. Evidently, some of the extracted compounds were not fatty acids; that is the reason why the samples were further examined through F.A.M.E. analysis (vide infra).
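Since total fat is obtained gravimetrically here, the following sketch illustrates the weight-difference calculation implied above; all masses are hypothetical placeholders, not measured values.

```python
# Minimal sketch of the gravimetric total-fat calculation implied above;
# all masses are illustrative placeholders.
def total_fat_percent(vessel_empty_g: float, vessel_residue_g: float,
                      sample_g: float) -> float:
    """Total fat (%) = extractable residue mass / sample mass x 100."""
    return (vessel_residue_g - vessel_empty_g) / sample_g * 100.0

# e.g., 0.115 g of residue recovered from a 1.000 g sample:
print(f"total fat = {total_fat_percent(52.000, 52.115, 1.000):.1f}%")  # 11.5%
```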
Nitrate Determination through Ion Chromatography The method used an anion separation AS14a Dionex column (Thermo Fisher, Waltham, MA, USA) with an ASRS 300 anion suppressor (Thermo Fisher, Waltham, MA, USA) set at 100 mA. The flow was set at 1.2 mL/min under isocratic conditions, with an aqueous mixture of Na2CO3 and NaHCO3 at 3.5 mM and 1.8 mM concentrations, respectively. A calibration curve from 0.15 to 500 mg/L NO3− was constructed to measure the unknown samples. Ammonium Determination through Derivatization and Ultraviolet (UV-Visible) Spectroscopy Measurements were carried out on a UV-visible spectrophotometer 1240 UV-Mini (Shimadzu, Kyoto, Japan). During a typical experiment, ammonium ions in a sample react with hypochlorite ions (ClO−) to form chloramine, which in turn reacts with alkaline phenol in the presence of nitroferricyanide. A blue indophenol dye is produced, and this reaction was used to create a calibration curve with ammonium standard solutions, thus allowing measurement of unknown samples at 660 nm (the blue color absorption wavelength). A range of up to 3 mg/L was sufficient to measure all diluted samples. All experiments were run three times, each in triplicate samples. Fatty Acid Methyl Ester Analysis through GC-FID Methyl esterification took place with boron trifluoride to produce fatty acid methyl esters (F.A.M.E.). Their profile was determined using the retention times of the 37-component standard FAME mix (Restek Co., Bellefonte, PA, USA). A 100 m HP-88 type column (Agilent Technologies, Santa Clara, CA, USA) was employed with an HP6890 GC-FID system (Agilent Technologies, Palo Alto, CA, USA). Running conditions included a 2 mL/min flow and a temperature gradient as follows: 120 °C, hold 1 min; ramp 10 °C/min to 175 °C, hold 10 min; ramp 5 °C/min up to 210 °C, hold 5 min; ramp 5 °C/min up to 230 °C, hold 5 min. The areas of the identified methyl esters were compared to the area of the methyl heptadecanoate standard in order to calculate the actual fat content of the samples. All experiments were run three times, each in triplicate samples. Gas Chromatography-Mass Spectrometry (GC-MS) For the organic screening, analyses of extracted samples were performed using a Trace GC Ultra, Thermo, TSQ Quantum XLS system (Thermo Fisher Scientific, Waltham, MA, USA) in full-scan mode, with a DB-5UI capillary column (Agilent, Santa Clara, CA, USA) (30 m, 0.25 mm i.d., 0.25 µm film thickness). The carrier gas was helium, running at a flow rate of 1.3 mL/min. The column temperature was initially 45 °C for 15 min, then gradually increased to 280 °C at 25 °C/min, and finally held for 18 min at 280 °C. For GC-MS detection, an electron ionization system was used with an ionization energy of 70 eV. The extracts were injected undiluted, at 2.0 µL volume, in split mode with a 1:50 split ratio. Injector and detector temperatures were set at 250 and 280 °C, respectively. All experiments were run three times, each in triplicate samples. Leaching Procedure A mass of 20 g of sample was leached with 1 L of Type 1 ultrapure water. The water was poured, while mixing, onto the surface of a 0.500 mm sieve, at a rate of 100 mL/min. The sample was subsequently dried and analyses were performed on the leached samples (HNSW-L, LNSW-L, and GFB-L). All experiments were run three times, each in triplicate samples.
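Both the nitrate and the ammonium determinations rest on a linear calibration curve relating standard concentrations to instrument response. A minimal sketch of such a quantitation, with illustrative (not measured) standards and signals, could look as follows.

```python
# Minimal sketch of quantifying an unknown against a linear calibration
# curve, as used for the nitrate and ammonium measurements; the standard
# concentrations and signals below are illustrative.
import numpy as np

standards_mg_l = np.array([0.15, 1.0, 10.0, 100.0, 500.0])   # NO3- standards
signals = np.array([0.9, 6.1, 60.3, 601.0, 3004.0])          # detector response

slope, intercept = np.polyfit(standards_mg_l, signals, deg=1)  # least squares

def concentration(signal: float, dilution_factor: float = 1.0) -> float:
    """Back-calculate mg/L from a detector signal, correcting for dilution."""
    return (signal - intercept) / slope * dilution_factor

print(f"unknown = {concentration(92.4):.1f} mg/L NO3-")
```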
Extraction Procedure of HNSW, LNSW, and GFB Samples A quantity of 7-8 g of fresh sample of each material was weighed and allowed to stand in the open air until completely dry. After that, ~1.5 g of dry sample was ground (ceramic mortar) into a fine powder and then placed in a 250 mL separatory funnel. A volume of 12.5 mL of ethyl acetate (EA) was then added and the mixture was shaken for 10 min. The resulting extraction mixture for HNSW (yellow solution for LNSW, colorless solution for GFB) was subsequently filtered into a 50 mL falcon tube, thereby producing a green filtrate (yellow for LNSW and colorless for GFB). The procedure was repeated twice. The final volume (out of the three extractions in each case) of ~35 mL of solution was centrifuged for 5 min at 9000×g at 4 °C. The so-generated clear solution was allowed to evaporate at room temperature. After 15 days, the extracted materials (HNSW-E-EA, LNSW-E-EA, and GFB-E-EA) were used for further analysis and biological studies. The aforementioned procedure was repeated with n-hexane as a solvent (HNSW-E-H, LNSW-E-H, and GFB-E-H). All experiments were run three times, each in triplicate samples. Amino Acid Analysis The samples (HNSW, LNSW, and GFB) were digested according to the AOAC 994.12 method with some adjustments [20]. Specifically, a defined amount of a dried sample was weighed into a 30 mL vial (on the basis of the calculation formula of the AOAC 994.12 method) and 25 mL of 6 N HCl, with a 0.1% phenol solution, was added. Protein hydrolysis was performed in a drying oven (Memmert, Poznan, Poland) with temperature control for 23 h at 110 °C. Subsequently, after the mixture reached room temperature, it was filtered, rinsed three times, and brought to a volume of 100 mL in a volumetric flask. From this solution, 1 mL of the extract was evaporated under a nitrogen flow and the residue was dissolved in 2 mL of H2O. The sample was then ready for analysis. For the amino acid (AA) analysis, a standard solution of amino acids (AAs) (mixture), which contained His, Ser, Arg, Gly, Asp, Glu, Thr, Ala, Pro, Lys, Tyr, Val, Ile, Leu, Phe, and Met at the same concentration of 2.5 mmol/L, and Cys at a concentration of 1.25 mmol/L, was purchased from Thermo Fisher. Reagents for pre-column derivatization of amino acids were purchased in the form of the AccQ-Tag reagent kit (Waters, Milford, MA, USA). The mobile phase was composed of solvent A, 5% AccQ-Tag Ultra Eluent A, and solvent B, 100% acetonitrile. For the actual experiment, the ACQUITY I UPLC system (Waters, Milford, MA, USA), comprising a column oven (thermostat), autosampler, high-pressure binary pump, and photodiode array (PDA) detector, was used for the analysis of the 17 AAs. Chromatographic separation was pursued with the AccQ-Tag Ultra C-18 column (2.1 mm × 100 mm; 1.7 µm). The separation of the AAs was carried out according to the protocol provided (Waters, Milford, MA, USA). Briefly, the following chromatographic conditions were employed: PDA detector wavelength, 260 nm; injection volume, 1 µL; samples and column kept at temperatures of 20 °C and 55 °C, respectively. The AA separation was carried out using gradient elution for 10 min, at a flow rate of 0.7 mL/min. All experiments were run three times, each in triplicate samples. DPPH Radical Scavenging Activity The 2,2′-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging activity of the selected extracts (vide infra) was investigated according to a procedure described elsewhere [21], with slight modifications.
After full evaporation of the solvent at room temperature, 55.5 mg of HNSW-E-EA, 48.4 mg of LNSW-E-EA, and 19.8 mg of GFB-E-EA were dissolved in 1 mL, 0.5 mL, and 1.5 mL of methanol, respectively. A quantity of 50 µL of the extract was mixed thoroughly with 1.95 mL of 0.1 mM DPPH solution in methanol. The samples were placed in darkness for 30 min. Subsequently, the absorbance was measured at 515 nm, also including a control sample (50 µL of methanol and 1.95 mL of 0.1 mM DPPH methanolic solution), using a U-1900 Hitachi spectrophotometer (Hitachi, Tokyo, Japan). Every sample was measured in triplicate and the percentage of DPPH scavenging activity was calculated as follows: % DPPH scavenging activity = [(Abs(control) − Abs(sample))/Abs(control)] × 100. The results were expressed as ascorbic acid equivalents (AAE)/g dry extract. Bacterial Cell Cultures In vitro bacterial cell culture experiments were conducted using the well diffusion method on Luria-Bertani agar (LB agar) (AppliChem PanReac, Darmstadt, Germany) petri dishes. Luria-Bertani broth (LB broth) (Sigma Aldrich, Munich, Germany) was also used as a liquid nutritional medium in cell cultures. Specifically, sample extracts were studied in Gram-positive (Gram (+)) (Staphylococcus aureus; S. aureus) and Gram-negative (Gram (−)) (Escherichia coli; E. coli) bacterial cultures. Positive (LB broth) and negative controls (penicillin-streptomycin (Biowest, Nuaillé, France)) were also included in every experiment run. All experiments were run in triplicate under aseptic conditions. A shaking incubator operating at 37 °C was used for bacterial culture incubations (solid and liquid cultures). Optical density (O.D.) measurements attesting to the growth of liquid cultures of bacteria were carried out on a Hitachi UV-visible U-2800 spectrophotometer (Hitachi, Tokyo, Japan). Specifically, prior to a solid or liquid bacterial culture investigation, 3-5 freshly grown bacterial colonies were inoculated into 3 mL of LB broth, using a 25 mL Erlenmeyer flask in an Edmund Bühler TH15 shaking incubator (Edmund Bühler GmbH, Bodelshausen, Germany) at 37 °C for 1-2 h, until the O.D. at 600 nm reached a value of 0.5 (5 × 10^5 CFU·mL−1). The specific inoculum was further used for solid or liquid culture studies. The specific extract concentrations investigated are shown next in % v/v, for simplicity, and correspond to a specific amount of extract/dry matter as shown in Table S1. All experiments were run three times, each in triplicate samples. Growth Rate of Bacteria in Liquid Cultures A 1:50 dilution of inoculum was further carried out in a 250 mL Erlenmeyer flask and the growth rate of each type of bacterium was investigated at 37 °C under standard shaking conditions for 3-6 h. A U-2800 UV-visible spectrophotometer (Hitachi, Tokyo, Japan) was used for the measurement of the optical density (O.D.) at specific time intervals, until the value had reached ~1.0. LB broth was used as a positive control and penicillin-streptomycin as a negative control. The growth rate of bacteria was monitored upon exposure to the HNSW-E-EA, LNSW-E-EA, and GFB-E-EA samples in DMSO (compared to the appropriate control). All experiments were run three times, each in triplicate samples. Determination of Zone of Inhibition (ZOI) in Solid Bacterial Cultures Well diffusion tests were carried out for the determination of ZOI on Mueller-Hinton agar plates.
Specifically, 30-35 mL of LB agar was poured onto a plate and left to dry for about 30 min, in order to allow for the absorption of excess moisture. Subsequently, 6 mm wells were created using a sterile micropipette tip by punching into the agar flat bed. Subsequently, 100 µL of each investigated extract concentration was applied into the wells. The plates were incubated at 37 °C overnight (15 h). Four wells were created and thus investigated in every independent plate; two of the wells served as control groups and two for the monitoring of the investigated extracts. All experiments were run four times, each in triplicate samples. Neuronal Cell Cultures In the present study, murine neuroblastoma N2a58 and human neuroblastoma SH-SY5Y cell lines were employed to investigate the in vitro biological profile of the seaweed (HNSW-E-EA, LNSW-E-EA) and fish (GFB-E-EA) samples dissolved in DMSO. Cells were cultured in 75 cm2 cell culture flasks, under appropriately chosen conditions (5% CO2 at 37 °C and standard humidity), in either Dulbecco's modified Eagle's medium (DMEM) (Biowest, Nuaillé, France) for N2a58 or in a 1:1 mixture of DMEM and Ham's F-12 nutrient mixture, hereafter called DMEM-F12 (Biowest, Nuaillé, France), for SH-SY5Y cells. Culture media were supplemented with 10% fetal bovine serum (FBS) (Biowest, Nuaillé, France) and 1% penicillin-streptomycin (Biowest, Nuaillé, France) prior to use. All experiments were run at least in triplicate, employing cells with a low passage number. For the SH-SY5Y cell line only, the adherent population was taken into consideration and used further, whereas floating cells were discarded during media changes. All experiments were run four times, each in triplicate samples. Cell Viability and Proliferation Studies Cell viability (expressed as % survival rate), following incubation in the presence of the herein generated samples under the aforementioned conditions, was investigated using the XTT (sodium 3′-[1-(phenylaminocarbonyl)-3,4-tetrazolium]-bis(4-methoxy-6-nitro)benzenesulfonic acid hydrate) assay (Cell Signaling). Briefly, 96-well plates were used, with 5000 cells in a 100 µL volume of the complete medium seeded into each well and incubated overnight. Subsequently, the cells were treated with the sample extracts at various concentrations (~0.20-6.5 × 10^3 ng extract/g dry extract) for 24, 48, and 72 h. The XTT detection solution was prepared according to manufacturer instructions (electron coupling solution to XTT reagent at a 1:50 volume ratio) and 50 µL of that was added to each well, followed by incubation of the plate for four hours. Immediately after incubation, the absorbance was measured at 450 nm, using an EL10A ELISA microplate reader spectrophotometer (Biobase, Shandong, China), as described elsewhere [22,23]. The XTT assay is based on the reduction of the XTT tetrazolium salt to a highly colored formazan dye by dehydrogenase enzymes in metabolically active cells, thus providing a proportional correlation between the amount of formazan produced (measured by the absorbance) and the viable cells in each well. Extracted materials (HNSW-E-EA, LNSW-E-EA, and GFB-E-EA) (vide supra) were dissolved in 8.5 mL of molecular biology grade DMSO, followed by vortexing, sonication, and filtration.
Fresh stock solutions of the sample extracts (HNSW-E-EA, LNSW-E-EA, and GFB-E-EA) were prepared from the DMSO stock solution at the desired concentration, in the appropriate complete culture medium (DMEM or DMEM-F12, both containing 1% penicillin-streptomycin and 10% FBS). All derived solutions were freshly prepared prior to experimentation. Final working concentrations of samples were added directly to the cell cultures and the latter were incubated for the desired time periods according to the protocols followed. All experiments were run three times, each in triplicate samples. Cell Morphology Studies To further assess the biotoxicity profile (cell viability, morphology, migration) of the extracts, cell morphology studies on both cell lines were conducted. Briefly, 200,000 cells in 2 mL of culture medium for each cell line were seeded into a well of a sterile 6-well plate and left to incubate overnight, in order for the cells to attach to the well surface. Subsequently, the cells were treated with the sample extracts (~0.20-6.5 × 10^3 ng extract/g dry extract) and incubated for 24, 48, and 72 h, to examine any morphological alterations. Visualization of the cells was performed using an Oxion Inverso biological microscope (Euromex, Arnhem, The Netherlands). Observations made during visualization were followed by processing the generated images of the cell cultures with the ImageJ 1.53t imaging software (National Institutes of Health, Bethesda, MD, USA), thereby resulting in pictures depicting any morphological alterations of cells under the influence of the employed samples and the extent to which changes occurred. All experiments were run three times, each in triplicate samples. Cell Migration Studies An important feature of the studied cells, contributing to the formulation of the biotoxicity profile of the extracts, is chemotaxis (the movement of cells in response to a chemical stimulus) and any changes in it that might occur in the presence of the extracts. To pursue such experiments, a wound-healing (scratch) assay was employed [24]. Specifically, 200,000 cells in 2 mL of culture medium were seeded into a well of a sterile 6-well plate. The plate was placed in the incubator until the confluency of the cells had reached ≈80%. Then, a scratch was made on the monolayer of the cells, using a sterile pipette tip. Subsequently, the culture medium was replaced by the sample extract solutions and the plate was returned to the incubator for 24, 48, and 72 h. At the end of the indicated periods, visualization and quantification of cell migration were performed, using an Oxion Inverso biological microscope (Euromex, Arnhem, The Netherlands). Observations made at this juncture were followed by processing of the generated images of the cell cultures through the appropriate imaging software, ImageJ version 1.53t (National Institutes of Health, Bethesda, MD, USA) [25], thereby resulting in pictures depicting any chemotactic movement of cells (reflecting biologically repaired scratches through cell migration) that might have taken place and the extent to which that occurred. All experiments were run three times, each in triplicate samples. Statistical Analysis The obtained experimental data are presented as average ± standard error of the mean (SEM) values of multiple sets of independent measurements. Mean cell survival rates and SEMs were calculated for each individual group.
Absolute survival rates were calculated for each control group and one-way analysis of variance (ANOVA) was performed for all pair comparisons, followed by post hoc analyses (Dunnett) using GraphPad Prism v.6 (GraphPad Software Inc., Boston, MA, USA). Significance levels were assessed as follows: * p < 0.05 (significant), ** p < 0.01 (highly significant), *** p < 0.001 (extremely significant), and **** p ≤ 0.0001 (extremely significant), or non-significant (p > 0.05). pH and Electrical Conductivity Measurements Filter cakes of the HNSW, LNSW, and GFB samples (air-dried) were used for determination of their pH and electrical conductivity (EC) at room temperature. Table 1 shows alkaline pH values exhibited by HNSW and LNSW, whereas neutral pH values are exhibited by GFB. Measured electrical conductivities were high for both seaweed samples, whereas lower values emerged for the ground fish bone (GFB) samples. FT-IR Spectra of Marine-Derived Materials Comparative FT-IR spectra of air-dried HNSW, LNSW, and GFB samples were recorded in the solid state and are shown in Figure S1A. Important vibrationally active functional groups can be observed and identified as hydroxyl, amino, ester, and amide moieties, thus verifying the existence of compounds of biological importance [26]. The broad bands (Figure S1B-D) at 3374 cm−1 for HNSW, 3343 cm−1 for LNSW, and 3280 cm−1 for GFB are assigned to hydrogen-bonded O-H and N-H stretching vibrations, which correspond to polysaccharides and amino acids. Intense peaks at 2922 and 2856 cm−1, in all cases, are related to C-H antisymmetric stretching modes on saturated carbon atoms, in line with the existence of aliphatic groups. In the observed pattern of the seaweed residual samples (Figure S1A,B), shoulders and low-intensity resonances appear in the range 2540-2150 cm−1, attributed to the stretching modes of C≡C and C=C moieties. In the range from 1740 to 1388 cm−1, strong features arose, verifying the presence of COOH (C=O) and amino acids (N-H) [27,28]. The peaks between 1082 and 1020 cm−1 were due to the C-O bond stretching mode of polysaccharides, and the features from 879 to 790 cm−1 were attributed to C=C stretching vibrations [29]. Total Carbon, Total Nitrogen, and Total Organic Carbon Determination Upon complete combustion of the samples, infrared absorption and thermal conductivity were employed to measure the combustion gases for carbon and nitrogen, respectively. The results of the TOC analysis are provided in Table 1. A small proportion of the total C in the fish material, about 22%, was inorganic. We might expect a higher proportion of inorganic C, since quite a high proportion of the material is fish bones, which contain calcium and magnesium carbonate [30]. However, significant amounts of soft tissue were attached to the bones. In the seaweed sludge material, about 50% of the total C was inorganic. This is likely due to alginate bound by calcium (or magnesium) [31], which is dissociated by HCl and recorded as inorganic C. Essential Elements and Heavy Metal Distribution The nutrient and heavy metal (potentially toxic element) content of all samples was investigated and determined through ICP-MS and ICP-OES, with the results shown in Table 2. Regarding the heavy metal content of the samples, in (a) HNSW, arsenic (As) and chromium (Cr) appear to be the species at high concentrations (mg/kg) compared to the rest.
Nickel (Ni), cadmium (Cd), and lead (Pb) were found to be at low concentrations, with Pb being at the lowest concentration (<0.1 mg/kg) among all of the metals examined. In (b) LNSW, arsenic (As), chromium (Cr), and nickel (Ni) were found at concentrations higher than 1.0 mg/kg, with cadmium (Cd) staying at 0.3 mg/kg and lead (Pb) being present at the lowest concentration (<0.1 mg/kg). And in (c) GFB, arsenic (As) was the only element found at concentrations higher than 1.0 mg/kg, with the rest of the metals staying at levels below 1.0 mg/kg. It is worth pointing out that arsenic is an element common in marine-derived materials. In the case of nutrients, in (a) HNSW, the alkali and alkaline earth metal ion concentrations were found to decrease in the order K > Ca > Mg > Na; in (b) LNSW, they were found to decrease in the order Ca > K > Na > Mg; and in (c) GFB, they were found to decrease in the order Na > K > Mg. In the case of transition metals, the trends observed suggest that in (a) HNSW, metal concentrations decreased in the order Fe > Zn > Mn > Cu, with all species present at concentrations >1.0 mg/kg. The exception to this trend comes from cobalt (Co) and molybdenum (Mo), with both being present at concentrations below 0.5 mg/kg. In (b) LNSW, the comparative order of concentration of the metal ions is the same as in the case of HNSW, including cobalt and molybdenum. And in (c) GFB, the concentration order is Zn > Fe, with the rest of the metal ions pronouncedly lower in concentration, exhibiting an order of Cu > (Mn, Co, Mo), with the latter group of metals at concentrations <0.5 mg/kg. Of the non-transition-metal elements, in (a) HNSW, the presence of silicon (Si), boron (B), and aluminum (Al), at concentrations displayed in the order Si > B > Al, stands out in comparison to selenium (Se), sulfur (S), and phosphorus (P), the concentrations of which line up in the order Se > S > P, with all of them being below 1.0 mg/kg; in (b) LNSW, the same trends in concentration were observed; and in (c) GFB, the same trends in concentration were observed as in HNSW and LNSW, with the non-metal concentrations exhibiting a trend of P > Se > S, but with higher concentrations of P in GFB than in the seaweed materials (1.2% w/w). Altogether, the chemical analyses show that, in order to obtain a fertilizer containing the plant macronutrients, arranged in approximate order of magnitude according to plant demand, i.e., N, K, Ca, Mg, P, and S, we need to combine materials from fish to obtain N and P, whereas seaweed materials will supply K, Mg, and S, with both materials containing relatively meaningful amounts of Ca. Total Fat Determination To pursue total fat determination in all samples, extraction of the raw materials was carried out in petroleum ether. The extracted crude material of the employed samples was then determined to be (total fat %) 11.5 ± 1.2% in HNSW, 2.3 ± 0.3% in LNSW, and 0.20 ± 0.02% in GFB. Nitrate and Ammonium Analysis Nitrates were expressed as nitrate-nitrogen (NO3-N). Values of 78.7 ± 9.5 mg/kg for HNSW, 69.1 ± 8.3 mg/kg for LNSW, and 15.8 ± 1.9 mg/kg for GFB were measured. Ammonia was also determined and expressed as ammonium nitrogen (NH4-N). The experimentally determined concentrations were 256.7 ± 33.4 mg/kg for HNSW, 67.7 ± 8.8 mg/kg for LNSW, and 41.3 ± 5.4 mg/kg for GFB.
This shows that the major part of the total nitrogen in GFB is present as organic nitrogen. Fatty Acid Methyl Ester (F.A.M.E.) Analysis through GC-FID In this set of GC-FID experiments, the actual fatty material was determined through methyl esterification of fatty acids to fatty acid methyl esters. Chromatograms derived from the above-mentioned process (Figure 1) depict the actual samples run. Leaching Samples Leaching with water was deemed imperative to arrive at samples rid of any extraneous substances (e.g., NaCl, metal ions) reflecting the original environment from which the samples (HNSW and LNSW) were retrieved. To that end, after the leaching process, each sample had to be dried and subsequently weighed. Derived data before and after the leaching test are expressed on a dry basis for direct comparison with the corresponding data from the originally employed raw materials. In the subsequently employed procedure, analytical determination of the composition of both HNSW and LNSW was pursued (Table 3). In that respect, in HNSW, Na and K concentrations were reduced more than two-fold compared to their original concentrations. In the case of metal ions (alkaline earth and transition), iron was slightly reduced in concentration, with the rest of them (Mg, Mn, Zn, and Cu) remaining essentially the same (within standard deviation). Phosphorus concentrations remained intact (0.24%), just like Cu. Finally, the carbon (C) and total N content underwent a marginal change, staying essentially the same. In LNSW, Na and K concentrations were reduced more than two-fold compared to their original concentrations. In the case of metal ions (alkaline earth and transition), there was a reducing trend in the iron concentration, with Mg, Mn, Zn, and Cu remaining essentially the same (within standard deviation). Phosphorus concentrations remained intact (0.27%), just like Cu. Finally, total carbon (C) decreased in concentration from 41.2 to 37.2%, with the total N content undergoing a marginal decrease, staying virtually unchanged. Molecular Composition of the Extracted Samples All three samples of HNSW, LNSW, and GFB were subjected to extraction with ethyl acetate (EA) and hexane (H) solvents. The derived extracts were subsequently subjected to GC-MS analysis. Comparative spectra for every type of sample in the two different solvents (EA and H) are shown in Figure 2. For the HNSW sample extracted with ethyl acetate, the compounds detected and identified through GC-MS analysis include the following: tetradecanoic acid, 6,10,14-trimethyl-2-pentadecanone, pentadecanoic acid, arachidonic acid, phytol, cis-13-octadecenoic acid, and 1-heptatriacontanol. Among these molecules, the ones detected with higher abundance include tetradecanoic acid and pentadecanoic acid. For the LNSW sample extracted with ethyl acetate, the same observations were made as in HNSW. All compounds were identified using the NIST libraries for GC-MS, employing NIST MS Search Software version 2.0.
The identified compounds are found to be of the same type of molecular species reported in the literature [32-34]. For both the HNSW and LNSW samples subjected to hexane extraction, the following compounds were detected through GC-MS: 6,10,14-trimethyl-2-pentadecanone, pentadecanoic acid, arachidonic acid, phytol, cis-13-octadecenoic acid, and 1-heptatriacontanol. All compounds were identified using the NIST libraries for GC-MS, employing NIST MS Search Software version 2.0. The identified compounds are found to be of the same type of molecular species reported in the literature [32-34]. Further biological work subsequently employed the ethyl acetate extracts. Sample Hydrolysis and Composition HNSW samples were also subjected to hydrolysis for further investigation of the possibility of determining the amino acid composition. To that end, the hydrolyzed samples (vide supra) were analyzed for total amino acid content, expressed as leucine, using the ninhydrin method. Briefly, adjustment of the pH of the hydrolyzed material was achieved using a 50% w/v NaOH solution, to a final value of 5.5, in a test tube. The ninhydrin reagent (35 mg ninhydrin in 10 mL ethanol) was then added and the tube was heated to boiling. After cooling the sample, measurements were taken against a reagent blank solution at 570 nm with a 10 mm glass cuvette. The method was calibrated with a calibration curve using leucine standards [35,36]. The derived results suggest the presence of 20 mg/kg of total amino acids, expressed as a leucine median value on a wet basis for the hydrolyzed samples. The free amino acids can be useful as building blocks for the synthesis of phytohormones, which are excellent biostimulants. Amino Acid Analysis All three samples (HNSW, LNSW, and GFB) were digested as per the description provided (vide supra), ultimately affording a detailed amino acid analysis, useful in the identification of (a) the nature of the raw materials to be used as fertilizers, (b) the composition of molecular agents (amino acids) useful in defining their ability to assist and activate plant growth, and (c) the biological potential, when acting as nutrient-specific formulators of plant cell growth under physiological conditions. The results show a wealth of amino acids (17 AAs) being identified, with their content progressively increasing in the order LNSW < HNSW << GFB, thus exemplifying the very nature of the samples themselves as sources of amino acids. In fact, fish backbones, especially when fish meat is still attached to the bones after filleting, are rich in protein, especially glutamic acid (Glu) and glycine (Gly). About 20% of the mass of fish bones is typically collagen (a protein), which contains about 30% glycine [30]. The detailed account of the AAs present in the examined samples is provided in Table 4. DPPH Scavenging Activity Ethyl acetate extracts of dry HNSW, LNSW, and GFB samples were used for the determination of their free radical scavenging ability, following evaporation of the solvent. The hydrogen atom donating ability of the extracts was determined by the decolorization of a methanolic solution of 2,2′-diphenyl-1-picrylhydrazyl (DPPH), which produces a violet/purple color that fades into yellow in the presence of antioxidants. Consequently, the dry extracts, after dissolution in a minimum amount of methanol until the solution was clear, were further used in the DPPH assay. To that end, in the case of HNSW, the scavenging activity was determined to be 0.77 ± 0.01 mg AAE/g dry HNSW extract.
For LNSW, the scavenging activity stood at 0.78 ± 0.01 mg AAE/g dry LNSW extract, whereas the corresponding scavenging activity of GFB was absent, as there was no scavenging effect. Liquid Cultures LB broth and penicillin-streptomycin were used as positive and negative controls, respectively. Specifically, the LB broth control samples were run in parallel, from the same inoculum, with every actual extract investigated. The extracts HNSW-E-EA, LNSW-E-EA, and GFB-E-EA, as well as their solvent (DMSO) alone, were examined at various concentrations so as to determine their effect(s) on each bacterium. Particularly, in in vitro E. coli cell cultures (Figure 3A), HNSW-E-EA 1%, LNSW-E-EA 1%, GFB-E-EA 1%, DMSO 1%, and DMSO 5% were examined. They all seemed to enhance bacterial growth, with no toxicity observed over the entire incubation time (~3 h), with the exception of DMSO 10%, which appeared to inhibit growth over the same period of incubation. In the case of S. aureus cultures, the observed effect of GFB-E-EA 1%, DMSO 1%, DMSO 5%, and DMSO 10% showed almost the same profile as in E. coli. In contrast to that observation, the seaweed extracts (HNSW-E-EA 1%, LNSW-E-EA 1%) exhibited a different behavior. More specifically, HNSW-E-EA 1% and LNSW-E-EA 1% seemed to kill bacteria in a time range of 30-330 min (~6 h of incubation), as shown in Figure 3B. Solid Agar Cultures To determine the antimicrobial efficacy of the title extracts, the minimum inhibitory concentration (MIC) values were determined on an agar plate using the disc diffusion method. The MIC and ZOI values are presented in Table 5. All of the extracts, including DMSO, were studied from 1-100% concentration in E. coli and exhibited no ZOI. In the case of S. aureus cultures, the MIC was determined to be 80%. Fish residue extracts and DMSO showed no ZOI at any of the concentrations investigated. Cell Viability and Proliferation Studies To assess the biotoxicity profile of the extracts, N2a58 and SH-SY5Y neuronal cell cultures were treated with the title extracts at various concentrations for 24, 48, and 72 h. Triton X-100 (1% v/v) was used as a positive control for the cell viability assay, exhibiting cytotoxicity in both cell lines, with the DMSO solution alone being used to probe its own possible cytotoxicity. The first parameter optimized was the volumetric percentage of DMSO (in which the extract was dissolved) used in the experiments. At DMSO concentrations greater than 0.5% v/v, slight cytotoxicity was observed in both cell lines. Because of these observations, all working solutions of extracts were generated using 0.1% v/v DMSO.
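For reference, the viability results reported below derive from absorbance ratios of the kind sketched here; the helper name and plate readings are illustrative assumptions, taking blank-corrected XTT absorbances at 450 nm.

```python
# Minimal sketch of deriving % survival from XTT absorbances at 450 nm;
# the readings below are illustrative, not data from this study.
import numpy as np

def survival_percent(a_treated: np.ndarray, a_control: np.ndarray,
                     a_blank: float) -> float:
    """Survival (%) relative to the untreated control, blank-corrected."""
    treated = np.mean(a_treated) - a_blank
    control = np.mean(a_control) - a_blank
    return treated / control * 100.0

treated = np.array([0.81, 0.79, 0.84])   # triplicate wells with extract
control = np.array([0.76, 0.74, 0.78])   # triplicate untreated wells
print(f"survival = {survival_percent(treated, control, 0.05):.0f}%")
# values above 100% would point to a proliferative effect
```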
The viability of the cell lines in the presence of the three extracts (HNSW-E-EA, LNSW-E-EA, and GFB-E-EA) at various concentrations was investigated, following incubation for 24, 48, and 72 h. The results are shown in Figures S2, S3 and S4, for both the N2a58 (A) and SH-SY5Y (B) cultures, respectively. In the case of HNSW-E-EA, a mild proliferative effect was observed after 24 h in both the N2a58 (Figure S2A) and SH-SY5Y (Figure S2B) cultures, relative to the control group. An analogous picture was depicted in the case of the N2a58 (Figure S3A) cell line cultures treated with LNSW-E-EA, over all of the time intervals investigated. Finally, in the presence of GFB-E-EA, a slight proliferative effect was observed after 24 h in both the N2a58 (Figure 4A) and SH-SY5Y (Figure 4B) cultures. Cell Morphology Studies To further evaluate the biotoxicity profile of the generated extracts, the morphology of the cells, in both the N2a58 and SH-SY5Y cell cultures, in the presence of the extracts, was considered. The extract concentrations chosen were, for HNSW-E-EA, 28.0 × 10^2, 28.0, and 28.0 × 10^−2 ng extract/g dry HNSW/mL DMSO; for LNSW-E-EA, 24.6 × 10^2, 24.6, and 24.6 × 10^−2 ng extract/g dry LNSW/mL DMSO; and for GFB-E-EA, 64.8 × 10^2, 64.8, and 64.8 × 10^−2 ng extract/g dry GFB/mL DMSO, thereby preserving the volumetric percentage of DMSO in each case. Also, the culture medium (control) and 0.1% DMSO solutions were used for comparison in each case. Both cell lines (N2a58 and SH-SY5Y) appear to have undergone no morphological changes after 24, 48, and 72 h at the highest concentration of each extract used (Figures 5 and S4), with normal cell adhesion and proliferation taking place on the well surface. Migration Studies In an attempt to enrich the biotoxicity profiles of the generated extracts, migration studies provide an opportunity to assess cell motility in the presence and absence of the extracts. The extract concentrations tested were, for HNSW-E-EA, 28.0 × 10^2, 28.0, and 28.0 × 10^−2 ng extract/g dry HNSW/mL DMSO; for LNSW-E-EA, 24.6 × 10^2, 24.6, and 24.6 × 10^−2 ng extract/g dry LNSW/mL DMSO; and for GFB-E-EA, 64.8 × 10^2, 64.8, and 64.8 × 10^−2 ng extract/g dry GFB/mL DMSO, thus preserving the volumetric percentage of DMSO in each case. In the N2a58 cell culture case, after the scratch had been made on the cell monolayer, adherent cells away from the scratch were observed to detach from the plate surface. As a result, the created wound was progressively filled with floating cells, which randomly attached to the plate surface over the duration of the experiment, thus negating the purpose of the assay itself. In the SH-SY5Y cell culture case, the scratch in the monolayer was completely covered, at all concentrations tested, in less than 72 h. The progress of the experiment for the highest concentration tested is shown in Figure 6. In all cases of extracts studied through the specific assay, the cell migration speed was determined and the calculated values are shown in Table 6.
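The migration speeds collected in Table 6 can, in principle, be derived from the measured wound widths as sketched below; the widths, time point, and function name are illustrative assumptions rather than data from this study.

```python
# Minimal sketch of deriving a migration speed from scratch (wound-healing)
# images; the widths and times below are illustrative placeholders.
def migration_speed_um_per_h(width_start_um: float, width_end_um: float,
                             hours: float) -> float:
    """Each wound edge advances half of the closed width over `hours`."""
    closed = width_start_um - width_end_um
    return (closed / 2.0) / hours

# e.g., a 600 um scratch narrowing to 120 um within 48 h:
print(f"{migration_speed_um_per_h(600.0, 120.0, 48.0):.1f} um/h per edge")
```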
The observed values of migration speed in all cases of exposure experiments examined for the SH-SY5Y cell cultures are very comparable to the reported speeds determined for a variable number of eukaryotic cell cultures that had been subjected to wound-healing (scratch) assays in the course of the investigation of their cell motility and chemotactic behavior [37]. Overall, there is no inhibition of the natural cell motility in any case of the extracts used in the specific assay, thus providing further evidence for their atoxicity profile. Figure 6. Migration of SH-SY5Y cells in the presence of the highest tested extract concentrations (HNSW-E-EA, 28.0 × 10^2 ng extract/g dry HNSW/mL DMSO; LNSW-E-EA, 24.6 × 10^2 ng extract/g dry LNSW/mL DMSO; and GFB-E-EA, 64.8 × 10^2 ng extract/g dry GFB/mL DMSO). Close-ups of the provided experiments provide tangible proof of the progress of cell migration during the investigation. Table 6. Migration speed of SH-SY5Y cells in the presence of the extracts studied.
BlueBio Waste as a Potential Ecofriendly Fertilizer in Plant Growth
Naturally occurring materials originating from industrially processed organisms have in recent years assumed significance as raw materials containing useful molecular components of diverse physicochemical properties, and thus potentially beneficial biological properties, in a diverse spectrum of applications. Among the plethora of marine organisms processed at the industrial level and leaving behind residues (waste) of yet unexplored applications, seaweed residues after chemical extraction and ground fish bones constitute a well-defined group of waste of natural origin waiting to be investigated. In this context, the prospect of using such industrially produced waste as a raw material, and investigating its potential as a source of molecular components capable of acting as (a) fertilizers and (b) biostimulants in contemporary agriculture, was explored at length in this work. For such an effort to be implemented, two basic tenets should formulate the approach used and the technically apt strategies to be employed: (a) effective physicochemical screening of selected raw materials, and (b) biological assessment of their potency to act as efficient fertilizers and possibly also as biostimulants. The resulting effort therefore focused on (a) seaweed (HNSW and LNSW) and (b) ground fish bones (GFB), both emerging from their corresponding industrial processes and thus left as unexplored waste for further study. The physicochemical profile included elemental composition linked to metals and non-metals, nitrogen content (in the form of total and mineral nitrogen and amino acids), and organic and inorganic carbon. That way, both the potential of the starting materials to satisfy the demand of crop plants for mineral nutrients, as well as any potentially toxic elements (heavy metals), could enter the profile. The physicochemical profile served as a basis for the evaluation and assessment of a pluripotent biological activity emerging through a plethora of biological experiments in vitro, thus formulating the biological profile of the raw materials. Collectively, the overall biochemical profile could serve as a guide to variably configured formulations of composite biofertilizers, fit to contribute to crop plant growth and potentially affecting soil, plant roots, leaves, and ultimately the fruit. Potential target plants could include strawberries, lettuce, and cucumber, among others, thus exemplifying applications to be pursued in the future.

Establishment of Fundamental Physicochemical Properties
Both of the original HNSW and LNSW samples were strongly alkaline (pH > 9.1), with the ground fish bone (GFB) samples being close to the physiological pH value (7.0). Electrical conductivity (Table 1) was high in both the HNSW and LNSW samples, with that of the HNSW sample being close to two-fold higher than that of the LNSW sample. In both cases, however, conductivity was exceedingly high, i.e., several-fold higher than that in materials considered as potential biofertilizers in agricultural practices (800-4000 µS/cm) [38,39]. In the case of the ground fish bone (GFB) samples, the electrical conductivity was still high (8500 µS/cm) compared to that of an ordinary fertilizing material, yet close to seven times lower than that of the HNSW sample and around four times lower than that of the LNSW sample.
The large difference between the seaweed and GFB samples could be attributed to the origin of the raw materials, with the seaweed sludge having been treated with salts from alkalis and acids, thus resulting in considerably higher conductivity than that of the fish bones. In contrast, there were no significant differences between the three samples with respect to the organic carbon content, whereas the total carbon content was almost two-fold higher in the HNSW and LNSW samples compared to the total organic carbon content, and significantly higher than that in the GFB sample (Table 1). Furthermore, the fact that the total carbon content in both the HNSW and LNSW samples was practically the same is worth noting. A different picture arose for the total nitrogen content. There, in the HNSW and LNSW samples, the total nitrogen content was low (<0.5%), with the LNSW content being close to 2.5-fold lower than that in the HNSW sample. Beyond that, the nitrogen content in the GFB sample was three- to six-fold lower than the total organic carbon and carbon content, respectively, of the HNSW and LNSW samples, and three-fold lower than the carbon content values of the GFB sample. Furthermore, it was ~5-13 times higher than the corresponding nitrogen content in the HNSW and LNSW samples (Table 1). According to our data, the HNSW and LNSW samples contain very small amounts of phosphorus (P) (0.07 and 0.08%, respectively) (Table 2). Both types of samples had minuscule amounts of P compared to the GFB samples (1.20%). In light of such experimental observations, GFB represents a significant source of phosphorus, potentially suitable for use as fertilizer. HNSW and LNSW, too, may contribute slightly to phosphorus application, but to a considerably limited extent due to their comparatively very low concentrations of P.

Identity Formulation upon Leaching
The raw material samples, being of marine origin, have a high sodium content due to (a) the inherently present salt and (b) residues of the NaOH treatment used in the seaweed extraction process. Consequently, the industrially generated HNSW and LNSW samples have by nature a high sodium content. High salt content is to be avoided in fertilization applications, as sodium chloride is harmful and toxic at high concentrations to almost every crop [40][41][42]. Since the exact application rate of the aspired biofertilizers, namely their percentage in soil, is yet to be determined, it was deemed useful to run an initial study on the effects of leaching salt out of the samples using water. This is a straightforward procedure, which can easily be scaled up with minimal cost and effort. The parameters selected for investigation included electrical conductivity, which is an expression of the exchangeable salts, and basic nutrients such as total carbon, total nitrogen, potassium, and most micronutrients (Table 3). More specifically, in the HNSW samples subjected to leaching and subsequently dried, the percentage reduction of sodium content upon leaching was 62%, reflecting a >2.5-fold drop from the original raw material. The percentage reduction of the potassium content upon leaching was 66%, reflecting an approximately three-fold drop compared to the original raw material. In the case of the alkaline earth metal ions, the Mg concentration was essentially unaltered (within standard deviation).
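The fold drops quoted alongside the percentage reductions follow directly from fold = initial/final = 1/(1 − reduction). A small sketch of this arithmetic (values taken from the text):

```python
# Converting a percentage reduction upon leaching into the corresponding
# fold drop quoted in the text: fold = initial / final = 1 / (1 - reduction).

def fold_drop(percent_reduction):
    return 1.0 / (1.0 - percent_reduction / 100.0)

print(f"{fold_drop(62):.2f}")  # Na in HNSW: 62% reduction -> ~2.6 (">2.5-fold")
print(f"{fold_drop(66):.2f}")  # K  in HNSW: 66% reduction -> ~2.9 ("~three-fold")
```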
In the case of the LNSW samples, the percentage reduction in sodium content upon leaching was 65%, reflecting an approximately three-fold drop compared to the original raw material. The percentage reduction of the potassium content upon leaching was 69%, reflecting a greater than three-fold drop from the original raw material. On the other hand, an insignificant decrease was observed in the case of the alkaline earth metals Mg and Ca (within standard deviation). Thus, divalent metal ions (Mg²⁺, Ca²⁺) were essentially not lost upon leaching. Therefore, in the case of alkali metals, leaching was instrumental in removing Na and K, which were inherently present in the raw material samples, with the derived levels associated with normal pH and electrical conductivity values (vide supra). Since K is an important nutrient needed by crop plants in large amounts, comparable to nitrogen, loss of K by leaching to remove Na is not beneficial for subsequent application as fertilizer. This calls for further studies to reveal practical solutions; the seaweed industry should also be aware that the application of sodium salts should be avoided in any processing. Therefore, in the overall handling of the differential changes in sodium and potassium concentrations, future attempts could concentrate on avoiding potassium loss, or on replenishing the potassium lost upon leaching by adding back the amount lost. On the other hand, the presence of considerable amounts of organic matter in the samples studied may be beneficial to the fertilizing potential of the materials, in that potassium is retained in them upon leaching instead of being completely removed. In fact, organic matter can enhance soil structure and improve cation exchange capacity (CEC), thereby formulating differential concentrations of sodium and potassium upon irrigation. With regard to the transition metals present in the dry samples, there was a decreasing trend in the concentration of iron in the HNSW and LNSW samples (within standard deviation). As for the remainder of the alkaline earth and transition metal ions, the concentrations of Mg, Zn, Mn, and Cu remained essentially unchanged upon leaching. Undoubtedly, the forms of the aforementioned transition metal ions in the tested samples point to discretely configured metal-organic structural variants whose solubility is such that any increase or decrease follows the same pattern in both the HNSW and LNSW samples. In an analogous fashion, in both the HNSW and LNSW samples, the phosphorus content remained unchanged upon leaching. To that end, it appears that the phosphorus in the investigated samples is tied down in distinctly differentiated forms that are not subject to removal upon leaching with water. Apart from the aforementioned changes, the total carbon content in the case of HNSW dropped slightly, with the corresponding drop in the case of LNSW being ~4.0%. For the total nitrogen content, the decrease was insignificant (~0.1%) in both the HNSW and LNSW samples upon leaching. The experimental results suggest that in the case of the HNSW samples, the total carbon content reflects forms of that element that are, in their majority, not easily removed through aqueous leaching (e.g., hydrophobic forms). The observations might be linked to alkaline earth metals, Zn, Mn, Cu, and P changing contemporaneously with C, ultimately affording the same overall concentrations as before leaching.
In the case of the total nitrogen content in both the HNSW and LNSW samples, no nitrogenous forms leach out, thus alluding to the equally distinct nature of the compounds in which that element is a component.

Amino Acid Hydrolysis Supporting Fertilizer Formulations
In general, amino acids are fairly water-soluble. At the pI of an amino acid, the carboxylic acid group is deprotonated and the amine group is in the ammonium form. When the pH is high, all groups in an amino acid undergo deprotonation. For some amino acids, in order to facilitate solubilization, the pH needs to be raised above the relevant amino acid pKa. Cognizant of these points pertaining to the structure and chemical reactivity of amino acids, and of the fact that the seaweed samples are fairly rich in organic matter and possess a pH of 10, (a) the choice of sample to be analyzed for amino acids should be based on hydrolytic processing, and (b) a specific method should be used to subject the sample to hydrolysis, pursuant to which amino acid analysis could be carried out. The crux of this part of the investigation rests on the principle that a sample reasonably rich in amino acid content could be processed, from which amino acids or amino acid-containing fragments/species would arise and subsequently be isolated. The resulting amino acids and/or amino acid peptide fragments could be used to enrich raw materials of the two types of marine organism residual waste in a manner that (a) is in line with the composition of conventional fertilizers supporting plant growth [43], and (b) leads to a plethora of combinations distinctly differentiating the nature of the generated hybrid mix, so as to fit the needs of the soil and enhance the plant growth of a crop, while concurrently being in line with conventional fertilizer characteristics when applied in the field. Ostensibly, enrichment of the hybrid combinations emerging upon mixing of the two marine organism residues should come from GFB (vide infra). As a first approximation to the implementation of the approach described above, the amino acid content of all three samples was determined according to officially accepted methods involving digestion under acidic conditions (Table 4). The sample with the highest content of AAs was, as could be expected from its origin, GFB, followed by HNSW and LNSW. Our study confirms that fish bones with some soft tissue attached are a rich source of AAs. Since they also contain significant amounts of P and Ca, fish bones may be considered a viable source of nutrients in agricultural practices. Furthermore, the observation that GFB is the only sample containing cysteine (albeit at low concentration compared to other amino acids) (Table 4) suggests another way in which sources of reducing sulfur-containing amino acids (e.g., cysteine) could enhance the provision of essential amino acids to the plant. In fact, cysteine plays a central role in plant metabolism due to its chemical potential as a donor of a reducing sulfur atom or a sink for sequestering potential heavy metallotoxins. In that capacity, this specific amino acid is involved in the synthesis of molecules vital to the integrity, growth, and resistance-defense mechanisms in oxidative stress [44][45][46].
On the other hand, specific criteria reflecting a well-defined carbon and nitrogen content were used as the basis for selecting a sample for further hydrolytic processing on a large scale, to obtain the much-needed amino acids for application in a field-linked composite fertilizer comprising an appropriate combination of the three types of samples. In that sense, the HNSW sample was chosen for further processing, as it was found to have the highest fatty content (266.0 ± 50.5 mg/kg), followed by the LNSW sample at 225.0 ± 42.3 mg/kg. The fish bone sample (GFB) showed the lowest fatty content, at 41.9 ± 8.0 mg/kg. A pH 10 buffer was used in that case, with the emerging content being 20 mg/kg of total amino acids, expressed as a leucine median value on a wet basis (vide supra). The resulting hydrolyzed sample was transferred to and mixed with the unprocessed seaweed samples. The experimental results suggest that all seaweed samples contain the most necessary nutrients for plant growth in the form of macronutrients and minerals, except for phosphorus, but not in the proportions required by crop plants. Phosphorus and nitrogen could be enriched through the use of ground fish bone (GFB) samples. The majority of the nitrogen in the GFB material is organic and present as amino acids. The amino acid content in GFB was ~20- and ~75-fold higher than that in the HNSW and LNSW samples, respectively. Taking into consideration the cumulative physicochemical data on all three types of samples, it appears that their unraveled composition and properties are consistent with past reports in the literature on commercially available fertilizers derived from mixed fish residual materials or seaweed, albeit of a different nature from the ones investigated in the present work [47]. These fertilizer materials are currently employed in agricultural practices as stimulators of plant growth, productivity, etc. [48]. Undoubtedly, a more in-depth look at the three types of materials studied here will confirm that (a) they contain a more extensive list of micro- and macronutrients, thus providing a more comprehensive and global picture of the potential of such nutrients in plant growth, and (b) their distinct compositional milieu reflects similar or higher percentages of certain inorganic and organic constituents that could support plant growth [49]. In view of the aforementioned, if liquid fertilizers are to be produced, hydrolysis of the fish residues is an option, and many liquid fertilizers are available from such materials. However, with the semi-solid seaweed material as a basic component rich in organic matter, which is beneficial to most agricultural soils, solid fertilizers are more realistic in practice. Previous studies [7,17] have demonstrated that fish bones give a very rapid growth effect, in fact even better than mineral nitrogen fertilizers. Hence, no hydrolysis would be required for the production of solid fertilizers.

Bacterial Cell Cultures
The bacterial growth profiles of E. coli (Gram-negative) and S. aureus (Gram-positive) cultures were investigated upon exposure to the extracts derived from the HNSW, LNSW, and GFB raw materials. The results show that there is a difference in the growth of the two bacteria as a function of time when it comes to the effect of the seaweed extracts. Specifically, the differential influence that the extracts HNSW-E-EA and LNSW-E-EA exert on the two bacterial (Gram-negative E. coli and Gram-positive S. aureus) cell cultures arises because these seaweed extracts contain, among other things, arachidonic acid (ARA).
ARA belongs to the family of polyunsaturated fatty acids (PUFAs), with the following features: (a) it is present in the phospholipids (especially phosphatidylethanolamine, phosphatidylcholine and phosphatidylinositides) of the membranes of the body's cells; (b) it is abundant in the brain, muscles, and liver; (c) it is released during inflammatory bursts by macrophages and neutrophils; and (d) it is metabolized enzymatically to prostaglandins, hydroxyeicosatetraenoic acids, and leukotrienes [50]. In the case of S. aureus, ARA causes the production of various electrophilic substances (e.g., isoprostanes, prostaglandins) through a lipid peroxidation (autoxidation) mechanism. These substances can react with nucleophilic groups of cellular macromolecules (e.g., proteins) and induce a stress response in the bacterium. This, however, cannot happen in E. coli due to (a) its different structure as a Gram-negative bacterial organism, containing a thin peptidoglycan layer and an outer lipid membrane into whose phospholipids PUFAs can be incorporated, thereby preventing them from further reaction with cellular macromolecules, and (b) the absence of teichoic and lipoteichoic acids, which can further contribute to the lipid peroxidation mechanism [51]. An observation worth pointing out is that the level of ARA released and the amount of reactive oxygen species (ROS) generated in the host-pathogen ARA system determine the degree of toxicity, which can be modulated by altering cellular ROS levels. Furthermore, in the case of both bacteria studied, it can be seen that under the employed experimental conditions, E. coli cells grow for up to 180 min, at which point the growth rate stabilizes, whereas in the case of S. aureus, bacterial growth rate stabilization occurs at 245 min. Also worth mentioning is the fact that in E. coli, the same time (180 min) is needed for the stabilization of the bacterial cell culture in DMSO 1%, HNSW-E-EA 1%, LNSW-E-EA 1%, and GFB-E-EA 1%. In contrast to this behavior, the Gram-positive bacterium S. aureus adopts a growth rate stabilization profile that takes 240 min in the presence of DMSO 1% and GFB-E-EA 1% to reach a plateau, whereas in the presence of HNSW-E-EA 1% and LNSW-E-EA 1% the plateau is present from the starting time point. In addition, the stabilization of the cell growth rate in DMSO 5% takes 165 min for E. coli and 225 min for S. aureus. In the case of DMSO 10% and in the presence of penicillin-streptomycin (PEN), no plateau is observed from the starting time point in either profile. Therefore, the differential profiles observed in the case of the two bacterial organisms are juxtaposed against the employed controls in a well-defined manner. As far as solid agar cultures are concerned, in the case of the Gram-positive organism S. aureus, a distinct ZOI was observed for PEN 16% (15.6 mm). In the case of LB broth alone, no ZOI was observed, as expected (Table 5). Furthermore, whereas HNSW-E-EA 1% and LNSW-E-EA 1% were detrimental to the integrity of the S. aureus cells in liquid cultures, in the solid agar cultures a distinct ZOI was observed at high concentrations (80-100%) of HNSW-E-EA and LNSW-E-EA, exhibiting values in the range of 8 to 13 mm.
Eukaryotic Cell Cultures
The employed cell cultures involved eukaryotic cells from sensitive neuronal tissues, in an effort to assess the fortitude of potentially toxic components (even minutely toxic ones) in the marine residue samples (as fertilizers) that could (a) negatively affect the growth of plants, thus either limiting the growth-promoting/enhancing ability of the samples or becoming detrimental to the integrity of the plants, or (b) positively affect plant growth through the activation of their biologically active components. To that end, the cell viability studies on the N2a58 and SH-SY5Y cell cultures, exposed to variable concentrations of HNSW-E-EA (28.0 × 10⁻²-28.0 × 10² ng extract/g dry HNSW/mL DMSO), LNSW-E-EA (24.6 × 10⁻²-24.6 × 10² ng extract/g dry LNSW/mL DMSO), and GFB-E-EA (64.8 × 10⁻²-64.8 × 10² ng extract/g dry GFB/mL DMSO) over 24, 48, and 72 h, showed that (a) at all time points, the extracts were not toxic to either cell line, and (b) there was a slight proliferative effect noted in all cases over the initial period of 24 h of exposure. The latter observation may be a temporary transition phase for the cells exposed to the extracts, with likely enhancement of their physiology due to the infusion of essential components from the extracts (at the indicated concentrations) into the growth media, thereby leading to slightly increased cell numbers. Gradual adaptation of the cells to the extracts over the ensuing 48 and 72 h periods returns the cells to a normal cell cycle state in comparison to the control. Individual aberrations from that behavior in both cell lines and at different concentrations (e.g., LNSW-E-EA at 24.6 × 10¹ ng extract/g dry LNSW/mL DMSO in N2a58 cells over 24 h; LNSW-E-EA at 24.6-24.6 × 10² ng extract/g dry LNSW/mL DMSO in SH-SY5Y cells over 48 h) were also observed, thereby signifying the individualized effects bestowed upon the cells by the same extracts at different time points, all indicative of the discrete nature of the cells themselves. Associated with the above behavior of the cells was also the investigation of their morphology as a function of concentration and time. To that end, the cell culture experiments conducted with both cell lines, using the same concentrations of extracts as in the previous case over 24, 48, and 72 h, showed that no morphological change occurred during the examined periods of exposure. More specifically, in the case of the N2a58 cell cultures, careful examination of their behavior over the monitoring period reveals two types of cell shapes: cells with a round shape and cells with an extended shape. In all culture treatments with the three extracts HNSW-E-EA, LNSW-E-EA, and GFB-E-EA (compared to control), the cultures were very similar, with the round-shaped cells dominating over the extended type of cells. This behavior is common and has been previously noted in the literature [52]. Worth noting is the case of the SH-SY5Y cell cultures, where careful observation of the cells throughout their period of incubation in the presence and absence of the three extracts HNSW-E-EA, LNSW-E-EA, and GFB-E-EA (in comparison to control) reveals the presence of two distinct types of cells, i.e., N-type and S-type, consistent with (a) previously seen culture populations exemplifying neuroblast-like and epithelial-like morphology, respectively, and (b) retention of the hybrid cell nature reflected in their appearance, shape, and protruding processes in the culture media [53].
Concurrently, in both the N2a58 and SH-SY5Y cell cultures, the cells exhibited normal cell adhesion in all cases (over all time points considered) and slight proliferation in distinct cases (vide supra), thereby complementing the observations made during the viability studies. Further assessment of the chemotacticity of the cells exposed to the specified extracts, at the defined concentrations mentioned above, over a period of 24, 48 and 72 h led to the employment of migration studies initiated through "scratch" or wound-healing assays in both the N2a58 and SH-SY5Y cell line cultures. Two factors were examined in this context: the migratory ability of the cells and the migration speed in the presence of various concentrations of the extracts HNSW-E-EA, LNSW-E-EA, and GFB-E-EA. The conducted assays are important in describing the migratory ability of the cells under the influence of exogenous agents (in this case the three extracts), thereby providing a more detailed picture of cell behavior that could not be observed through the previous two studies. To that end, monitoring of their chemotactic behavior in trying to expand, proliferate, and reach confluency, while concurrently closing the artificially generated gap in the culture (hence the wound-healing term), reveals that the cells retain to a great extent their potential to grow, expand and repossess their original area of coverage over a period of ~72 h. An added advantage of conducting such experiments was the concurrent determination of the migration speed with which the cells move to reclaim the space opened up by the inflicted scratch, thereby providing a measure of how well they function under the influence of the three distinctly defined extracts (compared to control). The results (Table 6) suggest that the speed with which the cells migrate remains almost the same and is not affected significantly by exposure to the three extracts as a function of their concentration over a period of ~72 h. In that respect, the results obtained complement the observations made in the previous viability and morphology studies. Undoubtedly, both facets of the experimentation projecting the migratory ability of the two types of cells under the influence of the extracts describe useful factors contributing to the overall picture of the biological potential of the extracts themselves.

Antioxidant Potential
The antioxidant potency of the extracts of all three studied materials was examined through DPPH scavenging activity experiments, thereby reflecting their ability to scavenge free radicals emerging as a result of oxidative stress conditions in all cells. The specific in vitro assay was conducted with ethyl acetate extracts of dry HNSW, LNSW, and GFB samples and showed that (a) HNSW and LNSW both displayed very mild scavenging activity, and (b) GFB exhibited no activity. The observed results are not surprising in view of the fact that the samples are essentially residues of industrially processed raw marine organisms. Therefore, significant amounts of antioxidant components have been removed, with the ultimate case being that of GFB, which represents ground fish bones. Even so, the mild scavenging activity of the two seaweed samples denotes their existing capacity as antioxidant agents, and to that end, the specific property adds to the global biological profiles determined for the investigated extracts.
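DPPH scavenging activity is conventionally quantified as the percent inhibition of the radical's absorbance (measured around 517 nm); the paper does not spell out its exact formula, so the following is only a sketch of the standard computation with illustrative values:

```python
# Standard quantification of a DPPH scavenging assay (not taken from the
# paper's methods): percent inhibition from absorbance at ~517 nm.

def dpph_inhibition(a_control, a_sample):
    """Percent of DPPH radical scavenged relative to the extract-free control."""
    return 100.0 * (a_control - a_sample) / a_control

print(dpph_inhibition(0.80, 0.72))  # e.g., 10% -> very mild scavenging activity
```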
The cumulative experimental data for all three samples examined establish a well-defined atoxic biological profile for the three extracts, projecting distinctly described properties of the investigated cell lines, collectively useful in assessing the potential of the extracts in further applications in agricultural practices.

Conclusions
This study has provided thorough insight into the chemical and biological characteristics of marine residual materials, which may be processed into complete fertilizers for agricultural crops. Detailed screening of the HNSW and LNSW samples from seaweeds and of the ground fish bones (GFB) revealed their analytical composition (metals and nutrients, total carbon, total nitrogen), pH, and electrical conductivity, among other things. Given the high conductivity, leaching was necessary. Leaching at a ratio of 1:50 removed significant proportions of the monovalent cations Na⁺ and K⁺. Whereas removal of sodium is required for fertilization, especially of horticultural crops, removal of potassium is not positive from a fertilization perspective, with the conductivity of marine materials calling for further in-depth studies. None of the studied materials contains an appropriate blend of essential plant nutrients when applied as a single fertilizer. In horticulture, nitrogen-rich fertilizers are often applied in addition to basic fertilizer dressings to enhance the growth of nitrogen-demanding crops; the fish bone material GFB may be applied for such purposes. For a complete fertilization formulation, the materials need to be blended, and it may also be relevant to blend in other types of materials, e.g., to increase the potassium content. Amino acid analysis revealed that the HNSW material contained a rich suite of AAs at relatively low concentrations. Even more AAs, at much higher concentrations, were found in the fish GFB material, demonstrating the high biological quality of this resource. Fish protein seems to be easily degradable in soil, since fish material increases plant growth very quickly. Fish waste also contains significant amounts of phosphorus, a scarce resource, and should definitely be utilized for fertilization purposes instead of going to waste, as is often the case today. By analogy, in-depth studies employing organic solvent extraction led to the discovery of a family of organic compounds of potential biostimulant activity (e.g., amino acids, arachidonic acid) [43,54]. Collectively, the physicochemically formulated global (bio)chemical profile of the samples at hand compelled further work on the biological properties of the generated materials as essential factors for normal cell physiology in plant growth. To that end, both bacterial (Staphylococcus aureus and Escherichia coli) and eukaryotic cell lines (N2a58 and SH-SY5Y) were employed in in vitro work, with the experiments targeting (a) the biotoxicity profile formulation of the generated extracts (viability, antimicrobial activity, defined zone of inhibition values) in bacterial cell cultures, (b) the biotoxicity profile (viability, morphology, chemotacticity, proliferation) of the generated extracts in eukaryotic cell lines, and (c) determination of the antioxidant potential of the generated extracts (low DPPH scavenging activity). So conducted, the experiments revealed the
2023-08-30T15:06:07.880Z
2023-08-28T00:00:00.000
{ "year": 2023, "sha1": "65449c1f42be2b531eeff0a9a7f3a8df6d9880a0", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4395/13/9/2258/pdf?version=1693288353", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "95c0ecb1eb54e9ce7ca70f8d780758c9e552f660", "s2fieldsofstudy": [ "Environmental Science", "Chemistry", "Biology" ], "extfieldsofstudy": [] }
244908376
pes2o/s2orc
v3-fos-license
ActiveZero: Mixed Domain Learning for Active Stereovision with Zero Annotation

Traditional depth sensors generate accurate real-world depth estimates that surpass even the most advanced learning approaches trained only on simulation domains. Since ground truth depth is readily available in the simulation domain but quite difficult to obtain in the real domain, we propose a method that leverages the best of both worlds. In this paper we present a new framework, ActiveZero, which is a mixed domain learning solution for active stereovision systems that requires no real-world depth annotation. First, we demonstrate the transferability of our method to out-of-distribution real data by using a mixed domain learning strategy. In the simulation domain, we use a combination of supervised disparity loss and self-supervised losses on a shape primitives dataset. By contrast, in the real domain, we only use self-supervised losses on a dataset that is out-of-distribution from either the training simulation data or the test real data. Second, our method introduces a novel self-supervised loss called temporal IR reprojection to increase the robustness and accuracy of our reprojections in hard-to-perceive regions. Finally, we show how the method can be trained end-to-end and that each module is important for attaining the end result. Extensive qualitative and quantitative evaluations on real data demonstrate state-of-the-art results that can even beat a commercial depth sensor.

Introduction
Depth sensors can provide 3D geometry information about a target scene, which is critical in various robotic applications, including mapping, navigation, and object manipulation [6,17,26]. Among the different types of depth sensors available, active stereovision depth sensors (e.g., Intel RealSense™ D series) are the most widely adopted in both industry and academic settings due to their high spatial resolution, high accuracy, and low cost [19].

Figure 1. ActiveZero produces more accurate and complete disparity estimates on real IR stereo images for objects with complex optical characteristics (specular, transparent) than commercial depth sensors, with zero real depth annotation, by using mixed domain learning: self-supervised reprojection loss on temporal IR patterns in the real domain and direct disparity supervision in the simulation domain.

These sensors are composed of an infrared (IR) pattern emitter and two IR cameras, with the IR pattern projected onto the target scene to facilitate stereo matching. However, since these sensors use classical stereo algorithms, they suffer from common stereo matching issues such as over-smoothing, edge fattening, and holes for specular and transparent objects, so they are non-ideal for robotic applications which require high precision and completeness [5]. Learning-based methods can solve the aforementioned issues by generating more accurate and complete depth maps through the utilization of prior samples to understand how to correctly handle edges and uncertain pixels [2-4, 35]. However, a large-scale stereo dataset with ground truth depth is required to train these learning-based methods, which is costly and time-consuming to collect in the real world. Therefore, one way to alleviate this problem is to use self-supervised learning. Self-supervised stereo methods [38,39] use reprojection or other related losses between binocular images as supervision, but the fluctuation of these losses prohibits the network from reaching a meaningful optimum.
Another approach is to use simulation data, where ground truth depth is readily available. However, due to the domain gap between the simulation and real world, networks trained only on simulation data cannot be reliably transferred to the real domain. Domain adaptation methods have been proposed to overcome the Sim2Real problem [23], but the introduction of GANs makes the training process unstable [20] and the performance suboptimal. This paper proposes an end-to-end learning stereo method that combines the advantages of self-supervised learning in the real domain and supervised learning in the simulation domain, which we call mixed domain learning (Fig. 1). This strategy significantly boosts the stereo network performance while also stabilizing and speeding up the optimization process. Specifically, by only needing to train on shape primitives in the simulation domain with ground truth depth as supervision and on an unrelated set of scenes in the real domain with reprojection as self-supervision, we are able to achieve comparable performance on completely out-of-distribution objects in the real domain as though we were directly training on those objects. In addition, we observed that there are fundamental issues with performing direct image reprojection as previous works had done, so we propose the use of temporal IR: periodically adjusting the brightness of the emitted IR pattern and extracting the binary pattern from the temporal image sequences. The reprojection loss on the temporal binary pattern eliminates the influence of scene texture and also the effect of illumination strength decaying with increased distance. Experimental results demonstrate that our method is able to outperform state-of-the-art learning-based stereo methods and commercial depth sensors, and ablation studies verify the effectiveness of each module in our work.

Related Work
Depth sensors can be classified into four categories according to their underlying sensing principle [5]: passive stereovision, active stereovision, structured light, and time-of-flight. Each depth sensing technique has its own advantages and drawbacks. Giancola et al. [13] introduce the principles of different depth sensors and evaluate their metrological performance independently. Chen et al. [5] compared the short-range depth sensing performance of 8 commercially available depth sensors for different illumination settings and objects, and found that active stereovision sensors and structured light sensors have similar performance to each other and better performance than the other two kinds of sensors. Furthermore, depth sensor performance varies among different objects, with these sensors performing especially poorly on objects with complex optical characteristics [29]. In this paper, we focus on improving the visual and numerical performance of active stereovision depth sensors, but the framework can also be applied to structured light sensors.

Learning-Based Stereo has become much more prevalent with large-scale benchmarks and higher computational ability [12,16,21]. Stereo matching for depth estimation is typically done in four steps: matching cost computation, cost aggregation, optimization, and disparity refinement [31]. Zbontar and LeCun were the first to design a network for computing matching costs by utilizing a deep Siamese architecture [37]. Building on this, DispNet introduced the first end-to-end framework for predicting entire disparity maps from stereo image pairs [25].
Works such as GWCNet followed and improved on this framework by using 3D convolutions to compute better cost volumes [18]. Recent works have improved performance even further by utilizing multi-scale context aggregation to estimate depth at different resolutions, in order to leverage global image-level information [2,15]. However, the requirement of ground truth depth as supervision has limited the application of learning-based stereo.

Self-Supervised Stereo is a popular approach for stereo matching when ground truth depth is unavailable. Godard et al. [14] explored the use of left-right consistency in a rectified stereo image pair for self-supervision. They reconstruct the right view based on the given left view and its predicted disparity map and then use the reconstruction loss as supervision for training. PDANet [11] introduced the idea of perceptual consistency to improve reconstruction quality in regions with low texture and high color fluctuations. ActiveStereoNet [38] used a local-contrast-normalized (LCN) reprojection loss on IR images as self-supervision to train a stereo network. However, this reprojection loss fluctuates along the epipolar line and is heavily influenced by occlusion and viewpoint variance. Moreover, the LCN loss also suffers in areas where camera noise and environmental illumination dominate the projected IR pattern, since it only uses the IR image with the projected pattern. Our method addresses these concerns using a temporal IR reprojection loss obtained by actively adjusting the brightness of the emitted IR pattern, which is more robust to camera noise and environmental illumination.

Figure 2. Architecture overview. The simulated and real stereo IR images are fed to a shared-weight stereo network consisting of a CNN for noise reduction and a cost-volume-based 3D CNN for disparity prediction. The network is trained with reprojection loss on the temporal binary IR pattern in the real domain, and reprojection loss and disparity loss in the simulation domain, as mixed domain learning.

Domain Adaptation techniques have shown great promise in closing the gap between the simulation and real domains. Tobin et al. [33] proposed using domain randomization, randomizing rendering in the simulator to train a robust model that would interpret the real domain as just another variation of the simulation domain. Previous works have also tried aligning the source and target domains by matching their input distributions or their feature statistics [24,32]. Other works have attempted to learn domain-invariant representations by augmenting the input based on criteria set forth in the task and approach itself [10]. Moreover, unsupervised losses have seen increased use for domain adaptation in tasks such as semantic segmentation and object detection [7,30,34]. Our work is most related to StereoGAN [23], which uses ground truth depth maps in the simulated domain and reprojection loss in the real domain, along with unsupervised GAN losses, in order to close the domain gap between simulation and real images.
Our work differs from theirs in three key ways: (1) we utilize IR images with actively projected patterns for stereo matching instead of passive RGB images, which leads to a smaller sim2real gap and better transferability; (2) we use the proposed temporal IR reprojection loss as self-supervision, which is more effective in correlating local matching features; (3) we train using only shape primitives and random real objects that are out-of-distribution from test-time data.

Method
In this section, we introduce mixed domain learning for active stereovision. We first define the task setup: in the real domain X, we have a target set of real IR stereo images with projected pattern, X_t = {(x_l, x_r)_i}, and our goal is to learn an accurate disparity estimation network F to estimate the disparity x̂_d = F(x_l, x_r). We utilize mixed domain data to train the network: in the real domain X, we collect another set X = {(x_l, x_r)_i}, i = 1..N, without disparity annotation. To be clear, the objects appearing in X are different from the ones in X_t. In the simulation domain Y, we generate a set of synthetic IR stereo images with ground truth disparity annotation, Y = {(y_l, y_r, y_d)_i}, i = 1..K. In order to guarantee the generalizability of the trained network to unseen objects, we only use shape primitives (sphere, cube, capsule) with different scales, textures and materials to generate Y. Figure 2 shows the framework of our proposed method. In the real domain, we propose the use of a temporal binary IR reprojection loss as self-supervision (Sec. 3.1). In the simulation domain, we use the loss between the predicted disparity and the ground truth disparity y_d as supervision (Sec. 3.2). The network is trained jointly using the self-supervision in the real domain and the supervision in the simulation domain (Sec. 3.3). The stereo network architecture and other implementation details are introduced in Sec. 3.4.

Real Domain: Self-supervised Learning on IR Images
The prerequisite for computing the reprojection loss of grayscale stereo images in conventional self-supervised learning methods [14,38] is that the object surface is Lambertian diffuse, where the reflection intensity is invariant to the viewpoint; this is usually not satisfied in the real world. Therefore, we propose to extract the binary projected active pattern from temporal IR stereo image sequences.

Binary Pattern Extraction From Temporal IR Images. For the real captured IR images x_l or x_r, the grayscale value at pixel (u, v) is

    x_l(u, v) = I_l(u, v) + α · e · K_l(u, v) + ε,

where I_l(u, v) represents the environmental illumination intensity, K_l(u, v) represents the binary pattern captured by the camera, α represents the reflection coefficient determined by the object surface material, texture, angle and distance, e represents the pattern emittance, and ε represents the camera sensor noise. For active depth sensors, we can manually adjust the pattern emittance e by changing the emitter power. Therefore, as shown in Fig. 3, our pattern extraction procedure is as follows: we set e to {e_0, e_1, ..., e_n}, capture a temporal sequence of corresponding IR images {x^(0), x^(1), ..., x^(n)}, fit x^(0), ..., x^(n) to the linear model above by per-pixel regression over e, and obtain the regressed images x̂^(0), ..., x̂^(n). We extract the binary IR pattern K(u, v) from the temporal image sequence through local window normalization and binarization,

    K(u, v) = 1 if ( x̂(u, v) − mean(W(x̂, u, v)) ) / std(W(x̂, u, v)) > c, and 0 otherwise,

where W(x, u, v) is a local window centered at pixel (u, v) in x with window size w, and c is a threshold to filter out noise and areas where the reflection coefficient is extremely small, such as pure specular reflection regions. In our work, we use n = 6.
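A minimal NumPy sketch of this extraction step follows. Since the original equations are only partially recoverable, the use of the fitted per-pixel slope (which approximates α·K) as the quantity to be normalized and binarized, along with the window size and threshold values, are our assumptions:

```python
import numpy as np

def extract_binary_pattern(images, powers, w=11, c=0.5):
    """Extract a binary IR pattern from a temporal IR sequence.

    images: (n, H, W) grayscale IR frames captured at emitter powers `powers`.
    Per pixel, fit x(e) = I + s * e (so s ~ alpha * K up to noise); the slope
    s is large only where the projected pattern is present, so locally
    normalizing and thresholding it yields the binary pattern K.
    """
    e = np.asarray(powers, dtype=np.float64)
    x = images.reshape(len(e), -1).astype(np.float64)

    # Per-pixel least-squares slope of grayscale vs. emittance.
    e_c = e - e.mean()
    slope = (e_c[:, None] * (x - x.mean(axis=0))).sum(0) / (e_c ** 2).sum()
    slope = slope.reshape(images.shape[1:])

    # Local window normalization followed by binarization.
    pad = w // 2
    padded = np.pad(slope, pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(padded, (w, w))
    mu = win.mean(axis=(-1, -2))
    sd = win.std(axis=(-1, -2)) + 1e-6
    return ((slope - mu) / sd > c).astype(np.uint8)
```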
In Fig. 4, we compare the patterns extracted by different methods. By utilizing the temporal image sequence, our method is able to extract the pattern accurately and completely even in distant areas where the SNR (signal-to-noise ratio) is low. The local normalization and binarization window filters out camera sensor noise and environmental illumination while retaining the projected active pattern, which is beneficial for the subsequent reprojection loss computation.

Binary Pattern Reprojection Loss. As demonstrated in traditional stereo matching and active stereo methods [1,8,9,38], patch-wise reprojection losses are smoother and more accurate than pixel-wise losses and are beneficial for matching. Therefore, we construct the patch-wise reprojection loss on the extracted binary IR patterns (K_l, K_r),

    L_reproj = (1/N) Σ_{(u,v)} Σ_{(i,j) ∈ P(u,v)} | K_l(i, j) − K̂_l(i, j) |,

where P(u, v) represents the patch centered at pixel (u, v) with patch size (2p+1) × (2p+1), and K̂_l represents the warped right pattern obtained using the predicted disparity x̂_d. As shown in Fig. 4, since the temporal binary IR pattern eliminates the influence of object texture and environmental illumination and only retains the projected pattern, the reprojection loss computed on the binary IR pattern reaches its global minimum at the ground truth disparity, while the losses computed on the other two patterns could be misleading for the stereo network.

Simulation Domain: Supervised Learning on Shape Primitives
Although the proposed temporal IR reprojection loss can be used as the sole loss for stereo network training, it still has some limitations: the binary IR pattern cannot be extracted accurately for translucent and transparent objects, and there are local minima in the loss with respect to the disparity hypotheses. Traditional supervised learning with ground truth depth does not suffer from these issues; however, it is costly and time-consuming to acquire ground truth depth in real-world settings. Thus, we perform supervised learning only in the simulation domain.

Dataset Generation Based on Ray-tracing. In the last decade, there has been significant progress in ray-tracing rendering techniques in terms of speed and quality. Compared with rasterization, ray-tracing rendering can accurately simulate the light transmission process on translucent and transparent objects [28]. Therefore, we use ray-tracing rendering to generate the simulated training dataset: we first build a cone light with a mask to imitate the pattern emitter in the real active stereovision depth sensor, and then construct two cameras similar to the stereo cameras in the real setting. The relative positions between the cameras and the light are set using parameters from real sensors. We also add dim ambient light in the simulation environment to imitate the filtered environmental light in the real setting.

Shape Primitives. The semantic-specific biases in CAD model datasets may mitigate the generalizability of the learned stereo network. Thus, we only use basic shape primitives for simulated dataset generation. We use images from tiny ImageNet [22] as object textures. The number of primitives is randomly sampled from 5 to 15. The sizes, layouts and materials are also randomly generated.
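A minimal PyTorch sketch of the patch-wise binary-pattern reprojection loss defined above. The warping convention x_r = x_l − disparity (rectified pairs) and the use of average pooling to aggregate patches are our assumptions:

```python
import torch
import torch.nn.functional as F

def warp_right_to_left(right, disp):
    """Warp the right pattern to the left view using the predicted disparity.
    right, disp: (B, 1, H, W); assumes rectified pairs with x_r = x_l - disp."""
    B, _, H, W = right.shape
    xs = torch.arange(W, device=right.device).view(1, 1, W).expand(B, H, W)
    ys = torch.arange(H, device=right.device).view(1, H, 1).expand(B, H, W)
    x_src = xs - disp.squeeze(1)  # where each left pixel samples the right image
    grid = torch.stack([2 * x_src / (W - 1) - 1,   # normalize to [-1, 1]
                        2 * ys / (H - 1) - 1], dim=-1)
    return F.grid_sample(right, grid, align_corners=True)

def binary_pattern_reproj_loss(k_left, k_right, disp, p=5):
    """Patch-wise L1 between the left pattern and the warped right pattern;
    averaging each (2p+1)x(2p+1) patch is equivalent to average pooling."""
    k_hat_left = warp_right_to_left(k_right, disp)
    diff = (k_left - k_hat_left).abs()
    return F.avg_pool2d(diff, kernel_size=2 * p + 1, stride=1, padding=p).mean()
```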
Disparity Loss. Given a synthetic stereo image pair with ground-truth disparity (y_l, y_r, y_d), we follow [2] and adopt the smooth L1 loss between y_d and the predicted disparity ŷ_d on the synthetic stereo images,

    L_disp = (1/N) Σ_{(u,v)} smooth_L1( ŷ_d(u, v) − y_d(u, v) ).

Mixed Domain Learning
Given the real stereo IR images (x_l, x_r) and the simulated stereo IR images with ground truth disparity (y_l, y_r, y_d), we train the stereo network F(·, ·) by combining the reprojection loss in the real domain with the disparity loss and reprojection loss in the simulation domain,

    L = λ_r · L_reproj(x_l, x_r) + λ_s · ( L_disp(y_l, y_r, y_d) + L_reproj(y_l, y_r) ),

where λ_r and λ_s represent the weights of the real domain and the simulation domain, respectively. The loss terms in the real domain guarantee transferability to unseen real data. However, we find that it is quite hard to train the network using these terms alone, due to noise in the self-supervision signals. Interestingly enough, after adding the supervised loss terms on primitive shapes in the simulation domain, the behavior of loss minimization is much more tame: not only does the network converge faster, but the final solution also has better quality (see Sec. A.1 in the supplementary material and Sec. 4.3 for empirical evidence).

Implementation Details
In the stereo matching network, we adopt PSMNet [2] as the backbone, which aggregates image features at different scales, constructs a cost volume, and uses 3D CNNs to regress the disparity. The max disparity of PSMNet is set to 192. We also use a 6-layer CNN to filter out irrelevant noise before feeding the stereo images into PSMNet. To make the model more robust, we apply color jitter and gaussian blur to the input images.
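A minimal sketch of how the mixed-domain objective could be assembled in a single training step. The network, loss function, and batch fields are stand-ins for the modules described above (they are not the authors' released code); the λ values follow the experiment details below:

```python
import torch

def mixed_domain_step(stereo_net, real_batch, sim_batch,
                      lambda_r=2.0, lambda_s=0.01):
    # Real domain: self-supervised temporal binary IR reprojection only.
    disp_real = stereo_net(real_batch["ir_l"], real_batch["ir_r"])
    loss_real = binary_pattern_reproj_loss(
        real_batch["pattern_l"], real_batch["pattern_r"], disp_real)

    # Simulation domain: supervised disparity loss plus reprojection loss.
    disp_sim = stereo_net(sim_batch["ir_l"], sim_batch["ir_r"])
    loss_disp = torch.nn.functional.smooth_l1_loss(disp_sim, sim_batch["disp_gt"])
    loss_sim_reproj = binary_pattern_reproj_loss(
        sim_batch["pattern_l"], sim_batch["pattern_r"], disp_sim)

    return lambda_r * loss_real + lambda_s * (loss_disp + loss_sim_reproj)
```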
Experiment Details
Datasets. Figure 5 shows example images from the three datasets in our work. For the testing dataset, we used an Intel RealSense D415 as the active stereovision depth sensor. All the real RGB and IR images were captured using the RealSense camera. In order to quantitatively evaluate the performance of the camera, complete and accurate ground truth depth is required. To obtain it, we constructed a set of simulated scenes that are pixel-wise aligned with the real ones by precisely aligning the shapes and poses of the objects and the intrinsic and extrinsic parameters of the RealSense camera. To evaluate the influence of object material on depth estimation performance, we include two categories of objects: 3D-printed objects and real objects. The 3D-printed objects are printed using color plaster powder and are considered Lambertian diffuse, while the real objects' materials are complex (specular, translucent, transparent) and difficult for active stereovision depth sensors. Overall, the testing dataset consists of 504 stereo images of 24 different scenes. For the training dataset in the simulation domain, we rendered 20,000 stereo IR images with ground-truth disparity annotation using random shape primitives, including spheres, cubes and capsules. 10% of the primitives are set to be transparent, 50% are textured with images from tiny ImageNet [22], and the rest are set to random colors. For the ray-tracing rendering, the number of samples per pixel is 128 and the max bounces is set to 8. The rendered IR images are post-processed by the NVIDIA OptiX denoiser [27]. For the training dataset in the real domain, we collected 1,047 real stereo IR images of random objects different from those in the testing dataset. The objects are randomly placed on the table and captured by the same RealSense from different viewpoints. Note that we only use the real IR stereo images to construct the temporal IR reprojection loss; the depth images are not collected.

Figure 5. Example images (RGB, IR, disparity) from our dataset: (a) the simulation training dataset of random shape primitives; (b) the real training dataset of random objects different from testing; (c) the sim2real aligned testing dataset, including specular surfaces such as metals and translucent bodies such as liquids. Note: we don't rely on any annotation for real scenes, which is why there is no disparity annotation in (b).

Training. We train the network using the Adam optimizer with the initial learning rate set to 2e-4, decaying by half every 10k iterations, for a total of 40k iterations. The network is trained on 2 GPUs, each with 11 GB of GPU memory, with a batch size of 4. We use λ_s = 0.01 and λ_r = 2 for the loss weights, bringing the two losses to similar scales. For fair comparison, data augmentation is applied to both our method and the baseline methods. Specifically, brightness and contrast are uniformly scaled by a value between 0.4 and 1.4, and between 0.8 and 1.2, respectively. For gaussian blur, the kernel size is fixed to 9 × 9 and the standard deviation is selected uniformly between 0.1 and 2.

Evaluation Metrics. Several common stereo estimation metrics are used to evaluate the proposed method. End-point error (EPE) is the mean absolute disparity error. Bad1 is the percentage of pixels with disparity errors larger than 1 pixel. By converting disparity to depth, we also measure the average absolute depth error (abs depth err) and the percentage of depth outliers with absolute error larger than 4 mm, denoted as >4mm. To evaluate the performance of our model on objects of different materials, these depth metrics are measured separately on the two kinds of objects in the testing dataset using object masks. Since the RealSense camera outputs a value of zero in areas with high depth uncertainty, metrics are computed both excluding and including uncertain pixels, so that the evaluation is at the same completeness level.
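A minimal NumPy sketch of these metrics; the handling of the sensor's uncertain (zero-depth) pixels via a validity mask is our assumption:

```python
import numpy as np

def stereo_metrics(disp_pred, disp_gt, depth_pred, depth_gt, valid=None):
    """EPE, Bad1 (%), abs depth err (mm), and >4mm (%) over valid pixels.
    `valid` can exclude pixels the sensor marks as uncertain (depth == 0)."""
    if valid is None:
        valid = np.isfinite(disp_gt)
    d_err = np.abs(disp_pred[valid] - disp_gt[valid])
    z_err = np.abs(depth_pred[valid] - depth_gt[valid])  # in mm
    return {
        "EPE": d_err.mean(),
        "Bad1 (%)": 100.0 * (d_err > 1.0).mean(),
        "abs depth err (mm)": z_err.mean(),
        ">4mm (%)": 100.0 * (z_err > 4.0).mean(),
    }
```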
Comparison with Other Methods
For evaluation, our method is compared with other learning-based methods and a capable commercial depth sensor, the RealSense D415. As shown in Tab. 1, our method outperforms the other methods in all metrics.

Learning-based methods. Our method is best compared with PSMNet [2] and StereoGAN [23], and we use them as our baselines. To test vanilla PSMNet, we train it on input stereo images with and without the active pattern, using only the training dataset in the simulation domain, and then test it directly on the real testing dataset. As shown in Tab. 1, using the active pattern improves the stereo matching accuracy across all metrics and is beneficial for eliminating the sim-real domain gap. This intuitively makes sense, since active light adds pattern to textureless areas, which are the most difficult to match. Furthermore, besides the original StereoGAN [23], we extend the StereoGAN architecture by using PSMNet as the disparity prediction backbone, which we denote StereoGAN+PSMNet. This improved StereoGAN uses cost volume aggregation in its stereo matching module, which makes it more powerful and comparable with our method. The results show that StereoGAN+PSMNet performs better than StereoGAN in all metrics. However, when compared with our method, StereoGAN+PSMNet performs considerably worse, as the absolute depth error increases from 4.377 mm to 13.762 mm. This is further corroborated by Fig. 6, where StereoGAN+PSMNet struggles to predict depth on real objects such as the metal can, which has a specular surface. Our mixed domain learning method, on the other hand, has improved accuracy on these types of objects. This large performance improvement can be attributed to direct supervision in the simulation domain on primitives with random shapes and materials, a well-shaped temporal IR reprojection loss that accurately locates the correct correspondences, and a more robust pipeline overall, since it does not use a GAN module.

Intel RealSense D415. To the best of our knowledge, we are the first work to be quantitatively compared with commercial products. The Intel RealSense D415 uses a traditional CENSUS-based stereo matching method [19,36], which has high computational efficiency but leaves uncertain pixels without depth values. Therefore, we report our results at the same completeness levels as RealSense and demonstrate that our method outperforms RealSense in every metric. In Fig. 6, RealSense is unable to accurately predict pixels in specular areas, while our method is able to match those pixels well. In addition, for 3D-printed objects, our model also demonstrates lower depth error.

Ablation Study
In this section, we validate the effectiveness of each component and design choice through ablation experiments.

Reprojection Loss. We compare the network's performance when computing the reprojection on different patterns, as shown in Tab. 2. First, we use the traditional reprojection loss on input stereo images, which simply computes the patch-wise mean squared error (MSE) of the warped images. Second, we use an advanced reprojection loss function from ActiveStereoNet [38], which uses an LCN module to alleviate the condition where two matched pixels have large residuals due to the distance from the camera and the physical properties of the surface. Third, we experiment with applying the reprojection loss on a 2-step IR pattern. For the sake of fairness, we add synthetic ground truth depth supervision to all of the experiments above. The raw IR reprojection has the worst result because it does not take into account the different IR intensities of two matched pixels. While LCN IR helps address this issue, it employs reprojection on the continuous, locally normalized grayscale IR image, which is still affected by environmental illumination and object texture. To tackle this issue, we proposed a reprojection loss on 2-step IR patterns, which shows better performance since the binary pattern eliminates the small residual between two matched pixels. Lastly, since the SNR is low for pixels that are far away from the camera, 2-step IR cannot properly extract the active light pattern in distant areas. This issue is addressed by our temporal IR patterns. By tracking the intensity differences in the temporal IR image sequence, our approach extracts a more accurate and complete IR pattern. The results prove that our reprojection on temporal IR images is superior to all other reprojection methods.

Simulation Supervision. In order to investigate the effect of simulation supervision, we implement the experiments listed in Tab. 3. Specifically, we observe a significant performance drop in the trained model after removing supervision on simulation disparity. Therefore, we can conclude that supervision in the simulation domain helps the network achieve better performance. As mentioned before, the simulation domain can also help stabilize and speed up the optimization process.
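For reference, a minimal sketch of the local contrast normalization (LCN) used in the ActiveStereoNet-style baseline discussed above; the window size and the stabilizing constant are our assumptions:

```python
import torch
import torch.nn.functional as F

def local_contrast_normalize(img, window=9, eps=1e-3):
    """LCN as used in ActiveStereoNet-style reprojection baselines: subtract
    the local mean and divide by the local standard deviation, both computed
    over a sliding window. img: (B, 1, H, W)."""
    pad = window // 2
    mu = F.avg_pool2d(img, window, stride=1, padding=pad)
    sq = F.avg_pool2d(img * img, window, stride=1, padding=pad)
    sigma = (sq - mu * mu).clamp(min=0).sqrt()
    return (img - mu) / (sigma + eps)
```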
Simulation Supervision. To investigate the effect of simulation supervision, we run the experiments listed in Tab. 3. We observe a significant performance drop after removing the supervision on simulation disparity, so we conclude that supervision in the simulation domain helps the network achieve better performance. As mentioned before, the simulation domain also helps the network converge faster (see the supplementary material).

Generalization. In order to evaluate the generalizability of the learned stereo network trained on the simulated dataset of shape primitives, we construct another simulated dataset using the same objects as in the testing dataset. As shown in Tab. 4, the model trained on the random shape primitives dataset outperforms the model trained on the dataset containing only the shapes and textures that appear in the testing dataset. This validates the claim that the greater variation of geometry, texture, and material in our shape primitives dataset leads to superior generalizability of the learned stereo network.

Conclusion and Future Work

In this paper, we propose a novel end-to-end training framework, mixed domain learning, for learning-based active stereo that surpasses commercial depth sensors and state-of-the-art methods in the real world without any real depth annotation. One limitation of our work is that we evaluate its effectiveness on only one type of active stereovision sensor. Further study is needed to understand how well our learned stereo network transfers to other out-of-distribution real datasets and sensor types. Additionally, for this framework to be usable in real applications, we would need to investigate how to accelerate network inference to achieve real-time depth predictions.

Supplementary Material for "ActiveZero: Mixed Domain Learning for Active Stereovision with Zero Annotation"

1. Additional Ablation Study

Effect of Simulation Ground Truth. In this section, we study the effect of using the supervised simulation disparity loss L_disp during training. To do so, we conduct experiments with and without L_disp added to the final loss term and observe their convergence rate as well as the final converged solution. Figure 1 shows that adding the simulation disparity loss (blue) helps the network converge faster to the global optimum.

Patch Size of the Reprojection Loss. In this section, we conduct an ablation study on the patch size of the patch-wise reprojection loss. In the main paper, we chose a patch size of 11. For this study, we change the patch size to 7, 15, and 21, train each variant with only the real reprojection loss term, and evaluate them on the same testing dataset. Table 1 suggests that patch size 15 gives the best absolute depth error (abs depth err), while patch size 21 gives the lowest percentage of depth outliers with absolute error larger than 4 mm (>4mm). However, the loss curves in Fig. 2 indicate that patch size 11 converges faster than the other patch sizes. Since patch size 11 also occupies less GPU memory during training, we choose patch size 11 in our main experiments.

Loss Ratio between the Simulation and Real Domains. In this section, we conduct an ablation study on the loss weights λ_s and λ_r described in Sec. 3.3 of the main paper. In our main experiment, we use λ_s = 0.01 and λ_r = 2. We change λ_s and λ_r to different values and test the trained models on the testing dataset. The results in Tab. 2 indicate that the network achieves the best result with λ_s = 0.01 and λ_r = 2, which is consistent with our main experiment setting.

The 6-layer CNN Filter. We examine the effectiveness of the 6-layer filter module in our proposed pipeline. As shown in Tab. 3, training with the 6-layer filter achieves better performance than the pipeline without this module. The reason is that the filter alleviates the lighting effects of the original image (Fig. 3), so that the gap between the simulation and real datasets decreases.
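As a rough illustration of such a filter module, a 6-layer convolutional network could look like the sketch below; the channel width, kernel sizes, and activations are our assumptions, since the paper's exact architecture is not reproduced here.

```python
import torch.nn as nn

class IRFilter(nn.Module):
    """A minimal 6-layer convolutional filter (our sketch).

    It maps a single-channel raw IR image to a filtered single-channel
    image with reduced lighting and texture effects, to be fed to the
    stereo matching backbone.
    """
    def __init__(self, channels=32):
        super().__init__()
        layers = []
        in_ch = 1  # single-channel IR input
        for _ in range(5):
            layers += [nn.Conv2d(in_ch, channels, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            in_ch = channels
        # 6th layer: project back to one channel, no activation.
        layers += [nn.Conv2d(in_ch, 1, kernel_size=3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, ir_image):
        return self.net(ir_image)
```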
Table 3. Performance of the network trained with and without the 6-layer filter.

    Method               Abs depth err (mm) ↓   >4mm ↓
    w/o 6-layer filter   4.592                  0.356
    6-layer filter       4.377                  0.335

Figure 3. The effect of the 6-layer CNN filter. The top image is the captured IR image; the bottom image is the output of the 6-layer CNN filter. As shown, the lighting effect and the texture of the objects are reduced after passing through the filter.

Inference Time. We measure the inference time of our proposed pipeline in Tab. 4. Our method has an average inference time of 0.256 seconds per image pair at a resolution of 960 × 540. Compared to StereoGAN with a PSMNet backbone, our method achieves faster inference while also having better performance. We will continue to reduce the inference time in future studies.

Table 4. Inference time of StereoGAN+PSM and our method.

    Method          Inference time (s) ↓
    StereoGAN+PSM   0.303
    Our Method      0.256

2. More Details of Datasets

The training simulation dataset has 18,000 image pairs with random camera extrinsics, shape primitives, textures, and poses. As shown in Fig. 4 (a), to make the scenes more complicated, the primitives can overlap with each other and are not strictly attached to the table: they can either intersect the table or float above it. The textures in Fig. 4 (a) are randomly selected to improve generalizability. For the IR images in Fig. 4 (a), the simulated IR pattern is projected onto each scene of the simulation dataset.

Samples of the real training dataset are shown in Fig. 4 (b). The objects in the training dataset are not present in the testing dataset, and no ground truth depth is required for this dataset. To preserve generalizability, the optical properties of the objects are diverse: Fig. 4 (b) contains objects that are transparent (glass bottle), specular (the cover of the glass bottle), and diffuse (black paper box). These objects reflect the IR pattern differently, as seen in Fig. 4 (b). Temporal IR images are collected by adjusting the power of the pattern emitter; there are 6 images with increasing IR power in each scene.

The testing dataset contains objects that are never used in training, to best represent the generalizability of our method. As shown in Fig. 4 (c) and (d), the object properties are also diverse; for example, this dataset contains specular objects (metal ball), transparent objects (bottled water), and diffuse objects (printed cell phone). The IR pattern is collected by setting the IR emitter to the maximum power used in the training dataset. To obtain accurate ground truth, we align the scene using the same object poses and camera parameters in simulation, as shown in Fig. 4 (c) and (d).
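To illustrate how a binary pattern might be extracted from such a temporal sequence, the sketch below thresholds the per-pixel intensity trend across the six increasing-power frames. The least-squares slope test and the threshold `tau` are our assumptions; the paper only states that the pattern is extracted by tracking intensity differences over the sequence.

```python
import numpy as np

def extract_temporal_pattern(ir_stack, powers, tau=2.0):
    """One plausible way to extract a binary IR pattern from a temporal
    sequence of IR images captured at increasing emitter power.

    ir_stack: (T, H, W) float array, one IR image per power level
              (T = 6 in the dataset described above).
    powers:   (T,) emitter power setting for each frame.
    tau:      slope threshold (a hypothetical value, tuned per sensor).
    """
    p = powers - powers.mean()
    # Per-pixel least-squares slope of intensity vs. emitter power:
    # pixels lit by the projected pattern brighten as power increases,
    # while ambient-only pixels stay roughly constant.
    slope = np.tensordot(p, ir_stack - ir_stack.mean(axis=0), axes=1) / (p @ p)
    return slope > tau  # (H, W) binary pattern map
```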