Dataset fields: id (string, lengths 3–9); source (string, 1 class); version (string, 1 class); text (string, lengths 1.54k–298k); added (date, 1993-11-25 05:05:38 to 2024-09-20 15:30:25); created (date, 1-01-01 00:00:00 to 2024-07-31 00:00:00); metadata (dict).
id: 30594268 | source: pes2o/s2orc | version: v3-fos-license
text:
Near-infrared Spectroscopy as a Process Analytical Technology Tool for Monitoring the Parching Process of Traditional Chinese Medicine Based on Two Kinds of Chemical Indicators

Background: The active ingredients, and thus the pharmacological efficacy, of traditional Chinese medicine (TCM) vary greatly with the degree of parching. Objective: Near-infrared spectroscopy (NIR) was used to develop a new method for rapid online analysis of the TCM parching process, using two kinds of chemical indicators (5-(hydroxymethyl) furfural [5-HMF] content and 420 nm absorbance) as reference values, both of which change markedly and can be readily observed in most TCM parching processes. Materials and Methods: Three representative TCMs, Areca (Areca catechu L.), Malt (Hordeum vulgare L.), and Hawthorn (Crataegus pinnatifida Bge.), were used in this study. With partial least squares regression, calibration models of NIR were generated based on two kinds of reference values, i.e., 5-HMF contents measured by high-performance liquid chromatography (HPLC) and 420 nm absorbance measured by ultraviolet–visible spectroscopy (UV/Vis), respectively. Results: In the optimized models for 5-HMF, the root mean square errors of prediction (RMSEP) for Areca, Malt, and Hawthorn were 0.0192, 0.0301, and 0.2600, and the correlation coefficients (Rcal) were 99.86%, 99.88%, and 99.88%, respectively. Moreover, in the optimized models using 420 nm absorbance as reference values, the RMSEP for Areca, Malt, and Hawthorn were 0.0229, 0.0096, and 0.0409, and Rcal were 99.69%, 99.81%, and 99.62%, respectively. Conclusions: NIR models with 5-HMF content and 420 nm absorbance as reference values can rapidly and effectively identify the three kinds of TCM at different degrees of parching. This method has great promise to replace the current subjective color judgment and time-consuming HPLC or UV/Vis methods and is suitable for rapid online analysis and quality control in the TCM industrial manufacturing process.

SUMMARY
Near-infrared spectroscopy (NIR) was used to develop a new method for online analysis of the traditional Chinese medicine (TCM) parching process. Calibration and validation models of Areca, Malt, and Hawthorn were generated by partial least squares regression using 5-(hydroxymethyl) furfural contents and 420 nm absorbance as reference values, respectively, which are the main indicator components during the parching process of most TCMs. The established NIR models of the three TCMs had low root mean square errors of prediction and high correlation coefficients. The NIR method has great promise for use in TCM industrial manufacturing processes for rapid online analysis and quality control.

Abbreviations used: NIR: Near-infrared spectroscopy; TCM: Traditional Chinese medicine; Areca: Areca catechu L.; Hawthorn: Crataegus pinnatifida Bge.; Malt: Hordeum vulgare L.; 5-HMF: 5-(hydroxymethyl) furfural; PLS: Partial least squares; D: Dimension factor; SLS: Straight line subtraction; MSC: Multiplicative scatter correction; VN: Vector normalization; RMSECV: Root mean square errors of cross-validation; RMSEP: Root mean square errors of prediction; Rcal: Correlation coefficients; RPD: Residual predictive deviation; PAT: Process analytical technology; FDA: Food and Drug Administration; ICH: International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use.

INTRODUCTION
Parching, a process that greatly changes active ingredients and pharmacological efficacy, plays an important therapeutic role in traditional Chinese medicine (TCM).
[1][2][3] These changes are complex and difficult to measure. The current method for identifying the degree of parching is based on color judgment, which is not only subjective but also unable to reflect the real variations accurately. [3] According to the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) Q8, product performance can be gained by the application of process analytical technology (PAT). [4] Moreover, the US Food and Drug Administration (FDA) also advocates the use of PAT to improve pharmaceutical manufacturing and quality assurance. [5] Thus, online analysis and quality control should be studied to identify the critical sources of variability, to manage the process variability, and to improve the quality of industrial products effectively. [6][7][8][9][10][11]

Conventional high-performance liquid chromatography (HPLC) or ultraviolet–visible spectroscopy (UV/Vis) methods are not ideal for online analysis of the TCM parching process because they require tedious sample preparation and time-consuming sample analysis. Near-infrared spectroscopy (NIR) is a fast, precise, noninvasive, and nondestructive technique that requires little or no sample preparation. Recently, NIR has been routinely preferred and has gradually become one of the most efficient online analytical tools for qualitative and quantitative analysis of herb materials, separation monitoring, and extraction processes. [8][9][10][11][12][13][14][15][16][17][18] Therefore, to control the parching process and guarantee the stable quality of parched TCM, NIR spectroscopy, in line with the FDA's PAT initiative and guidance, was applied to online analysis of the TCM parching process. [5]

As common TCMs, Areca, Malt, and Hawthorn have diverse pharmacological efficacies at different degrees of parching. Raw Areca is mainly used to expel parasites, whereas stir-charred Areca is used to disperse food stagnation. [3,9,19] Raw Malt's efficacy is strengthening the spleen and stomach function, stir-fried Malt is good for digestion and delectation, and stir-charred Malt is helpful in relieving dyspepsia. [3,20] Raw Hawthorn can eliminate blood stasis and alleviate pain, stir-fried Hawthorn can promote digestion, stir-charred Hawthorn is helpful against diarrhea, and carbonized Hawthorn can stanch bleeding. [3,21] Hence, it is meaningful to control the parching process of these TCMs to guarantee the product performance of the parched products.

During the parching process of most TCMs, the Maillard reaction (which includes nucleophilic addition, dehydration, cyclization, Amadori rearrangement, enolization, and Strecker decomposition) and the caramelization reaction always occur. [22] As shown in Figure 1, 5-(hydroxymethyl) furfural (5-HMF) is an important intermediate product, and melanoidins and caramels are the main end products. The degree of the browning reaction can be reflected indirectly by measuring the absorbance of the nonenzymatic browning reaction products (melanoidins and caramels) at 420 nm. [22][23][24] Thus, both 5-HMF content and 420 nm absorbance change markedly during the parching process of most TCMs and can be used as chemical indicators of the parching process.
In this study, we took Areca, Malt, and Hawthorn as TCM examples and used NIR to establish two kinds of models, based on 5-HMF contents and 420 nm absorbance respectively, for online analysis and quality control during the TCM parching process. The validation results showed that the models were robust, accurate, and repeatable for online analysis and quality control. On this foundation, product quality can be monitored online by NIR efficiently to obtain parched products of the best quality.

Samples and reagents
Areca pieces (Yunnan, China), Hawthorn pieces (Shandong, China), and Malt (Sichuan, China) were all collected from TCM markets (Sichuan, China). Each kind of TCM, weighing 18 kg, was divided into 18 equal parts. [Figure 1: The generation process of 5-(hydroxymethyl)furfural and other components that have high absorbance at 420 nm during the parching process.] Each part was parched by stir-frying; the temperature of the herbal medicine during the parching process was controlled with the aid of an infrared thermometer (GM320, BENETECH). Parched samples were collected at 25°C, 110°C, 130°C, 150°C, 160°C, 170°C, 180°C, 190°C, 200°C, and 210°C, respectively. Hence, 18 batches of parched samples of each herbal medicine were obtained from this process, and each batch contained ten samples at different degrees of parching. To ensure that moisture was not an interfering factor, all samples were dried in a silica gel desiccator for at least 7 h at room temperature until the weight loss was <0.0003 g. 5-HMF was obtained from Sigma (Sigma-Aldrich Inc., St. Louis, MO, USA). HPLC-grade methanol was obtained from Tianjin Kermel Chemical Reagent Company (Tianjin, PR China). Water was purified by an ultrapure water instrument. All other reagents were of analytical grade.

Near-infrared spectroscopy data collection
The NIR spectra of each parched sample were recorded with a QuasIR 3000 spectrometer (Vspec, USA) equipped with a PbS detector, sample cup, and rotary tables. The system was operated with Essential FT-IR spectral acquisition and processing software (Operant LLC, licensed to MTG, USA). The spectra were obtained at a resolution of 8 cm−1 over a wavenumber range of 12,000–4000 cm−1 with 32 scans per spectrum, and air absorbance was recorded as the reference standard. Each sample measurement was repeated two times. The average NIR spectra are shown in Figure 2.

Reference values collection
All parched samples were first milled into powder and then passed through 60 mesh sieves. About 1.0 g of sample powder was extracted ultrasonically with 25 mL of 60% methanol for 45 min and then filtered through 0.45 µm filters to obtain the sample solution. An Agilent 1100 HPLC system (Agilent Technologies Inc., USA) equipped with a UV/Vis detector was used for the quantitative determination of 5-HMF at a wavelength of 280 nm. A Kromasil C18 column (4.6 mm × 250 mm, 5 µm) maintained at 30°C was used to separate and analyze 10 µL sample injections. The elution system was composed of methanol–water with 0.5% acetic acid (10:90) at a flow rate of 1.0 mL/min. The 5-HMF content was calculated from a linear regression (calibration) equation prepared daily to ensure accuracy. Each sample solution was diluted 10-fold with 60% methanol and then measured at 420 nm with a UV-6300PC spectrophotometer (MAPADA, Shanghai) to obtain the absorbance, with the absorbance of 60% methanol recorded as the reference standard.
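For illustration only, here is a minimal Python sketch (not from the paper) of how the two reference values could be back-calculated from the raw measurements described above. The calibration slope and intercept, peak areas, extract volume, sample mass, dilution factor, and units are illustrative assumptions.

def hmf_content_mg_per_g(peak_area, slope, intercept,
                         extract_volume_ml=25.0, sample_mass_g=1.0):
    # Daily HPLC calibration line assumed as: peak_area = slope * conc + intercept,
    # with conc the 5-HMF concentration (mg/mL) in the injected extract.
    conc_mg_per_ml = (peak_area - intercept) / slope
    # Scale back to the weighed sample powder (assumed reporting unit: mg/g).
    return conc_mg_per_ml * extract_volume_ml / sample_mass_g

def browning_absorbance(measured_a420, dilution_factor=10.0):
    # 420 nm absorbance of the undiluted extract; assumes Beer-Lambert linearity
    # and that the reported value is corrected for the 10-fold dilution.
    return measured_a420 * dilution_factor

# example with made-up numbers
print(hmf_content_mg_per_g(peak_area=1532.0, slope=2100.0, intercept=12.0))
print(browning_absorbance(measured_a420=0.041))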
Data processing by partial least squares
The intensities measured at the different wavenumbers of the NIR spectra can be correlated to the 5-HMF concentration and the 420 nm absorbance of the sample through partial least squares (PLS) regression with the OPUS 6.5 software (Bruker Optik, Ettlingen, Germany).

Sample set selection
To ensure the representativeness of the calibration set and validation set, cluster analysis of the NIR spectra was first used to classify the samples (pretreatment method: vector normalization (VN); spectral segment: 12,000–4000 cm−1). Each kind of TCM was divided roughly into four categories (raw, stir-fried, stir-charred, and carbonized product, shown in Figure 3). Among the 180 samples, 44 samples (11 from each cluster) were selected randomly as the validation set to validate the PLS model, and the remaining 136 samples formed the calibration set used to establish the calibration model. Moreover, the calibration set covers the full range of potential variation.

Spectral pretreatment methods
Ten important spectral pretreatment methods were considered: constant offset elimination (COE), straight line subtraction (SLS), VN, min/max normalization, multiplicative scatter correction (MSC), first derivative, second derivative, first derivative + SLS, first derivative + VN, and first derivative + MSC. Each pretreatment method was applied to each NIR spectrum to eliminate noise, baseline shift, and matrix background interference and to enhance the spectral features, so that the relevant information could be extracted before PLS modeling.

Development of calibration models
The spectral pretreatment method, the spectral range, and the dimension factor (D) are critical parameters for the optimum model in PLS. The best parameters were evaluated based on the correlation coefficient of the calibration set (Rcal %), the root mean square error of cross-validation (RMSECV), and the residual predictive deviation (RPD). The best calibration model was selected by the highest Rcal (%) and RPD as well as the lowest RMSECV. As shown in Tables 1a and 1b, calibration equations were modeled with the ten spectral pretreatments and optimized not only by spectral range but also by dimension factor (D).

Evaluation of the predicted results of the validation set
The above-optimized models were used to predict the 5-HMF content and 420 nm absorbance of the samples in the validation set individually. The validation set tests the predictive ability of the optimized PLS models, with the results summarized in Tables 1a and 1b.

The optimized near-infrared spectroscopy model
The optimized NIR models, with the highest Rcal (%) and RPD and the lowest RMSECV and RMSEP, were selected by comparing the parameters of these models. As shown in Tables 2a and 2b, the calibration equations that used 5-HMF content as reference values were modeled with COE for all three kinds of TCM, and the calibration equations that used 420 nm absorbance as reference values were modeled with COE for Areca and Hawthorn and VN for Malt. The results indicated that the established models were robust, accurate, and repeatable for online analysis and quality control.

CONCLUSIONS
This research indicated that NIR combined with PLS, using 5-HMF content and 420 nm absorbance as reference values, could provide accurate and rapid online analysis of three kinds of TCM (Areca, Malt, and Hawthorn) during the parching process. The results showed that the established NIR models had low RMSEP and high correlation coefficients.
Compared with the conventional analytical procedures, this method is more comprehensive, more intuitive, and more convenient, and it can be widely applied to the TCM parching process because the reference values used in the models (5-HMF content and 420 nm absorbance) are chemical indicators of the parching process of most TCMs. Therefore, this method is promising for monitoring most kinds of TCM industrial manufacturing processes to achieve rapid online analysis and quality control, ensuring stability and the desired product quality. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
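As a rough illustration of the PLS calibration workflow described in the data-processing sections above, here is a minimal Python sketch using scikit-learn and scipy in place of the OPUS 6.5 software actually used in the paper. The pretreatment shown (a Savitzky-Golay first derivative), the number of latent variables, the cross-validation scheme, and the array names X_cal, y_cal, X_val, y_val are assumptions, not the paper's settings.

import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def first_derivative(X, window=17, poly=2):
    # One common implementation of the "first derivative" pretreatment.
    return savgol_filter(X, window_length=window, polyorder=poly, deriv=1, axis=1)

def metrics(y_true, y_pred):
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    r = np.corrcoef(y_true, y_pred)[0, 1]
    rpd = np.std(y_true, ddof=1) / rmse      # residual predictive deviation
    return rmse, r, rpd

def fit_and_evaluate(X_cal, y_cal, X_val, y_val, n_factors=7):
    # n_factors plays the role of the dimension factor D.
    pls = PLSRegression(n_components=n_factors)
    # cross-validation on the calibration set -> RMSECV (and an Rcal-like correlation)
    y_cv = cross_val_predict(pls, X_cal, y_cal, cv=10).ravel()
    rmsecv, r_cal, rpd = metrics(y_cal, y_cv)
    # refit on all calibration samples, then predict the validation set -> RMSEP
    pls.fit(X_cal, y_cal)
    rmsep, _, _ = metrics(y_val, pls.predict(X_val).ravel())
    return {"RMSECV": rmsecv, "Rcal": r_cal, "RPD": rpd, "RMSEP": rmsep}

# usage sketch (X_* restricted beforehand to the chosen wavenumber range):
# results = fit_and_evaluate(first_derivative(X_cal), y_cal, first_derivative(X_val), y_val)

In practice, each of the ten pretreatments, several spectral ranges, and a range of dimension factors D would be screened, and the combination with the highest Rcal and RPD and the lowest RMSECV kept, as described above.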
added: 2018-04-03T04:00:57.941Z | created: 2017-04-01T00:00:00.000
metadata: { "year": 2017, "sha1": "52f93ac8a8e9eec691be4c06819507d9ebb0a61d", "oa_license": "CCBYNCSA", "oa_url": "https://europepmc.org/articles/pmc5421435", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "52f93ac8a8e9eec691be4c06819507d9ebb0a61d", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Mathematics", "Medicine" ] }
id: 39739713 | source: pes2o/s2orc | version: v3-fos-license
text:
Validation and reliability of a guideline appraisal mini-checklist for daily practice use Background The use of comprehensive instruments for guideline appraisal is time-consuming and requires highly qualified personnel. Since practicing physicians are generally busy, the rapid-assessment Mini-Checklist (MiChe) tool was developed to help them evaluate the quality and utility of guidelines quickly. The aim of this study was to validate the MiChe in comparison to the AGREE II instrument and to determine its reliability as a tool for guideline appraisal. Methods Ten guidelines that are relevant to general practice and had been evaluated by 2 independent reviewers using AGREE II were assessed by 12 GPs using the MiChe. The strength of the correlation between average MiChe ratings and AGREE II total scores was estimated using Pearson’s correlation coefficient. Inter-rater reliability for MiChe overall quality ratings and endorsements was determined using intra-class correlations (ICC) and Kendall’s W for ordinal recommendations. To determine the GPs’ satisfaction with the MiChe, mean scores for the ratings on five questions were computed using a six-point Likert scale. Results The study showed a high level of agreement between MiChe and AGREE II in the quality rating of guidelines (Pearson’s r = 0.872; P < 0.001). Inter-rater-reliability for overall MiChe ratings (ICC = 0.755; P < 0.001) and endorsements (Kendall’s W = 0.73; P < 0.001) were high. The mean time required for guideline assessment was less than 15 min und user satisfaction was generally high. Conclusions The MiChe performed well in comparison to AGREE II and is suitable for the rapid evaluation of guideline quality and utility in practice. Trial registration German Clinical Trials Register: DRKS00007480 Background Clinical practice guidelines are defined by the Institute of Medicine as "statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options" [1]. There is evidence to suggest that rigorously developed guidelines have the power to translate the complexity of scientific research findings and other evidence into recommendations for healthcare action [2][3][4][5][6][7][8]. To increase guideline quality, several institutions [1,[9][10][11][12][13][14][15][16][17][18][19][20][21][22][23] have prepared manuals, that attempt to define standards for guideline developers. At the same time, tools have been developed to help potential guideline users to assess guideline quality. The most commonly used international guideline appraisal tool is the AGREE II Instrument [24], but its use is time consuming and demands considerable skill on the part of the guideline appraiser. Graham 2000 identified and compared guideline appraisal tools in a systematic review [25], which was later updated by Vlayen in 2005 [26] and Siering in 2013 [27]. Siering identified 40 different appraisal tools that vary considerably in terms of the number of quality dimensions taken into account. In the opinion of the authors, appraisal tools containing many quality dimensions may not represent the best choice in all cases. Depending on the problem being addressed, a tool containing a few well thought out questions may well suffice. To be effective, guidelines must be applied by clinicians. 
An appraisal tool that is quick and easy to use and assesses the most relevant quality dimensions of a guideline would generally encourage their wider use. We therefore developed and published a minichecklist (MiChe) for the rapid appraisal of the usefulness and quality of a guideline for clinical practitioners. Detailed information on the development process is provided elsewhere [28]. However, the development was based on a systematic search in guideline directories and bibliographic databases for guideline appraisal instruments. The assessment criteria used in the retrieved instruments were identified, and their importance to the development of an effective rapid assessment tool was judged by German guideline experts. The key criteria for MiChe were then selected on the basis of the most commonly found criteria in the retrieved instruments and the ratings from the expert survey. Our primary objective was to validate the MiChe vs. the AGREE II instrument and determine its reliability for daily users in terms of ability to rapidly assess the strengths and weaknesses of a guideline and dependability of content. Methods Twelve general practitioners (GPs) were asked to use the MiChe to assess 10 eligible guidelines that had already been evaluated by 2 independent reviewers using the AGREE II instrument. Aims Primary outcomes a) Validate the overall quality rating of AGREE II as the gold standard vs. the overall quality rating of the MiChe. b) Estimate the inter-rater reliability of the overall quality rating assigned by different guideline assessors using the MiChe. Secondary outcomes relating to the MiChe alone: a) Demonstrate the inter-rater reliability of endorsement: willing to recommend this guideline for use in practice ("yes"; "yes, with certain reservations" or "no"). b) Demonstrate user satisfaction to indicate whether the MiChe would help raters decide whether to use a specific guideline or not. c) Feedback to improve the MiChe. d) Time required for an assessment using MiChe. Tertiary outcomes: Evaluate the correlation between overall quality rating and endorsement of the MiChe vs. quality ratings of individual items (domain 1 -6) of AGREE II. Participants During a quality circle that took place in November 2014, a convenience sample of GPs working as resident doctors was recruited from the more than 100 accredited general practices that make up GP Research Network Frankfurt (ForN) [29]. GPs with experience of guideline development or appraisal, i.e. members of guideline commissions or GPs in postgraduate training were excluded. All participants received 1.5-h of training on the basics of guideline development and appraisal at the Institute of General Practice in Frankfurt. In addition, a sample guideline was provided, along with instructions to read and appraise the guideline using the MiChe. Participants later received a folder with a printed version of 10 guidelines and were asked to use the MiChe to appraise them. Results were returned by mail. In the Federal State of Hesse, Germany, the code of medical ethics allows formal ethical approval to be waived upon request if the biomedical research to be conducted on patients or healthy volunteers involves no risky procedures and is not invasive. We contacted the local ethics committee of Frankfurt University Hospital, who informed us that ethical approval could be waived. 
As we were not expecting ethical approval to be required, participating GPs were only required to provide their verbal consent before starting to review the guidelines. Guideline selection and guideline assessment tools The selection process was initiated by choosing guidelines already known to the study team and by studying a list of 20 guidelines, sorted according to their characteristics. Of these, 10 guidelines were selected that covered subjects that are relevant to general practice, had varying AGREE II quality levels, varied in length and were written in either German or English. Two independent reviewers with professional expertise in guideline appraisal from the Institute for Quality and Efficiency in Health Care (IQWiG) first assessed the guidelines using the AGREE II instrument. These assessments served as the gold standard for the validation [24]. The MiChe [28,30] contains 8 key-criteria that focus on important methodological features (quality of guideline creation, quality of reporting, quality of presentation, quality of evidence synthesis), as well as a 3-level assessment scale (see Fig. 1). Data management The GPs had to complete 10 MiChes for the 10 different guidelines and a short questionnaire on their personal characteristics and previous experience of guidelines. To indicate whether the MiChe would help raters to decide whether or not to use a guideline, 5 questions addressed user satisfaction (satisfaction, frequency of future use, makes it easier to deal with guidelines, influence of guideline recommendations on future daily practice use, comprehensibility) using a six-point Likert scale from 1 -6, with 1 indicating a strong positive response. The average time required for assessment was measured separately for each guideline and GP. Suggestions for improvement and notes were documented in a free text field. Ethics approval was not required, since no patients were involved. The protocol for this validity and reliability study was registered in the German Clinical Trials Register: DRKS00007480. Data analyses Validity The strength of the correlation between the average MiChe ratings of the guidelines and the AGREE II total score were estimated using Pearson's correlation coefficient. A correlation of more than 0.70 is considered desirable. Additionally, correlations between the average recommendation on the MiChe and the separate AGREE II domains were calculated using Spearman's rank order correlation. Inter-rater reliability Inter-rater agreement for the various MiChe ratings of the GPs was determined using intra-class correlations (ICC) and Kendall's W for ordinal recommendations for endorsement ("yes", "yes, with certain reservations", "no"). For both coefficients, we consider values over 0.75 as good, values between 0.40 and 0.75 as moderate, and values below 0.40 as poor [31]. Evaluation of the mini checklist To determine the GPs' satisfaction with the MiChe, mean scores for the ratings on the five questions quoted in the data management section were computed. Determination of required sample size The required sample size to estimate inter-rater agreement can be determined by defining a specific null hypothesis and a specific alternative hypothesis, and selecting a desired type I and type II error rate (α and β level) [32]. We chose to set ICC = 0.50 as the lowest acceptable agreement for the null hypothesis and an expected value of ICC = 0.75 for the alternative hypothesis. For α = 0.05 and β = 0.20, 10 GPs would have to evaluate 14 guidelines. 
The 10 guidelines evaluated in our study would still yield a statistical power between 60% and 70%, and, as the guidelines were not sampled randomly but selected to elicit high variation in the AGREE II and MiChe ratings, it would probably be higher. This, in turn, makes it more likely that high inter-rater agreement can be statistically confirmed.

Characteristics of the GPs and the tested guidelines
Twelve GPs (6 female) participated in our study. Their mean age was 53 (SD 7) years and their mean professional experience as a GP was 19 (SD 7) years; 6 worked in a joint practice and 7 in a rural area with less than 60,000 inhabitants. None of the participants had used a guideline assessment tool before, but all 12 GPs had previously used guidelines as a source of information (Table 1). The included guidelines were published between 2006 and 2013 and covered different areas of relevance to general practice. Six guidelines were in German and 4 in English. They differed in length from 4 to 278 pages. The overall quality of the guidelines as assessed by AGREE II varied between 2 and 6 points on the 7-point scale. Four of them received a recommendation of "yes", 4 of "yes, with certain reservations" and 2 were given a "no" recommendation. The average MiChe overall quality score across the 12 GPs ranged from 2.4 (SD 1.0) to 6.7 (SD 0.7) for the 10 guidelines. Based on the MiChe assessment, 6 guidelines received a majority recommendation of "yes", 1 of "yes, with certain reservations" and 3 were given a "no" recommendation by the majority of the GPs. The total AGREE II score was lower than the total MiChe score for 7 of the 10 guidelines [33][34][35][36][37][38][39] and higher for the remaining 3 [40][41][42]. The DEGAM guideline on heart failure [34] was ranked best overall by both instruments and 2 guidelines [39,42] were poorly ranked by both assessment tools (Table 2).

Fig. 1 Methodological Guideline Quality – Mini-Checklist (each item rated YES / TO SOME EXTENT / NO):
1. The guideline has been written in a generally comprehensible manner and its key recommendations are easy to identify.
2. The guideline's target audiences and scope of application were specified.
3. The background, the objectives of the guideline, and the patients for whom the guideline is relevant were clearly described.
4. The persons that developed the guideline are named, and their financial independence and any conflicts of interest are clearly documented.
5. The search for evidence was systematic and the criteria used to select evidence were described.
6. The guideline recommendations are unambiguous and the evidence they are based on is clearly presented.
7. Different treatment options are presented that take account of potential benefits, side effects and risks.
8. Clear information is provided on how up-to-date the guideline is and for how long this is expected to be the case.

Primary endpoints on validity and inter-rater reliability of the overall quality rating
The average MiChe quality rating of the guidelines was strongly related to the total AGREE II score (Pearson's r = 0.872; one-tailed P < 0.001), as were the recommendations to use the guidelines (Spearman's ρ = 0.909; one-tailed P < 0.001). Both results indicate a high level of validity in the MiChe ratings.

Secondary endpoint for the assessment of the mini-checklist
For the inter-rater reliability of willingness to recommend the guidelines, or "endorsement" for use in practice, Kendall's W for ordinal ratings was 0.73 (P < 0.001), also indicating good agreement between raters.
Concerning user satisfaction, the mean value for overall satisfaction with the MiChe was 1. The 12 GPs required an average of 12.9 min (SD 9.2) for the MiChe assessment. The mean appraisal time for each guideline ranged from 6.8 to 20.1 min ( Table 2). Eight GPs provided feedback. They would have liked to have a more differentiated assessment scale and mentioned that questions 2 and 3 were rather similar in content. Another suggestion was to add a test question regarding the existence of a structured pocket-version of the guideline for use in practice. Some GPs reckoned that assessments may depend on the language in which the guideline was written and some criticized the questions for their focus on methodological and formal aspects, as they felt this may influence a result even when a recommendation was of proven efficacy. It was further mentioned that the MiChe does not assess the practical usefulness of a guideline on a day-to-day basis. Correlation between the domains of AGREE II and the MiChe The average overall quality rating of the 10 guidelines using MiChe was highly correlated (Pearson's correlations between 0.74 and 0.87) with the expert ratings in the AGREE II domains II -IV and VI. Correlations for the domains I and V were not statistically significant ( Table 3). The pattern of correlations on the level of recommendation with the individual AGREE II domains is very similar. Details of the AGREE II for assessment per domain are shown in Table 4 in the Appendix. Discussion Guidelines have the potential to improve the quality and safety of health care, but are often not used in clinical practice. In order to be helpful, a guideline must be of high methodological quality. The use of comprehensive research-focused instruments such as AGREE II is timeconsuming and requires highly qualified personnel. Since practicing physicians are generally very busy, a new rapid-assessment tool (MiChe) was developed to help them evaluate the quality and utility of a guideline quickly and on their own. This paper presents the results of a validationstudy for MiChe [28], as compared to the AGREE II instrument [24]. Ten guidelines that are relevant to general practice and reflect a spectrum of methodological quality ranging from low to high according to an appraisal using the AGREE II instrument were included and assessed using the MiChe by 12 GPs that were inexperienced in guideline appraisal. The study showed a high level of agreement in the quality rating of guidelines between MiChe and AGREE II and recommendations to use the guideline. In addition, inter-rater-reliability for the overall MiChe quality ratings and MiChe recommendation for use in practice were high. With high user satisfaction and a mean time required for guideline assessment of less than 15 min, the MiChe was shown to be suitable for the rapid assessment of guideline quality and utility in practice. Although the study shows high validity and interrater-reliability for the MiChe, it nevertheless has a number of limitations. The validation of the MiChe was performed using the AGREE II instrument as the gold standard for guideline appraisal. AGREE II is the most frequently used instrument for the assessment of methodological guideline quality and has been validated in several studies [43][44][45][46]. Nevertheless it remains unclear whether all items and domains of AGREE II contribute equally to the quality of a guideline [25]. 
The results of our findings that the same individual AGREE II items (II-IV, VI) correlated with both average overall quality ratings and levels of recommendation should not be over-interpreted. The correlations were probably caused by chance, even though it was an interesting result that for domain 3 in particular (rigor of development), the correlation was very high. In addition, we clearly recognize that the questions on the MiChe cannot be seen as independent of the individual AGREE II items. Further empirical studies are needed to find out which items and quality dimensions are essential to the assessment of guideline quality. Unfortunately we didn't measure the time it took to assess the guidelines using the AGREE II instrument. However, the AGREE II consortium recommends the use of at least 2 and preferably 4 appraisers and consists of 23 key items organized within 6 domains followed by 2 global rating items. Therefore, we assumed that it requires considerably more time and personnel resources to apply than are typically available to a GP. Guideline appraisal instruments can be used to assess whether a guideline has been developed in a methodologically accurate and transparent way in accordance with international standards. Guidelines containing adequate information on these topics will therefore be judged to be of high (methodological) quality. But this appraisal is made regardless of whether all recommendations made in the guideline are correct or not. Thus some guidelines of high methodological quality may still contain individual recommendations that are not internally valid in terms of content. Equally, a guideline of low methodological quality may contain recommendations of high content validity [26,47,48]. Although the GPs involved in this study were inexperienced in guideline appraisal, they received short, basic training on guideline development and assessment before using the MiChe. A comparison between trained and untrained clinicians with regard to the usability and reliability of the MiChe was not part of this investigation. In addition, convenience sampling of the participants limits generalizability of the results. To achieve wider implementation, future research should assess whether clinicians with no prior training come to the same results as trained clinicians and apply random sampling techniques. To date only the German language version of the MiChe has been validated. It would be useful to know to what extent the use of an English translation of the MiChe would lead to corresponding results. A large number of manuals and instruments can be used for guideline development and quality assessment. A systematic review carried out by Siering et al in 2013 [27] identified a total of 40 different appraisal tools. Information on quality and validity was only available for 11 of these 40 tools, while detailed information concerning the validation process was reported for only 6. Among these, AGREE II was the most extensively validated instrument [43][44][45][46]. In recent years, a number of clinician-focused rapid assessment tools have been developed contemporaneously, and in addition to comprehensive research-focused instruments. Apart from MiChe, these include the iCAHE Guideline Quality Checklist [49], the Global Rating Scale (GRS) of the AGREE Collaboration [50], and the surgeons' checklist by Coroneos et al. [51]. Of these, MiChe is the only instrument of this type that is available in German and is thus more easily accessible for German speakers. 
It is also the only tool that has been validated for use in general practice. In 2014, Grimmer et al tested the validity, inter-rater reliability and clinical utility of the iCAHE Guideline Quality Checklist in comparison to the AGREE II instrument [49]. In their study they found a moderate to strong correlation between the iCAHE and the AGREE II scores. A comparison of these four tools was published by Semlitsch et al. in 2015 [30] and showed that, although developed independently, they all focus on a few, broad-based and very similar key questions. They can therefore only give a rudimentary impression of the value of a guideline. They are not intended to provide a comprehensive and detailed guideline appraisal, and include only a broad-based rating system. Conclusion Physicians increasingly use guidelines to gain clinical knowledge. To be dependable, these guidelines need to be prepared using proper methods and to be of sufficiently high quality. The MiChe is a validated rapid-assessment instrument that allows busy physicians to assess the methodical quality of guidelines without the need for experts in guideline appraisal and judge whether a guideline is applicable in patient care or not. It thus increases the likelihood that guideline recommendations will be used in practice and contributes towards sustained improvement in patient health care. Declaration and availability statement The dataset supporting the conclusions of this article are available from the corresponding author on request.
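A minimal Python sketch (not the study's code) of the agreement statistics named in the Data analyses section above: Pearson's r for validity against the AGREE II total score and Kendall's W (computed here without tie correction) for the inter-rater agreement of the ordinal endorsements. The matrices `miche` (guidelines x raters) and `endorsements`, the numeric coding of the endorsement levels, and `agree_total` are placeholders.

import numpy as np
from scipy.stats import pearsonr, rankdata

def kendalls_w(ratings):
    # Kendall's coefficient of concordance for a (subjects x raters) matrix, no tie correction.
    n, m = ratings.shape                              # n subjects, m raters
    ranks = np.apply_along_axis(rankdata, 0, ratings)  # each rater ranks the subjects
    rank_sums = ranks.sum(axis=1)
    s = np.sum((rank_sums - rank_sums.mean()) ** 2)
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# validity: correlation between mean MiChe rating per guideline and AGREE II total score
# r, p = pearsonr(miche.mean(axis=1), agree_total)

# reliability of endorsements coded, e.g., yes=3, with reservations=2, no=1
# w = kendalls_w(endorsements)

An ICC for the overall quality ratings would typically come from a dedicated implementation (for example, pingouin's intraclass_corr) rather than being hand-rolled.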
added: 2018-04-03T04:36:47.148Z | created: 2016-04-02T00:00:00.000
metadata: { "year": 2016, "sha1": "dcd10ffa25b4af0ccc15548e6d5ded399a1c903c", "oa_license": "CCBY", "oa_url": "https://bmcmedresmethodol.biomedcentral.com/track/pdf/10.1186/s12874-016-0139-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "dcd10ffa25b4af0ccc15548e6d5ded399a1c903c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
id: 6216101 | source: pes2o/s2orc | version: v3-fos-license
text:
Fractional quantum Hall effect in the absence of Landau levels

It is well known that topological phenomena with fractional excitations, the fractional quantum Hall effect, will emerge when electrons move in Landau levels. Here we show the theoretical discovery of the fractional quantum Hall effect in the absence of Landau levels in an interacting fermion model. The non-interacting part of our Hamiltonian is the recently proposed topologically non-trivial flat-band model on a checkerboard lattice. In the presence of nearest-neighbouring repulsion, we find that at 1/3 filling, the Fermi-liquid state is unstable towards the fractional quantum Hall effect. At 1/5 filling, however, a next-nearest-neighbouring repulsion is needed for the occurrence of the 1/5 fractional quantum Hall effect when the nearest-neighbouring repulsion is not too strong. We demonstrate the characteristic features of these novel states and determine the corresponding phase diagram.

As one of the most significant discoveries in modern condensed matter physics, the fractional quantum Hall effect (FQHE) 1 has attracted intense theoretical and experimental studies in the past three decades. The wavefunctions proposed by Laughlin first explained the FQHE by introducing fractionally charged quasi-particles. Later, the ideas of flux attachment and the composite Fermi-liquid theory 5 provided a rather simple but deep understanding of the nature of the FQHE. Among many of its interesting and unique properties, two of the most striking features of the FQHE are the following: fractionalization, where quasi-particle excitations carry fractional quantum numbers and even fractional statistics of the constituent particles, and topological degeneracy, where the number of degenerate ground states responds nontrivially to a change of the topology of the underlying manifold. These two phenomena are the central ideas of topological ordering 6 . In addition, on the potential application side, they are also essential components in the study of topological quantum computation 7 . Since 1988, great efforts have been made to study quantum Hall effects in lattice models without Landau levels. The first such example is the theoretical model proposed by Haldane 8 , where it was demonstrated that in a half-filled honeycomb lattice, an integer quantum Hall state can be stabilized upon the introduction of imaginary hoppings. This study was brought to the forefront again recently owing to a major breakthrough in which a brand new class of topological states of matter, known as the time-reversal invariant Z_2 topological insulators, was discovered (see refs 9,10 and references therein). From the topological point of view, both Haldane's model and Z_2 topological insulators can be considered as generalizations of the integer quantum Hall effect. Unlike the FQHE, they do not support fractional excitations, and their ground states have no topological degeneracy. For fractional states, fractional Z_2 topological insulators are found to be theoretically possible 11,12 . However, it is unclear how to realize such states with lattice models. This difficulty originates from the strong-coupling nature of the fractionalized states. In contrast to the integer quantum Hall effect and the Z_2 topological insulators, where all the essential topological properties can be understood within a non-interacting picture, interaction effects are expected to have a vital role in stabilizing a fractionalized topological state.
In fact, without interactions, a fractional quantum Hall system will become a Fermi liquid due to the fractional filling factor. Most recently, a series of topologically non-trivial nearly-flat-band models has been proposed [13][14][15] . These lattice models have topologically non-trivial bands (different from an early proposal 16 ), similar to Haldane's model and Z_2 topological insulators, and their bandwidth can be tuned to be much smaller than the band gap, resulting in a nearly-flat-band structure. In particular, based on the mechanism of quadratic band touching 13,14 , a large class of flat-band models has been explicitly obtained with the ratio of the band gap to the bandwidth reaching the high value of 20-50. Owing to the strong analogy between these nearly flat bands and the Landau levels, it was conjectured that in these models, the FQHE (or fractional topological insulators) can be stabilized in the presence of repulsive interactions. However, the conjecture is challenged by competing orders, for example, the charge-density wave, and thus the fate of these systems is unclear. More importantly, the flux-attachment picture 5 may also break down, and it is very interesting to examine the nature of the corresponding emergent fractionalized quantum state, if it can be realized in such a model without Landau levels. Here we report the discovery of the FQHE at filling factors 1/3 and 1/5 in models with topologically non-trivial nearly flat bands 14 , based on exact calculations of finite-size systems. The existence of the FQHE is confirmed by using two independent methods, both of which are well established and have been widely adopted in the study of the traditional FQHE. We have studied both the characteristic low-energy spectrum and the topologically invariant Chern numbers [17][18][19][20][21][22] of the low-energy states. The first method directly detects the topological degeneracy and the other is related to the phenomenon of fractionalization.

Results
Phase diagram. We consider the following Hamiltonian on a checkerboard lattice:

H = H_0 + U Σ_⟨i,j⟩ n_i n_j + V Σ_⟨⟨i,j⟩⟩ n_i n_j,    (1)

where H_0 describes the short-range hoppings in the two-band checkerboard-lattice model defined in ref. 14 and n_i is the on-site fermion particle number operator. The nearest-neighbouring (NN) and next-nearest-neighbouring (NNN) bonds are represented by ⟨i, j⟩ and ⟨⟨i, j⟩⟩, respectively. The effects of the NN (U) and NNN (V) interactions are summarized in the phase diagrams of Figure 1a and b. At filling factor 1/3, the FQHE emerges upon turning on a relatively small U. Interestingly, the FQHE phase remains robust in the large-U limit, where particles avoid each other at NN sites, and can only be destroyed by an intermediate V. At filling factor 1/5, the FQHE occurs in most of the parameter space as long as the NNN repulsion V exceeds a critical value V_c, whose value drops to zero for larger U. The observed FQHE states are characterized by nearly p-fold (p = 3 and 5 for filling factors 1/3 and 1/5, respectively) degenerate ground states, with the momentum quantum numbers of these states related to each other by a unit momentum translation of each particle as an emergent symmetry; a finite spectrum gap separating the ground-state manifold (GSM) from the low-energy excited states, with a magnitude dependent on the interaction strengths U and V; and the GSM carrying a unit total Chern number as a topological invariant protected by the spectrum gap, resulting in a fractional effect of a 1/p quantum for each energy level in the GSM.
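A minimal Python sketch (not the authors' code) of the kind of finite-cluster exact diagonalization used to obtain the spectra discussed below: spinless fermions in a fixed particle-number sector, a hopping list standing in for H_0, and the NN/NNN density-density repulsions U and V. The bond and hopping lists are placeholders to be filled from the checkerboard-lattice model of ref. 14, and the momentum-sector block diagonalization used in the paper is omitted for brevity.

import numpy as np
from itertools import combinations
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

def build_basis(n_sites, n_part):
    # All occupation bit-strings with n_part spinless fermions on n_sites sites.
    states = [sum(1 << i for i in occ) for occ in combinations(range(n_sites), n_part)]
    return states, {s: k for k, s in enumerate(states)}

def hop_sign(state, i, j):
    # Fermionic sign of c_i^dagger c_j: parity of occupied sites strictly between i and j.
    lo, hi = (i, j) if i < j else (j, i)
    between = state & (((1 << hi) - 1) ^ ((1 << (lo + 1)) - 1))
    return -1.0 if bin(between).count("1") % 2 else 1.0

def build_hamiltonian(n_sites, n_part, hops, nn_bonds, nnn_bonds, U, V):
    # hops: list of (i, j, t) giving t * c_i^dagger c_j; include both (i, j, t)
    # and (j, i, conj(t)) so that H is Hermitian.
    states, index = build_basis(n_sites, n_part)
    H = lil_matrix((len(states), len(states)), dtype=complex)
    for col, s in enumerate(states):
        diag = 0.0
        for (i, j) in nn_bonds:
            diag += U * ((s >> i) & 1) * ((s >> j) & 1)
        for (i, j) in nnn_bonds:
            diag += V * ((s >> i) & 1) * ((s >> j) & 1)
        H[col, col] += diag
        for (i, j, t) in hops:
            if (s >> j) & 1 and not (s >> i) & 1:
                s2 = (s ^ (1 << j)) | (1 << i)
                H[index[s2], col] += t * hop_sign(s, i, j)
    return H.tocsr()

# usage sketch: lowest part of the spectrum in a given particle-number sector
# H = build_hamiltonian(n_sites=24, n_part=8, hops=hops, nn_bonds=nn, nnn_bonds=nnn, U=1.0, V=0.0)
# energies = eigsh(H, k=10, which="SA", return_eigenvectors=False)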
We identify the quantum phase transition on the basis of the collapse of the spectrum gap. As shown in Figure 1, the Fermi-liquid phase with gapless excitations is found for relatively strong NNN interaction at 1/3 filling and for relatively weak NNN interaction at 1/5 filling. This observation is consistent with Haldane's pseudopotential theory 3 and previous studies on the ordinary FQHE, where a FQHE state is found to be sensitive to interactions and other microscopic details (for example, the thickness of the two-dimensional electron gas 23 ).

Low-energy spectrum and topological degeneracy. Many-body momentum sectors are defined through the translation operator along direction j, with j = x and y representing the x and y directions, respectively. Note that the filling factor is defined as the ratio of the number of particles (N_p) to the number of unit cells (N_x × N_y). In the absence of impurities, the total momentum of the many-body state is a conserved quantity, and thus the Hamiltonian can be diagonalized in each momentum sector for systems with N_s = 24 to 60 sites (depending on the filling factor). We consider periodic boundary conditions (θ_1 = θ_2 = 0) first. Figure 2a illustrates the evolution of the low-energy spectrum with changing U for V = 0 and N_s = 2×4×6 (N_x = 4 and N_y = 6) at particle-filling factor ν = 1/3. Here the NN hopping strength (t) is set to unity. We denote the momentum of a state q = (2πk_x/N_x, 2πk_y/N_y) by using two integers (k_x, k_y), as shown in Figure 2. For vanishing NN interaction U = 0.0 (the bottom panel of Fig. 2a), the ground state has k = (0,0), whereas no particular structure is observed in the other k sectors. For a weak interaction U = 0.2, we find an interesting change in the spectrum: the energies of two states with momenta (k_x, k_y) = (0, 2) and (0, 4) have been lowered substantially. For a stronger interaction, the energies of the three states with k_x = 0 and k_y = 0, 2, 4 form a nearly degenerate GSM 6 at U > 0.3. In the meantime, a sizable spectrum gap opens up, separating the GSM from the other excited states, as shown in the top panel of Figure 2a for U = 1.0. The obtained near degeneracy of the threefold ground state and a robust spectrum gap are the characteristic features of the 1/3 FQHE phase, which emerge with the onset of the NN repulsion U. By increasing the NNN repulsion V to a certain critical value V_c, we have observed the collapse of the spectrum gap, which determines the boundary of the 1/3 FQHE phase, as shown in Figure 1a. Further evidence of the FQHE based on topological quantization will be presented later in Figure 3. In Figure 2b, we present the formation of the 1/5 FQHE by showing the energy spectrum at particle filling factor ν = 1/5 with increasing V at U = 1 for a system with N_s = 2×6×5 (N_x = 6 and N_y = 5). From the bottom panel to the top panel, five states with momenta (2, k_y) (k_y = 0, …, 4) form the nearly degenerate GSM with the increase of V, whereas a large spectrum gap is formed at V = 1.0. The same feature of the energy spectrum is observed for the whole regime of the 1/5 FQHE above the critical V_c line shown in the phase diagram of Figure 1b, while V_c drops to near zero for larger U. We note that there is a small energy difference between the states in the GSM at both filling factors. This is a finite-size effect 21,24 , as each of these states has to fit into the lattice structure. The finite-size effect is substantially smaller for the 1/5 FQHE compared with the 1/3 case due to the lower particle density.
Interestingly, for all cases that we have checked for different system sizes (N_s = 24-60), the members of the GSM are always related to each other through a momentum-space translation as an emerging symmetry of the system. Namely, if (k_1, k_2) is the momentum quantum number for a state in the GSM, then another state in the GSM can be found in the momentum sector (k_1 + N_e, k_2 + N_e) modulo (N_x, N_y). This relation of the quantum numbers of the GSM demonstrates the correlation between real space and momentum space in a manner precisely resembling the FQHE in a uniform magnetic field. For the case where the particle number N_p is an integer multiple of both N_x and N_y (for example, N_s = 2×3×6 and N_s = 2×5×5 at 1/3 and 1/5 fillings, respectively), all the states of the GSM are indeed observed to fall into the same momentum sector, as expected.

Fractional quantization of topological number. To further establish the existence of the FQHE here, we study the topological property of the GSM by numerically inserting flux into the system using generalized boundary phases. As first realized by Thouless and co-workers [17][18][19] , a topological quantity of the wavefunction, known as the first Chern number, distinguishes the quantum Hall states from other topologically trivial states. In particular, one can detect the fractionalization phenomenon in the FQHE by examining the Chern number of the GSM through inserting flux into the system [20][21][22] . The results of this calculation are shown in Figure 3. [Figure 3 caption: In the fractional quantum Hall phase, a linear curve is observed whose slope is determined by the filling factor, as expected. In the Fermi-liquid phase, we observed large fluctuations and non-universal behaviours, indicating the absence of topological quantization.] With details presented in the Methods part, the total Berry phase as a function of N_mesh/N_tot should be a linear function with slope 2π/3 and 2π/5 for the FQHE with filling factors 1/3 and 1/5, respectively. This agrees very well with the observation shown in Figure 3. On the other hand, for the Fermi-liquid phase, strong fluctuations and non-universal behaviours of the Berry phase are found, suggesting the absence of topological quantization.

FQHE at strong coupling limit. We further examine the phase in the strong-coupling limit for the lower filling factor ν = 1/5, where the 1/5 FQHE demonstrates less sensitivity to either large U or V. We show the lowest 20 eigenvalues as a function of U in Figure 4, where we always set V = U for simplicity. At small U, we see a flatness of the spectrum consistent with a Fermi-liquid phase with small energy dispersion. As we increase U > U_c = 0.35, the lowest five states remain nearly degenerate, whereas the higher-energy states jump a step up, making a robust gap between them and the GSM. In fact, the 1/5 FQHE persists into the infinite U and V limit, as we have checked by projecting out the configurations in which a site and any of its NN or NNN sites are simultaneously occupied. Physically, this can be understood as particles at lower filling having enough phase space within the lower Hubbard band, and thus the FQHE remains intact. It would be very interesting to establish a variational state for the FQHE in the flat-band model, which will be investigated in the future.

Discussion
Finally, it is important to emphasize that the fractional topological phases we found are very stable, and the same effect survives even if the hopping strengths are tuned by an amount of ~10%.
On the experimental side, it is known that the checkerboard lattice we studied can be found in condensed matter systems (for example, thin films of LiV2O4, MgTi2O4, Cd2Re2O7, etc.), and the imaginary hopping terms we require can be induced by spin-orbit coupling and/or spontaneous symmetry breaking 25 . In addition, this lattice model also has the potential to be realized in optical lattice systems using ultra-cold atomic gases, in which the tuning of the parameters is much easier compared with condensed matter systems. On the basis of these observations, we conclude that there is no fundamental challenge preventing the experimental realization of these novel fractional topological states, but further investigation is still needed to discover the best experimental candidates.

Methods
Calculation of Chern number for many-body state. The Chern number of a many-body state can be obtained as

C = (1/2π) ∮ A_j dθ_j, with the Berry connection A_j = i⟨Ψ(θ_1, θ_2)| ∂/∂θ_j |Ψ(θ_1, θ_2)⟩,    (2)

where the closed path integral is along the boundary of the unit cell 0 ≤ θ_j ≤ 2π, with j = x and y, respectively. The Chern number C is also the Berry phase (in units of 2π) accumulated for such a state when the boundary phase evolves along the closed path. Equation (2) can also be reformulated as an area integral over the unit cell, C = (1/2π) ∫ dθ_1 dθ_2 F, where F(θ_1, θ_2) is the Berry curvature. To determine the Chern number accurately [20][21][22] , we divide the boundary-phase unit cell into about N_tot = 200 to 900 meshes. The curvature F(θ_1, θ_2) is then given by the Berry phase of each mesh divided by the area of the mesh. The Chern number is obtained by summing up the Berry phases of all the meshes.

Chern number quantization in FQHE. We find that the curvature is in general a very smooth function of θ_j inside the FQHE regime. For example, the ground-state total Berry phase sums up to 0.325 × 2π, slightly away from the 1/3 quantization, for a system with N_s = 30, U = 1 and V = 0 at 1/3 filling. Physically, as we start from one state with momentum (k_x, k_y) in the GSM, it evolves to another state with a different momentum k_x → k_x + N_e (k_y → k_y + N_e) when the boundary phase along the x (y) direction is increased from 0 to 2π. Thus, with the insertion of a flux, states evolve into each other within the GSM. We observe that only the total Berry phase of the GSM is precisely quantized to 2π, and the total Chern number C = 1 for all different choices of parameters inside either the 1/3 or 1/5 FQHE regime of Figure 1. When states in the GSM have the same momentum, the Chern number of each state is an integer due to the non-crossing condition of energy levels with the same quantum number, while the total Chern number remains the same 21,22 .

Fluctuating Chern number in other phases. As we move across the phase boundary from the FQHE state into the Fermi-liquid phase, there is no well-defined nearly degenerate GSM or spectrum gap, and the Berry curvature in general shows fluctuations an order of magnitude bigger. The obtained total Chern integer varies with system parameters (for example, U and V). To illustrate this feature, we start from the lowest-energy eigenstate and continuously increase the boundary phases for three periods, which allows the first state to evolve into other states and eventually return to itself. In Figure 3, we plot the accumulated total Berry phase as a function of the ratio of the total meshes included, N_mesh, over the total number of meshes N_tot in each period.
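A minimal Python sketch (not the authors' code) of the mesh-based Berry-phase summation described above, for a single non-degenerate state. Here states[a][b] is assumed to hold the normalized many-body eigenvector obtained at boundary phases (θ_1, θ_2) = (2πa/N, 2πb/N); the handling of a (nearly) degenerate GSM, where the Berry phases of the whole manifold are summed, is omitted.

import numpy as np

def chern_number(states):
    # Sum of plaquette Berry phases over the boundary-phase unit cell, in units of 2*pi.
    N = len(states)              # N x N mesh, periodic in both boundary phases
    total = 0.0
    for a in range(N):
        for b in range(N):
            u1 = states[a][b]
            u2 = states[(a + 1) % N][b]
            u3 = states[(a + 1) % N][(b + 1) % N]
            u4 = states[a][(b + 1) % N]
            # Berry phase of one mesh plaquette from the four overlap "link" phases
            prod = (np.vdot(u1, u2) * np.vdot(u2, u3) *
                    np.vdot(u3, u4) * np.vdot(u4, u1))
            total += np.angle(prod)
    return total / (2.0 * np.pi)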
For the system in the 1/3 FQHE phase with N_s = 30, U = 1 and V = 0, the total Berry phase follows a straight line in all three periods, well fitted by (2π/3)(N_mesh/N_tot), indicating a nearly perfect linear dependence of the Berry phase on the area in phase space, with a deviation of around 10%. In the Fermi-liquid phase with N_s = 30, U = 1 and V = 2, we see step-like jumps of the total Berry phase, with a magnitude on the order of 2π, in sharp contrast to the linear law in the FQHE phase. The total Chern number for the Fermi-liquid state sums up to three, indicating the decorrelation between the three states. Different integer values of the Chern number are found in this region (including negative ones) with changing system parameters, demonstrating a measurable fluctuating Hall conductance if the particles are charged. For the 1/5 FQHE state, by following the Berry phase over five periods for the ground state as shown in Figure 3b, we observe the same linear law with a slope of 2π/5, and the total Chern number is quantized to one. Interestingly, a negative integer Chern number for the Fermi-liquid state is observed for the parameters U = 1 and V = 0.1, confirming the non-universal nature of the topological number for such a gapless system. We conjecture that the Fermi-liquid phase may be unstable towards Anderson localization, especially at lower filling factors, similarly to the conventional FQHE systems 21 .
2011-02-14T01:26:00.000Z
2011-02-14T00:00:00.000
{ "year": 2011, "sha1": "6a0b876503e52985914e9fe96e35666f311c008a", "oa_license": "CCBYNCSA", "oa_url": "https://www.nature.com/articles/ncomms1380.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "53afb5a8c19ed681f808105e382d9aba25ef66ef", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
269630738
pes2o/s2orc
v3-fos-license
Identification of Predictors of Metastatic Potential in Paragangliomas to Develop a Prognostic Score (PSPGL) Abstract Context Paragangliomas (PGLs) are rare tumors in adrenal and extra-adrenal locations. Metastasis are found in approximately 5% to 35% of PGLs, and there are no reliable predictors of metastatic disease. Objective This work aimed to develop a prognostic score of metastatic potential in PGLs. Methods A retrospective analysis was conducted of clinical data from a cohort with PGLs and tumor histological assessment. Patients were divided into metastatic PGL (presence of metastasis) and nonmetastatic PGL (absence of metastasis ≥96 months of follow-up) groups. Univariate and multivariable analysis were performed to identify predictors of metastatic potential. A prognostic score was developed based on coefficients of multivariable analysis. Kaplan-Meier curves were generated to estimate disease-specific survival (DSS). Results Out of 263 patients, 35 patients had metastatic PGL and 110 patients had nonmetastatic PGL. In multivariable analysis, 4 features were independently related to metastatic disease and composed the Prognostic Score of Paragangliomas (PSPGL): presence of central or confluent necrosis (33 points), more than 3 mitosis/10 high-power field (HPF) (28 points), extension into adipose tissue (20 points), and extra-adrenal location (19 points). A PSPGL of 24 or greater showed similar sensitivity with higher specificity than the Pheochromocytoma of the Adrenal Gland Scaled Score (PASS) and Grading System for Adrenal Pheochromocytoma and Paraganglioma (GAPP). PSPGL less than or equal to 20 was associated with a risk of metastasis of approximately 10%, whereas a PSPGL of 40 or greater was associated with approximately 80%. The presence of metastasis and Ki-67 of 3% or greater were related to lower DSS. Conclusion The PSPGL, composed of 4 easy-to-assess parameters, demonstrated good performance in predicting metastatic potential and good ability in estimating metastasis risk. As none of these criteria have, alone, sufficient sensitivity and specificity for predicting metastatic potential, scores have been developed.These scores are composed of histological and nonhistological parameters that when used together allow the risk of metastasis in PGLs to be estimated.The Pheochromocytoma of the Adrenal Gland Scaled Score (PASS) proposed by Thompson in 2002 [32] and the Grading System for Adrenal Phaeochromocytoma and Paraganglioma (GAPP) proposed by Kimura and colleagues in 2005 [33] are the best-known scoring systems.PASS, composed of 12 histological parameters, has been validated in several studies [17,[34][35][36][37][38], but problems related to its reproducibility and conflicting data about its specificity in identifying metastatic potential in PGL do not allow for this score to be used as the only tool to predict future behavior of these tumors [39][40][41].GAPP combined some of histological parameters included in PASS with immunohistochemical (IHC) and biochemical tumor characteristics and was later expanded with the participation of several centers in Japan [42].Recently a study demonstrated advantages of GAPP over PASS regarding prediction of metastatic behavior and reproducibility [40].Other prognostic scores include the modified GAPP score (M-GAPP) [36], the Composite Pheochromocytoma/Paraganglioma Prognostic Score (COPPS) [37], and the Age, Size, Extra-adrenal location, Secretory type (ASES) score [43]. 
Cocaine-and amphetamine-regulated transcript (CART) is a highly expressed peptide in the rat brain in response to psychostimulants [64] and is poorly studied for predicting the metastatic potential of PGLs.Studies that assessed plasma concentrations of CART in patients with PGLs have shown that elevated levels of this peptide correlate positively with disease progression [65,66].No studies have assessed the performance of CART as an IHC marker in predicting metastatic potential in PGLs. The objective of this investigation was to identify predictors of metastatic potential in PGLs and select the best predictors to compose the Prognostic Score of Paragangliomas (PSPGL).In addition, we investigated factors related to worse prognosis in patients with metastatic disease. Ethical Considerations The research protocol was approved by the local ethics in research commission (Comissão de Ética para Análise de Projetos de Pesquisa do HCFMUSP-CAPPESQ, consubstantiated opinion No. 4.920.314). Population Participants included patients diagnosed with PGL and followed at a single center (Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo-HC-FMUSP, São Paulo, Brazil), from 1967 to 2019.Clinical, laboratorial, and genetic data from patients were obtained from medical records and were retrospectively analyzed; the histological and IHC data of tumors were newly reviewed.Patients admitted until 2019 for whom we had access to progression data up to July 2023, were included in the study. Clinical Data Data were retrospectively collected from medical records and included age, sex, clinical presentation at initial diagnosis (presence and duration of signs and symptoms or incidentaloma or genetic screening), follow-up time between diagnosis and last assessment or death, absence or presence of metastasis including time and site of appearance, and data related to genetic, biochemical, and topographic diagnosis.The genetic diagnosis was clinical (family history of PGL or of other tumors related to syndromic genetic diseases such as multiple endocrine neoplasia type 2 and von Hippel-Lindau disease [VHL]), and/or molecular as of when this technique became available.Molecular genetic investigations, conducted in DNA extracted from peripheral blood leukocytes, were performed, initially, using Sanger method (VHL, succinate dehydrogenase complex subunits [SDHB, SDHC, SDHD], myc-associated factor X [MAX], transmembrane protein 127 [TMEM127]).In patients without a genetic diagnosis defined by this method, multiplex ligation-dependent probe amplification (MLPA-SDHx and VHL) was performed.Patients who remained without genetic diagnosis after using both methods were investigated using a target next-generation sequencing panel on Illumina NextSeq 500 platform sequencers (Illumina Inc) that includes the following genes: fumarate hydratase (FH), MAX, neurofibromatosis 1 (NF1), rearranged during transfection (RET), succinate dehydrogenase complex subunit (SDHA, SDHB, SDHC, SDHD), TMEM127, VHL, Egl-9 family hypoxia-inducible factor 1 (ENGL-1), endothelial PAS domain protein 1 (EPAS1), kinesin family member 1B (KIF1B), proto-oncogene, receptor tyrosine kinase (MET), succinate dehydrogenase complex assembly factor 2 (SDHAF2), ATRX chromatin remodeler (ATRX), B-Raf proto-oncogene-serine/ threonine kinase (BRAF), fibroblast growth factor receptor 1 (FGFR1), HRas proto-oncogene-GTPase (HRAS), lysine methyltransferase 2D (KMT2D), and cellular tumor antigen P53 (P53) [55,[67][68][69][70][71][72].Biochemical 
diagnosis and tumor functionality were assessed by determining catecholamines and/or their metabolites in 24-hour urine (U) or in plasma (P): vanilmandelic acid (VMAU), total metanephrines (tMnU), fractionated catecholamines (adrenaline [AU and AP], noradrenaline [NAU and NAP], and dopamine [DopaU and DopaP]), and free and fractionated metanephrines (metanephrines [MnU and MnP] and normetanephrines [NMnU and NMnP]).Tumors were classified as functional or nonfunctional based on these determinations and when biochemical evaluation data were unavailable tumors were classified by the presence or absence of typical PGL clinical presentation.Functional tumors were classified as adrenergic (increased concentrations of adrenaline or its metabolites regardless of noradrenaline and/or its metabolites or dopamine concentrations) and noradrenergic (increased concentrations of noradrenaline or its metabolites with adrenaline and/or its metabolites within the normal reference range).Data regarding topographic diagnosis of the tumor were collected from the following imaging exams: abdominal ultrasound, computed tomography (CT), magnetic resonance imaging, and 123/131-metaiodobenzylguanidine ( 123/131 MIBG).For investigation of metastases, in addition to these methods, 111 In-pentetreotida scintigraphy scan (OctreoScan), 18-fluordeoxyglucose positron emission tomography scan ( 18 F-FDG PET/CT), and 68-gallium DOTATATE PET scan ( 68 Ga-DOTATE PET/CT) were performed in some patients.Tumor size was obtained from macroscopic analysis of the tumor after surgery or by analyzing presurgical imaging exams. Histology Slides of each tumor, stained with hematoxylin and eosin (HE), were provided by the Department of Pathological Anatomy of HC-FMUSP and were reviewed by only one pathologist (pathologist 1) with experience in adrenal pathology and who was blinded to clinical data.In the cases of patients with multiple tumors, one tumor per patient was considered, opting for the tumor with the larger size and/or higher PASS score. Immunohistochemistry IHC evaluation was performed by 2 pathologists (1 and 2).All the IHC studies carried out used paraffine sections (4 µm for IHC-Ki-67 and 3 µm for synaptophysin, chromogranin A [CHGA], chromogranin B [CHGB], and CART) for slide preparation.The slides were deparaffinized and rehydrated before IHC reactions were carried out.In the IHC-Ki-67 study, the slides were immersed in citrate buffer solution pH 6.0 at 95 ° C and steamed-treated for 40 minutes for antigen retrieval.After peroxidase blocking, the slides were incubated with the primary antibody (mouse monoclonal antibody MIB1, 1:100 dilution, DAKO, RRID: AB_2142367) for 18 to 24 hours at 4 °C.Signal amplification was performed using the Novolink Polymer Detection System (Vision Biosystems), followed by diaminobenzidine tetrahydrochloride and dimethyl sulfoxide (DAB) reaction (Sigma).The slides were stained with hematoxylin and covered with Entellan (Merck).Nontumoral lymph node slides were used as an external control for the reaction, and intratumoral lymphocytes were used as an internal control.The slides were scanned using a Pannoramic 250 Flash III scanner with Pannoramic Viewer 1:15 software (3DHISTECH).Assessment of the Ki-67 index was performed by automatic counting of tumor hot spots using QuPath software [73].The areas were selected by pathologist 1, who assessed at least 500 cells for each case.Results were described in percentage. 
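The functional/secretory classification rules given earlier in this section translate directly into a small decision helper. The sketch below is illustrative only: the data structure, the idea of expressing laboratory values as multiples of the assay's upper reference limit, and all names are assumptions made for the example, not part of the study protocol.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CatecholamineProfile:
    """Laboratory results expressed as multiples of the assay's upper
    reference limit (>1.0 means elevated); None means not measured."""
    adrenaline: Optional[float] = None      # adrenaline or metanephrines
    noradrenaline: Optional[float] = None   # noradrenaline or normetanephrines
    dopamine: Optional[float] = None

def classify_secretory_type(profile: CatecholamineProfile,
                            typical_clinical_presentation: bool = False) -> str:
    """Mirror of the rules in the text: adrenergic when adrenaline (or its
    metabolites) is elevated regardless of the other amines; noradrenergic
    when noradrenaline is elevated while adrenaline stays in range; fall back
    on the clinical picture when no biochemical data are available."""
    values = [profile.adrenaline, profile.noradrenaline, profile.dopamine]
    if all(v is None for v in values):
        return ("functional (clinical)" if typical_clinical_presentation
                else "nonfunctional (clinical)")
    if profile.adrenaline is not None and profile.adrenaline > 1.0:
        return "functional, adrenergic"
    if profile.noradrenaline is not None and profile.noradrenaline > 1.0:
        return "functional, noradrenergic"
    return "nonfunctional"

# Example: elevated normetanephrines with normal metanephrines -> noradrenergic.
print(classify_secretory_type(CatecholamineProfile(adrenaline=0.6, noradrenaline=3.2)))
```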
In the IHC staining for CHGB, CART, synaptophysin, and CHGA, the tissue microarrays technique was used (Manual Tissue Microarrayer 1-Beecher Instruments).Whenever possible, 3 tumor areas of interest were selected and marked by pathologist 1, both in HE-stained slides and in the respective donor tissue paraffin blocks, and a spreadsheet containing the corresponding block numbers was elaborated for mapping purposes.The donor block was perforated in the exact region marked by the pathologist and the material was transferred to the recipient block.After executing all the spots, the recipient block was placed in the oven at 60 °C for paraffin softening and spot leveling, treated with layers paraffin to preserve immunoreactivity, and was cut for slide preparation.To carry out antigen retrieval, the slides were immersed in a Tris-EDTA solution, pH 9.0 (K800421-2, Agilent) and steamed-treated at 100 ° C for 35 minutes.After peroxidase blocking, the slides were incubated with the primary antibody (CHGB-mouse monoclonal antibody MAB8868, 1:2000 dilution, R&D Systems, RRID: AB_3096181; CART-rabbit monoclonal antibody NBP1-91749, 1:400 dilution, Cell Signaling Technology; RRID: AB_2798480) for 30 minutes at 37 °C and then for 18 to 24 hours at 4 °C.Signal amplification was performed using EnVision FLEX+ (Agilent), followed by DAB reaction (Sigma).The slides were stained with hematoxylin and covered with Entellan (Merck).Adrenal tissue slides were used as controls for CHGB and CART.The results were assessed by pathologist 2 as a percentage of positive cells (0%-100%) and in intensity (weak [1], moderate [2], and strong [3]).Using that data the IHC positivity index (PI) was calculated (percentage × intensity).The final PI value was calculated as the average of the results obtained for the available spots in each case and categorized as negative (<30), very weak , weak , moderate (151-199), and strong (≥200).Automated IHC reactions (Benchmark ULTRA Ventana) for synaptophysin (IHC-synapthophysin) and CHGA (IHC-CHGA) were used to confirm the neuroendocrine origin of the tumor.For antigen retrieval, the slides were immersed in ULTRA Conditioning (Ultra CC1, pH: 8.4, Ventana Medical Systems) for 76 minutes for synaptophysin and for 92 minutes for CHGA, and steamtreated at 95 °C.After peroxidase blocking, the slides were incubated with the primary antibodies (synaptophysin-rabbit monoclonal antibody MRQ-40, ready to use, Cell Marque, RRID:AB_3096182; and CHGA-mouse monoclonal antibody LK2H10, ready to use, Ventana, RRID:AB_2335955) for 1 hour and 36 minutes at 37 °C.Signal amplification was performed using the ultraView Universal DAB Detection Kit (Ventana Medical Systems).The slides were stained with hematoxylin and covered with Entellan (Merck).Central nervous system tissue was used as a control for synaptophysin, and gastric tissue was used as a control for CHGA.Results were assessed as positive or negative by pathologist 2. 
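The positivity-index arithmetic described above (PI = percentage × intensity per spot, averaged over the available spots, then categorized) is easy to reproduce. The thresholds below are those quoted in the text; the extracted text omits the numeric bounds separating the 'very weak' and 'weak' bands, so the sketch leaves them undefined rather than guessing.

```python
def positivity_index(spots):
    """spots: list of (percent_positive_cells, intensity) pairs, with percent
    in 0-100 and intensity in {1: weak, 2: moderate, 3: strong}.
    Returns the mean PI over the available spots (possible range 0-300)."""
    if not spots:
        raise ValueError("at least one evaluable spot is required")
    return sum(pct * intensity for pct, intensity in spots) / len(spots)

def categorize_pi(pi):
    """Thresholds as given in the text; the cut-offs between 'very weak' and
    'weak' are not stated in the extracted text and are left undefined."""
    if pi < 30:
        return "negative"
    if 151 <= pi <= 199:
        return "moderate"
    if pi >= 200:
        return "strong"
    return "very weak / weak (exact cut-offs not given in the text)"

# Example: three spots with 80%, 60% and 70% positive cells at moderate intensity.
pi = positivity_index([(80, 2), (60, 2), (70, 2)])
print(pi, categorize_pi(pi))   # 140.0 -> falls in the very weak / weak band
```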
Definition of Metastatic and Nonmetastatic Disease Patients were identified as having metastatic paraganglioma (MPGL) if they presented metastasis at the diagnosis of the primary tumor or during postoperative follow-up (until July 2023).Metastatic disease was suspected by recurrence of hypertension and/or other signs and symptoms of adrenergic hyperactivation and/or elevation of catecholamines or their metabolites above normal limits.It was always confirmed by imaging (CT scan and magnetic resonance imaging) and/or by nuclear medicine techniques (most frequently bone scintigraphy and 123/131 MIBG but also OctreoScan, PET/CT-FDG and PET Gallium-68 DOTATATE PET/CT).Patients who did not show evidence of metastatic disease during the minimum follow-up period, defined based on the assessment of the maximum time interval between initial diagnosis and the detection of metastasis in patients with MPGL, were classified as having nonmetastatic PGL (NMPGL).Patients who presented with local tumor recurrence caused by tumor cell implantation during surgery (pheochromocytomatosis) [74][75][76][77] and those with inferior vena cava thrombosis without concomitant metastasis were excluded.To investigate prognostic predictors in patients with MPGL, this group was divided into 2 subgroups: aggressive MPGL (aMPGL), patients who died earlier in the period following surgery, and indolent MPGL (iMPGL), patients who survived for a prolonged period, with or without disease, at the last assessment. Statistical Analysis The results were expressed as absolute values and frequency percentages for categorical variables and as mean ± SD, median, and minimum and maximum values for numerical variables.Univariate analysis was performed to test the association between each variable and metastasis as an outcome.Categorical variables were analyzed using the chi-square test with Fisher exact test or likelihood ratio tests when necessary.Data normality in the studied population was assessed using the Shapiro-Wilk test.The t test was used for numerical variables with normal distribution, and the Mann-Whitney test was used for numerical variables that did not follow a normal distribution.It was possible to calculate the odds ratio (OR) for variables present in both groups with a minimum number of 1. 
Subsequently, multivariable analysis was conducted to identify which variables were independently associated with the outcome of metastasis. Considering the total number of positive outcomes studied (metastatic tumors), we calculated that to establish good reliability for the model we should select 1 variable for every 5 outcomes for this analysis. Cases with one or more missing data in the selected variables were omitted. The choice of variables included in the multivariable analysis followed these criteria: variables with a P value of less than .05 in the univariate analysis (MPGL vs NMPGL); and histological variables that, according to the pathologists, did not represent the same histological phenomenon. If the same histological phenomenon was seen, the higher OR variable and/or the easier to identify or more reproducible variable was chosen. The multivariable analysis was performed by adjusted multiple logistic regression using the stepwise backward selection method. The β coefficient (β coef) generated in the multivariable analysis model was used to weight each variable by multiplying its value by 10 and rounding it up to the next whole number [78,79]. With the numbers obtained we developed a prognostic score for PGL. The estimated probability of metastasis was calculated with the same coefficients using the logistic equation P = e^(β0 + β1 + β2 + ... + βn) / (1 + e^(β0 + β1 + β2 + ... + βn)), where P = outcome probability, e = Euler number, β0 = constant of the regression equation, and β1, β2, ..., βn = regression coefficients of each variable; with the data obtained, a curve to estimate the risk of metastasis was built. Receiver operating characteristic (ROC) curves were generated to assess the performance of the predictors in differentiating cases with and without metastasis by calculating the area under the curve (AUC). Sensitivity and specificity were obtained from inflection points of the curve, and positive predictive values (PPVs) and negative predictive values (NPVs) were calculated. The 95% CIs were established for each of these parameters. The cutoff point with the best sensitivity and specificity together was calculated using the Youden method [80]. In the analysis of disease-specific survival (DSS), only deaths related to disease (PGL) were considered. Kaplan-Meier curves were generated for both groups (MPGL and NMPGL). The same characteristics used for determining metastatic potential, in addition to time of onset and site of metastasis, were analyzed in the 2 subgroups of MPGL (aMPGL and iMPGL) to assess prognostic predictors in MPGL. Kaplan-Meier curves were generated for patients with MPGL considering characteristics associated with worse prognosis. Survival curves were compared using the log-rank test. In all the analyses carried out, P values less than .05 were considered statistically significant. Data analysis was performed using IBM SPSS Statistics for Windows from IBM Corp. version 27.0, released in 2020 by IBM Corp.

Results

During the period of analysis, 263 patients were identified with a diagnosis of PGL. Out of the 263 patients identified, 13.3% (35/263) were diagnosed with MPGL. The diagnosis of metastasis was synchronous with primary tumor diagnosis in 57.1% (20/35) and metachronous in 42.9% (15/35) of the cases. In patients with metachronous metastases, the disease-free interval ranged from 12 to 84 months after primary tumor surgery, with a mean of 44 months. The main sites of metastasis were bones (68.6%), lymph nodes (57.1%), lungs (31.4%), and liver (28.6%) (Table 1).
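The score-building recipe just described (scale the β coefficients into integer weights and reuse the same coefficients in the logistic equation to estimate the probability of metastasis) can be sketched in a few lines. The coefficient values below are placeholders, not the coefficients estimated in this study; only the procedure follows the text.

```python
import math

def score_weights(beta_coefs):
    """Integer score weights: multiply each coefficient by 10 and round up to
    the next whole number, as described in the text (a small rounding guard
    avoids spurious carries from floating-point noise)."""
    return {name: math.ceil(round(beta * 10, 9)) for name, beta in beta_coefs.items()}

def estimated_probability(beta0, beta_coefs, present):
    """Logistic probability P = e^z / (1 + e^z), with z = beta0 plus the
    coefficients of the variables present in a given tumor."""
    z = beta0 + sum(b for name, b in beta_coefs.items() if name in present)
    return math.exp(z) / (1.0 + math.exp(z))

# Placeholder coefficients and intercept (illustrative only, not the study's values).
betas = {
    "central_or_confluent_necrosis": 3.3,
    "more_than_3_mitoses_per_10HPF": 2.8,
    "extension_into_adipose_tissue": 2.0,
    "extra_adrenal_location": 1.9,
}
beta0 = -4.0

print(score_weights(betas))  # integer weights derived from the placeholder betas
# Probability for a tumor with necrosis and extra-adrenal location only:
print(round(estimated_probability(
    beta0, betas, {"central_or_confluent_necrosis", "extra_adrenal_location"}), 2))
```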
The maximum period in which metastatic disease was observed in patients with MPGL was 84 months after removal of the primary tumor. Therefore, patients who were free of metastatic disease with follow-up of 96 months or more after primary tumor surgery were considered as having NMPGL. Out of the initially studied 263 patients, 111 patients were excluded because of insufficient follow-up, even though they did not present with metastasis. Additionally, 7 other patients were excluded: 2 patients due to an isolated inferior vena cava thrombosis at the time of primary tumor surgery, 3 patients who developed pheochromocytomatosis during follow-up after surgery, and 2 patients who had inconclusive findings in laboratory tests. Therefore, with a final selection of 145 patients, data from the MPGL group (35 patients) and the NMPGL group (110 patients) were analyzed and compared (Fig. 1).

Clinical Data

The comparison of patient clinical, genetic, and progression characteristics is shown in Table 2. There was no difference in terms of age and sex of the patients. Clinical diagnosis predominated in both groups and was more frequent in patients with metastatic disease (P = .036; OR = 3.2), whereas diagnosis by genetic screening only occurred in NMPGL (P = .012). Genetic investigation was performed in 106 of 145 (73%) of the patients, 21 from the MPGL group and 85 from the NMPGL group, and was more frequently positive in the NMPGL group (P = .011). In only 6 patients the genetic diagnosis was based on clinical presentation of MEN 2 (PGLAd + medullary thyroid carcinoma + primary hyperparathyroidism). The presence of PV in the RET and VHL genes occurred only in patients with NMPGL, with P = .001 for RET; PV in the SDHB and NF1 genes were present in both groups, with P = .014 and OR = 6 for SDHB; PV in the TMEM127, SDHD, and FH genes were present only in patients in the NMPGL group, whereas PV in the SDHA gene occurred only in MPGL. Follow-up was shorter in MPGL vs NMPGL (median 144 months vs 168 months; P = .033) (see Table 2).

Data regarding tumor size and location are shown in Table 4. Extra-adrenal location was more frequent in MPGL (P < .001; OR = 6.75). The PGLexAd were most often located in the abdomen (80.8%) and, less frequently, in the pelvis (11.5%) and head and neck (7.7%). The tumors were larger in MPGL vs NMPGL (median 7.9 cm vs 4.5 cm; P < .001; OR = 1.34) (see Table 4). The area under the ROC curve (AUC-ROC) for tumor size was 0.760, and the tumor size with the best sensitivity and specificity to differentiate metastatic tumors was 8.1 cm [81]. When we consider this value, we observe an increased association with metastatic disease (OR = 7.5) (see Table 4).

Histology

The total number of points in the PASS and GAPP scores and the frequency of histological parameters comprising them, assessed in MPGL and NMPGL, are shown in Table 5. The total points in PASS were greater in MPGL compared with NMPGL (median 9.5 points vs 2 points; P < .001; OR = 1.7). All the MPGL patients had a score of 4 or more points. Most histological variables were more prevalent in MPGL patients, except the presence of spindle cells and nuclear hyperchromasia (see Table 5). The AUC-ROC for the PASS score (Fig. 2) was 0.914, and the cutoff of 4 or greater showed 100% sensitivity, 65.6% specificity (CI, 62.3%-77.3%), 48.8% PPV (CI, 32.9%-64.9%), and 100% NPV (CI, 91.2%-100%) for detecting potentially metastatic disease.
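Cutoff performance figures such as those quoted for tumor size ≥ 8.1 cm or PASS ≥ 4 come from a simple 2 × 2 tabulation of test result against outcome. The sketch below uses made-up scores, not the study's patient-level data, and only shows how sensitivity, specificity, PPV, and NPV are derived once a cutoff is applied.

```python
def cutoff_performance(scores, has_metastasis, cutoff):
    """Classify each case as test-positive when its score >= cutoff, tabulate
    against the true outcome, and derive the usual diagnostic metrics."""
    tp = fp = tn = fn = 0
    for score, metastatic in zip(scores, has_metastasis):
        positive = score >= cutoff
        if positive and metastatic:
            tp += 1
        elif positive and not metastatic:
            fp += 1
        elif not positive and metastatic:
            fn += 1
        else:
            tn += 1
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
        "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        "ppv": tp / (tp + fp) if (tp + fp) else float("nan"),
        "npv": tn / (tn + fn) if (tn + fn) else float("nan"),
    }

# Made-up PASS scores for 6 metastatic and 8 nonmetastatic tumors.
pass_scores = [9, 12, 6, 4, 10, 7, 2, 1, 3, 5, 0, 2, 4, 1]
metastatic = [True] * 6 + [False] * 8
print(cutoff_performance(pass_scores, metastatic, cutoff=4))
```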
Immunohistochemistry All the tumors were positive for IHC-CHGA and IHCsynaptophysin, except for one PGLM that was negative for CHGA but positive for synaptophysin, confirming the neuroendocrine origin of the tumors studied. Prognostic Score of Paragangliomas The PSPGL was developed based on the results of the multivariable analysis.For this analysis, we chose 7 variables (1 variable for every 5 outcomes).Initially, we selected those variables that had P less than .05 in the univariate analysis, thus, we would have 10 histological variables (diffuse growth and/ or large nests, central or confluent necrosis, high cellularity, cellular monotony, > 3 mitoses/10 HPF, atypical mitotic figures, extension into adipose tissue, vascular invasion, capsular invasion, and profound nuclear pleomorphism) and 6 nonhistological variables (adrenergic symptoms, extra-adrenal tumor location, PV in the SDHB gene, concentrations of 24-hour urinary vanilmandelic acid and 24-hour urinary noradrenaline, and tumor size ≥8.1 cm).Among the 10 initially selected histological variables, some represented the same histological phenomenon: 1-diffuse growth and/or large nests and central or confluent necrosis (a central or confluent necrosis occurs in the center of a large nest or extends diffusely through several large nests), we opted for the variable necrosis, as it is more reproducible; 2-cellular monotony and profound nuclear pleomorphism (cells exhibiting a monotonous pattern generally have deep nuclear pleomorphism, with a high nucleus-cytoplasm index) [32], we chose the variable cellular monotony because of its higher interobserver agreement [37]; 3-more than 3 mitoses/10 HPF and atypical mitotic figures (atypical mitotic figures are more common with a higher mitotic index), we opted for more than 3 mitoses/10 HPF due to its higher interobserver agreement [37].4-vascular invasion, capsular invasion, and extension into adipose tissue (all represent local tumor invasiveness), we chose the variable extension into adipose tissue because this variable received greater weight in the PASS score and had a higher OR in the univariate analysis.High cellularity appears to represent an isolated phenomenon and was also selected (total histological variables = 5).Ki- 7).Each variable received a value equal to its β coef multiplied by 10, rounded up to the nearest whole number [78,79].The sum of these values was 187, and the percentage participation of each variable in decreasing order was 33% for necrosis, 28% for more than 3 mitoses/HPF, 20% for extension into adipose tissue, and 19% for extra-adrenal location.The weight assigned to each variable was equal in absolute value to the percentage, and with these values we developed the PSPGL, which ranges from 0 to 100 points (Table 8).Fig. 3 illustrates these parameters observed in the patients studied.The AUC-ROC for the PSPGL (Fig. 4) was 0.970, and a cutoff of 24 showed 89.5% sensitivity (CI, 66.9%-98.7%),91.5% specificity (CI, 81.3%-97.2%),77.3% PPV (CI, 54.6%-92.2%),and 96.4% NPV (CI, 87.7%-99.6%) in identifying metastatic potential.Table 9 shows these parameters (sensitivity, specificity, PPV, and NPV) of PASS, GAPP, and PSPGL and their respective 95% CIs.The comparison of CIs showed similar sensitivity, PPV, and NPV among the 3 scores and higher specificity for the PSPGL (see Table 9). 
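Because the PSPGL is just a weighted sum of four binary findings, it can be written as a tiny calculator. The weights and risk bands below restate the values reported in the text (33, 28, 20, and 19 points; operating cutoff ≥ 24; ≤ 20 low, 21-39 intermediate, ≥ 40 high risk); the function and variable names are ours, and the snippet is for illustration, not clinical use.

```python
PSPGL_WEIGHTS = {
    "central_or_confluent_necrosis": 33,
    "more_than_3_mitoses_per_10HPF": 28,
    "extension_into_adipose_tissue": 20,
    "extra_adrenal_location": 19,
}

def pspgl_score(**features):
    """Sum the weights of the features that are present (True).
    Unknown keyword arguments are rejected to avoid silent typos."""
    unknown = set(features) - set(PSPGL_WEIGHTS)
    if unknown:
        raise ValueError(f"unknown feature(s): {unknown}")
    return sum(w for name, w in PSPGL_WEIGHTS.items() if features.get(name, False))

def pspgl_risk_band(score):
    """Risk bands as discussed in the text: <= 20 low (~10% or less),
    21-39 intermediate, >= 40 high (~80%-100%); >= 24 is the operating
    cutoff used to flag metastatic potential."""
    if score >= 40:
        return "high risk"
    if score > 20:
        return "intermediate risk"
    if score > 0:
        return "low risk"
    return "very low risk"

# Example: extra-adrenal tumor with necrosis but no other criteria.
s = pspgl_score(central_or_confluent_necrosis=True, extra_adrenal_location=True)
print(s, pspgl_risk_band(s), "flagged" if s >= 24 else "not flagged")
```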
We calculated the PSPGL for tumors with information on these 4 characteristics and compared the data obtained in the MPGL (19 tumors) vs NMPGL (59 tumors) groups. The total points in the PSPGL were higher in MPGL vs NMPGL (median 19 points vs 0 points [0-39]; P < .001; OR = 1.98). A score of 24 or more was achieved in 89.5% of MPGL vs 8.5% of NMPGL (P < .001; OR = 91.8) (Table 10). We generated a curve based on the coefficients of the logistic regression and correlated the PSPGL score with the probability of metastasis (Fig. 5). On observing the curve, we can conclude that the chance of metastatic disease is high (∼80%-100%) in patients with tumors with a PSPGL of 40 or more, intermediate for PSPGL greater than 20 and less than or equal to 39, low (∼10%) in patients with tumors with a score of 20 or less, and practically null in patients with a score of zero. On the same curve, we indicated the actual occurrence of metastases and found that the estimated probability is very close to or equal to the observed incidence of metastases (see Fig. 5).

Prognostic Factors in Paragangliomas

We performed the analysis of DSS using the Kaplan-Meier method. Follow-up for patients with NMPGL was a median of 168 months (94-504 months), and no deaths related to the diagnosis of the disease were observed, resulting in a DSS of 100%. In patients with MPGL, the median was 144 months (12-384 months) and there was great variability in the clinical course of the disease (Fig. 6A). In this group, 3 patients were lost to follow-up after surgery, and out of the remaining 32, 13 patients died. The deaths occurred within 72 months or less in 8 patients (median = 48 months [12-72 months]), and they were defined as aMPGL, while 4 patients died after more than 72 months (84-348 months). These patients, plus the 18 patients who remained alive (all alive for ≥96 months), with disease (14 patients) or free of metastatic disease (4 patients with regional lymph node metastases removed with the primary tumor), were defined as iMPGL (4 late deaths + 18 alive). Due to the sample size, we performed only a univariate analysis [81], which showed differences in 3 variables when comparing aMPGL vs iMPGL (Table 11). The 3 variables were the presence of atypical mitoses (50% vs 0%; P = .029) and higher IHC-Ki-67 indices (median 5% [2.5%-8.5%] vs 0.6% [0.1%-6.1%]; P = .010), which were more frequent in aMPGL vs iMPGL, while lower concentrations of 24-hour urinary noradrenaline were observed in aMPGL vs iMPGL (median 84 mcg/24 hours [9-2763] vs 698.5 mcg/24 hours [170-5187]; P = .040) (see Table 11). Kaplan-Meier curves were generated for these 3 variables, and there was a significant difference between the survival curves only regarding IHC-Ki-67 (<3% or ≥3%) (Fig. 6B).
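The disease-specific survival curves referred to above come from the Kaplan-Meier product-limit estimator. Below is a dependency-free sketch of that estimator with invented follow-up data (months and death-from-disease indicators); it shows only the arithmetic behind such curves and does not reproduce the study's survival analysis.

```python
def kaplan_meier(times, events):
    """Product-limit estimate of the survival function.

    times  : follow-up time for each patient (e.g. months)
    events : 1 if the patient died of disease at that time, 0 if censored
    Returns a list of (time, survival probability) steps at event times.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        n_with_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            survival *= (1.0 - deaths / at_risk)
            curve.append((t, survival))
        at_risk -= n_with_t
        i += n_with_t
    return curve

# Invented MPGL-like follow-up: months and death-from-disease indicator.
months = [12, 24, 36, 48, 48, 60, 84, 96, 120, 144]
died   = [ 1,  1,  0,  1,  1,  0,  1,  0,   0,   0]
for t, s in kaplan_meier(months, died):
    print(f"t = {t:>3} months  S(t) = {s:.2f}")
```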
Discussion

Of the total of initially selected patients, 13.3% (35/263) developed metastatic disease, at a frequency similar to that described in the literature [18, 82], and were classified as having MPGL. As metastases can occur a few months to several years after primary tumor surgery, it has not yet been established what disease-free interval could relatively safely classify a patient as having nonmetastatic disease [5,10]. We defined this interval as 96 months or more, and we believe that a patient who does not develop metastasis after this long follow-up period is, with great probability, a carrier of NMPGL. Thus, 41.8% (110/263) of the patients were classified as having NMPGL, and 44.9% (118/263) were excluded, mainly due to insufficient follow-up time. Therefore, we evaluated 35 MPGL and 110 NMPGL cases (see Fig. 1).

Many of the variables analyzed showed different expressions in the univariate analysis for MPGL vs NMPGL. Since we had a total of 35 positive outcomes, we selected 7 variables for the multivariable analysis. The variables were chosen according to their OR in the univariate analysis, and when more than one histological variable representing the same histological phenomenon was available, one variable was chosen based on its reproducibility and OR. Thus, the variables selected for the multivariable analysis were central or confluent necrosis, mitotic index more than 3 mitoses/10 HPF, high cellularity, tumor size of 8.1 cm or greater, PGLexAd, presence of PV in the SDHB gene, and extension to adipose tissue. Three histological variables (central or confluent necrosis, >3 mitoses/10 HPF, and extension to adipose tissue) and one nonhistological variable (extra-adrenal tumor location) remained independently related to the metastatic behavior of the tumor. The selected histological variables are present in the PASS score and received the maximum weight in it (weight = 2) [32]. They indicate rapid tumor growth (central or confluent necrosis), high cell proliferative index (>3 mitoses/10 HPF), and invasive tumor (extension to adipose tissue). The nonhistological variable (extra-adrenal location) has already been identified as a predictor of metastatic disease in several studies [12,22,23,49,52]. In the TNM staging system for tumor staging, extra-adrenal paragangliomas are classified as T3, regardless of size [5]. Although all PGLs have the same cellular origin, extra-adrenal paragangliomas, especially abdominal and pelvic tumors, more frequently present with more aggressive biological behavior, which may be related both to their genetic basis (eg, PV in the SDHB gene) and to other, not yet identified factors [12,18]. The presence of PV in the SDHB gene was selected for multivariable analysis but did not prove to be an independent risk for MPGL. However, its association with the metastatic behavior of the tumor has been widely demonstrated in the literature [18, 28-31, 55, 83, 84]. We believe that this unexpected result found in this study was due to the insufficient number
of patients with complete molecular investigation for genetic disease.Tumor size, a variable widely assessed to differentiate between MPGL and NMPGL, shows a relationship with metastatic potential; it was positive in some studies [12,16,37,43] and showed no importance in others [17,32,85,86].In the present study, even by adopting the cutoff value of 8.1 cm, this variable was not independently related to tumor behavior.Although not included in the multivariable analysis, 24-hour urinary noradrenaline was higher in the MPGL group.Catecholamine type produced by the tumor represents a nonhistological parameter that seems to relate to cellular differentiation; poor differentiated tumors may present with impairment of several enzymes involved in the synthesis of catecholamines leading to preferential synthesis of adrenaline precursors such as noradrenaline and dopamine [20,33,52].Dopamine urinary concentrations were higher in MPGL (P = .017),with OR = 4.396 but with a 95% CI of 0.482 to 40.104.This could possibly be associated with limitations inherent to dopamine detection methods [87] since its metabolite, methoxytyramine, has been pointed out as a marker of metastatic disease [22].Unfortunately, the assessment of this compound is not available in our service.Ki-67, which is widely used in assessing the metastatic potential of PGL [23,[44][45][46][47], was not included in the multivariable analysis, as it represents cellular proliferation already identified in histology as the mitotic index.The choice of more than 3 mitoses/10 HPF was based on its higher OR in the univariate analysis and on the fact that its analysis exempts the need for IHC.CHGB has been considered an inversely related factor to the metastatic potential of PGL [38,59].In our evaluation, this was not clearly demonstrated because, although it has high specificity for identifying NMPGL (85%), the AUC-ROC was small, which demonstrates the low efficiency of this variable in discriminating from NMPGL [81]. It is worth noting that the CART peptide, evaluated in IHC, was not useful in differentiating metastatic potential in PGL.We believed that this marker could be a possible predictor of malignant behavior in these tumors, as it was shown to be related to disease progression in PGLs [65,66].However, IHC-CART was weak in most PGLs and similar in MPGL vs NMPGL. The 4 variables selected were assigned points according to their relative importance in the outcome (metastasis) (see Table 8).Based on these values we generated the PSPGL score, which was calculated only for tumors with results available for the 4 variables (78 tumors: 19 MPGL and 59 NMPGL).A PSPGL score of 24 or greater discriminated MPGL from the NMPGL patients with a sensitivity of 89.5%, specificity of 91.5%, VPP of 77.3%, and NPV of 96.4% (see Fig. 4 and Table 9).We calculated in the tumors in the present study, the PASS, GAPP, and PSPGL and compared the CI of these indices.We demonstrated that the 3 scores had similar sensitivity and accuracy, and PSPGL had greater specificity (see Table 9).Table 12 shows sensitivity, specificity, PPV, and NPV of PASS [32] and GAPP [33] original studies and PSPGL. 
As previously discussed, the main issue with the classically used scores is the limitation regarding specificity and accuracy for predicting metastatic PGL (PPV) [40,41].PSPGL presented 91.5% specificity and 77.3% PPV, higher than those observed in the original studies of PASS and GAPP (see Table 12).We consider that the main advantage of the PSPGL is that it is derived from a smaller number of variables-only 4-which are generally available and easily reproducible.This will allow it to be more widely used because it is more accessible and will likely have less interobserver variability than classic scores. When we analyzed the logistic regression curve of the PSPGL, we verified that the estimated probability of metastasis and the actual incidence of this occurrence are very similar, which reinforces the high capacity of this score in predicting metastatic behavior (see Fig. 5).According to the score achieved by tumors in the PSPGL score, patients can be classified regarding their risk of developing metastatic disease as follows: 1, very low risk (PSPGL = 0 points: probability ∼0%) and low risk (PSPGL = 19-20 points [extra-adrenal PGLs without any of the 3 histological variables or adrenal PGLs only with extension to adipose tissue]: probability ∼10%); 2, moderate risk (20 < PSPGL ≤ 39: probability of 10%-80%); and 3, high risk (PSPGL ≥ 40 points: probability >80%).PSPGL identified, with greater certainty, patients with low (<10%) and high (80%-100%) probability of developing metastases but did not clearly identify this probability in patients with intermediate scores (12 patients).Of these, 5 had NMPGL, and the evaluation of Ki-67 showed values of 0.1% to 0.8% in 3 patients, 1.3% and 3.6% in 2 patients (carriers of PV in VHL, which was present only in NMPGL patients).In the other 7 patients with MPGL, the evaluation of Ki-67 showed values of 0.1% to 3.7% in 4 patients with iMPGL and 5% to 6.2% in 3 patients with aMPGL, 2 of whom were carriers of PV in SDHB.Therefore, we suggest that in patients with intermediate PSPGL (20 < PSPGL ≤ 39), we should consider other factors for risk prediction such as Ki-67 and the presence of PV in genes that are associated or not with metastatic disease.Our findings do not allow for definitive conclusions on the time, frequency, and quality of monitoring of clinical, laboratory, and imaging data of patients with nonmetastatic disease at the time of surgery, based only on the PSPGL.However, we recommend that patients with high-risk tumors (PSPGL ≥ 40) be monitored preferably every 6 months in the first 4 years following surgery (mean time to appearance of metastasis = 44 months).If patients remain disease free, tests can then be performed annually during the next 4 years.If they continue to be disease free, these patients can undergo clinical examination and laboratory tests every 2 years for an extended period.The PSPGL also allows us to recommend that patients with very low or low risk be followed-up with annual clinical examination and biochemical tests, and imaging exams every 2 years.These patients can be considered nonmetastatic after 8 years of follow-up, but they must remain under observation.It is not possible to make any other more precise recommendation for patients with an intermediate risk (20 < PSPGL ≤ 39) based only on the PSPGL assessment, and in these cases, we recommend using other markers of metastatic (eg, Ki-67 ≥ 3%, PV in SDHB) or nonmetastatic disease (eg, PVs in VHL, RET, TMEM127).According to these markers that are not part of the 
PSPGL, patients should be monitored as low or high risk. The identification of prognostic factors for MPGL is also a topic of great interest.As demonstrated in the survival curve of patients with MPGL, there are two types of tumor behavior, one more aggressive and responsible for short survival (aMPGL) and one more indolent that allows for long survival (iMPGL) (Fig. 6A).Studies related to the progression of metastatic tumors are difficult to conduct due to the rarity of PGLs and, mainly, of MPGL.Older age at diagnosis, male sex, synchronous metastases, and increased plasma concentrations of dopamine and methoxytyramine are factors that have been related to shorter survival in some studies [8,9,12,14,88,89].In the present study, the small number of tumors assessed (8 aMPGL vs 22 iMPGL) allowed the comparison among the several variables only by using univariate analysis.This analysis showed that 3 variables presented a positive correlation with poor prognosis: presence of atypical mitosis, Ki-67 of 3% or greater, and smaller concentrations of 24-hour urinary noradrenaline.Kaplan-Meier curves were generated for patients with MPGL taking into account these variables, and only Ki-67of 3% or greater was associated with shorter DSS.This result is consistent with the results of a multicenter European study that included 169 patients with metastatic disease that found IHC-Ki-67 of 2% or less was associated with better survival [8].We found that survival at 8 years was approximately 90% and 38% in patients with tumors with IHC-Ki67 less than 3% and 3% or greater, respectively (Fig. 6B).Synchrony and shorter time elapsed between surgery and detection of the metastasis have been studied as worse prognostic factors in MPGL [8,14,88,89].In this study, it was not possible to establish a relationship between these variables and prognosis, and this may be attributed to the small number of patients.The presence of PV in the SDHB gene has an already established relationship with metastatic potential but does not seem to be related to shorter survival [8,14]. The main limitations of this study include its sample size, which, although numerically important, if we consider a single study center, was still small, especially the absolute number of patients with metastatic disease; difficulties in data collection inherent to retrospective studies; impossibility of obtaining data related to genetic diagnosis due to the unavailability of molecular assessments prior to 2014, and the fact that the current assessment, although does not reach all genes involved in the pathogenesis of PGLs; tumor functionality-type assessments were impaired prior to 2012 because free and fractionated metanephrine assessments were not available; and finally, our score has not yet been internally or externally validated. In summary, we proposed a prognostic score for PGLs, the PSPGL, which includes a nonhistological variable (extraadrenal location) and 3 histological variables (central or confluent necrosis, mitotic index >3 mitoses/10 HPF, and extension to adipose tissue), all easily assessed.The PSPGL showed a performance similar to the PASS and GAPP but with higher specificity.The PSPGL score showed good capacity in predicting low and high risk of metastases.Genetic diagnosis and the Ki-67 index can be auxiliary tools in predicting risk in patients with intermediate scores.IHC-Ki-67 greater than or equal to 3% was shown to be a predictor of worse prognosis in MPGL. 
Figure 5. Estimated probability vs observed incidence related to PSPGL. The incidence indicators at each point are proportional in size to the number of observed patients in each situation. N/D, not detected; O, observed incidence; P, estimated probability; Pts, points.

Figure 6. A, Disease-specific survival for patients with metastatic paraganglioma (MPGL) and nonmetastatic PGL. B, Disease-specific survival for patients with MPGL: immunohistochemistry-Ki-67 less than 3% or greater than or equal to 3%. Significant P less than .05.

Table 1. Time of detection and site of metastasis in metastatic paraganglioma. Results expressed as mean ± SD, median (minimum-maximum), or percentage (n positive/n available); synchronous metastasis, less than 12 months after primary tumor diagnosis; metachronous metastasis, 12 months or more after primary tumor diagnosis.

Table 4. Site and size of metastatic paraganglioma vs nonmetastatic paraganglioma.

Table 5. Pheochromocytoma of the Adrenal Gland Scaled Score and Grading System for Adrenal Pheochromocytoma and Paraganglioma scores in the metastatic paraganglioma group vs the nonmetastatic paraganglioma group. PASS histologic parameters were assessed in 19 to 20 MPGL and in 59 to 61 NMPGL patients; the GAPP score was assessed in 15 MPGL and in 50 NMPGL patients.

Table 6. Immunohistochemistry for Ki-67, cocaine- and amphetamine-regulated transcript, and chromogranin B in the metastatic paraganglioma group vs the nonmetastatic paraganglioma group. IHC-CHGB and IHC-CART were each assessed in 20 MPGL and in 57 NMPGL patients.

Additional table footnotes: NAU results were available for 5 aMPGL patients and for 10 iMPGL patients; 17 of 50 patients did not present with metastatic disease; PSPGL was calculated only for tumors with available data for the 4 parameters included in the score; results of comparison between PSPGL and the PASS and GAPP original studies.
2024-05-09T15:03:04.006Z
2024-05-07T00:00:00.000
{ "year": 2024, "sha1": "f71efdd67fce95a38a67539e9cd9b80980cb1b54", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/jes/advance-article-pdf/doi/10.1210/jendso/bvae093/57440067/bvae093.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "261fb0b56e7cf2845b0a111a90bacda30d27334c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7522539
pes2o/s2orc
v3-fos-license
Long-range Atmospheric Transport of Polycyclic Aromatic Hydrocarbons is Worldwide Problem – Results from Measurements at Remote Sites and Modelling

Despite the fact that the occurrence of polycyclic aromatic hydrocarbons (PAHs) in the atmospheric environment has been studied for decades, the photochemistry, deposition and, consequently, the long-range transport potential (LRTP) are not well understood. The reason is gas-particle partitioning (GPP) in the aerosol, its sensitivity to temperature and particulate phase composition, and the sensitivities of sampling artefacts and reactivity towards particulate phase composition. Furthermore, most PAHs are subject to re-volatilisation upon deposition to surfaces (multihopping). Levels and sources of 2-6-ring unsubstituted PAHs were studied in remote environments of Europe, Africa and Antarctica. Global atmospheric transport and fate of 3-5-ring PAHs were simulated under various scenarios of photochemistry and GPP. GPP influences drastically the atmospheric lifetime, compartmental distributions and the LRTP of PAH. Mid latitude emissions seem to reach the Arctic but not the Antarctic.

Introduction

Polycyclic aromatic hydrocarbons (PAHs) are unavoidable by-products of any kind of combustion, in particular incomplete combustion processes. Gas-particle partitioning is significant for many PAHs (so-called semivolatility, expected for saturation vapour pressures in ambient air in the range p sat = 10 -6 -10 -2 Pa 4 ). In terms of water and organics solubility they span a considerably wide range of properties, i.e. 3-4 orders of magnitude, but less than with regard to vapour pressure (9 orders of magnitude; Table 1). In a wider sense, the substance class also encompasses alkylated, partly oxygenated and other substituted PAHs, besides the so-called parent PAHs. 1,5 The relevance of PAHs is given by their potential to form carcinogenic and mutagenic metabolites and, partly, by being carcinogenic themselves. 2,9-10 Among atmospheric trace chemical substances, PAHs probably form the class most harmful to human health. 11 One parent PAH, benzo(a)pyrene (BAP), is considered an important substance because of its toxicity (carcinogenicity) and is a criteria pollutant in many countries. BAP and at least some other parent PAHs are bioaccumulative and therefore discussed as persistent organic pollutants (POPs) in the UNEP POP Convention. The entire substance class is considered as POP under the Aarhus Protocol amended to the Convention on Long-Range Transboundary Air Pollution (CLRTAP). Resistance to photochemical degradation, obviously a condition for long-range atmospheric transport, is in fact not well understood: in the gas phase, the reactions with the hydroxyl radical and ozone limit the parent PAHs' atmospheric residence times to hours or days at most (Table 1 [2][3]5 ). (Footnotes to Table 1: (a) saturation vapour pressure; (b) Henry coefficient; (c) octanol-water partitioning coefficient; (d) second order rate coefficient for the reaction of the gaseous molecule with the hydroxyl radical; (e) second order rate coefficient for the reaction of the particle-associated molecule with the hydroxyl radical; (f) on graphite particles.)
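The semivolatility just described is commonly quantified with the Junge adsorption relationship (the 'AD' gas-particle partitioning scenario referred to in the modelling section below). The sketch assumes the standard form of that relationship, φ = cθ/(p_L + cθ), and an illustrative background aerosol surface area; the numerical values are examples, not measurements from this work.

```python
def junge_particulate_fraction(p_l_pa, theta_cm2_per_cm3, c_pa_cm=17.2):
    """Fraction of a semivolatile compound bound to particles according to the
    Junge adsorption relationship: phi = c*theta / (p_L + c*theta).

    p_l_pa            : (subcooled) liquid vapour pressure of the compound, Pa
    theta_cm2_per_cm3 : aerosol surface area per volume of air, cm^2 cm^-3
    c_pa_cm           : empirical constant, commonly taken as 17.2 Pa cm
    """
    return (c_pa_cm * theta_cm2_per_cm3) / (p_l_pa + c_pa_cm * theta_cm2_per_cm3)

# Illustrative values only: an assumed background aerosol surface area and two
# vapour pressures spanning the semivolatile range quoted in the text.
theta_background = 4.2e-7  # cm^2 cm^-3 (assumed clean-background value)
for name, p_l in [("more volatile PAH (p_L ~ 1e-2 Pa)", 1e-2),
                  ("less volatile PAH (p_L ~ 1e-6 Pa)", 1e-6)]:
    phi = junge_particulate_fraction(p_l, theta_background)
    print(f"{name}: particulate fraction ~ {phi:.2%}")
```

Run as written, the light compound stays almost entirely in the gas phase while the heavy one is predominantly particle-bound, which is the behaviour the text invokes to explain why heavier PAHs can travel shielded on particles.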
In the particulate phase, however, PAHs may undergo long-range transport and reach pristine areas in high altitudes and latitudes. [12][13][14][15] Obviously, rate coefficients for particle-bound PAHs determined in the laboratory tend to overestimate atmospheric degradation, most likely as a consequence of matrix effects which so far could not be mimicked in laboratory experiments, i.e. shielding against oxidant attack. 16-17

All active and passive air (QFF and PUF) samples were extracted with dichloromethane in an automatic extractor (Soxhlet). Surrogate recovery standards were spiked on each PUF and QFF prior to extraction. The volume was reduced after extraction under a gentle nitrogen stream at ambient temperature, and fractionation was achieved on a silica gel column. Samples were analyzed by gas chromatography coupled with mass spectrometry.

Meteorological Analysis

Regional and global information about air parcel origin was derived from back-trajectory statistics. The regional provenance of advected air was determined by three-dimensional 96 h back-trajectories (HYSPLIT model 23 ). Trajectories were calculated usually 4-hourly (24-hourly for the study of deposition in Antarctica), with various arrival heights, 200-750 m above ground. This level ensures that the trajectory starts in the mixing layer of the atmosphere. Individual samples were allocated to 'air sheds', i.e. the area or region of air mass passage during the sampling period. For tracking air masses back > 10 and up to 20 d, a Lagrangian particle dispersion model (FLEXPART model 26 ) was used. The fraction of parcels above and below the boundary layer height was budgeted and used to identify samples of free tropospheric as opposed to boundary layer air. 21

3. Global Multicompartmental Modelling

The model used is based on the atmosphere general circulation model ECHAM5 with simplified atmospheric chemistry and a dynamic aerosol sub-model (HAM 27 ). Two-dimensional ground compartments, i.e. the ocean mixed surface layer (spatio-temporally varying in depth) and single layers representing vegetation surfaces and topsoil, are coupled such that multicompartmental cycling (deposition, volatilisation) is described. The PAHs behave similarly in the ground compartments upon deposition from gas or particulate phase. Uptake of PAHs into leaves and other parts of vegetation is ignored. PAH kinetics in the particulate phase is not sufficiently understood. 5 A sensitivity analysis suggests that the heterogeneous reaction of PAH with ozone is effectively limiting atmospheric lifetime and long-range transport. 30 In this study, heterogeneous reactions are neglected. Several models for the processes determining gas-particle partitioning were tested in separate substance scenarios. The time step used was 30 min and the horizontal resolution ≈2.8° × 2.8°, with 19 levels in the vertical between 1000 and 10 hPa. ANT, FLT and BAP have been studied under each of three scenarios of atmospheric degradation and gas-particle partitioning. The model simulations were initialized by sea-surface temperature distributions according to present-day climate and run over 10 yr. Emissions were compiled based on emission factors in 27 major types of combustion technologies, scaled to 141 combustion technologies and their global distribution as of 1996 (1° × 1°) according to fuel type and the PM 1 emission factor. 31 The emissions were entered uniformly throughout the entire simulation time. Scenarios tested: adsorption ('AD', i.e.
according to the Junge empirical relationship 32 ), absorption in organic matter and adsorption to soot 33 without ('OB') and with ('DP') degradation in the atmospheric particulate phase.

1. 1. Central Europe and Free Troposphere Over Europe

A pronounced seasonality is found: both a stronger emission source (heating season) and a weaker chemical sink (photochemistry) contribute to higher abundances in winter. It had been stressed [34][35] that, influenced by domestic burning sources, PAHs' concentrations can be higher at a rural than at an urban site. Winter-time PAH peak levels are 1-2 orders of magnitude higher at boundary layer sites in central Europe than summer minima (Fig. 1).

At Mt. Zugspitze the total PAHs level observed (sum of gas and particulate phases) was 1.0 (0.35-2.5) ng m -3 in winter (2007-08) and 0.07 (0.05-0.16) ng m -3 in summer (2007; sum of 15 species, i.e. 16 USEPA priority PAHs without NAP, addressed). 21 Interestingly, the levels were higher in free troposphere air than in boundary layer air, which indicates resistance to photochemical degradation during transport in air. Obviously, the lifetime of PAHs can exceed 20 days in the free troposphere, despite elevated (≈100 μg m -3 ) ozone.

PHE and PYR accounted for > 90% of the PAH mass, which certainly indicates relative photochemical stability of these (mostly) gaseous compounds. Also, BEP/(BEP+BAP) > 0.66 indicates aged air, as BAP should be degrading faster than its isomer BEP 36 (see also values of k i OH in Table 1), as this ratio is ≈ 0.5 close to sources, as reflected by the data from the Brno area urban and rural sites (so-called diagnostic ratio 37 ).

Apart from primary sources, PAHs may be re-emitted upon atmospheric deposition. Re-suspended dust (or other coarse particulate matter) and air-soil exchange could constitute such secondary sources. Air-soil exchange is occurring, 22,[38][39] despite sorption of PAHs to, and accumulation in, soils. The significance of this process as a local or regional PAH source has not been assessed so far.

1. 2. Boundary Layer and Free Troposphere Over Africa

At the high mountain site Mt. Kenya the total PAHs level observed was 0.3-0.6 ng m -3 , and at two savannah sites (Molopo and Barberspan, Rep. of South Africa) it was 0.5-1.4 ng m -3 in 2008 (sum of 15 species, i.e. 16 USEPA priority PAHs without NAP, addressed 40 ), hence well below levels found at European background sites. The analysis of potential source areas and advection paths (3D back-trajectories) suggests that the available chemical kinetic data would lead to implausibly high concentrations in continental source areas. It is concluded that current knowledge overestimates degradation of (at least) some 4-ring PAHs during atmospheric transport. A modelling study 30 showed that the contribution of vegetation fires to exposure to PAHs in Africa is probably > 10%, but cannot be quantified due to lack of knowledge with regard to both emission factors and photochemistry.

1. 3. Deposition in Antarctica

PAH concentrations in snow on the Ekström shelf ice were found within the range of 26-197 ng L -1 . The most prevailing substances were determined to be NAP, 1- and 2-methyl-NAP, ACY, ACE and PHE, with NAP accounting for an overall mean of 82% of total addressed PAH. The depositional flux was highest for periods with high frequency of air mass passage over close sources and lowest for air sheds which seemingly encompassed sources distributed over almost the entire continent. 19
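The diagnostic-ratio argument used in the Central Europe subsection above (BEP/(BEP+BAP) ≈ 0.5 close to sources, > 0.66 in photochemically aged air, because BAP degrades faster than its isomer BEP) is simple arithmetic; the sketch below applies it to invented concentrations purely for illustration.

```python
def bep_bap_ratio(bep, bap):
    """Diagnostic ratio BEP/(BEP+BAP); concentrations in the same units."""
    return bep / (bep + bap)

def classify_air_mass(ratio, fresh=0.5, aged=0.66):
    """Rough interpretation following the text: ~0.5 close to sources,
    > 0.66 indicative of photochemically aged air."""
    if ratio > aged:
        return "aged air mass"
    if abs(ratio - fresh) <= 0.05:
        return "close to sources"
    return "intermediate / inconclusive"

# Invented concentrations (ng m^-3): an urban sample and a free-troposphere sample.
for label, bep, bap in [("urban", 0.55, 0.50), ("free troposphere", 0.030, 0.010)]:
    r = bep_bap_ratio(bep, bap)
    print(f"{label}: BEP/(BEP+BAP) = {r:.2f} -> {classify_air_mass(r)}")
```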
Potential emission sources of PAHs are stations and ships, both stronger in summer. The distance to the sources (ships and research stations) in this region was found to control the snow PAH concentrations. There was no indication for intercontinental transport or marine sources, i.e. southern mid latitude emissions apparently have not reached the Antarctic in 2003-05.

2. Global Multicompartmental Modelling

The model simulations show that gas-particle partitioning in air influences drastically the atmospheric cycling, total environmental fate (e.g. compartmental distributions) and the long-range transport potential (LRTP) of the substances studied. The largest fraction of the total environmental burden is predicted to reside in soils and vegetation (85-99%), less in the ocean (0.9-11%) and atmosphere (0.3-3.6%; Fig. 2). 29 Comparison with observed levels indicates that degradation in the particulate phase must be slower than in the gas phase (exclusion of the assumptions made under the DP scenario). Furthermore, the levels of the semivolatile PAHs ANT and FLT at high latitudes and a European mid latitude site cannot be explained by partitioning due to adsorption alone, but point to both absorption into organic matter and adsorption to black carbon (soot) determining gas-particle partitioning. Global modelling, therefore, suggests that the Arctic is receiving PAHs emitted in mid latitudes.

Long-range transport of PAHs is enhanced to some extent by multi-hopping. Volatilisation from ground exceeds deposition over dry parts of the continents and some sea regions at least seasonally. 29

Conclusions

The results of both field observations and modelling suggest that PAHs are undergoing long-range atmospheric transport, including to remote continental regions, the world ocean and the Arctic. Gas-particle partitioning in air influences drastically the atmospheric cycling, total environmental fate (e.g. compartmental distributions) and the LRTP.

For coherent large-scale modelling of PAHs, knowledge gaps with regard to gas-particle partitioning and chemical kinetics (reactions with ozone and NO 2 , besides the hydroxyl radical, and of particle-associated PAH molecules) should be closed, and the uncertainty of emission factors for some sources (e.g., biomass burning 30 ) needs to be reduced. Phase information from field data requires artefact-free sampling, which is not commonly used, [41][42] or an in situ determination and quantification method, which is not yet available for ambient conditions (except for the parameter sum of PAHs 43 ).

Fig. 2. Distributions of annual mean fluoranthene (FLT, a, c) and benzo(a)pyrene (BAP, b, d) burdens (μg m -2 ) in air (a, b) and soil (c, d) under the scenario assuming both absorption in organic matter and adsorption to soot to contribute to gas-particle partitioning (OB, see above 2.3). 29
2017-09-21T03:42:40.102Z
2015-04-29T00:00:00.000
{ "year": 2015, "sha1": "46c7505ba29a6ca192ec60894e43158f6b1e069c", "oa_license": "CCBY", "oa_url": "https://journals.matheo.si/index.php/ACSi/article/download/1387/664", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "46c7505ba29a6ca192ec60894e43158f6b1e069c", "s2fieldsofstudy": [ "Chemistry", "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science", "Medicine" ] }
260037472
pes2o/s2orc
v3-fos-license
Effect of Nb on the Damping Property and Pseudoelasticity of a Porous Ni-Ti Shape Memory Alloy In order to develop novel high damping materials with excellent pseudoelasticity (PE) properties to meet the application requirements in aerospace, medical, military and other fields, porous Ni50.8Ti49.2 shape memory alloy (SMA) was prepared by the powder metallurgy method. Different contents of the Nb element were added to regulate the microstructures. It was found that after adding the Nb element, the number of precipitates significantly decreased, and the Nb element was mainly distributed in the Ni-Ti matrix in the form of β-Nb blocks surrounded by Nb-rich layers. Property tests showed that with the increase in Nb content, the damping and PE increased first and then decreased. When the Nb content reached 9.0 at.%, the highest damping and the best PE could be achieved. Compared with the porous Ni-Ti SMA without Nb addition, the damping and PE increased by 60% and 35%, respectively. The related mechanisms are discussed. Introduction With the development of industry, problems such as vibration and noise are becoming increasingly serious. Damping materials can effectively convert mechanical energy into thermal energy, thus achieving the effect of vibration and noise reduction. Polymer materials exhibit excellent damping properties due to viscoelasticity, but owing to their low mechanical properties they cannot be directly used as structural components in engineering fields [1]. Although traditional metal materials have excellent mechanical properties, their applications are also limited due to their low damping properties [2,3]. SMAs exhibit high damping properties owing to their inherent thermoelastic martensitic transformation (MT) behavior and the hysteretic migration of a large number of interfaces such as the austenite/martensite interface, the twin interface as well as the martensite/martensite interface, so they have been widely used [4,5]. Compared with iron-based and copper-based SMAs, Ni-Ti SMAs with an almost equal atomic ratio have unique advantages because they have a better shape memory effect (SME) and PE besides high damping properties [6]. Moreover, the one-step reverse MT (B19′ → B2) of the Ni-Ti SMAs during the heating process can be easily changed to a two-step reverse MT (B19′ → R → B2) by performing certain heat treatments [7]. Therefore, the regulation of their damping is more convenient than that of other SMAs. With the development of science and technology, aerospace, medical, military and other fields have put forward higher requirements for the damping property of metal materials [2]. It has been shown that making Ni-Ti SMAs porous can further improve their damping properties. This is because the microplastic deformation of the pore wall as well as the motion of atoms and stress mode transformation around pores will dissipate additional mechanical energy [8,9]. Element addition is another effective method of improving the damping properties of Ni-Ti SMAs. For example, the addition of Cu can significantly improve the damping of Ni-Ti SMAs, but excess Cu will embrittle the alloy and reduce the machinability [10]. The addition of Hf and H elements can form a new damping peak in Ni-Ti SMAs, but the resultant alloys have low modulus and poor deformability in the martensitic state [11]. In recent years, the addition of Nb to Ni-Ti SMAs has attracted ever more attention because of the excellent comprehensive properties of the resultant alloys.
By comparing the damping properties of the Ni50Ti50 SMA and the Ni47Ti44Nb9 SMA, Cai et al. found that the addition of Nb contributed to an improvement in the damping property of the Ni-Ti SMA due to the formation of a high density of dislocations around the Nb phase [12]. Bao et al. fabricated a NiTiNb SMA through casting and hot rolling processes, and they found that the generation of the eutectic microstructure around the Nb phase could lead to an improvement in the damping [13]. Nevertheless, up to now, studies on the effect of Nb addition on damping properties have mostly focused on bulk Ni-Ti SMAs, and there are still few studies concerning porous Ni-Ti SMAs. In addition to damping properties, as mentioned above, the SME and PE are also important functional properties of Ni-Ti SMAs. In particular, the PE of porous Ni-Ti SMAs has shown an important application prospect in the field of high overload resistance related to high-speed launch in recent years [14]. Guo et al. demonstrated that Ni-Ti SMAs have good energy absorption ability and self-recovery capacity under dynamic loading, which can effectively improve the service life and material utilization of protective structures [15]. However, so far, there have been few studies on the PE property of porous Ni-Ti SMAs. In the present study, the effect of Nb addition on the microstructure, damping and PE properties of a porous NiTi SMA was systematically investigated. The obtained results can provide a theoretical basis for improving the comprehensive properties of the porous NiTi SMAs. Fabrication of Porous Ni-Ti SMAs Porous Ni50.8Ti49.2 SMAs with Nb addition were fabricated via the powder metallurgy process by using pure Ni, Ti and Nb powders (the purity was 99.9% and the average particle size was 75 µm) as raw materials. Powders were purchased from the Gripm Advanced Materials Co., Ltd. (Beijing, China). Ni, Ti powders and different contents (0, 3.0, 6.0, 9.0 and 12.0 at.%) of Nb powders were ball milled for 10 h on a planetary ball mill under the protection of argon (the weight ratio of ball to powder was 10:1, and the rotating speed of the ball mill was 200 rpm). Subsequently, the obtained Ni-Ti-Nb powders were homogeneously mixed with NaCl space-holders with an average particle size of 0.60 mm purchased from the China National Salt Industry Group Co., Ltd. (Beijing, China) in a V-type mixer. The mixed powders were then compressed into cylindrical green compacts under the pressure of 350 MPa on a computer numerical control (CNC) hydraulic machine purchased from the Henan Oukai Hydraulic Equipment Co., Ltd. (Xinxiang, China). The green compacts were sintered in a tube furnace purchased from the Tianjin Zhonghuan Electric Furnace Co., Ltd. (Tianjin, China) under the protection of high purity argon gas (the green compacts were firstly sintered at 790 °C for 2 h to densify the metal matrix, and then sintered at 1100 °C for 3 h). After natural cooling to room temperature, the specimens were taken out of the furnace and the residual NaCl in the pores of the specimens was washed out with running water. In order to eliminate the precipitates in the Ni-Ti matrix, the porous Ni-Ti SMAs were subjected to solution treatment at 1000 °C for 60 min and then quenched in water. After that, the specimens were aged at 450 °C for 30 min. For convenience, the porous Ni-Ti SMAs with different contents (0, 3.0, 6.0, 9.0 and 12.0 at.%) of Nb were termed the Nb0, Nb3, Nb6, Nb9 and Nb12 specimens, respectively.
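As a numerical aside before the characterization results, the sketch below evaluates the porosity relation used in the next subsection, P = (1 − ρ*/ρ0) × 100%, for an assumed apparent density; the density values are illustrative assumptions, not measurements from this study.

def porosity_percent(apparent_density, theoretical_density):
    # P = (1 - rho*/rho0) * 100, with rho* the apparent density of the porous
    # specimen and rho0 the theoretical density of the dense alloy.
    return (1.0 - apparent_density / theoretical_density) * 100.0

# Illustrative values: theoretical density of dense NiTi is roughly 6.45 g/cm^3;
# an apparent density of 3.2 g/cm^3 is assumed for a space-holder specimen.
print(f"Porosity = {porosity_percent(3.2, 6.45):.1f} %")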
Characterization of Specimens The porosity of the porous Ni-Ti SMAs was calculated by using the following equation: P = (1 − ρ*/ρ0) × 100%, where P and ρ* denote the porosity and apparent density of the porous Ni-Ti SMAs, respectively, and ρ0 denotes the theoretical density of the dense Ni-Ti SMA. The phase composition of specimens was analyzed by X-ray diffraction (XRD, Bruker D8 Discover). The equipment was sourced from the Bruker Corporation (Billerica, MA, USA). The pore morphology and microstructure of specimens were observed by scanning electron microscopy (SEM, JSM-7100F) equipped with an energy dispersive spectrometer (EDS). Elements mapping was obtained by using electron probe microanalysis (EPMA, JEOL 8530F). Both SEM and EPMA were sourced from the JEOL Japan Electronics Co., Ltd. (Tokyo, Japan).

Property Tests The damping property was characterized by the internal friction IF (Q−1) and measured on a dynamic thermo-mechanical analyzer (DMA, Q800) by using the method of forced vibration with a heating rate of 5 °C/min and a vibration frequency of 0.5 Hz. The DMA, sourced from TA Instruments (New Castle, DE, USA), is shown in Figure 1. The damping specimens had a dimension of 35 × 10 × 2 mm3. The PE of the specimens was tested at room temperature on a hot simulated testing machine (HSTM, Gleeble 3180) with a strain rate of 1.0 × 10−4 s−1. The equipment was sourced from Dynamic Systems Inc. (Poughkeepsie, NY, USA). Four pre-strains of 2.0%, 3.0%, 4.0% and 5.0% were used. The cylinder specimens had a dimension of ∅8 × 12 mm3.

Phase Composition and Microstructures of Specimens The XRD patterns of Ni-Ti and Ni-Ti-Nb (Nb content: 9.0 at.%) powders as well as the porous Ni-Ti SMAs are shown in Figure 2. The sharp characteristic diffraction peaks of the Ni, Ti and Nb elements indicate that the ball-milled powders have an ordered lattice arrangement. The position of the peaks without deviation indicates that the mixed powder has no interatomic diffusion behavior. In Figure 2b, diffraction peaks of Na and Cl are not detected, indicating that the NaCl particles only play the role of space-holders and do not participate in the reaction between the metal powders at the high sintering temperature. The porous Ni-Ti SMAs with Nb addition are mainly composed of the NiTi (B2) matrix phase and a small amount of Ti-rich phase and β-Nb phase. The Ti-rich phase is probably the (Ti, Nb)2Ni phase [16]. The existence of the β-Nb phase indicates that the sintering temperature of 1100 °C could not provide enough energy for the Nb element to form a eutectic structure with the NiTi phase. Compared with the specimens with the Nb addition, a higher content of the Ni-rich phase appears in the Nb0 specimen. For the specimens with Ni content higher than 50.5 at.%, the solution treatment can make part of the Ni3Ti phase re-dissolve in the NiTi phase [17,18]. However, in the subsequent aging treatment at 450 °C, the Ni-rich phase precipitates again as the Ni4Ti3 phase [19]. Interestingly, in this study the specimens with the Nb addition do not show the diffraction peaks of Ni3Ti and Ni4Ti3 phases after heat treatment, indicating that the presence of the Nb element in the matrix can effectively hinder the precipitation of Ni-rich phases.

Figure 3a,b shows the morphologies of Ni-Ti and Ni-Ti-Nb (Nb content: 9.0 at.%) powders, respectively. The two kinds of powders are irregularly shaped and have no obvious difference in morphology. During the compressing process, the irregular shape gives a better meshing effect between the powders, which can effectively improve the strength and density of the compacts and can avoid the delamination and cracking of the compacts during the demolding process. Figure 3c shows the morphology of NaCl particles with an average particle size of 0.6 mm. Figure 3d-g shows the macro and micro morphologies of the Nb0 specimen, and Figure 3h-k shows the macro and micro morphologies of the Nb3 specimen. There is no significant difference in the macroscopic morphology between the two specimens. The pores are uniformly distributed in the Ni-Ti matrix and connected with each other, forming a three-dimensional network. The pores of the specimens retain the regular three-dimensional shape of the NaCl particles while losing their sharp-edged characteristics. This may be due to the fragmentation of the edges of the NaCl particles during the compressing process. The inner wall of the pores is very rough, which is conducive to improving the ability of the material to absorb sound and reduce noise [20]. A 1100 °C sintering temperature provides enough energy for the formation and growth of the sintered neck. The powder particles exhibit a dense metallic framework due to good metallurgical bonding. When we observe the micro-morphology of the pore wall at a higher magnification, it is clear that the Nb3 specimen has fewer micro pores than the Nb0 specimen, which can be attributed to the fact that the presence of Nb promotes the diffusion of atoms between Ni and Ti; this effectively reduces the vacancies generated by the Kirkendall effect during the high-temperature sintering process [21]. Although Nb has a higher melting point than Ni and Ti, Nb and Ti belong to an infinite solid solution system, so Nb may still be dissolved in the Ni-Ti matrix [22]. Nb facilitates the sintering process by acting as a kind of channel for the diffusion and reaction between Ni and Ti.
Thus, the migration of atoms in the powders is accelerated, which causes the distance between the powders of different elements to decrease continuously, resulting in the gradual disappearance of micro pores. From Figure 5a, it can be determined that the gray matrix is the NiTi phase, whereas the white Ti-rich phase at point B, the flaky Ni-rich phase at point C, and the fine acicular Ni-rich phase at point D are the Ti2Ni, Ni3Ti and Ni4Ti3 phases, respectively [23,24]. From Figure 4b, it can be seen that the size of the Ni4Ti3 phase is 2-3 µm. Figure 4c-g shows the backscattered electron images of the Nb3, Nb6, Nb9 and Nb12 specimens, respectively. In these specimens, white blocks appear, and with the increase in Nb content, the number of these white blocks increases. Figure 4d shows the image of the Nb3 specimen at higher magnification, and Figure 5b gives the EDS analysis results of points E-H. According to Figure 2b and the EDS result of point E, it can be determined that these white blocks are the β-Nb phase. In addition to the blocky β-Nb phase, a striped β-Nb phase is also found in Figure 4f,g, which may be related to the creep of the β-Nb phase during the sintering process [16]. From Figure 4d, it is apparent that the β-Nb phase is surrounded by a layer of Nb-rich phase. The clearly discernible transition region (the Nb-rich phase) increases the bonding effect between the β-Nb phase and the NiTi matrix. Moreover, Nb can also dissolve in the NiTi and Ti2Ni phases by replacing Ti, forming (Ti, Nb)Ni and (Ti, Nb)2Ni phases, respectively. Because the sintering temperature (1100 °C) of the porous Ni-Ti SMAs in this study is lower than the eutectic temperature (1150.7 °C) of the Ni-Ti-Nb alloy, the eutectic structure reported by Bao et al. is not found in Figure 4 [13,25,26]. Figure 6 shows the EPMA analysis results of the Nb0 specimen. Except for Ni and Ti, no other elements can be detected. Combining the EDS results shown in Figure 5a, it can be determined that the Ni-rich phase in Figure 6c is the Ni3Ti phase, and the uniformly distributed Ti-rich phase in Figure 4d is the Ti2Ni phase. The needle-like Ni4Ti3 phase, however, is not found in Figure 6 due to its small size. Figure 7 shows the EPMA analysis results of the Nb9 specimen. No other elements and Ni-rich phases can be detected except for the Ni, Ti, Nb elements and the (Ti, Nb)Ni phase, which is in good accordance with the results shown in Figures 2b and 4c-g. The Ti-rich phase shown in Figure 4d cannot be detected due to its low content and the lower magnification of Figure 5 compared with that of Figure 4d. In Figure 7d, the layer of the Nb-rich phase shown in Figure 4d between the β-Nb phase and the NiTi phase can still be clearly seen. Figure 8 shows the IF-temperature spectrum of the porous Ni-Ti SMAs with different contents of Nb.
It can be seen that during the heating process two IF peaks arise at around −40 °C and 30 °C, respectively, which can be ascribed to the two-step transformation from the B19′ phase to the B2 phase (B19′ → R → B2) [7,27]. It was reported that the Ni4Ti3 phase precipitated after aging treatment could cause a stress field in the NiTi matrix and facilitate the transformation of the NiTi phase into the R phase; the two-step phase transformation of B19′ → R → B2 therefore occurs in the Nb0 specimen [7,28]. In the specimens containing Nb, however, the precipitation of the Ni4Ti3 phase is inhibited, so only the transformation of B19′ → B2 occurs (only one IF peak appears). From Figure 8, it is apparent that with the increase in Nb content, the damping (the level of the IF curve) of the porous Ni-Ti SMAs increases first and then decreases. When the Nb content reaches 9.0 at.%, the highest damping can be achieved. The damping capacity of the porous Ni-Ti SMAs has the following three influencing factors. One is the pore structure. Due to the existence of pores in the Ni-Ti matrix, the external stresses applied during IF measurements are not uniformly distributed in the specimens, making the strain significantly lag behind the stress and leading to the enhancement of the IF. In addition, uniformly distributed stresses will cause the pores to undergo the deformations of expansion and contraction, which are also conducive to the conversion of external mechanical energy into internal energy. However, in this study, all specimens have the same porosity. Therefore, the IF that originates from the pore structure is not the main influencing factor on the change in the IF with the Nb content. The other influencing factors are the damping sources in the matrix of the porous Ni-Ti SMAs. The hysteretic migration of interfaces in the Ni-Ti matrix is the main energy dissipation mechanism that leads to strain lagging behind stress and improvement in the damping [6]. Compared with the specimens containing the Nb element, the Nb0 specimen has a higher content of Ti2Ni and Ni3Ti phases without mobile interfaces, so the Nb0 specimen has lower damping. After adding the Nb element, as shown in Figure 4, the content of Ti2Ni and Ni3Ti phases significantly decreases. As a consequence, the damping increases. Moreover, from Figure 4d it is apparent that after adding the Nb element, a layer of Nb-rich phase appears around the β-Nb phase, forming a new interface of β-Nb/Nb-rich layer/Ni-Ti matrix. This kind of interface is also mobile under applied stresses, so it acts as a new damping source of the porous Ni-Ti SMA [13]. It has been reported that there is a high density of dislocations in the regions adjacent to the β-Nb/Ni-Ti interface [12].
The slip of dislocations is also conductive to the improvement in the damping. Therefore, the addition of the Nb element not only decreases the content of precipitates, but also generates multiple damping sources by forming the β-Nb phase in the Ni-Ti matrix, and leads to the improvement in the damping. The third influencing factor of the damping is the existing form of the Nb element in the Ni-Ti matrix. In addition to the form of the β-Nb phase, a small number of Nb atoms dissolves into the NiTi phase. It has been shown that the dissolved Nb atoms can induce the movement of twins and effectively improve the intrinsic damping of martensite [13]. However, as the number of dissolved Nb atoms continues to increase, the damping of martensite will no longer increase, but will remain at a stable level. Therefore, dissolved Nb atoms are not the main reason for the change in damping with the Nb content. In addition, from Figure 8 it can also be seen that with the increase in the Nb content, the P 3 peak arising from the reverse MT (B19 → B2) gradually shifts to a low temperature side. This can be attributed to the inhibiting effect of Nb on the precipitation of Ni-rich phases. The increase in the Ni/Ti atomic ratio leads to the decreased temperature of the reverse MT [29]. Effect of Nb on the Damping Property The porous Ni-Ti SMAs with Nb addition can be regarded as a two-phase structure consisting of the β-Nb phase and NiTi phase. The damping arising from the β-Nb phase consists of the intrinsic damping of the β-Nb phase and the damping of the β-Nb/Nb-rich layer interface and the Nb-rich layer/Ni-Ti interface. The intrinsic damping of the β-Nb phase is very low [21], so the damping of the β-Nb/Nb-rich layer interface and the Nb-rich layer/Ni-Ti interface is the main damping source. In addition, the Nb-rich layer/Ni-Ti interface provides nucleation sites for the formation of martensite during cooling, thus creating more interfaces, which further promote the improvement of the damping. With the increase in the Nb content, the volume fraction of the β-Nb phase increases, whereas the volume fraction of the NiTi phase decreases. According to Figure 8, the damping increases first and then decreases with the increase in the Nb content. When the Nb content reaches 9.0 at.%, the highest damping is obtained. When the increase in the interface damping can compensate for the decrease in the damping caused by the decrease in the volume fraction of the NiTi phase, the damping increases, whereas when the Nb content reaches 12.0 at.%, the level of the IF curve decreases, indicating that the increase in interface damping is no longer able to compensate for the decrease in damping caused by the decrease in the volume fraction of the NiTi phase. Effect of Nb on the PE Property The PE of the Ni-Ti matrix determines the shape recovery process of the porous Ni-Ti SMAs after stress unloading, whereas the pores can be regarded as the phase that does not reflect shape recovery. Figure 9 shows the effect of Nb on the shape recovery of the porous Ni-Ti SMAs at different pre-strains (2.0%, 3.0%, 4.0% and 5.0%). As can be seen from the figure, the strain of the compressed specimen after unloading is composed of residual strain (ε R ), superelastic reversion strain (ε SE ), and pure elastic reversion strain (ε E ). The reason for the high PE response of the specimens is the result of stress-induced MT and subsequent reverse MT, which is clearly reflected in the stress-strain curves during the loading-unloading processes. 
Taking the Nb9 specimen as an example, during the loading process, an inflection point appears on the stress-strain curve when the stress reaches 90 MPa, indicating that the stress-induced MT begins to occur. When the compressive strain reaches 5%, the stress is unloaded. Subsequently, the reverse MT begins to occur, and the strain decreases linearly to about 3.36%. After that, the strain continues to recover until reaching the εR of 1.77%. Then εE, εSE and the total shape reversion strain can be determined as 1.64%, 1.59% and 3.23%, respectively, so the total recovery rate can reach 64.6% when the pre-strain is 5%. The superelastic recovery strain of the different specimens after unloading under the four test conditions is summarized in Figure 10. It can be seen that with the increase in the Nb content, the superelastic recovery strain increases first and then decreases. The Nb9 specimen has the highest superelastic recovery strain, indicating that Nb has a positive contribution to the PE. In addition, the Ti2Ni and Ni3Ti phases in the Nb0 specimen have no PE, so the reduction in the number of these phases also contributes to the improvement in the PE. It has been reported that in Ni-Ti SMAs with the Nb addition, there is a large number of twinned martensites at the interface of the Nb-rich phase/Ni-Ti matrix [29], and this structure is considered necessary for the self-accommodation of martensite variants [30]. During loading and unloading processes, with the increase in the strain, reorientation occurs between martensite variants when the stress exceeds the yield stress of the martensites. The reorientation and de-twinning of twinned martensites facilitate the superelastic recovery of the matrix [31]. However, the β-Nb phase has no PE, and its plastic deformation is not conducive to shape recovery. Therefore, as the content of the β-Nb phase continues to increase and the volume fraction of the Ni-Ti matrix phase continues to decrease, the PE of the porous Ni-Ti SMAs will decrease. Moreover, from Figure 9 it can be seen that the level of the stress-strain curves increases first and then decreases with the increase in the Nb content. The increase in the stress-strain curves can be attributed to the solid solution strengthening effect of Nb atoms as well as the dislocation strengthening effect due to the thermal mismatch between the β-Nb phase and the Ni-Ti matrix [29].
However, an excessive amount of the soft β-Nb phase will inevitably degrade the strength of the porous Ni-Ti SMAs. Therefore, the Nb9 sample has the highest level of the stress-strain curve. Clearly, the strengthening of the porous Ni-Ti alloy contributes to the improvement in PE.

Conclusions
• Porous Ni-Ti SMAs with different Nb contents were fabricated. The pores were uniformly distributed in the Ni-Ti matrix and connected with each other, forming a three-dimensional network. The addition of Nb can effectively reduce the numbers of micro pores and precipitates in the Ni-Ti matrix.
• With the increase in Nb content, the IF peak of the porous Ni-Ti SMAs arising from the reverse MT gradually shifts towards the low-temperature side. This can be attributed to the hindering effect of Nb on the precipitation of Ni-rich phases as well as the replacement of Ti atoms by Nb atoms.
• With the increase in Nb content, both the damping and the PE of the porous Ni-Ti SMAs increase first and then decrease. The Nb9 specimen has the highest damping and the best PE. The improvement in damping can be ascribed to the formation of mobile interfaces around the β-Nb phase, whereas the improvement in PE is related to the formation of twinned martensites and the strengthening of the NiTi matrix. In addition, the decrease in the content of low-damping Ni-rich phases without PE is also conducive to the improvements in damping and PE. The decrease in damping and PE is caused by the decrease in the NiTi content.
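As a small numerical check of the strain decomposition reported for the Nb9 specimen (5% pre-strain; εR = 1.77%, εE = 1.64%, εSE = 1.59%), the sketch below recomputes the total shape reversion strain and the recovery rate following the definitions used in the text; the function and variable names are ours.

def recovery_metrics(pre_strain, eps_r, eps_e, eps_se):
    # All strains in percent: eps_r residual, eps_e pure elastic reversion,
    # eps_se superelastic reversion.
    total_reversion = eps_e + eps_se
    recovery_rate = total_reversion / pre_strain * 100.0
    # Consistency check: recovered plus residual strain should equal the pre-strain.
    assert abs(eps_r + total_reversion - pre_strain) < 0.1
    return total_reversion, recovery_rate

total, rate = recovery_metrics(5.0, 1.77, 1.64, 1.59)
print(f"Total shape reversion strain = {total:.2f} %, recovery rate = {rate:.1f} %")
# -> 3.23 % and 64.6 %, matching the values quoted above.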
2023-07-22T15:38:32.403Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "a07915441a3982d55b3ce99f695a67bcc0ea45a9", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/16/14/5057/pdf?version=1689611355", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "77a84a00d4a833c358ecad6c5e9ad71a06b8f07b", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
221085876
pes2o/s2orc
v3-fos-license
Evolution of computed tomography manifestations of eleven patients with severe coronavirus disease 2019 (COVID-19) pneumonia Abstract Purpose Severe coronavirus disease 2019 (COVID-19) pneumonia is associated with high mortality. However, the evolution of computed tomography (CT) manifestations of severe COVID-19 pneumonia remains unclear, and more evidence regarding its evolution process is urgently needed. Method The clinical, laboratory and imaging data of eleven patients with severe COVID-19 pneumonia were collected to investigate the evolution process of severe COVID-19 cases. Results The main initial CT manifestations of severe COVID-19 pneumonia were multiple ground-glass opacities and/or consolidation. The evolution of CT manifestations showed that acute exudative lesions of severe COVID-19 pneumonia could be gradually resolved after active intervention. Conclusions Most patients with severe COVID-19 pneumonia showed marked improvement of acute exudative lesions on chest imaging, and a satisfactory prognosis of severe COVID-19 pneumonia could be achieved after active treatment. | BACKGROUND The coronavirus disease 2019 (COVID-19) outbreak, which initially emerged in Wuhan City, Hubei Province, China, has spread to multiple countries around the world, with the number of confirmed cases increasing every day. [1][2][3][4] It poses a great threat to global public health and human life. As of Mar 3, 2020, China had reported 80,302 patients, including 2,946 fatalities. 5 A severe type of COVID-19 pneumonia, 6 accounting for approximately 20% of COVID-19 cases, 5 caused the most COVID-19 deaths. 5 | PATIENT'S CHARACTERISTICS Eleven patients admitted to Taizhou Hospital of Wenzhou Medical University from January 24 to February 10, 2020, were diagnosed with severe COVID-19 pneumonia. Six patients were male and five were female, with a median age of 52 years (range 33-75 years). All eleven patients were confirmed by real-time polymerase chain reaction (RT-PCR) tests of throat swabs. The initial symptoms of the patients were fever (n = 10), cough (n = 8), and pharyngodynia (n = 3). Two patients had diabetes, one hypertension, one hypothyroidism, one renal insufficiency, and one gout. Eleven patients were treated with cortisol, immunoglobulin, and oxygen therapy. | INITIAL CT MANIFESTATIONS The main manifestations on initial chest CT scans were multiple ground-glass opacities and/or consolidation. | EVOLUTION OF CT MANIFESTATIONS All eleven patients showed marked improvement of chest radiographic manifestations after active intervention. In the early and mid-term follow-up CT, one patient (Figure 2D FIGURE 1 A-D, Images from case 1. A, First CT performed on hospital day 1 shows multiple ground-glass opacities and consolidation in the subpleural regions of bilateral lungs (red arrows); B, Second CT performed on hospital day 6 shows a significant decrease in the ground-glass opacities and consolidation in bilateral lungs (red arrows); C, Third CT performed on hospital day 14 shows no change in the extent of ground-glass opacities and consolidation in bilateral lungs (red arrows); D, Fourth CT performed on hospital day 21 shows no change in the extent of ground-glass opacities and consolidation in bilateral lungs (red arrows). E-H, Images from case 2.
E, First CT performed on hospital day 2 shows multiple ground-glass opacities and consolidation as well as pleural effusion in the subpleural regions of bilateral lungs (red arrows); F, Second CT performed on hospital day 5 shows a significant decrease in the consolidation and resolved pleural effusion in bilateral lungs (red arrows); G, Third CT performed on hospital day 10 shows a mild decrease in the ground-glass opacities in the left lung (red arrows); H, Fourth CT performed on hospital day 16 shows a mild decrease in the ground-glass opacities in the left lung (red arrows). I-L, Images from case 3. I, First CT performed on hospital day 4 shows multiple consolidations in the bilateral subpleural regions (red arrows); J, Second CT performed on hospital day 10 shows a significant decrease in consolidation (red arrows); K, Third CT performed on hospital day 17 shows the density of ground-glass opacities decrease (red arrows); L, Fourth CT performed on hospital day 24 shows no change in the extent of ground-glass opacities (red arrows) | PROG NOS IS As of Mar 3, 2020, four patients with severe COVID-19 pneumonia had been cured and discharged. Clinical cure standard 6 included temperature returning to normal for at least three F I G U R E 4 A-F, Images from case 7. A, First CT performed on hospital day 2 shows multiple ground-glass opacities and consolidation in bilateral lungs (red arrows); B, Second CT performed on hospital day 5 shows a significant decrease in the ground-glass opacities and consolidation in bilateral lungs (red arrows); C, Third CT performed on hospital day 10 shows a decrease in the ground-glass opacities and consolidation in bilateral lungs (red arrows); D, Fourth CT performed on hospital day 14 shows a further decrease in the ground-glass opacities and consolidation in bilateral lungs (red arrows); E, Fifth CT performed on hospital day 19 shows increased consolidation in the lower lobe of the right lung, fibrosis-like stripes appear in the upper lobe of the left lung (red arrows); F, sixth CT performed on hospital day 22 shows a decrease in the consolidation and ground-glass opacities as well as fibrosis-like stripes in bilateral lungs (red arrows). G-L, Images from case 8. G, First CT performed on hospital day 2 shows multiple ground-glass opacities and consolidation in the upper lobe of the right lung (red arrows); H, Second CT performed on hospital day 5 shows decreased ground-glass opacities and resolved consolidation in the upper lobe of the right lung (red arrows); I, Third CT performed on hospital day 7 shows a mild increase in the ground-glass opacities in the upper lobe of the right lung (red arrows); J, Fourth CT performed on hospital day 10 shows a decrease in the ground-glass opacities in the upper lobe of the right lung (red arrows); K, Fifth CT performed on hospital day 16 shows a further decrease in the ground-glass opacities in the upper lobe of the right lung (red arrows); L, sixth CT performed on hospital day 20 shows the size of ground-glass opacities decrease but the density increase (red arrows) consecutive days, marked improvement of acute exudative lesions on chest CT imaging and viral clearance in respiratory samples from the upper respiratory tract (two consecutive negative results of COVID-19). D I SCLOS U R E The authors declare that they have no conflict of interest. Helsinki declaration and its later amendments or comparable ethical standards." 
"All applicable international, national, and/ or institutional guidelines for the care and use of animals were followed." F I G U R E 5 A-D, Images from case 9. A, First CT performed on hospital day 2 shows multiple ground-glass opacities and consolidation in the subpleural regions of bilateral lungs (red arrows); B, Second CT performed on hospital day 5 shows a significant decrease in the groundglass opacities and consolidation in bilateral lungs (red arrows); C, Third CT performed on hospital day 10 shows a further decrease in the ground-glass opacities in bilateral lungs (red arrows); D, Fourth CT performed on hospital day 17 shows no change in the extent of groundglass opacities in bilateral lungs (red arrows). E-H, Images from case 10. E, First CT performed on hospital day 6 shows multiple ground-glass opacities and consolidation in the subpleural regions of bilateral lungs (red arrows); F, Second CT performed on hospital day 12 shows a decrease in the ground-glass opacities and consolidation in bilateral lungs (red arrows); G, Third CT performed on hospital day 15 shows a significant decrease in the ground-glass opacities and consolidation in bilateral lungs (red arrows); H, Fourth CT performed on hospital day 19 shows a mild decrease in the ground-glass opacities in bilateral lungs (red arrows). I-L, Images from case 11. I, First CT performed on hospital day 2 shows multiple ground-glass opacities in bilateral lungs (red arrows); J, Second CT performed on hospital day 5 shows a decrease in the ground-glass opacities (red arrows); K, Third CT performed on hospital day 13 shows a significant decrease in the groundglass opacities (red arrows); L, Fourth CT performed on hospital day 21 shows a further decrease in the ground-glass opacities (red arrows)
2020-08-10T13:05:44.369Z
2020-08-08T00:00:00.000
{ "year": 2020, "sha1": "6bd813105e2f591d7158bf8d38676b6164e0d9ef", "oa_license": null, "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7435361", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "2fd23cd48ee14a25f72c6729d9d6ac0afcf50711", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
2048738
pes2o/s2orc
v3-fos-license
Free edges in epithelia as cues for motility One of the primary functions of any epithelium is to act as a barrier. To maintain integrity, epithelia migrate rapidly to cover wounds, and there is intense interest in understanding how wounds are detected. Numerous soluble factors are present in the wound environment and epithelia can sense the presence of adjacent denuded extracellular matrix. However, the presence of such cues is expected to be highly variable, and here we focus on the presence of edges in the epithelial sheets as a stimulus, since they are universally and continuously present in wounds. Using a novel tissue culture model, free edges in the absence of any other identifiable cues were found to trigger activation of the epidermal growth factor receptor and increase cell motility. Edges bordered by inert physical barriers do not activate the receptor, indicating that activation is related to mechanical factors rather than to specific cell cell interactions. O ne of the primary functions of any epithelium is to act as a barrier. To maintain integrity, epithelia migrate rapidly to cover wounds and there is intense interest in understanding how wounds are detected. Numerous soluble factors are present in the wound environment and epithelia can sense the presence of adjacent denuded extracellular matrix. However, the presence of such cues is expected to be highly variable, and here we focus on the presence of edges in the epithelial sheets as a stimulus, since they are universally and continuously present in wounds. Using a novel tissue culture model, free edges in the absence of any other identifiable cues were found to trigger activation of the epidermal growth factor receptor and increase cell motility. Edges bordered by inert physical barriers do not activate the receptor, indicating that activation is related to mechanical factors rather than to specific cell-cell interactions. The fundamental role of epithelia is to provide barriers between different compartments of the organism and to the outside environment. During development and in adulthood, epithelial cells employ their inherent ability to migrate as a collective sheet to generate or restore barrier function. Collective migration is essential for processes such as organogenesis and wound healing, and similar migratory mechanisms can go awry and contribute to cancer metastasis. Therefore, a considerable amount of research has been directed at understanding the cellular signals that initiate and sustain epithelial migration. [1][2][3] In numerous epithelia, the epidermal growth factor receptor (EGFR) is activated by wounding, and blocking the Free edges in epithelia as cues for motility Jes K. Klarlund* and Ethan R. Block Ophthalmology and Visual Sciences Research Center; the Eye and Ear Institute; University of Pittsburgh; Pittsburgh, PA USA activity of the receptor pharmacologically or by genetic techniques inhibits healing. Conversely, experimental stimulation of the EGFR results in enhancement of wound healing in many instances, underscoring the central role of the EGFR in the healing process. [4][5][6] Wounding induces proteolytic release of ligands, such as heparin-binding EGF-like growth factor (HB-EGF), from precursors located in the cell membrane in a mechanism that resembles EGFR transactivation by G-protein coupled receptors. 
[7][8][9] In a mammalian model of epithelial morphogenesis, eyelid closure in mice, epithelial sheet movement is also dependent on the proteolytic release of HB-EGF, which activates the EGFR. 10 Therefore, not only are the biomechanical processes that control epithelial movements during morphogenesis and wound healing similar, but the signals that induce this motility are similar as well. Given its importance, it is not surprising that many mechanisms have evolved to regulate epithelial wound healing. Starting immediately after wounding, the epithelium is inundated with a large number of growth factors and cytokines produced by bordering tissues and infiltrating inflammatory cells. 1,11,12 In addition, epithelial cells themselves possess mechanisms that detect the presence of wounds. Epithelial cells in a monolayer are not stationary, but appear to move around in a lively fashion, which could theoretically produce wound closure because the cells could simply fill up the space that is opened up after wounding. In support of this, computer modeling has shown that the behavior of individually randomly moving cells can approximate the observed collective migration as a sheet. 13 However, human corneal Some stimuli are expected to be present continuously as cell sheets migrate to cover a wound or during development (Fig. 1B). For instance, an epithelial cell sheet that migrates over a basement membrane is expected to constantly form new interactions with cell surface integrins, which is known to induce activation of the EGFR. 21 Blocking EGFR signaling at various times after infliction of wounds with either tyrphostin AG 1478 or neutralizing antibodies has shown that continuous activity of the EGFR is required for progression of wound closure. 14 A Model to Determine the Effects of Free Edges on Signaling in Epithelial Cell Sheets Wounds are very heterogeneous in nature, and the presence of individual stimuli is expected to be highly inconsistent. To decipher the roles of different stimuli, we and others have developed models that reduce the number of signaling inputs that may influence healing. 8,14,19,20 A common factor shared by epithelial sheets during wound healing and development is the presence of free edges, and we therefore created a new model to test the effects edges in the absence of other cues in epithelial cell sheets. 14 Petri dishes are coated wounds. This is clearly the case for the initial mechanical perturbation. Also, wounding induces an instant Ca 2+ signal at the edge, but the signal is extinguished after a couple of minutes. 15 Signaling by extracellular ATP is also likely to be transient. It is mainly generated from broken cells and is expected to be removed by exonucleases or washed out. Many of these immediate, transient stimuli can undoubtedly contribute to promote cell migration. For instance, stimulation of cells with ATP clearly induces activation of the EGFR, [16][17][18] and ATP accelerates healing when present continuously at high concentrations in the culture medium. However, no single one of these signals seems to be necessary for induction of movement. For instance, neither wounding sheets of epithelial cells under conditions that minimize cell breakage, 8,19,20 nor the effective removal of extracellular ATP with apyrase has any detectable effect on healing of wounds in sheets of corneal epithelial cells. 
18 In addition, the early activation of the EGFR, which occurs after a few seconds, is not absolutely required for induction of movement because blocking the receptor by a chemical inhibitor (tyrphostin AG 1478) at the time of wounding and subsequently washing it out at later time points does not impede healing. 14 limbal epithelial (HCLE) and other cells react to wounding by increasing their velocities near edges, 14 so they respond to wounds by changes in behavior and must therefore contain appropriate detection mechanisms. Different Roles of Stimuli during Wound Healing Tissue culture models have been useful in understanding molecular mechanisms in healing of wounds in epithelial cell sheets. Although some important aspects of wound healing are lost, for instance effects of blood-derived factors and other interactions with adjacent tissues, the models do reproduce the closure of gaps introduced in the cell layer and important features of signaling in the induction of movement are retained. Even in culture, wounding is a complex event and generates many potential stimuli that can be detected by cells. In the most commonly used model, scratching a cell layer with a pipette tip or similar instrument (Fig. 1A), there is inevitably cell breakage that results in release of intracellular components such as ATP. In addition, the initial trauma induces mechanical perturbation, the extracellular matrix is laid bare and free edges are created. Some of the potential stimuli may act only at the time of infliction of unfolding of proteins revealing cryptic sites that may serve to initiate signaling. 26 Candidate proteins are in many instances part of, or associated with the cytoskeleton. Actin-based protrusions could be associated with sensory functions and, for instance, filopodia have been suggested to have a major role in probing the environment. 27,28 Also, cells can sense mechanical signals in the plasma membrane through stress-activated ion channels, 29 and it is possible that the part of the cell membrane at the free edge has very different levels of tensions compared to the membrane of cells interior in the cell sheet. Clearly, a major focus for future research should be to identify the molecular sensor that triggers edge signaling. Perspectives Wounding typically induces many potential cues that promote damaged epithelia to migrate and cover areas that are laid bare. The cues commonly induce activation of the EGFR and they therefore seem to have at least partially redundant effects. Wounding is a messy event and the extent and duration of each cue is expected to be highly variable. We therefore speculated that the very presence of the EGFR appears downregulated, which is in agreement with the chronic nature of the stimulation. Cells near the edges migrate at increased velocities thus demonstrating a similar biological response as is seen after acute wounding. This shows that the presence of free edges in itself is a signal that is detected by the cells. Sensing Free Edges It is significant that the presence of edges that are mechanically constrained do not cause activation of the EGFR. When HCLE cells are physically constrained by agarose (Fig. 1D), the EGFR is not activated. Because edges constrained by these acellular barriers block activation, the free edge sensor is unlikely to be the absence of cell-cell interaction mediated by specific molecules as has been suggested in classical formulations of contact inhibition. 
with poly(2-hydroxyethyl methacrylate) (polyHEMA), which does not allow cell adherence and cells are seeded on 0.5 mm wide plastic strips cast on top of this layer (Figs. 1C and 2A). Examination by confocal microscopy reveals that the cells extend over the edges of the plastic strips and are thus physically unconstrained (Fig. 2B). In this model there is no acute cell damage and new adhesions cannot form with adjacent extracellular matrix. Also, activation is not due to extracellular ATP signaling or to breakdown of segregation of EGFR and its ligands at edges. As controls, cells are seeded on dishes that are totally coated with plastic, and signaling can therefore be compared in cultures that contain many free edges with cultures that contain no introduced edges. Using this model, we found that edges induce activation of the EGFR and its down-stream effectors extracellular signal-regulated kinases 1 and 2 (ERK1/2) in HCLE and MDCK cells (Fig. 2C and D). Activation results from proteolytic cleavage of precursors of ligands for the receptor, as is the case after acute wounding, and similarly to wounding, secretion of the ligands is under the control of Src family kinases. In HCLE cells,

Figure 2. A model to determine the effects of free edges in epithelial cell sheets (cf. Fig. 1C). 14 (A) Schematic of plates covered with polyHEMA and plastic strips. Light gray, polyHEMA; dark gray, plastic; inset, phase contrast microscopy of HCLE cells grown on plastic strips. (B) x-z section of a confocal image of HCLE cells at the edge of a plastic strip. The strips and polyHEMA were labeled with fluorophores (green and blue, respectively), and the cells were labeled with the membrane dye Vybrant DiD (depicted in red). (C) Immunoblot of extracts with an antibody against EGFR phosphorylated on Tyr-1173. The blots were stripped and reprobed with antibodies that recognize total amounts of the EGFR. The same blots were also probed with an antibody against β-actin as load control. (D) Immunoblot of extracts with an antibody against activated ERK1/2. The blots were stripped and reprobed with antibodies that recognize total amounts of ERK1. The columns in (C and D) are means of six determinations ±SD.

22,23 More likely, cells sense free edges by the lack of mechanical forces opposing the cells at the free edges, which can be provided by other cells in the cell sheets or by the presence of agarose barriers. It is increasingly apparent that cells are exquisitely responsive to mechanical forces. 24,25 At a molecular level, cells can sense the presence of forces by partial

edges, which per definition are always present in wounds, might in itself be a cue for induction of EGFR activation and migration. This is not a new thought; in 1915 Herbert W. Rand formulated the famous dictum that "an epithelium will not tolerate a free edge" 30 and that epithelia tend to move to form uninterrupted sheets during development or after wounding. Indeed, with very few exceptions epithelia in adult organisms are continuous. However, when edges form after wounding or during development, subsequent movement is now known to be guided by interplay of many types of cues. We developed a simple model that allows analysis of signaling induced by free edges in epithelial cell sheets with a minimal influence of other cues including molecules released from broken cells or formation of new connections to extracellular matrix. We found that the very presence of edges is sufficient to induce activation of the EGFR and to increase motility of cells. Although the details may vary (for instance, other receptor systems could be involved), this type of mechanism could explain the universal propensity of epithelia to migrate at edges. Elucidation of the signaling pathway that detects free edges should be valuable for answering questions concerning its roles. This will likely allow monitoring and specifically blocking the pathway even when other signals that trigger the EGFR are activated, because edges appear to induce unique intracellular signaling. For instance, exogenous ATP or interaction with extracellular matrix signals through Pyk2 (a kinase related to focal adhesion kinase 31 ) but free edges do not signal through Pyk2 (Block and Klarlund, 32 data not shown). Many questions need to be addressed: Is the pathway activated acutely after wounding? The model (Fig. 1C) reflects a chronic stimulation and it is possible that rearrangements within the cell layer after wounding are necessary for the appropriate tensions to develop to trigger signaling. The size and geometry of wounds are known to determine whether wounds heal by a formation of a contractile actin cable or by lamellipodia-dependent migration, which are regulated by the small GTPases Rho, Rac and Cdc42. Which GTPases are activated through the pathway, and can the signaling stimulate both mechanisms of healing? Finally, elucidation of the pathway should allow gene knockout strategies to study the role of the pathway in vivo. Chronic wounds are characterized by continued defects in epithelial coverage and since the model provides a persistent stimulation, it should be a useful in vitro tool to study defects in signaling at edges in such wounds. It is noteworthy that the EGFR is downregulated at the edges of the epidermis in chronic venous ulcers, 33 as is predicted by the model.

During acute wounding, or when movement of the epithelial sheet is progressing (Fig. 1A and B), additional stimuli are present that can further enhance motility and fine-tune the motile phenotype of the epithelial cells. For instance, the presence of edges alone results in slightly enhanced secretion of MMP9 in HCLE cells, but the levels of MMP9 production are greatly increased when the cells are allowed to spread on adjacent tissue culture space (Fig. 1C vs. B). 14 Production of MMP9 is known to be influenced by integrins binding to extracellular matrix proteins 34 and signals derived from such interactions are probable cues that induce maximal MMP9 production in combination with signals from the EGFR. The response to edges is quite possibly graded, and the level of activation of the pathway could be influenced by the mechanical properties of the tissues through which an epithelium migrates. The biological effects of matrix metalloproteases and other extracellular proteolytic enzymes in tumors is extremely complicated, but is thought overall to promote progression of malignancies. 35 Proteolysis is expected to soften the extracellular matrix in the microenvironment of the tumor cells, and decreased mechanical resistance could provide a mechanism that promotes activation of the EGFR in invading epithelial tumor cells. Many proteases are associated with the cell membrane, and such a mechanism could act very locally at the surface of the cells, and may act in conjunction with additional mechanisms that register other mechanical properties of the tumor environment. 36 Clearly, the existence of a detection mechanism implies that attention should be given to the mechanical properties at edges in epithelia.
2018-04-03T00:11:28.951Z
2011-03-01T00:00:00.000
{ "year": 2011, "sha1": "96047eceea1bd27c38823122c41aee12676fbd85", "oa_license": null, "oa_url": "https://www.tandfonline.com/doi/pdf/10.4161/cam.5.2.13728?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "cca0483dce34e95b99caad728a9067a596ce9cff", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
14941989
pes2o/s2orc
v3-fos-license
Estimating the malaria transmission of Plasmodium vivax based on serodiagnosis Background Plasmodium vivax re-emerged in 1993 and has now become a major public health problem during the summer season in South Korea. The aim of this study was to interpret and understand the meaning of seroepidemiological studies for developing the best malaria control programme in South Korea. Methods Blood samples were collected in Gimpo city, Paju city, Yeoncheon County, Cheorwon County and Goseong County of high risk area in South Korea. Microscopy was performed to identify patients infected with P. vivax. Antibody detection for P. vivax was performed using indirect fluorescent antibody test (IFAT). Results A total of 1,574 blood samples was collected from participants in the study areas and evaluated against three parameters: IFAT positive rate, annual antibody positive index (AAPI), and annual parasite index (API). The IFAT positive rate was 7.24% (n = 114). Of the five study areas, Gimpo had the highest IFAT positive rate (13.68%) and AAPI (4.63). Yeongcheon had the highest API in 2005 (2.06) while Gimpo had the highest API in 2006 (5.00). No correlation was observed between any of the three parameters and study sites' distance from the demilitarized zone (DMZ). Conclusions These results showed that P. vivax antibody levels could provide useful information about the prevalence of malaria in endemic areas. Furthermore, AAPI results for each year showed a closer relationship to API the following year than the API of the same year and thus could be helpful in predicting malaria transmission risks. Background Plasmodium vivax is the causative agent of relapsing benign tertian human malaria and is the second-most important human malaria that annually afflicts several hundred million people. The disease is a major public health problem and has socio-economic ramifications for many temperate and tropical countries [1]. While vivax malaria has been reported throughout the Korean peninsula for several centuries, it was not until 1913 that the first scientific document was published. At that time, malaria occurred throughout the country without recognizable geographical differences [2]. The incidence of vivax malaria decreased rapidly as a result of economic improvement following the Korean War, a national malaria eradication programme, and assistance from the World Health Organization (WHO) [3,4]. As the last two sporadic cases detected in the 1980s were believed to be the result of latent malaria parasites transmitted the previous year [5], vivax malaria was reported to have been eradicated in South Korea by the late 1970s [6]. In 1993, a South Korean army soldier serving in northern Gyeonggi Province, with no travel history, was diagnosed with vivax malaria [7]. Subsequently, Cho et al. reported two civilian patients infected with vivax malaria [8]. By 2005, a total of 21,419 indigenous vivax malaria cases had been confirmed in South Korea, and a total of at least 937,634 vivax malaria cases had been reported from the entire Korean peninsula, both South and North Korea. The number of vivax malaria cases peaked in South Korea in 2000 with 8.9 cases/100,000, followed by a sharp decline of approximately 26-40% per annum to 1.8 and 2.9 cases/100,000 in 2004 and 2005, respectively [9]. The highest malaria cases centred around Paju, Yeoncheon, Cheorwon, Gimpo, Ganghwa, Goyang, and Dongducheon near the demilitarized zone (DMZ) separating North and South Korea. 
Following the re-emergence of malaria, subsequent high indigenous transmission rates and population movement caused great concern because of the increased geographical expansion potential [10]. Serological data obtained by an indirect fluorescent antibody test (IFAT) may provide useful for levels of malaria endemicity, as well as the time period of infection [11]. Of blood samples from 845 participants, who were residents of Gimpo from November to December 1998, 24 were positive for malaria antibodies by IFAT. Four seropositive participants (16.7%) developed malaria the following year. In 1999, 125 of 5,797 participants from the same area were seropositive by IFAT of which 12.8% (16/125) were positive for malaria parasites by polymerase chain reaction (PCR). Serological surveys have provided valuable epidemiological information, especially in areas with low level of endemicity [12,13]. The rate of parasitaemia is the classical method for measuring the endemicity of malaria, however, the incidence of parasitaemia alone fails to provide an adequate description of the occurrence of malaria in a population. Therefore, the incidence of malaria is low, the application of IFAT could be used to more accurately reflect the malaria situation in a particular area [14,15]. In this study, antibody-positive rates using IFAT were obtained from malaria high-risk areas near the DMZ and the results compared with the malaria prevalence in those areas during the year of the survey and the following year. Study areas and blood sample collection The study sites were within 20 km of the DMZ and are shown in Figure 1. The study was conducted in Gimpo ( Figure 1A), Paju ( Figure 1B), and Yeoncheon ( Figure 1C) of Gyeonggi Province, and Cheorwon ( Figure 1D) and Goseong ( Figure 1E) of Gangwon Province, South Korea, from late October to mid-December 2005. Blood samples were collected from participants residing in 35 villages in three cities located in Gyeonggi Province, and nine villages in two cities in Gangwon Province. A total of 1,574 blood samples (1.43%) were collected from 110,424 inhabitants of the study areas in 2005. All participants were volunteers enrolled by providing verbal informed consent. Blood samples were collected randomly from volunteers and ruled out those who had previously been infected with malaria. Three ml of blood was collected from each individual and thin and thick blood smears prepared for microscopic examination (magnification 7 x 100). The blood samples were transferred to the Korean National Institute of Health (KNIH), Korea Centers for Disease Control and Prevention (KCDC), where blood and sera were separated and stored at −20°C for future antibody analysis. The study protocol was reviewed and approved by the KNIH Human Ethics Committee. Indirect fluorescent antibody test To test for antibodies against malaria, IFAT was performed with the whole blood infected with P. vivax [16][17][18]. Briefly, malaria parasite infected blood was collected from patients. Plasma and white blood cells were removed and red blood cells suspended in phosphate buffered saline (PBS, pH 7.2). Samples were centrifuged for 5 min at 2,500 rpm. The supernatant was discarded and the cells were resuspended in fresh PBS. This wash step was repeated three more times, and then an appropriate amount of PBS was added. Red blood cells were mounted in each well of Teflon-coated slides, dried for 12 hr at room temperature, and then stored at −70°C. To determine the antibody titres against P. 
vivax of each individual, the antigen slides were fixed in pre-cooled acetone (−20°C) for 10 min, washed with PBS, and then 20 μl of diluted sera, 1:32 to 1:8,192 (vol/vol), was added to each well. Positive and negative controls were also spotted onto each slide, and the slides were incubated in a humidified chamber for 30 min at 37°C. The reactions were quenched by washing the reacted sera with PBS for 6 min, and then the samples were dried at room temperature. Diluted FITC conjugated anti-human IgG (Sigma, 1:32 vol/vol in PBS) was added to each well and incubated and washed as described above. Several drops of buffered glycerol were added to the samples, coverslips applied and then examined under a fluorescence microscope at 400x. Calculation of annual parasite index (API) API was calculated as the number of malaria-positive patients per 1,000 inhabitants at each of the study sites, API = (number of positive slides/total number of slides) × 1000 [19]. Calculation of annual antibody positive index (AAPI) AAPI was calculated as the number of malaria antibody positive participants per 1,000 inhabitants for each of the study sites, AAPI = (number of antibody positive slides/total number of slides) × 1000. Data analysis The antibody-positive individuals who had been patients in previous years were excluded during data analysis. The relationship of distance from the DMZ as related to IFAT positive rate, API and AAPI were analysed by Pearson's chi-squared test. The relationship between AAPI, API, and IFAT positive rate for each village was analysed by two-way ANOVA. Data analyses were performed using GraphPad (GraphPad Software, Inc., La Jolla, CA, USA). Comparison of annual parasite index (API) results from 2005 and 2006 Total Figure 1E). Comparison between each parameter and distance from the DMZ Vivax malaria in South Korea is characterized as border malaria [10,21]. It was tried to confirm this hypothesis by investigating whether the distance from the DMZ to each study sites influenced the IFAT positive rate, AAPI, or API in 2005 and 2006. In 2005, in the study areas of Gimpo, the positive rates for IFAT (r 2 = 0.9607, P = 0.1271) and API (r 2 = 0.1897, P = 0.7131) increased with distance from DMZ, and in 2006, AAPI (r 2 = 0.4931, P = 0.5044) and API (r 2 = 0.9411, P = 0.1560) decreased with distance. However, none of the parameters was significant (P > 0.05). In 2006, the study areas of Paju, the positive rates for for IFAT (r 2 = 0.0497, P = 0.8569), AAPI (r 2 = 0.7473, P = 0.3353) and API (r 2 = 0.6943, P = 0.3729) decreased with distance from the DMZ. Only 2005 results showed significant increase of API with proximity to the DMZ (r 2 = 0.9963, P = 0.0386) (P < 0.05). AAPI and API (2005 and 2006) decreased with distance from the DMZ in the study areas of Yeoncheon. Only the IFAT positive rate increased with proximity to the DMZ. 2005 AAPI (r 2 = 0.4539, P = 0.2124), 2005 API (r 2 = 0.4068, P = 0.2469), and the 2006 API rates (r 2 = 0.5556, P = 0.1482) decreased with distance from the DMZ in the study areas of Cheorwon. Only positive rates of IFAT (r 2 = 0.2542, P = 0.3864) increased, but the data were not significance (P > 0.05). Discussion The highest prevalence of malaria in South Korea was observed for Paju and Yeoncheon, in Gyeonggi Province, both of which are located within 10-15 km of the southern border of the DMZ [10,21]. Therefore, it was tried to determine the malaria transmission pattern in the malaria-prevalent areas: Gimpo, Paju, Yeoncheon, Cheorwon, and Goseong. 
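Before turning to the village-level findings, the two indices defined in the Methods above translate directly into code. The following minimal sketch is meant only to make the definitions explicit: the function names are ours, the formulas follow the text, and the single worked value simply reproduces the study-wide IFAT positive rate quoted above.

```python
# Minimal sketch of the indices defined above; formulas follow the text.
def annual_parasite_index(positive_slides: int, total_slides: int) -> float:
    """API: malaria-positive slides per 1,000 slides examined."""
    return positive_slides / total_slides * 1000

def annual_antibody_positive_index(antibody_positive_slides: int, total_slides: int) -> float:
    """AAPI: antibody-positive slides per 1,000 slides examined."""
    return antibody_positive_slides / total_slides * 1000

def ifat_positive_rate(antibody_positive: int, participants: int) -> float:
    """IFAT positive rate expressed as a percentage of participants."""
    return antibody_positive / participants * 100

# 114 antibody-positive participants out of 1,574 samples, as reported above.
print(round(ifat_positive_rate(114, 1574), 2))  # 7.24
```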
Based on the data, Tongjin village in Gimpo may be dominated by locally transmitted malaria, because, in 2005, the IFAT positive (17.65%) and API (1.21) indices for that village were much higher than those for Wolgot village (5.70 km) and Haseong village (5.00 km), which are closer to the DMZ (8.00 km). However, Wolgot village had relatively high values for all three parameters and was closest to the DMZ, suggesting that the malaria transmission increased near the border. One of the main rice fields is located in Gimpo, South Korea where there are many places for Anopheles to breed, which suggests the main reason for the high malaria prevalence (Additional file 2). Gunnae village of Paju showed a higher IFAT positive rate and higher AAPI than Musan village or Tanhyun village, but there were no malaria patients reported in 2005 (API = 0.00). However, the API was 5.84 in 2006. This suggests that antibodypositive rates might be useful for predicting the malaria transmission in subsequent years. A new approach to interpreting AAPI results would be helpful to more accurately estimate community immunity in order to predict malaria transmission in epidemic areas (Additional file 3). In 2005, the AAPI (0.71) and API (3.41) of Yeoncheon village, located close to DMZ, exhibited higher values than did Cheongsan village in Yeoncheon, suggesting that Yeoncheon may have been directly affected by mosquitoes along the border with North Korea (Additional file 4). Similarly, the AAPI and API of villages in Cheorwon decreased as distance increased from the DMZ. Geunnam second-highest value for AAPI, and is located close to the DMZ (Additional file 5). When it was considered the relationship between AAPI and API, AAPI results showed significantly greater correlation with API in 2006 (P = 0.0003) than in 2005 (P = 0.7436). From this, it was concluded that the AAPI results on any given year may indicate the malaria transmission pattern for the following year. Because it would be helpful to be able to predict the malaria transmission in following years, this hypothesis should be tested for the purpose of developing diagnostic tools to aid in eradicating malaria in Korea. AAPI results may correlate with the long incubation period of malaria in patients, when it is taken into consideration the time of blood collection, as adult mosquitoes are not present or in hibernation during the cold winter season. In humans, the malaria parasites are dormant in the liver as hypnozoites, with development usually initiated during the following mosquito season. Going forward, it must be determined the AAPI before malaria transmission occurs to determine precisely what this index offers. IFAT was applied in this study because serological data provide useful evidence regarding the extent and degree of malaria endemicity, and reflect the period of infection [11], particularly in areas with low endemicity [13]. Additionally, sensitivity and specificity of IFAT analysis are much higher than enzyme linked immunosorbent assay (ELISA) of merozoite surface protein-1 (MSP-1) and circumsporozoite surface protein (CSP) in previous data [20]. The parasitaemia rate is the classical method for measuring malaria endemicity, but incidence of parasitaemia alone cannot provide a complete and adequate description of malaria infections among affected populations. When the incidence of malaria is low, mass blood surveys based on microscopic examination do not yield results commensurate with the work involved [14]. 
Therefore, the application of IFAT could be utilized to more accurately reflect the malaria situation in a given population [15]. If non-latent cases are observed in early June, sporozoites infection from mosquitoes to humans would have occurred in mid-May of the same year. During April, mosquitoes are usually infected with gametocytes by blood meal from latent patients and would have to have developed sporozoites in the salivary gland by mid-May to be infective to human beings. Malaria cases observed in May are largely due to latent malaria cases from the previous year [22]. The most abundant anopheline mosquitoes captured in the high-risk areas for malaria near the DMZ in northern Gyeonggi province were Anopheles sinensis (63.3%), Anopheles kleini (24.7%), and Anopheles pullus (8.7%) [23]. The incidence of malaria peaks in August after the rainy season and declines to baseline by mid-October. Blood collections were conducted between late October and mid-December when the active adult anopheline population has disappeared. It was conducted the same experimental design from 1996 to 1998, during the early stage of re-emergence of vivax malaria in South Korea, and the IFAT-determined antibody-positive rates in the study areas of Paju, Yeoncheon, and Cheorwon were highly influenced by the proximity to the DMZ. As some of the study areas far from the DMZ showed high positive rates for IFAT, AAPI, and API in 2005 and 2006, it appears that the pattern of vivax malaria transmission has changed and is no longer solely affected by border conditions. In other words, the causative source of malaria transmission is now located inside South Korea.
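As a small illustration of the relationship discussed above — AAPI in one year tracking API in the following year more closely than API in the same year — the sketch below computes year-to-year correlations on invented village-level values. The arrays are placeholders rather than the study data, and Pearson correlation is used here only as one plausible way to express the relationship examined in the paper.

```python
# Illustrative comparison of AAPI in year t against API in years t and t+1.
# All values below are invented placeholders for five hypothetical villages.
from scipy.stats import pearsonr

aapi_2005 = [4.6, 0.7, 1.2, 0.9, 0.4]   # hypothetical village-level AAPI, 2005
api_2005  = [1.2, 0.0, 0.5, 0.8, 0.3]   # hypothetical API, same year
api_2006  = [5.0, 5.8, 2.1, 1.5, 0.6]   # hypothetical API, following year

for label, api in (("same year", api_2005), ("following year", api_2006)):
    r, p = pearsonr(aapi_2005, api)
    print(f"AAPI 2005 vs API {label}: r = {r:.2f}, p = {p:.3f}")
```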
2017-04-04T18:19:51.342Z
2012-08-01T00:00:00.000
{ "year": 2012, "sha1": "e0fe92adf798c38765ea6a7cc91fbb510f688e3b", "oa_license": "CCBY", "oa_url": "https://malariajournal.biomedcentral.com/track/pdf/10.1186/1475-2875-11-257", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5c0200c31411f7c8c4b5f17cca814a9337637cef", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
13303176
pes2o/s2orc
v3-fos-license
Effect of thorax correction exercises on flexed posture and chest function in older women with age-related hyperkyphosis [Purpose] The purpose of this study was to determine the effects of thorax correction exercises on flexed posture and chest function in older women with age-related hyperkyphosis. [Subjects and Methods] The study participants included 41 elderly women who were divided into a thorax correction exercise group (n = 20) and a control group (n = 21). Participants in the exercise group completed a specific exercise program that included breathing correction, thorax mobility, thorax stability, and thorax alignment training performed twice per week, 1 hour each session, for 8 weeks. Outcome measures included the flexed posture (thoracic kyphosis angle, forward head posture) and chest function (vital capacity, forced expiratory volume in a second, and chest expansion length). [Results] Participants in the thorax correction exercise group demonstrated significantly greater improvements in thoracic kyphosis angle, forward head, and chest expansion than those in the control group. [Conclusion] This study provides a promising exercise intervention that may improve flexed posture and chest function in older women with age-related hyperkyphosis. INTRODUCTION Age-related hyperkyphosis (ARH) is an exaggerated anterior curvature of the thoracic spine that is associated with aging and frequently observed in older women 1) . ARH occurs commonly in women older than 55 years, regardless of vertebral fractures, with the incidence increasing 6-11% for every 10 years of increasing age 2) . ARH is multifactorial, and although the precise etiology is not fully understood, poor posture, dehydration of the intervertebral disks, and reduced back extensor muscle strength have been reported as general causes of ARH 1,3) . Elderly women with ARH often report difficulties with physical performance because of changes in vertebral column alignment, which affects the quality of life of patients in relation to their health 4) . In addition, ARH limits the movement of the rib cage, which is connected to the thoracic spine, resulting in difficulties with pulmonary function 5) . Horie et al. 5) reported that the thoracic vertebrae forms costotransverse joints with the head of the rib. That are directly connected to the rib cage, suggesting there is a close relationship with the expansion capabilities of the rib cage during inhalation. Although changes in vertebral column alignment in elderly women are clinically important to health and quality of life, ARH is considered important to only a small number of clinicians, and a basic protocol for the treatment of ARH has only recently been released 6) . Therapeutic exercises for ARH include back extensor strengthening and flexibility exercises as well as postural training to improve postural awareness 7) . However, most currently available exercise strategies apply only a single exercise or combine exercises that target the whole body but do not focus on the thorax (i.e., the main structure that is deformed because of kyphosis). In addition, most studies have concentrated on measurement of the thoracic kyphosis angle, muscle strength, range of motion, and physical performance after an intervention 7,8) , and few studies have been conducted on measurement of chest functions. 
Thus, this study applied a thorax correction exercise program for 8 weeks in elderly women with ARH aged more than 65 years old and verified the effect of the exercises on flexed posture and chest function with the purpose of providing a therapeutic intervention that can manage and prevent the health risks of women with ARH. SUBJECTS AND METHODS This study was conducted on 41 elderly women with ARH aged over 65 years living in D Metropolitan City (experimental group, 20 women; control group, 21 women). Original Article The selection criteria were as follows: thoracic kyphosis angle of more than 45°, reduced respiratory function (forced vital capacity [FVC] <80% predicted value) with <3 cm of chest expansion, ability to decrease kyphosis by 5° or more in the standing posture, and ability to walk independently both on flat surfaces and ascending stairs. Individuals were excluded if they were diagnosed with vertebral compression fractures within 6 months of the study, they had a cardiac or respiratory disease, or they were unable to perform the exercise program because of mental problems (e.g., dementia) or reduced cognitive ability (Mini-Mental State Examination score ≤24). All included patients understood the purpose of this study and provided written informed consent prior to participation in accordance with the ethical standards of the Declaration of Helsinki. Each measurement was conducted twice in 2 postures: usual and best straight postures. The usual posture was defined as a relaxed posture, and the measurements were taken at the time of full exhalation. On the other hand, the best straight posture was the optimal posture during full inhalation, in which the participants exhibited their tallest height and the spine was as straight as possible. Thoracic kyphosis was measured using 2 gravity-dependent inclinometers (Isomed Inc., Portland, OR, USA) placed over the spinal processes of T1 and T2, and over the T12 and L1 vertebrae. The thoracic kyphosis angle was measured and recorded by checking the angles displayed on the inclinometers 8) . Forward head posture was measured as the distance between the wall and the tip of the tragus. To prevent the participants' trunks from leaning forward, their heels were attached to a 5-cm block while their knees were extended as much as possible and their heads were maintained at the neutral position during measurement. Chest functions were assessed using the flow volume and chest expansion length, with the flow volume measured as the vital capacity (VC) and forced expiratory volume in 1 second (FEV1) using an electronic spirometer (Jaeger MaterScope, CareFusion, Wurzburg, Germany). To increase the measurement accuracy, a nasal clip and mouthpiece were applied to avoid air leaking during the measurements. To measure VC and FEV1, the participants started by breathing at rest after the completion of zero adjustment. Participants then inhaled slowly to within their maximal range and exhaled with their full force until they were instructed to stop by the sound of a beeper. Participants repeated several trials of this measurement, and all measurements were conducted while the participants were comfortably seated in a chair with arm and back rests. In addition, to evaluate the chest expansion length, the difference between the circumference of the thorax between inhalation and exhalation in the maximal range was measured at the level of the tenth rib using a flat measuring tape 9) . 
All measurements were conducted 3 times, and the average was used as the final value. The experimental group was required to participate in the thorax correction exercises for 60 minutes in total, consisting of a 5-minute warm-up and 5-minute cool-down period and 50 minutes of primary exercises. The exercises were conducted twice a week for 8 weeks in a group exercise setting. With respect to the control group, the participants were required to participate in a daily exercise training program consisting of the same exercises as the experiment group. The exercises were provided to the participants in a booklet that contained all of the exercise-related information. The thorax correction exercise aimed to correct the thorax, which is the structure most affected by thoracic kyphosis, in contrast to existing thorax improvement exercise. The program consisted of 4 sub-exercises including 5 minutes of breathing correction, 15 minutes of thorax mobility, 20 minutes of thorax stability, and 10 minutes of thorax alignment reorganization exercise. The 8-week exercise program was structured to include an adjustment phase for 1-2 weeks, an improvement phase for 3-5 weeks, and a maintenance phase for 6-7 weeks with the aim of gradually improving posture and strength. Exercises to strengthen the back utilized elastic bands that were applied according to the principle of high-intensity progressive resistance exercise. The elastic band (Thera-Band, Hygenic Corporation, Akron, OH, USA) used in this study was a yellow band (15 cm wide, 70 cm long) with 1.8 kg of resistance. When participants were able to perform the exercise using the band for 3 sets of 8 repetitions without any pain and discomfort, the yellow band was replaced with a red band with 2.7 kg of resistance 10) . The collected data were statistically processed using SPSS version 18.0; general participant characteristics are presented as means and standard deviations using descriptive statistics analyses. The Shapiro-Wilk test was conducted to verify normality. To compare the flexed posture and chest function variables between the experimental and control groups before and after the exercise program, a paired-sample t-test was conducted, whereas an independent sample t-test was conducted to compare differences between groups. The statistical significance level for hypothesis testing was set at α =0.05. RESULTS The mean age, height, and weight of the experimental group (n=20) were 73.8 years, 150.4 cm, and 57.7 kg respectively, whereas the values in the control group were 76.4 years, 151.1 cm, and 54.4 kg, respectively, with no statistical differences between the treatment groups (Table 1). Regarding differences in flexed posture variables between the 2 groups before and after the intervention, statistically significant differences were noted for the kyphosis angle and forward head variables for the best and usual postures (p<0.05; Table 2). VC and FEV1 were not significantly different between the 2 groups before and after the intervention (p>0.05), whereas the chest expansion length was statistically different between the groups (p<0.05; Table 3). DISCUSSION This study was conducted with elderly women whose thoracic kyphosis had increased more than 45° to determine the effect of an 8-week thorax correction exercise program on flexed postures and chest functions. The results illustrated that the thorax correction exercises were effective in improving the flexed postures and chest expansion ability. 
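For readers who want to reproduce the analysis plan laid out in the Methods above (Shapiro-Wilk normality check, paired t-test within each group, independent t-test on the pre-to-post change between groups), a minimal sketch follows. The arrays contain invented kyphosis-angle values, not the study measurements.

```python
# Minimal sketch of the statistical plan described above, on hypothetical data.
import numpy as np
from scipy import stats

pre_exp  = np.array([52.0, 48.5, 55.0, 50.0, 49.0])   # hypothetical kyphosis angles, exercise group
post_exp = np.array([48.0, 45.5, 51.0, 47.5, 46.0])
pre_ctl  = np.array([51.0, 49.0, 53.5, 50.5, 48.0])   # hypothetical kyphosis angles, control group
post_ctl = np.array([50.5, 49.5, 53.0, 50.0, 48.5])

print(stats.shapiro(post_exp - pre_exp))                        # normality of the change scores
print(stats.ttest_rel(pre_exp, post_exp))                       # within-group (paired) comparison
print(stats.ttest_ind(post_exp - pre_exp, post_ctl - pre_ctl))  # between-group comparison of changes
```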
Thoracic kyphosis was improved by 3.45° in the usual posture and 3.50° in the best posture after the intervention. A study by Katzman et al. 7) , who aimed to improve multiple musculoskeletal impairments in elderly women with hyperkyphosis, reported that a 12-week complex exercise program including flexibility and strength exercises of the joints and muscles in the upper and lower extremities decreased thoracic kyphosis by 6° in the usual posture and 5° in the best posture. Greendale et al. 10) also aimed to improve hyperkyphosis and reported improvements of thoracic kyphosis when Hatha yoga was applied 3 times a week for 24 months. This suggests that exercise methods designed to improve the symptoms of thoracic kyphosis are conducive to improving the structural alignment and stiffness of the thorax as well as to correcting thorax position. In the forward head posture, the experimental group experienced improvements in thoracic kyphosis of 11.63% in the usual posture and 12.28% in the best posture after the intervention. Balzini et al. 4) explained that increased hyperkyphosis and forward head postures were the characteristics of flexed postures of elderly women, and the improvement of forward head posture improved the flexed positions. In addition, a 24.27% improvement in chest expansion, which refers to the mobility of the rib cage in chest functions, was found. However, VC and FEV1 were not significantly different between and within groups before and after the intervention. Horie et al. 5) measured respiratory function using FVC and % FVC and reported that the lumbar lordosis angle was more related to vertebral column alignment than thoracic kyphosis. In the present study, although the thorax improvement exercise program was not sufficient for improving flow volume, it was deemed effective in improving mobility of the rib cage. The exercise program described in this study was developed to improve the mobility and stability of the ribs and thorax, focusing on anatomical and kinematic improvements in the rib cage, where the structural deformity of hyperkyphosis was most prominent. Itoi and Sinaki 12) conducted a back extensor muscle strengthening exercise intervention with hyperkyphotic elderly women in which participants wore a backpack in the prone position, however, this exercise only focused on strengthening the back extensor muscles. Furthermore, a study by Greendale et al. 11) applied Hatha yoga, consisting of 4 postures, which aimed to strengthen the core and extensor muscles, improve the flexibility of the muscles around the shoulder and hips, and teach the participants postures. However, these exercises were not able to elicit a specific effect on thorax correction because they were not specific to the affected part of the body. In response to this, the present study implemented a thorax correction exercise program that considered the structural characteristics of the participants, with a focus on the rib cage. There were several limitations associated with the present study. First, it is difficult to generalize the results because they focus exclusively on women over the age of 65 years living in a single region. In addition, the physical activities of the participants other than the prescribed 8-week exercise program were not controlled. Therefore, our future work will focus on expanding this project to males and other regions as well as on incorporating elderly participants with different types of hyperkyphosis. 
Furthermore, the application of a variety of intervention programs should also be studied in future research. Moreover, dynamic postural analysis and functional performance evaluation, aside from static postures, need to be included so that the effect of thorax correction exercises can be verified from a variety of perspectives. In summary, the exercise methods developed in the present study can be recommended for improving the mobility of the rib cage and postures through specialized exercises focused on thorax posture correction in elderly women with hyperkyphosis.
2016-05-14T09:54:16.299Z
2015-04-01T00:00:00.000
{ "year": 2015, "sha1": "c4235596348d5023889538a515e734536f689f21", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/jpts/27/4/27_JPTS-2014-706/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c4235596348d5023889538a515e734536f689f21", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248722481
pes2o/s2orc
v3-fos-license
A Pilot Study of the Efficacy and Economical Sustainability of Acute Coronavirus Disease 2019 Patient Management in an Outpatient Setting Objective To report a preliminary experience of outpatient management of patients with Coronavirus disease 2019 (COVID-19) through an innovative approach of healthcare delivery. Patients and Methods Patients evaluated at the Mild-to-Moderate COVID-19 Outpatient clinics (MMCOs) of San Raffaele University Hospital and Luigi Sacco University Hospital in Milan, Italy, from 1 October 2020 to 31 October 2021 were included. Patients were referred by general practitioners (GPs), Emergency Department (ED) physicians or hospital specialists (HS) in case of moderate COVID-19. A classification and regression tree (CART) model predicting ED referral by MMCO physicians was developed to aid GPs identify those deserving immediate ED admission. Cost-effectiveness analysis was also performed. Results A total of 660 patients were included. The majority (70%) was referred by GPs, 21% by the ED and 9% by HS. Patients referred by GPs had more severe disease as assessed by peripheral oxygen saturation (SpO2), ratio of arterial oxygen partial pressure to fractional inspired oxygen (PaO2/FiO2), C-reactive protein (CRP) levels and interstitial involvement at lung ultrasound. Among them, 18% were addressed to the ED following MMCO assessment. CART analysis identified three independent predictors, namely home-measured SpO2, age and body mass index (BMI), that robustly divide patients into risk groups of COVID-19 severity. Home-measured SpO2 < 95% and BMI ≥ 33 Kg/m2 defined the high-risk group. The model yielded an accuracy (95% CI) of 83 (77–88)%. Outpatient management of COVID-19 patients allowed the national healthcare system to spare 1,490,422.05 € when compared with inpatient care. Conclusion Mild-to-moderate COVID-19 outpatient clinics were effective and sustainable in managing COVID-19 patients and allowed to alleviate pressure on EDs and hospital wards, favoring effort redirection toward non-COVID-19 patients. INTRODUCTION Coronavirus disease 2019 (COVID- 19) pandemic has posed significant challenges on healthcare systems worldwide due to an overwhelming surge of patients simultaneously seeking medical care (1,2). Emergency departments (ED) probably suffered the most, being the bottleneck of patients with acute disease, often regardless of symptom severity. In fact, a proportion of patients presenting to the ED had mild to moderate clinical features not requiring urgent care or hospital admission (3,4). While patients with mild disease and no risk factors for progression may benefit from medical assistance by general practitioners (GPs), those with moderate COVID-19 or harboring risk factors for adverse outcomes reside in a gray area between in-hospital and home management (5). Within the latter patient category, GPs may not have the tools to discriminate nor handle patients deserving more attentive monitoring. On the other hand, unfiltered hospital referral of these patients may cause unjustified ED overcrowding and saturation of hospital beds. Accurate patient evaluation in a hospital-based outpatient setting by expert physicians may fill this gap, allowing for timely risk classification and informed management decision-making. 
On the heels of the first pandemic wave and with the belief that some measure had to be taken to avoid system collapse, health policymakers of Lombardy region in Italy at the beginning of the second wave designed an integrated approach of healthcare delivery, called "Hot Spot" or "Mild-to-moderate COVID-19 outpatient clinic" (MMCO), based on the strict, bidirectional collaboration with GPs and the ED (5). One year after the introduction of this novel service, here we describe our preliminary experience of patient management at two MMCOs of the metropolitan city of Milan, specifically those of San Raffaele University Hospital and Luigi Sacco University Hospital. Moreover, we provide an evidence-based tool for patient classification into risk groups by the GP beforehand, to identify patients deserving early ED referral with the aims of optimizing patient management and spare resources. Mild-to-Moderate COVID-19 Outpatient Clinic Organization and Patient Referral Mild-to-moderate COVID-19 outpatient clinic organization and the process of patient flow from referral to discharge is described in Figure 1. MMCOs are located within hospitals, in a strategic location that is both easy-to-reach by patients and in direct connection with the ED. This innovative healthcare service is addressed to two different categories of patients with nasopharyngeal swab-confirmed infection: (i) patients with moderate COVID-19 and (ii) patients at increased risk of adverse outcome due to pre-existing risk factors independent of COVID-19 severity. Both categories may need active surveillance and management by physicians with an established expertise in treating COVID-19 and its complications. Patients can be referred to the MMCO by GPs, ED physicians or hospital specialists (HS) through direct telephone call to the MMCO physician at a dedicated mobile number, which is active 12 h per day, 7 days per week. Mild-to-moderate COVID-19 outpatient clinic physicians are internal medicine doctors. Criteria for referral to MMCOs of patients with moderate COVID-19 by GPs are derived from official regional regulations (5,6). Prior to patient evaluation at the MMCO, the GP provides the MMCO physician with a comprehensive report, in the form of a standardized questionnaire (Supplementary Figure 1), on the patient's past medical history, COVID-19-related symptoms, time from symptom onset, peripheral oxygen saturation (SpO 2 ), body mass index (BMI), chronic therapies and COVID-19specific treatments. Criteria for referral of include: (i) age ≥65 years in the presence of body temperature ≥38 • C and at least two comorbidities among obesity, active cancer, chronic kidney disease (CKD), chronic respiratory disease, immunosuppression, ischemic heart disease (IHD), diabetes mellitus (DM), coagulopathy, history of immunosuppression or organ transplant, HIV infection and cerebrovascular disease (CVD); (ii) body temperature ≥38 • C for longer than 72 h; (iii) SpO 2 between 90 and 94% (or between 88 and 90% in case of history of chronic obstructive pulmonary disease). Emergency department physicians may refer patients who may benefit from prolonged monitoring in a hospital-based setting following clinical stabilization, with the dual purpose of relieving the ED from excessive burden and limiting hospitalization rates, while feeling at ease discharging patients with a non-negligible risk of disease evolution. 
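The GP referral criteria enumerated above lend themselves to a simple checklist. The sketch below only illustrates that logic: the function and parameter names are ours, the comorbidity list is condensed, and the official regional regulations remain the authoritative reference.

```python
# Rough sketch of the GP-to-MMCO referral criteria listed above (illustrative only).
RISK_COMORBIDITIES = {
    "obesity", "active cancer", "chronic kidney disease", "chronic respiratory disease",
    "immunosuppression or transplant", "ischemic heart disease", "diabetes mellitus",
    "coagulopathy", "HIV infection", "cerebrovascular disease",
}

def gp_should_refer_to_mmco(age: int, temp_c: float, fever_hours: float,
                            spo2: float, comorbidities: set, has_copd: bool = False) -> bool:
    n_risk = len(comorbidities & RISK_COMORBIDITIES)
    if age >= 65 and temp_c >= 38 and n_risk >= 2:     # criterion (i)
        return True
    if temp_c >= 38 and fever_hours > 72:              # criterion (ii)
        return True
    spo2_low, spo2_high = (88, 90) if has_copd else (90, 94)
    if spo2_low <= spo2 <= spo2_high:                  # criterion (iii)
        return True
    return False

print(gp_should_refer_to_mmco(70, 38.5, 24, 96, {"obesity", "diabetes mellitus"}))  # True (criterion i)
```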
Hospital specialists, usually hematologists or oncologists, may especially benefit from extending referral to asymptomatic or mild COVID-19 patients in case of pre-existing risk factors for poor clinical outcome (i.e., cancer or other frailty conditions). Following the first evaluation, patients may either be discharged from the MMCO and redirected to GP care, or be addressed to the ED in case of severe COVID-19 requiring more intense care, or be scheduled for further visits for a prolonged monitoring at the MMCO. Specifically, active surveillance at the MMCO consists of serial visits, at varying time intervals depending on disease severity, until disease stabilization or complete recovery. Patient Evaluation at Mild-to-Moderate COVID-19 Outpatient Clinics The first visit at MMCO comprises a comprehensive physical examination with vital sign assessment (SpO 2 , heart and respiratory rates, blood pressure, body temperature, blood glucose) and measurement of anthropometric parameters including weight, height and waist circumference. Data about past and COVID-19-related medical history are accurately collected, integrating patient interview with the GP's questionnaire and available medical records. Lung assessment relies on lung ultrasound (LUS) imaging. In addition to being easy and rapid to perform, LUS has higher sensitivity and specificity for lung parenchymal abnormalities than chest X-rays (7)(8)(9). Moreover, it can be performed at bedside and bears no radiological hazard (8). Through LUS, signs of interstitial lung disease including white lung pattern suggestive of more severe involvement and parenchymal consolidations may be detected. Also, LUS allows to calculate the Lung UltraSound Score (LUSS), a semi-quantitative score of lung aeration loss (10,11), which has been associated with disease severity and mortality in COVID-19 (12,13). Arterial blood gas analysis parallels LUS in the evaluation of pulmonary dysfunction, and the ratio of arterial oxygen partial pressure (PaO 2 , in mmHg) to fractional inspired oxygen (FiO 2 , in mmHg), expressed as a fraction, is used as a quantitative marker of respiratory insufficiency. Electrocardiography at rest and blood exams including complete blood count, C reactive protein (CRP), lactate dehydrogenase (LDH), D-dimer, ferritin and creatinine are also performed. At the following visits, the abovementioned procedures may be repeated in varying combinations to allow for an individualized and attentive disease monitoring. Study Design All patients aged 18 years or older, evaluated at the MMCOs of San Raffaele University Hospital and Luigi Sacco University Hospital in Milan, Italy, from 1 October 2020 to 31 October 2021 were included in the present study. Data were retrospectively collected as part of the retroPAUCI protocol (N. 140/INT/2021), approved by the Hospital Ethics Committees, in conformity to the declaration of Helsinki. Written informed consent was obtained by all patients. Variables Age, sex, past medical history (i.e., obesity, active cancer, CKD, chronic respiratory disease, immunosuppression, IHD, DM, coagulopathy, history of immunosuppression or organ transplant, HIV infection, CVD), BMI, COVID-19-related history including time of symptom onset, COVID-19-related symptoms (i.e., dyspnea, cough, taste and smell disturbances, pharyngodynia, myalgias, arthralgias, asthenia, diarrhea, nausea or vomiting, headache, syncope), home-measured SpO 2 at time of MMCO referral and presence of fever for ≥72 h were collected for all patients. 
Recorded data on patient evaluation during the first MMCO visit comprised blood pressure, heart rate, SpO 2 , respiratory rate (RR), PaO 2 /FiO 2 at arterial blood gas analysis, blood exams (i.e., absolute neutrophil and lymphocyte counts, neutrophil to lymphocyte ratio [NLR], LDH, CRP, creatinine, ferritin and D-dimer), as well as LUSS, the presence of white lung pattern or parenchymal consolidations at LUS. Moreover, rates of ED referral following MMCO evaluation and of hospitalization after ED admission, observation time (i.e., time interval from the first MMCO visit to MMCO discharge), and the number of visits at MMCO prior to discharge were also registered. Prior to analysis, data were cross-checked with medical charts and verified by data managers and clinicians for accuracy. Primary Outcome To investigate which patients are at increased risk of adverse outcome, ED referral following MMCO evaluation was used as primary outcome. Cost-Effectiveness Analysis We investigated whether managing patients at MMCO was economically convenient for the hospital compared with inpatient care. We considered that patients who received ≥2 To identify early predictors of adverse outcome (i.e., ED referral following MMCO evaluation) and provide GPs with a tool for early risk classification, we employed a classification and regression tree (CART) algorithm within the cohort of patients referred to the MMCO by GPs. CART analysis relies on recursive partitioning to sequentially split a cluster of patients into homogeneous sub-groups based on independent variables, determining the hierarchy of prognostic factors and associated cut-points that best subdivides the initial population to obtain faithful risk groups (16,17). Demographical data, comorbidities, BMI, and parameters that GPs can easily obtain through patient interview, including home-measured SpO 2 , the presence of fever for ≥72 h, COVID-19-related symptoms and time from symptom onset were included as predictors in the CART. The results of the analysis were graphically represented. The area under the receiver operating characteristic (ROC) curve (AUC) was used as a quality metric of the CART. Missing data was not imputed. All statistical analyses were performed using R statistical package (version 4.0.0, R Foundation for Statistical Computing, Vienna, Austria), with a two-sided significance level set at p < 0.05. Tables 1, 2, respectively. The source of MMCO referral was known for 572 patients. Of these, 400 (70%) were referred by GPs, 119 (21%) by the ED and 53 (9%) by HS. Patient Characteristics and Sources of Mild-to-Moderate COVID-19 Outpatient Clinic Referral Most patients were male and median age was 56 (46-66) years. Median BMI was in the overweight range, 18% of patients being obese. Obesity was more common in patients discharged from the ED (27.7 vs. 16.5% in patients referred by GPs and 13.2% in those referred by HS, p 0.029). Except for active cancer and immunosuppression or history of organ transplant, which were, as expected, significantly more common in patients referred by HS (both p < 0.0001), no difference among the three groups was recorded in terms of other comorbidities ( Table 1). Clinical Outcome Following Mild-to-Moderate COVID-19 Outpatient Clinic Evaluation Following patient assessment at the MMCO, 97 out of 660 patients (15%) were referred to the ED for an urgent shift toward more intense care. 
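To make the CART methodology described above concrete, the sketch below fits a shallow classification tree to a stand-in data frame containing the three GP-reportable variables that turned out to matter (home-measured SpO2, age, BMI). The rows are invented; the study fitted the tree on the real cohort and reports the resulting partition in Figure 2.

```python
# Minimal sketch of recursive partitioning on GP-reportable variables (hypothetical data).
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.DataFrame({
    "home_spo2":   [97, 93, 95, 91, 98, 94, 96, 92],
    "age":         [45, 78, 72, 66, 51, 80, 69, 59],
    "bmi":         [24, 36, 28, 34, 22, 31, 27, 35],
    "ed_referral": [0, 1, 0, 1, 0, 1, 0, 1],   # outcome: referred to the ED after MMCO assessment
})

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(df[["home_spo2", "age", "bmi"]], df["ed_referral"])
print(export_text(tree, feature_names=["home_spo2", "age", "bmi"]))
```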
Specifically, 77 (79%) patients were addressed to the ED soon after the first MMCO visit, while 20 (21%) following the second visit. Of the 400 patients referred to MMCO by GPs, 73 (18%) were addressed to the ED, compared to 11% (6 of 53) of those referred by HS. A minority of patients discharged by the ED (7 of 119 [6%]) were redirected to the ED following MMCO evaluation. Rates of hospitalization following ED admission were 66% (48 out of 73), 71% (5 out of 7) and 67% (4 out of 6) in patients initially referred to the MMCO by GPs, ED, and HS, respectively. Excluding patients addressed to the ED following MMCO visit, 235 out of 563 patients (42%) were scheduled for at least one additional MMCO visit due to the need of continued hospital-based monitoring, while 328 (58%) were discharged after the first evaluation and redirected to GP care due to mild COVID-19. Risk Classification Algorithm for the Need of Early Emergency Department Referral In light of the observation that patients addressed to the MMCO by GPs had overall more severe COVID-19 at MMCO evaluation, we hypothesized that some of these patients might benefit from early ED referral directly by the GP, prior to MMCO visit. Therefore, we aimed at providing GPs with an evidence-based tool able to identify high-risk patients prior to MMCO evaluation, avoiding unnecessary time lags. As mentioned above, among the totality of patients referred by GPs (n = 400), 18% were addressed to the ED by the MMCO physician due to the need of more intense hospital-based assistance. We used CART analysis to build an easy-to-use algorithm that exploits parameters obtainable by simple patient interview. Among demographics, comorbidities, BMI, home-measured SpO2, the presence of fever for ≥72 h, COVID-19-related symptoms and time from symptom onset, CART analysis selected three variables, namely home-measured SpO2, age and BMI, able to robustly classify patients into risk groups for the early need of intense care. Moreover, for each of these variables, it identified the thresholds that maximized the segregation among the resulting patient clusters (Figure 2). Three risk groups were obtained: (i) low risk (home-measured SpO2 ≥ 95% and age < 71 years), (ii) moderate risk (home-measured SpO2 ≥ 95% and age ≥ 71 years, or home-measured SpO2 < 95% and BMI < 33 Kg/m2), and (iii) high risk (home-measured SpO2 < 95% and BMI ≥ 33 Kg/m2). The AUC (95% confidence interval, CI) of the ROC for the CART (Figure 3) was 0.83 (0.77-0.88). The accuracy of the CART model was subsequently confirmed when comparing the identified risk groups in terms of indicators of disease severity assessed during MMCO evaluation. In fact, patients in the high-risk group had overall more severe COVID-19 than those in the moderate- and low-risk groups, differences being expectedly more pronounced compared with the low-risk group. In particular, SpO2 and PaO2/FiO2 were significantly reduced in high-risk patients (both p < 0.0001), in parallel to a significant increase in RR (breaths/min: 22 (20-24) in the high-risk group vs. 20 (18-22) in the moderate-risk and 18 (16-21) in the low-risk groups, p < 0.0001). Similarly, blood levels of CRP and LDH, as well as NLR, were significantly higher in patients in the high-risk group (all p < 0.05). Likewise, absolute lymphocyte count was significantly reduced in high-risk patients (p 0.0023), in line with a more severe disease. Ferritin showed a tendency toward being increased in the high-risk group (p 0.052).
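The resulting three-group rule can be written down directly; a minimal sketch follows, with thresholds taken from the text above and everything else (function name, return labels) ours. It is meant to illustrate how the CART output could be encoded for use during a telephone interview, not as a validated triage instrument.

```python
# Sketch of the three-variable rule reported above (Figure 2); thresholds from the text.
def mmco_risk_group(home_spo2: float, age: int, bmi: float) -> str:
    """Classify a patient using home-measured SpO2 (%), age (years) and BMI (kg/m2)."""
    if home_spo2 >= 95:
        return "moderate" if age >= 71 else "low"
    # home-measured SpO2 below 95%
    return "high" if bmi >= 33 else "moderate"

print(mmco_risk_group(home_spo2=96, age=58, bmi=27))  # low
print(mmco_risk_group(home_spo2=93, age=66, bmi=35))  # high
```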
At LUS, white lung pattern was more common in patients in the moderate- and high-risk groups compared to the low-risk cluster, while no difference was observed in the rate of parenchymal consolidation. LUSS was significantly lower in the low-risk group compared to the other groups. Cost-Effectiveness of Mild-to-Moderate COVID-19 Outpatient Clinic A total of 235 out of 660 (41.7%) patients performed ≥2 visits at MMCO and were thus included in the cost-effectiveness analysis. Based on the ICD-9-CM codes for diagnoses and procedures linked to COVID-19 and the updated guidelines of the Regional Health Authorities, the cost of one COVID-19-related hospitalization was estimated to be 3,275.86 €. According to the regional reform of 2021 on the increased pricing for the activities provided to COVID-19 patients, the cost of each hospitalization was increased by 3,713 € (14,15). Therefore, the total mean cost of one hospitalization for COVID-19 was estimated as being 6,988.86 €. Overall, outpatient management at the MMCOs was estimated to spare the national healthcare system 1,490,422.05 € compared with inpatient care. DISCUSSION Here we describe an innovative healthcare strategy to optimize the management system of COVID-19 patients while sparing resources for the care of patients with non-COVID-19-related conditions. Similar models have previously been proposed (18,19). MMCOs were designed to fill the gap of care delivery to COVID-19 patients with clinical features that are neither too mild to be managed by the GP in a home-based setting nor too severe to require ED admission or hospitalization. In our experience, these novel infrastructures allowed the achievement of the dual goal of chaperoning GPs in the management of COVID-19 patients and alleviating pressure on EDs and hospital wards, favoring effort redirection toward patients affected by other conditions. Indeed, the first wave of COVID-19 pandemic forced an extensive reduction of several non-COVID-19-related activities to the detriment of non-COVID-19 care (20)(21)(22). The success of this approach dwells in the high degree of inter-system coordination and commitment to the integration of hospital and primary care services. In a timespan of 1 year, two MMCOs in Milan took care of hundreds of patients who would otherwise have been directed straightforwardly to the ED due to the intrinsic difficulty of GPs to deliver optimal care in the absence of hospital equipment or, perhaps, COVID-19 expertise. Most of these patients were indeed managed at MMCOs for the entire course of their disease through serial visits, always in strict collaboration with GPs, until clinical recovery. Only a minority of patients, specifically less than 15%, were addressed to the ED for an urgent evaluation in an emergency setting due to severe disease or high risk of disease progression. Notably, the majority of these patients (65%) required hospitalization following ED admission, pointing to the high level of appropriateness of clinical decisions by MMCO physicians. Considering that GPs may have insufficient instruments to discriminate patients at increased risk of adverse outcome (23), and in light of our observation that a higher proportion of patients among those referred by GPs than those referred by the ED or HS were addressed to the ED following the MMCO visit, we speculated that the early identification by GPs of patients deserving direct ED admission might guarantee proper and timely intervention. Therefore, we developed an evidence-based, easy-to-use tool that GPs can employ during patient interview to identify patients at high risk of disease progression.
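As a companion to the cost figures in the Results above, the sketch below shows the arithmetic of the comparison. The inpatient tariff (3,275.86 € plus the 3,713 € COVID-19 uplift) and the 235 outpatients come from the text; the per-visit MMCO cost and the total number of visits are placeholders, since they are not stated here.

```python
# Back-of-the-envelope sketch of the inpatient-versus-outpatient cost comparison.
INPATIENT_COST_EUR = 3275.86 + 3713.00   # = 6,988.86 EUR per hospitalization, as reported above

def estimated_savings(n_patients: int, total_mmco_visits: int, cost_per_mmco_visit_eur: float) -> float:
    """Savings = avoided hospitalizations minus the cost of the MMCO visits actually delivered."""
    avoided_inpatient_cost = n_patients * INPATIENT_COST_EUR
    outpatient_cost = total_mmco_visits * cost_per_mmco_visit_eur
    return avoided_inpatient_cost - outpatient_cost

# Hypothetical outpatient figures, purely for illustration.
print(f"{estimated_savings(235, 600, 250.0):,.2f} EUR")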
CART analysis, through a machine-learning approach, selected three variables, namely home-measured SpO 2 , age and BMI as the independent predictors that most robustly divide patients into faithful risk groups for severe disease. The model yielded an AUC of 83%, far above the ideal accuracy threshold of 70%. The predictive strength of the model was confirmed by subsequent analysis showing that patients in the high-risk group were indeed those who exhibited the highest degree of respiratory insufficiency, as measured by SpO 2 , RR and PaO 2 /FiO 2 , and the worse laboratory findings. Also, LUS demonstrated a decreased rate of interstitial abnormalities in patients in the low-risk group. In addition to the clinical efficacy of MMCO in terms of support to GPs and relief on ED and hospital wards, the cost-effectiveness analysis showed that the proposed model of COVID-19 outpatient management is also economically sustainable for the National Healthcare System. Caution is, however, warranted in interpreting economical results, given that many factors besides COVID-19 diagnosis may influence the decision of hospital admission and the length of hospital stay. Nonetheless, outpatient management of COVID-19 patients should be preferred when feasible. Overall, the establishment of MMCOs proved to be a deal-breaker for the management of COVID-19 patients in a sustainable and efficient way. Ideally, MMCOs may also serve as safe environments where candidate patients might receive COVID-19-directed therapies such as anti-SARS-CoV-2 monoclonal antibodies, antiviral therapies, etc., under the expert monitoring of trained personnel. This patient-centered, sustainable and flexible approach would ensure continuity of care through a 360-degree assistance and possibly serve as a template beyond COVID-19 outbreak. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Comitato Etico Ospedale San Raffaele. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS RD and MM: conception and design, acquisition of data, analysis and interpretation of data, statistical analyses, and drafting of the manuscript. EB, GV, AT, MC, GP, LL, RP, CP, CB, SM, NB, DD, and GR: acquisition of data and critical revision of the manuscript. CC, NM, and PR-Q: conception and design, interpretation of data, drafting of the manuscript, and supervision. All authors contributed to manuscript revision, read, and approved the submitted version. FUNDING This study was supported by the Ministero della Salute, Italy and COVID-19 donations.
2022-05-13T05:10:44.162Z
2022-04-27T00:00:00.000
{ "year": 2022, "sha1": "b0c1e8c7ee3f8aa9c96c64f9cfce6130cb90492d", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "b0c1e8c7ee3f8aa9c96c64f9cfce6130cb90492d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
118621186
pes2o/s2orc
v3-fos-license
Cylindrical thin-shell wormholes and energy conditions We prove the impossibility of cylindrical thin-shell wormholes supported by matter satisfying the energy conditions everywhere, under reasonable assumptions about the asymptotic behaviour of the - in general different - metrics at each side of the throat. In particular, we reproduce for singular sources previous results corresponding to flat and conical asymptotics, and extend them to a more general asymptotic behaviour. Besides, we establish necessary conditions for the possibility of non exotic cylindrical thin-shell wormholes. A wormhole configuration connects two regions of spacetime by a throat, thus implying a nontrivial topology, and some consequent interesting features as, for example, the possibility of closed timelike curves [1,2,3]. In the case of wormholes with a compact throat, this is defined as a minimal area surface [2], where a flare-out condition is satisfied. The main objection against the actual existence of such wormholes is that, in the framework of General Relativity, they require the presence of exotic matter, i.e. matter violating the energy conditions [2]. For wormhole geometries with infinite throats, as cylindrical wormholes are [4,5], the flare-out condition can be understood in two ways: The areal flare-out condition states that the area per unit length must increase with the radius [5]; this leads to the impossibility of fulfilling the energy conditions globally [6]. However, in Ref. [6] it was also pointed that for cylindrical wormholes it may be more appropriate the radial flare-out condition, which only demands that the length of a circunference increases with the radius (see also [7,8]). Within this approach, wormhole configurations satisfying the energy conditions [6] were found; however, negative results were obtained in Ref. [6] for flat and conical asymptotics. Besides, working within the thin-shell class, it was shown that cylindrical wormholes with a positive energy density at the throat are possible [8]. However, in Ref. [8] we were not able to find solutions completely satisfying the energy conditions; their existence was not discarded, but was left as an open question. Therefore, here we shall address the construction and matter characterization of cylindrical wormholes of the thin-shell class, working under the radial definition of the throat. We shall restrict our analysis to static configurations, and we shall focus on the normal or exotic character of the matter required. We shall prove the impossibility of cylindrical thin-shell wormholes supported by matter satisfying the energy conditions everywhere, under the following assumptions: 1. The asymptotic behaviour of the geometries at each side are either flat, local cosmic string-like (i.e. conical) or of the generic Levi-Civita form; they are not necessarily of the same type at both sides. 2. Apart from the required continuity of the line element and a weak condition on the first derivatives of the metric functions (see below), no assumption is made about the metric at the throat. 3. Both metrics have continuous first derivatives outside the wormhole throat (the only shell is at the throat). These conditions include a wide class of generally asymmetric wormholes; symmetric ones, which correspond to equal metrics everywhere at both sides of the throat, are of course included in our analysis as a particular case. 
The results for flat and cosmic string asymptotics are expected to reproduce those found in [6] without invoking singular sources. As a corollary, we shall obtain necessary conditions that the throat and the asymptotic behaviour of the metrics must satisfy to allow for the existence of non exotic configurations. Let us start from two manifolds M_1 and M_2 with metrics of the most general static cylindrically symmetric form [9], where K_{1,2}, U_{1,2} and W_{1,2} are functions of the radial coordinates r_{1,2}. For such metrics the Einstein equations relate the geometry to the energy-momentum tensor T̃^ν_μ = 8π e^{2(K−U)} T^ν_μ = diag(−ρ̃, p̃_r, p̃_ϕ, p̃_z); written in terms of K, U and W, with primes denoting radial derivatives, they yield the combinations ρ̃, ρ̃ + p̃_r, ρ̃ + p̃_ϕ and ρ̃ + p̃_z, which should all be positive, or at least zero, if no exotic matter exists. Now from the two manifolds M_1 and M_2 described by (1) we take the regions r_{1,2} ≥ a and paste them at the surface Σ given by r_{1,2} = a to construct a new manifold M. We assume that the metric functions and the coordinate choice guarantee the required continuity of the line element across Σ; then the induced metric on Σ is unique. We also assume that the flare-out condition is satisfied at r_{1,2} = a, so that the geometry at each side opens up there. Then the manifold M constitutes a wormhole with a matter shell at Σ, that is, a thin-shell wormhole. The geometry at both sides of this surface and the matter on it are related by the Lanczos (junction) equations, in which K^i_j is the extrinsic curvature tensor defined in terms of n_γ, the unit normal (n^γ n_γ = 1) to Σ in M. The coordinates of the 4-dimensional manifolds are labeled as X^μ = (t, r, ϕ, z), while the coordinates on the surface are ξ^i = (τ, ϕ, z); as usual, τ stands for the proper time measured by an observer at rest on Σ. Here S^i_j is the surface stress-energy tensor, with σ the surface energy density and p_ϕ, p_z the surface pressures. For the metrics (1) the junction equations give explicit expressions for the energy density and pressures on the shell in terms of the metric functions and their first derivatives at r = a, and from them relations between σ, p_ϕ and p_z follow. In fact, the continuity of the metric across the throat would slightly simplify these expressions, but this is not essential in the subsequent analysis. As pointed out above, the existence of a wormhole requires that the geometry opens up ("flare-out condition") at the throat, that is at r = a. The areal flare-out condition would require the area per unit length to increase at r = a. This requirement automatically implies that the energy density is negative at the shell, and the energy conditions would be violated (see Ref. [6] for the same result without invoking singular sources). But the radial flare-out condition only requires the circumference length to increase at r = a, which is less restrictive. These conditions translate into inequalities for the metric functions and their first derivatives at the throat. The condition σ > 0 (σ = 0 fulfils the energy conditions, but there would be no shell) requires that at least one of W′_1/W_1 < 0 or W′_2/W_2 < 0 holds at the throat. On the other hand, the conditions σ + p_i ≥ 0 on the shell impose additional inequalities relating K′, U′ and W′/W at the two sides of the throat. From these results, for a class of metrics with reasonably desirable asymptotics, we shall prove the impossibility of fulfilling the energy conditions everywhere outside the shell whenever W′ > 0 far from the throat. This includes the case in which at both sides of the throat we have W′/W < 0, and also the case in which W′/W < 0 holds at only one side. The asymptotic behaviours considered are the following: 1. Asymptotically flat behaviour: at large radius the geometry approaches the flat metric, so that e^{2U} ∼ 1, e^{2K} ∼ 1 and W² ∼ r²; in particular, W′ ∼ 1 at infinity. 2. Asymptotically cosmic string behaviour: this is the far behaviour associated to a gauge or local cosmic string, and corresponds to a locally flat geometry with a deficit angle. This is given by e^{2U} ∼ 1, e^{−2U} ∼ 1, e^{2K} ∼ 1 and W² ∼ α²r², with α² < 1 (α² > 1 would describe what is called a surplus angle).
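As an illustration of the two asymptotic behaviours just listed, and assuming the parametrization sketched above in which e^{-U} W plays the role of the circumference radius, the far metric on either side reduces, up to terms that vanish at large r, to a locally flat line element with at most an angular deficit; this is only a sketch, and the Levi-Civita case of item 3 below replaces the powers of r by ones governed by the parameter d.

\begin{equation}
ds^2 \;\simeq\; -dt^2 + dr^2 + \alpha^2 r^2\, d\varphi^2 + dz^2 ,
\qquad
\alpha = 1 \ \text{(flat)}, \qquad 0 < \alpha < 1 \ \text{(conical, deficit angle } 2\pi(1-\alpha)\text{)} .
\end{equation}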
3. Asymptotically Levi-Civita behaviour: this is the far behaviour of the generic Levi-Civita form, characterized by a parameter d. As in the preceding case, at infinity we have W′ ∼ 1. The length of a centered circumference increases with the radius only for d < 1; for d < 0 the length per unit of z coordinate decreases with r and vanishes for r → ∞. Thus a reasonable assumption would be 0 < d < 1. Under this condition, the far behaviour corresponds to K′ − 2U′ < 0 and K″ − 2U″ > 0. This will be of interest in the analysis presented in the Appendix; however, the hypothesis 0 < d < 1 is not central, because in any case the asymptotic behaviour of W is such that W′ > 0, and this constitutes the crucial point in our proof. Suppose that at one side of the throat it is W′/W < 0 at r = a; then, according to (20), to fulfill the radial flare-out condition we must have U′ < 0 and |U′| > |W′/W|. Now, outside the shell the metrics and their derivatives are continuous. This means that the inequalities satisfied by W and by K′, U′, W′ at r = a are also satisfied at r = a⁺, that is, in a region immediately beyond the wormhole throat, in the bulk, where the energy density and pressures are given by Eqs. (6)-(9). So let us assume that the energy conditions are fulfilled at r = a⁺. From (6) this can be possible only if, together with W′/W < 0, the relation W″/W ≤ −U′² ≤ 0 is satisfied at r = a⁺. But for all the metrics considered, we have the asymptotic behaviour W′/W > 0. Therefore at some radius a < r* < ∞ we necessarily must have W′ = 0 together with W″ > 0. Then, as U′² ≥ 0, the energy density is ρ̃_{1,2} = −W″/W − U′² < 0 at r = r*, and the energy conditions are violated there. Note that this analysis does not require equal metrics at both sides of the throat, nor even that the metrics are of the same kind at infinity. One could have, for example, a cosmic string far behaviour at one side and a Levi-Civita far behaviour at the other side; the steps followed to show that when at the throat there is a shell of normal matter then the energy conditions must be violated beyond it still apply for different far behaviours. Though cylindrical wormholes supported by normal matter are possible [6], any static cylindrical thin-shell wormhole geometry with the throat and asymptotic behaviours considered here requires exotic matter at some finite value of the radial coordinate. Therefore the negative results stated in Ref. [6] are reproduced here starting from singular sources and, besides, for such matter distributions, are extended to a more general asymptotic behaviour. Now let us briefly discuss the complementary point of view. Because σ > 0 requires at least W′_1/W_1 < 0 or W′_2/W_2 < 0 at the throat, let us assume, without loss of generality, that at r = a we have W′_1/W_1 < 0. Then, if we keep the restriction that W′_1 must be positive at infinity, to avoid exotic matter we should require at least that at the throat radius the metric also satisfies K′_1 ≤ W′_1/W_1. Now, the condition σ + p_ϕ ≥ 0 leads to the requirement K′_2 ≥ W′_2/W_2 at the other side of the throat. But if we also keep the condition that asymptotically W′_2 > 0, the analysis above implies that we must admit W′_2/W_2 > 0 at r = a, and then σ > 0 forces there the relation e^{U_2−K_2}|W′_2/W_2| ≤ e^{U_1−K_1}|W′_1/W_1|. Finally, the continuity of the metric (i.e. e^{U_2}/|W_2| = e^{U_1}/|W_1| at r = a) simplifies the relation to e^{−K_2}|W′_2| ≤ e^{−K_1}|W′_1| at the throat. Of course, we have reached this point from the asymptotic behaviour W′ > 0, which is common to flat, cosmic string-like and Levi-Civita metrics.
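The core of the preceding argument (radial flare-out at the throat with W′/W < 0, continuity into the bulk, and the sign of the bulk energy density) lends itself to a quick numerical check. The following is a minimal sketch, not code from the paper: the trial metric functions U and W are hypothetical, chosen only so that W′/W < 0 at the throat while W′ > 0 far away, and the only formulas used are the growth of the circumference radius e^{−U}W (assuming the parametrization sketched earlier) and the bulk energy density ρ̃ = −W″/W − U′² quoted above.

import numpy as np

def derivs(f, r, h=1e-5):
    """First and second derivatives of f at r by central differences."""
    f1 = (f(r + h) - f(r - h)) / (2.0 * h)
    f2 = (f(r + h) - 2.0 * f(r) + f(r - h)) / h ** 2
    return f1, f2

def radial_flare_out(U, W, a):
    """True if the circumference radius e^{-U} W grows at the throat r = a."""
    U1, _ = derivs(U, a)
    W1, _ = derivs(W, a)
    return np.exp(-U(a)) * (W1 - U1 * W(a)) > 0.0

def bulk_energy_density(U, W, r):
    """rho_tilde = -W''/W - (U')^2, the bulk expression quoted in the text."""
    U1, _ = derivs(U, r)
    _, W2 = derivs(W, r)
    return -W2 / W(r) - U1 ** 2

def first_violation(U, W, a, r_max=50.0, n=5000):
    """Smallest sampled r > a at which the bulk energy density turns negative."""
    for r in np.linspace(a * 1.001, r_max, n):
        if bulk_energy_density(U, W, r) < 0.0:
            return r
    return None

if __name__ == "__main__":
    a = 1.0                                    # throat radius (arbitrary units)
    U = lambda r: -0.5 * np.log(r)             # hypothetical metric function, U' < 0
    W = lambda r: r + 2.0 * np.exp(-(r - a))   # W'/W < 0 at r = a, W' -> 1 at infinity

    print("radial flare-out at the throat:", radial_flare_out(U, W, a))
    print("bulk energy density first negative near r =", first_violation(U, W, a))

For this particular choice of trial functions the violation appears essentially at the throat itself, which is consistent with the no-go result: once W decreases at r = a but has to grow at infinity, a region with ρ̃ < 0 cannot be avoided.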
However, we could relax this condition, and this could be done keeping the desirable property of an ever increasing circumference length, by demanding that U′ < W′/W < 0 asymptotically. Thus, an alternative starting point in the search for non exotic configurations is to assume the latter form for the asymptotic behaviour of the metric. In this sense, our analysis has left us with, at least, necessary relations that should hold between the asymptotic behaviours of the metrics at each side and their first derivatives at the throat so that non exotic matter could support cylindrical thin-shell wormholes. In summary, in relation to the question posed in Ref. [8], here we have a no-go result excluding a wide class of metrics as candidates, and we have obtained an indication of the kind of conditions which narrow the search for a positive answer.
2012-03-16T16:39:58.000Z
2012-03-16T00:00:00.000
{ "year": 2012, "sha1": "12309e00ce4f5ba97965c5d18c23abef42238f94", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1203.3761", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "12309e00ce4f5ba97965c5d18c23abef42238f94", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
221285907
pes2o/s2orc
v3-fos-license
Gastric MiNEN Arising from the Heterotopic Gastric Glands An 80-year-old woman presented with a 30-mm protruding lesion-like submucosal tumor with a central depression located at the anterior wall of the upper gastric body. The depressed area had a well-demarcated margin, while the other area was covered by a non-neoplastic mucosa. A biopsy specimen revealed neuroendocrine carcinoma. Endoscopic ultrasonography revealed a heterogeneous mass with a clearly distinguished border in the submucosal layer. The mass had two distinct areas adjacent to each other. In addition, a hypoechoic zone was observed on the margin of the mass. Distal gastrectomy was performed. The final diagnosis was a mixed neuroendocrine-non-neuroendocrine neoplasm arising from the heterotopic gastric gland. Introduction Gastric neuroendocrine carcinoma (NEC) is a rare disease that accounts for 0.1-0.6% of all gastric cancers (1,2). NEC frequently coexists with adenocarcinoma. The World Health Organization (WHO) 2019 classification introduced a category for epithelial tumors with a mixture of non-neuroendocrine and neuroendocrine components, referred to as mixed neuroendocrine-non-neuroendocrine neoplasm (MiNEN) (3). In addition, the heterotopic gastric gland (HGG) is a relatively rare entity characterized by the ectopic proliferation of gastric glandular elements in the lamina propria. A few cases of gastric carcinogenesis associated with the HGG have been reported (4,5). We herein report the first known case of MiNEN arising from the HGG. Case Report An 80-year-old woman was referred to our hospital for the treatment of gastric cancer. Physical and laboratory examinations, which included an assessment of the tumor marker levels, revealed no abnormalities. A 30-mm protruding lesion-like submucosal tumor with a central depression located at the anterior wall of the upper gastric body was observed on esophagogastroduodenoscopy. The depressed area had well-demarcated and irregular margins, and the other area was covered by a non-neoplastic mucosa (Fig. 1a, b). Magnified endoscopy with narrow-band imaging revealed a well-demarcated line with irregular microvascular and microsurface patterns in the depressed area (Fig. 1c). An assessment of the biopsy specimen obtained from the depressed area revealed small atypical round cells forming solid nests. The small round cells contained hyperchromatic nuclei and scant cytoplasm and were immunoreactive for chromogranin A and synaptophysin. Thus, these results indicated that the tumor was NEC. In addition, grade O-3 atrophic gastritis with histologically proven intestinal metaplasia was observed according to the Kimura-Takemoto classification (6), and the patient tested positive for anti-Helicobacter pylori immunoglobulin G antibody. Endoscopic ultrasonography revealed a heterogeneous mass with a clearly distinguished border in the submucosal layer, while the muscularis propria layer was preserved. The mass consisted of an iso-hyperechoic area and a hypoechoic area adjacent to each other. In addition, a hypoechoic zone was observed on the margin of the iso-hyperechoic area. Although the mucosal layer disappeared in the depressed area, it was well preserved in the other area (Fig. 1d). A computed tomography scan revealed no lymph node or distant metastasis, and the clinical stage was T1N0M0 stage I (3). Distal gastrectomy was performed 1 month after the diagnosis.
A gross examination of the surgically resected specimen showed a 25×20-mm protruding lesion-like submucosal tumor with central depression located at the anterior wall of the upper gastric body (Fig. 2a). On its cut surfaces, a well-circumscribed and yellowish solid tumor was located in the submucosa (Fig. 2b). Microscopically, the tumor had two distinct components, consisting of irregularly shaped ducts, including cribriform glands, and small-to-large round cells with hyperchromatic nuclei and some areas of cytoplasm forming solid nests, and it was located in the dilated cystic structure in the submucosal layer (Fig. 3a). The lesion was almost completely covered by a non-neoplastic mucosa, and there were exposed tumor components in the depressed area. The irregularly shaped ducts and small-to-large round cells were diagnosed as moderately differentiated adenocarcinoma and NEC components, respectively. Immunohistochemically, the NEC component was positive for chromogranin A, synaptophysin, and CD56 (Fig. 3b). According to the WHO 2019 diagnostic criteria, the tumor was diagnosed to be MiNEN. The two tumor components were located adjacent to each other and had a zone of transition in between them (Fig. 3c-f). The tumor was surrounded by the dilated cystic structure, composed of an epithelial layer with no atypia or proliferation (Fig. 3g, h). The tumor components were continuous from the epithelium of the dilated cystic structure (Fig. 3i). Based on these findings, the tumor components were considered to have originated from the HGG. Venous invasion was observed, and the tumor had invaded up to a depth of 2,000 μm beyond the surface. None of the 24 dissected lymph nodes had metastases. Finally, the tumor was diagnosed to be MiNEN arising from HGG and stage I gastric cancer. The patient has remained recurrence- and metastasis-free 6 months after surgery. Discussion The current study describes a patient with MiNEN arising from the HGG who underwent distal gastrectomy. Only a few cases of gastric adenocarcinoma arising from the HGG have been previously reported. To the best of our knowledge, this is the first case report of MiNEN arising from the HGG. This case is extremely rare and helpful for future diagnostic imaging of this disease because it provided information about characteristic endoscopic findings obtained preoperatively. Based on the histopathological findings, we concluded that MiNEN had arisen from the HGG for two reasons. First, the tumor components, comprising NEC and adenocarcinoma components, were surrounded by a dilated cystic structure, which was the HGG. Second, the tumor was continuous from the epithelium of the dilated cystic structure without any neoplastic changes. HGG is believed to arise from gastric glands existing congenitally in the submucosa or from the aberration of the epithelium into the submucosa due to the repeated erosion and regeneration of the mucosa (4,7). In addition, one cause of such repeated inflammation is chronic gastritis associated with Helicobacter pylori infection (8). Only a few reports showed the association between HGGs and gastric cancer (4,7,9). One hypothesis of this association is that both HGGs and gastric cancer develop due to repeated erosion and regeneration of the mucosa, indicating that HGGs are paracancerous lesions (9). In our case, anti-H. pylori immunoglobulin G antibody was positive, and atrophic gastritis was observed in the background gastric mucosa, suggesting that chronic inflammation had been due to an H.
pylori infection, and that infection had thus contributed to the development of HGGs and gastric cancer. Although the carcinogenesis of MiNEN is not well clarified, previous reports showed that 70-75% of gastric NEC cases involve adenocarcinoma components in the mucosa and/or submucosa (1,10,11). Gastric NECs arise predominantly from endocrine precursor cell clones that develop in the preceding adenocarcinoma component. These clones transform into NEC lesions during rapid clonal expansion, and NECs develop rapidly in the submucosal and deeper layers (10,12). Furthermore, the NEC components are mainly located in the submucosal and deeper layers, while the surface layer of the NEC components is covered by non-tumorous mucosa (13,14). In our case, the surface of the lesion was almost completely covered with a non-neoplastic mucosa, and the tumor components were seen pushing up the non-tumorous mucosal layer along with the peripheral mucosa. Due to the histological features of NEC and HGG, this lesion resembled a submucosal tumor with a central depression. In addition, the association between the development of MiNEN and chronic inflammation remains to be elucidated. Therefore, the etiology of the coexistence of MiNEN and HGG in this case was unclear, although the chronic inflammation might have contributed to the development of HGG and gastric cancer. Gastric cancer arising from the HGG is often difficult to diagnose preoperatively because the cancer components are often located in the submucosa, and the components are not often exposed on the surface (15). In this case, the surface of the lesion was almost completely covered with a non-neoplastic mucosa. However, a preoperative diagnosis was obtained via a biopsy using a sample obtained from the depressed area, which is considered to be an exposed area. Endoscopic ultrasonography (EUS) is considered to be a useful modality for detecting and evaluating HGG because it can identify hypoechoic scattered cystic lesions in a heterogeneous area (16,17). In this case, the coexistence of the tumors and the HGG could not be diagnosed preoperatively, and the pathologic evaluation led to the diagnosis after surgery. However, retrospectively, the coexistence of the tumor and cystic area may be identified on EUS. Comparing EUS and the histological findings, it was suggested that the iso-hyperechoic area had likely been the adenocarcinoma component, the hypoechoic area had been the NEC component, and the hypoechoic zone on the margin of the mass had corresponded to the dilated cystic structure of the HGG. In conclusion, this case is interesting since it revealed the carcinogenesis of MiNEN and HGG. Moreover, it shows the coexistence of both lesions on EUS. The authors state that they have no Conflict of Interest (COI).
2020-08-25T13:05:26.021Z
2020-08-22T00:00:00.000
{ "year": 2020, "sha1": "63b90f190d331345baac13365911285f50c2f930", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/internalmedicine/59/24/59_5333-20/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "766f5cb780dadc71ea7dff4304a82695b5249417", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252783303
pes2o/s2orc
v3-fos-license
Financial frauds’ victim profiles in developing countries Recently, the variety of the financial frauds have increased, while the number of victims became difficult to estimate. The purpose of this paper is to present the main profiles of financial frauds’ victims using a reviewing method. The analysis captures the main theoretical and empirical background regarding the motives and circumstances of becoming a victim, the dynamics of several social and demographical characteristics of this type of victims, as well as a sample of relevant case studies from some developing countries. The main finding is that, in literature, most of the victims are male people of different ages, employed, married or single, regardless the level of education. For developing countries such as China, India and Nigeria, the majority of victims act out of naivety and desire to escape from poverty, while some victims from Latin America, China and Nigeria are influenced by greed and lack of empathy, without thinking of further consequences for their families and friends involved. Moreover, most of the victims are convinced to invest in financial schemes by family members, friends, or acquaintances. Introduction In the last decades, economic crime has been thoroughly investigated at the international and national levels, with an emphasis on corruption and bribery. Economic or financial crime refers to illegal acts committed by an individual or a group in order to obtain an illegal material, economic, financial or professional gain. The economic crime attracts criminal organizations thanks to the low chance that these frauds would be discovered (Milosevic, 2016). On the other hand, individuals and organizations alike are not mindful of the risk of falling victim to economic crimes. Taking into consideration only the territory of the European Union, the criminal activities have multiplied, over 80% being related to trade in drugs, organized property crime, investment, online and other frauds, trafficking in human beings and migrant smuggling (Europol, 2021). Fraud schemes refer to the intentional deception or intent of a person to deceive using techniques, instruments and false and deceptive pretexts through which fraudsters intend to determine victims to voluntary and unlawful transfer to them values, goods, money or iniquitous advantages. In exchange for these transfers, fraudsters promise economic, financial or material benefits for victims, which, in fact, do not exist or cannot to be granted. In this context, fraud schemes are one of the main serious and organized crime activities in the EU, while legal business structures are used in more than 80% of the criminal networks (Titus et al., 1995;Europol, 2021). There are various types of fraud schemes, including financial fraud, excise, mass marketing, benefit, payment-order, procurement, value-added tax, insurance, EU subsidy and loan and mortgage frauds. Financial fraud is based on social engineering techniques, meaning the attempt to obtain sensitive or personal information about victims that can be used for fraudulent purposes (Titus et al., 1995;Europol, 2017). In the last years, the financial frauds have expanded and diversified, reflecting the growing creativity of the fraudsters and the permanent motivation of victims for rapid high returns. There are conceptual confusions between financial and investment frauds. 
Financial fraud includes a wider range of illegal acts such as phishing, identity theft or insurance scam, while investment fraud is a component of the financial category (Harvey et al., 2014). The most common investment frauds include boiler room, pyramid and Ponzi schemes. Also called "pressure room scams, " the boiler room schemes involve putting pressure on potential victims to invest in fictitious or low-value stocks. Fraudsters use cold calls for victims and present their business using fake references. On the other hand, pyramid and Ponzi schemes are similar, but in the case of pyramid, the first investors need to recruit new investors in order to generate profits (Europol, 2017). Despite the diversification of fraudulent programs, Ponzi schemes have retained their original method, to which fraudsters have come up with innovations. Charles Ponzi developed the original scheme in the 1920s, who promised a 50% return for those who decide to invest in international mail coupons. This fraud involves a promise of high returns with limited risks or none, while the fraudster uses the funds in personal or illegal purposes. The fraud operator pays the old investors using some of the funds received from new investors in order to lure more victims (Frankel, 2012). The operation of a Ponzi and pyramid schemes is based on a constant flow of new participants to the scheme. Therefore, the duration of the network is directly proportional to the number of victims, which means that the greater the number of victims, the longer the pyramid structure lasts (Titus et al., 1995). The victimology remains one of the most studied fields when it comes to analyzing financial scams. When a scheme is uncovered, thousands of people who had thought that their money was invested in a safe and profitable business lose everything. Financial frauds are a type of business that has increased exponentially in all parts of the world, having in recent times a greater capacity for expansion. It is due to the difficult times in which people live, plagued by major economic crises, high unemployment rates and job insecurity. A large number of desperate people are looking to gain high rates on their investment in a short period. Such opportunities become very tempting for many. Other factors are the globalization of communications, and markets, the easy manipulation through the Internet, which means that a fraudulent company does not have to create a physical headquarters in a certain country. The purpose of this study is to present the main profiles of financial frauds' victims and the factors that support the tendency of victims to be attracted by these fraudulent schemes. The method used to conduct this paper implies a review of the primary sources (books and articles) in order to establish the main theoretical framework regarding the victimology of financial schemes and its determinants. Further, the method involves a descriptive and detailed analysis of the main profiles of financial frauds' victims in developing countries by collecting, synthetizing and describing examples and case studies. This study is structured as follows. Section Motives and circumstances for becoming victim presents the main motives and circumstances for becoming a victim of financial frauds, considering both contextual circumstances and psychological underpinnings. 
Further, the next three parts describe the theoretical and empirical background regarding the main components of the victim profile analysis, in terms of socialdemographic characteristics, level of cooperation with the offenders, as well as other relevant items. Section Profiles of victims in developing countries provides relevant examples and case studies regarding the profiles of victims of financial frauds in developing countries. At the end, a discussion section is provided. Motives and circumstances for becoming victim Researchers built models and surveys in an attempt to understand how different factors come together and influence the tendency to become victims of fraud attempts. Lee and Soberon-Ferrer (1997) have suggested that cognitive deficiency and social interaction influence the tendency to become victims of frauds, both being related to biological, economic, sociological and psychological features. The cognitive deficiency refers to the limited ability of some individuals to process information, which make them more vulnerable to become victims. The cognitive ability is influenced by aging process, knowledge and experiences of individuals. On the other hand, social interaction refers to the quality of social networks and the level of social isolation. A low level of social interaction, whether is about a lifelong social isolation or a contextual one (due to a negative events), makes people more vulnerable to become victims. Moreover, the psychological isolation plays a higher role in this process than physical aloneness. Lea et al. (2009) compared victims and non-victims regarding the degree of chance of falling victim because of poor judgment, and found that victims' tendency to poor judgment is higher than among non-victims. From this, the authors concluded that victims are subject to persuasion in general and not necessarily to a specific type of fraud in which they were involved. They have Frontiers in Psychology 03 frontiersin.org identified various factors associated with poor judgment regarding financial fraud, and they divided them into two groups. First group includes motivational factors, such as: • Motivation of basic human needs and desires (fear, greed, and visceral influence); • Search for excitements in risk-taking; • Lack of self-control; • Low motivation for information processing; • Reciprocity as the tendency to return a favor for favor; • Commitment and consistency: fraudster take advantage of the need of a victim to make contact and engagement and then turning to him to invest money. The second group refers to cognitive factors, such as: • Positive illusions as the tendency of the individual to perceive himself in a favorable light and to appreciate his abilities; • Prior knowledge in a specific field and overconfidence in the ability to make the right decisions; • Low cognitive abilities (especially for elderly people); • Social proof; • Using norms of conduct such as reaching out to others and behaving politely; • Authority: as the tendency to accept authority. Contrary to the common perception that the behavior of the victims is irrational, Harvey et al. (2014) have interviewed 31 victims of investment fraud and found that they actually made rational decisions. The actions taken by victims of investment fraud are in fact rational, given the combination of the information accessible to them and circumstances of their lives at the time of the fraud. Victims testified an active attitude to ensure that they are subject of a real investment opportunity. 
They sought information and consulted with experts, family or friends, but the information they were able to gather was inconclusive, and even supported the legitimacy of what later turned out to be fraud. Victims testified that they decided to invest in financial schemes based on a combination of financial, family, and psychological circumstances at the time of the fraud: 1. Financial circumstances: a. Financial resources available (even a change of financial situation); b. Perceptions regarding systems and financial institutions; c. The social and financial networks, including its quality. 2. Family circumstances: a. The pressure to increase the family income fast; b. The need for a long-term financial security for family. 3. Psychological circumstances Dove (2018) proposes a model that presents various personal and cognitive elements that influence the tendency to fall victim to frauds. First, there will be preconditions for fraud, or personal circumstances in the life of the potential victim, for example, lack of time to review the offer or any personal need that the offer may meet. Victims are influenced by the fact that frauds motivate the victim's basic needs and desires by promising to earn a slightly higher profit from the effort he has invested. Second, the potential victim will perceived an offer to invest in fraudulent schemes as attractive when it sounds credible or limited in some way (discount for 1 day only). Once the potential victim is under the impression that the offer is credible, that person may cooperate immediately due to certain cognitive factors, such as being impulsive or easily persuaded. Moreover, frauds gain authority by fraudsters' claim that a bank or other authoritative figures back the offer. These aspects impair the victims' judgment, leading to a higher tendency of invest in fraudulent schemes. On the other hand, some victims may be skeptical or reflective. They will decide to test the offer even though it sounds credible or to think about it before investing. Moreover, victims may be aware that their past impulsive decisions led to unfavorable outcomes. With such experiences, they will delay decision making, which may reduce emotional involvement and will gain extra time for a more careful examination. Considering the models developed by literature, this paper proposes an approach to the motives and circumstances of becoming victim of financial frauds in accordance with Harvey et al. (2014) and Dove (2018). Moreover, this paper suggests that the financial and family circumstances to be treated in common as contextual circumstances, while the cognitive and personal elements to be presented as psychological underpinnings. Contextual circumstances When victims decide to invest their money, they considered both financial and non-financial aspects, in order to assess whether it was a rational decision with regard to their life circumstances. The financial considerations focused mainly on the ratio between the risk in the investment and the potential profit. Some of the victims interviewed by Harvey et al. (2014) were under the impression that the risk was low compared to the high potential profit. Even if some victims were aware of the high level of risk, they were willing to risk in a small amount of their investment savings. There were those who preferred an investment that yielded high profits to an investment in a banking program that yielded lower profits. 
On the other hand, the non-financial considerations focused on the non-financial benefits that victims would gain from their investment, for instance, refinement of financial skills and greater control over their investment than investing through a bank. Some victims reported that "enjoy the risk, " being a pattern of behavior conceived by Lyng (2005) as "edgework" and described as the temptation to take risk consciously when the incentive is the experience itself. Indeed, victims mentioned the risk, among other things, as exciting, fun and gambling. The financial circumstances include the available financial resources of the victims (even a change in their financial situation), the perceptions regarding systems and financial institutions and the social and financial networks. Everyone had initial money to invest, but some victims interviewed by Harvey et al. (2014) are getting into debt later in order to continue the investment or to repay the fund. Some of the victims reported that their financial situation was static when the crook-initiated contact with them. Moreover, there are victims who experienced a change in personal financial circumstances shortly before the fraudster's contact, whether it was a benefit (increase in financial resources) or a deterioration (decrease in financial resources). This change was a significant factor that explains why these individuals fell victims. Those who experienced a sudden increase in financial resources were debating where to invest their money in order to multiply profits. On the other hand, some victims experienced a sudden reduction in liquid funds and were pressured to quickly find additional financial resources. Some victims stated that the advice received from financial and social networks did not help them in the investment decision, but rather contributed to invest in fraudulent schemes. Regarding the perceptions of financial systems and institutions there are two types of victims (Harvey et al., 2014). The optimistic ones believed that there was significant regulations with a clear enforcement. They decided to invest in fraudulent schemes assuming that their money is safe as long as they transfer it to a bank account or because the fraudsters provided an account number managed by old and well-established banks. On the other hand, the pessimistic victims were suspicious of banks and in general of the global financial system. They considered that fraudsters are more trustworthy, while the government interventions led to the collapse of the business, which was in fact a fraudulent scheme. They assumed that if the government not enforced the closure orders, the financial scheme would have continued to yield them high profits. Moreover, very religious people tend to question the legitimacy of any source of authority other than members of their religion, including the authority of the government (Frankel, 2012). The family circumstances refer to the pressure of increasing the income immediately for the benefit of the family economy and to the responsibility for providing financial security to the family in the long-term. Lifestyle factors such as an active social life and working tend to expose people to a higher risk of frauds (Muscat et al., 2002). Victims interviewed by Harvey et al. 
(2014) described situations in which they felt immense financial pressure or responsibility: • They became the primary or sole breadwinner of their children; • They wanted to work fewer hours so that they would have more time to spend with their children; • They had to stop working to care for an elderly parent; • They had to fund a large expense, for example, a wedding; • They were constrained to assist their children to achieve financial security; • They had to assist the extended family members with a more precarious financial situation. Victims felt that they did not have enough time to think about before the investment decision due to the pressure exerted on them to reach a decision quickly. Psychological underpinnings The main psychological factors that support the tendency to be victim of frauds are related to gullibility, risk tolerance, the level of self-control, the level of prior knowledge in financial field, the character traits and the ability to discern between true and false information. Starting with gullibility, this can be considered a form of stupidity, defined by the situation when "someone engages in dangerous social or physical activity even though there are warning signs or he has questions about them that have not been addressed and should have worried him" (Greenspan, 2009, p. 22). Foolish people tend to believe even when something is too good to be true. Gullibility means trusting based on insufficient evidence, and acting on emotion, hope, or desire. Foolish people are more dependent, and therefore weaker than skeptical and suspicious people who trust themselves more than others. The crooks act as if they are invincible, radiating power and control over what is happening. In the face of these messages, the victims feel weak, insecure, and full of doubts. They consider that are incompetent, less smart and not sophisticated enough. Moreover, when victims fail to transcend these feelings of inferiority, they prefer to hide and disguise them, adopting values and behaviors similar to those of the crooks (Frankel, 2012). In this context, some ethical issues can be addressed, if the victim understands the warning signs or even more is aware that it is the case of a fraudulent scheme, not being emotionally affected by the fact that the desired potential financial gain implies losses for more other victims. Another cognitive factor is risk tolerance. The combination of gullibility and tolerance of risk creates a powerful tendency to take risks and become a victim of frauds. People who have a high tolerance of risk take risks in general and not necessarily in the financial field, choosing, for example, mountain climbing as a hobby. Van Wyk and Benson (1997) found that people with a positive approach to financial risk are at higher risk of being a target by crooks. Schoepfer and Piquero (2009) suggested that people with risky behaviors are more likely to become victims of fraud. Lea et al. (2009) believed that some fraud victims identify the risks involved in investing but take the risk nonetheless, hoping it would pay off for them. If the guaranteed profit is large enough, the risk will be perceived as worthwhile. Wood et al. (2018) suggested that higher benefits are associated with higher willingness to invest in fraudulent schemes, while high risks Frontiers in Psychology 05 frontiersin.org discourage individuals for such attempts. The importance of the risk tolerance in this context is emphasized by Harvey et al. 
(2014), who presented four types of victims of investment frauds according to this characteristic. The risk averse investors are the first type, being the most skeptical and reserved individuals, with low risk appetite and less than 1 month being engage in an investment scheme. They record a low emotional impact and no financial losses, have experience with investments and connected financial social networks. In contrast, adventurers have a positive thinking and a high-risk appetite, having the longest time of engagement in an investment scheme and a medium-high financial and emotional impact. Even if they have financial social networks, may not use it. The third type is represented by the dabblers, having a medium-high risk appetite. Usually, their engagement time is less than 6 months, while the financial and emotional impact is low medium. Providers, who record the biggest financial and emotional impact, even if they are individuals with a low-medium risk appetite, represent the last group. Low self-control is another determinant of attractiveness for financial frauds. People with low self-control tend to take risks due to the urge for immediate gratification and are more vulnerable to falling as victims (Langenderfer and Shimp, 2001;Schreck et al., 2006;Holtfreter et al., 2008). In addition, Holtfreter et al. (2010) discovered a positive association between low levels of self-control and propensity to fall victim to fraud, while for Modic and Lea (2012) both low self-control and impulsivity play an important role in fraud compliance. Impulsivity influences the likelihood of falling victim to fraud by impairing the decision-making process (Bayard et al., 2011). Pratt et al. (2014) suggested that those with a low level of impulsivity took less risk and were less willing to gamble, while self-control is a consistent predictor of the likelihood of falling victim in general. Schreck (1999) indicated that that low self-control leads to lack of premeditation and perseverance. Lack of premeditation leads to errors in decisionmaking, while victims with low levels of premeditation have a poor ability to plan and to predict future consequences of their actions. This suggests that those victims are more willing to share their personal information because they do not think the consequences (Modic and Lea, 2012). The level of prior knowledge in financial field plays an important role in attracting fraud victims, even if the opinions are divided regarding this role. AARP (1999), Langenderfer and Shimp (2001), and Kadoya et al. (2021) found that the absence of prior knowledge about frauds or in a field related to a particular fraud (e.g., financial or investing), increases the chance of falling victim to the scam. Almost 75% of the interviewed victims by AARP (2007) have a low level of financial investing knowledge. In contrast, Lea et al. (2009) assert that prior knowledge is what increases the chance of falling victim, because in the face of such knowledge the victim behaves with less caution. Many times, victims of investment frauds have prior knowledge in the field of financial investment (Xiao and Porto, 2021;Yang et al., 2022). Rebovich and Layne (2000) found that victims of investment frauds are more financially literate than the general population. The ability to discern between true and false information influences the attractiveness of victims for financial frauds. Here there are two types of persons, those with a low "need for understanding" and those with a high level. 
People with a higher "need for understanding" were more persuaded by a message addressed to cognition, compared to those with a low "need for understanding, " who were more persuaded by an impressive message. Those with a high level tend to process information through greater cognitive effort. Cacioppo et al. (1986) found that persons with a low "need for understanding" were less likely to differentiate between weak and strong messages, were less affected by the quality of arguments, and invested less effort in examining evidence than participants with a high "need for understanding. " On the other hand, Kaufman et al. (1999) said that those with a low "need for understanding" examined evidence more deeply when they believed the source was unreliable, while people with a high "need for understanding" were less affected by the source's credibility. Unreliable sources increase motivation among those with a low "need for understanding" to invest more effort in information processing. Nevertheless, because fraud offenders often impersonate trusted sources, those with a low "need for understanding" are more likely to become victims of the frauds than they are likely to process the information relevant to the offer. Information processing is also affected by personal traits. The message is perceived as more convincing when it is adjusted to individual's personality (Haddock et al., 2008). Some character traits allow for deception and other not. Modic and Lea (2012) found that six personality traits explain the fraud compliance, such as premeditation, extraversion, openness, sensation seeking, urgency and self-control. Harvey et al. (2014) discovered a list of factors regarding the character traits that enable the success of the fraud such as decisiveness, extroversion, low self-esteem, positivity, honesty, the tendency to trust too much and the tendency to gamble. Some character traits indicate openness to opportunities and willingness to take high risks, such as adventure, propensity for addiction, and aspiration to succeed. The tendency to believe too much in people allows deception, while some victims felt themselves under psychological pressure when the crook contacted them. They were emotionally vulnerable to fall victim to the fraud, especially due to a clinical depression or the loss of someone close. Their mental state impaired their decisionmaking process during the referral from the crook or during the fraud. On contrary, some victims reported that they became victims precisely when they were happy and peaceful. Other participants enumerated character traits that made them unable to interrupt the conversation with the crooks, such as the tendency to help others and a polite attitude toward others. Certain behaviors have the potential to increase the tendency to fall victim to a fraud, such as joining online groups, making online purchases, and making donations (Titus and Gover, 2001). In contrast, some character traits such as discretion, curiosity, thrift, skepticism and seriousness serve as a shield and inhibit fraud attempts. The survey participants conducted by Harvey et al. (2014) compare their approach to life to the one of those around them, noting that Frontiers in Psychology 06 frontiersin.org others are more careful with their money or do not strive for financial success and therefore their chances of falling victim are low. 
Victims who did not transfer money to crooks described the personal traits that they thought stopped the fraud attempts such as caution, skepticism and discretion. Some victims were careful and reflective and decided to delay the offer of investing in fraudulent programs in order to gain more time for examination (Dove, 2018). The tendency of fall victim to frauds can be expressed through a model of "Big Five personality factors, " which include five major personality characteristics such as openness to change, conscientiousness, extraversion, agreeableness and neuroticism (Tupes and Christal, 1992;Parrish et al., 2009;Modic and Lea, 2012). The openness to change means that a person is openness to experiences, while his conscientiousness will reduce the chance of falling victim because a person with this characteristic tends to obey safety warnings if he has received them. At the same time, an extrovert is more likely to share information with others, while agreeableness involves trust. Finally, the neuroticism will reduce the chance of falling victim because a neurotic person will refuse to share personal information on a web platform and will even navigate less online sites (Parrish et al., 2009). Modic and Lea (2012) indicated that both openness and extraversion have an important influence in fraud compliance. The dynamics of the socio-demographic characteristics of victims Socio-demographic dynamics of the financial frauds' victims include the main characteristics in terms of age, gender, education, marital and professional statuses. Some scholars found that only age and education play an important influence in predicting the tendency of falling victims in personal frauds (Titus et al., 1995;Van Wyk and Benson, 1997;Kerley and Copes, 2002). Lee and Soberon-Ferrer (1997) have suggested that age, education and marital status have a significant influence on predicting victims, while age has the largest effect. However, some scholars have determined profiles of victims considering the main demographic characteristics mentioned above (Shadel and Schweitzer-Pak, 2007;Zunzunegui et al., 2017). The literature is mixed regarding whether younger or elderly people are more vulnerable of being victims of financial frauds ( Table 1). Studies that examined the victims' profile until the 2010s have shown that both young and old can be victims of frauds. However, most of the studies focused on the study of victims since 2013 revealed that most of victims are old people. Titus et al. (1995) suggested that young people are more likely to become victims due to their lower income and a higher level of receptiveness to opportunities for rapid income growth, while older people have a higher tendency to report frauds and, for this reason, the fraudsters avoid them. In addition, the risk of adults falling victim to a fraud is three times lower than the risk for young people. Van Wyk and Benson (1997) declared that young people are more willing to be victims of financial frauds because scammers believe that young people tend to take more risks. Schoepfer and Piquero (2009) highlighted the tendency of young people to take greater financial risks, further increases their propensity to become victims. Young people tend to fall victim to proposals for business opportunities and work from home, mysticism, and network frauds. On the other hand, the elderly population tends to fall victim to the frauds regarding high-risk investments and providers of services that come at home (Button et al., 2009). Deliema et al. 
(2020) indicated that the odds of being victim of investment fraud grow by 4% with each year that age increases. Rebovich and Layne (2000) have interviewed 1,169 people, and 60% of them believed that older people are the most likely to become victims of frauds. Table 1 summarizes these age profiles: Lokanan (2014; period studied 1984-2008), Lee and Soberon-Ferrer (1997; 1993) and Shichor et al. (1996; 1994) found that most victims were old people, while Van Wyk and Benson (1997; 1989-1994) and Titus et al. (1995; 1990-1991) found that most victims were young people. In terms of gender, men tend to fall victim to the foreign making frauds, network frauds, high-risk investments and land investments, while women are vulnerable to network scams, health and weight loss products that promise miracles, mysticism scams, and false career advancement offers (Button et al., 2009). However, almost all scholars found that most of the victims of financial frauds are male (Table 2). Table 2 shows that Lokanan (2014; period studied 1984-2008) found mostly male victims, whereas Lee and Soberon-Ferrer (1997; 1993) found both male and female victims. Lee and Soberon-Ferrer (1997) found that older women are more vulnerable to becoming victims than older men, but the situation is reversed for younger groups. Generally, education is seen as a factor that influences the tendency of becoming victim because individuals use the skills gained through formal schooling in decision-making (Lee and Soberon-Ferrer, 1997). Burke et al. (2022) indicated that people's willingness to become victims of investment frauds could be reduced through education, especially through online educational interventions. However, the literature is mixed regarding the level of education in profiling victims, and no clear pattern was established across time (Table 3). On the one hand, highly educated persons are more likely to become victims of frauds for several reasons, even if they know how to assess risks better than less educated people do (Titus et al., 1995;Van Wyk and Benson, 1997). One of the reasons is their own perception that they are educated and experts in their field, and they apply this judgment to the fields in which they are not very prepared. They believe that they are protected from fraud because of their intelligence. Kerley and Copes (2002) and Schoepfer and Piquero (2009) extend the analysis, arguing that education is a variable highly associated with crime reporting and suggesting that those with higher education levels are more likely to report fraud to authorities. Copes et al. (2001) argue that the decision of reporting a fraud is influenced by factors such as level of education, marital status, age and whether the offender was a stranger to the victim. On the other hand, some scholars indicated that people with a low level of education have a higher risk of becoming victims of financial frauds. Lee and Soberon-Ferrer (1997) found that the level of vulnerability decreases as the education and income level increase. According to marital status, it seems that the literature is also divided, regardless of the period studied (Table 4). Some scholars argue that married people are more vulnerable to fall victim, considering that the first victims of someone affected by a pyramid fraud will be the victim's family and friends, who are unaware of the deception. The most intimate circle is usually the most prone to extending the base of the pyramid. Generally, most of the victims are small savers, looking for an alternative for investing their money, relying on advice from family and friends (Shadel and Schweitzer-Pak, 2007).
On the other hand, single people are more vulnerable to becoming victims than married ones, considering the social isolation and feelings of loneliness (Kadoya et al., 2021). There are fewer studies regarding the professional status than other demographic features (Table 5). However, most studies indicated that most victims are employed. Shadel and Schweitzer-Pak (2007) conducted two surveys in two different years, with contrary findings. In the first survey, most victims were retired or unemployed, while the second was in accordance with most other studies. The level of cooperation Researchers have investigated if there is cooperation or facilitation from the victim in the frauds and have tried to create a profile of the degree of cooperation. It is necessary to emphasize that this discussion seeks to draw attention to the role that victims can have in assisting fraudsters and developing frauds, not to invoke moral judgments about victim blame. In this perspective, there are ethical reasons for questioning the status of victims for those investors with a high degree of cooperation with the offenders. Regarding this issue, the opinions are divided. On the one hand, there are scholars who believe that victims are complicit in the fraud and their irresponsible attitude determines a full collaboration of victims with criminals in the development of frauds (Delord-Raynal, 1983;Titus and Gover, 2001;Button et al., 2009). Delord-Raynal (1983) believes that victims act out of greed and see fraudsters as accomplices who will help them to achieve gains. She created a link between the cooperation of victims with fraudsters and the tendency to report frauds. While victims avoid reporting frauds because of shame over being deceived, another reason can be the fear of exposing the victim's dishonest intentions. Titus and Gover (2001) define victims as "careless." For example, among victims of identity theft it was found that some threw bank reports in the trash, uploaded personal information to social networks and did not secure their personal computer. Others have even been warned by the bank that they are about to make a transfer to an unreliable source, but have chosen to ignore the warning (Button et al., 2009). On the other hand, given that financial crimes are committed using persuasion tactics, rather than force, many victims feel guilty about having fallen victim and are even perceived as such by others. Authorities often perceive victims as sharing the blame with the perpetrators (Shichor et al., 2001;Levi, 2008). Van Wyk and Benson (1997) place some of the responsibility on the victim, claiming that a decent person will not fall prey to a fraudster. On the contrary, Harvey et al. (2014) have argued that victims are not responsible for frauds committed against them. However, it is not possible to generalize these studies, because the types of fraud are varied and so are the types of victims. Thus, some scholars argue that there are different degrees of involvement by victims. Titus and Gover (2001) present three levels of involvement: significant cooperation, partial cooperation, and lack of cooperation. Victims in the non-cooperation group are defined as having the lowest degree of involvement.
An example of a victim belonging to this group is a company manager who is unaware that his personal details have been stolen and that false loan applications have been filed on his behalf. Victims in a partial collaboration group are defined as having a higher degree of involvement than the previous group. They cooperate with the crook, but passively. For example, victims will provide personal details in phishing messages or be persuaded to buy worthless shares following a random phone call. Victims in a significant collaboration group are defined as having the highest degree of involvement. To some extent, they even show active involvement. For example, victims who respond to marketing ads or who are actively seeking to invest in programs with a high potential for fraud, such as job opportunities from home that require money to be invested in advance. Titus (1999) detailed these groups by defining types of collaboration. There are victims who initiate contact with the offender (responding to marketing ads, visiting websites, etc.). Some victims share personal information with the offender, while others allow the offender to turn a business relationship into a personal one. Another type of victim shares personal financial information with the offender, while some victims allow the offender to create a version of events for the purpose of fraud. Cialdini (2001) presents principles of influence that may cause even the most intelligent people to cooperate with fraudsters, as follows: • Reciprocity as the desire to return a favor; • Commitment and consistency as a tendency to honor obligations even when the original motive for granting the obligation no longer exists; • Social proof as a desire to imitate those we trust; • Authority as the tendency to obey authoritative figures, even if they behave in an unacceptable way; • Affinity as the acceptance of being persuaded when the persuader is a popular figure. Affinity crime is defined as a crime in which there is a certain affinity between victims and fraudsters on ethnic, professional or religious grounds, or a common social circle. The prevailing opinion is that a person will not deceive people with whom he has an affinity, and therefore the offender exploits the trust created toward him (Springer, 2020). Other relevant characteristics in victim profiling The degree of awareness Profiles according to the degree of awareness of fraud consider four levels of awareness (Button et al., 2009): • victims who are not even aware of being scammed; • victims who are aware of being scammed, but choose not to report it to the authorities; • victims who are aware of this and report to authorities; • victims who find it difficult to believe that it was indeed a scam. Many people are unaware that they were victims of a fraud until they discover it following a request from the authorities. This is valid for certain types of fraud, for example, in illegal lottery games that are very likely to win anyway, and in charitable organizations that rely on the victim's tendency to donate without making sure that the request came from a legal organization (Fraud Advisory Panel, 2006). However, in most cases the victims will find out that it is a fraud. Some of them will report to the authorities and some will avoid it.
Almost 59% of the victims interviewed by Rebovich and Layne (2000) chose not to report falling victim, while 41% of victims from the EU decided to report the fraud to no one (European Commission, 2020). Kerley and Copes (2002) suggested that only 10% of victims report frauds to the police, while the figure increases to 22% for victims involved in multiple fraud attempts. Scholars found that the tendency not to report frauds has various explanations. Button et al. (2009) mentioned confusion, the ambiguity of fraud, and embarrassment as factors behind non-reporting. Some victims are confused as to where they should report frauds; others believed that they were victims of an unfortunate investment rather than fraud. Mason and Benson (1996) continued the list of factors and specified that the low reporting rate by victims is linked to perceptions of responsibility, level of loss, social networks, and the justice process. Victims avoid reporting frauds if they feel embarrassed, blame themselves for falling victim, or tend to believe that they share responsibility in part or in full with the fraudsters. Some victims avoid reporting because they want to hide the losses incurred from their social networks (family and friends). On the other hand, the social network's attitudes toward the fraud may encourage victims to report frauds or not. In addition, victims avoid reporting when they incurred small losses or when they believed that the criminal justice process is untrustworthy (Titus, 1999). Copes et al. (2001) and the European Commission (2020) suggested that the tendency of victims to report frauds increases as the losses incurred grow. Regarding the criminal justice process, Reisig and Holtfreter (2007) highlighted that less than 50% of American victims trust that the authorities will successfully solve the fraud cases in which they are involved. Bolimos and Choo (2017) indicated that victims between 2008 and 2013 reported more than 57% of online frauds. Copes et al. (2001) found that the victims' tendency to involve the law in pursuing frauds is linked to morphology and cultural context, factors derived from Black's theory of law. According to this, morphology suggests that strangers are more willing to use the law than natives, while people who are more educated are more likely to involve the law in fraud detection. The fourth group consists of victims who do not believe that this is indeed a fraud. These are chronic victims because they respond to repeated requests by the fraudsters (Button et al., 2009). They believe in the legality of the company, invest their money, and try to induce others to follow them with the illusion that the investment is beneficial for all the victims involved (Spalek, 2016).

Losses incurred and frequency of falling victim

Other profiles can be determined according to the extent of the losses incurred and the number of times people fell victim to a fraud, grouping them into chronic, large-scale, low-volume, and unidentified victims. The number of chronic victims is low, and mass marketing often harms them. They lose large parts of their income or savings, usually in recurring losses of relatively low sums of money (Shichor et al., 2001). On the other hand, the number of large-scale victims is high; they usually fall victim once or a few times and lose large sums of money. Some of them will report the fraud and some will avoid it.
The highest number of all victims are the low-volume and unidentified victims, of whom some are aware of having fallen victim to a fraud, while those who have lost very small sums of money are unaware (Button et al., 2009). In the survey conducted by Titus et al. (1995), 58% of respondents declared that they had been victims one or more times during their life. Moreover, during the past 12 months, 31% of them declared falling victim one or multiple times. Pascoe et al. (2006) found that more than 75% of victims had experienced multiple fraud attempts. As regards the losses incurred, there were victims who did not suffer financial losses despite being subjected to fraud (Titus et al., 1995; Harvey et al., 2014). However, most victims reported losses in various currencies (Table 6). In the survey conducted by Zunzunegui et al. (2017), the average losses incurred by victims were 60,660 euros. For victims interviewed by Kerley and Copes (2002), the average amount lost is almost $270 and exceeds $750 for repeat victims. Bolimos and Choo (2017) indicated that the average amount lost was $4,000.

Profiles of victims in developing countries

Victims in developing countries are presented considering their contextual circumstances and their main profiles. Several countries were selected from the literature, such as Bolivia, China, Colombia, India, Malaysia, and Nigeria.

Contextual circumstances

The main circumstances that lead to the increase in financial fraud victims are related to economic hardship. Many victims are married and, considering both financial and family circumstances, they need money to support their families in the short term (Dreber et al., 2009; Apicella et al., 2014). Indian victims are looking for extra money to improve their standard of living or to build their financial base to improve their economic conditions. In addition, the economic conditions and poverty encourage the existence of financial frauds in Nigeria, as victims are looking to improve their standard of living (Jack and Ibekwe, 2018). In contrast, lack of empathy for other investors and greed are factors that support the tendency to be attracted by financial frauds in China, Nigeria, and Latin America. Financial frauds have begun to thrive in China in recent decades, to the extent that the authorities have defined the phenomenon as a real threat to social order. One of the reasons for the blossoming of the frauds is loose regulation of financial entities operating online, alongside greed and a desire to get rich that have become a major driving force in Chinese society (Dor, 2017). Despite the poor economic conditions identified for Nigeria, Obamuyi et al. (2018) have associated greed with one of the main motivations for the rise of financial schemes. The expectation of earning high returns in a short period was one of the motivating factors for participating in financial schemes, along with the low and deteriorating living standards in Nigeria. When many participants received high short-term returns, others were fascinated by the returns and joined the fraud. Existing and new investors focused on the returns and did not question how the high returns were realized. At the same time, most of the victims in countries from Latin America, such as Bolivia or Colombia, are low-income and small savers, who are looking for an alternative investment for their savings other than the ones offered by local banks (Heinemann and Verner, 2006).
In many cases, they do not pay attention to the qualifications of whoever is going to manage their money, nor to whether the businesses or investments in which they would be participating are legally registered, as long as what was promised is fulfilled. Often, the income obtained by the victims from the fraudster is "reinvested" in the same scheme, since trust in the organizer increases once the latter pays what had been agreed upon. However, some investors remain committed to recruiting new participants, for which they receive a commission or other benefits (Monroe et al., 2010). Since what matters to them is making money, they also have no misgivings about recruiting other people in order to benefit from the commissions, without exposing the risks to the new investors. These two opposing situations raise a number of ethical issues, while, unfortunately, no ethical sensibility is found in the literature. Some victims act with naivety and gullibility out of the pure desire to advance from a very poor condition, hoping that their decisions will bring extra money in a short time. Other victims act out of a lack of empathy and greed. Even if some investors or victims know about the financial fraud in which they are involved, they choose not to address the warning lights as long as they receive their pay. Some people invest in fraudulent schemes even though they suspect it is a scam or know it for sure, as they still hope to profit from the investments of others, by recruiting additional investors and knowing that their money will flow to current investors. Owing to their greed and their desire to maintain professional and economic status, some investors may falsify data and take illegal actions for fear of losing their status and wealth. The highest level of lack of empathy is reflected when they try to recruit family, friends, or acquaintances to participate in the fraud (Frankel, 2012). They imitate the behavior of fraudsters, being able to endanger the safety of those around them, even their family, out of the desire to increase their wealth. They act ignoring the ethical consequences arising from their behavior. Regarding financial social networks, most victims in developing countries decided to invest in financial schemes after being convinced by a family member, acquaintance, or friend that they trusted. In countries from Latin America, victims usually rely on the advice of relatives or close friends (Heinemann and Verner, 2006). In Malaysia, some investors have become victims through close ties to members of an exclusive group. Victims, such as retirees, are often members of local religious groups, local neighborhoods, and other hobby or personal interest groups. These victims trusted the fraudster because of his good behavior and the long relationship between them (Piaw et al., 2019). In Nigeria, the participation of victims in investment schemes includes trusting the recommendation of a friend and charities provided by the fraudsters. Moreover, some churches encouraged their members to invest in financial programs that were later found to be fraudulent (Aluko and Olawuni, 2021). One of the main psychological factors that supports the tendency to become a financial fraud victim in developing countries is the level of financial knowledge. One of the reasons behind the popularity of financial frauds among the people of India is the low level of financial awareness and financial literacy within the population.
Moreover, over half of agricultural households in India are economically excluded from formal financial institutions. The same situation holds for developing countries in Latin America. Most financial scheme operators seem to take advantage of this loophole and offer a simple investment process, promising a high and easy return to people in agricultural communities who have no access to banking facilities (Heinemann and Verner, 2006). Two types of victims can be identified in Malaysia according to financial education. Many victims in Malaysia lack financial awareness and are easily defrauded. They have no basic understanding of financing and choose to invest in financial schemes out of unfamiliarity and naivety. However, there are also victims who have some experience and knowledge in financial matters and have joined the financial schemes out of greed (Piaw et al., 2019). The lack of a sound legal and regulatory environment contributes to the operation of financial schemes in African economies, a case that is also valid for Nigeria (Jack and Ibekwe, 2018). In China, individuals are easy prey for frauds because the Chinese markets opened only in the 1990s, and therefore the population has little experience in money management, familiarity with financial risks, and the ability to choose financial products (Dor, 2017). In addition, an increased level of financial literacy acts as a defending factor against becoming a victim of financial fraud. Beyond financial literacy, risk orientation is emphasized in predicting financial fraud victims in China, who are more likely to classify themselves as optimistic with a high tendency toward risk-taking. This trend is particularly pronounced among those who have fallen into a fraud more than once (Cheng, 2016).

Socio-demographic features for developing countries

From an age perspective, victims in developing countries are either young or old. For China, the results are mixed. Most of the Indian and Nigerian victims are young to middle-aged, considering that the use of the internet attracted many young victims, especially those interested in cryptocurrencies. On the other hand, most Malaysian victims are those who have expected or have recently had a windfall, such as older people. Windfall wins typically include large sums of money given to recently retired employees through a mandatory savings plan known as the Employee Provident Fund. New retirees are attracted to participating in financial schemes for two reasons. First, the expected returns would serve as a source of lucrative income, and second, retirees would feel that putting a small portion of the windfall into an investment plan is not risky since they still have plenty of cash left over (Piaw et al., 2019). In Bolivia, the potential victims of financial schemes are retirees with low income and a large marginal capacity to save. The victims invest their capital in different pyramid companies, and they start from a small amount of savings (Heinemann and Verner, 2006). There are two common versions of financial scams in China: one in which investors are recruited through the promise of a quick profit without risk, provided they recruit additional investors, and another in which young people are taken captive and forced to recruit investors. This situation among young people is causing a stir on social media in China due to a number of suicides that have occurred within its framework (Dor, 2017).
On the other hand, people in their 60s in China are more likely to be victims of financial frauds, considering that many Chinese think of boosting their retirement income by investing in financial schemes. The average age is close to the retirement age of most Chinese people, suggesting that victims on the verge of retiring invested more money in financial schemes and tended to invest more times than younger people. People on the verge of retiring or retirees tend to feel economically pressured as they anticipate expensive medical outlays and increased living expenses. The frustration of people in this age group translates into a strong predisposition to find lucrative investments in the short term (Cheng, 2016). In gender terms, most Indian victims were married men, given that the male members of the family make financial decisions. In contrast, most of the Chinese victims of financial schemes were females, because in many Chinese families women take care of managing the family's financial affairs. However, the total amount invested by the women in financial schemes was almost equal to the total amount invested by the men. The women tended, though, to invest multiple times in small amounts, while men were more liable to invest a large amount at once. This finding implies that men appear to be more amenable to risk-taking than women, who took small steps in their investment decisions (Cheng, 2016). In terms of education and professional status, most of the victims in India are usually employed and have a formal education (academic degree or higher). Educated people are more prone to fall victim to financial fraud, considering that recent frauds in India were based on cryptocurrencies. Therefore, only those who read about cryptocurrencies tried to invest in them. In Bolivia, victims are generally small savers who were looking for investment alternatives for their small savings. Unemployed workers on minimum wages are potential victims (Heinemann and Verner, 2006). In contrast, in China, people with higher socioeconomic status tend to be more prone to be deceived (Cheng, 2016).

Discussion

Financial fraud has increased around the world, having in recent times a greater capacity for expansion due to economic insecurity and lack of financial knowledge. Victims are attracted by financial frauds considering their contextual circumstances (financial and family) and their psychological underpinnings. While the financial circumstances refer to available financial resources, financial situation, and social networks, the family ones consider the pressure to secure the family's financial future. On the other hand, the tendency to fall victim to frauds is related to gullibility, risk tolerance, the level of self-control, the level of prior financial education, the degree of discernment, and character traits. The victims' profiles of financial frauds can be determined in different terms. From a socio-demographic perspective, the most used features in profiling victims are age, gender, education, and marital and professional statuses. The literature is mixed regarding age, education, and marital status. Scholars found that both younger and elderly people, married or single, with high or low education, are vulnerable to being victims of financial frauds. However, in recent years, studies revealed a tendency to consider most victims of frauds to be old people.
In terms of gender and professional status, most scholars agreed that most victims are male and employed. Therefore, male people, regardless of age, employed, married or single, with high or low education, can form a complete profile of the victim from a demographic point of view. In terms of the level of cooperation from the victim in the frauds, there are scholars who believe that victims share blame in part or in full with the fraudsters. However, we believe that deception is more a fault of the fraudsters, while victims were involved in apparently profitable processes that later turned out to be fraudulent. Even if there are people who became victims because of their lack of attention or responsibility toward their money or personal and family data, these victims cannot be blamed for cooperating with fraudsters. In the worst case, it can be added that their irresponsible actions facilitated the fraudulent process, while the fraudsters took advantage of the victim's weakness, but this does not make them as guilty as the fraudsters. Therefore, from our perspective, there is no doubt that the fraudsters are responsible for choosing their targets and for manipulating the victims. In terms of the degree of awareness, the profile of victims varies from people who are unaware of being victims to individuals who do not believe they have been scammed. However, victims have various motives for choosing whether to report frauds, some of them being related to the frequency of falling victim and the losses incurred. As regards the victims of financial frauds from developing countries, there are different profiles. In India, most victims are young people, especially married men, looking for extra returns in order to improve their living conditions. The same purpose is met in China, but the main types of victims are females or people close to retirement age. A similar profile in terms of retired people is valid for Malaysia and Bolivia, where older people are more vulnerable to becoming victims of financial frauds. Most Asian victims have a low level of financial education, being attracted to financial schemes due to unfamiliarity, naivety, and the desire to escape from poverty. Despite the desire to improve their living conditions, some victims from China, Latin America, and Nigeria are attracted to financial frauds by greed and lack of empathy, without thinking of the further financial, emotional, and ethical consequences of their unfair behavior. Another finding of this paper is that most of the victims decided to invest after being convinced by family, acquaintances, or friends that they trusted. In some countries, there is herd behavior as a tendency of victims to follow others in their circle. In Malaysia and Nigeria, some of the victims are attracted through affinity in terms of religion, ethnicity, or personal interest. The number of victims can be decreased by strengthening government regulations and by increasing population information and prevention actions, especially among low-income people. Moreover, countries must find tools to improve the living conditions of their citizens and to reduce the negative economic and social effects in times of economic uncertainty, so that potential victims are no longer interested in fraudulent ways of winning. Future research may involve a fraud victimization survey for a representative sample of specific cases from developing countries.

Author contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
Funding

The authors are thankful to the Romanian Ministry of Research, Innovation and Digitization, within Program 1 - Development of the national RD system, Subprogram 1.2 - Institutional Performance - RDI excellence funding projects, contract no. 11PFE/30.12.2021, for financial support.
2022-10-11T13:52:42.249Z
2022-10-11T00:00:00.000
{ "year": 2022, "sha1": "8d9f8ee9ca4e4ece3c50a7bf3c19e9c9a03c6431", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "8d9f8ee9ca4e4ece3c50a7bf3c19e9c9a03c6431", "s2fieldsofstudy": [ "Business", "Economics", "Law" ], "extfieldsofstudy": [ "Medicine" ] }
116321170
pes2o/s2orc
v3-fos-license
On the Temperature and Lubricant Film Thickness Distribution in EHL Contacts with Arbitrary Entrainment

An understanding of the mechanisms responsible for elastohydrodynamic lubricant film formation under high sliding conditions is necessary to increase the durability of machine parts. This work combines thin-film colorimetric interferometry for lubricant film thickness measurement and infrared microscopy for in-depth temperature mapping through the contact. The results describe the effect of operating conditions such as speed, slide-to-roll ratio, ambient temperature, and sliding direction on lubricant film thickness and temperature distribution. The film thickness data show how sensitive the film shape is to operating conditions when thermal effects are significant, while the temperature profiles provide an explanation of this behavior.

Introduction

Elastohydrodynamic lubrication (EHL) is a regime of fluid-film lubrication of concentrated contacts, where a high contact pressure causes a significant change in lubricant viscosity and surface deformations are of the same order as the lubricant film thickness. Many machine elements-such as gears, rolling element bearings, cam/follower systems, etc.-are designed to operate in this regime thanks to relatively low friction. EHL film thickness prediction plays an important role in the design of high performance tribological systems. A classic isothermal EHL theory was established based on the numerical work of Hamrock and Dowson [1] together with the experimental work of Gohar and Cameron [2]. This theory has been widely applied in engineering practice since the 1960s. Subsequently, the focus has been extended to rolling-sliding conditions [3,4]. It was stated that additional shearing in the lubricant contributes to an increase in contact temperature and causes other shear-induced effects such as shear thinning [5,6]. These thermal and non-Newtonian effects lead to a reduction in lubricant film thickness [3-6]. For relatively low slide-to-roll ratios this reduction is uniform; however, under high sliding conditions the lubricant film shape changes significantly.

One of the most significant phenomena is a local increase in film thickness in the central zone, the so-called 'dimple'. This phenomenon was first described by Kaneta and his co-workers under pure sliding conditions [7,8]. Using a ball-on-disc tribometer and optical interferometry, they found that this unpredictable film shape occurs when a glass disc slides against a steel ball but not in the opposite case. These findings contradict the commonly accepted view that film thickness is basically dependent on conditions in the inlet region. In recent years, this effect was studied experimentally by several authors [7-13]. It has been found that the effect depends mainly on the kinematic conditions, material properties, and rheology of the lubricant. Further study suggested that mainly the thermal conductivity of the contacting surfaces is of great importance in dimple formation [10,14].
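As a baseline for the isothermal predictions referenced above, the following sketch evaluates the central and minimum film thickness fits commonly attributed to Hamrock and Dowson; it is not taken from [1] verbatim, the coefficients are quoted as they usually appear in the literature, and the function and parameter names are our own choices, so the values should be checked against the original reference before any design use.

```python
# Hedged sketch of the commonly quoted Hamrock-Dowson film thickness fits.
import math

def hamrock_dowson(u_e, eta0, alpha, E_red, R, F, k=1.0):
    """u_e: entrainment speed [m/s]; eta0: inlet viscosity [Pa s]; alpha:
    pressure-viscosity coefficient [1/Pa]; E_red: reduced elastic modulus [Pa];
    R: reduced radius [m]; F: load [N]; k: ellipticity (1 for ball-on-flat)."""
    U = eta0 * u_e / (E_red * R)   # dimensionless speed parameter
    G = alpha * E_red              # dimensionless materials parameter
    W = F / (E_red * R ** 2)       # dimensionless load parameter
    h_c = 2.69 * U**0.67 * G**0.53 * W**-0.067 * (1 - 0.61 * math.exp(-0.73 * k)) * R
    h_min = 3.63 * U**0.68 * G**0.49 * W**-0.073 * (1 - math.exp(-0.68 * k)) * R
    return h_c, h_min
```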
During the last decades, several models have been proposed to explain the dimple phenomenon. Some of them assume surface deformation under isothermal conditions, such as the "squeeze film effect" [7] and the "surface-stretch mechanism" [15]. Others expect non-Newtonian behavior of the lubricant, such as "boundary slippage" [16], "limiting shear stress" [17], or the "plug flow model" of solidified lubricant [18]. Due to the high shear rates in EHL, thermal models seem to be much more relevant. Nowadays, one of the most accepted thermal models is the temperature-viscosity wedge effect [10-12].

Further insight into the problem has been made possible by the development of temperature mapping using infrared microscopy based on the approach developed by Ausherman et al. [19]. The IR technique was applied by Spikes et al. [20,21] and further improved by Reddyhoff et al. [22,23], Le Rouznic et al. [24], and Lu et al. [25]. Yagi, Nakahara et al. [26-28] used infrared microscopy for temperature mapping of the contact under high sliding conditions where the dimple occurs. They showed that the dimple phenomenon could be explained by the temperature-viscosity wedge effect. Moreover, they reported an increase in oil temperature at the top of the dimple. Similar results were published by Bruyer [29], who focused on numerical simulation of a line contact using Navier-Stokes equations. Other works on numerical simulation of EHL contacts use modified Reynolds equations or CFD models [30-33].

Recently, Omasta et al. [34,35] studied the lubricant film thickness distribution under conditions where the rolling and the sliding velocity have different directions. They found that the direction significantly influences the film shape once the sliding speed and the related thermal effects are large enough. An asymmetry of the film shape under high sliding conditions was found although the kinematic conditions were symmetrical. These conditions are interesting especially when sliding is perpendicular to rolling. This situation could allow the effects of sliding and rolling speed to be distinguished and provide better evidence of the mechanism responsible for the film shape changes connected with sliding. Nevertheless, temperature mapping under these conditions has not been performed yet.

The aim of this paper is to describe the effect of sliding velocity on lubricant film thickness with respect to the temperature distribution in the EHL contact. Attention is paid to the influence of the sliding speed direction. For these purposes, the IR technique for temperature mapping is implemented and used together with lubricant film thickness measurement.

Optical Tribometer

The experiments were performed with the optical ball-on-disc tribometer (Brno University of Technology, Brno, Czech Republic) illustrated in Figure 1. A sapphire disc with a diameter of 140 mm and a thickness of 5 mm is loaded using a dead-weight lever mechanism against an AISI 52100 steel ball with a diameter of 25.4 mm. Both the ball and the disc are independently driven by electric motors to provide a rolling-sliding contact. The ball and its drive can be inclined around the contact normal in the range from 0° to 90°. This angle is referred to as the δ angle, as indicated in Figure 1. The inclination results in different directions of the ball and the disc velocities. By reversing the direction of rotation of the ball, angles between the surface velocities of 90° to 180° can be achieved.
Entrainment velocity is expressed as half of the vector sum of the surface velocities,
$$\vec{u}_e = \tfrac{1}{2}(\vec{u}_b + \vec{u}_d),$$
where $\vec{u}_b$ and $\vec{u}_d$ are the velocities of the ball and the disc, respectively. Sliding velocity is defined as the vector difference
$$\vec{u}_s = \vec{u}_b - \vec{u}_d.$$
The slide-to-roll ratio (SRR) is expressed as the ratio between the magnitudes of the sliding and entrainment velocities,
$$\mathrm{SRR} = |\vec{u}_s| / |\vec{u}_e|.$$
The angle between the sliding and entrainment velocities is defined as the ε angle. For ε = 0°, entrainment and sliding have the same direction. For ε = 90°, sliding is perpendicular to entrainment. The velocities are graphically indicated in Figure 2. Interferograms and film thickness and temperature results in this study are oriented so that the entrainment direction corresponds to the x-axis.

The contact is lubricated with FVA 4 oil by dipping the ball in the lubricant reservoir. This lubricant is an ISO VG 460 base mineral oil that is used as a reference gear oil [36]. The properties of the ball and disc material and the oil are shown in Table 1.

Measurement Techniques

This work utilizes two experimental techniques: thin film colorimetric interferometry (TFCI) for lubricant film thickness measurement and infrared (IR) microscopy for temperature mapping. The TFCI method developed by Hartl et al. [37] uses white light interferometry with a color matching algorithm and a CIELAB color-film thickness calibration. The contact between the steel ball and the sapphire disc with a thin chromium layer is captured by a CCD camera attached to an industrial microscope. A halogen lamp is used as the light source. For the measurement, calibration, and evaluation of interferograms, the AChILES software (Brno University of Technology) was used.

IR microscopy senses electromagnetic waves in the IR radiation range. In this study, a FLIR SC5000 camera (FLIR Systems, Wilsonville, OR, USA) with an IR macro objective was used. Determination of the temperature distribution of both contacting surfaces and the oil film is enabled by a combination of the measurement procedure and a robust calibration.
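As an illustration of these kinematic definitions, the following sketch (not part of the original text) computes the entrainment velocity, sliding velocity, SRR, and ε angle from the ball and disc surface speeds; the conventions that δ is measured from the disc velocity direction and that a negative speed denotes reversed rotation are assumptions made here.

```python
# Sketch of the contact kinematics: entrainment, sliding, SRR, epsilon angle.
import numpy as np

def contact_kinematics(u_ball, u_disc, delta_deg):
    """u_ball, u_disc: surface speed magnitudes [mm/s] (negative = reversed);
    delta_deg: ball drive inclination about the contact normal [deg]."""
    delta = np.radians(delta_deg)
    u_d = np.array([u_disc, 0.0])                         # disc moves along x
    u_b = u_ball * np.array([np.cos(delta), np.sin(delta)])
    u_e = 0.5 * (u_b + u_d)                               # entrainment velocity
    u_s = u_b - u_d                                       # sliding velocity
    srr = np.linalg.norm(u_s) / np.linalg.norm(u_e)
    cos_eps = np.dot(u_s, u_e) / (np.linalg.norm(u_s) * np.linalg.norm(u_e))
    eps_deg = np.degrees(np.arccos(np.clip(cos_eps, -1.0, 1.0)))
    return u_e, u_s, srr, eps_deg

# Example: longitudinal sliding (delta = 0) with SRR = 2.5 requires partial
# opposite sliding, e.g. u_b = 2.25*u_e and u_d = -0.25*u_e for u_e = 100 mm/s.
u_e, u_s, srr, eps = contact_kinematics(u_ball=225.0, u_disc=-25.0, delta_deg=0.0)
print(srr, eps)   # -> 2.5, 0.0
```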
The scheme of the measurement procedure is depicted in Figure 3. The radiation of the contact is composed of the radiation of the steel ball, the disc, and the lubricant. Two band filters, referred to as the L and S filters, and discs with and without a chromium layer were used to separate the individual components of radiation. The L filter transmits the radiation of the steel ball only, while the S filter transmits the radiation of both the ball surface and the lubricant volume. In both cases, a sapphire disc without coating is used. When the L filter is used with the sapphire disc having a thick chromium layer on the contact side, only the radiation of the disc surface is dominant.

Since the IR camera data are expressed as counts, calibration under the given temperatures and in the various configurations is required. The calibration was performed using a separate device, where the ball is loaded against a sapphire window with and without a chromium layer. The ball in the static Hertzian contact is heated up to 250 °C and the contact temperature is measured using a precise thermocouple. The results of the calibration are graphs showing IR radiation (expressed as a digital level detected by the camera sensor) as a function of contact temperature. These graphs include the various combinations of disc and filter that are used during the measurement to distinguish the ball, the disc, and the oil temperature. The calibration also includes determination of the effect of lubricant film thickness on its radiation. For this purpose, the oil radiation and the corresponding film thickness at a given temperature are evaluated in the gap around the static Hertzian contact. This film-thickness effect is expressed as a hyperbolic function, which is in accordance with Lu et al. [25]. For the calibration, measurement, and evaluation of temperature maps, the ATILA software (Brno University of Technology) was developed and used.
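In schematic form, the two calibration ingredients described above could look like the sketch below; this is not the ATILA implementation, and the hyperbolic form a + b/(h + c), the initial guesses, and all names are assumptions made purely for illustration.

```python
# Schematic sketch of (i) a per-configuration counts-to-temperature curve and
# (ii) a hyperbolic fit of oil radiation versus film thickness.
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import curve_fit

def counts_to_temperature(cal_counts, cal_temps):
    """Invert a monotone calibration curve (digital level vs. temperature in C)
    measured for one disc/filter configuration on the heated static contact."""
    return interp1d(cal_counts, cal_temps, kind="cubic", fill_value="extrapolate")

def hyperbolic(h, a, b, c):
    # assumed shape of the film-thickness effect on the oil radiation
    return a + b / (h + c)

def fit_thickness_effect(h_cal, counts_cal):
    """Fit the hyperbolic thickness dependence from the oil radiation and film
    thickness evaluated in the gap around the static Hertzian contact."""
    popt, _ = curve_fit(hyperbolic, h_cal, counts_cal,
                        p0=(float(np.mean(counts_cal)), 1.0, 1.0))
    return popt  # (a, b, c), later used to compensate the oil signal
```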
Experimental Conditions

A series of experiments was designed to address the effect of speed, slide-to-roll ratio (SRR), ambient temperature, and the inclination between the rolling and sliding velocities. All the experiments were carried out at a load F = 76 N, which corresponds to a Hertzian pressure p_H = 1.2 GPa. The ambient temperature t was set to 30 °C and 60 °C. The temperature was controlled by heating the tribometer body while the working area of the tribometer was covered and the tribometer ran under pure rolling. The measurements themselves followed this procedure once the inlet temperature as well as the surface temperature of the contacting bodies stabilized at the required level. This procedure ensures an uncertainty in the surface temperature of the contacting bodies of about 1 °C at 60 °C. Three levels of entrainment speed u_e were selected: 50, 100, and 150 mm/s at 30 °C. At the temperature of 60 °C the speeds were recalculated with respect to the changed oil viscosity, so that the product of entrainment speed and oil viscosity (u_e × η_0) at a given temperature is constant. Thus, the lubricant film thickness under pure rolling should be the same according to the isothermal EHL theory. The resulting speeds for 60 °C are 317, 634, and 951 mm/s. SRR values of 2.5 and 5 were chosen to bring out the thermal effects. In both cases, there is partial opposite sliding, which means that the surfaces move against each other. SRR is positive, so the speed of the ball is higher than the speed of the disc. To show the effect of the angle between the entrainment and sliding velocities, two extreme cases were investigated: an ε angle of 0°, which represents coincident rolling and sliding, and an ε angle of 90°, where the entrainment velocity is perpendicular to the sliding velocity while the magnitudes of the surface velocities are the same. These conditions are hereinafter referred to as longitudinal and lateral sliding.
Lubricant Film Thickness Data

Optical interferograms of all the conditions analyzed in this study are summarized in Figure 4. These interferograms show the effect of speed, ambient temperature, SRR, and the rolling to sliding inclination. For all the conditions, the lubricant film thickness data are evaluated in the graphs in Figure 5. These three graphs show the lubricant film thickness in the dimple, the inlet film thickness, and the minimum film thickness as a function of the operating conditions. The dimple film thickness represents the largest film thickness that occurs in the dimple in the central zone. The inlet film thickness should represent a central film thickness unaffected by the dimple; for a dimpled film shape this is the smallest film thickness in the inlet or central part of the contact at y = 0 (the axis of symmetry of the interferograms). The minimum film thickness is the global minimum, which usually occurs in the side lobes.

The results show that the overall lubricant film thickness increases with speed. The effect of the other parameters is, however, much less predictable. Except for low speed and low SRR at 30 °C, the film shape is significantly affected by the dimple.

The effect of sliding direction is very interesting. For low speed with SRR 2.5 and 30 °C the effect is negligible, and this state could be termed isothermal. As the speed or temperature increases, lateral sliding leads to a thicker film compared to longitudinal sliding. At 60 °C and SRR 2.5 the inlet film thickness is almost double for lateral sliding. There is also a significant increase in minimum film thickness. For SRR 5 the effect is similar in the case of the central film thickness. On the other hand, the minimum film thickness may decrease for lateral sliding as the dimple becomes narrow in the entrainment direction. This is evident at SRR 5 for low speed and 60 °C, where the minimum film thickness under lateral sliding is roughly one-quarter of the thickness for longitudinal sliding.

The effect of SRR is also non-uniform. Generally, an overall film thickness reduction is expected with increasing SRR. This is true for low speed at 30 °C, until the dimple begins to appear. Then the effect depends on the sliding direction and other parameters. When the sliding is lateral, the effect is usually positive for the central film thickness and negative for the minimum film thickness due to the dimple shape. For longitudinal sliding, the effect is rather negative for the lower temperature and positive for the higher temperature. It is also interesting that the minimum film thickness is significantly improved by longitudinal sliding at 60 °C; the minimum film thickness in the side lobes is about twice as large for the higher SRR.

The effect of ambient temperature relates to the speed normalized with respect to the changed viscosity. The speed enhances the thermal phenomena that are responsible for the dimple effect. Therefore, the effect of ambient temperature on lubricant film thickness is mostly positive thanks to the dimple.
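A possible way to extract the three film-thickness metrics defined above from a measured 2D film-thickness map is sketched below; the assumption that the inlet lies at x ≤ 0 and all names are illustrative choices made here, not part of the original evaluation software.

```python
# Sketch: dimple, inlet/central, and minimum film thickness from a map h(x, y).
import numpy as np

def film_metrics(h, x, y, contact_radius):
    """h: 2D array [m], indexed as h[ix, iy]; x, y: coordinate vectors [m];
    x is the entrainment direction and y = 0 the symmetry axis."""
    inside = (x[:, None] ** 2 + y[None, :] ** 2) <= contact_radius ** 2

    # dimple film thickness: largest thickness inside the contact (central zone)
    h_dimple = np.max(np.where(inside, h, -np.inf))

    # inlet/central film thickness: smallest thickness along y = 0 in the inlet
    # and central part of the contact (inlet assumed at x <= 0)
    iy0 = np.argmin(np.abs(y))
    on_axis = (x <= 0) & (np.abs(x) <= contact_radius)
    h_inlet = np.min(h[on_axis, iy0])

    # minimum film thickness: global minimum inside the contact (side lobes)
    h_min = np.min(np.where(inside, h, np.inf))
    return h_dimple, h_inlet, h_min
```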
Temperature Profiles

An analysis of the temperature distribution was made for selected cases summarized in Figure 4. The effect of SRR for the high speed and longitudinal sliding at 30 °C is depicted in the graphs in Figure 6. These graphs describe the film thickness profile and the oil, disc, and ball temperature profiles. The high SRR case increases the oil temperature and the surface temperatures by 30 and 20 degrees, respectively, compared with the low SRR case. Higher SRR also inclines the temperature profiles in the direction of surface movement. Since the ball is faster, the oil temperature is inclined in the direction of the ball. The difference in the inlet zone is approximately 15 °C. For the low SRR, the oil and the disc temperature profiles are nearly symmetrical, while the ball temperature profile is slightly inclined in the direction of its movement. Despite the symmetrical oil temperature profile, a small dimple just before the exit constriction exists.

The effect of ambient temperature and the speed normalized with respect to the changed viscosity is shown in Figure 7, where oil temperature and lubricant film thickness profiles are compared. It is evident that the increase in ambient temperature increases the oil temperature rise, while the shape of the temperature profile remains similar. Despite this, there is a significant change in lubricant film shape at low speed. The dimple occurring at 60 °C is not reflected in the corresponding oil temperature profile. At high speed, the dimple occurs in both cases and the shape of the oil temperature profile is similar, i.e., inclined in the entrainment direction.
The results for lateral sliding are described in Figures 8 and 9. The first one compares film thickness and temperature profiles in the entrainment direction for low and high speed at 30 °C and high SRR. These results can be compared with the corresponding results under longitudinal sliding to assess the effect of sliding velocity inclination. Figure 8a corresponds to Figure 7a and Figure 8b corresponds to Figure 6b. The results for longitudinal sliding are plotted in light colors in Figure 8. At low speed, lateral sliding causes a slightly thicker film with a tiny dimple just before the exit constriction. The oil temperature profile is more symmetrical with respect to the y-axis, and the temperature profiles of the contacting bodies are predominantly coincident.
Under the high speed conditions, lateral sliding leads to a significantly larger lubricant film thickness in the whole profile, with a dimple elongated towards the inlet region. The oil temperature is higher in the central part, while the temperatures of the contacting bodies are slightly lower. The ball temperature profiles for both sliding directions are coincident in the inlet half of the contact; then the temperature under longitudinal sliding is higher and reaches its maximum in the exit half of the contact. The same applies to the disc temperature.

Lubricant film thickness and oil temperature profiles in the lateral direction are compared for the low and the high temperature in Figure 9. At the low temperature, the film thickness and temperature profiles along the sliding direction are symmetrical with respect to the x-axis; however, at the high temperature the minimum film thickness is lower in the disc velocity direction. The same asymmetry occurs also in the oil temperature, where the profile is stretched in the sliding direction.

Discussion

Lubricant film thickness is one of the most important parameters defining the ability of the contact to carry load without direct interaction of the mating surfaces. The data described in Section 3.1 reveal that the dimple phenomenon significantly affects the lubricant film thickness distribution in almost all the cases studied. It is evident that even a small change in operating conditions may increase or decrease lubricant film thickness significantly. For example, the change in SRR from 2.5 to 5 may increase the minimum film thickness by 100%. This finding is of high practical relevance, since the mechanical properties of the sapphire-steel contact are very close to the real steel-steel configuration.
In this study, the influencing parameters are SRR, sliding direction, and ambient temperature; other important parameters are the thermal properties of the contacting bodies. These effects could be considered in the design of machine parts; nevertheless, the current possibilities of lubricant film thickness prediction under these conditions are limited. Although the conditions are specific, they are of great significance in machine components such as cam-tappet mechanisms and retainerless rolling element bearings.

It is obvious that thermal effects play an important role under high sliding conditions. A surface entering the contact area has a low temperature corresponding to the ambient temperature. Then, it is heated predominantly due to compression and viscous work, while the heat generated in the central region depends mainly on the load, the sliding speed, and the lubricant film strength, which is a function of its viscosity as influenced by pressure and temperature. The generated heat is dissipated through conduction to the contacting bodies, so the heat partitioning between the surfaces in the contact conjunction is mainly controlled by their thermal properties. The temperature increase is larger on the surface having the lower thermal conductivity.

The dimple occurrence can be ascribed to the temperature-viscosity wedge effect. This effect assumes an unequal temperature distribution across the film thickness, which results in different viscosity wedges in different parts of the contact. When these wedges move against each other, additional pressure is induced and thus the dimple occurs. This is revealed especially by the profiles of the ball and the disc temperature, which are inclined in the direction of movement. This inclination is insignificant at low speed, where the dimple does not occur. At high speed, this inclination makes a difference between the ball and the disc surface temperature in the contact inlet zone (Figure 6). For lateral sliding, the difference is lower in the entrainment direction (Figure 8).
Nevertheless, there are some findings that deserve a discussion. The results in Figure 7a indicate that dimple formation may not be related to a change in the shape of the oil temperature profile. The two oil temperature profiles in Figure 7a have the same shape and differ only quantitatively, although in the first case there is no dimple while in the second the dimple thickness is 2.5 times larger than the inlet film thickness. Some qualitative contradiction can be found with [10,38], where dimple formation under opposite sliding was also investigated based on temperature measurement. In these works, a very marked rise in the oil temperature in the dimple was observed. In the current work the oil radiation level detected by the IR camera is also much larger in the dimple; however, once the effect of the larger lubricant film thickness in the dimple is compensated based on the calibration curves, the oil temperature profile does not show a steep rise in the dimple. The temperature profiles in the dimple could provide some information about the state of the oil in the dimple. If the oil rotates in the dimple, its temperature should be higher. If the oil is sheared in one or more planes, then the increase may not be significant. If there is boundary slippage only on one of the surfaces, then the temperature of that surface should be significantly higher. A CFD-based solution that considers a continuous change in the oil velocity profile through the film thickness, made in [29], indicates only a moderate and gradual rise in the oil temperature under high SRR. Further research is expected on this topic.

The effect of sliding direction on the lubricant film shape is in qualitative accordance with the previous study [35]. Sliding inclination changes the shape of the dimple. The dimple is produced by the temperature-viscosity wedge action of the counter surfaces; the condition where the surfaces move purely against each other is often called zero entrainment velocity (ZEV) [10,38-41]. The dimple produced under ZEV conditions is usually narrowed in the sliding direction and becomes circular with increasing speed [41]. The dimple thickness under ZEV conditions increases with speed up to a certain limit, from which it decreases.

Under arbitrary rolling-sliding, the entrainment velocity component forces and deforms the dimple in the entrainment direction and changes the heat flow through the contact. Under lateral sliding, temperature gradients appear in the lateral direction, so the difference between the ball and the disc temperature profiles is lower in Figure 8. The surface temperatures under longitudinal sliding are slightly higher. This can be related to the heat flow through the contact, particularly to how the inlet region is heated by the surface moving in the opposite direction. Under lateral sliding, the impact on the inlet viscosity should be smaller.
The asymmetry in the lateral film thickness and oil temperature profiles (Figure 9) can be attributed to the lower thermal conductivity of sapphire compared to steel, as listed in Table 1. This effect is relatively low compared to the glass discs that are usually used in optical tribometers. As the speed and the generated heat increase, the asymmetry is much more pronounced. There is a reasonable explanation in that more heat results in a more significant difference in temperature. Moreover, the thermal conductivity of single crystal sapphire decreases with temperature in the relevant temperature range. The lower thermal conductivity of the disc results in a higher oil temperature in the direction of the disc movement, and the asymmetry is more significant.

Conclusions

This work describes, for the first time, the temperature distribution in a contact with different entrainment and sliding velocity directions. Based on the temperature and lubricant film thickness measurements, the following conclusions can be drawn:
• Under high sliding conditions, lubricant film thickness is very sensitive to the operating conditions due to the dimple phenomenon. In particular, ambient temperature, SRR, and sliding direction could significantly improve the central as well as the minimum lubricant film thickness.
• There is no direct connection between temperature or its gradient and dimple occurrence.
• This study reveals a less significant temperature increase in the dimple than other experimental works.
• Asymmetry in lubricant film thickness under lateral sliding relates to the temperature distribution affected by the different thermal conductivities of the contacting bodies. This observation is in accordance with the temperature-viscosity wedge effect.

Figure 1. Scheme of the ball-on-disc tribometer for IR microscopy and optical interferometry.
Figure 2. Scheme of the ball-on-disc tribometer for IR microscopy and optical interferometry.
Figure 3. Scheme of the radiation components in the contact.
Figure 4. Color interferograms and an approximate color vs. lubricant film thickness scale under various contact conditions.
Figure 5. The graphs showing lubricant film thickness in a dimple, inlet film thickness, and minimum film thickness as a function of operating conditions corresponding to the interferograms in Figure 4.
Figure 9. The graphs showing the effect of ambient temperature on lubricant film thickness and oil temperature in lateral profiles at x = 0 (ε = 90°; p_H = 1.2 GPa; high speed, u_e × η_0 = 126.9 mm·Pa).
Table 1. Mechanical and physical properties of the contacting bodies and the lubricant.
2019-04-16T13:29:07.951Z
2018-11-15T00:00:00.000
{ "year": 2018, "sha1": "6a35875ae6fa854516e87d860a7a8301014fe02e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4442/6/4/101/pdf?version=1542264773", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "6a35875ae6fa854516e87d860a7a8301014fe02e", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
5977306
pes2o/s2orc
v3-fos-license
Structure-preserving low multilinear rank approximation of antisymmetric tensors This paper is concerned with low multilinear rank approximations to antisymmetric tensors, that is, multivariate arrays for which the entries change sign when permuting pairs of indices. We show which ranks can be attained by an antisymmetric tensor and discuss the adaption of existing approximation algorithms to preserve antisymmetry, most notably a Jacobi algorithm. Particular attention is paid to the important special case when choosing the rank equal to the order of the tensor. It is shown that this case can be addressed with an unstructured rank-$1$ approximation. This allows for the straightforward application of the higher-order power method, for which we discuss effective initialization strategies. Antisymmetric tensors play a major role in quantum chemistry, where the Pauli exclusion principle implies that wave functions of fermions are antisymmetric under permutations of variables. This antisymmetry needs to be taken into account when solving the multiparticle Schrödinger equation determining such a wave function; see [13] for a recent overview. This paper is concerned with finding an approximation B to a given antisymmetric tensor A such that B has a data-sparse representation and is again antisymmetric. More specifically, we will consider an approximation of multilinear rank r in structurepreserving Tucker decomposition where S ∈ R r×···×r for some r ≤ n is again antisymmetric and U ∈ R n×r has orthonormal columns. This choice is analogous to existing approaches for symmetric tensors, see, e.g., [3,4,9]. In this paper, we demonstrate that some existing algorithms for the symmetric case extend to the antisymmetric case. In particular, we study the extension of the Jacobi algorithm by Ishteva, Absil, and Van Dooren [8]. Despite a number of similarities, there are pronounced differences between symmetric and antisymmetric tensors. For example, every (multilinear) rank r can be attained by a symmetric matrix or tensor. In contrast, it is well known that skew-symmetric matrices have even rank. Although this statement does not extend to d > 2, we will see that there are still restrictions on the ranks that can be attained by anti-symmetric tensors. In particular, the smallest possible nonzero rank is r = d. In this case, the decomposition (1.2) simplifies to anti αu 1 ⊗ u 2 ⊗ · · · ⊗ u d , α ∈ R, (1.3) with the antisymmetrizer A = anti(X ) defined by A(i 1 , . . . , i d ) := 1 d! π∈S d sign(π)X π(i 1 ), π(i 2 ), . . . , π(i d ) , (1.4) where S d denotes the symmetric group on {1, . . . , d}. This corresponds to the notion of Slater determinants that feature prominently in the Hartree-Fock method from quantum mechanics. The expression (1.3) suggests the more general decomposition anti(X ) for a (non-symmetric) tensor X of low tensor rank. This corresponds to a short sum of Slater determinants used, e.g., in the Multi-Configuration Self-Consistent Field method. Such a low-rank model for antisymmetric tensors has been studied in the literature. In particular, Beylkin, Mohlenkamp, and Pérez [2,1] have developed an alternating leastsquares algorithm for approximating a given antisymmetric tensor A by anti(X ). The algorithm employs Löwdin's rule to avoid having to deal with the exponentially many terms in the sum (1.4). 
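To make the antisymmetrizer in (1.4) concrete, here is a brief NumPy sketch (not taken from the paper): a direct loop over all d! mode permutations, intended only for small d and n rather than for efficiency.

```python
import itertools
import math
import numpy as np

def antisymmetrize(X):
    """Antisymmetrizer anti(X) from Eq. (1.4): average of sign(pi) times X with its
    modes permuted by pi, taken over all permutations pi of the d modes."""
    d = X.ndim
    A = np.zeros_like(X, dtype=float)
    for perm in itertools.permutations(range(d)):
        # sign of the permutation = (-1) to the number of inversions
        inversions = sum(1 for i in range(d) for j in range(i + 1, d) if perm[i] > perm[j])
        A += (-1) ** inversions * np.transpose(X, perm)
    return A / math.factorial(d)

# Example: the result changes sign when two indices are swapped.
X = np.random.rand(4, 4, 4)
A = antisymmetrize(X)
assert np.allclose(A[0, 1, 2], -A[1, 0, 2])
```

Applying this map to a rank-1 tensor u_1 ⊗ · · · ⊗ u_d reproduces, up to the scalar α, the Slater-determinant-like structure in (1.3).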
One contribution of this paper is a much simpler approach for (1.3), that is, when X has rank 1: The best choice of X is given by a scalar multiple of the best (non-symmetric) rank-1 approximation of A. For multilinear rank r larger than d, our developments deviate because the low-rank decomposition (1.2) differs from the CP-like decomposition considered in [2,1]. Working with (1.2) comes with a number of advantages, such as the possibility to obtain robust quasi-optimal approximation via the SVD. Its major disadvantage is the need for storing the core tensor S of order d. This can be mitigated by using hierarchical SVD-based decompositions, such as the tensor train decompositions [11]. However, the incorporation of antisymmetry into these decompositions is by no means as seamless as for the Tucker decomposition; see [7] for recent progress. Although not unlikely. it remains to be seen whether the developments in this paper are useful in this context. The rest of this paper is organized as follows. In Section 2, we study the multilinear rank of an antisymmetric tensor and recall the higher-order singular value decomposition. Section 3 is concerned with algorithms that aim at the antisymmetric low multilinear rank approximations, the higher-order iterations method and a variant of the Jacobi method. Section 4 is dedicated to the special case of rank-d approximation. Multilinear rank of antisymmetric tensors Let us first recall some basic concepts related to the multilinear rank of a tensor; see [10] for details. For any 1 ≤ µ ≤ d, the µth matricization of a general tensor X ∈ R n 1 ×n 2 ×···×n d is the n µ × ν =µ n ν matrix X (µ) defined by The multilinear rank of X is the tuple (r 1 , r 2 , . . . , r d ) defined by r µ = rank(X (µ) ). Note that X (µ) is a matrix and hence r µ ≤ min{n µ , ν =µ n ν }. For an antisymmetric tensor, all matricizations are essentially the same. Restrictions on the multilinear rank It is well known that skew-symmetric matrices have even rank. It turns out that this property does not extend to antisymmetric tensors; it is simple to construct tensors of higher order with odd multilinear ranks. However, the following theorem shows that antisymmetry still imposes some (weaker) restrictions on the ranks of antisymmetric tensors that are of small size n relative to d. Theorem 2.2 Let A ∈ R n×n×···×n be an antisymmetric tensor of order d ≥ 3. Then the multilinear rank r of A satisfies (i) r = 0 for n < d; (ii) r ≤ d for n = d or n = d + 1; (iii) r ≤ n for n ≥ d + 2. There exist tensors A for which equality is attained in (i)-(iii). Proof. Since A = 0, at least one α k is different from zero. Let us now consider the column of A (1) corresponding to a fiber A(:, i 2 , . . . , i d ) for some i 2 , . . . , i d ∈ [1, d + 1]. We may assume that i 2 , . . . , i d are mutually distinct because otherwise this fiber is zero. For the moment, we also assume that these indices are ordered, that is, By the pigeon hole principle, there are two integers 1 ≤ k < ℓ ≤ d + 1 such that k, l ∈ {i 2 , . . . , i d }. The situation is now as follows: In particular, this implies Using that A(i 1 , i 2 , . . . , i d ) is only nonzero for mutually distinct indices, we arrive at the linear combination Since this relation is not affected by a permutation of i 2 , . . . , i d , it also holds if these indices are not ordered. In summary, we have shown that and thus the rank of A (1) is at most d. For n = d + 1 equality is attained by the tensor used in the construction for n = d bordered with zeros. 
(iii) Let n ≥ d + 2. By the size of the matricization, r ≤ n. To show that r = n can be attained, let us first define the integer vector h = (1, 2, . . . , n, 1, . . . , d − 1). We choose the tensor X ∈ R n×n×···×n to be zero except for The corresponding sets In summary, we have found n linearly independent columns of A (1) and, therefore, the multilinear rank of A is n. The truncated HOSVD for a given multilinear rank (r 1 , . . . , r d ) with r µ ≤ n µ is obtained by setting with U µ = V µ (:, 1 : r µ ) and S = T (1 : r 1 , 1 : r 2 , . . . , 1 : r µ ). This gives a quasi-best approximation of X , in the sense that the approximation error in the Frobenius norm, X − S × 1 U 1 · · · × d U d , is within a factor √ d of the error of the best rank-(r 1 , . . . , r d ) approximation. In particular, if X happens to have multilinear rank (r 1 , . . . , r d ) then the decomposition (2.2) is exact. We now apply the truncated HOSVD to obtain an approximation of multilinear rank r to an antisymmetric tensor A. By Lemma 2.1, all matrices U µ in (2.2) can be chosen equal to a fixed matrix U . In turn S = A × 1 U T · · · × d U T is again antisymmetric. In summary, the truncated HOSVD described in Algorithm 1 automatically preserves structure and produces a quasi-best antisymmetric approximation. Algorithm 1 Truncated HOSVD of antisymmetric tensor Compute matrix U ∈ R n×r containing the leading r left singular vectors of A (1) . Corollary 2.3 Let A be an antisymmetric tensor of order d. Then the multilinear rank r of A satisfies r = 0 or r = d or d + 2 ≤ r ≤ n. Any of these ranks can be attained. Proof. By the discussion above, an antisymmetric tensor of multilinear rank r can be written as A = S × 1 U · · · × d U , where the r × · · · × r tensor S is again antisymmetric and has multilinear rank r. The statement of the corollary now follows from applying Theorem 2.2 to S. Let us inspect the case r = d more closely. Any antisymmetric d × · · · × d tensor of order d takes the form S = anti(αe 1 ⊗ e 2 ⊗ · · · ⊗ e d ), for some α ∈ R; see also the construction in the proof of Theorem 2.2 (i). By letting U = [u 1 , u 2 , . . . , u d ], the truncated HOSVD implies that any antisymmetric tensor of order d and multilinear rank d takes the form verifying the claim (1.3) from the introduction. Low multilinear rank approximation In this section, we discuss two iterative methods that aim to compute a best antisymmetric multilinear rank-r approximation starting, for example, from the truncated HOSVD of A. Both methods are based on the fact that this minimization problem is equivalent to solving and setting S = A × 1 U T · · · × d U T ; see, for example, [4]. Remark 3.1 For symmetric tensors, there is numerical evidence (see, e.g., [8]) that the best (unstructured) approximation of multilinear rank r can usually be chosen symmetric. For d = 2 and general r, this follows from the spectral decomposition. For general d and r = 1, this property has recently been shown by Friedland [5]. For general d and r, this question remains open. For antisymmetric tensors, we will observe the analogous phenomenon below; it appears that the best (unstructured) approximation of multilinear rank r can usually be chosen antisymmetric. For d = 2 and even r, this property follows from the real Schur decomposition. For r = d and general d, we will see in Section 4 that it is actually the unstructured rank-1 approximation that gives an antisymmetric multilinear rank-d approximation. 
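To illustrate the matricization, the multilinear rank, and Algorithm 1 above in code, here is a small NumPy sketch; the helper names and the column ordering of the matricization (which differs from the index map used in the paper but only permutes columns) are implementation choices, not the paper's notation.

```python
import numpy as np

def matricize(X, mu):
    """Mode-mu matricization: mode mu becomes the row index, the remaining modes
    are flattened into the column index."""
    return np.moveaxis(X, mu, 0).reshape(X.shape[mu], -1)

def mode_product(X, M, mu):
    """Mode-mu product X x_mu M: multiply every mode-mu fiber of X by the matrix M."""
    Xm = np.tensordot(M, np.moveaxis(X, mu, 0), axes=(1, 0))
    return np.moveaxis(Xm, 0, mu)

def multilinear_rank(X, tol=1e-10):
    """Tuple of ranks of all matricizations; the entries coincide for antisymmetric X."""
    return tuple(int(np.linalg.matrix_rank(matricize(X, mu), tol=tol))
                 for mu in range(X.ndim))

def truncated_hosvd_antisymmetric(A, r):
    """Sketch of Algorithm 1: a single factor U from the leading r left singular
    vectors of the mode-1 matricization; the core is then automatically antisymmetric."""
    U = np.linalg.svd(matricize(A, 0), full_matrices=False)[0][:, :r]
    S = A
    for mu in range(A.ndim):
        S = mode_product(S, U.T, mu)   # core S = A x_1 U^T ... x_d U^T
    return S, U
```

Reconstructing S ×_1 U · · · ×_d U from the returned core and factor yields the quasi-best antisymmetric approximation discussed above.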
To simplify the presentation, we will consider the case d = 3 for the rest of this section; all developments extend in a relatively straightforward manner to general d > 3. HOOI The higher-order orthogonal iteration (HOOI) introduced in [12] is a popular approach to the best low multilinear rank approximation of a general tensor. It consists of applying alternating least squares (ALS) to the unstructured variant of the maximization problem (3.1): One step of the method optimizes a single factor U µ while keeping the other two factors fixed. The resulting optimization problem admits a straightforward solution by the SVD; see Algorithm 2. Algorithm 2 HOOI for multilinear rank-(r, r, r) approximation Apply Algorithm 1 to choose initial factors Compute matrix U 1 ∈ R n×r containing the leading r left singular vectors of X (1) . Compute matrix U 2 ∈ R n×r containing the leading r left singular vectors of Y (2) . Compute matrix U 3 ∈ R n×r containing the leading r left singular vectors of Z (3) . Note that the iterates of Algorithm 2 are not antisymmetric. However, similarly as in the symmetric case, we have observed that in most of the cases Algorithm 2 converges towards an antisymmetric approximation; see Section 3.3 below. To antisymmetrize the output of Algorithm 2, one could set all factors U µ to be equal to one of them. Here we choose the factor U µ for which (3.1) gives the biggest value. A simple antisymmetric variant of Algorithm 2 consists of setting all factors to the factor that has been obtained from the SVD in one step. In the symmetric case, this variant has been observed to suffer from convergence problems [8] and we observed similar difficulties in the antisymmetric case. Jacobi algorithm In contrast to HOOI, the Jacobi algorithm proposed for symmetric tensors in [8] preserves structure, that is, all iterates stay symmetric. In this section, we develop a variant of this algorithm for antisymmetric tensors. In order to find the zeros of this function, we divide it by cos 2 φ and solve the resulting quadratic equation in t = sin φ/ cos φ: Among the two solutions to this equation, we choose the one that maximizes (3.6). Algorithm 3 summarizes the described procedure. Algorithm 3 Jacobi algorithm for antisymmetric multilinear rank-r approximation Apply Algorithm 1 to choose initial factor U ∈ R n×r . Choose U ⊥ such that Q = [U, U ⊥ ] is orthogonal. For choosing the pivot pair (i, j) in Algorithm 3, we traverse the list (3.3) cyclically. For each pair, the condition (3.4) is checked. If (i, j) does not fulfill this condition, it is skipped and the algorithm continues checking the next pair. Although observed in practice, it cannot be guaranteed that Algorithm 3 produces the minimum of the function in (3.2). The proof of a weaker convergence result for symmetric tensors [8,Theorem 5.4] directly extends to antisymmetric tensors, resulting in Theorem 3.2. Numerical Experiments The algorithms described in this paper have been implemented and tested in Matlab version 7.11. In our first set of experiments, we study the approximation error obtained by truncated HOSVD, HOOI, and the Jacobi algorithm. The latter two algorithms are iterative; they are considered converged when the norm of the gradient of the objective function is 10 −10 or below. We have chosen ǫ = 1/(10n) in the condition (3.4) of the Jacobi algorithm. We tested the algorithms with random tensors generated by applying antisymmetrizer from (1.4) to tensors with uniformly distributed random entries from the interval [0, 1]. 
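For reference alongside these experiments, one possible sketch of the unstructured HOOI of Algorithm 2 for d = 3 (without the post-hoc antisymmetrization step described above) is given below; the einsum-based contraction and the fixed number of sweeps are implementation choices rather than the paper's prescription.

```python
import numpy as np

def hooi3(A, r, sweeps=50):
    """Unstructured HOOI for a third-order tensor: each factor is refreshed from the
    leading r left singular vectors of A contracted with the other two factors."""
    U0 = np.linalg.svd(A.reshape(A.shape[0], -1), full_matrices=False)[0][:, :r]
    U = [U0.copy() for _ in range(3)]           # HOSVD-style initialization
    for _ in range(sweeps):
        for mu in range(3):
            nu1, nu2 = [k for k in range(3) if k != mu]
            # contract the two "other" modes with their current factors
            Y = np.einsum('ijk,ja,kb->iab', np.moveaxis(A, mu, 0), U[nu1], U[nu2])
            U[mu] = np.linalg.svd(Y.reshape(Y.shape[0], -1),
                                  full_matrices=False)[0][:, :r]
    return U
```

As noted above, the result can then be antisymmetrized by replacing all three factors with the single factor that gives the largest value of the objective.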
Figure 1 shows that HOOI and the Jacobi algorithm always improve upon the approximation obtained from the HOSVD. In many cases, HOOI and the Jacobi algorithm result in the same (antisymmetric) approximation. In rare cases when the error of the Jacobi algorithm is greater than the one of HOOI, it is observed that the tensor produced by HOOI is not antisymmetric. On the other hand, when the error of HOOI is greater, the tensor produced by HOOI is antisymmetric. Although providing little evidence, these observations do at least not contradict a conjecture that the best (unstructured) approximation of multilinear rank (r, r, r) to a generic antisymmetric tensor can always be chosen antisymmetric for r ≥ 3. Figure 2 yields insights into the convergence behavior of HOOI and Jacobi algorithm for a representative run with a random antisymmetric tensor. To emphasize the benefits from initializing with the truncated HOSVD we compare with using no initialization, that is, instead of using Algorithm 1 we set U = Ir 0 and Q = I n in Algorithms 2 and 3, respectively. Apart from the approximation error we also show the norm of the gradient of the objective function. We have also considered antisymmetric tensors for which the matricizations are known to exhibit rapid singular value decays. To construct such a tensor, consider the function f (x, y, z) = exp(− x 2 + 2y 2 + 3z 2 ) on [0, 1] 3 . Then we let X contain its discretization: where ξ i = (i − 1)/(n − 1), and set A = anti(X ). Figure 3 shows the obtained results for n = 20. It reveals that the HOSVD gives an excellent initial approximation. This is also an example where the Jacobi algorithm with no initialization fails to converge to a global optimum. Finally, we report results for the antisymmetric ground state of a Schrödinger eigenvalue problem, which motivated our study of antisymmetric tensors. Following the example from [1, Sec. 5.4], we consider one-dimensional 2π-periodic variables x 1 , . . . , x d ∈ R and the Hamiltonian with c v = 100, c w = 5. The goal is to compute the smallest (negative) eigenvalue of H with an antisymmetric eigenfunction. After discretizing each variable x µ on a uniform grid with n grid points in [0, 2π) and approximating each ∂ 2 /∂x 2 µ with central finite differences, H becomes an n d × n d matrix, which can be reinterpreted as a linear operator H : R n×···×n → R n×···×n . Because H is invariant under permutations of the variables, H commutes with the antisymmetrizer A. Hence, computing the most negative eigenvalue of H with an antisymmetric eigenvector is equivalent to computing the smallest eigenvalue of A • H. We used Matlab's eigs for the latter and then applied the HOOI and Jacobi algorithm to the resulting (antisymmetric) eigenvector. The obtained results for d = 3 and n = 50 shown in Figure 4 are qualitatively similar to the ones obtained for the function-related tensor above. Multilinear rank-d approximation Antisymmetric tensors of order d and multilinear rank d have the very particular structure (1.3). As we will discuss in this section, this simplifies the approximation with such tensors significantly. The following basic lemma plays a key role; it extends the well known fact that u T Au = 0 always holds for a skew-symmetric matrix A. Lemma 4.1 Let A ∈ R n×···×n be an antisymmetric tensor of order d ≥ 2 and u ∈ R n . Then A × µ u × ν u = 0 for any 1 ≤ µ < ν ≤ d. Proof. Without loss of generality, we may assume that µ = d − 1 and ν = d. 
Then any entry of B = A × µ u × ν u satisfies The following theorem establishes an equivalence between the best antisymmetric multilinear rank-d approximation and the best unstructured rank-1 approximation of an antisymmetric tensor. Assuming that Algorithm 4 converges to a best rank-1 approximation with mutually orthogonal vectors u µ , Theorem 4.2 allows us to construct the best antisymmetric multilinear rank-d approximation anti(αu 1 ⊗ · · · ⊗ u d ). The following lemma assures mutual orthogonality. for any ν = 1. Hence, each step of HOPM orthogonalizes one of the vectors u µ = v µ / v µ . In turn, the statement of the lemma holds after at least one sweep of HOPM, even if the initial vectors are not orthogonal. Remark 4.4 To ensure orthogonality numerically, we perform another orthogonalization step after each step of Algorithm 4. In principle, this procedure can also be applied to HOOI, yielding an (unstructured) multilinear rank (r 1 , . . . , r d ) approximation with mutually orthogonal basis matrices U µ . This can then be turned into an antisymmetric multilinear rank-(r 1 + · · · + r d ) approximation by setting U = [U 1 , . . . , U d ]. We have tested this idea numerically and observed that this often yields a good approximation but the approximation error is usually worse compared to the result of the Jacobi algorithm. Initialization It remains to discuss a proper initialization strategy for HOPM. For general d, we use the truncated HOSVD from Section 3. For d = 4, we propose an antisymmetric variant of the technique proposed by Kofidis and Regalia [9] for symmetric tensors. Antisymmetric tensors of order d = 4 appear, for example, for wave functions describing the position of four fermions. For this purpose, we define the (1, 2)-matricization of a 4th-order tensor X ∈ R n 1 ×n 2 ×n 3 ×n 4 to be the n 1 n 2 × n 3 n 4 matrix X (1,2) with the entries where the function j(·) is defined as in (2.1). Lemma 4.5 Let A ∈ R n×n×n×n be antisymmetric. Then the following statements hold: 2. Let λ be a nonzero eigenvalue of A (1,2) with eigenvector v ∈ R n 2 . Then the matricization V (1) ∈ R n×n of v is skew-symmetric. Proof. 1. This statement follows directly from the definition (4.3): The relation which shows that V (1) is skew-symmetric. In turn, there is an eigenspace of dimension three with orthonormal basis belonging to the eigenvalue α/12. Due to the orthogonality of u 1 , u 2 , u 3 , u 4 the range of any linear combination of (4.4) equals span{u 1 , u 2 , u 3 , u 4 }. Analogously, there is an eigenspace of dimension three belonging to the eigenvalue −α/12 with the same property. Lemma 4.5.3 suggests the initialization strategy described in Algorithm 5. Algorithm 5 HOPM initialization strategy for antisymmetric tensor of order 4 Compute eigenvector v ∈ R n 2 associated with eigenvalue of largest magnitude A (1,2) . Form V (1) ∈ R n×n and compute its SVD. Return the four leading left singular vectors u 1 , u 2 , u 3 , u 4 . Numerical Experiments To investigate the difference between the different initializations, we focus our experiments on antisymmetric tensors of order four. Figure 5 shows the approximation errors returned by HOPM initialized with truncated HOSVD or Algorithm 5, using the random antisymmetric tensors described in Section 3.3. HOPM is considered converged when the norm of the gradient of the objective function reaches 10 −10 or below. It can be seen that both initialization strategies appear to work equally well in terms of the final approximation error. 
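Returning to the rank-d case treated in this section, a minimal sketch of HOPM with the extra orthogonalization of Remark 4.4 might look as follows; the truncated-HOSVD initialization, the QR-based re-orthogonalization, and the fixed sweep count are implementation choices, and the scaling of the final antisymmetric approximation anti(α u_1 ⊗ · · · ⊗ u_d) prescribed by Theorem 4.2 is omitted.

```python
import numpy as np

def hopm_rank1_orthogonal(A, sweeps=200):
    """ALS / higher-order power method for the best rank-1 approximation, with an extra
    orthogonalization after every sweep so that u_1, ..., u_d stay mutually orthogonal;
    for an antisymmetric A they then span the subspace of the rank-d approximation."""
    d = A.ndim
    # truncated-HOSVD initialization: the d leading left singular vectors (distinct,
    # mutually orthogonal; starting all vectors equal would annihilate an antisymmetric A)
    U0 = np.linalg.svd(A.reshape(A.shape[0], -1), full_matrices=False)[0][:, :d]
    u = [U0[:, k].copy() for k in range(d)]
    for _ in range(sweeps):
        for mu in range(d):
            v = A
            # contract all modes except mu, largest mode index first so axis numbers stay valid
            for nu in sorted([k for k in range(d) if k != mu], reverse=True):
                v = np.tensordot(v, u[nu], axes=(nu, 0))
            u[mu] = v / np.linalg.norm(v)
        Q, _ = np.linalg.qr(np.column_stack(u))  # re-orthogonalize the current vectors
        u = [Q[:, k] for k in range(d)]
    # value of the multilinear form at the final vectors
    alpha = A
    for nu in range(d - 1, -1, -1):
        alpha = np.tensordot(alpha, u[nu], axes=(nu, 0))
    return float(alpha), u
```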
HOSVD initialization new initialization Figure 5: Approximation error of multilinear rank-4 approximation produced by HOPM for 100 random antisymmetric 10 × 10 × 10 × 10 tensors. Figure 6 shows the convergence behavior for a typical run. It turns out that initializing with Algorithm 5 gives a significant convergence benefit both for the approximation error and the norm of the gradient. Finally, analogous to Section 3.3, Figure 7 shows results for the 10 × 10 × 10 × 10 tensor generated by the function f (x, y, z, w) = exp(− x 2 + 2y 2 + 3z 2 + 4w 2 ). In this case, both initialization methods yield excellent approximations, with the new initialization resulting in significantly fewer iterations. The same observation can be made in Figure 8 for the 9 × 9 × 9 × 9 tensor corresponding to the antisymmetric ground state described in Section 3.3. Conclusions The multilinear rank of an antisymmetric tensor has been analyzed and new algorithms for antisymmetric low multilinear rank approximation have been proposed. The Jacobi algorithm initialized with truncated HOSVD preserves antisymmetry and appears to enjoy excellent global convergence properties. We have shown that a best unstructured rank-1 approximation can always be turned into a best antisymmetric multilinear rank-d approximation. In such a scenario, HOPM initialized either with truncated HOSVD (for d = 4) or Algorithm 5 (for d = 4) is certainly the method of choice. The algorithms discussed in this paper could provide a building block in the design of low-rank tensor algorithms [6] for eigenvalue problems with antisymmetric eigenvectors. In particular, the simplicity of HOPM makes it well suited in the context of truncated iterations and greedy strategies.
2020-05-01T16:08:54.601Z
2016-03-16T00:00:00.000
{ "year": 2016, "sha1": "692b87799f1acc2509152d47d8a1b922ff37a083", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1603.05010", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0f093c55e1fe64518af3c9484476e6080ce3f913", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics" ] }
14895425
pes2o/s2orc
v3-fos-license
Experimentally‐induced anti‐myeloperoxidase vasculitis does not require properdin, MASP‐2 or bone marrow‐derived C5 Abstract Anti‐neutrophil cytoplasmic antibody vasculitis is a systemic autoimmune disease with glomerulonephritis and pulmonary haemorrhage as major clinical manifestations. The name reflects the presence of autoantibodies to myeloperoxidase and proteinase‐3, which bind to both neutrophils and monocytes. Evidence of the pathogenicity of these autoantibodies is provided by the observation that injection of anti‐myeloperoxidase antibodies into mice causes a pauci‐immune focal segmental necrotizing glomerulonephritis which is histologically similar to the changes seen on renal biopsy in patients. Previous studies in this model have implicated the alternative pathway of complement activation and the anaphylatoxin C5a. Despite this progress, the factors that initiate complement activation have not been defined. In addition, the relative importance of bone marrow‐derived and circulating C5 is not known. This is of interest given the recently identified roles for complement within leukocytes. We induced anti‐myeloperoxidase vasculitis in mice and confirmed a role for complement activation by demonstrating protection in C3‐deficient mice. We showed that neither MASP‐2‐ nor properdin‐deficient mice were protected, suggesting that alternative pathway activation does not require properdin or the lectin pathway. We induced disease in bone marrow chimaeric mice and found that circulating and not bone marrow‐derived C5 was required for disease. We have therefore excluded properdin and the lectin pathway as initiators of complement activation and this means that future work should be directed at other potential factors within diseased tissue. In addition, in view of our finding that circulating and not bone marrow‐derived C5 mediates disease, therapies that decrease hepatic C5 secretion may be considered as an alternative to those that target C5 and C5a. © 2016 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of Pathological Society of Great Britain and Ireland. Introduction Anti-neutrophil cytoplasmic antibody (ANCA) vasculitis is a systemic disease causing crescentic glomerulonephritis and pulmonary haemorrhage. Other clinical features include disease affecting skin, the nervous system, and upper airways [1]. The name ANCA vasculitis reflects the fact that it is characterized by autoantibodies against neutrophils [2,3], although they also bind to monocytes. The antigenic targets of the autoantibodies have been identified as myeloperoxidase (MPO) or proteinase 3 (PR3) [4,5] and these enzymes are indeed found within granules in both neutrophils and monocytes. In addition to their clinical utility, the suggestion that these autoantibodies were pathogenic came from in vitro studies in which IgG from patients with anti-MPO or anti-PR3 antibodies activates neutrophils to undergo respiratory burst and degranulation [6]. Many subsequent studies have supported this observation (reviewed in ref 7). However, it was not until 2002 that in vivo evidence of pathogenicity was obtained. Anti-MPO antibodies raised in MPO-deficient mice were shown to cause a focal necrotizing crescentic glomerulonephritis when injected into wild-type mice [8]. In this murine anti-MPO model, both the histological and the clinical features of glomerulonephritis closely mirror the situation in patients. 
The crescentic glomerulonephritis is focal and segmental, affecting segments of some glomeruli and not others, and also shows necrosis with a lack of immune deposits. All of these features recapitulate characteristics of the histology seen in clinical renal biopsy samples. Furthermore, proteinuria is relatively mild and not in the nephrotic range, as is the case for patients. In addition to providing evidence of pathogenicity, the murine anti-MPO model has become established as a preclinical model, which is useful for understanding mechanisms and developing therapies in ANCA vasculitis. Previous work using this model has suggested that the alternative pathway (AP) is important, as mice deficient in factor B, but not C4, were protected [9]. C5-deficient mice are also protected, and treatment with an anti-C5 monoclonal antibody inhibited disease [10], providing evidence of a role for C5. Furthermore, MPO-deficient mice immunized with MPO and transplanted with bone marrow from C5a receptor-deficient mice were protected from disease, compared with mice that received wild-type bone marrow, suggesting that the anaphylatoxin C5a is the key mediator [11]. This work has recently been extended with evidence of therapeutic efficacy for the C5a receptor antagonist CCX168 in mice in which the C5a receptor has been replaced with the human equivalent [12]. In addition, disease was exacerbated in mice defective in the second C5a receptor C5L2 [12]. This work in the anti-MPO model has led to clinical trials in which CCX168 is currently being tested in patients with ANCA vasculitis (trial registrations NCT01363388 and NCT02222155). The trigger leading to AP activation has not been identified and this raises an important question given the interest in therapies targeting the complement pathway in ANCA vasculitis. Properdin is established as a key stabilizer of the AP but recent data have also shown that it may bind to apoptotic cells and initiate AP activation [13,14]. Another candidate for initiating AP activation is the lectin pathway. Recent work has shown that C4 is not required for lectin pathway activation [15], and so we considered that the lectin pathway may be important in initiating complement activation, with subsequent amplification by the AP. Therefore, in order to explore the initial events leading to complement activation, we studied both properdin-deficient and MBL-associated serine protease 2 (MASP-2)-deficient mice in the anti-MPO model. Finally, in view of the recently discovered importance of intracellular complement components [16][17][18][19][20][21], we examined the relative importance of bone marrow-derived and circulating C5 in anti-MPO vasculitis. Induction of anti-MPO crescentic glomerulonephritis Anti-MPO antibody was generated and glomerulonephritis induced as previously described [27], with minor modifications. Day 0 denotes the day that 2 mg of anti-MPO IgG was injected. In all experiments apart from those with C5 bone marrow chimaeras, 30 μg of pegylated GCSF (Neulasta, Amgen, Cambridge) was given subcutaneously on days −8, −4, 0 and 4, and 10 μg of LPS (from E. coli serotype 0111 B4; Enzo Life Sciences, Exeter, UK) was given intraperitoneally on days 0 and 3. In the C5 bone marrow chimaera experiment, anti-MPO IgG was given intraperitoneally and 30 μg of pegylated GCSF was given subcutaneously on days −4 and 0. 2.5 μg of LPS was given intraperitoneally on day 0 only. 
Blood was taken from the saphenous vein on day −1 to measure circulating neutrophils by flow cytometry [27] or baseline serum creatinine by mass spectrometry [28]. For the experiments with MASP-2and properdin-deficient mice, metabolic cages were used to collect urine in the last 24 h of the experiment. In other experiments, spot urine was taken on day −1 and on day 6 for the urine albumin creatinine ratio. In all experiments, mice were killed on day 7. Assessment of disease Urine creatinine was measured using a commercial creatinase assay (Diazyme, Dresden, Germany) based on the manufacturer's instructions, with a standard curve generated for all assays. Histology, serum creatinine and urine albumin analysis were as described previously [27,28]. Crescent formation was defined as at least two layers of non-epithelial cells in Bowman's space. Immunofluorescence staining for MBL and C3 was performed using tissue fixed with phosphate-lysine-periodate prior to freezing and sectioning. For MBL, clone 16A8 (Hycult Biotechnology, Uden, The Netherlands) was used and for C3, clone RMC11H9 (Cedarlane, Burlington, Ontario, Canada). Detection was with DyLight 488 mouse anti-rat IgG (Jackson ImmunoResearch Laboratories, West Grove, PA, USA). Sections stained with secondary antibody only were included as controls and were negative (supplementary material, Figure S1). Fibrin staining was produced using a FITC-labelled rabbit anti-human fibrinogen/fibrin antibody (Dako, Cambridge, UK) which cross-reacted with mouse fibrinogen/fibrin. For C3, glomerular fluorescence intensity was quantified using ImageJ software (NIH, USA). For MBL and fibrin quantification, the software used was Cell (Olympus, Southend-on-Sea, UK). A minimum of 20 glomeruli per sample were included in all cases. Coagulation assays Blood (900 μl) was taken by intracardiac puncture under terminal anaesthesia into a 1 ml syringe containing 100 μl of 3.2% trisodium citrate (Sigma, Poole, UK). Plasma was double spun (800 g), aliquoted, and frozen until analysed. Thrombin generation was monitored with a Fluoroskan Ascent Thrombinoscope (Thermo Electron Corporation, Paisley, UK) and Thrombinoscope software version 5. Samples were run at a dilution of 1 in 3 and at 33 ∘ C [29]. Clauss fibrinogen was measured using the ACL300R analyser, and using the HemosIL Fibrinogen-C reagent by Werfen UK (Warrington, Cheshire, UK). Prothrombin fragment 1.2 was measured by ELISA using an Enzygost F1 + 2 Micro kit from Sysmex UK Ltd (Wymbush, Milton Keynes, UK). Statistics GraphPad Prism version 5 (GraphPad Software Inc, La Jolla, CA, USA), with Student's t-test, was used where two groups were compared. If more than two groups were compared, a one-way ANOVA with Tukey's post-test was used. Some data were logarithmically transformed before analysis if the variances of the groups were significantly different. Properdin-deficient mice are not protected in anti-MPO vasculitis Because previous work had shown that the AP was important in anti-MPO vasculitis, we wondered if properdin was essential and perhaps could initiate complement activation. We therefore studied properdin-deficient mice with the aim of confirming and extending our understanding of AP activation in anti-MPO vasculitis. We found that both histological and functional parameters of disease severity were similar to wild types as shown in Figure 1A-E. Figure 1F shows representative histology and CD68+ macrophage staining. 
Circulating neutrophil counts were the same in both groups after GCSF treatment (supplementary material, Table S1). These data showed that properdin is not essential for AP activation in anti-MPO vasculitis, and that the trigger for AP activation involves other mechanisms. MASP-2-deficient mice develop increased disease in the anti-MPO vasculitis model Since properdin was not required, we considered that lectin pathway activation may initiate complement activation independently of C4 [15]. This might then be followed by amplification via the AP. We first induced crescentic glomerulonephritis in mice by injecting anti-MPO antibody and compared disease severity at day 7 in MASP-2-deficient mice with wild types. MASP-2-deficient mice had significantly more disease, both functionally and histologically, than wild types, as shown in Figure 2. MASP-2-deficient mice had more glomerular crescents, more haematuria, and a higher serum creatinine. We did not find a difference in albuminuria, and the trend towards increased glomerular CD68+ macrophages did not reach significance. These findings were not due to differences in the clearance of anti-MPO antibody as levels were the same at the end of the experiment (supplementary material, Figure S2). They were also not due to differences in circulating neutrophil counts following GCSF treatment (supplementary material, Table S1). Overall, these data demonstrated that MASP-2-deficient mice are not protected from crescentic glomerulonephritis induced by anti-MPO antibodies and are in fact predisposed to more severe disease. Both properdin and the lectin pathway are therefore excluded as the initiator of AP activation in anti-MPO vasculitis. MASP-2/C3 double-deficient-and C3-deficient mice are protected from disease The reason for the increased disease severity in MASP-2-deficient mice was not clear; it was not due to a difference in anti-MPO antibody remaining in the circulation at day 7 (supplementary material, Figure S2). We therefore sought to examine if the increased disease depended on complement activation and compared disease in wild-type mice and mice deficient in both C3 and MASP-2 with mice deficient in C3. As shown in Figure 3, there was very mild disease in both C3-deficient and MASP-2/C3 double-deficient mice. There was no difference between these groups in any of the histological or functional parameters examined, and both had significantly less disease than wild-type mice. This meant that we were unable to determine if the increase in disease, in MASP-2-deficient mice compared with wild types, depended on C3 activation with this experiment. However, circulating C3 levels in untreated MASP-2-deficient mice did not differ from wild types (supplementary material, Figure S3). In addition, glomerular deposition of C3 in MASP-2-deficient mice with anti-MPO vasculitis was similar to that in wild-type mice (supplementary material, Figure S3). This suggested that there was no major dysregulation of the AP of complement in MASP-2-deficient mice. Figure 4 shows representative histology and immunofluorescence staining for CD68+ macrophages in wild-type, MASP-2-deficient, MASP-2/C3-deficient, and C3-deficient mice. Also shown in Figure 4 is staining for MBL, which demonstrates deposition of MBL, and by inference MASP-2, within diseased glomeruli. There was no difference in MBL deposition between the groups (supplementary material, Figure S4). 
Circulating neutrophil counts were the same in all groups after GCSF treatment (supplementary material, Table S1). These data show that both MASP-2/C3-deficient and C3-deficient mice are protected from disease. The phenotype of C3-deficient mice has not previously been reported in the anti-MPO model, and these data confirm a central role for complement in this model in our hands. Evidence for increased prothrombin activation in MASP-2-deficient mice Previous data have shown cross-talk between MASP-2 and the coagulation pathway [30]. Since fibrin generation is an essential feature of glomerular crescent formation, we wondered if the increased disease might be explained by activation of coagulation predisposing to fibrin deposition in MASP-2-deficient mice. We confirmed this by measuring prothrombin fragment 1.2, which is a breakdown product of prothrombin activation and which was increased in the plasma of untreated MASP-2 mice compared with wild type. This provided evidence of increased prothrombin activation ( Figure 5A). We then measured residual thrombin generation in the plasma of MASP-2-deficient mice compared with wild type. Figure 5B shows a typical profile of thrombin generation over time from both a wild-type and a MASP-2-deficient mouse. The data shown in Figure 4C-E show that although there was no difference in the peak thrombin generation, MASP-2-deficient mice had an increase in the lag time and a decrease in the overall endogenous thrombin potential (ETP). No significant difference in thrombin time or fibrinogen concentration was seen, but these are crude tests compared with the sensitivity of prothrombin fragment 1.2 and thrombin generation ( Figure 5F, G). The decrease in ETP suggested a reduction in residual thrombin generation in plasma, due to consumption of prothrombin, and a situation analogous to mild disseminated intravascular coagulation. Despite these findings, we were unable to detect a difference in fibrin deposition in glomeruli in MASP-2-deficient compared with wild-type mice ( Figure 4 and supplementary material, Figure S4). Neutrophils contain intracellular C5 and C5a In view of the emerging data on the importance of intracellular complement components, we explored whether neutrophils contained intracellular C5 and/or C5a, and if so, whether this may play a role in the development of pathology in ANCA vasculitis. We found that small amounts of C5 and C5a were present on the surface of non-activated neutrophils with a small decrease in C5 expression after priming with TNFα, followed by stimulation with fMLP, anti-MPO or anti-PR3 ANCA ( Figure 6). Intracellular C5 and C5a were present in greater amounts and decreased after stimulation with fMLP, anti-PR3 or anti-MPO. We therefore measured C5a in the supernatant of neutrophils in the absence or presence of the same activating stimuli, in serum-free conditions. Since released C5a was likely to bind to the C5a receptor, and hence may not be detected in the supernatant, we performed experiments in the presence of PMX53, an antagonist for the C5a receptor CD88. We found somewhat variable results, with C5a undetectable in some donors and levels in the range of 400-600 pg/ml in the presence of PMX53 in others. Monocytes also express C5 and C5a, and this is shown in the supplementary material, Figure S5. 
Circulating and not bone-marrow derived C5 mediates disease in anti-MPO vasculitis We next examined if this intracellular C5a was important in pathogenesis and constructed four groups of bone marrow chimaeric mice from wild-type or C5-deficient donors and recipients, and induced anti-MPO vasculitis. As shown in Figure 6C, D, histological measures of disease showed that crescentic glomerulonephritis was more severe in both groups of C5-sufficient recipients, compared with both groups of C5-deficient recipients, as indicated by more crescents and CD68+ macrophages within glomeruli. Functional readouts supported the histology data, as shown in Figure 6E-G. Figure 6H shows representative histology and CD68+ macrophage staining. Circulating neutrophil counts were the same in all groups after GCSF treatment (supplementary material, Table S1). These results show that the key mediator of inflammation in anti-MPO vasculitis is C5 derived from sources outside of the bone marrow. Discussion The murine anti-MPO model is an established preclinical model of ANCA vasculitis. This is demonstrated by the fact that the C5a receptor antagonist CCX168 has been tested in this model [12]. A phase II trial (NCT01363388) has been completed with CCX168 in patients and a second trial is underway (NCT02222155). Haematuria and proteinuria are variable in this model, as shown by the observation that wild-type mice had different amounts of haematuria and proteinuria in the experiments shown in Figures 1-3 and 6. There was also less proteinuria in C5 deficient to wild type than in wild type to wild-type chimaeras ( Figure 6F), despite similar histological findings and serum creatinine. In our experience, the degree of haematuria and proteinuria are not robust markers of disease severity in individual animals and this mirrors the situation in patients with ANCA vasculitis. Histological changes and serum creatinine are therefore the primary readouts, and a lack of correlation with haematuria and proteinuria, if present, would not be a concern. Whilst others have used LPS to obtain robust disease in this model, we also administer GCSF. It is important to consider this when considering our results in relation to those obtained without the addition of GCSF. We have made a number of observations in this model that are of clinical relevance, given this current interest in targeting complement in patients with ANCA vasculitis. First, the protective role of C3 deficiency has not previously been reported in this model. Here, we have shown for the first time that C3-deficient mice are protected. It is important to note that we have confirmed that complement is important in this model in our hands, given the lack of protection that we saw in mice deficient in properdin or MASP-2. The initiator of AP activation in anti-MPO vasculitis is not known, and we have excluded both properdin Figure S5. Scale bars = 20 μm. and the lectin pathway as candidates. Both were credible candidates. Activated neutrophils release properdin, which has been suggested as the cause of AP activation by neutrophils [31]. In addition to AP activation through properdin binding to apoptotic cells [13,14], cooperation between properdin and neutrophil-derived myeloperoxidase may lead to AP activation [32]. The lectin pathway was another candidate that had to be excluded because it is a key pathway in other models of kidney inflammation that do not require C4 [33]. 
Our findings are important because they pave the way for further studies to focus on other mechanisms of complement activation that may be important. Previously, activation of human neutrophils has been shown to initiate AP activation, although the mechanism of activation was not defined [31]. It is therefore possible that neutrophils themselves could be central to AP activation, perhaps via released proteases or induced deficiencies in complement regulatory proteins. Several such factors may combine to lead to AP activation on the surface of neutrophils or perhaps endothelial cells, and further work will be needed to define them. Disease in MASP-2-deficient mice was increased compared with that in wild-type mice. In view of the previously reported role for the AP, we wondered if the increased disease required complement activation. MASP-1 has been shown to activate factor D and MASP-3 can activate factor D and factor B [34,35]. Therefore, the absence of MASP-2 could result in an increase in MASP-1 and MASP-3 circulating in association with MBL and this could potentially enhance AP activation. However, although our results in mice deficient in both MASP-2 and C3 ( Figure 2) were not conclusive, the similarity in the levels of circulating and deposited C3 (supplementary material, Figure S3) in MASP-2-deficient mice compared with wild type suggested that the increased disease in MASP-2-deficient mice was not due to increased AP activation. Furthermore, wild-type and MASP-2-deficient mouse serum samples were assessed using ELISAs specific to mouse MASP-1 or MASP-3. The ELISAs revealed comparable serum levels for MASP-1 and for MASP-3 (n = 4 per group, data not shown). A link between coagulation and crescentic glomerulonephritis is well established. Fibrin deposition, initiated physiologically by tissue factor, promotes disease [36], while activation of fibrinolysis through plasminogen activators activating plasminogen is protective [37,38]. We therefore sought an alternative explanation for the increased crescent formation in MASP-2-deficient mice and wondered if cross-talk with the coagulation pathway could be involved. We confirmed that MBL (and hence MASP-2) is deposited in the glomeruli of mice with crescentic glomerulonephritis, due to anti-MPO IgG as this would be a requirement for an effect on glomerular fibrin formation. This mirrors data from a recent comprehensive analysis of complement deposition in 187 renal biopsy samples from patients with ANCA vasculitis [39]. This study showed that MBL can be detected in over 30% of cases. It may in fact be present in small quantities in a higher proportion of cases, with detection limited by the sensitivity of the staining method. Activation of either the intrinsic or the extrinsic pathways that make up the coagulation cascade leads to the formation of a prothrombinase complex composed of factors Va and Xa, cleaving prothrombin to thrombin. Thrombin then activates fibrinogen, leading to the deposition of cross-linked fibrin. Although cleavage of prothrombin by MASP-2 has been shown to occur at the same molecular site as factor Xa, it is far less efficient [30]. Our data show evidence of increased prothrombin activation in MASP-2-deficient mice compared with wild-type mice. Therefore, a plausible explanation for the increase in crescentic glomerulonephritis seen in MASP-2-deficient mice is their tendency to enhanced fibrin generation. 
Despite these findings in coagulation assays, we were unable to detect a difference in fibrin deposition in glomeruli. Whilst this may be due to the relative insensitivity of the immunofluorescence quantification technique used, it is equally possible that the differences in coagulation assays did not lead to a difference in fibrin deposition in glomeruli. The molecular basis for the increased prothrombin activation that we observed has not been established in this study. Further work will be needed to establish precisely how the absence of MASP-2, which cleaves prothrombin relatively weakly, results in potentiation of the cleavage of prothrombin by the prothrombinase complex. Circulating complement components are derived primarily from the liver, but it has recently become clear that complement proteins are found not only in serum and plasma but also within leukocytes. Antigen-presenting cell-derived C3a and C5a have been shown to augment T-cell responses [40], with important effects relevant to transplantation and graft versus host disease [16][17][18]. More recent data have shown that resting human CD4+ T cells contain intracellular stores of C3 and that the protease cathepsin L cleaves C3 into C3a and C3b, with important roles described for T-cell function and homeostasis [19][20][21]. Furthermore, the balance of stimulation of the C5a receptors and CD88 and C5L2 affects Th1 T-cell activation through inflammasome-mediated IL-1β secretion [41] and this may be largely due to intracellular C5a (C Kemper, personal communication). However, despite these developments highlighting the role of complement components within leukocytes, our data show that activation of C5 from other sources is important in anti-MPO vasculitis. Since the synthesis of C5 from murine or human renal cells has not been described, our results demonstrate a role for circulating C5. The present study includes several observations of direct relevance to developing therapies in ANCA vasculitis. We have shown that the trigger for AP activation is not the lectin pathway or the AP stabilizer properdin, and previous work has excluded the classical pathway [9]. Therefore, future work must look for other factors that may initiate cleavage of C3. An ability to target the specific mechanism of complement activation in patients may give more specificity and potentially less toxicity than therapies aimed at C5 or C5a. However, if C5 is to be targeted in patients, an alternative therapeutic approach to C5a receptor blockade or an antibody to C5 such as Eculizumab would be to prevent C5 secretion from the liver. This would not leave intact the important homeostatic interactions between leukocyte-derived C5a and C5a receptors. Indeed, such an approach is being developed using N-acetylgalactosamine (GalNAc)-siRNA conjugates to target the hepatocyte asialoglycoprotein receptor. A phase 1-2 clinical trial in patients with paroxysmal nocturnal haemoglobinuria is underway (NCT02352493). The present study supports such an approach in ANCA vasculitis as we have shown that circulating liver-derived C5 is the key mediator of disease. Figure S1. An example of a section stained with only the secondary antibody Figure S2. Anti-MPO levels measured in serum taken at the end of the experiment where wild-type and MASP-2-deficient mice were compared (shown in Figure 1 Table S1. Neutrophil counts in peripheral blood taken the day before injection of anti-MPO IgG in each of the experiments shown in this article
2018-04-03T04:24:22.825Z
2016-08-22T00:00:00.000
{ "year": 2016, "sha1": "dd510c95d2f2353b61d0afd4d7788e1f4dea94e6", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/path.4754", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "13716f17cda1812a0b40f26dd7b9c15be6cf9551", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
258013432
pes2o/s2orc
v3-fos-license
Delivery of Doxorubicin by Ferric Ion-Modified Mesoporous Polydopamine Nanoparticles and Anticancer Activity against HCT-116 Cells In Vitro In clinical cancer research, photothermal therapy is one of the most effective ways to increase sensitivity to chemotherapy. Here, we present a simple and effective method for developing a nanotherapeutic agent for chemotherapy combined with photothermal therapy. The nanotherapeutic agent mesoporous polydopamine-Fe(III)-doxorubicin-hyaluronic acid (MPDA-Fe(III)-DOX-HA) was composed of mesoporous polydopamine modified by ferric ions and loaded with the anticancer drug doxorubicin (DOX), as well as an outer layer coating of hyaluronic acid. The pore size of the mesoporous polydopamine was larger than that of the common polydopamine nanoparticles, and the particle size of MPDA-Fe(III)-DOX-HA nanoparticles was 179 ± 19 nm. With the presence of ferric ions, the heat generation effect of the MPDA-Fe(III)-DOX-HA nanoparticles in the near-infrared light at 808 nm was enhanced. In addition, the experimental findings revealed that the active targeting of hyaluronic acid to tumor cells mitigated the toxicity of DOX on normal cells. Furthermore, under 808 nm illumination, the MPDA-Fe(III)-DOX-HA nanoparticles demonstrated potent cytotoxicity to HCT-116 cells, indicating a good anti-tumor effect in vitro. Therefore, the system developed in this work merits further investigation as a potential nanotherapeutic platform for photothermal treatment of cancer. Introduction Colon cancer is a disease with a high morbidity and mortality rate worldwide, which may be attributable to poor diet and a reversed work schedule [1,2]. Although academic research on colon cancer treatment has never stopped, the incidence of colon cancer continues to rise [3][4][5]. Currently, the primary anticancer treatments include surgery, chemotherapy, and radiotherapy [6][7][8]. Chemotherapy using doxorubicin (DOX), a broad-spectrum anthracycline antitumor drug, is widely used to develop model drugs for tumor-targeted drug delivery systems [9][10][11][12]. However, a single chemotherapy regimen can lead to several serious side effects, cancer metastasis, and tumor resistance [13,14]. Studies have demonstrated that chemotherapy combined with photothermal therapy (PTT) can reduce the heat resistance of tumor cells and that the heat generated by PTT can alleviate tumor hypoxia to further promote chemotherapy [15][16][17]. Thus, combined photothermal and chemotherapeutic treatment can effectively mitigate the drawbacks of monotherapy. With the ongoing development of nanotechnology, it has been found that nanodrug delivery systems can be used for targeted drug delivery to the tumor site, reduce dosage, and enhance the anticancer activity of chemotherapeutic drugs [18][19][20]. This type of loading system is typically biocompatible, degradable, and modifiable [21,22]. Polydopamine (PDA), a natural melanin polymer formed by the self-aggregation of dopamine (DA) [23], can be decomposed in the weakly acidic tumor microenvironment [24]. PDA has many applications in multifunctional surface modification due to its natural nontoxicity, biodegradability, and high absorptivity in the near-infrared region (NIR) [25]. MPDA-Fe(III)-DOX-HA nanoparticles were gathered near tumor cells via the active target of HA. Due to the sensitivity of tumor cells to temperature, 808 nm near-infrared light irradiation caused polydopamine nanoparticles to generate a substantial amount of heat and inhibit the growth of tumors. 
Subsequently, PDA disintegrated in a weakly acidic environment, releasing the chemotherapeutic drug doxorubicin, which, when combined with the photothermal effect, killed tumor cells. In summary, the MPDA-Fe(III)-DOX-HA delivery system not only increased the efficacy of chemotherapy but also decreased cytotoxicity, indicating that photothermal combined with chemotherapy is a promising strategy for treating tumors and that tumors can be destroyed by the synergistic effect of the two treatments. BET Analysis The N 2 adsorption-desorption isotherms of PDA and MPDA were measured (Figure 2a,b, with the pore size distribution in the top right corner). Both PDA and MPDA had typical Langmuir IV isotherms, as shown in Figure 2a,b, indicating that both PDA and MPDA may have pore structures [30]. The pore size curve of PDA in Figure 2a shows that the existing pore structure may have larger pores caused by mutual adhesion and polymerization of PDA, so the pore content of mesopores is relatively low. In Figure 2b, the specific surface area of MPDA is shown to be 36.824 m 2 g −1 , which is significantly larger than that of non-mesoporous PDA spheres (17.126 m 2 g −1 ), indicating that MPDA is more suitable for drug loading as a drug carrier than PDA. BET Analysis The N2 adsorption-desorption isotherms of PDA and MPDA were measured ( Figure 2a,b, with the pore size distribution in the top right corner). Both PDA and MPDA had typical Langmuir IV isotherms, as shown in Figure 2a,b, indicating that both PDA and MPDA may have pore structures [30]. The pore size curve of PDA in Figure 2a shows that the existing pore structure may have larger pores caused by mutual adhesion and polymerization of PDA, so the pore content of mesopores is relatively low. In Figure 2b, the specific surface area of MPDA is shown to be 36.824 m 2 g −1 , which is significantly larger than that of non-mesoporous PDA spheres (17.126 m 2 g −1 ), indicating that MPDA is more suitable for drug loading as a drug carrier than PDA. As shown in Figure 2, an H4-type hysteresis loop caused by capillary agglomeration occurs in the P/P0 range of 0.2-0.9. The average pore size of MPDA is 3.827 nm (Figure 2b), and that of PDA is 1.347 nm (Figure 2a), with poor pore size distribution. Based on these findings, it can be concluded that the mesoporous structure of MPDA can provide a larger specific surface area for drug loading and improve the drug loading capacity. SEM and Particle Size Analysis SEM images of various nanomaterials obtained in the experiment are shown in Figure 3. Results indicate that the addition of TMB could optimize the preparation of MPDA. Figure 3a shows that the PDA nanoparticles without the TMB template lack a mesoporous structure and have a non-uniform particle size distribution with an average particle size of 296 nm. The MPDA particles prepared with TMB have a distinct mesoporous structure and a uniform particle size. The results of optimizing the elution conditions of the template are shown in Figure 3b,c. MPDA nanoparticles with a relatively uniform distribution and small particle size (133 ± 18 nm) were obtained in the studies using acetone-ethanol and ethanol as the elution templates and acetone-ethanol as the eluent (Figure 3b). In contrast, the MPDA nanoparticles ( Figure 3c) prepared after the removal of the template using ethanol as the eluent had a non-uniform particle size distribution and a mean particle size of 156 ± 21 nm. 
Therefore, the acetone-ethanol elution condition was selected as the subsequent elution condition. In this study, the effect of HA modification at different proportions on the particle size of the MPDA drug-loaded nanoparticles was also investigated. Comparing Figure 3d-f to Figure 3b shows that HA was successfully coated on the MPDA particles. The HA-modified MPDA nanoparticles are distributed evenly because the hydrophilicity of HA improves their dispersion. When the MPDA-Fe(III)-DOX:HA mass ratio was 1:1, the prepared nanoparticles were more evenly distributed, with an average particle size of 179 ± 19 nm. Therefore, the 1:1 MPDA-Fe(III)-DOX:HA mass ratio was selected for the preparation of the MPDA nanoparticles as drug carriers.

Zeta Potential Analysis As shown in Figure 4a, the surface potential of MPDA was −10.86 mV. After doxorubicin was loaded into MPDA, the surface potential of MPDA-Fe(III)-DOX increased to −1.42 mV (Figure 4b), which may be because the negative charge of the MPDA carrier itself was significantly reduced due to the chelation of metal ions on the surface and the loading of DOX.
The surface electronegativity of the MPDA nanoparticles was significantly increased after HA modification due to the strongly electronegative carboxyl groups in sodium hyaluronate. As shown in Figure 4b, the surface potential of MPDA-Fe(III)-DOX-HA modified by HA was more electronegative than that of MPDA-Fe(III)-DOX, reaching −9.17 mV. The zeta potential of MPDA modified with HA was also investigated. As shown in Figure 4d-f, the zeta potential of the HA-modified MPDA nanoparticles reflected deprotonation of the carboxyl groups on the HA surface. The electronegativity of the MPDA nanoparticles with a higher proportion of HA modification was higher, indicating that HA was successfully modified on the surface of MPDA.

FTIR Analysis The infrared absorption of the MPDA, MPDA-Fe(III)-DOX, and MPDA-Fe(III)-DOX-HA nanoparticles was investigated using infrared spectroscopy. As shown in Figure 5, the absorption peaks of PDA are at ~1630 cm−1 (the stretching vibration peak of the aromatic ring and the bending vibration peak of N-H) [31], ~1380 cm−1 (the phenolic C-O-H bending vibration), ~1120 cm−1 (C-O vibration) [32], and 2921 cm−1 (the stretching vibration peak caused by aromatic and aliphatic C-H) [33]. This further indicates that PDA was successfully prepared. The absorption peak at 1745 cm−1 could be attributed to the C=O stretching vibration [34]. The peak intensities at 2921 cm−1 and 1745 cm−1 were significantly reduced after DOX loading and HA modification, which was due to the reduction of the aldehyde group caused by the participation of Fe in chelation after DOX loading. The bands at 546 and 521 cm−1 in MPDA-Fe(III)-DOX-HA are attributed to the stretching vibration peaks of Fe-O [35], indicating that some free Fe ions may be involved in the chelation of HA and that the HA layer was successfully coated on the drug-loaded nanoparticles.
Figure 5. Comparison of the IR spectra of nanoparticles at different steps.

XPS Analysis The X-ray photoelectron spectroscopy results of MPDA, MPDA-Fe(III)-DOX, and MPDA-Fe(III)-DOX-HA are shown in Figure 6a,b. The full spectrum in Figure 6a shows that each material contains C, N, and O elements [36]. Because of the relatively low content of Fe, we further analyzed the Fe 2p spectra of MPDA-Fe(III)-DOX-HA and MPDA-Fe(III). The peak in MPDA-Fe(III) indicates that Fe was chelated successfully on MPDA [37]. The Fe content of MPDA-Fe(III)-DOX-HA decreased, which could be attributed to the relatively low Fe content on the surface of HA-modified nanoparticles.

Photothermal Conversion Capability Analysis As a photothermal agent, PDA has a strong near-infrared absorption capacity and an absorption capacity in the 808 nm near-infrared band [38].
Ferric ions were added to the MPDA preparation process to improve the infrared absorption capacity and photothermal efficiency of the obtained MPDA nanoparticles [39]. In this study, the ferric ion addition ratio was optimized. The effects of different ferric ion addition ratios on the photothermal efficiency of the MPDA nanoparticles were investigated under irradiation conditions of 808 nm and 2 W/cm2. Figure 7a shows that the prepared nanoparticles had the best heating effect when the dopamine (DA):Fe ratio was 3:1. Table 1 shows that when the DA to Fe(III) molar ratio is 6:1, 3:1, or 2:1, the drug loading rate and encapsulation efficiency of the obtained nano-sized drug-loaded particles (MPDA-Fe(III)-DOX-HA) are 80.41 ± 0.84% and 16.08 ± 0.16%; 84.90 ± 0.68% and 16.98 ± 0.13%; and 81.87 ± 1.26% and 16.35 ± 0.25%, respectively. The loading effect of MPDA-Fe(III)-DOX-HA was optimal when the DA to Fe(III) molar ratio was 3:1. In this study, the photothermal stability of MPDA-Fe(III)-DOX-HA was further evaluated using cyclic laser irradiation. As shown in Figure 7e, the highest temperature reached by MPDA-Fe(III)-DOX-HA was relatively stable after three cycles of laser irradiation, indicating that MPDA-Fe(III)-DOX-HA possessed good photothermal stability.

To investigate the drug release behavior of MPDA-Fe(III)-DOX-HA, PBS and ABS buffer were used to simulate the internal body environment and the tumor microenvironment, respectively. As shown in Figure 8, the drug release rate increased by nearly 30%, reaching 53.1%, under the simulated tumor microenvironment when compared to the normal PBS environment. This indicates that MPDA-Fe(III)-DOX-HA had a more potent disintegration and release capacity in acidic environments, which may be due to the instability of the dopamine structure in an acidic solution, which increased the drug release [40]. Additionally, protonation of amine groups in DOX at acidic pH results in higher solubility of DOX and faster drug release [41].

Cytotoxicity Analysis Nanodrug carriers are used for drug delivery, and their toxicity should be determined [42]. In this study, the MTT assay was used to determine the toxicity of DOX, MPDA-Fe(III)-DOX, and MPDA-Fe(III)-DOX-HA on L929 and HCT-116 cells.
As shown in Figure 9a, free DOX, MPDA-Fe(III)-DOX, and MPDA-Fe(III)-DOX-HA had no significant toxicity to mouse fibroblasts. The survival rate of cells receiving MPDA-Fe(III)-DOX was 66.94% at a DOX concentration of 20 µg/mL, which may be attributed to MPDA-Fe(III)-DOX lacking a targeting modification layer in a normal cell culture environment [43]. DOX release was uncontrolled, so its release capacity was high, resulting in increased toxicity to normal cells [44]. The survival rate increased to 79.28% after HA modification of MPDA-Fe(III)-DOX. Survival rates were greater than 80% at the remaining DOX concentrations. The toxicity of varying DOX concentrations (calculated by release rate) was also evaluated using HCT-116 cells. The results indicated that the drug-loaded nanoparticles MPDA-Fe(III)-HA had no significant toxicity on tumor cells (survival rate > 80%), as shown in Figure 9b, but the survival rate in the corresponding MPDA-Fe(III)-HA-NIR group was only 50.29% at a DOX concentration of 20 µg/mL, due to the photothermal properties of the nanoparticles. As demonstrated in the MPDA-Fe(III)-DOX-HA experimental group, at a DOX concentration of 20 µg/mL, the tumor cell viability was significantly different in the NIR group, indicating that MPDA-Fe(III)-DOX-HA had good photothermal properties and could work in conjunction with DOX to kill tumor cells (Figure 9b). At this concentration, the tumor cell viability was reduced to 39.1%. In comparison to the free DOX group and the MPDA-Fe(III)-DOX-HA group, the MPDA-Fe(III)-DOX-HA-NIR group exhibited significant inhibition at the same drug concentration. The enhanced cytotoxicity was caused by the thermal effect generated by near-infrared radiation in combination with the action of DOX.

Cellular Uptake Analysis This study investigated the distribution of DOX after 4 h and 8 h of incubation of HCT-116 cells with drug-loaded nanoparticles, as well as the tumor cell uptake of MPDA-Fe(III)-DOX-HA nanoparticles. CD44 receptors are highly expressed in HCT-116 cells [45]. Tumor cell uptake of MPDA-Fe(III)-DOX (without HA modification) and MPDA-Fe(III)-DOX-HA (HA-modified) nanoparticles was compared. In addition, the differences in nanoparticle uptake behavior with and without near-infrared light irradiation were investigated. As shown in Figure 10, the fluorescence intensity of HCT-116 cells treated with Hoechst 33258 increased with culture time. At the same time, the fluorescence intensity of DOX in the cells increased over time, indicating that the nanoparticles were ingested rather than attached to the cell surface. In addition, when compared to MPDA-Fe(III)-DOX nanoparticles, MPDA-Fe(III)-DOX-HA demonstrated higher DOX fluorescence intensity at varying uptake times, indicating that the HA-modified nanoparticles could enhance nanoparticle uptake by tumor cells. The DOX fluorescence intensity of tumor cells in the near-infrared light irradiation group was higher than in the group without NIR light irradiation because the heat generated after the near-infrared light irradiation could promote drug release by the nanocarriers [46].
Discussion Overall, our studies establish that the MPDA-Fe(III) prepared by adding trivalent ferric ions significantly improved the photothermal effect of MPDA, which is consistent with the conclusion in the relevant literature that metal ions enhance the photothermal effect of polymers [47]. However, we found that the photothermal conversion of MPDA-Fe(III)-DOX-HA was lower than that of MPDA-Fe(III), possibly due to the chelation of some iron ions by HA. Nevertheless, photothermal cycling experiments showed that MPDA-Fe(III)-DOX-HA still had good photothermal stability. In addition, the cell experiments showed that the mesoporous polydopamine has good biocompatibility, and that the killing effects of free doxorubicin within a certain concentration range differ between normal cells and cancer cells, which may be related to the mechanism of action of doxorubicin. At the same time, the cell experiments also demonstrated the targeting effect of hyaluronic acid: the material encapsulated by hyaluronic acid reduced the toxic effect on L929 cells. Furthermore, experiments on HCT-116 cancer cells proved that MPDA-Fe(III)-DOX-HA had a good killing effect on tumor cells, and the differential effect of this nanodrug delivery system on normal cells indicates that it is a potentially good platform for cancer drug delivery.

Synthesis of Mesoporous Polydopamine (MPDA) Nanoparticles Mesoporous polydopamine (MPDA) nanoparticles were prepared using the one-pot method [48], and in the classic experiment, TMB was added to optimize the preparation of the MPDA nanoparticles. First, 250 mg of F127 and 100 µL of TMB were added to 10 mL of 50% ethanol. After 5 min of ultrasonic treatment, 75 mg of DA·HCl was added, followed by 450 µL of ammonia to adjust the pH value. The mixture was magnetically stirred for 24 h before being centrifuged at 12,000 rpm for 12 min at 25 °C. The precipitate was washed three times with acetone:ethanol (1:3; v:v).
To compare the effect of TMB on the MPDA nanoparticles, MPDA nanoparticles without TMB were prepared without changing any other parameters.

Preparation of MPDA-Fe(III)-DOX Nanoparticles The MPDA nanoparticles were first prepared according to the molar ratios (DA:Fe) of 6:1, 3:1, and 2:1. The obtained MPDA nanoparticles and ferric chloride were dissolved at the above molar ratios in PBS and vortexed at 1000 rpm for 5 min to obtain three different iron-crosslinked MPDA (MPDA-Fe(III)) nanoparticles. DOX in various concentrations was dissolved in PBS, and the three ferric-crosslinked MPDA nanoparticles were added in turn. The supernatant was obtained by centrifugation at 11,000 rpm for 10 min. The DOX-loading performance of the three ferric-crosslinked nanoparticles was evaluated, and the best-performing MPDA-Fe(III) nanoparticle loaded with DOX was selected as MPDA-Fe(III)-DOX for the subsequent experiments.

Hyaluronic Acid-Based Modification of Nanoparticles In the experiment, the optimal MPDA-Fe(III)-DOX was used for HA modification, and the mass ratios of MPDA-Fe(III)-DOX to HA were 1:1, 1:2, and 1:3. High-speed centrifugation was used after thorough mixing to obtain MPDA-Fe(III)-DOX-HA modified with varying proportions of HA. Zeta potential values and SEM images were evaluated to obtain the optimal proportion of HA.

Brunauer-Emmett-Teller (BET) Measurements In this study, the IQ3 automatic specific surface and porosity analyzer was used to determine the specific surface area and pore volume of the nanocarrier via adsorption. The N2 adsorption-desorption isotherms were determined in continuous adsorption mode at 77.35 K, and the specific surface area, pore size, and pore volume were determined using the BET and BJH methods.

Scanning Electron Microscopy (SEM) The nanoparticles were dispersed in an ethanol solution, sampled, dropped onto tinfoil paper, and sprayed with gold after natural drying. The morphology of the nanoparticles was examined under a scanning electron microscope.

Fourier Transform Infrared Spectroscopy (FTIR) Experimentally obtained nanoparticles were dried and prepared as samples before infrared spectrum analysis using the Bruker Tensor II infrared spectrometer.

X-ray Photoelectron Spectroscopy (XPS) The X-ray photoelectron spectrum of the nanocarrier was analyzed, and the changes in the energy spectrum of the nanocarrier were compared before and after drug loading.

1. Standard curve The standard curve of DOX was plotted using ultraviolet spectrophotometry. Following the preparation of 10 mg DOX in a 1 mg/mL stock solution with H2O, the stock solution was diluted to obtain 2, 4, 8, 10, and 16 µg/mL DOX solutions. The absorbance was measured at 480 nm, and the standard curve of DOX was plotted. 2. Loading rate and encapsulation efficiency The nanocarrier was added to a 100 µg/mL DOX solution, oscillated and loaded overnight, and centrifuged at 11,000 rpm for 10 min. The supernatant was taken to determine the DOX content. The centrifuged precipitate was lyophilized and weighed. The loading rate and encapsulation efficiency of the MPDA-Fe(III) nanoparticles were calculated. The loading capacity (LC) and encapsulation efficiency (EE) were calculated using the standard formulas LC (%) = (W0 − W1)/W2 × 100 and EE (%) = (W0 − W1)/W0 × 100, where W0, W1, and W2 represent the initial DOX addition, the DOX content of the supernatant after centrifugation, and the weight of the centrifuged precipitate after freeze-drying, respectively.

In Vitro Release The release capacity of MPDA-Fe(III)-DOX-HA at different pH values was determined using dialysis.
The dialysis bag (MW = 3500) was packed with the same amount of MPDA-Fe(III)-DOX-HA in PBS (pH = 7.4) or acetate buffer (pH = 5.2). After sealing, it was dispersed in the corresponding 200 mL of buffer to simulate the normal human internal environment and the tumor microenvironment, and the release process of the drugs in vivo was examined. Under constant-temperature oscillation at 150 rpm and 37 °C, 2 mL of dialysate in the beaker was collected and 2 mL of the corresponding buffer was added at each predefined time point, for continuous release over 48 h. The DOX content of the dialysate was measured, and the ratio to the initial DOX content was calculated to plot the release curve.

Photothermal Conversion Efficiency MPDA, MPDA-Fe(III)-DOX, and MPDA-Fe(III)-DOX-HA solutions were prepared in 100 µL of deionized water at a concentration of 200 µg/mL, and the temperature rise was measured under near-infrared laser irradiation (2 W/cm2, 5 min). Nanomaterial solutions with different concentrations (50, 100, and 200 µg/mL) and power densities (1 and 2 W/cm2) were irradiated for 5 min to evaluate the photothermal effects at different irradiation powers. The 200 µg/mL MPDA-Fe(III)-DOX-HA solution was irradiated with a near-infrared laser (2 W/cm2) for 5 min and allowed to cool naturally to 25 °C. The photothermal conversion rate of MPDA-Fe(III)-DOX-HA was then calculated.

Cytotoxicity The MTT assay was used to evaluate the cytotoxicity of the drugs and nano-systems to L929 and HCT-116 cells. L929 mouse epithelial cells and HCT-116 cells were cultured in Gibco MEM medium and Gibco DMEM medium, respectively, supplemented with 10% fetal bovine serum and 100 µg/mL streptomycin, in a humidified atmosphere with 5% CO2 at a constant temperature of 37 °C to the logarithmic phase. The cells were seeded in a 96-well plate at a density of 5 × 10⁴ cells per well. After 18 h of incubation, 20 µL of samples with different concentration gradients were added to each well. After 12 h of co-incubation, the illumination group was irradiated with near-infrared light for 10 min. After another 12 h, 20 µL of MTT solution (5 mg/mL) was added to each well in the dark, followed by 4 h of incubation. After incubation, the solution was carefully removed from the wells, and 150 µL of DMSO was added to each well. After shaking in the dark for 10 min, the absorbance was measured at 490 nm using a microplate reader, and the ratio of the absorbance of the drug-co-cultured cells to the absorbance of the medium reference was calculated to determine the survival rate of the cells.

Uptake of Nanomaterials by Tumor Cells The uptake of the nanomaterials by tumor cells and their HA-mediated targeting were evaluated. HCT-116 tumor cells were used to assess cellular uptake (the cell surface contains CD44 receptors), and the cellular uptake of the HA-modified nanocarrier was compared to that of the non-HA-modified nanocarrier. HCT-116 cells were seeded into 6-well plates (containing 1 mL of culture medium) at a density of 1 × 10⁶ cells per well and grown for 24 h. Subsequently, 1 mL of PBS, DOX, MPDA-Fe(III)-DOX, or MPDA-Fe(III)-DOX-HA was added (the equivalent DOX concentration in each group was 10 µg/mL), followed by an incubation of 4 or 8 h. After adding the samples, the irradiated group was treated with near-infrared light (808 nm, 2 W/cm2) for 10 min. After incubation, cells were washed three times with PBS and stained with 1 mL of Hoechst 33258 for 25 min.
After discarding the staining agent, the cells were again washed three times with PBS before being observed under an inverted fluorescence microscope.

Statistical Analysis The results were expressed as mean ± SD. Statistical analysis was performed using one-way analysis of variance (ANOVA) and the t-test. p < 0.05 was considered statistically significant.

Conclusions In this study, mesoporous polydopamine nanoparticles (MPDA) were prepared using the template method, and an integrated photothermal-chemotherapy platform (MPDA-Fe(III)-DOX-HA) was subsequently established. MPDA-Fe(III)-DOX-HA had a uniform particle size (133 ± 18 nm), a high drug-loading capacity for DOX (84.90 ± 0.68%), and a high release capacity at pH = 5.2 (release rate: 53.1%). Photothermal experiments revealed that the MPDA nanoparticles with surface-modified ferric ions had greater photothermal conversion ability and good photothermal stability. In addition, under local irradiation with an 808 nm near-infrared laser, MPDA-Fe(III)-DOX-HA exhibited strong cytotoxicity to HCT-116 cells, whereas targeted modification with hyaluronic acid reduced the cytotoxicity of the nanoparticles to normal cells. The results showed that MPDA-Fe(III)-DOX-HA exhibited good biocompatibility and anti-tumor effects and could serve as a reference for further research into photothermal-chemotherapy combination therapy in the future.
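For readers who want to reproduce the release curve described in the In Vitro Release section above, the cumulative-release calculation can be sketched as follows. This is a minimal illustration with hypothetical concentration readings; the volume-replacement correction (2 mL sampled and replaced from a 200 mL reservoir) is assumed from standard dialysis practice and is not spelled out in the article.

```python
# Minimal sketch of a cumulative drug-release calculation for a dialysis experiment.
# Hypothetical values: DOX concentrations (ug/mL) measured in 2 mL samples taken from
# a 200 mL reservoir, with 2 mL of fresh buffer replaced after each sampling.

def cumulative_release(concentrations_ug_per_ml, total_dox_ug,
                       reservoir_ml=200.0, sample_ml=2.0):
    """Return cumulative release (%) at each time point, correcting for drug
    removed in earlier samples."""
    released = []
    removed_so_far = 0.0  # drug (ug) withdrawn in previous samples
    for c in concentrations_ug_per_ml:
        in_reservoir = c * reservoir_ml          # drug currently in the reservoir
        cumulative_ug = in_reservoir + removed_so_far
        released.append(100.0 * cumulative_ug / total_dox_ug)
        removed_so_far += c * sample_ml          # account for this withdrawal
    return released

# Hypothetical time course (not the paper's data)
print(cumulative_release([0.05, 0.12, 0.20, 0.26], total_dox_ug=100.0))
```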
2023-04-08T15:22:07.027Z
2023-04-01T00:00:00.000
{ "year": 2023, "sha1": "1c0765a35c1306bd600e7866dc941beb863a3f9c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/24/7/6854/pdf?version=1680782087", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c0bb2bb18fc3a1c13efbfe68af3208effb69fa92", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
229719222
pes2o/s2orc
v3-fos-license
Health Care Workers’ Reluctance to Take the Covid-19 Vaccine: A Consumer-Marketing Approach to Identifying and Overcoming Hesitancy Health care workers may be reluctant to receive Covid-19 vaccines for a host of reasons. A survey of employees of Yale Medicine and Yale New Haven Health system identified 15 themes of reluctance with underlying positive and negative sentiments, which in turn affect strategies for reducing vaccine hesitancy. To wit, the CDC reported that only 63% of HCWs polled over several months would get a Covid-19 vaccine.4 (These surveys were conducted before public release of data by the U.S. Food and Drug Administration [FDA] and the manufacturer, as well as prior to initiation of vaccination in the U.K.) The risk of vaccine hesitancy was heightened by the State of Connecticut's Covid-19 Mass Vaccination Plan, which appropriately anticipated shortages and uncertainty of vaccine supply. Health systems charged with vaccinating all personnel would be provided with weekly allotments of vaccine that are dependent on the number of doses utilized the prior week. This created a major impetus to ensure complete vaccine administration each week in order to receive enough supply to vaccinate all health care personnel within approximately 8 weeks. We administered an anonymous survey to employees across our health system contemporaneous with FDA approval of the Pfizer-BioNTech vaccine to estimate the prevalence of Covid-19 vaccine hesitancy, as well as to characterize underlying reasons for vaccine hesitancy and identify sentiments amenable to persuasion through messaging campaigns. The survey was sent to the approximately 33,000 employees and medical staff across our health care system, which comprises Yale Medicine and Yale New Haven Health. The survey included clinically facing staff and those who support the critical infrastructure of the health system without direct patient contact, such as food service staff. We chose to administer a fully anonymous survey to increase survey participation. Our personal conversations with frontline staff indicated an unwillingness to express vaccine hesitancy in fear of being labeled an "anti-vaxxer" or outside of social norms. Prior research also indicates that employee response rates are likely to be lower and that those who respond are less truthful if they perceive that their answers would be identified.5 , 6 These concerns might also be higher among minorities and marginalized groups.7 To most respondents of electronic surveys, even the collection of limited attributes like work role and age or gender is feared as potentially identifiable. The survey included eight items, the first of which asked, "Once the U.S. Centers for Disease Control and Prevention and Food and Drug Administration have deemed Covid-19 vaccines safe and effective, would you get the vaccine if it was readily available and no cost to you?" Response options were: Extremely Likely, Somewhat Likely, Neither Likely nor Unlikely, Somewhat Unlikely, and Extremely Unlikely. Those who responded Very Likely or Likely were asked how soon they would get the vaccine (when first available, in 6 months, or later in the year). All others were asked, "What would make you comfortable getting the vaccine?" and provided a free-text box for responses. Results We received a total of 3,523 responses (an estimated 11% response rate) within the first 30 hours of survey availability. Fully 85% of respondents stated they were Extremely Likely or Somewhat Likely to receive the Covid-19 vaccine. 
Of these, 87% sought the vaccine as soon as it was available to them, while 12% expressed mild hesitancy by stating that they would get it in the next 6 months. Another 523 (14.7%) responses from staff expressed reluctance to take the vaccine when readily available (Neither Likely nor Unlikely, Somewhat Unlikely, and Extremely Unlikely). These respondents indicated a wide variety of reasons for reluctance. Response themes and the frequency at which they were reported are shown in Table 1. The top reasons for reluctance were long- and medium-term safety concerns, although some participants indicated that "Nothing" would make them comfortable. A few indicated concerns stemming from the clinical trial's exclusion of specific groups (e.g., pregnant women) or uncertainty about whether minorities were included in the trial. While theme frequencies may provide a summary-level characterization regarding vaccine hesitancy, we also examined the complete free-text responses to better understand the underlying strengths and emotions of respondents' hesitancy in order to lead to more effective interventions. We developed word clouds (Figure 1; also see Appendix) using subsets of data limited to the top 15 themes. Word clouds allow visualization of the raw text of responses, focusing the reader's attention on terms that are most common. The "R" statistical software and word cloud package were used for this purpose. The word clouds reveal unknown factors within the themes. For example, in the Others Getting theme, we saw that while watching others' experience would make most respondents comfortable, a minority had an altruistic motive. In the Nothing theme, respondents expressed concern about a mandate from the employer and spoke in strong terms about being uncomfortable with taking the vaccine. Sentiment analysis has been used in prior research across a variety of medical settings, ranging from patients' social media to electronic health records,8 as well as more broadly for brands9 and even prediction of stock market returns.10 We used a sentiment lexicon-based approach, by which we classified sentiments by the top 15 themes. This captured both positive and negative scores of all sentiments within each theme. A higher negative score indicates that respondents used words and phrases that have highly negative meaning, or more vaccine hesitancy. Thus, an overall score assigned for each phrase is positive (+1), negative (-1), or neutral (0). Figure 2 shows the total positive, negative, and overall sentiment across each theme for vaccine reluctance (also see Appendix). We find that the sentiment scores for those with long-term concerns are a combination of positive (green bar) and negative (red bar) sentiments. In many categories, the majority of users did not express strong sentiments. However, within the theme of Allergies, respondents expressed very strong negative sentiments, indicating distrust of the vaccine based on prior negative reactions to other vaccines. Similarly, we find that those with underlying health conditions and religious concerns had very strong negative sentiment toward being forced to take the vaccine.
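The survey analysis itself was carried out in R; purely as an illustration of the kind of +1/−1/0 phrase-level lexicon scoring described above, a minimal Python sketch might look like the following. The lexicon words and example phrases are invented for illustration and are not taken from the survey.

```python
# Toy lexicon-based sentiment scoring: each phrase gets +1, -1, or 0 depending on
# whether it contains more positive or negative lexicon words. Illustrative only.

POSITIVE = {"comfortable", "safe", "trust", "willing", "protect"}
NEGATIVE = {"worried", "unsafe", "forced", "distrust", "rushed", "nothing"}

def score_phrase(phrase: str) -> int:
    words = [w.strip(".,!?") for w in phrase.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos > neg) - (neg > pos)   # +1, -1, or 0

# Hypothetical free-text responses grouped by theme (not actual survey data)
responses_by_theme = {
    "Others Getting": ["I would feel comfortable after others get it"],
    "Nothing": ["Nothing, I do not want to be forced"],
}
for theme, phrases in responses_by_theme.items():
    scores = [score_phrase(p) for p in phrases]
    print(theme, "overall:", sum(scores),
          "positive:", scores.count(1), "negative:", scores.count(-1))
```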
" FIGURE 2 However, those who expressed reluctance due to the lack of data transparency don't seem to have strongly negative sentiment, indicating they might benefit from receiving more details and discussion around the vaccine development and trial process from a trusted source. Broadly, people expressing themes for which we see more positive sentiments might be persuadable, whereas those with highly negative sentiments might be less so. Recommended Interventions While the prevalence of vaccine hesitancy among our employees was modest in comparison to several recent media reports, 1 in 6 personnel in our health system reported vaccine hesitancy after the first FDA approval of a Covid-19 vaccine for a myriad of reasons. Given the possibility of our employees responding favorably to an employer-administered survey due to social desirability bias, the risk of health care workers failing to meet community-wide standards for vaccination is possible and must be anticipated and mitigated. Understanding the reasons underlying reluctance in this population of health care workers is essential to increasing the likelihood of successful intervention. Without these reasons documented in the free-text response, we might recommend the "wrong" intervention. Consider two responses within the theme Others Getting. Response A indicates "it should be given first to others who have greater health risks," whereas B response indicates "I would be more comfortable to see how everyone else handles it first." These would have entirely different interventions to reduce reluctance. For A, we would communicate that individuals at high risk have adequate supplies of vaccines, whereas for B, we would provide data indicating high efficacy and low risk of side effects in populations similar to them. If these interventions were reversed, they might not be effective. Unlike traditional quality improvement initiatives that may afford the time for iterative cycles of failure and re-intervention, the scarcity of vaccine supply and magnitude of the Covid-19 pandemic demand a higher probability of early success. Based on experience in other industries and principles of consumer marketing, we propose several specific recommended interventions targeted to each specific reason for vaccine hesitancy in Table 2. Limitations and Conclusions Our study has limitations. First, as with any survey, participants who do not respond might be more reluctant to be vaccinated, which risks underestimating the true prevalence of vaccine hesitancy. Second, we anonymized the survey to improve response rate and reliability, but this limits our ability to use results to develop targeted messaging at specific types of health care personnel. Third, we only asked participants who are reluctant to receive the vaccine for text responses, so our sentiment analysis is likely to skew more negative than typical data. Fourth, intentions are not the same as behavior, so we don't know if those who indicated they would take the vaccine will actually follow through. Finally, the study population was in the state of Connecticut, which experienced very high rates of Covid-19 and health system strain in the spring of 2020 and again in the fall and may not be generalizable to other parts of the United States with different local Covid-19 burdens. Furthermore, Connecticut has historically the second-highest statewide flu vaccination rate nationally, which may imply a populace that is relatively more trustful of vaccination programs. 
While this analysis may seem complicated to some on the surface, the design, administration, and analysis of this survey were completed within 1 week. Furthermore, the analytic tools and software necessary to replicate this approach in other health systems or by other employers are easily accessed through code we made publicly available and through openly accessible software, respectively. The rich insights provided by this approach demonstrate the potential for health systems to learn from consumer marketing firms that routinely apply such survey methods for customer service improvement, as well as unstructured text analysis to learn about performance issues in service industries.11 Overall, we found the vast majority of our health care workers who responded to this anonymous survey were willing to get the Covid-19 vaccine in the first wave. However, 1 in 6 health care personnel expressed reluctance to get vaccinated, primarily due to concerns about the lack of information regarding the vaccine's effectiveness and safety. We describe 15 major reasons for unwillingness and propose strategies for messaging to mitigate vaccine hesitancy among these groups. Subgroups of health care personnel with vaccine hesitancy who express positive sentiments should be targeted as the most persuadable under current circumstances.
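As the article notes, the word-cloud step can be replicated with openly available software. As a rough stand-in for the R wordcloud package actually used (not the authors' code), the same visualization can be produced in Python with the third-party wordcloud and matplotlib packages, assuming the free-text responses for one theme have been pooled into a single string.

```python
# Sketch: build a word cloud from the pooled free-text responses of one theme.
# Requires the third-party packages `wordcloud` and `matplotlib`.
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Hypothetical pooled responses for a single theme (not actual survey data)
theme_text = (
    "would like to see long term safety data "
    "waiting for more safety information before getting the vaccine"
)

wc = WordCloud(width=600, height=400, background_color="white").generate(theme_text)
plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()
```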
2020-12-31T05:06:02.037Z
2020-12-29T00:00:00.000
{ "year": 2020, "sha1": "8f6301e71c3d35f31d08448c73580a455edd7c72", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "8f6301e71c3d35f31d08448c73580a455edd7c72", "s2fieldsofstudy": [ "Medicine", "Business" ], "extfieldsofstudy": [] }
199163270
pes2o/s2orc
v3-fos-license
Gamma-Aminobutyric Acid Levels in the Anterior Cingulate Cortex of Perimenopausal Women With Depression: A Magnetic Resonance Spectroscopy Study Objective The anterior cingulate cortex (ACC) is associated with the processing of negative emotions. Gamma-aminobutyric acid (GABA) metabolism plays an important role in the pathogenesis of mental disorders. We aimed to determine the changes in GABA levels in the ACC of perimenopausal women with depression. Methods We recruited 120 perimenopausal women, who were followed up for 18–24 months. After reaching menopause, the participants were divided into a control group (n = 71), an anxiety group (n = 30), and a depression group (n = 19). The participants were examined using proton magnetic resonance spectroscopy (MRS). TARQUIN software was used to calculate the GABA concentrations in the ACC before and after menopause. The relationship of the GABA levels with the patients’ scores on the 14-item Hamilton Anxiety Scale and 17-item Hamilton Depression Scale was determined. Results GABA decreased with time. The postmenopausal GABA levels were significantly lower in the depression group than in the anxiety group and were significantly lower in both these groups than in the normal group. The postmenopausal GABA levels were significantly lower than the premenopausal levels in the normal, anxiety, and depression groups (P = 0.014, <0.001, and <0.001, respectively). The premenopausal GABA levels did not significantly differ between the normal vs. anxiety group (P = 0.907), normal vs. depression group (P = 0.495), and anxiety vs. depression group. The postmenopausal GABA levels were significantly lower in the depression group than in the anxiety group and were significantly lower in both these groups than in the normal group, normal vs. anxiety group (P = 0.022), normal vs. depression group (P < 0.001), and anxiety vs. depression group (P = 0.047). Conclusion Changes in GABA concentrations in the anterior cingulate cortex are related with the pathophysiological mechanism and symptoms of perimenopausal depression. INTRODUCTION The anterior cingulate cortex (ACC) is closely related to the occurrence and development of depression. The ACC occupies the rostral portions of Brodmann areas 24, 25, 32, and 33, and is activated by diverse tasks, ranging from emotion processing and regulation to attention and cognitive control (Ferrone et al., 2007). Many previous studies have confirmed the significant association of the ACC, especially the subgenual ACC, with the processing of negative emotions. The pregenual ACC is considered to be associated with cognitive functions such as social cognition, including theory of mind tasks and conflict monitoring (Ferrone et al., 2007;Formica et al., 2007;Ferolla et al., 2011). Perimenopausal depression is a mental disorder that first occurs in women during the perimenopausal period and is mainly characterized by symptoms of hypothymia, anxiety, nervousness, and loss of interest, accompanied with autonomic and endocrine dysfunction, especially recession of the gonads. Women with severe symptoms may have a tendency to commit suicide. A meta-analysis has shown that women in the perimenopausal period were particularly vulnerable to anxiety or depression, and had more severe symptoms than women in the premenopausal period (de Kruif et al., 2016). However, the pathophysiological mechanisms of perimenopausal depression are still unknown. Gamma-aminobutyric acid (GABA) is the main inhibitory neurotransmitter in the central nervous system. 
It combines with GABA receptors and inhibits excitatory neural activity. Abnormal GABA metabolism plays an important role in the pathogenesis of mental disorders such as depression and schizophrenia (Levy and Degnan, 2013;Rowland et al., 2013). An increasing body of preclinical and clinical evidence has proved that a close relationship exists between GABA and depression. Magnetic resonance spectroscopy (MRS) is a noninvasive technique for quantifying metabolites in the brain. MRS has been successfully applied in studies of depression and has detected changes in many metabolites in different brain regions (Puts and Edden, 2012). MRS plays an important role in exploring the treatments and mechanisms of depression. However, due to the chemical shift and the scalar coupling effect, the GABA spectrum overlaps with signals of other major metabolites. It is therefore difficult to detect GABA by using conventional 1 H-MRS. An improved MRS method-MEGA-PRESS, based on partially refocused J-couplings-has been used to detect GABA in studies of healthy brains and psychiatric diseases. In this study, we used the MEGA-PRESS technique to detect GABA in the anterior cingulate cortex (ACC) of perimenopausal women. The Totally automatic robust quantitation in nuclear MR (TARQUIN) software was used as the post-processing method to calculate GABA concentrations. We aimed to characterize the pathophysiological mechanisms of perimenopausal depression by determining whether changes in GABA concentrations in the ACC were associated with perimenopausal anxiety/depression. Participants We recruited 131 perimenopausal women. After the exclusion of 11 participants, 120 participants remained. The inclusion criteria for the experimental group were as follows: (1) women in the perimenopausal stage, as defined by the Stages of Reproductive Aging Workshop (Soules et al., 2001), i.e., a persistent ≥7-day difference in menstrual cycle length in consecutive cycles (persistence was defined as recurrence within 10 cycles of the first cycle with variable length), and (2) education up to junior high school level or above. The exclusion criteria were as follows: (1) GABA values could not be detected (6 of the 131 participants were excluded due to this reason), (2) diagnosis of a somatic disease (hypertension or diabetes mellitus); (3) presence of a hypothalamic-pituitary-adrenal axis or thyroid disease; (4) history of depression or other related mental illnesses, or presence of dementia or other organic mental disorders; (5) use of oral contraceptives or hormone therapy within 3 months of entering the study; (6) a history of non-depressive disorders in the participant or a family member (including immediate family by blood and collateral blood relatives within three generations); (7) smoking and/or dependence on alcohol; and (8) poor GABA quality (5 of the 131 participants were excluded because of this reason) (Figure 1). Estrogen Measurements A fasting blood specimen (3 mL) was collected into a 5-mL sterile plain tube without anticoagulant at 9:00-11:00 a.m. on the second or third day of the menstrual cycle. However, if a timely sample could not be obtained (as was the case for the late stage perimenopausal women), a fasting sample was taken when the endometrium was <5 mm thick as determined using transvaginal Doppler ultrasonography. In this study, estrogen levels fluctuated during the perimenopausal period, but the overall trend was downward. 
This study was approved by the ethics committee of Shanghai Sixth People's Hospital, Shanghai Jiao Tong University, and all participants signed informed consent forms before being entered into the study. MRI and MRS Analyses In all subjects, MR data were acquired using a 3.0-T MR scanner (MAGNETOM Verio, Siemens Healthcare, Erlangen, Germany) equipped with a 32-channel phased-array head coil as the transmitting and receiving coil. First, a T1-weighted turbo field echo sequence was used to obtain high-resolution three-dimensional (3D) axial images of the brain structure, with the following scanning parameters: field of view (FOV), 230 mm; repetition time (TR)/echo time (TE), 1500/2.96 ms; flip angle, 9°; voxel size, 0.9 mm × 0.9 mm × 1 mm; slice thickness, 1 mm; and distance factor, 50%. The MEGA-PRESS sequence was used to detect GABA in the regions of interest (ROIs), with the following scanning parameters: TE, 68 ms; TR, 1500 ms; acquisition bandwidth, 1200 Hz; pulse placement, 1.9 ppm; and number of excitations, 64 on and 64 off. Unsuppressed water was used for water scaling and correction of frequency and phase. The spectra were fitted using TARQUIN software (Wilson et al., 2011; Mullins et al., 2014; Harris et al., 2017). GABA peaks were quantified using the water-scaled method, as described previously (Wilson et al., 2011; Mullins et al., 2014; Harris et al., 2017). A radiologist with 10 years of experience placed two ROIs measuring 2 cm × 2 cm × 2 cm each bilaterally in the subgenual ACC in the sagittal plane and adjusted them accordingly in the coronal and axial planes. The average GABA value of the right and left ROIs was calculated. The edges of all ROIs were positioned to avoid the lateral ventricles and skull. All images were post-processed by the same radiologist with 10 years of experience, and the TARQUIN software was used to calculate GABA concentrations in the ROIs (Figure 2). The gray matter and white matter tissue proportions within the volume of interest were evaluated for each patient using manual segmentation in ITK-SNAP, where the bounding box was redrawn from the saved planning volume of interest. Quality Control In order to achieve a high success rate in acquiring GABA spectra, we defined a prerequisite for GABA acquisition: magnetic field inhomogeneity <15 Hz for the defined ROI. After the GABA acquisition, we visually inspected the spectra, and the quality control parameters for spectral fitting were reviewed to verify that the spectra were not qualitatively abnormal. All full widths at half maximum were ≤0.12 ppm. Spectra with fitting errors higher than 20% of the GABA estimate were excluded from further analysis, and this led to the exclusion of 5 of the 131 participants. Statistical Analysis For all statistical tests, the level of significance was set at P < 0.05. The following analyses were carried out (Figure 3). (1) A two-way repeated-measures ANOVA with time (premenopausal, postmenopausal) and group (normal, anxiety, depression) as factors was used to compare changes in GABA across time and groups. Both the main effect of group and the effect of time were examined. The group effect compared the differences between the normal, depression, and anxiety groups at the premenopausal stage (i.e., normal vs. anxiety, normal vs. depression, and anxiety vs. depression) and at the postmenopausal stage.
The time effect was used to compare pre- and postmenopausal GABA concentrations within the normal group (n = 71), anxiety group (n = 30), and depression group (n = 19). (2) The Pearson correlation coefficient was used to analyze the correlation of GABA with the 14-item Hamilton Anxiety Scale (HAMA-14) scores in the anxiety group and with the 17-item Hamilton Depression Scale (HAMD-17) scores in the depression group. A two-tailed test of significance was used, with a P value <0.05 considered statistically significant. (3) Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were calculated to evaluate the diagnostic performance of the GABA concentration in the control, anxiety, and depression groups. (4) The gray matter/white matter ratio was calculated retrospectively; because this ratio may change between the two time points (baseline and follow-up), one-way ANOVA was used to compare the differences.

General Information The 120 women remaining in this study were followed up for 18-24 months. All women underwent MRS with a 3.0-T MR scanner before and after menopause. The participants were divided into three groups: normal group (n = 71), anxiety group (n = 30), and depression group (n = 19). These diagnoses were based on the Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-V) and were made by two psychiatrists (with 8 and 10 years of experience). These two psychiatrists also assessed the HAMA-14 and HAMD-17 scores before and after menopause. HAMD-17 scores ≥ 17 suggest depression, while HAMA-14 scores > 14 suggest anxiety (Figure 1). The general information of the three groups is displayed in Table 1. Blood pressure, blood glucose and triglyceride levels, and body mass index did not significantly differ between the three study groups (P > 0.05; Table 1).

FIGURE 3 | Two-way repeated-measures ANOVA comparing the normal, anxiety, and depression groups at the two time points. The interaction between group and time effects was significant (F = 6.642, p < 0.01), the main effect of group was significant (F = 4.473, p < 0.05), and the time effect was significant (F = 49.251, p < 0.001). (1) The group effect showed no statistical difference between any two of the three groups at the start time point, but there were statistical differences between groups at the follow-up time point (p < 0.05). (2) The time effect showed differences between the two time points in the normal, anxiety, and depression groups. *P < 0.05; **P < 0.01; ***P < 0.001.

MRS Data By means of the MEGA-PRESS sequence, we successfully acquired edited spectra of GABA from the ACC region in 120 subjects. GABA The two-way repeated-measures ANOVA results showed that the interaction between group and time effects was significant (F = 6.642, p < 0.01), the main effect of group was significant (F = 4.473, p < 0.05), and the time effect was significant (F = 49.251, p < 0.001) (Tables 2-4).
(1) The group effect showed no statistical difference between any two of the three groups at the premenopausal time point, but there were statistical differences between the groups at the follow-up time point (p < 0.05). The postmenopausal GABA levels significantly differed between the normal vs. anxiety group (P = 0.022), normal vs. depression group (P < 0.001), and anxiety vs. depression group (P = 0.047). The postmenopausal GABA levels were significantly lower in the depression group than in the anxiety group and were significantly lower in both these groups than in the normal group. The premenopausal GABA levels did not significantly differ between the normal vs. anxiety group (P = 0.907), normal vs. depression group (P = 0.495), and anxiety vs. depression group (P = 0.606). (2) The time effect showed that GABA levels differed between the two time points in the normal, anxiety, and depression groups; that is, GABA decreased with time. The postmenopausal GABA levels were significantly lower than the premenopausal levels in the normal, anxiety, and depression groups (P = 0.014, <0.001, and <0.001, respectively; Table 5). Receiver operating characteristic curve and AUC results: for discriminating normal participants from those with mental disorders (anxiety or depression), postmenopausal GABA yielded an AUC of 0.703 (P < 0.001), and for discriminating anxiety from depression, an AUC of 0.677 (P = 0.038). Using the ratio of the GABA decline to the premenopausal GABA level, the AUC for discriminating normal participants from those with mental disorders was 0.683 (P < 0.001), and the AUC for discriminating anxiety from depression was 0.774 (P = 0.001; Figure 8). Gray Matter/White Matter Ratio There was no significant difference between groups or time points; the gray matter/white matter ratio did not drive the GABA comparison results (Table 6). GABA SNR and %SD were also compared between groups and the two time points, with no significant differences.
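For readers who wish to reproduce this style of analysis, a minimal sketch of the repeated-measures ANOVA, correlation, and ROC steps described in the Statistical Analysis section is given below. It uses Python (statsmodels, SciPy, and scikit-learn) rather than the software actually used in the study, and all values shown are hypothetical.

```python
# Sketch of the statistical steps described above, on hypothetical data.
# Assumes a long-format table with one GABA value per subject per time point.
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4],
    "time":    ["pre", "post"] * 4,
    "group":   ["normal", "normal", "normal", "normal",
                "depression", "depression", "depression", "depression"],
    "gaba":    [1.8, 1.6, 1.9, 1.7, 1.7, 1.2, 1.8, 1.3],
})

# (1) Repeated-measures ANOVA on the within-subject factor 'time'
# (the group-by-time interaction would need a mixed model or per-group analyses).
print(AnovaRM(df, depvar="gaba", subject="subject", within=["time"]).fit())

# (2) Pearson correlation of postmenopausal GABA with a symptom scale (toy scores)
post = df[df["time"] == "post"]
hamd = [5, 4, 20, 18]  # hypothetical HAMD-17 scores, one per subject
print(pearsonr(post["gaba"], hamd))

# (3) ROC/AUC for separating depression from normal using postmenopausal GABA
labels = (post["group"] == "depression").astype(int)
print(roc_auc_score(labels, -post["gaba"]))  # lower GABA -> depression, hence the sign flip
```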
DISCUSSION

In this study, we have provided preliminary evidence that GABA levels in the ACC region of perimenopausal women with depression were significantly lower after menopause and were significantly lower than the levels in the control group (P < 0.05). This finding suggests that a reduction in GABA levels in the ACC is associated with the pathophysiological mechanism of perimenopausal depression, and that there might be a lack of GABA in the perimenopausal period. The study also found that after menopause, the concentration of GABA was significantly lower in the depression group than in the anxiety group, and significantly lower in the anxiety group than in the normal group. This further suggested an association between a reduction in GABA levels and depression. We also observed dynamic changes in GABA levels during the transition from the premenopausal to the postmenopausal period.

Many investigators have focused on the relationship between GABA levels and depression. GABA concentrations in the plasma and cerebrospinal fluid of depressive patients are lower than those in healthy controls. Smiley et al. (2016) found that GABA-neuron density in the auditory cerebral cortex is reduced in subjects with major depressive disorder. Using MRS studies, Hasler et al. (2007) found lower GABA levels in the prefrontal cortex in subjects with major depression than in healthy controls. Many studies have found that patients with depression have lower GABA levels in the occipital lobe and ACC than do healthy controls (Sanacora et al., 2004; Bhagwagar et al., 2008). Sanacora et al. (2004) have suggested that GABA concentrations vary among the different subtypes of depression and that a change in the ratio of excitatory-inhibitory neurotransmitter levels might be associated with abnormal brain function. In investigations of female physiological cycles related to depression, Liu et al. (2015) found that GABA levels in the ACC, prefrontal lobe, and left basal ganglia region were significantly reduced in women with premenstrual dysphoric disorder. Wang et al. (2016) found that GABA levels in the ACC and medial prefrontal lobe were decreased in postmenopausal women. In our study, we also detected reduced GABA levels in the ACC of perimenopausal women with depression, which was consistent with the results of the above studies. However, GABA was too low to be detected in some participants.

An imbalance of the limbic-cortical-striatal-pallidal-thalamic loop is the generally acknowledged neurological model of depression. The activity of the anterior cingulate gyrus and dorsolateral frontal lobe has been shown to be decreased in depressive patients. Wang et al. (2016) detected significantly low GABA levels in the ACC/medial prefrontal cortex of postmenopausal women with depression. Dubin et al. (2016) found that GABA levels in the medial prefrontal cortex significantly increased after repetitive transcranial magnetic stimulation. Therefore, in this study, we selected the ACC region as the ROI, as this region is closely related to emotional function. We did not choose the occipital lobe, as in the study by Bhagwagar et al. (2008), because the ROI in our study was relatively large and ROIs placed in the occipital lobe are easily disturbed by other structural signals at the base of the skull. Hasler et al. (2007) detected reduced GABA levels in the prefrontal regions, including parts of the ACC, in patients with major depressive disorder, but the GABA values in their study were lower than those in this study, possibly because of differences in ROI sizes, head-coil channels, and calculation methods.
We retrospectively calculated the gray matter/white matter ratio, as this ratio can change between the two time points (baseline and follow-up); there was no significant difference between groups or time points, so the gray matter/white matter ratio did not drive the GABA results.

There are some limitations to this study. First, this is a cohort study, the sample size needs to be increased in future work, and the follow-up period was relatively short. More experiments are needed to explain the relationship between GABA levels and the pathogenesis of perimenopausal depression. In some participants the GABA spectra were of poor quality because of movement, and in others spectra could not be acquired because the participant asked to end the scan prematurely. Furthermore, we only selected the ACC as the ROI; the prefrontal cortex and other brain regions related to emotional circuits were not included in this study, and differences between the left and right ROIs were not assessed. The manually prescribed ROIs were subjective and could have led to deviations in the results of different subjects. Moreover, it has been suggested that GABA levels in the brain decrease with age (Gao et al., 2013). During the perimenopausal period, hormone levels fluctuate significantly, and studies have suggested that decreased estrogen levels affect perimenopausal depression (Schmidt et al., 1994). Finally, the gray matter/white matter ratio was calculated retrospectively, and the small amount of cerebrospinal fluid in the voxel was not accounted for, which may lead to some small deviation.

CONCLUSION

In summary, this study examined the changes in GABA levels in the ACC region in perimenopausal women with depression and anxiety as well as in healthy women (controls). The results suggested that GABA levels in the ACC decreased during the perimenopausal period and that this decrease was closely associated with depression and anxiety. We calculated GABA concentrations by using the MEGA-PRESS sequence, and the results were highly reliable and stable. Advances in MRS technology will be important in the exploration of pathogenesis and the development of targeted drugs for perimenopausal depression.

ETHICS STATEMENT

This study was approved by the local ethics committee (the ethics committee of Shanghai Sixth Affiliated People's Hospital, Shanghai Jiao Tong University), and all participants signed informed consent forms before entering the study.
2019-08-02T22:40:05.374Z
2019-08-20T00:00:00.000
{ "year": 2019, "sha1": "95748e670c20ad4812aac3b83c67c63e0e56bced", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2019.00785/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "95748e670c20ad4812aac3b83c67c63e0e56bced", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
56240678
pes2o/s2orc
v3-fos-license
Does a one hour educational class improve compliance of chlorhexidine gluconate baths prior to operation?

Background: Surgical site infections (SSIs) continue to be a major contributor to morbidity and mortality post-operatively. One of the treatments used to prevent such infections is chlorhexidine gluconate (CHG) baths prior to surgery. An obstacle to using CHG as a pre-operative preventive measure against infection has been the low patient compliance rates. Our study aimed to analyze whether an educational class explaining the proper usage of CHG prior to the surgery date would improve patient compliance. Methods: We evaluated two different groups. One group consisted of patients who were scheduled for total joint arthroplasty (TJA) and attended an educational class in addition to receiving the standard preoperative protocol explaining the proper application of CHG. A second group consisted of subjects undergoing any other type of surgery who were not offered the additional educational class. Results: Subjects undergoing TJA had a higher compliance rate than all other surgeries (95.8% and 77.8% respectively; p < .001). Interestingly, over time, the effectiveness of the educational class in improving compliance also improved (from 90.9% in the first month to 100% in the final month; p < .001). Discussion: The addition of an educational class to the standard preoperative educational protocol significantly improved patient compliance with the preoperative application of CHG in TJA patients, and increasingly so over time. This suggests the importance of proper patient education in the prevention of costly comorbidities such as infection. Conclusions: The use of instructional classes may be useful for improving compliance with patient protocols prior to undergoing surgery. Further research is needed to fully assess the benefits of educational classes and their correlation to patient compliance.

INTRODUCTION

Surgical site infections (SSIs) are a major contributor to morbidity and mortality in postsurgical care. [1] Although there has been an increase in awareness of the risk of perioperative infections, SSIs remain one of the most common perioperative complications. [2] SSIs are known to complicate up to 10%-20% of surgical operations in general [3] and up to 25% of orthopedic surgeries. [2] Despite being a common and highly successful surgical procedure, total joint arthroplasty (TJA) carries a known risk of perioperative SSI, [4] which occurs in 1%-2% of these procedures. [5][6][7] This problem is compounded by the fact that TJA procedures are predicted to increase by 673% (3.48 million) and 174% (572,000) for total knee arthroplasty (TKA) and total hip arthroplasty (THA), respectively, over the next decade. [8] Given the prevalence of SSIs in TJA, these complications will continue to be a major financial burden to our healthcare system. According to the Centers for Disease Control and Prevention (CDC) and the Consumer Price Index (CPI), it is believed that SSIs currently account for $3.5 billion to $10 billion a year in healthcare expenditures. [9] One common method of preventing SSIs is the use of chlorhexidine gluconate (CHG) baths. CHG is a topical antiseptic used to limit the risk of SSIs and healthcare-associated infections (HAIs) by disinfecting the skin of patients before surgery. [10,11] CHG is considered an affordable option with minimal side effects. Except for rare cases of anaphylaxis, side effects are typically limited to localized skin irritations and reactions.
[12] Unfortunately, as CHG use is practiced in an outpatient setting, low compliance is a common problem. [13] This stems from a number of different reasons: improper monitoring of CHG protocols, failure of patients to remember to apply the prescribed CHG treatment, and patients' lack of understanding regarding the proper use and medical benefits of CHG. [12] Patient education and opportunities for patients to play a larger role in their own care have historically been shown to improve compliance with preoperative protocols. [12] However, there is uncertainty with respect to the optimal methods of implementing such measures. As such, the aim of our study is to examine the effect of an educational class in improving patient CHG compliance. Specifically, we compared patients who attended an educational class regarding the proper use of CHG prior to their TJA with patients who had surgery but did not attend this educational class. We hypothesized that the addition of this educational class would significantly increase patient compliance with CHG use.

Study design

This is an observational retrospective cohort study, which was conducted as part of a quality improvement initiative at an urban, academic, tertiary care center. Given that this was a quality control study, it was deemed exempt by the Institutional Review Board (IRB). Using the data provided by the hospital, we studied the difference in patient compliance with CHG use between two groups: patients scheduled for THA or TKA, who attended an educational class prior to surgery, versus patients who did not attend an educational class prior to any surgery. All patients included in this study were given a package containing one 4-ounce bottle of 4% CHG all-purpose soap (Ecolab, St. Paul, MN) and verbal instructions for its use. The patient received this instruction set and was verbally instructed by a nurse on the proper use of the CHG soap during a scheduled meeting that occurred 2 to 8 weeks before the surgery date. In addition, patients having THA/TKA were asked to attend a supplementary one-hour educational class, which included the proper use of CHG soap. This class occurred between the initial scheduling visit and the patient's surgery date. As an additional component of the class, the joint replacement patients received instructions and were reminded of the importance of proper application during their preoperative evaluation by the joint replacement preoperative clinic staff. Patients from the other cohort, which included any type of non-arthroplasty procedure, did not attend this class or receive any additional instruction.

Inclusion/exclusion criteria

All patients having scheduled elective surgery from July 2013 to February 2014 were surveyed about their use of CHG soap upon their arrival at the hospital on the day of their surgery. Patients were excluded for one of four reasons: (1) trauma and/or emergency cases; (2) patients transferred to the operating room directly from a hospital unit (non-elective surgery); (3) patients who experienced an allergic reaction or adverse skin reaction; (4) patients who did not take at least three consecutive showers with CHG soap just prior to their surgery date, as required by the preoperative protocol, were considered noncompliant and treated the same as patients who reported complete noncompliance.

Statistical analysis

Descriptive statistics were used to quantify compliance rates and interventions implemented. A chi-squared analysis was used to determine whether there was a significant difference between the compliance rates of the two groups. All data were collected and statistical analyses were performed using Excel software (Microsoft Corporation; Redmond, WA, USA).
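As a minimal sketch of the chi-squared comparison described above, the counts below are reconstructed from the case totals reported in this paper (168 arthroplasty cases and 4,525 other cases) and the compliance percentages quoted in the abstract; the exact compliant counts are therefore approximations, not figures taken from the study records.

from scipy.stats import chi2_contingency

# Approximate 2 x 2 contingency table (compliant vs. noncompliant cases).
tja_compliant, tja_total = 161, 168          # ~95.8% compliance
other_compliant, other_total = 3521, 4525    # ~77.8% compliance

table = [
    [tja_compliant, tja_total - tja_compliant],
    [other_compliant, other_total - other_compliant],
]
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p)   # with counts of this size, p is expected to be well below .001

The same function could be applied to month-by-month counts to test whether compliance changed over the course of the study.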
RESULTS

From July 2013 to February 2014, we surveyed 4,181 patients who underwent elective surgical procedures. Two groups were involved in this study: (1) 138 TKA or THA candidates, and (2) 4,043 patients who were scheduled for a surgical procedure other than THA or TKA. 3,792 patients had surgery once during this time period; 342 patients had surgery twice; 49 patients had surgery three or more times; and the maximum number of surgeries for a single patient in this time period was 5 (see Table 1). Two patients had a hip or knee arthroplasty as well as an unrelated surgery during this time period. 138 patients had total or revision hip or knee arthroplasty for a total of 168 procedures. 4,043 patients had any other type of surgery for a total of 4,525 cases (see Table 1). Overall compliance was higher in the TKA and THA group than in the group undergoing other procedures (95.8% vs. 77.8%, p < .001; see Table 2). Eighteen (0.4%) patients reported either a CHG allergy or an adverse skin condition, and thus discontinued use prior to completion of the protocol. Over the 8-month course of the study, compliance rates were consistently high in the TKA and THA group, while rates steadily increased in the "Other" group over time (see Figure 1). An upward trend was observed in the "Other" group, which showed an increase from 64.0% total compliance in the first month to 87.1% by the last month. Compliance rates in the TKA and THA group (Cohort 1) improved from 90.9% during the first month to 100% compliance by the final month.

DISCUSSION

CHG baths are a commonly prescribed method to safely decrease the rate of infection in surgical procedures. [13][14][15] There have been varying reports in the literature regarding the effectiveness of preoperative CHG use in reducing infection amongst the TJA population. [16][17][18] A study by Leaper et al. supported the use of 2% CHG as a preoperative deterrent against SSIs: it showed a reduction of SSIs in all classes of surgery where there were no wound guards in place and where diathermy skin incision techniques were not used. [3] A study conducted by Eiselt et al. found that the rate of SSI was reduced by half in orthopedic patients undergoing TJA when 2% CHG no-rinse cloths were used, compared with the use of Betadine. [19] Other studies have attempted to explain the low compliance rates observed with CHG use. Edminston et al. cited apathy, lack of interest, or the lack of patient understanding of the importance of applying the soap. [13] In another study, focusing on Emergency Department communication between the physician and the patient, Karin Rhodes proposes that the limited time spent during provider-patient encounters did not allow for sufficient patient education to be delivered. [20] Future studies should assess the details of the patient-provider relationship and how this affects patient compliance with various preoperative protocol requirements. A study by Edminston et al. proposed that failure to provide patients with an easy-to-follow system for applying CHG treatment is responsible for the following issues faced by patients: failure to understand administrative instructions, physical limitations (e.g., pain, restricted range of motion), use of unfamiliar medical jargon, social isolation, language barriers, low educational levels/illiteracy, and socioeconomic status.
[21] This suggests that patients require more information about the importance of applying CHG preoperatively. This was corroborated by a study conducted by Machoki et al., [22] which analyzed whether patient education, patient counseling, or a mixture of the two would increase compliance rates for patients prescribed tuberculosis treatment. The study found that education and/or counseling may increase compliance; however, compliance rates may vary according to the level of intervention. [22] Overall, studies that have examined the relationship between educational classes and compliance have found a small increase in compliance with the implementation of an additional educational class. These findings from previous studies partially support the current study's results. As suggested by these previous studies, lesser degrees of education received by patients from their care providers may contribute to poor patient compliance with preoperative protocols. In a way, the current study supports these findings, as higher compliance rates were observed in patients who attended the one-hour class (95.9% versus 77.8%; p < .001). Furthermore, we found that the effectiveness of the educational class increased as time went along (from 90.9% compliance during the first month to 100% compliance in the final month). This increase in compliance may indicate improvement in teaching methods over time, supporting the notion that "how" rather than "what" is taught in these educational classes may be the most important indicator of their effectiveness. Interestingly, however, although the absolute compliance rates were not as robust in the cohort that did not attend the educational class, that cohort experienced a greater increase in compliance rates over the course of the study compared to the educational class cohort (see Figure 1). One possible explanation is that we did not control for changes in the existing preoperative work-up in this cohort over the course of the study. These may have come in the form of surgeons simply putting more emphasis on the importance of CHG application during preoperative clinic visits. Nevertheless, this phenomenon complicates the question of just how much of an impact an educational class has in improving compliance rates. This has led the authors to suspect that deeper underlying factors pertinent to both cohorts may be the cause of the improvements observed. There were a few limitations to this study. First, patients were not controlled for the type of procedure they were receiving. All patients who attended the educational class were also total knee and hip arthroplasty patients, while those that did not attend had other procedures. This is a potential confounding factor. Furthermore, subjects were assessed for their compliance rates, but the reason for their compliance or lack thereof was not recorded. Such information could have allowed us to evaluate the relationship between the educational class and its direct influence on SSIs. The study also did not account for outside resources that patients may have referenced regarding CHG use, whether from literature online or from an individual outside of our facilities. These extra resources may have affected the patients' perspective on CHG and were variables that we did not control for. Another limitation is that this study relied on patient self-reports of their CHG usage, which may have led to an over-estimation of compliance.
This study was intended as a quality improvement project. As such, we did not obtain IRB approval to view individualized patient information (e.g., linking which patients attended the educational class to which patients were compliant with CHG application), but rather obtained compliance rates for each cohort as a whole. This is a limitation of the study, as it prevented us from completing more complex statistical analyses such as odds ratios, which would have helped assess the role the educational class had in affecting compliance rates. Finally, the study did not account for patients who had already used CHG for a prior procedure. These patients had experience with applying CHG in the past, which may have improved their knowledge of the proper application and the importance of its use; such secondary exposure to CHG may have allowed for a better understanding in those specific patients and may have affected their compliance.

CONCLUSIONS

In conclusion, we found that there is an association between attendance of an educational class and improved compliance with CHG application prior to surgery. This notion is supported by similar studies, and the addition of patient education appears to play a vital role in increasing patient compliance. However, we cannot separate this effect from the fact that all patients who received this class were TKA and THA patients. Therefore, while we feel that educational classes are beneficial for improving compliance with preoperative instructions, additional research is needed to further evaluate the details that are directly responsible for this improvement.

CONFLICTS OF INTEREST DISCLOSURE

The authors declare they have no conflict of interest.
2019-03-17T13:05:20.099Z
2017-07-04T00:00:00.000
{ "year": 2017, "sha1": "0585c28384c583b602bfba00bfbc4a731ee0693e", "oa_license": null, "oa_url": "http://www.sciedupress.com/journal/index.php/css/article/download/11277/7271", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ed207ec79044c1c68be6954c259b01c20dbf0490", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233813028
pes2o/s2orc
v3-fos-license
Graphene: A new technology for agriculture

This article presents a review on the use of graphene in various segments, elucidating that this product can be used in various industrial sectors. These include mainly agriculture (as in large crops of high relevance, such as coffee), the food industry and the environment, as a plant growth stimulator and in fertilizers, nanoencapsulation and smart-release systems, antifungal and antibacterial agents, smart packaging, water treatment and ultrafiltration, contaminant removal, pesticide and insecticide quantitation, detection systems and precision agriculture. However, some challenges must be overcome before the graphene-based nanoparticle is used on a large scale. In this way, before using the product in the environment, it is necessary to determine whether the technology is safe for the soil-plant system and consumers. Furthermore, the cost of its use can also be a limiting factor depending on the level applied. Therefore, this review proposes to examine the diverse literature to explain the effects of the use of graphene in agriculture, plants and soil microorganisms. Accordingly, this article discusses and presents the possibilities of application of graphene in agriculture, plants and soil microorganisms.

Introduction

In 1948, electron microscopy allowed the visualization of the first images of few-layer graphite. Thereafter, the search for the "isolation of graphene" began. Later on, in 2010, researchers Andre Geim and Konstantin Novoselov, from the University of Manchester, won the Nobel Prize in Physics for discovering graphene and its properties, extracting the famous graphene layers from graphite. Graphene can be used in agriculture and various sectors of the high-tech and food industries. This material can be used in different ways in these segments, e.g. (I) as a plant growth stimulator and a component of fertilizers (Zaytseva & Neumann, 2016); (II) in nanoencapsulation and smart-release systems (Andelkovic et al., 2018; Kabiri et al., 2017); (III) as an antifungal and antibacterial agent; (IV) in smart packaging (Sundramoorthy et al., 2018); (V) in water treatment and ultrafiltration (Homaeigohar & Elbahri, 2017); (VI) in contaminant removal; (VII) for pesticide and insecticide quantitation (Hou et al., 2013); and (VIII) in detection systems and precision agriculture (Wu et al., 2014). Graphene may be considered a renewable material, as it does not depend on natural reserves to be produced (Zaytseva & Neumann, 2016). Despite being a promising material, its interactions with the environment are not well defined (Zaytseva & Neumann, 2016).

Methodology

This review proposes to present current literature findings on various applications of graphene in agriculture, the environment and the food industry, as well as the consequences of its application on the environment, plants and soil microorganisms. In addition, this study examines the prospects and the real needs for more promising and assertive research assessing the impact of this new technology. In this way, before using the product in the environment, it is necessary to determine whether the technology is safe for the soil-plant system and consumers. Furthermore, the cost of its use can also be a limiting factor depending on the level applied. Therefore, this review proposes to examine the diverse literature to explain the effects of the use of graphene in agriculture, plants and soil microorganisms.
Accordingly, this article discusses and presents the possibilities of application of graphene in agriculture, plants and soil microorganisms.

Characteristics, Synthesis and Properties

Graphene is known to be formed by layers of carbon atoms arranged in hexagonal structures (Soldano et al., 2010). Depending on reaction conditions, graphene nanomaterials (GNMs) can be formed with different chemical surfaces, which differ in morphology and oxygen content (O:C ratio). For this reason, their electrochemical and conductivity responses differ. Because it has higher amounts of sp2-hybridized carbon, the reduced form of graphene oxide (rGO) has a higher conductivity than graphene oxide (GO) (Jain & Mishra, 2016). Compared to pristine graphene, GO, with its various oxygen groups, has relatively high solubility. The biocompatibility of GO can be improved with the use of polyetherimide (PEI) and polyethylene glycol (PEG) (Seo et al., 2011). The nanotoxicity of graphene can also be reduced with the use of amines, for instance. Negative (anionic) surfaces are less toxic than positive (cationic) surfaces, whereas neutral ones are more biocompatible (Goodman et al., 2004). This is due to the affinity of cationic particles for phospholipids or negatively charged proteins. However, the effect of the surface charges of graphene on nanotoxicology is not yet fully elucidated.

Graphene gained prominence in the 2000s, when Geim and Novoselov isolated and characterized it for the first time (Brownson et al., 2012), in a technique known as the 'Scotch tape method' (Novoselov et al., 2005). The method, which consisted of removing pieces of graphite with Scotch tape, earned Geim and Novoselov the Nobel Prize in Physics in 2010. By this technique, the pieces of graphite are exfoliated with more adhesive tape and applied to silica sheets until an atom-thick layer of graphite (graphene) is finally immobilized (Novoselov et al., 2005). Nonetheless, large-scale graphene production requires the use of other methods. At present, producers employ the "Hummers" method and its variations, developed by William Hummers in the late 1950s (Hummers & Offeman, 1958). This method uses strong acids and powerful oxidizing agents to separate the graphene layers from the graphite source. Although other production methods exist, they are little used. There is also the possibility of producing graphene from crop residues, which could further reduce production costs, e.g., agricultural waste from sugarcane bagasse (Somanathan et al., 2015). In addition, graphene can be produced from bacteria such as Shewanella (Lehner et al., 2019), with high efficiency in terms of cost, time savings and the environment in comparison to chemical methods of graphene production.

Graphene Applications in Agriculture, the Food Industry and the Environment

Graphene can be used in agriculture and various sectors of the high-tech and food industry, e.g. (I) as a plant growth stimulator and a component of fertilizers (Zaytseva & Neumann, 2016); (II) in nanoencapsulation and smart-release systems (Andelkovic et al., 2018; Kabiri et al., 2017); (III) as an antifungal and antibacterial agent; (IV) in smart packaging (Sundramoorthy et al., 2018); (V) in water treatment and ultrafiltration (Homaeigohar & Elbahri, 2017); (VI) in contaminant removal; (VII) for pesticide and insecticide quantitation (Hou et al., 2013); and (VIII) in detection systems and precision agriculture (Wu et al., 2014).
a. Nanoencapsulation and smart-release systems

Although fertilizers are essential in modern agriculture, their use efficiency is still in need of enhancement, given the losses to the environment. In this respect, the use of graphene in the development of new slow-release fertilizers can be an important alternative to reduce these losses (Andelkovic et al., 2018; Kabiri et al., 2017). An example is the coffee crop, which currently uses slow-release fertilizers on a large scale. Graphene oxide is composed of a negatively charged layer capable of retaining cationic micronutrients such as zinc (Zn) and copper (Cu), or anions such as negatively charged phosphate; for the latter purpose, GO must be treated with iron (Fe). Thus, the release of these nutrients is slower than from soluble fertilizers and can better respond to the demands of cultivated plants (Kabiri et al., 2017). It is known that, in covering fertilizer granules, a graphene layer can enhance their physical resistance, preventing friction damage and degradation during manufacture, transport and application (Kabiri et al., 2017). Moreover, the addition of GO in the encapsulation process of slow-release fertilizers may be an alternative for this segment, preventing waste and overdose (Zhang et al., 2014). In this context, GO can be used in crops with high added value, e.g. vegetables, fruits or coffee. A slower release of P is achieved by applying the GO-Fe-P composite, thereby reducing the possibility of soluble P leaching in comparison to the commercial fertilizer monoammonium phosphate (MAP) (Andelkovic et al., 2018). For potassium nitrate, the nutrient release process was extended to 8 h in water after the fertilizer was encapsulated with GO films (Zhang et al., 2014).

b. Plant growth stimulators and fertilizers

The use of graphene can increase germination rates and stimulate growth, but it can also have contradictory effects depending on various factors such as time of exposure, concentration, particle size and plant species, among others. Therefore, further research is warranted to determine the appropriate concentration to improve plant growth without causing phytotoxicity and negative environmental changes (Zaytseva & Neumann, 2016).

c. Antifungal and antibacterial agents

Graphene has antifungal activity, which renders it an excellent product for the development of new fungicides. Reduced graphene oxide has the potential to inhibit the mycelial growth of three fungi (Aspergillus oryzae, Fusarium oxysporum and Aspergillus niger) (Sawangphruk et al., 2012) through induced damage to the microbial membrane, changes in electron transport (Shaobin Liu et al., 2012) and oxidative stress due to the antimicrobial activity of GO (Hui et al., 2014; Mangadlao et al., 2015). Larger graphene sheets are known to have greater antibacterial activity than small sheets (Akhavan & Ghaderi, 2010). The inactivation of R. solanacearum is caused by rupture of the cell membrane through the antibacterial activity of graphene in its different preparation forms, which results in the release of the cytoplasmic content of the bacterial cell. Antibacterial activity may also depend on the size of the GO sheet: large GO sheets were found to have less effective antibacterial action than small GO sheets (Perreault et al., 2015).
The quality of cut roses can be improved and their vase life extended with the use of GO, owing to its antimicrobial activity. Longer vase life, better water relations and larger diameter were observed in cut roses grown at a low rate of GO (0.1 mg/L), as a result of its germicidal and preservative action (He et al., 2018).

Graphene-Silver (Ag) composite

Until the 1940s, silver was widely used to treat bacterial infections, but following the discovery of the first, more effective antibiotics, it lost ground. However, due to the development of antibiotic resistance by some microorganisms, silver is regaining scientific relevance (Möhler et al., 2018). Wider use of silver is limited by its cytotoxicity and low stability (Cai et al., 2012). For these reasons, hybrid materials must be used, such as graphene combined with silver nanoparticles (AgNPs), which possess strong antibacterial properties against many gram-negative and gram-positive strains (Shao et al., 2015). Nonetheless, the factors that affect their antibacterial activity and mechanism remain unclear (Tang et al., 2013). Among silver nanomaterials, GO-AgNPs show improved antibacterial activity against the gram-positive strain B. subtilis and the gram-negative strain E. coli (Ma et al., 2013). The leakage of sugars and proteins from the cell walls of Bacillus subtilis and S. aureus during interaction with these GO-AgNPs resulted in 100% effectiveness in the elimination of bacterial colonies (Das et al., 2013). To prevent the development of microorganisms on medical devices and food packaging, a GO-Ag nanocomposite prepared in the presence of sodium citrate and silver nitrate can be used; with this property, the nanocomposite may be able to prevent P. aeruginosa from developing on stainless-steel surfaces (Faria et al., 2014; Tang et al., 2013). At a concentration of 100 µg/mL, the rGO-nAg nanocomposite was more effective against Escherichia coli, Proteus mirabilis and Staphylococcus aureus than rGO or nAg alone. The nanocomposite was as active as the systemic antibiotic nitrofurantoin against E. coli, S. aureus and P. mirabilis (Prasad et al., 2017); moreover, nitrofurantoin inhibited these bacteria more slowly than the rGO-nAg nanocomposite. Graphene oxide-AgNP nanocomposites provided an almost three and seven times greater inhibition of Fusarium than pure suspensions of AgNPs and GO, respectively; thus, plant infection by F. graminearum can be controlled by GO-AgNP nanocomposites (Chen et al., 2016). Several biocides have been used to control Xanthomonas oryzae in rice. However, indiscriminate use of these products is known to ultimately promote resistance in the microbial community, in addition to residual contamination of rice, causing risks to human health. Liang et al. (2017) observed that a low concentration of GO-Ag (2.5 µg/mL) completely inactivated some bacterial species. The severity of bacterial spot, one of the most important diseases of tomato, caused by Xanthomonas sp., can be significantly reduced with the application of GO-Ag at 100 ppm, as compared with no treatment, without risk of phytotoxicity (Ocsoy et al., 2013).

Graphene-Germanium (Ge) composite

Germanium (Ge) is widely distributed in the Earth's crust. As an element analogous to silicon (Si), Ge shows chemical properties and characteristics very similar to those of Si (Wiche et al., 2018). As an elementary semiconductor material, germanium has been an attractive candidate for the manufacture of microelectronic devices.
The presence of graphene gives Ge a satisfactory antibacterial capacity against Staphylococcus aureus and an acceptable antibacterial capacity against Escherichia coli, due to its action of phospholipid disturbance and electron extraction at the interface between graphene and the biomembrane of the microorganism (Geng et al., 2016).

a. Contaminant removal

Environmental contaminants can be removed with graphene-based materials. Wu et al. (2012) used graphene as a new fiber-coating material for solid phase microextraction (SPME) coupled with HPLC-DAD for the detection of four triazine herbicides (including atrazine, ametrine and prometrine) in water samples. The recovery of triazine herbicides in water samples ranged between 86.0 and 94.6%. The highest extraction efficiency was obtained with the graphene-coated fiber, as compared with commercial fibers (CW/TPR, 50 µm; PDMS/DVB, 60 µm). In the future, graphene oxide may also act in the removal of dyes (do Nascimento et al., 2020). Results obtained with the sample prepared with less H2SO4 (GO-21) showed better performance in the removal of methylene blue (99% removal) and light blue (29% removal). The kinetics showed that equilibrium was reached in 30 min, removing 67.43% of the current and 90.23% of the effluent turbidity. Phytotoxicity tests indicated that wastewater treated with GO-21 was less toxic than the other analyzed samples of wastewater (Nascimento et al., 2020). You et al. (2018) demonstrated that applied graphene oxide membranes can remove organic matter from water.

Smart food packaging

Incorporating a layer of graphene into packaging can provide a substantial reduction in its permeability, prolonging the useful life of a given food product and contributing to the maintenance of its quality and safety (Sundramoorthy et al., 2018).

Water treatment and ultrafiltration

The exponential growth of environmental pollution caused by increasing global industrialization and the demographic explosion has resulted in the contamination of water resources. In this respect, graphene can be of great use thanks to its large contact surface, high mechanical resistance, atomic thickness, microporosity and reactivity to non-polar and polar pollutants in water. These characteristics provide excellent water purification efficiency, high permeability and water selectivity (Homaeigohar & Elbahri, 2017). Specific studies have shown that a single layer of porous graphene can be used as a desalination membrane (Homaeigohar & Elbahri, 2017; Surwade et al., 2015).

a. Impact of Graphene on Plants

Graphene is currently well known for its characteristics and abilities in the regulation of plant growth (Shen et al., 2019). Depending on the level used, graphene oxides can reduce chlorophyll, inhibit plant growth, damage cell structures and induce genotoxicity or oxidative stress in plants (Anjum et al., 2014; Nair et al., 2012; Zhao et al., 2014). The release of root exudates, such as secondary metabolites, small-molecule acids, alkanes, alcoholates and amino acids, can be stimulated by pristine graphene oxide (Du et al., 2015). At a low concentration (5 mg/L), graphene can influence plant growth (Begum et al., 2011). In contrast, higher levels (≥50 mg/L) can inhibit development (Anjum et al., 2013, 2014). After 20 days of exposure to graphene (500 to 2000 mg/L), cabbage, spinach and tomato plants showed reduced growth and biomass when compared with their control counterparts (Begum et al., 2011).
Depending on the graphene concentration, the number and size of leaves on the plant may decrease and necrotic lesions may appear due to oxidative stress. However, little or no significant toxic effect was observed on lettuce. Thus, the level, the time of exposure and the plant species have a great influence on the effect of graphene (Begum et al., 2011). With the use of 1 to 10 mg/L of GO, the length of the adventitious root and the number of lateral roots in 'Gala' apple (Malus domestica) at three weeks of age were inhibited. However, at GO rates between 0.1 and 1 mg/L, rooting rates and the number of adventitious roots showed a significant increase when compared with the control (without GO). The treatment with GO increased the activity of the catalase (CAT), superoxide dismutase (SOD) and peroxidase (POD) enzymes in apple trees, as compared with the controls (Li et al., 2018). Low graphene concentrations accelerated the germination of tomato seeds in comparison to the control treatment (Zhang et al., 2015). This response is explained by the fact that graphene penetrates the seed epidermis, making it break more easily, thereby capturing more water and resulting in faster and more efficient germination. As regards seedling growth, graphene was also able to penetrate the cells at the tip of the root, promoting longer roots but resulting in less shoot biomass (Zhang et al., 2015). Radish growth increased at Ag-GO rates of 0.2 to 1.6 mg/mL, but the level of 0.8 mg/mL slowed the growth of cucumber, and rates above 0.2 mg/mL inhibited the growth of alfalfa. At limited concentrations, GO increases growth, root length, leaf area, number of leaves and the formation of buds and flowers in Arabidopsis thaliana and watermelon. In addition, GO can also affect the ripening, circumference and sugar content of watermelon (Park et al., 2020). The treatment of Brassica napus with 25 to 100 mg/L of GO resulted in a shorter root length as compared with the control. Treatment with GO also resulted in a lower indole acetic acid content and a higher abscisic acid content in comparison to control samples. Seeds of coriander and garlic plants treated with 0.2 mg/mL of graphene for 3 h before planting exhibited an increase in growth rate (Chakravarty et al., 2015). Corn plants exposed to a low concentration of sulfonated graphene (50 mg/L) showed an increase in plant height, whereas the opposite effect was described at a high concentration (500 mg/L) (Ren et al., 2016). In this way, adverse effects depend on the applied level, and the phytotoxicity mechanisms involve oxidative stress-induced necrosis (Begum et al., 2011). Graphene oxide can impair the oxidative balance of plants, inhibiting photosynthesis and plant growth (Du et al., 2016). Many questions still require investigation, such as the uptake and translocation of graphene in plants, its effects on gene expression, and electrochemical interactions between the soil and the rhizosphere (Hu & Zhou, 2013).

b. Effects of Graphene on Microbial Diversity

According to Ren et al. (2015), the rate of removal of pollutants from the soil can be increased by using a small amount of graphene (<100 mg/kg of soil), which increases soil microbial enzymatic activity and bacterial biomass in a short time.
When the graphene concentration in the soil was extremely high, the number of bacteria present in the soil decreased significantly. This may have adverse effects on the soil nitrogen cycle, because iron-reducing bacteria and nitrogen-fixing bacteria have difficulty growing. In the natural environment, the dispersibility, stability and toxicity of GO composites were significantly lower than those of graphene in liquid medium (Mejías Carpio et al., 2012). Graphene oxide can attack microbial cells, destroying the cellular structure and leading to cell death (Mejías Carpio et al., 2012). Due to its high stability, GO is not easily degraded in the environment (Kurapati et al., 2015). Chen et al. (2017) found that the influence of graphene on soil microorganisms is greater than in the aquatic environment, due to its low solubility in water. In addition, the toxicity of graphene is significantly lower than that of GO. Wang et al. (2013) observed a 10% increase in the activity of oxidizing bacteria with the use of 0.1 g/L GO. GO concentrations of 0.05 to 0.1 mg/mL induced an increase in the production of proteins and carbohydrates. Graphene can also influence plant growth-promoting rhizobacteria. The effect of graphene was evaluated on five bacterial isolates selected from the rhizosphere of an agricultural field, identified as B. marisflavi, B. cereus, B. megaterium, B. subtilis and B. mycoides. The results suggest that GO reduces cell viability depending on concentration and time, demonstrating that it can negatively affect bacterial communities in the soil (Gurunathan, 2015). Combarros et al. (2016) found that GO concentrations above 50 mg/L inhibited the growth of P. putida. Even at the low rate of 1 mg/kg, rGO has the ability to interfere with bacterial composition (Forstner et al., 2019). On the other hand, the addition of graphene increased the richness and diversity index of the bacterial community in Cambisols, varying with the graphene concentration (0, 10, 100 or 1000 mg/kg) and incubation time (7, 15, 30, 60 or 90 days) (Song et al., 2018). Finally, the behavior of graphene in the environment can be influenced by salt, pH, natural organic matter or temperature. Recently, Lu et al. (2017) investigated the effects of montmorillonite, illite and kaolinite on GO transport. Graphene oxide transport is significantly inhibited by the presence of clay minerals, and the inhibition followed the order kaolinite > montmorillonite > illite. The researchers suggested that the transport inhibition is due to the existence of positively charged edge sites in these clay minerals. Sun et al. (2015) found that GO retention can be reduced with increasing sand particle size (coarse < medium < fine).

Conclusion

Research on graphene has evolved rapidly, bringing innovations to the industrial sector, agriculture and the environment. Studies already exist on the response of plants to graphene. Depending on its concentration, there can be variations in the degree of absorption by plants, and phytotoxicity is possible. However, low levels of graphene can be beneficial for the development of some plants. Thus, some challenges still need to be overcome before the graphene-based nanoparticle is used in the field. Future research must still focus on efforts to define doses in different real agricultural systems.
2021-05-07T00:04:40.898Z
2021-02-28T00:00:00.000
{ "year": 2021, "sha1": "23b3a3f22c405f07493adca5854301d6383c8a3f", "oa_license": "CCBY", "oa_url": "https://rsdjournal.org/index.php/rsd/article/download/12827/11604", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4ca70775f6fd0c90ffe6d8155915cb6aebe9048e", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Physics" ] }
55488649
pes2o/s2orc
v3-fos-license
Hypomagnesaemia in Diabetes

Background: The relationship between diabetes mellitus and minerals has been reported frequently. Recent studies have demonstrated reduced levels of serum magnesium in diabetes mellitus, especially in poorly controlled patients. Clinically, there are significant data linking hypomagnesemia to various diabetic micro- and macrovascular complications. Therefore, Mg supplementation may provide a new therapeutic approach to reducing vascular disease in patients with diabetes. Objective: To study the levels of serum magnesium in diabetic patients in a tertiary hospital in North Kerala. Methodology: A hospital-based cross-sectional study was done in all diabetic patients attending the out-patient and in-patient facilities of the Department of Internal Medicine at a tertiary hospital in North Kerala between January 1st and March 31st, 2014. Results: In the study, hypomagnesemia was seen in 15.7% of the participants. No significant difference in serum magnesium was found between genders (p = 0.35) or across age (p = 0.057). Serum magnesium levels were not seen to have any significant association with duration of diabetes or type of treatment. A significant relation could not be established with other complications associated with type 2 diabetes mellitus, such as hypertension and dyslipidemia. A significant negative correlation was seen between serum magnesium and fasting blood sugar (p < 0.001) as well as HbA1c values (p < 0.001). It was also observed that there was a significant difference in the level of serum magnesium between controlled and uncontrolled diabetics (p = 0.002). Conclusion: Hypomagnesemia is common in diabetes and negatively correlated with fasting blood sugar, but not related to the duration of diabetes or the type of treatment taken.

INTRODUCTION

The relationship between diabetes mellitus and minerals has been reported frequently. 1,2 Decreased levels of Zn and Mg are associated with increased values of HbA1c. These findings suggest that impaired metabolism of trace elements is involved in the pathogenesis of diabetes through participation in oxidative stress. 3 Recent studies have demonstrated reduced levels of serum magnesium in diabetes mellitus, especially in poorly controlled patients. 4 This reduction in magnesium level has been attributed to renal losses, decreased intestinal absorption and redistribution of magnesium from plasma into blood cells caused by the effect of insulin. 5 Hypermagnesuria results specifically from a reduction in tubular absorption of magnesium. 6 Use of diuretics among patients with diabetes may also contribute to magnesiuria. 7 Finally, the common use of antibiotics and antifungals such as aminoglycosides and amphotericin in patients with diabetes may also contribute to renal Mg wasting. 8 Although many authors have suggested that diabetes per se may induce hypomagnesemia, others have reported that higher Mg intake may confer a lower risk for type 2 diabetes. [9][10][11] Hypomagnesemia has been reported to occur in 13.5 to 47.7% of non-hospitalized patients with type 2 diabetes, compared with 2.5 to 15% among their counterparts without diabetes. [12][13][14] The wide range in the reported incidence of hypomagnesemia most likely reflects differences in the definition of hypomagnesemia, techniques of Mg measurement and the heterogeneity of the selected patient cohorts. Type 2 diabetes is on track to become one of the major global health challenges of the 21st century. Primary prevention remains the major strategic approach.
Clinically, there are significant data linking hypomagnesemia to various diabetic micro- and macrovascular complications. Hypomagnesemia has been demonstrated in patients with diabetic retinopathy, with lower magnesium levels predicting a greater risk of severe diabetic retinopathy. 6 Magnesium depletion has been associated with multiple cardiovascular implications such as arrhythmogenesis, vasospasm, hypertension and altered platelet activity. 5 Magnesium deficiency may have some effect on the development of diabetic vascular complications together with other risk factors such as systemic hypertension and dyslipidemia. 6 As per some studies, changes in magnesium levels can affect insulin action, which is mainly attributed to decreased insulin tyrosine kinase activity. 15 Long-term hyperglycemia in patients with type 2 diabetes increases the risk of chronic complications such as nephropathy, which may exacerbate hypomagnesemia and aggravate the complications. 16 A serum Mg concentration between 2.0 and 2.5 mg/dl in patients with diabetes may be favorable. Although the correction of low serum Mg levels has never been proved to be protective against chronic diabetic complications, intervention is justified because hypomagnesemia has been linked to many adverse clinical outcomes. Therefore, Mg supplementation may provide a new therapeutic approach to reducing vascular disease in patients with diabetes. 17 In addition, Mg supplementation is inexpensive and, with the exception of diarrhea, a relatively benign medication. Nonetheless, close observation must be given to those with renal insufficiency. The present study was undertaken to evaluate serum magnesium levels in diabetic patients in a tertiary hospital in North Kerala. The present study has particular relevance in Kerala, since no study has been reported to determine the prevalence of hypomagnesemia among diabetics.

Inclusion Criteria: Symptoms of diabetes mellitus plus a random blood glucose concentration ≥200 mg/dL; fasting blood glucose ≥126 mg/dL and/or 2-hour postprandial glucose ≥200 mg/dL.

Exclusion Criteria: Patients on diuretic, amphotericin or proton pump inhibitor therapy, or with iatrogenic magnesium administration.

Data were collected using a pre-tested structured questionnaire, and clinical examination findings were recorded. Relevant investigations, including fasting blood sugar, glycated hemoglobin and serum magnesium, were done. Hypomagnesemia was defined as a serum Mg concentration <1.6 mg/dl. Glycated hemoglobin values of 6-7 indicate good control, 7-8 moderate control, and more than 8 poor control. Descriptive analysis was done.

RESULTS

A total of 70 diabetic patients were studied during the study period. Half (51.4%) of the subjects were males. Table 1 shows the baseline characteristics of these 70 subjects. The age of the study population ranged between 36 and 95 years, with a mean age of 55.93 years (±10.03). Three-fourths (75%) were aged between 50 and 69 years. Very few (4.3%) had gone beyond high school education. Most (98.6%) were currently married, and one person was widowed. Just over half of the diabetic subjects (54.3%) reported having used some type of tobacco product, and 10% reported alcohol consumption. Twenty-three (32.9%) subjects gave a history of diabetes in their families. The duration of diabetes ranged between 1 and 30 years, with a mean duration of 9.71 years (±6.125). Thirty-one subjects (44%) had been diabetic for less than 10 years, another 35 (50%) for 10-20 years, and only 3 for more than 20 years.
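As a minimal sketch of how the study's working definitions and main comparisons could be applied to a per-patient table, the following Python fragment uses the thresholds stated above (serum Mg < 1.6 mg/dl for hypomagnesemia; the HbA1c bands for glycaemic control) together with the correlation and group comparisons reported in the following results. The file name, column names, boundary handling of the HbA1c bands, and the use of pandas/scipy are illustrative assumptions rather than details taken from the study.

import pandas as pd
from scipy.stats import pearsonr, ttest_ind

# Hypothetical per-patient table with columns:
# serum_mg (mg/dl), fbs (mg/dl), hba1c (%), sex ('M'/'F').
df = pd.read_csv("diabetic_patients.csv")

# Study definitions: hypomagnesemia = serum Mg < 1.6 mg/dl;
# glycaemic control graded from HbA1c (band boundaries assumed inclusive at the lower end).
df["hypomagnesemia"] = df["serum_mg"] < 1.6

def control_grade(hba1c):
    if hba1c <= 7:
        return "good"
    if hba1c <= 8:
        return "moderate"
    return "poor"

df["control"] = df["hba1c"].apply(control_grade)

print(100 * df["hypomagnesemia"].mean())        # % hypomagnesemic (15.7% reported here)
print(pearsonr(df["fbs"], df["serum_mg"]))      # r, p for FBS vs. serum Mg
print(pearsonr(df["hba1c"], df["serum_mg"]))    # r, p for HbA1c vs. serum Mg
print(ttest_ind(df.loc[df["sex"] == "M", "serum_mg"],
                df.loc[df["sex"] == "F", "serum_mg"]))  # male vs. female comparison (t, p)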
No relation was seen between the duration of diabetes and serum magnesium (p = 0.736). Except for 2 patients on dietary control alone, the remaining 68 were on some kind of medication. Table 2 depicts the treatment pattern followed by these subjects. Nearly three-fourths (72.86%) were on oral hypoglycemic agents (OHAs) and a few (17.14%) were on both OHAs and insulin. No significant difference in serum magnesium was seen between treatment modalities (p = 0.855). Table 3 shows the proportion of subjects having other comorbidities: a quarter (24.28%) had hypertension, and 8.5% each had a history of coronary artery disease and peripheral vascular disease. Table 4 depicts the distribution of the study participants based on fasting blood sugar values, serum magnesium levels and control based on HbA1c level; the table also gives the mean (±SD) of these variables. Most (82.9%) of the subjects had a fasting blood sugar value >126 mg/dL. The blood sugar levels of the majority (62.9%) were found to be under poor control based on HbA1c level. Hypomagnesemia (<1.6 mg/dl) was found in 10 subjects (15.7%). The mean FBS level was higher in the hypomagnesemic group than in the normomagnesemic group, as shown in Table 5; this difference was significant (p = 0.003). A negative correlation was found between serum magnesium and fasting blood sugar (r = -0.349; p = 0.003), i.e., as the FBS value increases, serum magnesium decreases. Similarly, the mean HbA1c value was higher in hypomagnesemic patients than in patients with normal magnesium levels (Table 5); the finding that patients with hypomagnesemia have poorly controlled diabetes was significant (p = 0.002). A negative correlation was found between HbA1c and serum magnesium (r = -0.296; p = 0.013), i.e., as HbA1c increases, serum magnesium decreases. No significant difference was found in serum magnesium levels between males (mean 1.89 ± 0.35) and females (mean 1.82 ± 0.281) [t = 0.941; p = 0.35].

DISCUSSION

Hypomagnesemia is a common feature in patients with type 2 diabetes mellitus. This study was designed to determine serum magnesium levels in type 2 diabetics and how they are associated with the duration, treatment modalities and complications of the disease. The present study included 70 diabetes mellitus patients. In this study, hypomagnesemia was seen in 15.7% of the participants, and this falls within the range seen in other studies (13.5-47.7%) among non-hospitalized patients with type 2 diabetes. [12][13][14] The study conducted by AP Jain et al. 18 in 1986 found that 32% of diabetic subjects had serum magnesium below 1.6 mg/dl. In the present study, no significant difference was found in serum magnesium between males and females [t = 0.941; p = 0.35]. Independent studies have reported a higher incidence of hypomagnesemia in women compared with men, at a 2:1 ratio. [19][20][21] In addition, men with diabetes may have higher ionized levels of Mg. 22 In the present study, no significant correlation was observed between the age of the cases and serum magnesium levels (Pearson correlation -0.207; p = 0.057). Previous studies by Mishra S et al. 23

CONCLUSION

In this study, hypomagnesemia was seen in 15.7% of the participants. No significant difference was found in serum magnesium between the genders (p = 0.35) or between age and serum magnesium levels (p = 0.057). In the present study, no significant association was found between the duration of diabetes and serum magnesium levels.
No significant difference was noted when serum magnesium levels were analyzed between OHA- and insulin-treated patients. Other associated complications of type 2 Diabetes Mellitus, such as hypertension and dyslipidemia, were also analyzed against serum magnesium, but no significant relation could be established. In our study, a significant negative correlation existed between both fasting blood sugar and HbA1c values and serum magnesium (p<0.001). In the present study it was observed that there was a significant difference in the level of serum magnesium between controlled and uncontrolled diabetics (p = 0.002). LIMITATIONS OF THE STUDY Plasma magnesium is a relatively insensitive measure of the body's magnesium status, because the major bulk of magnesium lies within the cells. Intra-erythrocyte magnesium and urinary magnesium were not measured in the present study due to lack of facilities and financial constraints. There was no follow-up in this study; hence, the change in magnesium status with improvement or worsening of the diabetic state in the long run was not studied. This study focuses on magnesium levels in type 2 Diabetes Mellitus at a given point in time, but not on whether therapeutically correcting (or not correcting) hypomagnesemia alters the future course of the disease and its outcome.
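For readers who wish to see how the key comparisons reported above can be reproduced, the following is a minimal sketch assuming the per-patient data were available as a simple table. The column names, the data file, and the use of Python's pandas/scipy are illustrative assumptions, not the authors' actual analysis workflow (the paper states that descriptive analysis was done).

```python
# Illustrative sketch (not the authors' code): the correlation of serum
# magnesium with FBS/HbA1c and the group comparison by magnesium status.
# Column names ("fbs", "hba1c", "serum_mg") and the file are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("diabetes_magnesium.csv")  # hypothetical per-patient data

# Pearson correlations of serum magnesium with FBS and HbA1c
r_fbs, p_fbs = stats.pearsonr(df["fbs"], df["serum_mg"])
r_hba1c, p_hba1c = stats.pearsonr(df["hba1c"], df["serum_mg"])
print(f"FBS vs Mg: r={r_fbs:.3f}, p={p_fbs:.3f}")
print(f"HbA1c vs Mg: r={r_hba1c:.3f}, p={p_hba1c:.3f}")

# Hypomagnesemia defined in the study as serum Mg < 1.6 mg/dl
hypo = df[df["serum_mg"] < 1.6]
normo = df[df["serum_mg"] >= 1.6]
print(f"Prevalence of hypomagnesemia: {100 * len(hypo) / len(df):.1f}%")

# Independent-samples t-test of mean FBS between the two groups
t, p = stats.ttest_ind(hypo["fbs"], normo["fbs"], equal_var=False)
print(f"Mean FBS, hypo vs normo: t={t:.3f}, p={p:.3f}")
```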
2019-03-16T13:12:52.360Z
2017-02-28T00:00:00.000
{ "year": 2017, "sha1": "12a1acff4258c726a3aeed956668d4dba4e5c8c1", "oa_license": null, "oa_url": "https://doi.org/10.18535/jmscr/v5i2.154", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "96f034b46b21ed2a3c8c06201978950ec5db8a9a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
21981578
pes2o/s2orc
v3-fos-license
What's the Buzz: Tell Me What's Happening in Breast Cancer Screening Many controversies have come to light related to breast cancer screening recommendations for average- and high-risk populations. This manuscript focuses on factors to consider when coordinating and conducting breast cancer screening programs in an average or "healthy women" population. As presented at the 2016 ONS Congress, a brief comparison of current screening recommendations among various organizations for early detection of breast cancer is provided. Lessons learned regarding key components of successful screening programs such as being patient focused, accessible, and sustainable are shared. Practice implications such as gaining confidence in providing individualized patient education, encouraging every woman to discuss her risk of breast cancer with her health-care provider, advocating for patients' needs and being involved in or aware of clinical and translational research on the efficacy of the clinical breast examination and screening services are critical roles for nurses and advanced practice nurse providers. Introduction The past few years have been fraught with controversies related to breast cancer screening recommendations for average- and high-risk populations. This manuscript focuses on factors to consider when coordinating and conducting effective breast cancer screening programs in an average or "healthy women" population. The American Cancer Society [1] is a highly reputable organization in the United States. Effective Screening in Average-risk Women When most nurses think of secondary cancer prevention, thoughts turn to what behaviors or actions patients can adopt to detect disease early, when it is most amenable to treatment. Early detection is a key strategy of secondary cancer prevention, but to be truly beneficial it should include four key components: be patient focused, accessible, and sustainable, and allow for seamless follow-up and/or referral. Offering services that are patient focused comprises not only a dedicated facility or area carved out in a Radiology Department where women can wait separately from the general radiology patient population, but also one where women feel comfortable with the individualized care given to them. For example, a dedicated cancer center may have outreach workers who speak a patient's native language and serve as lay navigators to them from the time they meet in a community outreach program through screening, follow-up, or referral.
Accessibility ensures that no matter where a potential consumer resides, they have the means to access services. Many urban cancer centers have satellite outpatient offices in suburban and rural areas that are open at variable hours. These programs may assist patients in arranging transportation services and provide a "one-stop" visit so that intake, provider examination, education, and screening mammography occur in the same encounter. Sustainability occurs when a center provides services for both insured and uninsured or underinsured patients. One such service is a primarily grant-funded Cancer Outreach Program that has been in existence for more than 15 years. It is directed by an Advanced Practice Nurse (APN), and she and another APN provide weekly cancer screening clinics to a vulnerable population through grant funding from a variety of grantors such as the New Jersey Cancer Education and Early Detection Program, Susan G. Komen, and Avon Breast Cancer Foundation. The services provided include not only the full realm of cancer screening but also follow-up and referral for benign or malignant conditions diagnosed through the outreach program. Screening Controversy As previously noted, slight variation in screening mammography recommendations for women at average risk has led to controversy. Community members may experience confusion and misunderstanding related to messages the media provides about breast cancer screening recommendations. This may result in individuals avoiding screening behaviors and services altogether. Nurses working both in and outside of the oncology specialty may also question whose recommendations should be promoted and how to broach the subject of breast cancer screening with their respective patients of average risk. Professional organizations in the Americas such as the American Medical Association (AMA), the American College of Obstetrics and Gynecology (ACOG), [2] the American College of Radiology (ACR) [3] and Society of Breast Imaging, the ACS, the National Comprehensive Cancer Network (NCCN ® ) [4] and Siu and the United States Preventive Services Task Force (USPSTF) [5] have all put forth recommendations that differ slightly. Each agency has now moved away from recommending monthly breast self-examination, a longtime staple activity promoted by the ACS. This practice has changed to patients practicing a behavior called breast awareness. The frequency of clinical breast examination (CBE) conducted by health-care providers has also been called into question. Most importantly, as noted in the literature, the role of mammography and its harm versus benefit continues to be debated. [6][7][8][9] Experts have cited both false positives and overdiagnosis as controversial issues linked to screening mammography. With false-positive findings, the most common outcome is women being recalled for additional imaging. A small percentage of women who are recalled go on to biopsy, yet the majority of these women will have benign findings. [8] Hubbard et al. [6] shared data from a cohort study of the cumulative probability of false-positive recall or biopsy recommendations. The factors they associated with false positives included greater mammographic breast density, use of post-menopausal hormone therapy, longer intervals between screening, and lack of comparison images for review by radiologists. The same researchers noted overdiagnosis of breast changes that would not have led to symptomatic breast cancer if undetected by screening.
Estimates in the literature from empirical studies related to false positives and overdiagnosis vary from < 5% to > 50% of cases reviewed. [10] The concept of overdiagnosis is hard to grasp because we simply do not have the tools to determine which cancers are progressive and which are not. While the concepts of false positives and overdiagnosis continue to be researched, a reference by Oeffinger et al. [10] can provide nurses who would like to further explore this topic with a discussion of both of these factors, in addition to quality-adjusted life expectancy, age to begin screening and screening interval. In contrast, Oeffinger et al. [10] also shared that evidence synthesis revealed that screening mammography in women aged 40-69 years is associated with a reduction in breast cancer deaths across a range of study designs, and that inferential evidence supports breast cancer screening for women 70 and older who are in good health. Making Sense of the Breast Cancer Screening Recommendations To assist patients in understanding the controversy related to breast cancer screening in the average-risk population, nurses need to familiarize themselves with the numerous stakeholders that make and publish guidelines. A review of a sampling of the above-mentioned organizations is provided, as well as a concise summary [Table 1] that highlights each organization's breast cancer screening recommendations. The most recent Guideline Update from the American Cancer Society [1] and a guide for nurses published by the Oncology Nursing Society [11] state that women at average risk of breast cancer should start annual screening mammography at age 45 years. The ACS [1] also includes the caveat that a woman may, by individual choice, begin having screening mammograms at age 40 years. An annual mammography schedule should continue until age 54 years, at which time mammograms can be offered every other year or continue annually as long as the woman is healthy and expected to live another 10 years. CBE is not recommended for average-risk women of any age because research has not shown a clear benefit of physical breast exams done by a health professional for breast cancer screening. According to the ACS, [1] this change in the recommendations was based on systematically reviewed clinical research evaluating the effect of mammography on breast cancer mortality, life expectancy, false-positive findings, overdiagnosis, and quality-adjusted life expectancy. The historical significance of the ACS may not be known by many nurses. It was established in 1913 as a nationwide community-based voluntary organization that follows principles highlighted by independent systematic review of evidence and external review from outside experts related to cancer. A second organization, the USPSTF, often goes head to head with the ACS when secondary prevention recommendations related to cancer are compared. The USPSTF was first convened by the Public Health Service in 1984 to evaluate clinical research and assess the merits of preventive measures that include not only screening tests but also counseling, immunizations, and preventive medicines. The recommendations of the USPSTF [5] are based on evidence that is graded from "A to D" and "I." For screening, the grades range from certainty that a procedure such as CBE or screening mammography provides a moderate to high net benefit, to moderate or high certainty that the service has no benefit or that the harms outweigh the benefits. A grade of "I" equals insufficient evidence.
The USPSTF [5] recommendation for primary screening for breast cancer with conventional mammography is that imaging should start at age 50 years and proceed every other year to age 74 years. This service is graded a "B," as there is high certainty that the net benefit is moderate or moderate certainty that the net benefit is moderate to substantial. Offering women screening mammography before age 50 years is an individual decision that should be made between the patient and provider. This recommendation is graded a "C," which translates to moderate certainty that the net benefit is small. [5] No recommendation is given regarding screening mammography in women aged 75 years and over, as the USPSTF cites insufficient evidence, or "I," to assess the balance of benefits and harms of this service. For nursing and medical clinicians working in the specialty of cancer or oncology, perhaps the most resourceful and respected organization is the NCCN. This alliance of 27 leading cancer centers proposes recommendations or guidelines determined by the results of panel member review of the best evidence, ranging from category 1 (high-level evidence and uniform consensus), 2A (lower-level evidence with uniform consensus), 2B (lower-level evidence with consensus), to 3 (disagreement that the intervention is appropriate). [4] The NCCN Guidelines Version 1.2015 follow an algorithm. Asymptomatic women with a negative physical examination who have average risk for breast cancer would follow one of two paths. Women aged 25 through 39 years are recommended to receive a CBE every 1-3 years. They should also practice breast awareness, which is being familiar with one's breast tissue and promptly reporting any change to a provider. Screening for those aged 40 years and above consists of annual CBE and screening mammography as well as breast awareness. [4] Approaching the Breast Cancer Screening Recommendations Puzzle The question the nurse may ask is: how do I weigh the evidence and decide whose recommendations to follow? The answer is to follow the recommendation that best fits one's institutional policy and/or procedure. Also remember that any recommendation should not replace the professional judgment of the health-care provider. Nurses need to explore whether an algorithm specific to their organizational structure, patient population, and current clinical best practice exists. Any such algorithm must be reviewed annually to make sure it is up to date. Within this advanced practice nurse's practice, the employing cancer center combined the recommendations from the ACR, [3] ACS [1] and NCCN [4] to develop an algorithm for breast cancer screening in both average-risk and high-risk individuals. It outlines educational and research-related tools and resources for use in clinical practice. As such, women aged 20-39 years should consider a CBE every one to three years, women aged 40 years and over should have a CBE every year, and average-risk women should start annual screening mammograms at age 40 years. Screening with mammography is considered as long as the patient is in good health and is willing to undergo additional testing, such as tomosynthesis (three-dimensional mammography) or contrast-enhanced spectral mammography, and a biopsy if an abnormality is detected.
Providers use their individual knowledge and judgment, counsel women about the benefits, risks and limitations of screening mammography, and employ diagnostic or high-risk screening/surveillance as opposed to average-risk screening as needed. [12] Practice Implications Based on the above discussion of breast cancer screening, nurses in general need to be familiar with the evidence related to screening, recognize that screening recommendations may continue to change, and know their own institution's or practice's policy related to secondary prevention. Nurses must feel confident providing individualized patient education, encouraging every woman to discuss her risk of breast cancer with her health-care provider, advocating for patients' needs and being involved in or aware of clinical and translational research on the efficacy of the clinical breast exam and screening services. The RN and APN and other providers should encourage and promote breast awareness not only in their individual patients but also throughout the community by advocating for an active community outreach program. One must remember that every woman is at risk for breast cancer simply by the fact that they are of the female cis gender. According to Leung, [13] a Section Chief of Breast Imaging, "screening mammography continues to be the single most cost-effective tool in early breast cancer detection and mortality reduction. This point must always be kept in mind when considering screening guidelines." It is important for us as a profession to recognize that the evidence does not support a one-size-fits-all approach. Familiarize yourself with screening procedures for healthy women and, if appropriate or outside of your expertise, advocate for further screening and follow-up for your patients. Recognition is given to the entire team at the CCCSP (Cancer Center at Cooper Cancer Outreach Program) and to our grantors Susan G. Komen, Avon Breast Cancer Foundation, and NJ CEED. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
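As a purely illustrative aid, the differing mammography recommendations summarized in this article (and tabulated in its Table 1) can be captured in a compact data structure. The sketch below encodes only what the text above states; the field names and phrasing are assumptions for illustration, and the primary guidelines should be consulted for authoritative details.

```python
# Illustrative only: a compact encoding of the average-risk screening
# recommendations as paraphrased from this article, not from the primary
# guideline documents themselves.
AVERAGE_RISK_SCREENING = {
    "ACS (2015 update)": {
        "mammography_start_age": 45,   # optional earlier start at 40 by individual choice
        "mammography_interval": "annual to age 54, then every 1-2 years",
        "clinical_breast_exam": "not recommended for average-risk women",
    },
    "USPSTF": {
        "mammography_start_age": 50,   # before 50: individual decision (grade C)
        "mammography_interval": "every 2 years, ages 50-74 (grade B)",
        "clinical_breast_exam": None,  # not discussed in this article
    },
    "NCCN (v1.2015)": {
        "mammography_start_age": 40,
        "mammography_interval": "annual, plus annual CBE and breast awareness",
        "clinical_breast_exam": "every 1-3 years for ages 25-39, annual from 40",
    },
}

for org, rec in AVERAGE_RISK_SCREENING.items():
    print(f"{org}: start at {rec['mammography_start_age']}, {rec['mammography_interval']}")
```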
2018-04-03T05:05:57.109Z
2017-04-01T00:00:00.000
{ "year": 2017, "sha1": "97353907d4defd979e1ed358e6667800790ba835", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/2347-5625.204500", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "97353907d4defd979e1ed358e6667800790ba835", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
117826902
pes2o/s2orc
v3-fos-license
Superconductivity in pressurized Rb0.8Fe2−ySe2−xTex We report the finding of pressure-induced elimination and reemergence of superconductivity in Rb0.8Fe2−ySe2−xTex (x = 0, 0.19 and 0.28) superconductors that belong to the family of A-245 superconductors (A = K, Rb, TlRb and Cs), characterized by the presence of an antiferromagnetic (AFM) long-ranged order phase with the superlattice structure of Fe-vacancies. In this study, we investigate the connections between superlattice, AFM phase and superconductivity via the combined approaches of Te doping and application of external pressure. Our data reveal that the superconductivity of the ambient-pressure superconducting phase (SC-I) and the AFM long-ranged order as well as the superconductivity of the pressure-induced phase (SC-II) in the host samples can be synchronously tuned by Te doping. At x = 0.4, the SC-I and AFM long-range ordered phases as well as the SC-II phase disappear together, indicating that the two superconducting phases have intrinsic connections with the AFM phase. Furthermore, in-situ synchrotron x-ray diffraction measurements indicate that the superlattice structure in the x = 0.4 sample still exists at ambient pressure, but collapses at the same pressure where the superlattice of the superconducting samples is destroyed. These results provide new insight into understanding the physics of this type of superconductors. Previous studies found that applying external pressure on the A 0.8 Fe 1.7 Se 2 (A = K, Rb and Cs) superconductors can fully suppress the superconductivity of the ambient-pressure superconducting phase (SC-I) [30][31][32][33] and induce a new superconducting phase (SC-II) at higher pressure [33]. Experimental evidence has shown that the pressure-induced SC-II phase is probably driven by a quantum critical phase transition in which the AFM phase in the pressurized sample undergoes a transition from an AFM state to a paramagnetic (PM) state [30,34]. These results suggest that the superconductivity of the SC-II phase may be closely related to the AFM fluctuations [35,36]. On the other hand, it is known that the iso-valence substitution of Te or S (with larger or smaller ionic radius) for Se in the alkaline iron selenide superconductor can distort its lattice, which can result in a suppression of the long-ranged AFM order and the superconductivity in the SC-I phase [37][38][39][40], but cannot induce an SC-II phase. The pressure-induced reemergence of superconductivity in A 0.8 Fe 1.7 Se 2 (A = K and TlRb) superconductors has attracted considerable attention [41][42][43][44]; meanwhile, puzzles are raised, such as whether the SC-I and SC-II phases are intrinsically connected to each other and what the connections are among the superconductivity, the AFM phase and the superlattice. The answers to these questions may help shed light on the superconducting mechanism of this family of Fe-based superconductors. In this study, we combine two tuning approaches, Te doping on Se sites and the application of external pressure, to conduct comprehensive investigations on Rb-245 superconductors. Experimental details Single crystals of Rb-245 superconductors were grown by the self-flux method as reported in [37]. The actual chemical compositions of all samples investigated were Rb 0.8 Fe 2−y Se 2−x Te x (x = 0, 0.19, 0.28 and 0.4).
In-situ high-pressure electrical resistance and ac susceptibility measurements were carried out in a nonmagnetic diamond anvil cell which is integrated into a home-built refrigerator. Diamond anvils with 500 and 300 μm flats were used for this study. High-pressure resistance measurements were performed by a standard four-probe method in a diamond-anvil cell. Platinum electrodes with dimensions of 20 μm in width and 2.5 μm in thickness were used. The crystal was placed into the hole of the insulating gasket assembled with four electrodes, all of which were put on an anvil. Then the other anvil was pressed onto the gasket. To achieve a quasihydrostatic pressure condition, NaCl powder was employed as the pressure medium for the resistance measurements. High-pressure ac susceptibility measurements were conducted using home-made coils which were set up around the diamond anvils [33,45]. Before sample loading, we performed susceptibility measurements for the high-pressure cell with the coils and gasket only, and took the result as a background signal. The high-pressure magnetic susceptibility data were extracted through background subtraction [45,46]. Temperature was measured with a calibrated Si-diode attached to the diamond anvil cell with an accuracy better than 0.1 K. High-pressure x-ray diffraction (XRD) experiments were performed at beam line 15U at the Shanghai Synchrotron Radiation Facility (SSRF). Diamonds with low birefringence were selected for the XRD experiments. A monochromatic x-ray beam with a wavelength of 0.6199 Å was adopted for all measurements. Pressure was determined by the ruby fluorescence method [47]. Results and discussions Figure 1(a) shows the resistance (R) as a function of temperature (T) for the undoped Rb-245 superconductor measured at different pressures. It can be seen that the R-T curve demonstrates a remarkable hump around 200 K. The resistance hump is suggested to have originated from the competition between the insulating AFM phase and the superconducting phase [27,28,36]. Upon increasing pressure, the hump is suppressed significantly, the same as that seen in the pressurized K-245 and Tl(Rb)-245 superconductors [30,32]. At a pressure of ∼8.4 GPa, we found that the resistance hump becomes almost featureless, which signals the destruction of the long-ranged AFM order [34]. Zooming in on the R-T curve in the low-temperature range, the pressure-induced decrease in Tc is shown more clearly (figure 1(b)). At 7.2 GPa, the superconductivity is fully suppressed, and then the pressure-induced resistance drop is visible again in the pressure range from 8.4 to 11.8 GPa (figure 1(c)). On further increasing the pressure to 14.1 GPa, this resistance drop vanishes, similar to that seen in other A-245 superconductors [33]. To fully characterize the superconducting state in the pressurized Rb-245 superconductor, we performed ac susceptibility measurements at pressures of 1.1, 3.5 and 11 GPa, respectively. The results show that the host sample at these three pressure points is diamagnetic, indicating that the sample is superconducting (figures 1(d) and (e)). We repeated the measurements three times and found that the data are reproducible.
Assuming the superconducting volume fraction of the SC-I phase in the Rb 0.8 Fe 1.7 Se 2 superconductor at 1.1 GPa is 100% (the magnitude of its superconducting transition measured from the real part of the susceptibility is about 42 nV), our high-pressure ac susceptibility result (11.3 nV) indicates that the superconducting volume fraction of the SC-II phase is about 26.9% at 11 GPa. Further resistance measurements under magnetic field or dc current for the sample subjected to 8.4 GPa find that the R-T curves shift to lower temperature when the magnetic field or dc current is increased (figures 1(f) and (g)), providing further evidence for the existence of a pressure-induced SC-II phase. Next we performed high-pressure studies on the Te-doped Rb-245 superconductors. We find that the resistance hump also exists in the pressure-free samples Rb 0.8 Fe 2−y Se 2−x Te x (x = 0.19 and 0.28) (figures 2(a) and (f)). Applying external pressure yields a dramatic suppression of the resistance hump in these two samples. After careful inspection of their R-T plots in the lower-temperature range, we find that the Tc of the SC-I phase declines with increasing pressure (figures 2(b) and (g)). Upon further increasing pressure, the resistance drops featuring the SC-II phase show up at 11.5 GPa for the x = 0.19 sample and at 12.4 GPa for the x = 0.28 sample, respectively (figures 2(c) and (h)). The superconducting transition of the SC-II phase in these two samples is confirmed by the shift of the R-T curve to a lower temperature when the magnetic field or current is increased (figures 2(d), (e), (i) and (j)). Figure 3 illustrates the temperature dependence of resistance for the x = 0.4 sample at different pressures; notably, the sample at ambient pressure is in a semiconducting state. With increasing pressure, the semiconducting behavior is suppressed dramatically (figures 3(b) and (c)). At 13 GPa and above, its resistance decreases remarkably with decreasing temperature (figure 3(d)), indicating that the sample transforms into a metallic state. No SC-II phase is detected in the pressurized sample up to 15.5 GPa (figure 3(e)). The overall behavior of Rb-245 superconductors is summarized in the electronic pressure–composition–temperature phase diagram, as shown in figure 4. Adopting pressure as a control parameter, the Tc of the SC-I phase in the x = 0 sample decreases with increasing pressure, and a new superconducting phase (SC-II) emerges between 8.4 GPa and 11.8 GPa, after the SC-I phase is fully suppressed. The maximum onset Tc of the SC-II phase is ∼53 K at 11.8 GPa. A phase diagram with two superconducting phases has been observed in K-245 and Tl(Rb)-245 superconductors [33], so the results reported in this study further indicate that the pressure-induced reemergence of superconductivity is a common phenomenon for the family of A-245 superconductors. For the x = 0.19 and 0.28 samples, the ambient-pressure Tc of the SC-I phase is lower than that of the undoped sample (Tc = 33 K): Tc = 29.8 K for the x = 0.19 sample and 24.2 K for the x = 0.28 sample (left panels of figure 4), implying that Te doping does not favor superconductivity [37][38][39]. Remarkably, the reduced Tc of the SC-I phase can be partially recovered by applying pressure; the maximum recovery in Tc is 1.1 K for the x = 0.19 sample at 1.2 GPa and 2.2 K for the x = 0.28 sample at 1.7 GPa.
Our results reveal that the Tc of the SC-I phase in the A-245 superconductors is very sensitive to the local lattice distortion from Te doping. It is noteworthy that neither the SC-I nor the SC-II phase is observed in the x = 0.4 sample, at which doping level the long-ranged AFM order at ambient pressure is fully suppressed [30,34,36,37] (upper right panel of figure 4). No observable sign of the SC-II phase exists, even when the paramagnetic semiconducting sample (x = 0.4) is pressurized into a metallic state (figures 3(d) and (e) and 4). Previous high-pressure studies on K-245 and Tl(Rb)-245 superconductors revealed that the SC-II phase emerges from a metallic state, driven by a quantum critical transition [30]. However, because the x = 0.4 sample is in a paramagnetic semiconducting state, pressure is unable to turn on an SC-II phase in such a heavily doped sample. Neutron diffraction studies on A-245 superconductors have confirmed that the AFM long-ranged order is associated with the superlattice structure [34]. To clarify the role of the superlattice in stabilizing the superconductivity of the A-245 superconductors, we performed high-pressure synchrotron x-ray diffraction measurements at SSRF for Rb 0.8 Fe 2−y Se 2−x Te x (x = 0, 0.19, 0.28 and 0.4) samples. As shown in figure 5, we find that a superlattice peak (110) exists in all samples investigated below ∼9 GPa. At pressures above ∼10 GPa, the superlattice peak disappears from all these samples, consistent with the results observed in K 0.8 Fe 1.78 Se 2 and Tl 0.6 Rb 0.4 Fe 1.67 Se 2 superconductors [30,33,34]. More significantly, we find that the superlattice peak of the x = 0.4 sample still exists at ambient pressure although its AFM long-ranged order state is fully suppressed [37]. This result demonstrates that Te doping can destroy the AFM order state by partially destroying the superlattice structure [37], while applying pressure can result in the entire destruction of the superlattice structure [34], revealing that the AFM long-range ordered state is sensitive to the local distortion of the superlattice structure induced by Te doping. (Figure 4 caption: the left panels display two-dimensional temperature-pressure phase diagrams for different Te dopings; the green and red dotted lines are guides to the eye.) Conclusions In this study, we find the pressure-induced reemergence of superconductivity in Rb 0.8 Fe 2−y Se 2−x Te x (x = 0, 0.19 and 0.28) superconductors and the connection between the SC-I and SC-II phases. Our results demonstrate that Te doping can significantly suppress the superconductivity of the SC-I and SC-II phases, and eliminate these two superconducting phases and the AFM order at x = 0.4. We propose that the SC-I and SC-II phases are connected by the state of the AFM phase, i.e. the AFM long-range ordered state stabilizes the superconductivity of the SC-I phase, while the pressure-induced AFM fluctuation state drives the reemergence of superconductivity in the SC-II phase. The superconducting phase diagram obtained in this study provides a panoramic picture in the pressure and doping planes of the superconducting behavior of the A-245 superconductors. Significantly, in-situ high-pressure x-ray diffraction measurements for all samples investigated reveal that the superlattice structure exists below ∼10 GPa, regardless of whether the long-ranged AFM order is present or not.
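The superconducting volume fraction quoted in the results section follows from a simple ratio of ac susceptibility transition amplitudes. The short sketch below reproduces that arithmetic; treating the 42 nV SC-I signal at 1.1 GPa as 100% shielding is the assumption stated by the authors, while the function name and Python wrapper are ours for illustration.

```python
# Minimal sketch of the volume-fraction estimate described in the text:
# the SC-I transition amplitude at 1.1 GPa (~42 nV) is taken as 100%
# shielding, and the SC-II amplitude at 11 GPa (~11.3 nV) is scaled to it.
def shielding_fraction(signal_nv: float, reference_nv: float = 42.0) -> float:
    """Return the superconducting volume fraction (%) relative to the
    reference amplitude assumed to correspond to 100% shielding."""
    return 100.0 * signal_nv / reference_nv

print(f"SC-II volume fraction at 11 GPa: {shielding_fraction(11.3):.1f}%")
# -> about 26.9%, matching the value quoted in the text
```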
2019-04-17T15:52:21.273Z
2015-07-15T00:00:00.000
{ "year": 2015, "sha1": "028867d28fcabab4d435db24a3627b63b44ffb6d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1367-2630/17/7/073021", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "b6575c632bde6bce0e84950030303acccdc831e1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
80284237
pes2o/s2orc
v3-fos-license
Study of neonatal morbidity and mortality patterns in a referral tertiary care neonatal unit of a teaching hospital Objective: To find out the incidence and causes of neonatal morbidity and mortality. The present study was carried out with the objective of finding out the causes of neonatal morbidity and mortality at the referral neonatal unit of a teaching hospital, which caters to neonates delivered in the hospital and to cases brought directly or referred from smaller hospitals in the region. Methods: A hospital-based descriptive study was conducted from January 2014 to December 2014. Each case was studied with reference to maternal/neonatal factors affecting neonatal mortality and morbidity. The maternal factors studied were maternal age, parity, antenatal care, presentation, mode of delivery, medical illness and obstetric complications. Neonatal factors included the sex of the babies, birth weight and gestational age. Results: Total admissions of neonates during the study period were 1169, out of which 385 were intramural (inborn) and 784 extramural (outborn). Approximately 37.90% of children were of low birth weight and 25.40% were preterm babies. HMD, birth asphyxia, septicemia, prematurity, hyperbilirubinemia and congenital malformations were the chief morbidities. The main causes of neonatal mortality were perinatal hypoxia (40.09%), infection (27.64%), hyaline membrane disease (12.09%) and immaturity (8.29%). Conclusion: Neonatal mortality rate was 12.94%, which was influenced by sex of the baby. Birth asphyxia tops the list, followed by infection, as the cause of neonatal morbidity and mortality. In the present study, congenital malformations accounted for 2.44% of morbidity. The study conducted by et.al reports congenital malformation at 5%. Our study shows the lowest rate of mortality at 12.94% in comparison with figures collected from different parts of the country. Introduction Since the incidences of neonatal problems vary in different countries, it is essential to have data on the local patient population for planning a neonatal service of a maternity unit [1]. These data can help in choosing priorities, with low birth weight babies (<2500 g) accounting for a high rate of morbidity and mortality [2]. Every year, 8 million low birth weight babies are born in India [3]. It is projected that 1 million babies suffer from birth asphyxia [4], respiratory distress syndrome and hyperbilirubinemia, while 0.5 million babies show evidence of neonatal morbidity and mortality. Data on causes of neonatal morbidity and mortality in a referral neonatal unit receiving home-delivered neonates from the community as well as from first referral units in the government and private sectors were lacking. The high incidence of low birth weight babies in India is due to neglect of the nutrition, health and education of women in society. Early teenage marriage (60%), frequent pregnancies, maternal malnutrition, anemia and infection are important contributory causes [5]. The neonatal mortality rate reported from various parts of India ranges from 17.9 to 31.0 per 1000 live births. The causes of perinatal and neonatal deaths are often presented as maternal, obstetric and fetal [6]. In order to evolve effective strategies to improve perinatal survival, it is obligatory to identify the leading causes of perinatal deaths. Neonatal morbidity of a significant nature follows the same pattern as mortality [7]. The lowest illness rate was 4%.
The present study was carried out with the objective of finding out the causes of neonatal morbidity and mortality at the neonatal unit of the hospital, catering to neonates delivered in the hospital and to cases brought directly or referred from smaller hospitals in the region [8]. Materials and Methods This hospital-based descriptive study was conducted over a period of 12 months from January 2014 to December 2014 in the neonatal unit of Owaisi Teaching Hospital attached to Deccan College of Medical Sciences, Hyderabad, Telangana State, India. This hospital caters to booked, unbooked and high-risk pregnant mothers referred from neighboring hospitals. Inclusion Criteria: All newborn babies brought alive and admitted to the neonatal unit. Exclusion Criteria: All newborn babies brought dead to the neonatal unit were excluded from the study. The hospital has 16 radiant warmers with ventilator support, phototherapy units, and facilities for exchange transfusion. Each case was studied with reference to the maternal/neonatal factors affecting neonatal morbidity and mortality. All babies whose mothers were uncertain about the date of the last menstrual period were assessed by a modified scoring system for gestational age. Data were collected in a predesigned proforma recording the place of birth, sex, weight, age at admission, gestational age and mode of delivery. On admission to the NICU, the newborns were examined by doctors and then by a neonatologist. Sample collection involved the selection of elements based on birth weight and gestational age. Diagnosis was mainly clinical and based on the WHO definitions of prematurity, low birth weight and very low birth weight. Low birth weight was defined as weight <2500 gm. Statistical analysis was performed using GraphPad Prism 5 (GraphPad Software Inc., USA). Data are presented as mean ± SD. A difference with P < 0.05 was considered statistically significant. The sample size was determined using OpenEpi statistics, with 95% confidence and 90% power. Results The present study comprised 1676 live births in Owaisi Teaching Hospital, Hyderabad, Telangana State, India from January 2014 to December 2014. In the present study, the total cases admitted to the NICU were 1169. Out of these, 385 were intramural (inborn) and 784 were extramural (outborn). Mortality in inborn and outborn deliveries was 61 and 156, respectively. The present study shows that birth asphyxia developed in 286 neonates, an incidence of 17.00%, of which 167 babies (9.96%) were suffering from stage II and 119 babies (7.10%) from stage III. Thus birth asphyxia is the leading cause of morbidity and mortality. Table 1 shows that morbidity and mortality in babies weighing less than 2000 gm were much higher as compared with those babies weighing more than 2000 gm. The mortality was 12.94% and morbidity was 56.80% when weight was 1000 gm to 1500 gm. Table 2 shows the causes of respiratory distress in 233 neonates: HMD was found in 65 neonates (27.89%), transient tachypnea of the newborn in 16 (6.86%) and meconium aspiration syndrome in 125 (53%). Congenital heart disease was found in 12 babies and congenital malformation was detected in 15 babies. Table 3 shows the causes of neonatal hyperbilirubinemia. In our study, prematurity was the highest at 76.38%; ABO incompatibility was the second leading cause.
Table 4 shows that out of 1676 cases, 276 cases were of septicemia (61.60%) and 131 cases were culture-positive (29.24%), which constituted a major cause of morbidity. There were 28 cases of pneumonia (6.25%). Table 5 shows maternal age and mortality. The majority of the mothers were in the age group 19-35 years; the others were in the age groups <18 years and above 35 years. The neonatal mortality was the highest in mothers <18 years of age. Table 6 shows that the mode of presentation in the majority of births was vertex; breech, transverse lie and cord prolapse together constituted 4.29%. Discussion The main source of information regarding neonatal mortality and morbidity was obtained from a hospital where only referred high-risk patients with consistent antenatal records are admitted. The spectrum takes into account the severity of neonatal morbidity, details of the newborn and the obstetric background of the mother [9]. The present study shows that the morbidity in babies weighing less than 2000 gm at birth was higher as compared with those babies weighing more than 2000 gm. In this study, neonatal morbidity in babies <2000 gm was 10.76%, while Gupta et al reported 24.7%, which is higher than the above observation [10]. The present study shows that the leading causes of neonatal morbidity are birth asphyxia, which accounts for 17.06%, respiratory distress syndrome 6.14%, neonatal hyperbilirubinemia 8.59%, septicemia 16.46% and congenital malformation 2.08%. Further analysis shows that other workers have reported birth asphyxia as a major cause of morbidity, at 7.06% [11]. Birth asphyxia was reported as the main cause of morbidity by Singh et al. and Gupta et al. in 5.9% and 2.5% of cases, respectively [12]. Bhakoo et al. [13] and Kapoor et al. [14], however, reported these figures as 5.9% and 5.7% in their studies. Our study shows a higher rate of morbidity in comparison with other studies [15]. Data in our hospital were collected from both booked and unbooked cases and revealed that 6.14% of cases were affected by respiratory distress syndrome. This figure is higher than the 2.3% obtained by Bhargava et al. [16]. This difference in morbidity is significant [16]. The incidence of septicemia was the lowest in our study, recorded at 16.46%, when compared with figures quoted in earlier studies [17]. Our study shows that Staphylococcus aureus is the predominant organism isolated in blood culture, followed by CoNS and E. coli. Abhay T Bang et al. [18] reported E. coli in 37.5%, Staphylococcus aureus in 29.5%, Klebsiella in 20.80% and Pseudomonas in 12.5% of samples. Besides, our study also shows low neonatal mortality at 12.9 per 1000 live births in comparison with other studies [19], which reported these figures in the range of 18 to 31. Comparative neonatal mortality figures: Gupta et al [20], 1978-82: 28.40%; Kameswaram et al [24], 1990: 22.70%; Pradeep et al [25], 1992-93: 18.30%; present study, 2014-2015: 12.94%. In our study, neonatal mortality in mothers less than 18 years of age was 14.28%, whereas in the age group 19-35 years and above it was 12.89%. There was a significant difference in neonatal mortality between maternal age less than 18 years and more than 18 years. In our study, the mortality was high compared with the figures given by Gupta et al [20]. It was also noted that mothers below 20 years and above 35 years were at risk of higher neonatal mortality [21]. The present study shows that the mortality was less in comparison with Choudhary et al [22].
The present study shows that neonatal mortality rates in relation to parity were 19.3%, 13.80% and 6.52% in primigravida, gravida 2-4 and gravida 5 and above, respectively. A similar observation was made by Mavelankar et al [23], that parity affected the neonatal mortality rate. However, Choudhary et al observed a neonatal mortality of 6% in primi mothers. In our study, the mortality rate in unbooked cases was high when compared with booked cases: in booked cases the mortality rate was 9.51% and in unbooked cases it was 17.30% [14]. The present study shows that more than half of the neonatal deaths are associated with obstetric complications, pregnancy-induced hypertension (PIH) being the major complication, accounting for 28.27% of deaths, of which 4.76% were due to eclampsia. It was implied from this observation that PIH increases the risk of neonatal mortality threefold. Since in our study more than half of neonatal deaths were associated with obstetric complications, it is easy to establish the need for regular antenatal checkups for early diagnosis and management of these complications. The present study also shows that neonatal mortality was high in breech presentation and cord prolapse. The statistical analysis shows a significant difference in mortality rate between vertex presentation and presentations other than vertex. Neonatal mortality in our study was similar for both male and female babies. Our observation was in agreement with that of Kameswaram et al [24]. The neonatal mortality in the present study for babies with birth weight <1250 gm was 72.22% and for those with birth weight 1251-1750 gm was 37.50%. This shows that there are statistical differences in mortality based on the birth weight of the babies as well. Kameswaram et al reported a mortality of 76.9% [24]. Neonatal mortality in our study for babies with birth weight between 1501 and 2000 gm was 9.17%, which is comparable to the 8.5% mortality reported by Pradeep et al [25]. However, Chavan et al [26] reported a high mortality rate, i.e. 34.6%, for the same group. The neonatal mortality in babies of more than 2000 gm weight was 2.39%, while Pradeep et al [25] recorded it at 0.96% and Chavan et al [26] estimated 4.82%, which is higher than both the observations cited above. Therefore we may conclude that to reduce neonatal mortality further, those babies whose birth weight is <1500 gm need better perinatal care. It is observed in our study that neonatal mortality decreased with gestational age, from 90.32% in babies <28 weeks to 64.28% in the next gestational age group [27]. The difference in the mortality rates of these two groups was statistically significant. Similarly, neonatal mortality for 31-32 weeks of gestational age was 41.30% and for 33-34 weeks was 37.68%, which was also statistically significant. The present study shows that mortality in babies with more than 32 weeks of gestation was 10 times less than that reported by Pradeep et al [25]. Overall neonatal mortality in preterm babies of <32 weeks of gestation in our study was 46.59%, which is higher than the 34.7% and 29.7% reported by Chavan et al [26]. In our study, the leading cause of neonatal death was perinatal hypoxia (including birth trauma), accounting for 40.09%, followed by infection (27.67%), HMD (12.95%), immaturity (8.29%), miscellaneous causes (6.45%) and unknown causes (4.60%). Studies by other workers have also reported birth asphyxia as the leading cause of neonatal mortality; Pradeep et al [25] and Chavan et al [26] reported birth asphyxia in 43% and 40.5% of neonatal deaths, respectively.
The incidence of prematurity as a cause of death was lower in our study, i.e. 8.29%. The observations made above reveal that birth asphyxia is at the top of the list as a cause of neonatal mortality. This emphasizes the need for adoption of measures to prevent perinatal asphyxia and birth trauma. Similarly, infections need to be addressed by proper management of mothers with risk factors and of babies born to such mothers. Conclusion The neonatal mortality in our study was 12.94%. In our study, more than half of the neonatal deaths were associated with obstetric complications; birth asphyxia was at the top of the list, followed by infection, as the major cause of neonatal mortality, whereas meconium aspiration syndrome and septicemia were the main causes of neonatal morbidity. It is essential to augment our community health services for early detection and management of these problems. There is a need to establish level II neonatal care facilities at the district level. We need to strengthen perinatal care and emergency obstetric services to enhance neonatal resuscitation. Scope for further study: In order to find out the true neonatal morbidity and mortality, the study has to be planned in such a way that it conducts training programs for community health workers using simple and reliable methods, as we do not have adequate skilled staff to carry out such a study in the community.
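To make the headline rates concrete, the following is a minimal sketch that reproduces the overall mortality figure from counts given in the text and illustrates the kind of group comparison reported by birth weight. The use of scipy is an illustrative assumption (the authors state they used GraphPad Prism), and the 2x2 counts are hypothetical placeholders chosen only to be consistent with the percentages quoted above (72.22% and 37.50%).

```python
# Illustrative sketch: neonatal mortality rate from the counts in the text,
# plus a chi-square comparison of mortality between two birth-weight groups.
from scipy.stats import chi2_contingency

live_births = 1676
deaths = 61 + 156                       # inborn + outborn deaths reported
nmr_percent = 100 * deaths / live_births
print(f"Neonatal mortality: {nmr_percent:.2f}%")   # ~12.94%

# Hypothetical 2x2 table (died, survived) for two birth-weight groups,
# consistent with the quoted 72.22% (<1250 g) and 37.50% (1251-1750 g)
table = [[13, 5],      # <1250 g
         [12, 20]]     # 1251-1750 g
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```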
2019-03-17T13:03:45.248Z
2016-08-30T00:00:00.000
{ "year": 2016, "sha1": "c58007ab71d4b514138936dd5e9d352b71c49b51", "oa_license": "CCBY", "oa_url": "https://pediatrics.medresearch.in/index.php/ijpr/article/download/160/316", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "467988c9fdc80f24961c36eb326e305e1f17a88b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267734878
pes2o/s2orc
v3-fos-license
Entrepreneurial bricolage and entrepreneurial performance: The role of business model innovation and market orientation Newly established enterprises in China face significant challenges and opportunities, with persistently high mortality rates. Navigating market challenges and establishing sustainable competitive advantages are pressing issues for contemporary businesses. This study delves into the bridging role of business model innovation between entrepreneurial bricolage and entrepreneurial performance, with market orientation influencing the relationship boundaries. We examined 288 Chinese small and medium-sized enterprises, investigating the relationships among entrepreneurial bricolage, business model innovation, market orientation, and entrepreneurial performance. Empirical results indicate: (1) Entrepreneurial bricolage positively influences business model innovation, and business model innovation positively impacts entrepreneurial performance. (2) Business model innovation plays a fully mediating positive role between entrepreneurial bricolage and entrepreneurial performance. (3) Market orientation positively moderates the impact of entrepreneurial bricolage on business model innovation and entrepreneurial performance, and it also positively moderates the impact of business model innovation on entrepreneurial performance. (4) Market orientation positively moderates the impact of entrepreneurial bricolage, mediated by business model innovation, on entrepreneurial performance. The study results contribute to a more effective understanding of the mechanisms through which entrepreneurial bricolage and business model innovation influence entrepreneurial performance, as well as how market orientation moderates their relationships and how enterprises sustain competitive advantages. Introduction In recent years, entrepreneurship has become a new focal point for economic and social development worldwide, seen as a new driving force for promoting economic and social progress.In today's highly competitive business environment, new startups face significant challenges and opportunities.While there have been many successful new startups that have achieved brilliance through business model innovation, such as Didi, Xiaomi, and Xiami Music, the mortality rate of new startups remains high.For instance, in the bike-sharing industry, apart from the leading companies Mobike and Ofo, almost all second and third-tier companies' financing processes have been concentrated around Series A or A+ rounds, with many experiencing a wave of closures since the second half of 2017.In the complex and rapidly changing environment, due to inherent "new" and "small" disadvantages, new startups generally have a higher failure rate [1].Therefore, how new startups can address market challenges and build sustainable competitive advantages is a pressing issue for enterprises today and will continue to be a research hotspot in the field of strategic management for a long time (see Table 12). 
Entrepreneurship can undoubtedly contribute significant value to the economy and society [2].For new startups, enhancing entrepreneurial performance becomes a practical challenge.Read points out that the success of most startups often stems from initially overlooked idle resources [3].Entrepreneurial bricolage is a way of rationalizing the allocation of these idle (or redundant) resources, allowing startups to obtain enterprise value at the lowest cost, thereby contributing to performance improvement [4].Entrepreneurial bricolage also facilitates entrepreneurs in innovating and adjusting business models [5].Therefore, entrepreneurial bricolage, as a precursor to business model innovation, can provide new business opportunities and innovative ideas for enterprises [6].However, startups are constantly in a predicament of resource constraints, and to build novel business models, they have no choice but to creatively combine existing resources [7].Business model innovation also has a positive impact on entrepreneurial performance [8].It is an essential means for enterprises to achieve performance, but this innovation process continually evolves in overcoming market uncertainty [9].Thus, enterprises must be market-oriented, continuously adjusting and optimizing their products, services, and market strategies [10].Market orientation is a culture that emphasizes creating excellent value for customers, creating more value for customers through shared value concepts, behavioral norms, and effective process combinations, thereby achieving outstanding organizational performance [11].In the early stages of entrepreneurship, startups face the problem of resource constraints.A market-oriented strategy that focuses on customer needs and actively captures market opportunities is an effective way to gain competitive advantages for enterprises [12].Research indicates that market orientation can enhance entrepreneurs' sensitivity to market demand, thus promoting entrepreneurial bricolage and business model innovation [5,8], ultimately boosting entrepreneurial performance. 
Although research on the relationship between entrepreneurial bricolage and the performance of new startups has gradually received attention in recent years, there are still several limitations.First, since entrepreneurial bricolage and entrepreneurial performance theories are relatively new in the field of entrepreneurship research, empirical studies are limited.Most current research only explores the direct relationship between the two, and there is a lack of research on the intermediate paths through which entrepreneurial bricolage affects the performance of new startups [13].The relationship between entrepreneurial bricolage and the performance of new startups is still in a "black box," requiring in-depth research into the underlying mechanisms.Second, creative bricolage of available resources is highlighted as facilitating business model innovation [14], and business model innovation is a crucial foundation for the formation and enhancement of startup performance.Unfortunately, there is limited research on this aspect.Third, the formation of the performance of new startups is influenced by various factors, especially the complex and variable market factors that startups face.Market orientation is a crucial contingency variable in the process of forming startup performance.Therefore, investigating the role of market orientation in the mechanism through which entrepreneurial bricolage affects the performance of new startups is essential.However, existing literature on the role of market orientation in this mechanism is scarce.Therefore, analyzing the contingency factors in the mechanism of how entrepreneurial bricolage affects the entrepreneurial performance of new startups is necessary. Additionally, some scholars believe that studying how new startups can enhance entrepreneurial performance from the perspective of dynamic capabilities is a new and innovative approach [15].Building upon the achievements of previous scholars, this study, based on dynamic capabilities theory, focuses on exploring the mechanism and effects of how entrepreneurial bricolage influences entrepreneurial performance.It incorporates business model innovation and market orientation into the research framework, investigates the bridging role of business model innovation between entrepreneurial bricolage and entrepreneurial performance, and explores the boundary effects of market orientation on the relationship between entrepreneurial bricolage and entrepreneurial performance.Based on these research elements, this study breaks down into three sub-problems for clearer elucidation: (1) How does entrepreneurial bricolage affect the entrepreneurial performance of new startups?What are the pathways of its effects?In response to the increasingly complex and dynamic market, many new startups have either started or are preparing to engage in resource bricolage activities.However, the experiences of individual companies are insufficient to cover the entire industry, and some startups still adopt a wait-and-see attitude towards entrepreneurial bricolage.Therefore, the first sub-problem of this study is to explore the impact and pathways of entrepreneurial bricolage on entrepreneurial performance, providing theoretical guidance for the significance of entrepreneurial bricolage and offering insights for startups engaging in entrepreneurial bricolage.(2) During the process of entrepreneurial bricolage, what role does business model innovation play for new startups?In previous studies, scholars have investigated the relationship 
between business model innovation and entrepreneurial performance through empirical methods, but their conclusions vary. This suggests that the relationship between business model innovation and entrepreneurial performance is extremely complex. Therefore, the second sub-problem of this study, from the perspective of dynamic capabilities, is to examine the role of business model innovation in the relationship between entrepreneurial bricolage and entrepreneurial performance. (3) How does market orientation regulate the relationship between entrepreneurial bricolage, business model innovation, and entrepreneurial performance for new startups? With today's intensifying market competition and significant uncertainties brought about by global economic slowdowns and geopolitical conflicts, accurately understanding market orientation is crucial for the development of new startups. However, there is limited literature that introduces market orientation as a variable in the study of entrepreneurial bricolage, business model innovation, and entrepreneurial performance. Therefore, the third sub-problem of this study is to treat market orientation as a moderating variable, exploring its regulatory role in the relationship between entrepreneurial bricolage, business model innovation, and entrepreneurial performance, revealing the boundary conditions of entrepreneurial bricolage on business model innovation and entrepreneurial performance. In comparison with existing research, this paper contributes in several ways: Firstly, although there have been numerous discussions on entrepreneurial performance, previous studies often individually consider the impact of either entrepreneurial bricolage or business model innovation on entrepreneurial performance. There is limited research that comprehensively investigates these variables together. This study provides theoretical insights into the existing field of entrepreneurship research. Secondly, scholars exploring the relationship between entrepreneurial bricolage and entrepreneurial performance have introduced various mediating variables, such as organizational learning, self-efficacy, and knowledge search. However, few studies have included business model innovation as a crucial variable in their research models, particularly among Chinese scholars who have not paid attention to this aspect. This study enriches the literature on the relationship between entrepreneurial bricolage and entrepreneurial performance. Thirdly, scholars have examined the boundary conditions of how entrepreneurial bricolage influences business model innovation and entrepreneurial performance, considering external environmental dynamics and other external factors that companies cannot change. However, they have overlooked the important influence of market orientation. This paper analyzes, from a micro perspective, the specific impact of market orientation on how entrepreneurial bricolage influences business model innovation and entrepreneurial performance, supplementing academic research on the relationship between entrepreneurial bricolage and business model innovation, as well as entrepreneurial performance.
Entrepreneurial bricolage and entrepreneurial performance In the process of growth and development, entrepreneurial ventures often face resource constraints. Overcoming resource scarcity is crucial for the survival and development of new ventures. Baker and Nelson propose that entrepreneurs can integrate stakeholders through entrepreneurial bricolage to expand their networks and access more available resources, thereby enabling timely exploration of entrepreneurial opportunities and enhancing the chances of survival and development [16]. Salunke found through empirical research that entrepreneurial bricolage helps entrepreneurial ventures gain sustained competitive advantage in the market and improve their performance [17]. Zhu et al. discovered that the differential competitive advantage among entrepreneurial ventures lies not only in resource differences but also in different development approaches for the same resources; entrepreneurial bricolage significantly enhances the performance of new ventures [18]. Yi Zhaohui et al. discussed the mechanism of entrepreneurial bricolage affecting the entrepreneurial performance of small and micro technology-based enterprises from the perspective of previous experience [19]. Through an empirical analysis of survey data from 317 small and micro technology-based enterprises, they concluded that entrepreneurial bricolage is positively correlated with the entrepreneurial performance of such enterprises. Yan Huafei (2019) took 326 entrepreneurs as a sample, adopted a multi-layer regression analysis method, and found through empirical research that entrepreneurial bricolage has a positive impact on the growth performance of new enterprises [20]. Tong Xin et al. used survey data from 325 family farms in Hunan Province to explore the mechanism through which entrepreneurial bricolage affects the entrepreneurial performance of family farms [21]. They found that entrepreneurial bricolage has a significant positive effect on the entrepreneurial performance of family farms. Wang Zhong et al. found that peasant entrepreneurs can improve entrepreneurial performance by piecing together and reorganizing the existing resources at hand [22]. By breaking through the fixed value of existing resources, adopting unconventional approaches, and constantly innovating, entrepreneurial bricolage reduces the risk of venture failure and provides more possibilities for firm development. Based on these findings, the following hypothesis is proposed.

H1. Entrepreneurial bricolage is positively associated with entrepreneurial performance.
Entrepreneurial bricolage and business model innovation Entrepreneurial bricolage is an emerging strategic approach for firms to integrate and repurpose internal and external resources, aiming to overcome resource constraints in a more flexible and effective manner. By creatively combining available resources and establishing more efficient or novel ways of resource integration, entrepreneurial bricolage can lead to changes or innovations in business models [23]. Emphasizing fleeting business opportunities and engaging in selective and disruptive resource development activities through bricolage strategies can provide irreplaceable business resources for business model innovation. Furthermore, entrepreneurial bricolage inherently involves process innovation in resource utilization, often driving significant innovation in operational processes and business models [24]. To implement entrepreneurial bricolage effectively, firms need to tap into all available internal resources and optimize their integration. Moreover, the process of entrepreneurial bricolage often requires organizational improvisation and practical, hands-on thinking, demonstrating organizational agility and absorptive capacity [25]. Xu Shangde conducted an empirical analysis to explore the interactive relationship between value chain constraints, entrepreneurial bricolage, and business model innovation in new rural online retail enterprises [26]. The findings revealed that entrepreneurial bricolage has a significantly positive impact on business model innovation in new rural online retail enterprises. Wang Xin, drawing on entrepreneurial bricolage theory and innovation theory, took entrepreneurial bricolage as a starting point and conducted research around the fundamental question of "how companies promote business model innovation through entrepreneurial bricolage" [27]. Through empirical research, it was found that resource bricolage, customer bricolage, and institutional bricolage all show a positive correlation with business model innovation. In summary, entrepreneurial bricolage is not just a specific way of resource utilization but a new management logic for resource utilization that emphasizes recombining resources to reshape operational processes and business models. Based on these perspectives, the following hypothesis is proposed.

H2. Entrepreneurial bricolage is positively associated with business model innovation.

Business model innovation and entrepreneurial performance The process of business model innovation for firms is a significant "disruptive innovation" process that aims to transform existing business operating models to create more value and gain competitive advantage [28]. Therefore, business model innovation can promote strategic transformation and change within firms and is an important factor in improving firm performance [29]. Zott and Amit (2007) collected data from 190 listed entrepreneurial firms in Europe and the United States [30]. The results showed that novelty-centered business model design had a significant positive impact on entrepreneurial firm performance. Studies conducted by Wang et al., Wen et al., and others have examined the relationship between business model innovation and firm performance, suggesting that business model innovation is an important source of competitive advantage and performance for firms [31,32]. Constantinides et al.
argued that digital business model innovation changes the way value is obtained and created, allowing firms to expand their value space and achieve exceptional performance by flexibly adapting to environmental changes [33]. The rapid development of digital technology provides infinite possibilities for business model innovation, intensifying competition among firms as they seek to create new value in this "blue ocean" market. Luo et al. examined 512 Chinese entrepreneurial firms and confirmed a positive correlation between business model innovation and firm performance [34]. Chi Kaoxun et al. constructed a model of the impact mechanism of business model innovation on the performance of new startups based on resource management theory [35]. Through empirical analysis of questionnaire data from 142 new startup companies, they found that business model innovation contributes to improving the performance of new startups. Tong Ziqiang et al. used growth-stage listed companies on the Shanghai and Shenzhen stock exchanges from 2014 to 2019 as their sample [36]. They employed text analysis techniques using Word2Vec to measure the level of business model innovation in these companies based on annual financial data. Their empirical research revealed a significant positive impact of business model innovation on the performance of latecomer companies. Based on these perspectives, the following hypothesis is proposed.

H3. Business model innovation is positively associated with entrepreneurial firm performance.

The moderating role of market orientation in the relationship between entrepreneurial bricolage and business model innovation Market orientation is an important strategic orientation, and firms with a high level of market orientation can achieve excellent innovation performance, including rapid development of new products or services and improvements or innovations in existing business models [37]. This is because firms with a market orientation have higher dynamic capabilities in resource allocation and coordination. Business model innovation, as a proactive market-oriented innovation, benefits from the implementation of market-oriented strategies and outperforms competitors in new market development, new customer acquisition, and new transactions [34]. Based on a survey of 434 Chinese firms, Yuan et al. found that market orientation positively moderates the relationship between entrepreneurial bricolage and innovation type [38]. Tong Qi, based on research data from 261 companies in the Yangtze River Delta region, found that an emphasis on big data capabilities positively moderates the relationship between entrepreneurial bricolage and business model innovation [39]. It can be said that market-oriented firms actively promote the development of entrepreneurial bricolage to extract the necessary resources from the market, combine them with existing resources, and make informed decisions to grasp the direction of business model transformation or enhancement. Similarly, under market orientation, the knowledge or new insights generated by entrepreneurial bricolage are more aligned with market needs, reducing the failure rate of business model innovation by striving for alignment with the external environment. The alignment between the two creates great potential for the innovation and evolution of business models. Based on these arguments, the following hypothesis is proposed.

H4. The role of entrepreneurial bricolage in business model innovation is moderated by market orientation.
The moderating role of market orientation in the relationship between entrepreneurial bricolage and entrepreneurial performance Entrepreneurial bricolage is a process of creating value "out of nothing" for firms [16]. Through entrepreneurial bricolage, new ventures can use the rational allocation of existing resources to change the mismatched state of resources, break established development patterns, and gain a competitive advantage, thereby promoting performance improvement and sustained development [40]. Therefore, when facing similar resource environments, entrepreneurial bricolage can stimulate the generation of heterogeneous value, leading to performance improvement [41]. Market orientation, on the other hand, is also a proactive adaptive learning process, and firms with a higher level of market orientation can continuously update, design, and improve products, services, and processes based on their keen understanding of market changes [42]. To some extent, the level of market orientation determines a firm's ability to obtain valuable market information [43], which helps firms take action by breaking conventions, finding innovative solutions, and combining hypotheses with innovation through bricolage, generating previously unrealizable problem-solving approaches [17]. Therefore, the combination of market orientation and entrepreneurial bricolage is also an important strategic choice for enhancing entrepreneurial performance. Based on these arguments, the following hypothesis is proposed.

H5. The role of entrepreneurial bricolage in entrepreneurial performance is moderated by market orientation.

The moderating effect of market orientation on the relationship between business model innovation and entrepreneurial performance Entrepreneurial ventures face constantly changing markets, which means that their existing business models may not provide sustained competitive advantages. Therefore, it is necessary to transform and innovate business models in response to market changes [44]. The higher the degree of market orientation, the more likely an enterprise is to engage in innovation. Market fluctuations bring new market opportunities for entrepreneurial ventures, allowing them to construct new models for creating business value based on market conditions. Business model innovation is a crucial driving factor in the formation of competitive advantages and performance improvement for enterprises, and in this driving process, the uncertain market plays a significant role. Research suggests that market orientation demands that enterprises generate more new ideas and new thoughts, enabling them to undertake change and innovation in turbulent markets [45]. When market orientation is high, market requirements for business model innovation by new ventures also increase, allowing the role of business model innovation to be better realized. Thus, the alignment and interaction between market orientation and business model innovation may act as catalysts for entrepreneurial performance. Different levels of market orientation can positively influence the relationship between business model innovation and entrepreneurial performance. Based on this, the following hypothesis is proposed.

H6. The role of business model innovation in entrepreneurial performance is moderated by market orientation.
The mediating role of business model innovation in the relationship between entrepreneurial bricolage and entrepreneurial performance Bricolage activities often lead to unpredictable innovative outcomes [46]. This is because entrepreneurial bricolage itself is an innovative behavior that combines means and ends [47]. For entrepreneurial ventures, any form of business model innovation can enhance their performance [30]. Therefore, business model innovation is one of the key paths for the formation and improvement of performance for new ventures [48]. At the same time, entrepreneurial bricolage is one of the important antecedents of business model innovation [49]. Entrepreneurial bricolage provides convenience for business model innovation by recombining and reusing fragmented resources related to new opportunities [16]. Duan Haixia et al., from the perspective of enterprise resources, used three typical family farms in Hunan Province as case studies to explore the relationship between entrepreneurial bricolage, business model innovation, and the entrepreneurial performance of family farms [50]. The research found that in resource-constrained situations, family farms adopt differentiated entrepreneurial bricolage strategies and enhance entrepreneurial performance by innovating different elements of their business models to achieve sustainable development. Li Xinyi conducted a study by surveying entrepreneurial enterprises using various entrepreneurship platforms such as entrepreneurial incubators, coworking spaces, and LinkedIn [51]. The research revealed that business model innovation plays a partial mediating role in the impact of entrepreneurial bricolage on entrepreneurial performance. As seen from the previous analysis, there is a causal logic relationship between entrepreneurial bricolage, business model innovation, and entrepreneurial performance. On one hand, entrepreneurial bricolage provides convenience and possibilities for enterprise business model innovation. On the other hand, business model innovation helps enterprises shape new competitive advantages, which are powerful guarantees for performance improvement. Therefore, the impact of entrepreneurial bricolage on entrepreneurial performance can be realized through a mediating effect (i.e., business model innovation). Based on this, the following hypothesis is proposed.

H7. Business model innovation mediates the relationship between entrepreneurial bricolage and entrepreneurial performance.
The moderating effect of market orientation on the relationship between entrepreneurial bricolage, mediated by business model innovation, and entrepreneurial performance Business model innovation enables enterprises to overcome developmental challenges, explore customer needs, and enhance enterprise value [52]. Entrepreneurial bricolage helps new ventures develop new content, structures, and governance processes and capture new opportunities [16], thus effectively driving the implementation of business model innovation for new ventures [49]. From this perspective, entrepreneurial bricolage provides convenience and possibilities for enterprise business model innovation, while business model innovation lays the foundation for improving entrepreneurial performance. Therefore, this study proposes the hypothesis of the mediating role of business model innovation between entrepreneurial bricolage and new venture performance. Additionally, the more pronounced the market orientation, the more likely the enterprise is to engage in disruptive innovation activities, and the more likely the role of business model innovation will be realized. Thus, this study proposes the hypothesis of the moderating effect of market orientation on the relationship between business model innovation, entrepreneurial bricolage, and new venture performance. When market orientation levels differ, the impact of business model innovation on new venture performance will vary. Based on this, this study further suggests that the path through which entrepreneurial bricolage influences entrepreneurial performance via business model innovation will also be moderated by market orientation. When market orientation is high, on the one hand, entrepreneurial bricolage activities promote new ventures to implement business model innovation; on the other hand, a higher level of market orientation can stimulate the role of business model innovation in new ventures, leading to a greater improvement in entrepreneurial performance. However, when market orientation is low, even if entrepreneurial bricolage activities promote the implementation of business model innovation in new ventures, the limited market orientation may prevent the effects of business model innovation from being highlighted, resulting in limited improvements in entrepreneurial performance. In other words, market orientation has a positive influence on the "entrepreneurial bricolage - business model innovation - entrepreneurial performance" path. Based on this, the following hypothesis is proposed.

H8. Market orientation moderates the impact of entrepreneurial bricolage on entrepreneurial performance through business model innovation.

Sample source This study focuses on entrepreneurial enterprises established within the past 8 years, using alumni associations from various universities in Hubei Province to assist in the research. Due to the comprehensive nature of the survey, founders who possess knowledge about the company's situation were chosen as respondents. A simple random sampling method commonly employed in entrepreneurial research was used, with questionnaire-based interviews conducted face-to-face. Researchers interviewed each founder, and the survey duration for each company was approximately 20-30 min.
The survey consisted of two phases: in the first phase, executives completed individual questionnaires independently; in the second phase, researchers conducted interviews with the founders to validate the questionnaire's authenticity. Before distributing the questionnaires, the purpose and content of the study were thoroughly explained to ensure that the provided information would only be used for academic research.

The survey was conducted between September 2022 and February 2023. A total of 338 questionnaires were distributed. After removing incomplete or patterned responses, 288 valid questionnaires were obtained, resulting in an effective response rate of 85.21%. Among them, 94 companies were less than 3 years old, 92 companies were 3-5 years old, and 102 companies were 5-8 years old. There were 67 companies with fewer than 50 employees, 81 companies with 50-100 employees, 92 companies with 100-200 employees, and 48 companies with over 200 employees. In terms of registered capital, 102 companies had capital of less than 10 million RMB, 101 companies had capital between 10 million and 20 million RMB, and 85 companies had capital exceeding 20 million RMB. In terms of industry distribution, 87 companies were in the agricultural sector, 104 companies were in the industrial sector, and 97 companies were in the service sector.

Variable measurement and research model The measurement of market orientation (MO) is based on a scale developed by Narver et al. [53], consisting of 8 items; the measurement of business model innovation (BI) follows a scale developed by Hunt et al., revised according to the research results of Dubey [54,55], consisting of 6 items; the measurement of entrepreneurial bricolage (EB) adopts a scale developed by Senyard et al. [56], consisting of 8 items; and the measurement of entrepreneurial performance (EP) refers to a scale developed by Chandler and Hanks [57], with modifications made to the specific items to fit the needs of this study, consisting of 8 items. A 5-point Likert scale is used to measure the above variables, ranging from "strongly disagree" to "strongly agree" and corresponding to the numbers 1 to 5, as shown in Table 1.

Table 1. Measurement variables and items.

Market Orientation
The primary goal of the company's production is customer satisfaction.
The company formulates competitive strategies based on customer needs.
The company frequently tests customer satisfaction.
The company is more concerned about customers than its competitors.
The company constantly strives to discover needs that customers are not aware of.
The company seeks opportunities in areas where customers have difficulty expressing their needs.
The company puts a great deal of effort into figuring out how customers consume its products.
The company predicts mainstream trends in order to discover customers' future needs.

Business Model Innovation
Our business model provides value-added products and services.
Our business model creates new profit models.
Our business model creates new profit centers.
Our business model adopts innovative transaction methods.
Our business model constantly introduces new operational processes, routines, and norms, leading to reduced costs.
Overall, our business model is novel and innovative.

Entrepreneurial Bricolage
When facing new challenges, we are confident that we can find feasible solutions using our existing resources.
Compared to other companies, we can use our existing resources to handle more challenges.
We make the most of any existing resources to deal with new problems or opportunities in entrepreneurship.
We deal with new challenges by integrating our existing resources and low-cost resources.
When facing new problems or opportunities, we assume that we can find feasible solutions and take action.
By integrating our existing resources, we can successfully handle any new challenges.
When facing new challenges, we combine our existing resources to create feasible solutions.
We successfully cope with new challenges by integrating resources that were not originally intended for the plan.

Entrepreneurial Performance
The company maintains a high profit margin.
The company's net asset return rate (return on investment) is at a leading level.
The company's number of employees is growing rapidly.
New products or services are developed quickly by the company.
The company's sales revenue is growing rapidly.
The company's product market share is growing rapidly.
The company's net earnings are growing rapidly.
Starting this business makes me feel satisfied.

Building upon the discussions presented in the aforementioned literature, this study analyzes the relationship between entrepreneurial bricolage and entrepreneurial performance. Furthermore, it delves into the mediating role of business model innovation between entrepreneurial bricolage and entrepreneurial performance. The study also investigates the moderating effect of market orientation in this relationship. A moderated mediation model was established (Fig. 1) based on these concepts.

Construct validity and common method bias test Table 2 presents the results of the confirmatory factor analysis. The factor loadings for each item range between 0.804 and 0.923, indicating strong factor loadings. The composite reliability (CR) values for all constructs exceed 0.9, demonstrating high internal consistency of the measurement scale. The average variance extracted (AVE) values for all constructs are above 0.6, indicating good convergent validity among the variables. The square roots of the AVE values are greater than the Pearson correlation coefficients between constructs, indicating strong discriminant validity of the measurement scale.

These results indicate that the construct validity and reliability of the measurement model meet the evaluation standards proposed by Fornell and Larcker, supporting the suitability of the sample data for empirical research in this study [58]. Additionally, Table 2 presents the means and standard deviations of the variables. The standard deviations of market orientation, business model innovation, entrepreneurial performance, and entrepreneurial bricolage are relatively small, with means of 4.217, 4.007, 3.593, and 4.073, respectively. This suggests that the respondents' ratings of the measurement items for each variable were comparatively consistent.

This study employed Amos 24.0 software to conduct a confirmatory factor analysis and examine the discriminant validity of the variables, as shown in Table 3. The results indicate that the four-factor model exhibits the best fit indices (χ2/df = 1.665, SRMR = 0.0339, RMSEA = 0.048, CFI = 0.968, TLI = 0.966), significantly outperforming the other models and demonstrating a high level of discriminant validity among the variables. Additionally, the fit indices for the single-factor model are very poor (χ2/df = 7.129, SRMR = 0.113, RMSEA = 0.146, CFI = 0.705, TLI = 0.683), suggesting that the issue of common method bias is not significant.
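For reference, the composite reliability (CR) and average variance extracted (AVE) statistics used in the validity assessment above are conventionally computed from the standardized factor loadings λ_i of the k items measuring a construct. The expressions below are the standard definitions of these quantities, not values or formulas taken from the paper's tables:

\mathrm{CR} = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2} + \sum_{i=1}^{k}\left(1-\lambda_i^{2}\right)}, \qquad \mathrm{AVE} = \frac{\sum_{i=1}^{k}\lambda_i^{2}}{k}

Under the Fornell-Larcker criterion applied above, discriminant validity requires \sqrt{\mathrm{AVE}_j} > |r_{jl}| for every pair of constructs j ≠ l, that is, each construct shares more variance with its own items than with any other construct.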
Direct effect testing This study used the PROCESS (version 3) macro developed by Andrew F. Hayes to estimate and interpret the direct, indirect, and moderating effects [59]; the PROCESS procedure is available through http://www.guilford.com/p/hayes3. For the tests of the direct effect and the mediation effect, the PROCESS program installed in SPSS 26.0 was used with the mediation analysis model (Model 4) and 5,000 bootstrap samples. After controlling for enterprise age, company size, sales scale, and industry, the results of the direct effect and mediation effect tests are shown in Table 4 and Table 5. MO, BI, EP, and EB are used to represent the variables market orientation, business model innovation, entrepreneurial performance, and entrepreneurial bricolage; Years, Size, Sales, and Industry are used to represent the control variables enterprise age, company size, sales scale, and industry. In Table 4, the results of Model 1 indicate that entrepreneurial bricolage has no significant impact on entrepreneurial performance (Effect = 0.051; CI = −0.087, 0.189), so Hypothesis 1 is not supported, whereas business model innovation has a significant positive impact on entrepreneurial performance (Effect = 0.456; CI = 0.335, 0.576), supporting Hypothesis 3. The results of Model 2 indicate that entrepreneurial bricolage has a significant positive impact on business model innovation (Effect = 0.864; CI = 0.776, 0.952), supporting Hypothesis 2.

Mediating effect testing Based on the analysis of Model 3 in Table 4, it is evident that entrepreneurial bricolage significantly and positively influences entrepreneurial performance (Effect = 0.445; CI = 0.346, 0.543). However, upon the inclusion of business model innovation, and considering the results from Model 1 in Table 4, it can be observed that the regression coefficient of entrepreneurial bricolage on entrepreneurial performance decreases from 0.445 to 0.051, shifting from a significant effect to an insignificant one. Further analysis in Table 5 reveals that the indirect effect of bricolage on entrepreneurial performance through business model innovation is significantly positive (Effect = 0.394; CI = 0.299, 0.495), supporting Hypothesis 7. The direct effect is not significant (Effect = 0.051; CI = −0.087, 0.189), while the total effect remains significantly positive (Effect = 0.445; CI = 0.346, 0.543). This suggests that business model innovation plays a crucial role as a full mediator in the relationship between bricolage and entrepreneurial performance.

Moderating effect testing For testing moderation effects, the PROCESS tool developed by Andrew F. Hayes was employed [59]. Using the PROCESS program within SPSS 26.0, moderation analysis model 59 was selected to conduct 5,000 rounds of bootstrap sampling for analyzing the moderating role of market orientation after controlling for variables such as company age, company size, sales scale, and industry. In addition, a simple slope analysis of the moderating effect was performed based on the mean of market orientation and one standard deviation above and below the mean; those results are presented in Table 7. The results of the moderation effect tests are presented in Tables 6 and 7, and the moderated mediation effects are shown in Table 8.
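As a consistency check on the figures above, the reported indirect effect is approximately the product of the a path (EB → BI, 0.864) and the b path (BI → EP, 0.456), since 0.864 × 0.456 ≈ 0.394. For readers interested in the logic behind the percentile-bootstrap confidence intervals that PROCESS produces for such effects, the sketch below illustrates it in Python for the indirect effect of EB on EP through BI with the control variables included. The data frame `df` and its column names are hypothetical placeholders following the abbreviations above, and the controls are assumed to be numerically coded; this is an illustrative reconstruction of the general procedure, not the authors' SPSS PROCESS code.

```python
import numpy as np

def ols_coefs(X, y):
    # Ordinary least squares via numpy; X must already include a constant column.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def indirect_effect(df):
    # a*b indirect effect of EB on EP through BI, controlling for Years, Size, Sales, Industry.
    const = np.ones((len(df), 1))
    controls = df[["Years", "Size", "Sales", "Industry"]].to_numpy()
    # Path a: BI regressed on EB plus controls (coefficient on EB).
    Xa = np.hstack([const, df[["EB"]].to_numpy(), controls])
    a = ols_coefs(Xa, df["BI"].to_numpy())[1]
    # Path b: EP regressed on BI and EB plus controls (coefficient on BI).
    Xb = np.hstack([const, df[["BI", "EB"]].to_numpy(), controls])
    b = ols_coefs(Xb, df["EP"].to_numpy())[1]
    return a * b

def percentile_bootstrap_ci(df, n_boot=5000, alpha=0.05, seed=1):
    # Resample rows with replacement and recompute the indirect effect each time.
    rng = np.random.default_rng(seed)
    n = len(df)
    draws = np.empty(n_boot)
    for i in range(n_boot):
        draws[i] = indirect_effect(df.iloc[rng.integers(0, n, n)])
    lower, upper = np.percentile(draws, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return indirect_effect(df), (lower, upper)

# Hypothetical usage, assuming df is a pandas DataFrame with numeric columns
# EB, BI, EP, Years, Size, Sales, Industry:
# point, (lo, hi) = percentile_bootstrap_ci(df)
# print(f"indirect effect = {point:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

The effect is judged significant when the bootstrap confidence interval excludes zero, which is the decision rule applied throughout the tables discussed below.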
According to the results in Model 4 of Table 6, the interaction between market orientation and bricolage significantly and positively influences business model innovation (Effect = 0.149; CI = 0.015, 0.283). The positive moderation of market orientation demonstrates that it enhances the impact of entrepreneurial bricolage on business model innovation, confirming Hypothesis 4. As per the results from Model 5, the interaction between market orientation and bricolage significantly and positively affects entrepreneurial performance (Effect = 0.745; CI = 0.667, 0.823), indicating that market orientation positively moderates the relationship between bricolage and entrepreneurial performance, supporting Hypothesis 5. Additionally, the interaction between market orientation and business model innovation significantly and positively impacts entrepreneurial performance (Effect = 0.673; CI = 0.598, 0.748), signifying that market orientation positively moderates the relationship between business model innovation and entrepreneurial performance, confirming Hypothesis 6.

The simple slope results in Table 7 show the conditional effects at different levels of market orientation. When MO = M−1SD, the effect of bricolage on business model innovation is significantly positive (Effect = 0.401; CI = 0.277, 0.525); when MO = M, it is significantly positive (Effect = 0.490; CI = 0.388, 0.592); and when MO = M+1SD, it is significantly positive (Effect = 0.579; CI = 0.443, 0.715). This indicates that the impact of bricolage on business model innovation increases with higher levels of market orientation.

Similarly, when MO = M−1SD, the effect of bricolage on entrepreneurial performance is significantly negative (Effect = −0.236; CI = −0.300, −0.173); when MO = M, the effect is significantly positive (Effect = 0.209; CI = 0.161, 0.258); and when MO = M+1SD, the effect is significantly positive (Effect = 0.655; CI = 0.585, 0.726). This suggests that the effect of bricolage on entrepreneurial performance shifts from negative to positive as market orientation increases. Moreover, when MO = M−1SD, the effect of business model innovation on entrepreneurial performance is significantly negative (Effect = −0.122; CI = −0.192, −0.052); when MO = M, the effect is significantly positive (Effect = 0.280; CI = 0.231, 0.329); and when MO = M+1SD, the effect is significantly positive (Effect = 0.683; CI = 0.620, 0.745). This suggests that the effect of business model innovation on entrepreneurial performance also shifts from negative to positive with increasing market orientation.
In order to clearly present the moderating effect of market orientation (MO), and based on the analysis in Table 7, this study plotted moderation effect diagrams using one standard deviation above and below the mean of entrepreneurial bricolage (EB) as benchmarks. The results are shown in Fig. 2, Fig. 3, and Fig. 4, and the graphical results are consistent with the analysis and conclusions above.

Moderated mediation effect test In this study, a moderated mediation analysis was further conducted, using the mean of market orientation (MO) and the values one standard deviation above and below the mean as reference points. The results are presented in Table 8.

When MO = M, the conditional indirect effect of bricolage on entrepreneurial performance through business model innovation is significantly positive (Effect = 0.137, CI = 0.100, 0.178), confirming Hypothesis 8. When MO = M−1SD, the conditional indirect effect is significantly negative (Effect = −0.049, CI = −0.079, −0.020). When MO = M+1SD, the conditional indirect effect is significantly positive (Effect = 0.395, CI = 0.311, 0.480). This suggests that as market orientation increases, the indirect effect of bricolage on entrepreneurial performance through business model innovation shifts from negative to positive. This pattern of change is also visually depicted in Fig. 5. When comparing the conditional indirect effects at different levels of market orientation, Effect2 is significantly greater than Effect1 (Effect2 − Effect1 = 0.186, CI = 0.143, 0.228); Effect3 is significantly greater than Effect1 (Effect3 − Effect1 = 0.444, CI = 0.353, 0.532); and Effect3 is significantly greater than Effect2 (Effect3 − Effect2 = 0.258, CI = 0.199, 0.317). This indicates variations in the moderating effects exerted by different levels of market orientation.

Additionally, Table 8 reports the moderated direct effects. When MO = M, bricolage has a significantly positive effect on entrepreneurial performance (Effect = 0.209, CI = 0.161, 0.258). When MO = M−1SD, bricolage has a significantly negative effect on entrepreneurial performance (Effect = −0.236, CI = −0.300, −0.173). When MO = M+1SD, bricolage has a significantly positive effect on entrepreneurial performance (Effect = 0.655, CI = 0.585, 0.726). Consequently, it can be inferred that with increasing market orientation, the influence of bricolage on entrepreneurial performance transitions from negative to positive.

Robustness test This study conducted a robustness check on the constructed model to further validate the main results. According to the document "Notice of the National Bureau of Statistics on the Arrangement of the 2010 Statistical Annual Report and 2011 Regular Statistical Report System" (Guo Tong Zi [2010] No.
87), enterprises above designated size mainly refer to industrial legal entities with an annual main business income of 20 million yuan or more. To better study start-up enterprises, this article selects the samples with a company size of less than 20 million yuan as the object of the robustness analysis, yielding 203 observations, approximately 70% of the original sample. The robustness results are shown in Table 9, Table 10, Table 11, and Table 12, indicating that hypothesis H1 is not supported and hypotheses H2-H8 are supported. The results of the robustness test are therefore consistent with the previous results, which strongly supports our main findings.

Discussion Firstly, empirical research confirmed that entrepreneurial bricolage has a significantly positive impact on business model innovation, a conclusion consistent with Guo et al. [49]. However, Guo et al. primarily focused on analyzing the relationship between market orientation and business model innovation, treating entrepreneurial bricolage merely as a mediator between the two without delving into the dimensions of entrepreneurial bricolage and their distinct impacts on business model innovation; building upon Guo et al., this study further validates this finding [49]. The bricolage process can be understood as a process innovation strategy: through a continuous trial-and-error process of bricolage, existing resources are optimally combined, leading to innovation in resource allocation processes, which in turn promotes business model innovation [60]. Secondly, the empirical analysis affirmed that business model innovation has a significantly positive impact on entrepreneurial performance, consistent with Yan Jing et al. [61]. However, Yan Jing et al. used the novel business model scale designed by Zott and Amit to measure business model innovation, rather than a specific business model innovation scale [30,61]. Finally, the research results indicate that entrepreneurial bricolage exerts a significant, fully mediated effect on entrepreneurial performance through business model innovation, in line with Duan Haixia et al., who found that entrepreneurial bricolage has a fully indirect effect on entrepreneurial performance through business model innovation [50]. This finding contradicts the "partial mediation" perspective mentioned by Li Xinyi [51]. Baker and Nelson, moreover, only conducted theoretical analysis without further empirical analysis; this study enriches the entrepreneurial bricolage process model proposed by Baker and Nelson, further illustrating the presence of a mediating pathway, namely business model innovation, in promoting performance improvement [16]. Entrepreneurial bricolage strategies, through the effective combination and reuse of resources related to new opportunities, help companies develop new content, structures, and governance processes and capture new opportunities, making them one of the key drivers of business model innovation [49]. Moreover, enhanced performance of new startups can be achieved through business model innovation [48].
Market orientation moderates the relationships between entrepreneurial bricolage, business model innovation, and entrepreneurial performance. Firstly, market orientation positively moderates the effect of entrepreneurial bricolage on business model innovation. Resource scarcity is becoming a new norm in the innovation process, and effectively avoiding resource constraints, creatively using available resources, and solving problems with new methods require market situational awareness. Under the premise of being market-oriented, with a keen focus on core market demand, companies can better manage their external market relationships and break free from resource constraints in their entrepreneurial bricolage activities, ultimately leading to business model innovation. Secondly, market orientation positively moderates the impact of entrepreneurial bricolage on entrepreneurial performance. While entrepreneurial bricolage itself does not have a direct and significant impact on entrepreneurial performance, under the moderating influence of market orientation, the effect of entrepreneurial bricolage on entrepreneurial performance shifts from negative to positive. Furthermore, market orientation positively moderates the impact of business model innovation on entrepreneurial performance. The alignment and interaction between market orientation and business model innovation can act as a catalyst for entrepreneurial performance. Business model innovation can sustainably achieve long-term entrepreneurial performance through the adjustment provided by market orientation, enabling the dynamic exploration of the effects of new business model innovations on entrepreneurial performance. Lastly, market orientation positively moderates the indirect effect of entrepreneurial bricolage on entrepreneurial performance through business model innovation. Under the moderation of market orientation, the mediating effect of entrepreneurial bricolage through business model innovation on entrepreneurial performance is positively influenced. Market orientation has a positive moderating effect on the "entrepreneurial bricolage - business model innovation - entrepreneurial performance" process.

Implications for theory Prior research has already recognized the individual effectiveness of entrepreneurial bricolage, business model innovation, and market orientation in promoting entrepreneurial performance [19,62-64]. However, the partial or complete interplay between these factors has to a large extent remained unexplored. This study, within the context of entrepreneurial bricolage, business model innovation, market orientation, and entrepreneurial performance, has contributed to existing knowledge in several ways.
Firstly, this research enriches the study of entrepreneurship theory by introducing market orientation into the realm of entrepreneurial analysis and enhances our understanding of entrepreneurial performance. Before this study, there was limited research considering market factors in entrepreneurial performance, even fewer studies that introduced market orientation as a moderating variable in the study of entrepreneurial behavior, and rare instances where entrepreneurial bricolage, business model innovation, market orientation, and entrepreneurial performance were studied within a single theoretical framework. Through empirical analysis, this study clarifies the boundaries and conditions under which entrepreneurial bricolage and business model innovation impact entrepreneurial performance, demonstrating their significance in the context of market orientation.

Secondly, the results of this study confirm that business model innovation acts as a full mediator between entrepreneurial bricolage and entrepreneurial performance. Entrepreneurial bricolage cannot directly enhance entrepreneurial performance; instead, it requires the promotion of business model innovation to improve entrepreneurial performance. This conclusion contradicts the findings of Su Xiaofeng et al. and extends the research on the mechanisms behind the reconstruction of corporate performance [65].

Lastly, this study addresses the research gap regarding the influence of market orientation on startups. Previous analyses primarily examined the impact of entrepreneurial bricolage and business model innovation on entrepreneurial performance. The model presented in this paper provides a comprehensive view of market orientation as a moderating factor in the relationship between entrepreneurial bricolage and entrepreneurial performance. These findings indicate that the impact of entrepreneurial bricolage on entrepreneurial performance through business model innovation is moderated by market orientation, a relationship previously uncharted. This study fills this gap by elucidating the specific mechanisms through which market orientation influences entrepreneurial performance.

Implications for practice The practical contribution of this study lies in two aspects. Firstly, enterprises need to recognize market orientation reasonably and adjust their strategies and operating processes according to changes in the market. Enterprises should not be afraid of market changes but should fully utilize them and actively engage in entrepreneurial bricolage activities and business model innovation, allowing them to break free from resource constraints and achieve performance improvement in turbulent environments. Secondly, the process of business model innovation is an experimental process. When the outcome of an experiment falls short of expectations, entrepreneurs should not blindly reject the model but should reflect on whether they have truly allocated enterprise resources to promote business model innovation. Therefore, the entrepreneurial bricolage strategy that emphasizes creative resource integration is an innovative mechanism that provides possibilities and conveniences for enterprise business model innovation, allowing enterprises to achieve performance improvement through innovation.
Limitations and future directions While this study has achieved certain results in theoretical derivation and empirical testing, there are still limitations in the research. Firstly, the study only selects entrepreneurial enterprises within Hubei Province as research samples, with a relatively small sample size, which might reduce the representativeness of the research conclusions. In future studies, in addition to increasing the sample size through supplementary surveys, resources could be focused on a few specific industries for more targeted research. Secondly, this paper only considers the moderating effect of the exogenous variable of market orientation on the relationship between entrepreneurial bricolage, business model innovation, and entrepreneurial performance. It does not account for the influence of endogenous variables on this relationship. Future research could delve deeper by incorporating endogenous moderating variables such as organizational culture and structure for a more comprehensive investigation.

Conclusion This study links entrepreneurial bricolage, business model innovation, and entrepreneurial performance through the moderating role of market orientation, and constructs and explores a moderated mediation model. The study reveals several important findings: (1) Entrepreneurial bricolage significantly and positively impacts business model innovation. (2) Business model innovation, in turn, has a significant and positive effect on entrepreneurial performance; it also serves as a complete mediator between entrepreneurial bricolage and entrepreneurial performance. (3) Market orientation plays a crucial role in moderating the impact of entrepreneurial bricolage on business model innovation; it also positively moderates the impact of business model innovation on entrepreneurial performance. (4) Market orientation further exhibits a significant moderating effect on the impact of entrepreneurial bricolage on entrepreneurial performance, including the indirect path through business model innovation. In summary, entrepreneurial bricolage, business model innovation, and market orientation are interconnected factors that influence entrepreneurial performance in a positive and significant manner.

Fig. 2. Moderating effect of market orientation on the relationship between entrepreneurial bricolage and business model innovation.
Fig. 3. Moderating effect of market orientation on the relationship between entrepreneurial bricolage and entrepreneurial performance.
Fig. 4. Moderating effect of market orientation on the relationship between business model innovation and entrepreneurial performance.
Table 2. Results of the reliability and validity tests. Note: *p < 0.05, **p < 0.01, ***p < 0.001, two-sided; bold values on the diagonal are the square roots of the AVE; values below the diagonal are correlation coefficients.
Table 3. Results of the confirmatory factor analysis. Note: a represents the merger of entrepreneurial bricolage with market orientation; b represents the merger of entrepreneurial bricolage with market orientation, and of business model innovation with enterprise performance; c represents the merger of all variables into one factor.
Table 5. Mediating effect analysis.
Table 6. Test of moderating effects.
Table 7. Analysis of the moderating effects.
Table 8. Analysis of the moderated mediating effect.
Table 9. Robustness test for direct effects.
Table 10. Robustness test of the mediating effect.
Table 11. Robustness test of moderating effects.
Table 12. Robustness test for moderated mediating effects. Note: this study revised and further validated this measure based on the classical scale developed by Hunt et al., incorporating findings from Dubey et al. [54,55].
Narrating Green Economies in the Global South Abstract This paper discusses how persisting, powerful narratives inform and shape the green economy in the Global South. Green economy strategies often evolve around market-based and technological solutions to the planetary crises, particularly in industrialized countries. In developing countries with rich resource bases, however, green transitions often imply various forms of modernization of the ways in which natural resources are managed, utilized and controlled. This, I argue, is a result of the process in which the green economy agenda is shaped by elites through narratives that feed into and inform green economy discourses and policies in resource-rich countries in the Global South. While much literature discusses variegated green economy schemes in the Global South and their outcomes, this paper discusses how these practices and policies are driven by powerful narratives that essentially shape green economy agendas. I argue that a persisting neo-Malthusian narrative of resource scarcity, degradation and overpopulation co-exists with a resource abundance narrative, holding that pristine natural resources are vast, but under threat, and that capital, ‘know-how’ and technology can protect and develop these resources while at the same time accumulate economic growth. As a result, the green economy in the Global South is often narrated and implemented under a discourse of modernization of natural resource management. Introduction In the aftermath of the converging crises of food, finance and energy between 2007 and 2009, the planetary crisis received increased attention in political, popular and academic circles. For many, this represented a 'unique moment in history, in which major environmental and economic challenges could be tackled simultaneously' (Tienhaara, 2014, p. 1). Global policymakers, such as the United Nations Environment Program (UNEP), the Organization for Economic Cooperation and Development (OECD) and the World Bank, began working on strategies to redirect the crises of the economy, the environment, and persisting global poverty. These efforts materialized at the United Nations Conference on Sustainable Development, Rio+20, in 2012, where one of the main outcomes was the conceptualization of the green economy (UNEP, 2011). A green economy is an economy 'that results in improved human well-being and social equity, while significantly reducing environmental and ecological scarcities' (UNEP, 2011, p. 16). There are many ways in which the green economy is being implemented in practice, but there is an overwhelming emphasis on market-based and technological solutions to environmental challenges . In industrialized countries, the focus is usually on investments, technology and innovation in renewable energies, as well as making fossil fuels more energyand cost-efficient, much along the lines of ecological modernization (Mol and Spaargaren, 2000). In the Global South, however, green economy measures are usually implemented in natural resource sectors (Brown et al., 2014;Bergius and Buseth, 2019;Buseth, 2017;Cavanagh and Benjaminsen, 2017;UNEP, 2011;World Bank, 2012). 1 One reason for this is that the combined targets of the green economypoverty reduction, climate measures and economic growthhave spurred initiatives that aim to merge these agendas in the same package. 
A wide range of initiatives targeting the management, utilization or protection of natural resources are being rolled out under green economy banners across the Global South, and a substantial amount of published research discusses the logics and consequences of various green initiatives (Büscher and Fletcher, 2015; Cavanagh and Benjaminsen, 2017; Death, 2015; Ehresman and Okereke, 2015). Investments and capital accumulation targeted new sectors after the financial crackdowns of 2007-2009. There is an increasing trend that investments in natural resource sectors are being framed as 'green growth,' merely based on the combination of economic growth and natural resources. In line with Harvey (2001), Patel and Moore (2017) argue that such use of natural resource sectors is based on capitalism's constant drive towards expansion, or a 'spatial fix.' Bergius and Buseth (2019, p. 59) also state that 'green sectors in the Global South have become important outlets for international capital in recent years reinforcing a contemporary cycle of 'material expansion' in this stage of capitalism.' Kröger (2013) found similar patterns in his study of 'forestry capitalism' in Brazil, and Benjaminsen and Bryceson (2012) discussed capital accumulation, or 'accumulation by dispossession,' through acquisition of land and coastal reserves in Tanzania. Many studies criticize various 'green' schemes and their implications as consequences of the green economy; examples include REDD+ (Lund et al., 2017; Svarstad and Benjaminsen, 2017), carbon forests (Lyons and Westoby, 2014), climate-smart agriculture and the new green revolution for Africa (Bergius et al. 2018; Bergius and Buseth, 2019; Newell and Taylor, 2018), biofuel production (Boamah, 2014), nature conservation (Büscher and Fletcher, 2015; Sullivan, 2013), and ecotourism (Fletcher and Neves, 2012). Such studies are important for understanding how the green economy manifests, but there is a gap in research on the discursive drivers behind green economy agendas. As Asiyanbi (2015) argues, a growing body of work on 'neoliberal natures' has failed to make enough effort to assess how discourses of the green shift are being translated into realities on the ground. It is therefore important to analyze the green economy not only by its actual implementations, but also by the discursive drivers behind these policies and practices (Bailey and Caprotti, 2014).

1 The 'Global South' is a broad and imprecise category of which all countries are of course not homogenous. There is furthermore obviously no dichotomy between the 'Global North' and the 'Global South', neither geographically nor politically-economically. However, some sort of generalizing terminology is unavoidable for the purpose of discussing discursive trends. Policy documents, policymakers, actors and existing literature make the same categorization. There are therefore many exceptions to this categorization, but the broad division nonetheless remains. When referring to the 'Global South' throughout this paper, it generally means not industrialized, developing countries with rich resource bases, typically 'poor' countries in the Southern hemisphere.

In order to address this gap, this paper discusses how narratives and discourses inform and justify green economy agendas in resource-rich developing countries in the Global South. Building on Hajer (1995), the contribution lies mainly within discussing how two distinct narratives inform green economy discourses and policies.
These narratives are (1) a persisting neo-Malthusian narrative of resource degradation, scarcity and overpopulation, and (2) a narrative of resource abundance in the context of the Global South. Several powerful actors, such as the World Bank (2013; 2019), base their policy agendas on the idea that resource bases in developing countries are rich and pristine, but threatened by degradation, as also Scoones et al. (2018) found. Key policy documents hold that capital and technology inflows will protect these natural resources and at the same time accumulate green growth and development under the threefold goals of the green economy (OECD, 2009; UNEP, 2011; World Bank, 2013). Indeed, Scoones et al. (2018) argue that 'scarcity' narratives became dominant motivations and justifications in the rush for Africa's farmland after the food price spike in the years 2007-2009. In this paper, I argue that such narratives have been a driving force also behind green economy policies in the Global South at large since Rio+20. 2 This is supported by Bergius et al. (2020), who demonstrate how degradation narratives have justified and paved the way for the implementation of a green economy in the case of Tanzania.

2 Many countries in the Global South also apply technology-centered green economy approaches, such as in energy or industrial innovation (see e.g. Death 2016). Some countries across the Global South have more comprehensive green economy strategies than others, and some countries' strategies delve more into innovation, industry and renewable energy. See e.g. https://www.un-page.org/countries/page-countries for a broad policy/UN-oriented overview of green economy strategies and measures in selected countries across the Global South (accessed 20.12.19).

The result is a strong modernization discourse holding that natural resources and land use systems in developing countries must be modernized in order to reach the prosperous levels of societies in the Global North, along the lines of Rostow (1960) and classic modernization thinking. Modernization as a development idea was most prominent during the post-war era (McMichael, 2012), but has roots back to the sixteenth and seventeenth century Enlightenment history in Europe, where a dualism between humans and nature held that humans should control nature (Descartes, 1985, pp. 142-143). People living in the colonies, of course, were considered part of nature. This dualism and urge to control nature and 'underdeveloped' areas of the world by far justified colonialism, and later 'development.' As Bergius and Buseth (2019, p. 59) hold, modernization thinking in the post-war period 'spelled out a geographical divide between the 'progressive' cores of 'modernity' and the 'lagging' peripheries of 'tradition'.' Development equaled modernization, and controlling nature and resources through the use of capital and technology was core in this thinking. The dichotomy between humans and nature, and the urge to modernize, has later been reframed under the green economy as a core part of contemporary global capitalism and the climate crisis. This links to a general recent turn towards involving private sector actors in development programs, and the 'trade not aid' trend, which has spurred an increase in public-private partnerships and philanthrocapitalism in many sectors, including environmental policies (Adelman, 2009; Arndt and Tarp, 2017).
This private turn often combines development, climate measures and economic growth, effectively channeling donor money into green sectors in the Global South (Arndt and Tarp, 2017). Bergius et al. (2018, p. 825) hold that in Africa, the green economy is increasingly manifested through the use of green agendas in order to strengthen the idea that development equals modernization through 'capital-intensive land investments. ' Green (2015, p. 632) argues that after the converging crises of the late 2000s, development funding has largely been prioritized towards the private sector. This is part of the reason why the modernization discourse has revitalized under the green economy. Bergius and Buseth (2019) call this 'green modernization,' and discuss how this discourse has advanced certain 'incarnations' of dominant modernization narratives, such as capital and technology transfers, mobility of land and people, a renewed traditional/modern dualism, and new private-sector versions of 'stages to growth.' Current green economy debates, policies and practices are essentially apolitical (Newell, 2015), meaning that they pay little or no attention to power structures or policy implications of green transformations. Indeed, Newell (2015, p. 69) argues that policy and scholarly debates have focused more on the 'governance of transitions than the politics of transformations.' Political ecology may therefore offer a useful framework for the study of how narratives and discourses drive policies in natural resource management. The interaction between natural resources, power and politics is of main concern for political ecologists, who seek to unmask power structures and key assumptions underpinning natural resource management (Adger et al., 2001;Peet et al., 2011). Political ecology is, according to Forsyth (2003, p. 2), useful as a framework to explain 'the social and political conditions surrounding the causes, experiences, and management of environmental problems.' Political ecology seeks to critically see the environment, and natural resources, through a contextual approach; it sees the nature as power-laden, and focuses on multilevel connections, structures and actors in the environment and among decision-makers and hierarchies of power (Adger et al., 2001). Stott and Sullivan (2000) emphasize the importance of tracing environmental narratives by identifying power structures, and a key approach within political ecology is to link discourses to current environmental policies. Political ecology is useful for the analysis of power and multilevel politics in environmental governance (Adger et al., 2001), such as the green economy, and therefore undergirds the discussion in this paper. The findings presented in this paper are based on a review and discourse analysis of policy documents, 3 and data collection undertaken between 2015 and 2017. I applied qualitative methods, including in-depth interviews with key actors in global and multinational organizations and institutions working within the green economy in different ways. 4 The analysis furthermore builds on event ethnography (Campbell et al., 2014) carried out at several big international green economy policy conferences. 
5 I analyzed the data qualitatively, particularly under a discourse and narrative analysis framework through methods of coding and identification of regularities across transcripts and documents, building on Foucault's (1972) 'archaeology of knowledge' and 'genealogy of power' (Foucault, 1980) which treats power as productive, meaning power can be influential through discourses. The analysis was further built on Roe's concept of policy narrative analysis (Roe, 1994), and Dryzek's framework for analyzing environmental discourses (Dryzek, 2013). 3 The analyzed documents are primarily key policy reports and strategies by UNEP, the OECD, the World Bank, the Partnership for Action on Green Economy (PAGE), and the World Economic Forum (WEF). I also reviewed project strategies and documents from REDD+ projects, Payment for Ecosystem Services (PES) schemes, and agricultural corridors in Africa, in addition to White Papers and investment strategy papers by selected investors. 4 Actors interviewed were working in international or multinational institutions or organizations, many of them in institutions operating across several countries in the Global South. I also interviewed informants at government level and with investors in specific countries. Most of the informants were representatives from multinational organizations or institutions working also outside the respective country in which the data collection took place. Most importantly; the trends described in this paper are similar across many countries in the Global South (see e.g. Bergius and Buseth, 2019;Dawson et al., 2016;De Schutter, 2015;McKeon, 2014;Moseley, 2017;Patel et al., 2015). 5 The events took place at OECD in Paris in 2015, in South Korea, led by OECD, UNEP, PAGE and the Global Green Growth Institute (GGGI), in 2016, in Dubai in 2016 (the World Green Economy Summit) (followed online), in Oslo in 2016, and in Tanzania in 2017 (an annual meeting of a multinational green growth scheme). The events are not further identified since informants who spoke here are anonymized in this paper. Discourses and narratives I follow Hajer's (1995, p. 44) understanding of a discourse as a 'specific ensemble of ideas, concepts, and categorizations that are produced, reproduced, and transformed in a particular set of practices and through which meaning is given to physical and social realities. ' Svarstad and Benjaminsen (2017) define a discourse as a shared way of comprehension that can be regarded as lenses that you see a certain topic through. Dryzek (2013, p. 9) holds that a discourse enables 'those who subscribe to it to interpret bits of information and put them together into coherent stories or accounts.' Discourses legitimize knowledge, and 'coordinate the actions of … people and organizations' (Dryzek, 2013, p. 10), especially in global politics, power and practices (Hajer, 1995). According to Svarstad et al. (2017, p. 356), discursive power is being exercised 'when actors such as corporations, government agencies or NGOs produce discourses and manage to get other groups to adopt and contribute to the reproduction of their discourses.' I see discursive power as being exercised also when a discourse has the power to influence policy or actions. A narrative is a social construction of a more specific case. For this study, narratives are important as drivers of assumptions, discourses and policies. According to Roe (1991, p. 
288), development narratives exist 'to tell stories or scenarios that simplify the ambiguity' of practitioners, bureaucrats and policymakers, especially in rural development. A narrative is a story that usually has a beginning (typically a problem), a middle, and an end, which can be a solution, a premise, or a conclusion in an argument. Narratives are meant to simplify and inform the reader, but also to provoke feelings, and the actors in a narrative are often portrayed as heroes, victims or villains. Roe (1991) argues that development narratives are not so much concerned with what should happen as with what will happen. The objective of such narratives is therefore often to persuade the reader to engage in or act upon the presented problem. Roe's (1994) concept of narrative policy analysis can be used to explain how certain stories dominate and how they lead to action through policies or implemented schemes. Molle (2008, p. 31), for example, draws on narratives as storylines to explain how policy is formed in the water sector in Africa. In this paper, I discuss narratives that drive discourses, to illustrate how green economy policies are formed and shaped. I examine how discourses shape agency and policy, and how discourses are informed and driven by narratives, as selected bits of information. For this purpose, I find Hajer's (1995, p. 61) work on discourse institutionalization useful, to shed light on how and when a given discourse is translated into policy and institutional arrangements. Hajer (1995, p. 1) defined environmental discourses as 'fragmented and contradictory,' and as 'an astonishing collection of claims and concerns brought together by a great variety of actors.' He used the example of ecological modernization to demonstrate how discourses were translated into politics. For this paper, Hajer's (1995) work is relevant for understanding how discourses feed into the formation of green economy policies. Green economies UNEP's report Towards a Green Economy (UNEP, 2011) has been essential for the mainstreaming of green economy concepts, agendas and policies after Rio+20. Furthermore, OECD's report Green Growth: Overcoming the Crisis and Beyond (OECD, 2009) has been particularly influential in the business sector and for governments of industrialized countries. According to the OECD; Green Growth means fostering economic growth and development, while ensuring that natural assets continue to provide the resources and environmental services on which our well-being relies. To do this, it must catalyse investment and innovation which will underpin sustained growth and give rise to new economic opportunities. (OECD, 2011, p. 4) The above-discussed green economy definitions are widely used among different actors, including environmentalists, practitioners, the business sector, and politicians, and has brought together actors with different agendas behind the same proclaimed, but fuzzy, goals. Given the ambiguity of the green economy, it is necessary to distinguish between green economy schemes that are being rolled out in various contexts, on the one hand (Death, 2015;2016), and green economy discourses that shape the policies behind these practices, on the other hand (Dryzek, 2013;Scoones et al., 2015). Following Hajer's concept of 'discourse coalitions' (Hajer, 1993), many distinct versions of a green economy can be identified, and the green economy has been categorized discursively in several ways (see e.g. 
Bina, 2013;Ehresman and Okereke, 2015;Ferguson, 2015;Scoones et al., 2015). Death (2015) discusses how various discourses are manifested in national green economy strategies and policies, and how theydespite being fundamentally differentare categorized under the same green umbrellas by their proponents. This illustrates how the green economy agenda is being narrated in different ways despite the lack of a common understanding of the concept. It also, however, illustrates a blind spot towards discursive drivers behind the green economy in the Global South. Most green economy approaches and agendas discussed in the literature predominantly fits industrialized countries and wealthy societies. And they canbroadlybe summarized in two overall discourses: green growth and green (technological) transitions. However, when implemented in the Global South, typically resource rich developing countries, these agendas merge into a modernization of natural resource management discourse. This does not only represent an increasing practice, but also a distinct green economy discourse that has not been sufficiently recognized in the literature. I argue that this discourse is a result of how prevailing narratives feed into green economy agendas in the process towards policy implementation. Narrating problems and solutions in the green economies of the global south The combination of green growth and technological transitions unveils an interesting merging of ideasor to put it another way, it illustrates two strangely intertwined ways in which the green economy is being narrated in the Global South. The longstanding degradation/ overpopulation narrative, is coupled with a belief that we can overcome the scarcity crisis if we invest in natural resources, in terms of both capital and technology (Scoones et al., 2018). This is justified by a narrative saying that while Africa's natural resources are being degraded, they are also pristine, abundant and vast, only waiting to be 'developed.' In sum, these narratives comprise a modernization of natural resource management discourse particularly evident in the Global South. The narratives represent a 'problem' storyline of resource scarcity, degradation, poverty, and overpopulation, and a 'solution' storyline saying that we can add technology, 'know-how' and capital into natural resource sectors in order to overcome the aforementioned challenges. Informed by this, the green economy in the Global South is therefore often implemented through schemes that seek to protect, modernize or profit from 'green' sectors, resulting in transformed ways in which natural resources are managed, governed and controlled. The problem: 'poor people make poor land' An initial driver behind the green economy in the mid-2000s was a recognition that the pressure on the planet is reaching its limits (Rockström et al., 2009;UNEP, 2011). This 'limits' idea is by no means new, rather it represents a renaissance of long-standing ideas. Malthus suggested this link already in (1798 [1998]), claiming that population growth would outstrip food production. A few hundred years later, The Club of Rome's 1972 report The Limits to Growth discussed how population growth and unsustainable use of the world's resources threatened the planet and humanity (Meadows et al., 1972; see also Ehrlich, 1968). 
Dryzek (2013) argues that after a short decade of 'environmental problem solving' in the 1960sclosely linked to the first photography of Earth from space, Carson's 1962 publication Silent Spring and eventually the 'environmental awakening' in the USwe saw the rise of a 'limits and survival' discourse in the 1970s. Dryzek (2013, p. 25) holds; 'exponential growth in both human numbers and the level of economic activity meant that there was no time to lose, for humanity seemed to be heading for the limits at an ever-increasing pace. Hitting these limits would mean global disaster and a crash in human populations.' This discourse has persisted, but was situated more in the background during the 1980s' and 1990s' sustainability discourse. Rather, under the advent of neoliberalism in the 1980s, a Promethean view that the Earth was in fact unlimited, gained traction. The argument was that indigenous people had always developed substitutes to resources that had run out, ora more updated versiontechnology would find solutions (Dryzek, 2013, p. 26). This view paved the way for the rise of ecological modernization. The main tenets of eco-modernization, is an ecologization of production, market and regulatory reforms that reflect ecological priorities, and the 'greening' of both social and corporate practicespredominantly focusing on countries in the Global North (Low and Gleeson, 1998). Simultaneously, with regards to the global South, the link between poverty, population pressure and environmental degradation was reframed. To the WCED (1987), it was important to present new and more optimistic ideas about sustainability, prosperity and economic growthas opposed to the doom predictions by Meadows et al. (1972). This was in line with the general turn towards neoliberalism and the focus on 'human development' we saw towards the end of the 1980s. But the perceived link between poverty, population and degradation was strong: Many parts of the world are caught in a vicious spiral: poor people are forced to overuse environmental resources to survive from day to day, and their impoverishment of their environment further impoverishes them, making their survival ever more difficult and uncertain. (WCED, 1987, p. 28). Two decades later, UNEP (2011, p. 15) said that 'the link between population dynamics and sustainable development is strong and inseparable' and that [a] transition to a green economy can assist in overcoming the contribution that population growth makes to the depletion of scarce natural resources. The world's least developed countries (LDCs) are more strongly affected by environmental degradation than most other developing countries, so therefore they have much to gain from the transition to a green economy. (UNEP, 2011, p. 15) This overpopulation narrative is strong and persistent. Ojeda et al. (2018, p. 1) found that environmental policies in the Anthropocene builds heavily on a 'larger project of population control,' and demonstrates several ways in which elites, policy-makers and scholars have 'updated Malthus' ideas of human population stretching natural limits, applying them to problems like soil erosion, deforestation, pollution, and now climate change' (2019, p. 2). Indeed, Scoones et al. (2018) argue that the crises of 2007-9 'galvanized a series of scarcity narratives justifying interventions around land and resources.' 
Scarcity narratives are not new, but they have been renewed under global warming and the green economy, in new packages, and laid an important foundation for how actors think about natural resource sectors in the green transitions of resource-rich developing countries. One contemporary example of this narrative, as reemerged under the green economy in the Global South, is The Nature Conservancy's Adopt an Acre program, which enables consumers to 'adopt' (in the exchange for a donation) a piece of land in order to protect it from degradation. 6 On a webpage that has since been removed, they argued that 60 percent of Africa's lands and waterscommunity property, in a senseare managed by the people who live on them … A continuing threat is their lack of control over the communal lands and waters they depend on for survival. 7 Also UNEP (2011, p. 14) states that: Currently, there is no international consensus on the problem of global food security or on possible solutions for how to nourish a population of 9 billion by 2050. … Freshwater scarcity is already a global problem, and forecasts suggest a growing gap. These narratives 'prove useful for elites who seek to avoid responsibility'for conflicts, for scarcity or simply as an alibi for continued economic growth (Ojeda et al., 2018, p. 6). Furthermore, this illustrates the story about how 'we' should intervene in natural resource management in order to 'save' the planet's degrading resources, and at the same time make money. As the above quote from The Nature Conservancy (TNC) shows, green economy policies are based on a narrative that poverty and overpopulation contribute to resource degradation. Proponents of this view hold that population growth is threatening natural resources, and measures to halt population growth should therefore be an integrated part of the solution to hinder planetary degradationparticularly in the Global South, where the problem is perceived to be most serious (World Bank, 2012;WCED, 1987). Powerful actors and policymakers regard traditional sectors such as small-scale agriculture and pastoralism as inefficient and 'backwards' production systems that are degrading the environment (Bergius et al., 2020;World Bank, 2013). Although many have raised questions about this link (e.g. Gray and Moseley, 2005), these narratives are still frequently used, and feed into green transition agendas in the Global South. For example, the World Bank report titled Inclusive Green Growth: The Pathway to Sustainable Development holds that one main problem for what is usually called 'natural capital' under the green economy, is that soil is being degraded because of 'poor' use, and that 'land users need to be given the right economic incentives in preventing or mitigating land degradation' (World Bank,2012, p. 110). One chapter in the report is devoted to describing how natural capital, primarily in developing countries, should be managed in new ways in order to implement a green economy. One problem that the World Bank points to, is how resources such as forests and fisheries in developing countries usually are open access and poorly managed, and should change (echoing Hardin, 1968). It also holds that soil degradation is a problem due to poor agricultural and grazing practices, which must be managed in new ways. These views are rather common in key green economy policy documents, such as those by UNEP (2011), the OECD (2009) and the World Bank (2012; 2013; 2019). 
The policy agendas echo the 'limits and survival' discourse of the 1970s, and we see a trend where contemporary policy frameworks for environmental management meet longstanding narratives that have informed and justified natural resource management for centuries (Roe, 1991;1994). Decision-makers, practitioners and investors, as well as local and national elites, argue along similar lines. 8 For example, one senior representative of a prominent global agribusiness company said the following about smallholders in the African country in which he was based: Also soil degradation here is a big, big thing. And one of the main reasons is how badly [the smallholders] treat the soil. First of all on the animal life, they devastate absolutely everything they don't need … Because they have this thing, smallholders, and then what they do, because they have such a low productivity, they just devastate everything, and it will devastate more and more [soil]. 9 A number of other informants echoed these views. When asked how or why a green economy should be implemented in the Global South, the response was usually along the lines of 'because land is becoming degraded,' 'because of mismanagement of natural resources,' 'because of deforestation,' or 'rural farmers don't know how to treat the soil.' 10 Policy-makers at both global level, in multinational organizations, or national authorities produce the same storynot based on scientific research, but based on strong, persisting beliefs and the fact that 'others' tell the same story. 11 UNEP (2011, p. 15) for example argues that '[a] transition to a green economy can assist in overcoming the contribution that population growth makes to the depletion of scarce natural resources.' This storyline is frequently referred to in national investment strategies and natural resource policy documents in countries in the Global South (e.g. SAGCOT, 2013;World Bank, 2019). This illustrates how narratives form discourses that feed into policy and political decisions. Others have identified similar stories that mask the political and economic realities of overpopulation and resource scarcity (Ojeda, 2018). The green economy is often branded and implemented in ways that do not correlate well with the ambitious promises made in policies and strategies. The reframed policy agendas of scarcity, limits and degradation illustrate how narratives form 'solutions,' following Roe (1991). As Adger et al. (2001, p. 683) claim, since discourses are often based on 'shared myths' of the world, 'the political prescriptions flowing from them are often inappropriate for local realities.' Elite narratives about resource management and control have proved to survive despite evidence of the contrary. Political ecologists have repeatedly debunked narratives about environmental scarcity and degradation in the Global South (Fairhead and Leach, 1996;Mearns, 1996), 8 Informant 7, May 6, 2015;informant 45, April 27, 2016. 9 Informant 37, March 8, 201610 Informant 2, May 4, 2015informant 15, November 6, 201511 Informant 2, May 4, 2015informant 45, May 27, 2016. but such research is hardly taken into consideration in the formation of environmental policies (e.g. Svarstad and Benjaminsen, 2017). Roe (1991) pointed to several discourses from rural Africa that have persisted despite 'strong empirical evidence against its storyline.' Gardner (2017), moreover, demonstrated that global elite policies influence conservation schemes based on discursive policies rather than local realities. 
Instead, the resource degradation narrative has proved to be stronger, and hasby farjustified control over nature through centuries. This provides an explanation for why there still is a belief in the necessity of 'us' intervening to 'save' nature from 'them' (Eddens, 2017;Gardner, 2017). The solution: modernizing natural resource management Elites have historically sought to control nature, but the reasoning has changed over the centuries. While colonialism and the development era of the 1950s and 1960s at least in part was justified by a constructed 'need' to save nature, this was reformulated during the 'limits' discourse of the 1970s, and again under the 'sustainable development' discourse of the 1980s and 1990s. Today, policymakers and powerful actors have constructed new 'solutions' to environmental degradation based on a strong belief in modernization through technology and capital in natural resource sectors in the Global South. 12 This is linked to a perceived realization of the green economy's 'triple bottom line' of people, planet and profita threefold goal that targets the crises of poverty, the climate and the economy in the same package (Bergius and Buseth, 2019). One way in which this shift becomes obvious, is how the story about growth has changed. Peculiarly, the 'limits to growth' ideaor storyhas turned into 'green growth' under the green economy. Indeed, one of the headings in the UNEP (2011, p. 14) report reads From crisis to opportunity. This illustrates how rhetoric and policy has changed from limiting the use of nature to a focus on economic opportunities in nature (World Bank, 2013). This shift offers a useful bridge between the 'problem' narrative outlined above, and the 'solution' narrative, which holds that there is an abundance of natural resources and available land in African countries. In this way, narratives about scarcity and abundance are intertwined and set to represent two sides of the same story. This win-win narrative holds that the world's natural resources are pristine and under threat, but at the same time valuable, with tremendous potential for capital accumulation. This well illustrates the aims of the green economy, and how the narratives and discursive drivers have shifted. This was also found by Scoones et al. (2018), who argue that an abundance narrative exists alongside the scarcity narrative, holding that investment areas are 'abundant, empty, idle and underutilized.' This feeds into policy agendas, investment strategies, green economy strategies and mainstream green growth rhetoric (e.g. World Bank, 2019; SAGCOT, 2013). Policymakers, practitioners and governments adhere to the story that degraded or underutilized resources will prosper and be of high economic value only if we allow technology and market forces to 'develop' them (World Bank, 2013. Key green economy policy documents usually focus on the latter (i.e. the potential for capital accumulation). For example, one-third of UNEP's report Towards a Green Economy (UNEP, 2011) is devoted to natural capital and how we should invest in it in order to establish a green economy. This includes both protection of natural resources to hinder planetary degradation, and modernization of the utilization and management of natural resources for the purpose of development and (green) economic growth (OECD, 2009;UNEP, 2011;World Bank, 2019). 
One striking example in my data comes from the Global Green Growth Summit in 2016, where a senior associate from the International Institute for Environment and Development (IIED) started his panel talk by announcing to the 1200 people in the audience, 'If you're from an African government, please sell your land to investors! In that way we can create green jobs for the poor!' 13 This view illustrates the focus on capital investments and modernization of natural resource management in the green economy of the Global South, and the belief that external intervention is necessary. Such policy ('creating green jobs on land sold to foreigners') is a result of the belief in the degradation narrative, as well as the focus on poverty ('we can give them jobs'). The quote came from a leading green economy policymaker, and his audience consisted of other leading policymakers at global, regional and national levelincluding typically various UN organizations, OECD and World Bank representatives, as well as government and other representatives from the Global South and Global North, representing respectively the receiving end on the one side, and donors/ investors on the other side. The quote obviously does not only illustrate this person's view, but it represents a strong, leading narrative that is being passed on and circulated within a broader community of policymakers, decisionmakers, and elites, which essentially has consequences for how the leading discourse institutionalizes through policy. Brockington andPonte (2015, p. 2197) argue that initiatives such as carbon payments, ecotourism, and biodiversity offsets largely illustrate the green economy in the Global South. Such schemes are frequently used by its proponents as examples of how nature can be protected while at the same time accumulate economic growth. This illustrates how actors and discourses have changed the rhetoric from a focus on global crises and planetary degradation to a story about investment opportunities, as well as how new management schemes should be implemented in order to 13 Informant 48, September 6, 2016 'restore' natural capital (OECD, 2009). 14 For example, the OECD (2011) argues that natural resources should be conserved and 'used more efficiently' in order to achieve green growth. Similarly, the World Bank recently published a report on why improved management, modernization and protection of Tanzania's natural resources are crucial in achieving 'green development' and sustainability (World Bank, 2019). In an earlier World Bank report, 10 out of 16 principles for how to establish green growth in the Global South, deal directly with modernized environmental management (World Bank, 2012, p. 17). They emphasize carbon pricing, stricter water regulation, better forest management, coastal zone and fisheries management, land use planning, and more 'targeted' agricultural practices. For example; [d]ifferent resources require different types of policies. For extractable but renewable resources, policy should center on defining property rights and helping firms move up the value chain. For cultivated renewable resources, policy should focus on innovation, efficiency gains, sustainable intensification, and "integrated landscape" approaches. (World Bank, 2012, p. 
105) When introducing the World Bank's Climate Change Investment Plan for Africa 15 during the Global Green Growth Summit in 2016, a senior World Bank representative working on climate change policies, said that, in the agricultural system, there's lots of changes to think about, and thinking about changes in livestock feeding, that can on the one hand increase productivity, on the other hand increase resilience to climate change, and on the third hand reduce emissions. It is possible to have these win-win-win solutions. These are the three underlying principles for our climate change actions. Moreover, the same representative said 'we're working on the sustainable land management program, working nationally to transform landscapes at scale in order to build this resilience.' 16 It is interesting enough how he emphasized that they were working on landscape transformations in a foreign country, which clearly illustrates the belief in the necessity of external interventions in natural resource sectors in the Global South. But essentially, this urge to 'transform' landscapes rests heavily on the degradation narrative. The idea that natural resource sectors must be managed in new waysi.e. modernizedcontinues to inform and shape policy. This exemplifies how 'green growth' agendas have been influenced by persisting narratives, and is evident in both policy documents and among informants interviewed. Modernization is seen as both necessary and obvious in sectors that are perceived as traditional, outdated and underdeveloped, such as pastoralism and agriculture and other land use systems. 17 These narratives are therefore particularly evident in the agriculture sector and the new green revolution for Africa (Patel, 2013), which increasingly has been merged with the green economy (Buseth, 2017;Moseley, 2017;World Economic Forum, 2010). The Malthusian dilemma of how to feed the world's growing population is a powerful narrative (World Economic Forum, 2010). Under the green economy, efforts in developing the agriculture sector have been combined with environmental concerns and climate measures. This conceptual fusion proposes a greener repetition of the original green revolution (Conway, 1997) to feed a growing world population sustainably. A general argument is that developing countries should 'upgrade' to the level of developed countries' production systems by a 'flow of knowledge, experience and equipment from one area to another,' usually from developed countries to developing countries (UNEP, 2011, p. 234). UNEP (2011, p. 36) too, holds that one of the most pressing problems in the contemporary world is 'feeding an expanding and more demanding' world population, and 'attending to the needs' of those that are undernourished, while at the same time addressing climate change. Hence, 'environmental degradation and poverty can be simultaneously addressed by applying green agricultural practices' (UNEP, 2011, p. 36), a trend that is increasing under brands such as 'climate-smart agriculture' (FAO, 2010) and 'agriculture green growth' (SAGCOT, 2013). The World Bank, for example, presents agribusiness in Africa with the narrative that while Africa has 'an abundance' of both land and water, it lacks the capital, knowledge and technology to 'unleash' its opportunities (2013, p. 17). The World Bank also holds that Africa has become the 'final frontier' for agribusiness (World Bank, 2013, p. 
17), which exemplifies the understanding of the green economy as a 'spatial fix' in contemporary capitalist reorganization (Harvey, 2001;Patel and Moore, 2017). Other proponents hold that 'there is substantial untapped potential for the development of the continent's water and land resources for increasing agricultural production' (NEPAD, 2003, p. 24), and '[t]he continent is endowed with many natural resources, including plentiful land and fertile soils' (UNECA, 2013, p. 8). In an interview, an informant who was a foreign land investor in an African country asked, Have you ever flown across this country? All you can see is vast land areas which are just laying there. As far as your eye can see. There is plenty! Of no use! And, you know … the massive population growth … the number of people in this country is going to reach … yeah. It's a foreseen catastrophe. 18 17 Informant 53, September 7, 2016 18 Informant 7, August 6, 2015. Another self-proclaimed green growth investor repeatedly said how the local community was 'scratching dirt,' living from day to day, degrading the soil, the water and the forests in ignorance. 19 According to this informant, the best solution to the problem was to establish large-scale commercial farming that had the knowledge and the technology to manage the land 'correctly.' The collaborating authorities on the specific project this investor was working on, adhered to this story tooat both regional, national and global levels (World Economic Forum, 2010;World Bank, 2012;SAGCOT, 2013). The narratives of overpopulation, resource degradation and a predicted Malhusian crisis were the most mentioned reasons why investments in natural resource sectors in developing countries are necessary. 20 Clearly, there is a framing of villains in this picture: poor people degrade the soil with their outdated production systems, lack of knowledge and ignorance. These people are also often regarded as 'victims' in the same story alongside nature as a victim. The 'heroes' in the narrative are policymakers, practitioners, environmentalists, and investors who bring green growth, capital inflows, technology transfers, and essentially modernization to solve the degradation crises. Scholars have contributed to this view for decades. For example, Hollander (2003, p. 2) writes, The real enemy of the environment is povertythe tragedy of the billions of the world's inhabitants who face hunger, disease, and ignorance each day of their lives. Poverty is the environmental villain; poor people are its victims. Impoverished people often do plunder their resources, pollute their environment, and overcrowd their habitats. They do these things not out of willful neglect but only out of the need to survive. Not only foreign investors hold this view, but also elites and national stakeholders promote similar views when arguing why a green transition is necessary. Such scarcity narratives feed into policy formation (Hajer, 1995). This is evident through for example various large-scale land investment schemes that have been rolled out since the triple F crisis (Buseth, 2017;Scoones et al., 2018;World Economic Forum, 2010), aiming to improve production, alleviate poverty, accumulate economic growth, and at the same time curb climate change. 21 Policy strategies in such initiatives are to a large extent based on narratives of scarcity and degradation, presenting problems presumed to be solved by technology and capital inflows to natural resource sectors that are not utilized to their full potentials. 
Finally, deeply integrated in these narratives and the modernization discourse, there is a strong belief that in order to save the world's natural resources, we must attribute monetary values to them (OECD, 2009;UNEP, 2011). UNEP (2011, p. 18) holds that 'environmental valuation and accounting for natural capital depreciation must be fully integrated into economic development policy and strategy. ' Furthermore, UNEP (2011, p. 19) argues that we need to better control the environment in order to make money from it: The role of policy in controlling excessive environmental degradation requires implementing effective and appropriate information, incentives, institutions, investments and infrastructure. Better information on the state of the environment, ecosystems and biodiversity is essential for both private and public decision making that determines the allocation of natural capital for economic development. This quote illustrates the two-sided belief in the need to intervene in ecosystems in order to ensure capital accumulation, and to protect nature. It links to the broader debate on neoliberal natures, discussed by e.g. Bigger and Dempsey (2018), and demonstrates the contemporary turn towards a modernization discourse in natural resource management in the Global South. Commodification of natural resources and ecosystem services (Brockington, 2011;Sullivan, 2013) has directed many actors' interests towards what are perceived as 'underdeveloped' markets in the Global South (World Bank, 2013). Büscher and Fletcher (2015, p. 273) argue that the new mode of accumulation is best described as 'accumulation by conservation,' defined as 'a mode of accumulation that takes the negative environmental contradictions of contemporary capitalism as its departure for a newfound 'sustainable' model of accumulation for the future.' Whereas the green growth discourse rests on a narrative about a need to 'price nature,' it merely implies 'pricing nature to save the economy,' and not necessarily 'pricing nature to save nature' (Dempsey and Suarez, 2016). When implemented in the Global South, this narrative is informed by the aforementioned degradation narrative, and is accordingly transformed into a 'saving nature' storyline that masks how these 'natures' initially were framed as investment opportunities (Bailey and Caprotti, 2014;Death, 2015). This is obvious in particularly OECD's reports (e.g. 2009). This justifies interventions in nature, and largely explains how the green economy is regarded as an opportunity to find new ways to profit from natural resources (Brown et al., 2014). Under the green economy, actors focus more on utilizing natural resources than on regulating production or consumption systems, which are much more damaging to the planetbut that would disturb global capitalism (Kenis and Lievens, 2016;Patel and Moore, 2017). Biopower and capital accumulation must not be underestimated as driving forces in this discourse, or to put it another way, as driving forces in redirecting the attention away from the fossil industry. Thus, in line with Harvey (2001), the frontiers of this discourse appear as 'spatial fixes' to capitalism's internal contradictions (Harvey, 2014;O'Connor, 1991). This means not only expanding into new 'spaces,' but also finding new solutions ('fixes'), which are often short-term and not sustainable. From this perspective, the green economy emerges as a new 'frontier' in contemporary capitalist reorganization. Bailey andCaprotti (2014, p. 
1799) argue that green economies in the Global South represent a 'mosaic of practices that displays both synergistic components and dysfunctional overlaps and which has hazy systems of accountability for ensuring consistency between higher level visions of the green economy and on-the-ground green-economy strategies.' We see through piles of existing research that development interventions and green economy initiatives are often peripheral to on-the-ground realities. This is the result of how policy agendas are driven by prevailing and persistent narratives and discourses, and the gap can be explained by the ways in which policies are informed and shaped by narratives and discursive powers. Hajer's (1995, p. 61) concept of discourse institutionalization is useful to illustrate how actors interpret, shape and transform the green economy agenda. Actors consciously or unconsciously draw on a variety of selected arguments and narratives to establish new agendas and policies in response to new situations, such as the green economy (Buseth, 2017). In this paper, I have discussed how longstanding narratives have been revitalized to inform, justify and drive contemporary green economy agendas in resource-rich countries in the Global South. Two narratives in particular feed into the formation of green economy agendas in the Global South: first, a neo-Malthusian 'problem narrative' of resource degradation, scarcity, and overpopulation in poor countries, and second, a 'solution narrative' saying that we can overcome the crises by modernizing natural resource management and utilization. In this regard, the 'degradation' narrative is coupled with an 'abundance' narrative, holding that while the planet's natures are pristine and under threat, they are also abundant and underutilized, and should be invested in, or 'developed,' in order to accumulate green growth, as well as for the purpose of environmental preservation. In sum, this represents a modernization of natural resource management discourse in the green economy in the Global South.

AUTHOR
Jill Tove Buseth is an associate professor in human geography at the Department of Social and Educational Sciences at Inland Norway University of Applied Sciences, Norway. This article is part of her PhD thesis from 2019 at the Norwegian University of Life Sciences. Her PhD research investigated the implementation of the green economy in the global South, particularly an agricultural investment scheme in Tanzania (SAGCOT). Her work is primarily situated within human geography, development studies and political ecology, and covers environmental governance and policies, climate policies, and policies and discourses of sustainable development and the green economy.
Pomegranate iron (III) reducing antioxidant capacity (iRAC) compared to ABTS radical quenching

Pomegranate juice (PJ) has a total antioxidant capacity (TAC) that is reportedly higher than that of other common beverages. This short study aimed to evaluate the TAC of commercial PJ and pomegranate fruit in terms of a newly described iron (III) reducing antioxidant capacity (iRAC) and to compare it with ABTS free radical quenching activity. Commercial PJ, freeze-dried pomegranate, and oven-dried pomegranate were analyzed. The total phenols content (TPC) was also assessed by the Folin-Ciocalteu method. The calibration results for iRAC were comparable to the ABTS and Folin-Ciocalteu methods in terms of linearity (R2 > 0.99), sensitivity and precision. The TAC for PJ expressed as trolox equivalent antioxidant capacity (TEAC) was 33.4 ± 0.5 mM with the iRAC method and 36.3 ± 2.1 mM using the ABTS method. For dried pomegranates, TAC was 89–110 mmol/100 g or 76.0 ± 4.3 mmol/100 g using the iRAC and ABTS methods, respectively. Freeze-dried pomegranate had 15% higher TAC compared with oven-dried pomegranate. In conclusion, pomegranate has a high TAC as evaluated by the iRAC and ABTS methods, though variations occur due to the type of cultivar, geographic origin, processing and other factors. The study is relevant for attempts to refine food composition data for pomegranate and other functional foods.

Introduction
Pomegranate (Punica granatum L.) is an ancient food used as a traditional remedy against a variety of conditions including microbial infections. Pomegranate is perceived as a "superfood" due to its high antioxidant capacity [1-8]. Current databases show that pomegranate juice (PJ) possesses a total antioxidant capacity greater than that of many other beverages [9-12]. Although total antioxidant capacity values have been reported for pomegranate from different countries, only a few publications deal with commercial PJ as sold in the market [9,13]. The effect of drying on pomegranate seed, arils and peels has been examined [14,15], but the effects of oven-drying and freeze-drying on the total antioxidant capacity of whole pomegranate fruit have not been compared. The aims of this short study were to reevaluate the total antioxidant capacity of pomegranate fruit and commercial PJ using a newly described method for assessing iron (III) reducing antioxidant capacity (iRAC) [16] and to compare the results with the ABTS method [17]. Total phenol content (TPC) was also evaluated as another well-characterized antioxidant method [18]. The study is significant for current attempts to refine food composition data for pomegranate and other functional foods for improved nutrition applications, product development or international trade [19].
Preparation of samples and antioxidant standard
Pomegranate fruit (Hicaz variety, Turkey) and commercial PJ (POM Wonderful 100% PJ; POM Wonderful LLC, UK) were purchased from a large supermarket in the United Kingdom (UK). Unpeeled pomegranate was washed, diced using a stainless steel knife and divided into two portions. One portion of pomegranate was oven-dried at 80°C overnight and the other was frozen at -80°C for 48 h and then freeze-dried for 48 h using a HETO Power Dry PL6000 instrument (ThermoFisher Ltd., UK). The dried pomegranate samples were ground using a blender (DeLonghi Coffee Grinder, Type KG40 EXA) and the resulting powders (5 g) were extracted by stirring with 100 ml of solvent (40:60 v/v methanol:water) for 2 hours. The pomegranate solvent extract was centrifuged using a microcentrifuge (11,000 rpm for 5 min) and the supernatant stored at -18°C. The solids content for PJ was determined by drying a known volume and weighing the residue. Gallic acid and trolox reference compounds were prepared as 1000 µM solutions and diluted to 500 µM, 250 µM, 125 µM, and 62.5 µM daily before use.

Iron (III) Reducing Antioxidant Capacity (iRAC) Assay
The iRAC reagent comprised 20 mg of ferrozine dissolved in 18 ml of Tris buffer (0.1 M, pH 7.0) or potassium acetate buffer (0.1 M, pH 4.5), mixed with 8 mg of iron (III) ammonium sulphate dissolved in 2 ml of deionized water. Typically, the final iRAC working solution was prepared after the sample array to be analyzed was ready; 20 µL of pomegranate extract, PJ, or reference compound (gallic acid or trolox) was added to a 96-well microplate followed by 280 µL of the iRAC reagent. The reaction mixtures were incubated for 30 minutes at 37°C. Absorbance was read at 562 nm (A562) using a microplate reader (VersaMax model reader; Molecular Devices, Sunnyvale, California, USA). Several dilutions of each sample (25-, 50- and 100-fold) were analyzed to determine the optimum dilution necessary for sample absorbances to fall within the linear range for analysis. Final samples were analyzed on two separate occasions using (n =) 12-16 wells of a microplate. For time-course measurements, A562 readings were recorded every 2 minutes for 30 minutes.

ABTS Assay
The ABTS assay was performed as described by Walker and Everette [17] with modifications. ABTS (27.4 mg) was added to 90 ml of PBS buffer. Sodium persulfate (20 mg/1 ml PBS) was prepared separately, added to the ABTS stock solution, and both were made up to 100 ml using PBS buffer. The mixture was stored in the dark for 16 hours. The ABTS•+ solution was diluted with PBS buffer to obtain an absorbance of 0.85 at 734 nm (A734) using a 1-cm conventional spectrophotometer (Ultrospec 2000 UV/Visible spectrophotometer, Pharmacia Biotech Ltd, Sweden). Thereafter, 20 µL of samples or reference compounds (trolox) were added to a 96-well microplate followed by 280 µL of ABTS•+ solution. The plates were incubated in the dark for 30 minutes at 37°C and A734 was recorded using a microplate reader. Pre-diluted samples were analyzed on two separate occasions using (n =) 12-16 wells of a microplate.

Folin-Ciocalteu Assay for Total phenols
The Folin-Ciocalteu method of Singleton et al.
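Two routine bench calculations are implied by this set-up: diluting the ABTS•+ stock down to the target A734 of 0.85, and preparing the two-fold dilution series of the reference standards. A minimal sketch of both is given below; the measured stock absorbance is a hypothetical example value, not a measurement from this study.

```python
# Routine bench calculations implied by the assay set-up above.
# The measured stock absorbance (a_stock) is a hypothetical example value.

def abts_dilution_factor(a_stock, a_target=0.85):
    """Fold-dilution of the ABTS*+ stock needed to reach the target A734,
    assuming absorbance scales linearly with concentration (Beer's law)."""
    return a_stock / a_target

def serial_dilutions(c0_uM=1000.0, n=4):
    """Two-fold serial dilutions of the trolox/gallic acid standard (uM)."""
    return [c0_uM / 2 ** i for i in range(n + 1)]

print(abts_dilution_factor(1.70))   # -> 2.0, i.e. dilute 1:2 with PBS
print(serial_dilutions())           # -> [1000.0, 500.0, 250.0, 125.0, 62.5]
```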
[18] was used for TPC determination, with minor modification. Antioxidant standards or samples (50 µL) were added to microcentrifuge tubes with 100 µL of Folin-Ciocalteu reagent and 850 µL of sodium carbonate solution. The samples were vortexed briefly and incubated for 20 minutes at 37-40°C. Thereafter, 200 µL of the reacted samples were transferred to a 96-well microplate (4 x 200 µL per sample) and absorbance was read at 760 nm (A760) using a microplate reader.

Data analysis and statistical analysis
Microplate readouts were transferred to Excel for calculations and graphing. Calibration graphs for the iRAC, ABTS and Folin-Ciocalteu assays were generated by plotting absorbance changes (ΔA), corrected for the sample blank (B1) and zero-reagent blank (B2) (i.e. ΔA = A - B1 + B2), on the y-axis. The concentration of analyte (mol/L) in the assay vessel was plotted on the x-axis. For the ABTS assay, ΔA is the A734 of the ABTS reagent blank minus the A734 of the antioxidant sample. Calibration parameters (e.g. molar absorptivity, minimum detectable concentration, upper limit of detection, regression coefficient) were determined by fitting a straight line (y = mx) to the data, where m is the slope. The total antioxidant capacity for samples was determined from the absorbance changes (ΔA) using Beer's law relations (Eq. 1-3), where TAC = total antioxidant capacity, m = slope of the trolox calibration graph, Va = assay volume (µL; ×10⁻⁶ L), SV = sample volume assayed (µL; ×10⁻⁶ L), DF = dilution factor for samples before analysis (1 if undiluted), Vex = total volume of pomegranate extract, FW = formula weight of the reference antioxidant (g/mol), and W = dry weight of the food sample (g). For the PJ samples, W/Vex is the solids content as determined by drying. Statistical significance was tested using one-way ANOVA with Tukey post-hoc testing for separation of means. Significant differences were noted at P < 0.05. All analyses were carried out using IBM SPSS Statistics 24.

Calibration results for iRAC, ABTS and Folin-Ciocalteu methods
The assay time was fixed at 30 minutes based on the time-course of A562 readings for the iRAC procedure (Figure 1); the other assays were also conducted over 30 minutes. Calibration responses for the iRAC, ABTS and Folin-Ciocalteu assays (Table 1) were linear, with regression coefficients (R2) > 0.99. Other calibration parameters for the iRAC and ABTS assays were broadly similar with respect to the lower limit of detection (LLD) and upper limit of detection (ULD), but the assay sensitivity (slope) and precision (CV%) were higher in the former case (Table 1).

Total antioxidant capacity for pomegranate samples
The total antioxidant capacity for PJ was 33.4 ± 0.5 mM or 24.5 ± 0.7 mM (mmol trolox equivalents per liter of PJ) determined by the iRAC method at pH 7.0 and pH 4.5, respectively. The ABTS assay for PJ at pH 7.0 gave a total antioxidant capacity of 36.3 ± 2.1 mM (Figure 2A). The method of drying and the type of assay affected the values for total antioxidant capacity (Figure 2B). Freeze-dried pomegranate showed a higher iRAC response compared with oven-dried pomegranate, but no differences were observable using the ABTS assay. The order of total antioxidant capacity for whole pomegranate fruit was freeze-dried pomegranate > oven-dried pomegranate, and also iRAC (pH 7.0) > ABTS (pH 7.0) > iRAC (pH 4.5).
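Since Eq. 1-3 are referenced above only through their variable definitions, the sketch below shows one plausible reading of the described workflow: a trolox calibration slope from ΔA versus concentration in the assay vessel, then back-calculation to the original sample using Va, SV, DF, Vex and W. All numeric values, and the exact form of the back-calculation, are illustrative assumptions rather than the study's data or equations.

```python
# Sketch of the calibration-and-conversion workflow described above (Eq. 1-3).
# Slope fitting and all sample readings are hypothetical; volumes reflect the
# described assay (20 uL sample + 280 uL reagent = 300 uL assay volume).
import numpy as np

# Trolox standards (mol/L in the assay vessel after 20-in-300 dilution) vs. delta-A
trolox_vessel_conc = np.array([62.5e-6, 125e-6, 250e-6, 500e-6, 1000e-6]) * (20 / 300)
delta_a = np.array([0.05, 0.10, 0.21, 0.41, 0.83])   # hypothetical A562 changes
m = np.linalg.lstsq(trolox_vessel_conc[:, None], delta_a, rcond=None)[0][0]  # slope of y = m*x

def teac_liquid_mM(da, slope, va_uL=300, sv_uL=20, df=1):
    """TEAC of a liquid sample (mmol trolox equivalents per litre, i.e. mM)."""
    vessel_molar = da / slope                     # Beer's law: mol/L in the assay vessel
    return vessel_molar * (va_uL / sv_uL) * df * 1e3

def teac_solid_mmol_per_100g(da, slope, vex_L, w_g, va_uL=300, sv_uL=20, df=1):
    """TEAC of a dried sample (mmol trolox equivalents per 100 g dry weight)."""
    extract_molar = (da / slope) * (va_uL / sv_uL) * df   # mol/L in the extract
    return extract_molar * vex_L / w_g * 1e3 * 100

# e.g. a 50-fold diluted juice sample with a hypothetical delta-A of 0.45
print(round(teac_liquid_mM(0.45, m, df=50), 1))
```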
Total phenols content of pomegranate samples by Folin-Ciocalteu assay
Values for the TPC ranged from 5.8% to 6.9% GAE for dried pomegranate (Table 2). The order of decreasing values for TPC was freeze-dried pomegranate > oven-dried pomegranate > PJ.

Currently, pomegranate is listed as one of the highest sources of dietary antioxidants amongst many beverages including red wine, green tea, grape, apple, orange or cranberry juices [9-12]. Nonetheless, published total antioxidant capacity values for pomegranate vary considerably (Table 3). In this paper, we examined the total antioxidant capacity of pomegranate in terms of a newly described iron (III) reducing antioxidant capacity [16] and compared the values with the ABTS method [17]. As per AOAC guidelines, total antioxidant capacity values were expressed as trolox equivalent antioxidant capacity (TEAC) to enable comparisons [21]. The recommended units for TEAC are mmol/L (mM) for liquids (PJ) or mmol/100 g for solid samples [21]. The Folin-Ciocalteu assay was also applied as another well-standardized assay for total phenols and antioxidants from plant-derived foods [18]. (Table 3 notes: ↕ TAC = total antioxidant capacity (mM) determined by the ABTS method; TPC by the Folin-Ciocalteu method; TS = this study; PJ = pomegranate juice; * from US.)

Total antioxidant capacity and TPC of pomegranate juice
The basic principle behind the iRAC method is that an excess amount of iron (III) is reduced to iron (II) by antioxidants. The concentration of iron (II) is then detected with ferrozine as a complexing agent [16]. The iRAC method is a modification of the FRAP method [22], performed at pH 7.0 rather than pH 3.6; the iRAC method was also usable at pH 4.5 (Fig 2). Interestingly, PJ total antioxidant capacity values were ~8% lower using the iRAC method compared with the ABTS method, whilst the former was ~20% higher overall once the dried pomegranate samples are also considered (see below). The total antioxidant capacity for commercial PJ in this study (33-34 mM) was higher than values [24] cited for PJ obtained from eight pomegranate cultivars (Table 3). However, our sample of POMW 100% PJ manufactured in the UK had 50% lower total antioxidant capacity compared with another POMW 100% PJ brand produced in California (USA) 10 years ago [10]. The former PJ contained 120 mg vitamin C per liter (0.7 mM), which is ~2% of the total antioxidant capacity. Total phenols content values for commercial PJ (250 ± 12 mg GAE/100 ml; this study) were within the range reported previously (Table 3). In general, the TPC for PJ prepared from whole fruit is higher than the TPC for PJ extracted from frozen arils or peeled pomegranate (Table 3). Processing whole fruit led to the transfer of hydrolysable tannin from pomegranate peels to the PJ [9]. An estimated 29% of the TPC for pomegranate was associated with PJ, compared with 69% associated with pomegranate peel [26]. Significant process losses of TPC (and antioxidant capacity) were also reported when manufacturing pomegranate nectar from whole fruit [20]; under such circumstances, about 37% of TPC was associated with pasteurized PJ compared with 47% associated with peel [20]. No TPC differences were reported for PJ extracted using organically grown versus conventionally grown pomegranate fruits [27]. Clearly, the total antioxidant capacity and TPC for PJ may vary considerably as a result of processing factors.
Total antioxidant capacity and TPC for pomegranate fruit
There is less data available on the total antioxidant capacity and TPC for whole pomegranate fruit as compared with PJ [20,25]. In this study, whole pomegranate fruit was pretreated by dicing, freezing/oven drying, and blending to form powders, and then extracted with methanol:water (40:60%) prior to analysis. The observed total antioxidant capacity and TPC values are for whole fruit, and the values are also moderated by drying and by the efficiency of the extraction. In other studies, fresh whole pomegranates were homogenized or macerated directly with solvent and the extract subjected to analysis before the data were adjusted for moisture content [20,26,29]. There has been no concerted investigation to examine whether the two alternative sample treatment regimens affect the final results materially. Sometimes, whole pomegranates were also separated into rind, flesh (core and arils) or seeds prior to analysis [4].

The total antioxidant capacity for pomegranate fruit using the iRAC method (72-106.3 mmol/100 g DB; Figure 2B) agreed closely with values from ABTS analysis (this study) and with ABTS results reported previously as 122.9 mmol/100 g DB [20]. Past studies showed that the total antioxidant capacity of pomegranate was strongly correlated with TPC, tannins and flavonoids [4,28].

The TPC values for pomegranate samples (this study) were comparable to values reported previously (Table 1S; Supplementary data) in spite of differences in the cultivars used and processing factors (Section 4.1). Freeze-dried pomegranate fruit had 15% higher TPC and 18% higher total antioxidant capacity compared with oven-dried fruit. However, past studies showed that moderate drying temperatures (55-75 °C) had no effect on TPC [15]. Some general differences in the values for TPC were noted (Table 1S; Supplementary data) with different cultivars, fruit parts (whole fruit > peel >> seeds or arils) and extraction solvent choice (methanol or methanol:water > water) [26,30]. The Hicaz variety of pomegranate had a high TPC, but comparisons with other varieties are not possible owing to the various experimental approaches used. The TPC for pomegranate varieties declined with increasing maturation and ripening [28].

Conclusions
The iron (III) reducing antioxidant capacity (iRAC) for pomegranate and juice was similar to values for ABTS free radical quenching capacity, both expressed as TEAC units. Both the iRAC and ABTS assays confirm previously reported high total antioxidant capacity values for PJ. Some differences in the TAC and TPC values for pomegranate and PJ were evident due to varying cultivars and processing factors. Such results have relevance for attempts to refine food composition data for pomegranate and other functional foods.

Table 2. Total phenol content for pomegranate samples on a dry weight basis ⊥
Table 3. Reported total antioxidant capacity and TPC values for pomegranate juice ↕
↕ TAC = total antioxidant capacity (mM) determined by ABTS method; TPC by Folin-Ciocalteu method; TS = this study; PJ = pomegranate juice. *From US.
2018-07-25T05:00:37.109Z
2018-05-15T00:00:00.000
{ "year": 2018, "sha1": "48294b2debed9242f85ab3f5cf4ba548087d7eae", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2306-5710/4/3/58/pdf?version=1533635291", "oa_status": "GREEN", "pdf_src": "ScienceParseMerged", "pdf_hash": "48294b2debed9242f85ab3f5cf4ba548087d7eae", "s2fieldsofstudy": [ "Environmental Science", "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
265349544
pes2o/s2orc
v3-fos-license
Using SUpported Motivational InTerviewing (SUMIT) to increase physical activity for people with knee osteoarthritis: a pilot, feasibility randomised controlled trial Objective The objective of this study was to determine the feasibility and effectiveness of using SUpported Motivational InTerviewing (SUMIT) to increase physical activity in people with knee osteoarthritis (KOA). Design Randomised controlled trial. Setting We recruited people who had completed Good Life with osteoArthritis Denmark (GLA:D) from private, public and community settings in Victoria, Australia. Interventions Participants were randomised to receive SUMIT or usual care. SUMIT comprised five motivational interviewing sessions targeting physical activity over 10 weeks, and access to a multimedia web-based platform. Participants Thirty-two participants were recruited (17 SUMIT, 15 control) including 22 females (69%). Outcome measures Feasibility outcomes included recruitment rate, adherence to motivational interviewing, ActivPAL wear and drop-out rate. Effect sizes (ESs) were calculated for daily steps, stepping time, time with cadence >100 steps per minute, time in bouts >1 min; 6 min walk distance, Knee Osteoarthritis Outcome Score (KOOS) subscales (pain, symptoms, function, sport and recreation, and quality of life (QoL)), Euroqual, systolic blood pressure, body mass index, waist circumference, 30 s chair stand test and walking speed during 40 m walk test. Results All feasibility criteria were achieved, with 32/63 eligible participants recruited over seven months; with all participants adhering to all motivational interviewing calls and achieving sufficient ActivPAL wear time, and only two drop-outs (6%). 12/15 outcome measures showed at least a small effect (ES>0.2) favouring the SUMIT group, including daily time with cadence >100 steps per minute (ES=0.43). Two outcomes, walking speed (ES= 0.97) and KOOS QoL (ES=0.81), showed a large effect (ES>0.8). Conclusion SUMIT is feasible in people with knee osteoarthritis. Potential benefits included more time spent walking at moderate intensity, faster walking speeds and better QoL. Trial registration number ACTRN12621000267853. preliminary data of it's effectiveness.findings need to be tested in a much larger sample, and i do hope that you are able to do this/obtain funding in the future.I found the manuscript to be extremely well written, easy to read and presents the results in both clinical and meaningful ways.This is particularly important from an application point of view, whereby allied health clinicians are interested in how this can be used to inform patient care, rather than just statistically significant results with no context. Thank you for the opportunity to review this manuscript. Dr. Sibel Basaran, Cukurova University Comments to the Author: General: In the current study, the authors aimed to determine the feasibility of conducting a fully powered trial to evaluate the clinical effectiveness of using Supported Motivational Interviewing (SUMIT) targeting physical activity following completion of an exercise-therapy program (GLA:D) in people with knee osteoarthritis.It is a well-designed study.However, some limitations due to Covid-19 restrictions partially prevented the study from being well-conducted.Author response: We thank you for your time to review our work, Dr Basaran. The main concern with the study is its administration after completion of a structured exercisetherapy program (GLA:D). 
Since only patients from centers that implement this program will be included, selection bias will occur and the results will be valid only for these patients. Author response: We agree and have added the following to our limitations section: "People who have completed GLA:D® report being more confident to participate in physical activities, 4 therefore, we chose to include this subset of the knee osteoarthritis population.It is important to note that this group has been willing to participate in an exercise-based intervention previously, and in many cases paid out of pocket and/or claimed private health insurance to support their participation.This selection bias may limit the external applicability of our findings to the broader knee osteoarthritis population.Recruiting for SUMIT following GLA:D® participation may be more successful due to their change in perception towards physical activity. 4Nonetheless, our findings indicate SUMIT may be effective and feasible following a widely implemented education and exercise-therapy program (i.e., GLA:D®), which as at December 2022 had been provided to 12,884 people with osteoarthritis. 5" (lines 385 to 394) Please discuss its feasibility when proceeding to a large-scale RCT to evaluate the effectiveness of motivational interviewing. Author response: Global osteoarthritis initiatives (including GLA:D®, "Better Management of Patients with Osteoarthritis" (BOA), 6 "My Knee Exercise", and "Enabling Self-management and Coping with Arthritis Pain using Exercise" (ESCAPE-pain)) are increasing in availability around the world.GLA:D® specifically is a widespread program in Australia and eight other countries around the world.In Australia alone, GLA:D® had 12,884 registered participants until December 2022, at a current rate of approximately 3,000-4,000 participants per year.This makes proceeding to a large scale RCT highly feasible if the number of health services participants are recruited from are expected. Please emphasize for which patients and clinical settings the intervention is applicable, as it is difficult to implement in routine. Author response: We have added to the discussion "Compared to an evaluation of physical activity in Australian GLA:D® participants where 25% of participants were 'more' active at baseline, and 29% following GLA:D®, our cohort included 53% considered 'more' active based on UCLA criteria. 7Further increases in physical activity in those already more active are still likely to improve health, 8,9 and increasing cadence 8,9 during walking as occurred in our intervention group also provides additional benefits.However, future RCTs may consider targeting 'less' active participants where there is a greater potential for improvement in physical activity participation and health benefits.People who have completed GLA:D® report being more confident to participate in physical activities, 4 therefore, we chose to include this subset of the knee osteoarthritis population.It is important to note that this group has been willing to participate in an exercise-based intervention previously, and in many cases paid out of pocket and/or claimed private health insurance to support their participation.This selection bias may limit the external applicability of our findings to the broader knee osteoarthritis population.Recruiting for SUMIT following GLA:D® participation may be more successful due to their change in perception towards physical activity. 
4Nonetheless, our findings indicate SUMIT may be effective and feasible following a widely implemented education and exercise-therapy program (i.e., GLA:D®), which as at December 2022 had been provided to 12,884 people with osteoarthritis. 5 " (lines 379 to 394) Minor recommendations: 6. Please give brief information about the duration of GLA:D.Author response: We have included the following to provide more detail about the GLA:D® program "GLA:D® involves two education and 12 supervised exercise-therapy sessions. 2Education covers information about osteoarthritis, treatment options, exercise and physical activity, and selfmanagement. 2Exercise-therapy includes neuromuscular, resistance-training and functional exercises. 2" (lines 107 to 110) Did the authors recorded the time between completion of GLA:D and the time of recruitment? Was it different between the groups? Author response: Participants were asked during screening when they completed GLA:D®, we have added the mean and standard deviation of months since completing GLA:D at enrolment into Table 2 (line 269).There was minimal difference between groups with the overall mean (SD) being 11 (8), SUMIT being 11 (9) and control being 10 ( 7 The role of motivational in improving physical activity levels, and related health measures, is an important topic.Your study provides evidence of the feasibility of this approach and preliminary data of it's effectiveness.These findings need to be tested in a much larger sample, and I do hope that you are able to do this/obtain funding in the future.I found the manuscript to be extremely well written, easy to read and presents the results in both clinical and meaningful ways.This is particularly important from an application point of view, whereby allied health clinicians are interested in how this can be used to inform patient care, rather than just statistically significant results with no context. Thank you for the opportunity to review this manuscript.Author response: Thank you Dr Stenner, for reviewing our manuscript and for your kind words. General changes: While this manuscript was under review, the lead author's PhD thesis was also examined, which included this manuscript as a paper.Based on feedback from thesis examiners, we have made two additional unsolicited changes to improve the clarity of information provided. • The text in the 5th paragraph of the discussion "However, further increases from this relatively high baseline are still likely to improve health, 8,9 and increasing cadence 8,9 during walking as occurred in our intervention group also provides additional benefits."was replaced with "Compared to an evaluation of physical activity in Australian GLA:D® participants, where 25% of participants were 'more' active at baseline, and 29% following GLA:D®, 7 our cohort included 53% considered 'more' active based on UCLA criteria.Further increases in physical activity in those already more active at baseline are still likely to improve health, 8,9 and increasing cadence 8,9 during walking as occurred in our intervention group also provides additional benefits.However, future RCTs may consider targeting 'less' active participants where there is a greater potential for improvement in physical activity participation and health benefits."(lines 379 to 385) which we believe is more clinically relevant for readers. A ). *************************** Reviewer: 2 Dr. 
Brad Stenner, University of South Australia Comments to the Author: Dear authors, Congratulations on both an innovative study and completing an RCT within the context of Victorian lockdowns.
2023-11-23T06:17:48.237Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "e5da5dc1649c6f46ed36f72de326d4a24016bd57", "oa_license": "CCBYNC", "oa_url": "https://bmjopen.bmj.com/content/bmjopen/13/11/e075014.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b03a2dcfebe3c6c9e7a998627d2313610110f408", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
28426838
pes2o/s2orc
v3-fos-license
Diaquabis(cinnamato-κ2 O,O′)cadmium
The title complex, [Cd(C9H7O2)2(H2O)2], was obtained as an unintended product of the reaction of cadmium nitrate with hexamethylenetetramine and cinnamic acid. The CdII ion lies on a twofold rotation axis and is coordinated in a slightly distorted trigonal–prismatic environment. In the crystal, the V-shaped molecules are arranged in an interlocking fashion along [010] and O—H⋯O hydrogen bonds link the molecules, forming a two-dimensional network parallel to (001).

Comment
The title compound was obtained as an accidental product of the reaction of cadmium nitrate with hexamethylenetetramine and cinnamic acid in ethanol in an attempt to synthesize a potentially interesting framework compound of the metal with both tetraamine and carboxylic acid groups. The potentially bridging hexamethylenetetramine ligand was expected to act as a linker between cadmium ions; however, it was not incorporated into the material. A mononuclear cadmium complex with water and cinnamate ligands was the product, formed in 75% yield from an ethanolic solution. The structure of diaqua-bis(cinnamato)-cadmium(II) had been previously recorded and was presented at the 1983 meeting of the American Chemical Society, but complete structural details are not available (Amma et al., 1983). In the Cambridge Structural Database (Version 5.35, with updates up to May 2013; Allen, 2002) [REFCODE: BUYTUK], only the data collection temperature (room temperature), the unit cell parameters and space group, and the R value (10.4%) are reported, but no atomic coordinates are available. Given the relatively poor precision of the previously reported structure and the lack of three-dimensional coordinates, we herein report the crystal structure of the title compound at 100 K.

The CdII ion lies on a twofold rotation axis and is coordinated by two cinnamate ligands and two water molecules (Fig. 1). The carboxylate groups are bidentate-chelating; the water molecules are monodentate and non-bridging. With the two oxygen atoms of each carboxylate group occupying coordination sites, the overall coordination environment of the metal center is best described as a distorted trigonal prism, with angles varying between 92.86 (11)° (between the O atoms of the two water molecules) and 116.30 (8)° (the angle between a water molecule O atom and a neighboring carboxylate group, using the carboxylate carbon atom as a substitute for the average of the two oxygen atoms). The Cd-O bond distances are in the expected ranges. The bonds involving the water O atoms are 2.208 (2) Å, which compares well with those in similar Cd(II) complexes (O'Reilly et al., 1984; Mak et al., 1985). The Cd-O bond distances involving the two carboxylate O atoms are longer than those involving the water molecules, as would be expected due to the chelating coordination mode of the cinnamate ligand. The actual bond distances are 2.330 (2) and 2.375 (2) Å for Cd-O1 and Cd-O2, respectively.
The similarity of the two Cd-O distances indicates an essentially symmetric coordination and a delocalization of the negative charge of the cinnamate carboxylate group. This is confirmed by the C-O bond distances within the carboxylate groups, which are also the same within experimental error, with values of 1.276 (3) and 1.269 (3) Å for O1-C9 and O2-C9, respectively.

In the crystal, the V-shape of the molecule results in a linear arrangement along [010], with the Cd(OH2)2 part of one molecule oriented towards the V-shaped part of a symmetry-related molecule (Fig. 2). In addition, intermolecular O-H···O hydrogen bonds connect the molecules, forming a two-dimensional network parallel to (001) (Fig. 3).

Experimental
To a stirred colorless solution of Cd(NO3)2·4H2O (0.3084 g, 1 mmol) in 10 mL of water was added hexamethylenetetramine (0.2802 g, 2 mmol) in 5 mL of water to give a colorless solution. Then, cinnamic acid (0.2962 g, 2 mmol) in 20 mL of ethanol was added to give a colorless solution. The solution was stirred at room temperature for 6 h, filtered, and then left to evaporate at room temperature. After several days, colorless needle-shaped crystals suitable for X-ray analysis were obtained in 75% yield. A single crystal was isolated while suspended in mineral oil, mounted with the help of a trace of mineral oil on a Mitegen micromesh mount, and flash frozen to 100 K on the diffractometer.

Refinement
Reflection 0 0 1 was affected by the beam stop and was omitted from the refinement. All H atom positions were refined. Positions of carbon-bound H atoms were freely refined; O-bound H atoms were refined with the O-H distance restrained to 0.84 (2) Å. All Uiso(H) values were refined.
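As a quick arithmetic cross-check of the amounts quoted in the Experimental section above, the snippet below converts each reagent mass to millimoles and confirms the 1:2:2 molar ratio of cadmium nitrate tetrahydrate, hexamethylenetetramine and cinnamic acid. The formula weights are computed from standard atomic masses and are not taken from the paper.

```python
# Cross-check of the reagent amounts quoted in the Experimental section.
# Formula weights (g/mol) are from standard atomic masses, not from the paper.
reagents = {
    "Cd(NO3)2.4H2O":          (0.3084, 308.48),  # mass (g), formula weight (g/mol)
    "hexamethylenetetramine": (0.2802, 140.19),
    "cinnamic acid":          (0.2962, 148.16),
}

for name, (mass_g, fw) in reagents.items():
    mmol = 1000 * mass_g / fw
    print(f"{name:24s} {mass_g:.4f} g  ->  {mmol:.2f} mmol")

# Expected output (approximately):
#   Cd(NO3)2.4H2O            0.3084 g  ->  1.00 mmol
#   hexamethylenetetramine   0.2802 g  ->  2.00 mmol
#   cinnamic acid            0.2962 g  ->  2.00 mmol
```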
2016-05-12T22:15:10.714Z
2014-02-26T00:00:00.000
{ "year": 2014, "sha1": "ebaddc281d0b830f457c1a529a27ff9022494c7d", "oa_license": "CCBY", "oa_url": "http://journals.iucr.org/e/issues/2014/03/00/lh5688/lh5688.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ebaddc281d0b830f457c1a529a27ff9022494c7d", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
225095983
pes2o/s2orc
v3-fos-license
The utility of emergency department physical therapy and case management consultation in reducing hospital admissions

Abstract
Background A significant number of patients who present to the emergency department (ED) following a fall or with other injuries require evaluation by a physical therapist. Traditionally, once emergent conditions are excluded in the ED, these patients are admitted to the hospital for evaluation by a physical therapist to determine whether they should be transferred to a sub-acute rehabilitation facility, discharged, require services at home, or require further inpatient care. Case management is typically used in conjunction with a physical therapist to determine eligibility for recommended services and to aid in placement. Objective To evaluate the benefit of using ED-based physical therapist and case management services in lieu of routine hospital admission. Methods Retrospective, observational study of consecutive patients presenting to an urban, tertiary care academic medical center ED between December 1, 2017, and November 30, 2018, who had a physical therapist consult placed in the ED. We additionally evaluated which of these patients were placed into ED observation for physical therapist consultation, how many required case management, and ED disposition: discharged home from the ED or ED observation with or without services, placed in a rehabilitation facility, or admitted to the hospital. Results During the 12-month study period, 1296 patients (2.4% of the total seen in the ED) were assessed by a physical therapist. The mean age was 75.5 ± 15.2 years and 832 (64.2%) were female. Case management was involved in 91.8% of these cases. The final patient disposition was as follows: admission 24.3% (95% CI = 22.1–26.7%), home discharge with or without services 47.8% (95% CI = 45.1–50.5%), rehabilitation (rehab) setting 27.9% (95% CI = 25.6–30.4%). The median (interquartile range) time in observation was 13.1 (6.0–20.3), 9.9 (1.8–15.8), and 18.4 (14.1–24.8) hours for patients admitted, discharged home, or sent to rehabilitation, respectively (P < 0.001). Among the 979 patients discharged home or sent to rehabilitation, 17 (1.7%) returned to the ED within 72 hours and were ultimately admitted. Conclusion Given that the standard of care would otherwise be an admission to the hospital for 1 day or more for all patients requiring physical therapist consultation, an ED-based physical therapy and case management system serves as a viable method to substantially decrease hospital admissions and potentially reduce resource use, length of hospital stay, and cost both to patients and the health care system.

KEYWORDS case management, hospital admission reduction, observation units, pathways, physical therapy, rehabilitation placement

INTRODUCTION
Given the cost of health care, providing cost-effective high-quality care is increasingly necessary, often by streamlining existing services to decrease expenses. In particular, there is scrutiny on the part of insurers to reduce short-term hospitalizations. 1
Emergency departments are using observation units as efficient modalities for implementing clinical pathways that allow for more rapid evaluation, testing, treatment, and disposition of patients while avoiding inpatient hospital admission. [2][3][4] A significant number of patients who present to the ED may be unsafe to return home in their current state, whether it be from a fall, deconditioning, or illness, and could benefit from evaluation by a physical therapist. Traditionally, once emergent conditions requiring inpatient hospitalization and treatment were excluded in the ED, these patients were admitted to the hospital primarily for evaluation by a physical therapist. The physical therapist would then determine whether they would benefit from ongoing rehabilitation services at a rehabilitation facility, rehabilitation with home-based services, or whether they could be safely discharged back to their home. Case management is typically used in conjunction with physical therapy to determine eligibility for recommended services and to aid in placement. We instituted a novel ED-based physical therapist consult service to avoid hospital admission when possible. Because physical therapist evaluation typically occurs as an inpatient service, the use of an ED observation unit as a venue for physical therapist evaluation and subsequent placement into rehabilitation or provision of home rehabilitation services is not well described in the current literature. In this study, we present the results of an assessment of the initial outcomes of patients evaluated by ED-based physical therapists. The goal of this study was to demonstrate that this novel clinical pathway is safe, viable, and reduces hospital admission in this cohort of patients.

METHODS
We examined patients presenting to our ED between December 1, 2017, and November 30, 2018 (Table S1). To assess the safety of our intervention, we additionally identified patients who re-presented to the ED within 72 hours and examined their disposition during their re-presentation to ascertain whether these patients had been properly assessed and dispositioned during their initial presentation. A physical therapist was available as a consult service in the ED from 8 am-5 pm, 7 days a week. Depending on daily staffing availability, a dedicated physical therapist was assigned to the ED or the ED assignment was split between multiple floor physical therapists. ED consults were prioritized over inpatient evaluations because ED physical therapist evaluations constitute possible barriers to discharge. Consults were placed electronically via the ED electronic medical record with a time stamp, as well as a posted forum for physical therapists to communicate their recommendations to the team in real time (Figure 2). Case management services were available 7 am-10 pm Monday through Friday and 7 am-7 pm on weekends and holidays. When consults for physical therapy and case management were placed off-hours, this was noted on a shared electronic medical record space so that when these services returned, they were immediately able to see those patients waiting in the ED for their recommendations and professional services. Data were summarized overall and by disposition category.

RESULTS
In total, 1412 patient encounters were identified where a physical therapist consult was placed in the ED over the study period.
Of these, 108 physical therapist consults were not completed, and 8 consults were placed in error, leaving 1296 patients (2.4% of the total seen in the ED during the study period) to be included in the study (Figure 1; Figure 3). Data were further analyzed by chief complaint category subgroups.

Four patients who were discharged home without additional services re-presented to the ED within 72 hours and, among them, 3 were admitted. Among those discharged home with services, 33 re-presented and 13 were admitted. Of those discharged to rehabilitation facilities, 6 re-presented and 1 was admitted (Table 3). In total, 1.3% of patients discharged home without services re-presented and were admitted, 3.3% of patients discharged home with services re-presented and were admitted, and 0.3% of patients discharged to rehabilitation re-presented and were admitted. Chart review was performed on the 17 patients who re-presented within 72 hours and were ultimately admitted following discharge home, home with services, or to rehab. Six of the 17 (35%) were recommended for rehabilitation placement on their initial presentation but

FIGURE 3 Comparison of patients with a physical therapy consult placed in the ED across disposition categories

LIMITATIONS
There are a number of limitations to this study, including admissions that occurred despite physical therapist recommendations, inability to place patients in a timely manner, or as was necessitated by concurrent medical management. This study was performed in a state with comprehensive insurance coverage provided to a significant majority of patients, which may limit generalizability. Last, our follow-up data that help demonstrate the safety of this pathway are limited to return visits to our institution. It is possible that some patients went to other hospitals following discharge from our ED, but we believe that, given the limited adverse outcomes found in the sampling of bounce-back patients who returned to our hospital, we should expect similar outcomes in patients who may have sought subsequent care elsewhere.

DISCUSSION
A significant percentage of patients presenting to this ED (2.4%) ultimately received a physical therapist consult in the ED. These were patients deemed medically stable for discharge by an emergency physician but felt to be unsafe for discharge in their current functional state and were thought to require increased services. These patients were predominantly elderly, with the most common chief complaints categorized as falls, musculoskeletal, back pain, or neurologic in nature. In total, 748 patients were discharged either to a rehabilitation facility or home with additional services. We identified a total of 979 patients who avoided hospital admission using this clinical pathway, assuming that the patients included in this study would otherwise be admitted.

Recurrent falls are associated with increased mortality. 16 Physical therapist evaluation and intervention have been shown to decrease the number of falls and fall-related ED visits. 16,17 Physical therapist evaluation in the ED has been previously described in a variety of contexts. 9,18-26 Physical therapists are often able to provide more specific diagnoses to patients, spend more time on patient education, streamline outpatient follow-up by performing an initial evaluation, and provide patients with the expected symptom trajectory, instructions on activity modification, and home exercise techniques. 19,22 In particular, ED-based physical therapists can aid in safety evaluation and disposition or discharge planning.
18,19,23,24 In some instances, evaluation in the ED by physical therapists has been associated with decreased wait times and decreased lengths of stay. 19,21,22 The addition of physical therapists to the ED has been associated with increased patient satisfaction. 18,19,22,27 physical therapist inclusion in the ED is also associated with high levels of satisfaction among emergency physicians and staff. 19,21,25 To our knowledge, this study is the first of its kind to describe a robust clinical pathway where physical therapist evaluation in an observation unit can be used to expedite rehabilitation placement and avoid hospital admissions. CONCLUSIONS Given that the standard of care in many institutions would be an admission to the hospital for all patients requiring physical therapist and case management consultation, we believe that an ED-based physical therapy and case management system serves as a viable method to substantially decrease hospital admissions and potentially reduce cost both to patients and the health care system. AUTHOR CONTRIBUTIONS KLG and MSB wrote the first draft. All authors reviewed and edited multiple revisions. KLG and MSB take responsibility for the paper as a whole.
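A side note on the disposition statistics quoted in the Abstract and Results above: the percentages carry 95% confidence intervals, although the interval method is not stated. The sketch below shows how intervals of the reported magnitude can be reproduced with a standard Wilson score interval, using the overall denominator of 1296 patients; the event counts are back-calculated from the quoted percentages and are therefore approximations rather than figures taken directly from the study tables.

```python
from statsmodels.stats.proportion import proportion_confint

n = 1296  # patients with a completed ED physical therapist consult

# Approximate event counts back-calculated from the reported percentages.
dispositions = {
    "admitted":                round(0.243 * n),  # ~24.3%
    "home (with/without svc)": round(0.478 * n),  # ~47.8%
    "rehabilitation":          round(0.279 * n),  # ~27.9%
}

for label, k in dispositions.items():
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"{label:24s} {k:4d}/{n} = {k/n:6.1%}  (95% CI {lo:.1%}-{hi:.1%})")
```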
2020-07-02T10:03:56.958Z
2020-06-26T00:00:00.000
{ "year": 2020, "sha1": "c862cba868b15dc0614bfa7fcab8d3f549368612", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/emp2.12075", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "d45072a6189b8f2b156bbbd75b667e0113b004f0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
201357479
pes2o/s2orc
v3-fos-license
CAN THE USE OF ENGLISH AS A MEDIUM OF INSTRUCTION PROMOTE A MORE INCLUSIVE AND EQUITABLE HIGHER EDUCATION IN BRAZIL? In this paper, we present the status quo and challenges regarding the use of additional languages as a medium of instruction in Brazilian higher education. We begin by contextualizing the importance of the process of internationalization at home (IaH) and additional languages in higher education. Next, the teaching of additional languages in Brazil, which has been until very recently relegated to the private sector and accessible only to an elite, is introduced. We then provide an overview of the present state of affairs of English as a Medium of Instruction (EMI) in the country, which is still in its infancy. We move on to describe different ways in which language and content can be integrated in higher education, as well as how EMI can be introduced in disciplinary courses. We finish concluding that EMI can maximize the learning of academic English by Brazilian students and content instructors, as well as encourage a more international higher education and balanced academic mobility by allowing foreign students to study in Brazil while preserving and even increasing the international interest in the Portuguese language. In a country located in the periphery of knowledge production and dissemination, we understand that the adoption of EMI can potentially foster the inclusion of more Brazilians in the global academic and research scenario. It gives them access to the knowledge produced internationally and, at the same time, enables the research produced in the country to be disseminated globally. Introduction The contribution of higher education (HE) to poverty eradication, sustainable development, and global progress has been highlighted in official documents issued by the United Nations, such as the Millennium Development Goals (MDG) and Education for All (EFA). In the 2009 World Conference on Higher Education organized by UNESCO, participants prepared an official announcement based on the results and recommendations of the six regional conferences previously held 1 . The guiding principles of the announcement were: social responsibility of HE; access, equity and quality; internationalization, regionalization and globalization; and learning, research and innovation (UNESCO, 2009). With regards to internationalization, the document specified the following items, among others: As a result of the United Nations and UNESCO guidelines, in the last decades, public policies for post-secondary education in Brazil have focused on achieving an inclusive university of excellence, involving actions towards democratization of access, improvement of faculty qualification and expansion of graduate programs, as shown by goals 12, 13 and 14 of the National Education Plan (NEP) 2 (Ministério da Educação e Cultura [MEC], 2014a) (Sarmento, Abreu-e-Lima, & Moraes, 2016). Public HE has always been entirely free in the country and until 2016 there were substantial investments in educational policies to promote inclusion, excellence and internationalization (ANDIFES, 2012;MEC, 2014a). In 2007, the implementation of a program focused on the restructuring of federal universities (REUNI) triggered an unprecedented expansion in public HE. 
Based on the program, the Association of Directors of Federal Higher Education Institutions (ANDIFES) designed guidelines for the expansion, excellence and internationalization of federal universities Internationalization at Home and Additional Languages As the seminal work of Knight (2008) demonstrates, different driving forces are involved in the internationalization of educational institutions. For instance, in North American and European countries, social and academic rationales do not always seem to be the main factors, given the strong commercial and market features of HE (Kubota, 2009). In these contexts, attracting foreign students to pay much higher tuition fees than local students has been an explicit and major goal of universities (Garson, 2016) and, consequently, rationales as generation of revenue, the search for financial incentives, and the positioning in international rankings have become preponderant. In Brazil, however, the process of internationalization takes on a distinct dimension since no tuition is charged in public HE, either for Brazilians or for foreigners. Tuition-free education allows for the emphasis to be placed on the establishment of equitable networks and partnerships between different nations, the qualification of knowledge production in the country, the pursuit of balanced inbound and outbound academic mobility, and more equal access to international practices. Despite the remarkable progress in recent years, especially due to Science without Borders (SwB) program (MCT, 2011), internationalization is still at an incipient stage in Brazil. Traditionally, academic mobility -defined as a period of study, teaching and/or research in a country other than someone's home country -has been perceived as the only or the most important instrument of internationalizing HE. Over the last two decades, however, academic mobility has started to be part of a broader internationalization process, due to the development of the Bologna process that led to the signature of the Bologna Declaration by 29 countries 4 (European Union, 2009). This is a collective effort of public authorities, universities, stakeholder associations, employers, international agencies and organisations, including the European Commission, to strengthen the quality assurance of European education and to simplify the recognition of qualifications and periods of study among different countries (European Union, 2009). However, access to international experiences is only available to a small minority of those involved in post-secondary education. Recent data from the Organisation for Economic Cooperation and Development expects only 3% of students, faculty and staff to experience academic mobility by 2025 (OECD, 2016). There is a significant increase in imbalance in academic mobility among more and less economically developed countries, with very few accounting for the majority of inbound and outbound flow of students (Egron-Polak, 2017). The United States, the United Kingdom and Canada are countries that receive the greatest number of foreign students, while Brazil and other nations in the Southern hemisphere send far more students abroad. In Brazil, 7,305,977 students were enrolled in undergraduate programs in 2013 (Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira [INEP], 2013), while the Science without Borders (SwB) program, the most ambitious and comprehensive mobility program ever, offered around 73,000 outbound undergraduate scholarships from 2011 to 2015 5 . 
These numbers show that even the most generous outbound mobility program in history accounted for less than 1% of the country's undergraduate students' population. Moreover, SwB only managed to attract a little over 1,000 students from other countries unveiling this unfair imbalance in the flow of students, with developing countries having great difficulty in attracting students from developed ones. Now, with the end of SwB, students' mobility in Brazil has decreased exponentially, and the situation does not seem likely to change anytime soon. Also, Brazilian universities have almost no international students. In 2018, there were over 8 million students enrolled in undergraduate programs scattered over 2,400 HE institutions, and only about 10,000 international students, being most of them from Spanish speaking neighbouring countries (Gimenez, Sarmento, Archanjo, Zicman, & Finardi, 2018). Thus, Portuguese 6 is (nearly) the sole language of instruction, as it will be reported below. As a country of continental dimensions and the only one in the Global South whose main language is Portuguese, Brazil has been left in a kind of academic (and linguistic) isolation. It is not uncommon to encounter professors and students who came back from a mobility program abroad resenting the fact that they do not have opportunities to keep up with the language of the host country, in many cases English, because of lack of opportunities to use the AL. Hence, if internationalization is a high priority for policy makers and HE institutions, mainly in developing countries, then it must go beyond the system of prioritising only academic mobility and shift to one which benefits a wider audience. Authors such as Teekens (2007), de Wit, Hunter, Howard, & Egron-Polak (2015), and Beelen and Jones (2015) see the development of a process called Internationalization at Home (IaH) as a counteract to the increased emphasis on academic mobility and an alternative for a more inclusive internationalization process. IaH emphasizes the intercultural and international dimension in the teaching and learning processes and research, extracurricular activities, and the integration of foreign students and teachers into local academic life (Knight, 2008). More recently, de Wit et al. (2015) have added as the purpose of IaH "to enhance the quality of education and research for all students and to make a meaningful contribution to society" (p.29). Likewise, Beelen and Jones (2015) understand that IaH refers to "the purposeful integration of international and intercultural dimensions into the formal and informal curriculum for all students in domestic learning environments" (p. 69). Therefore, IaH activities do not necessarily involve only a classroom or a university campus, but also the local community. In fact, IaH has been working as a new paradigm for the development of strategic institutional internationalization policies, as it encourages the respect for diversity while developing people "with a cosmopolitan mindset, with communication skills between and across cultures, at home" (Teekens, 2007, p. 6). Within IaH processes, Additional Languages (AL) play a key role in giving access to students and teachers to international practices while in their own countries and institutions. In a globalized and interconnected world, ALs have become a multifunctional and complex phenomenon which allows individuals to perform actions and connect with each other, with communities and with different cultures (Modern Language Association, 2007). 
Using 'additional' rather than 'foreign' considers the contributions of adding a language to the cultural and linguistic repertoire that one already has. Aligning with the guidelines of the International Bureau of Education, an organization associated with UNESCO, 'Additional' applies to all, except, of course, the first language learned. An additional language, moreover, may not be foreign since many people in their country may ordinarily speak it. The term 'foreign' can, moreover, suggest strange, exotic or, perhaps, alien-all undesirable connotations. Our choice of the term 'additional' underscores our belief that additional languages are not necessarily inferior nor superior nor a replacement for a student's first language. (Judd, Tan & Walberg, 2001, p. 6) Official documents produced by international organizations (European Parliament, 2006;European Union, 2012;UNESCO, 2014) have acknowledged the importance of multilingualism and ALs in students' educational background 7 . In 2014, the Executive Board of UNESCO declared that multilingualism encourages cooperation among nations through dialogue, tolerance and respect for cultural diversity, recognizing the need to implement LEPs for the integration of young people into international exchanges (UNESCO, 2014). The Member States agreed to promote the teaching of at least two additional languages, in addition to the main language of instruction, to ensure the linguistic and intercultural quality of education and to facilitate academic and professional mobility. In Brazil, the teaching of additional languages has been relegated to the private sector, with over 6,000 private language courses in the country and with an annual increase of 15% (Windle & Nogueira, 2015). There are different types of institutions, covering all price ranges, hence, catering for different social classes, but not all of them. Disadvantaged students may only have access to English classes in regular schools, which, in many scenarios would be good enough, but not in Brazil and causes are manifold. First of all, there is a belief that additional languages are not to be learned in the official regular schools, making teachers demotivated from the start. Second, classes are large and there is usually only one hour of English class a week, making it impossible to acquire fluency. Also, public school teachers are underpaid in the country and, to counterbalance the low salaries, have to take more than one job and work very long hours, leaving no room for professional development. This failure of the regular school system in providing quality AL instruction to all students has been well documented in a book by Lima (2011) entitled "English teaching does not work in public schools? Multiple perspectives". In fact, instruction is a problem not only considering ALs, but first language as well (in the Brazilian case, Portuguese). It is common to hear from working-class students that they are "ashamed to talk to people who have studied, because they don't speak Portuguese correctly" (Bartlett, 2007, p. 554). As Windle and Nogueira (2015) point out, these students believe that if "I don't even speak Portuguese properly, how can I learn English?" (p.188). As a consequence, the educational practices of ruling-class families "have been marked by a heavy investment in learning English and in international travel to 'first world' destinations for educational purposes since the 1990s." (Windle & Nogueira, 2015, p.176). 
In 2017 (as well as in previous years), Brazil was the top source country of students in private language courses in Canada, followed by Japan. This has made Brazil one of the target markets for Languages Canada (Languages Canada, 2017). Simon Fraser University Educational Review Vol. 12 No. 2 Summer 2019 / sfuedreview.org In order to make amends for the lack of efficient Language Education Policies 8 in primary and secondary schools, the government launched an AL program in 2012 (MEC, 2012a). The program was first called English without Borders and renamed Languages without Borders (LwB) in 2014 (Sarmento, Abreu-e-Lima, & Moraes, 2016), comprising six other AL. LwB originated from demands of the Science without Borders (SwB) program and caused an unprecedented change in the teaching of ALs in the country's HE system, offering free tuition for distance and face-to-face courses, as well as large-scale administration of proficiency tests to students, faculty and staff of public post-secondary institutions (MEC, 2012b). Along with LwB actions, a relatively new phenomenon started to take place in Brazilian HEIs: classes in which the medium of instruction is not Portuguese, but an AL (in most cases English). We acknowledge that the supremacy of English has had negative effects in the status of other home and minority languages around the globe (Kubota, 2018). However, we will demonstrate that the Brazilian context has its own peculiarities, and the use of English can actually have a positive impact when fighting linguistic and social inclusion in the country. Content-based instruction The importance of ALs, especially English, in post-secondary education is not a novelty. Universities around the globe, including the ones located in English-speaking countries have offered English for Academic Purposes (EAP) courses for quite some time: The field of EAP has blossomed over the past two decades, largely due to the increase in students studying at English-medium universities, as well as the increase of English in scholarly publication, though not without controversy (Kostka & Olmstead-Wang, 2014, p.2) EAP is focused solely on the teaching and learning of academic language, more specifically, but not exclusively, on reading and writing skills and courses are taught by EAP tutors, or English teachers who specialize in academic language. This is usually not discipline specific, i.e., students from all areas of knowledge may seat in the same class. However, in the last few years, content and language integrated programs have increased in popularity in non-English dominant countries. Studies in the field of Education and Applied Linguistics use different terms to refer to approaches related to the teaching and learning of content through English or another AL, such as Content and Language Integrated Learning (CLIL) -which is considered as an umbrella term (Airey, 2016), Integrated Content Learning (ICL), Content Based Learning (CBL), Immersion Programs (IP), and for English in particular, English as a Medium of instruction (EMI). These terms allude to relatively similar models of content-based instruction which do not have exact criteria of distinction, nor are they based on different theories of learning, as Dalton-Puffer recalls (2012). Airey (2016) proposes a continuum of content and language approaches, as shown in Figure 2. On the left end of the diagram are EAP courses, which, as already stated, are focused only on languages. 
Between the two extremes, we have CLIL courses, which should have both language and content objectives. ICL and CBL would be located somewhere near CLIL, whether Immersion Programs would lean towards EMI. EMI courses have "content-related learning outcomes in their syllabuses" (Airey, 2016, p. 73), and language learning would be a byproduct, and not the main goal. However, Airey (2016) affirms that language and content cannot be separated in this way as they are totally connected. On top of these theoretical differences, it is the tendency to use the CLIL label to refer to the teaching of content and language in primary/secondary education, whereas EMI has been favored in HE in non-English dominant contexts. EMI has expanded in Europe since the beginning of the Bologna Process in the 1990s. As a result, there has been a rapid increase in the number of programs from different European universities that adopted English as the language of instruction. Authors such as Dalton-Puffer (2012) and Macaro Akincioglu, and Dearden (2016) indicate the rapid global growth of EMI in the last decade. According to Bradford (2016), between 2001 and 2014 the number of EMI graduate and undergraduate programs around the world increased 1000%. In order to map information around provision of EMI across a wide variety of programmes, courses and additional activities offered by Brazilian HEIs, a survey was sent to 270 9 HEI (Gimenez et al., 2018). There were 84 responses to the questionnaire and 66 of them reported having some EMI activity in their campus. A more refined look at the data can be seen in table 2. 9 The 270 institutions are members of FAUBAI, the Higher Education Internationalization Association in Brazil. These are also the most important and prestigious HEI in the country, making it a significant sample. Simon Fraser University Educational Review Vol. 12 No. 2 Summer 2019 / sfuedreview.org Table 2. Programmes, courses and activities in English and Portuguese for Foreigners courses. Retrieved from Gimenez et al. (2018). Table 2 presents the type of activity offered in English 10 . What stands out is that only one full undergraduate program and five full graduate programs are offered in the country. With regards to courses, there were only 235 undergraduate and 406 graduate ones being taught in English. In spite of the low numbers, there was a substantial increase when compared to 2016, when there were only 197 undergraduate and 44 graduate courses in English (Gimenez et al., 2018). Overall, these results indicate that although EMI has been growing in Brazil, it is still in its embryonic stage. But why would EMI be important in a developing country such as Brazil? Although EMI 11 is mostly concerned with content, Muñoz (2012) points that the greater use of English contributes to establishing an environment that, indirectly, leads to language proficiency development. Individuals construct their dialogical relations in socially coconstructed practices using language (Clark, 1996) and, thus, English learning is grounded in interaction. The adoption of EMI can bring considerable linguistic benefits because instructors and students can take part in authentic practices that require the use of English. This leads to an improvement in their proficiency for various practical purposes, such as the participation in academic events, Massive Open Online Courses (MOOCs), and exchanges with international research partners. 
Authors such as Ammon (2010), Crystal (2012), De Swaan (2001), Montgomery (2013), Lillis and Curry (2010), and Solovova, Santos, and Verissimo (2018) have acknowledged that over the last decades the English language has achieved the status of global scientific and academic lingua franca 12 . English has become a key part of a myriad of knowledge production and practices in HE, acting as a key factor for the internationalization of the research and curriculum. For this reason, when used for teaching and learning in specific fields of study or in content-based programs, many factors need to be taken into consideration. Among these factors, instructors and students' different home languages, the language of the references adopted in the course, as well as the language used to interact outside the classroom. It is, nonetheless, crucial to discuss what is meant by "language of instruction", since we believe in a paradigm where knowledge is not "transmitted" to students. Therefore, both teaching and learning happen through multiple interactions and collaborations inside and outside the classroom. Considering this, would "medium of instruction" be: (1) the language(s) spoken by the teacher, (2) the language(s) of the references, (3) the language(s) students use to interact with 10 Data were also collected considering other ALs, however, these have not been made available yet. 11 This paper focuses on the use of English as a Medium of Instruction. However, instruction in ALs other than English is also taking place in Brazil. In a recent seminar about initiatives to integrate language and content in tertiary education, a professor from the Federal University of Minas Gerais (UFMG) shared his experience offering a Philosophy course in German, and classes in French have also taken place in the same institution (see https://www.ufmg.br/dri/encontro-sobre-ingles-como-meio-de-instrucao-para-docentes-da-ufmg/). 12 The supremacy of English over other languages is historically related to the hegemonic power of Englishspeaking countries, as several authors indicate (Ammon, 2010, Phillipson, 2015. Such questions will not be explored here, since they are beyond the scope of this study. Table 3. Potential Configurations of AL as a medium of instruction. Reprinted from Baumvol & Sarmento (2016, p. 75-76). As shown in Configurations I and II of Table 3, in Brazil some courses commonly adopt references in English while the teacher and the students, most of the time, speak Portuguese in class. In these situations, students may end up choosing to take the tests and/or do assignments in English (configuration II). Also, in fields such as Engineering, Medicine and Chemistry, the most important scientific journals and conferences are entirely in English. It is reported that in Brazilian HE institutions the following educational-relationships are also taking place in both undergraduate and graduate courses: (A) a Brazilian instructor teaching in Portuguese for Brazilian and foreign students (Configuration III), (B) a foreign instructor teaching classes in English for Brazilian students (Configuration IV), (C) a Brazilian instructor teaching classes in English and students taking tests and/or do assignments in Portuguese (Configuration IX), or alternate between Portuguese and English (Configuration X), or even entirely in English (Configuration XI), and (D) a foreign instructor whose first language is not Portuguese teaches classes in Portuguese for Brazilian students (Configuration XII). 
Configuration III, in particular, has become increasingly popular in HE due to an increase in academic mobility and in the number of students whose first language is not Portuguese, which includes indigenous 15 and minority languages. For these students, Portuguese becomes the AL, while it remains L1 to the instructor and to the other Brazilian students. Therefore, a "gradation" regarding the presence of English (or any other AL) in the teaching and learning process can be practiced in HE. It is not a binary issue of whether "there is" or "there is not" use of EMI, but rather a variety of contexts in which English can be used by more (or fewer) participants in more (or fewer) contexts and means within the same classroom. Conclusion Studies on integrated content and language instruction in non-English-dominant countries have traditionally been conducted in school settings, and they still represent most investigations 15 Major affirmative actions in recent years have been giving indigenous students access to HE in the country. However, it is important to point out that Gimenez et al. (2018) found that only 0.3% of undergraduate students, 2% of master's students and 4% of PhD candidates are foreigners. in the field (Dalton-Puffer, Llinares, Lorenzo, & Nikula 2014, Llinares & Morton, 2017, Nikula, Dalton-Puffer & Llinares, 2013. More recently, Asia (Byun et al., 2011;Hu, 2014, Li & Ruan, 2015, Scandinavia (Airey, 2012, Jensen, Denver, Mees & Werther, 2013, Ljsoland, 2011, Söderlundh, 2013 and Spain (Dafouz & Sanchez, 2013, Vázquez & Gastaud, 2013, where EMI practices in HE have been increasingly widespread, have emerged as important research scenarios. Nevertheless, there is still a need to conduct further studies approaching practical issues faced by students and instructors in academic and research activities conducted in English in those contexts. In Brazil, as shown in Gimenez et al (2018), EMI has been adopted through isolated initiatives of faculty members. Frequently, these instructors have extensive expertise in their fields and insufficient opportunities for pedagogical training and teacher education throughout their careers on issues such as the relationship between content and language. Arnó-Macià and Mancho-Barés (2015), Bonnet (2012), Fortanet (2012), Murray and Nallaya (2016) and Vázquez (2014) examined successful collaborative experiences between instructors from different disciplines and language instructors in non-English-dominant contexts. In the Brazilian context, establishing these partnerships could maximize the learning of content and language by both students and content instructors, who will use English for a myriad of practical purposes while "at home". Undoubtedly, offering classes to improve general English literacy skills, as done in the Languages without Borders program, is essential to broaden the scope of EMI in a non-English dominant context like Brazil and to ensure that EMI does not reinforce exclusion and inequality due to lack of language proficiency. However, it is our understanding that not providing more opportunities to use English and other ALs in Brazilian universities for students who do not have the means to take an English course abroad not only perpetuates social inequality but also helps to produce it. As discussed above, elite families make sure their children have this important symbolic and cultural capital (Bourdieu, 1986) which will enable them to function in the contemporary society. 
According to a teacher who had worked in a number of language courses, "The difference between upper-class students and poor students, is that English is already a part of the reality for the upper-class students. For the working class students, it is just a dream." (Windle & Nogueira, 2014, p. 188) Thus, as applied linguists, it is indeed our responsibility to counterpose neoliberal market forces which overemphasize the importance of the English language to the detriment of others. Nevertheless, this cannot be done at the expense of working-class students, who have the right to learn ALs as much as those who come from wealthy families. Finally, EMI can encourage more balanced academic mobility, since institutions from non-English-dominant countries will be more prepared to receive students from different geolinguistic regions of the globe. We find it crucial that Brazilian post-secondary students and faculty understand and express themselves in English, while preserving varied practices in Portuguese and in minority languages. A nation aiming to play a prominent role in the global scenario must have its scientific, academic and cultural results shared with a wider audience, and the English language would allow that. At the same time, Brazilians need direct and full access to international knowledge, which is predominantly produced and disseminated in English. Those who master this global language are much better prepared even to challenge its supremacy. This will allow the process of internationalization to be more aligned with the guiding principles and purposes established by international and national organizations and documents.
Equine Dermatophytosis: A Survey of Its Occurrence and Species Distribution among Horses in Kaduna State, Nigeria This study was designed to determine the occurrence and species distribution of dermatophyte from cutaneous skin lesions of horses in Kaduna State, Nigeria. A total of 102 skin scrapings were collected from 102 horses with skin lesions. Mycological studies were carried out using conventional techniques. Dermatophytes were isolated from 18 (17.6%) of the 102 samples collected. The 18 dermatophytes were distributed into 10 different species belonging to Microsporum (n = 5) and Trichophyton (n = 5) genera. T. verrucosum (n = 4) was the most predominant species isolated followed by M. equinum (n = 3), T. vanbreuseghemii (n = 2), M. gypseum (n = 2), and M. canis (n = 2). Others include M. fulvum (n = 2), T. mentagrophytes (n = 1), T. equinum (n = 1), T. soudanense (n = 1), and M. gallinae (n = 1). The present study reveals the occurrence of dermatophytes in cutaneous skin lesions of horses in Kaduna State, Nigeria. In addition for the first time in this environment the anthropophilic dermatophyte T. soudanense was isolated from horses. These findings have great economic, veterinary, and public health significance as they relate to the cost of treatment and dissemination of zoonotic dermatophytes. Introduction Dermatophytes are cited as the most frequent causes of dermatological problems in domestic animals [1]. They belong to the class Ascomycetes, which are normally located in the stratum corneum, hair shaft, or hoof, where they invade [2]. The distribution of dermatomycosis, their aetiological agents, and the predominating anatomical infection patterns vary with geographical location, age of the animal, and environmental and cultural factors [6,7]. Contagiousness among animal populations, high cost of treatment, difficulty of control measures, and the public health consequences of animal (especially horses) ringworm explain their great importance [4]. The high resistance of the dermatophyte arthroconidia in the environment, colonization of host species, and the confinement of animals in breeding areas are factors that also influence the endemicity of dermatophytosis [4]. Lesions arising from dermatophytosis have many adverse effects besides the discomfort and unsightly nuisance (esthetic) [8]. They also prevent the horses from working and interferes with their use in polo, racing, and riding because the horse will not be allowed at shows or other events (because it can transmit it to other horses), thus decreasing the cost value of the horse. Equine dermatophytosis also has considerable zoonotic importance as animals serve as reservoirs for the zoophilic dermatophytes (especially those caused by members of the Microsporum spp. and Trichophyton genera) and their infections [8,9]. Zoophilic dermatophytes such as T. verrucosum, M. canis, T. mentagrophytes, M. gypseum, and T. equinum have been reported as important causes of human tinea capitis and tinea corporis in many areas of the world [10]. It has been suggested that the increasing number of reports of infections due to zoophilic dermatophytes in humans is directly linked to the persistence of this fungus in animal reservoirs [10]. Therefore knowledge of their role in cutaneous skin lesions and identification of the species may play an important role in control of outbreaks by establishing the source of infection and thereby plans to manage and control dermatophytosis. 
Although dermatophytosis is worldwide in distribution, it is more prevalent in hot humid climates than in cold dry regions [6,11]. Despite the high prevalence of dermatophytoses in Nigeria, few studies have been carried out to identify the fungal species causing cutaneous lesions in horses and their prevalence [1]. Equine dermatophytosis has received little attention in Nigeria, especially in the northern part of the country where a large population of horses is located and used for festivities (traditional durbar), polo, racing, and pleasure riding [6,12,13]. As a result, an actual prevalence figure for tinea in horses is unknown in Kaduna State and Nigeria as a whole. There is therefore an urgent need to update our knowledge of the epidemiology of ringworm infection in domestic animals. The aim of this study was to investigate the occurrence and species distribution of dermatophytes from horses with cutaneous lesions suggestive of dermatophytosis in Kaduna State, Nigeria. Sample Collection. Samples from 102 horses with cutaneous lesions suggestive of dermatophytosis were collected from March to September 2014. Skin scrapings and hair samples were collected from the margins of the lesions according to the method of Elewski [14]. Where generalized lesions covering an anatomic location were encountered, multiple (3-4) samples were collected from the different parts of the lesion and pooled together as one sample. The lesions were also photographed with the aid of a digital camera (Samsung WB30F). Samples were placed in coloured (brown) envelopes and transported as dry packets to the Diagnostic Laboratory of the Department of Veterinary Microbiology, Ahmadu Bello University, Zaria, for cultural isolation and identification of dermatophytes. Culture and Isolation of Dermatophytes. Sabouraud dextrose agar (SDA) (Oxoid, UK) supplemented with chloramphenicol 40 mg/L (Fluka, UK), cycloheximide 500 mg/L (Sigma, Germany), and nicotinic acid (100 µg/mL) was used for primary isolation [15]. Culture was carried out on agar plates. Another set of SDA (vitamin-free) was seeded concurrently. The scabs and hair collected were seeded on the medium and the plates incubated at room temperature for 1-4 weeks. The plates were checked for visible fungal growth every other day. Identification of Isolates. Pure fungal growths of suspected dermatophytes from the SDA culture plates were subcultured onto potato dextrose agar (PDA) plates to facilitate sporulation [16] and incubated at room temperature for 1-4 weeks. The fungi were identified based on their colonial morphology on the agar plates and microscopic characteristics (after staining with lactophenol cotton blue and viewing using ×10 and ×40 magnification) with the aid of a fungal colour atlas [17]. Slide culture preparations were also made for isolates that could not be identified from stained slides of the PDA cultures. The hair perforation test was used as a diagnostic aid for some isolates [16]. Characteristics used for the identification of dermatophytes in the study included colony pigment, texture, growth rate, and morphological features such as macroconidia, microconidia, and nodular organs, as well as nutritional characteristics such as amino acid requirements and growth on 5% salt-supplemented SDA to differentiate T. mentagrophytes from other Trichophyton species [18,19]. Results The study examined 102 horses, comprising 53 males and 49 females and aged between six months and 20 years, with cutaneous skin lesions suggestive of dermatophytosis.
Out of these 102 horses sampled, 18 (17.6%) of the samples were positive for dermatophytes. Majority (33.3%) of the dermatophytes were isolated from the saddle area (Table 1). The dermatophytes were distributed across two genera Microsporum and Trichophyton and 10 different species (Table 2). Trichophyton verrucosum was the most commonly occurring dermatophytes with a frequency of four and occurred mostly on the limbs and rump, with areas of inflammation (kerion) (Figures 1-10). This was followed closely by Microsporum equinum (Figure 1) which was isolated from lesions on the saddle, flank, and girth areas (pressure areas). T. Sudanense (Figure 7), an anthropophilic dermatophyte, was isolated from the girth area of a horse. Lesions were found to be areas of patchy alopecia. Lesions of dermatophytosis and isolates were found on the limbs and saddle areas (5 isolates each) followed by the flanks of horses (2 isolates) while the least was on the head and rump (1 isolate each) ( Table 2). Discussion Lesions suggestive of dermatophytosis in this study were areas of scaling, crusting, and alopecia with some kerion formation as dermatophytes are known to digest the skin, hair, and hoof of animals as a source of carbon using proteolytic and lipolytic enzymes [2]. The lesions were found to occur mostly on the limbs and pressure areas, which is in agreement with other authors [20]. The annular and coalesced lesions expanding centrifugally and losing their circular appearance observed on some horses have been reported to be the characteristic of Trichophyton infection in horses [21]. Dermatophytes are one of the commonest skin diseases affecting horses [22]. Species of dermatophytes belonging to the Trichophyton and Microsporum genera were the major dermatophytes detected in this study. This finding is in agreement with the reports of previous studies that dermatophytes in horses are majorly caused by members of these 2 genera [22]. Similar species of dermatophytes with the exception of T. soudanense detected in this study have also been isolated from horses in other parts of the world [23]. Dermatophytes from the three ecological groups were isolated in this study, M. gypseum, M. fulvum, and T. vanbreuseghemii which are geophilic dermatophytes were isolated from different parts of the body and may have infected the horses directly from the soil and through spores infested fomites as they were all housed in different stables; another source of infection could have been asymptomatic carriers in the stables. T. verrucosum and M. equinum were the most common dermatophytes detected in this study a finding that is consistent with the reports that these agents are the common cause of equine dermatophytosis [22]. The zoophilic T. verrucosum could have been contracted from large herds of cattle that can be found in this region that usually share pasture with the horses [1]. T. mentagrophytes isolated from the limb of a horse could have been contracted from rodents that have free rein of the stables and M. canis from dogs, cats, or any other domestic animals as they have been reported to be the most common dermatophyte encountered in domestic animals in the region [1]. T. soudanense, an anthropophilic dermatophyte which is believed to be strictly human pathogen, was encountered in this study in a horse with extensive inflammation extending from the flank to the ventrum of the horse. This could be attributed to the close contact that exists between these horses and humans. 
The close contact that exists during grooming, riding, and exercising of the horse may have predisposed it to infection with the anthropophilic dermatophyte. This dermatophyte has previously been isolated from prepubescent children in Nigeria by Nweze [24]. In addition, other dermatophytes detected in this study, such as M. gypseum, M. canis, T. verrucosum, T. equinum, and T. mentagrophytes, have been isolated from cases of tinea capitis and tinea corporis among humans in Nigeria, indicating the zoonotic potential of the prevalent dermatophytes [25][26][27]. The close contact that exists between humans and these horses, especially during riding, grooming, and sporting events, may result in transmission of these zoophilic dermatophytes from the horses to humans. The detection of dermatophytes in only 17.6% of the horses suggests a role of other agents and factors in the observed cutaneous skin lesions. Poor nutrition and management, infectious conditions such as helminthosis, Staphylococcus aureus infection, sweat rash produced by bacterial infection of the hair follicles, allergy, and pruritus can produce cutaneous lesions similar to dermatophytosis and may account for some of the skin lesions observed [28]. In conclusion, dermatophytes belonging to two genera (Microsporum and Trichophyton) were isolated from 17.6% (18/102) of the horses with cutaneous skin lesions suggestive of dermatophytosis. T. verrucosum was the most commonly occurring dermatophyte, followed by M. equinum. T. soudanense, a human dermatophyte, was also isolated from one of the horses, a finding that suggests a potential public health concern due to the degree of close contact between man and animals in this part of the world. Hence, it is recommended that a wide-scale study encompassing different species of domestic animals in different parts of the country be carried out, as this will provide more information on the role of dermatophytes in the skin lesions of animals in this environment.
Research of the Effect of Tourism Economic Contact on the Efficiency of the Tourism Industry: Following regional tourism cooperation, the promotion of balanced sustainable development has begun to play a vital role in the tourism industry. Using the West Coast of the Strait urban agglomeration, China, as an example, this study uses a data envelopment analysis (DEA) to analyze the nonlinear relationship between tourism economic contact intensity and tourism industry efficiency by constructing a mixed effect model. The results show the following: (1) In the early stage of regional tourism cooperation, the efficiency of the tourism industry will decrease with an increase in the intensity of tourism economic contact. As regional cooperation tends towards a stable stage, the efficiency of the tourism industry will continue to increase with the strengthening of the intensity of tourism economic contact. (2) The regional economic level has a negative effect on the efficiency of the tourism industry. The urbanization level has a positive effect on the efficiency of the tourism industry. (3) The level of opening up and transportation development in the region will not only bring in tourism resources or tourists, but also lead them to flow out. They have no significant impacts on the efficiency of the tourism industry. Introduction The tourism industry has gradually become one of the most promising and fastest-growing fields in global economic development, and is an important part of a country's socioeconomic system [1]. China is the largest tourism market and international tourism consumption country in the world. As the strategic pillar industry of its national economic development, the tourism industry has been fully integrated into its national development strategy. According to the China Tourism Academy, China's tourism industry contributed 11.05% to GDP and 10.31% to national employment in 2019 [2]. With the rapid development of traffic and information technology levels, the time and distance between regions decreased, and the inter-regional tourism economy is more closely linked [3]. Tourism cooperation has been strengthened and contributes to the rapid and stable development of the tourism industry [3]. The strengthening of tourism economic connections has significantly strengthened the mobility of related tourism elements between regions, and the spatial configuration and integration form of tourism elements are constantly evolving [4]. These effects are beneficial to the optimization of the spatial structure of the tourism economy to a certain extent, and further enhance the economic growth of regional tourism [5], which has a crucial impact on the high-quality and sustainable development of tourism. Therefore, research on the relationship between tourism economic contact and the efficiency of the tourism industry is of great significance. Literature Review Scholars have previously performed relevant studies on the economic contact of tourism. Christaller (1964) explored the positive significance of cities for their direct hinterland economy based on the "Central Place Theory" proposed by the German geographer Christaller and the economist August Lösch in the 1930s and 1940s [15]. With the rapid development of industrialization and the trend of urban agglomerations gradually spreading along the outer edge, the research scope of tourism economic contact has also expanded from cities to urban agglomerations.
The Spatial Interaction theory proposed by Ullman laid the foundation for subsequent research on the tourism economic contact between cities [16], and the gradually developed Core-brim and Space Spread theories have also provided certain theoretical support for the deepening of relevant research [17]. Relevant research has mainly focused on the spatial structure and the characteristic of economic contact, the correlation between economic contact and transportation, the measurement of urban agglomeration tourism economic contact, the economic contact and spatial fractal, the spatial structure and spatial development model of tourism economic contact, the coupling development of tourism economic contact, etc. Scholars have made efforts in different aspects of these studies. Garza (1999) explored the evolution of the spatial structure of the economic contact between urban agglomerations [18]. Dredge (1999) explored tourism flow and a spatial structure model [19]. Cao, Wu and other scholars used the Yangtze River Delta City Region as a research object and constructed a spatial model of tourism economic contact from the perspective of time-distance [20]. Yang and other scholars explored the spatial structure of tourism economic contact in the Beijing-Tianjin-Hebei-Xiong'an Region based on a gravity model and the social network analysis (SNA) method [21]. Lis (2001) and other scholars conducted in-depth studies on the spatial accessibility of urban agglomerations [22]. Ni and Liao analyzed the characteristics of tourism economic contact in the provincial capitals of China under the influence of a high-speed railway [23]. There are few studies on the direct impact of tourism economic contact on the development of tourism, and most existing studies have explored the impact of tourism economic contact on the development of the tourism industry from the outside. Simmons (1994) proposed the relationship between local residents' participation in community tourism development and tourism development. Dwyer, Forsyth and Spurr (2004) explored the economic benefits of the tourism economic industry and proposed CEG model as a potential model to analyze the economic impact of tourism. Marrocu and Paci (2011) explored the relationship between new information, tourism flow and product efficiency in Europe and proposed that tourism could help improve regional efficiency. Jiang (2017) proposed that with the rapid development of the economy and the Internet, regional links will become closer and more efficient, which will promote the rapid expansion of the tourism market [24]. Chen et al. (2009) and Gavilán et al. (2015) discussed the impact of tourism cooperation and the spatial structure of the tourism economy on the development of the tourism industry [25,26]. Ye and Wang (2019) and other scholars believe that the improvement of regional accessibility has a connection with the tourism economy, promotes scientific and rational allocation, integrates and optimizes various elements in a regional tourism system, drives the improvement of the efficiency of tourism spatial cooperation, and ultimately enables the common and high-quality development of a regional tourism economy [10]. Liu, Lu (2015) and other scholars have posited that the development efficiency and growth quality of the tourism industry are the primary conditions for tourism sustainable development. 
They argue that strengthening the exchange and cooperation of key elements of regional tourism development (such as information, technology and talents) will be beneficial to realize the regional sharing of tourism elements, and can effectively improve the overall efficiency of tourism industry development [27]. In summary, current domestic and foreign studies on tourism economic contact mainly analyze the formation causes and forward-looking factors of tourism economic contact from multiple perspectives, such as characteristics, differences, spatial structure, development model and coupling degree. Less attention is paid to the backward effect of tourism economic contact. Some studies analyze the impact of tourism economic contact on tourism development from the outside, while few studies focus on the direct relationship between tourism economic contact and tourism industry efficiency. Against the background of economic globalization and regional integration, regional cooperation has gradually become an important trend in tourism development. It is increasingly important to strengthen tourism economic contact between regions [28,29], with the ultimate goal of achieving efficient development of regional tourism. Therefore, it is of great academic value and practical significance to explore the influence of tourism economic contact on the efficiency of the tourism industry. Study Areas This study takes 20 cities in the West Coast of the Strait urban agglomeration, China, as the study site, and Figure 1 indicates the 20 cities it contains. The West Coast of the Strait urban agglomeration, also known as the West Strait Economic Zone, is located in Southeast China. According to the Coordinated Development Plan for the Cities Group on the West Bank of the Straits, the city group is a national-level city cluster with Fuzhou, Xiamen, Quanzhou, Shantou and Wenzhou as the core of the five major central cities, with a total of 20 prefecture-level cities. With a total land area of 270,000 square kilometers and a regional GDP of more than 587.2345 billion yuan in 2018, it is one of the most dynamic areas in China, with a high level of transportation development. The tourism resources of the 20 cities in the West Coast of the Strait urban agglomeration are extremely rich and highly complementary, and it is one of the most mature regions for tourism development in China.
Gravity Correction Model Various elements between regions are constantly flowing and exchanging, with the cities as carriers. Therefore, this study uses a gravity correction model to measure the intensity of tourism economic contact in the West Coast of the Strait urban agglomeration [20]. Because the intensity of tourism economic contact differs between regions, and to capture its directionality, the gravitational coefficient is expressed as the share of a city's tourism resource endowment in the sum of the tourism resource endowments of the two associated cities. This study measures the amount of tourism economic contact for a city by summing the amount of tourism economic contact between the city and all other cities in the region [30]. The relevant formulas are as follows: In formula (1), P_ij represents the attraction of tourism between city i and city j, and also represents the strength of tourism economic contact between city i and city j; T_i and T_j represent the total number of tourists in cities i and j that year, respectively; V_i and V_j represent the total tourism revenue of the two cities in that year; W_i and W_j are the total GDP of the two cities that year; and C_ij represents the linear spatial distance between the two cities. In formula (2), m_ij is the gravitational coefficient, which uses the numbers of 4A and 5A scenic locations as a measurement index, and S_i and S_j represent the total number of 4A and 5A scenic locations in the two cities that year, respectively. In formula (3), G_i represents the amount of tourism economic contact of city i. Data Envelopment Analysis (DEA) Approach Data envelopment analysis (DEA) is an effective method to analyze the multiple inputs and outputs of decision-making units (DMUs) and to compare their relative efficiency and benefit [31]. As a typical basic model in DEA, the BCC model is widely used in analyzing tourism efficiency. The model decomposes technical efficiency (TE) into pure technical efficiency (PTE) and scale efficiency (SE), that is, technical efficiency = pure technical efficiency × scale efficiency.
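The text above refers to formulas (1)-(3) but gives only the variable definitions, so it may help to write the gravity correction model out explicitly. The form below is a sketch based on gravity-type contact models commonly used in this literature; the exponents and the exact share form of the gravitational coefficient m_ij are assumptions, not the authors' verified specification.

```latex
% Hedged reconstruction of formulas (1)-(3); the cube-root weighting of
% tourists (T), revenue (V) and GDP (W), and the share form of m_ij, are
% assumptions based on common practice, not the paper's confirmed formulas.
\begin{align*}
P_{ij} &= m_{ij}\,\frac{\sqrt[3]{T_i V_i W_i}\,\sqrt[3]{T_j V_j W_j}}{C_{ij}^{2}}, && (1)\\
m_{ij} &= \frac{S_i}{S_i + S_j}, && (2)\\
G_i    &= \sum_{j \neq i} P_{ij}. && (3)
\end{align*}
```

Under this reading, the contact intensity between two cities grows with their tourism scale and economic mass and decays with the squared distance between them, and a city's total contact is simply the sum of its pairwise intensities.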
Pure technical efficiency refers to the technical efficiency obtained by measuring the resource allocation efficiency at the current technical level on the basis of the variable scale benefit; scale efficiency means that on the basis of a constant control system and the management and technical levels, the difference between the current scale and the optimal scale is measured to reflect the rational distribution degree of the overall scale of the input-output factors [32]. Samples and Data In 2008, the "Great tee" between the two sides of the strait was officially realized. The official launch of direct air transportation, direct maritime transportation, and direct mail have promoted the development and further development of tourism in strait cities. Therefore, this study selected 2008 as the starting point for data collection and established a tourism-related database for the West Coast of the Strait urban agglomeration between 2008 and 2017. Within the database, the efficiency of the tourism industry is regarded as the explained variable, and the intensity of tourism economic contact is regarded as the core explanatory variable. Additionally, the regional economic level, industrial structure, urbanization level, traffic development level and level of opening up to the outside world are selected as control variables in the model. The tourism-related data used in this study come from the "Statistical Almanac of Fujian Province", "Statistical Almanac of Zhejiang Province", "Almanac of Xiamen Special Economic Zone", "The statistical bulletin of national economic and social development of Fuzhou", "The statistical bulletin of national economic and social development of Yingtan", the CEIC database and others. The data processing software used in this study is Stata14.0. Definition of Variables (1) Explanatory variable: The intensity of tourism economic contact (TER). The tourism economic contact is mainly manifested as the mutual flow of tourism elements in space-including the tourists flow, tourism commodities, practitioners, and information, etc. In this study, the gravity correction model is used to measure the intensity of tourism economic contact between cities in the West Coast of the Strait urban agglomeration, and the intensity of tourism economic contact between each city and other cities is summed to obtain the intensity of tourism economic ties in the city group. The amount of tourism economic contact measured based on the gravity model has a clear regional boundary, and the amount of tourism economic contact as a measure index of the explanatory variable is helpful to better explore the influence of the tourism economic contact intensity on the efficiency of the tourism industry. (2) Explained variable: The efficiency of tourism industry refers to the economic benefits that can obtain after applying certain costs, which reflects the internal relation and ratio relation between the input and output of tourism economic activities. In this study, data envelopment analysis (DEA) was used to measure the tourism industry efficiency of 20 cities in the West Coast of the Strait urban agglomeration between 2008 and 2017. From the perspective of economics, the factors of production mainly include capital factors, labor factors and land. The development of regional tourism is not restricted by land [33], so it is not included in this study. 
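As a concrete illustration of the DEA decomposition described above (TE = PTE × SE), the sketch below computes input-oriented CCR and BCC efficiency scores with a small linear program. The input and output matrices, city count, and solver choice are illustrative assumptions, not the paper's data or software.

```python
# Minimal sketch of input-oriented DEA (CCR and BCC) efficiency scores.
# The data below are made-up placeholders: 3 inputs (fixed-asset investment,
# star-rated hotels, tertiary-industry employees) and 2 outputs (tourist
# arrivals, tourism revenue) for 5 hypothetical cities.
import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(X, Y, vrs=True):
    """X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs); returns one score per DMU."""
    n, m = X.shape
    s = Y.shape[1]
    scores = np.zeros(n)
    for o in range(n):
        # decision vector z = [theta, lambda_1, ..., lambda_n]
        c = np.r_[1.0, np.zeros(n)]                       # minimize theta
        A_in = np.c_[-X[o], X.T]                          # sum_j lam_j x_ij - theta x_io <= 0
        A_out = np.c_[np.zeros(s), -Y.T]                  # -sum_j lam_j y_rj <= -y_ro
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[o]]
        A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1) if vrs else None   # BCC convexity
        b_eq = np.array([1.0]) if vrs else None
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(None, None)] + [(0, None)] * n, method="highs")
        scores[o] = res.fun
    return scores

X = np.array([[120, 35, 60], [300, 80, 150], [90, 20, 40], [210, 55, 110], [150, 40, 70]], float)
Y = np.array([[40, 55], [130, 210], [25, 30], [90, 140], [60, 85]], float)

te = dea_input_oriented(X, Y, vrs=False)   # CCR: overall technical efficiency
pte = dea_input_oriented(X, Y, vrs=True)   # BCC: pure technical efficiency
se = te / pte                              # scale efficiency, so TE = PTE * SE
```

In the BCC run, the convexity constraint on the lambda weights allows variable returns to scale, which is exactly what separates pure technical efficiency from scale efficiency in the decomposition above.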
In terms of investment indicators, this study uses urban fixed-asset investment and the number of star-rated hotels as capital input variables from the perspective of capital factors. Direct investment in tourism infrastructure and the construction of tourist attractions are capital input factors related to regional tourism. However, due to the lack of these data in the domestic statistical almanac, urban fixed-asset investment is selected instead. Although urban fixed-asset investments are mainly used for the construction of urban infrastructure and the improvement of related main functions and the direct part of the investment in regional tourism is a small proportion, to a certain extent, the improvement of urban self-construction is also a very advantageous attraction in tourism. In addition, A-level scenic locations are an important part of tourism resources in tourism destinations and have a certain appeal to tourists. A-level scenic locations are not only an important factor reflecting the tourism reception capacity of a region but are also an important indicator of the region's investment in tourism capital [34]. From the perspective of labor factors, this study uses the number of employees in the tertiary industry as a measure of labor input. The number of tourism employees is an appropriate variable to measure labor input in the tourism industry, but these data are lacking in data sources, such as statistical yearbooks, in various regions; therefore, replacing them with the number of employees in the tertiary industry is reasonable. Due to the comprehensive characteristics of the tourism industry, it has a high degree of integration with other industries in the tertiary industry. To some extent, the number of employees in the tertiary industry includes both direct and indirect employees of the tourism industry [35]. From the perspective of output, this study selects the total number of domestic and foreign tourists and total tourist revenue as the output index. The total number of tourists and total tourist income are important criteria for the tourism industry to measure its economic output after inputting certain factors. Overall, this output index can reflect the development level of a region's tourism industry; if the total number of tourists and total tourist revenue are both greater, the development level of the regional tourism industry will be higher [36]. In summary, the input and output indicators of tourism industry efficiency selected in this study are scientifically based on their availability. Control variables: (1) Regional economic level (REL): This index reflects the economic development of a region (e.g., development scale and speed) to some extent. If the level of economic development is higher, the tourism infrastructure will be more perfect, and the ability to provide tourism services to the public will be higher, which will have a certain impact on the development of tourism. The level of regional economic development is expressed in terms of per capita GDP. (2) Industrial structure (IS): Also known as the sectoral structure of the national economy, from the perspective of the three industries, it mainly refers to the internal relationship between the primary industry, the secondary industry, and the tertiary industry. The industrial structure has constantly altered the proportion to the primary industry, the secondary industry, and the tertiary industry. 
An industrial structure with a higher optimization level can play a certain role in promoting the healthy development of the tourism industry, and it will also play a certain role in improving the efficiency of the tourism industry [37]. Therefore, this study represents the industrial structure as the proportion of the output value of the tertiary industry to the total GDP. (3) Urbanization level (UL): It refers to the degree of urbanization reached in a region, reflecting the proportion of the population living in large, medium, and small towns in the total urban and rural population of a region, country, or the world. The urbanization level not only has an impact on the level and pattern of tourism consumption in the region, but also has a certain impact on the development of tourism enterprises [38]. In this study, the urbanization level is represented by the proportion of the urban population to the total population of the region at the end of the year. (4) The level of opening up to the outside world (DO): It refers to the degree to which a country or region's economy is open to the outside world under the condition of a market economy. It also indicates the degree of contact between a country or region and the outside world. The level of opening to the outside world represents advanced science, technology, management level and concept. Driven by the rapid development of Chinese inbound and outbound tourism, the degree of opening up to the outside world will have a profound impact on the development of the tourism industry [39]. (5) Traffic development level (TL): It refers to the development stage or development degree of a region's transportation at a certain time, taking some measurement indicators as the object and according to the corresponding evaluation indicators. Transportation infrastructure is one of the most important foundations and prerequisites for tourism development. Additionally, the degree of transportation convenience has a direct influence on tourism accessibility [40], which is not only conducive to the development of the tourism economy and enhances its potential, but also to the emergence of spatial spillover effects that can affect the tourism economic development of the surrounding areas. This study expresses the level of transportation development by highway mileage. Before performing regression analysis on the relevant data, this study conducted a descriptive statistical analysis of related variables to observe their distribution and discrete degree, as shown in Table 1. It can be found that the value of each variable is in the normal range, i.e., there are no outliers. In addition, this study performs a logarithmic treatment on some variables (i.e., the intensity of tourism economic contact, the level of opening up to the outside world and traffic development level) to avoid a large gap between the values of the variables. Model Settings To explore the relationship between tourism economic networks and the efficiency of the tourism industry, this study used the intensity of tourism economic contact as the explanatory variable and the efficiency of the tourism industry as the explained variable and examines the influence of the five control variables (regional economic level, industrial structure, urbanization level, the level of opening up to the outside world and the traffic development level). 
This study established the following model [41]: Performance_(i,t+2) = β_0 + β_1 Founder_(i,t) + β_2 Economy_(i,t) + β_3 Industry_(i,t) + β_4 Urban_(i,t) + β_5 Open_(i,t) + β_6 Transportation_(i,t) + ε_(i,t). In the formula, Performance_(i,t+2) is the explained variable, which indicates the efficiency of the tourism industry of city i in year t + 2; Founder_(i,t) is the explanatory variable of this study, which indicates the intensity of tourism economic contact of city i in year t; Economy_(i,t), Industry_(i,t), Urban_(i,t), Open_(i,t) and Transportation_(i,t) are the control variables of the model, which represent the regional economic level, industrial structure, urbanization level, level of opening up to the outside world and traffic development level, respectively; β_0 is the intercept term; β_1-β_6 are the parameters to be estimated; and ε_(i,t) is the disturbance term that contains all the factors affecting the efficiency of the tourism industry except the explanatory and control variables. Basic Model Test This study examined the impact of the intensity of tourism economic contact on the efficiency of the tourism industry. To ensure the robustness of the results, this study used a stepwise regression to estimate the mixed effect model. To investigate the possible nonlinear relationship between the intensity of tourism economic contact and the efficiency of the tourism industry, this study added the quadratic term of the intensity of tourism economic contact to the model. Table 2 shows the regression results of the relationship between the intensity of tourism economic contact and the efficiency of the tourism industry in the West Coast of the Strait urban agglomeration. The impacts of the explanatory variable tourism economic contact intensity (TER) and its quadratic term (TER²) on the efficiency of the tourism industry passed the significance test. Additionally, the coefficients of the linear and quadratic terms of the tourism economic contact intensity are negative and positive, respectively, which indicates that the nonlinear relationship between the two is valid. The intensity of tourism economic contact has a U-shaped effect on the efficiency of the tourism industry, which means that before reaching the critical point, the efficiency of the tourism industry will be reduced with the strengthening of the intensity of tourism economic contact. When the critical point is exceeded, the efficiency of the tourism industry will continue to improve with the strengthening of the intensity of tourism economic contact. To more intuitively reflect the U-shaped impact of the economic contact intensity on the efficiency of the tourism industry, this study provides a U-shaped graph of the relationship between the intensity of tourism economic contact and the efficiency of the tourism industry (as shown in Figure 2). The data analysis results do not change with the gradual addition of the control variables, indicating that the analysis results are stable. The subsequent analysis in this study is based on the estimation results of Model 6, which includes all the variables. In terms of the control variables, the influence of the regional economic level (REL) on the efficiency of the tourism industry is negative (p < 0.01), which shows that the efficiency of the tourism industry will continue to decline with the improvement of the regional economic level. The t-value of the urbanization level (UL) on the efficiency of the tourism industry is 1.22 (p < 0.1). Considering that the sample size of this study is relatively small, the criteria for the level of significance can be appropriately relaxed.
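A minimal sketch of how such a pooled specification with the quadratic TER term could be estimated, and of how the turning point of the U-shape follows from the fitted coefficients, is given below. The file name, column names, and the choice of robust standard errors are illustrative assumptions, not the authors' actual data or settings.

```python
# Sketch of the quadratic (U-shaped) specification on a hypothetical
# long-format panel with one row per city-year; all names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("strait_panel.csv")                      # hypothetical input file
df = df.sort_values(["city", "year"])
df["ln_ter"] = np.log(df["ter"])                          # logged contact intensity
df["eff_lead2"] = df.groupby("city")["eff"].shift(-2)     # efficiency two years ahead

model = smf.ols(
    "eff_lead2 ~ ln_ter + I(ln_ter ** 2) + rel + ind + ul + np.log(do) + np.log(tl)",
    data=df.dropna(subset=["eff_lead2"]),
).fit(cov_type="HC1")                                     # heteroskedasticity-robust SEs
print(model.summary())

# Turning point of the U-shape: where the quadratic b1*x + b2*x^2 is minimized.
b1 = model.params["ln_ter"]
b2 = model.params["I(ln_ter ** 2)"]
print("estimated threshold (in logs):", -b1 / (2 * b2))
```

A negative coefficient on the linear term together with a positive coefficient on the quadratic term reproduces the U-shape reported in Table 2, and the ratio -b1/(2·b2) locates the critical point at which the effect turns from negative to positive.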
It can be considered that the positive effect of the urbanization level on the efficiency of the tourism industry is significant, which means that the efficiency of the tourism industry will also be enhanced with the improvement of the urbanization level. The impacts of the industrial structure (IS), the level of opening up to the outside world (DO) and the level of traffic development (TL) on the efficiency of the tourism industry do not pass the significance test, and there is not enough evidence to show that these three variables will have an impact on the efficiency of the tourism industry. U-Shaped Curve Analysis The above research results show that there is a U-shaped relationship between the intensity of tourism economic contact and the efficiency of the tourism industry under the condition of adding control variables, such as regional economic and urbanization levels (as shown in Figure 2). (1) The intensity of tourism economic contact has a U-shaped effect on the efficiency of the tourism industry, which means that in the initial stage, the efficiency of the tourism industry will decrease with the improvement of the intensity of tourism economic contact and that when the intensity of tourism economic contact reaches a certain level, the efficiency of the tourism industry will gradually increase with its improvement. The tourism economic contact in the region generally shows stage characteristics. The tourism economic contact gradually strengthens, and its development gradually tends to mature [30]. In the initial stage, the intensity of tourism economic contact between cities is relatively weak, and the immature tourism cooperation model leads to the outflow of tourism elements in the region, which will result in a negative siphon effect. The efficiency of the tourism industry therefore decreases even as the tourism economic contact intensity is enhanced. With the gradual maturity of the tourism cooperation model among cities in the region, the phenomenon of the return of tourism elements will occur, which will produce a positive siphon effect [42], and the efficiency of the tourism industry will increase with the enhancement of the intensity of tourism economic contact. (2) The research results show that the regional economic level had a negative effect on the efficiency of the tourism industry, while the urbanization level had a positive effect on the efficiency of the tourism industry.
When the economic resources and policy support in the region are more inclined to other industries [43], and the propensity towards the tourism industry is low, the regional economic level may negatively affect the efficiency of the tourism industry. A higher urbanization level is, to some extent, helpful to stimulate tourists' motivation to travel, promote their travel decisions, increase regional tourism revenue and improve the efficiency of the tourism industry. (3) The research results showed that the industrial structure, the level of opening up to the outside world, and the level of transportation development had no significant effect on the efficiency of the tourism industry. Although a high-level industrial structure is beneficial to the improvement of tourism product structures and service forms, when the regional industrial structure reaches a certain level, its influence on the efficiency of the tourism industry is not significant. A higher level of opening up to the outside world is not only beneficial to the improvement of tourism industry efficiency but can also have a reverse effect on the development of tourism industry efficiency [44]. The positive and negative forces operate simultaneously, and the net effect on the efficiency of the tourism industry is not significant. A higher level of transportation development helps to bring in more tourist flow, but it also tends to reduce the number of overnight visitors to some extent; it therefore has both advantages and disadvantages and no significant influence on the efficiency of the tourism industry. Robustness Test Regions with highly efficient tourism industries in the current period may have closer ties. As a result, there is an endogeneity problem in estimating the influence of the intensity of tourism economic contact on the efficiency of the tourism industry. To prevent possible errors in the estimation results caused by endogeneity, this study adopted the dynamic panel estimation method and system GMM (Generalized Method of Moments) estimation to investigate the influence of the intensity of tourism economic contact on the efficiency of the tourism industry. System GMM estimation is effective for dynamic panel data: the difference equation and the level equation are combined into one system and estimated simultaneously, with lagged levels used as instruments in the difference equation and lagged differences used as instruments in the level equation. The estimation results are shown in Table 3. The Hansen test is used to examine the instrumental variables for over-identification problems and to determine whether the set of instrumental variables is reasonable. If the P value is greater than 0.1 in the Hansen test results, then the null hypothesis is accepted, and the choice of instrumental variables is reasonable [45]. From the results of the Hansen test in Table 3, the P value is greater than 0.1, indicating that the setting of the instrumental variables in the model is reasonable. This study uses the Arellano-Bond test statistics to test whether the error term of the model exhibits second-order serial correlation. Because the GMM estimator remains valid in the presence of first-order serial correlation in the differenced residuals, its validity requires only that there be no second-order serial correlation.
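The instrumenting logic of the system GMM estimator just described can be summarized by its standard moment conditions. The dynamic specification written in the comment below is the textbook Blundell-Bond form and is an assumption about how the robustness model is set up, not the authors' exact equations; analogous conditions apply to lags of the other regressors.

```latex
% Assumed dynamic specification: y_it = rho*y_{i,t-1} + x_it' beta + mu_i + e_it,
% where y is tourism industry efficiency and mu_i is a city fixed effect.
\begin{align*}
\text{Difference equation:}\quad & \mathrm{E}\big[\, y_{i,t-s}\,\Delta e_{it} \,\big] = 0, \qquad s \ge 2,\\
\text{Level equation:}\quad & \mathrm{E}\big[\, \Delta y_{i,t-1}\,(\mu_i + e_{it}) \,\big] = 0 .
\end{align*}
```

The Arellano-Bond AR(2) test reported in Table 3 checks the first set of conditions: if the differenced residuals showed second-order serial correlation, levels dated t-2 would no longer be valid instruments.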
In the results of the Arellano-Bond test presented in Table 3, the P values of AR (1) and AR (2) are all greater than 0.2, and the null hypothesis cannot be rejected; thus, there is no first-and second-order autocorrelation. The above tests indicate that the estimation results of SYS-GMM in this section are valid. Table 3. System GMM estimation results. The test results, presented in Table 3, show that the effects of the primary and quadratic terms of the tourism economic contact intensity on the efficiency of the tourism industry have both passed at a 1% significance level. The estimated coefficients of the primary and quadratic terms are negative and positive, which indicate that the influence of the tourism economic contact intensity on the efficiency of the tourism industry shows the characteristic U-shaped effect, which further verifies the conclusion of this study. Therefore, the U-shaped influence of the tourism economic linkage intensity on the efficiency of the tourism industry is sound. Conclusions and Discussion This study used the urban clusters on the West Coast of the Strait as an example and analyzes the effect of the tourism economic contact intensity on the efficiency of the tourism industry by constructing a mixed effect model. Additionally, the influence of the five control variables (regional economic level, industrial structure, urbanization level, opening up to the outside world and traffic development level) on the efficiency of the tourism industry is explored. Between 2008 and 2017, the intensity of the tourism economic contact in the West Coast of the Strait urban agglomeration has a U-shaped effect on the efficiency of the tourism industry. This result means that before reaching the threshold, the efficiency of the tourism industry will be reduced with an increase in the tourism economic contact intensity; after reaching the threshold, the tourism industry efficiency will increase with an increase in the intensity of tourism economic contact intensity between the cities in the West Coast of the Strait urban agglomeration. The two sides of the strait officially entered the era of "Great tee" from 2008. Since then, the tourism interconnectedness of the city clusters across the strait has been significantly improved, and its tourism economic cooperation and exchanges have gradually developed in an all-round way, and its tourism advantages have been continuously exerted. Considering the heterogeneity of resources, the short distance between cities and the guiding power of policies, the regional urban system is a combination of a centripetal and centrifugal force composed of external economies and noneconomies. Additionally, the imbalance of these two forces will lead to the siphon effect of the central city on the surrounding cities. The siphon effect means that the economic developments of the urban clusters on the West Coast of the Strait remain unbalanced, the degree of the regional industry correlation is not high, the linkage is weak, and the central city's radiation capacity is weak [46]. At the beginning of the realization of the "Great tee", regional tourism cooperation is still in its infancy, tourism economic contact among cities is weak, and the tourism cooperation model is not perfect. The high-quality tourism elements and resources in the region will appear as counter-current phenomena [47]. 
This better economic development areas will lead to unbalance of tourism elements between cities in the region, such as weak interactivity, low optimization of tourism resource allocation, and slow development of tourism. These effects will have a negative impact on the development of the tourism industry. However, with the further development of the "Great tee", the economic ties between the cities of on the West Coast of the Strait have gradually become more open, the links between them have gradually strengthened [48]. As regional tourism cooperation has improved, the economic ties between regional cities have reached a stable level, the tourism industry cooperation model has reached maturity. Moreover, high-quality tourism resources and factors will return, regional cities will absorb the tourism resources and elements of the low-level economic development area, tourism interactivity will be enhanced, and the input and output of tourism elements in the region will be gradually stabilized [49]. The overall allocation of the tourism resource elements will be optimized to promote the rapid and efficient development of the tourism industry in the region. (1) In the urban clusters on the West Coast of the Strait between 2008 and 2017, the regional economic levels have had a negative impact on the efficiency of the tourism industry, which indicates that when a regional economic level gradually improves, the efficiency of the tourism industry will tend to continue to decline. Previous studies have shown that the level of economic development will positively affect the efficiency of the tourism industry [50]. However, this study found that because of the existence of competition between different industries in the region (this competition is mainly reflected in economic resources and policy support), when financial support and policy support are more biased towards other industries, the level of economic development will hurt the efficiency of the tourism industry. The effect of the urbanization level on the efficiency of the tourism industry tends to be positive, which indicates that when the urbanization level improves, the efficiency of the tourism industry will gradually increase. A high level of urbanization means that urbanization is widely covered in terms of economic level, lifestyle and population distribution, which leads to a large influx of capital, high-tech and population. These factors can not only lay a solid foundation for the region to improve tourism infrastructure construction and tourism service systems and innovative tourism product systems, but also reflect the stronger consumption capacity of residents and tourists [51], which are the basis of tourism development and will promote the efficiency of the tourism industry to some extent. (2) In the urban clusters on the West Coast of the Strait between 2008 and 2017, the industrial structure, the level of opening up to the outside world, and the level of traffic development have no significant impact on the efficiency of the tourism industry. These findings also mean that when the regional industrial structure, the level of opening up to the outside world and the level of traffic development have a change in trend or intensity, the changes in the efficiency of the tourism industry they caused fail to pass the significance test. 
To some extent, a high level industrial structure will lead to an inflow of external tourism supply resources and an outflow of local tourists [44], which will weaken the positive influence of the improvement and innovation of tourism product and service systems; thus, it has no significant impact on the efficiency of the tourism industry. In regions with a high level of opening to the outside world, people gather more and flow more frequently, which not only helps to improve the operational saturation of regional tourism reception facilities, but also helps the tourism industry to break through the traditional forms of products and services to innovate and enhance the retention of tourists. However, a high level of opening up to the outside world also means that a certain amount of external tourism supply resources will flow into the region, which will have a certain impact on the development of regional tourism. Therefore, its impact on the efficiency of the tourism industry is not significant. The level of traffic development will reduce the number of overnight visitors while expanding regional passenger flow to balance the two, which has no significant impact on the efficiency of the tourism industry. Theoretical Contribution Based on the panel data of the urban clusters on the West Coast of the Strait, this study constructed an econometric model to explore and find that the intensity of tourism economic contact has a nonlinear effect on the efficiency of the tourism industry. Previous studies on tourism economic contact have primarily focused on the impact of the spatial structure, patterns, characteristic differences, and transportation [21,52], and explored the influencing factors and forward effects of tourism economic contact. There are few related studies on the backward impact of tourism economic contact on tourism. This study can help enrich the relevant studies on the backward effect of tourism economic contact. Additionally, studies have shown that the strengthening of regional tourism cooperation is conducive to the further optimization of the allocation of tourism resources. They have also shown that these factors enhance the overall balanced development of regional tourism, help avoid the emergence of vicious competition problems, and promote the high-quality development of the tourism industry [53], which also reflects that there is a certain relationship between tourism economic contact and the efficiency of the tourism industry. However, there is no research on the direct effect of the relationship between tourism economic contact and the efficiency of the tourism industry. This study explores the relationship between the two from a new perspective, which is expected to provide theoretical guidance for promoting the development of tourism. In addition to the influence of the regional economic level, tourism resource endowment, urbanization level and other factors on the efficiency of the tourism industry [39,54], this study found that the intensity of tourism economic contact is an important factor that affects this efficiency. Moreover, a change in tourism industry efficiency is not always a rise or a decline [55]. This study found that the intensity of tourism economic contact has a U-shaped effect on the efficiency of the tourism industry. This finding can be explained based on the principle of the siphon effect, which is also called the siphon phenomenon. 
Previous studies have shown that regional economic development exhibits a nonlinear relationship when there is continuous, close regional economic contact [56], which also indirectly shows that the flow of factors between regions is not always one-way and can have different effects on the development of their industries. The U-shaped effect of the intensity of tourism economic contact on the efficiency of the tourism industry is mainly due to the change of the siphon effect from negative to positive.

Research Recommendations

Based on the above analysis, this study offers the following suggestions for the development of the tourism industry in the West Coast of the Strait urban agglomeration: (1) Give full play to the role of the intensity of tourism economic contact. After the realization of the "Great Three Links", direct air transportation, direct sea transportation, and direct mail across the Straits have strengthened the economic exchanges between the strait city clusters, which has laid a foundation for tourism cooperation and exchange within the strait city group. Building on this, first, the interaction of tourism between cities in the region should be strengthened to form efficient tourism flow routes between regions. Moreover, the roles and responsibilities of each city in tourism cooperation should be clearly defined to promote the optimization of the allocation of tourism resources among regions and improve the efficiency of the regional tourism industry. Clear definitions of the roles and responsibilities of each city in tourism cooperation are essential for the development of urban tourism in the region, especially for cities with a low overall development level. Second, the tourism cooperation mechanism of the city group in the West Coast of the Strait urban agglomeration should be improved, and cooperation between the city group and surrounding areas should be strengthened. To form a tourism area with unimpeded flows of information, people, and resources, the city group should complement and interact with the surrounding areas. (2) The efficiency of the tourism industry is affected by many factors, and the regional economic and urbanization levels both have an impact. Thus, when developing the local economy, each city in the urban agglomeration should consider increasing its funding and policy support for the tourism industry. Concurrently, the region should enhance its urbanization level, accelerate the construction of new towns, and continuously promote the optimization and development of the tourism industry structure to improve the efficiency of the tourism industry in the West Coast of the Strait urban agglomeration.

Research Limitations and Future Prospects

This study explores the nonlinear effect of the strength of tourism economic contact on the efficiency of the tourism industry, but it has the following limitations: (1) This study uses the urban clusters on the West Coast of the Strait as the case. Due to the difficulty of obtaining small-scale data, it uses the city area as the research unit, making the research scale relatively macro. The intensity of tourism economic contact has a U-shaped effect on the efficiency of the tourism industry, and future research can further explore how this effect changes under other types of classification.
(2) The tourism economy of the city groups in the West Coast of the Strait urban agglomeration is relatively well developed compared with the country as a whole, and the tourism economic space between the cities is relatively mature. Therefore, future research may compare it with other city groups, or incorporate tourism information flow, to further explore the mutual influence among tourism industry efficiency, the tourism economy, and tourism information flow.
Association of Frailty based on self-reported physical function with directly measured kidney function and mortality Background Use of serum creatinine to estimate GFR may lead to underestimation of the association between self-reported frailty and kidney function. Our objectives were to evaluate the association of measured GFR (mGFR) with self-reported frailty among patients with CKD and to determine whether self-reported frailty was associated with death after adjusting for mGFR. Methods Participants in the Modification of Diet in Renal Disease study (1989–1993) had GFR measured using iothalamate clearance (mGFR), and GFR was estimated based on the CKD-EPI creatinine (eGFRcr) and cystatin C (eGFRcys) equations. We defined self-reported frailty as three or more of: exhaustion, poor physical function, low physical activity, and low body weight. Death was ascertained through 2007 using the National Death Index and the United States Renal Data System. Results Eight hundred twelve MDRD participants (97 %) had complete data on self-reported frailty (16 % prevalence, N = 130) and mGFR (mean (SD) 33.1 ± 11.7 ml/min/1.73 m2). Higher GFR was associated with lower odds of self-reported frailty based on mGFR, (OR 0.71, 95 % CI 0.60–0.86 per 10 ml/min/1.73 m2), eGFRcr (OR 0.80, 95 % CI 0.67–0.94 per 10 ml/min/1.73 m2), and eGFRcys (OR 0.75, 95 % CI 0.62–0.90 per 10 ml/min/1.73 m2). Median follow-up was 17 (IQR 11–18) years, with 371 deaths. Self-reported frailty was associated with a higher risk of death (HR 1.71, 95 % CI 1.26–2.30), which was attenuated to a similar degree when mGFR (HR 1.48, 95 % CI 1.08–2.00), eGFRcr (HR 1.57, 95 % CI 1.15–2.10), or eGFRcys (HR 1.51, 95 % CI 1.10–2.10) was included as an indicator of kidney function. Conclusions We found an inverse association between kidney function and self-reported frailty that was similar for mGFR, eGFR and eGFRcys. In this relatively healthy cohort of clinical trial participants with CKD, using serum creatinine to estimate GFR did not substantially alter the association of GFR with self-reported frailty or of self-reported frailty with death. Background Frailty is more prevalent among patients with CKD than among individuals with normal kidney function [1][2][3][4], and frailty is also highly prevalent among nonelderly patients with ESRD [3,4]. The association between frailty and adverse clinical outcomes is established in the ESRD population. Most studies of frailty and CKD have been among patients with ESRD, in community cohorts not specifically enriched for CKD [2,5], or in CKD cohorts that employ estimates of kidney function limiting the ability to ascertain the strength of association between directly measured kidney function and frailty, independent of comorbidity and other factors. The use of serum creatinine to estimate kidney function could also complicate the problem of assessing the association of kidney function with frailty. Because muscle wasting associated with frailty, there is a potential for bias using creatinine based estimates of glomerular filtration rate (eGFR). To our knowledge there have been no evaluations of the relationship of frailty with kidney disease using direct measurement of GFR. The Modification of Diet in Renal Disease (MDRD) study affords a unique opportunity to study frailty by self-report among a cohort of healthier patients who have stage 3 to 5 CKD not on dialysis. Within this cohort there is a wide range of directly-measured GFR to assess the association of kidney function with frailty. 
The purpose of this study was twofold; first to examine the association of kidney function with self-reported frailty, using a direct measure of kidney function and estimated GFR as a comparison, and second, to determine whether self-reported frailty was associated with death in this cohort with few comorbidities. Study participants The MDRD Study was a multicenter cooperative clinical trial designed to determine whether restriction of dietary protein and a low target blood pressure (mean arterial pressure <92 mmHg vs. usual target blood pressure (<107 mmHg)) reduced the rate of progression of CKD, irrespective of the nature of the primary underlying process. Patients with diabetes requiring insulin were excluded [6]. At baseline, the severity of kidney disease was assessed by measurement of GFR using iothalamate. Data on physical function and exercise were collected by questionnaires. Data from ancillary studies of this cohort that evaluated C-reactive protein (CRP) as a predictor of cardiovascular outcomes were included in this analysis [7]. Mortality outcomes were acquired via direct data collection within the MDRD study and through linkage with the National Death Index and the United States Renal Data System (USRDS) through December 12, 2007. Participants from MDRD study A and B were combined for the purpose of this analysis. Only participants with complete measures of physical functioning and linkage information were included in analyses (n = 812; 97 %). Informed consent was obtained from all participants as part of the original study. The Committee on Human Research at the University of California -San Francisco and the Research and Development Committee of the San Francisco Veterans Affairs Medical Center deemed this study exempt. Self-report frailty definition Our frailty definition was an adaptation of the Fried Frailty Index that substituted patients' self-report of physical function for the direct measures of physical performance (gait speed and grip strength) that are part of the original definition. This approach is similar to that originally developed by Woods et al. [8] and subsequently applied among patients with endstage renal disease [9] and is (henceforth referred to as "self-reported frailty") (Appendix). For the exhaustion criterion, participants responded to a symptom questionnaire that asked "In the past month how often have you felt lack of pep and energy? Tiring easily, weakness?" Patients were given one point in the exhaustion domain if their response to both questions corresponded to a moderate amount of time or more. The physical function domain was ascertained using the MDRD quality of wellbeing measure, which assessed participants' ability to complete activities of daily living (ADLs) similar to the Rand SF-36 physical function scale. Individuals who scored in the lowest quartile based on normative data were allocated two points in the physical function domain of self-reported frailty. Physical activity was assessed using the MDRD Leisure Time Physical Activity Questionnaire, which measured the number of times per week each individual performed walking or other moderate and vigorous activities during leisure time. After converting physical activity into total kilocalories (kcal) of activity per week, individuals in the lowest quintile based on normative data [10] were allocated 1 point in the physical activity domain of self-reported frailty. The baseline assessment of standard body weight was used as a surrogate for weight loss. 
Individuals who were less than 95 % of standard body weight for sex and height were allocated 1 point in the weight loss/underweight domain of self-reported frailty. Self-reported frailty was defined by a score of ≥3 points out of a total of 5 possible points, and patients scoring 1 or 2 points were considered intermediate frail by self-report [10]. Measures of kidney function Assessment of kidney function using clearance of 125I-iothalamate was completed twice during the baseline period for all participants. We used the average of the 2 baseline 125I-iothalamate clearance measures in our analyses (averaged measured GFR; mGFR). We modeled GFR as a continuous variable and as categories (≥45 ml/min/1.73 m2, 30 to 44 ml/min/1.73 m2, ≤29 ml/min/1.73 m2). To compare the association of the direct measurement of kidney function with self-reported frailty to the association of a creatinine-based estimate (eGFRcr) and a cystatin C-based estimate (eGFRcys) of GFR with self-reported frailty, we used the Chronic Kidney Disease Epidemiology Collaboration [11] (CKD-EPI) formulas [12] with the same modeling strategies. We chose to use the CKD-EPI equation rather than the MDRD equation to evaluate the association between creatinine-based eGFR and self-reported frailty to avoid overestimating the similarity between mGFR and eGFRcr in the derivation cohort for the MDRD equation. Statistical analysis Characteristics of participants who were frail by self-report, intermediate frail by self-report, and non-frail were compared by the Wilcoxon rank sum test for continuous variables and the chi-square test for categorical variables. We treated self-reported frailty as a dichotomous variable in the analyses comparing the different methods of measuring GFR with self-reported frailty, similar to previous studies in CKD and ESRD [2,4]. In our survival analysis, self-reported frailty was treated as a three-level categorical variable (0 = not frail; 1-2 = intermediate frail; ≥3 = frail by self-report), as Fried and others have done [2,10,13]. We used logistic regression modeling to estimate the association of mGFR, eGFRcr, and eGFRcys with self-reported frailty. With the exception of albumin and age, covariates included in the model were treated as continuous variables. Covariates included age (≤40 years, 41-59 years, ≥60 years), sex, quartiles of serum albumin concentration (≤3.84 g/dL, 3.85-4.01 g/dL, 4.02-4.24 g/dL, ≥4.25 g/dL), proteinuria, race, BMI, log-transformed CRP, protein intake, and blood pressure group assignment. We tested for interactions between age greater than 60 and GFR and between sex and GFR in the adjusted models. A sensitivity analysis was conducted to determine whether associations were independent of study group assignment for blood pressure and protein. In addition, to examine the robustness of our findings, we examined the potential for non-linearity of continuous GFR predictors by including quadratic terms in the models and by examining cubic splines. We tested whether the association of GFR with self-reported frailty differed according to the method of GFR assessment by comparing the c statistics corresponding to the areas under the ROC curves in adjusted models. Cox proportional hazards models were used to assess the association of frailty with death; mGFR, eGFRcr, and eGFRcys were included as covariates in separate models. A p-value of less than 0.05 was considered statistically significant. Analyses were performed using SAS version 9.4 (SAS Institute Inc., Cary, NC).
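To make the scoring described above concrete, the following is a minimal Python sketch of how the self-reported frailty score could be computed from the four domains (exhaustion 1 point, poor physical function 2 points, low physical activity 1 point, underweight 1 point; ≥3 of 5 points = frail, 1-2 = intermediate). The function and field names are illustrative assumptions, not the study's actual code, and the "lowest quartile"/"lowest quintile" cut-offs are taken as precomputed flags.

```python
# Illustrative sketch of the self-reported frailty score used in this study.
# Inputs are boolean flags derived upstream (questionnaire scoring, normative
# quartile/quintile cut-offs, and <95% of standard body weight).
from dataclasses import dataclass

@dataclass
class FrailtyInputs:
    exhaustion: bool                # "lack of pep"/"tiring easily" a moderate amount of time or more
    poor_physical_function: bool    # lowest quartile of the quality of well-being measure
    low_physical_activity: bool     # lowest quintile of leisure-time kcal/week
    underweight: bool               # <95% of standard body weight for sex and height

def frailty_score(x: FrailtyInputs) -> int:
    """Return the 0-5 point self-reported frailty score."""
    return (1 * x.exhaustion
            + 2 * x.poor_physical_function
            + 1 * x.low_physical_activity
            + 1 * x.underweight)

def frailty_category(score: int) -> str:
    """Map the score to the three-level category used in the survival analysis."""
    if score >= 3:
        return "frail"
    if score >= 1:
        return "intermediate"
    return "not frail"

# Example
p = FrailtyInputs(exhaustion=True, poor_physical_function=True,
                  low_physical_activity=False, underweight=False)
print(frailty_score(p), frailty_category(frailty_score(p)))   # 3 frail
```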
Participant characteristics Of the 840 MDRD participants, 812 had complete data for the assessment of self-reported frailty and were included in our analyses. The majority were male (60.5 %), the median age was 52 (interquartile range [IQR] 42-61), and a history of hypertension was common (84 %) (Table 1). As expected based on the MDRD inclusion criteria, very few participants had a history of diabetes (5 %). Approximately 21 % of participants had an mGFR greater than or equal to 45 ml/min/1.73 m2, 35 % had an mGFR between 30-44 ml/min/1.73 m2, and 44 % had an mGFR less than or equal to 29 ml/min/1.73 m2. Prevalence of frailty The most common frailty components were low physical activity (47 %) and poor physical function (23 %), whereas a smaller proportion of individuals met the criteria for underweight/weight loss (17 %) and exhaustion (13 %). There was a graded association of the prevalence of the components of self-reported frailty with GFR category, such that individuals in the lowest GFR category had a higher prevalence of each of the components of self-reported frailty (Fig. 1). Sixteen percent of patients were frail by self-report, 53 % were intermediate frail by self-report, and 31 % were not frail (Table 1). Self-reported frail individuals were slightly older than non-frail individuals. Self-reported frail patients were less likely to be male and less likely to be Caucasian. There were no statistically significant differences in the prevalence of hypertension or diabetes between self-reported frail and non-frail participants. Association of mGFR with self-reported frailty In univariate analysis, higher mGFR was associated with lower odds of self-reported frailty (Table 2). C-reactive protein was not associated with frailty (OR 1.07, 95 % CI 0.95-1.2 per one log unit, data not shown) and was therefore not included in our final models. After adjusting for covariates, the association between higher mGFR and lower odds of self-reported frailty persisted (Table 2). Association of estimated GFR (eGFRcr and eGFRcys) with self-reported frailty Associations of eGFRcr with self-reported frailty were similar to those observed with mGFR. Higher eGFRcr was associated with lower odds of self-reported frailty in univariate and multivariable analysis (Table 2). Associations of eGFRcys and covariates with self-reported frailty were slightly weaker compared with those observed with both mGFR and eGFRcr. Higher eGFRcys was associated with lower odds of self-reported frailty in univariate and multivariable analysis (Table 2). When eGFRcys was modeled using categories, individuals with eGFRcys ≤ 29 ml/min/1.73 m2 were more likely to be frail by self-report (OR 2.30, 95 % CI 1.11-4.70). The association between GFR and frailty did not differ significantly according to age or sex for any method of assessing GFR (p-values for interactions >0.05). The potential for non-linearity of the association of GFR with frailty was examined using a quadratic term in the above models and by examining cubic splines, neither of which differed from the linear models presented. The concordance c statistic (area under the receiver operating characteristic [ROC] curve) for predicting frailty by self-report for the base model was 0.66 (95 % CI 0.60 to 0.71) (Fig. 2, Table 4). Adding mGFR (0.68, 95 % CI 0.62 to 0.73), eGFRcr (0.67, 95 % CI 0.61 to 0.72), or eGFRcys (0.67, 95 % CI 0.62 to 0.72) to the model (as categorical variables) did not significantly change the c statistic.
Discussion We found a strong association between mGFR and self-reported frailty, such that patients with better kidney function were less likely to be frail by self-report, even in healthy clinical trial participants with relatively advanced CKD. Although the associations were similar when kidney function was modelled with mGFR or eGFR, the point estimate was strongest with mGFR. Regardless of which GFR variable was used as a covariate, the hazard ratios for the association of self-reported frailty with death were similar. Our examination of associations using mGFR, eGFRcr, and eGFRcys was novel. A common limitation of previous studies of frailty in the CKD population was the use of creatinine-based measures of kidney function [2,5]. Because muscle mass is associated with serum creatinine and inversely with frailty, use of creatinine-based measures of kidney function may produce inaccurate associations. However, in this study we found similar associations using mGFR and eGFR. Our results suggest that the relationship between self-reported frailty and kidney function may not be particularly sensitive to the method of measurement or estimation of GFR. (Table footnote: mGFR, measured glomerular filtration rate by iothalamate; eGFRcr, creatinine-estimated glomerular filtration rate; eGFRcys, cystatin C-estimated glomerular filtration rate (CKD-EPI equations). All models are adjusted for urine protein, sex, age, race, BMI, albumin quartiles, and group randomization by diet and blood pressure assignment. All GFR measures are per 10 ml/min/1.73 m2.) Informally comparing the association of self-reported frailty with all three renal function measures suggests there was no difference between the three measures of kidney function. It is possible, however, that eGFRcr might not perform as well in a more heterogeneous group of patients with CKD, as might be encountered in a community-based cohort [13] or in clinical practice, and that eGFRcys may have non-GFR-related variability not observed in these analyses [14]. Direct comparisons of the prevalence of frailty across CKD populations are hampered by differing characteristics of the cohorts (e.g., age range) and differing definitions of frailty. Specifically, the prevalence of frailty is higher using definitions based on self-reported physical function compared with frailty defined by direct measures of physical performance [15]. Nevertheless, it is interesting that our estimate of frailty based on self-report is similar to that reported in the Seattle Kidney Study (SKS), which used adapted Fried criteria [1,10]. We found a prevalence of frailty among MDRD participants of 16 % using a definition based on self-reported physical function, whereas the frailty prevalence in the SKS was 14 % (95 % CI 10.5-18.2 %) based on direct measures of physical performance in a population with a larger comorbidity burden. Surprisingly, although MDRD clinical trial participants were healthier and younger than individuals in typical general elderly and ESRD cohorts, the prevalence of self-reported frailty among MDRD participants was double that of community elders [5,10] and similar to that of other CKD cohorts with a higher comorbidity burden [2]. As anticipated, the prevalence of self-reported frailty among MDRD participants was lower than among ESRD cohorts, in which estimates range from 42 % to 73 % [3,4,15,16]. The association of the severity of CKD with frailty has been previously evaluated using estimates of renal function.
Similar to our findings with GFR modeled continuously, these studies have shown patients with more severe renal disease to have a higher likelihood of frailty [1,13,17]. To our knowledge, ours is the first study to test association of GFR with frailty based on three different methods for assessing renal function. Our study suggests associations of eGFRcys with frailty may have a similar magnitude of association using directly measured renal function. Thus studies that have used cystatin C as a measure of renal function may have provided reasonably calibrated findings to estimates using direct renal function measures [1,13]. The expectation that frailty would be associated with a higher risk of death was based on a conceptual framework in which frailty is a marker of accelerated loss of functional reserve above and beyond what occurs as a result of kidney disease. Although other cohort studies have evaluated the association of frailty with risk of death among patients with CKD [1,4,16], the long period of follow-up in our study was unique, and our results show that frailty even by selfreport is predictive of adverse outcomes even over longer term follow up. Self-reported frailty was associated with higher mortality independent of kidney function, raising the possibility that frailty operates through mechanisms not related to CKD such as inflammation, oxidative stress, or endothelial dysfunction related to CKD or its sequelae [18][19][20][21]. Further research into the role of these processes will be important to understand the pathophysiology of frailty in the CKD population. A number of limitations of our study should be acknowledged. Our definition of frailty was based on patients' self-reported physical function rather than direct measures of physical performance. We and others have shown that use of self-reported function identifies more individuals as frail, but such definitions have been associated with mortality as in this report [3][4][5]8]. We assessed self-reported frailty at baseline and did not update frailty status during the long follow-up period, which may have biased our findings towards less of an association with death. We used underweight based on standard criteria in place of the weight loss criterion in our frailty definition, which likely underestimates the contribution of this component of frailty [15] MDRD participants were younger and healthier than the general CKD population. Although the relative health of the cohort helped to minimize potential confounding due to burden of comorbid disease, the finding that creatininebased eGFRcr and mGFR were similarly associated with self-reported frailty must be considered with caution as greater differences might be observed in less healthy or older cohorts. For example, in the CHS, findings were much stronger with cystatin C than creatinine. Conclusions In summary, we found that worse kidney function was associated with higher prevalence of self-reported frailty in a cohort with a smaller burden of comorbidities than is generally present among patients with advanced CKD. Furthermore, the inverse association between kidney function and self-reported frailty was similar when kidney function was measured or estimated. Self-reported frailty was associated with higher risk of death even after adjusting for kidney function. There is a need for longitudinal and interventional studies to determine whether intervening on frailty can improve outcomes among patients with CKD.
SRM with 8/6 magnetic system topology for electric drive of mine battery electric locomotives: design and modelling The paper deals with the development and simulation results of a switched reluctance motor for the electric drive of mine battery electric locomotives, intended to replace the DRT-14 DC motor. The switched reluctance motor parameters are obtained from numerical calculation of the magnetic field in the QuickField program and are embedded in a MATLAB/Simulink model of a switched reluctance motor created for the 8/6 magnetic system configuration. SRM-14-615 has the same mechanical characteristics as the DC motor DRT-14 and meets the required operating modes as part of the AM8D mine battery electric locomotive. Two types of models using methods and techniques of artificial intelligence theory are also presented: a model with a fuzzy control system in the Fuzzy Logic Toolbox package and a model with a neural network control system in the Neural Network Toolbox package. Introduction Switched reluctance motor (SRM) drives have been used for many years in applications where design simplicity is of primary importance. The theory of a rotating magnetic field and the well-developed methods for calculating rotating electric machines are not applicable to SRMs. The literature notes the complexity of creating an SRM with high technical indicators. Numerous studies have almost overcome this drawback [1]-[3]. We believe SRM development has to be based on the principles of block-modular motor design. This means that a basic industry-developed motor prototype is selected and all necessary components and parts (housing, shaft, bearings, shields, etc.) are retained, with the exception of the newly developed stator coils and the stator and rotor lamination packages, as well as a small shaft revision. Thus, in block-modular motor design, the constraints are the outer diameter of the stator package and the inner diameter of the rotor package, the dimensions of which must be preserved when designing the stator and rotor iron sheets. The principles of block-modular motor design ensure minimal production and financial costs [4]. The SRM design procedure differs significantly from the traditional one and includes three interrelated design stages [4], [5]. The authors have developed their own algorithms and calculation programs based on EXCEL, QuickField, and MATLAB for each of the design stages. The inclusion in the design procedure of typical mathematical blocks of the visually oriented MATLAB modeling system with the Simulink and SimPowerSystems extension packages opens up additional opportunities for effective design [5]-[6]. This paper presents the results of a study, in the MATLAB/Simulink environment, of a four-phase switched reluctance motor with an 8/6 magnetic system topology for a mine battery electric locomotive. For this motor topology, three simulation block diagrams have been created for controlling the speed of rotation, including those created with methods and techniques of artificial intelligence theory. Materials and methods The electric motor SRM-14-615 (14 kW, 145 V, 615 min-1) is designed for the electric drive of the mine battery electric locomotives AM8D and 2AM8D with a coupling weight of 8 tons and meets the requirements for the currently used electric motor DRT-14 (14 kW, 130 V, 615/1845 rpm).
The prototype is designed on the principles of block-modular motor design, based on the DC motor 4PF200L. The n-sided frame of the 4PF200 electric motor is used as the stator frame of SRM-14-615, and the axial length of the rotor package is limited to 200 mm. The transition to an n-face frame makes it possible to effectively implement a four-phase electric motor variant with m = 4, Ns = 8, Nr = 6. The optimal width of the stator (rotor) tooth in angular terms is 20.3 (23.7) degrees [5]. To select the rotor diameter, calculations were performed for four values in the range of 210-261 mm. The limiting factor when choosing the diameter is the current density in the winding, which for the IP54 degree of motor protection should not exceed 2.5 A/mm2. For the prototype, new stator pole packages, a rotor package, and stator coils were made; the shaft, bearing shields, bearings, labyrinth seals, and other necessary parts were preserved from the 4PF200 DC motor in addition to the charge bed. The switched reluctance motor SRM-14-615 is a totally enclosed air over (TEAO), 4-phase machine with 8/6 magnetic system topology. Its main data are given in Table 1. The second design stage is calculating (using the QuickField program) the motor electromagnetic field for different current values at the aligned and unaligned positions of the rotor and the stator, and also determining the flux linkage dependences on the phase current at different rotor positions. Investigation results Previously, the authors created models in the MATLAB/Simulink environment and investigated SRMs with 6/4 and 12/8 magnetic system topologies [4][5]. The Simulink model structure is the same for these topologies, so to create the simulation SRM block diagram with 8/6 magnetic system topology it is necessary to add the fourth phase and to implement the differences by changing the Look-Up Table blocks. [Figure: curves versus rotor position (rad) for several flux-linkage values, ψ = 0.02, 0.08, 0.14, ...] Flux linkages are obtained by two-dimensional (2-D FEM) models for misalignment angles from 0 to π/6 with a π/90 interval. In the Look-Up Table 1 blocks, the torque data are recorded as a function of the phase current and the misalignment angle. The problem is also solved by creating a MATLAB m-file in which the coenergy and its gradient are determined. Torque dependences on current at different rotor positions θ for SRM-14-615 are represented in figure 3. The MATLAB Function 2 block uses the rem function (remainder after division) to convert a continuous rotor angle into a rotation angle between 0 and π/3 for topology 8/6. In the MATLAB Function 7 block, the created m-file is used. The rotation angle values for each phase are converted from the range [0, π/3] into the range [0, π/6] for topology 8/6, i.e., the ranges for which the two-dimensional Look-Up Table data are specified. The phase flux linkage is found from the voltage balance ψj = ∫(uj - R·ij) dt, where θ is the misalignment angle of the rotor and stator position, uj is the phase voltage depending on the angle θ, R is the phase resistance, and ij is the phase current. For further analysis, phase current, flux linkage, and torque data are stored in the workspace using Simout blocks. Figure 4 contains an example of the data received on the Scope oscilloscope screens during simulation. It is possible to study switched reluctance motors using artificial intelligence methods and techniques [7]-[10]. The basis for such development of SRM-14-615 with the 8/6 magnetic system topology is the newly created model in the MATLAB/Simulink environment for speed control with current limitation (figure 1).
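The paper computes its torque tables from the coenergy gradient in a MATLAB m-file, as described above. Purely as an illustration, the sketch below shows the same idea in Python (the original work uses MATLAB): it assumes a flux-linkage table psi[k, j] sampled on current and rotor-angle grids, computes coenergy W'(i, θ) as the integral of ψ over current, and takes torque as the partial derivative of coenergy with respect to rotor angle. The grids, the placeholder flux-linkage model, and the array names are assumptions, not the authors' FEM data.

```python
# Sketch: torque table T(i, theta) from a flux-linkage table psi(i, theta),
# via coenergy W'(i, theta) = integral_0^i psi(i', theta) di' and
# T = dW'/dtheta at constant current. Grids and data are illustrative.
import numpy as np

i_grid = np.linspace(0.0, 60.0, 61)           # phase current samples, A (assumed)
theta_grid = np.linspace(0.0, np.pi / 6, 16)  # rotor position samples, rad (unaligned to aligned)

# psi[k, j] = flux linkage at current i_grid[k] and angle theta_grid[j].
# In the paper these values come from 2-D FEM (QuickField); here: placeholder model.
psi = np.outer(i_grid, 0.002 + 0.010 * (1 - np.cos(6 * theta_grid)) / 2)

# Coenergy: cumulative trapezoidal integral of psi over current, for each angle.
coenergy = np.zeros_like(psi)
coenergy[1:, :] = np.cumsum(
    0.5 * (psi[1:, :] + psi[:-1, :]) * np.diff(i_grid)[:, None], axis=0)

# Torque: numerical gradient of coenergy along the angle axis (N*m).
torque = np.gradient(coenergy, theta_grid, axis=1)

print(torque.shape)  # (61, 16), ready to fill a 2-D Look-Up Table block
```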
A model with a fuzzy control system is developed in the Fuzzy Logic Toolbox package, and a model with a neural network control system is developed in the Neural Network Toolbox package. First of all, the fragment in figure 6(a) is added to the simulation SRM block diagram for speed control with current limitation (figure 1) to create the simulation SRM block diagram for speed control with fuzzy logic, or the fragment in figure 6(b) is added to create the simulation SRM block diagram for speed control with the neural network. The connection points of the fragments are shown in figure 1 and figure 6. The model with a fuzzy-logic-based control system is developed interactively in the Fuzzy Logic Toolbox package. A fuzzy inference system of the Sugeno type with two input variables, the angular rotation frequency ω and its deviation from the specified value Δω = ω - ωref, is selected. Each variable is characterized by 5 membership functions, the type of which can be selected from the functions built into the editor. For both variables, triangular functions of the type trimf are selected (S2, S1, CE, B1, B2). For these membership functions, 25 rules of the fuzzy inference system are written in the form: "If (x1 is S2) and (x2 is S2) then (y is S2)". The adopted rules are presented in Table 3. [Table 3: the 25 rules of the fuzzy inference system, indexed by the membership labels of ω (rows) and Δω (columns): S2, S1, CE, B1, B2; for example, the row for ω = S2 reads S2, S2, S2, S1, CE.] The method of algebraic product (prod) is chosen for performing logical conjunction under the fuzzy rules, and the method of algebraic sum (probor) is chosen for performing logical disjunction. To draw the logical conclusion in each of the fuzzy rules, the minimum value (min) method is selected, and for aggregating values, the maximum value (max) method is selected. To perform defuzzification of the output variable in the Sugeno fuzzy inference system, the weighted average method (wtaver) is adopted. According to the selected conditions, the srm_nFL_86 file is created, which is written to the Fuzzy Logic Controller with Ruleviewer Simulink block of the model shown in figure 6(a). The model with the neural network control system is developed in the Neural Network Toolbox package. The controller based on the reference model (Model Reference Controller) is selected as the control system (figure 6(b)). The Simulink file srm_n_86 for motor speed control with current limitation is selected as the reference model of the SRM-14-615 for the neural network (figure 1). As a training example, the base86.mat file is created, which records the dependence of the angular speed of rotation on time, ω = f(t), obtained by calculating the reference model srm_nNN_86 for the rotation speed ωref = 68.6 rad/s and containing 10620 values. The Model Reference Control window also permits selection of the data: the number of hidden layers, 10; delayed inputs of the reference model, 2; delayed outputs of the controller, 1; delayed outputs of the object, 2. To identify the object model (Plant identification), the following parameters are set: the number of hidden layers, 10; delayed inputs of the object model, 2; delayed outputs of the object model, 2; the Simulink file srm_nNN_86, which contains the controller (figure 6(b)), and the training example file base86.mat are entered; the number of variables (10620), the minimum (0) and maximum (ωref) values, and the maximum (0.001) and minimum (0.0001) interval values.
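To make the Sugeno-style inference step described above concrete, here is a small, self-contained Python sketch: two inputs (ω and Δω), five triangular membership functions per input, product as the fuzzy AND, and weighted-average defuzzification. The membership-function breakpoints, the rule table, and the output values are invented placeholders for illustration only; they are not the paper's tuned srm_nFL_86 controller.

```python
# Minimal Sugeno-style fuzzy step: 2 inputs x 5 triangular MFs, prod for AND,
# weighted-average defuzzification. All numeric settings are placeholders.
import numpy as np

LABELS = ["S2", "S1", "CE", "B1", "B2"]

def trimf(x, a, b, c):
    """Triangular membership function (like MATLAB's trimf)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def memberships(x, lo, hi):
    """Degrees of membership of x in 5 evenly spaced triangles on [lo, hi]."""
    centers = np.linspace(lo, hi, 5)
    half = (hi - lo) / 4
    return {lab: trimf(x, c - half, c, c + half) for lab, c in zip(LABELS, centers)}

# Placeholder crisp output (e.g., a current reference) assigned to each label.
OUT = {"S2": 0.0, "S1": 5.0, "CE": 10.0, "B1": 15.0, "B2": 20.0}
# Placeholder rule table rule[w_label][dw_label] -> output label (not Table 3).
RULE = {wl: {dl: LABELS[min(4, max(0, i + j - 4))] for j, dl in enumerate(LABELS)}
        for i, wl in enumerate(LABELS)}

def fuzzy_step(w, dw, w_ref=68.6):
    mu_w = memberships(w, 0.0, 2 * w_ref)   # angular speed universe (assumed)
    mu_dw = memberships(dw, -w_ref, w_ref)  # speed-error universe (assumed)
    num, den = 0.0, 0.0
    for wl in LABELS:
        for dl in LABELS:
            firing = mu_w[wl] * mu_dw[dl]   # prod as fuzzy AND
            num += firing * OUT[RULE[wl][dl]]
            den += firing
    return num / den if den > 0 else 0.0    # weighted average (wtaver)

print(fuzzy_step(w=40.0, dw=-28.6))
```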
After training the network for the specified number of iterations and training the controller, the final acceptance of the Simulink data is obtained, and the file srm_nNN_86 is ready for modeling. Results of SRM-14-615 motor speed simulation on the reference model for the mode with the voltages U = 145 V and 72.5 V are shown in figure 5. The same modes are modeled on the created models. For comparison, the data are summarized in Table 4. The comparison was carried out according to the main parameters of the studied mode: the effective (RMS) value of the phase current Irms, the angular rotation frequency ω, the average value of the rotating electromagnetic torque Tmean, and the torque ripple coefficient KpT. It should be noted that there is a minimal difference in the angular frequency of rotation (1.6%), a small difference in the torque magnitude (3.6%) and its ripple coefficient, and the largest difference in the magnitude of the RMS current (3.9-19.5%). Conclusion The switched reluctance motor SRM-14-615, based on the principles of block-modular motor design, is proposed for the electric drive of mine battery electric locomotives instead of the DRT-14 DC motor. The study of the SRM-14-615 operating modes in the MATLAB/Simulink environment was conducted on a model created for the 8/6 magnetic system configuration. The model makes it possible to improve the design procedure, rationally select the control parameters (Iref, θon, θoff, ω, Mload), and obtain the dynamics of the main parameter changes in different motor modes. SRM-14-615 has the same mechanical characteristics as the DC motor DRT-14 and meets the required operating modes as part of the AM8D mine battery electric locomotive. Two types of models using methods and techniques of artificial intelligence theory are also presented: a model with a fuzzy control system in the Fuzzy Logic Toolbox package and a model with a neural network control system in the Neural Network Toolbox package. Based on the results of verifying the created models in various modes, it has been found that the selected methods and characteristics of the artificial intelligence controllers allow the switched reluctance drive to be simulated successfully. In both the Fuzzy Logic Toolbox package and the Neural Network Toolbox package, only one training example file can be used for a wide range of motor speeds.
Gastric emptying in hereditary transthyretin amyloidosis: the impact of autonomic neuropathy. BACKGROUND Gastrointestinal (GI) complications are common in hereditary transthyretin amyloidosis, and autonomic dysfunction has been considered to explain these symptoms. The aim of this study was to investigate the impact of autonomic neuropathy on gastric emptying in hereditary transthyretin amyloidosis and to relate these findings to nutritional status, GI symptoms, gender, and age at disease onset. METHODS Gastric emptying was evaluated with gastric emptying scintigraphy. Spectral analysis of heart rate variability and cardiovascular responses after tilt test were used to assess autonomic function. The nutritional status was evaluated with the modified body mass index (s-albumin × BMI). KEY RESULTS Gastric retention was found in about one-third of the patients. A weak correlation was found between the scintigraphic gastric emptying rate and both the sympathetic (rs = -0.397, P < 0.001) and parasympathetic function (rs = -0.282, P = 0.002). The gastric emptying rate was slower in those with lower or both upper and lower GI symptoms compared with those without symptoms (median T50 123 vs 113 min, P = 0.042 and 192 vs 113 min, P = 0.003, respectively). Multiple logistic regression analysis showed that age of onset (OR 0.10, CI 0.02-0.52) and sympathetic dysfunction (OR 0.23, CI 0.10-0.51), but not gender (OR 0.76, CI 0.31-1.84) or parasympathetic dysfunction (OR 1.81, CI 0.72-4.56), contributed to gastric retention. CONCLUSIONS AND INFERENCES Gastric retention is common in hereditary transthyretin amyloidosis early after onset. Autonomic neuropathy only weakly correlates with gastric retention, and therefore additional factors must be involved. Hereditary transthyretin amyloidosis, or familial amyloidotic polyneuropathy (FAP), is a dominantly inherited amyloidosis caused by mutated transthyretin (TTR). There are approximately 100 known amyloidogenic transthyretin (ATTR) mutations, of which ATTR Val30Met (methionine substituted for valine at position 30) is the most common, leading to a neuropathic form of the disease, FAP. 1 FAP Val30Met is present all over the world, with endemic areas in Portugal, Brazil, Sweden, and Japan. Symptoms are caused by deposition of amyloid fibrils in various body tissues and include, for example, peripheral polyneuropathy, autonomic neuropathy, cardiac arrhythmias, and gastrointestinal (GI) disturbances. Virtually all patients develop GI complications during the course of the disease 2 and the initial constipation is later relieved by bursts of diarrhea that successively become permanent. Nausea and vomiting are also reported by many patients. Without treatment, the average survival in Sweden is 9-13 years after onset 2,3 and death is caused by severe malnutrition and opportunistic infections in many cases. 3,4 As nearly all circulating TTR is produced by the liver, a liver transplantation (Ltx) that ceases the synthesis of mutated TTR has been shown to halt the progression of the disease. The GI disturbances of FAP patients lead to malnutrition, which negatively affects the outcome of Ltx, 5 and are hence important for morbidity and mortality after the procedure. The mechanisms behind the GI complications in FAP are poorly understood, but it has been suggested that an autonomic neuropathy is at least partly responsible. 4
The aim of the present investigation was to assess the occurrence of gastric retention in FAP patients and relate the findings to autonomic function measured by heart rate variability (HRV) and tilt test. We also wanted to relate the findings to the patients' nutritional status measured by the modified body mass index (mBMI), GI symptoms, gender, and age at disease onset. Patients One hundred and eighty-eight Swedish FAP patients were available for the study (Table 1). All patients were examined at the Department of Medicine, Umeå University Hospital, Umeå, Sweden, between 1990 and 2009 as part of an investigation of their disease and, for the majority of the patients, also as part of the evaluation for Ltx. The FAP diagnosis was based on clinical findings consistent with FAP, presence of amyloid fibrils in an intestinal, skin, or abdominal fat biopsy, and identification of an amyloidogenic TTR (ATTR) mutation. Practically all patients carried the Val30Met mutation except 6 who carried the Leu55Gln, Phe33Leu, Tyr69His, His88Arg, Gly57Arg, and Val30Leu mutations, respectively. Clinical data were obtained from the first evaluation after onset of the disease. One hundred and forty-one patients (75%) were still alive at the time of the study (December 2009). For each patient all examinations were carried out within a 6-month period. Gastric emptying scintigraphy (GES) In 162 (86%) of the patients a GES was performed. Twenty-six patients did not undergo GES, nine because it was not available at the time of the examination, and 17 because of technical problems at the scheduled time of examination. The measurement was carried out according to the method employed in the Swedish multicenter study of gastric emptying. 6 The scintigraphic acquisitions were performed using a STARCAM gamma camera (General Electric, Milwaukee, WI, USA) with a low-energy, general-purpose collimator and a 128 × 128 matrix. The collected data were then electronically plotted on a graph (Fig. 1) that was printed and manually analyzed by one examiner (JW), who calculated the lag phase, T50, and T1/2 (T50 being the total half-life and T1/2 the half-life after the lag phase). Abnormal gastric emptying was defined as a T50 above 133 min (mean + 2 SD) according to the reference values obtained by the Swedish national multicenter study of 160 healthy individuals aged 17-80 years. 6 A T50 over 350 min was entered as 350 min. Nutritional status The patients' nutritional status was assessed by the mBMI, in which BMI (kg m-2) is multiplied by serum albumin (g L-1) to compensate for edema. 3 mBMI could be calculated in 185 (98%) of the patients. Values below 750 were considered consistent with underweight and values below 600 were regarded as consistent with severe malnutrition. 3,5 HRV Heart rate variability was recorded in 177 (94%) of the patients. The power spectral analysis was estimated by auto-regressive modeling, using the Burg algorithm with 30 parameters. 7 The spectral power in two frequency bands was used in the investigation: the high-frequency component (0.15-0.50 Hz) recorded in a supine position (HFsup) and the low-frequency component (0.04-0.15 Hz) recorded in an upright position (LFtilt), as presented in Fig. 2. The respiration-related high-frequency component of HRV represents an indirect estimate of vagal cardiac control, and the low-frequency component, recorded after postural change from a supine to an upright position, is a useful marker of sympathetic activity. 8,9
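As an illustration of the two derived measures described above, the sketch below computes the modified BMI (mBMI = BMI × serum albumin in g/L, with the 750/600 cut-offs) and a band-power estimate of the HF (0.15-0.50 Hz) and LF (0.04-0.15 Hz) components from an R-R interval series. It is a hypothetical Python example: it uses Welch's method as a stand-in for the Burg autoregressive spectrum used in the study, and the resampling rate, synthetic data, and variable names are assumptions.

```python
# Illustrative only: mBMI and HRV band power (Welch PSD approximation of the
# Burg AR spectrum used in the paper). Names and parameters are assumptions.
import numpy as np
from scipy.signal import welch

def mbmi(weight_kg, height_m, s_albumin_g_per_l):
    """Modified BMI = BMI (kg/m^2) x serum albumin (g/L)."""
    value = weight_kg / height_m**2 * s_albumin_g_per_l
    status = ("severe malnutrition" if value < 600
              else "underweight" if value < 750 else "normal")
    return value, status

def band_power(rr_ms, band, fs=4.0):
    """Log10 power of an HRV band from R-R intervals (ms), resampled at fs Hz."""
    t = np.cumsum(rr_ms) / 1000.0                         # beat times, s
    grid = np.arange(t[0], t[-1], 1.0 / fs)               # evenly spaced time grid
    hr = np.interp(grid, t, 1000.0 / np.asarray(rr_ms))   # instantaneous heart rate (Hz)
    f, pxx = welch(hr - hr.mean(), fs=fs, nperseg=min(256, len(hr)))
    mask = (f >= band[0]) & (f <= band[1])
    return np.log10(np.trapz(pxx[mask], f[mask]) + 1e-12)

rr = 800 + 40 * np.sin(np.linspace(0, 60, 600))           # synthetic R-R series, ms
print(mbmi(70, 1.75, 40))                                  # approx (914.3, 'normal')
print("HF:", band_power(rr, (0.15, 0.50)), "LF:", band_power(rr, (0.04, 0.15)))
```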
All frequency-domain HRV indices were log transformed (base 10) because of skewed distribution. As the heart rate was expressed in mHz and as spectral power corresponds to the variance, all spectral indices had the unit mHz2, but became dimensionless after log transformation. All patients with pacemaker treatment and frequent non-sinus beats were excluded from the analysis; patients with non-neurogenic HRV patterns were identified by comparison with power spectra for respiration and by inspection of the pattern of beat-to-beat fluctuations in R-R intervals. [10][11][12][13][14] Tilt test During the HRV examinations the blood pressure in the right upper arm was measured with cuff and stethoscope after 3 min in the supine resting position and after 3 min in the tilted (70°) upright position. The mean heart rate (HR) was calculated from the 2 min ECG recordings used for the HRV analysis, all of which were adjacent to the measurements of the blood pressure. The changes in systolic blood pressure (SBP) and mean HR after tilting were then calculated. Data on changes in SBP and mean HR were recorded in 167 (89%) and 138 (73%) of the patients, respectively. A decrease in SBP of 20 mm Hg or more was regarded as consistent with orthostatic hypotension. 15 Ethics The study is part of a larger project that is approved by the Regional ethics board in Umeå, Sweden; reference number 06-084M. GI symptoms and nutritional status Data on GI symptoms were available for all patients except one. Fifty-nine percent suffered from GI disturbances, and median mBMI was 961 (range 550-1535). In 181 (98%) of the patients the mBMI was 600 or more, and in 4 (2%) it was below 600. All patients with severe malnutrition were women and all of them had GI symptoms. Patients with mBMI below 600 had a lower age at onset (47 vs 57 years), a longer duration of disease (10 vs 3 years), and lower LFtilt (1.33 vs 2.08) and HFsup (1.55 vs 1.77) than those with mBMI of 600 or more. However, the number of patients with severe malnutrition was too small for adequate statistical calculations. Gastric emptying Gastric emptying scintigraphy disclosed gastric retention in 63 of 162 patients (39%). Median T50 was 119 (range 48-350) min. In 13 patients the gastric emptying was severely delayed, with T50 of 350 min or more. A small difference in T50 was found between men and women, with the latter showing a slower gastric emptying rate (median T50 116 vs 128 min, P = 0.043). No significant correlation was found between the age of onset and T50 (rs = -0.026, p = 0.744). There was a weak but significant negative correlation between T50 and mBMI (rs = -0.218, P = 0.006). Gastric emptying and GI symptoms When comparing the reported GI symptoms with the outcome of GES, we found significant differences in T50 between the groups (Fig. 3). Post hoc analysis showed the strongest significance between patients without symptoms compared to those with both upper and lower GI symptoms (median T50 113 vs 192 min, P = 0.003). A significant difference in T50 between patients without and those with lower GI symptoms (median T50 113 vs 123 min, P = 0.042) was also found. No significant difference was found between patients without and those with upper GI symptoms (median T50 113 vs 119 min, P = 1). Autonomic function The spectral analysis of HRV could be performed in 134 (76%) of the patients.
In 23 (13%) of the patients arrhythmia precluded the analysis of HRV and in 20 (11%) patients data were missing or not applicable (due to pacemakers or data file errors). The median HFsup was 1.77 (range 0.11-3.49) and the median LFtilt was 2.07 (range 0.02-4.17). In comparison, healthy subjects registered in our database of healthy volunteers had a median HFsup of 2.52 (range 1.16-4.24) and a median LFtilt of 2.98 (range 0.00-4.06). Significant differences in HRV between female and male patients were found for HFsup (median HFsup 1.98 vs 1.67, P = 0.008) and LFtilt (median LFtilt 2.28 vs 1.89, P = 0.015), respectively. Negative correlations between the age of onset and HFsup (rs = -0.194, P = 0.025) and LFtilt (rs = -0.395, P < 0.001) were found, where older patients displayed a lower HRV. No significant correlation was found between HFsup and mBMI (rs = 0.147, P = 0.089), but there was a correlation between LFtilt and mBMI (rs = 0.280, P = 0.001). Autonomic function and gastric emptying Weak but significant correlations were found between T50 and HFsup (Fig. 4A) and between T50 and LFtilt (Fig. 4B). Patients with retention at GES had a greater decrease in SBP after tilting than those without retention (median ΔSBP -10 vs -5 mm Hg, P = 0.038). No difference in the change in HR after tilting was found between patients with or without retention at GES (median ΔHR 7.56 vs 9.06 beats/min, P = 0.535). Retention at GES was significantly more common in patients with orthostatic hypotension than in those with no orthostatic hypotension (χ2 = 6.66, P = 0.010). Characteristics of patients with severe gastric retention To further assess the possible mechanisms behind the GI complications, subgroup analyses were performed for patients with T50 of 350 min or more. Patients' details are outlined in Table 1. Patients with severely delayed gastric emptying had significantly lower LFtilt than those with T50 below 350 min (median LFtilt 1.20 vs 2.14, P = 0.010), and they also had lower HFsup; however, this difference was not statistically significant (median HFsup 1.35 vs 1.79, P = 0.063). Orthostatic hypotension was significantly more common in patients with severe gastric retention (χ2 = 5.74, P = 0.017). A significant difference was also found for mBMI, where those with severely delayed gastric emptying showed lower mBMI than those without (median mBMI 809 vs 961, P = 0.004). No significant difference was found in age of disease onset (median age at onset 63 vs 56 years, P = 0.075) between patients with or without severe gastric retention. The number of patients was too small for valid statistical analyses on differences related to gender and GI symptoms. Multiple logistic regression analysis on factors behind gastric retention To identify factors with an impact on gastric retention, HFsup, LFtilt, gender, and age at disease onset were utilized as independent factors in multiple logistic regression analyses. The age at onset and LFtilt were the only factors significantly contributing to gastric retention. Details are outlined in Table 2. DISCUSSION To the best of our knowledge, this is the first and largest study on FAP patients exploring the impact of autonomic dysfunction on GI symptoms and gastric emptying. The study showed a high prevalence of delayed gastric emptying and also that delayed gastric emptying is a common feature even early after disease onset.
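The multiple logistic regression on factors behind gastric retention described above can be sketched in outline as follows. This is a hypothetical Python/statsmodels example; the column names, the file, and the coding of the predictors (e.g., a late_onset indicator) are assumptions for illustration, and the study itself did not use this code.

```python
# Hypothetical sketch of the multiple logistic regression on gastric retention
# (retention defined here as T50 > 133 min). Column names and data are assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fap_patients.csv")        # assumed columns: t50_min, hf_sup, lf_tilt, sex, late_onset
df["retention"] = (df["t50_min"] > 133).astype(int)

model = smf.logit(
    "retention ~ hf_sup + lf_tilt + C(sex) + C(late_onset)", data=df
).fit(disp=0)

# Odds ratios with 95% confidence intervals.
params = model.params
conf = model.conf_int()
print(np.exp(pd.concat([params, conf], axis=1)
             .set_axis(["OR", "2.5%", "97.5%"], axis=1)))
```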
In consistency, unpublished study data from upper GI endoscopies after an overnight fast showed that 29% of the patients had retention of undigested food. Contrary to our expectations, only a weak correlation between delayed gastric emptying and autonomic neuropathy was found. Surprisingly, sympathetic activity appeared to be strongly, or at least similarly, related to gastric emptying as parasympathetic activity. This was also supported by a higher frequency of gastric retention in patients with orthostatic hypotension and by subgroup analyses for patients with severely delayed gastric emptying. Parasympathetic dysfunction often precedes sympathetic dysfunction in FAP patients. 16 The fact that the sympathetic function was more strongly correlated with gastric retention in our study probably reflects a more advanced disease in patients with sympathetic dysfunction. In the multiple logistic regression analysis, sympathetic dysfunction significantly contributed to gastric retention, whereas the parasympathetic did not, indicating that an intact parasympathetic function is not the only factor important for a preserved gastric emptying rate. A case report from Ohio has also shown that gastroparesis can occur in patients with pure sympathetic dysfunction, 17 suggesting that a reduced sympathetic function itself may negatively affect gastric emptying. As vagotomy (i.e., loss of parasympathetic control) often leads to gastric retention, 18 and as there is evidence of amyloid deposits in the autonomic nervous system with a destruction of the vagal nerve in FAP patients, 19 an autonomic neuropathy has been suggested to be the underlying factor for GI disturbances of these patients. 20 This is also supported by the findings of a destructed celiac ganglion in FAP patients, 21 but our findings of delayed gastric emptying in patients with normal sympathetic and parasympathetic activity implies that additional factors must be involved. Such possible contributing factors may be a depletion of the GI neuroendocrine cells and a destruction of the enteric nervous system. [22][23][24][25] However, the enteric nervous system seems to be unaffected in FAP Val30Met 26,27 and no improvement of the GI function has been shown after Ltx, 28,29 although a normalization of the endocrine cell count was noted. 30 In diabetes mellitus a down-regulation of ghrelin and its receptor is linked to GI dysfunction, 31,32 and it would be of interest to study if this is also the case in FAP patients. Likewise, a depletion of the interstitial cells of Cajal (ICC), increased oxidative stress, and smooth muscle degeneration and fibrosis play important roles in the gastroenteropathy of diabetes mellitus, [33][34][35] and need to be investigated in FAP as well. Preliminary results from another of our studies in fact showed a marked decrease of gastric ICC in FAP patients compared with controls. Interestingly, an accumulation of advanced glycation end products, which bind to enteric neurons and cause a decreased production of nitric oxide synthase leading to a delayed gastric emptying in diabetics, 34 have also been observed in FAP. 36 Analysis of urinary secreted NO 2 ) /NO 3 ) levels has implicated a decreased NO synthesis in FAP patients. 37 Nausea, vomiting, and early satiety are classic symptoms of gastroparesis. Unexpectedly, none of the patients with a severely delayed gastric emptying reported upper GI symptoms. 
Furthermore, there was no difference in the outcome of GES when comparing patients without symptoms with those with upper GI symptoms, whereas lower GI symptoms and, especially, a combination of upper and lower GI symptoms seemed to be more related to gastric retention in FAP. A possible explanation may be that those patients have a more pronounced GI dysfunction and it appears to be difficult to predict the presence of gastroparesis from GI symptoms alone. 38 Delayed gastric emptying correlated with a low mBMI and patients with severe gastric retention had significantly lower mBMI compared with those with T 50 below 350 min. Previous studies have shown that GI dysfunction and a low mBMI are predictors of an increased mortality after Ltx, as is an early onset of GI disturbances. 3,5 Thus, the detection of gastric retention is an important part of the evaluation of FAP patients, and GES is therefore part of our routine investigation of patients under evaluation for Ltx. We noted that the female patients displayed higher HF sup and LF tilt and slower gastric emptying rates than males and that the group of patients with severe malnutrition consisted of women only. However, no gender related differences in survival after Ltx have been observed, except for a decreased survival of late onset males. 39 We do not believe that the gender differences observed have any clinical impact. Expectedly, a weak negative correlation was found between age at onset and HF sup and LF tilt . A decline in the autonomic nervous function is also found in the general population and HRV values are often age adjusted. The values displayed in the present investigation were, however, not age adjusted. A late onset of the disease also significantly contributes to gastric retention, probably reflecting a poorer autonomic function. In conclusion, hereditary TTR amyloidosis was associated with delayed gastric emptying early after disease onset in Swedish patients. Furthermore, gastric retention is associated with a poorer nutritional status. Only weak correlations were found between the measures of autonomic neuropathy and gastric retention, indicating that other factors must be involved. Surprisingly, no correlation was found between upper GI symptoms and delayed gastric emptying, suggesting that GI symptoms are poor predictors of the actual GI function in these patients. Limitations The HF sup and LF tilt from spectral analysis of HRV were used as markers of parasympathetic and sympathetic autonomic function, respectively, as they are used as estimates of autonomic cardiac control and one may argue that they do not fully correspond to the autonomic gastric control. However, HRV is generally considered to be a good marker for autonomic function in FAP 20 and has also been used in studies on autonomic function and GI motility in diabetics. 40,41 A scintigraphic measurement of 4 h would improve the accuracy, 42 but a 2-h measurement is the standard procedure at our hospital reflecting current clinical practice. The manual measurement of the lag phase, T 50 , and T 1/2 by a single reviewer has an advantage over the automatic version because the settings of the latter method have been changed over the years making it less homogeneous. However, the manual analysis may not be as precise as the automatic. 
FUNDING This study was supported by grants from the Swedish Heart and Lung Foundation, the patient organizations FAMY/AMYL in Västerbotten and Norrbotten, Umeå University, the EU 6th Research Framework Programme (the EURAMY project), ALF grants from Umeå University Hospital, and a 'Spearhead' grant from Västerbotten county. DISCLOSURE The authors have no competing interests. AUTHOR CONTRIBUTION Co-author OBS examined the patients, designed the research study, contributed to the interpretation of the results, and revised the manuscript; JW analyzed the data and wrote the manuscript; UW and RH provided and analyzed the HRV examinations and revised the manuscript; AR provided the GES examinations and revised the manuscript; PK helped with the interpretation of the results, the statistical analysis, and the revision of the manuscript; IA contributed to the interpretation of the results and revised the manuscript.
2016-05-12T22:15:10.714Z
2012-12-01T00:00:00.000
{ "year": 2012, "sha1": "c53fc67ce7bb5b3c0eee0a75522774a9ace22b99", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1365-2982.2012.01991.x", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "c53fc67ce7bb5b3c0eee0a75522774a9ace22b99", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221132081
pes2o/s2orc
v3-fos-license
Design, Development and Performance Evaluation of Nano Robot in Traditional Siddha Medicines for Cancer Treatment This paper reports a design and fabrication of a functional nano robot “Sootha Vennai Parpamcutâvanai – Parpam – SVP” of 150-200 nm size which behaves more or less like a human physician in nano form inside the body. This nanorobot is a medicine which has artificial intelligence i.e. the distinguishing ability, decision making ability and some basic behavioral properties such as target detection ability, self-propelling capability and handling the infection in the case of breast cancer. This nanorobot can also aid in cancer therapy, site-specific or target drug delivery, and tissue repair. Introduction Research on nanorobotics has been getting immense attention in the last few decades [1]. Nanorobotics is an emerging field and it deals with the design, manufacturing, programming, and control of robots of nanoscale [2]. Nanorobotics is a multi-disciplinary subject that involves material science, biomedical engineering, artificial intelligence and clinical medicine dealing with nano scale things at molecular level [3]. It has been suggested for an inter-disciplinary bio-inspired approach to be used to design and fabricate nanorobot [4]. Nanorobotics research has been a challenge to engineering, life sciences, and medicine in developing a fully functional or practical nanorobot for biomedical applications [4]. The main challenge is using traditional engineering approaches to fabricate nanorobots. Many researchers believe that the medical field would be the primary beneficiary and the applications of practical nanorobotics in the medical field would lead a paradigm shift from treatment to prevention [1,5]. Most of current research discusses nanorobotics as a theoretical or hypothetical nanotechnology engineering concept. So far, there is no universally accepted design for practical nanorobots [3]. However [4,6] have described some key components of a practical nanorobot. In recent years, research interest in nano-bio formulations and bio nano robots has been growing. As the name suggests, both nano bio formulation and nano robot have dimensions of nanoscale [5], with sizes comparable to bacteria. Due to their small size, nanorobots can directly interact with cells, and even penetrate into them, providing direct access to the cellular machinery [4]. Some characteristics of a robot include: actuation, sensing, propulsion, intelligence, swarm behaviour, manipulation, signalling, or information processing at the nanoscale [2,7]. If a nano formulation has few or most of these characteristics, then we may consider that nano formulation to be a nano robot. In comparison to conventional medicines, nano robots have a number of advantages because of its size. Nanorobots have the capability to sense and act in microscopic environments [8]. In the biomedical field, some current research objectives of nanorobotics include early diagnosis of cancer, neutralisation of viruses, target drug delivery, monitoring and treatment of diabetics, and precise and incision less surgery [4,9]. Long term clinical studies concerning the fate of many nanomaterials in vivo have not been conducted and the fate of these materials remains unknown. Siddha -cittã medicine [10] is one of the ancient, traditional systems of medicine originated in India. 
It is particularly prevalent in the southern part of India, especially in the state of Tamil Nadu, although it is also practised in some places across Asia, including Sri Lanka, Malaysia and Singapore. The Siddha system has been included under the AYUSH Ministry, Government of India [11]; AYUSH stands for Ayurveda, Yoga & Naturopathy, Unani, Siddha and Homeopathy, the recognised forms of traditional medicine. There are more than 120 books on Siddha medicine written in the Tamil language [12]; printed versions of the palm-leaf literature are available at the Oriental Manuscript Library, Madras University, and have been published by Thamarai (Tāmarai) Publishers. The ancient literature on Siddha medicines has identified a number of nano- and organo-metallic compounds [13], such as Vanga Vennai (vaṅka vēṇṇaiy) [14], and nano-liposomal preparations [15] such as Rasagandhi Mezhugu (ra-kenti-meluku) [16] and Gowsigar Kuzhambu (Kaaucikar Kulampu) [17]. It is believed that the first ever documented nano formulation was given by the great Siddha Agastiyar (akastiyar) in his book Agasthiar Paniran Ayiram (agasthiar paṇṇīrāyiram). The ancient literature has also identified many processes used to prepare the nano compounds, with particle sizes varying from 100 to 200 nm [18]. In the Siddha system of medicine, all nano-bio formulations or medicines are given to patients on the basis of body mass, at a rate that depends on the formulation: some formulations are given at 1/2 mg per kg of body weight, some at 1 mg, and others at 2 mg per kg of body weight. For example, a 65 kg patient will be given 65 mg/dose/delivery of a nano metallic/mineral sulphide (Senthuram, centūram) and about 120 mg/dose/delivery of a nano metallic/mineral oxide (Parpam, parpam). In addition, the degree of intensity of the disease decides the dose of the drug, the supporting drug, and the carrier; the typical carriers are ghee, honey and castor oil. There are more than 330 mercury-based organo-metallic formulations in the 120 available works of Siddha literature, and among these a few nano materials with robotic behaviours have been identified. We decided to experiment with Sootha Vennai Parpam [22][23][24][25][26][27], classified as a Siddha Sastric drug by the AYUSH Ministry, Government of India, because this single compound has the ability to treat more than 43 prime diseases of different natures. In this paper, we focus on the design and development of the biologically inspired nano robot SVP, a mercury-based nano bio material. Some typical application scenarios, such as targeted drug delivery (the ability to detect and distinguish the target; self-propelling capability), were developed and studied, which will provide a stepping stone for more research into nanomedical applications. Materials and Methods There are a number of techniques/methodologies available for synthesizing nanomaterials in the Siddha system of medicine. They include: 1. a combination of grinding and sublimation; 2. a combination of grinding and heat treatment; and 3. grinding and treatment with acids. The main instrument used to produce the nano robot was the "kalvam" (Figure 1(a)), made out of hard granite; the standard dimensions for the kalvam recommended in the Siddha literature are 390 mm × 300 mm × 150 mm [28]. Specially made earthen clay pots (Figures 1(b) and 2) with the highest possible density and low porosity were also used; the standard clay pot dimensions are 270 mm × 285 mm. In this study, the traditional method used to synthesize the nanorobot SVP was the combination of grinding and sublimation (Figure 1).
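Before turning to the fabrication itself, the body-mass dosing rule quoted above can be made concrete with a minimal illustrative sketch. The function name and the rate table below are assumptions for illustration only (roughly 1 mg/kg for a senthuram and about 2 mg/kg for a parpam, consistent with the 65 mg and approximately 120-130 mg examples in the text); they are not a prescription from the source.

```python
# Hedged sketch of the per-kilogram dosing convention described above.
DOSE_RATE_MG_PER_KG = {
    "senthuram": 1.0,   # nano metallic/mineral sulphide, ~1 mg per kg
    "parpam": 2.0,      # nano metallic/mineral oxide, ~2 mg per kg
}

def single_dose_mg(body_mass_kg: float, formulation: str) -> float:
    """Return one dose in mg for the given body mass and formulation class."""
    return body_mass_kg * DOSE_RATE_MG_PER_KG[formulation]

# Example from the text: a 65 kg patient receives about 65 mg of a senthuram per
# dose; 2 mg/kg gives 130 mg for a parpam, close to the ~120 mg quoted above.
print(single_dose_mg(65, "senthuram"))   # 65.0
print(single_dose_mg(65, "parpam"))      # 130.0
```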
The fabrication of the nanorobot SVP was carried out in two phases. In the first, preparatory phase, two ingredients were used to extract mercury. Good-quality cinnabar (mercury sulphide), 700 g, was procured from a mineral supplier and made into a fine powder using the traditional grinding stone. The second ingredient was Plumbago zeylanica [28], which was collected from mountain areas; the entire plant was cleaned, dried and powdered (1400 g). The mercury for the SVP was extracted from the cinnabar by a distillation process using a sublimation chamber, Thiruneelakanda Valai (tirunīlakaṇṭavālai) (Figure 2), as suggested by Soothamuni (cētamuni). As shown in Figure 2, two earthen vessels made out of clay were placed one over the other. In the lower vessel, 1400 g of dried Plumbago zeylanica [28] was taken along with 700 g of cinnabar: first, nearly half of the herbal powder was spread evenly at the bottom of the earthen vessel; the powdered cinnabar was then spread over the herbal powder; and finally, the remaining herbal powder was spread on top. Note that the inner side of the upper earthen vessel had been coated with a herbal extract of Piper betle [28] and dried in sunlight three times to increase the surface tension of the top surface, which can collect and hold the sublimated mercury. The mouths of both earthen vessels were joined together after smoothening to avoid any air leakage; the vessels were fitted using clay and cotton, and seven layers of sealing were applied to obtain the best possible seal. The sublimation chamber was kept on a lotus flame (a flame resembling a lotus in appearance, generated using three pieces of neem wood, 30 cm in length and 1 inch in thickness) for 24 hours. During the burning process, thick wet cotton was placed on the top surface of the upper earthen vessel and its wetness was maintained for 24 hours using water; this keeps the upper earthen vessel cooler than the lower one and thus drives the condensation. The chamber was then cooled to room temperature and carefully opened. The low-density sulphur vapour escapes through the micropores of the upper clay pot, while the high-density mercury bonded with carbon is deposited as a layer on the inner surface of the upper earthen vessel. The sublimated material was carefully collected using a brush made out of pig's hair. By applying pressure, followed by a water wash, the bonded carbon was removed from the compound and the mercury was separated. In the second phase, the fabrication of the nanorobot was carried out. The ingredients used were: 1. distilled mercury, 350 g; and 2. leaves of Azadirachta indica, Q.s. (quantity sufficient).
Step 1: Arrange an incinerator facing the north-eastern direction. The state of Tamil Nadu in India receives a cold breeze from the north-east, so this orientation was chosen to obtain natural ventilation.
Step 2: Keep the purified mercury in a large earthen vessel.
Step 3: Spread the purified leaves of Azadirachta indica over the purified mercury in that earthen vessel.
Step 4: Fill the vessel with water up to the level of its widened mouth.
Step 5: Place the earthen vessel inside the incinerator and maintain the flame constantly for 24 hours.
Step 6: Continue the burning process until the water has fully evaporated, and collect the mercury from the bottom of the vessel.
Step 7: Repeat the process in a new earthen vessel by the same method.
Step 8: Repeat the same process for 41 days using those two earthen vessels.
Step 9: After 41 days, a reddish-orange coloured drug will be obtained. Store the drug properly in a hermetically sealed container. Measurement The vehicle: SVP, 240 mg, was mixed with cow ghee and delivered on an empty stomach after sunrise, when the breath runs through the right nostril. Before delivering the SVP, 25 ml of thick coconut milk extract has to be taken, and after the delivery another 25 ml of coconut milk has to be taken, to ensure that the SVP particles do not stay in the mouth and reach the stomach. Nano Robotic Management & Outcomes A. The patient was administered SVP, 120 mg/dose, with 10 ml ghee and 100 ml coconut milk extract once a day for 10 days. B. The patient was administered SVP, 120 mg/dose, with 10 ml ghee and 100 ml coconut milk extract twice a day for 10 days. C. The patient was administered SVP, 120 mg/dose, with 10 ml ghee and 100 ml coconut milk extract twice a day for 28 days. Results The basic objective of the research was to design, produce, certify and test a functional nano robot. After the nano robot, with an average size of 200 nm, had been successfully produced, the robotic properties of sootha vennai were validated using the following procedure. 1. A patient was chosen, one with a cancer tumour on the surface of the breast. 2. The nano robot was delivered using a self-emulsification drug delivery mechanism [30]. 3. It demonstrated the ability to detect the tumour on the surface of the breast. 4. The robot was able to propel itself without any external field support. 5. A temperature rise in the tumour was detected, indicating a localized working area of the nano robot and hence the capabilities of self-propelling, detecting the location, and distinguishing cancer cells (indicating intelligence). 6. It reached the target, the cancer tumour, and brought down the size of the tumour by one fifth in 20 days' time. 7. The depth of the tumour started reducing and the pus formation stopped completely. 8. The patient gained enough strength to move around and manage herself. The particle size of the above formulations plays an important role: the smaller the size of the formulation, the higher the efficacy. Until early 2011, it was not well known that a section of Siddha formulations were of nano size [31]. When the particle sizes of the four materials Iya Veera Senthuram (ayavīra centūram) [32], Sandamarutham Senthuram (caṇṭamāruta centūram) [33], Rasa Sunnam [34] and Rasa Senthuram (raca centūram) [35] were measured, the atomic force microscopic images gave surprising results: the particle sizes of these materials used by the Siddha system for medical applications varied between 100 and 200 nm (Figure 6). The bright-field AFM micrograph in Figure 3 shows particles having an average diameter of around 200 nm. AFM microstructural observation from various regions shows that the particles are in the range of 100-250 nm, and most of the particles are measured to be less than 200 nm. Discussion While the scientific community still faces grand challenges in bioengineered nanorobotics for cancer therapy [36], the people of ancient India appear to have had a strong understanding of nanotechnology. Whereas current science and technology approach the design and development of nanorobots by designing power sources, energy-harvesting systems, power conditioners, sensors and actuators, embedded control, communication, and so on at the nanoscale, the ancient scientific community of India (the siddhas) approached the same requirements in exactly the opposite direction, by designing and synthesizing molecules with robotic behaviour and artificial intelligence.
The selection of mercury among all the elements present in the periodic table, and the selection of the leaves of Azadirachta indica out of the few thousand species in the plant kingdom of the southern peninsula of the Indian subcontinent, is very surprising. Each and every aspect of the design and fabrication, starting from conditioning the clay for fabrication of the processing earthen vessel, mercury distillation from cinnabar, choice of materials, synthesis procedure, and so on, is scientifically well defined. This study threw new light on the role of elements such as mercury in human metabolic activity, as well as on the significance of elements such as arsenic and lead in their nano oxide and sulphide forms, which in turn indicates their therapeutic significance. Biological nano robotic systems do exist in nature [5], and our study supports their existence. If researched properly, Siddha medical science might be able to generate basic guidelines for drug agencies across the world on the usage of nano bio materials and the metabolic activities of metals and minerals in their nano forms, which could lead to treatments for some of the most complex health challenges that mankind has faced across the world for decades. In cancer therapy, the basic requirements of a nanorobot are: 1. the ability to carry a payload, essentially a drug; 2. active movement to a specific site in the body; 3. attachment to the cancerous cells; and 4. release of the payload locally upon recognition of the binding event. This type of "targeted" therapy has so far eluded investigators, whose present goal is to develop "dumb" nanocarriers, which may attach to cancerous cells by chance [4]. Nanoparticle drug delivery systems have several clinical advantages that make them attractive candidates for the development of a nanorobot core. First, nanoparticle drug delivery systems have clinically been shown to prevent rapid renal clearance and to prolong the plasma half-life of complexed or encapsulated drugs [36]. Second, nanoparticles are often more easily endocytosed, which leaves less free drug available to "normal" cells, reducing harmful side effects [36]. Third, the high surface-to-volume ratio of nanoparticles allows for increased drug loading compared with micron-sized particles [37]. Finally, nanoparticles have demonstrated the ability to passively penetrate into tumour tissues. Conclusions The SVP nanorobot was successfully synthesized, characterized and delivered, and demonstrated its capability in handling a stage 4 breast cancer tumour. The emulsified SVP, delivered orally, propelled itself, reached the target, and the target tissue was repaired. The pus formation stopped completely within 4 weeks and the wound started healing. The SVP should be evaluated further for its ability to carry a payload, to handle microbes, and for other indications. Biographies Kaviarasu Balakrishnan received an M.Tech in Electronics and Communication Engineering from Pondicherry University, Pondicherry, India, in 2000. He has 24 years of experience in research and academia. He presently works as the Head of Research at Manushyaa Blossom Pvt Ltd, a Chennai-based Siddha nano bio pharma company. He founded the Dr Krishanmoorthy Foundation for Advanced Scientific Research, Vellore, a centre for nano science, in 2011. He has designed and synthesized more than 20 nano bio materials and discovered more than 300 processes of nano medicine synthesis using Siddha medicine principles. He belongs to a tradition of Indian Siddha system of medicine practitioners going back more than 700 years.
He specializes in child immunisation using nano formulations. His research interests include nano robotics, nano drug design, synthesis, delivery, and nuclear physics.
2020-07-30T02:03:12.897Z
2020-07-09T00:00:00.000
{ "year": 2020, "sha1": "803f46c714630f62758fbe82ede89de6e0ccb94c", "oa_license": null, "oa_url": "https://journals.riverpublishers.com/index.php/JICTS/article/download/4323/3085", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "abd143fa181ab7b56e0e532fd835615af022c669", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
119476496
pes2o/s2orc
v3-fos-license
Transverse coherent instability of a bunch in a rectangular potential well A theory of the transverse instability of a bunch in a rectangular potential well is developed. A series of equations adequately describing the instability is derived and solved both analytically and numerically. The dependence of the instability increment and threshold on the bunch factor is investigated for various beam coupling impedances. The theory is applied to the Fermilab Recycler Ring. I. INTRODUCTION The Fermilab Recycler is an antiproton storage ring with stochastic and electron cooling [1]. Transverse resistive wall instability is observed in the ring at intensities of several × 10^11 antiprotons and a relatively small phase volume of the bunch. A digital instability damper with a high-order filter is used, increasing the achievable phase density by a factor of about 2 [2,3]. A distinctive feature of the Recycler is the RF system, which can create a series of rectangular pulses (other possibilities are not considered here) [1]. A bunch several microseconds long is kept in an almost rectangular potential well which arises between two pulses of alternating polarity (barriers). The synchrotron frequency is very low in such a bucket (typically several Hz), with a 100% spread. The first theoretical analysis of resistive wall instability in the Recycler was published in Ref. [3]. It was shown that the dependence of the instability decrement on the bunch factor is rather moderate, and a coasting beam model was used to find the instability threshold. A more detailed investigation was performed in Ref. [4], where several impedances of different types were examined, including space charge and instability damper contributions. The problem was treated in terms of an effective impedance. It was shown that its real part (which is responsible for the instability) increases at the bunch squeezing no faster than its imaginary part; the permissibility of a coasting beam model for calculation of the instability threshold was confirmed by this. However, the dependence of the increment or the effective impedance on the bunch factor was not established thoroughly in the mentioned articles. The basic challenge is the numerical calculation of high-order eigenvalues of large matrices. The alternative method developed in this paper does not require the use of such cumbersome matrices and allows one to investigate the increment and threshold of very high eigenmodes. This is especially important for systems with an instability damper, where these modes are most unstable. (* Electronic address: balbekov@fnal.gov) We will consider a single bunch, neglecting the penetration of particles into the barriers. Betatron oscillations are taken to be linear, because the nonlinearity of the external field is very small in the Recycler [1], and the nonlinearity of the space charge field does not affect the transverse oscillations of the beam center [5]. II. BASIC EQUATIONS Let us consider the transverse dipole moment of a beam in its rest frame: D(θ) = Σ_k D_k exp(ikθ), where θ is the longitudinal coordinate (azimuth), and the dependence on time is presumed to be given by the factor exp(−iωt). Then the Fourier coefficients D_k satisfy the following series of equations [4,6]: where r_0 = e^2/mc^2 (about 1.535 × 10^−16 cm for protons), Z_0 = 4π/c ≃ 376.7 Ohm, N is the beam intensity, and ω_0 and Q_0 are the central angular velocity and betatron tune, respectively.
The factors Z_l(ω) can be represented in terms of the transverse beam coupling impedance in the laboratory frame, or in terms of the corresponding wake field [7]: The general formula for C_{k,l}(ω) is: where ε is the longitudinal action, F(ε) is the corresponding normalized distribution function, Ω(ε) is the synchrotron frequency, ν = Q_0 − ξ/η, and ξ and η are the machine chromaticity and slippage factor, respectively. The form factors I_{m,k}(ν, ε) are the coefficients of the expansion of a planar wave in a series of multipoles: where the particle azimuth θ should be presented as a function of the synchrotron action and phase. III. RECTANGULAR POTENTIAL WELL Now we have to apply these general formulae to particles in a rectangular potential well of length 2πB, where B is the bunch factor. Then Eq. (4) gives: and the synchrotron frequency is: where p is the particle momentum in the laboratory frame and p_0 is the central momentum of the beam [4]. Therefore, one can represent the coefficients (3) of series (1) in the form: where F(p) is the normalized distribution function in momentum. Because the factors I_{m,k} do not depend on the action now, another form of series (1) can be proposed: where Z_{m,n}(ω) = Σ_k Z_k(ω) I*_{m,k}(ν) I_{n,k}(ν). This form will be largely used for the analysis. Note that the variables X_m can be treated as amplitudes of the longitudinal multipoles in the bunch spectrum. IV. ZERO SLIPPAGE LIMIT As mentioned above, the synchrotron frequency is extremely small in the Recycler Ring. Therefore, the limit Ω → 0 can be considered as a reasonable first approximation. The corresponding limiting process should not be performed by decreasing the distribution width, because the effect of chromaticity would then also be lost. It is necessary to proceed to the limit η → 0, taking into account that ν → ∞ in this case. Then the dispersion equation following from series (1) is: where Q(p) = Q_0 + ξ(p − p_0)/p_0 is the momentum-dependent betatron frequency [4]. The equation includes the effective impedance, which is the M-th eigenvalue of the series of equations: where the Fourier coefficients of the normalized linear density of the beam enter: It can also be shown that for the rectangular potential well: This relation allows one to obtain another form of series (12), corresponding to Eq. (8): One more form can be obtained by an inverse Fourier transformation of series (12), resulting in the integral equation: where W(θ) and Z(ω) are connected by the Laplace transformation (2). A similar equation was applied earlier for an analysis of resistive wall instability in the Recycler [3]. V. EFFECTIVE IMPEDANCE Several specific examples of effective impedance are considered below. Series (15) is used, being the most convenient for numerical calculation. Its main advantage is that the matrix Z_{m,n} is nearly diagonal, which makes it possible to calculate eigenvalues by use of rather small fragments of it. Note that the effective impedance does not depend on which value of ν is used for calculation of the matrix by Eq. (10), because a change of ν produces a unitary transformation of the matrix. In fact, all of the calculations below presume that ν = 0. In addition, it is taken into account that Z_{m,n} = Z_{−m,n} = Z_{m,−n}, which reduces series (15) to the form: Let us consider the wake field: and the corresponding impedance in the frequency domain: (the symbol 'hat' marks the laboratory frame). An analytical solution of the problem is possible in this case. In terms of Eq.
(16), the eigenfunctions and the corresponding eigenvalues are: It is most remarkable that these eigenvalues do not depend on the bunch factor. The eigenvalues of series (17) with positive real part are represented in Fig. 1 at Z (exp) = 1, ω f = ω 0 , being calculated by the following method. At any B, first 20 of them are obtained with the help of 100 × 100 matrix Z m,n starting from 0-th multipole. Each other point is the first eigenvalue of a 10 × 10 matrix starting from multipole 50, 100,..., 2000. The number M is defined as the index of the highest power multipole in the spectrum of the eigenmode. Note that only odd M appear in Fig. 1 because real parts of the eigenvalues are found to be negative otherwise. Many symbols in the figure overlap confirming that the eigenvalues do not depend on B. According to Eq. (22), all of the eigenvalues should be located on the solid line plotted in Fig. 1. This is so indeed, and there is a perfect agreement of numerical and analytical solutions at the relation: A very important conclusion follows from these results: any eigenmode includes a rather small number of multipoles, and reasonable accuracy can be reached by using 10 × 10 or an even smaller fragment of the matrix Z m,n . The conclusion will be applied below to more complicated impedances when analytical solution is not achievable. B. Resistive wall impedance The same technique is used in this subsection to calculate the effective resistive wall impedance: The results at Z (rw) = 1 are shown in Fig. (2) and are fitted by the formula: At B = 1, the fit coincides with analytical solution of Eq. (12) if relation (23) is also applied. At arbitrary B and M >∼10, rather good agreement is provided by the expression: However, the agreement is worse at lower M . In particular, better approximation for the lowest unstable mode is: where K is the minimal integer exceeding Q 0 . C. Resistive wall + first order damper At imaginary Z (exp) , impedance (20) represents the simplest model of an instability damper with first order RC filter (imaginariness is actually provided by appropriate arrangement of pickup, kicker, and delay line). We consider it jointly with resistive wall contribution representing the full impedance in the form: The effective impedances are calculated with the help of 10 × 10 matrix starting from M -th multipole at Z (rw) = 1, ω f /ω 0 = 200 and several G. Their real parts are shown in Fig. 3 in the area of rather large M , where positive values appear for the first time. Fits obtained using Eq. (26) and (28) are plotted as well, providing very good agreement for positive values. D. Resistive wall and high order damper The impedance is considered in this subsection. At non-integer ω s /ω 0 , the addition to resistive wall part can be interpreted as a simple model of digital damper with sampling frequency ω s and high order filter of bandwidth ω f [8]. The case G = 5/3, ω s /ω 0 = 588.1, ω f = ω s is plotted in Fig. 4. Again, the numerical values are fitted very well by Eq. (26). Similar results are obtained at ω f < ω s as well. VI. INCREMENT OF THE INSTABILITY According to Eq. (11), at relatively small momentum spread the instability increment is: As shown in the previous section, its dependence on bunch factor is rather diverse. For the most unstable modes the obtained results can be summarized as: • Exponential wake: no dependence on B; • Resistive wall: approximately ∝ B −1/3 ; • Resistive wall + high frequency damper: ∝ B −1 . 
In the last case the increment depends on local beam density only, which means that the instability is driven by a short-range interaction. This fact can be explained by taking into account that the bunch spectrum includes a relatively small number of high order multipoles in this case, concentrating near M ∼ 2Bω f /ω 0 . Another important point is an intimate connection of the multipoles and space harmonics due to the relation θ = 2B|φ|. As a result, the beam spectrum of any unstable mode in laboratory frame includes frequencies ∼ ω f ± ∆ω where ∆ω <∼ ω 0 /B arises because of the bunching. At ω f ≫ ω 0 /B, only high-frequency harmonics are present in the spectrum. Typically, they are rather quickly damped out, resulting in a suppression of long-range interaction. Similar reasoning could be applied to any high order mode (though it is unobservable in practice). Being consistent with Eq. (26) for resistive wall instability, this statement contradicts -from the first glance -the exponential wake effective impedance (22) because the last does not depend on the bunch factor. In fact, there is no contradiction here because the statement is related to high modes, i.e. to high frequency only. Then it follows from (20) and (22): in total agreement with Eq. (26). VII. THRESHOLD OF THE INSTABILITY Frequency independent space charge impedance Z = −iZ (sc) should be taken into account when the instability threshold is calculated, because it usually produces a determining effect on Landau damping. Using Eq. (18), it is easy to verify that the inclusion provides an additive contribution −iZ (sc) /B to all diagonal elements of the matrix Z m,n , i.e. to all its eigenvalues. For example, threshold of the lowest (most unstable) mode of resistive wall instability should be determined from Eq. (11) where and minimal K > Q 0 is applied. Absence of slippage factor in this case is actually immaterial, because its contribution to frequency spread would be small in comparison with chromaticity contribution. However, the slippage can be important for the analysis of a wide-band damper, because fast-modulated eigenmodes with dominant contribution of higher multipoles are most unstable in this case. Fortunately, this drawback can be easily remedied due to the narrowness of the eigenmode spectrum discussed above. It is sufficient to separate corresponding central multipoles n = ±M in Eq. (8) and to retain them in all following transformations. Next, taking into account also Eq. (6) and relation F (p − p 0 ) = F (p 0 − p), the following equation can be obtained instead of Eq. (11): (32) This expression can be represented in the form very similar to the coasting beam dispersion equation: where ω r (p) is angular velocity of a particle with momentum p in the laboratory frame,ω = ω + κω 0 , κ = M/2B . An appropriate form of the effective impedance should be used in this equation. For example, substitution of Eq. (31) allows to find threshold of resistive wall instability. When the higher modes are considered, Eq. (26) should be used resulting: where Z(ω) is total beam coupling impedance, including space charge contribution, resistive wall, damper, etc. Gaussian distribution function F with dispersion σ is considered below. Then all the solutions of Eq. (33) are stable (Im ω ≤ 0) at the condition: Re∆ω where ∆ω is the impedance produced frequency shift: δω is the r.m.s. frequency spread due to the momentum spread: and function f is represented by solid line in Fig. 5. 
A simple fit is also plotted providing rather good approximation at x > 3. The space charge impedance often dominates among others, so that imaginary part of the total impedance significantly exceeds its real part. If the beam transverse distribution function is Gaussian, the statement can be written in the form: where and S ⊥ is transverse normalized r.m.s. phase volume of the beam [5]. Then stability condition (35) can be represented in the form: is treated as longitudinal r.m.s.phase volume of the bunch, and C is the machine circumference. VIII. EXAMPLE: FERMILAB RECYCLER We continue the analysis taking the Fermilab Recycler Ring as an example with the following parameters: • ω 0 = 2π × 89.86 kHz • γ = 9.526 Then Eq. (40) and (38) give the condition of stability: where 95% emittances are used in the definition of space phase density D. Some special cases are considered below: A. Resistive wall impedance This impedance is the main source of instability in the Recycler. Characteristic parameter Z (rw) ≃ 18 MOhm/m [2,4] provides for the lowest (most unstable) mode: Because slippage is negligible in this case, Eq. (42) gives: It means that the beam becomes more stable at the squeeze, though the dependence is very weak, and estimation D <∼ 0.14 | ξ | ≃ 0.3 ÷ 0.8 is valid at S ⊥ ∼ 1 πmm-mrad and any reasonable B. IX. CONCLUSION It is shown that the transverse instability of a rectangular bunch can be described by the same dispersion equation as a coasting beam, if an effective beam coupling impedance is used instead of the standard one. Several methods to calculate the effective impedance are considered: integral equation for the beam dipole moment, corresponding series of equations for Fourier harmonics, or equivalent series for amplitudes of multipoles. The last method is most universal and convenient for a numerical solution because corresponding matrix is approximately diagonal. This property allows to use relatively small fragments of the matrix to calculate its eigenvalues including high order ones, starting from desirable number and scanning step by step the whole matrix. This also means that any eigenmode includes a rather small number of multipoles and has a narrow-band spec-trum. In particular, it follows from this that the spectrum of high order eigenmodes includes only high frequencies which typically damp sufficiently rapidly to exclude long-range interaction in the beam. Therefore the effective impedance of these modes is proportional to B −1 , and high-frequency collective effects depend only on local linear density of the beam (however it is important that the density is constant within the whole bunch). Dependence of the effective impedance on B is diverse for lower modes, but typically it increases at the bunch squeezing. For example, effective resistive wall impedance ∝ B −1/3 for the most unstable mode. Being applied to the Fermilab Recycler, the theory gives achievable beam density summarized in the table below (see Eq. (42) for the units). Numbers in brackets are achievable beam intensities in units of 10 10 at the phase volume 4S × 6S ⊥ = 70 eV-s × 7 π-mm-mrad. High frequency related results should be valid for multi-bunch regime as well, restricting parameters of any bunch. However, they cannot be applied to very short bunches when penetration of particles into the barriers becomes essential. Beam shaping before extraction ("mining") is an example of such a regime, when a multipulse RF wave form is generated without any space between the pulses. 
Then the potential wells are triangular, and the results break down. The possibility cannot be ruled out that the threshold decreases and an instability appears during mining, an effect which could explain the slow transverse emittance growth observed in the Recycler at the mining [9].
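To make the numerical procedure of Sec. V concrete, a minimal sketch of the fragment-scanning idea is given below: because the multipole matrix Z_{m,n} is nearly diagonal, high-order eigenvalues can be estimated from small diagonal fragments instead of diagonalizing the full matrix. The synthetic banded matrix used here is only an illustrative stand-in under that assumption; it is not the actual impedance matrix of the Recycler.

```python
# Hedged sketch: estimate high-order eigenvalues of a nearly diagonal complex
# matrix from small diagonal fragments, as described for the matrix Z_{m,n}.
import numpy as np

def leading_eigenvalue(fragment):
    """Eigenvalue of a complex matrix fragment with the largest real part."""
    vals = np.linalg.eigvals(fragment)
    return vals[np.argmax(vals.real)]

def scan_fragments(Z, size=10, starts=range(50, 2001, 50)):
    """First eigenvalue of each size x size fragment starting at multipole M."""
    out = {}
    for M in starts:
        if M + size <= Z.shape[0]:
            out[M] = leading_eigenvalue(Z[M:M + size, M:M + size])
    return out

# Synthetic nearly diagonal example: diagonal growing ~ sqrt(M), weak coupling.
N = 2100
M_idx = np.arange(N)
Z = np.diag((1 + 1j) * np.sqrt(M_idx + 1.0))
Z += 0.05 * (np.diag(np.ones(N - 2), 2) + np.diag(np.ones(N - 2), -2))
print(scan_fragments(Z, size=10))
```

The same loop structure reproduces the scheme quoted in Sec. V (a 100 × 100 block for the first modes, then 10 × 10 fragments starting at multipoles 50, 100, ..., 2000) once the physical matrix elements are supplied.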
2019-04-18T13:08:47.904Z
2006-04-01T00:00:00.000
{ "year": 2006, "sha1": "9dc07c5890e05eebd6bc376142b5b1251eaf24f1", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevSTAB.9.064401", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "db00f2ba58f06bd291480a2658821c24ed58db89", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
246398276
pes2o/s2orc
v3-fos-license
Application of Logistic Regression and Decision Tree Models in the Prediction of Activities of Daily Living in Patients with Stroke An improvement in the activities of daily living (ADLs) is significantly related to the quality of life and prognoses of patients with stroke. However, the factors predicting significant improvement in ADL (SI-ADL) have not yet been clarified. Therefore, we sought to identify the key factors affecting SI-ADL in patients with stroke after rehabilitation therapy using both logistic regression modeling and decision tree modeling. We retrospectively collected and analyzed the clinical data of 190 patients with stroke who underwent rehabilitation therapy at our hospital between January 2020 and July 2020. General and rehabilitation therapy data were extracted, and the Barthel index (BI) score was used for outcome assessment. We defined SI-ADL as an improvement in the BI score by 15 points or more during hospitalization. Logistic regression and decision tree models were established to explore the SI-ADL predictors. We then used receiver operating characteristic (ROC) curves to compare the logistic regression and decision tree models. Univariate analysis revealed that compared with the non-SI-ADL group, the SI-ADL group showed a significantly shorter course of stroke, longer hospital stay, and higher rate of receiving occupational and speech therapies (all P < 0.05). Binary logistic regression analysis revealed the course of stroke at admission (odds ratio (OR) = 0.986, 95%confidence interval (CI) = 0.979–0.993; P < 0.001) and the length of hospital stay (OR = 1.030, 95%CI = 1.013–1.047; P =0.001) as the independent predictors of SI-ADL. ROC comparisons revealed no significant differences in the areas under the curves for the logistic regression and decision tree models (0.808 vs. 0.831; z = 0.977, P = 0.329). Both models identified the course of disease at admission and the length of hospital stay as key factors affecting SI-ADL. Early initiation of rehabilitation therapy is of immense importance for improving the ADLs in patients with stroke. Introduction Stroke is a disease with focal neurological deficits caused by sudden cerebral blood circulation abnormalities [1]; it is associated with high mortality and disability rates. Although stroke-associated mortality has decreased with the improvement of medical technology, the number of patients with poststroke motor, sensory, speech, cognitive, psychological, and other dysfunctions has increased sharply [2]. In China, approximately 2 million patients are diagnosed with new-onset stroke every year [3]. Approximately 75% of these stroke survivors have varying degrees of dis-ability; among these, more than 40% are severely disabled [4,5]. This not only has a marked impact on the activities of daily living (ADLs) of patients but also places a heavy burden on their families. Improvement in ADLs is significantly related to the quality of life and prognoses of patients with stroke. Previous studies have reported that rehabilitation therapy can improve limb function and ADLs, thereby helping patients return to normal life [6,7]. In clinical practice, we found that some patients with stroke showed a significant improvement in ADL (SI-ADL) after rehabilitation therapy [8,9], while other patients only showed a minimal improvement [10]. However, the factors affecting SI-ADL have not yet been clarified. 
Therefore, we sought to identify the key factors predicting SI-ADL in patients with stroke after rehabilitation therapy, using both logistic regression modeling and decision tree modeling. Materials and Methods 2.1. Patient Selection. Between January 2020 and July 2020, 190 patients with stroke underwent rehabilitation therapy at the Department of Rehabilitation of the Quzhou Affiliated Hospital of Wenzhou Medical University. We included patients with stroke, according to the diagnostic criteria adopted by the Fourth National Cerebrovascular Disease Academic Conference of the Chinese Society of Neurology in 1995 [11]. We included patients with stable vital signs or neurological deficit symptoms that no longer progressed after more than 48 hours, who had dysfunction, and who needed rehabilitation intervention. The exclusion criteria were as follows: (1) patients with severe cardiovascular, liver, kidney, digestive, and hematopoietic diseases that may endanger life; (2) patients with serious mental disorders; (3) patients with newly developed intracranial lesions or further aggravation of neurological deficits during hospitalization; (4) patients who refused continued rehabilitation; and (5) patients with incomplete clinical data acquired during hospitalization. We extracted data on demographics (age and sex), medical history, final diagnosis, course of stroke at admission, with or without rehabilitation therapy before admission, length of hospital stay, laboratory test results at admission, and medications used during hospitalization. This study was approved by the human ethics committee of the Quzhou Affiliated Hospital of the Wenzhou Medical University (LS2018023). Written informed consent was obtained at the time of admission. The clinical investigation was conducted in accordance with the principles of the Declaration of Helsinki. 2.2. Rehabilitation Therapies. All patients received exercise therapy and physical factor treatment. In addition, patients received one or more treatments specific to their dysfunctions. Exercise therapy included proper limb positioning, joint range-of-motion training, muscle strength training, turnover, transfer training, bridge exercise, sitting and standing balance training, and walking and up-and-down stair training, among others. Each treatment session lasted for 40 min and was conducted once a day. Physical factor treatment included neuromuscular electrical stimulation of the hemiplegic side. Different electrical stimulation sites were selected according to the patients' conditions. The commonly used stimulation sites included the deltoid, triceps brachii, extensor carpi longus radialis, quadriceps femoris, and tibialis anterior muscles. Each session lasted for 20 min and was conducted once a day. Occupational therapy included training in shoulder antexion, abduction, elbow extension, forearm rotation, wrist extension, and finger flexion and extension movements. It also included upper limb virtual games and task-oriented training (such as washing and dressing). Each treatment session lasted for 40 min and was conducted once a day. For speech therapy, the Schuell stimulation method was adopted to conduct progressive training in audiovisual understanding, retelling, oral expression, reading, and writing in a one-to-one manner. Each treatment session lasted for 40 min and was conducted once a day. Cognitive therapy included computer-assisted targeted training in attention, orientation, visual space, executive ability, memory, and logical thinking. 
Each treatment session lasted for 40 min and was conducted once a day. Swallowing therapy included guided lip exercises, ice stimulation of the oral and throat muscles, supraglottic swallowing, forced swallowing, empty swallowing, nodding swallowing, Mendelsohn swallowing and shaker manipulation, free drinking water training, and limiting the amount of one mouthful. Each treatment session lasted for 40 min and was conducted once a day. Acupuncture treatment included stimulation of the acupoints in the Yangming meridian of the upper and lower limbs during the period of soft paralysis. For spasms, the principle of "taking acupoints by antagonistic muscles" was adopted. The acupoints that were often stimulated included the hand Sanli, Waiguan, Hegu, Jianyu, Bige, Yanglingquan, Zusanli, Jiexi, Weizhong, and knee Yangguan. Acupuncture was administered using 1.5-2.0-inch No. 30 acupuncture needles. The needles were inserted for 20 min, once a day. Respiratory therapy included guided chest-expansion exercises, abdominal breathing training, and respiratory function improvement through the use of an incentive spirometer. For patients with foot drop and varus affecting the walking function, an orthosis was employed. We used a German Ottobock 50s1 ankle-foot orthosis for walking training. For patients with unrelieved poststroke shoulder pain after manipulation and physical factor treatments, the lesion site on the shoulder was examined under the guidance of color Doppler ultrasound, and local injections were administered. These comprised a 3 mL lidocaine hydrochloride injection, 1 mL compound betamethasone injection, and 0.9% sodium chloride injection. These injections were administered only once in selected patients. Outcome Evaluation. Data on the Barthel index (BI) scores at admission and discharge were extracted from the records. The BI scale assesses 10 ADLs, namely, feeding, bathing, grooming, dressing, bowels, bladder, toilet use, transfers, mobility, and stairs. Each item was scored 5, 10, or 15 points, and the total scores ranged from 0 to 100. A BI score of 60 points was chosen as the cutoff: a score ≥ 60 points indicated that the patient lived mostly or completely independently, while a score < 60 points indicated that the patient lived mostly or in complete dependence on the care of others [12]. In this study, SI-ADL was defined by an at least 15-point increase in the BI score at discharge. No significant improvement in the ADL (NSI-ADL) was defined by a less than 15-point increase in the BI score. Variables with P < 0:05 in the univariate analysis were further analyzed to develop the decision tree model. The model was established using the classification and regression tree method; the decision tree analysis was performed using the SPSS software (version 22.0). The decision tree grew "branches" by significance testing, with a split occurring at α = 0:05. The time limit specified that the minimum sample size of the parent node was 20, and the minimum sample size of the offspring node was 5. If the sample size on the node failed to meet this requirement, the node was considered a terminal node and was not segmented. Statistical Analysis. The SPSS 22.0 software (IBM) was used for data analysis. Continuous data are presented as frequencies and percentages. The two groups of measurement data are presented as means ± standard deviations or as medians (interquartile ranges, IQRs). 
The χ² test was used for the comparison of categorical variables, while the t-test or the Mann-Whitney nonparametric test was used for the comparison of continuous variables. Spearman correlation analysis was conducted to identify the correlations between the variables. Baseline variables with P < 0.05 in the univariate analysis were used to develop the binary logistic regression analysis model and the decision tree model, separately. Implementing the DeLong method, receiver operating characteristic (ROC) curves obtained from the logistic regression and decision tree models were then compared using the MedCalc 15.0 software (MedCalc Software, Mariakerke, Belgium). Statistical significance was set at P < 0.05. The BI scores at discharge of both patients with and without prior rehabilitation therapy showed significant improvement when compared with the corresponding BI scores at admission (all P < 0.001); however, the BI scores at discharge themselves did not differ significantly between these two groups (with and without prior rehabilitation therapy: 60 vs. 65 points, Z = −0.468, P = 0.639). At discharge, there was no significant difference in the proportion of patients with SI-ADL between these two groups (with and without prior rehabilitation therapy: 38.2% vs. 47.5%, χ² = 1.650, P = 0.199). During hospitalization, all patients received exercise and physical therapies. Therefore, exercise therapy and physical therapy were not included in the calculation of the rehabilitation therapy types. Spearman's correlation analysis revealed that the number of rehabilitation therapies received by patients during hospitalization was positively correlated with the course of stroke at admission (ρ = 0.197, P = 0.006) and the length of hospital stay (ρ = 0.277, P < 0.001) and negatively correlated with a cerebral infarction diagnosis (ρ = −0.248, P = 0.001). There was no correlation between the number of rehabilitation therapies and SI-ADL (ρ = 0.088, P = 0.288). There were no significant differences in the lengths of hospital stay between the group with a BI score < 60 at discharge and the group with a BI score ≥ 60 at discharge (median: 21 vs. 26, Z = −0.799, P = 0.424). In addition, the group with a BI score ≥ 60 at discharge received significantly fewer rehabilitation therapy types as compared with the group with a BI score < 60 at discharge (1 vs. 2, Z = −4.727, P < 0.001). The group with a BI score ≥ 60 at discharge received speech, cognitive, swallowing, and respiratory therapies less frequently (all P ≤ 0.001; Table 1). Additionally, patients who received speech, cognitive, swallowing, and respiratory therapies had significantly lower BI scores than those who did not receive these four rehabilitation therapies (all P < 0.001; see Supplementary Table I). However, multivariate regression analysis revealed that none of these four therapies were risk factors for achieving a BI score ≥ 60 at discharge (see Supplementary Tables II and III). 3.3. Factors Associated with SI-ADL. Univariate analysis revealed that compared with patients with NSI-ADL, patients with SI-ADL had a shorter course of stroke at admission and a longer length of hospital stay and also comprised a higher proportion of those receiving occupational and speech therapies (all P < 0.05). There was no significant difference in the proportion of patients receiving cognitive, respiratory, and swallowing therapies between the NSI-ADL and SI-ADL groups (all P > 0.05; Table 2). Characteristics of Patients with SI-ADL Based on the Decision Tree Model.
The baseline variables with P < 0.05 in the univariate analysis were also used to develop the decision tree model to predict SI-ADL (Figure 2). The results showed that among the patients with a course of disease ≤ 100.5 days at admission, 52.8% had SI-ADL. Among the patients with a length of hospital stay > 15.5 days, 67.1% achieved SI-ADL. Only 32.2% of the patients with a length of hospital stay ≤ 15.5 days achieved SI-ADL. Among the patients with a course of disease > 100.5 days at admission, only 8.7% achieved SI-ADL after hospitalization. Among the patients whose length of hospital stay was > 29.5 days, 30.0% achieved SI-ADL. Only 2.8% of the patients with a length of hospital stay < 29.5 days achieved SI-ADL.
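As a rough illustration of the modeling workflow described in the Methods (binary logistic regression, a CART-style decision tree with a minimum parent-node size of 20 and a minimum child-node size of 5, and comparison of the resulting ROC curves), the sketch below reproduces the same steps with scikit-learn. The original analysis used SPSS and MedCalc; the file name, column names, and data frame here are assumptions, and the DeLong test used for the AUC comparison is not built into scikit-learn and would need a separate implementation.

```python
# Hedged sketch of the SI-ADL modeling steps with scikit-learn (illustrative only).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# One row per patient; SI-ADL is defined as a BI gain of >= 15 points during the stay.
df = pd.read_csv("stroke_rehab.csv")                      # hypothetical data file
df["si_adl"] = (df["bi_discharge"] - df["bi_admission"] >= 15).astype(int)
X = df[["stroke_course_days", "hospital_stay_days"]]      # the two key predictors
y = df["si_adl"]

logit = LogisticRegression().fit(X, y)
tree = DecisionTreeClassifier(
    criterion="gini",            # CART-style splitting
    min_samples_split=20,        # minimum parent-node size, as in the paper
    min_samples_leaf=5,          # minimum child-node size, as in the paper
).fit(X, y)

print("logistic AUC:", roc_auc_score(y, logit.predict_proba(X)[:, 1]))
print("tree AUC:    ", roc_auc_score(y, tree.predict_proba(X)[:, 1]))
```

With the real data, the two AUCs obtained this way correspond to the 0.808 and 0.831 values compared in the study, and the fitted tree can be inspected to recover the 100.5-day and 15.5/29.5-day split points reported above.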
If the course of the disease is too long, even if they receive rehabilitation therapy again, the treatment effect may still be unsatisfactory. In addition, a large number of previous studies have shown that the types of rehabilitation therapies are positively correlated with an increase in ADLs [17][18][19][20]. Zhang and Zhang [21] divided 160 patients with acute stroke into two groups: 80 patients in the control group were treated with conventional medical drugs, while 80 patients in the study 5 Neural Plasticity group were treated with comprehensive rehabilitation therapy (such as exercise therapy, acupuncture, and traditional Chinese medicine). After 4 weeks, the limb motor ability and the BI scores of the patients in the study group were higher than those of the patients in the control group. Therefore, it is considered that comprehensive rehabilitation therapy can reduce the neurological deficit of patients, promote functional recovery of hemiplegic limbs, and improve ADLs. However, our study showed that patients who received more types of rehabilitation therapy did not have improved ADLs as compared with patients who received fewer types of rehabilitation therapies. This may be because the more the rehabilitation therapy administered to a patient, the more severe is the loss of basic neurological function, the longer is the course of the disease, and the longer is the hospital stay. It is worth noting that the condition of patients receiving comprehensive rehabilitation therapy still improved after treatment, and the two groups of patients had similar rehabilitation outcomes at discharge; this suggests that it is still valuable to provide comprehensive rehabilitation therapy and appropriately prolong the treatment time for patients with a serious condition and a long course of the disease. However, in the short term, it may not be possible to achieve a better recovery effect than in patients with milder symptoms. Based on univariate analysis, it was clear that the course of the disease and a previous implementation of rehabilitation therapy at admission had an impact on the improvement of ADLs. In addition, through a binary logistic regression analysis, we showed that the course of the disease at admission and the length of hospital stay are the key factors for significant improvement in the ADLs; i.e., early rehabilitation intervention for patients with stroke and prolonged duration of rehabilitation treatment can improve the ADLs. However, because logistic regression cannot quantify the variables that are meaningful for classification and the value of guiding the patients' treatment strategies in the clinic is limited, we further performed a decision tree analysis. The decision tree model is a reliable and effective analysis tool that can build an intuitive and understandable tree structure, quantifying the specific variables of certain prediction results and providing a basis for decision-making. Comparison of the ROC curves between the logistic regression model and the decision-tree model confirmed that the predictive value of the decision-tree model was not inferior to that of the logistic regression model; the decision tree could be used to formulate individualized rehabilitation strategies for patients with stroke. The first consideration was the course of the disease at admission. For patients in whom the course of the disease at admission was less than 100.5 days, the probability of SI-ADL after hospitalization for rehabilitation therapy for more than 2 weeks was 67.1%. 
However, the probability of SI-ADL was significantly lower in patients with a length of hospital stay of shorter than 2 weeks (only 32.2%). For patients in whom the course of the disease exceeded 100.5 days at admission, the probability of an SI-ADL was approximately 30.0% for those who were hospitalized for more than 1 month and only 2.8% for those who had been hospitalized for less than 1 month. This suggests that for patients with a long course of the disease (more than 3 months), the effect of short-term rehabilitation therapy is minimal. On the other hand, long-term inpatient rehabilitation therapy can significantly improve ADLs. Although the guidelines set the time of rehabilitation therapy for patients with stroke to within 48 hours to 2 weeks after the condition is stable, some studies have pointed out that the golden period of rehabilitation therapy is within 3 months after the stroke [22]. This is almost consistent with the 100.5-day disease-course node automatically defined in the decision-tree analysis in this study. Ballester et al. [23] also found that the slower rehabilitation effect with a longer course of the disease is due to the gradual decline in the patients' sensitivity to the treatment. Even more than a year after the stroke, the patients' nerves still had some plasticity. By formulating accurate plans and adopting continuous, individualized, and progressive rehabilitation therapy schemes, the sensitivity to treatment can be increased and ADLs can still be improved. The inability or failure to implement rehabilitation early in many areas of China is one of the reasons for the nationwide high morbidity and mortality rates in patients with stroke; this has led to a huge socioeconomic burden [24]. Based on our study, although patients with late initiation of rehabilitation still benefited from adequate rehabilitation, those who had not received previous rehabilitation therapy had a lower BI score at admission. Those with lower BI scores required more types of rehabilitation during hospitalization, which resulted in an increased cost of rehabilitation. Furthermore, the shorter the stroke course, the shorter the time required for rehabilitation. Therefore, early initiation and adequate rehabilitation after stroke may be an important step for reducing the socioeconomic burdens on patients with stroke. However, early initiation of rehabilitation is limited by several barriers [25]. Tam et al. [26] used a rapid access outpatient stroke rehabilitation program for providing rehabilitation. This approach could alleviate problems such as rehabilitation ward unavailability and the inability to treat patients in the hospital for a long period. It could also improve patient compliance with rehabilitation and help both the doctors and patients choose the timing of rehabilitation initiation and treatment delivery in a flexible manner. The limitations of this study were as follows. First, this was a retrospective single-center study with a small sample size, and selection bias may have occurred. In the future, we will perform prospective studies and expand the sample sizes to obtain more accurate results. Second, we did not conduct a long-term follow-up on the functional outcomes of the patients after discharge, because the description of the outcomes of patients with stroke generally includes the functional status, length of stay, and destination after discharge [27].
The length of stay is largely affected by the patient's economic status, medical insurance system, and other factors. In China, the postdischarge destination of patients with stroke is typically their homes, rather than nursing homes and care institutions for older patients. Thus, the length of hospital stay tends to be prolonged, and the postdischarge destination has limited value as an outcome evaluation index. The functional status of the patients at discharge can avoid the influence of social factors; therefore, it is the most reliable predictor of patient outcomes after stroke [22]. Conclusions Early initiation of rehabilitation and sustained rehabilitation therapy play a key role in improving ADLs. Therefore, providing continuous and sustained rehabilitation therapy to patients with stroke, as early as possible, will help improve the efficiency of the rehabilitation therapy. This can significantly improve the quality of life of these patients. Data Availability The original data used to support the findings of this study are available from the corresponding author upon request.
Analysis of Issues and Future Trends Impacting Drug Safety in South Korea New drug safety issues are emerging that are beyond the existing medication safety management system. To pre-empt these problems, forecasting future drug safety trends and issues is a necessity. The objective of this study was to identify issues and future trends impacting drug safety using foresight methodologies. The study started by identifying global megatrends, trends in safety management of medicines, and key issues in drug safety. A total of 25 global megatrends were selected by extracting and clustering keywords from 26 reports concerning the future. Using the text-mining method, 10 trends in drug safety were identified from 3593 news articles. This study derived 60 issues which can arise from the trends, and finally, the 20 key issues with the highest urgency and impact scores were selected. Some examples of issues with high scores were as follows: illegal distribution of medicines, lack of technology for managing and utilizing big data, change in the pharmaceutical trade environment, lack of education and safety management for specific populations, lack of artificial intelligence-based technology for the safety management of medicines, and the prevalence of drug advertisements through social network services. The key issues could be used to establish plans for medication safety management. Introduction Because a medication safety accident can cause harm to patients or death in severe cases, it is a threat to patient safety. The recent recall of antihypertension medicines due to carcinogenic substances contained in the China-sourced raw materials caused great social problems. Moreover, the number of cases in which medication is recalled or suspended in the USA and South Korea is estimated to be around 40 cases per year in both countries [1,2]. Because of the continuous occurrence of medication safety accidents, the public's demand for the more thorough safety management of medications is increasing. Currently, medication safety management is carried out with premarket scientific evaluations, postmarket re-evaluations, and the reporting of adverse drug reactions [3]. The Ministry of Food and Drug Safety (MFDS), like the USA Food and Drug Administration (FDA), regulates medical products in South Korea and conducts research and development (R&D) on a medication safety management. The project includes the advancement of policies and systems, scientific reviews and assessments, and guidelines for the safe use of medical products [4]. The MFDS publishes a white paper every year highlighting their achievement and describing the implementation plan for drug safety. White papers published in 2016 and 2017 emphasized (1) the introduction and stabilization of good manufacturing practices (GMP) that are in harmony with international standards; (2) the internationalization of medicine approval and an evaluation system; (3) strengthening safety management of approved pharmaceuticals; (4) strengthening the competitiveness of the pharmaceutical industry by stable operation in the patent-regulatory approval linkage system; (5) the establishment of a management system for preventing abuse and misuse of narcotic drugs [5,6]. As the National Assembly has pointed out, issues such as the distribution of medicines through social network services, the theft and loss of 186 opioid drugs, and the lack of management of the nation's essential drugs, show that there are holes in medication safety management [7]. 
Furthermore, as modern society changes rapidly, a new safety issue may arise that is beyond the existing medication safety management system. The recent issue of carcinogenic Chinese raw materials in South Korea medication is the dark side of the globalization of pharmaceutical production and distribution [8]. Moreover, emerging social issues such as the aging society and low birth rates [9], the emergence of a new technological paradigm in medication quality management [10], and the emergence of new medicines beyond the conventional concept of medicine could have a negative impact on medication safety management [11]. With the rapid development of technology and the interrelation of technology and society, the future of society is likely to become more complex and uncertain, and the need for proactive preparation to address potential threats to future drug safety is essential. In light of these situations, the Act on the Promotion of Technology for Ensuring the Safety of Food, Drugs, etc. was enacted in 2015 and the Act states that a plan for the promotion of safety technologies should be established every five years [12]. Thus, the MFDS conducted a planning study to find R&D tasks for future medication safety management. This paper is part of the planning study. The purpose of this study was to analyze the issues and future trends impacting drug safety using scientific foresight methodologies to timely respond to the rapidly changing global environments. Materials and Methods The definition of terms used in this study are as follows [13,14]: (1) global megatrends: a set of changes in society, technology, economy, environment, and political conditions which effects are not restricted to a particular geographic area; (2) trends in drug safety: a pattern of gradual change in the area of drug safety toward the future; (3) issues: problems or concerns that are expected to arise in the future; (4) key issues: issues that have great potential to affect our society. Global megatrends, trends in drug safety, and key issues in drug safety were identified using a method referring to foresight methodologies used by Ministry of Science, Information and Communication Technology (ICT) and Future Planning, and Korea Institute of Science and Technology Evaluation and Planning (MSIP and KISTEP; Figure 1) [13]. We modified the method to suit the scope of drug safety. The scope of the drug safety considers the whole life cycle of drugs, which runs from premarket to postmarket, and it covers the advancement of policies, scientific reviews and assessments, quality evaluation of medical products, and the safe use of medical products according to the MFDS notice [15]. Biologics and herbal medicines were beyond the scope of the study. Global megatrends were derived using the environmental scanning methodology [16]. First, twenty-six reports concerned with the future published since 2010 were selected as sources of data and are listed in Table S1 . All trend keywords were extracted from the sources and then were categorized into STEEP (Social, Technological, Economic, Environmental, and Political). The classified keywords were clustered in several groups based on similarity and were named as global megatrends. In order to derive trends in drug safety, text-mining was conducted. Unlike qualitative conventional methods such as Delphi, expert panels, and scenarios, which rely on opinions from experts, text-mining can forecast the future in objective and quantitative ways [43,44]. 
In addition, text-mining can save money and time when deriving trends as compared to costly and time-consuming literature reviews and experts' advice [43,44]. A database of news websites about medicines in South Korea (http://www.yakup.com) was used. Other sources for text-mining were not used to limit overestimating problems due to overlapping articles on the same topic. As search terms, the terms corresponding to the global megatrends and the MFDS's drug safety categories were used. A web crawler, using Python version 3.4 (Python Software Foundation, Delaware, USA), automatically collected and stored the body of the articles, which was the <div class = "bodyarea"> part of the pages published from 1 January 2014 to 28 February 2017. Afterward, the words corresponding to nouns were extracted from the article text through stem extraction and morpheme analysis using the R-program's KoNLP (Heewon Jeon, South Korea) package. The simultaneous occurrence probability among the nouns was calculated using latent Dirichlet allocation (LDA). The words that appear together are grouped into a topic. The trends in drug safety were selected to represent the nouns included in each topic. Since the search terms used for text-mining were determined by the megatrends, we could pair megatrends with the trends that resulted from the text-mining.
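A minimal sketch of this topic-modeling step using scikit-learn in place of the R KoNLP pipeline described above; the toy corpus, number of topics, and tokenizer are illustrative only (the study worked with Korean-language noun extraction over 3593 articles).

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in for the crawled article bodies.
articles = [
    "artificial intelligence big data drug safety monitoring",
    "illegal medicines online distribution foreign trade",
    "elderly pregnant patients adverse drug reaction education",
    "precision medicine genomic data regulatory science",
]

# Term-frequency matrix over tokens (KoNLP noun extraction is replaced by a plain tokenizer here).
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(articles)

# Group co-occurring words into topics with LDA; each topic is read off as a candidate trend.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```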
Issues that may arise within five years concerning the medication safety trends were derived from a literature review and brainstorming. News articles, research papers, and regulatory agency reports from the USA, European Union, and Japan about trends in drug safety were reviewed. After that, both individual and group brainstorming sessions were conducted. Ten participants with one to more than ten years of experience in medicine performed brainstorming individually, and three of them participated in group brainstorming. Created issues were categorized. The issues were evaluated by thirteen experts from industries, universities, and research institutes, who assessed the urgency and impact using a 7-point Likert scale. Experts were evenly selected from industry, academia, and research institutes and had more than 20 years of experience and competence in planning research, with sufficient insight into medication safety management. Urgency was defined as how quickly an issue will become a problem or will need to be resolved. Impact was defined as how much the problem threatened people's health or how much risk could be prevented when the problem was solved. Issues with a minimum average score of 4 (neutral) were chosen as key issues in drug safety, and issues with which experts disagreed (an average score of less than 4) were excluded. Global Megatrends and Trends in Drug Safety A total of 517 trend keywords were extracted from 26 sources, and 25 global megatrends were derived by clustering the keywords. The 25 global megatrends are listed in Table S2. By STEEP, there were seven social trends, seven technological trends, two environmental trends, five economic trends, and four political trends. The existing future reports did not describe trends for the field of drug safety, so trends in drug safety were derived using a website where medication-related news articles are posted. A total of 3593 articles related to both global megatrends and drug safety were extracted. Words extracted through text-mining were grouped into ten topics according to their probability of appearing simultaneously. Ten trends in drug safety were selected and are shown in Table 1.
Table 1. Ten trends in safety management of medicines.
1. Application of the 4th Industrial Revolution and artificial intelligence (AI) in the field of medicines
2. Drug safety for the aged, pregnant women, and multicultural families
3. The international harmony of regulatory science
4. Introduction of illegal medicines due to increased foreign trade
5. Introduction of precision medicines
6. Preparing for terrorism and disaster
7. Communication of medication safety information with the public
8. Encourage generic drug use
9. Novel variables for medication efficacy and safety assessment
10. Drug safety in preparation for the unification of Korea
Key Issues in Drug Safety We obtained 60 issues in the drug safety area through literature review or brainstorming. The issues derived were eventually grouped into 24 issues. The twenty-four issues and their urgency and impact scores are presented in Table 2. The average score was 4.47.
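As a toy sketch of the selection step just described (expert ratings of urgency and impact on a 7-point scale, with issues kept when their average reaches 4), using invented issue names and scores and assuming the overall score is the mean of the two ratings:

```python
import pandas as pd

# Hypothetical ratings: rows are issues, columns are mean urgency/impact over the 13 experts.
ratings = pd.DataFrame(
    {"urgency": [6.1, 3.2, 5.0], "impact": [5.0, 3.5, 4.4]},
    index=["Illegal distribution of medicines",
           "Example issue below threshold",
           "Lack of big-data management technology"],
)

# Overall score as the mean of urgency and impact; issues averaging >= 4 are kept as key issues.
ratings["score"] = ratings[["urgency", "impact"]].mean(axis=1)
key_issues = ratings[ratings["score"] >= 4].sort_values("score", ascending=False)
print(key_issues)
```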
Twenty key issues with an average score of more than four were identified. 'Illegal distribution of medicines' got the highest score of 5.52 points followed by 'Lack of technology for managing and utilizing big data', 'Change in the pharmaceutical trade environment', 'Lack of education and safety management for specific populations (pediatrics, elderly, etc.)', 'Lack of artificial intelligence-based technology for safety management of medicines', and 'Prevalence of drug advertisements through social network services'. Discussion We derived 10 trends and 20 key issues in drug safety. Foresight methodologies, which were environmental scanning and text-mining, were used in the research process. It is the first time in South Korea that text-mining has been used to forecast the future of drug safety. Planning research for medication safety management has been conducted previously. Science and Technology Policy Institute (STEPI) reports from 2001 documented safety management trends, such as the rapid development of the biological industry and the government's will to foster it, increased investment by large companies and venture startups, the emergence of new biotechnology products, and the increase of new harmful and toxic substances [45]. It proposed implementing R&D projects for the safety assessment of technologies, establishing an international level of research infrastructure, strengthening the organization's corporate support activities, introducing advanced systems for improving the service of businesses and the public, and operating a national toxic substances management program [45]. In 2010, researchers in South Korea forecasted the aging population and subsequent changes in disease structure, increasing medical costs, the pursuit of quality of life, the need for information, the opening of the market through free trade agreements, globalization of technological development, and the development of science and technology [46]. Furthermore, they suggested mid-long term strategies for the advancement of the drug safety system and the competitiveness of the pharmaceutical industry in South Korea based on the forecasting [46]. Some of their forecasts were realized in the Precision Medicine Initiative and the Sentinel Initiative, which are examples of active drug monitoring using big data in the United States [47]. On the other hand, the introduction of artificial intelligence, IBM's Watson, USA, into medical care and expectations for the unification of the Korean peninsula following the election of a new president in South Korea will present new impacts on the safety of medicines. Therefore, future foresight research in drug safety is necessary periodically to respond to changing circumstances and to make plans. The social, technological, economic, environmental, and political megatrends, such as the aging of society, advanced technologies, disasters, major countries' low growth, and the unification of Korea affect drug safety. Unification between South and North Korea is a unique situation. North Korea has a significant gap in medicine safety management as compared to more advanced South Korea, and the terminology in drug management and disease prevalence are also different. Thus, efforts and studies are needed to narrow this gap [48]. Some trends were derived from only one megatrend search term, while others were derived from more than one megatrend. 
For example, 'Drug safety for the aged, pregnant women and multicultural families' trend is associated with megatrends of 'Structural and functional changes of social members' and 'Globalization'. However global megatrends which were related with advanced technology such as 'Cognitive science' and 'Space engineering' were not relevant to trends in drug safety because there were no intersections between these technologies and medical products. In this study, trends were derived by text-mining, and it has been reported that there is no significant difference between the result from qualitative methods by experts and that of text-mining [43]. Issues related to the recent development of information technology were particularly high in urgency and impact. This reflects the high interest in managing drug safety by applying new big data and artificial intelligence technologies. Advertising or distributing medicines over the Internet is also a new problem. The key issues in drug safety can be used to determine the direction of policy proposals or the direction of R&D by MFDS to proactively address the drug safety issues forecasted by future environmental changes [49,50]. For example, key issues such as 'Illegal distribution of medicines' and 'Change in the pharmaceutical trade environment' are issues that need policy solutions. On the other hand, key issues such as 'Lack of technology for managing and utilizing big data' and the 'Lack of artificial intelligence-based technology for safety management of medicines' are issues that require R&D to address the drug safety issues. This research has derived trends and issues in drug safety using scientific foresight methodologies but has several limitations. First, because trends are chosen based on words that have emerged with a high frequency in text-mining, it is likely that emerging trends, which are currently not discussed in articles, but maybe major in the future could be omitted. Second, some issues, such as unification, are unique to Korea and thus difficult to generalize when considering other countries. However, most of the issues are relevant to global society, so are worth mentioning. Although accurate forecasting of the future of drug safety is not possible, research using foresight methodologies can proactively develop response strategies by presenting the possible future of drug safety. R&D projects to address the issues presented in this study will have to be planned in the future. For that process, it will be helpful to perform a scenario analysis. The scenario would provide a direction for the allocation of resources for problem resolution and enable the efficient use of limited resources. Continuous and periodic identification of trends and issues in drug safety will be required to determine policies proposed by government agencies for drug safety management and the direction of R&D to be carried out. Conclusions Twenty key issues related to drug safety were derived using scientific foresight methodologies. 'Illegal distribution of medicines' and 'Lack of technology for managing and utilizing big data' were identified as the most important key issues. The key issues could be used to establish plans for medication safety management. To address the issues, MFDS will need to plan its R&D strategy. This will require further research such as SWOT (Strength, Weakness, Opportunity, and Threat) analysis and scenario analysis.
Geodesic Distance Estimation with Spherelets Many statistical and machine learning approaches rely on pairwise distances between data points. The choice of distance metric has a fundamental impact on performance of these procedures, raising questions about how to appropriately calculate distances. When data points are real-valued vectors, by far the most common choice is the Euclidean distance. This article is focused on the problem of how to better calculate distances taking into account the intrinsic geometry of the data, assuming data are concentrated near an unknown subspace or manifold. The appropriate geometric distance corresponds to the length of the shortest path along the manifold, which is the geodesic distance. When the manifold is unknown, it is challenging to accurately approximate the geodesic distance. Current algorithms are either highly complex, and hence often impractical to implement, or based on simple local linear approximations and shortest path algorithms that may have inadequate accuracy. We propose a simple and general alternative, which uses pieces of spheres, or spherelets, to locally approximate the unknown subspace and thereby estimate the geodesic distance through paths over spheres. Theory is developed showing lower error for many manifolds, with applications in clustering, conditional density estimation and mean regression. The conclusion is supported through multiple simulation examples and real data sets. Introduction Distance metrics provide a key building block of a vast array of statistical procedures, ranging from clustering to dimensionality reduction and data visualization. Indeed, one of the most common representations of a data set {x i } n i=1 , for x i ∈ X ⊂ R D , is via a matrix of pairwise distances between each of the data points. The key question that this article focuses on is how to represent distances between data points x and y in a manner that takes into account the intrinsic geometric structure of the data. Although the standard choice in practice is the Euclidean distance, this choice implicitly assumes that the data do not have any interesting nonlinear geometric structure in their support. In the presence of such structure, Euclidean distances can provide a highly misleading representation of how far away different data points are. This issue is represented in Figure 1, which shows toy data sampled from a density concentrated close to an Euler spiral. It is clear that many pairs of points that are close in Euclidean distance are actually far away from each other if one needs to travel between the points along a path that does not cross empty regions across which there is no data but instead follows the 'flow' of the data. As a convenient, if sometimes overly-simplistic, mathematical representation to provide a framework to address this problem, it is common to suppose that the support X = M, with M corresponding to a d-dimensional Riemannian manifold. For the data in Figure 1, the manifold M corresponds to the d = 1 dimensional curve shown with a solid line; although the data do not fall exactly on M, we will treat such deviations as measurement errors that can be adjusted for statistically in calculating distances. The shortest path between two points x and y that both lie on a manifold M is known as the geodesic, with the length of this path corresponding to the geodesic distance. 
If x and y are very close to each other, then the Euclidean distance provides an accurate approximation to the geodesic distance but otherwise, unless the manifold has very low curvature and is close to flat globally, Euclidean and geodesic distances can be dramatically different. The accuracy of Euclidean distance in small regions has been exploited to develop algorithms for approximating geodesic distances via graph distances. Such approaches define a weighted graph in which edges connect neighbors and weights correspond to the Euclidean distance. The estimated geodesic distance is the length of the shortest path on this graph; for details, see Tenenbaum et al. (2000) and Silva and Tenenbaum (2003). There is a rich literature considering different constructions and algorithms for calculating the graph distance including Meng et al. (2008), Meng et al. (2007) and Yang (2004). In using the Euclidean distance within local neighborhoods, one needs to keep neighborhoods small to control the global approximation error. This creates problems when the sample size n is not sufficiently large and when the density ρ of the data points is not uniform over M but instead is larger in certain regions than others. A good strategy for more accurate geodesic distance estimation is to improve the local Euclidean approximation while continuing to rely on graph distance algorithms. A better local approximation leads to better global approximation error. This was the focus of a recent local geodesic distance estimator proposed in Wu et al. (2018) and Malik et al. (2019). Their covariance-corrected estimator adds an adjustment term to the Euclidean distance, which depends on the projection to the normal space. This provides a type of local adjustment for curvature, and they provide theory on approximation accuracy. However, their approach often has poor empirical performance in our experience, potentially due to statistical inaccuracy in calculating the adjustment and to lack of robustness to measurement errors. We propose an alternative local distance estimator, which has the advantage of providing a simple and transparent modification of Euclidean distance to incorporate curvature. This is accomplished by approximating the manifold in a local neighborhood using a sphere, an idea proposed in Li et al. (2018) but for manifold learning and not geodesic distance estimation. Geodesic distance estimation involves a substantially different goal, and distinct algorithms and theory need to be developed. The sphere has the almost unique features of both accounting for non-zero curvature and having the geodesic distance between any two points in a simple closed form; even for simple manifolds the geodesic is typically intractable. We provide a transparent and computationally efficient algorithm, provide theory justifying accuracy and show excellent performance in a variety of applications including clustering, conditional density estimation and mean regression on multiple real data sets. Methodology Throughout this paper, we assume M is a smooth compact Riemannian manifold with Riemannian metric g. Letting γ(s) be a curve on M in arc length parameter s, the geodesic distance d M is defined by d M (x, y) = inf{L(γ) : γ(0) = x, γ(S) = y}, where L(γ) := ∫_0^S g{γ'(s), γ'(s)}^{1/2} ds is the length of the curve γ. Given points X = {x 1 , · · · , x n } on the manifold, the goal is to estimate the pairwise distance matrix GD ∈ R n×n where GD ij = d M (x i , x j ).
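To make the distinction concrete, the following small numerical sketch (not from the paper) compares the Euclidean distance with an arc-length approximation of d M for the endpoints of a half circle, where the geodesic distance is known to be π:

```python
import numpy as np

# Unit half circle parametrized by arc length s in [0, pi].
s = np.linspace(0, np.pi, 2001)
curve = np.column_stack([np.cos(s), np.sin(s)])

x, y = curve[0], curve[-1]          # endpoints (1, 0) and (-1, 0)
euclidean = np.linalg.norm(x - y)   # 2.0

# Approximate the geodesic distance by summing lengths of consecutive segments along the curve.
seg = np.diff(curve, axis=0)
geodesic = np.linalg.norm(seg, axis=1).sum()   # close to pi

print(f"Euclidean: {euclidean:.4f}, geodesic (arc length): {geodesic:.4f}")
```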
First we propose a local estimator, that is, to estimate GD ij where x i and x j are close to each other, and then follow the local-to-global philosophy to obtain a global estimator, for arbitrary x i and x j . Local Estimation In this subsection, we focus on geodesic distance estimation between neighbors. The simplest estimator is the Euclidean distance d E (x i , x j ) = ‖x i − x j ‖. However, the estimation error of the Euclidean distance depends on the curvature linearly. As a result, a nonlinear estimator incorporating curvature needs to be developed to achieve a smaller estimation error for curved manifolds. We propose a nonlinear estimator using spherical distance, which is motivated by the fact that osculating circles/spheres approximate the manifold better than tangent lines/spaces. On the osculating sphere, the geodesic distance admits an analytic form, which we use to calculate local geodesic distances. Let S x i (V, c, r) be a d dimensional sphere centered at c with radius r in the d + 1 dimensional affine space x i + V , approximating M in a local neighborhood of x i . Letting π be the orthogonal projection from the manifold to the sphere, the spherical distance is defined as the geodesic distance between π(x i ) and π(x j ) on the sphere S x i (V, c, r), that is, d S (x i , x j ) = r arccos{⟨π(x i ) − c, π(x j ) − c⟩/r²} (1). The spherical distance depends on the choice of sphere S x i (V, c, r), which will be discussed in section 2.3. Global Estimation We now consider global estimation of the geodesic distance d M (x i , x j ) for any x i , x j . The popular Isomap algorithm was proposed in Tenenbaum et al. (2000) for dimension reduction for manifolds isometrically embedded in higher dimensional Euclidean space. Isomap relies on estimating the geodesic distance using the graph distance based on a local Euclidean estimator. Let G be the graph with vertices x i . For any two points x i and x j that are close to each other, Isomap estimates d M (x i , x j ) using ‖x i − x j ‖. This leads to the following global estimator of d M (x i , x j ), for any two points x i , x j ∈ X: d EG (x i , x j ) = min_P Σ_{l=0}^{p−1} ‖x_{i_l} − x_{i_{l+1}}‖ (2), where P = (x_{i_0}, . . . , x_{i_p}) varies over all paths along G having x_{i_0} = x i and x_{i_p} = x j . In particular, the global distance is defined by the length of the shortest path on the graph, where the length of each edge is given by the Euclidean distance. In practice, local neighbors are determined by a k-nearest neighbors algorithm, with the implementation algorithm given in Section 2.4. The estimator in expression (2) has been successfully implemented in many different contexts. However, the use of a local Euclidean estimator ‖x_{i_l} − x_{i_{l+1}}‖ is a limitation, and one can potentially improve the accuracy of the estimator by using a local approximation that can capture curvature, such as d S (x i , x j ) in (1). This leads to the following alternative estimator: d SG (x i , x j ) = min_P Σ_{l=0}^{p−1} d S (x_{i_l}, x_{i_{l+1}}) (3), where P is as defined for (2) and an identical graph paths algorithm can be implemented as for Isomap, but with spherical distance used in place of Euclidean distance in the local component. Osculating Sphere In order to calculate the local spherical distances necessary for computing (3), we first need to estimate 'optimal' approximating spheres within each local neighborhood, characterized by the k nearest neighbors of x i , denoted by X_i^{[k]}. The local sample covariance matrix is defined as Σ k (x i ) = k^{−1} Σ_{x_j ∈ X_i^{[k]}} (x_j − x̄_i)(x_j − x̄_i)^⊤, where x̄_i is the mean of the points in X_i^{[k]}. The eigen-space spanned by the first d + 1 eigenvectors of Σ k (x i ) is the best estimator of the d + 1 dimensional subspace V . Here we are ordering the eigenvectors by the corresponding eigenvalues in decreasing order. Observe that the target sphere S x i (V * , c * , r * ) passes through x i so we have r * = ‖c * − x i ‖.
Hence, the only parameter to be determined is c * and then r * = c * − x i . To estimate c * , we propose a centered k-osculating sphere algorithm. Suppose x j ∈ S x i (V * , c * , r * ), then the projection of x j to x i + V * , denoted by y j = x i + V * V * (x j − x i ), is among the zeros of the function y − c * 2 − r * 2 where r = c * − x i = c * − y i . We use this to define a loss function for estimating c in Definition 1; related 'algebraic' loss functions were considered in Coope (1993) and Li et al. (2018). Definition 1. Under the above assumptions and notations, let c * be the minimizer of the following optimization problem: Letting r * = x i − c * , the sphere S x i (V * , c * , r * ) is called the centered k-osculating sphere of X at x i . We can tell from the definition that the centered sphere is a nonlinear analogue of centered principal component analysis to estimate the tangent space. There is one additional constraint for the centered k-osculating sphere: the sphere passes through x i . This constraint is motivated by the proof of Theorem 4, see the supplementary materials. Observe that the optimization problem is convex with respect to c and we can derive a simple analytic solution, presented in the following theorem. Theorem 1. The minimizer of the optimization problem (4) is given by: Algorithms In this subsection, we present algorithms to calculate the spherical distance. Before considering algorithms for distance estimation, we present the algorithm for the centered k-osculating sphere, shown in Algorithm 1. In real applications where the data are noisy, we recommend replacing the centered kosculating sphere by an uncentered version because in this case the base point x may not be on the manifold so shifting toward x can negatively impact the performance. In addition, the constraint r = x i − c restricts the degrees of freedom when choosing the optimal r. The only difference is that instead of centering at the base point x and forcing r = x i − c , we instead shift x i to the meanx = 1 n n i=1 x i and average x j − c , as shown in Algorithm 2. From Algorithm 3 we obtain the local pairwise distance matrix SD, where SD ij denotes the distance between x i and x j . However, if x i and x j are not neighbors of each other, the distance will be infinity, or equivalently speaking there is no edge connecting x i and x j in graph G. Then we need to convert the local distance to global distances by the graph distance proposed in Section 2.2. There are multiple algorithms for shortest path search on graphs including the Floyd-Warshall algorithm (Floyd (1962) and Warshall (1962)) and Dijkstra's algorithm (Dijkstra (1959)); here we adopt the Dijkstra's algorithm, which is easier to implement. Algorithm 4 shows how to obtain the graph spherical distance from local spherical distance. Algorithm 4: Graph Spherical Distance input : Local pairwise distance matrix SD ∈ R n×n . output: Graph pairwise distance matrix SD. We note that in the local estimation, the computational complexity for where k is assumed to be much smaller than n. To compare with, the computational complexity of d E (x i , x j ) is O(D). Hence, in general, we are not introducing more computation cost by replacing the local Euclidean distance by the local spherical distance unless d is not very small relative to D. 
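The following is a minimal two-dimensional sketch of this local step: fit a circle to a set of neighbors by an algebraic least-squares criterion, project two points radially onto it, and take the arc length between the projections. It is a simplified stand-in for Algorithms 1-3 (no tangent-space projection and an uncentered, Kasa-style fit), using made-up data.

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit: x^2 + y^2 = 2 a x + 2 b y + c."""
    A = np.column_stack([2 * points, np.ones(len(points))])
    b = (points ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    center = np.array([cx, cy])
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    return center, radius

def spherical_distance(p, q, center, radius):
    """Arc length between the radial projections of p and q onto the fitted circle."""
    u, v = p - center, q - center
    cos_angle = np.clip(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)), -1.0, 1.0)
    return radius * np.arccos(cos_angle)

# Noisy neighbors lying near an arc of the unit circle.
rng = np.random.default_rng(1)
theta = np.linspace(0.0, 0.5, 15)
nbrs = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.005 * rng.normal(size=(15, 2))

center, radius = fit_circle(nbrs)
d_s = spherical_distance(nbrs[0], nbrs[-1], center, radius)
d_e = np.linalg.norm(nbrs[0] - nbrs[-1])
print(f"spherical: {d_s:.4f}, Euclidean: {d_e:.4f}, true arc: 0.5")
```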
Once the graph is determined, the computational complexity of Dijkstra's algorithm is O(n 2 ), where n is the sample size, and this complexity does not depend on which local distance is applied to obtain the weights on the graph G. Hence, the total computational complexity for the graph Euclidean distance estimator is O nkD + n 2 while the complexity for the graph spherical distance estimator is O n min{k, D} 3 + n 2 . Error Analysis In this section, we analyze why the spherical distance is a better estimator than the Euclidean distance from a theoretical perspective following a local-to-global philosophy. Local Error First we study the local error, that is, is the geodesic ball on M centered at x with radiusr. It is well known that the error of the Euclidean estimator is third order, as formalized in the following proposition. where γ varies among all geodesics on M in arc length parameter. In terms of the error rate, These bounds are tight and the proof can be found in Smolyanov et al. (2007). The Euclidean distance is a simple estimator of the geodesic distance, and the error is s 3 While this may seem to be a good result, if the manifold has high curvature, so that r 0 is very small, performance is not satisfactory. This is implied by the r −2 0 multiple on the error rate, and is also clearly apparent in experiments shown later in the paper. Now we consider the error of the spherical distance proposed in section 2.1. For simplicity, we first consider the case in which M = γ is a curve in R 2 with domain [0, S]. Without loss of generality, fix x = γ(0) but vary y = γ(s). Let n be the unit normal vector of γ at x, that is γ (0) = κn. Let r = 1 |κ| and c = x − 1 κ n, which determine a circle C x (c, r) centered at c with radius r. This circle C x (c, r) is called the osculating circle of the curve γ, which is the "best" circle approximation to the curve. Letting π : γ → C x (c, r) be the projection to the osculating circle, the error in d S (x, y) as an estimator of d M (x, y) is shown in the following theorem. Theorem 2. Let x = γ(0) and y = γ(s), so d M (x, y) = s, then Comparing to the error of Euclidean estimation in Proposition 1, the spherical estimate improves the error rate from O(s 3 ) to O(s 4 ). The above result is for curves, and as a second special case we suppose that M d ⊂ R d+1 is a d dimensional hyper-surface. Similar to the curve case, the spherical distance can be defined on any sphere S x (c, r) passing through x with center c and radius r where c = x− 1 κ n and n is the normal vector of the tangent space T x M , r = 1 |κ| . However, for geodesics along different directions, denoted by where the maximum and minimum can be achieved due to the compactness of U T x M . Fix any κ 0 (x) ∈ [κ 1 (x), κ 2 (x)], let S x (c, r) be the corresponding sphere, and π : M → S x (c, r) be the projection. The estimation error is given by the following theorem. , then the estimation error of spherical distance is given by In the worst case, the error has the same order as that for the Euclidean distance. However, there are multiple cases where the error is much smaller than the Euclidean one, shown in the following corollary. Corollary 1. Under the same conditions in Theorem 3, which is a neighborhood of the geodesic exp x (tv 0 ), spherical estimation outperforms the Euclidean estimation. The closer to the central geodesic, the better the estimation performance. 
For a point x where κ v (x) is not changing rapidly along different directions, the spherical estimation works well in the geodesic ball Br(x). Finally we consider the most general case: M is a d dimensional manifold embedded in R D for any D > d. Let S x (c, r) be a d dimensional sphere whose tangent space is also T x M . Letting π be the projection to the sphere, the estimation error is given by the following theorem. Theorem 4. Fix x ∈ M , for y = exp x (sv) such that d M (x, y) = s, then the estimation error of spherical distance is given by Combining Theorem 2-4, we conclude that spherical estimation is at least the same as Euclidean estimation in terms of the error rate, and in many cases, the spherical estimation outperforms the Euclidean one. Global Error In this section we analyze the estimation error: |d SG (x, y)−d M (x, y)| for any x, y ∈ M . The idea is to pass the local error bound to the global error bound. We use the same notation introduced in Section 2.2. Theorem 5. Assume M is a compact, geodesically convex submanifold embedded in R D and {x i } n i=1 ⊂ M is a set of points, which are vertices of graph G. Introduce constants min > 0, max > 0, 0 < δ < min /4 and let C be the constant such that |d S ( M (x, y)} according to Theorem 4. Suppose 1. G contains all edges xy with x − y ≤ min . 2. All edges xy in G have length x − y ≤ max . As the sample size grows to infinity, δ, min , max → 0 and we can carefully choose the size of the neighborhood so that δ/ min → 0. As a result, λ 1 , λ 2 → 0 so d S (x, y) → d M (x, y) uniformly. Euler Spiral We test the theoretical results on generated data from manifolds in which the geodesic distance is known so that we can calculate the error. The first example we consider is the Euler spiral, a curve in R 2 . The Cartesian coordinates are given by Fresnel integrals: The main feature of the Euler spiral is that the curvature grows linearly, that is, κ(s) = s. We generate 500 points uniformly on [0, 2]. Then we fix x = γ(1.6) and chooser = 0.04, so there are 20 points falling inside the geodesic ball Br(x), denoted by y 1 , · · · , y 20 . Then we can calculate the Euclidean y i − x and the spherical distance d S (x, y i ). The covariance-corrected geodesic distance estimator (Malik et al. (2019)) can be viewed as the state-of-the-art. We compare the spherical distance with both Euclidean distance and the covariance-corrected distance. Figure 2a is the spiral and Figure 2b contains the error plot for the three algorithms. To visualize the rate, we also present the log − log plot in Figure 2c. The results match our theory and the spherical estimator has the smallest error among these three algorithms. Then we consider the global error. By the definition of arc length parameter, the pairwise geodesic distance matrix is given by GD ij = |s i −s j |. Denote the Euclidean pairwise distance matrix by D, the graph distance based on Euclidean distance, covariance-corrected distance and spherical distance by EG, CG and SG, respectively. As the most natural measurement of the global error, we calculate and compare the following norms: Table 1 shows the global error when the total sample size is 500 and k is chosen to be 3. Furthermore, we vary the curvature from [0, 1] to [3,4] to assess the influence of curvature on these estimators. The global Euclidean distance is by far the worst and the graph spherical distance is the best in all cases. Furthermore, as the curvature increases, the spherical error increases the most slowly. 
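A compact sketch in the spirit of this Euler spiral experiment, under simplifying assumptions: the spiral is integrated numerically, the graph uses local Euclidean edge weights rather than spherelet weights, and scipy's Dijkstra routine supplies the shortest paths; it only illustrates how the true distances GD ij = |s i − s j | can be compared with a graph-based estimate.

```python
import numpy as np
from scipy.sparse.csgraph import dijkstra
from sklearn.neighbors import kneighbors_graph

# Euler spiral with curvature kappa(s) = s, integrated numerically on s in [0, 2].
s = np.linspace(0, 2, 500)
ds = s[1] - s[0]
xy = np.column_stack([np.cumsum(np.cos(s ** 2 / 2)) * ds,
                      np.cumsum(np.sin(s ** 2 / 2)) * ds])

# True pairwise geodesic distances follow from the arc-length parametrization.
GD = np.abs(s[:, None] - s[None, :])

# k-nearest-neighbor graph with Euclidean edge weights, then shortest paths (Dijkstra).
W = kneighbors_graph(xy, n_neighbors=3, mode="distance")
EG = dijkstra(W, directed=False)

print("max |EG - GD| :", np.abs(EG - GD).max())
print("max |Euclidean - GD| :",
      np.abs(np.linalg.norm(xy[:, None] - xy[None, :], axis=-1) - GD).max())
```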
This matches the theoretical analysis, since the spherical estimator takes the curvature into consideration. In real applications, almost all data contain measurement error, so the data may not exactly lie on some manifold, but instead may just concentrate around the manifold with certain noise. The robustness of the algorithm with respect to the noise is a crucial feature. To assess this, we generate samples from the Euler spiral and add Gaussian noise i ∼ N (0, σ 2 Id D ) where σ is the noise level. In this setting the local error is not very meaningful since x i is no longer on the manifold. However, the global error is still informative since the pairwise distance matrix contains much information about the intrinsic geometry of the manifold. Since the ground truth d M (x i , x j ) is not well defined, we firstly apply the Graph Euclidean distance to a large data set, and treat these results as ground truth GD. The reason is that when the sample size is large enough, all the above global estimators converge to the true distance except for D. Then we subsample a smaller dataset and apply these global estimators to obtain EG, CG and SG and compute the error. We test on different subsample sizes to assess the stability of the algorithms and the performance on small data sets. Figure 3 shows that spherical estimation works well on very small data sets, because it efficiently captures the geometry hidden in the data. Torus We also consider the torus, a two dimensional surface with curvature ranging from negative to positive depending on the location. We set the major radius to be R = 5 and the minor radius to be r = 1 so the equation for the torus is {x(θ, ϕ), y(θ, ϕ), z(θ, ϕ)} = {(R + r cos θ) cos ϕ, (R + r cos θ) sin ϕ, r sin θ} . Since the geodesic distance on the torus does not admit an analytic form, we apply the same strategy as in the noisy Euler Spiral case. First we generate a large dataset and apply the Graph Euclidean method to obtain the "truth", then estimate the distance through a subset and finally compute the error. Similarly, we also consider the noisy case by adding Gaussian noise to the torus data. The results are shown in Figure 4, which demonstrates that the performance of the spherical estimation is the best for both clean and noisy data. Applications In this section we consider three applications of geodesic distance estimation: clustering, conditional density estimation and regression. k-Smedoids clustering Among the most popular algorithms for clustering, k-medoids (introduced in Kaufman et al. (1987)) takes the pairwise distance matrix as the input; refer to Algorithm 5 (Park and Jun (2009)). Similar to k-means, k-medoids also aims to minimize the distance between the points in each group and the group centers. Differently from k-means, the centers are chosen from the data points instead of arbitrary points in the ambient space. Algorithm 5: k-medoids input : In most packages, the default pairwise distance matrix is the global Euclidean distance D, which is inaccurate if the support of the data has essential curvature. As a result, we replace D by SG and call the new algorithm k-Smedoids. By estimating GD better, it is reasonable to expect that k-Smedoids has better performance. We present two types of examples: unlabeled (example 1) and labeled data (example 2 and 3). For the unlabeled data, we visualize the clusters to show the performance of different algorithms, for the labeled datasets, we can make use of the labels and do quantitative comparisons. 
Among clustering performance evaluation metrics, we choose the following: Adjusted Rand Index (ARI, Hubert and Arabie (1985)), Mutual Information Based Scores (MIBS, Strehl and Ghosh (2002), Vinh et al. (2009)), HOMogeneity (HOM), COMpleteness (COM), V-Measure (VM, Rosenberg and Hirschberg (2007)) and Fowlkes-Mallows Scores (FMS, Fowlkes and Mallows (1983)). We compare these scores for standard k-medoids, k-Emedoids and our k-Smedoids. These algorithms are based on different pairwise distance matrices while other steps are exactly the same, so the performance will illustrate the gain from the estimation of the geodesic distance. We note that for all above metrics, larger values reflect better clustering performance. Regarding the tuning parameters, depending on the specific problem, d and k can be tuned accordingly. In example 1, the data are visualizable so d = 1 is known and k can be tuned by the clustering performance: whether the two circles are separated. For example 2-3, cross validation can be applied to tune the parameters based on the six scores. In any case with quantitative scores, cross validation can be used to tune the parameters mentioned above. Our recommended default choices of k are uniformly distributed integers between d + 2 and n 2 , proportion to √ n. Estimating the manifold dimension d has been proven to be a very hard problem, both practically and theoretically. There are some existing methods to estimate d, see Granata and Carnevale (2016), Levina and Bickel (2005), Kégl (2003), Camastra and Vinciarelli (2002), Carter et al. (2009), Hein andAudibert (2005), Camastra and Vinciarelli (2001) and Fan et al. (2009), and we can apply these algorithms directly. Example 1: Two ellipses. We randomly generate 100 samples from two concentric ellipses with eccentricity √ 3/2 added by zero mean Gaussian noise. We compare with k-means, standard k-medoids and k-Emedoids. Figure 5 shows the clustering results for the two ellipses data. In this example, we set K = 2, k = 3 and d = 1 since the support is a curve with dimension 1. Since the two groups are disconnected and curved, the Euclidean-based algorithms fail while the spherical algorithm works better than using other geodesic distance estimators. We also consider two real datasets with labels. The Banknote data set is introduced in Lohweg and Doerksen (2012). There are D = 4 features, characterizing the images from genuine and forged banknote-like specimens and the sample size is 1372. The binary label indicates whether the banknote specimen is genuine or forged. Table 2 shows the clustering performance of three algorithms for the Banknote data. We can see that k-Smedoids has the highest score for all 6 metrics. In this example, K = 2 is known, and we set k = 4 and d = 1. The choice of d and k are determined by cross validation. Example 3: Galaxy zoo data. The last example is from the Galaxy Zoo project available at http://zoo1.galaxyzoo.org. The features are the fraction of the vote from experts in each of the six categories, and the labels represent whether the galaxy is spiral or elliptical. We randomly choose 1000 samples from the huge data set. Table 3 shows the clustering performance of three algorithms for the Galaxy zoo data. We can see that k-Smedoids has the highest score for all 6 metrics. In this example K = 2, and the parameters k = 6, d = 1 are determined by cross validation. 
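A bare-bones k-medoids sketch that operates directly on a precomputed pairwise distance matrix, so the Euclidean matrix D can be swapped for a geodesic estimate such as SG; this is a simplified alternating scheme, not the exact Park and Jun (2009) algorithm used in the paper.

```python
import numpy as np

def k_medoids(dist, K, n_iter=100, seed=0):
    """Simple alternating k-medoids on a precomputed n x n distance matrix."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    medoids = rng.choice(n, size=K, replace=False)
    for _ in range(n_iter):
        # Assign each point to its nearest medoid.
        labels = np.argmin(dist[:, medoids], axis=1)
        # Move each medoid to the in-cluster point minimizing total within-cluster distance.
        new_medoids = medoids.copy()
        for k in range(K):
            members = np.where(labels == k)[0]
            if len(members) > 0:
                within = dist[np.ix_(members, members)].sum(axis=1)
                new_medoids[k] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids

# Usage: pass the graph spherical distance matrix SG instead of Euclidean distances, e.g.
# labels, medoids = k_medoids(SG, K=2)
```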
Geodesic conditional density estimation Conditional density estimation aims to estimate f (y|x) based on observations where x i ∈ R D are predictors and y i ∈ R are responses. The most popular conditional density estimator involving pairwise distance is the kernel density estimator (KDE) with Gaussian kernel (Davis et al., 2011): where h 1 and h 2 are bandwidths. This method is motivated by the formula f (y|x) = f (x,y) f (x) , using kernel density estimation in both the numerator and the denominator. As discussed before, if the data have essential curvature, Euclidean distance can't capture the intrinsic structure in the data. Instead, we can improve the performance by replacing the Euclidean distance by geodesic distance. That is, the natural estimator iŝ where d is the (estimated) geodesic distance. The kernel e −d(x i ,x) 2 /h corresponds to the Riemannian Gaussian distribution (Said et al., 2017). In terms of the distance, we have four pairwise distances between training data: global Euclidean distance D, graph Euclidean distance ID, graph covariance corrected distance CD and our proposed graph spherical distance SD. For any given x, d(x, x i ) is obtained by interpolation. First we add x to the graph consists of all training data and connect x with its neighbors. Then we calculate the graph distance between x and x i . This is more efficient than calculating pairwise distances between all samples X train ∪ X test . The algorithm is formulated in Algorithm 6: Algorithm 6: Geodesic conditional kernel density estimation algorithm input : Training data {x i , y i } n i=1 ⊂ R D × R, tuning parameters k, d, h 1 , h 2 , given predictor x. output: Estimated conditional densityf (y|x). 1 Estimate pairwise geodesic distance between x i 's SD by Algorithm 4; 2 Calculate d(x, x i ) for neighbors of x; We can replace the distance estimator in the first step by any other algorithm to obtain the corresponding version of conditional density estimation. To compare the performance, we estimate the conditional density through training data X train , Y train and calculate the sum of log likelihood ntest i=1 log(f (y i |x i )). We randomly permute the data to obtain different training and test sets and provide boxplots for the sum of log likelihoods. Regarding the tuning parameters, k and d have been discussed in Section 5.1. Regarding bandwidths h 1 and h 2 , there is a very rich literature on choosing optimal bandwidths in other contexts. For simplicity we use cross validation to estimate h 1 and h 2 . We consider the following two real data sets. predict the net hourly electrical energy output (EP) of the plant (response). We randomly sample 1000 data points and repeat 100 times to obtain the following boxplots for the sum of log likelihood scores for different distance estimation methods, as shown in Figure 6. There is a clear improvement for our graph spherical approach. From the above two examples we can tell that Euclidean distance is the worst choice because the predictors have non-linear support. Graph Euclidean and covariance corrected distances improve the performance a lot, while graph spherical distance outperforms all competitors. Geodesic kernel mean regression As a related case, we also consider kernel mean regression using a simple modification of the Nadaraya-Watson estimator (Nadaraya, 1964;Watson, 1964): Algorithm 7 provides details: Algorithm 7: Geodesic kernel regression algorithm input : Again we have four options for the distance in the first step, D, ID, CD and our proposed SD. 
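A small sketch of the two kernel estimators just described, written so that the only difference between the Euclidean and geodesic versions is the vector of (estimated) distances d(x, x i ) passed in; the kernel form e^{−d²/h} follows the text, while the data, bandwidths, and normalization details are placeholders.

```python
import numpy as np

def conditional_density(y_grid, y_train, dist_x, h1, h2):
    """Kernel conditional density estimate of f(y | x) from distances d(x, x_i) in dist_x."""
    wx = np.exp(-dist_x ** 2 / h1)                               # predictor-kernel weights
    ky = np.exp(-(y_grid[:, None] - y_train[None, :]) ** 2 / h2)  # response kernel on a grid
    # Weighted mixture of response kernels, normalized so the estimate integrates to 1 in y.
    return (ky * wx[None, :]).sum(axis=1) / (np.sqrt(np.pi * h2) * wx.sum())

def kernel_regression(y_train, dist_x, h):
    """Nadaraya-Watson mean estimate m(x), with Euclidean distances replaced by dist_x."""
    w = np.exp(-dist_x ** 2 / h)
    return (w * y_train).sum() / w.sum()

# dist_x would come from interpolating graph spherical distances between x and the training points.
```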
We measure the performance by calculating the Root Mean Square Error (RMSE) ntest i=1 ( m(x i ) − y i ) 2 /n test , and again use cross validation for bandwidth choice. We consider the same two datasets as in Section 5.2 and show the results in Figure 8 and 9: Again it is clear that spherical distance outperforms the competitors. Discussion The choice of distance between data points plays a critical role in many statistical and machine learning procedures. For continuous data, the Euclidean distance provides by far the most common choice, but we have shown that it can produce badly sub-optimal results in a number of real data examples, as well as for simulated data known to follow a manifold structure up to measurement error. Our proposed approach seems to provide a clear improvement upon the state-of-the-art for geodesic distance estimation between data points lying close to an unknown manifold, and hence may be useful in many different contexts. There are multiple future directions of immediate interest. The first is the question of how the proposed approach performs if the data do not actually have an approximate manifold structure, and whether extensions can be defined that are adaptive to a variety of true intrinsic structures in the data. We have found that our approach is much more robust to measurement errors than competing approaches; in practice, it is almost never reasonable to suppose that data points fall exactly on an unknown Riemannian manifold. With this in mind, we allow measurement errors in our approach, so that the data points can deviate from the manifold. This leads to a great deal of flexibility in practice, and is likely a reason for the good performance we have seen in a variety of real data examples. However, there is a need for careful work on how to deal with measurement errors and define a single class of distance metrics that can default to Euclidean distance as appropriate or include other structure as appropriate, in an entirely data-dependent manner. First we prove Theorem 2, the error bound for curves. Proof of Theorem 2. At the fixed points x = γ(0), let t = γ (0) and n = γ (0) γ (0) , then (−n, t) is an orthonormal basis at x. Then the Taylor expansion γ(s) = γ(0) + γ (0)s + s 2 2 γ (0) + s 3 6 γ (s) + O(s 4 ) can be written in the new coordinates as where v 1 and v 2 are unknown constants subject to the constraint γ (s) = 1. Observe that γ (s) = −κs + 3v 1 s 2 1 + 3v 2 s 2 + O(s 3 ) so γ(s) 2 = 1 + κ 2 s 2 + 6v 2 s 2 + O(s 3 ) = 1 then we conclude . For convenience, we assume κ > 0; the proof is the same when κ ≤ 0. Let θ be the angle between x − c and y − c. Then the spherical distance between π(y) and x is rθ. In section 2.1, we characterize θ by cos(θ) = r arccos x−c r · π(y)−c r , here we characterize θ by tan(θ) for computational simplicity. Let y t be the intersection of the tangent space spanned by γ (0) and the straight line connecting y and c, and let y l be the projection of y onto the tangent space. Then we can focus on the triangle cxy l with x − c//y l − y. Observe that By the definition of y l , we can write y l = 0 s − κ 2 6 s 3 + O(s 4 ) and similarly y − y l = which implies γ (0)//n. By Taylor expansion, we rewrite γ in terms of the new coordinates: Again, by the constraint γ = 1, we have α 2 = − κ 2 y 6 . As before, denote the intersection of y − c and T x M by y t and the angle between x − c and y − c by θ, so By direct calculation, we derive that the coordinates for y t as where s t = 1 1+ Now we can prove Corollary 1. 
Proof of Corollary 1. (2) Again by Equation 10 Assume c = x + rn where n = D−d i=1 β i n i is a unit vector in the normal space T x M ⊥ so D−d i=1 β 2 i = 1. The idea of the proof is to connect s and d S (x, y) by the tangent space. To be more specific, denote the projection onto T x M by P x , and define y l = P x (y) and y s = P x {π(y)} . Then it suffices to show the following three statements: iii y s − x = s + O(s 3 ). Observe that the first statement implies that the Euclidean distance between base point and the projection to the tangent space is an estimator of the geodesic distance with error O(s 3 ). Since T x M is the common tangent space of M and S x (c, r), Statement i implies Statement ii. So it suffices to show Statement i and Statement iii. Before we prove the statements, we need to calculate the coordinates of y l and y s . By the definition of P x , we have Similarly, by the definition of π and linearity of P x , Global estimation error In this section we prove the global error bound stated in Theorem 5. Proof of Theorem 5. Before proving the inequalities, we define the graph geodesic distance where P varies over all paths along G with x 0 = x and x p = y. Clearly we have d M (x, y) ≤ d G (x, y) for any x, y ∈ M and any graph G. The idea is to show d S G(x, y) ≈ d G (x, y) ≈ d M (x, y).
2019-06-29T23:37:02.000Z
2019-06-29T00:00:00.000
{ "year": 2019, "sha1": "5f788ac0a19d6694bb44a1d8a41bb69b432b368f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f428f3e3738bb2d838f1d5777022a165be98db5a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics" ] }
18590931
pes2o/s2orc
v3-fos-license
African Savanna-Forest Boundary Dynamics: A 20-Year Study Recent studies show widespread encroachment of forest into savannas with important consequences for the global carbon cycle and land-atmosphere interactions. However, little research has focused on in situ measurements of the successional sequence of savanna to forest in Africa. Using long-term inventory plots we quantify changes in vegetation structure, above-ground biomass (AGB) and biodiversity of trees ≥10 cm diameter over 20 years for five vegetation types: savanna; colonising forest (F1), monodominant Okoume forest (F2); young Marantaceae forest (F3); and mixed Marantaceae forest (F4) in Lopé National Park, central Gabon, plus novel 3D terrestrial laser scanning (TLS) measurements to assess forest structure differences. Over 20 years no plot changed to a new stage in the putative succession, but F1 forests strongly moved towards the structure, AGB and diversity of F2 forests. Overall, savanna plots showed no detectable change in structure, AGB or diversity using this method, with zero trees ≥10 cm diameter in 1993 and 2013. F1 and F2 forests increased in AGB, mainly as a result of adding recruited stems (F1) and increased Basal Area (F2), whereas F3 and F4 forests did not change substantially in structure, AGB or diversity. Critically, the stability of the F3 stage implies that this stage may be maintained for long periods. Soil carbon was low, and did not show a successional gradient as for AGB and diversity. TLS vertical plant profiles showed distinctive differences amongst the vegetation types, indicating that this technique can improve ecological understanding. We highlight two points: (i) as forest colonises, changes in biodiversity are much slower than changes in forest structure or AGB; and (ii) all forest types store substantial quantities of carbon. Multi-decadal monitoring is likely to be required to assess the speed of transition between vegetation types. Introduction There is growing evidence that woody encroachment into savannas is occurring worldwide [1,2,3]. This has been attracting attention, because if woody encroachment is widespread, it has important consequences for the global carbon cycle and land-atmosphere interactions. For example, a recent modelling study by Poulter et al. [4] highlights that since 1981, a six per cent expansion of vegetation cover over Australia was associated with a fourfold increase in the sensitivity of continental net carbon uptake to precipitation. Deforestation exceeds forest gains across much of the tropics [5], and some have argued that there is a bias towards the detection of deforestation as opposed to woody encroachment or recovery [3]. In Africa, evidence of woody encroachment is scattered but widespread, covering a range of ecosystems and rainfall levels: from West Africa through Central Africa, Ethiopia and South Africa [3,6]. In the Congo basin, the second largest block of contiguous tropical forest after the Amazonian basin, the interface between tropical forest and savanna is a structurally and floristically diverse mosaic of vegetation types, with forest penetrating into the savanna as gallery forests along river banks and as forest patches on plateaus [7]. Within Central Africa, savannas are likely maintained by a combination of precipitation, soil characteristics and anthropogenic disturbance such as fire and clearance for grazing, agriculture and timber [8,9]. 
In the Congo basin, it has been suggested that forest is expanding into savannas because of urban-migration and a consequent reduction in fire frequency [10], or driven by higher atmospheric CO 2 concentration [4,11], similarly to the more positive conditions for tree growth documented in intact forests [12,13]. While the number of studies assessing woody encroachment in Africa has increased in the past few years (see [3] for a review), most focus on detecting forest expansion (tree cover change, tree density or Leaf Area Index, LAI), and not the assessment of the characteristics of these different forests as they undergo forests succession, often because studies use mostly remotely sensed data, thus subtle changes within forests are difficult to assess. Hence, there remains much uncertainty on forest dynamics and succession from savanna to old-growth forest in the tropics [14]. In fact, within the Congo basin, the few studies available on forest dynamics assess recovery after logging [15,16,17], or within intact forests [12] with few studies addressing savanna-forest succession, although studies from central Gabon provide a notable exception [18,19,20,21]. In the coming decades, African forests are predicted to experience profound climatic changes with increased temperature, alteration of rainfall patterns and possibly longer dry seasons [13,22,23]. Thus a better understanding of ecosystem functioning within different forest types is urgently required, as well as predictions of how they may respond to climatic changes [13,24,25]. Furthermore, assessing the carbon stored in different forests and soils is necessary to participate in schemes to reduce emissions from deforestation and degradation in the tropics, (e.g. the UN Framework Convention on Climate Change REDD+ initiative), as well as Nationally Determined Contributions for countries that include decreases in emissions from land-use change [26]. Soils are also an important carbon pool. In some tropical environments, such as in the African miombo woodlands, soil carbon stocks are greater than those aboveground [27]. Sources of soil organic carbon (SOC) include root turnover, leaf litter, and woody debris in forests, or grasses necromass in savanna. Vegetation types with high net primary production (NPP), such as old-growth forests are expected to have high soil carbon [28]. Apart from soil carbon, soil chemical and physical conditions can also constrain the amount of biomass stored in tropical forests, with different forest types often linked to different soil types [9,29,30]. While studies of the differing vegetation types in the putative succession from savanna to forest in central Gabon have been published [18,19], no studies have considered long-term phytodemographic change within different forest types in central Gabon, or to our knowledge Central Africa. Here we use long-term inventory plots to quantify the changes in AGB, vegetation structure and biodiversity of five vegetation types over 20 years. These five types, in probable successional order, run from savanna to colonising forest to monodominant Okoume forest, young Marantaceae forest (still Okoume monodominant overstorey) and mixed Marantaceae forest (mixed species overstorey; [18,19]. The mixed forest is not the end of the succession, mixed forest without abundant Marantaceae is likely to follow mixed Marantaceae forest [18], but this does not occur in the immediate vicinity, so was not sampled. 
After assessing whether soil properties were driving any of the different vegetation communities, we identified two key questions. First, do changes in AGB, vegetation structure and biodiversity across the five vegetation types follow the pattern expected of increasing AGB, followed by a decline in stem density in mature forest following self-thinning, and steadily increasing diversity? Second, after 20 years, has any vegetation type altered enough in structure, AGB or diversity to be reclassified as another vegetation type in the hypothesised successional sequence? Study area The study area was Lopé National Park (LNP) located in central Gabon (0°10'S 11°35' E). Established as a wildlife reserve in 1946, it became a National Park in 2007 covering 4960 km 2 , one of the country's largest protected areas. While most of the park is closed-canopy tropical rainforest, the north of the park is characterised by a savanna-forest mosaic (Fig 1), a remnant of the landscape that dominated much of the Congo basin during the Last Glacial Maximum (LGM) [31]. During the LGM savanna covered the majority of LNP, whereas increased precipitation since the beginning of the Holocene when much of central Africa became wetter, thus causing an expansion of forest to cover nearly the whole area, with forest continuing to expand into the savannas today, combined with waves of human activity and abandonment since the expansion of Bantu farming culture [19,31]. The savanna that currently remains is maintained by a combination of human burning and the rain-shadow of the Massif du Chaillu, which reduces rainfall to 1500 mm yr −1 in the north of the park compared to about 2500 mm yr −1 in the south [32]. A diversity of vegetation types have been described within LNP [18,19]. Here we focus on five distinct major vegetation types that occur close to the forest-savanna boundary: savanna, colonising forest, monodominant Okoume forest, young Marantaceae forest and mixed Marantaceae forest. Although the concept of succession from savanna to old-growth forest has been debated, particularly for African savannas (e.g. [33], within LNP, in absence of disturbance, savannas (S) become colonising forest (F1), then monodominant Okoume forest (F2), then young Marantaceae forest (F3), the mixed Marantaceae forest (F4), and eventually mature old-growth forest [19]. In this study we consider these five distinct vegetation types which appear to show successional stages, ending with mixed Marantanceae forest <700 years old, as reported by White [18,19]. Savannas of LNP are tree species-poor, with no Acacia spp. and dominant species including Crossopteryx febrifuga, Bridelia ferruginea, Sarcocephalus latifolius and Psidium guineensis. Colonising forest is characterised by an open canopy, as trees are not sufficiently large and tall to fully meet one another (Fig 2), and the presence of heliophile species such as Okoume (Aucoumea klaineana, Burseraceae), Lophira alata (Ochnaceae) and Sacoglottis gabonensis (Humiriaceae). Monodominant Okoume forest (F2) is characterised by a closed canopy of A. klaineana trees of similar age and an open understory (Fig 2). Young Marantaceae forest (F3) consists of larger Okoume trees and a very distinctive understory dominated by a thick layer of herbaceous plants of the Marantaceae and Zingiberaceae families produced within light gaps created by falling trees, often of pioneer species (Fig 2). 
Mixed Marantaceae forest (F4) refers to a mature Marantaceae forest with very large trees emerging of a diversity of species coupled Savanna-Forest Dynamics with some areas of very low-stature vegetation, and an understorey dominated by Marantaceae and Zingiberaceae families. This has the highest tree species richness compared with the other forest types. Further details on the vegetation types of LNP and the different successional stages can be found in White [18,19] and White and Abernethy [32]. Although the term closed canopy is often defined as 'areas where tree cover exceeds 40 per cent while the term open forest refers to areas where tree cover is between 10 and 40 per cent' (see [34,35], White and Abernethy [32] refer to closed canopy as something > 60-70% canopy cover. Even though occurrence/prevalence of tree harvesting inside National Parks and other protected areas is a widespread phenomenon in Africa, this is typically not the case in Gabon. The Lopé reserve experienced low-level selective logging, usually for Okoume, prior to becoming a National Park, between 1965 and 1980 [18]. Firewood harvesting does not occur as the population density is very low adjacent to the park. Savanna fires have occurred for at least the timespan of human occupation, at least 40,000 years. A human population expansion following the arrival of Bantu farmers~4,500 years ago increased the number of fires, while a human population crash~1,400 years ago reduced the numbers of fires [36]. Recent historical fires were set by local people, but since 1993 a fire management plan has been in place to help maintain the regionally-rare savanna ecosystems [18,37]. Field measurements Five plots in each vegetation type were installed in 1993 [18], each 20 × 40 m with similar altitude and topography, using standard field inventory methods (see Fig 2). Plots were established in areas with the best known burn history, including the use of aerial photographs, and were classified into different vegetation types according to canopy cover, tree height, species' diversity and dominance, presence/absence of a thick layer of herbaceous plants and vertical forest structure [18]. Overall, savannas were open grassland with sparse trees <4m height and low tree species' diversity. F1 had more trees but < 70% canopy cover, trees >4m and low tree species' diversity. F2 had >70% canopy cover and canopy height of 15-20m, and were dominated by A. klaineana, F3 had <70% canopy cover, this was dominated by A. klaineana, and a thick layer of herbaceous plants of the Marantaceae and Zingiberaceae families. F4 also had a thick layer of herbaceous plants of the Marantaceae and Zingiberaceae families, plus much larger trees, higher canopy cover and high tree species diversity. Plots were re-measured in 2013, except one F1 plot which could not be relocated in 2013 due to tag loss. In all plots all living free-standing woody stems !10 cm diameter at 1.3 m along the stem from the ground (or above buttresses/deformities if present) were measured and stems were identified to species where possible. In 2013 tree height was measured using a laser hypsometer (Nikon Forestry Pro). In three plots per vegetation type, soil samples were also collected in 2013 at depths of 0-5cm, 5-10cm, 10-20cm, 20-30cm and 30-50cm. The litter layer was excluded. These were collected at the centre of the plot using soil sampler that does not disturb the soil (Eijkelkamp Agrisearch Equipment BV, Giesbeek, The Netherlands). Samples were air-dried. 
In two plots per vegetation type, vertical forest structure was assessed in 2013 using novel 3D terrestrial LiDAR measurements [38,39]. These TLS (Terrestrial LiDAR Scanner) data were acquired with the RIEGL VZ-400 3D instrument (RIEGL Laser Measurement Systems GmbH, Horn, Austria). Full hemispherical scan data were collected at a scan resolution of 0.06° in the azimuth and zenith directions. Each plot had six scan locations, following a systematic 20 × 20 m sampling design (Fig 2). The scanner records multiple return data, with a maximum of four returns per emitted pulse, which improves vertical sampling in the upper canopy [40]. Analysis of soil samples On arrival in the laboratory all soil samples were oven dried to constant mass at 40°C. Analyses followed standard methods for tropical soils; see Quesada et al. [41] for full details. Briefly, sieving, weighing and subsampling, particle size (sand/silt/clay) and pH were carried out following ISRIC [42]. Cation exchange capacity (CEC) and weatherable elements (Ca, Mg, P, Fe, Al, Na, etc.) sample preparation also followed ISRIC [42], and these were measured using Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES, Perkin Elmer Optima 5300DV). For CEC the modified silver-thiourea method [43] was used. Bulk density (BD, g cm −3 ) followed Rowell [44], and carbon and nitrogen content were analysed following the manufacturer's recommendations for ground fine material using the Vario Micro Gas Combustion Analyser (Elementar Co.). Phosphorus was analysed by sequential extraction following Tiessen and Moir [45]. All soil samples were analysed at the soil laboratory of the University of Leeds. The soil organic carbon pool (SOC) in each layer, in Mg ha -1 , was estimated using the equation SOC = C × BD × V, in which C is the proportion of a given mass of soil that is carbon, BD is the bulk density of the soil (mass per unit volume), in Mg m −3 , and V the volume (m 3 ) of soil (a short worked illustration is given after the data analysis description below). Fire history of the studied plots As fire frequency may be an important factor determining woody encroachment we assessed fire history for each of the plots. First, information on planned and recorded fires between 1995 and 2008 [37], and from 2009-13 (LNP managers), was gathered. Second, burned area data from the official MODIS MCD45A1 Collection 5.1 product [46] and the MCD64A1 product, including thermal anomalies [47], were obtained. Monthly burned area data were extracted for each plot, between April 2000 and July 2013. During the managed burning period (1995-2013), all savanna plots were burned between five and 19 times, but no fire was recorded entering F1-F4 stages of forest. Prior to 1993 it is presumed that unmanaged fires affected savanna plots and possibly F1 plots, but not F2-F4 plots. Data analysis For each plot, we calculated Basal Area (BA), stem density, and BA-weighted wood mass density (WMD BA ) following Lewis et al. [48]. For AGB, the Chave et al. [49] equation including diameter, wood mass density (WMD) and tree height was used to estimate the AGB of each tree in the plot. Diameter was used as measured, unless a change in the point of measurement (POM) occurred. These tend to occur when trees grow fast and buttresses form, but the raising of the POM underestimates the true growth of the stem.
In these cases we harmonize the two disjointed sets of growth measurements (from the original POM, and the new POM) by replacing the measured diameters with the mean of (1) the ratio of the original to the new POM diameter measurements (to standardize each diameter measurement to the height of the original POM), and (2) the ratio of the final to the original POM diameter measurement (to standardize each diameter measurement to the height of the final POM; [12,50]). The best taxonomic match of wood density to each stem was extracted from a global database [51,52] following Lewis et al. [48]. As 1993 field measures did not include tree height, a best estimate of the height in 1993 based on available data was used. First, following Feldpausch et al. [53], the relationship between tree diameter and height was established using a Weibull function [54] for each forest type separately. These relationships were then used to estimate heights for every tree in both 1993 and 2013 (H weibull ). Then, for each tree, the difference between H weibull (estimated height) in 2013 and field-measured height (H real ) provides an offset from which to estimate height in 1993, by subtracting the offset from H weibull in 1993 to obtain H real in 1993. Stem density (number of trees ha -1 ) included all trees ≥10 cm diameter, while BA (sum of the cross-sectional area at 1.3 m, or above buttresses) was calculated in m 2 ha -1 . WMD BA (the mean of the WMD of each stem weighted by its BA) was estimated as dry mass/fresh volume, in g cm -3 . AGB change terms were divided into growth (gain in AGB due to tree growth and tree recruitment) and mortality (loss due to tree mortality) components. For AGB gains, we also added the productivity of newly recruited stems, sensu Talbot et al. [50]: the 86th percentile growth rate of stems in the 10-19.9 cm size class from the same plot census is used, since this provides the closest approximation of the mean growth of recruits. Stem turnover was calculated as the mean of the number of stems recruited and lost due to mortality, sensu Phillips and Gentry [55]. Three biodiversity metrics were calculated for 1993 and 2013: species richness, the Shannon index (H') and the Bray-Curtis Index of dissimilarity (BC). Species richness was determined as the total number of species observed in a given plot. H', a measure of biodiversity calculated from the relative abundance of species in a community, was computed separately for each plot as H' = −Σ p_i ln(p_i) (summed over the S species), where p i = n i /N, n i is the number of individuals present of species i, N the total number of individuals, and S is the total number of species. The Bray-Curtis Index of dissimilarity (BC), used for comparing the dissimilarity and diversity of sample sets, was defined as BC = 2 S i,j / (S i + S j ), where S i,j is the number of species found in both sample sets, S i is the total number of species of sample set i, and S j is the total number of species of sample set j. A value of BC = 1 indicates complete similarity, while BC = 0 indicates complete dissimilarity. As we wanted to establish if vegetation types were becoming more similar with increasing time, BC between a given vegetation type and the following vegetation type in the succession in 1993 was computed by combining all plots within each vegetation type in 1993. Then, BC between a given vegetation type in 2013 and the following vegetation type in the succession in 1993 was also computed.
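For readers who want to reproduce these calculations, the short sketch below illustrates the per-layer soil organic carbon stock from the previous section together with the Shannon index and the Bray-Curtis similarity described above. It is a minimal illustration only: the Python function and variable names, and all numerical values and species counts, are hypothetical and not taken from the original analysis scripts.

```python
# Minimal sketch (hypothetical numbers): per-layer SOC stock, Shannon index H',
# and the Bray-Curtis similarity as described in the text.
import math

def soc_stock_mg_per_ha(c_fraction, bulk_density_mg_m3, layer_depth_m):
    """SOC = C x BD x V, with V taken per hectare (layer depth x 10,000 m^2)."""
    return c_fraction * bulk_density_mg_m3 * layer_depth_m * 10_000

def shannon_index(abundances):
    """H' = -sum(p_i * ln(p_i)), where p_i = n_i / N."""
    n_total = sum(abundances.values())
    return -sum((n / n_total) * math.log(n / n_total)
                for n in abundances.values() if n > 0)

def bray_curtis(species_a, species_b):
    """BC = 2*S_ab / (S_a + S_b); 1 = identical species lists, 0 = none shared."""
    a, b = set(species_a), set(species_b)
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical values: 0.65% carbon, bulk density 1.3 Mg m^-3, 0-30 cm layer.
print("SOC (Mg C ha^-1):", soc_stock_mg_per_ha(0.0065, 1.3, 0.30))

# Hypothetical stem counts (trees >= 10 cm diameter) for one plot in two censuses.
census_1993 = {"Aucoumea klaineana": 24, "Lophira alata": 6, "Sacoglottis gabonensis": 3}
census_2013 = {"Aucoumea klaineana": 22, "Lophira alata": 10, "recruit sp. 1": 2}
print("H' 1993:", round(shannon_index(census_1993), 3))
print("H' 2013:", round(shannon_index(census_2013), 3))
print("BC similarity:", round(bray_curtis(census_1993, census_2013), 3))
```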
In order to assess if species' dominance changed, for each plot we also computed species dominance (in terms of % of BA) of two common light-demanding species: the above-mentioned A. klaineana (which forms monodominant stands in F2), and L. alata, a tree often found alongside A. klaineana in forest regrowth [32]. Vertical plant profiles of each plot were derived from terrestrial LiDAR through estimates of the vertically resolved gap fraction. Vertical plant profiles describe the plant area volume density (PAVD) as a function of canopy height and can be used to quantify the plant area index (PAI), but also to derive various canopy height metrics, see Calders et al. [56] for full details. The PAVD is defined as the plant area per unit crown volume (m -1 ) and the PAI is the total one-sided plant area per unit ground area [38]. Potential errors in vertical plant profiles related to terrain were corrected following Calders et al. [40]. Each forest plot was scanned from six locations and the plot vertical profiles were calculated by averaging the vertically resolved gap fraction of each individual scan. Savanna plots were not assessed with TLS since no trees with > 10 cm diameter were present thus no canopy was present. R statistical software R v3.2.1 and RStudio v.0.99.447 were used for all statistical analyses [57]. A Bonferroni correction was applied when considering differences in soil characteristics amongst vegetation types. The study was carried out with permission as part of the government of Gabon national carbon inventory programme. The TLS data was collected under a CENAR-EST research permit and ANPN National Park entry permit. All plant species in the park are protected, with some protected throughout Gabon, and some on IUCN lists. Permits covered measurements of these trees as part of the study. Soil characteristics Only small differences in soil characteristics between vegetation types were observed, mostly in comparisons between the extreme ends of the putative succession: savanna and mixed Marantaceae forest (F4) soil (Tables 1 and 2). Overall, savanna soils had significantly higher pH and C:N ratio but lower cation exchange capacity (CEC) and aluminium than F4 forests while the other forest types had intermediate values (Tables 1 and 2). F4 forests also had significantly lower Bulk density (BD) than other forest types. No differences in C%, total C or total P between vegetation types were observed (Table 1). Few differences were noted between the soils beneath the three most mature forest vegetation types (F2, F3 and F4), although F4 forest had significantly lower pH, higher aluminium, and lower bulk density than F2 or F3 forest types. While P, Mg, K, Na and C% decreased significantly with increasing soil depth in all vegetation types, this pattern was not clear for soil pH, CEC and C:N ratio (Fig 3). With regard to micronutrients, savanna soils had significantly higher Mn than other vegetation types but lower Zn and Fe than F4 (Table 2). No other significant differences in micronutrients (Ba, Co, Cu, Mo, Ni) were observed. Observations in 1993 In 1993, no tree >10 cm diameter was found in any savanna plot (trees <10 cm were observed). Colonizing forests (F1) had significantly less AGB, at 43 Mg dry mass ha -1 , than the other forest types (>300 Mg dry mass ha -1 , see Table 3). F1 forests also had significantly lower BA than other forest types. WMD BA ranged between 0.45 and 0.64 (Table 3). 
Species richness ranged from 5.7 to 13.4 per 0.08 ha plot, and H' from 1.24 to 2.26, with only F1 forests being significantly different from F4 forests in terms of diversity metrics (Table 4). While F4 had lower A. klaineana dominance than F2 and F3 (5% compared with 50%), there were no differences in L. alata dominance amongst forest types (Table 4). Stem densities were lowest in F1 plots, highest in F2 plots and low in F3 and F4 plots. Observations in 2013 In 2013, as in 1993, no tree >10 cm diameter was found in any savanna plot. After 20 years F1 and F2 forests had gained AGB, but the rank AGB order did not change. F1 forests had significantly less biomass than the other forest types (100 Mg ha -1 versus > 400 Mg ha -1 , see Table 3). Although F1 forests had significantly increased stem density and BA since 1993, in 2013 stem density was significantly higher than F4 forests, while BA was still significantly lower than F4 forests (Table 3). In 2013 F1 forests had significantly increased in WMD BA , and had significantly higher WMD BA than F3 (Table 3), possibly due to the significantly increased dominance of L. alata, which has a particularly high wood density. With regard to biodiversity, only F1 had increased in species richness, but the H' pattern remained the same, with F1 forests continuing to be significantly different from F4 forests (Table 4). While L. alata dominance had increased in F1 forests, A. klaineana dominance patterns had not changed (Table 4). Changes within vegetation types Savanna (S). No tree >10 cm diameter was found in any savanna plot in either 1993 or 2013. The woody structure of these plots is characterised by small trees <10cm basal diameter, of two species only (Crossopteryx febrifuga and Psidium guineensis), which are small stature trees that rarely attain 10 cm diameter. Neither of these species was found in any forest plot. Colonising forests (F1). Between 1993 and 2013 F1 forests significantly increased in AGB, BA, stem density and WMD BA (Table 3, Figs 4 and 5). Although species richness and L. alata dominance significantly increased, H' did not significantly change (Table 4). No A. klaineana was recruited in any F1 plots, likely because this species is strongly shade intolerant and highly fire sensitive. Monodominant Okoume forests (F2). Between 1993 and 2013, F2 forests also significantly increased in AGB and BA but not in stem density (Table 3). WMD BA , species richness, H', A. klaineana or L. alata dominance did not significantly change (Tables 3 and 4). Although 20 years saw an increase in the AGB and BA of F2, none of the F2 plots could be classified as F3 in 2013. For F2 plots, high values of AGB were related to middle values of BA, stem density and low WMD BA (Fig 5). Young Marantaceae forests (F3). F3 forests were very stable across the 20 years: they did not significantly increase in AGB, BA or stem density between 1993 and 2013 (Table 3). In fact, two plots lost AGB due to large trees dying, F3.3 lost two trees >50cm diameter and one >1m diameter, while F3.4 lost two trees >50cm diameter (see Fig 4). Neither species richness, H' nor WMD BA changed significantly during this period (Tables 3 and 4). F3 plots had similar values of AGB, BA, stem density and biodiversity to F4 plots, both in 1993 and 2013. F3 plot A. klaineana dominance in 1993 remained in 2013 (Table 4). Twenty years did not significantly change F3 forests; therefore, none of F3 plots from 1993 could be classified as F4 in 2013. 
For F3 plots, as for F2 plots, high values of AGB were also related to middle values of BA, stem density and low WMD BA (Fig 5). Savanna-Forest Dynamics Mixed Marantaceae forests (F4). F4 forests were very stable across the 20 years: they did not significantly increase in AGB, BA or stem density (Table 3). One F4 plot lost 30% of its 1993 AGB due to two trees >70cm diameter dying. Neither species richness, H' or WMD BA changed significantly during this period. For F4 plots, high values of AGB were related to high BA and WMD BA but low stem density (Fig 5). Comparing putative successional stages Overall, F1 and F2 forests increased in AGB, mainly as a result of adding stems (recruitment) in the case of F1 forests, or increased BA in the case of F2 forests. Some plots of F3 and F4 increased in AGB while some decreased. Relative change in stem density and AGB over time was different depending on forest type (Fig 4). Considering changes in stem density and AGB together, in Fig 4, F1 forests mainly moved along the y-axis (larger increases in stem density relative to AGB) while F2 forests moved along the x-axis (larger increases in AGB relative to stem density). F3 and F4 plots were more scattered (some had high or relatively low stem density and AGB) and changes occurred in different directions, as some plots lost AGB due to large trees falling, and others recovered from local disturbance events. Eight plots of the twelve plots which increased in AGB (of F2, F3 and F4) decreased in stem density, suggesting a tendency towards stand self-thinning (Fig 4). When the relationship between AGB and other parameters is considered, in general, F4 plots have higher values of AGB because of high BA and WMD BA but lower stem density, while F3 and F2 forests have higher values of AGB as a result of having intermediate values of BA and stem density, and lower values of WMD BA (Fig 5). Annual change in AGB (Mg dry mass ha -1 year -1 ) was not significantly different between the forest types, due to high variation within forest types (Table 3). Changes in AGB related to losses (from mortality) were not significantly different between forest types. However, changes related to gains (growth of surviving stems and recruitment) were significantly lower in F1 compared to the other forest types, with F1 forests having significantly higher AGB from recruitment of new stems than other forest types (Table 3). Regarding diversity, only F1 significantly increased in species richness over the 20 years. F1 plots also increased in L. alata dominance, but not in H'. No significant biodiversity-related changes were observed for F2, F3 or F4 plots. The Bray-Curtis Index of dissimilarity between a given vegetation type and the following vegetation type in the succession in 2013 was slightly higher than in 1993 (Table 4). Vertical structure Vertical plant profiles derived from TLS data were different depending on forest type (Fig 6). Plant area volume density (PAVD) was highest for F1 forests at 20 m, and there were few trees >30 m. PAVD had two peaks for F2 and F3 forests, at 3 m and 38.5 m. The peak in the upper canopy in F2 forests was larger than for F3 forests (canopy of most A. klaineana). Both F2 and F3 forest types show a second, lower large peak at 3 m due to thick Marantaceae understory. F4 forests also showed a bimodal vertical plant profile; with an upper canopy PAVD peak around 26.5 m, and lower peak at 3m. 
Some variation around each profile was found, related to the small number of plots sampled (two per vegetation type). Fire history of the plots Observations by the authors showed that for the period 1995-2013, the savanna plots were each burned between five and 19 times. No fires were detected in any other plot. No fires were detected in any of our savanna plots by the two MODIS burned area products used between 2000 and 2013. The discrepancy is likely due to dry-season cloudiness meaning that data was missing for most of the fire-prone months each year, thus the use of MODIS products to infer fire impacts for this region is not further recommended. Soil characteristics Savanna soils had significantly higher pH and C:N ratio but lower CEC and aluminium than other forest types, which is likely to be related to fire frequency in this vegetation type. Burning tends to clear vegetation and then stimulate fresh vegetation growth, and is known to increase soil pH (related to Ca and Mg being released from organic matter being burned) but decrease organic C, N, exchangeable aluminium and sulphur, the latter being volatilized by fire [58]. With regard to soil C stocks at depths of 0-30 cm, there was no clear successional gradient, as values ranged from 23.7 to 27.2 Mg C ha -1 and were not significantly different amongst vegetation types. The values in F1-4 are much lower soil C stocks than those reported in the literature (Table 5), including very recently collected soil carbon data at the same study site [59], the latter probably related to local heterogeneity at sampling sites and the method these authors used to estimate BD. This is consistent with soil maps of Gabon showing Lopé savannas and adjacent forest having extremely old and weathered soils [60]. However, it should be noted that: (i) few studies have assessed soil C stocks in Africa; (ii) great variation has been reported in such studies that have been undertaken; and (iii) these studies often report values for different depths (0-20cm, 0-30cm, 0-50cm, 0-100cm, 0-200cm, which make comparisons difficult, see Table 5). The lack of difference in soil C stocks amongst vegetation types is on the one hand not surprising, as the geology and soil types do not differ. On the other hand, elsewhere significant differences between both savannas and nearby forests, and between different forest types have been documented. For example, Coetsee et al. [61], in a study of soil C change related to woody plant encroachment in South Africa, reported that forests contained significantly more soil carbon than adjacent grassland savannas for 0-100 cm depth. However, unfortunately, these authors do not provide absolute values to be compared with our study. Differences in soil C stocks between different forest types have also been reported: Djomo et al. [62] show mixed forests having greater stocks than Caesalpinioideae rich forests (Table 5), although this may be also due to differing soil types. Overall, the low organic matter inputs into these soils over long periods of time-as the forests are all young (<700 years [19])-likely explains the low and similar C stocks in the soils underlying these forests. Changes in AGB and vegetation structure First, it should be noted that our sampling method only included trees !10 cm diameter, and savanna trees in our plots did not reach this size. Jeffery et al. 
[37] reported that some savanna patches protected from fire, located near the forest edge thickened rapidly over a 15-year period to become classified as colonising forest. Savanna plots sampled in this study were all burnt at least five times, suggesting that fire has played a role in preventing rapid transition to F1 type forest during this period. Comparing the forest types along the putative successional gradient young (F1) forests increased in stem density while oldest forests (F4) tended to have a net loss of individuals (a decrease in stem density). These changes broadly follow the pattern expected from the literature [18,19,63], and can be seen in Fig 4. AGB significantly increased over time in F1 and F2 forests. In F1 forests, greater AGB was related to an increase in stem density and BA while in F2 forests it was only related to an increase in BA. Indeed, when the canopy is still open (F1), recruitment of trees is high (e.g. between 240 and 520 individuals ha -1 were recruited into F1 plots over the period 1993-2013) and mortality is low, but with time, as the canopy closes (as in F2) and as competition for light and resources increases, forests tend towards self-thinning with recruitment reducing and mortality increasing. Even though self-thinning is a debated topic in forest ecology (see [64]), it is a common view that F2 forests (monodominant Okoume forests) follow this pattern, as Fuhr et al. [65] reported: 'BA increases but stem density decreases with stand age'. Because trees become increasingly prone to disturbance with increasing age [66], and F2 forests are close to a single cohort stand, the death of old trees creates large gaps that can take many years to refill. This more open canopy is typical of young Marantaceae forests (F3), where more light reaches the forest floor, which is then colonised by ground-level Marantaceae and Zingiberaceae plants. The stability over 20 years in terms of structure, AGB and diversity of F3 suggests either that this successional type is very long-lived, or that this forest type might not be an intermediary stage towards F4. It has long been accepted that F3 forests were a preliminary stage of F4 forests, mainly found at the forest edge physically located between F2 and F4 forests and in areas with high large mammal density especially gorillas and forest elephants [19,67]. However Tovar et al. [68] point out that Marantaceae forest might not be a successional stage, either following fire, savanna colonisation or post-agriculture regeneration [19], but may be a final stage in its own right. They found that the establishment of Marantaceae forests may also be associated with frequency of fires in forest rather than just savanna conversion to forest. Tovar et al. [68] also suggest that the mechanisms behind the maintenance of Marantaceae forest are more related to the opening of the canopy rather than to the establishment of the Marantaceae species themselves. Our results cannot distinguish between these alternative hypotheses. But they suggest that at a local scale the various paths that F3 forests follow can be quite different from one another. Some plots might be 'trapped' as young F3 forest, becoming a final stage in itself, due to forest 'engineering' in areas of high large mammal density; while other F3 plots might tend towards F4 and then old-growth forest. 
It should be noted that Marantaceae forests are not found in certain areas, such as coastal Gabon, where monodominant Okoume forests evolve directly to resemble the surrounding mixed forest [65]. More research is needed on the successional pathways of Marantaceae forests. The amount of change in structure, AGB and diversity after 20 years was low. Even though the changes were greater at the beginning of the succession, no F1 or F2 plot could be classified as the following forest type in the succession. Forest recovery is expected to be faster in areas close to remnant forest patches, as recovery speed is considered to depend heavily on seed dispersal [14], although local climate conditions and soils might also be important. In our study area forest patches are close to areas of colonizing forest (often < 1 km) and animal dispersers are common. Nevertheless, we only observed a 'fast' increase of AGB for F1 and F2 forests, and even then little change in tree species diversity was seen. Forest recovery from direct human impacts is often assessed via changes in tree canopy cover (e.g. [69]), and indeed, F2 forests have a closed canopy while F1 do not. It has been reported that the succession (first stages) is more rapid during the colonisation of cultivated lands in the coastal area of Gabon than during the colonisation of paleo-climatic savannas in the LNP [65]. While it has been estimated that in Coastal Gabon it takes about 40 years to reach the monodominant Okoume forest stage [65], in LNP it has been predicted to take 100 years (AGB increase of 3 Mg dry mass ha -1 year -1 ), and F3 or F4, even longer. However, if biodiversity is also taken into account, it would take much longer as no A. klaineana was recruited in 20 years in any F1 plot, and this species is what defines F2 forests. Thus, given the potential 80-100 year transition from F1 to F2, it is not surprising that we did not document an F1-F2 transition in our 20 year study. This is the first report on the use of TLS to obtain vertical plant profiles of different tropical forests types. Our vertical structure results from the different forest types support previous work in that (i) canopy height increases along the hypothesised succession, (ii) vertical forest structure is different amongst forest types, and (iii) the Marantaceae understory layer can be very thick in F3 [19]. Tropical forests, with high biodiversity and complex vegetation structure are often a challenge to classify. The use of TLS has potential for including objectively measured structural characteristics in vegetation classification, and if combined with botanical knowledge may be a powerful tool. We suggest TLS is therefore likely to prove a useful additional tool to analyse tropical forest structure. Furthermore, it can efficiently quantify relatively small changes in vegetation structure change [56]. Of course, vegetation varies so scanning many plots within a forest type will be required to capture robust patterns and differences amongst forest types. Changes in biodiversity and wood density Significant biodiversity changes over time were only documented for species richness and L. alata dominance for F1. This supports the hypothesis that changes in biodiversity are much slower, occurring over centuries, compared to changes in forest structure or AGB occurring over decades [70]. No A. klaineana was recruited in any F1 plot in 20 years and in F2 A. klaineana density did not alter over 20 years. A. 
klaineana is a long-lived early-successional light demanding species mainly found in Gabon, Equatorial Guinea and Congo [71]. It is able to colonize open spaces, thus can form monodominant stands [72] and even-aged stands, and persists in mixed forests as other species are recruited. In our study A. klaineana in F2 forests were about 50% of BA, and appeared to be fairly even-aged stands. In F3 forests, A. klaineana trees were larger and less abundant, implying a maturing stand. A. klaineana comprised only about 5% of BA in F4. A. klaineana cannot resist fire. Fire, even once in 20 years, will be likely to differentially remove A. klaineana from a colonising stand (F1) and set the emerging forest on a successional track that will not include F2 or F3 stages as we describe them, but will have alternate successional stages. Overall, as forest type changes observed were slow, the sequence we see may be a very slow succession. Our results for WMD BA may appear surprising, as successional change in species are expected to result in an increase in WMD BA when pioneer species are replaced with slower growing mixed forest species [27]. However, it should also be considered that savanna species often have high wood density, related to resistance to fire. In this study only F1 had significantly greater WMD BA than F3, and this was related to L. alata dominance in F1, and the dominance of large A. klaineana trees in F3. L. alata has paradoxically high-density wood (0.897 g cm -3 ), unlike most light-demanding species which tend to have a light wood (e.g. A. klaineana = 0.378 g cm -3 ; [52]). L. alata heavy wood is linked to its fire-resistance. Although we did not find any L. alata in our savanna plots, single isolated stems of L. alata are also occasionally found in savannas. The mixed forest plots WMD BA is typical of Central African tropical forests [48]. Implications for management The different forest types studied here each store considerable amounts of carbon as AGB (47% tropical forest AGB is carbon, [73]). F2, F3 and F4 forests had 182, 210 and 232 Mg C ha -1 respectively, which is similar to the mean value of African closed canopy forests, at 203 Mg C ha -1 , and substantially higher than Amazonian values [48]. Furthermore, F1 and F2 increased in carbon stocks over the 1993-2013 period, mirroring wider patterns of increasing AGB in African forests [12,16]. As many REDD+ carbon projects are 25-30 years long, protecting F1 or F2 forests might seem to be a relatively easy and/or cost-effective way to not only maintain but also increase carbon stocks on the land. Alternatively, F2 forests could provide timber, taking pressure off older more diverse forests. A. klaineana is the main timber export from Gabon (82% of the total timber production, see [74]). It has been suggested that these monodominant stands are not in equilibrium, and therefore that selective logging, with the consequent small scale canopy gaps, can be a way to promote a sustainable population for this species [75]. Although F2 forests store important quantities of carbon, and they increase their carbon stocks over time, their timber value is highest, therefore it might be more economically viable (and ecologically relevant for the species) to exploit F2 if this spares other more ecologically valuable forests from degradation. Marantaceae forests (F3 and F4) sometimes considered degraded forest, are often assigned low conservation and research priority [76]. 
However, this forest type not only has great importance to several flagship species (gorillas, forest elephants) as it provides a dependable year-round supply of vegetative food [76], they also store large quantities of carbon per unit area. Thus this forest type, which may have been much more extensive in the past [19], may require active management to maintain areas of F3 and F4 forests [19,68]. However, how exactly to best undertake interventions to maintain such a system is currently unknown. Continued monitoring and research to ascertain if F4 is a long-term deflected succession is important. The fire management plan was designed to reduce rates of forest expansion into the savanna to maintain a diversity of habitats in the forest-savanna transition zone [37]. This is important to maintain the ecologically distinct flora of the central Gabon savannas. One further aim of this was to encourage the seasonal use of savanna's by large mammals as part of plans for the further development of tourism in and around LNP. Our results suggest that the current fire management programme has been sufficient to prevent savanna plots from converting to colonising forest in the 20 year study period. However, it should be noted that here we only report trees !10 cm diameter, thus woody thickening of savannas or compositional changes were not assessed. Jeffery et al. [37], using fixed-point photomonitoring methods, reported savanna thickening and forest expansion in certain parts of LNP. Veenendaal et al. [77] highlight that once subordinate woody canopy layers are taken into account, a less marked transition in woody plant cover across the savanna-forest species discontinuum is observed compared to that inferred when trees !10cm diameter are considered. Future work should assess these smaller stems and shrubs. Conclusions To our knowledge, this is the first study assessing long-term phytodemographic changes over time along the succession in the Central African savanna-forest mosaic. Observed changes in AGB and vegetation structure followed our hypothesised directions, but the rate of change was found to be, overall, slow, especially with regard to changes in biodiversity and species' dominance. After 20 years no plot could be classified as having moved to the next stage in our putative succession of forest types. Despite a lack of change, our study highlights the high carbon storage in AGB in these forests. Additional long term monitoring is required to better understand forest dynamics in Central Africa. Ideally, this will include TLS to provide precise and accurate structural parameters of each forest types and how these are changing. Soil properties differed only between savanna and mixed Marantaceae forests, likely due to the effects of fire frequency on savanna soils. However, soil carbon stocks did not differ amongst any of the vegetation types, and different forest types did not occur on different soil types. Forest plots had much lower soil carbon stocks than other African forests. Overall, the fire management plan to keep some areas of LNP as open savanna is maintaining savanna. Our documented increases in AGB across F1 and F2 forests suggest that carbon stocks may be increasing in LNP. established by researchers as part of the AfriTRON network. 
We thank the Government of Gabon, notably CENAREST (Centre National de Rechereche Scientifique et Technologique) and ANPN (Gabon's National Parks Agency), and CIRMF (the International Medical Research Centre in Franceville) for permission to undertake this study and for logistical support in the field. We are grateful for the substantial contribution to fieldwork made by C.A. Mandebet, K. Y. Mayossa, J. Gonzalez de Tanago, J. T. Dikangadissi, E. Dimoto, A. Sinibouré, J. Dibakou, D. Mala, C. Pasilende and M. Fernandez, plus support given by J. Poulsen and Caroline Tutin. We also thank J. Armston for his help with the vertical plant profiles. S.L. Lewis was funded by the EU FP7 GEOCARBON project and EU ERC T-FORCES project. K.J. Jeffery and L.J.T. White were funded by ANPN. K. Abernethy was funded by University of Stirling, and M. Disney and A. Burt were funded in part by the UK Natural Environment Research Council, which, through the National Centre for Earth Observation (NCEO) provided some travel funds.
2016-10-31T15:45:48.767Z
2016-06-23T00:00:00.000
{ "year": 2016, "sha1": "0b833d59c233b70d8feb4bc482c4291e006301d6", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0156934&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0b833d59c233b70d8feb4bc482c4291e006301d6", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geography", "Medicine", "Environmental Science" ] }
267059314
pes2o/s2orc
v3-fos-license
Successful bladder-sparing partial cystectomy for muscle-invasive domal urothelial carcinoma with sarcomatoid differentiation: a case report
Mark Sultan, Ahmad Abdelaziz, Muhammed A. Hammad, Juan R. Martinez, Shady A. Ibrahim, Mahra Nourbakhsh and Ramy F. Youssef
Abstract: High-grade (HG) urothelial carcinoma (UC) with variant histology has historically been managed conservatively. The presented case details a solitary lesion of muscle-invasive bladder cancer (MIBC) with sarcomatoid variant (SV) histology treated by partial cystectomy (PC) and adjuvant chemotherapy. A 71-year-old male with a 15-pack year smoking history presented after outside transurethral resection of bladder tumor (TURBT). Computerized tomography imaging was negative for pelvic lymphadenopathy; a 2 cm broad-based papillary tumor at the bladder dome was identified on office cystoscopy. Complete staging TURBT noted a final pathology of invasive HG UC with areas of spindle cell differentiation consistent with sarcomatous changes and no evidence of lymphovascular invasion. The patient was inclined toward bladder-preserving options. PC with a 2 cm margin and bilateral pelvic lymphadenectomy was performed. Final pathology revealed HG UC with sarcomatoid differentiation and invasion into the deep muscularis propria, consistent with pathologic T2bN0 disease, a negative margin, and no lymphovascular invasion. Subsequently, the patient pursued four doses of adjuvant doxorubicin though his treatment was complicated by hand-foot syndrome. At 21 months postoperatively, the patient developed a small (<1 cm) papillary lesion near but uninvolved with the left ureteral orifice. Blue light cystoscopy and TURBT revealed noninvasive low-grade Ta UC. To date, the patient has no evidence of HG UC recurrence, 8 years after PC. The patient maintains good bladder function and voiding every 3–4 h with a bladder capacity of around 350 ml. Surgical extirpation with PC followed by adjuvant chemotherapy may represent a durable solution for muscle invasive (pT2) UC with SV histology if tumor size and location are amenable. Due to the sparse nature of sarcomatous features within UC, large multicenter studies are required to further understand the clinical significance and optimal management options for this variant histology.
Keywords: bladder preserving therapy, case report, muscle invasive bladder cancer, partial cystectomy, sarcomatoid urothelial carcinoma
Background
Bladder cancer is the tenth most common cancer worldwide. 1 The largest predictor of bladder cancer mortality is tumor grade, with non-muscle-invasive bladder cancer (NMIBC) demonstrating about 95% progression-free survival at 15 years while localized muscle-invasive bladder cancer (MIBC) has a 5-year overall survival of 55%. 2,3 Neoadjuvant chemotherapy (NAC) followed by radical cystectomy (RC) with pelvic lymph node dissection (PLND) is currently the gold standard for the treatment of localized MIBC.
Youssef Abstract: High-grade (HG) urothelial carcinoma (UC) with variant histology has historically been managed conservatively.The presented case details a solitary lesion of muscle-invasive bladder cancer (MIBC) with sarcomatoid variant (SV) histology treated by partial cystectomy (PC) and adjuvant chemotherapy.A 71-year-old male with a 15-pack year smoking history presented after outside transurethral resection of bladder tumor (TURBT).Computerized tomography imaging was negative for pelvic lymphadenopathy, a 2 cm broad-based papillary tumor at the bladder dome was identified on office cystoscopy.Complete staging TURBT noted a final pathology of invasive HG UC with areas of spindle cell differentiation consistent with sarcomatous changes and no evidence of lymphovascular invasion.The patient was inclined toward bladder-preserving options.PC with a 2 cm margin and bilateral pelvic lymphadenectomy was performed.Final pathology revealed HG UC with sarcomatoid differentiation and invasion into the deep muscularis propria, consistent with pathologic T2bN0 disease, a negative margin, and no lymphovascular invasion.Subsequently, the patient pursued four doses of adjuvant doxorubicin though his treatment was complicated by hand-foot syndrome.At 21 months postoperatively, the patient developed a small (<1 cm) papillary lesion near but uninvolved with the left ureteral orifice.Blue light cystoscopy and TURBT revealed noninvasive low-grade Ta UC.To date, the patient has no evidence of HG UC recurrence; 8 years after PC.Patient maintains good bladder function and voiding every 3-4 h with a bladder capacity of around 350 ml.Surgical extirpation with PC followed by adjuvant chemotherapy may represent a durable solution for muscle invasive (pT2) UC with SV histology if tumor size and location are amenable.Due to the sparse nature of sarcomatous features within UC, large multicenter studies are required to further understand the clinical significance and optimal management options for this variant histology. Keywords: bladder preserving therapy, case report, muscle invasive bladder cancer, partial cystectomy, sarcomatoid urothelial carcinoma TherapeuTic advances in urology these practices are often associated with high morbidity rates, as such we are motivated to durably control early-stage disease while halting progression. Pure urothelial carcinoma (UC) approximately accounts for 75% of bladder cancer cases, whereas about 25% of cases demonstrate variant histology divided into urothelial and nonurothelial types.Urothelial variants include UC with squamous, glandular, or trophoblastic differentiation, micropapillary, plasmacytoid, tubular and microcystic, nested, clear cell, lymphoepithelioma-like, giant cell, and sarcomatoid types. 4Sarcomatoid variant (SV) UC is a rare type of UC of the bladder accounting for 0.1-0.3% of all instances. 5By definition, this tumor demonstrates both malignant epithelial and sarcomatoid components.The sarcomatoid component is either spindle cells or can demonstrate heterologous differentiation in the form of rhabdomyosarcomatous, osteosarcomatous, angiosarcomatous, liposarcoma, chondrosarcomatous, or other type of sarcoma. 6Due to its rarity and severity, SV UC management presents a substantial challenge.The literature remains limited regarding conservative approaches in the management of these historically aggressive variants. 7,8Thus, the American Urological Association (AUA) guidelines recommend early RC in patients with NMIBC with variant histology. 
9cording to the National Comprehensive Cancer Network guidelines, partial cystectomy (PC) is recommended as an option for patients with stage ⩽ cT2 disease and a solitary lesion amenable to segmental resection with adequate negative surgical margins and suitable resulting bladder functional capacity. 10 The presented case details a solitary lesion of MIBC with sarcomatoid changes treated by PC and PLND with adjuvant chemotherapy without evidence of high-grade (HG) recurrence over 8 years of surveillance.Thus, demonstrating a satisfactory oncological outcome while allowing the patient to maintain bladder function and spontaneous voiding without the need for RC. Case presentation A 71-year-old male with a past 15-pack year smoking history presented to the clinic with a history of MIBC on an outside transurethral resection of bladder tumor (TURBT).His medical comorbidities included hypertension, diabetes, and dyslipidemia; his body mass index was 30.7.Surgical history includes an umbilical hernia repair.The physical exam was noncontributory, and urine cytology returned positive.Imaging by computerized tomography (CT) abdomen and pelvis was negative for pelvic lymphadenopathy or abnormalities in either collecting system.A review of outside pathology was consistent with HG T1 UC but no definite evidence of muscle invasion.Repeat TURBT was recommended, given the need for appropriate staging. In the operating room under general anesthesia, complete cystoscopy demonstrated a 2 cm broadbased papillary tumor at the bladder dome, no additional tumors were appreciated.Complete TURBT revealed a final pathology of invasive HG UC with areas of spindle cell differentiation consistent with sarcomatous changes (Figure 1).There was no evidence of lymphovascular invasion and the muscularis propria was present but uninvolved, affirming pT1 disease.For HG T1 urothelial cancer at the bladder dome, the patient was offered different treatment options including RC versus PC with or without the need for adjuvant therapy versus repeat TURBT with the possibility of intravesical therapies such as Bacillus Calmette-Guerin, if there is no muscle invasion.The patient was inclined toward bladder-sparing options.Given his variant histology portends an aggressive tumor and with shared patient decision-making, an open PC with bilateral PLND was elected. Under general anesthesia, a midline incision from the umbilicus to the pubic symphysis was accessed to isolate the urachus.Subsequently, bilateral external iliac, internal iliac, and obturator lymphadenectomy was performed without appreciation for any suspicious lymph node morphology.After dissecting the bladder away from the pelvic side wall, direct cystoscopy confirmed the tumor to be confined to the bladder dome.A Satinsky clamp was utilized to isolate the tumor and PC was performed with a 2-3 cm visual margin.A three-layer bladder closure including the mucosa, muscularis, and adventitia was carried out.The abdominal drain was removed on postoperative day 3 while a catheter remained in place for 2 weeks.Final pathology revealed HG UC with sarcomatoid differentiation and invasion into the deep muscularis propria, consistent with pathologic T2bN0 disease with negative margins and no lymphovascular invasion. 
The patient subsequently pursued four doses of adjuvant doxorubicin chemotherapy, though his treatment was complicated by hand-foot syndrome. Restaging positron emission tomography CT after adjuvant therapy was negative for the disease. In accordance with the AUA guidelines, the patient was screened for recurrence by urine cytology, office cystoscopy, and annual CT with delay phase scans. For the first 2 years after surgery, the patient was screened with urine cytology and office cystoscopy quarterly, in addition to an annual CT chest abdomen pelvis with a delay phase. A full timeline of the patient's management and surveillance is included in Supplemental Table 1. At 21 months postoperatively, the patient developed a small (<1 cm) papillary lesion near but uninvolved with the left ureteral orifice (UO), above the trigone. Given his previous history of HG disease, the patient was counseled for blue light cystoscopy and TURBT.
One hour prior to arriving at the operating room, a catheter was placed to coat the bladder with hexaminolevulinate, which is preferentially absorbed by rapidly dividing cells, allowing for visual identification under UV light. Two lesions were identified: one lesion approximating the left UO as previously described, and another area at the dome of the bladder adjacent to his scar following open PC. After cold cup bladder biopsies, the lesions were resected and fulgurated with bipolar energy. Final pathology revealed noninvasive low-grade Ta UC without muscle involvement for the papillary lesion near the left UO, and reactive epithelium by the bladder dome. The patient continued with screening cystoscopy and urine cytology every 4 months for the first year, then biannually for 2 years, with annual upper tract screening by CT after his low-grade recurrence. To date, the patient has no evidence of HG UC recurrence, 8 years after PC. He continues to maintain appropriate bladder function, voiding every 3-4 h with a bladder capacity of about 350 ml.
Discussion
This case demonstrates the advantage of adequate surgical extirpation (with appropriate margins) in addition to adjuvant chemotherapy as a means for oncologic control of MIBC with SV histological changes. Historically, evidence has supported SV UC of the bladder (SV-UCB) to be a negative prognostic indicator. A cohort study of 46,515 patients with UC through the Surveillance, Epidemiology, and End Results (SEER) database program in 2007 was reviewed to demonstrate that patients with sarcomatoid carcinoma of the bladder presented at a more advanced disease stage as well as had a greater risk for death after adjusting for tumor stage on presentation.5 Confirmed again in 2010, the SEER program was analyzed to identify a cohort of 221 patients specifically with SV-UCB, with results demonstrating that SV-UCB presents as HG, advanced disease with a poor prognosis. The 1-, 5-, and 10-year cancer-specific survival rates were 53.9%, 28.4%, and 25.8%, respectively.11 This is a stark contrast to outcomes of RC for MIBC, with reported 5- and 10-year cancer-free survival rates at 66% and 68%, respectively.12 However, more recent data from 1067 and 624 patient samples with MIBC treated at single tertiary care centers failed to associate the SV with a negative effect on survival after RC.
13,14 These cohort studies represented data that included 21 and 15 cases of SV-UCB, respectively, between the tertiary care centers, corroborating the paucity of this variant histology.Hence, the current body of data is limited and inconsistent, precluding a full understanding of the disease with relevant randomized controlled trials (RCTs) to establish a gold standard of care. Previous evidence has recommended forgoing intravesical therapy in patients with SV T1 disease and proceeding directly to RC. 15 However, the morbidity of RC compared to bladder-preserving approaches may be prohibitive for certain patients, such as the elderly population. 16We are therefore motivated to investigate PC as an additional option to manage variant histology UC while preserving adequate bladder and sexual function. 17For patients with a solitary lesion amenable to resection, PC allows the surgeon to assess tumor margins completely as well as perform a PLND as needed.Published data from the SEER program registry for stage T1-T2 tumors with variant histology demonstrated no difference in cancer-specific or overall mortality on Cox regression modeling between PC and RC for the treatment of variant histology UC. 18 In addition, the successful use of PC in conjunction with PLND has previously been reported in the literature to manage SV UC. 19 Though one limitation regarding PC, as highlighted in this case, is the need for frequent surveillance cystoscopy due to the higher rate of recurrence within native urothelium.Our patient had one recurrence of a small low-grade Ta UCB 21 months after PC. The recent European Association of Urology guidelines suggest Magnetic Resonance Imaging (MRI) in conjunction with the Vesical Imaging-Reporting and Data System tool to be worthwhile for discriminating between NMIBC and MIBC due to superior soft tissue contrast. 20In addition, a pooled meta-analysis of 1724 patients undergoing MRI to stage bladder cancer demonstrated a sensitivity of 0.92 (0.88-0.95) and 0.88 (0.78-0.94) for discriminating between ⩽T1 and ⩾T2 tumors, respectively. 21Though MRI was not used in the management of this case, the authors believe clinical T2 disease may likely have precipitated neoadjuvant therapy. Another consideration is the role of PLND at the time of PC, as performed in this case.A SEER program query published in 2020 on patients with nonmetastatic pT2-T3 UC of the bladder treated by PC discovered only 50% of patients treated by PC concomitantly received PLND.However, the results noted a 5-year case-specific mortality of 30% for patients who received PLND compared to 41% for those who did not (p < 0.01). 22Though this corroborates the utility of PLND for nonmetastatic MIBC, the data are scarce regarding the utility of PLND for variant histology, particularly SV UC.However, given the patient's CT imaging for staging demonstrated no evidence of nodal involvement, a standard PLND template was elected over an extended PLND.Intraoperatively, pelvic nodes were not suspected for metastasis. Endoscopic management as a bladder-preserving option for SV UC has demonstrated inferior overall survival compared to RC. 23 However, trimodal therapy (TMT) with maximal TURBT, sensitizing chemotherapy, and radiation therapy have been validated as a bladder-preserving solution for MIBC in appropriate patient populations. 
24,25 However, no randomized comparison exists to compare RC to TMT in the management of UC with variant histology. A recent review of 303 patients with MIBC treated by TMT demonstrated 5-year survival for pure UC of 75%, yet with a small sample size of SV-MIBC cases (n = 8), no statistically significant difference was found on subgroup analysis compared to the reported 5-year survival rate of 56% in the SV pathology (p = 0.7).26 The presented patient was not considered for NAC, as the TURBT specimen demonstrated no evidence of muscle invasion and no published data exist to validate a survival benefit with NAC for the treatment of T1 bladder cancer with SV histology, a contrast from evidence for muscle-invasive or metastatic disease. A National Cancer Database study published on patients with non-metastatic MIBC (T2a-T4) demonstrated NAC improved overall survival and pathological downstaging compared to RC alone for patients with SV UC (n = 501, p = 0.014).27 Another SEER registry review of 110 patients with metastatic SV UC reported an overall survival of 8 months with chemotherapy treatment and 2 months without (p = 0.016).28 Thus, the role of chemotherapy continues to be investigated for the management of SV-UCB.
Current AUA expert opinion guidelines for nonmetastatic MIBC with variant histology recommend a divergence from standard evaluation and management.29 There is currently no evidence-based consensus for a single appropriate chemotherapy regimen for SV histology. A meta-analysis involving 10 RCTs in patients treated with adjuvant chemotherapy after RC for MIBC (n = 1183) demonstrated an absolute 11% improvement in recurrence-free survival (p < 0.001) compared to RC alone.30 In the presented case, four doses of adjuvant doxorubicin were administered after confirmation of muscle-invasive disease on final pathology. A study by Sui et al. compared RC alone (n = 106), RC and either chemo- or radiotherapy (n = 71), TURBT alone (n = 146), and TURBT with chemo- or radiotherapy (n = 71) for the treatment of SV UC. Findings validated that patients with RC and some form of multimodal therapy demonstrated the best overall survival.23 In addition to the mounting evidence within the literature, this case serves as an anecdotal example to justify the utility of chemotherapy in the treatment of SV-MIBC.
A limitation of this study is that the presented utility of PC as a potential management solution for SV pT2 UC is garnered from a single case report. Cohort studies are necessary to delineate oncologic and functional long-term outcomes. Future research endeavors ought to also assess the role of immunotherapy, as sarcomatoid transformation is associated with high PD-L1 expression, and a recent retrospective review of 755 patients with advanced or metastatic disease noted an improved complete response rate for patients with SV UC (52.6%) compared to pure UC (21.1%) after treatment with pembrolizumab (p = 0.032).31,32
Conclusion
Surgical extirpation with PC followed by adjuvant chemotherapy may represent a durable solution for muscle-invasive (pT2) UC with SV histology if tumor size and location are amenable for PC. Due to the sparse nature of sarcomatous features within UC, large multicenter studies are required to further understand the clinical significance and optimal management options for this variant histology in the management of bladder cancer. Adequate surgical extirpation, in the absence of other aggressive pathological features such as lymphovascular invasion and high stage, may portend a good prognosis after treatment of UC with SV, as in this case, further promoting cancer-free survival for solitary muscle-invasive UC.
2024-01-22T05:03:30.572Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "c0ff6e39f86b865ca4799ad4beec5298eaa6a5d5", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "c0ff6e39f86b865ca4799ad4beec5298eaa6a5d5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
231907242
pes2o/s2orc
v3-fos-license
The Effect of “High-ankle Sprain” Taping on Ankle Syndesmosis Congruity: A Cadaveric Study Methods: This controlled cadaveric laboratory study included ten cadaveric specimens installed in a custom-made device applying 750N of axial loading in order to simulate weight-bearing. Sectioning of syndesmotic ligaments, AiTFL and IOL, was done sequentially and CT scan images were taken with and without high-ankle sprain taping. A validated measurement system consisting of 3 lengths and 1 angle was used. Results were compared with Wilcoxon tests for paired samples and non-parametric data. INTRODUCTION Although ankle injuries are particularly frequent, syndesmotic injuries occur only in 1% to 24% of ankle ligamentous injuries, predominantly in the professional athlete population [1 -3]. In approximately 10% of ankle fractures, there is a concomitant syndesmotic injury [4,5]. The syndesmosis is a very stable fibrous joint composed of four ligaments, the Anterior Inferior (AiTFL) and Posterior Inferior Tibiofibular Ligaments (PiTFL), as well as the Interosseous Ligament (IOL) and the Transverse Tibiofibular Ligament (TTFL) [6 -13]. It is usually injured by an external rotation of the foot, causing ligaments to rupture [1,3,9,10,13]. Each ligament composing the syndesmotic joint provides some stability; however, the AiTFL, PiTFL and IOL are the major stabilizing structures [14]. The usual pattern of injury is a disruption of the AiTFL and anterior deltoid ligament followed by the IOL, TTFL and interosseus membrane [1,3,12]. The PiTFL and the remaining deltoid ligament require a greater force to be injured [1,3]. Untreated syndesmotic injury can lead to chronic instability and painful symptoms as a result of degenerative changes [12, 14 -17]. Diagnosis and treatment of syndesmotic injury remain controversial as there is a lack of sensitive physical exami-nation maneuvers to assess syndesmosis stability [2,9,18,19]. Furthermore, the parameters that were once diagnostic of syndesmotic injury on simple radiography are now being challenged because of the variability seen in normal synd-esmosis anatomy and the difficulty to control rotation during the radiographic assessment [15, 20 -24]. If the distance between the tibia and the fibula is clearly greater, due to complete disruption of the syndesmosis, it is usually obvious on radiography, particularly on the mortise view [13,25]. Simple radiography is also useful to rule out associated fractures [2,9]. However, in some individuals, the diastasis is not as clear on radiographs, even with complete ligament ruptures; thus, this method of imagery will fail to detect syndesmotic injuries in about 35-50% [23, 25, 26]. Therefore, when there is no clear diastasis nor clinical examination findings, syndesmosis injuries can be hard to diagnose [8,9,27]. A CT scan is a more sensitive diagnostic tool for syndesmosis injuries [13,23,24,28,29]. When identified, preor intra-operatively, there are multiple treatment options described in the literature [16,30]. Indeed, various surgical treatments are available to maintain or regain this stability, such as metallic screw fixation, absorbable screw, and dynamic suture button fixation devices [16]. In general, when mortise widening can be seen on imagery, surgical treatment with one of these devices is required [2]. However, if only one or two ligaments are ruptured and there is still some stability in the syndesmosis, conservative treatment can be the solution [2]. 
A previous study showed that the CAM (controlled ankle motion) boot could slightly increase external rotation of the fibula when axial loading was applied to the ankle, possibly because of the posterior cushions inside the boot [31]. Considering this, and knowing that no other study had evaluated this precise topic before [32], the following research question emerged: does "high-ankle sprain" taping have an impact on syndesmosis congruity under axial loading with different ligaments ruptured? The main purpose of this study was to evaluate the effect of high-ankle sprain taping on syndesmotic stability in various ligament conditions when axial loading is applied. The hypothesis is that high-ankle taping will not modify the distal anatomy between the tibia and fibula in axial loading with a syndesmotic injury; more specifically, we believe there will be no overtightening of the distal tibia and fibula. MATERIALS AND METHODS The first steps of this study were the same as our previous study with the CAM orthopaedic boot [31]. Approval from the Institutional Review Board Committee was obtained. Unused fresh-frozen specimens were removed from the freezer around 24 to 30 hours before experimentation and dissected to expose the syndesmosis and tibial plateau. An antero-lateral approach was used to expose the syndesmosis. A 3D-printed device to support the leg and apply axial loading was designed to support the heel and the tibial plateau. The neutral position of the ankle was maintained to simulate axial loading (AL). An Omegadyne load cell was set up between two parallel 12-inch fully threaded rods on the device. Each specimen was then positioned in the leg-holder and loaded with 750 N of force, simulating weight-bearing. The target of 750 N, which was modeled on several other cadaveric studies [1,4,33,34], is an estimate of the force representing the weight of an average person standing on his/her foot as the determined surface area (Fig. 1). Fig (1). Image of the setup when the cadaver leg is being introduced into the CT-scan. 3D-printed device allows axial loading on the ankle in a neutral position, simulating weight-bearing. High-ankle sprain taping, or ring taping, was done by the same senior orthopaedic resident on each cadaver. He had previously learned the proper technique with a physiotherapist specialized in sports medicine and experienced in this method. Pre-taping underwrap (thin, lightweight foam) was rolled just above the malleolus. Rigid adhesive tape, used by the physical therapy department of our institution, helped compress the tibia and the fibula towards one another by wrapping it around the syndesmosis (Fig. 2). Fig (2). Ring tape applied to one of the specimens. The tape is placed just above the malleolus, aiming to compress the tibia and the fibula. First, images of the intact specimen, without axial loading, were taken. Pure axial loading was applied and a new series of images were taken. Finally, high-ankle sprain taping was added to the specimen before a new imaging sequence in the CT scan. After this baseline data was defined, the AiTFL was sectioned. Images were taken with and without axial loading, as well as with taping and AL. The whole sequence was repeated following the sectioning of the IOL, up to 10 cm proximally. The deltoid ligament was kept intact. To evaluate the relationship between the distal tibia and fibula, a previously validated method composed of five measurements on CT scan was used [24]. 
More specifically, in this study, three-length measurements (a,b,c) and one angle (θ1) were recorded [24] (Fig. 3). The length "a" is measured as the distance between the most anterior point of the incisura and the nearest most anterior point of the fibula. [ 24 ] Length "b" is determined by measuring the distance between the most posterior point of the incisura and the nearest most posterior point of the fibula. [ 24 ] Distance "c" is defined as the distance between the tibia and the fibula in the middle of the incisura. [ 24 ] Finally, Angle 1 represents the angle between a line drawn from the anterior and posterior point of the incisura and a line drawn in the fibula representing its orientation; according to these measurements, internal rotation is represented by a negative angle. [ 24 ] This method to evaluate the syndesmosis using CT scan images was also used in a similar study on the effect of controlled ankle motion walking boot on syndesmotic instability [ 31 ]. Measurements on all CT scan slices were taken by the same senior author. Each specimen was compared, and data were analyzed using Wilcoxon tests for paired samples and non-parametric data, with SPSS statistics 25.0 (IBM). In order to control for the multiplicity of tests and considering the nature of this study (controlled experimental study), the level of significance was set at 0.01 for these analyses. RESULTS Ten paired cadaveric specimens from mid-thigh to toes were used for this study, including four males and one female. Their average age was 71 years old. All specimens underwent the full complement of testing and ligament rupture simulation. Although our specimens' mean age was 71 years, we believe these results are applicable to all individuals because no significant degenerative changes seen on CT scan were present in the ankles we used. There were no significant differences when comparing the ankles with and without AL. This remained true for all specimens with isolated AiTFL "tears" and in those with both AiTFL and IOL "tears" (p<0.01, Table 1). We also could not establish any significant effect of highankle sprain taping on syndesmotic stability, whether an isolated AiTFL or combined AiTFL and IOL ruptures were simulated, and this was true regardless of the axial loading state ( Table 1). DISCUSSION The main purpose of this study was to evaluate the effect of high-ankle sprain taping on the stability of the tibiofibular syndesmosis in different conditions of simulated axial loading and ligament injury. Our hypothesis was that high-ankle sprain taping would not modify the relationship between the distal tibia and fibula. According to our experimental values and the level of significance set to 0.01, no significant displacement of the fibula in relation to the tibia was noted, with or without the high-ankle sprain tape, in any ligament and loading combination. Therefore, it is impossible to accept or reject our hypothesis because there was no significant displacement when we compared ankles with and without AL. The data for Angle 1 has a wide range. In intact ankles without AL, the difference is approximately 13 degrees, -8.31(±6.54), and with AL and both ligaments ruptured, the mean is -7.27 (±5.00). This wide range was also noted in a study by Patel et al. where they used the same measurement method as in this study. [ 37 ] Their mean for Angle 1 in weight-bearing ankles was -12.78 (5.45) with a range between -24.00 and 6.00 degrees [37]. 
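The paired, non-parametric comparisons described in the Materials and Methods can be reproduced with a few lines of Python; the sketch below assumes the per-specimen measurements (for example, length "c" under axial loading, without and with taping) are available as paired arrays, and the values shown are invented placeholders rather than measurements from this study, with SciPy's Wilcoxon implementation standing in for SPSS.

    # Paired Wilcoxon signed-rank comparison at the 0.01 significance level,
    # as described in the Materials and Methods; the data below are placeholders.
    import numpy as np
    from scipy.stats import wilcoxon

    # Hypothetical length "c" (tibia-fibula distance, mm) for 10 specimens,
    # under axial loading, without and with high-ankle sprain taping.
    c_loaded = np.array([3.1, 2.8, 3.4, 3.0, 2.9, 3.2, 3.3, 2.7, 3.0, 3.1])
    c_loaded_taped = np.array([3.0, 2.9, 3.3, 3.0, 2.8, 3.2, 3.2, 2.8, 3.0, 3.0])

    stat, p_value = wilcoxon(c_loaded, c_loaded_taped)

    alpha = 0.01  # stricter threshold to account for the multiplicity of tests
    print(f"W = {stat:.1f}, p = {p_value:.3f}, significant: {p_value < alpha}")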
The fact that no significant change was observed during this study could be explained in many ways. First, results might have been different if an external rotation force had been applied to the custom-made weight-bearing simulation device. External rotation is known to be a common injury mechanism when the syndesmosis is involved [35]. Indeed, a study conducted by Beumer et al. showed that external rotation is the force that causes the most significant displacement of the fibula at the syndesmosis level [36]. They also found that external rotation of the ankle-foot complex led to the rotation of the fibula, causing the narrowing of the syndesmosis width [36]. This may explain why AP radiography is not a good radiologic tool to assess syndesmosis instability [36]. Consistent with our results, they did not find the frank displacement of the fibula with regard to the tibia when the foot was placed in the neutral position, with or without axial loading, when the AiTFL, the PiTFL and the anterior part of the deltoid ligament were sectioned [36]. Second, the deltoid ligament is one of the ankle's primary stabilizers [13]. It can be argued that significant displacement could have been caused by iatrogenic rupture of the deltoid ligament, as well as the full syndesmotic complex disruption. We chose not to do this because we believe that three ligament tears will have a visible diastasis and a clear surgical indication. However, two ligament injuries do not have a clear surgical indication, and therefore ankle taping may be of some benefit. Finally, cadaveric thawed tissue cannot adequately replicate in vivo conditions with live tissue. In order to analyze the capacity of high-ankle sprain taping to maintain syndesmotic congruity, baseline displacement data on CT scan with axial loading was needed. Unfortunately, there was no significant difference in any measurement or angle among specimens, with and without axial loading. Isolated time-point axial loading in neutral position seems to have no significant effect on syndesmotic anatomy with two injured ligaments. It is important to note that high-ankle sprain taping did not cause any malreduction of the distal tibiofibular joint. The same cannot be said for the CAM boot. In a previous study, Lamer et al. found that the CAM boot can increase external rotation of the fibula when AL is applied, which may interfere with proper healing [31]. In the present study, even when the syndesmosis was not intact and two ligaments were ruptured, putting on a ring tape did not change the anatomy of this joint. This might argue in favor of high-ankle sprain taping when treating certain types of syndesmotic injury. Hunt et al.'s study demonstrated that axial loading alone, without external rotation, did not impact intraarticular contact pressure or syndesmosis stability [1]. No significant displacement and rotation of the fibula or talus greater than 0.5 mm and 1 degree were found in their study during AL alone without any rotation force [1]. These findings suggest that in the clinical setting, when an incomplete syndesmotic rupture is treated conservatively, the external rotation should be avoided, with taping, a CAM boot or a cast. There is no specific standardized rehabilitation program to allow proper ligament healing [12]. Immobilization and a progressive return to function are left to the discretion of the treating doctor and the rehabilitation team. 
Normally, this process would start with rest, ice and elevation, followed by a non-weight-bearing period [12,13]. During this period, would high-ankle sprain taping be enough to restrain syndesmosis congruity even with an external rotation force applied? Further studies are needed to answer this question. The main limitation of our study stems from its cadaveric design. Ligaments were sectioned but the other soft-tissue structures (such as the capsule and the deltoid ligament, among others) were not damaged, contrary to what is frequently found in real-life injuries. This could have provided the specimens with more stability than what is normally seen. Also, the specimens were only stressed with an axial load without rotational components, which, again, does not reproduce the pattern of true ankle injuries. Furthermore, dynamic loading, modeled more closely in real-life, could have revealed differences over time. This is a potential future research avenue. Finally, in a clinical setting, the relative tension applied to the ring tape can be adjusted based on patient feedback. In cadaveric specimens, since this is not possible, there is a potential for over-tensioning. Clearly, more studies are needed to evaluate the theoretical benefits and use of highankle sprain taping. This study is, to our knowledge, the first to assess radiographically the effect of high-ankle sprain taping on the tibiofibular syndesmosis in an axial loading setting. CONCLUSION It is impossible to conclude if high-ankle sprain taping can maintain syndesmotic congruity because we found no significant difference, even with AL and two ruptured ligaments, whether taping was used or not. However, this study confirms the hypothesis that high-ankle sprain taping does not cause a malreduction of the distal tibiofibular joint when one or two ligaments are torn in neutral weight-bearing condition. These results support its use as a conservative treatment for syndesmosis injury in strict axial loading condition without any external rotation. This is the first study evaluating the effect of high-ankle sprain taping and because there are no evidencebased guidelines regarding syndesmotic injury treatment, taping could become an interesting alternative. However, further studies to evaluate its function with cyclical dynamic axial loading, as well as external rotation force, would be needed. HUMAN AND ANIMAL RIGHTS Not applicable. CONSENT FOR PUBLICATION Not applicable. AVAILABILITY OF DATA AND MATERIALS The data that support the findings of this study are available upon request from the authors.
2021-02-12T19:09:42.460Z
2020-12-31T00:00:00.000
{ "year": 2020, "sha1": "4fe5cda961f577913da77b3a27771ad33f589518", "oa_license": "CCBY", "oa_url": "https://opensportssciencesjournal.com/VOLUME/13/PAGE/123/PDF/", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4fe5cda961f577913da77b3a27771ad33f589518", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
56316765
pes2o/s2orc
v3-fos-license
Measurement and QCD analysis of double-differential inclusive jet cross-sections in pp collisions at sqrt(s) = 8 TeV and ratios to 2.76 and 7 TeV A measurement of the double-differential inclusive jet cross section as a function of the jet transverse momentum pT and the absolute jet rapidity abs(y) is presented. Data from LHC proton-proton collisions at sqrt(s) = 8 TeV, corresponding to an integrated luminosity of 19.7 inverse femtobarns, have been collected with the CMS detector. Jets are reconstructed using the anti-kT clustering algorithm with a size parameter of 0.7 in a phase space region covering jet pT from 74 GeV up to 2.5 TeV and jet absolute rapidity up to abs(y) = 3.0. The low-pT jet range between 21 and 74 GeV is also studied up to abs(y) = 4.7, using a dedicated data sample corresponding to an integrated luminosity of 5.6 inverse picobarns. The measured jet cross section is corrected for detector effects and compared with the predictions from perturbative QCD at next-to-leading order (NLO) using various sets of parton distribution functions (PDF). Cross section ratios to the corresponding measurements performed at 2.76 and 7 TeV are presented. From the measured double-differential jet cross section, the value of the strong coupling constant evaluated at the Z mass is alpha[S(M[Z]) = 0.1164 +0.0060 -0.0043, where the errors include the PDF, scale, nonperturbative effects and experimental uncertainties, using the CT10 NLO PDFs. Improved constraints on PDFs based on the inclusive jet cross section measurement are presented. Introduction Measurement of the cross sections for inclusive jet production in proton-proton collisions is an ultimate test of quantum chromodynamics (QCD).The process p + p → jet + X probes the parton-parton interaction as described in perturbative QCD (pQCD), and is sensitive to the value of the strong coupling constant, α S .Furthermore, it provides important constraints on the description of the proton structure, expressed by the parton distribution functions (PDFs). In this analysis, the double-differential inclusive jet cross section is measured at the centre-ofmass energy √ s = 8 TeV as a function of jet transverse momentum p T and absolute jet rapidity |y|.Similar measurements have been carried out at the CERN LHC by the ATLAS and CMS Collaborations at 2.76 [1, 2] and 7 TeV [3][4][5][6], and by experiments at other hadron colliders [7][8][9][10][11]. The measured inclusive jet cross section at √ s = 7 TeV is well described by pQCD calculations at next-to-leading order (NLO) at small |y|, but not at large |y|.The larger data sample at √ s = 8 TeV allows QCD to be probed with higher precision extending the investigations to yet unexplored kinematic regions.In addition, the ratios of differential cross sections at different centre-of-mass energies can be determined.In Ref. [12] an increased sensitivity of such ratios to PDFs was suggested. 
The data were collected with the CMS detector at the LHC during 2012 and correspond to an integrated luminosity of 19.7 fb −1 .The average number of multiple collisions within the same bunch crossing (known as pileup) is 21.A low-pileup data sample corresponding to an integrated luminosity of 5.6 pb −1 is collected with an average of four interactions per bunch crossing; this is used for a low-p T jet cross section measurement.The measured cross sections are corrected for detector effects and compared to the QCD prediction at NLO.The high-p T part of the differential cross section, where the sensitivity to the value of α S is maximal, is measured more accurately than before.Also, the kinematic region of small p T and large y is probed.The measured cross section is used to extract the value of the strong coupling constant at the Z boson mass scale, α S (M Z ), and to study the scale dependence of α S in a wider kinematic range than is accessible at √ s = 7 TeV.Further, the impact of the present measurements on PDFs is illustrated in a QCD analysis using the present measurements and the cross sections of deep-inelastic scattering (DIS) at HERA [13]. The CMS detector The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections.Forward calorimeters extend the pseudorapidity (η) coverage [14] provided by the barrel and endcap detectors.Muons are measured in gas-ionization detectors embedded in the steel fluxreturn yoke outside the solenoid. Jet reconstruction and event selection The high-p T jet measurement is based on data sets collected with six single-jet triggers in the HLT system that require at least one jet in the event with jet p T > 40, 80, 140, 200, 260, and 320 GeV, respectively.All triggers were prescaled during the 2012 data-taking period except the highest threshold trigger.The efficiency of each trigger is estimated using triggers with lower p T thresholds, and each is found to exceed 99% above the nominal p T threshold.The p T thresholds of each trigger and the corresponding effective integrated luminosity are listed in Table 1.The jet p T range, reconstructed in the offline analysis, where the trigger with the lowest p T threshold becomes fully efficient is also shown.This analysis includes jets with 74 < p T < 2500 GeV.Events for the low-p T jet analysis are collected with a trigger that requires at least two charged tracks reconstructed in the pixel detector in coincidence with the nominal bunch crossing time.This selection is highly efficient for finding jets ( 100%) and also rejects noncollision background.The p T range considered in the low-p T jet analysis is 21-74 GeV. 
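As a rough illustration of how the prescaled single-jet triggers can be combined into a single spectrum, the sketch below assigns each jet p_T to the highest-threshold trigger that is fully efficient there and normalises its yield by that trigger's effective integrated luminosity; this stitching rule is the usual convention and is assumed here, and both the turn-on points above 74 GeV and the luminosities are placeholder numbers, not the values of Table 1.

    # Combining prescaled single-jet trigger samples: each jet pT region is
    # filled from the highest-threshold trigger that is fully efficient there
    # and normalised by that trigger's effective integrated luminosity.
    # Turn-on points above 74 GeV and luminosities are placeholders.
    trigger_samples = [
        # (offline pT where fully efficient [GeV], effective luminosity [pb^-1])
        (74.0, 1.0e1),    # HLT jet pT > 40 path
        (130.0, 1.0e2),   # HLT jet pT > 80 path
        (220.0, 1.0e3),   # HLT jet pT > 140 path
        (330.0, 5.0e3),   # HLT jet pT > 200 path
        (430.0, 1.0e4),   # HLT jet pT > 260 path
        (510.0, 1.97e4),  # HLT jet pT > 320 path (unprescaled)
    ]

    def effective_luminosity(jet_pt):
        """Effective luminosity (pb^-1) of the trigger sample used at this jet pT,
        or None below the 74 GeV analysis threshold."""
        lumi = None
        for pt_fully_efficient, lumi_eff in trigger_samples:
            if jet_pt >= pt_fully_efficient:
                lumi = lumi_eff  # keep the highest fully efficient threshold
        return lumi

    # Each jet then enters the spectrum with weight 1 / L_eff of its sample.
    for pt in (80.0, 250.0, 600.0):
        print(f"pT = {pt:5.0f} GeV -> L_eff = {effective_luminosity(pt)} pb^-1")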
The particle-flow (PF) event algorithm reconstructs and identifies each individual particle with an optimized combination of information from the various elements of the CMS detector [16,17].Selected events are required to have at least one reconstructed interaction vertex, and the primary interaction vertex (PV) is defined as the reconstructed vertex with the largest sum of p 2 T of its constituent tracks.The PV is required to be reconstructed from at least five tracks and to lie within 24 cm in the longitudinal direction from the nominal interaction point [15], and to be consistent with the measured transverse position of the beam.The energy of photons is obtained directly from the ECAL measurement and is corrected for zero-suppression effects.The energy of electrons is determined from a combination of the electron momentum at the PV as determined by the tracker, the energy of the corresponding ECAL cluster, and the energy sum of all bremsstrahlung photons spatially compatible with originating from the electron track.The transverse momentum of muons is obtained from the curvature of the corresponding track.The energy of charged hadrons is determined from a combination of their momentum measured in the tracker and the matching ECAL and HCAL energy deposits, corrected for zero-suppression effects and for the response function of the calorimeters to hadronic showers.Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies.In the forward region, the energies are measured in the HF detector. For each event, hadronic jets are clustered from the reconstructed particles with the infrared and collinear safe anti-k T algorithm [18], as implemented in the FASTJET package [19], with a size parameter R of 0.7.Jet momentum is determined as the vector sum of the momenta of all particles in the jet, and is found from simulation to be within 5% to 10% of the true momentum over the whole p T spectrum and detector acceptance, before corrections are applied.In order to suppress the contamination from pileup, only reconstructed charged particles associated to the PV are used in jet clustering.Jet energy scale (JES) corrections are derived from simulation, by using events generated with PYTHIA6 and processed through the CMS detector simulation that is based on the GEANT 4 [20] package, and from in situ measurements by exploiting the energy balance in dijet, photon+jet, and Z+jet events [21,22].The PYTHIA6 version 4.22 [23] is used, with the Z2 * tune.The Z2 * tune is derived from the Z1 tune [24] but uses the CTEQ6L [25] parton distribtion set whereas the Z1 tune uses the CTEQ5L set.The Z2 * tune is the result of retuning the PYTHIA6 parameters PARP(82) and PARP(90) by means of the automated PRO-FESSOR tool [26], yielding PARP(82)=1.921 and PARP(90)=0.227.The JES corrections account for residual nonuniformities and nonlinearities in the detector response.An offset correction is required to account for the extra energy clustered into jets due to pileup.The JES correction, applied as a multiplicative factor to the jet four momentum vector, depends on the values of jet η and p T .For a jet with a p T of 100 GeV the typical correction is about 10%, and decreases with increasing p T .The jet energy resolution (JER) is approximately 15% at 10 GeV, 8% at 100 GeV, and 4% at 1 TeV. 
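To make the clustering step concrete, the following is a deliberately naive O(N^3) Python sketch of the anti-k_T combination rule with R = 0.7 and E-scheme (four-momentum) recombination; it is a pedagogical stand-in for the optimised FASTJET implementation actually used in the analysis, and the toy four-momenta in the example are invented.

    import math

    def _kinematics(p):
        """Transverse momentum, rapidity, and azimuth of a four-vector (px, py, pz, E)."""
        px, py, pz, e = p
        pt = math.hypot(px, py)
        y = 0.5 * math.log((e + pz) / (e - pz))
        phi = math.atan2(py, px)
        return pt, y, phi

    def antikt_cluster(particles, R=0.7, ptmin=0.0):
        """Naive anti-kT clustering with E-scheme (four-momentum) recombination."""
        pseudojets = [list(p) for p in particles]
        jets = []
        while pseudojets:
            kin = [_kinematics(p) for p in pseudojets]
            best_kind, best_idx, best_d = "beam", 0, float("inf")
            for i, (pti, yi, phii) in enumerate(kin):
                diB = 1.0 / pti ** 2  # beam distance d_iB = 1/pt_i^2
                if diB < best_d:
                    best_kind, best_idx, best_d = "beam", i, diB
                for j in range(i + 1, len(kin)):
                    ptj, yj, phij = kin[j]
                    dphi = abs(phii - phij)
                    if dphi > math.pi:
                        dphi = 2.0 * math.pi - dphi
                    # d_ij = min(1/pt_i^2, 1/pt_j^2) * deltaR_ij^2 / R^2
                    dij = min(diB, 1.0 / ptj ** 2) * ((yi - yj) ** 2 + dphi ** 2) / R ** 2
                    if dij < best_d:
                        best_kind, best_idx, best_d = "pair", (i, j), dij
            if best_kind == "beam":
                jet = pseudojets.pop(best_idx)
                if _kinematics(jet)[0] >= ptmin:
                    jets.append(jet)
            else:
                i, j = best_idx
                merged = [pseudojets[i][k] + pseudojets[j][k] for k in range(4)]
                for idx in sorted((i, j), reverse=True):
                    pseudojets.pop(idx)
                pseudojets.append(merged)
        return jets

    # Toy event: one hard particle, two soft particles, one back-to-back particle
    # (px, py, pz, E in GeV); the values are made up for illustration.
    event = [(80.0, 5.0, 20.0, 82.7), (2.0, 1.0, 0.5, 2.3),
             (1.5, -0.5, 3.0, 3.4), (-30.0, -40.0, 10.0, 51.0)]
    for jet in antikt_cluster(event, R=0.7, ptmin=10.0):
        pt, y, phi = _kinematics(jet)
        print(f"jet pt = {pt:.1f} GeV, y = {y:.2f}, phi = {phi:.2f}")

In this toy event the hard particle absorbs the nearby soft particle, reproducing the characteristic anti-k_T behaviour of clustering soft radiation around hard cores.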
The missing transverse momentum vector, p_T^miss, is defined as the projection on the plane perpendicular to the beams of the negative vector sum of the momenta of all reconstructed particles in an event. Its magnitude is referred to as E_T^miss. A requirement is made that the ratio of E_T^miss and the sum of the transverse energy of the PF particles is smaller than 0.3, which removes background events and leaves a negligible residual contamination. Additional selection criteria are applied to each event to remove spurious jet-like signatures originating from isolated noise patterns in certain HCAL regions. To suppress the noise patterns, tight identification criteria are applied: each jet should contain at least two PF particles, one of which is a charged hadron, and the jet energy fraction carried by neutral hadrons and photons should be less than 90%. These criteria have an efficiency greater than 99% for genuine jets. Events are selected that contain at least one jet with a p_T higher than the p_T threshold of the lowest-threshold trigger that recorded the event.
Measurement of the jet differential cross section
The double-differential inclusive jet cross section is defined as

d²σ / (dp_T d|y|) = (1 / (ε L_int,eff)) · N_jets / (Δp_T Δ|y|),   (1)

where N_jets is the number of jets in a kinematic interval (bin) of transverse momentum and rapidity, Δp_T and Δ|y|, respectively; L_int,eff is the effective integrated luminosity contributing to the bin; and ε is the product of the trigger and jet selection efficiencies, which is greater than 99%. The widths of the p_T bins increase with p_T and are proportional to the p_T resolution. The phase space in absolute rapidity |y| is subdivided into six bins starting from y = 0 up to |y| = 3.0 with Δ|y| = 0.5. In the low-p_T jet measurement an additional rapidity bin 3.2 < |y| < 4.7 is included. The statistical uncertainty for each bin is computed according to the number of events contributing at least one entry per bin [6], corrected for possible multiple entries per event. This correction is small, since at least 90% of the observed jets in each Δp_T and Δ|y| bin originate from different events.
In order to compare the measured cross section with theoretical predictions at particle level, the steeply falling jet p_T spectra must be corrected for experimental p_T resolution. An unfolding procedure, based on the iterative D'Agostini method [27], implemented in the ROOUNFOLD package [28], is used to correct the measured spectra for detector effects. The response matrix is created by the convolution of theoretically predicted spectra, discussed in Section 5, with the JER effects. These effects are evaluated as a function of p_T with the CMS detector simulation, after correcting for the residual differences from data [21]. The unfolding procedure induces statistical correlations among the bins. The sizes of these correlations typically vary between 10% and 20%.
The dominant contribution to the experimental systematic uncertainty in the measured cross section is from the JES corrections, determined as in Ref. [21,22]. For the high-p_T jet data set, this uncertainty is decomposed into 24 independent sources, corresponding to the different components of the corrections: pileup effects, relative calibration of JES versus η, absolute JES including p_T dependence, and differences in quark- and gluon-initiated jets. The set of components, used here, is discussed in detail in Ref. [22], and represents an evolution of the decomposition presented in Ref.
[29].The low-pileup data set uses a reduced number of components, since the pileup-related corrections are negligible, and there is no JES time dependence.Moreover, the central values of the corrections, for the components common between the two data sets, are not the same; the low-p T jet analysis uses corrections computed only on the initial part of the 2012 data sample.The impact of the uncertainty induced by each correction component on the measured cross section is evaluated separately.The JES-induced uncertainty in the cross section depends on p T and y.For the high-p T data, this ranges from 2% to 4% in the sub-TeV region at central rapidity to about 20% in the highest p T bins for rapidities 1.0 < |y| < 2.0.Due to the different set of corrections used, the low-p T jet cross section has a larger JES uncertainty than the contiguous bins of the high-p T part, and this effect becomes more pronounced as the jet rapidity increases. To account for the residual effects of small inefficiencies of less than 1% in the trigger performances and jet identification, an uncertainty of 1%, uncorrelated across all jet p T and y bins, is assigned to each bin. The unfolding procedure is affected by the uncertainties in the JER parameterization, which are derived from the simulation.The JER parameters are varied by one standard deviation up and down, and the corresponding response matrices are used to unfold the measured spectra. The JER-induced uncertainty amounts to 1-5% in the high-p T jet region, but can exceed 30% in the low-p T jet region. The uncertainties in the integrated luminosity, which propagate directly to the cross section, are 2.6% [30] and 4.4% [31] for normal and low-pileup data samples, respectively.Other sources of uncertainty, such as the jet angular resolution and the model dependence of the unfolding, arise from the theoretical p T spectrum used to calculate the response matrix and have less than 1% effect on the cross section.The total experimental systematic uncertainty in the measured cross section is obtained as a quadratic sum of contributions due to uncertainties in JES, JER, and integrated luminosity. Theoretical predictions Theoretical predictions for the jet cross section are known at NLO accuracy in pQCD [32,33], and the NLO electroweak corrections have been computed in Ref. [34].The pQCD NLO calculations are performed by using the NLOJET++ (version 4.1.3)program [32,33] as implemented in the FASTNLO (version 2.1) package [35].The renormalization (µ R ) and factorization (µ F ) scales are both set to the leading jet p T .The calculations are performed by using six PDF sets determined at NLO: CT10 [36], MSTW2008 [37], NNPDF2.1 [38], NNPDF3.0 [39], HER-APDF1.5 [40], and ABM11 [41].Each PDF set is available for a range of α S (M Z ) values.The number of active (massless) flavours chosen in NLOJET++ is five in all of the PDF sets except NNPDF2.1,where it is set to six.All the PDF sets use a variable flavour number scheme, except ABM11, which uses a fixed flavour number scheme.The basic characteristics of each PDF set are summarized in Table 2.The parton-level calculation at NLO has to be supplemented with corrections due to nonperturbative (NP) effects, i.e. 
hadronization and multiparton interactions (MPI).The nonperturbative effects are estimated using both leading order (LO) and NLO event generators.In the former case, the correction is evaluated by averaging those provided by PYTHIA6 [23] (version 4.26), using tune Z2 * , and HERWIG++ (version 2.4.2) [42], using tune UE [43].The size of these corrections ranges from 20% at low p T to 1% at the highest p T of 2.5 TeV.The NLO nonperturbative correction is derived using POWHEG [44][45][46][47], interfaced with PYTHIA6 for parton shower, MPI, and hadronization.The nonperturbative correction factors are derived in this case by averaging the results for two different tunes of PYTHIA6, Z2 * and P11 [48].Hadronization models have been tuned by using LO calculations for the hard scattering, and applying these tunes to NLO-based calculations is not expected to provide optimal results.On the other hand, the application of nonperturbative corrections based on LO calculations to NLO predictions implicitly assumes that the behaviour of nonperturbative effects is independent of the hard scattering description.To take into account both facts, the final number used for the nonperturbative correction, C NP , is an arithmetic average of the LO-and NLO-based estimates.Half the width of the envelope of these predictions is used as the uncertainty due to the nonperturbative correction.Figure 1 shows the nonperturbative correction factors derived by combining both LOand NLO-based calculations. (GeV) The uncertainty in the NLO pQCD calculation arising from missing higher-order corrections is estimated by varying the renormalization and factorization scales in the following six combinations of scale factors: (µ R /µ, µ F /µ) = (0.5, 0.5), (2,2), (1, 0.5), (1, 2), (0.5, 1), (2, 1), where µ is the default choice equal to the jet p T , and considering the largest variation in the prediction as the uncertainty.The uncertainty related to the choice of scale ranges from 5% to 10% for |y| < 1.5 and increases to 40% for the outer |y| bins and for high p T .The PDF uncertainties are estimated following the prescription from each PDF group by using the provided eigenvectors (or replicas in case of NNPDF).The corresponding uncertainty in the predicted cross section varies from 5% to 30% in the entire p T range for |y| ≤ 1.5.Beyond |y| = 1.5, in the outer rapidity region, these uncertainties become as large as 50% at high p T and even increase up to 100% for the CT10 and HERAPDF1.5 sets.The nonperturbative correction induces an additional uncertainty, which is estimated in the central rapidity bin to range between 1.4% at p T ∼ 100 GeV to 0.06% at ∼2.5 TeV.Overall, the PDF uncertainty is dominant. Electroweak effects, which arise from the virtual exchange of the massive W and Z gauge bosons, induce corrections with magnitudes given by the Sudakov logarithmic factor , where α W is the weak coupling constant, M W is the mass of the W boson, and Q is the energy scale of the interaction.For high-p T jets, the values of the logarithm, and therefore the correction, become large.The derivation of the electroweak correction factor, applied to the NLO pQCD spectrum corrected for nonperturbative effects, is provided in Ref. [34]. Figure 2 shows the electroweak correction for the two extreme rapidity regions as a function of jet p T .In the most central rapidity bin for the high-p T region, the correction factor is as large as 14%.Electroweak corrections are not applied to the low-p T results, where they are negligible. 
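The six-point scale variation described above amounts to recomputing the prediction at each (µ_R, µ_F) combination and taking an envelope; the short sketch below shows that bookkeeping for a single (p_T, |y|) bin, with invented cross sections standing in for the NLOJET++ output and an asymmetric up/down envelope assumed as the uncertainty convention.

    # Six-point renormalisation/factorisation scale variation for one bin;
    # the cross sections (pb) are invented placeholder numbers.
    nominal = 12.40  # prediction at muR = muF = jet pT

    scale_variations = {
        (0.5, 0.5): 12.95,
        (2.0, 2.0): 11.90,
        (1.0, 0.5): 12.55,
        (1.0, 2.0): 12.20,
        (0.5, 1.0): 12.80,
        (2.0, 1.0): 12.05,
    }

    # Largest upward and downward deviations among the six variations.
    up = max(sigma - nominal for sigma in scale_variations.values())
    down = min(sigma - nominal for sigma in scale_variations.values())
    print(f"scale uncertainty: +{100 * up / nominal:.1f}% / {100 * down / nominal:.1f}%")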
Comparison of theory and data The measured double-differential cross sections for inclusive jet production are shown in Fig. 3 as a function of p T in the various |y| ranges after unfolding the detector effects.This measurement is compared with the theoretical prediction discussed in Section 5 using the CT10 PDF set.The ratios of the data to the theoretical predictions in the various |y| ranges are shown for the CT10 PDF set in Fig. 4. Good agreement is observed for the entire kinematic range with some exceptions in the low-p T region. [GeV] T Jet p in Section 8.The values for χ 2 for the comparison between data and theory based on different PDF sets for the high-p T region are summarized in Table 3.In most cases the theoretical predictions agree with the measurements.The exception is the ABM11 PDF set, where significant discrepancies are visible.Significant differences between the theoretical predictions obtained by using different PDF sets are observed in the high-p T range. The predictions based on CT10 PDF show the best agreement with data, quantified by the lowest χ 2 for most rapidity ranges, while predictions using MSTW, ABM11, and HERAPDF1.5 exhibit differences compared to data and to the prediction based on CT10, exceeding 100% in the highest p T range. In the transition between the low-and high-p T jet regions, some discontinuity can be observed in the measured values, although they are generally compatible within the total experimental uncertainties.The highest p T bins of the low-p T jet range suffer from a reduced sample size, and therefore have a statistical uncertainty significantly larger than the first bin of the high-p T jet region.The JES corrections for the low-and high-p T regions are different, in particular in the p Tdependent components, and this also contributes to the observed fluctuations in the matching region.The corresponding uncertainties are treated as uncorrelated between the low-and highp T regions.The overall estimated systematic uncertainties account for these residual effects.The transition region between the low-and high-p T jet measurements has limited sensitivity to α S and no impact in constraining PDFs, since it probes the x-range where the PDFs are well constrained by more precise DIS data. Ratios of cross sections measured at different √ s values Ratios of cross sections measured at different energies may show a better sensitivity to PDFs than cross sections at a single energy, provided that the contributions to the theoretical and experimental uncertainties from sources other than the PDFs themselves are reduced.A calculation of the ratio of cross sections measured at 7 and 8 TeV presented in Ref. [12], for instance, suggests a larger sensitivity to PDFs in the jet p T range between 1 and 2 TeV.Therefore, it is interesting to study such cross section ratios. 
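The cross-section ratios introduced here, and detailed in the next paragraph, reduce to a bin-by-bin division with linear error propagation in which statistical uncertainties at the two energies are uncorrelated while shared systematic sources partially cancel; the sketch below illustrates that propagation for one bin with invented numbers and a single fully correlated JES-like component, whereas the real analysis propagates every JES source, the JER, and the luminosity terms separately.

    import math

    # Placeholder cross sections (pb/GeV) in one (pT, |y|) bin at the two energies.
    sigma_8, sigma_7 = 4.20, 2.90

    # Relative uncertainties: statistical (uncorrelated between energies) and a
    # single JES-like systematic assumed fully correlated between the energies.
    stat_8, stat_7 = 0.020, 0.035
    jes_8, jes_7 = 0.045, 0.050

    ratio = sigma_8 / sigma_7

    # Uncorrelated parts add in quadrature; the correlated part enters as the
    # difference of the relative shifts, so it largely cancels in the ratio.
    rel_stat = math.hypot(stat_8, stat_7)
    rel_jes = abs(jes_8 - jes_7)
    rel_total = math.hypot(rel_stat, rel_jes)

    print(f"R(8/7 TeV) = {ratio:.3f} +/- {ratio * rel_total:.3f} "
          f"(stat {100 * rel_stat:.1f}%, corr. JES {100 * rel_jes:.1f}%)")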
Differential cross sections for inclusive jet production have been measured by the CMS Collaboration at √s = 2.76 [2] and 7 TeV [6]. Ratios are computed of the double-differential cross section presented in this paper at 8 TeV to the corresponding measurements at different energies. For p T > 74 GeV, the choice of jet p T and rapidity bins is identical for the various measurements, thus allowing an easy computation of the ratio. Only the high-p T jet data set at 8 TeV is used, since no counterpart of the low-p T jet analysis is available for the other centre-of-mass energies.

As a result of partial cancellation of the systematic uncertainties, the relative precision of the ratios is improved compared with the cross section. Experimental correlations between the measurements at different centre-of-mass energies are taken into account in the computation of the total experimental uncertainty. As a consequence of the unfolding procedure, the results of the cross section measurements at each energy are statistically correlated between different bins, while the measurements at different energies are not statistically correlated with each other. The statistical uncertainties in the ratio measurement are calculated by using linear error propagation, taking into account the bin-to-bin correlations in the unfolded data. Correlations between the components of the jet energy corrections at different energies are included, as well as correlations in JER. Uncertainties related to the determination of luminosity are assumed to be uncorrelated.

The theoretical uncertainties are approached in a similar manner: the uncertainties in nonperturbative corrections, PDFs, and those arising due to scale variations are assumed to be fully correlated.

The ratios of the cross sections measured at √s = 7 and 8 TeV are shown in Figs. 6-7 for the various rapidity bins and they are compared with theoretical predictions obtained using different PDF sets. A general agreement between data and theoretical predictions is observed. Some discrepancies are visible at high p T, in particular in the 1.0 < |y| < 1.5 range. In the cross section ratio the central values of the predictions are not strongly influenced by the choice of the PDFs. However, the uncertainty is mostly dominated by PDF uncertainties, which are represented here for CT10. The experimental uncertainty in the ratio is considerably larger than the theoretical uncertainty. Consequently, no significant constraints on PDFs can be expected from the inclusive jet cross section ratio of 7 to 8 TeV.
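The error propagation for such a ratio can be written compactly; the following is a minimal illustrative sketch under simplified assumptions (diagonal statistical covariances, a single fully correlated JES-like source and an uncorrelated luminosity term), with all numbers invented for the example.

```python
import numpy as np

# Illustrative unfolded cross sections (pb) and statistical covariance matrices for matching
# pT bins at 8 and 7 TeV; the two centre-of-mass energies are statistically independent.
sigma_8 = np.array([100.0, 10.0, 1.0])
sigma_7 = np.array([ 80.0,  7.0, 0.6])
cov_8 = np.diag([4.0, 0.09, 0.004])   # bin-to-bin correlations would appear off-diagonal
cov_7 = np.diag([3.0, 0.06, 0.002])

ratio = sigma_8 / sigma_7

# Linear error propagation: Jacobians of the ratio with respect to each cross section.
J_8 = np.diag(1.0 / sigma_7)
J_7 = np.diag(-sigma_8 / sigma_7**2)
cov_ratio = J_8 @ cov_8 @ J_8.T + J_7 @ cov_7 @ J_7.T
stat_unc = np.sqrt(np.diag(cov_ratio))

# Fully correlated systematic cancels partially (difference of relative shifts);
# uncorrelated luminosity uncertainties add in quadrature.
rel_jes_8, rel_jes_7   = 0.020, 0.025      # assumed relative JES uncertainties
rel_lumi_8, rel_lumi_7 = 0.026, 0.022      # assumed luminosity uncertainties
rel_jes_ratio  = abs(rel_jes_8 - rel_jes_7)
rel_lumi_ratio = np.hypot(rel_lumi_8, rel_lumi_7)
print(ratio, stat_unc, rel_jes_ratio, rel_lumi_ratio)
```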
Determination of α S

Measurements of jet production at hadron colliders can be used to determine the strong coupling constant α S, as has been done previously with the CMS 7 TeV inclusive jet measurement [29], and with Tevatron measurements [49-51]. The procedure to extract α S in Ref. [29] is adopted here. Only the high-p T jet data are used, since the sensitivity of the α S predictions increases with jet p T. The determination of α S is performed by minimizing the χ 2 between the data and the theory prediction. The NLO theory prediction, corrected for nonperturbative and electroweak effects, is used. At NLO, the dependence of the differential inclusive jet production cross section dσ/dp T on α S is given by

dσ/dp T = α S²(µ R) X^(0)(µ F, p T) [1 + α S(µ R) K 1(µ R, µ F, p T)],

where α S is the strong coupling, X^(0)(µ F, p T) represents the LO contribution to the cross section and K 1(µ R, µ F, p T) is an NLO correction term. A comparison with the measured spectrum gives an estimate of the input value of α S for which the cross section, predicted from theory, has the best agreement with data.

The extraction of α S is performed by a least squares minimization of the function

χ² = (D − T(α S(M Z)))^T C^{−1} (D − T(α S(M Z))),    (3)

where D is the array of measured values of the double-differential inclusive jet cross section for the different bins in p T and |y|, T(α S (M Z)) is the corresponding set of theoretical cross sections for a given value of α S (M Z), and C is the covariance matrix including all the experimental and theoretical uncertainties involved in the measurement. The total covariance matrix C is built from the individual components as follows:

C = C stat + C unfolding + C JES + C uncor + C lumi + C PDF + C NP,    (4)

where:
• C stat is the statistical covariance matrix, taking into account the correlation between different p T bins of the same rapidity range due to unfolding. Different rapidity ranges are considered as uncorrelated among themselves;
• C unfolding includes the uncertainty induced by the JER parameterization in the unfolding procedure;
• C JES includes the uncertainty due to JES uncertainties, obtained as the sum of 24 independent matrices, one for each source of uncertainty;
• C uncor includes all uncorrelated systematic uncertainties such as trigger and jet identification inefficiencies, and time dependence of the jet p T resolution;
• C lumi includes the 2.6% luminosity uncertainty;
• C PDF is related to uncertainties in the PDF used in the theoretical prediction;
• C NP includes the uncertainty due to nonperturbative corrections in the theoretical prediction.

The unfolding, JES, lumi, PDF, and NP systematic uncertainties are considered as 100% correlated among all p T and |y| bins.

The extraction of α S uses the CT10 NLO PDF set in the theoretical calculation, since it provides the best agreement with measured cross sections, as shown in Section 6. This PDF set provides variants corresponding to 16 different α S (M Z) values in the range 0.112-0.127 in steps of 0.001. The sensitivity of the theory prediction to the α S choice in the PDF is illustrated in Fig. 11.

The χ 2 in Eq. (3) is computed, combining all p T and |y| intervals, for each of the variants corresponding to a different α S value, as shown in Fig. 12. The variation of χ 2 with α S is fitted with a fourth-order polynomial, and the minimum (χ 2 min) corresponds to the best α S (M Z) value. Uncertainties are determined using the ∆χ 2 = 1 criterion. The individual contribution from each uncertainty source listed in Eq. (4) is estimated as the quadratic difference between the main result and the result of an alternative fit, in which that particular source is left out of the covariance matrix definition.
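The χ² scan over the PDF variants can be written compactly. The following is a minimal illustrative sketch (array shapes, bin contents and covariance components are assumptions, not the analysis code) combining the components of Eq. (4) and evaluating Eq. (3) for each α S(M Z) variant.

```python
import numpy as np

def total_covariance(components):
    """Sum the individual covariance components (Eq. 4)."""
    return sum(components)

def chi2(data, theory, cov):
    """Least squares function of Eq. (3)."""
    residual = data - theory
    return residual @ np.linalg.solve(cov, residual)

# Illustrative inputs: 3 measured bins, theory predictions for a few alpha_S(MZ) variants.
data = np.array([105.0, 11.2, 0.95])
theory_variants = {
    0.116: np.array([101.0, 10.6, 0.90]),
    0.118: np.array([104.0, 11.0, 0.94]),
    0.120: np.array([107.0, 11.4, 0.98]),
}

# Assumed covariance components: statistical, one fully correlated JES-like source, luminosity.
c_stat = np.diag([4.0, 0.09, 0.004])
c_jes  = np.outer([2.0, 0.2, 0.02], [2.0, 0.2, 0.02])
c_lumi = 0.026**2 * np.outer(data, data)          # 2.6% fully correlated
cov = total_covariance([c_stat, c_jes, c_lumi])

chi2_scan = {a: chi2(data, t, cov) for a, t in theory_variants.items()}
print(chi2_scan)
```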
The uncertainties due to the choice of the renormalization and factorization scales are evaluated by variations of the default µ R, µ F values, set to jet p T, in the following six combinations: (µ R /p T, µ F /p T) = (0.5, 0.5), (0.5, 1), (1, 0.5), (1, 2), (2, 1), and (2, 2). The χ 2 minimization with respect to α S (M Z) is repeated in each case, and the maximal upwards and downwards deviations of α S (M Z) from the central result are taken as the corresponding uncertainties.

Figure 12: The χ 2 minimization with respect to α S (M Z) by using the CT10 NLO PDF set and data from all rapidity bins. The uncertainty is obtained from the α S (M Z) values for which χ 2 is increased by one with respect to the minimum value, indicated by the box. The curve corresponds to a fourth-degree polynomial fit through the available χ 2 points.

In Table 4, the fitted values of α S are presented for each rapidity bin, separately, and for the whole range. The contribution to the uncertainty due to each individual source is also given, together with the best χ 2 min value for each separate fit. The largest source of uncertainty in the determination of α S is due to the choice of renormalization and factorization scales, pointing to the need for including higher-order corrections in the theoretical calculations.

Alternatively, the value of α S (M Z) is also determined using the NNPDF3.0 NLO PDF, resulting in α S (M Z) = 0.1172 +0.0083 −0.0075. These values of α S (M Z) are compatible with the current world average α S (M Z) = 0.1181 ± 0.0011 [52].

The value of α S depends on the scale Q at which it is evaluated, decreasing as Q increases. The measured p T interval 74-2500 GeV is divided into nine different ranges as shown in the first column in Table 5, and α S (M Z) is determined for each of them. The Q scale corresponding to each p T range is evaluated as the cross section weighted average p T for that range. The extracted α S (M Z) values are evolved to the Q scale corresponding to the range, using the 2-loop 5-flavour renormalization group (RG) evolution equation, resulting in the α S (Q) values listed in Table 5. The same RG equation is used to obtain the corresponding uncertainties. The contributions to both the experimental and theoretical uncertainties are shown in Table 6.
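As an illustration of the fit just described, a short sketch (with made-up χ² points) of the fourth-degree polynomial fit and the ∆χ² = 1 uncertainty extraction could look as follows.

```python
import numpy as np

# Hypothetical chi2 values from the scan over the CT10 alpha_S(MZ) variants.
alpha_s = np.arange(0.112, 0.128, 0.001)
chi2    = 210.0 + ((alpha_s - 0.1164) / 0.003) ** 2      # toy scan shape

# Fourth-degree polynomial fit through the available chi2 points.
poly = np.poly1d(np.polyfit(alpha_s, chi2, deg=4))

# Locate the minimum on a fine grid and the Delta(chi2) = 1 crossings.
grid   = np.linspace(alpha_s.min(), alpha_s.max(), 100001)
values = poly(grid)
i_min  = np.argmin(values)
best, chi2_min = grid[i_min], values[i_min]
inside = values <= chi2_min + 1.0
lo, hi = grid[inside].min(), grid[inside].max()
print(f"alpha_S(MZ) = {best:.4f} +{hi - best:.4f} -{best - lo:.4f}, chi2_min = {chi2_min:.1f}")
```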
A comparison of these results with those from the CMS [53-55], D0 [49,50], H1 [56], and ZEUS [57] experiments is shown in Fig. 13. The present measurement is in very good agreement with results obtained by previous experiments. The present analysis constrains the α S (Q) running for Q between 86 GeV and 1.5 TeV, which is the highest scale at which α S has been measured to date.

Table 6: Composition of the uncertainty in the α S (M Z) fit results in ranges of p T. For each range, the corresponding statistical and experimental systematic uncertainties and the components of the theoretical uncertainty are shown. The numbers are obtained by using the CT10 NLO PDF set.

The QCD analysis of the inclusive jet measurements

The CMS inclusive jet measurements at √s = 7 TeV probe the gluon and valence-quark distributions in the kinematic range x > 0.01 [29]. In this paper, we use the inclusive jet cross section measurements at √s = 8 TeV for p T > 74 GeV in a QCD analysis at NLO together with the combined measurements of neutral- and charged-current cross sections of deep inelastic electron (positron)-proton scattering at HERA [13]. The correlations of the experimental uncertainties for the jet measurements and DIS cross sections are taken into account. The DIS measurements and the CMS jet cross section data are treated as uncorrelated. The theoretical predictions for the cross sections of jet production are calculated at NLO by using the NLOJET++ program [32,33] as implemented in the FASTNLO package [35]. The open-source QCD fit framework for PDF determination HERAFitter [58,59], version 1.1.1, is used with the parton distributions evolved by using the DGLAP equations [60-65] at NLO, as implemented in the QCDNUM program [66].

The Thorne-Roberts general mass variable flavour number scheme at NLO [37,67] is used for the treatment of the heavy-quark contributions, with the heavy-quark masses m c = 1.47 GeV and m b = 4.5 GeV. The renormalization and factorization scales are set to Q, which denotes the four-momentum transfer in case of the DIS data and the jet p T in case of the CMS jet cross sections.

The strong coupling constant is set to α S (M Z) = 0.118, as in the HERAPDF2.0 analysis [13] and following the global PDF analyses, for example, in Ref. [39]. The Q 2 range of HERA data is restricted to Q 2 ≥ Q 2 min = 7.5 GeV 2. The procedure for the determination of the PDFs follows the approach used in the previous QCD analysis [29], with the jet cross section measurements at √s = 7 TeV replaced by those at 8 TeV. At the initial scale of the QCD evolution Q 2 0 = 1.9 GeV 2, the parton distributions are represented by the parameterizations of Eqs. (5)-(9). The normalization parameters A u v, A d v, A g are determined by the QCD sum rules; the B parameter is responsible for the small-x behavior of the PDFs; and the parameter C describes the shape of the distribution as x → 1. A flexible form for the gluon distribution is adopted here, where the (fixed) choice of C g = 25 is motivated by the approach of the MSTW group [37,67]. Additional constraints B U = B D and A U = A D (1 − f s) are imposed, with f s being the strangeness fraction, f s = s/(d + s), fixed to f s = 0.31 ± 0.08, as in Ref.
[37], consistent with the determination of the strangeness fraction made by using the CMS measurements of W+charm production [68].Additional D and E parameters allow probing the sensitivity of results on the specific selected functional form.The parameters in Eqs.( 5)-( 9) are selected by first fitting with all D and E parameters set to zero.The other parameters are then included in the fit one at a time.The improvement in χ 2 of the fits is monitored and the procedure is stopped when no further improvement is observed.This leads to an 18-parameter fit. The PDF uncertainties are estimated in a way similar to the earlier CMS analyses [29,68] according to the general approach of HERAPDF1.0[40] in which experimental, model, and parameterization uncertainties are taken into account.The experimental uncertainties originate from the measurements included in the analysis and are determined by using the Hessian [69] method, applying a tolerance criterion of ∆χ 2 = 1.Alternatively, the Monte Carlo method [70,71] to determine the PDF uncertainties is used. Model uncertainties arise from variations in the values assumed for the charm and bottom quark masses m c and m b , with 1.41 ≤ m c ≤ 1.53 GeV and 4.25 ≤ m b ≤ 4.75 GeV, following Ref.[13], and the value of Q 2 min imposed on the HERA data, which is varied within the interval 5.0 ≤ Q 2 min ≤ 10.0 GeV 2 .The strangeness fraction f s is varied by its uncertainty.The parameterization uncertainty is estimated by extending the functional form of all PDFs with additional parameters.The uncertainty is constructed as an envelope built from the maximal differences between the PDFs resulting from all the parameterization variations and the central fit at each x value. The total PDF uncertainty is obtained by adding experimental, model, and parameterization uncertainties in quadrature.In the following, the quoted uncertainties correspond to 68% confidence level.The global and partial χ 2 values for each data set are listed in Table 7, where the χ 2 values illustrate a general agreement among all the data sets.The somewhat high χ 2 /N dof values for the combined DIS data are very similar to those observed in Ref. [13], where they are investigated in detail.Together with HERA DIS cross section data, the inclusive jet measurements provide important constraints on the gluon and valence-quark distributions in the kinematic range studied.These constraints are illustrated in Figs. 14 and 15, where the distributions of the gluon and valence quarks are shown at the scales of Q 2 = 1.9 and 10 5 GeV 2 , respectively.The results obtained using the Monte Carlo method to determine the PDF uncertainties are consistent with those obtained with the Hessian method.The uncertainties for the gluon distribution, as estimated by using the HERAPDF method for HERA-only and HERA+CMS jet analyses, are shown in Fig. 16.The parameterization uncertainty is significantly reduced once the CMS jet measurements are included. 
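To illustrate how the experimental, model, and parameterization components described above could be combined into a total PDF uncertainty, here is a minimal sketch; the per-x-point numbers, the set of variations and the quadrature convention for the model shifts are illustrative assumptions.

```python
import numpy as np

x_points = np.array([1e-3, 1e-2, 1e-1])

# Assumed relative experimental uncertainty of the gluon PDF (Hessian, Delta(chi2) = 1).
exp_unc = np.array([0.03, 0.02, 0.05])

# Model variations: shifts of the central fit when m_c, m_b, Q2_min and f_s are varied.
model_shifts = np.array([
    [ 0.010,  0.005,  0.02],   # m_c varied
    [-0.008, -0.004, -0.01],   # m_b varied
    [ 0.012,  0.006,  0.03],   # Q2_min varied
    [ 0.005,  0.002,  0.01],   # f_s varied
])
model_unc = np.sqrt(np.sum(model_shifts**2, axis=0))

# Parameterization uncertainty: envelope of the maximal differences between the central
# fit and fits with extended functional forms (additional D and E parameters).
param_shifts = np.array([
    [ 0.020,  0.010,  0.04],
    [-0.015, -0.012, -0.02],
])
param_unc = np.max(np.abs(param_shifts), axis=0)

# Total PDF uncertainty: experimental, model, and parameterization added in quadrature.
total_unc = np.sqrt(exp_unc**2 + model_unc**2 + param_unc**2)
print(dict(zip(x_points, total_unc)))
```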
The same QCD analysis has been performed using both the low- and high-p T measurements of the jet cross sections at 8 TeV and including the systematic correlations of the two CMS data sets. The PDFs obtained with the addition of the low-p T jet cross sections are consistent with those from the high-p T jet cross sections alone; the low-p T jet cross sections do not, however, improve the PDF uncertainties significantly.

Figure 14: Gluon (left), u-valence quark (middle), and d-valence quark (right) distributions as functions of x at the starting scale Q 2 = 1.9 GeV 2. The results of the fit to the HERA data and inclusive jet measurements at 8 TeV (shaded band), and to HERA data only (hatched band) are compared with their total uncertainties as determined by using the HERAPDF method. In the bottom panels the fractional uncertainties are shown.

The gluon PDFs obtained from the 8 TeV jet cross sections are compared to those from the 7 TeV cross sections [29] in Fig. 17. The results are very similar.

The extraction of the PDFs from the jet cross sections depends on the value of α S. Consequently, the PDF fits are repeated taking α S to be a free parameter. In this way, the PDFs and the strong coupling constant are determined simultaneously, diminishing the correlation between the gluon PDF and α S. The experimental, model, and parameterization uncertainties of α S (M Z) are obtained in a manner similar to the procedure for determining uncertainties of the PDFs. The uncertainty due to missing higher-order corrections in the theoretical predictions for jet production cross sections is estimated by varying the renormalization and factorization scales. The scales are varied independently by a factor of two with respect to the default choice of µ R and µ F equal to the p T of the jet, and the combined fit of PDFs and α S (M Z) is repeated for each variation of the scale choice in the following six combinations: (µ R /p T, µ F /p T) = (0.5, 0.5), (0.5, 1), (1, 0.5), (1, 2), (2, 1), and (2, 2). The scale for the HERA DIS data is not changed. The maximal observed upward and downward changes of α S (M Z) with respect to the default are then taken as the scale uncertainty. The strong coupling constant is α S (M Z) = 0.1185 +0.0019 −0.0021 (exp) +0.0002 −0.0015 (model) +0.0000 −0.0004 (param) +0.0022 −0.0018 (scale). Within the uncertainties, this value is consistent with the one determined in Section 8 and is an important cross-check of the α S (M Z) obtained by using the fixed PDF. The scale uncertainties in α S (M Z) obtained simultaneously with the PDFs are smaller due to the consistent treatment of the scales in the PDFs and the theory prediction for the jet cross sections in the simultaneous fit. The evaluation of scale uncertainties is an open issue that is ignored in all global PDF fits to date. There is no recommended procedure for the determination of the scale uncertainties in combined fits of PDFs and α S (M Z).

Figure 17: Gluon (left) and d-valence quark (right) distributions as functions of x at the starting scale Q 2 = 1.9 GeV 2. The results of the 13-parameter fit [29] to the subset [40] of the combined HERA data and inclusive jet measurements at 7 TeV (hatched band), and, alternatively, 8 TeV (shaded band) are compared with their total uncertainties, as determined by using the HERAPDF method. In the bottom panels the fractional uncertainties are shown.
Summary

A measurement of the double-differential inclusive jet cross section has been presented that uses data from proton-proton collisions at √s = 8 TeV collected with the CMS detector and corresponding to an integrated luminosity of 19.7 fb −1. The result is presented as a function of jet transverse momentum p T and absolute rapidity |y| and covers a large range in jet p T from 74 GeV up to 2.5 TeV, in six rapidity bins up to |y| = 3.0. The region of low jet p T, in particular the range from 21 to 74 GeV, has also been studied up to |y| = 4.7, using a dedicated low-pileup 5.6 pb −1 data sample. The ratios to the cross sections measured at 2.76 and 7 TeV have also been determined.

Detailed studies of experimental and theoretical sources of uncertainty have been carried out. The dominant sources of experimental systematic uncertainty are due to the jet energy scale, unfolding, and the integrated luminosity measurement. These lead to uncertainties of 5-45% in the differential cross section measurement. The theoretical predictions are most affected by PDF uncertainties, and their range is strongly dependent on the p T and rapidity interval; at low p T they are about 7%, but their size increases up to 40% in the most central intervals and exceeds 200% in the outermost regions. Many uncertainties cancel in the ratio with the corresponding results at 2.76 and 7 TeV, leading to uncertainties ranging from 5% to 25%, both for the measurement and for the theoretical predictions. Perturbative QCD, supplemented by small nonperturbative and electroweak corrections, describes the data over a wide range of jet p T and y.

The strong coupling constant is extracted from the high-p T jet cross section measurements using the probed p T range and six different rapidity bins. The best fitted value is α S (M Z) = 0.1164 +0.0060 −0.0043 using the CT10 NLO PDF set. The running of the strong coupling constant as a function of the energy scale Q, α S (Q), measured for nine different values of the energy scale between 86 GeV and 1.5 TeV, is in good agreement with previous experiments and extends the measurement to the highest values of the energy scale. This measurement of the double-differential jet cross section probes the hadronic parton-parton interaction over a wide range of x and Q. The QCD analysis of these data together with HERA DIS measurements illustrates the potential of the high-p T jet cross sections to provide important constraints on the gluon PDF in a new kinematic regime.

Figure 1: The nonperturbative correction factor shown for the central (left) and outermost (right) absolute rapidity bins as a function of jet p T. The correction is obtained by averaging LO- and NLO-based predictions, and the envelope of these predictions is used as the uncertainty band.

Figure 2: Electroweak correction factor for the central (left) and outermost (right) rapidity bins as a function of jet p T.

Figure 3: Double-differential inclusive jet cross sections as a function of jet p T. Data (open points for the low-p T analysis, filled points for the high-p T one) and NLO predictions based on the CT10 PDF set corrected for the nonperturbative factor for the low-p T data (solid line) and the nonperturbative and electroweak correction factors for the high-p T data (dashed line). The comparison is carried out for six different |y| bins at an interval of ∆|y| = 0.5.
Figure 5 presents the ratios of the measurements and a number of theoretical predictions based on alternative PDF sets to the CT10-based prediction. A χ 2 value is computed based on the measurements, their covariance matrices, and the theoretical predictions, as described in detail in Section 8.

Figure 5: Ratios of data and alternative predictions to the theory prediction using the CT10 PDF set. For comparison, predictions employing five other PDF sets are shown in addition to the total experimental systematic uncertainties (band enclosed by full lines). The error bars correspond to the statistical uncertainty in the data.

Figure 6: The ratios (top panels) of the inclusive jet production cross sections at √s = 7 and 8 TeV, shown as a function of jet p T for the absolute rapidity |y| < 0.5 (left) and 0.5 < |y| < 1.0 (right). The data (closed symbols) are shown with total uncertainties (vertical error bars). The NLO pQCD prediction using the CT10 PDF is shown with its total uncertainty (shaded band) and the contribution of the PDF uncertainty (hatched band). Predictions obtained using alternative PDF sets are shown by lines of different styles without uncertainties. The data to theory ratios (bottom panels) are shown by using the same notations for the respective rapidities. The last bin for the |y| < 0.5 region is wider than the others in order to reduce the statistical uncertainty.

Figure 7: The ratios of the inclusive jet production cross sections at √s = 7 and 8 TeV shown as a function of jet p T for the absolute rapidity 1.0 < |y| < 1.5 (top left), 1.5 < |y| < 2.0 (top right) and 2.0 < |y| < 2.5 (bottom).

The ratios of the cross sections measured at 2.76 TeV to those measured at 8 TeV are determined in a similar way. Results are presented in Figs. 8-10, and compared to theoretical predictions that use different PDF sets. In general, the predictions describe the data well. The central value of the theoretical prediction and its uncertainty are completely dominated by the choice of and the uncertainty in the PDFs, demonstrating the strong sensitivity of the 2.76 to 8 TeV cross section ratio to the description of the proton structure.

Figure 8: The ratios (top panels) of the inclusive jet production cross sections at √s = 2.76 and 8 TeV are shown as a function of jet p T for the absolute rapidity range |y| < 0.5 (left) and 0.5 < |y| < 1.0 (right). The data (closed symbols) are shown with their statistical (inner error bar) and total (outer error bar) uncertainties. For comparison, the NLO pQCD prediction by using the CT10 PDF is shown with its total uncertainty (light shaded band), while the contribution of the PDF uncertainty is presented by the hatched band. Predictions that use alternative PDF sets are shown by lines of different styles without uncertainties. The data to theory ratios (bottom panels) are shown using the same notations for the respective absolute rapidity ranges.

Figure 9: The ratios of the inclusive jet production cross sections at √s = 2.76 and 8 TeV shown as a function of jet p T for the absolute rapidity ranges 1.0 < |y| < 1.5 and 1.5 < |y| < 2.0.

Figure 11: Ratio of data over theory prediction (closed circles) using the CT10 NLO PDF set, with the default α S (M Z) value of 0.118. Dashed lines represent the ratios of the predictions obtained with the CT10 PDF set evaluated with different α S (M Z) values, to the central one. The error bars correspond to the total uncertainty of the data.
Figure 13: The running α S (Q) as a function of the scale Q is shown, as obtained by using the CT10 NLO PDF set. The solid line and the uncertainty band are obtained by evolving the extracted α S (M Z) values by using the 2-loop 5-flavour renormalization group equations. The dashed line represents the evolution of the world average value. The black dots in the figure show the numbers obtained from the √s = 8 TeV inclusive jet measurement. Results from other CMS [53-55], D0 [49,50], H1 [56], and ZEUS [57] measurements are superimposed.

Figure 16: Gluon PDF distribution as a function of x at the starting scale Q 2 = 1.9 GeV 2 as derived from HERA inclusive DIS (left) and in combination with CMS inclusive jet data (right). Different contributions to the PDF uncertainty are represented by bands of different shades. In the bottom panels the fractional uncertainties are shown.

Physics; the Institut National de Physique Nucléaire et de Physique des Particules / CNRS, and Commissariat à l'Énergie Atomique et aux Énergies Alternatives / CEA, France; the Bundesministerium für Bildung und Forschung, Deutsche Forschungsgemeinschaft, and Helmholtz-Gemeinschaft Deutscher Forschungszentren, Germany; the General Secretariat for Research and Technology, Greece; the National Scientific Research Foundation, and National Innovation Office, Hungary; the Department of Atomic Energy and the Department of Science and Technology, India; the Institute for Studies in Theoretical Physics and Mathematics, Iran; the Science Foundation, Ireland; the Istituto Nazionale di Fisica Nucleare, Italy; the Ministry of Science, ICT and Future Planning, and National Research Foundation (NRF), Republic of Korea; the Lithuanian Academy of Sciences; the Ministry of Education, and University of Malaya (Malaysia); the Mexican Funding Agencies (BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI); the Ministry of Business, Innovation and Employment, New Zealand; the Pakistan Atomic Energy Commission; the Ministry of Science and Higher Education and the National Science Centre, Poland; the Fundação para a Ciência e a Tecnologia, Portugal; JINR, Dubna; the Ministry of Education and Science of the Russian Federation, the Federal Agency of Atomic Energy of the Russian Federation, Russian Academy of Sciences, and the Russian Foundation for Basic Research; the Ministry of Education, Science and Technological Development of Serbia; the Secretaría de Estado de Investigación, Desarrollo e Innovación and Programa Consolider-Ingenio 2010, Spain; the Swiss Funding Agencies (ETH Board, ETH Zurich, PSI, SNF, UniZH, Canton Zurich, and SER); the Ministry of Science and Technology, Taipei; the Thailand Center of Excellence in Physics, the Institute for the Promotion of Teaching Science and Technology of Thailand, Special Task Force for Activating Research and the National Science and Technology Development Agency of Thailand; the Scientific and Technical Research Council of Turkey, and Turkish Atomic Energy Authority; the National Academy of Sciences of Ukraine, and State Fund for Fundamental Researches, Ukraine; the Science and Technology Facilities Council, UK; the US Department of Energy, and the US National Science Foundation. Individuals have received support from the Marie-Curie programme and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P.
Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation à la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Council of Science and Industrial Research, India; the HOMING PLUS programme of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus programme of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2013/11/B/ST2/04202, 2014/13/B/ST2/02543 and 2014/15/B/ST2/03998, Sonata-bis 2012/07/E/ST2/01406; the Thalis and Aristeia programmes cofinanced by EU-ESF and the Greek NSRF; the National Priorities Research Program by Qatar National Research Fund; the Programa Clarín-COFUND del Principado de Asturias; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); and the Welch Foundation, contract C-1845. Table 1 : HLT trigger ranges and effective integrated luminosities used in the jet cross section measurement.The luminosity is known with a 2.6% uncertainty. Table 2 : The PDF sets used in comparisons to the data together with the corresponding number of active flavours N f , the assumed masses M t and M Z of the top quark and Z boson, the default values of the strong coupling constant α S (M Z ), and the ranges in α S (M Z ) available for fits.For CT10 the updated versions of 2012 are used. Table 3 : Summary of the χ 2 values for the comparison of data and theoretical predictions based on different PDF sets in each |y| range, where cross sections are measured for a number of p T bins N bins . Table 4 : Results for α S (M Z ) extracted using the CT10 NLO PDF set.The fitted value for each |y| bin; the corresponding uncertainty components due to PDF, scale, and nonperturbative corrections; and the total experimental uncertainty is shown.The last row of the table shows the results of combined fitting of all the |y| bins simultaneously. Table 5 : The extracted α S (M Z ) values, the corresponding α S (Q) values at the Q scale for each p T range, and χ 2 min /N Bins are shown.Uncertainties are given for both α S values. Table 7 : Partial χ 2 /N dp per number of data points N dp and the global χ 2 per degree of freedom, N dof , as obtained in the QCD analysis of HERA DIS data and the CMS measurements of inclusive jet production at √ s = 8 TeV.
2017-04-04T13:49:40.000Z
2016-09-17T00:00:00.000
{ "year": 2016, "sha1": "bc250e4f65eafc65034386f280c9f8bad21e0bf2", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP03(2017)156.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "bc250e4f65eafc65034386f280c9f8bad21e0bf2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
51969717
pes2o/s2orc
v3-fos-license
Detecting Workload-based and Instantiation-based Economic Denial of Sustainability on 5G environments

This paper reviews the Economic Denial of Sustainability (EDoS) problem in emerging network scenarios. The performed research studied it in the context of adaptive approaches grounded on self-organizing networks (SON) and Network Function Virtualization (NFV). In particular, two novel threats were reviewed in depth: Workload-based EDoS (W-EDoS) and Instantiation-based EDoS (I-EDoS). With the aim of contributing to their mitigation, a security architecture with network-based intrusion detection capabilities is proposed. This architecture implements machine learning techniques, network behaviour prediction, adaptive thresholding methods, and productivity-based clustering for detecting entropy-based anomalies based on the observed workload (W-EDoS) or suspicious variations of the productivity observed at the virtual instances (I-EDoS). A detailed experimentation has been conducted considering different calibration parameters under different network scenarios, on which the security architecture has been assessed. The results have proven good accuracy levels, hence demonstrating the effectiveness of the proposal.

INTRODUCTION

The complexity and sophistication of emerging network architectures have noticeably increased, and nowadays they demand more agile, robust and effective network management paradigms, where scalability is mandatory. In the last years, 5G networks have emerged as a promising technology towards the fulfillment of the challenging requirements posed by the current and future communication scenarios [26]. They have motivated a smart integration of innovative communication network solutions, such as Network Function Virtualization (NFV), cloud computing, Software Defined Networking (SDN), artificial intelligence, Self-Organizing Networks (SON), among others. In particular, the suitable combination of SDN and SON is considered one of the most relevant to accomplish the 5G Key Performance Indicators (KPI) [17]. Because of this, recent 5G projects have been integrating such technologies to incorporate cognitive capabilities for the inference of the network status, thus enhancing the autonomic management capacity [23] when dealing with heterogeneous network environments [2]. A clear example of this is observed in the SELFNET project [32], where a 5G-oriented framework for self-organizing management is proposed. The research introduced in this paper is thereby focused on SON networks as promising solutions for fulfilling the aforementioned challenges. Originally, SON networks were proposed as a response to address the problem of LTE mobile network efficiency [4], being consequently standardized by the Third Generation Partnership Project (3GPP), where their capability to reduce operational costs by automation is remarked [1].
In this way, SON poses a transition from traditional management paradigms where human intervention is mandatory (open-loop) towards a fully automated model (closedloop). Another important topic of this research is the role of cloud computing in the SON context, which has allowed the virtualization of network functions aimed to address scalability issues of network infrastructures [45], which in the meantime yields the reduction of costs in the deployment of sensors and actuators involved at SON. That network elasticity is orchestrated through auto-scaling policies, which expose vulnerabilities that can be exploited by an attacker with the aim to produce an economical overspending of the target victims, hence making a cloud service unsustainable [6]. This effect is known as Economical Denial of Sustainability (EDoS), and it poses security threats which have not been reviewed in depth by the research community, being frequently confused with flooding-based or complexity-based Denial of Service (DoS) attacks. EDoS threats have gained sophistication with the expansion of the next generation technologies, hence demanding the deployment of detection strategies toward their mitigation [40]. The research presented throughout this paper contributes with an in-depth review of the EDoS problem in conventional cloud infrastructures and their adaptation to self-organizing scenarios. It has entailed the distinction of two main threats: EDoS based on the exploitation of the network elements workload (W-EDoS), and EDoS based on fraudulent instantiation of virtualized network functions (I-EDoS). It is also proposed a multilayered architecture compatible with the ETSI-NFV [16] model for their detection, which combines machine learning techniques, prediction methods and clustering algorithms. The effectiveness of the detection strategy has been assessed in a real SON environment, which has exposed promising preliminary results. This paper is divided into seven sections, being the present introduction the first of them. Section II reviews the state of the art about EDoS attacks related with SON environments and the proposals for their mitigation. Section III defines the W-EDoS and I-EDoS attacks and their characterization. In section IV, the proposed approach for detecting EDoS threats is introduced. Section V describes the evaluation methodology conducted throughout the experimentation. In section VI the experimental results are discussed. Finally, Section VII presents the conclusions and highlights the future research lines. BACKGROUND This section describes the main characteristics of EDoS attacks, and the efforts proposed by the research community towards their mitigation. Economical Denial of Sustainability The expression Economical Denial of Sustainability was coined by C. Hoff in 2008 [10] [11] to describe attacks originally targeted against cloud computing platforms, in which the intruder has the goal to fraudulently increase the economic expenditures derived from the maintenance of the hosted cloud services. Therefore, their main consequence is to affect the economic viability in the wake of higher expenses, which can motivate either the migration to other cloud provider or, even worse, the service unsustainability. Interested in this new threat, R. Cohen [31] extended its definition pointing out the exploitation of vulnerabilities of self-scaling processes as the most implemented procedures to achieve the aforementioned fraud, an approach that nowadays is mainly supported by the research community. 
Although EDoS introduces a new paradigm of intrusion inherent in emerging network technologies, it has drawn the attention of different information security organizations, which usually refer to EDoS as Reduction of Quality (RoQ) [9] attacks or Fraudulent Resource Consumption (FRC) [36] threats that typically take advantage of the payment-for-service solutions offered by cloud computing suppliers [30]. These threats usually try to go unnoticed by monitoring elements by registering consumption distributions and requests that resemble those of normal and legitimate clients [10] [11]. Therefore, it is common to undertake the intrusion by issuing computationally expensive requests [36]. This also marks a representative difference from legitimate events capable of jeopardizing the availability of the protected system, such as the massive access of legitimate users to the hired services, commonly referred to as flash crowds [44]. At the present time, there are different techniques to perpetrate EDoS threats, for example, by requesting large files or costly queries to databases [7], HTTP requests linked from XML content [41], or by exploiting specific vulnerabilities of the web service platforms [46][35][34]. In addition to causing an economic impact, EDoS attacks potentially lead to other secondary risks. G. Somani et al. [36] reviewed this problem by pointing out different collateral damages, which vary depending on the role of each actor in a cloud computing deployment. For example, the provider tends to lose reputation while customers decide to contract cheaper services from rival enterprises. Clients also may pay an excessive amount of money for services that they were not using. These threats may also affect the operational capacity of the services at the different information processing layers that support them, this being the case of infrastructure, network function virtualization or multitenancy [9][35].

Countermeasures

Despite the growing relevance of the EDoS threat in the emerging networking landscape, the bibliography does not provide an extensive number of publications that address the challenges it poses. They usually describe solutions based on analyzing network-level metrics typical of flooding-based denial of service recognition. In order to facilitate their understanding, the contributions are classified as they are classically organized in the research related to conventional DDoS defense [35]: detection, mitigation/prevention, and source identification.

Detection. The publications in this field aim at identifying the EDoS attacks. A significant portion of them analyzed local-level metrics for modeling the resource consumption and self-scaling processes of the monitored environment [35]. Other publications focus on studying network-level data [20] and the browsing habits of the clients [34]. Note that although the research focused on local metrics has proven to be effective by best fitting the definition of EDoS attacks proposed by Hoff [10] [11], the network-based solutions are able to take advantage of the state of the art on flooding-based DDoS and the emerging communication paradigms.

Mitigation and Prevention. The contributions towards EDoS mitigation tend to focus on increasing the restriction level of the protected system through access control techniques. Turing tests based on image recognition [22] or the resolution of cryptographic puzzles [25] are usually the most commonly applied methods.
In contrast to the detection techniques, they do not require the previous identification of the threat, but their deployment usually penalizes the user Quality of Service or the operational expenditure. It is worth emphasizing that most of the proposals categorized as mitigation solutions can be implemented as prevention measures, hence ignoring previous threat identification stages.

Source Identification. Finally, the research that aims at discovering the origin of EDoS situations attempts to track the attacker. Because of the complexity that this challenge implies, the scope of identifying the threat source is often reduced to getting as close as possible to the attacker. The bibliography related to the defense against DDoS serves this purpose [21]; among the previous publications, it is worth highlighting those based on analyzing error messages [3], honeypot deployment [42] and packet marking [43].

EDOS IN THE SON ENVIRONMENT

Hoff [10] [11] pointed out the great similarity that EDoS activities present with respect to legitimate traffic. It is then assumable that, in the context of a client-server architecture, that similarity is expressed in terms of the set of clients and the requests they generate, thus taking into account their number, distribution over time and computational complexity. These traits characterize both of the threats described below.

W-EDoS: Workload-based EDoS

An attack of Economic Denial of Sustainability based on Workload (W-EDoS) is characterized by the execution of operations of high computational cost in the virtual instances hosted on a cloud computing provider. They are executed at server side, thus generating a high workload in response to seemingly legitimate client requests. Under this premise, the existence of a W-EDoS attack is assumed when a monitored network environment presents conditions of similarity with legitimate network traffic, but where the average workload per request is significantly greater in terms of quantity and distribution. Fig. 1 shows a representation of a W-EDoS attempt launched on an instantiated VNF. The effect of the W-EDoS attack is to force the SON management layer to scale the instantiated VNFs vertically or horizontally, hence wasting additional computational resources (computation, storage, etc.) hired under pay-per-use policies, which causes negative effects on the economic sustainability of the offered services they support.

I-EDoS: Instantiation-based EDoS

An attack of Economic Denial of Sustainability based on Instantiation (I-EDoS) is characterized by the exploitation of some existing vulnerability, either in the cloud service platform or in the virtual functions, that leads to the automatic creation of additional VNF instances in one or several points of the network. In this way, an increase in the number of deployed instances is observed. Note that their average productivity is typically considerably lower, since their deployment would not have been necessary under legitimate circumstances. Therefore, the existence of an I-EDoS attack is assumed when a monitored network environment displays conditions of similarity with legitimate network traffic, but with a significant increase in the number and distribution of virtual instances, as well as a decrease in their average productivity. Fig. 2 shows a graphic representation of an I-EDoS attack in which the cloud service platform exposes a vulnerability that triggers the creation of additional virtual instances with different degrees of productivity.
The group of unproductive instances was fraudulently instantiated by the attacker, which causes extra costs derived from the time they remain in execution and from their resource consumption, in this way jeopardizing the economic sustainability of the offered services.

DESIGN PRINCIPLES AND ARCHITECTURE

The performed research aimed at distinguishing legitimate situations from those related to EDoS attacks in self-organized scenarios. The following describes its design principles, architecture, and the EDoS threat discovery approach.

Design Principles

Throughout this section the requirements, assumptions and limitations (scope) of the performed research are detailed, which are enumerated as follows:

• The architecture must be capable of detecting W-EDoS and I-EDoS attacks assuming the characteristics described in the previous section, in this way distinguishing them from legitimate activities (typified as normal traffic and flash crowds).
• The detection of conventional flooding-based DoS attacks is beyond the scope of the performed research.
• The non-stationarity inherent to the emerging monitoring environments is assumed [14].
• For simplicity and to facilitate the understanding of the proposal, the attacks based on mimicry or identity theft [29] weaponized for avoiding the proposed EDoS detection approach are not studied.
• Self-Organized Networks pose complex monitoring scenarios in which a large number of sensors collects information about the state of the network in real time. This information should be aggregated into observations that can be treated by high-level analytical tools. Although in the experimentation the impact of the data granularity is briefly discussed, the introduction of methods for data granularity calibration is postponed for future investigation.
• The correlation and management of the discovered incidents [39] are beyond the scope of this publication. However, it is assumed that the acquired knowledge must be notified to the security management layers.

Architecture

Fig. 3 illustrates the proposed architecture, which was designed in accordance with the most widely accepted framework for Network Function Virtualization (ETSI-NFV) and next generation networks (5G) [16]. Accordingly, the data decoupling and data plane management make possible the distinction of the different functional layers. The Virtualization Layer is executed on the Physical Layer, commonly implemented with Commercial-Off-The-Shelf (COTS) hardware. At a higher level, the Cloud Layer manages the automatic instantiation of Virtual Network Functions (VNFs) through interaction with the Virtualization Layer, which is responsible for providing the requested resources. The deployed Cloud environment interconnects VNFs through the underlying virtual network, composing one or more Network Services (NS) accessible to users. It is also assumed that the Cloud Layer has the ability to extract monitoring metrics, which are subsequently analyzed in the SON Autonomic Layer in the following steps:

Data collection. In SON environments the sensors (S) play an important role by monitoring custom metrics at the application level, such as response times, memory consumption per process, VNF instance productivity, etc. Likewise, cloud computing platforms provide monitoring tools (e.g. Ceilometer [27]) capable of offering a significant number of metrics related to the usage mode of the network and the performance of the instantiated resources, e.g. CPU or memory consumption, latency, etc.
In this way, the architecture collects information from both the sensors (ALM) and the cloud platform (VIFM).

Data Aggregation. The high volume of data generated by the monitoring tasks requires running periodic aggregation procedures that generate time series able to be handled by the analytic components, thereby enabling their projection to future observations. At application level, this is achieved through Feature Extraction (FE), which implements at least the methods involved in EDoS detection described in the forthcoming sections, for example, the measurement of the data disorder by entropy analysis. On the other hand, the metrics directly gathered from the cloud computing platform are extracted and aggregated (VRA) through queries to the API of the monitoring tool. In both cases, the granularity of the time series is determined by the periodicity with which the aggregation operations are executed.

EDoS Detection. The discovery of EDoS situations is addressed by the analytics and decision-making stages. In this framework, the first of them allows the inference of predictive models (MD) applied to time series of aggregated metrics, whose results are considered for building prediction intervals (AT) based on the estimated error per observation. Consequently, unexpected behaviors are deduced when the observations fall outside the prediction interval. Besides that, groups of instances are clustered based on the similarity (SM) observed in their productivity indicators, thus giving rise to the identification of groups with low productivity potentially related to I-EDoS situations. At the decision-making stage, the analyzed data is taken into account to create inference rules designed to detect anomalies (AD) that reflect the presence of an EDoS threat, hence assuming as factual knowledge the information directly gathered from the monitored environment or acquired by the previous analytical steps.

Notification. The inferred conclusions are notified as possible EDoS situations. They serve the purpose of avoiding the creation of instances whose fraudulent origin generates surcharges derived from their usage.

W-EDoS detection

The following details the W-EDoS detection metrics and the analytical processes this task involves:

W-EDoS metrics. According to the W-EDoS definition, this type of attack maintains a condition of network similarity with the normal and legitimate usage model but displays significant variations in terms of VNF workload. Because of this, the detection strategy considers the CPU consumption (X cpu) and the response time at application level (X app) as W-EDoS indicators. It is important to clarify that the first of them measures the CPU consumption at operating system level, while the second measures the total time required to process each request at server side.

With the motivation of discovering unexpected behaviors, the first performed step is to analyze the variations in X app, which is achieved by studying their degree of disorder in fixed time intervals. The reviewed literature suggests the correlation of these observations in terms of entropy [20,29,37], as commonly accepted for classical DDoS recognition. As indicated by Bhuyan et al. [8], the entropy defined by Rènyi provides a general-purpose solution particularly effective for this type of problem. It is defined by H α (X app) in the following equation, being α the entropy order, α ≥ 0 and α ≠ 1:

H α (X) = 1/(1 − α) log( Σ_{i=1..n} P i^α ),

where X is the random variable with n possible outcomes and corresponding probabilities P i (i = 1, 2, ..., n). For experimental purposes, the normalized solution H α (X app)/log n is considered. Note that in the limit α → 1 the Rènyi entropy coincides with that of Shannon.
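As an illustration of this aggregation step, the following is a minimal sketch (not the authors' implementation) of how the normalized Rènyi entropy of the response times observed in a fixed time window could be computed; binning the response times into discrete outcomes, as well as the choice of α = 2, are assumptions made only for the example.

```python
import numpy as np

def renyi_entropy(probabilities, alpha=2.0):
    """Rènyi entropy of order alpha (alpha >= 0, alpha != 1), normalized by log(n)."""
    p = np.asarray(probabilities, dtype=float)
    n = len(p)                       # number of possible outcomes
    p = p[p > 0]                     # zero-probability outcomes do not contribute
    h = np.log(np.sum(p ** alpha)) / (1.0 - alpha)
    return h / np.log(n)

def window_entropy(response_times_ms, n_bins=8, alpha=2.0):
    """Aggregate the response times of one observation window into a single entropy value."""
    counts, _ = np.histogram(response_times_ms, bins=n_bins)
    probabilities = counts / counts.sum()
    return renyi_entropy(probabilities, alpha)

# Example: a one-second window of per-request response times (ms).
window = np.random.default_rng(0).normal(loc=25.0, scale=5.0, size=200)
print(window_entropy(window))
```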
The successive measurements of entropy give rise to the creation of the time series {H α (X app) t, t = 0, ..., n}, and the CPU consumption indicators are expressed as the time series X cpu = {x t, t = 0, ..., n}. The rest of the analytical steps to detect W-EDoS are the same for X cpu and X app. Henceforth, X is used to refer indistinctly to either of them.

Unexpected behaviors derived from W-EDoS. The proposed detection method lies on deciding whether the estimation X̂_{t=m} at time horizon m differs significantly from X_{t=m}. This requires predicting time series of the variable X in a predetermined horizon, which allows comparing the forecasted values with the actual observations. The Double Exponential Smoothing (DES) predictive algorithm has been implemented, because it reduces the adaptation time by requiring shorter time series for data modeling, in this way outperforming autoregressive solutions such as ARIMA [34]. Its adjustment parameters are auto-calibrated as described in [24], but instead of inferring variations with respect to the estimated points, prediction intervals are constructed as suggested in [19]. They are expressed considering the prediction error ϵ t, computed as the Mahalanobis distance between the observation and its prediction at t, particularly when t = m. The Prediction Interval (PI) is expressed as follows:

PI = x̂_{t=n} ± η σ²(ϵ t),    (5)

where σ² is the variance of the prediction error ϵ t. Consequently, let X^n_{t=0} be the observed series and X̂_{t=n+m} its prediction at horizon m; the observation X_{t=n+m} is considered a workload-based unexpected behavior if ϵ t ∉ PI, i.e. when x̂_{t=n+m} and x_{t=n+m} differ significantly. Since X cpu is a variable independent from X app, the proposal assumes that each unexpected observation X_{t=m} in both X cpu and X app unmasks a potential W-EDoS threat if X cpu displays an increasing trend, in this case reporting a W-EDoS incident.
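To make the forecasting step concrete, here is a minimal sketch (an illustrative simplification, not the paper's code) of Double Exponential Smoothing with a prediction interval built from recent prediction errors; the smoothing constants and the interval width η are assumed values rather than the auto-calibrated ones.

```python
import numpy as np

def des_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    """Double Exponential Smoothing (Holt's linear method): forecast at t = n + horizon."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

def is_unexpected(series, new_observation, eta=3.0, horizon=1):
    """Flag an observation whose error falls outside the prediction interval of Eq. (5)."""
    # One-step-ahead errors over the recent history, used to estimate the error variance.
    errors = [series[i + 1] - des_forecast(series[: i + 1], horizon=1)
              for i in range(2, len(series) - 1)]
    forecast = des_forecast(series, horizon=horizon)
    half_width = eta * np.var(errors)            # eta * variance of the prediction error
    return abs(new_observation - forecast) > half_width

# Example with a synthetic CPU-consumption series (percent).
history = np.array([35, 36, 38, 37, 39, 40, 41, 42, 41, 43], dtype=float)
print(is_unexpected(history, new_observation=70.0))
```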
I-EDoS detection

The I-EDoS detection metrics and the adopted analytical procedure are described below:

I-EDoS metrics. The I-EDoS threat preserves a condition of network similarity with the normal and legitimate usage model. However, and as previously indicated, these attacks are characterized by the appearance of new instances, which causes a direct relationship between the deployment of new VNFs and their low productivity. Consequently, two metrics are mainly taken into account for I-EDoS detection: the number of VNFs instantiated per observation (Y), and their productivity (Z), where Z is the set Z = {z 1, ..., z Y}, Y ≥ 0, that defines the productivity of the different virtual instances of the observation at t. In analogy to the proposed solution for W-EDoS detection, they are monitored over time, hence leading to the generation of the time series Y = {y t, t = 0, ..., n} and Z = {Z t, t = 0, ..., n}, with Z t = {z 1, ..., z Y(t)}, where an observation at t, 0 ≤ t ≤ n, is suspicious when Y t displays a significant increase and Z t contains a group of VNF instances with clearly low productivity, which is referred to as the lazy group. They are suspicious of deriving in additional resource consumption and of empowering the anomalous raising of Y t.

Unexpected behaviors derived from I-EDoS. As in W-EDoS attack detection, in I-EDoS situations there is a significant increase in the number of instances Y when, for a time horizon m, the calculated error between its forecasted value Ŷ_{t=n+m} and its observation Y_{t=n+m} falls outside the previously defined prediction interval (PI). When an auto-scaling action has triggered the creation of new VNF instances with productivity Z t = {z 1, ..., z Y t}, it is possible to assess if part of them are involved in an I-EDoS attack by applying density-based clustering; in the solution implemented in the performed experimentation, this method is particularized through the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm [15]. This approach considers the existence of groups of observations based on the density of their closest K-neighbors. The observations that are not reachable within the same group are considered outliers [12]. DBSCAN has been chosen because it is tolerant to noise and does not require a previous estimation of the number of groups, being configured in the experimentation by a heuristic approach recommended in [33]. DBSCAN is executed per set of productivity values Z t = {z 1, ..., z Y t}, and the result is a set of K clusters represented by C t = {c 1, ..., c K}. Let Z t = {z 1, ..., z Y t} be the set of productivity measures of the instances at t, classified as C t = {c 1, ..., c K} with K ≥ 0 and ordered as s(C t) = [c 1, ..., c K]; there is an I-EDoS-based unexpected behavior (labeled as possible I-EDoS at t) when a significant growth at the time of creation of the VNF instances belonging to c 1 is observed, where c 1 is the least productive (lazy) group of VNFs.
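A minimal sketch of this clustering step could look as follows (illustrative only; the real system auto-calibrates the DBSCAN parameters with the heuristic of [33], whereas eps and min_samples are fixed assumptions here). It groups the per-instance productivity values and returns the members of the least productive cluster, i.e. the lazy group.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def lazy_group(productivities, eps=5.0, min_samples=2):
    """Cluster per-VNF productivity values and return indices of the least productive cluster."""
    z = np.asarray(productivities, dtype=float).reshape(-1, 1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(z)

    clusters = sorted(
        (label for label in set(labels) if label != -1),   # -1 marks DBSCAN outliers
        key=lambda label: z[labels == label].mean(),
    )
    if not clusters:
        return []
    lazy_label = clusters[0]                                # lowest mean productivity
    return list(np.where(labels == lazy_label)[0])

# Example: productivity (e.g. served requests per second) of the VNF instances at one observation.
z_t = [48.0, 51.0, 50.0, 3.0, 2.5, 2.8, 47.5]
print(lazy_group(z_t))   # indices of the suspiciously unproductive instances
```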
EXPERIMENTATION
This section presents the network environment where the EDoS detection approach has been evaluated. The Cloud Layer and related SON components are described below. Fig. 3 illustrates the experimental testbed, where the Cloud Layer has been implemented with Openstack [28]. It has been deployed on two servers: Controller and Compute. The Controller server hosts the network service (Neutron), and the Compute node provides the orchestration (Heat), clustering (Senlin) and telemetry (Ceilometer) services on which the auto-scaling policies are supported. All Openstack services communicate via RabbitMQ message exchange buses. On the other hand, the processing stages of the SON autonomic layer combine custom implementations and open source tools. Thus, the Collection node periodically fetches the response times calculated per instance, whereas the metrics related to the instantiated VNFs are gathered by Ceilometer. Then, data aggregation functions are applied, first to calculate the entropy from the data of the central node, and second to query the Ceilometer API for obtaining the average CPU consumption of the instantiated VNFs per observation. The time series feed the algorithms implemented for the detection stage. The acquired factual knowledge is analyzed by production rules configured in Drools with the aim of inferring unexpected behaviors labeled as potential EDoS situations [38].

W-EDoS characterization
An HTTP REST web service that supports GET requests to seven URIs (numbered 1 to 7) has been implemented in a virtual Openstack instance, each URI with a different response time, from the simplest (18.56 ms) to the most complex queries (36.73 ms). An eighth URI with 226.04 ms of average response time is also implemented, which represents the point of greatest computational cost that can be exploited as a vulnerability. The metrics required for EDoS detection are collected per second, which serve for building time series and calculating the Rényi entropy degree of the monitored observations. On the other hand, the CPU-based indicators are obtained per instance from the Ceilometer API, thus creating additional time series. In the experimental tests, the requests have been launched from 500 clients implemented as Python threads, which in normal traffic situations randomly communicate with URIs 1 to 8, while in attack scenarios only URI 8 is requested. In both situations, a self-scaling policy that creates a new instance of the web service has been configured, which is triggered when the average reported CPU consumption is greater than 60% over a one-minute time interval. Two adjustment factors allowed configuring the attack intensity: the number of compromised nodes, and the variation of the connection rate per second. From them, the rules for discovering unexpected behaviors derived from W-EDoS were configured.

I-EDoS characterization
In the I-EDoS scenario, the implemented REST application has been modified to expose a single URI that serves requests with an average execution time of 27.89 ms. For hosting the virtual image instances, an Openstack cluster was created with a minimum size of 2 VNFs and a maximum size of 12. The implemented auto-scaling policy orchestrated the creation of a new VNF instance when the average CPU consumption was higher than 80%, and the removal of an instance of the lowest productivity cluster when this value was less than 40%. A stress test was launched on the server to establish the default productivity level. This was evaluated with Httperf [18], and the obtained results reflected the lowest achieved productivity when the connection rate per second was less than 10, causing a maximum CPU consumption of 39.1% that approached the lowest threshold of the configured auto-scaling policy. The optimal performance levels were recorded with a connection rate that varied from 10 to 40 per second, resulting in an average CPU consumption from 41.2% to 81.6%; in this use case the percentage of connection errors was 0%. However, when the injected traffic exceeded 40 connections per second, the CPU consumption reached its highest levels, registering values between 82.7% and 99.6% that exceeded the auto-scaling threshold and posed connection errors higher than 10%. The network parameters and the resulting productivity served for DBSCAN to identify the groups of VNFs that, due to their behavior, may be compromised by an I-EDoS situation. The workload resembled a random Poisson distribution [5] whose expected value λ was the number of connections of the cluster at a given observation; rates from 53 to 286 connections per second were tested over a period of three hours. The same default workload has been applied in both normal and attack scenarios. In the malicious situation, the VNF self-scaling was triggered by manipulating metrics gathered by Ceilometer, where the attacker is assumed to be able to exploit vulnerabilities like CVE-2016-9877 [13] to poison the information collected via the RabbitMQ data buses. This enabled turning the original CPU readings (JSON messages) into fake values randomly ranging from 90% to 100%. The manipulated metrics were finally registered in the Ceilometer database, which led to fraudulently deploying additional VNF instances due to the auto-scaling policies.
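A minimal sketch of the kind of traffic generation described above could look like the following. The service address and URI layout are hypothetical; the attack/normal URI selection mirrors the W-EDoS characterization, while the Poisson-distributed per-second request count mirrors the I-EDoS workload description.

```python
import random
import threading
import time

import numpy as np
import requests

BASE = "http://10.0.0.10:8080"   # hypothetical address of the web service VNF

def client(attack=False, duration=60, rate_lambda=20):
    """One traffic-generator thread: in normal mode it requests URIs 1-8 at
    random; in attack mode it only hits the costly URI 8. The number of
    requests issued each second is drawn from a Poisson distribution."""
    end = time.time() + duration
    while time.time() < end:
        for _ in range(np.random.poisson(rate_lambda)):
            uri = 8 if attack else random.randint(1, 8)
            try:
                requests.get(f"{BASE}/api/{uri}", timeout=2)
            except requests.RequestException:
                pass                 # connection errors are accounted for elsewhere
        time.sleep(1)

# e.g. 500 clients with 10% of them compromised (attack intensity)
threads = [threading.Thread(target=client, kwargs={"attack": i < 50})
           for i in range(500)]
for t in threads:
    t.start()
```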
RESULTS
The following discusses the effectiveness of the proposal when assessed on the evaluation testbed. This section separates the results obtained when dealing with W-EDoS and I-EDoS situations.

Effectiveness at W-EDoS attacks
In Fig. 4 the effectiveness of the proposal when varying the Rényi entropy degree is illustrated. The lower λ values minimize the impact of the inferred noise, this being the main reason that led them to yield more accurate results. Consequently, during the rest of the experimentation the best observed adjustment (i.e. λ = 1) was assumed. The W-EDoS attacks have been injected at intensities of 1%, 5% and 10%, where the percentage represents the proportion of malicious requests that characterizes the attack intensity. Additionally, four scenarios have been studied based on the average number of requests per second (px) performed by clients: 50, 60, 70 and 80, where K is the adjustment value for the creation of the prediction intervals. Different values of K (from 0.1 to 6) have been tested, this being the parameter that varies the degree of sensitivity of the detection. The best results were obtained when the request rate was 80 px and the intensity was 10% (Fig. 5), with 0.995 being the trapezoidal approximation of the Area Under the ROC Curve (AUC). According to the Youden statistic, the best configuration registered a True Positive Rate (TPR) of 1 and a False Positive Rate (FPR) of 0.01. In the opposite case, the worst results were observed with a request rate of 60 px and an attack intensity of 1%, where AUC = 0.901, TPR = 0.816 and FPR = 0.15. From these results it is possible to conclude that, as the attack intensity makes the threat more visible and the request rate increases, the accuracy of the system improves, since these conditions lead to more noticeable variations in terms of entropy and CPU overload. In general terms, the obtained accuracy demonstrates the ability of the proposed method to detect W-EDoS attacks in scenarios similar to those considered for evaluation.

Figure 5: ROC curve when 80 px at W-EDoS detection (attack intensities 1%, 5%, 10%).
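The reported operating points can be reproduced in spirit with the following sketch, which computes the ROC curve, its trapezoidal AUC, and the Youden-optimal threshold from detector scores; the scores below are synthetic placeholders, not the experimental data.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def best_operating_point(y_true, scores):
    """ROC curve, trapezoidal AUC, and the threshold maximizing the Youden
    statistic J = TPR - FPR, as used to select the reported (TPR, FPR)."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    j = tpr - fpr
    best = int(np.argmax(j))
    return auc(fpr, tpr), tpr[best], fpr[best], thresholds[best]

# synthetic example: detector scores for 100 benign and 100 attack windows
rng = np.random.default_rng(0)
labels = np.concatenate([np.zeros(100), np.ones(100)])
scores = np.concatenate([rng.normal(0.3, 0.10, 100), rng.normal(0.7, 0.15, 100)])
print(best_operating_point(labels, scores))
```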
Effectiveness at I-EDoS attacks
The I-EDoS situation recognition capabilities of the proposal have also been evaluated according to the attack intensity, whose impact translates into a growth of 10%, 20%, 30%, 40% and 50% in the number of instantiated VNFs. As is easy to deduce, this adjustment parameter directly influenced the effectiveness of the proposal. This fact is illustrated in Fig. 6, where the ROC curves obtained under the different experimental conditions are displayed. In general terms, the hit rate experienced small and inconspicuous variations. In the first group of attacks (10%, 20%, 30%, 40%), a distance of 0.022 was observed between the minimum hit rate (TPR = 0.89 at 10% intensity) and the best hit rate (TPR = 0.91 at 40% intensity); note that, as in the previous tests, the best adjustments were estimated according to the Youden criterion. Likewise, when the attack gained intensity (50%) the hit rate slightly increased (TPR = 0.94). However, when taking into account the percentage of false positives, the observed variations were more significant; in particular, the detection method registered FPR = 0.12 at 10% intensity, but when gaining intensity, the best configuration (at 40% and 50% intensities) resulted in FPR = 0.07, which represents an improvement of 58.3% over the worst result. This pattern can be observed in Fig. 6, where the AUC varies according to the attack intensity, being AUC = 0.9811 in the best adjustment and AUC = 0.9483 in the worst scenario. The variation in effectiveness originates at the clustering stage based on the VNF productivity. Thus, the more visible the attack, the greater the number of instances that belong to the group of unproductive instances. In view of the obtained results, it can be concluded that the proposed strategy is able to successfully identify I-EDoS situations in scenarios similar to those considered for evaluation.

Figure 6: ROC curve at I-EDoS detection (attack intensities 10%, 20%, 30%, 40%, 50%).

CONCLUSIONS
The problem of Economic Denial of Sustainability (EDoS) in the SON landscape has been studied and defined from two paradigms: workload (W-EDoS) and instantiation (I-EDoS) exploitation. In this context, two novel detection strategies have been proposed, which were able to recognize each of them. Both were based on modeling the normal behavior of the protected system and the discovery of discordant activities in the monitoring environment. In particular, for W-EDoS recognition the study of significant prediction errors was adopted, which lies in analyzing the evolution of the CPU consumption and the entropy estimated on the response times at the application level calculated in VNF instances. On the other hand, for I-EDoS detection purposes, the relationship between the growth in the number of instantiated VNFs and their belonging to low-productivity clusters was studied. The effectiveness of the proposal was proven through the performed experimentation, in which the impact of varying different adjustment parameters was studied (intensity of the attacks, confidence of the prediction intervals, or entropy degree). Consequently, it was possible to demonstrate that the proposal meets its main objective on the deployed testbed. However, it should be noted that, aiming to enhance the understanding of our contribution, some aspects also necessary for its application to real scenarios were not discussed in depth, among them strengthening it against adversarial threats or supporting the adoption of data protection policies, which pose interesting lines of future research.
Development and Degeneration of the Intervertebral Disc—Insights from Across Species

Simple Summary
The intervertebral disc is an important organ providing structure, support and flexibility to the spine, yet it can degrade over an individual's lifetime, resulting in a painful condition known as intervertebral disc degeneration. Historically, the degeneration or breakdown of the organ has been catalogued and studied in humans, mice and, to some extent, dogs; however, research has expanded to rats, cows, horses, rabbits, cats, and monkeys in recent decades, allowing for the application of new research methods. Expanded research has further clarified the mechanisms contributing to degeneration at a molecular and cellular level. This review examines stressors promoting degeneration, how the intervertebral disc responds to them, the variation in symptomatic presentation of intervertebral disc degeneration between species, physical differences in the disc at different levels in the spine and between animals, as well as how the cellular population of the disc changes over time. Examining these aspects both within and between species helps to characterize degenerative changes in the intervertebral disc, which is necessary for the development of treatment and therapy.

Abstract
Back pain caused by intervertebral disc (IVD) degeneration has a major socio-economic impact in humans, yet historically it has received minimal attention in species other than humans, mice and dogs. However, a general growing interest in this unique organ prompted the expansion of IVD research in rats, rabbits, cats, horses, monkeys, and cows, further illuminating the complex nature of the organ in both healthy and degenerative states. Application of recent biotechnological advancements, including single cell RNA sequencing and complex data analysis methods, has begun to explain the shifting inflammatory signaling, variation in cellular subpopulations, differential gene expression, mechanical loading, and metabolic stresses which contribute to age and stress related degeneration of the IVD. This increase in IVD research across species introduces a need for chronicling IVD advancements and tissue biomarkers both within and between species. Here we provide a comprehensive review of recent single cell RNA sequencing data alongside existing case reports and histo/morphological data to highlight the cellular complexity and metabolic challenges of this unique organ, which is of structural importance for all vertebrates.
1. Introduction
Intervertebral discs (IVD) sit between the vertebrae, providing shock absorption and enabling flexibility of the spine. The IVD is made up of three tissue types: cartilage endplate (CEP), annulus fibrosus (AF), and nucleus pulposus (NP), each of which has unique physical and biomolecular properties (Figure 1) [1-3]. IVD degeneration (IVDD), which often leads to back pain, osteoarthritis (OA), neuropathy, endplate defects, and disc herniations, is categorized by the breakdown of the extracellular matrix (ECM) and changes in the microenvironment of the IVD tissue [2,4]. IVDD is often accompanied by protrusion or extrusion of the IVD into the spinal canal, pushing on nerve roots and the spinal cord, which can induce neurologic symptoms [5,6]. Inflammation, changes in cellular function including senescence, and loss of cells through regulated cell death all contribute to degeneration of the NP and AF, and calcification of the CEP and NP [2,7-10]. IVDD is a common disease in humans, and also affects a large number of other species including monkeys, cows, horses, mice, rabbits, rats, dogs, cats, and bears [11-15]. To date, there are no treatments that effectively halt or reverse IVDD, and the changing microenvironment results in further degeneration, innervation, and neovascularization of the disc throughout an individual's life [2,6,16,17]. Analyzing animal models of IVDD has furthered our understanding of disease onset and progression, yet further research into markers of IVD tissue types, disease progression, and eventual therapy and treatment development is necessary for alleviating degeneration and subsequent pain across species.

Figure 1. (E) Following this resegmentation, the anterior layer of the cell dense section will give rise to the AF of the IVD (green), while the NP is derived from the NC (pink). Chondrogenesis enables the formation of the CEP and VB. (F) The reorganization and shift influenced by complex signaling events will enable innervation (light blue) of the myotome derived skeletal musculature (purple). (G) Transverse depiction of an isolated IVD illustrating only the caudal CEP for simplicity. (H) Depicts a sagittal section of the VB and IVD. The simplified illustration is not drawn to scale. AS: axial sclerotome; C: caudal; CEP: cartilage end plates; iAF: inner annulus fibrosus; IVD: intervertebral disc; LS: lateral sclerotome; NC: notochord; oAF: outer annulus fibrosus; R: rostral; VB: vertebral body.
2. A General Understanding of Anatomic and Molecular Features of the IVD

2.1. Importance of the Notochord
Mouse fate-mapping and lineage tracing experiments demonstrated that the NP is a notochord (NC) derived tissue [3,25,26]. As such, the NP contains both NC-like cells and smaller chondrocyte (CC)-like cells contributing to a heterogeneous cell population [14,27-30], with NC-like cells declining or disappearing in the matured IVD of some animals (Figure 2) [14,27,31]. The NC also acts as an important transient structure influencing patterning of ventrolateral sclerotome cells during embryogenesis, which migrate on both sides of the embryonic midline to surround the NC, with the axial sclerotome giving rise to AF cells of the IVD and portions of the vertebral body (Figure 1) [9,29,32,33]. Failure to form the NC sheath during development resulted in abnormal NP cell movement and a malformed NP, further illustrating the importance of the NC and NC sheath for NP development [34]. Notably, after the NC sheath disappears during development, NC-like cells remain present in the future NP [3,26,35]. Single cell RNA sequencing (scRNA seq) has recently been instrumental in identifying these clusters of cells, further advancing the understanding of NC cells, NC-like cells, NP progenitors, and NP cells, which is further discussed in Section 2.5. Beyond influencing NP and AF cell fate, NC cells also contribute to ECM synthesis through the production of proteoglycans (PG), most importantly aggrecan (Acan), and collagens such as collagen II [9]. Pathway analysis of NC markers suggests a role in the inhibition of inflammation and vascularization [36].
2.2. IVD Structure: Nucleus Pulposus
Whether NP cells transdifferentiate from NC-like cells over time or undergo cell death and replacement by CC-like NP cells has been debated [37] (Figure 3). Another study speculated a combined NP cell origin, with some cells developing from NC cells and others from CC cells [17]. Recent scRNA seq data has identified NP progenitors with NC origins populating the IVD and contributing to the heterogeneous tissue [35,38,39].
Figure 2. Illustrates the timeline of embryonic development from the onset of gastrulation to birth across different species such as mice, rats, rabbits, non-chondrodystrophic (NCD) dogs, chondrodystrophic (CD) dogs, humans, cows, and horses, in addition to the reported disappearance of notochord (NC)-like cells in the NP. Embryonic development data in humans are compiled from [40] and in all other animals from [41], while NC-like cell disappearance is compiled from mice [37], rats [26,42], rabbits [26,42], dogs [26,37], humans [30,42,43], cows (personal observation) and [14], and horses [11]. Information on NC or NC-like cell disappearance varied across sources and was based on histological or morphological data. Advancements in scRNA seq analysis can provide further clarification and have identified NC-like cells into adulthood in humans.

Compositionally, the ECM of the NP is primarily made up of Acan, type I and type II collagen, as well as hyaluronic acid, creating a hydrated matrix with a high internal osmotic pressure that enables the spine to withstand loading and avoid disc extrusion [2,44,45]. The balance of ECM molecules is important in maintaining a healthy, shock absorbing IVD. Unsurprisingly, ECM remodeling occurs with degeneration. In healthy discs, the NP typically has a higher ratio of type II collagen to type I collagen [2]. This collagen ratio shifts to favor type I collagen during degeneration [10,46]. Additionally, during IVDD there is a decrease in Acan and versican (Vcan) and an increase in smaller PGs [9,47]. Negatively charged glycosaminoglycan chains of Acan interact with cations and water molecules, thus affecting the balance of cations and anions and the osmotic pressure within the IVD [48]. However, during degeneration and ECM remodeling the IVD experiences calcification [10] and has diminished osmotic pressure and disc hydration due to a reduction in large PG content and thus fewer interactions with water molecules, disrupting the molecular flow in and out of the IVD ECM [46].

The predominantly avascular nature of the IVD contributes to its uniquely harsh microenvironment. Fournier et al. performed a scoping review analyzing vasculature in IVDs from individuals with no history of back pain, radiculopathy, or myelopathy across studies conducted between 1959 and 2018 [17]. Across the studies included, the vast majority agreed that the NP from fetal to infant years is avascular, with only one reporting the presence of blood vessels in degenerated discs at the NP and AF border [17,49]. One additional study found microscopic angiogenesis in 6.6% of their human samples aged 2-25 years old, and no study reported vascularization in discs from individuals after 25 years old [17]. Consensus supports the NP tissue as avascular throughout an individual's lifespan for those with no medical history of back pain. Notably, SOX10, CTSK, and TBXT positive (+) cells in the developing NC/NP tend to express SEMA3A, a protein thought to perpetuate avascular environments, which may therefore contribute to the NP's avascular nature [50].
2.3. IVD Structure: Annulus Fibrosus
The AF is traditionally differentiated into an outer AF (oAF) and an inner AF (iAF) or transition zone (TZ), which is described as the region between the oAF and the NP [14]. The oAF is primarily composed of organized type I collagen layers and elastin, while the TZ features a comparatively lower ratio of type I/II collagen as the disc shifts towards NP tissue [2,29,45] (Figure 3). Fibroblast-like cells are responsible for the production of type I collagen, while fibrocartilage cells produce the type II collagen of the TZ [2,29]. The AF is subject to degeneration characterized by annular fissures as well as biomolecular changes including the disappearance of decorin (Dcn) and biglycan (Bgn) [10,51]. These two small leucine-rich PGs are thought to aid in resistance to biomechanical stress in the outer AF, and their disappearance is reported by fifty years of age in the human IVD [10,51]. Sharpey's fibers, made primarily of collagen bundles that connect the iAF to the CEP, peripherally surround the NP tissue, forming an enclosure that connects the IVD to the VB and secures the IVD's position in the spine [10,23]. Interestingly, loading a poorly hydrated disc with a lowered osmotic pressure reportedly increased discogenic pain and placed greater stress on the AF and CEP compared to the loading of a healthy disc [52]. This indicates that degenerative changes in one tissue can indirectly stress the remaining tissues, thus further degenerating the disc at large.

Unlike the NP, the oAF exhibits some level of vascularization even in a healthy disc. During development, eight of ten IVD studies covering ages 0-2 years reported vasculature in the AF; however, it was restricted to the oAF in non-degenerated discs even in aged adults [17]. In the latter group, vascularization of the AF was primarily reported in conjunction with fissures or nerve growth in damaged tissue, with deeper penetration in highly degenerated tissues [17].

2.4. IVD Structure: Cartilage Endplates
The CEP defines the upper and lower boundaries of the IVD, connecting the NP and AF regions to the bony vertebrae [53]. The proximity of the CEP to the epiphyseal arteries allows blood supply to this tissue and promotes diffusion of nutrients to the AF and NP [2,10,53]. Of nine studies examining IVDs from patients with no history of back pain conducted between 1947 and 2007, the majority agreed the CEP has some vasculature throughout embryonic development and infancy before transitioning to an avascular tissue by ages 8-10, with a regression in the amount of vasculature in the CEP over time [17]. CEP degeneration, marked by endplate irregularity, more easily allows for NP herniation into the vertebral body, creating 'Schmorl's nodes' which can be documented through magnetic resonance imaging (MRI) [4,5]. Schmorl's nodes were originally described in 1927 in humans as "Knorpelknötchen" and linked to spondylitis [4]. Additionally, four studies reported vascular ingrowth in the CEP of damaged IVDs between ages 2-65+, indicating that degenerative changes to the IVD may be accompanied by vascularization [17]. Furthermore, thickening and increased irregularity of CEPs is a marked degenerative change to the IVD [5].
Figure 3. Mallory's tetrachrome histological staining on formaldehyde preserved tissues was performed as previously described [14] to visualize the AF and NP tissue across species. All but the human sample were coccygeal IVDs. Ages were generally from adult organisms but varied across species. The dog IVD was from a 1-day old non-chondrodystrophic boxer docked breed. Scale bar reflects 50 µm. Staining represents: nuclei (red); collagen fibrils, ground substance, cartilage, mucin and amyloid (blues); erythrocytes and myelin (yellow); elastic fibrils (pale pink, pale yellow or unstained) [54].

2.5. Multi Species scRNA seq Supports Cell Heterogeneity and Clarifies Cell Identities in the IVD
The physical and molecular composition of the IVD and its largely avascular and non-innervated nature create a harsh cellular environment which impedes the regenerative capabilities of the IVD. Identifying differences in biochemical and structural contributions to the NP, AF, and CEP is important for understanding how genes and proteins contribute to cell type specific signaling and maintenance of the tissue, as well as how their differential expression can support or resist degeneration [2]. While there are some conserved markers across species, a number of proteins and biomarkers are differentially expressed, and it is therefore beneficial to document markers in a species-specific manner. A major challenge in identifying biomarkers of the IVD is that the heterogeneous cell population leads to subclusters of NP, AF, or CEP cells with differential expression patterns. Recent advances in scRNA seq enabled the identification of IVD cell subpopulations, including NP stem-like progenitor cells, AF progenitor cells, CC-like NP cells, endothelial cells (EC) in the oAF, myeloid cells, and lymphoid cells, in both healthy and diseased discs of various species [35,39,55-58]. Recent scRNA seq analysis makes use of statistical methods such as t-distributed stochastic neighborhood embedding (tSNE) to characterize clusters or subclusters of NP, AF, and CEP cells based on their expression profiles [39,47,59,60]; however, labeling and defining these cell groupings currently varies between research groups and experiments. Nonetheless, scRNA seq is a powerful technology that effectively characterizes cellular changes via gene expression changes and shifts in the ratios of cell populations during degeneration, helping to clarify cell identities and monitor senescence during IVD development and degeneration.
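As a rough illustration of the kind of scRNA seq workflow referenced above (normalization, dimensionality reduction, tSNE embedding and unsupervised clustering), a standard scanpy pipeline is sketched below. The input file name, filtering thresholds, and clustering resolution are hypothetical and do not reproduce the pipelines of any of the cited studies.

```python
import scanpy as sc

# hypothetical count matrix of dissociated IVD cells (NP, AF, CEP pooled)
adata = sc.read_h5ad("ivd_cells.h5ad")

# standard preprocessing: filter, normalize, log-transform, select variable genes
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable].copy()

# dimensionality reduction, neighborhood graph, clustering, and t-SNE embedding
sc.pp.pca(adata, n_comps=30)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata, resolution=0.8)   # cluster labels analogous to NP/AF/CEP subclusters
sc.tl.tsne(adata)
sc.pl.tsne(adata, color=["leiden"])

# rank marker genes per cluster to aid manual annotation of subpopulations
sc.tl.rank_genes_groups(adata, groupby="leiden", method="wilcoxon")
```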
Prior to scRNA seq, Sakai et al. utilized colony forming assays to identify NP stem-like and progenitor cells in mouse and human via TEK receptor tyrosine 2 kinase (Tie2/TEK) and disialoganglioside 2 (GD2) as markers [55]. They additionally identified angiopoietin-1 (Ang1) as an important Tie2 ligand in the bovine IVD, which previously displayed anti-apoptotic effects in human NP cells [55]. The frequency of Tie2 positive NP cells decreased as disc degeneration increased, indicating that these cells are significantly impacted by IVDD and aging [55]. Another interesting advancement of this study is the ability to sort and characterize the proliferation capacity of marker positive (+) or negative (−) NP cell subpopulations (Table 1) [55]. As a result, Tie2+/Gd2−/CD24− progenitor cells were described as dormant stem cells in the mouse, human and bovine NP, with Tie2+/Gd2+/CD24− cells exhibiting self-renewal potential and stem cell properties and Tie2−/Gd2−/CD24+ cells being committed to a mature NP phenotype [32,55,61,62]. Overall, the identification of Tie2+/Gd2+ NP cells is important as a potential therapeutic target and for characterizing the IVD cell population [55].

Tie1 and Tie2, which are expressed during embryonic angiogenesis, were recorded in the bovine AF by scRNA seq, marking the first time that Tie2 was described outside of NP cell progenitors [57]. This is particularly interesting as Tie2 is expressed in hematopoietic cells [63]. Furthermore, scRNA seq carried out for cells derived from rat AF and NP tissue identified matrix metalloproteinases MMP3 and MMP13, along with interleukin 11 (IL11), as highly expressed in AF cells; however, because they are rather common in related degenerative diseases, they cannot be considered exclusively as AF biomarkers [58]. Aiming to identify new cell-specific biomarkers in the oAF, iAF, and NP, gene ontology analysis indicated that rat AF cell function is mainly related to fibrosis and stress response, while NP cells are related to degenerative diseases and ECM maintenance [58].

More recently, urotensin 2 receptor (UTS2R)+ cells, a proposed novel progenitor NP (proNP) cell cluster marker, were located in the peripheral NP tissue, which showed an enrichment for progenitor cell markers and stemness genes. These cells were largely Tie2+ or Tie2+/GD2+, indicating that most UTS2R+ cells may be considered proNP cells [39]. These initial tests were performed in mice; however, the group followed up with rat UTS2R+ proNPs, finding that they primarily formed fibrous colony forming units (CFU) in vitro and had more of a spherical formation in matrigel [39]. Human UTS2R+ NP cells cultured in matrigel also had improved spherical formation compared to UTS2R− NP cells [39]. Similarly to the recorded decrease in Tie2+/GD2+ cells in degenerated IVDs, UTS2R+ cells also showed a decrease in cell number in Grade V degenerated IVDs compared to Grade II IVDs, indicating exhaustion of the progenitors [39]. Notably, administration of UTS2R+ but not UTS2R− NP cells to mouse IVD puncture models showed significant attenuation of IVDD, suggesting an important role of proNP cells, which were located in a tenascin-c (TNC) enriched ECM niche in the peripheral NP [39]. Gao et al.
further supported the heterogeneity of the NP cell population in mice with their scRNA analysis defining four NP cell clusters, including cluster 1: NP progenitors, cluster 2: transient NP, cluster 3: regulatory NP, and cluster 4: homeostatic NP, with clusters 3 and 4 making up 77.2% of NP cells [39]. Cluster 3 is thought to play the biggest role in degeneration and the onset of inflammatory cascades based on its enrichment of genes implicated in angiogenesis, TNFα production, and axon guidance; however, ciliary neurotrophic factor receptor (CNTFR) marked cluster 3 cells were significantly reduced in Grade V degenerated discs compared to Grade II [39].

ScRNA seq of cynomolgus monkeys and immunohistochemical (IHC) analysis of monkey and rat coccygeal IVDs enabled the expression analysis of CC-like and NC-like NP cells. Similar expression of a nuclear mediator in the sonic hedgehog (SHH) pathway, glioma-associated oncogene homolog 1 (GLI1), was shared in both CC-like and articular CC cells, as well as the SHH signaling mediator smoothened (SMO), suggesting the activation of hedgehog pathways in both articular CC and a subset of NP cells [56]. Furthermore, IHC showed expression of SHH but not Indian hedgehog (IHH) in vacuolated NC-like NP cells, while both SHH and IHH were detected in bone marrow (BM) cells of a cynomolgus monkey femur, suggesting BM cells as a supply source for hedgehog signaling ligands activating articular CC hedgehog signaling [56]. Notably, SHH was not expressed in all NC cells; however, it is thought that NC cells may begin as SHH+ NC cells and lose SHH expression over time, as SHH− cells were found in individuals older than ten years old [38,56]. Hedgehog signaling may activate the hypoxia inducible factor 1 alpha (HIF1α) pathway in both CC-like and articular CC, based on the demonstrated increase in HIF1α protein levels with the addition of SHH to primary chondrocytes [56]. Furthermore, transforming growth factor beta (TGFβ) is believed to promote expression of SHH, providing a potential link between increased TGFβ, SHH, and HIF1α in NC cells. Notably, this scRNA data came only from male monkeys.

Lin et al. analyzed changes in protein C receptor (PROCR)+ progenitor cells for CC differentiation between healthy and degenerated goat IVDs [35]. Progenitor cells were found in the oAF with three main differentiation fates of regulatory CCs, fibroblast CCs and stress CCs [35]. Notably, with degeneration, differentiation towards regulatory CCs diminished significantly, instead favoring differentiation into a cluster referred to as stress CCs [35]. Regulatory CC activity revolved around protein synthesis, mitophagy, and promotion of TGFβ and Hippo signaling pathways in relation to cartilage formation, while stress CCs showed enrichment of pro-inflammatory tumor necrosis factor (TNF) and IL17 as well as HIF1 signaling pathways, showing a relation between stress CCs and both inflammation and apoptosis with degeneration [35]. The trend towards stress CC in differentiation of PROCR+ progenitor cells in IVDD likely contributes to the increasingly pro-inflammatory environment of IVDD pathology [35].

Caveolin (CAV)+ endothelial cells (EC) were identified in the iAF and oAF of degenerated IVDs [35]. Lin et al.
described the contribution of CALCR and VEGF, among other signaling pathways, to communication between chondrocytes and CAV+ ECs. Chondrocytic secretion of SEMA3C to inactivate ECs and EC secretion of TNFSF10 to disrupt CC function indicate regulatory networks between CCs and ECs in the IVD which may affect vessel infiltration [35].

Table 1. Reported biomarkers and differentially expressed genes for IVD tissues. Abbreviations: ELISA: enzyme linked immunosorbent assay; IHC: immunohistochemistry; (q)RT-PCR: (quantitative) reverse transcription polymerase chain reaction; RISH: RNA in situ hybridization. Gene abbreviations and reference numbers will be provided in Supplementary Table S1.

Environmental Stress Factors
The composition of an IVD's ECM enables the disc to carry the mechanical load and flexibility required for movement; however, multiple environmental and metabolic factors can impact ECM composition and remodeling. As such, maintaining proper ECM ratios and balance is necessary for preserving the healthy microenvironment, and disruption of these levels relates to degeneration. Here we analyze mechanical, environmental, and metabolic stressors which alter IVD ECM and examine their potential contributions to degeneration.

Mechanical Stress and Trauma
Repetitive stress and loads on the spine are impacted by posture and movement patterns, such as bipedal versus quadrupedal movement. As bipedal vertebrates, both male and female humans experience the most stress, and therefore the highest prevalence of degeneration, in the lumbar region [66,71]. Hyperlordotic spines, which display an increased curvature of the lumbar spine, experienced significantly greater stress on the IVD compared to healthy models, while hypolordotic spines, which have a decrease in lumbar spinal curvature, generally experienced diminished IVD stress and minimal compression of the disc [72]. Additionally, the IVD experiences varying levels of loading stress, cycling between periods with decreased loading, typically at night, as the spine maintains a decompressed horizontal lying position, and greater loading periods while assuming an upright position that leads to a decrease in overall disc height throughout the day [73]. The average combined diurnal loss of disc height due to compressive loading throughout the day was 5.91 mm [73]. Changes in spinal loading are important as they enable the cyclical expulsion and intake of fluid in the disc, which affects solute diffusion and transportation of nutrients to the different disc tissues [16]. However, the efficiency of solute diffusion is impacted by degeneration. IVDs demonstrated a significant decrease in solute diffusion with calcification of the CEP, showing an inverse relationship between solute diffusion and CEP calcification levels [74]. This illustrates that physical loading changes in the disc can impact solute diffusion and nutrient transport [10].
When considering compressive loading of the spine, it is important to also consider how the stress is dispersed throughout a tissue. For example, uneven loading of a bovine IVD, producing concave and convex sides, primarily stressed the AF on the concave side, leading to decreased AF tissue, increased caspase-3 (CASP3), and reduced ACAN, while on the convex side, MMP1, ADAMTS4, IL1β, and IL6 mRNA was upregulated in the AF but not the NP compared to controls [75]. This indicates not only a tissue specific response to loading forces but also demonstrates how uneven stress can influence the expression of pro-inflammatory cytokines. Furthermore, bovine coccygeal IVDs cultured in TNFα containing medium and exposed to dynamic loading showed a significantly increased percentage of TNFα+ cells in the NP compared to control or static loading groups, indicating the importance of convective transport for TNFα penetration into a healthy NP [76]. Notably, increased TNFα levels in NP cells promoted ACAN degradation and a significant increase in IL6, IL1ß and TNFα production as a sustained inflammatory effect of TNFα [76]. Spring loaded compressive force applied to canine spines for up to one year did not produce changes in PG or collagen compared to controls [77]. However, static compressive forces applied to rat tail IVDs resulted in p53 mediated apoptosis and decreased NP and AF cell numbers as well as a decreased proportion of cells expressing NC markers [15]. This further suggests that the type of loading greatly impacts the IVD's stress response.

Beyond compressive forces, Rohanifur et al., 2022 described cellular remodeling after lumbar disc puncture (LDP) induced trauma through changes in gene expression. Sequencing compared pooled male rat lumbar (L) 4-L5 IVDs following LDP to L2-L3 unpunctured control discs [78]. A shift in cell clusters was noted eight weeks post LDP, with newly identified ECs and an increase in the main cluster of IVD cells in the treatment group at the expense of cells contributing to the myeloid and lymphoid clusters. Interestingly, subclusters of AF cells expressing nerve growth factor (Ngf) and its receptor (Ngfr) increased in degenerated discs. The authors speculated that the increased number of cells expressing Ngf and Ngfr, in combination with the newly registered lymphoid cells in degeneration, may suggest signaling between the neuronal and immune cell populations during degeneration and injury to the disc [78]. Beyond age-based changes, Moseley et al.
sought to determine sex-based associations between an annular puncture model of IVDD and pain. Male and female rats were subjected to annular disc puncture or sham surgery to determine the sex-association of IVDD induction, radiologic IVD height, histological grading, or biomechanical testing [79]. Female rats showed the greatest association between injury, histological grading and IVD height, while male rats additionally showed a significant association with von Frey thresholds, linking injury to pain levels. Furthermore, evidence suggested the decrease in IVD height is stronger in more caudal discs, indicating significant disc level variation that should be accounted for when creating experimental models [79]. Lastly, considerable variation in methods of inducing IVDD in animals has been shown to produce differing IVDD phenotypes [80], which may result in altered gene expression profiles and biomarkers compared with naturally occurring IVDD. The use of marker gene panels for each tissue type is therefore recommended. Overall, this highlights the importance of considering both the age and sex of animals when determining experimental methods of inducing IVDD, including the type and distribution of loading stresses, the size of instruments used for disc puncture and the level of the IVD examined.

Metabolic Stress
The avascular NP establishes a harsh environment with cells depending on anaerobic lactic acid fermentation that promotes a decrease in pH [48,81]. However, a decrease in lactic acid production in isolated bovine NP cells following acidification of the medium or low oxygen content was described, indicating a negative Pasteur effect [82]. Furthermore, the oxygen consumption rate of bovine NP cells in medium is affected by both oxygen content and pH, with oxygen consumption significantly decreased in environments with lower pH and lower oxygen [82]. Typically, acidification of the IVD accompanies degeneration; in human IVDs, the pH of a healthy disc is approximately 7.1, while degenerated discs have a pH of 6.5 and severely degenerated discs were recorded with a pH as low as 5.7 [83]. Matrix synthesis disruption occurs with disc acidification, as demonstrated by the significant drop in PG synthesis when the pH falls below 6.8 [81]. Overall, the decreased PG content with disc acidification can further contribute to degeneration through the related decrease in IVD osmotic pressure that disrupts load bearing and solute transportation abilities [81]. Acidification of the IVD further leads to increased production of known pro-inflammatory markers, with an 81-fold increase in IL1ß, 7.8-fold increase in IL6, 3-fold increase in NGF and 4.6-fold increase in BDNF when cells were cultured at pH 6.5 compared to controls [83]. At pH 6.2, bovine NP cells showed a slight reduction in viability; however, human NP cells experienced cell death at both pH 6.5 and 6.2, no proliferation at pH 6.8, and effective proliferation at pH 7.1 and 7.4 [83]. This suggests that an acidic pH strongly promotes expression of matrix degrading proteins and pro-inflammatory cytokines, promotes cell death, and prevents NP proliferation in humans. Furthermore, it indicates a species dependent difference in the pH levels that trigger cell death and degenerative changes, which should be clarified in other species. Fermentation provides less usable chemical energy in the form of adenosine triphosphate (ATP). This crucial energy providing nucleotide influences molecular levels of PGs, collagen, and other ECM molecules [84]. As previously mentioned, disruption of these
levels, such as decreased PG, can contribute to degeneration. Bovine IVD cells cultured with 100 µM ATP for two hours showed significant increases in internal ATP content in NP and AF cells, with NP cells showing higher levels than AF cells in both control and experimental groups [84]. This suggests extracellular ATP treatment leads to intracellular ATP increases. Interestingly, 5x more ATP was needed in NP versus AF cells to show significantly increased PG and collagen levels compared to controls [84]. Higher mitochondrial protein activity in the mouse NP suggests potentially higher mitochondrial activity in the NP region at large, thus offering a potential explanation for the generally higher ATP concentration required for influence on ECM synthesis [66]. While reduced mitochondria were described in human notochord cells [30], fewer mitochondria in IVD-derived compared to adipose tissue-derived cells were noted but not quantified (Figure 4). Furthermore, 16 h of ATP treatment led to significant increases in Acan and collagen II gene expression in both AF and NP cells [84]. Overall, this indicates that extracellular and intracellular levels of ATP influence levels of PG, collagen, and Acan, thus altering ECM composition.

Regulated Cell Death and Tissue Homeostasis
Regulated cell death is an important process to maintain homeostasis of any tissue under stress [7,8]. Increased cell death is common in IVDD, of which inflammasome-mediated, caspase-dependent pyroptosis is thought to promote a pro-inflammatory response [85]. In pyroptosis, nucleotide-binding domain and leucine-rich repeat (NLR) and nod-like receptor protein 3 (NLRP3) recruit caspase-1 (Casp1) [85]. This promotes pore formation in the plasma membrane, allowing for a quick water influx, creating swelling and subsequent cell lysis [85]. Pyroptosis is associated with the release of activated IL1β and IL18 following cell lysis and is further linked to matrix degradation, NP cell apoptosis, CEP tearing, vascularization, and nerve ingrowth [85]. Milk-fat globule-EGF factor 8 (MFG-E8), a modulator of the NLRP3 inflammasome, inhibits pyroptosis in NP cells and IVDD [86]. Furthermore, a positive correlation exists between disc cell apoptosis and caspase-12 (Casp12), GRP78 expression, and cytochrome C mitochondrial release in rats [87]. The induction of AF cell apoptosis via sodium nitroprusside resulted in upregulation of Casp12, GRP78, and GADD153, while cytochrome C levels increased in the cytosol [87]. Ferroptosis, cell death through lipid peroxidation initiated by iron, was first described in 2003 but not coined until 2012 [8,88,89]. Understanding the specific role of ferroptosis in IVDD onset and development is ongoing; however, studies using animal models support a role for ferroptosis in IVDD. AF and NP cells in a simulated oxidative stress environment demonstrated an upregulation of prostaglandin-endoperoxide synthase 2 (PTGS2) and downregulated glutathione peroxidase 4 (GPX4) and ferritin heavy chain (FTH), suggesting ferroptosis in the degenerated IVD [90]. Additionally, CEP degeneration and IVDD in mouse iron overload models showed degeneration occurred in a dose
dependent manner [91]. In addition to pyroptosis, apoptosis, and ferroptosis, necroptosis also contributes to regulated cell death in IVDD. Necroptosis occurs due to death receptor activation, including TNFα activated tumor necrosis receptor (TNFR1) and Fas, or Toll-like receptors TLR3 or TLR4 [7]. Necroptosis markers were found in human degenerated discs [92]. MyD88 was found colocalized with necroptosis markers, and inhibition of MyD88 rescued necroptotic NP cells, suggesting a role of MyD88 in IVDD [92]. Unlike ferroptosis, necroptosis activation is not a mitochondria-influenced process [7]; however, necroptosis can impact mitochondrial ultrastructure [92]. Regulated cell death, including pyroptosis, ferroptosis, necroptosis and apoptosis, in the IVD promotes pro-inflammatory cytokines and activation of caspases, responsible for signaling and inflammatory cascades, autoimmune responses, and cell death [7,85,87,93], which further accelerates degeneration of the IVD.

Protagonists and Antagonists in the Proinflammatory IVD
Damage-associated molecular patterns (DAMPs) are a group of trauma or tissue damage response molecules, including ECM fragments recognized by pattern recognition receptors (PRRs), that activate signaling pathways and trigger the release of inflammation mediators [94,95]. As such, DAMPs activate microglia and macrophages in aging and chronic inflammation and exacerbate inflammation through pro-inflammatory cytokine release via NLRP3 and TLRs [96-98] (Table 2). A 2020 study used isolated NP cells from canine herniated IVDs and exposed them to 30 kDa fibronectin fragments (FnF), which stimulated a significant upregulation of IL6 and IL8; while the response varied between donors, this suggested FnF contributed to a pro-inflammatory environment and may offer another contributing family of molecules that assist in degeneration [98].

Phagocytotic macrophages have complex contributions to inflammation and IVDD. In response to injury, macrophages transition from a pro-inflammatory M1 phase to an anti-inflammatory M2 phase, further categorized as either M2a or M2c, of which M2c is thought to assist with tissue healing, yet secretes MMP7, MMP8, and MMP9, which are known for pro-inflammatory and ECM remodeling effects [99]. The inflammatory response to an increase in expression of pro-inflammatory cytokines is a major contributor to IVDD. Pro-inflammatory cytokines such as TNFα, IL1β and IL6, amongst others, are predominantly produced by activated macrophages [99]. Mediating this response is one method for mitigating inflammation and IVDD. Cells positive for CCR7, a cell surface marker associated with pro-inflammatory M1 macrophages, and CD206+ M2a cells, considered to have anti-inflammatory properties, did not significantly increase with degeneration [99], hence suggesting that IVDs fail to reach the associated anti-inflammatory and tissue repair stage. CD163+ cells, associated with M2c, were significantly increased in degenerated NP, AF, and CEP tissues compared to healthy tissues according to the severity of degeneration, and may be beneficial markers for degeneration and inflammation [99]. Interestingly, both M2a and M2c macrophages are associated with tissue repair. However, they are also associated with various diseases including spondyloarthropathy and pulmonary fibrosis, among others, and their specific impact remains an area of active research. Macrophages might enter through the degenerated CEP [99].

To better understand the impact of macrophages on the inflammation of bovine IVDs, Silva et al.
cocultured bovine IVD biopsies in IL1β-containing medium, which led to increased IL6, IL8, and MMP3 levels; however, when cultured in IL1β-containing medium with macrophages, an insignificant reduction in IL6 and IL8 expression was noted and no increase in MMP3, suggesting macrophages interfered with the production of MMP3 and tissue remodeling in pro-inflammatory environments [100]. A different study cultured human NP cells and mouse IVDs in either fetal bovine serum (FBS), TNFα, or TNFα and M2 macrophage conditioned medium. The co-cultured group had an upregulation of ACAN and type II collagen compared to TNFα only, indicating a positive effect of M2 macrophage conditioned medium on ECM synthesis [101]. TNFα upregulated MMP13, ADAMTS4, ADAMTS5, and IL6, while the TNFα and M2 macrophage co-culture groups reversed the pro-inflammatory effect [101]. These studies show a potential for macrophages to alter inflammatory responses in the IVD.

Injection of TNFα into bovine IVDs led to upregulation of MMP3, COX2, IL6, IL8 and ADAMTS4 [102]. Interestingly, a significant increase in COX2+ cells was reported in severely degenerated IVDs compared to healthy or mild to moderately degenerated IVDs in humans and dogs [103,104], supporting it as a marker of severe degeneration. Isolated bovine NP cells treated with either TNFα or the inflammatory agent lipopolysaccharide (LPS) and subjected to decreases in osmotic loading produced sustained changes in F-actin expression in treated cells compared to controls [105].

The monomeric form of C-reactive protein (mCRP) is commonly used as an acute phase marker for progression of inflammatory diseases due to its strong proinflammatory nature in chondrocytes [106,107]. Compared to the native pentameric CRP, both the intermediate pentameric CRP (pCRP*) and mCRP showed pro-inflammatory effects, especially in CCs [106,108]. In recent years, there has been a rise in studies exploring the prevalence of CRP in inflammatory mediated degenerative conditions. A study of patients recommended to undergo lumbar spinal surgery due to single-level discopathy found a higher prevalence of elevated blood CRP levels compared to those not recommended for surgery; individuals with multiple levels of discopathy or rheumatologic conditions were excluded from the study [109]. CRP levels, if tied to inflammatory progression, would likely be elevated in individuals with multiple affected discs or higher Pfirrmann grades, similar to mCRP levels being associated with the severity of OA. Degeneration of articular cartilage is a feature of both OA and IVDD [107]. Ruiz-Fernandez et al.
were the first to functionally confirm the presence of CRP in both human AF and NP cells [106]. Furthermore, nitric oxide, known for its upregulation of MMPs and cytotoxicity and promotion of ECM degradation, is increased with activation of mCRP, in addition to MMP13, IL6, IL8, and LCN2, demonstrating a correlation between mCRP and inflammatory mediators associated with increasing degeneration [106]. Pathways important for mCRP signaling in the AF include NF-kB, ERK1, ERK2, and PI3K [106]. To understand the potential role of CRP in other animals, follow-up studies will be necessary. CRP may be a useful acute marker of inflammation associated with IVDD; however, it should be noted that CRP plasma levels are also increased in rheumatoid arthritis, infection, cancer, and tissue trauma [110]. Lastly, a recent study designed C10M, a low molecular weight compound, to block pCRP binding to the phosphocholine groups of damaged cell membranes that mediate the conformational change from pCRP to pro-inflammatory pCRP* and mCRP [108], thus presenting a possible anti-inflammatory tool to combat CRP-induced inflammation in IVDD.

While many cytokines have been shown to promote inflammation, not all cytokines promote IVDD. Utilizing a rat caudal IVD puncture model to determine the effect of the anti-inflammatory chemokine IL10 on IVD degeneration, rat IVDs were injected with 20 microliters of IL10, or a saline solution as a control, on a weekly basis [111]. Among sections of IVDs from degenerate, IL10, and saline treated groups, the IL10 groups showed decreased levels of p38 mitogen activated protein kinase (MAPK) and collagen X in addition to higher collagen II expression levels [111]. This finding was further supported by the maintenance of a healthy ring-like AF structure in IL10 treated cohorts [111]. Findings from MRI, histological and IHC analysis all suggested a positive effect of IL10 in promoting a healthy IVD microenvironment and indicated reversal of disc degeneration [111]. IL1β-induced NP degeneration increased transcription of Col10 and decreased Acan and Col2 mRNA levels [111]. Samples treated with both IL1β and IL10, to determine potential anti-inflammatory effects of IL10, did not show significantly altered Acan mRNA levels; however, IL10 increased Sox9 and Col2 expression while decreasing Col10 levels [111]. Following an initial stress response, IL10-treated samples showed a significant decrease in p38 phosphorylation as well as lowered p38 MAPK activation [111]. This suggests anti-inflammatory effects of IL10 in the rat IVD through a reduction in p38 stress-activated signaling. Interestingly, in humans, IL10 was found to increase sensitization of the IVD [112,113], indicating deviation between species in response to IL10 treatment.

Evidently, dynamic loading and biomechanical and metabolic stress affect IVD tissue through nutrient diffusion, decreased ECM protein synthesis, increased pro-inflammatory cytokines and apoptosis, and diminishing cell numbers, leading to long-term changes of ECM organization that can alter disc height, hydration and movement of solutes, hence resulting in sustained physical and biochemical changes of the IVD that promote IVDD across species. Continuing to understand how these individual facets of stress function together to promote IVDD pathophysiology is ultimately important for determining a functional therapy.
Table 2. This table describes several differentially expressed genes and proposed markers for degeneration and inflammatory responses in IVDD compared to healthy or herniated discs, as well as the species in which they were determined. Abbreviations: ELISA: enzyme-linked immunosorbent assay, IHC: immunohistochemistry, (q)RT-PCR: (quantitative) reverse transcription polymerase chain reaction, RISH: RNA in situ hybridization. Gene abbreviations and reference numbers will be provided in Supplementary Table S1.

Species | Degenerating IVD | Method | Source
Reduced GRB10 in lumbar IVDD compared to healthy controls. Not detected in piriformis syndrome, sacroiliac joint pain, entrapment neuropathy and lumbar disc herniation, suggesting a biomarker for lumbar IVDD.

4. Insights from Different Species

While IVD tissue of different species appears relatively consistent, with most dissimilarities found in the NP (Figures 2 and 3), the variation in movement patterns, localization of highest IVDD prevalence, response to interleukins, and NC-like cell disappearance between species indicates the importance of considering IVDD research advancements in the context of species-specific research, as well as the consideration of research methodology for induced degeneration. The following section will discuss structural and molecular similarities and differences (Table 2) as well as developments in IVDD research in a species-dependent manner. It is notable that the extent of research varies by species, with some having only recorded the first cases of IVDD in the last decade.

4.1. Primates

4.1.1. Humans

While spine development is typically considered to occur in a rostro-caudal direction, a 2018 study proposed a newly described, verifiable process for NC development in humans (Homo sapiens) [127]. Starting with formation of the NC process (days 17-23), its thickening and an epithelial-to-mesenchymal transformation (EMT) mark the prechordal plate stage [127]. Once embedded into the endoderm, the NC process is considered the NC plate (days 19-26) before it further develops into the definitive NC proper (days 23-30) [127]. The NC remains in physical contact with the ectoderm-derived central nervous system (CNS) throughout NC development as it attaches to the "ventral floor" of the CNS [127]. The NC plate, NC process, and somites are thought to develop from the cranial end and progress caudally [127,128]. However, de Bree et al. proposed a bidirectional developmental shift away from craniocaudal development by 23-30 days, instead favoring a central origin of development that progresses in both the cranial and caudal directions [127]. This allowed the NC ridges to fully close in the central region of the embryo, forming the definitive NC fully released from the endoderm and cranially released from the neural tube by day 26 of development [127]. The secretory activity of NC cells begins around day 34 and peaks by 50 days in humans, enabling the production of a NC sheath [129]. Overall, this implicates all three germ layers in NC development, offering a potential explanation of the complex molecular nature of IVD cells.
Human IVDs are subject to naturally occurring degeneration, producing pain in a large portion of the population. As in other species, the pathology of IVDD is still under research. Differentially expressed genes in IVDD patients, such as lowered GRB10, increased serum levels of markers such as IL6 and ferritin, and other notable differences can contribute to IVDD diagnosis (Table 2) [91,114,118]. Notably, all human studies in Table 2 examined cases of naturally occurring IVDD in humans. As the complexity of IVD tissue and IVDD degeneration has become apparent, it is important to consider how induced degeneration, such as through LDP, may activate only a portion of the degenerative processes that contribute to IVDD. Therefore, naturally occurring degeneration may show different biochemical and/or clinical presentations from induced IVDD.

4.1.2. Non-Human Primates

Rhesus monkeys (Macaca mulatta), often used in research, have seven cervical vertebrae, 12 thoracic, seven lumbar, three fused vertebral bodies in the sacral spine, and approximately 20 caudal vertebrae [130]. Naturally occurring IVDD has been recorded in rhesus monkeys [131]. Furthermore, naturally occurring degeneration occurs in baboons, with severe degeneration after 14 years of age that both radiographically and histopathologically resembles the degeneration of human IVDs [132]. While IVDD occurs spontaneously in rhesus and cynomolgus monkeys, anterolateral annulus resection can also be performed to induce IVDD for research purposes [131].

Organization of the IVD is largely similar in monkeys and humans. In thoracic male rhesus IVDs, thicker collagen fibrils (70-110 nm) localized to the oAF and CEP, while thinner fibrils (40-50 nm) localized to the iAF and the outer NP [130]. Localization of collagen I and collagen II remained the same across disc levels (C5-C6, T3-T4, T9-T10, L2-L3, L4-L5); however, disc height was significantly lower in the cervical discs, registering only 55% that of the lumbar disc [133]. This indicates significant disc differences that are important to consider, especially in studies using different disc levels as controls compared to experimental groups. In cynomolgus monkeys, fibronectin is found in the CEP and oAF but not the iAF or NP [134]. Interestingly, radial tears in the AF are associated with degenerated discs in humans, but not in rhesus monkeys, which may be due to postural differences: the rhesus monkey uses quadrupedal movement, although it mimics the spinal loading of humans when sitting [135]. Lastly, sex-based differences of degeneration were identified in monkeys. Male rhesus monkeys had significantly more severe osteoarthritis compared to their female counterparts, as well as a significantly higher prevalence of OA [136]. Overall, this supports non-human primates as a beneficial model to study human IVD, disc level differences, and sex-based differences in certain manifestations of degeneration, which should all be considered during experimental model determination.
Dogs

Dogs (Canis lupus) have a long history of IVDD and protrusion or extrusion of the disc [125]. The first diagnosis of IVDD in canines, then termed echondrosis intervertebralis, was reported in 1896 by Herrmann Dexler, as cited in [137]. Since its identification, IVDD has remained a major contributor to pain in dogs. Recent studies report that IVDD affects approximately 2-5% of all dogs, with higher prevalence in specific breeds [138]. IVDD and degeneration in canines are typically evaluated with consideration of chondrodystrophic (cd) and non-chondrodystrophic (ncd) dog breeds. Between these groups, presentation of IVDD differs both in onset and in clinical findings. Structurally, chondrodystrophic breeds have extremely short limbs compared to their torso length [139]. Additionally, cd breeds typically experience NP extrusion as early as two years of age and early signs of ECM breakdown by three months, while ncd breeds more often experience AF protrusion and IVDD at five years or older [139]. NP extrusion, enabled by the calcification of NP cartilage and dehydration, in IVDD is considered Hansen Type I, while AF protrusion is classified as Hansen Type II IVDD [10,138-140]. Radiographic identification of IVD calcification in many cases can be used in the visualization of acute disc extrusion [10]; however, CT is suggested as a superior diagnostic tool in dogs presenting with signs of IVDD [141]. In a study with 25 extruded dachshund discs, radiographic evidence of calcification was found in 68%, while both CT and histopathology identified calcification in 100% [141]. This suggests that all extruded discs in dachshunds have calcification and supports CT as a superior diagnostic device for identification of IVDD compared to radiography [141]. Interestingly, three variants have been associated with disc calcification in Danish wire-haired dachshunds: a single nucleotide polymorphism in the 5′ untranslated region of KCNQ5 and two in MB21D1 [142]. Identification of variants associated with disc calcification may assist selective breeding practices to mitigate higher genetic probabilities of IVDD in some dachshunds [142]. Notably, radiographic reports of calcification are most frequently described in the thoracic spinal region, with highest frequency between T10 and T13 [143]. An additional study reported the highest localization of IVD herniation to the thoracolumbar and lumbar spinal regions, with 46 of 60 and 14 of 60 dogs, respectively [144]. Interestingly, the canine ventral AF is 2-3 times thicker than the dorsal AF; thus, disc herniations are more likely to occur on the dorsal side, allowing nerve impingement and compression of the spinal cord [6,16]. Breeds of cd and ncd dogs do appear to share some aspects of IVDD development. A 2017 study concluded that both cd and ncd dogs experience chondroid metaplasia in the NP, as indicated by the lack of fibrocytes across degenerative stages [145]. This contradicts previous work describing NP degeneration in ncd dogs through fibrous metaplasia [137,143]. Notably, a study comparing 8117 reported cases of IVDD in canines found Dachshund, Pekingese, Beagle, Welsh Corgi, Lhasa Apso, and Shih Tzu breeds to have a significantly greater risk of developing IVDD [146]. Their longer spines and shorter legs could produce heightened strain on the spine if lacking muscular support, especially with quick jarring movements or excessive loading [140,147].
Functionally, the NP, AF, and CEP in canines are largely consistent with other species. When compressed through mechanical loading, the high osmotic pressure in the NP is contained by the AF and CEPs, preventing extrusion [16], and the IVD is further supported and loaded by trunk muscles that enable spinal movement including flexion, extension, and twisting [16]. Due to the lack of blood supply to the IVD, the iAF and NP regions rely on CEP transmission of oxygen and glucose, while the outer vascularized portions of the AF can take in additional nutrients [16]. Fluid flow carrying albumin and large molecules is maintained as the IVD experiences loading [16]. Similar to humans, negatively charged aggrecan attracts water into the bean-shaped canine NP, resulting in an osmotic gradient, with 80% water in the NP and 60% water in the AF of a healthy IVD, and approximately 50-80% water in the CEP [16]. Like humans, dogs experience a shift from NC cells to CC-like NP cells; however, this typically occurs towards the earlier part of a dog's life, with most cells replaced by one year, and degeneration expected far later in cd breeds [125]. Additional scRNA seq of canine IVDs in healthy and degenerative states would be helpful to further clarify the shift from NC-like to CC-like NP cells in aging and degeneration. Similar to humans, the TZ also blends the mucoid and fibrous phenotypes of the NP and AF, respectively [16].

Mice

Much of our understanding of IVD development comes from genetic engineering and fate mapping experiments performed in mice (Mus musculus) [34]. Compositionally, mice were described to have seven cervical, 13 thoracic, six lumbar, four sacral, and 28 caudal vertebrae [148,149]. Mouse IVDs, however, are significantly smaller than human discs, rendering some analyses challenging. Due to the small size of the mouse IVD, total RNA from each disc may require pooling with adjacent discs to obtain the necessary amounts of RNA for the molecular assessment of transcriptome changes and the analysis of MMPs, matrix markers, and inflammatory cytokines [150]. As such, RNA in situ hybridization (RISH), IHC and immunofluorescence may be favored over quantitative reverse transcription polymerase chain reaction (qRT-PCR) for those seeking a narrower analysis or analysis at the single-cell level, as the heterogeneous cell population often requires single-cell resolution and cell pooling may mask important differences in gene expression [14,150].
Further physiological differences exist between humans and mice. Proteomic profiling comparison between mice and humans showed some divergence in the NP region; however, AF data were largely consistent [66]. The proteomic divergence of the NP may be explained through varied loss of NC cells. For many years, mice were considered to retain their NC cells for a larger proportion of their life, while humans are thought to lose theirs by ten years old (Figure 2) [150]. However, when defining the disappearance of NC cells, consistent definitions of NP cells are crucial: NC cells which reside in the transient NC sheath, NC-like NP cells which have similar expression profiles to NC cells, and CC-like NP cells [3]. Mice and humans share COL12A1 as an AF marker and KRT8, KRT19 and CD109 as NP markers (Table 1) [66]. Further differences exist between mouse coccygeal and lumbar IVDs, which are round and kidney shaped, respectively [151]. Additionally, lumbar mouse IVDs had greater enrichment for mitochondrial proteins compared to tail discs, with the greatest differences in the NP region [66]. In coccygeal IVDs, the AF had little variation in thickness around the circumference of the disc, while in the lumbar mouse IVD the AF was thicker ventrally compared to dorsally [151]. This thicker ventral AF region in mouse lumbar IVDs is consistent with findings in canines [16]. Additionally, Brendler et al. noted chondrocytic eosinophilic-type cells in the NP [151]. If eosinophilic cells are present within the mouse NP, this would challenge the idea that the NP is an immune-privileged tissue. While this cell type was noted in 3-month-old mature mice in this study, the IVDs were not considered significantly degenerated [151]. As such, it may be beneficial to explore the potential presence of eosinophilic cells in the NP of younger mice. Because the disappearance of NC and NC-like NP cells is linked to the onset of IVDD in humans, further research may be beneficial to understand the effects of retaining NC cells in addition to any potential effects of NC differences between lumbar and coccygeal IVDs.
Rats

Rats (Rattus norvegicus), often of the Sprague Dawley or Wistar strain, are research staples [152] and have been reported to retain NC cells throughout their lives (Figure 2) [58]. Rats are commonly used as a model organism in IVDD research. Since rats have larger IVDs than mice, they are favored for analyzing therapeutic injections [80]. Careful methodology determination for surgery, puncture models, and injections is necessary to elicit the desired IVDD phenotype. Needle size for LDP-induced IVDD impacts the severity of degeneration and the onset of pain hypersensitivity [80]. Use of an 18-gauge (G) needle is required to elicit severe degeneration with hypersensitivity to mechanical stress in female rats [80]. Additionally, age-dependent changes in protein expression, determined through cDNA microarray, qRT-PCR, IHC analysis and histological stains, support variation in NP protein expression [65]. As such, age-dependent IVD tissue biomarkers are helpful in proper characterization of the IVD and IVDD. A decline of neuropilin-1 (Nrp1) was detected in mature rats compared to younger controls [65,122]. NRP1, a receptor for SEMA3A, which is thought to perpetuate an avascular environment, plays a role in regulating MMP13 expression in human chondrocytes [153]. Furthermore, brachyury (Tbxt) mRNA was detected in all analyzed age groups; however, the protein was only detected at 1 month, suggesting the importance of brachyury protein in the context of NC cell loss with aging [65]. Basp1, Ncdn, and Arp1, based on their expression in immature NP cells and their neuronal association, were recommended as potential NC-like NP markers [65].

Diabetes mellitus has been implicated as an indicator of, and contributor to, IVDD. To understand type 1 diabetes mellitus (T1DM)-induced IVDD, Yu et al. confirmed IVDD in streptozotocin (STZ)-induced T1DM rats and performed NP transcriptome analysis in conjunction with microarray screening to identify genes relevant to IVDD in the T1DM rats [123]. This resulted in 35 potential genes of interest, which were enriched in ECM and cytokine adhesion binding and displayed prominent molecular function in apoptosis regulation and morphogenesis [123]. Interaction analysis of the 35 top differentially expressed genes (DEGs) revealed Bmp7, Ripk4, Wnt4, Timp1, Col11a1, Acp5, Vdr, Col8a1, Aldh1a1, and Thbs4 as the ten core genes in IVDD in STZ-induced T1DM rats, in descending enrichment [123]. Further ELISA results found that the pyroptosis-related proteins NLRP3, IL18, IL1β, and gasdermin-D had significantly higher expression in T1DM rats compared to controls, suggesting the potential involvement of NP pyroptosis in IVDD in T1DM rats [123]. Bmp7 was found to inhibit the activation of the Nlrp3 inflammasome and NP pyroptosis, which mitigated IVDD in STZ-induced T1DM rats [123]. Overall, results suggest Nlrp3 inflammasome activation and NP cell pyroptosis as likely contributors to IVDD and Bmp7 as a potential inhibitor of IVDD pathology in T1DM rats [123].
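The screening-and-ranking workflow described above (filtering a microarray screen down to candidate DEGs, then identifying "core" genes from an interaction network) follows a common pattern in such studies. The sketch below illustrates only that pattern; it is not the authors' pipeline, and the column names, cutoffs, toy expression values and toy edge list are hypothetical placeholders.

```python
# Illustrative sketch of a DEG-filtering and hub-gene-ranking step.
# All column names, thresholds and data below are hypothetical placeholders,
# not taken from the study discussed in the text.
from collections import Counter

import pandas as pd


def filter_degs(results: pd.DataFrame, p_cutoff: float = 0.05, lfc_cutoff: float = 1.0) -> pd.DataFrame:
    """Keep genes with adjusted p-value below p_cutoff and |log2 fold change| above lfc_cutoff."""
    mask = (results["adj_p"] < p_cutoff) & (results["log2fc"].abs() > lfc_cutoff)
    return results.loc[mask].sort_values("adj_p")


def rank_core_genes(edges: list[tuple[str, str]], top_n: int = 10) -> list[tuple[str, int]]:
    """Rank genes by degree (number of interaction partners), a simple proxy for 'core'/hub status."""
    degree: Counter = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return degree.most_common(top_n)


if __name__ == "__main__":
    # Toy stand-in for a microarray screen (gene symbols reused from the text for readability only).
    screen = pd.DataFrame(
        {
            "gene": ["Bmp7", "Ripk4", "Wnt4", "Timp1", "GeneX"],
            "log2fc": [2.1, -1.8, 1.4, 1.1, 0.2],
            "adj_p": [0.001, 0.003, 0.010, 0.040, 0.300],
        }
    )
    degs = filter_degs(screen)
    print("Candidate DEGs:", degs["gene"].tolist())

    # Toy interaction edge list; in practice this would come from an interaction database.
    toy_edges = [("Bmp7", "Wnt4"), ("Bmp7", "Timp1"), ("Wnt4", "Timp1")]
    print("Ranked by degree:", rank_core_genes(toy_edges))
```

In a real analysis the thresholds, the enrichment step and the interaction network would be chosen to match the study design; the point here is only the shape of the filter-then-rank procedure.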
Rabbits

New Zealand White rabbits (Oryctolagus cuniculus) are commonly bred for research, yet other breeds are also used. Rabbits are reported to reach skeletal maturity within 4-6 months of birth, and most rabbits used for induced models of degeneration range from 1.5 to 9 months of age [154]. In these IVDD models, degeneration can typically be identified around four weeks post-surgical induction [154]. Because rabbits are a commonly used species in induced IVDD models, quantitative analysis of degeneration is a useful tool in the examination and interpretation of results. Sheldrick et al. found decay variance mapping of multi-echo transverse relaxation time (T2) MRI scanning in rabbits to be a beneficial quantitative analysis method, with higher sensitivity and specificity than T2 relaxometry for disc puncture models [155].

Rabbits, similar to mice, retain NC-like cells at birth and for a large portion of their lives [156]. Kim et al. concluded that CC cells located in the central NP originate from the CEP [157]. Hematoxylin and eosin staining in combination with polarized light microscopy enabled visualization of recently formed fibrocartilage, with collagen fibers showing double refraction and consistent staining between the NP and CEP [157]. Rabbit annulus puncture models of degeneration revealed CEP remodeling with increased bone modeling and decreased small-molecule transport, as concluded from lowered gadodiamide diffusion in the AF four weeks post-puncture and in the NP at twelve weeks [158].

Cows

Holstein cattle (Bos taurus) are a popular breed in research. Developmentally, bovine IVDs, like those of other species, have a NC-derived NP region and a mesodermal sclerotome-derived AF and CEP [9,14]. A NC sheath containing acid mucopolysaccharides forms and thickens, persisting until around 45-50 days of gestation, at which point the NC begins to break down [129]. NC cells within the NC sheath increase throughout embryonic development until the fetus reaches 30 mm in length, after which cell numbers within the NC sheath decrease [129]. When the fetus reaches 55 mm, resegmentation of the vertebrae and subsequent changes of the NC occur [129]. NC cells in the bovine IVD have a granular appearance due to enzymatic activity, with secretory activity recorded as early in development as the 10 mm stage [129]. Considering developmental similarities, relative ease of obtaining tissue, and degenerative capacities, Holstein cattle continue to be utilized in the field.

Contributing to the growing understanding of IVD pathophysiology, experimental results from bovine RISH, fluorescent RISH and other transcriptional profiling have all further supported the IVD tissues as heterogeneous cell populations and have enabled further tissue marker discovery, including stemness markers supporting progenitor and stem-cell-like qualities of some IVD cell groups (Table 1) [14,28,67,69,70]. This information supports the need for single-cell-level analysis of IVD cells. ScRNA seq of the bovine IVD has further provided evidence of stemness, suggestive of IVD progenitor cells [68]. Furthermore, analysis of a young, healthy and a degenerating, aged bovine matrisome helped to clarify changes in protein expression over time [159]. This revealed increased protein expression of PRELP and FINC in degenerating, aging IVDs along with a high protein expression of Col12 and Col14 in healthy, young IVDs [159].
Beyond protein expression changes, bovine cell density diminishes significantly over time from conception onwards in both the AF and NP regions [160]. Between conception and four weeks of gestation, cell densities in the bovine AF and NP were recorded at 11,435 and 17,426 counts/mm², respectively [160]. This density drops dramatically, to 1258 and 488 counts/mm² for the AF and NP between birth and 1 week, and then down to 71 and 106 counts/mm² between five and ten years [160]. Figure 5 reflects a lower cell density in the bovine NP compared to the AF after nuclear DAPI staining, showing cell arrangement in the IVD in pairs, strings, clusters, and singles, similarly to the cellular arrangement reported in Bonnaire et al. [160]. Interestingly, the majority of clustered cells were in advanced degenerated tissue compared to moderately degenerated discs, trauma, acute nuclear prolapse, or chronic nuclear prolapse cells [160].

Horses

Despite the historical importance of domesticated horses (Equus caballus) in agriculture and transportation and the financial interest in racing horses, characterization of equine IVDs has been a contradictory topic, with great delays in the initial diagnosis of IVDD in horses compared to dogs or humans. Horses have seven cervical vertebrae, 18 thoracic, five to six lumbar, five sacral, and 15-21 coccygeal vertebrae [161] separated by IVDs. Warmblood horses have a documented CEP and fibrous AF; however, literature describing both a presence and absence of NP and NC cells exists [162]. Histological studies in 2018 by Bergmann et al.
confirmed equine NPs with high PG levels and a distinguishable TZ between NP and AF, while still noting that the distinction between NP and AF in horses is not as clear as for humans or dogs [162]. Previous studies were unable to distinguish between the NP and AF and reported no NP cells in the equine thoracolumbar region [163] or the cervical region [164]. Because horses were reported to lack NP cells, IVDD diagnosis was not prevalent until recent years; Kreuger et al. claimed the lack of clinical IVDD diagnosis in horses was due to failures differentiating between AF and NP tissues and subsequent challenges classifying a breakdown of NP degeneration and changes to cellular composition, in combination with a thinner AF [165].

Even with the recently distinguished NP region, differences between equine and human IVDs continue to exist. Immunolabeling of KRT18, a NC marker, failed to identify any NC cells across the NP region in two fetal vertebral columns (youngest 45 days) and one adult vertebral column [11,161]. It is possible that NC cells are initially present and then disappear prior to E45 in development. A different histological study reports the formation of the neural tube and NC with subsequent segmentation of vertebrae by day 21 of gestation in horses, yet it does not specifically mention NC cells at any time point [166]. Further studies using a broader range of NC markers on horses between fertilization and E45 are recommended to determine the role of a NC in equine IVD development. ScRNA seq of equine IVDs to search for NC markers may offer additional information.

The limited reported cases of equine IVDD typically involve a late-stage diagnosis with severe neurologic symptoms, including pelvic limb ataxia, as early stages may show only subtle signs of compression and pain [167]. In 2020, an eight-sample case study reported cervical and cranial thoracic equine IVDD cases from 2008-2018 [168]. Clinical manifestation of IVDD presented as difficulty or inability to lower the neck to the ground for grazing, fixation or locking of the neck after grazing for five minutes, stiff neck posture, mild to severe ataxia, stumbling, and forelimb lameness [168]. Radiological markers of IVDD included reduction of IVD space, irregular margins of endplates, and irregular margins of vertebral bodies [168]. Bergmann's 2022 study described 41 warmblood horses and applied a modified canine disc degeneration grading, adding vascularization of the AF, for consistency in classification of equine IVDD [11]. Of the variables considered, a positive correlation was found between degeneration and the total histological score, tearing of the cervical AF, and NP tearing [11]. This grading scale allows for consistent categorization of the progression of IVDD in horses that will make future diagnosis easier. Histological scoring of disc degeneration considered: tears in the AF, vascularization of the NP, CC-like cell presence in the NP, NC cells present in the NP, and ECM staining of the NP by alcian blue and picrosirius red to view PGs and tears in the NP [11]. Table 3 outlines clinical presentations, radiographic findings, and biochemical and histological signs reported in equine IVDD. There is a greater decrease in pentosidine levels in the caudal cervical region, which could be explained by increased mechanical loading in this spinal region and therefore greater disc degeneration [11]. Due to quadrupedal movement and a longer neck, loading and stresses on the equine spine differ from those of other species.
While horses certainly are not immune to disc and spinal diseases, their diagnosis and subsequent treatment plans have been delayed due to challenges characterizing discs. Delays in diagnosis of equine spinal disorders are not unique to IVDD. The first study reporting lumbosacral IVD protrusion in horses occurred in 2016; prior to that, disc protrusion was only reported in cervical and thoracic regions [165]. Ultrasound imaging enabled that report following identification of severe lumbar disc protrusion, while histopathological analysis revealed inflammation of the dorsal root ganglia (DRG) and spinal nerve root, and degeneration of the spinal cord [165]. Analysis of serum biomarkers in horses with chronic back pain found significant increases in serum levels of glial fibrillary acidic protein (GFAP), commonly used as an astrocytic glial cell marker, and phosphorylated neurofilament-H (pNF-H), a common damage marker for neurons and axons, compared to healthy controls, offering a potential biomarker for chronic back pain [169]. Further research utilizing larger equine study samples in combination with widespread use of diagnostic imaging to determine the prevalence of IVDD in symptomatic and asymptomatic horses would be beneficial.

Reported findings in equine IVDD (Table 3):
Clinical manifestation: spinal ataxia, more severe in pelvic limbs [170]; limited range of motion in the neck [11]; severe neck pain [11]; lameness [170]; spasticity [11]; dysmetria [11]; normal cutaneous sensation and cranial nerve function [11]; positive sway response [11]; proprioceptive deficits [11].
Radiographic findings: collapse of disc space [171]; endplate sclerosis [171]; disc protrusion [170,171].
Biochemical changes: increase in pentosidine in AF and NP [11]; advanced glycation end product (AGE) crosslinking [11]; decrease in hydroxylysine, moderate in the AF and severe in the NP [11]; increased collagen type I in the NP [11]; no change in glycosaminoglycans in the NP [11].
Serum and CSF changes: elevated creatine phosphokinase [171]; serum glutamic-oxaloacetic transaminase slightly elevated [171]; elevated blood urea nitrogen [171]; elevated venous pH values [171]; CSF showed elevated protein content and xanthochromia [172]; increased GFAP serum levels [169]; increased pNF-H serum levels [171].
Histological reports: fiber degeneration of white matter [171]; poor myelin staining [171]; necrosis of individual neurons in the region of disc protrusion [171]; scattering of microglial cells [171]; swollen axons [171]; degeneration of the spinal cord in the area of disc protrusion [172].
Additionally, the caudo-cervical region showed significantly more severe degeneration compared to other regions [162].

4.5. Cats

4.5.1. Large Cats

A retrospective study of large cats (Panthera) analyzed 13 lions, 16 tigers, four leopards, one snow leopard and three jaguars kept in a zoo environment, of which three lions, four tigers, and a leopard were diagnosed with IVDD [13]. In this group, clinical presentations included rear limb atrophy, ataxia, and paresis, as well as a reported decrease in activity, and radiographic, histologic, and necroscopic analysis recorded decreased disc space, lesions, spondylosis, herniations and subsequent injury to spinal cords [13]. Generally, degenerative changes were more frequently discovered in lumbar regions compared to cervical or thoracic discs in these large cats [13]. Additionally, all the affected cats showed decreased appetite and weight loss leading up to their deaths, which, when reported in combination with ataxia, limb atrophy, paresis, or changes in activity, should be considered clinical indicators of potential degenerative spinal disease. Limited radiographic analysis in large felids leaves the onset of degenerative changes
unclear; however, this type of analysis is recommended for diagnosis of IVDD [13].

4.5.2. Small Cats

Case studies of two domesticated small cats (Felis catus) brought into veterinary clinics due to reduced appetite, changes in level of activity, and changes to urination and constipation revealed the first reports of sacrococcygeal IVD protrusion [173]. Clinical findings included pain with lumbosacral palpation and effects on tail movement and extension of the pelvic limbs, yet both cases showed unremarkable serum biochemistry [173]. MRI analysis revealed protruded discs at the S3-Cd1 levels, with both cases showing a hypointense NP, while non-affected discs had hyperintense NPs. Mild spondylolysis at the S3-Cd1 level was reported in one case, while both cases reported angulation of the spine and presumed spinal nerve root compression [173]. A 2001 study examined six cats between three and nine years of age with IVDD, identified through spontaneous disc extrusion, and explored cerebrospinal fluid (CSF), serum biochemistry, radiographs, and histopathological analysis of material recovered during hemilaminectomies ranging from T13 to L6 across the six cats, performed to surgically decompress the region [174]. In this study, clinical symptoms included progressive or acute paraparesis, paraplegia, absence of voluntary urination, changes to pelvic limb gait, back pain, and fecal incontinence. CSF analysis revealed two cats with CSF blood contamination and two with neutrophilic inflammation. Between one and three extradural compressive lesions were found in each cat. Mineralization was confirmed in two cats through histopathological analysis of the removed material, while degeneration was confirmed in all six of the cats [174]. A 2020 case study examined L5-L6 intramedullary disc extrusion in a 10-year-old cat [175]. In this study, clinical presentation included paraparesis affecting the right side more prominently, loss of bowel control and urination, pain with manipulation of the lumbar spine, and loss of tail movement. Computerized tomography (CT) imaging showed mineralization, which was not confirmed in histologic analysis. CT analysis did show a decrease in IVD space, while histologic analysis of surgically removed material confirmed NP degeneration. MRI analysis showed mild spinal cord swelling, reduction of NP volume, and an intramedullary lesion [175]. Both Debreque et al. 2020 and Knipe et al. 2001 had cases with evidence of mineralization in imaging analysis that failed to be confirmed through histologic analysis, suggesting CT or radiographic imaging is not always reliable in diagnosing mineralization of discs.

Others

American Black Bear

In 2012, the first report of IVDD in an American black bear (Ursus americanus) was confirmed by MRI [12]. Clinical signs included rear limb paralysis; however, due to winter torpor, it was not possible to determine for how long this had been occurring. Following euthanasia due to poor prognosis, spondylosis and severe spinal cord compression were discovered [13].
Discussion

Understanding the developmental origins and cellular composition of the IVD is important to grasp its contributions to the structural support and functionality it provides to the spine. Recent developments in the omics fields now facilitate a better understanding of cellular heterogeneity and cell metabolic functions on a molecular level and can provide a basis for better diagnostic tools to identify early IVDD onset, including timing, symptoms and biomarkers in animals, especially large breeds [27]. Importantly, scRNA seq has tremendously improved understanding of cell population dynamics between healthy and degenerated discs and the type of cells populating the region. Notably, scRNA seq further suggests progenitor cells and expression of stemness genes in cellular subpopulations. Further research into these cell population dynamics and the signaling that contributes to a dwindling NC-like phenotype will likely improve our understanding of IVDD pathophysiology beyond human patients. This research has begun to identify an infiltration of immune cells participating in the inflammatory response in IVDD.

Broadly, the main tissue types and biochemical makeup of the IVD are consistent across the animal species discussed; however, cellular differences exist in the presence and fate of NC cells throughout an animal's lifespan. Notably, horses appear to lack NC cells very early on in embryonic development; however, other species-dependent differences exist. While early understanding from morphological observations suggested the complete disappearance of NC cells quickly after the NC sheath broke down, scRNA seq identified subpopulations of cells expressing NC markers, suggesting that NC-like cells are present into adulthood in many species, including humans. This is important, as NC cells support a different ECM makeup than other NP cells. The breakdown of the ECM, including loss of PG, changes in matrix synthesis, and disruption of the balance between anions and cations, contributes to degeneration [48]. Furthermore, these changes can lead to a decrease in disc pressure and disc height, contributing to increased stress on the AF and CEP [75]. Inflammation is not only a product of degeneration but also a catalyst for degeneration through the induction of inflammatory cascades, upregulation of pro-inflammatory cytokines, and increased cell death [85,87,93].
Recognizing clinical signs of IVDD on a molecular level is important for diagnosis across animals, especially for large animals where imaging-based diagnostics are challenging. Reported clinical signs of IVDD in severe cases across species often include limb ataxia, changes in appetite, limited range of motion or increased signs of pain with movement, and proprioceptive changes [11-13,170]. The spinal region with the highest prevalence of IVDD and degenerative changes varies among species, likely due to changes in movement patterns, and it is important to consider how clinical manifestation may vary. Spinal degeneration is most likely to occur in the cervical spine of horses, the lumbar region for cats and humans, and the thoracolumbar region for dogs [66,71,143,144,162], which may influence certain clinical symptoms. More extensive and wider-ranging case studies would be beneficial to identify broader trends in IVDD clinical manifestation across different animal species. Clinical symptoms such as loss of appetite, loss of bowel control, incontinence, decreased ambulation, and limb paralysis or paraparesis indicate a need for further testing to rule out or diagnose IVDD (Table 3). At this time, MRI is the most useful diagnostic tool for IVDD, enabling visualization of changes to disc height, endplate irregularities, and disc protrusions; however, it is not easily applicable to large animals [174]. In cases where MRI is not possible, testing for increased serum levels of IL6, ferritin, creatine phosphokinase, mCRP, GFAP and pNF-H, increased plasma levels of CCL5 and CXCL6, decreased plasma levels of GRB10, and elevated CSF levels of xanthochromia or clusterin may be used to support an IVDD diagnosis (Table 2) [107,114,115,118,124,169,171].

Mechanical stress and IVD acidification have been shown to decrease ECM protein synthesis, alter nutrient diffusion, and increase pro-inflammatory cytokines. Recording biochemical and physical changes to the IVD in degeneration is extremely useful in identifying potential therapeutic targets and biomarkers of both degeneration and tissue types. However, it may be the case that pH levels affect IVD cells to varying extents between species, as described. Macrophages may offer therapeutic relief, as they can reverse the pro-inflammatory effects of TNFα or upregulate ECM proteins [100,101]. Interestingly, IVDD shares many degenerative and inflammatory aspects of disease with type 1 diabetes mellitus, disc herniation, and osteoarthritis, including upregulation of CRP. Exploring IVDD through the lens of each disease, such as CRP levels or examination of heightened pyroptosis pathways, may help delineate degeneration mechanisms of IVDD.

Conclusions

Exposure to environmental, mechanical, and metabolic stress factors alongside genetic predisposition influences IVDD progression in many species. Technological advances in the imaging and "omics" fields can identify IVDD on a molecular and cellular level and enable novel diagnostic and therapeutic measures, especially for larger species.
Figure 1. Simplified illustration of IVD formation adapted from [18-24]. (A) Specification of the embryonic axial skeleton, shortly after the formation of metameric somites, is initiated by signals from the NC. (B) Ventromedial cells undergo an epithelial to mesenchymal transformation (EMT), taking on a mesenchymal sclerotome fate, and migrate to surround the NC and neural tube. (C/D) Resegmentation of the axial sclerotome: the cell-dense caudal portion of one somite (C in (C) → dark gray in (D)) will combine with the more loosely organized rostral portion of the adjacent somite (R in (C) → light gray in (D)). (E) Following this resegmentation, the anterior layer of the cell-dense section will give rise to the AF of the IVD (green), while the NP is derived from the NC (pink). Chondrogenesis enables the formation of the CEP and VB. (F) The reorganization and shift influenced by complex signaling events will enable innervation (light blue) of the myotome-derived skeletal musculature (purple). (G) Transverse depiction of an isolated IVD illustrating only the caudal CEP for simplicity. (H) Depicts a sagittal section of the VB and IVD. The simplified illustration is not drawn to scale. AS: axial sclerotome; C: caudal; CEP: cartilage end plates; iAF: inner annulus fibrosus; IVD: intervertebral disc; LS: lateral sclerotome; NC: notochord; oAF: outer annulus fibrosus; R: rostral; VB: vertebral body.
Figure 2. Illustrates the timeline of embryonic development from the onset of gastrulation to birth across different species such as mice, rats, rabbits, non-chondrodystrophic (NCD) dogs, chondrodystrophic (CD) dogs, humans, cows, and horses, in addition to the reported disappearance of notochord (NC)-like cells in the NP. Embryonic development data in humans is compiled from [40] and all other animals from [41], while NC-like cell disappearance is compiled from mice [37], rats [26,42], rabbits [26,42], dogs [26,37], humans [30,42,43], cows (personal observation) and [14], and horses [11]. Information on NC or NC-like cell disappearance varied across sources and was based on histological or morphological data. Advancements in scRNA seq analysis can provide further clarification and have identified NC-like cells into adulthood in humans.

Figure 3. Mallory's tetrachrome histological staining on formaldehyde-preserved tissues was performed as previously described [14] to visualize the AF and NP tissue across species. All but the human sample were coccygeal IVDs. Ages were generally from adult organisms but varied across species. The dog IVD was from a 1-day-old non-chondrodystrophic boxer (docked breed). Scale bar reflects 50 µm. Staining represents: nuclei (red); collagen fibrils, ground substance, cartilage, mucin and amyloid (blues); erythrocytes and myelin (yellow); elastic fibrils (pale pink, pale yellow or unstained) [54].

Figure 4. Fluorescent microscopy of primary adult coccygeal bovine IVD (NP and AF) and adipose (FAT) cells from the same animal, and low-passage fetal NP cells, cultured in low-glucose DMEM with 10% FBS. Cells were stained with MitoView™ (green), which quickly accumulates in mitochondria. DAPI (blue) labels nuclei. All cells were stained and imaged the same day at the same magnification. Scale bar reflects 50 µm for all images. All tissue was collected as waste from local abattoirs.
Figure 5. Top image shows a composite image created from overlaid sequential brightfield images of 7 µm paraffin sections of an adult bovine coccygeal IVD taken across the diameter of the disc from oAF to NP to oAF. The composite fluorescence image shows the DAPI-labeled nuclei of each corresponding image. Scale bar represents 100 µm.

Table 3. Outlines the reported clinical manifestation, radiographic findings, notable biochemical changes, serum changes, and histological reports for equine IVDD based on published literature.
Artin glueings of toposes as adjoint split extensions

Artin glueings of frames correspond to adjoint split extensions in the category of frames and finite-meet-preserving maps. We extend these ideas to the setting of toposes and show that Artin glueings of toposes correspond to a 2-categorical notion of adjoint split extensions in the 2-category of toposes, finite-limit-preserving functors and natural transformations. A notion of morphism between these split extensions is introduced, which allows the category Ext(H,N) to be constructed. We show that Ext(H,N) is contravariantly equivalent to Hom(H,N), and moreover, that this can be extended to a 2-natural contravariant equivalence between the Hom 2-functor and a naturally defined Ext 2-functor.

Introduction

Artin glueings of toposes were introduced in [1] and provide a way to view a topos G as a combination of an open subtopos G_{o(U)} and its closed complement G_{c(U)}. This situation may be described as the 'internal' view, but we might instead look at it externally. Here, Artin glueings of two toposes H and N correspond to solutions to the problem of determining in which toposes G the topos H embeds as an open subtopos with N as its closed complement. There is an analogy to be made with semidirect products of groups. We may either view a group as being generated in a natural way from two complemented subgroups (one of which is normal), or, externally, view a semidirect product as a solution to the problem of how to embed groups H and N as complemented subobjects so that N is normal. Of particular importance to us is that semidirect products correspond to split extensions of groups (up to isomorphism).

Artin glueings of Grothendieck toposes decategorify to the setting of frames, and in this algebraic setting the analogy to semidirect products has been made precise. In [11] it was shown that Artin glueings correspond to certain split extensions in the category of frames with finite-meet-preserving maps. While these results were proved in the setting of frames, it is not hard to see that the arguments carry over to Heyting algebras. It is this view that we now extend back to the elementary topos setting.

We now recall the main results of [11]. In the category of frames with finite-meet-preserving maps, there exist zero morphisms given by the constant 'top' maps. This allows us to consider kernels and cokernels. Cokernels always exist, and the cokernel of f : N → G is given by e : G → ↓f(0), where e(g) = f(0) ∧ g. This map has a right adjoint splitting e_* sending h to the Heyting implication f(0) → h (the exponential h^{f(0)}). Kernels do not always exist, but kernels of cokernels always do, and the kernel of e : G → ↓u is the inclusion of ↑u ⊆ G. The cokernel is readily seen to be the open sublocale corresponding to u and the kernel the corresponding closed sublocale. This immediately gives that the split extensions whose splittings are adjoint to the cokernel correspond to Artin glueings.

A notion of protomodularity for categories equipped with a distinguished class of split extensions was first introduced in [5]. In a similar way, in [11] (as well as the current paper) a distinguished role is played by those split extensions whose splittings are right adjoint to the cokernel. With this correspondence established, the corresponding Ext functor was shown to be naturally isomorphic to the Hom functor. Each hom-set Hom(H, N) has an order structure, and this order structure was shown to correspond contravariantly in Ext(H, N) to morphisms of split extensions.
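Spelling out the splitting makes the frame-level picture concrete. Writing u = f(0) and using the Heyting implication of G, the maps just described fit into the following adjunction; this is only a restatement of the construction above, with no additional structure assumed.

```latex
% Cokernel e and its right adjoint splitting e_*, with u = f(0):
%   e   : G --> {\downarrow}u,   e(g)   = u \wedge g
%   e_* : {\downarrow}u --> G,   e_*(h) = u \to h
\[
  e(g) \le h
  \;\Longleftrightarrow\; u \wedge g \le h
  \;\Longleftrightarrow\; g \le (u \to h)
  \;\Longleftrightarrow\; g \le e_*(h),
\]
% exhibiting e as left adjoint to e_*; the kernel of e is the inclusion of {\uparrow}u into G.
```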
Finally, it was demonstrated how the meet operation in Hom(H, N) naturally induces a kind of 'Baer sum' in Ext(H, N). In this paper all of the above results find natural generalisation to the topos setting after we provide definitions for the analogous 2-categorical concepts. (Also see the paper [16] by Niefield, which appeared after we wrote this paper and discusses the relationship between (non-split) extensions and glueing in a very general context. The current paper uses split extensions and discusses the functorial nature of the construction for toposes in more detail.)

2.1. 2-categorical preliminaries. There are a number of 2-categories and 2-categorical constructions considered in this paper, and so we provide a brief overview of these here. All the 2-categories we consider shall be strict. Briefly, a (strict) 2-category consists of objects, 1-morphisms between objects and 2-morphisms between 1-morphisms. Phrased another way, instead of hom-sets between objects as is the case with 1-categories, for any two objects A and B we have an associated hom-category Hom(A, B). Both 1-morphisms and 2-morphisms may be composed under the right conditions. If F : A → B and G : B → C are 1-morphisms, then we may compose them to yield GF : A → C. We usually represent this with juxtaposition, though if an expression is particularly complicated we may use G • F. As with natural transformations, there are two ways to compose 2-morphisms: vertically and horizontally. We may compose α : F → G and β : G → H vertically to give βα : F → H, sometimes written β • α. Orthogonally, if we have F_2F_1 : A → C, G_2G_1 : A → C, α : F_1 → G_1 and β : F_2 → G_2, then we may compose α and β horizontally to form β * α : F_2F_1 → G_2G_1. Vertical and horizontal composition are related by the so-called interchange law. For each object A there exists an identity 1-morphism id_A, and for each 1-morphism F there exists an identity 2-morphism id_F. Just as one can reverse the arrows of a category B to give B^op, one can reverse the 1-morphisms of a 2-category C to give C^op. It is also possible to reverse the directions of the 2-morphisms, yielding C^co, and when both the 1-morphisms and 2-morphisms are reversed we obtain C^{co op}.

In this paper we will make extensive use of string diagrams. For an introduction to string diagrams for 2-categories see [14]. We will use the convention that vertical composition is read from bottom to top and horizontal composition runs diagrammatically from left to right. We consider 2-functors between 2-categories defined as follows. (We follow the convention that 2-functors are not necessarily strict.)

Definition 2.1. A 2-functor F between 2-categories C and D consists of a function F sending objects C in C to objects F(C) in D and, for each pair of objects C_1 and C_2 in C, a functor F_{C_1,C_2} : Hom(C_1, C_2) → Hom(F(C_1), F(C_2)), for which we use the same name. Additionally, for each pair of composable 1-morphisms (F, G) we have an invertible 2-morphism ω_{G,F} : F(G) • F(F) → F(G • F) called the compositor, and for each object A in C we have an invertible 2-morphism κ_A : Id_{F(A)} → F(Id_A) called the unitor. This data satisfies the following constraints.

(3) If F : X → Y is a 1-morphism, then we have the unit axioms ω_{F,Id_X} • (id_{F(F)} * κ_X) = id_{F(F)} and ω_{Id_Y,F} • (κ_Y * id_{F(F)}) = id_{F(F)}.

There is a notion of 2-natural transformation between 2-functors defined as follows.

Next, for each object X ∈ X, ρ must respect the identity Id_X. For this we need the following diagram to commute.
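One standard way to write this identity condition, assuming ρ consists of 1-cell components ρ_X : F(X) → G(X) and invertible 2-cell components ρ_f : G(f)ρ_X → ρ_{X′}F(f) for each 1-morphism f (F and G denoting the two 2-functors related by ρ; this notation is an assumption rather than taken from the definition), is the following equation of 2-morphisms from ρ_X to ρ_X F(Id_X).

```latex
% Identity condition for \rho at an object X (standard formulation, assumed notation):
\[
  \rho_{\mathrm{Id}_X} \circ \bigl(\kappa^{\mathcal{G}}_X \ast \mathrm{id}_{\rho_X}\bigr)
  \;=\;
  \mathrm{id}_{\rho_X} \ast \kappa^{\mathcal{F}}_X,
\]
% where \kappa^{\mathcal{F}} and \kappa^{\mathcal{G}} are the unitors of the two 2-functors
% and \ast denotes horizontal composition.
```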
Finally, they must satisfy the following 'naturality' condition for F, F : X → Y and α : F → F . When each component ρ X is an equivalence, we call ρ a 2-natural equivalence. One 2-functor of note is the 2-functor Op : Cat co → Cat which sends a category C to its opposite category C op . We will use this 2-functor in Section 5 to help compare two 2-functors of different variances. Limits and colimits have 2-categorical analogues, which will be used extensively throughout this paper. A more complete introduction to these concepts can be found in [13]. In particular, we will make use of 2-pullbacks and 2-pushouts, as well as comma and cocomma objects, which we describe concretely below. (1) Let T : X → B and S : X → C be 1-morphisms and let ψ : F T → GS be a 2-morphism. Then there exists a 1-morphism H : X → P and invertible 2-morphisms ν : P G H → T and µ : (2) If H, K : X → P are 1-morphisms and α : P G H → P G K and β : P F H → P F K are 2-morphisms satisfying that ϕK • F α = Gβ • ϕH, then there exists a unique 2-morphism γ : H → K such that P G γ = α and P F γ = β. A 2-pullback is defined similarly, except both ϕ and ψ are required to be invertible, and is represented as follows. We now recall the definition of a fibration of categories. In fact, we will also need the notion of a fibration in other 2-categories, such as the 2-category Cat lex of finitely-complete categories and finite-limit-preserving functors. The general definitions of fibrations, morphisms of fibrations and 2-morphisms of fibrations can be found, for example, in [7,. However, it is not hard to see that the fibrations in Cat lex are simply the finite-limit-preserving functors which are fibrations in Cat. 2.2. Elementary toposes and Artin glueings. By a topos we mean an elementary toposthat is, a cartesian-closed category admitting finite limits and containing a subobject classifier. The usual 2-category of toposes has 1-morphisms given by geometric morphisms and 2-morphisms given by natural transformations. For Grothendieck toposes, the subobjects of the terminal object may be imbued with the structure of a frame. Moreover, a geometric morphism between two Grothendieck toposes induces a locale homomorphism between their locales of subterminals. This induces a functor from the category of Grothendieck toposes into the category of locales, which is in fact a reflector. The open subtopos E o(U ) corresponding to a subterminal U has a reflector given by the exponential functor (−) U . It is not hard to see that this topos is equivalent to the slice topos E/U , which in turn can be thought as the full subcategory of the objects in E admitting a map into U . From this point of view, the reflector E : E → E/U maps an object X to the product X × U . We denote its right adjoint by E * = (−) U and write θ and ε for the unit and counit respectively. Note that E * E(G) = (G × U ) U ∼ = G U . In addition to a right adjoint, E also has a left adjoint E ! , which is simply the inclusion of E/U into E. The closed subtopos E c(U ) has reflector K * : E → E c(U ) given on objects by the following pushout. is given by the universal property of the pushout in the following diagram. Here the left, front and top faces commute and so a diagram chase determines that p G 1 f and p G 2 id U indeed form a cocone. We denote the right adjoint of K * by K and write ζ and δ for the unit and counit respectively. As expected, E o(U ) and E c(U ) are complemented subobjects. 
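(For later reference, the data introduced in this subsection can be collected as follows. The open subtopos yields an adjunction E ⊣ E* with E(X) = X × U and E*(Y) = Y^U, with unit θ : Id → E*E and counit ε : EE* → Id, together with a further left adjoint E ! ⊣ E given by the inclusion of E/U into E. The closed subtopos yields an adjunction K* ⊣ K with unit ζ : Id → KK* and counit δ : K*K → Id, where K*(G) is the pushout G +_{G×U} U of the two product projections G ← G × U → U. These adjunctions are used repeatedly in the glueing construction below and in the extensions of Section 3.)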
Given toposes H and N we can ask for which toposes G may H be embedded as an open subtopos and N its closed complement. This is solved completely by the Artin glueing construction. For any finite-limit-preserving functor F : H → N we may construct the category Gl(F ) whose objects are triples (N, H, ) in which N ∈ N , H ∈ H and : N → F (H) and whose morphisms are pairs (f, g) making the following diagram commute. The category Gl(F ) is a topos, and moreover, the obvious projections π 1 : Gl(F ) → N and π 2 : Gl(F ) → H are finite-limit preserving. The projection π 2 has a right adjoint π 2 * sending objects H to (F (H), H, id F (H) ) and morphisms f to (F (f ), f ). This map π 2 * is a geometric morphism and, in particular, an open subtopos inclusion. Similarly, π 1 has a right adjoint sending objects N to (N, 1, !) and morphisms f to (f, !) where ! : N → 1 is the unique map to the terminal object. This is itself a geometric morphism, and indeed, a closed subtopos inclusion. Remarkably, Artin glueings may be viewed as both comma and cocomma objects in the category of toposes with finite-limit-preserving functors. We provide a proof of the latter in Section 4. One sees immediately that π 1 π 2 * = F . This suggests a way to view any open or closed subtopos as corresponding to one in glueing form. If K : G c(U ) → G and E * : G/U → G are respectively the inclusions of open and closed subtoposes, then there is a natural sense in which these maps correspond to π 1 * : G c(U ) → Gl(K * E * ) and π 2 * : G/U → Gl(K * E * ) respectively. This fact is well known, though a new proof will be provided in Section 3.2. We now note that the maps π 1 : Gl(F ) → N and π 2 : Gl(F ) → H are fibrations. By the above argument, these results apply equally to the inverse image maps of open and closed subtoposes. This is likely well known, though we were unable to find explicit mention of this in the literature. where N , and P f are defined by the following pullback. The cartesian property of (P f , f ) follows from the universal property of the pullback. To see that this map satisfies the universal property, suppose that we have a morphism (g 1 , g 2 ) : (A, B, k) → (N, H, ) which is mapped by π 1 to f h. We must show there is a unique map h such that (g 1 , g 2 ) = (f, id H )h and π 1 (h) = h. These constraints imply that h = (h, g 2 ). To see that (h, g 2 ) is a morphism in Gl(F ), we consider the following diagram. The left-hand square commutes as π 1 (g 1 , g 2 ) = f h and the right-hand square commutes since (g 1 , g 2 ) is a morphism. Adjoint extensions In generalising the frame results to the topos setting, it is clear that the appropriate 2-category to consider is Top lex , the 2-category of toposes, finite-limit-preserving functors and natural transformations. (This is the horizontal bicategory of the double category of toposes considered in [15].) For convenience, we will assume that 1 always refers to a distinguished terminal object in a topos, and 0 a distinguished initial object. We will now introduce the necessary concepts in order to discuss extensions of toposes and show how Artin glueings can be viewed as adjoint extensions. In particular, the definition of extension will require notions of kernel and cokernel. 3.1. Zero morphisms, kernels and cokernels. The definition of extensions requires a notion of zero morphisms. Let us now define these in the 2-categorical context. 
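Definition 3.1. A pointed 2-category is a 2-category equipped with a class Z of 1-morphisms, called zero morphisms, satisfying the following conditions.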
• Z contains an object of each hom-category, • Z is an ideal with respect to composition (as in [9]) -that is, g ∈ Z =⇒ f gh ∈ Z, • Z is closed under 2-isomorphism in the sense that if f ∈ Z and f ∼ = f then f ∈ Z, • for any parallel pair f 1 , f 2 of morphisms in Z, there is a unique 2-morphism ξ : Definition 3.2. A zero object in a 2-category is an object which is both 2-initial and 2-terminal. Lemma 3.3. A 2-terminal or 2-initial object in a pointed 2-category is always a zero object. Furthermore, any 2-category with a zero object has a unique pointed 2-category structure where the zero morphisms are those 1-morphisms which factor through the zero object up to 2-isomorphism. Remark 3.4. Definition 3.1 is a categorification of pointed-set-enriched categories. Pointed categories are often instead defined as those having a zero object. The previous lemma shows that our definition agrees with a definition in terms of zero objects when the category has a terminal or initial object. Definition 3.5. The 2-cokernel of a morphism f : A → B in a pointed 2-category is an object C equipped with a morphism c : B → C such that cf is a zero morphism and which is the universal such in the following sense. (1) If t : B → X is such that tf is a zero morphism, then there exists a morphism h : C → X such that hc is isomorphic to t. The 2-kernel of a morphism in a pointed 2-category C is simply the 2-cokernel in C op . A similar notion was defined for groupoid-enriched categories in [8]. Note that these may also be defined in terms of 2-pushouts or 2-coequalisers involving the zero morphism (in the same way kernels and cokernels can be defined in the 1-categorical setting). Although there would usually be coherence conditions that need to be satisfied, the uniqueness of natural isomorphisms between zero morphisms eliminates all of them in the case of 2-cokernels. Remark 3.6. Note that the condition (2) for 2-kernels is simply the statement that the 2-kernel map is a fully faithful 1-morphism. Moreover, since an adjoint of a fully faithful morphism is fully faithful in the opposite 2-category, we have that when a putative 2-cokernel c : B → C has a (left or right) adjoint d, then condition (2) for 2-cokernels is equivalent to d being fully faithful. We can now consider how these concepts behave in our case of interest. Note that Top lex has a zero object, the trivial topos. Then zero morphisms in Top lex are precisely those functors which send every object to a terminal object. In Top lex , 2-cokernels of morphisms F : N → G always exist and are given by the open subtopos corresponding to F (0). Proof. We know that E lies in Top lex and so we begin by showing that that EF is a zero morphism. The terminal object in G/F (0) is F (0) and so consider the following calculation. Next suppose that T : G → X is such that T F is a zero morphism. We claim that T E * : H → X when composed with E is naturally isomorphic to T . Observe that The final condition for the 2-cokernel holds immediately because E has a full and faithful adjoint. Unfortunately, 2-kernels do not always exist in Top lex . However, they do exist in the larger 2-category Cat lex of finitely-complete categories and finite-limit-preserving functors. Proposition 3.8. Let F : G → H be a morphism in Cat lex . The 2-kernel of F , which we write as Ker(F ), is given by the inclusion into G of the full subcategory of objects sent by F to a terminal object. Proof. 
Since F preserves finite limits and sends each object in Ker(F ) to a terminal object, it is clear that Ker(F ) is closed under finite limits. Naturally, the inclusion is a finite-limit-preserving functor. It is clear that F K is a zero morphism. We must check that if T : X → G is such that F T is a zero morphism, then it factors through Ker(F ). Note that since F T is a zero morphism, all objects (and morphisms) in its image lie in Ker(F ). Thus, it is easy to see that T factors through Ker(F ). The uniqueness condition of the universal property is immediate, as the inclusion of Ker(F ) is full and faithful. We will only be concerned with 2-kernels of 2-cokernels. The following proposition shows that these do always exist in Top lex . Proposition 3.9. Let U be a subterminal of a topos G and consider E : G → G/U defined as in Proposition 3.7. Then the kernel of E is given by K : G c(U ) → G, the inclusion of the closed subtopos corresponding to U . Proof. Since Top lex is a full sub-2-category of Cat lex , it suffices to show that the closed subtopos G c(U ) is equivalent to Ker(E), the full subcategory of objects sent by E to a terminal object. The reflector K * : G → G c(U ) sends an object G to the following pushout. First we show that K * (G) lies in Ker(E). We know that E preserves colimits and so we obtain the following pushout in G/U . But note that p is an isomorphism, since it is the pushout of an identity morphism and thus EK * (G) ∼ = U and K * (G) lies in Ker(E). Finally, we must show that K * fixes the objects of Ker(E). First observe that U is the initial object in Ker(E), since if X is an object in Ker(E) then we have There is precisely one morphism in Hom G/U (U, U ), since U is the terminal object in G/U . Now consider the following candidate pushout diagram where G lies in Ker(E). To see that the square commutes, note that by assumption G lies in Ker(E) and so G × U ∼ = U . Therefore, G × U is initial in Ker(E) and there is a unique map into G. We now have This gives that G is the pushout and hence fixed by K * . Adjoint extensions and Artin glueings. We are now in a position to define our main object of study: adjoint split extensions. equipped with a natural isomorphism ε : EE * → Id H is called an adjoint split extension if K is the 2-kernel of E, E is the 2-cokernel of K, E * is the right adjoint of E and ε is the counit of the adjunction. Remark 3.11. Propositions 3.7 and 3.9 suggest that every adjoint split extension is equivalent to an extension arising from a closed subtopos and its open complement (in a sense that will be made precise in Definition 4.1). The above situation is precisely the setting in which Artin glueings are studied and it is well known that in this case G is equivalent to an Artin glueing Gl(K * E * ). We will present an alternative proof of this result from the perspective of extensions. We begin by showing that Artin glueings can be viewed as adjoint split extensions in a natural way. Proposition 3.12. Let F : H → N be a finite-limit-preserving functor. Then the diagram Proof. We first note that Gl(F ) is a topos. This is a fundamental result in the theory of Artin glueings of toposes and a proof can be found in [18]. By Proposition 3.8, it is immediate that π 1 * is the 2-kernel of π 2 . To see that π 2 is the 2-cokernel of π 1 * , we first observe that the slice category of Gl(F ) by the subterminal (0, 1, !) = π 1 * (0) is equivalent to H. The objects of Gl(F )/(0, 1, !) are isomorphic to those the form (0, H, !) 
(since every morphism into an initial object in N is an isomorphism) and its morphisms of the form (!, f ). The following proposition is shown for Grothendieck toposes in [1], but deserves to be more well known. Here we prove it for general elementary toposes. (It also follows easily from the theory of Artin glueings, but here we will use it to develop that theory.) Recall that we use θ for the unit of the open subtopos adjunction E E * and ζ for the unit of the closed subtopos adjunction K * K. Proof. First note that the diagram commutes by the naturality of ζ. Recall that, setting is the pushout of G and U along the projections π 1 : G × U → G and π 2 : G × U → U . Now we can rewrite the relevant pullback diagram as follows. Here the ι maps are injections into the pushout and c is the unit of the exponential adjunction, which intuitively maps elements of G to their associated constant functions. Let us express P in the internal logic. We have where ∼ denotes the equivalence relation generated by f ∼ * for * ∈ U . Explicitly, we find that Finally, commuting the subobject and the quotient we arrive at where the equivalence relation is generated by (f, g) ∼ (f, * ) for * ∈ U . Note that the union is now disjoint. The map r : We can define a candidate inverse by s : P → G by (f, g) → g and (f, * ) → f ( * ), which can seen to be well-defined, since if (f, g) and (f , * ) are elements of the disjoint union with f = f and * ∈ U , then f ( * ) = c(g)( * ) = g. . Thus, r and s are inverses as required. Remark 3.14. The non-classical logic in the above proof can be hard to make sense of. It can help to consider the cases where U = 0 and U = 1. In the former case, G U contains no information and G + G×U U ∼ = G, while in the latter case the opposite is true. The general case 'interpolates' between these. The next result will play a central role in this paper. An enjoyable diagram chase around the cube shows that p is the unique morphism making the cube commute by the pullback property of the front face. Since pullbacks in the functor category are computed pointwise, this yields the desired result. Remark 3.16. It is remarked in [11] that adjoint extensions of frames can be viewed as weakly Schreier split extensions of monoids as defined in [4]. Propositions 3.13 and 3.15 can be viewed as a categorified version of the weakly Schreier condition for the topos setting, though it is as yet unclear how the weakly Schreier condition might be categorified more generally. It would also be interesting to see how the general theory of (S-)protomodular categories [2,6] might be categorified. Another potential example of 2-dimensional protomodularity can be found in [12]. We can now prove the main result of this section. For now we will treat equivalences of adjoint extensions without worrying too much about coherence, which we will discuss in more detail when we define morphisms of extensions in Section 4. be an adjoint split extension. Then it is equivalent to Proof. We denote the unit of π 2 π 2 * by θ : Id Gl(K * E * ) → π 2 * π 2 and observe that its component at (N, H, ) can be given explicitly by We denote the unit of π 1 π 1 * by ζ : Id Gl(K * E * ) → π 1 * π 1 and its components are given by Consider the functor Φ : . This is a morphism of extensions in the sense that we have natural isomorphisms α : π 1 * → ΦK, β : π 2 Φ → E and γ : π 2 * → ΦE * given by We must show that Φ is an equivalence. 
We claim that the following pullback in Hom( We shall make extensive use of Proposition 3.15 in order to prove this. To see that Φ Φ ∼ = id G note that composition with Φ on the right preserves limits and thus can be represented as the following pullback in Hom(G, G). Note that π 1 Φ = K * , π 2 Φ = E and that θ Φ(G) = (K * θ G , id E(G) ) which of course gives that Kπ 1 θ Φ = KK * θ. After making these substitutions into the diagram above, we have the pullback square occurring in Proposition 3.15, which by the universal property gives that ΦΦ is naturally isomorphic to Id G . Together with Proposition 3.12 this shows that adjoint split extensions and Artin glueings are essentially the same. The equivalence Φ so defined is natural in a sense that will become clear later in Section 4.2. The category of extensions It follows from Theorem 3.17 that equivalence classes of adjoint extensions between H and N are in bijection with those of Hom(H, N ). However, the extensions have a natural 2-categorical structure and so we would like to know how this relates to the categorical structure of Hom(H, N ). Morphisms of extensions. Definition 4.1. Suppose we have two adjoint extensions, with the same kernel and cokernel objects and associated isomorphisms ε 1 and ε 2 respectively. Consider the following diagram where α, β and γ are natural isomorphisms and Ψ is a finitelimit-preserving functor. The morphisms compose in the obvious way: by composing the functors and pasting the natural transformations together by juxtaposing the squares from the diagram above. Horizontal and vertical composition of 2-morphisms is given by the corresponding operations of the natural transformations τ . It is not hard to see that this gives a strict 2-category Ext (H, N ). Proof. Any such β must satisfy ε 2 = ε 1 (βE 1 * )(E 2 γ). But this can be rewritten as ε 2 (E 2 γ −1 ) = ε 1 (βE 1 * ), which shows that β and γ −1 are mates with respect to the adjunctions E 1 E 1 * and E 2 E 2 * and hence determine each other. We now show that β so defined is an isomorphism. As the mate of γ −1 , we can express β as (ε 2 E 1 )(E 2 γ −1 E 1 )(E 2 Ψθ 1 ). Now since ε 2 and γ −1 are isomorphisms, we need only show E 2 Ψθ 1 is an isomorphism. This map occurs in the pullback obtained by applying E 2 Ψ to the pullback square of Proposition 3.15. Now observe that E 2 ΨK 1 ∼ = E 2 K 2 ∼ = 1 is a zero morphism and hence so are E 2 ΨK 1 K * 1 and E 2 ΨK 1 K * 1 E 1 * E 1 . Therefore, the bottom arrow of the above diagram is an isomorphism, and as the pullback of an isomorphism, E 2 Ψθ 1 is an isomorphism too. Finally, we show that the condition on β for 2-morphisms of extensions is automatic. Simply observe the following string diagrams. Here the first diagram represents β (E 2 τ ) and the last diagram represents β. In moving from the first diagram to the second we shift τ above θ 1 and to move from the second diagram to the third we use γ = (τ E 1 * )γ. Proof. Suppose τ : Ψ → Ψ is a 2-morphism of extensions. Then we have τ K 1 = α α −1 and τ E 1 * = γ γ −1 . Now by composing the pullback square of Proposition 3.15 with Ψ and Ψ and using the naturality of τ we have the following commutative cube in Hom Top lex (G 1 , G 2 ). The universal property of the pullback on the front face then gives that τ is uniquely determined by τ K 1 K * 1 and τ E 1 * E 1 , and hence by τ K 1 = α α −1 and τ E 1 * = γ γ −1 . Thus, the morphism τ is unique if it exists. 
We can also attempt to use a similar cube to construct τ without assuming it exists a priori by replacing τ K 1 with α α −1 and τ E 1 * = γ γ −1 in the above diagram. However, in order to obtain a map τ from the universal property of the pullback, we require that the right-hand face commutes. Note that this square commutes if and only if the similar square obtained by inverting the horizontal morphisms commutes. But this latter square is precisely the square we need to commute to obtain a 2-morphism in the opposite direction. Uniqueness then shows that these two 2-morphisms compose to give identities. Finally, observe that commutativity of this square is the required equality stated above whiskered with E 1 . This is equivalent to the desired condition, since E 1 is essentially surjective. This will justify working with Ext(H, N ) (viewed as a 1-category) going forward. Lemma 4.6. From a morphism of extensions (Ψ, α, β, γ) we can form an associated natural trans- where δ 2 is the counit of the K * 2 K 2 adjunction and which is depicted below. Two parallel morphisms of extensions are isomorphic if and only if their corresponding natural transformations are equal. We can now move all the unprimed variables to the left and primed variables to the right by multiplying both sides of this equation on the left by α −1 K * 1 E 1 * and on the right by γ to obtain (α −1 K * 1 E 1 * )(Ψζ 1 E 1 * )γ = (α −1 K * 1 E 1 * )(Ψ ζ 1 E 1 * )γ . These are the mates of the desired natural transformations with respect to the adjunction K * 2 K 2 . 4.2. The equivalence of categories. In this section we show that the categories Ext(H, N ) and Hom(H, N ) op are equivalent. This requires showing that isomorphism classes of morphisms of extensions correspond to natural transformations. We have already seen that each isomorphism class has an associated natural transformation. We will now further explore this relationship, making use of the following folklore result. Proof. We first check the 2-categorical condition. Consider two finite-limit-preserving functors U, V : G → X and natural transformations µ : U E * → V E * and ν : U K → V K such that (V ζE * )µ = (νK * E * )(U ζE * ). We must find a unique ω : U → V such that ωE * = µ and ωK = ν. We use Proposition 3.15 to express U and V as pullbacks and then as in Lemma 4.4 we find that there is a unique map ω : U → V with ωE * = µ and ωK = ν as long as the following diagram commutes. But commutativity of this diagram is simply the assumed condition whiskered with E on the right. Now we show the 1-categorical condition. Suppose we have finite-limit-preserving functors T 1 : H → X and T 2 : N → X and a natural transformation ϕ : T 1 → T 2 K * E * . We must construct a finite-limit-preserving functor L : G → X and natural isomorphisms τ 1 : LE * → T 1 and τ 2 : Suppose we are given such a functor L and natural isomorphisms τ 1 , τ 2 and consider the following pullback diagram. Here the bottom trapezium commutes by the naturality of τ 2 K * and the right trapezium commutes since (ϕτ 1 )E = (τ 2 K * E * • LζE * )E by assumption. Note that the left edge of the large square is the mate of τ 2 with respect to K * K and the top edge is the mate of τ 1 with respect to E E * . Now without assuming L exists to start with, we can use the outer pullback diagram to define it and we may recover τ 1 and τ 2 as the mates of the resulting pullback projections. Observe that precomposing the pullback with K turns the right-hand edge into an isomorphism between zero morphisms. 
Hence the left-hand morphism (τ 2 K * )(Lζ)K is an isomorphism as well. Since τ 2 is given by composing this with the isomorphism T 2 δ, we find that τ 2 is an isomorphism. On the other hand, precomposing the pullback with E * turns the bottom edge into an isomorphism (as N E * − → G is a reflective subcategory). It follows that τ 1 is also an isomorphism. Finally, we show that ϕ be can recovered in the appropriate way. The commutativity of the pullback square gives ϕE • τ 1 E • Lθ = T 2 K * θ • τ 2 K * • Lζ. The result of whiskering this with E * on the right and composing with T 2 K * E * ε is depicted in the string diagram below. The desired equality follows after using the triangle identities to 'pull the wires straight'. Finally, Γ H,N and Γ −1 H,N form an equivalence since the unit and counit are isomorphisms. With this in mind we may now consider the full subcategory of Ext(H, N ) whose objects are only those extensions of the form N Gl(F ) H π 1 * π 2 π 2 * for some F : H → N . It is evident that this full subcategory is equivalent to Ext(H, N ) and for the remainder of the paper we will choose to perform calculations in this subcategory for simplicity. We will discuss how this can be done coherently when we investigate the Ext 2-functor in Section 5. We can now give a concrete description of the behaviour of morphisms of extensions. Suppose that (Ψ, α, β, γ) : Γ H,N (F 1 ) → Γ H,N (F 2 ) is a morphism of adjoint extensions as in the following diagram. Since Ψ preserves finite limits, we can use Proposition 3.13 to completely determine its behaviour. Every object in Gl(F 1 ) can be written as the following pullback diagram, where objects in the category are represented by the green arrows pointing out of the page and the pullback symbol has elongated into a wedge. Note that the front and back faces are pullback squares in N and the other faces correspond to morphisms in Gl(F 1 ). We may now study how Ψ acts on this pullback diagram. Observe that the bottom face corresponds to the morphism π F 1 1 * ( ) where : N → F 1 (H). It is then sent by Ψ to the morphism represented in the diagram below. The pullback of these two faces will then give the image of (N, H, ) under Ψ. The pullback diagram is given by the large cuboid in the diagram below. Here we have factored this pullback as in the similar pullback diagram in the proof of Proposition 4.7. The bottom face of the bottom left cube and the right face of the top right cube are the commutative squares considered above. These have also been extended by the identity maps in the bottom right cube, so that the bottom and right-hand faces of the full cuboid are as required for the pullback in question. The bottom left cube commutes by the naturality of α, while the top right cube commutes by the definition of ψ = Γ −1 H,N (Ψ) as in Proposition 4.8. Since α and γ are isomorphisms, the top left cube is also a pullback. Recall that the front and back faces are then also pullbacks. Since the top face of the top left cube must commute, we find that the green arrow we seek is given by p ψ H ( ) . Hence, Ψ(N, H, ) is isomorphic to (N × F 1 (H) F 2 (H), H, p ψ H ( )). Of course, every natural transformation ψ : F 2 → F 1 yields a morphism of extensions defined by such a pullback. For the associated natural isomorphisms we may take β to be the identity and α to be ( α, id) where α is defined by the diagram below. Finally, we take γ = ( γ, id) where γ is specified by the diagram below. 
We now end this section with what is perhaps a surprising result about morphisms of extensions. Proof. Let ψ : F 2 → F 1 be the natural transformation associated to Ψ. We can construct a functor Ψ * : Gl(F 2 ) → Gl(F 1 ) which sends (N, H, ) to (N, H, ψ H ) and leaves morphisms 'fixed' in the sense that (f, g) : , which may be seen to be a morphism in Gl(F 1 ) using the naturality of ψ. We claim that Ψ * is left adjoint to Ψ. To see this we consider the candidate counit ε N,H, = (ε , id H ), where ε is defined as in the following pullback diagram. We must show that given a morphism (f, g) : there exists a unique morphism ( f , g) : g). We will construct this map using the following diagram. Here the maps out of N 1 form a cone as we have ψ H 2 F 2 (g) 1 = F 1 (g)ψ H 1 1 = 2 f , where the first equality follows from naturality of ψ and the second from the fact that (f, g) is a morphism in Gl(F 1 ). By the universal property we have that l 2 f = F 2 (g) 1 , which means that ( f , g) is a morphism from (N 1 , H 1 , 1 ) to (N 2 , H 2 , 2 ) in Gl(F 2 ). It is immediate from the diagram that (ε 2 , id H 2 ) • ( f , g) = (f, g) and it is also not hard to see that this is the unique such morphism. Thus, Ψ * is indeed left adjoint to Ψ. Finally, we must show that Ψ * preserves finite limits. This follows immediately from the fact that finite limits in the glueing may be computed componentwise. Remark 4.14. Notice that Ψ * is in fact a morphism of non-split extensions in the sense that it commutes with the kernel and cokernel maps up to isomorphism. However, it does not commute with the splittings unless Ψ is the identity. Ext(H, N ). In [11] it was shown for frames H and N that there was something akin to a Baer sum of extensions in Ext(H, N ). It is natural to ask if something analogous occurs in the category Ext (H, N ). Indeed, it is not hard to see via the equivalence with Hom(H, N ) op that Ext (H, N ) has all finite colimits. The following functor will help us compute these colimits. For the action on morphisms, let (Ψ, α, β, γ) be a morphism of extensions and let ψ : K * 2 E 2 * → K * 1 E 1 * be the corresponding natural transformation. H, ψ H ) and (f, g) to (f, g). It is immediate that the necessary diagram for this to be a morphism in the slice category commutes. We must demonstrate that this cone satisfies the universal property. Suppose we have some other cone (Ξ i : C → Gl(K * i E i * )) i∈J and consider the following diagram in Cat where Ψ = D(f ) for some morphism f : i → j in J . Colimits in Since each Ξ i is a morphism in Cat/(N ×H) we have that it commutes with the ! maps. This means that the Ξ maps all agree on the first two components. If we assume that Ξ k (C) = (N C , H C , k C ), then Ξ i = Ψ * Ξ j gives i C = ψ H C j C . Now consider the following diagram in N where we make use of the aforementioned limiting cone in Hom(H, N ). Here we use the universal property of R componentwise at H C to produce the map C . This allows us to construct a map S : C → Gl(R) with S(C) = (N C , H C , C ). As for morphisms, now note that each Ξ k sends f : C → C to the 'same' pair (f 1 , f 2 ) and we define S to act on morphisms in the same way. The pair (f 1 , f 2 ) can be seen to be a morphism in Gl(R) from S(C) to S(C ) by considering the above diagram in the functor category and then using the naturality of : π 1 Ξ k → Rπ 2 Ξ k . This morphism S is the desired map and is easily seen to be unique. 
From the above it is clear that M preserves limits and that every limiting cone of M D is isomorphic to one of the form Hom(H, N ). For M to create limits, it remains to show that every cone of D which maps to a limiting cone of M D is isomorphic to one of the above form. This follows since M is conservative. Notice that the limit diagram was embedded into the slice category so that each Ξ in the proof would agree on the first two components. If the limit diagram is connected, this will happen automatically and so we obtain the following corollary. A disconnected (co)limit is the subject of the following example. . Since products in a slice category correspond to pullbacks, we may construct this coproduct using the following pullback in Cat. If ! P is the composite morphism from P to N × H, then the coproduct extension may be recovered as N P H (π 1 ! P ) * π 2 ! P (π 2 ! P ) * . The Ext functor Given that we have established that Ext(H, N ) is equivalent to Hom(H, N ) In other contexts (for instance, see [3]) the extension functor can be obtained from a fibration. In the protomodular setting, we start from the 'fibration of points' sending split epimorphisms to their codomain. In the more general setting of S-protomodularity (see [6]) we consider only a certain subclass of split epimorphisms. This suggests we consider a 2-fibration sending open subtopos adjunctions to the codomain of their inverse image functors. A categorification of the Grothendieck construction (see [7]) gives that 2-fibrations correspond to 3-functors into 2Cat. Fortunately, aside from motivation, we will largely be able to avoid 3-functors for the same reasons that Ext(H, N ) is essentially a 1-category (Corollary 4.5). While the paradigmatic example of a fibration is the codomain fibration, which maps from the whole arrow category to the base category, the domain of the analogous 2-fibration is restricted to the category of fibrations. See [7] for more details on 2-fibrations. The fibre 3-functor Top co op lex → 2Cat corresponding to the 2-fibration Cod : Fib Top lex → Top lex can be described as follows (omitting the description of the coherence data for simplicity): • On objects it sends a topos E to the slice 2-category of finite-limit-preserving fibrations from toposes to E. • On 1-morphisms it sends a finite-limit-preserving functor T : E → E to the 2-functor T corresponding to pulling back along T . -Then we may use the fibration property of E to lift the natural transformation τ S (E) to a natural transformation into P S . Explicitly, for an object X ∈ D S , the morphism τ S (E)(X) : T (S (E)(X)) → S(S (E)(X)) can be lifted to a morphism in D with codomain P S (X). These lifted morphisms assemble into a natural transformation τ from a new functor L : D S → D to P S . This functor sends a morphism f : X → Y to the morphism obtained by factoring P S (f )τ X through τ Y as shown in the diagram below. L(X) E -Finally, consider the 2-pullback of E and T and note that the maps L : D S → D and S (E) : D S → E form a cone as shown below. Thus, we may factor these through P T and T (E) respectively to obtain a functor from D S to D T . This is the desired functor τ E : S (E) → T (E). The coherent set of 2-isomorphisms for the 2-natural transformation can also be obtained by the cartesian property of the lifted maps and universal property of the 2-pullback. We can then easily modify this to describe the fibre 3-functor for the 2-fibration of (open) points. 
Moreover, to obtain Ext(−, N ) : Top co op lex → 2Cat we restrict to inverse image functors of open subtoposes (equipped with right adjoint splittings) with fixed kernel object N . The above discussion restricts easily to this case, since these functors are stable under pullback along finite-limit-preserving functors and the relevant morphisms can be shown to be morphisms of extensions. To see this we will use the following folklore result, which we prove here for completeness. Proof. We must first describe Q. Consider the following comma object diagrams. Now we paste our candidate 2-pullback square to our other comma object diagrams as follows. Since π F 1 Q = π F T 1 , we see that the pasting diagram above is just the comma object diagram corresponding to Gl(F T ). It follows that the upper square is a 2-pullback in a similar manner to (the converse direction of) the pullback lemma. We will now describe the Ext 'functor' explicitly and at the same time demonstrate its relationship to Hom. as in the following diagram. Gl(F T ) The functor Ext(T, N ) acts on morphisms via the universal property of the 2-pullback. Let Ψ be a morphism of extensions corresponding to the natural transformation ψ : F 2 → F 1 and consider the following diagram, noting that T π F 1 T N N F 2 T (H) This is readily seen to be the same pullback which determines Γ H ,N (ψT ) (N, H, ). Finally, we discuss how Ext(−, N ) acts on 2-morphisms. We follow the construction outlined above for the codomain 2-fibration. Let τ : T → T be a natural transformation. Then we describe the natural transformation Ext(τ, N ) : Ext(T , N ) → Ext(T, N ) componentwise. Without loss of generality we may describe each component at extensions of the form Γ H,N (F ). Consider the following diagram. As discussed in the case of the codomain fibration, we may define a functor L τ as follows. We N , H, ). It remains to show that this gives a morphism of split extensions, but it is clear from the above pullback diagram that this functor is the morphism of split extensions corresponding to τ F (which itself is equal to Hom(τ, N ) F ). The morphisms Ext(τ, N ) Γ H,N (F ) define the desired natural transformation Ext(τ, N ). (Naturality follows from the interchange law or from the general theory of the codomain 2-fibration.) The 2-functor Ext(−, N ) composes strictly, and for the unitors observe that Ext(Id H , N ) is equal to Γ H,N Γ −1 H,N : Ext(H, N ) → Ext(H, N ) so that we make take as our unitors the unit of the adjunction Γ H,N . The necessary 2-functor axioms can then easily be seen to hold. Remark 5.2. The above argument also proves that the 2-functor sending adjoint extensions with fixed kernel N to their cokernels is a 2-fibration. Proof. The equality in point (2) is clear by inspection of the definition of Ext(T, N ). The proof of the coherence conditions is easy. In particular, the first coherence condition is satisfied because each morphism of the diagram is the identity. Similarly, for the second condition again all morphisms are the identity (though marginally more work is required to show that the unitor at H whiskered with Γ H,N is in fact the identity). We turn our attention to the functor Ext(H, −) : Top co lex → Cat. We could not find an elegant description of this in terms of a 2-fibration. However, we believe a reasonable definition can be given by dualising our arguments for Ext(−, N ). Naturally, for an object N we have that Ext(H, N ) is just the category of adjoint split extensions. 
Thus, fixing particular pushouts we can describe Ext(H, S) concretely as sending an extension As mentioned above, Ext(H, S) should act on morphisms by the universal property of the pushout. Let Ψ = (Ψ, α 1 , β 1 , γ 1 ) be a morphism of extensions and consider the 2-cocone given by P 2 Ψπ F 1 1 * 1 * Id N S as in the following pasting diagram (where we omit the 2-morphisms do avoid clutter). Here the front, back and left faces commute strictly and the top face has associated invertible 2-morphism α −1 1 . The left-hand square commutes by naturality of σ and the right-hand square commutes since (f, g) is morphism in Gl(F ). Now the σ N arrange into a natural transformation σ : L σ → P S . If things are to behave dually, we should have L σ factor through P S . By the naturality of σ we have that the following diagram commutes. We might now hope to take Ext(H, σ) F (H) to be this resulting factor Γ H,N (σF ) * . However, this map goes in the 'wrong' direction. We can remedy this by taking the right adjoint and setting Ext(H, σ) F (H) = Γ H,N (σF ). In order to specify Ext(H, −) completely, it only remains to discuss the compositors and unitors. As before we have that Ext(H, −) composes strictly and we take the unitors to be the unit of the adjunction Γ * H,N Γ H,N in Theorem 4.12. We can express the relationship between Ext(H, −) and Hom(H, −) as follows. Proof. Just as before, the equality in point (2) is clear by inspection of the definition of Ext(H, S) and the necessary coherence conditions hold, because each involved morphism is an identity. A bifunctor theorem for 2-functors was discussed in [10] and gives the precise conditions that allow two families of 2-functors M B : C → D and L C : B → D to be collated into a bifunctor P : B × C → D for which P (B, −) is isomorphic to M B and P (−, C) is isomorphic to L C . These conditions are that L C (B) = M B (C) and that for each f : B 1 → B 2 in B and g : C 1 → C 2 in C there exists an invertible 2-morphism χ f,g : L C 2 (g)M B 1 (f ) → M B 2 (f )L C 1 (g) satisfying certain coherence conditions reminiscent of those for distributive laws for monads. Such families together with χ are called a distributive law of 2-functors. Thus, we may apply the results of [10] to arrive at the 2-functor (Ext, ω, κ) : Top op lex ×Top lex → Cat defined below. N ). It is shown in [10] that 'morphisms between distributive laws' can also be collated to give 2-natural transformations between the corresponding bifunctors. The 2-natural equivalences in Theorems 5.3 and 5.5 can be collected into a 2-natural equivalence Γ : Ext → Hom op provided that Γ N H = Γ H N and the Yang-Baxter equation holds. These conditions are immediate in our setting and so we obtain the following theorem.
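In the formulation of the abstract, the statement obtained is the following. Theorem. There is a 2-natural equivalence Γ : Ext → Hom op ; in particular, for all toposes H and N the category Ext(H, N ) is contravariantly equivalent to Hom(H, N ), naturally in both variables.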
2020-12-10T02:15:51.994Z
2020-12-09T00:00:00.000
{ "year": 2020, "sha1": "772572450dd3a10e75c51bf1809a52a77e89cda5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.jpaa.2022.107273", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "772572450dd3a10e75c51bf1809a52a77e89cda5", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
119246363
pes2o/s2orc
v3-fos-license
Emergent phenomena in multicomponent superconductivity: an introduction to the focus issue Multicomponent superconductivity is a novel quantum phenomenon in many different superconducting materials, such as multiband ones in which different superconducting gaps open in different Fermi surfaces, films engineered at the atomic scale to enter the quantum confined regime, multilayers, two-dimensional electron gases at the oxide interfaces, and complex materials in which different electronic orbitals or different carriers participate in the formation of the superconducting condensate. In all these systems the increased number of degrees of freedom of the multicomponent superconducting wave-function allows for emergent quantum effects that are otherwise unattainable in single-component superconductors. In this editorial paper we introduce the present focus issue, exploring the complex but fascinating physics of multicomponent superconductivity. Very soon after the formulation of the Bardeen-Cooper-Schrieffer (BCS) theory, the prediction of multiband superconducting systems and the first extension of the BCS theory to two-band/two-gap superconductors were offered by Suhl, Matthias and Walker [1]. The discovery of superconductivity in MgB 2 in 2001 marked the formal appearance of the new class of superconductors: multiband/multigap ones, to which many of the recently discovered iron-based superconductors also belong. Multigap superconductivity arises when the gap amplitudes on different sheets of the Fermi surface are radically disparate, e.g. due to different dimensionality of the bands for the usual phonon-mediated pairing, as in the case of MgB 2 , or due to the repulsive interband pairing interactions, as in the case of most iron-based superconductors, or due to the appearance of multiple Fermi-surface pockets dictated by the crystalline symmetry, as in FeSe x Te 1−x and other unconventional multiband superconducting compounds. Multiband/multigap superconductivity is thus emerging as a complex quantum coherent phenomenon with physical consequences which are different from, or cannot be found at all in, single-band superconductors. The cross-pairing between bands is energetically disfavored, hence multiple coupled condensates coexist and govern the overall superconducting behavior. The increased number of degrees of freedom allows for novel effects which are unattainable otherwise. Without claiming completeness, in what follows we point out some recent revelations based on multicomponent quantum physics in superconductors. 
As mentioned above, the first two-gap superconductor with clearly distinct gaps in different bands was MgB 2 (for review, we refer to [2, 3]). Even for this well-known material (studied for over a decade, with thousands of papers published), microscopic parameters and resulting superconducting length scales [4-6] and magnetic behavior [7] are under ongoing debate. In that respect the behavior of vortex matter is crucial, since experimentally observed vortex states can serve as a smoking gun for the underlying physics. Cases where different condensates have very different electromagnetic properties, or coupling between them causes unconventional behavior (e.g. stemming from frustration [8,9]), can lead to novel vortex (nonuniform [10][11][12], fractional [13][14][15] and skyrmion [16,17]) patterns -impossible otherwise. Phase solitons [18] and massive or massless Leggett modes [19,20] are also possible benchmarks for multi-gap superconductivity, being associated with nontrivial phase differences between the condensates in different electronic bands, as detailed in Ref. [21]. In that regime, the timereversal symmetry breaking is possible, and can even survive moderate disorder (see Ref. [22]). Such states have not been observed to date, but could be experimentally accessible in multiband iron pnictides and chalcogenides. The latter iron-based materials form the other predominantly multi-band/gap family of superconductors (as probed by muon spin rotation [23], spectroscopic measurements [24], point-contact Andreev-reflection spectroscopy [25], magnetization and transport [26], measurements of the penetration depth [27], etc.), but are much richer in novel physics than transition metal-borides. For one thing, the Cooper-pairing there is likely to be driven by electron-electron interactions, rather than the conventional electron-phonon coupling. Further, in iron-based superconductors experiments showed that weakly and strongly correlated conduction electrons coexist in much of the phase space, hence multiorbital physics becomes necessary to understand the phase diagram of these materials [28]. Another interesting example is found in e.g. Ba 0.6 K 0.4 Fe 2 As 2 (T c =37K), where two different s-wave gaps open in the different sheets of Fermi surface (FS): a large gap of ∆ 2 =12 meV on the small FS and a small gap ∆ 1 =6 meV in the large FS [29]. The ratio 2∆ 1 /T c =3.7 is very close to the BCS value of 3.5, indicating BCS-like weakly coupled pairs in the large FS, while 2∆ 2 /T c =7.5 is very large and typical of BEC-like strongly coupled pairs in the small FS. Hence, the total superconducting condensate in Ba 0.6 K 0.4 Fe 2 As 2 is a coherent mixture of BCS-like and BEC-like partial condensates (see also Ref. [30]). Actually, various subgroups of iron-based superconductors show small Fermi surfaces at optimum doping where T c is the highest, appearing when the chemical potential is near a band edge, close to the bottom (if electron like) or top (if hole like) of the energy bands [31]. In this situation, experiments show no evidences for nesting topology and the mechanism for high-T c can be associated with the shape resonance scenario [32]. We refer to Ref. 
[32] for the schematic representation of the topology of Fermi surfaces for different superconducting iron-based materials, showing that in all cases large Fermi surfaces coexist with small Fermi surface pockets, supporting the two-band model for superconductivity as the minimal model to capture the band-edge physics and corresponding novel multi-band BCS-BEC crossover phenomena (see Ref. [33]). This crossover regime has been recently detected by the collapse of the small Fermi surface pocket and electronic band dispersion becoming an inverted parabola in the coherent state [34]. The multiband BCS-BEC crossover can thus determine the best situation for high-T c superconductivity, but also determine the optimal condition to allow the screening of superconducting fluctuations [35]. Fluctuations in recent multiband materials are an interesting study object on their own (for example, Ref. [36] reports the peculiar differences between single-band and multi-band superconductors concerning the anisotropy of the fluctuations effects above and below T c ). As shown in Ref. [37], fluctuations are very sensitive to the discrepancy in coherence lengths between the band-condensates, which in turn are most different in the presence of a shallow band (as discussed above) or in the vicinity of hidden criticality [38], both of which are likely to be found in iron-based superconductors. Interestingly, multiband superconductivity can also be induced by nanoengineering, even in elementary metals. Thanks to recent breakthroughs in nanofabrication, highquality ultrathin films of Pb, Sn, In, Nb are now attainable, where multiband superconductivity is induced by confinement even though the bulk material is single-band (for brief review and theoretical challenges see Ref. [39]). Namely, as first shown by Blatt and Thompson [40], separate single-electron bands are formed due to quantum confinement, and can significantly shift in energy depending on the sample thickness. This leads to major changes of key superconducting properties each time when the bottom of a new single-electron band passes through the Fermi surface. Quantum confinement and shape resonances in nanoscale systems can induce sizeable T c amplifications [43,44], experimentally confirmed in metallic nanowires of Al and Sn [45]. Ultrathin high-quality metallic films are integrable in electronic circuits [41], and will be highly responsive to applied electric field [42] -a desirable feature for any transistor for example. Furthermore, ultrathin multiband superconductors can also host the BCS-BEC crossover, in analogy to the above case in iron-based superconductors, and are strongly influenced by fluctuations -which crosslinks the physics of these two rather different superconducting systems. Note that on the level of fundamental physics, here discussed issues are very closely related to multi-subband superfluidity in confined ultracold Fermi gases (see Ref. [46]) and electric-field-induced surface superconductivity in oxides [47]. In all above systems, understanding the underlying physics and the effects of hybridization between multiple quantum condensates in a single system, is clearly a pathway to yet unseen phenomena in fundamental science and practical achievements that can lead to future applications. 
This is the key objective of the current Focus Issue of Superconductor Science and Technology and of the conference series 'MultiSuper' (the first conference in this series was held in Lausanne (Switzerland) in 2012, with a sequel held in Camerino (Italy) in 2014 [48]). The scope of this initiative is broader than just multiband superconductors, and comprises other forms of multicomponent superconductivity, e.g. that found in artificial multicomponent coherent systems (e.g. multilayers [49]), multicarrier superconductivity at oxide interfaces [50], or superconducting materials with multiple competing orders or symmetries of the order parameter [51,52], a complexity certainly present in many copper-oxide, iron-pnictide, and heavy-fermion superconductors. Altogether, we hope that this Focus Issue will provide an illustrative snapshot of the current state of research in multicomponent superconductivity in a wide range of materials, and a roadmap for further investigations in this booming field.
2015-04-27T09:31:58.000Z
2015-04-24T00:00:00.000
{ "year": 2015, "sha1": "84bd364855880fe4ec581f2f6bc5773a9a9eb9ed", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1504.06995", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "84bd364855880fe4ec581f2f6bc5773a9a9eb9ed", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
30955993
pes2o/s2orc
v3-fos-license
Primary Intestinal Lymphangiectasia and its Association With Generalized Lymphatic Anomaly Background: Lymph is a fluid originating in the interstitial spaces of the body that contains cells, proteins, particles, chylomicrons, and sometimes bacteria. It enters the lymphatic system, a complex network of fine vessels with unidirectional valves, and gains access to the lymph nodes before joining the cisterna chyli (CC). The lymph then reaches the thoracic duct (TD), which drains into the major circulation system. A large proportion of the total amount of lymph, called chyle, originates in the abdominal organs, particularly the intestine and the liver (1). As Mulliken et al. suggested in 1982 (2), the nomenclature of congenital vascular anomalies is the greatest obstacle to understanding and managing them effectively. Although currently unused, congenital lymphatic anomalies have historically been classified according to their anatomopathological characteristics (3)(4)(5); however, these classifications sometimes overlap and are generally quite confusing. To obtain a homogeneous classification and to promote its use, the International Society for the Study of Vascular Anomalies (ISSVA) published a classification scheme in 1996, which was expanded and updated in 2014 (6). Primary intestinal lymphangiectasia (PIL) is a rare entity first described by Waldmann in 1961 (7). Its general prevalence is unknown, since less than 500 cases have been reported worldwide. PIL was traditionally thought to have been caused by a congenital intestinal lymphopathy featuring dilated intestinal lacteals, resulting in lymph leakage into the small bowel lumen responsible for protein-losing enteropathy leading to lymphopenia, hypoalbuminemia, and hypogammaglobulinemia. It can appear in isolation or in association with other extraintestinal lymphatic anomalies. The diagnosis requires an endoscopic and histologic confirmation of the lymphatic anomaly. The keystone of treatment is a low-fat diet (8,9). 
Generalized lymphatic anomaly (GLA), which is synonymous with "generalized cystic lymphangiomatosis," "cystic angiomatosis," or "lymphangiomatosis," has systematically been reported in the literature to have initially been described by Redenbacher in 1828 (wrongly referred to as Rodenberg in most reports) (10,11). However, this account is not true, because in his 1828 thesis De Ranula Sub Lingua, concerning a lymphatic malformation, Redenbacher referred to a ranula without implying the existence of any lymphatic pathogenesis (12). Thus, the first description of a GLA was actually provided by Milligan in 1926 (13), and the first description with bone involvement was delivered by Harris and Prandoni in 1950 (14). GLA is a rare multisystem disorder that is characterized by diffuse infiltration of common lymphatic malformations (LMs) in any tissue with lymphatic vessels (3). Its general prevalence is unknown, since less than 200 cases have been reported worldwide, and its diagnosis and treatment still remain challenging. Among the scientific community, the belief is widespread that each symptom of congenital lymphatic anomalies is a primary entity. However, the nomenclature used frequently overlaps, and is in many cases confusing. This nomenclature is based on the established classifications of congenital lymphatic anomalies, which are based, above all, on histology (3)(4)(5). An updated classification scheme was adopted by the ISSVA in 2014 (6) (Box 1). The ISSVA 2014 classification scheme is not yet widely used by the scientific community. According to this classification, common LMs correspond to the previously misnamed lymphangioma, due to an improperly developed lymphatic system. On the other hand, channel-type LMs are entities that are the result of an obstruction, aplasia, or defect in the chyle evacuation process. GLA is a generalized lymphatic disorder with visceral involvement, osteolysis, and/or central conducting lymphatic anomalies. Gorham's syndrome is characterized by osteolysis with cortical destruction. It is essential to differentiate between these various lymphatic malformations because their morbidity rates and treatment methods differ according to the type. 
Several disorders belong to the channel-type LMs group, including chylothorax, chylous ascites, lymphangiectasia with protein-losing enteropathy in the context of an LM (previously called PIL), chylopericardium, and chyluria. According to this classification, intestinal lymphangiectasia must not be considered a primary disorder. Moreover, because it can be part of a generalized lymphatic disorder, such as GLA, it must be named lymphangiectasia with protein-losing enteropathy in the context of an LM. The same affirmations are applicable to primary chylothorax and primary chylous ascites. Along these lines, Servelle (15) observed 120 patients with congenital malformations of the intestinal lymphatic vessels by intestinal lymphography and found that the malformations were secondary to hypoplasia of the CC and to anomalies in the mesenteric nodes. Consequently, the intestinal vessels could not drain effectively, dilating and losing their valve function and thus allowing chyle to reflux. When one of these lymphatic vessels of the mesentery or the gastrointestinal wall was dilated excessively, it broke toward the abdominal cavity, producing chylous ascites. When the dilated vessel was in the intestinal mucosa and broke toward the lumen, protein-losing enteropathy was produced. Due to the hypoplasia of the CC, the chyle absorbed by the intestine had to drain through diaphragmatic collaterals that could dilate and break toward the pleural cavity or pericardium, producing chylothorax or chylopericardium.

Following the ISSVA classification, it is necessary to replace the term "diffuse lymphangiomatosis" with "generalized lymphatic anomaly." The suffix "oma" implies increased endothelial turnover. However, as Meijer-Jorna et al. (16) and Dellinger (17) have shown, there is no cellular proliferation in LMs.

Objectives

Our purpose is to show that primary intestinal lymphangiectasia (PIL) is a secondary event which results from a disruption of lymphatic circulation in the context of generalized lymphatic anomaly. Increasing knowledge of the pathology of this entity could improve its treatment in the future.

Materials and Methods

This is a case series and record review of 21 patients with intestinal lymphatic involvement who were diagnosed and/or followed up on in a tertiary hospital between 1965 and 2013. The diagnoses included in this study were PIL, primary chylous ascites, intestinal LM, and protein-losing enteropathy in the context of an associated LM. Most patients received different diagnoses depending on the specialist in charge (e.g., gastroenterologist, radiologist, pathologist, general practitioner, or surgeon). We kept the original nomenclature on each patient's file for the first diagnosis, taking into consideration that all of them clinically presented with evident protein-losing enteropathy and matched the typical clinical course of the so-called primary intestinal lymphangiectasia. Patients with any of these entities secondary to pathology other than a primary LM were excluded from consideration, as were patients with a primary LM but without intestinal lymphatic involvement. Access to medical records was a limiting factor: 5 patients' records had already been destroyed due to the significant length of time since their death.
Informed consent was formally obtained from each patient included in the study and was available in each patient's records. The study protocol conforms to the ethical guidelines of the 1975 Declaration of Helsinki as reflected in a priori approval by our institution's human research committee.

The following data were systematically collected from the patients' files: demographic information, clinical symptoms, complications (growth, digestive symptoms, frequency of infections, tetany, associated LM, edema, thrombosis, coagulopathy, and osteolysis), diagnostic tools (blood parameters, imaging studies, endoscopy, and biopsies), and treatments. The blood parameters analyzed included total proteins, albumin, lymphocytes, calcium, cholesterol, immunoglobulins, stool fat, and α1-antitrypsin. The chylous effusion diagnosis (chylothorax or chylous ascites) was based on the findings in the liquid obtained by puncture: milky aspect, triglyceride levels > 110 mg/dL, and presence of chylomicrons.

A histologic diagnosis was made for each patient, complementing regular hematoxylin-eosin stains with immunohistochemistry for D2-40, a monoclonal antibody to an Mr 40,000 O-linked sialoglycoprotein, which is a selective marker of lymphatic endothelium (18). In these cases, paraffin-embedded tissue blocks were selected, and sections from the blocks were cut and placed on glass slides coated with 3-aminopropyltriethoxysilane. They were then incubated with the human D2-40 monoclonal antibody (Signet Laboratory, Dedham, MA, USA) at 1:200 dilution for 60 minutes at room temperature, then with biotinylated anti-mouse immunoglobulin for 15 minutes, and then with avidin-biotin complex reagent (LSAB kit, Dako, Carpinteria, CA) for another 15 minutes. After these procedures, they were reacted with a 3,3'-diaminobenzidine tetrahydrochloride (Mutoukagaku, Tokyo, Japan) solution and 0.01% (weight/volume) hydrogen peroxide for 2 to 5 minutes at room temperature, and counterstained with hematoxylin.

The intranodal lymphography was performed by first positioning the patient on the radiologic table. Then, under ultrasonography guidance, both the right and left inguinal nodes were punctured with a 21 G catheter. Lipiodol was injected into the inguinal node by a slow-speed pump (infusion rate 0.21 mL/s) with an approximate total dose of 4 mL. Next, radiographs were taken in anteroposterior and posteroanterior projections every 15 minutes for the first hour, and at 12 and 24 hours afterward.

In the intradermal lymphoscintigraphy, the first step was the subcutaneous injection of the radiotracer (99mTc-nanocolloid or 99mTc-MAA, dose 37-185 MBq) in the first interdigital space of the lower limb with a 25-30 G catheter. The patients were then asked to walk. The images were made with anteroposterior and posteroanterior projections using a gamma camera at 20 minutes, and at 2 and 24 hours afterward. Some diagnostic procedures were not available when some patients were first diagnosed, and were therefore performed during their follow-up instead.

The data were analyzed with SPSS Statistics 17 (multilanguage). Mean, median, and standard error were used for quantitative variables, and absolute and relative frequencies for qualitative ones.
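As a purely illustrative aside, the chylous-effusion criterion described above can be written as a small decision rule. The sketch below is hypothetical: the function name and arguments are invented, and it assumes the three findings (milky aspect, triglycerides > 110 mg/dL, chylomicrons) are required jointly, which the text does not state explicitly.

def is_chylous_effusion(milky_aspect, triglycerides_mg_dl, chylomicrons_present):
    """Return True when a punctured effusion meets the chylous criteria described above."""
    return milky_aspect and triglycerides_mg_dl > 110 and chylomicrons_present

# example: a milky fluid with triglycerides of 240 mg/dL and chylomicrons present
print(is_chylous_effusion(True, 240, True))  # True -> compatible with chylothorax or chylous ascites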
Demographic Data and Clinical Evolution A total of 21 patients with PIL were enrolled for analysis, of whom 11 were male and 10 were female.Demographic data and clinical evolution are presented in Table 1.Ten patients had been diagnosed before 5 years of age (1 prenatally), 8 patients at between 5 and 18 years of age, and 3 patients at older than 18 years of age.The follow-up period varied between 1 and 34 years (median 6).One patient had Noonan syndrome, and two patients were siblings. Patients presented with associated LM (16), diarrhea (10), chylothorax (11), chylous ascites (10), pericardial effusion (6), coagulopathy (3) and osteolysis (7).Two patients presented with thrombosis (one in the superior mesenteric vein and the other in the right jugular vein).The right jugular vein thrombosis could be explained by prolonged vessel catheterization for parenteral nutrition.In the patient with thrombosis in the superior mesenteric vein, no related cause was identified.As for analytical evolution, all of the patients but 5 presented with a pattern of chyle loss (hypoproteinemia, hypoalbuminemia, lymphopenia, hypogammaglobulinemia and tendency to hypocalcemia); 7 presented with steatorrhea and 11 with fecal loss of protein. Endoscopy and Histological Study An endoscopy and a histological study were performed on each of the patients, resulting in the diagnosis of intestinal lymphangiectasia in 11 patients.A single endoscopy was sufficient for all of the patients except for 2, for whom this diagnosis was not obtained until the third endoscopy.The previous histological diagnoses were either nonspecific chronic duodenitis or no alterations.An exploratory laparotomy was performed on only one patient (patient 12), in whom a profuse milky liquid was observed in the abdominal cavity.Multiple biopsies were collected with no significant result.The examination was compatible with intestinal lymphangiomatosis. A biopsy of extraintestinal lesions was performed on 9 patients.Samples were acquired from the skin, pleura, masses of several locations, and/or bone.Of these patients, 4 were diagnosed histologically with LM using a single biopsy; in 2 patients, it was necessary to repeat the biopsy to reach this diagnosis.Another 2 patients required reevaluation of the histological study once the case was clarified by the clinical course and imaging, thus confirming the diagnosis, and in 1, the biopsy was inconclusive. Intradermal Lymphoscitigraphy Intradermal lymphoscintigraphy of one or several limbs was performed on all of the patients.The findings were as follows: -No significant alterations: 5 patients (24%). Intranodal Lymphography Intranodal lymphography was performed on 8 patients due to the difficulty of controlling chylous effusions.Lymphatic obstruction was identified in 6 of these patients, in 2 of whom the obstruction had already been detected by lymphoscintigraphy.Therefore, a total of 12 patients were identified as having lymphatic obstructions (57%). 
Diagnosis In the medical records, the first 8 patients were classified with PIL, with or without chylous effusions and/ or limb lymphedema and/or genital lymphedema in an independent diagnosis.Patient 9 was diagnosed with primary chylous ascites and associated LM.At first, patients 10, 11, and 12 were diagnosed with PIL, and their diagnosis changed to lymphangiomatosis when chylous effusions and LM appeared during their evolution.Patients 13 -16 were diagnosed with lymphangiomatosis.Initially, patients 17, 19, 20 and 21 had been diagnosed with Gorham's syndrome, which was changed afterwards to lymphangiomatosis.Patient 18 was first diagnosed with lymphoma, then with Gorham's syndrome, and finally was diagnosed as having lymphangiomatosis. Treatments All of the patients were administered a low-fat diet supplemented with medium-chain triglycerides (MCT), vitamins, and calcium; other treatments are shown in Table 2. Groups According to the Lymphatic Involvement According to the lymphatic involvement, our patients could be classified into two groups: Group 1 includes patients without evidence of soft tissue LM, and group 2 includes patients with evidence of soft tissue LM.The first 5 patients presented with diarrhea and without LMs or osteolysis, and 2 patients had chylous effusions.Patient 5 had Noonan syndrome.In all of the patients, only a single endoscopy was required for them to be diagnosed with PIL.The thoracoabdominal MRIs showed no LM or osteolysis.The intradermal lymphoscintigraphy was normal in 1 patient and detected obstructions in 4 patients.All of the patients were diagnosed as having PIL with or without chylous effusions as an independent and primary diagnosis.They were all stable with conservative treatment, except for patient 5, in whom several treatments were attempted: tranexamic acid with octreotide, propranolol, thalidomide, and recently, sirolimus. The remaining 16 patients presented with LMs.Of these, 5 had diarrhea, 11 had chylous effusions, and 7 had osteolysis.Six patients were diagnosed as having PIL by endoscopy and histological study, with a second and third endoscopy being required for 2 of the patients.One patient required the exploratory laparotomy previously described.The thoracoabdominal MRIs showed associated LMs in all patients and bone lesions in 7. The intradermal lymphoscintigraphy was normal in 3 patients, detected obstructions in 5 patients, and revealed lymphedema in 9 patients.Intranodal lymphography was performed on 8 patients because of difficulty in controlling chylous effusions.In 6 of these patients, lymphatic obstructions were identified; of these patients, the obstruction had already been observed by lymphoscintigraphy in 2. The first 3 patients were classified as having PIL, with or without chylous effusions and/or limb lymphedema and/ or genitals lymphedema as an independent diagnosis.Patient 9 was diagnosed with primary chylous ascites and associated LM.The rest of the patients had finally been diagnosed with lymphangiomatosis.All of the patients received other therapies in addition to conservative treatment.At the end of the follow-up period, 7 patients were stable; all were alive at the end of the follow-up period, except for patient 15, who died of chylothorax complications. 
Discussion We have analyzed central conducting lymphatic anomalies, evaluating their repercussions on the digestive system, and we have reclassified their lymphatic involvement in the function of primary lymphatic disorder according to the ISSVA 2014 classification scheme.Furthermore, as one of the great contributions of this study, we have demonstrated that intestinal lymphangiectasia is not a primary entity, but is rather part of the clinical spectrum of TD disruption. Need of a New Classification Our study is a good example of the confusion that the classic nomenclature creates.Some patients had the same symptoms but different diagnoses.For example, patients 6, 8, 10, and 12 had the same symptoms of proteinlosing enteropathy, chylous effusions, and LM; patients 6 and 8 were diagnosed with PIL, chylous effusions, and LM as primary and independent diagnoses, and patients 10 and 12 were diagnosed with diffuse lymphangiomatosis.In the same way, patients 17, 18, 19, 20, and 21 were first classified as having Gorham's syndrome, but the diagnoses were eventually changed to lymphangiomatosis.Therefore, a new nomenclature based on a more comprehensive classification scheme is clearly necessary.According to the ISSVA 2014 classification, we believe that all of our patients had channel-type LM and/or GLA.The first 5 patients only had intestinal involvement, whereas the other 6 patients had osteolysis and involvement of the intestines, soft tissue, and viscera.None of the patients had common LM. Three of our patients presented with coagulopathy with recurrent bleeding.One is patient 5, who has Noonan syndrome, which has an established association with coagulopathy and recurrent bleeding (19,20).However, the combination of coagulopathy, recurrent bleeding ,and chylous effusions in the other two patients lead us to believe that these patients could instead have kaposiform lymphangiomatosis (21)(22)(23).These patients' biopsies would need to be reviewed to confirm this conclusion. Diagnosis Procedures Biopsy and histology have traditionally been considered the gold standard in testing for lymphatic anomalies.However, histological classifications can lead to misunderstanding.Increasing numbers of authors believe that their diagnoses should be based not only on histological information, but also on clinical and radiological characteristics (10,11).Authors such as Wunderbaldinger et al. (11) and Nesbit et al. (24) have shown that distinguishing the type of vascular malformation (arterial, venous, lymphatic, or mixed) is possible with image tests (MRI, sonography, TAC, angiography, and scintigraphy).Kreindell and Alomari have described patterns in the images revealed by MRI and/or TAC in 41 patients with cen-tral conducting lymphatic anomalies (25).Along these lines, Lala et al. has reclassified 51 patients with lymphatic anomaly with bone involvement in GLA or Gorham's syndrome according to the radiologic characteristics (10). 
In our study, biopsies of extraintestinal lesions were taken from 9 patients. The first sample was diagnostic in 44%; with a second sample, the diagnostic capacity increased to 67%. In 22%, biopsy was only useful for confirming the diagnosis already determined based on the presented symptoms and radiology, and in 11% the biopsy was inconclusive. We agree that the diagnosis of LM must be based on clinical analysis and radiology; histology is not sufficient, and is even unnecessary and iatrogenic in some cases (rib biopsies are contraindicated, because they can lead to chronic pleural effusion) (17,26).

According to the imaging results, 2 patients presented with cutaneous involvement, and 7 had bone lesions; there was thoracic involvement in 7 and abdominal involvement in all of them. In 14 patients there was thoracoabdominal involvement, and in 7 such involvement was limited to a single region. These findings are consistent with the results reported by Kreindell and Alomari (25). Therefore, we agree with these authors that clinical examination and imaging tests are the first diagnostic step in central conducting lymphatic anomalies.

Regarding the specific case of intestinal lymphangiectasia, biopsy has been considered the predominant method of diagnosis. However, due to the patchy pattern of the disease, sometimes no anomalies are found in these tests, which forces their repetition or requires enteroscopy or laparotomy. In our sample, although all of the patients had intestinal involvement, only 11 patients were diagnosed with intestinal lymphangiectasia by endoscopy and biopsy. No alteration was found on endoscopy in some patients with ascites and/or gastrointestinal wall thickening on the MRI, which shows the incapacity of endoscopy to detect all cases of intestinal lymphangiectasia; it only detects those with mucosal involvement. Therefore, we believe that the diagnosis of intestinal lymphangiectasia can be assumed when diarrhea, fecal protein loss, and chylous effusions are present together with LM and/or intestinal involvement on the MRI (that is, lymphangiectasia with protein-losing enteropathy in the context of GLA). However, endoscopy and biopsy are necessary if there is only diarrhea and/or fecal protein loss and/or intestinal involvement shown on the MRI, because these findings are not pathognomonic of intestinal lymphatic involvement.

Moreover, it is necessary to reintroduce lymphatic imaging in the study of central conducting lymphatic anomalies, above all intranodal lymphography and dynamic lymphography, because although scintigraphy is easier to perform, it is less specific and precise (27)(28)(29)(30). With these tests, it is possible to identify delay or non-opacification of the proximal ducts, chylous reflux, a focal leak, or anomalies in the terminal portion of the TD, which is crucial information given that the corresponding treatments are very different.

In our study, lymphatic obstruction was detected in 57% of the patients. Although it was not identified in all of the cases, we believe that an obstruction or a leak is the cause of all of the clinical symptoms of the central conducting lymphatic anomalies in our sample: protein-losing enteropathy, chylous ascites, and the gastrointestinal wall thickening observed on the MRI. We completely agree with the current ISSVA classification scheme and with those authors who advocate that these anomalies are not primary disorders, but are instead secondary to a disruption in chyle evacuation.
One example demonstrating that it is not always possible to show the disruption in chyle evacuation by intranodal lymphography, and much less by intradermal scintigraphy, is the case of patient 20.Although both of those tests were normal, we performed a percutaneous TD embolization (which was ineffective), and afterwards, a surgical ligature with resolution of the recurrent chylothorax that this patient presented.This successful result confirms the TD disruption and the presence of lymphatic collaterals as the cause of her chylothorax. Treatment For all complex lymphatic anomalies, conservative treatment is essential; other therapies must be considered according to the clinical impact of the lesion.For intestinal symptoms, a low-fat diet supplemented with MCT must be followed.Treatment of complex lymphatic anomalies varies by the mechanism of lymphatic dysfunction and the location of active complications (26).Unfortunately, for the majority of children with engorged lymphatics, dysmotility, and reflux, interventional and surgical treatments are largely palliative.For symptoms related to reflux of lymphatic fluid, diversion of the fluid by embolization or surgical resection can improve symptoms, although recurrence or redirection of lymphatic fluid is inevitable.When lymphangiography demonstrates TD dysfunction, surgical resection of the terminal TD and microanastomosis to a valved vein is indicated.Focal leaks can potentially be treated by direct puncture of the CC, with subsequent embolization of the TD. Systemic medical therapy is rapidly evolving.Cases have been described in which treatments such as sildenafil (31), propranolol (32)(33)(34), and bevacizumab have been effective (35), but cases of resistance to these drugs have also been reported (36,37).Sirolimus, an mTOR inhibitor, has been reported to improve pleural effusions and mediastinal mass size (38)(39)(40).Preliminary results of a phase 2 clinical trial with sirolimus in the treatment of complex vascular anomalies were shown in the 20th workshop of ISSVA (41), not yet published.Regarding lymphatic anomalies, 25 patients were included and partial clinical response was observed in GLA, Gorham's syndrome and common LM.No activity was observed, however, in central conducting lymphatic anomalies. In our sample, the conditions of 4 out of 5 patients without visceral or bone involvement were controlled effectively with conservative treatment.The remaining patients received other treatments.Nine patients were treated with sirolimus, and in 1 patient a TD ligation was performed.It is too early to determine the treatments' efficacy because the treatments of the 5 patients with sirolimus and the patient with a TD ligation are in very early stages.Sirolimus could be described as effective in 2 patients: one used sirolimus only and the other used sirolimus in combination with bevacizumab and propranolol. In both patients, dose decrease is being undertaken. 
Limitations

The primary limitation of our study is the rarity of complex lymphatic anomalies, which has resulted in a small sample size (despite our hospital being the primary reference center in the country for these anomalies) and the consequent inability to find statistically significant differences in more variables. This limitation can also be observed in the literature. Another limitation is that we did not have access to 5 deceased patients' medical records because of the prolonged time since their death. Therefore, we could not investigate the vital prognosis of the described anomalies because we did not know these patients' causes of death. Finally, there are the limitations derived from the descriptive study design, since we can only describe associations and not causal relationships.

Major Points for Clinical Practice

Intestinal lymphangiectasia is not a primary entity, but is instead part of the clinical spectrum of TD disruption. By lymphatic imaging, the mechanism of lymphatic disruption and the location of the active complication can be detected, and, consequently, effective treatment can be administered. Further studies are needed to increase knowledge about this rare entity and to continue to improve treatment methods.

Conclusions

A new classification of lymphatic anomalies is needed, based on symptoms, radiology, histology, and physiology. The classification presented in 2014 by the ISSVA meets these requirements. Therefore, diagnoses must be reviewed, given that terms such as intestinal lymphangiectasia and lymphangiomatosis have changed. Intestinal lymphangiectasia is not a primary entity, but is instead a consequence of a TD disruption. This same physiopathology produces chylopericardium, chylothorax, and chylous ascites. Therefore, primary intestinal lymphangiectasia should be renamed lymphangiectasia with protein-losing enteropathy in the context of LM, lymphangiomatosis, and GLA. Finally, to choose an effective therapy, the treatment of LM must be directed toward the physiopathology of the lesion.

Table 1. Demographic data and clinical evolution of the patients (sex and age at diagnosis in years; footnotes: death due to respiratory insufficiency secondary to chylothorax; gastrostomy).

39. ... J, Geiger J, Foldi E, Adams D, Niemeyer C. Retrospective analysis of sirolimus in the treatment of generalized lymphatic malformation and Gorham-Stout disease. The 20th Workshop of the International Society for the Study of Vascular Anomalies; 2014.
40. Bassi A, Syed S. Multifocal infiltrative lymphangiomatosis in a child and successful treatment with sirolimus. Mayo Clin Proc. 2014;89(12). doi: 10.1016/j.mayocp.2014.05.020. [PubMed: 25468520]
41. Adams D, Hammil A, Trenon C, Vinks A, Patel M, Chaudry G. Phase II clinical trial of sirolimus for the treatment of complicated vascular anomalies: initial results. The 20th Workshop of the International Society for the Study of Vascular Anomalies; 2014.

Table 2. Treatments other than diet and stabilization (a: removed due to lack of effectiveness).
2019-03-13T13:30:28.414Z
2016-01-10T00:00:00.000
{ "year": 2016, "sha1": "4699232ce2103cb2f3760c44f3a790b3166079ca", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.17795/jpr-4790", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "7989f912c908ce77e9c3463e88c06b669002008e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235288200
pes2o/s2orc
v3-fos-license
Design of Work System for Reducing Pollution and Forest Fire Smoke

Air quality has an impact on human life. The incidence of forest and land fires has caused many casualties. On the other hand, poor air quality resulting from forest and land fires also threatens human life directly. Therefore, a Pollution and Smoke Reduction Tool was designed as a solution to the problems of pollution and smoke due to forest fires. The purpose of this article is to describe the design and manufacture of the Pollution and Smoke Reducer as well as its working principles. The writing method used is descriptive qualitative, with data collection in the form of literature studies to strengthen the ideas. The Pollution and Smoke Reducer is a tool that can convert CO into CO2 and burn particulates (PM10 and PM2.5) until they disappear. It is a development of research on catalytic converters and diesel particulate filters, utilizing a fan/blower as a smoke suction agent and a heater to heat the smoke until it burns completely. The device is also equipped with wire mesh and fiber to trap particulates and hold them until they burn entirely. It is built from several materials and tools specifically chosen to reduce the direct impact of forest and land fires by burning PM10 and PM2.5 and lowering CO emissions. Based on the design, the device has a smoke and particulate reduction capacity of approximately 43.4769880184 ft3 and a smoke and particulate suction rate of 21,500 ft3/min. With this significant suction rate, smoke and particulates from forest fires can enter the equipment rather than being blown freely by the wind. Hence, the air inhaled by the community is potentially cleaner and safer for health.

Introduction

Forests are an essential factor in the ecology of the environment and human life. However, burning emissions from forest fires, namely smoke, represent a global public health problem [1]. The potential for forest fires itself is quite high. It is also known that forest fires in tropical regions of South America, Africa, and Asia are global hotspots of fire emissions [2]. In the current situation, forest fires contribute directly to substantial pollution and climate change, which have an immediate impact on the lives of the living things around them [3]. The pollution also affects the ecosystem and the economic activities of the community at the location of the forest fires [4]. Smoke haze pollution from forest fires contributes significantly to human health problems and to carbon emission and global warming problems [5]. This is because some of the smoke content from forest burning is very dangerous. The smoke content varies with the type of fuel being burned and the stage of combustion [6]. In general, however, the smoke from forest fires consists of NOx, CO, other combustion gases, and small, highly aerodynamic particulates [7]. High-temperature biomass fires such as forest fires also produce emissions and a large amount of smoldering soot [8,9]. Soot is also considered very dangerous; the soot referred to in the event of forest fires consists of dust particles (PM10) and fine particulates (PM2.5).
Of these pollutants, particulate matter (PM) is the most concerning, given its very small size and ability to be inhaled deep into the lungs, which in turn aggravates asthma [10]. Particles with an aerodynamic diameter of less than 10 µm are defined as PM10, whereas PM2.5 has an aerodynamic diameter of less than 2.5 µm [11]. Based on the problems caused by forest fire pollution, many studies have discussed fire emissions and pollution [12][13], fire risk and detection [14][15], and fire impact assessment [16][17]; according to research [18], the highest number of research articles concerned fire emissions and pollution (46%), followed by fire risk and detection (42%) and fire impact assessment (12%). However, among these many studies, there is still little research that discusses how to stop the effects of forest fires. This includes the handling of smoke, which remains an unresolved problem whose impact persists long after the fire itself has stopped. The high risk posed by the impacts of forest fires, namely pollution and smoke, together with the scarcity of ways to deal with them, is a problem that has not been resolved. One way to reduce levels of pollution and smoke is to reduce them with a tool. Making tools that reduce forest fire smoke and pollution is therefore essential. Previously, research has been carried out on the reduction of exhaust emissions by engineering the installation of a catalytic converter for diesel engines [19]. The results obtained were quite good, because the catalytic converter can reduce harmful pollutants more efficiently and at a lower cost. Another similar study is entitled "Application of copper-zinc metal as a catalytic converter in the motorcycle muffler to reduce the exhaust emissions" [20]. That research showed that the use of a catalyst can reduce exhaust emissions much more significantly than no catalyst. Based on these problems and the results of previous research, the authors adapted catalytic converter technology to reduce the impact of forest and land fires. It is hoped that this tool design can be realized to reduce smoke from forest and land fires quickly so that it does not endanger and harm the community.

Forest Fire Smoke

Forest fire smoke releases several pollutants, such as PM10 and PM2.5, which are very dangerous to the environment, the forest ecosystem, and human health. PM10 and PM2.5 are of significant concern at the moment, as they are small enough to penetrate deep into the lungs and pose significant health risks. PM10 particles are small enough to pass through the nose and throat and can enter the lungs. Once inhaled, these particles can affect the heart and lungs and cause serious health effects. At the same time, PM2.5 can be associated with several types of cardiovascular problems, such as hypertension [21], acute myocardial infarction [22], heart failure [23], and inflammation and blood clots [24]. PM2.5 has even been linked to increased mortality from cardiovascular and ischemic heart disease [25]. Besides, PM2.5 has been associated with susceptibility to pulmonary metastases [26] and with breast cancer survival [27]. The main fuel in forest fires is wood. When wood is burnt smoldering, it releases mostly CO together with other organic compounds such as particulates [28]. Based on research simulations of the burning of eight main tree species in subtropical China, the total emission factors calculated and analyzed for flaming combustion were on the order of 230 and 49 [35].
However, the differences between that study and other studies can be influenced by regional tree growth and experimental tools [36]. CO, PM10, and PM2.5 are present in the largest amounts in fire pollution; they are currently considered dangerous for human health, and addressing them is one of the urgent issues to be resolved.

Metallic Catalytic Converter

Pollution and forest fire smoke reducers have designs and materials that can reduce smoke, supported by catalytic converters that are proven to reduce pollution well [37][38]. With a catalytic converter, the exhaust gases react so that CO is converted to CO2 and HC to H2O and CO2: a CO molecule takes up one additional oxygen atom to become CO2, while for HC, hydrogen atoms receive additional oxygen to form H2O [39]. Several metals that are known to be useful as oxidation and reduction catalysts, ranked from most to least active, are Pt, Pd, Ru > Mn, Cu > Ni > Fe > Cr > Zn, as well as the oxides of these metals [40].

Diesel Particulate Filters

The addition of a particulate filter, following the working concept of diesel particulate filters, functions to reduce PM10 and PM2.5 pollutants by filtering them and holding them for a while until they burn off and disappear [41]. This method is considered the most effective [42] and is capable of filtering particles from the fuel combustion residue [43]. The addition of ceramic fiber has the potential to filter out very small particulates. Ceramic fibers also have the potential to be used as diesel particulate filters because of their rigidity, thermal expansion, good thermal conductivity, and fracture toughness [44].

Design Tools

A means of reducing pollution and forest fire smoke is a tool that can convert pollutants (especially CO) into CO2, while PM10 and PM2.5 are trapped and burn until they disappear. The design of the tool is composed of several essential components, starting from a stainless-steel-based tool frame, as shown in Figure 1, which has been designed using the SolidWorks 2013 application. There is a fan/blower inside, which is used to suck smoke into the tool. In the framework, there is a bulkhead and a space which will be filled with a copper-based catalytic converter. The partitions in the wire mesh frame, or stainless-steel filter, are used to lock in the ceramic fibers, which act as PM10 and PM2.5 filters so that the particulates are trapped and burn inside the appliance. The design concept is taken from the combination of the idea of a diesel particulate filter and a catalytic converter. The stages of designing the tool are carried out as follows.

Creating designs. Designs are made using SolidWorks 2013 Student Version software. The stages carried out here include designing the machine frame and the tool head frame, with a diameter of 2 m and an overall length of 6 m, and creating a catalytic converter with a diameter of 1.04 m, a length of 0.33 m, a copper plate thickness of 1 mm, and a corrugation height of 3 mm; with these dimensions, the reduction and suction capacity of the tool will be higher.

Design Analysis

The design was obtained based on literature studies, and a design able to increase the pressure while reducing pollution and forest fire smoke was chosen [45]. The flue gas flow pressure will be increased in the chamber containing the catalytic converter and fiberglass.
The room was arranged because the emission reduction process and CO, PM10, and PM2.5 were mostly located in that room [46]. An increase in pressure will affect increasing the temperature in the appliance so that with an increase in temperature, the emission reduction performance of CO, PM10, and PM2.5 also increases [47]. The minimum temperature increase must be reached at the catalyst working temperature to convert the hazardous emissions, which generally occur at 250-300°C [48]. Increasing the pressure on pollution reducers and forest fire smoke is done by making designs that can increase the fluid flow rate. In this study, the increase in fluid flow rate and pressure and temperature was carried out by making the majority of the tool design circular in shape. This was done so that the fluid flow did not have obstacles due to surface angle curves. Besides, the design is made for a chamber with an increasingly tapered and smaller tilt angle. This is done so that the fluid flow is faster and then affects the pressure and temperature increase in the chamber containing the catalyst. Even though there is an additional heater in the appliance, it is necessary to maintain performance and optimize the reduction results. To determine the design and determine the fluid flow performance in this tool, it is necessary to perform Computational Fluid Dynamics (CFD) analysis used to predict flow behavior, thermal characteristics and conversion efficiency of monolith substrates [49] [50]. This analytical study also needs to be carried out for further research as a development to maximize the performance of the tool. Research on fluid flow analysis is mostly done to determine flow performance. Many researchers also investigate and simulate steady-state flow in flow conditions reacting to catalytic converters [ Determination of Tool Material The metals platinum (Pt), palladium (Pd), and rhodium (Rh) are precious metals as active components of catalytic converters to reduce emissions of hydrocarbons, carbon monoxide and nitrogen oxides [57]. Furthermore, some metals which are known to be effective as large to small oxidation and reduction catalysts are Pt, Pd, Ru> Mn, Cu> Ni> Fe> Cr> Zn and the oxides of these metals [38] In this study, the material used is stainless steel as a tool frame, the choice of this material is because stainless steel has good corrosion resistance and thermal conductivity. Besides, the choice of material [60], makes a temperature of 180-200°C which is a conversion curve of CO to a much lower reaction temperature [61]. Besides, the choice of Cu is due to its relatively low price compared to Pt, Pd, and Ru and is a material that is quite abundant and easy to find, so it is suitable for large-scale production [62]. The material used for the reduction of PM10 and PM25 is fiber glass fiber, fiber glass fiber has high heat resistance. As well as stainless-steel wire mesh to insulate, the fiber glass fiber is not felled. Work Design Manufacturing tools After analyzing the design and selection of the tool material, the next step is making a tool plan with the supporting equipment that has been determined. Making tools goes through several processing processes, including: -Selection of the right materials -Cutting material/raw materials -Manufacture of catalytic converters -Welding -Series Determination of Variable Testing Tools After the tool is successfully made, the next stage is testing the tool. 
The parameter used to test the performance of the tool is a functional test on each component of the tool to determine the success of the tool being made. A performance test is carried out with an emission test using a gas analyzer to determine the content of the smoke reduction results in the pollution and smoke reducing device. To improve the quality of the resulting equipment, several evaluations of each variable must be carried out, namely: a) the temperature in the emission reduction room, b) the speed of rotation of the exhaust fan/blower, c) the amount of glass wool/ceramic fiber, and d) the type of material for the tool frame and catalytic converter. 4) Ceramic fiber 5) Reheater

The performance of the tool is directly proportional to the temperature of the smoke in the tool: the higher the temperature, the faster the reduction. The working principle of the tool starts from the rotating fan, which sucks the smoke into the tool. The smoke that has entered passes into the catalytic converter room, which has been insulated with wire mesh and glass wool or ceramic fiber; these are commonly used for high-temperature insulation and are stable up to a temperature of 1600 °C [44]. The smoke is then filtered and sticks to the ceramic fibers. Another function of this ceramic fiber is to prevent the release of smoke and particulates that have not been completely burned or reduced in the tool, so that PM10 and PM2.5 are filtered first and then reduced by themselves in the tool. Through this tool, PM10 and PM2.5 are reduced and made safer for the environment. The working principle of PM reduction is like that of diesel particulate filters, which work by PM filtration [63]. The reduction by the catalytic converter is assisted by a reheater that can heat up to a temperature of 800 °C if the smoke temperature does not reach the reduction temperature; the tool is also equipped with a temperature indicator [64]. The reactions that occur in the catalytic converter are: 1. CO is oxidized to CO2; 2. HC is converted to CO2 and H2O; 3. NOx is reduced to N2 and O2. After the reduction reactions occur, the smoke that comes out has a smaller potential for harm and is safer for the environment. The catalytic converter used has a design as shown in Figure 3, with copper as the base material. This catalytic converter is used as a reducer of the smoke generated from forest fires [65]. The catalyst decreases the activation energy, so the oxidation of CO + ½O2 to CO2 is reached more quickly and at lower temperatures (250-300 °C); without a catalyst, the oxidation of CO + ½O2 to CO2 only occurs at a temperature of about 700 °C [66].

Figure 3. Catalytic Converter Design

The design and placement of the equipment during a fire will be arranged around the fire in a safe position; surrounding the smoke in this way is done so that all the smoke can be absorbed by the tool. This arrangement of the tools is expected to provide excellent and safe reduction results for the surrounding environment. The capacity of this tool is calculated from the space where the reduction process occurs, which contains the catalytic converter and fiber for reducing CO, PM10, and PM2.5 emissions. The reduction room has a height of 1.45 m and a diameter of 1.04 m, giving a volume of 1.2311312 m3, which converts to 43.4769880184 ft3. The reduction capability of this tool is therefore 43.4769880184 ft3 of smoke, which is compressed in the reduction room together with the catalytic converter and the fiber; a short calculation reproducing these figures is sketched below.
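As a quick check on the figures just quoted, the short sketch below (illustrative only; the metre-to-foot conversion factor is ours) reproduces the chamber volume, showing that the 1.2311312 m3 value corresponds to using pi = 3.14 and that it converts to roughly 43.477 ft3.

import math

height_m, diameter_m = 1.45, 1.04
radius_m = diameter_m / 2

vol_m3_pi314 = 3.14 * radius_m**2 * height_m        # as quoted in the text (pi taken as 3.14)
vol_m3_exact = math.pi * radius_m**2 * height_m     # with full-precision pi
vol_ft3      = vol_m3_pi314 * 35.3146667            # 1 m^3 = 35.3146667 ft^3

print(f"{vol_m3_pi314:.7f} m^3, {vol_m3_exact:.7f} m^3, {vol_ft3:.4f} ft^3")
# prints roughly 1.2311312 m^3, 1.2317564 m^3 and 43.4770 ft^3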
The reduction room, or middle body, contains 3 catalytic converters sealed with stainless-steel wire mesh and glass wool with a thickness of 0.05 m. There are 4 heaters that can be controlled to reach a temperature of 800 °C. In the last part, at the fluid discharge, there is another catalytic converter, 0.5 m long and 0.4 m in diameter. The addition of a catalytic converter at the end serves to maximize the reduction results because, after being compressed by the chamber body, the fluid rate increases and the pressure also increases, so the temperature carried by the fluid becomes higher; adding a catalytic converter there is the right way to maximize the emission reduction results. The air intake fan/blower used has a high capacity so that it can carry a large amount of smoke and pollution from forest fires. The choice of a high blower capacity is motivated by the wide range of forest fires, whose spread of emissions is also extensive and irregular. The suction capacity of the blower is expected to carry smoke and particulates that fly freely into the equipment so that they are not carried away by the wind. The fan was selected by studying the literature and examining the types of fans and fan capabilities on the market. In this study, the researchers chose a fan from the BALAJI Fans and Blowers industry [67]. The fan used has a diameter of 40 inches (1.016 m), a power input of 5 HP, 3 phases, a fan speed of 960 rpm, and a smoke/air intake rate of 21,500 cubic feet per minute (CFM). The designed pollution and forest fire smoke reduction device therefore has a smoke and compressed-particulate reduction capacity of approximately 43.4769880184 ft3 and a suction rate capacity of 21,500 ft3/min. Based on this capacity, the air that people breathe has the potential to be better and safer for health. The use of the tool is recommended after a fire, because the potential for smoke and emissions is greatest after a fire; in addition, installation of the tool will be safer after a fire, so the safety of the equipment operator is maintained.

Conclusion

The pollution and smoke reducer is designed with a layout that can provide good reduction results. With supporting components such as fans, catalytic converters, reheaters, temperature indicators, and ceramic fibers, each part plays an active role in the process of reducing CO, PM10, and PM2.5 fumes in cases of forest and land fires. The pollution and smoke reducer works when the fan is turned on and sucks smoke into the appliance. After that, the smoke is reduced by the catalytic converter, assisted by a reheater, until it reaches a temperature of 250-300 °C. Unburned smoke, especially PM10 and PM2.5, is filtered out on the wire mesh and by the ceramic fibers; the smoke is then held and reduces itself. The output from the reduction in this tool is thus better and safer for the surrounding environment. Based on the design, the smoke and particulate reduction capacity is approximately 43.4769880184 ft3 and the smoke and particulate suction rate capacity is 21,500 ft3/min. With this significant suction rate, smoke and particulates from forest fires can enter the equipment rather than flying freely in the wind, so the air that the community breathes is potentially better and safer for health. A rough residence-time estimate based on these figures is sketched below.
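As promised above, the following rough residence-time estimate is illustrative only; it assumes that the full rated blower flow of 21,500 CFM passes through the roughly 43.48 ft3 reduction chamber, which the text does not state explicitly.

chamber_ft3 = 43.4769880184
blower_cfm  = 21500.0

turnovers_per_min = blower_cfm / chamber_ft3   # chamber volumes swept per minute
residence_s       = 60.0 / turnovers_per_min   # mean time a parcel of smoke spends inside

print(f"~{turnovers_per_min:.0f} chamber volumes per minute, "
      f"mean residence time ~{residence_s:.2f} s")
# about 495 volumes per minute, i.e. roughly 0.12 s per pass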
2021-06-03T01:01:39.723Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "9d5a76c0537fc31c66162868d8eff8be4eed0776", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/1125/1/012107", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "9d5a76c0537fc31c66162868d8eff8be4eed0776", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
14893241
pes2o/s2orc
v3-fos-license
On the estimation of $Z_2(s)$ Estimates for $Z_2(s) = \int_1^|infty |\zeta(1/2+ix)|^4x^{-s}dx (\Re s>1)$ are discussed, both pointwise and in mean square. It is shown how these estimates can be used to bound $E_2(T)$, the error term in the asymptotic formula for $\int_0^T |\zeta(1/2+it)|^4dt$. The aim of this note is to study the estimation Z 2 (s), both pointwise and in mean square. This research was begun in [11], and continued in [7]. It was shown there that we have and we also have unconditionally Here and later ε denotes arbitrarily small, positive constants, which are not necessarily the same ones at each occurrence, while σ is assumed to be fixed. The constant c appearing in (1.1) is defined by 2000 Mathematics Subject Classification. Primary 11M06, Secondary 11F72. where the function E 2 (T ) denotes the error term in the asymptotic formula for the mean fourth power of |ζ( 1 2 + it)|. It is customarily defined by the relation (1.4) T 0 |ζ( 1 2 + it)| 4 dt = T P 4 (log T ) + E 2 (T ), with (1.5) For the explicit evaluation of the a j 's in (1.5), see [3]. Mean value estimates for Z 2 (s) are a natural tool to investigate the eighth power moment of |ζ( 1 2 + it)|. Indeed, one has (see [7, (4 We shall prove here the pointwise estimate for Z 2 (s) given by Theorem 1 corrects an oversight in the proof of Theorem 3 of [7], where the better exponent 1 − σ was claimed, since unfortunately the condition 1/3 ≤ ξ ≤ 1/2 (see [11, (4.22)]) has to be observed, and our argument needs ξ = ε to hold. It improves the exponent 2(1 − σ) that was obtained in [11]. Probably the exponent 1 − σ could be reached with further elaboration. In any case this is much weaker than the bound conjectured in [7] by the author, namely that for any given ε > 0 and fixed σ satisfying 1 2 < σ < 1, one has Both pointwise and mean square estimates for Z 2 (s) may be used to estimate E 2 (T ). This connection is furnished by THEOREM 2. Suppose that for some ρ ≥ 0 and r ≥ 0 we have Then we have Note that from (1.2) with σ = 1 2 + ε one can take in (1.8) r = 1 2 , hence (1.9) gives The bound (1.11) is, up to "ε", currently the best known one (see [10] and [15], where E 2 (T ) ≪ T 2/3 log 8 T is proved). Thus any improvement of the existing mean square bound for Z 2 (s) at σ = 1 2 + ε would result in (1.11) with the exponent strictly less than 2/3, which would be important. Of course, if the first bound in (1.8) holds with some ρ, then trivially the second bound will hold with r = ρ. Observe that the known value r = 1 2 and (1.10) yield which is, up to "ε", currently the best known bound for the eighth moment (see [1,Chapter 8]), and ant value r < 1 2 would reduce the exponent 3/2 in the above bound. The necessary lemmas This section contains the lemmas needed for the proof of Theorem 1. Let, as usual, α j = |ρ j (1)| 2 (cosh πκ j ) −1 , where ρ j (1) is the first Fourier coefficient of the Maass wave form corresponding to the eigenvalue λ j to which the Hecke L-function H j (s) is attached. This result is proved by the author in [6]. Note that M. Jutila [12] obtained but this result and (2.1) do not seem to apply each other. Both, however, imply the hitherto sharpest bound for H( 1 2 ), namely This bound is still quite far away from the conjectural bound which may be thought of as the analogue of the classical Lindelöf hypothesis (ζ( 1 2 + it) ≪ ε |t| ε ) for the Hecke series. LEMMA 2. 
Let ξ ∈ (0, 1) be a constant, and set Then we have Here I 2,r is an explicit main term, the contribution of I 2,h is small, and F is the hypergeometric function. This fundamental result is the spectral decomposition formula of Y. Motohashi (see [16,Section 5 gives an explicit evaluation of the main term I 2,r (T, T ξ ) (≪ log 4 T ), which can be used to show that the contribution from this function to the relevant expression in Section 4 will be indeed absorbed by the other terms. The structure of the continuous part I 2,c (T, T ξ ) is similar in nature to (2.4), only the presence of integration instead of summation over κ j makes this term less difficult to deal with than (2.4). LEMMA 3. The hypergeometric function, defined for |z| < 1 by This is a special case of the classical quadratic transformation formula for the hypergeometric function (see e.g., [12, (9.6.12)]). The estimation of Z 2 (s) We are ready now to proceed with the estimation of Z 2 (s). We suppose that 1 2 < σ 0 ≤ σ ≤ 1, where σ 0 is fixed and let for some C > 1 The reason for introducing T is for potential applications of our method to mean square estimates of Z 2 (s). We start from the decomposition say. It is by introducing ψ(T ), given by (2.2), that we are able to exploit the spectral decomposition furnished by (2.3). We suppose that 0 < ξ ≤ 1 2 , but eventually we shall take ξ = ε, namely arbitrarily small. This will follow, in the course of the estimation of Z 32 (s), by an analysis similar to the one made in [7]. We suppose For x ≥ Y we set ω(x) = 1 − σ(x). Then we have ω (ℓ) (Y ) = 0 and ω (ℓ) (x) ≪ ℓ x −ℓ for ℓ = 0, 1, 2, . . . , and ω ′ (x) = 0 for x ≥ 2Y . This decomposition of Z 2 (s) differs from the one that was made in [11]. Namely we have introduced here the parameters X, Y and the smoothing functions ρ, σ and ω. Clearly the functions Z 12 (s), Z 22 (s), Z 32 (s) are entire functions for s belonging to the region defined by (3.1). The function Z 42 (s) is initially defined for σ > 1, but we shall presently see that it admits analytic continuation to the region 1 2 < σ 0 ≤ σ ≤ 1, and moreover its contribution (for Y sufficiently large) will be negligible. To see this write (see (1.4)) Integrating by parts we obtain say. Since ω ′ (x) ≪ 1/x, it follows from the mean square bound for E 2 (T ) (see e.g., [9]), by the Cauchy-Schwarz inequality for integrals, that Z 62 (s) is regular for σ > 1 2 and that Then for s satisfying (3.1) we have hence the contribution of Z 62 (s) will be negligible. Repeated integration by parts gives, since Note that Q (4) 4 (log x) = C, a constant, since Q 4 (z) is a polynomial of degree four in z. Thus the last integral above becomes, for ℓ ≥ 2, on taking ℓ = ℓ(σ 0 ) sufficiently large. The remaining integrals with Q (j−1) 4 (log x) are treated in an analogous way. Integration by parts is applied a large number of times, until each summand by trivial estimation is estimated as O(T − 1 2 ). Therefore the total contribution of Z 42 (s) will be negligible, as asserted. Now we trivially estimate Z 12 (s) by the fourth moment of |ζ( 1 2 + it)| as for s = σ + it and 1 2 < σ 0 ≤ σ ≤ 1. Next, the change of variable t = αx ξ log x in (2.2) gives The integral over α may be truncated with a negligible error at |α| = b, with b a small, positive constant. The relevant portion of Z 22 (s) will be a multiple of where (3.4) is used. Namely, for ξ = ε, the portion of Z 22 (s) containing Q 4 makes a total contribution that does not exceed the one in (3.6). 
Also we have For given positive α, we combine the contributions of α and −α in (3.7). In the respective integrals we put τ = τ (x, ±α), in the integral involving E ′ 2 (x) we simply change the notation x to τ , and we set Then in view of (3.9) it is seen that (3.7) becomes (3.10) In the integrals with G we replace the lower bounds of integration in the τ -integrals by X. Changing the variable back to x = x(τ, ±α), using Lemma 1 and (1.14) it follows that the total error made in this process will be (ξ = ε) Now we write G(x(τ, ±α)) = exp(−α 2 log 2 τ )σ(τ ) log τ + H(x(τ, ±α)), say, where H(x(τ, ±α)) is independent of s. Taking into account (3.8), it follows by using the mean value theorem that H(x(τ, ±α)) ≪ τ ξ−1 log 3 τ. Since (3.1) holds we obtain, by using the fourth moment for |ζ( 1 2 + it)|, We pass now to the contribution of Z 32 (s). While the estimation of Z 12 (s) and Z 22 (s) was essentially elementary, it is the function Z 32 (s) that is the most delicate one in (3.2) and its treatment requires the application of spectral theory, namely Lemma 2. As discussed in [11] and in Section 2, after Lemma 2, the main contribution to Z 32 (s) will come from I 2,d in (2.4), namely from the discrete spectrum. Thus only this contribution will be treated in detail. It equals where (3.1) is assumed. The interchange of integration and summation follows from (2.8) of Lemma 4, which ensures absolute convergence on the right-hand side. Note that in place of the integral on the right-hand side of (3.16) we can consider since the term with Ξ(ir; x, x ξ ) (see (2.5)) has no saddle-point, and its estimation is less difficult. Next, note that in view of (2.9) of Lemma 4 the sum in (3.16) can be restricted to κ j ≤ T C1 with a suitable constant C 1 = C 1 (σ 0 , ξ), since the tails of the series will make a negligible contribution. We make the change of variable y = z/x in the Ξ-integral (see (2.6)) in (3.17). This is done to regulate the location of the corresponding saddle point, similarly as in [7] and [11]. After the change of variable the integral X r (s) becomes In (3.19) we consider separately the ranges z/x ≤ x −δ and z/x > x −δ for a sufficiently small, fixed δ > 0. In the latter range, the exponential factor is ≪ x −A for any fixed positive A provided that ξ > δ, which we may assume, and thus the total contribution of the range z/x > x −δ in (3.19) is negligible. Therefore so far we have reduced the problem to the estimation of a finite sum over κ j in (3.16) and a finite z-integral in (3.19). where (3.29) From (3.24) and (3.26) we have (z 0 ϕ ′′ 0 (z 0 )) −1/2 ∼ 1. Hence inserting (3.28)-(3.29) in (3.18) it is seen that the main contribution will be (by using Stirling's formula to simplify the gamma-factors) a multiple of (3.30) say, where in view of (3.27) we have From the term exp − 1 4 x 2ξ log 2 (1 + z 0 /x) it transpires that the range r ≥ x 1−ξ log x will make a negligible contribution. Similarly if (3.31) |r − t| > r α > r 3 t ε x 2 , repeated integration by parts shows that the contribution is negligible. But as r ≤ x 1−ξ log x, (3.31) will hold for (3 − α)(1 − ξ) < 2, namely for If 0 < ξ < 1/3, as we assume, then it follows that 0 < α < 1, hence only the range |r − t| < r α is relevant, which gives (3.32) for suitable constants C 1 , C 2 . 
Repeated integration by parts in (3.30) shows then that, after each integration, the order of the integrand is lowered by a factor of (3.33) It follows that, if x ≥ r 3/2 ≍ T 3/2 , then the above expression is ≪ T −ε for |r − t| > T ε , and hence its total contribution is negligible. If |r − t| ≤ T ε , then by trivial estimation and Lemma 1 the total contribution will be ≪ t −1/2 where f (x) (> 0) is a smooth function supported in [T, T +H] , such that f (x) = 1 for T + 1 4 H ≤ x ≤ T + 3 4 H. Taking account that Z 2 (s) is the (modified) Mellin transform of |ζ( 1 2 +ix)| 4 , it follows by the Mellin inversion formula that (see [7, (3.27 where we have set, as in (3.4), Q 4 (log x) = P 4 (log x) + P ′ 4 (log x) and L denotes the line ℜs = 1 + ε with a small indentation to the left of s = 1. If we integrate (4.2) from x = 1 to x = T and take into account the defining relation of E 2 (T ), we shall obtain Then from (4.1) and (4.3) we have by Cauchy's theorem ( 1 2 < c < 1, T > 1) and we also have an analogous lower bound for E 2 (T ). Since f (r) (x) ≪ r H −r it follows that the s-integral in (4.4) can be truncated at |ℑm s| = T 1+ε H −1 with a negligible error, for any c satisfying 1 2 < c < 1. We take c = 1 2 + ε and use the first bound in (1.8) to obtain with the choice H = T 2ρ+1 2ρ+2 . This proves the first part of Theorem 2. To prove the second, we proceed similarly, but use the Cauchy-Schwarz inequality and the second bound in (1.8). We have 2r+2 . Finally to prove (1.10), note that by [8, eq. (4.9)] we have (c 2 ( 1 2 + ε) = 1 + 2r in this notation) Let 0 ≤ C ≤ 8 be a constant. Then, for p > 0, q > 0, 1/p + 1/q = 1, Hölder's inequality for integrals gives
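The general statement of Hölder's inequality for integrals being invoked here reads (the specific choice of functions and exponents made in the application is not reproduced)

\int_a^b |f(x) g(x)|\, dx \;\le\; \Bigl(\int_a^b |f(x)|^{p}\, dx\Bigr)^{1/p} \Bigl(\int_a^b |g(x)|^{q}\, dx\Bigr)^{1/q}, \qquad p > 0,\; q > 0,\; \frac{1}{p} + \frac{1}{q} = 1.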
2014-10-01T00:00:00.000Z
2003-11-18T00:00:00.000
{ "year": 2003, "sha1": "ae3407818ca37793a99794def0813c29293b8a13", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ae3407818ca37793a99794def0813c29293b8a13", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
258028912
pes2o/s2orc
v3-fos-license
Giant localized electromagnetic field of highly doped silicon plasmonic nanoantennas In this work, we present the analysis and design of an efficient nanoantenna sensor based on localized surface plasmon resonance (LSPR). A high refractive index dielectric nanostructure can exhibit strong radiation resonances with high electric field enhancement inside the gap. The use of silicon instead of metals as the material of choice in the design of such nanoantennas is advantageous since it allows the integration of nanoantenna-based structures into integrated-optoelectronics circuits manufactured using common fabrication methods in the electronic industry. It also allows the suggested devices to be mass-produced at a low cost. The proposed nanoantenna consists of a highly doped silicon nanorod and is placed on a dielectric substrate. Different shapes and different concentrations of doping for the nanoantenna structures that are resonant in the mid-infrared region are investigated and numerically analyzed. The wavelength of the enhancement peak as well as the enhancement level itself vary as the surrounding material changes. As a result, sensors may be designed to detect molecules via their characteristic vibrational transitions. The 3D FDTD approach via Lumerical software is used to obtain the numerical results. The suggested nanoantennas exhibit ultra-high local field enhancement inside the gap of the dipole structure. Scientific Reports | (2023) 13:5793 | https://doi.org/10.1038/s41598-023-32808-w www.nature.com/scientificreports/ is typically utilized for sensing several greenhouse gases, such as CO 2 , CH 4 , andH 2 S . These gases often absorb in the infrared (IR) or ultra-violet wavelength ranges 11 . In this study, We will not condone the dielectric environment that also affects the LSPR frequency which is an essential parameter for gas sensors. A gas sensor is built to sense the different gases and select their concentration using dielectric/metallic or highly doped semiconductor nanoantennas, the first resonant in the visible region while the other in the infrared region. In addition to enhanced electric fields, a shorter resonant linewidth is desirable for specific applications. In gas sensing applications, for example, the detection length is defined not only by the LSPR's sensitivity to the local dielectric material but also by the resonant linewidth 13 . A narrower linewidth can detect smaller shifts 12 . Doped Silicon was used to build nanoantennas that support plasmonic phenomena reported in metallic nanostructures and achieve comparable enhancement shifted to the mid-IR spectral region rather than the visible range 14 . The resonance frequency of the highly doped silicon nanoantenna fluctuates as the refractive index of the environment surrounding it changes. We suggest that the gas sensors based on highly doped semiconductors are better than ones based on plasmonic materials because if we can control the concentration of the doping, we can also governate the resonance region 10,11,15 . So, we rely on the concertation of 5 × 10 20 cm −3 which affects the mid-infrared (MIR). In MIR each gas owns a unique fingerprint. Numerical method To study the optical properties and the near-field calculations of the nanoantenna, three-dimensional finitedifference time-domain simulations were conducted using a commercially available Maxwell equations software program (Lumerical FDTD Solutions) 16 . 
We first investigate the optical properties of the dipole antenna with a total field scattered field (TFSF) source polarized along the z-direction, as illustrated in Fig. 1. The incident radiation spectrum covers the wavelength range 1 μm to 15 μm. The 3D simulation box is bounded by antisymmetric, symmetric, and perfect-matched boundary conditions (BCs) in x, y, and z directions, respectively in order to minimize the computation time. Mesh override sections covering distinct volumes of the structure were employed to achieve fine resolution. A mesh size of 5 nm was employed around each pole of the nanoantenna and 1 nm inside the gap. The near-field (wavelength-dependent intensity enhancement) inside the gap is obtained by the field and power monitor. The absorption cross-section of a structure is calculated by multiplying the power passing through the silicon-based plasmonic nanoantenna structure by the intensity of the plane wave. In the simulations, we insert a power monitor box around the nanoantenna (6 two-dimensional monitors altogether) and then estimate the transmission of each monitor. The software's transmission radiation returns the amount of power transmitted via a power monitor or a profiled monitor, normalized to the source power. The absorption cross-section can be calculated by adding the six transmissions and multiplying by the source area. The scattering cross-section data could be produced similarly, with the main difference being that the monitors are located only in the scattered light zone. The doped silicon antenna and the dielectric substrate are now both covered by the matching scattering power monitor box 17 . In this paper, we are dealing in general with dipole antenna with different shapes created with highly doped silicon placed on different substrates. Figure 1 illustrates a schematic diagram of a highly doped silicon nanorod antenna on top of a silicon substrate. The Palik model 18 was used to calculate the real and imaginary refractive indices of the silicon (Si) substrate and the free space is supposed to be the surrounding medium with n = 1. We www.nature.com/scientificreports/ used the most realistic high-doping concentration to achieve plasmonic frequency with the shortest wavelength in the MIR range. Figure 2 depicts the material dispersion of phosphorus-highly doped silicon with a concentration of 5 × 10 +20 cm −3 as defined using the Drude model for permittivity 11,19 : where ε ∞ = 11.7 F/m is the dielectric permittivity at high frequency, ε s is the background dielectric constant, N d represents the free carrier concentration, ω p is the plasma frequency, ε 0 represents free space, and m * denotes the electron effective mass; Ŵ is the collision frequency in rad/s and is defined as Ŵ = q/(m * µ) where µ represents carrier mobility and q denotes electron charge. In the case of a doping concentration (N d ) 5 × 10 20 cm −3 , ω p = 2.474 × 10 15 rad/s and Ŵ = 9.456 × 10 9 rad/s. It can be shown from Fig. 2 that, for wavelengths higher than 3 μm, the highly doped silicon has a large negative real and a small positive imaginary dielectric function that can support plasmon excitation, which is a collective excitation of conduction electrons 20 . Doped silicon performs better than metals in plasmonic-based gas sensor applications at mid-infrared 11,15 . In general, semiconductors do not have rough surfaces, which reduce scattering losses as compared to noble metals. 
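As a quick numerical check of the dispersion behaviour just described, the sketch below evaluates a Drude permittivity with the quoted parameters (ε_∞ = 11.7, ω_p = 2.474 × 10^15 rad/s, Γ = 9.456 × 10^9 rad/s). The functional form ε(ω) = ε_∞ − ω_p²/(ω² + iΓω) is an assumption on our part, since the display equation is not reproduced above; with it, Re(ε) turns negative near 2.6 μm, so it is indeed large and negative over the >3 μm range discussed in the text.

import numpy as np

# Quoted Drude parameters for phosphorus-doped Si, N_d = 5e20 cm^-3
eps_inf = 11.7          # high-frequency permittivity (dimensionless)
omega_p = 2.474e15      # plasma frequency, rad/s
gamma   = 9.456e9       # collision frequency, rad/s
c       = 2.998e8       # speed of light, m/s

# Wavelength sweep over the source range 1-15 um
lam = np.linspace(1e-6, 15e-6, 1500)
omega = 2 * np.pi * c / lam

# Assumed Drude form: eps(w) = eps_inf - w_p^2 / (w^2 + i*Gamma*w)
eps = eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)

# First wavelength at which the real part turns negative
crossover = lam[np.argmax(eps.real < 0)]
print(f"Re(eps) < 0 for wavelengths above ~{crossover * 1e6:.2f} um")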
Furthermore, silicon offers various benefits, such as CMOS compatibility and easy fabrication using standard Si fabrication techniques. Furthermore, working in the mid-IR region while utilizing doped silicon material allows for the development of integrated plasmonic devices on the microscale. This combination facilitates the fabrication of many conventional plasmonic devices 15 . The suggested doped silicon nanorod antenna can be fabricated using electron-beam lithography (EBL) 10,21 . Lithography is one of the most straightforward methods for fabricating nanostructures since it provides excellent repeatability as well as the capacity to create nanostructures of complex forms using a combination of lithographic techniques. Traditional lithographic methods have been used effectively to create single nanoparticles of various shapes. For example, silicon nanorod antennas with an outside diameter of 30 nm and a gap between the nanorods > 20 nm were created using a combination of electron-beam lithography and reactive-ion etching 22 . Another suggested fabricated method is the direct laser ablation which was used in the first experiments to fabricate high-index nanoparticles: an ultrashort-laser-pulse-induced material fragmentation into spherical nanoparticles and their deposition near the focal region 10,23 . This experiment demonstrated the use of laser ablation in the synthesis of high-index nanoparticles with the optical response (scattering efficiencies, Q factors, and so on) in the visible and infrared spectral ranges. Another approach that can be used to produce high-index nanoparticles on a large scale is dewetting of a thin layer. This mechanism indicates nanoparticle agglomeration during thin film heating due to the minimization of the total energy of thin film surfaces, including a film-substrate interface 24,25 . Figure 2. Material dispersion of highly doped silicon with a concentration of 5 × 10 20 cm −3 . For wavelengths higher than 3 μm, the real part of the dielectric function becomes negative. Results and discussions In order to verify the simulation results calculated by the 3-D FDTD method 26 , plasmonic nanoantenna-dielectric nanocavity designed by Yan-Hui Deng, et al. 17 is initially considered. The dielectric substrate disc has a diameter and height of 160 nm. The plasmonic nanoantenna has been designed using two identical Au nanorods. Each Au nanorod has a total length and diameter of 26 nm and 10 nm, respectively. The hemisphere at both ends of the nanorod has a diameter of 10 nm. As illustrated in Fig. 3, the gap distance between the two nanorods is 5 nm. A spacer is placed between the dielectric nanodisk and the plasmonic nanoantenna. The spacer's height is 5 nm with a refractive index of n = 1.46. The dielectric nanodisk has a refractive index that is assumed to be n = 3.3 17 . The wavelength-dependent intensity enhancement computed by the FDTD approach and by Yan-Hui Deng, et al. 17 is shown in Fig. 3. It can be noted that a good agreement can be observed between our results and those published by 17 , indicating that our model is accurate. To first investigate the optical properties and the local field enhancement, the dipole nanoantenna is illuminated by (TFSF) source incident normally in the z-direction and is modeled using the 3-D FDTD approach via the Lumerical package as shown in Fig. 1 16 . 
In this paper, the dipole nanorod antenna with highly doped silicon material placed on a silicon substrate is introduced as an alternative solution to enhance the near-field intensity with low loss in the MIR range. The geometrical parameters of the dipole nanorod design are as follows: the substrate dimensions are 2000 nm × 1700 nm × 90 nm in x, y, and z directions, respectively and it is modeled by Palik 18 . The nanoantenna is structured by two identical highly doped silicon nanorods. Each nanorod is L = 222.5 nm long and has a radius of 15 nm, which is the same as the radius at the two ends. As seen in Fig. 1, the surface space between the two nanorods is g = 30 nm. The local field intensity enhancement at the center of the gap relative to the incident field at the MIR of the nanorod, as well as the field profile at the resonance wavelength, are plotted in Fig. 4. We first start our investigation with a nanorod on a silicon (Palik) substrate which shows a narrow peak resonant at 8.71404 μm as shown in Fig. 4b. The electric near-field intensity is confined in the center of the gap between nanorods which shows a high enhancement of about 17,464.5 times compared to the normalized electric field (Fig. 4b). This strong near-field in the gap between nanorods indicates a localized surface plasmonic resonance (LSPR). The simulation profile demonstrates that at MIR wavelengths, the system exhibits the advantages of a coupled dipolar resonance, as shown in Fig. 4c. Due to the coupling between the two adjacent doped silicon nanorods, a plasmonic mode at a MIR wavelength is created (9) . Additionally, As the direction of the incident electric field (polarization) is parallel to the long axis of the nanorod antenna, free electrons accumulate along the shorter edge (width), stimulating the longitudinal plasmon mode at longer wavelengths (Fig. 4c) 9 . In general, this results in substantial near-field enhancement, which is useful in sensitivity applications. We also highlight how the shape and geometry of the nanoantenna influence the resonance properties 27 . If we adjusted the length, gap, and substrate of the previous nanoantenna geometry, the resonance wavelength obtained is blue-shifted to 6.51423 µm in the case of spheroidal shape (see Fig. 4b). The near field was almost distributed equally on the two edges of each nanospheroid, as shown in Fig. 4d. The local field enhancement inside the gap center for the nanospheroid is investigated and analyzed. It is found from Fig. 4b that the maximum normalized field intensity is dropped to 11,682.6 compared to the nanorod shape. The results indicate that as the doping concentration increases, the resonance wavelength is a blue shift. The increase in doping concentration corresponds to an increase in field intensity enhancement, and the best concentration of doping obtained is 5 × 10 20 cm −3 , as shown in Fig. 5, which is employed in all nanoantenna structures. The intensity ripples shown in Fig. 5 are attributed to the surface plasmons (SPs) which are the collective oscillations of free electrons evanescently coupled to a metal surface that interact with the electromagnetic field to create localized surface plasmon resonances (LSPRs) 28 . Collective oscillations of conduction band electrons are confined to the nanoscale structure boundaries, resulting in intense electromagnetic fields. Similarly, localized magnetic fields can be generated by induced electric oscillation currents in plasmonic nanostructures. 
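A rough scaling argument for the blue shift with increasing doping noted above: under the common free-carrier convention ω_p² = N_d e²/(ε₀ m*) (an assumption here, together with an effective mass of about 0.26 m_e chosen so that the quoted ω_p is recovered at N_d = 5 × 10^20 cm⁻³), the plasma frequency grows as √N_d, so higher doping pushes the plasma edge, and with it the antenna resonance, toward shorter wavelengths. Only the trend is meaningful; the actual LSPR position also depends on geometry, substrate and screening.

import numpy as np

e, eps0, m_e = 1.602e-19, 8.854e-12, 9.109e-31
m_eff = 0.26 * m_e                   # assumed electron effective mass in doped Si
c = 2.998e8

for Nd_cm3 in (1e20, 3e20, 5e20):    # doping concentrations in cm^-3
    Nd = Nd_cm3 * 1e6                # convert to m^-3
    wp = np.sqrt(Nd * e**2 / (eps0 * m_eff))
    print(f"N_d = {Nd_cm3:.0e} cm^-3 -> w_p = {wp:.3e} rad/s, "
          f"bare plasma wavelength = {2 * np.pi * c / wp * 1e6:.2f} um")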
Localized Surface Plasmons (LSPs) are standing wave Surface Plasmon Polariton (SPP) modes at confined nanostructure www.nature.com/scientificreports/ boundaries. As a result, the same nanoparticles can manage a wide range of LSP modes with varying orders of the standing wave at different optical wavelengths as shown in Fig. 5 29 . The field enhancement of the rod with concentration 5 × 10 20 cm −3 on a hybrid substrate shown in Fig. 5 exhibits a mean peak resonance and multiple plasmon modes at different wavelengths. The field distribution in Fig. 6a,b are related to the two peaks at = 6.57422µm and = 6.73972µm before the major mode at = 7.004µm and its profile field is shown in Fig. 6c. Figure 6d,e illustrate the field distribution of the peaks at = 7.241µm and = 7.41652µm . In addition to the variation of the nanoantenna design, the variation of the surrounding substrate has been reported. The surface plasmon resonance of the coupled nanoparticle pair was found to vary depending on the substrate material. It is more sensitive to its surroundings/substrate than an isolated nanoparticle. As a result, the influence of the substrate material on field enhancement inside the gap center is also studied. The same nanoantenna as shown in Fig. 4a is placed on the same substrate dimensions, however with different materials: Si, SiO 2 , TiO 2 , and hybrid substrate as recommended in 17 . The hybrid substrate is composed of two materials; a substrate with a refractive index of n = 3.3 and a very thin spacer with a refractive index of n = 1.46 17 . The substrate (n = 3.3) and spacer (n = 1.46) are about l t = 980 nm and l s = 5 nm in height and both widths are w = 2000 nm, as shown in Fig. 7a. Figure 7b exhibits the light intensity inside the gap center as influenced by various substrates. As demonstrated in Fig. 7b, the hybrid substrate exceeded the silicon substrate, doubling the enhancement in the field, with the intensity peak reaching 37,930.1 and being resonated at approximately 7.0042 µm. The study found that substrate materials such as SiO 2 , and TiO 2 exhibit significant enhancement behaviors in our investigated frequency range. A normal incident TFSF excites the nanoantenna. In order to have a better understanding of the multipole LSPRs in the nanoantenna presented in this research work, The scattering and absorption cross-section spectrums of the nanorod antenna on the hybrid substrate are estimated as can be seen in Fig. 7c,d. The scattering cross-section in Fig. 7c depicts the resonance positions under normal excitation. The scattering spectrum shows different resonances with wavelengths of 3.16, 3.68, 4.28, 4.9, 6.12, and 8.3 µm, corresponding to the multipolar modes 29 . Figure 7d depicts the absorption spectrum of the coupled system. The spectra show a sharp peak at λ = 7.004 µm. It can be shown from Fig. 7b that, the field exhibits significant confinement at the resonance wavelength of 7.004 μm. This indicates that the mentioned design will be sensitive to any slight change in the surrounding refractive index (RI). We studied the effect of intensity for the nanoantenna by changing the refractive index of the surroundings using FDTD simulations. Figure 8a reveals the difference between air and gas field intensities as a function of the wavelength, and as noticed, the resonance wavelength exhibits a red shift due to changes in the RI of the surroundings related to an increase in the electric field enhancement. 
In order to get the sensitivity of the nanorod in accordance with the change in RI, the shift in the resonance wavelength is divided by the refractive index change (S = Δλ/Δn nm/RIU). As can be shown in Fig. 8b, the variations in resonant wavelength are fitted linearly as a function of medium refractive index, revealing a high refractive index sensitivity of 3118 nm/RIU. www.nature.com/scientificreports/ By adding more steps with smaller dimensions to the standard dipole design, a highly confined field is obtained inside the center of the gap of the suggested nanorod antenna design (tapered nanorods) after the simulations. This improvement is the result of the surface plasmon that the plane source stimulated. Accordingly, the field values are recorded across the specified frequency range between the two nanorods at the previously indicated spacing gap. The three tips that form a tapered nanorod have lengths of L 1 = 172.5 nm, L 2 = 30 nm, and L 3 = 20 nm and have radii of R 1 = 15, R 2 = 10, and R 3 = 7.5 nm. The proposed tapered-dipole nanoantenna, which is regarded as a modified form of the conventional dipole with two additional steps of smaller dimensions introduced, is shown in Fig. 9a along with its design characteristics 30 . The near-field intensity enhancement of the tapered-dipole nanoantenna is investigated in this study over the MIR wavelength range from 1 to 15 µm. The tapered dipole nanoantenna is optimized to achieve maximum near-field intensity over the MIR spectrum. Figure 9 depicts the field intensity spectra of the tapered nanoantenna installed on top of the hybrid substrate. It is evident from Fig. 9b that an intensity peak appears at the resonance wavelength of 6.59446 µm, which corresponds to the dipolar mode. The near-field intensity at resonance wavelength is around 45,000, which is more than the previously reported nanorod antenna in the MIR band shown in Fig. 7. As indicated in the electromagnetic theory, this field enhancement can be attributed to the field accumulated at the tips of the doped silicon nanorods 31 . Based on this phenomenon, the suggested design was developed www.nature.com/scientificreports/ by incorporating additional steps of smaller dimensions into the conventional dipole design, which results in a large divergence of surface current over the nanoantenna surface due to the multiple thickness grading involved in the design 30 . The electric field profile for tapered-dipole nanoantennas at the resonance wavelength is shown in Fig. 9c. The electric field is accumulated with high intensity around the three tips of the nanoantenna, as seen in this figure, due to a high divergence of surface current 30 . For the tapered nanoantennas, the scattering and absorption cross-sections are estimated versus the wavelength. Figure 9d,e depict the results. There are multi-resonance modes shown in the tapered structure with wavelengths of 3.155, 3.67, 4.28, 4.9, 6.1, and 8.3 µm. Figure 9e shows the coupled system's absorption spectrum. A strong peak can be seen in the spectra at λ = 6.59446 µm. The three design parameters, i.e., the radii of the three tips, that may be adjusted to provide high local field enhancement are one of the key advantages of the proposed tapered nanorod design. The large nanorod's radius has been tuned, and the radii of the other tips have been increased and reduced by the same factor. For the suggested design, the impacts of the nanorod radius are examined. 
Figure 10a depicts the intensity enhancement centered on the gap of the suggested design as a function of the radius of the large rod. The local field of the nanoantenna is enhanced to its maximum value of 45,000 at a radius of around 15 nm by tuning the radii of tapered nanorods. The variation of the resonance wavelength as a function of the radius of the nanorod is seen in Fig. 10b. The resonance wavelength shifts toward a shorter wavelength of 6.2 µm as the radius increases from 10 to 20 nm. Furthermore, as shown in Fig. 10b, the desired wavelength of the maximum local field enhancement (6.6 µm) can be obtained at a radius of 15 nm. As a result, the radius of the nanorod can play a crucial role in controlling the resonant wavelength of the electromagnetic radiation of the suggested nanoantenna design. Now consider the coupled system depicted in Fig. 11a,b. The coupled system is made up of a ring that surrounds the previously mentioned nanorod antenna and the dielectric substrate. The surrounding medium index is also one. The coordinate system's origin is set in the middle of the dielectric substrate. A plane wave also excites the structure. A dielectric rectangular-ring structure has been reported to have excellent magnetic and electric field enhancements around it 17,32 . As a result, it might be a promising approach for further optimizing the electric near-field of a highly doped silicon nanoantenna. A schematic of such a coupled structure with typical incident radiation is shown in Fig. 11. The incident wave is polarized along the z-axis. The dielectric rectangular has a width and height of 1750 and 980 nm, respectively. The ring has 1015 nm in height with an inner and outer radius of 2500 and 5000 nm, respectively. Both the substrate and the ring have refractive indices of n = 3.3. a spacer is placed between the doped silicon nanorod antenna and the dielectric substrate. The spacer has a height of 5 nm and a refractive index of n = 1.46. The geometrical parameters of the doped silicon antenna are the same as in the dielectric coupled structure, which is analogous to the structure without a ring Fig. 7c. The local electric field is significantly enhanced in the gap center of the doped silicon nanoantenna. This is due to the combination of dielectric structure field enhancement (coupled structure) and doped silicon antenna field enhancement. The electric field enhancement inside the center of the gap increases to 134,779 when the height of the ring is extended to 1515 nm, while the resonant wavelength remains constant as shown in Fig. 4a. The nanorod antenna parameters used produce a sharp resonance peak of about = 7.0042 µm in Fig. 11d. In particular, we address certain experimentally possible situations in which the nanoantenna dielectric coupled structure is situated on a substrate. The schematic of such systems is shown in Fig. 12a. SiO 2 with a refractive index of 1.46 is assumed to be the substrate. The nanoantenna dielectric coupled structure is similar to that seen in Figs. 7 and 9. Figure 12c depicts the electric field enhancement at the center of the gap for the coupled doped silicon nanorod antenna and the coupled tapered nanoantenna. The resonant intensity enhancement in the coupled structure gives a high value, similar to what occurs in the absence of a substrate. In comparison to On their scattering spectrum (Fig. 
13a,b), the nanorod coupled system has a sharp resonance peak at λ = 7.0042 µm, whereas the tapered coupled structure has the main peak at λ = 6.59446 µm, which is dominated by a magnetic dipole resonance. Several previous studies have found similar results 17,32,33 . This strong far-field resonance frequently results in a significant electric field enhancement around the coupled structure, as seen in Fig. 12. The plasmonic-based doped silicon nanorod antenna has hybridized mode of two electric dipole plasmon resonances of the rods is approximate at λ = 7.0042 µm and the resonance of the tapered nanorod antenna is at λ = 6.59446 µm which is about the magnetic dipole mode of the coupled structure. The same results have also been achieved in the literature 17,32 . Figure 13c,d show the scattering cross section for the nanorod antenna and the tapered nanoantenna, respectively. Figure 14 exhibits the scattering cross-section spectrum of the individual hybrid substrate, individual ring, and the coupled system (hybrid substrate with ring). A normal incident TFSF excites the hybrid substrate. The heights of the substrate (n = 3.3) and spacer (n = 1.46) are about l t = 980 nm and l s = 5 nm, respectively, and both widths are w = 2000 nm. The origin of the coordinate system is located at the center of the hybrid substrate. The scattering cross-section of the individual substrate (Fig. 14a) shows a resonance peak at λ = 6.14069 μm. This peak is associated with a magnetic dipole mode, as indicated by the magnetic field enhancement (|H| 2 /|H 0 | 2 ) distribution in Fig. 14b. The ring has a height of (h = 1015 nm). The inner and outer radii of the ring are R in = 2500 and R out = 5000 nm, respectively. A resonant peak is shown about λ = 7.0042 μm with the specified geometrical parameters (Fig. 14c). This resonance position is close to that of the magnetic dipole mode obtained in the case of the hybrid substrate. In order to figure out this mode, the magnetic near field distributions in the y-direction (|H y |/|H 0 |) are also simulated (Fig. 14d). Now, the coupled system in Fig. 14e is also investigated. The coupled system is composed of the hybrid substrate and ring mentioned above. The surrounding medium index is 1. The origin of the system's coordinate is located at the center of the hybrid substrate (Fig. 14e). A TFSF with normal incidence excites the structure. The scattering cross-section in Fig. 14e shows the coupling between the hybrid substrate and the ring. A resonant peak can be shown at λ = 6.14069 μm. This peak value is close to the individual substrate and ring resonance positions. In order to indicate the magnetic dipole mode at the resonance wavelength of the nanorod antenna, the magnetic field distribution (|H| 2 /|H 0 | 2 ) has also been computed at λ = 7.0042 μm. It can be shown that the magnetic distribution also identifies the magnetic dipole mode for the coupled system as shown in Fig. 14f. Fig. 12a has been also investigated by using FDTD simulations. The difference between air and gas field intensities is seen in Fig. 15a which reveals a significant change in the resonance peaks between air and different medium indices. The tapered design exhibits a red shift in the resonant wavelength, however, the field enhancement is decreased, as illustrated in Fig. 15a. Furthermore, the sensitivity of the medium refractive index change is studied as shown in Fig. 15b. The maximum sensitivity reaches 3729 nm/RIU. 
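The sensitivities quoted in this section (3118 nm/RIU for the nanorod and 3729 nm/RIU for the tapered coupled design) come from a linear fit of resonance wavelength against the ambient refractive index, S = Δλ/Δn. A minimal sketch of that fit is given below; the peak positions are made-up illustrative numbers chosen to reproduce a slope of about 3118 nm/RIU, not values read from the actual FDTD spectra.

import numpy as np

# Illustrative (not simulated) resonance positions versus ambient index
n_medium   = np.array([1.00, 1.02, 1.04, 1.06, 1.08])
lambda_res = np.array([7004.20, 7066.56, 7128.92, 7191.28, 7253.64])  # nm

slope, intercept = np.polyfit(n_medium, lambda_res, 1)   # S = d(lambda)/dn
print(f"sensitivity S = {slope:.0f} nm/RIU")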
Conclusion In conclusion, two different designs for the doped silicon dipole nanoantennas are proposed and studied using the Lumerical software package based on the 3D FDTD approach. The suggested dielectric nanoantenna enhances the local field intensity at the MIR. The goal of the study is to develop high-doped silicon-based nanoantennas that exhibit Localized Surface Plasmon Resonance (LSPR), comparable to plasmonic metal nanoantennas. The proposed nanoantenna is made up of two highly doped nanorod structures that are stacked on top of a dielectric substrate. It is found that the doped silicon nanoantenna exhibits strong local field enhancement at MIR wavelengths. Furthermore, the suggested design has lower dissipation losses. When compared to the nanoantenna in a silicon substrate, the hybrid substrate composition exhibits a significant enhancement; the enhancement value is around two-fold. The geometry of the nanoantenna influences the peak of the field enhancement and its resonance, with the nanorod and tapered nanoantennas achieving a high level of field enhancement. The coupled system with the incorporation of the ring around the nanorod leads to a condition in an ultrahigh enhancement field while remaining the same resonance wavelength. This enhancement is attributed to the integration of nanoantenna which has hybridized mode of two electric dipole plasmon resonances due to the two nanorods with a gap of 30 nm which is roughly the magnetic dipole mode of the dielectric substrate. Data availability The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
2023-04-09T13:15:32.810Z
2023-04-08T00:00:00.000
{ "year": 2023, "sha1": "75cf3cabba06bf832540359c47baf47c31b183d1", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "75cf3cabba06bf832540359c47baf47c31b183d1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
247722113
pes2o/s2orc
v3-fos-license
A rare case of suspected lupus erythematous panniculitis as the presenting skin feature of juvenile dermatomyositis: A case report Juvenile dermatomyositis is a rare autoimmune myopathy of childhood, associated with systemic vasculopathy, primarily affecting the capillaries. Panniculitis is seen histologically in about 10% of patients with dermatomyositis; however, its clinical presentation is rare, with only 30 cases presented in the literature to date. The histopathology overlaps with other inflammatory disease states, and is almost identical to the panniculitis seen in lupus erythematous panniculitis. In the cases with both panniculitis and dermatomyositis, skin and muscle inflammation is usually the first clinical manifestation. We present a case of a 16-year-old female with panniculitis as the initial presenting feature of juvenile dermatomyositis in the context of a prior diagnosis of indeterminate colitis. Introduction Juvenile dermatomyositis (JDM) is a rare autoimmune myopathy of childhood, associated with a systemic vasculopathy, primarily affecting the capillaries. 1 The disease is characterized by cutaneous findings, such as heliotrope rash and Gottron's papules, proximal muscle weakness, elevated creatinine kinase, and endomysial infiltration of mononuclear cells surrounding myofibers. 2 JDM has an annual incidence rate of two to three cases per one million children, with females affected two to five times more than males. 1 Current standard treatment for JDM includes high-dose systemic steroids, methotrexate (MTX), and hydroxychloroquine for initial treatment, and intravenous immunoglobulin (IVIG) and/or cyclophosphamide or biologic therapies for refractory cases. 3 Panniculitis has rarely been described in the setting of dermatomyositis (DM) in adult and pediatric patients despite 10% of patients having subclinical evidence of panniculitis on muscle biopsy. 4 In the cases reported, myositis almost always occurs before the panniculitis manifests. [5][6][7] The histopathological findings of the panniculitis in JDM overlap with other inflammatory diseases, notably lupus erythematous panniculitis (LEP). We present the case of a 16-year-old female who developed panniculitis as the first manifestation of JDM, thought initially to be LEP, in the context of a prior diagnosis of inflammatory bowel disease (IBD). Case report The patient initially presented with fatigue, bloody stools, and low-grade fevers at the age of 14.5 years, and was diagnosed with atypical indeterminate colitis based on rectal biopsy findings. She was trialed on multiple formulations of A rare case of suspected lupus erythematous panniculitis as the presenting skin feature of juvenile dermatomyositis: A case report mesalazine, oral budesonide, and oral prednisone, before symptom remission with a colon-specific oral mesalazine and rectal mesalazine. She was referred to pediatric rheumatology and dermatology clinics at age 16 years for new concerns of bruising and leg pain without any obvious injury, associated with underlying painful, firm palpable lesions on her thighs and upper arms ( Figure 1). These lesions were suspected to be erythema nodosum in the context of her IBD diagnosis, but the distribution affecting the proximal limbs was atypical. 
Her rheumatologic review of systems was negative at that time, and her investigations demonstrated positive antinuclear antibody (ANA) ⩾1:640, anti-neutrophil cytoplasmic antibody with perinuclear pattern (P-ANCA), myeloperoxidase antibody (MPO) 1.8 Antibody Index Units (AI) (0.0-0.9 AI), and centromere B 1.4 AI (0.0-0.9 AI). Anti-ds DNA antibody and rheumatoid factor were negative; C3, C4, and immunoglobulins (Igs) were normal. A deep skin biopsy including the fascia was performed, and the pathology was consistent with LEP ( Figure 2). She was started on hydroxychloroquine 300 mg PO daily with improvement. After 3 months of treatment, she developed periorbital edema with suborbital ecchymosis, facial rash in the malar distribution, myalgia, arthralgia, worsening fatigue, and 30 min of morning stiffness in her fingers with marked dilated nailfold capillaries. She was trialed on a 5-day course of 5 mg oral prednisone with no improvement so increased to 50 mg daily dosing with some improvement. Repeat investigations showed positive RNP-A at 1.3 AI (0.0-0.9 AI), medium positive anti-histone antibody, creatine kinase (CK) 312 (20-300 U/L), and Epstein-Barr virus (EBV) IgM positive, IgG negative. It was thought she had an intercurrent EBV infection with the Hoagland sign 8 as an explanation for the periorbital edema, and thus, her steroids were tapered. At lower doses of oral prednisone, she had increased myalgias, muscle weakness particularly with lifting arms overhead, and worsening periorbital edema with more prominent malar rash. Her myositis antibody panel was negative. As her weakness began to progress, her prednisone dose was increased to 60 mg PO daily when she became severely unwell with fever, hypotension, and profound weakness, resulting in an admission to the intensive care unit. A magnetic resonance imaging (MRI) of her muscles demonstrated widespread myositis (Figure 3). A muscle biopsy showed classic features of immune myopathy with perimysial pathology that was most consistent with the clinical entity of JDM. She was treated with IVIG 2 g/kg, IV methylprednisolone 1 g daily for 5 days, MTX 25 mg subcutaneous weekly, and remained on hydroxychloroquine 300 mg daily. Her symptoms improved significantly throughout her stay and she was discharged after 11 days in hospital. Discussion Our patient demonstrates a rare occurrence of lobular panniculitis as a first dermatologic feature of JDM. Our initial diagnosis of LEP was reasonable, as the histopathologic findings of DM panniculitis are largely identical to those of lupus panniculitis, 9,10 demonstrating the challenge of distinguishing between these entities histopathologically. LEP is a rare form of chronic cutaneous lupus erythematous (LE), characterized by inflammation of the subcutaneous fat, presenting in 1%-3% of patients with cutaneous LE. 11 The disease manifests as indurated plaques or nodules usually on the proximal extremities and trunk, which are tender or painful, may progress to calcifications or ulcerations, and frequently result in lipoatrophy upon resolution. 12 As with DM panniculitis, histopathology, demonstrates dermal perivascular infiltrates of mononuclear cells with lobular panniculitis, hyaline fat necrosis, paraseptal lymphoid aggregates, and lymphocytic vasculitis. 13 Usually, LEP and DM panniculitis can be differentiated based on the broader clinical picture. When the patient started demonstrating significant myositis, the diagnosis of JDM was considered more likely. 
Since her weakness worsened with the initial steroid wean, it was unlikely that her presentation was due to steroid-induced myopathy. 14 The patient's periorbital edema was another early cutaneous finding, which has been reported as part of the clinical presentation of JDM in case reports only, but is not one of the classic cutaneous manifestations of JDM. 15 Our patient had another unique feature, with her prior diagnosis of indeterminate colitis and treatment with mesalazine. Mesalazine has been associated with druginduced lupus. 16 Anti-histone antibodies are present in more than 95% of cases of drug-induced lupus, but can also occur in up to 80% of patients with idiopathic lupus. 17 Interestingly, anti-histone antibodies have also been reported in up to 17% of patients with DM. 18 It is known that IBD and DM can occur together. 19 In adult studies, the incidence of DM is higher in patients with ulcerative colitis (UC) compared to control groups, and the presence of UC may actually be a risk factor for DM. 20 Given that the entire clinical picture is compatible with JDM, and that there are no reports of mesalazine inducing JDM, her disease was likely not drug-induced. As a precaution, our patient discontinued mesalazine and continued MTX, high-dose prednisone, IVIG, and hydroxychloroquine to treat her colitis, skin manifestations, and myositis. At time of publication-3 months after her acute presentation-her panniculitis has resolved with lipoatrophy, she has normal functional muscle strength and her IBD symptoms are well controlled. This case is an example of overlapping inflammatory diseases occurring in a single patient, and the complex diagnostic dilemma of panniculitis as a presenting feature of evolving JDM. Declaration of conflicting interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) received no financial support for the research, authorship, and/or publication of this article. Informed consent Informed consent for information and images to be published was provided by the patient.
2022-03-27T15:28:58.248Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "55417660827754e14804ac27314ee711fafd6724", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "0cd6912f9d194e19f122dcb7c953f54f6aff5e5f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
13377665
pes2o/s2orc
v3-fos-license
Partial Reversible Gates(PRG) for Reversible BCD Arithmetic IEEE 754r is the ongoing revision to the IEEE 754 floating point standard and a major enhancement to the standard is the addition of decimal format. Furthermore, in the recent years reversible logic has emerged as a promising computing paradigm having its applications in low power CMOS, quantum computing, nanotechnology, and optical computing. The major goal in reversible logic is to minimize the number of reversible gates and garbage outputs. Thus, this paper proposes the novel concept of partial reversible gates that will satisfy the reversibility criteria for specific cases in BCD arithmetic. The partial reversible gate is proposed to minimize the number of reversible gates and garbage outputs, while designing the reversible BCD arithmetic circuits. INTRODUCTION Nowadays, the decimal arithmetic is receiving significant attention as the financial, commercial, and internet-based applications cannot tolerate errors generated by conversion between decimal and binary formats. Furthermore, a number of decimal numbers, such as 0.110, cannot be exactly represented in binary, thus, these applications often store data in decimal format and process data using decimal arithmetic software [1]. Since the decimal arithmetic is getting significant attention, specifications for it have recently been added to the draft revision of the IEEE 754 standard for floating-point arithmetic. IEEE 754r is an ongoing revision to the IEEE 754 floating point standard [2,3]. It is anticipated that once the IEEE 754r Standard is finally approved, hardware support for decimal floating-point arithmetic on the processors will come into existence for financial, commercial, and Internet-based applications. Researchers like Landauer have shown that for irreversible logic computations, each bit of information lost, generates kTln2 joules of heat energy, where k is Boltzmann's constant and T the absolute temperature at which computation is performed [4]. Bennett showed that kTln2 energy dissipation would not occur, if a computation is carried out in a reversible way [5], since the amount of energy dissipated in a system bears a direct relationship to the number of bits erased during computation. Reversible circuits are those circuits that do not lose information and reversible computation in a system can be performed only when the system comprises of reversible gates. These circuits can generate unique output vector from each input vector, and vice versa, that is, there is a one-to-one mapping between input and output vectors. As the Moore's law still holds, the processing power continues to double every 18 months. The current irreversible technologies will dissipate considerable heat and can reduce the life of the circuit. The reversible logic operations do not erase (lose) information and dissipate very less heat. Thus, reversible logic is likely to be in demand in high speed power aware circuits. Reversible circuits are also of high interest in low-power CMOS design, optical computing, nanotechnology and quantum computing. The most prominent application of reversible logic lies in quantum computers [5]. Any unitary operation is reversible and hence quantum networks effecting elementary arithmetic operations such as addition, multiplication and exponentiation cannot be directly deduced from their classical Boolean counterparts (classical logic gates such as AND & OR are clearly irreversible). Thus, quantum arithmetic must be built from reversible logical components. 
One of the major constraints in reversible logic is to minimize the number of reversible gate used and garbage output produced. This paper proposes the novel concept of partial reversible gates to minimize the number of reversible gates and garbage outputs, while designing the reversible BCD arithmetic circuits. Thus, an attempt has been tried towards minimizing the number of reversible gates and garbage outputs while designing the optimal reversible BCD arithmetic units. II. PARTIAL REVERSIBLE GATES In this paper, we propose the novel concept of partial reversible gates which will satisfy the reversibility criteria not in all cases but for specific cases in BCD arithmetic. We propose this concept to minimize the number of reversible gates and garbage outputs, while designing the complex reversible BCD circuits. Figure 1 shows an example of the proposed concept of partial reversible gate (PRG). Table I shows the truth table of the proposed PRG gate shown in Fig 1. It is to be noted from Table-I that the proposed partial reversible gate (PRG) is reversible only in 'Sect-1', and from 'Sect-2' it loses its feature of reversibility. On carefully examining 'Sect-1' in Table I, we can easily find out that it is the truth table of BCD to excess 3 code converter. The 'Sect-2' of Table-I will never occur in Truth Table of BCD to excess 3 code converter, as BCD number goes only till 1001. Thus, we can use the PRG gate shown in Fig.1 to design reversible BCD to excess 3 code converters since the non-reversible case occurring in 'Sect-2' starting from 1010 in Table-I will never be encountered. The proposed PRG in Fig. 1 is reversible for BCD to excess 3 code converter as there is one to one mapping between input and output vectors which is the primary requirement for logical reversibility. The advantage of the proposed concept of partial reversible gates is that we are able to realize a complete BCD to excess-3 code converter with only one reversible gate and no garbage output. Thus the proposed concept of partial reversible gates will be a boon for designing reversible circuits with minimal number of reversible gates and garbage outputs. Similarly, other Boolean functions can be examined and partial reversible gates can be proposed for them. The partial reversible gate concept will be especially beneficial in BCD arithmetic as BCD number only goes from 0 to 9. A case study and comparison of implementing BCD to excess-3 code converter with partial reversible gate and existing conventional method is discussed below. B. Case Study It can be revealed that excess-3 equivalent code can be obtained from the BCD code by the addition of 0011. This addition can be easily obtained through a 4-bit reversible parallel adder. The design of BCD-to-Excess-3 code converter designed using efficient and optimized reversible parallel adder designed from TSG gate [7] is shown in Figure 2. Table II shows the result, that compares the reversible implementation of the BCD-to-Excess-3 code converter using TSG gate, with the proposed concept of partial reversible gate (PRG). It can be observed from Table 2 that implementing the BCD-to-Excess-3 code converter with the proposed concept of PRG has an improvement ratio of 400% and 900% in terms of number of reversible gates and garbage outputs respectively, compared to its implementation with conventional method. Considering the delay of one gate as one unit, there is also a reduction of 400% in unit delay. III. 
CONCLUSIONS The focus of this paper is the design of reversible BCD arithmetic units with minimal gates and garbage outputs, motivated by the growing importance of IEEE 754r (the ongoing revision of the floating-point standard that adds decimal arithmetic). To this end, the paper proposes the novel concept of partial reversible gates, which satisfy the reversibility criteria not in all cases but only for the input combinations that can actually occur in BCD arithmetic. Partial reversible gates are of great help in minimizing the number of reversible gates and garbage outputs.
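As a concrete check of the partial-reversibility argument in the case study above, the short sketch below verifies that the BCD-to-Excess-3 mapping (simply adding 0011, as in the adder-based converter of Fig. 2) is one-to-one on the ten valid BCD codes, i.e. reversible over 'Sect-1' of Table I; the invalid inputs 1010-1111 of 'Sect-2' never arise in BCD arithmetic.

def bcd_to_excess3(bcd):
    # Excess-3 output = BCD input + 3 (the +0011 addition used in Fig. 2)
    return (bcd + 3) & 0b1111

valid_bcd = range(10)                      # 0000 .. 1001
outputs = [bcd_to_excess3(b) for b in valid_bcd]

# Distinct outputs for distinct inputs -> one-to-one on the restricted domain
assert len(set(outputs)) == len(outputs)
for b, e in zip(valid_bcd, outputs):
    print(f"BCD {b:04b} -> Excess-3 {e:04b}")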
2007-11-16T12:25:20.000Z
2007-11-01T00:00:00.000
{ "year": 2007, "sha1": "b6299d975dfcf6ebbf8943168aa922816effad1c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b6299d975dfcf6ebbf8943168aa922816effad1c", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
59303662
pes2o/s2orc
v3-fos-license
Development of novel EST‐SSR markers for Ephedra sinica (Ephedraceae) by transcriptome database mining Premise of the Study Ephedra sinica (Ephedraceae) is a gymnosperm shrub with a wide distribution across Central and Eastern Asia. It is widely cultivated as a medicinal plant, but its wild populations are monitored to determine whether protection is needed. Methods and Results Thirty‐six microsatellite markers, including 11 polymorphic markers, were developed from E. distachya RNA‐Seq data deposited in the National Center for Biotechology Information dbEST database. Among 100 genotyped E. sinica individuals originating from five different population groups, the allele number ranged from three to 22 per locus. Levels of observed and expected heterozygosity ranged from 0 to 0.866 (average 0.176) and 0 to 0.876 (average 0.491), respectively. Allelic polymorphism information content ranged from 0.000 to 0.847 (average 0.333). Cross‐species amplifications were successfully conducted with two related Ephedra species for all 11 di‐ or trinucleotide simple sequence repeats. Conclusions This study provides the first set of microsatellite markers for genetic monitoring and surveying of this medicinal plant. METHODS AND RESULTS A total of 4981 ESTs generated from mRNA sequencing of E. distachya were retrieved from the National Center for Biotechnology Information (NCBI) Expressed Sequence Tags database (dbEST) (accessed by searching with "(Ephedra) AND "Ephedra distachya"[porgn:__txid3389]"). Microsatellites with a minimum repeat number of five were detected for 324 ESTs with a minimum length of 200 bp. We obtained 203 unique EST-SSR loci by an allagainst-all BLAST analysis and successfully designed primers for 171 unique EST-SSR loci. All bioinformatic operations were performed using the microsatellite detection and development pipeline QDD version 3.1 (Meglécz et al., 2014). Finally, we selected 88 di-or trinucleotide loci with at least five repeats for further evaluation. We sampled five populations (100 individuals total) of E. sinica in Datong, Shanxi Province, China (Appendix 1). Voucher specimens were deposited in the Herbarium of Beijing Forestry University (BJFC). In order to test for successful amplification of the 88 EST-SSR loci selected, we conducted PCR analysis using eight individual plants of E. sinica. These eight individuals were collected in the Beijing Botanical Garden, Chinese Academy of Sciences. The genomic DNA was extracted from dried leaves using the cetyltrimethylammonium bromide (CTAB) protocol (Doyle and Doyle, 1987). An M13 tail (FAM, HEX, TAMRA, ROX) was attached to the forward primer (Meglécz et al., 2014) for visualization. The final PCR volume was 20 μL, containing 10 μL of 2× Taq PCR Mix (Tiangen, Beijing, China), 4 μL of fluorescent dye-labeled M13 primer (4 pM), 4 μL of mixed forward and reverse primers, and 2 μL (20 ng) of DNA. The following PCR conditions were used: 94°C incubation for 5 min; 25 cycles at 94°C for 40 s, 55°C for 40 s, and 72°C for 45 s; 10 cycles at 94°C for 40 s, 53°C for 40 s, and 72°C for 45 s; and a final extension at 72°C for 10 min. Among the 88 identified di-or trinucleotide loci, 38 displayed the expected size bands. After final capillary electrophoresis analysis on an ABI 3730 sequencer (Applied Biosystems, Waltham, Massachusetts, USA), SSR alleles were called with GeneMarker version 2.20 (SoftGenetics, State College, Pennsylvania, USA). 
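The repeat screening described above (di- or trinucleotide motifs with at least five perfect repeats in ESTs of at least 200 bp) was carried out with the QDD pipeline; the fragment below is only a simplified illustration of that kind of screen on a made-up sequence, not a re-implementation of QDD.

import re

MIN_REPEATS = 5
MIN_EST_LEN = 200

def find_ssrs(seq):
    # Return (motif, repeat count, start) for perfect di-/tri-nucleotide repeats
    hits = []
    for motif_len in (2, 3):
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (motif_len, MIN_REPEATS - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            hits.append((motif, len(m.group(0)) // motif_len, m.start()))
    return hits

est = "GATTACA" + "AG" * 8 + "CCGT" + "ATT" * 6 + "ACGTTGCA" * 20   # toy EST, 205 bp
if len(est) >= MIN_EST_LEN:
    print(find_ssrs(est))    # -> [('AG', 8, 7), ('ATT', 6, 27)]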
Of these 38 loci, 36 showed clear, single peaks for each allele as essential for confident scoring, and 11 of these loci were polymorphic among the initially screened eight individuals. Characteristics of the 25 pairs of monomorphic microsatellite loci developed for E. sinica are shown in Appendix 2. The 11 polymorphic primer pairs were subsequently used to screen five E. sinica populations (with sample sizes n = 20 per population) and two additional populations originating from E. likiangensis Florin (n = 20) and E. equisetina Bunge (n = 6) (Appendix 1). Table 1 shows the primer sequences, repeat motifs, amplification sizes, GenBank accession number of the target sequences, and functional annotations determined with the protein family database, Pfam (Finn et al., 2014). We employed GenAlEx version 6.5 (Peakall and Smouse, 2012) to calculate genetic diversity parameters. The allelic polymorphism information content (PIC) was calculated using CERVUS 3.0 (Kalinowski et al., 2007). Allele numbers ranged from three to 22, with an average of 11.55 alleles per locus. Levels of observed and expected heterozygosity ranged from 0 to 0.842 (average 0.176) and 0 to 0.883 (average 0.491), respectively. In addition, PIC values ranged from 0 to 0.847 (average 0.333). The genetic parameters calculated for the 11 polymorphic EST-SSR loci are detailed in Table 2. The target sequences for all microsatellite loci are provided in Appendices S1 and S2. Furthermore, we conducted cross-species amplification of the 11 polymorphic primer pairs on two related species: E. likiangensis from Yulong, Yunnan Province, and E. equisetina from Datong, Shanxi Province, China (Appendix 1). All 11 primer pairs successfully amplified E. likiangensis, except for locus E-20, which produced monomorphic bands in the species (Table 3). For E. equisetina, nine out of the 11 primers tested were polymorphic, and two loci failed to amplify. The interspecific amplification profile may be partially related to the phylogenetic relationships between species, as the relationship between E. equisetina and E. sinica is more distant (Ickert-Bond and Wojciechowski, 2004). In terms of polymorphisms, except for primers at the E-49 locus, the remaining primer pairs showed moderate polymorphism in E. equisetina, possibly due to the small sample size. CONCLUSIONS The EST-SSR polymorphic markers developed in this study will be potentially useful for studies of population structure and genetic diversity in E. sinica conservation genetics. These new markers will also be applicable for E. likiangensis and E. equisetina and can enrich the number of DNA markers available for Ephedra. ACKNOWLEDGMENTS The authors thank Dr. X.-R. Wang and Dr. X.-Y. Kang for their valuable suggestions. This study was supported by grants from the Fundamental Research Funds for the Central Universities (no. YX2013-412018BLCB08). DATA ACCESSIBILITY Expressed sequence tags used for primer development were downloaded from the National Center for Biotechnology Information (NCBI) Expressed Sequence Tags database (dbEST). GenBank accession numbers for target sequences of both polymorphic and monomorphic SSR loci are provided in Table 1 and Appendix 2. SUPPORTING INFORMATION Additional Supporting Information may be found online in the Supporting Information section at the end of the article. APPENDIX S1. Monomorphic microsatellite target sequences from microsatellite marker development in Ephedra sinica. APPENDIX S2. 
Polymorphic microsatellite target sequences from microsatellite marker development in Ephedra sinica.
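For reference, the diversity statistics reported in the Results follow the usual definitions: expected heterozygosity He = 1 − Σ p_i² over the allele frequencies p_i at a locus, and the polymorphism information content of Botstein et al., PIC = 1 − Σ p_i² − Σ_{i<j} 2 p_i² p_j², as computed by programs such as CERVUS. A minimal sketch with a made-up allele-frequency vector (purely illustrative, not one of the reported loci):

def expected_heterozygosity(freqs):
    return 1.0 - sum(p * p for p in freqs)

def pic(freqs):
    # Polymorphism information content (Botstein et al., 1980)
    hom = sum(p * p for p in freqs)
    cross = sum(2 * freqs[i] ** 2 * freqs[j] ** 2
                for i in range(len(freqs))
                for j in range(i + 1, len(freqs)))
    return 1.0 - hom - cross

p = [0.4, 0.3, 0.2, 0.1]          # illustrative allele frequencies at one locus
print(round(expected_heterozygosity(p), 3), round(pic(p), 3))   # 0.7 0.645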
2019-01-30T14:03:19.020Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "c5c4f207ff53ad1bb45be67856093437ace441f4", "oa_license": "CCBY", "oa_url": "https://bsapubs.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/aps3.1212", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c5c4f207ff53ad1bb45be67856093437ace441f4", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
252763800
pes2o/s2orc
v3-fos-license
An agent-based modeling approach for lung fibrosis in response to COVID-19 The severity of the COVID-19 pandemic has created an emerging need to investigate the long-term effects of infection on patients. Many individuals are at risk of suffering pulmonary fibrosis due to the pathogenesis of lung injury and impairment in the healing mechanism. Fibroblasts are the central mediators of extracellular matrix (ECM) deposition during tissue regeneration, regulated by anti-inflammatory cytokines including transforming growth factor beta (TGF-β). The TGF-β-dependent accumulation of fibroblasts at the damaged site and excess fibrillar collagen deposition lead to fibrosis. We developed an open-source, multiscale tissue simulator to investigate the role of TGF-β sources in the progression of lung fibrosis after SARS-CoV-2 exposure, intracellular viral replication, infection of epithelial cells, and host immune response. Using the model, we predicted the dynamics of fibroblasts, TGF-β, and collagen deposition for 15 days post-infection in virtual lung tissue. Our results showed variation in collagen area fractions between 2% and 40% depending on the spatial behavior of the sources (stationary or mobile), the rate of activation of TGF-β, and the duration of TGF-β sources. We identified M2 macrophages as primary contributors to higher collagen area fraction. Our simulation results also predicted fibrotic outcomes even with lower collagen area fraction when spatially-localized latent TGF-β sources were active for longer times. We validated our model by comparing simulated dynamics for TGF-β, collagen area fraction, and macrophage cell population with independent experimental data from mouse models. Our results showed that partial removal of TGF-β sources changed the fibrotic patterns; in the presence of persistent TGF-β sources, partial removal of TGF-β from the ECM significantly increased collagen area fraction due to maintenance of chemotactic gradients driving fibroblast movement. The computational findings are consistent with independent experimental and clinical observations of collagen area fractions and cell population dynamics not used in developing the model. These critical insights into the activity of TGF-β sources may find applications in the current clinical trials targeting TGF-β for the resolution of lung fibrosis. TGF-β-dependent functions for fibroblasts The function for TGF-β-dependent recruitment of fibroblasts from [2] was fit to experimental data of [3], represented by solid red circles, and is shown in [6].We extracted data between day 4 and day 9 of the fibroblast population and the ECM concentration because we observed fibroblast activation and recruitment in our model during that period.Also, in Hao et al. [6], the population dynamics and ECM deposition remained linear during that period.We assumed fibroblast cell weight was 2 × 10 −9 g to calculate the number of fibroblasts.We used Eq S1 to estimate fibroblast collagen production rate (k F C ): 1 . 6 .Neutrophils 1 ./15 4 . 1 . 1 .Fibroblasts 1 . 4 . 5 . Live epithelial cells undergo apoptosis after sufficient cumulative contact time with adhered CD8+ T cells 2. Dead epithelial cells produce debris 3. Virus adheres to unbound external ACE2 receptor to become external (virus)-bound ACE2 receptor 4. Bound external ACE2 receptor is internalized (endocytosed) to become internal bound ACE2 receptor 5. 
5. Internalized bound ACE2 receptor releases its virion and becomes unbound internalized receptor. The released virus is available for use by the viral lifecycle model
6. Internalized unbound ACE2 receptor is returned to the cell surface to become external unbound receptor
7. Each receptor can bind to at most one virus particle
8. Internalized virus is uncoated
9. Uncoated virus (viral contents) leads to release of functioning RNA
10. RNA creates viral protein at a constant rate unless it degrades
11. Viral RNA is replicated at a rate that saturates with the amount of viral RNA
12. Viral RNA undergoes constitutive (first order) degradation
13. Viral protein is transformed to an assembled virus state
14. Assembled virus is released by the cell (exocytosis)
15. After infection, cells secrete chemokine
16. As a proxy for viral disruption of the cell, the probability of cell death increases with the total number of assembled virions
17. Apoptosed cells lyse and release some or all their contents
18. Once viral RNA exceeds a particular threshold, the cell enters the pyroptosis cascade
19. Once pyroptosis begins, the intracellular cascade is modeled by a system of ODEs monitoring cytokine production and cell volume swelling
20. Cell secretion rate for pro-inflammatory cytokine increases to include the secretion rate of IL-18
21. Cell secretes IL-1β, which causes a bystander effect initiating pyroptosis in neighboring cells
22. Cell lyses (dying and releasing its contents) once its volume has exceeded 1.5× the homeostatic volume
23. Infected epithelial cells secrete pro-inflammatory cytokine
24. Antigen presentation in infected cells is a function of intracellular viral protein
Macrophages
1. Resident (unactivated) and newly recruited macrophages move along debris gradients
2. Macrophages phagocytose dead cells. Time taken for material phagocytosis is proportional to the size of the debris
3. Macrophages break down phagocytosed materials
4. After phagocytosing dead cells, macrophages activate and secrete pro-inflammatory cytokines
5. Activated macrophages can decrease migration speed
6. Activated macrophages have a higher apoptosis rate
7. Activated macrophages migrate along chemokine and debris gradients
8. Macrophages are recruited into tissue by pro-inflammatory cytokines
9. Macrophages can die and become dead cells only if they are in an exhausted state
10. Macrophages become exhausted (stop phagocytosing) if internalized debris is above a threshold
11. CD4+ T cell contact induces activated macrophage phagocytosis of live infected cells
Neutrophils
1. Neutrophils are recruited into the tissue by pro-inflammatory cytokines
2. Neutrophils die naturally and become dead cells
3. Neutrophils migrate locally in the tissue along chemokine and debris gradients
4. Neutrophils phagocytose dead cells and activate
5. Neutrophils break down phagocytosed materials
6. Activated neutrophils reduce migration speed
7. Neutrophils uptake virus
8. Neutrophils secrete ROS upon phagocytosis
Dendritic cells (DCs)
1. Resident DCs exist in the tissue
2. DCs are activated by infected cells and/or virus
3. Portion of activated DCs leave the tissue to travel to the lymph node
4. DCs chemotax up the chemokine gradient
5. Activated DCs present antigen to CD8+ T cells, increasing their proliferation rate and killing efficacy (doubled proliferation rate and attachment rate)
6. Activated DCs also regulate the CD8+ T cell levels within a threshold by enhancing CD8+ T cell clearance
CD8+ T cells
1. CD8+ T cells are recruited into the tissue by pro-inflammatory cytokines
2. CD8+ T cells apoptose naturally and become dead cells
3. CD8+ T cells move locally in the tissue along chemokine gradients
4. CD8+ T cells adhere to infected cells. Cumulated contact time with adhered CD8+ T cells can induce apoptosis
5. Activated DCs present antigen to CD8+ T cells, which increases the CD8+ T cell proliferation rate
6. Activated DCs also regulate the CD8+ T cell levels within a threshold by enhancing CD8+ T cell clearance
7. CD8+ T cells have a max generation counter and will not proliferate after the set generation
CD4+ T cells
1. CD4+ T cells are recruited into the tissue by the lymph node
2. CD4+ T cells apoptose naturally and become dead cells
3. CD4+ T cells move locally in the tissue along chemokine gradients
4. CD4+ T cells are activated in the lymph node by three signals: antigenic presentation by the DCs, direct activation by cytokines secreted by DCs, and direct activation by cytokines secreted by CD4+ T cells
5. CD4+ T cells are suppressed directly by cytokines secreted by CD4+ T cells
6. CD4+ T cells have a max generation counter and will not proliferate after the set generation
Tissue microenvironment
1. Virus diffuses in the microenvironment
2. Virus adhesion to a cell stops its diffusion (acts as an uptake term)
3. Pro-inflammatory cytokine diffuses in the microenvironment
4. Pro-inflammatory cytokine is taken up by recruited immune cells
5. Pro-inflammatory cytokine is eliminated or cleared
6. Chemokine diffuses in the microenvironment
7. Chemokine is taken up by immune cells during chemotaxis
8. Chemokine is eliminated or cleared
9. Debris diffuses in the microenvironment
10. Debris is taken up by macrophages and neutrophils during chemotaxis
11. Debris is eliminated or cleared
12. Immunoglobulin (Ig) diffuses in the microenvironment
13. Virions and Ig react in tissue, removing both components
14. Reactive oxidative species (ROS) diffuses in the microenvironment
Fibrosis model
We extended the overall model to include the mechanisms of fibrosis. The details of the cell types and biological hypotheses are described in detail in the manuscript. Here, we summarize the cell types and biological hypotheses (agent-based model rules) for the fibrosis model; a minimal illustrative sketch of the fibroblast rules is given after this list.
M2 Macrophages
1. CD8+ T cell contact stops activated macrophage secretion of pro-inflammatory cytokine and switches to M2 phase
2. M2 macrophages secrete TGF-β
Secreting agent
1. Secreting agents are created at the site where a CD8+ T cell kills an epithelial cell to mimic latent TGF-β activation embedded in the tissue
2. TGF-β cytokine is eliminated or cleared
3. Secreting agents secrete TGF-β for a set amount of time
Fibroblasts
1. Resident inactive fibroblasts exist in the tissue
2. Resident inactive fibroblasts do not apoptose to maintain homeostasis
3. TGF-β activates the resident inactive fibroblasts and absence of TGF-β inactivates fibroblasts
4. Active fibroblasts are recruited into the tissue by TGF-β
5. Active fibroblasts and inactive fibroblasts from the active state apoptose naturally and become dead cells
6. Fibroblasts move locally in the tissue up gradients of TGF-β
7. Fibroblast cells deposit collagen continuously
Tissue microenvironment
1. TGF-β diffuses in the microenvironment
2. TGF-β is degraded or removed
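To make the fibroblast rules above concrete, the following is a minimal, illustrative Python sketch (not the authors' implementation) of rules 3, 6, and 7: TGF-β-dependent activation, movement up the local TGF-β gradient, and continuous collagen deposition, with the deposition scaled by a Michaelis-Menten function of local TGF-β as described for F_c(T_β) in the Methods below. All parameter values, the grid size, and the synthetic TGF-β field are placeholders, not the calibrated values from the paper.

```python
import numpy as np

# --- illustrative parameters (placeholders, not the paper's calibrated values) ---
GRID = 50                 # lattice is GRID x GRID sites
TGFB_THRESHOLD = 0.1      # activation threshold for resident fibroblasts (rule 3)
V_TB, K_TB = 1.0, 0.5     # Michaelis-Menten parameters of the deposition function F_c
BASE_DEPOSITION = 0.01    # collagen deposited per step by an active fibroblast (rule 7)

rng = np.random.default_rng(0)
tgfb = np.zeros((GRID, GRID))
tgfb[20:30, 20:30] = 1.0          # a patch of TGF-beta around a damaged site
collagen = np.zeros((GRID, GRID))

class Fibroblast:
    def __init__(self, i, j):
        self.i, self.j = i, j
        self.active = False

    def step(self):
        local = tgfb[self.i, self.j]
        # Rule 3: TGF-beta activates, absence of TGF-beta inactivates
        self.active = local > TGFB_THRESHOLD
        if not self.active:
            return
        # Rule 6: move one lattice site up the local TGF-beta gradient
        best = (self.i, self.j)
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ni, nj = self.i + di, self.j + dj
            if 0 <= ni < GRID and 0 <= nj < GRID and tgfb[ni, nj] > tgfb[best]:
                best = (ni, nj)
        self.i, self.j = best
        # Rule 7: deposit collagen continuously, scaled by F_c(T_beta)
        f_c = V_TB * local / (K_TB + local)   # Michaelis-Menten form
        collagen[self.i, self.j] += BASE_DEPOSITION * f_c

fibroblasts = [Fibroblast(*rng.integers(0, GRID, 2)) for _ in range(30)]
for _ in range(100):                 # 100 illustrative time steps
    for f in fibroblasts:
        f.step()
print("total collagen deposited:", collagen.sum())
```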
Text B. Methods
Fig A.A. This fitted curve is used as F_g(T_β) for recruitment of new fibroblasts in Eq 5 of the fibrosis model. We assumed constant response (green line) when TGF-β concentration exceeds the threshold value (10 ng/mL, marked by the vertical black line in Fig A.A) because of the polynomial nature of the recruitment signal. Fig A.B shows the function used for the TGF-β-dependent collagen deposition rate from fibroblasts, F_c(T_β). The experimental data from [4, 5], represented by solid red circles, were normalized and fitted with a Michaelis-Menten kinetic function represented by a blue curve to estimate V_Tβ and k_Tβ for F_c(T_β) in Eq 7 of the fibrosis model.
Fig A. TGF-β concentration dependency for functions used in the fibrosis model to describe (A) recruitment of fibroblasts (Eq 5) and (B) collagen deposition from fibroblasts (Eq 7). In (A) the green curve is used as the F_g(T_β) constant value for T_β > 10.
Fig D. Case DM: equal TGF-β activation rate from stationary damaged sites and secretion rate from mobile macrophages. Simulated dynamics of cell population with (A-G) collagen and (H-N) TGF-β concentration fields, shown behind the cells (circles) and corresponding to the color bars. Representative model results (one replicate) at days (A, H) 0, (B, I) 2, (C, J) 5, (D, K) 6, (E, L) 8, (F, M) 9, and (G, N) 15. Epithelial cells are blue, macrophages are green, CD8+ T cells are red, secreting agents are black, and fibroblasts are purple. The color codes for other immune cells are the same as in Getz et al. [1]. The cases are defined in Fig 2.
Fig E. Case D: TGF-β activation from stationary damaged sites only, and secretion from mobile macrophages is turned off. Simulated dynamics of cell population with (A-G) collagen and (H-N) TGF-β concentration fields, shown behind the cells (circles) and corresponding to the color bars. Representative model results (one replicate) at days (A, H) 0, (B, I) 2, (C, J) 5, (D, K) 6, (E, L) 8, (F, M) 9, and (G, N) 15. Epithelial cells are blue, macrophages are green, CD8+ T cells are red, secreting agents are black, and fibroblasts are purple. The color codes for other immune cells are the same as in Getz et al. [1]. The cases are defined in Fig 2.
Fig F. Case M: TGF-β secretion from mobile macrophages only, and activation rate from stationary damaged sites is turned off. Simulated dynamics of cell population with (A-G) collagen and (H-N) TGF-β concentration fields, shown behind the cells (circles) and corresponding to the color bars. Representative model results (one replicate) at days (A, H) 0, (B, I) 2, (C, J) 5, (D, K) 6, (E, L) 8, (F, M) 9, and (G, N) 15. Epithelial cells are blue, macrophages are green, CD8+ T cells are red, secreting agents are black, and fibroblasts are purple. The color codes for other immune cells are the same as in Getz et al. [1]. The cases are defined in Fig 2.
Fig G. Effects of sources on collagen deposition. Dynamics of collagen concentration spatially averaged over the domain varying with the sources of TGF-β from (A) case DM, (B) case D, and (C) case M. The solid curves represent the mean of predictions, and shaded areas represent the predictions between the 5th and 95th percentile of 15 replications of the agent-based model. The cases are defined in Fig 2.
Fig H.
Case DHMH: high TGF-β activation rate from stationary damaged sites and high TGF-β secretion rate from mobile macrophages.Simulated dynamics of cell population with (A-G) collagen and (H-N) TGF-β concentration fields, shown behind the cells (circles) and corresponding to the color bars.Representative model results (one replicate) at days (A, H) 0, (B, I) 2, (C, J) 5, (D, K) 6, (E, L) 8, (F, M) 9, and (G, N) 15.Epithelial cells are blue, macrophages are green, CD8+ T cells are red, secreting agents are black, and fibroblasts are purple.The color codes for other immune cells are the same as in Getz et al. [1].The cases are defined in Fig 2. Fig I. Fig I. Case DLML: low TGF-β activation rate from stationary damaged sites and low TGF-β secretion rate from mobile macrophages.Simulated dynamics of cell population with (A-G) collagen and (H-N) TGF-β concentration fields, shown behind the cells (circles) and corresponding to the color bars.Representative model results (one replicate) at days (A, H) 0, (B, I) 2, (C, J) 5, (D, K) 6, (E, L) 8, (F, M) 9, and (G, N) 15.Epithelial cells are blue, macrophages are green, CD8+ T cells are red, secreting agents are black, and fibroblasts are purple.The color codes for other immune cells are the same as in Getz et al. [1].The cases are defined in Fig 2. Fig J. Fig J. Case DHML: high TGF-β activation rate from stationary damaged sites and low TGF-β secretion rate from mobile macrophages.Simulated dynamics of cell population with (A-G) collagen and (H-N) TGF-β concentration fields, shown behind the cells (circles) and corresponding to the color bars.Representative model results (one replicate) at days (A, H) 0, (B, I) 2, (C, J) 5, (D, K) 6, (E, L) 8, (F, M) 9, and (G, N) 15.Epithelial cells are blue, macrophages are green, CD8+ T cells are red, secreting agents are black, and fibroblasts are purple.The color codes for other immune cells are the same as in Getz et al. [1].The cases are defined in Fig 2. Fig K. Fig K. Case DLMH: low TGF-β activation rate from stationary damaged site and high TGF-β secretion rate from mobile macrophages.Simulated dynamics of cell population with (A-G) collagen and (H-N) TGF-β concentration fields, shown behind the cells (circles) and corresponding to the color bars.Representative model results (one replicate) at days (A, H) 0, (B, I) 2, (C, J) 5, (D, K) 6, (E, L) 8, (F, M) 9, and (G, N) 15.Epithelial cells are blue, macrophages are green, CD8+ T cells are red, secreting agents are black, and fibroblasts are purple.The color codes for other immune cells are the same as in Getz et al. [1].The cases are defined in Fig 2. Fig M. Fig M. Case DA with Gaussian distribution for initial placement of virions.(A) Cell populations; (B) TGF-β concentration spatially averaged over the domain; (C) collagen concentration spatially averaged over the domain; and (D) heat map showing collagen area fraction (AF ) above the threshold value of 1 × 10 −7 µg µm -3 .The solid curves represent the mean of predictions, and shaded areas represent the predictions between the 5th and 95th percentile of 15 replications of the agent-based model.The cases are defined in Fig 2. Fig N. Fig N. 
Case DA with Gaussian distribution for initial placement of virions.Simulated dynamics of cell population with (A-G) collagen and (H-N) TGF-β concentration fields, shown behind the cells (circles) and corresponding to the color bars.Representative model results (one replicate) at days (A, H) 0, (B, I) 2, (C, J) 5, (D, K) 6, (E, L) 8, (F, M) 9, and (G, N) 15.Epithelial cells are blue, macrophages are green, CD8+ T cells are red, secreting agents are black, and fibroblasts are purple.The color codes for other immune cells are the same as in Getz et al. [1].The cases are defined in Fig 2. Fig O. Fig O. Case MA: extension of case M with longer duration of TGF-β secretion from mobile macrophages.Simulated dynamics of cell population with (A-G) collagen and (H-N) TGF-β concentration fields, shown behind the cells (circles) and corresponding to the color bars.Representative model results (one replicate) at days (A, H) 0, (B, I) 2, (C, J) 5, (D, K) 6, (E, L) 8, (F, M) 9, and (G, N) 15.Epithelial cells are blue, macrophages are green, CD8+ T cells are red, secreting agents are black, and fibroblasts are purple.The color codes for other immune cells are the same as in Getz et al. [1].The cases are defined in Fig 2. Fig P. Fig P. Dynamics of populations of macrophages in different states.The overall model has five states: inactivated state (MI), M1 phenotype (M1), M2 phenotype (M2), hyperactive state (MH), and exhausted state (ME).Both M1 and M2 phenotypes can also exist in MH and ME states.Details are available in Getz et al. [1].(A) Case D, (B) case M, (C) case DA, and (D) case MA.The solid curves represent the mean of predictions, and shaded areas represent the predictions between the 5th and 95th percentile of 15 replications of the agent-based model.The cases are defined in Fig 2. Fig R. Fig R. Case MAU: extension of case MA with the uptake of TGF-β by fibroblasts.Simulated dynamics of cell population with (A-G) collagen and (H-N) TGF-β concentration fields, shown behind the cells (circles) and corresponding to the color bars.Representative model results (one replicate) at days (A, H) 0, (B, I) 2, (C, J) 5, (D, K) 6, (E, L) 8, (F, M) 9, and (G, N) 15.Epithelial cells are blue, macrophages are green, CD8+ T cells are red, secreting agents are black, and fibroblasts are purple.The color codes for other immune cells are the same as in Getz et al. [1].The cases are defined in Fig 2.
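Several of the results above report a collagen area fraction (AF), i.e., the fraction of the tissue domain where deposited collagen exceeds a threshold (1 × 10−7 µg µm−3 in Fig M). Below is a minimal, illustrative Python sketch of how such an area fraction can be computed from a simulated collagen concentration field; the synthetic field and grid size are placeholders, and only the thresholding logic follows the definition quoted in the caption.

```python
import numpy as np

AF_THRESHOLD = 1e-7   # ug / um^3, the threshold quoted for the collagen area fraction

def collagen_area_fraction(collagen_field, threshold=AF_THRESHOLD):
    """Fraction of grid sites whose collagen concentration exceeds `threshold`."""
    field = np.asarray(collagen_field, dtype=float)
    return np.count_nonzero(field > threshold) / field.size

# Synthetic example: a 100x100 field with a localized fibrotic patch (placeholder values).
rng = np.random.default_rng(1)
field = rng.uniform(0.0, 5e-8, size=(100, 100))   # background below threshold
field[40:60, 40:60] = 5e-7                        # patch above threshold

print(f"collagen area fraction: {collagen_area_fraction(field):.2%}")  # ~4% here
```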
2022-10-07T13:11:22.828Z
2022-10-04T00:00:00.000
{ "year": 2023, "sha1": "c3e63f46dc341fb5470f43176e04069854136ec8", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1011741&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6b8435c901fec730f00ef7ed7c31c5eb83dd5db9", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
59378980
pes2o/s2orc
v3-fos-license
Spontaneous decay in arbitrary cavity size We present a complete study of the influence of cavity size on the spontaneous decay of an excited atomic state, roughly approximated by a harmonic oscillator. We confine the oscillator-field system in a perfectly reflective spherical cavity of radius $R$ and work in the formalism of dressed coordinates and states, which allows us to perform non-perturbative calculations of the probability for the atom to decay spontaneously from the first excited state to the ground state. In free space, $R\to\infty$, we recover known exact results, and for sufficiently small $R$ we develop a power expansion in this parameter. Furthermore, for arbitrary cavity radius we perform numerical computations and show complete agreement with the exact result for $R\to\infty$ and with the power expansion results for small cavities, demonstrating the robustness of our approach. We find that, in general, the spontaneous decay of an excited state of the atom increases with the cavity radius and vice versa. For sufficiently small cavities the atom practically does not suffer spontaneous decay, whereas for large cavities the spontaneous decay approaches the free-space $R\to\infty$ value. On the other hand, for particular values of the cavity radius at which the cavity is in resonance with the natural frequency of the atom, the spontaneous decay transition probability is enhanced compared to the free-space case. Finally, we show how the spontaneous decay probability goes from an oscillatory time behaviour, for finite cavity radius, to an almost exponential decay in free space.
I. INTRODUCTION The study of spontaneous decay in cavities is an interesting issue that has attracted interest over the years. It was initiated in Ref. [1], which showed the enhancement of spontaneous decay for atomic transitions in resonance with the cavity modes. Many years later, in Ref. [2], following the same ideas, the suppression of the spontaneous decay of an excited atomic state placed in a sufficiently small cavity was demonstrated. Afterward, many works have been devoted to the subject, both experimental [3][4][5][6][7] and theoretical [8][9][10][11][12][13][14][15]. However, to the best of our knowledge, a full study of the spontaneous decay for arbitrary cavity size has not been carried out. In this paper we consider this task by using a simplified model for the atom, roughly approximated by a single harmonic oscillator, with the electromagnetic field taken as a massless scalar field. Obviously, real atoms do not have equally spaced energy levels, nor are they one-dimensional systems. However, our main purpose in this work is not to understand how the nature of the atom affects the spontaneous decay, but how it depends in a precise way on the cavity size. To mimic the atom by a harmonic oscillator, with regard to the stability of the ground state, we will use the dressed coordinates and states approach introduced in Ref. [16] as a method to account, in a non-perturbative way, for the oscillator radiation process in free space. In subsequent works the concept was used to study the spontaneous emission of atoms in small cavities [17], quantum Brownian motion [19,20], the thermalization process [21,22,29], the time evolution of bipartite systems [23][24][25], the entanglement of biatomic systems [27,28,30], and other related issues [31][32][33][34][35]. For a clear explanation see reference [32].
The formalism was shown to have the technical advantage of allowing exact computation of the probabilities associated with the different oscillator (atom) radiation processes [35]. For example, the probability for the atom to decay spontaneously from the first excited state to the ground state was obtained easily for arbitrary coupling constant, weak or strong. Nevertheless, in all the above references exact computations have been possible only for free space, and for sufficiently small cavity radius only rough estimations have been possible. The purpose of this work is to present techniques applicable to cavities of arbitrary size. Although the developed computational techniques are applicable to all the problems cited above, we restrict ourselves here to computing the transition probability, due to spontaneous decay, of the first excited state of an atom approximated by the dressed harmonic oscillator. For sufficiently small cavities we develop a power expansion in the cavity radius parameter, where all the coefficients of the expansion are in principle calculable. On the other hand, for arbitrary values of the cavity radius we consider numerical computations. Finally, we compare these numerical computations with the expansion developed for small cavities and with the exact result, R → ∞, finding good agreement. This work is organized as follows. In section II we briefly review the concept of dressed coordinates and states. In section III we consider the analytical computation of the transition probability, due to spontaneous decay, in the free-space case, R → ∞, and for sufficiently small cavity radius. In section IV we present numerical computations for arbitrary cavity radius, and finally in section V we give our concluding remarks. II. DRESSED COORDINATES AND STATES The model we consider is a harmonic oscillator linearly interacting with a massless scalar field, the whole system confined inside a spherical cavity of radius R. The Hamiltonian of the system can be put in the form [16] where q_0, p_0 are the oscillator degrees of freedom and q_k, p_k are the corresponding ones for the field modes of frequencies ω_k = πk/R; k = 1, 2, 3, ...; c_k = ηω_k, η = √(2g∆ω), ∆ω = ω_{k+1} − ω_k = π/R, and g is a coupling constant with dimensions of frequency. In Eq. (1) the limit N → ∞ must be taken at the end of calculations, and the last term can be seen as a frequency renormalization that guarantees a positive-definite Hamiltonian [36]. We diagonalize Hamiltonian (1) by introducing collective coordinates and momenta Q_r, P_r. Substituting these expressions in (1) we obtain the diagonal Hamiltonian (3), where the matrix elements t_μ^r are given by Eq. (4) and the collective frequencies Ω_r are the solutions of Eq. (5). Next we introduce dressed coordinates and states as the physically measurable ones; for details see Ref. [32]. Denoting the dressed coordinates as q'_μ, we have that q'_0 represents the coordinate of the dressed harmonic oscillator of frequency ω_0 and the coordinates q'_k represent the coordinates of the dressed field modes of frequencies ω_k. In terms of the dressed coordinates we introduce the dressed states (6), where ψ_{n_μ}(q'_μ) is the eigenfunction of a one-dimensional harmonic oscillator of frequency ω_μ and energy (n_μ + 1/2)ω_μ. Physically, the dressed state (6) is the one in which the dressed harmonic oscillator is in its n_0-th energy level and the field system is in a state in which there are n_k field quanta of frequencies ω_k.
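As a small illustration of the discretized field content of this model, the snippet below builds the mode frequencies ω_k = kπ/R and couplings c_k = ηω_k with η = √(2g∆ω), ∆ω = π/R, exactly as defined above; the numerical values of R, g, and the truncation N are placeholders chosen only for the example.

```python
import numpy as np

def field_modes(R, g, N):
    """Frequencies and couplings of the N cavity field modes defined in the text."""
    k = np.arange(1, N + 1)
    delta_omega = np.pi / R          # mode spacing, Delta omega = pi / R
    omega_k = k * np.pi / R          # omega_k = k * pi / R
    eta = np.sqrt(2.0 * g * delta_omega)
    c_k = eta * omega_k              # c_k = eta * omega_k
    return omega_k, c_k

# Placeholder parameters: omega_0 * R = 1 with g = omega_0 / 274 (the case studied later).
omega0 = 1.0
R = 1.0 / omega0
g = omega0 / 274.0
omega_k, c_k = field_modes(R, g, N=10)
print(omega_k[:3], c_k[:3])
```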
Requiring the stability of the dressed oscillator ground state in the absence of field quanta, we get the relation between the dressed coordinates and the collective ones. To assure the stability of the state ψ_{0,0,0,...}(q') we identify it with the ground state of the total system Hamiltonian (3), from which we obtain the explicit transformation. The relation between dressed and collective coordinates allows us to compute transition amplitudes between the dressed states (6). If at t = 0 the oscillator-field system is in the state |n_0, n_1, n_2, ...⟩_d (whose coordinate representation is given by ψ_{n_0,n_1,n_2,...}(q')), the probability amplitude to find the system at t > 0 in the state |m_0, m_1, m_2, ...⟩_d is given by an expression that, as shown in Ref. [35], can be cast in terms of the quantities f_{μν}(t) defined in (9). For example, considering as initial state the one in which the dressed harmonic oscillator is in its first excited level and there are no field quanta, we get for the probability amplitude for the system to remain in the same initial state the quantity f_00(t), Eq. (10). Also, the probability amplitude for the dressed harmonic oscillator to decay spontaneously from its first excited level to the ground state, by emission of a field quantum of frequency ω_k, is given by f_0k(t), Eq. (11). In any case, the computation of f_00(t) or f_0k(t) is a formidable task: we have to sum infinitely many terms that we do not know a priori. This is because each term of the sum depends on Ω_r, given as the solutions of Eq. (5), which in the limit N → ∞ possesses infinitely many solutions that cannot be computed analytically. Nevertheless, in the limit in which the cavity radius R → ∞, it is possible to get closed expressions for f_{μν}(t) [21]. In the next section we extend the method to arbitrary cavity radius and perform analytical calculations for sufficiently small R. III. ANALYTICAL COMPUTATION OF PROBABILITY AMPLITUDES Following Ref. [21] we introduce a complex-variable function W(z). Note that the real roots of W(z) = 0 are precisely the solutions of Eq. (5). Also, from Eq. (4) we can write t_0^r in terms of W'(z), Eq. (13), where W'(z) = dW/dz. Taking μ = ν = 0 in Eq. (9) and using (13) we obtain Eq. (14), where in passing to the second line we have used the residue theorem and C is a counter-clockwise contour that encircles the real positive roots Ω_r, i.e., a contour that encircles the real positive axis in the complex z plane. In the same way we obtain Eq. (15) for (11). Using ω_k = kπ/R and the identity (16), we get for W(z) the expression (17). Formulas (14) and (15) are valid for arbitrary cavity radius R; however, we can go further analytically only in the two extreme cases of very large and sufficiently small cavity radius. For a very large cavity, the free-space case, considering z = x ± iε and ε → 0+, we get a simple expression for (17). Using this expression in (14) and considering a rectangular contour for C, we get for the probability amplitude of the harmonic oscillator to remain in the first excited state in free space [21] the integral expression (19), f_00(t) = 2g ∫^∞ · · · , from which we have f_00(t → ∞) = 0. For finite values of t we can easily perform a numerical computation of the above integral, and we obtain an almost exponential decay for the corresponding probability [21]. A. Computation of f_00(t) for small R For arbitrary cavity radius it is not possible to obtain results analogous to (19), and we limit ourselves here to the case in which the cavity radius is sufficiently small. In this case, using (17), it is convenient to rewrite Eqs. (14) and (15) as the contour integrals (20) and (21), where we have made the change of variable zR → z in both integrals.
Since Rg is a dimensionless quantity, expanding the denominator of (20) in powers of πRg we obtain the expansion (22), with coefficients given by (23). Using the residue theorem, we can compute all the above coefficients. For the first term we used the fact that the pole z = Rω_0 is the only one inside C. Taking j = 1 in Eq. (23) we obtain a_1: in this case we have the second-order pole Rω_0 and the first-order poles 0, π, 2π, 3π, ..., and using the residue theorem we arrive at expression (26). All other terms in (23) can be calculated by noting that Rω_0 is a pole of order j + 1, whereas the other poles, 0, π, 2π, 3π, ..., are of order j. Although all the coefficients a_j are computable, for higher j the expressions are cumbersome. Here we quote only a_2, given by (27). We remark that expressions (26) and (27) involve infinite sums over the cavity modes. With these coefficients, the probability |f_00(t)|^2 can be expanded in powers of (πRg); at second order it is given by (28). We stress that |f_00(t)|^2, given by (28), satisfies |f_00(0)|^2 = 1 at each order in gR. Before considering numerical computations for (28), we discuss the validity of the expansion (22). This series expansion will be convergent if (πRg) lim_{j→∞} |a_{j+1}|/|a_j| < 1. Since in the limit j → ∞ the orders of the poles in a_j and a_{j+1} are almost the same, we get lim_{j→∞} |a_{j+1}|/|a_j| = 1. Consequently, for fixed coupling g, the cavity radius R must be considered small if R < 1/(πg), a condition independent of time and of the oscillator frequency ω_0. On the other hand, if we consider very small cavities, R << 1/(πg), we expect the first terms in the power expansion (22) to give the relevant contributions, at least for time values that are not too large. This last condition can be inferred from the expressions for a_1 or a_2, for example, where we can note that such coefficients increase with time almost linearly or quadratically. Therefore, if the series expansion is truncated, the corresponding probability (28) could violate the upper limit |f_00(t)|^2 ≤ 1 for sufficiently large time values. For such time values, we have to consider higher-order contributions. As an illustration, we consider a very small cavity with g = ω_0/274 and ω_0R = 1, where R = 1/(274g) << 1/(πg). In Fig. 1 we depict P(t) = |f_00(t)|^2, as given by (28), with the a_l computed at first, second, third, and fourth order in (Rg). At first order P(t) ≤ 1, as expected; at second order it violates this condition around ω_0t ≈ 80; at third order P(t) > 1 around ω_0t ≈ 200; and at fourth order P(t) > 1 for ω_0t ≈ 350. However, if we include higher terms, the behaviour of P(t) improves for larger values of ω_0t. For example, at sixth order we display P(t) in Fig. 2, where we see that the result is valid up to ω_0t = 400. On the other hand, if we compare P(t) at different orders of approximation, we find that for ω_0t not too large all approximations give almost the same results, as one can conclude from Fig. 3, where we compare P(t) at first, third, and sixth order. Note that very small differences appear only for sufficiently large values of ω_0t; if this parameter is not large enough, the results are almost the same. If we consider other values of the cavity radius in the regime R << 1/(πg), we get almost the same results as above. Therefore, we conclude that for a sufficiently small cavity, higher-order correction terms in the expansion (28) will be important only for large values of ω_0t. But what about the physical meaning of our results?
From Fig. 2, we see that P(t) oscillates around 0.991, never decreasing below 0.982, and we can conclude that for the very small cavity radius considered, the probability for the dressed harmonic oscillator to remain in its first excited level is around 99.1%. We thus have an inhibition of the spontaneous decay similar to that pointed out for the first time in Ref. [2]. If we consider other values of the cavity radius smaller than the one considered above, the probability P(t) increases; that is, the spontaneous decay of the first excited state is more and more suppressed as R decreases. In order to appreciate the orders of magnitude involved in this phenomenon, we consider SI units; in this case ω_0R = 1 can be replaced by ω_0R/c = 1, from which, considering ω_0 ∼ 4 × 10^14 s^−1, in the visible (red), we get R ∼ 7.5 × 10^−7 m, and for ω_0 ∼ 2 × 10^10 s^−1, a typical microwave frequency, we have R ∼ 1.5 × 10^−2 m. For these parameter values we expect an almost complete stability of atomic excited levels. B. Computation of f_0k(t) As done for f_00(t), expanding the denominator of Eq. (21) in powers of Rg, we obtain an expansion analogous to (22), whose coefficients b_j are given by (33). All the above coefficients can be computed using the residue theorem, where the pole Rω_k = πk is of order (j + 1), the pole Rω_0 is of order (j + 1), and the poles 0, π, 2π, ... are of order j. Although we can perform numerical computations with the obtained expression for |f_0k(t)|^2 at the order we desire, as done for P(t), we note that this quantity is in general very small for sufficiently small cavities, as one can easily verify from the identity Σ_k |f_0k(t)|^2 + |f_00(t)|^2 = 1, from which we find Σ_k |f_0k(t)|^2 = 1 − P(t). For the values considered above, ω_0R = 1, g = ω_0/274, we obtain Σ_k |f_0k(t)|^2 < 0.018; that is, the probability for the harmonic oscillator to decay from its first excited state to the ground state by emission of an arbitrary field quantum is smaller than 1.8%. From expressions (34) or (35) it is possible to see that the maximum contribution to |f_0k(t)|^2 is given by those values of ω_k around ω_0. In general, for sufficiently small cavity radius, ω_k = kπ/R > ω_0 and there is no value of ω_k close enough to ω_0. Consequently, |f_0k(t)|^2 will be very small. In other words, when the cavity size is sufficiently small there is no field quantum with energy near the energy gap between the first excited level and the ground state, and in this way the spontaneous decay of the first excited level is practically suppressed. On the other hand, if we consider cavities where ω_0 = ω_k, that is, resonant values R = kπ/ω_0, k = 1, 2, ..., we expect from (34)-(35) that |f_0k(t)| increases appreciably in relation to the non-resonant values. In this case, rigorously, the expressions (26), (27), (34) and (35) are not valid, since for resonant values of the cavity radius the poles of a_j and b_j are of different order from the ones considered previously. Although we have computed the corresponding expressions, we do not present them here, since in general the first terms of the series expansion for f_00(t) or f_0k(t) are valid only for initial time values; that is, resonant values of R are small but not sufficiently small. Instead, in the next section we perform numerical computations for arbitrary cavity radius, where we will show the enhancement of the spontaneous decay in resonant cavities whenever R = nπ/ω_0, n = 1, 2, ....
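As a quick numerical check of the orders of magnitude quoted earlier in this section, the short Python snippet below evaluates R = c/ω_0 (i.e., ω_0R/c = 1) for the two frequencies mentioned in the text; the frequencies are taken directly from the passage and the speed of light is the only added constant.

```python
# Cavity radius R such that omega_0 * R / c = 1, for the two frequencies quoted above.
c = 2.998e8  # speed of light in m/s

for label, omega0 in [("visible (red)", 4e14), ("microwave", 2e10)]:  # in 1/s
    R = c / omega0
    print(f"{label}: omega_0 = {omega0:.1e} 1/s  ->  R = {R:.2e} m")

# Expected output (matching the text): R ~ 7.5e-7 m and R ~ 1.5e-2 m.
```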
IV. ARBITRARY CAVITY SIZE: NUMERICAL COMPUTATIONS We can compute f_00(t) or f_0k(t) numerically for arbitrary cavity radius in two ways. First, we can use expressions (14) or (15) with an appropriate contour C and perform the line integrals numerically. We can consider, for example, a rectangular closed contour with parameters chosen in such a way that the contour encircles the poles on the positive real axis. This is not an easy task, since for a given contour there is no way to prove that the only poles inside it are those on the positive real axis. Therefore, we have to proceed iteratively, decreasing the size of the contour in each step until the results stabilize. However, this leads to long computation times. Another way to compute f_00(t) or f_0k(t) is to solve for the collective frequencies Ω_r from (5) numerically and perform the sums in (9). Since it is not possible to solve numerically for all the collective frequencies, we compute only a finite number of them, for example the first 10^4 solutions. For the other collective frequencies we can use, with good precision, Ω_k = ω_k, since as Ω_r increases it approaches ω_k [16]. Also, the summation in (9) must stop at the maximum value obtained for Ω_r. Again, this could be a problem, but as we show below the sums in (9) converge rapidly. Therefore, we will perform the numerical computations in the way just described. To this end, we first perform the sums in Eqs. (4) and (5); using (16), these yield, respectively, a closed expression for (t_0^r)^2 and a transcendental equation relating cot(RΩ_r) to Ω_r/(πg). From the last expression we note that (t_0^r)^2 ∼ Ω_r^{−2} for large Ω_r, and therefore we can compute numerically with a finite number of solutions for Ω_r; large Ω_r solutions give negligible contributions. As a first application we consider ω_0R = 1, g = ω_0/274, the case treated in the previous section. In this case we obtain for P(t) the result depicted in Fig. 4 as a dotted line, where for comparison we also plot the result obtained in the previous section as a dashed line. We note that both results are in good agreement. Next we consider the time behaviour of P(t) for other values of the cavity radius. In order to compare the behaviour of P(t) for increasing values of R, we keep g = ω_0/274 fixed and take Rg = 1/274, 5/274, 10/274, 40/274, and 400/274. The results for P(t) are depicted in Fig. 5, in the time interval 0 ≤ ω_0t ≤ 500. We conclude that P(t), in general, decreases as R increases and vice versa. Note that as R increases P(t) approaches the free-space case, R → ∞, whereas for very small cavities the probability of spontaneous decay practically goes to zero, P(t) ≈ 1. For finite R, P(t) is an almost oscillating function of ω_0t, whose period increases with R. In Fig. 5, for g = ω_0/274 and gR = 400/274, it appears that P(t) decreases in time for all t; however, considering sufficiently large time values, we can see in Fig. 6 that P(t) increases from a given time value and afterwards decreases again. In the same figure we also depict the case gR = 1000/274, with a similar behaviour. From these results we note that, although P(t) increases from a given value of time, it remains practically at zero for a large time interval before the first oscillation, and this time interval increases with R. Also, in the time interval before P(t) increases, P(t) remains the same in both cases and is practically the same as in the R → ∞ case.
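The Python sketch below illustrates the second numerical strategy described above: bracket and solve the transcendental equation for the collective frequencies Ω_r on successive branches of cot(RΩ), and then sum a finite number of terms. The explicit forms of the eigenfrequency condition of Eq. (5) and of the weights (t_0^r)^2 of Eq. (4) are not reproduced here, so both enter as user-supplied callables; the example functions, the parameter values, and the assumed form f_00(t) = Σ_r (t_0^r)^2 e^{−iΩ_r t} are illustrative placeholders, not the paper's expressions.

```python
import numpy as np
from scipy.optimize import brentq

# --- placeholders: supply the actual Eq. (5) condition and Eq. (4) weights here ---
def eigenfrequency_condition(omega, R, g, omega0):
    """Illustrative stand-in for Eq. (5); its roots play the role of the Omega_r."""
    return np.cos(R * omega) * omega - np.pi * g * np.sin(R * omega)  # placeholder form

def weight(omega_r, R, g, omega0):
    """Illustrative stand-in for (t_0^r)^2; decays like omega_r**-2 as stated in the text."""
    return 1.0 / (1.0 + (omega_r / omega0) ** 2)                      # placeholder form

def collective_frequencies(R, g, omega0, n_roots=10_000):
    """Bracket one root per branch of cot(R*Omega), i.e. between k*pi/R and (k+1)*pi/R."""
    roots, eps = [], 1e-9
    for k in range(n_roots):
        a, b = k * np.pi / R + eps, (k + 1) * np.pi / R - eps
        fa, fb = (eigenfrequency_condition(x, R, g, omega0) for x in (a, b))
        if fa * fb < 0:
            roots.append(brentq(eigenfrequency_condition, a, b, args=(R, g, omega0)))
    return np.array(roots)

def survival_probability(t, R, g, omega0):
    """P(t) = |f_00(t)|^2 with the assumed form f_00(t) = sum_r w_r exp(-i Omega_r t)."""
    omegas = collective_frequencies(R, g, omega0, n_roots=2000)
    w = weight(omegas, R, g, omega0)
    w = w / w.sum()                      # enforce f_00(0) = 1, as required in the text
    f00 = np.sum(w * np.exp(-1j * omegas * t))
    return np.abs(f00) ** 2

omega0 = 1.0
g = omega0 / 274.0
R = 1.0 / omega0                          # the omega_0*R = 1 case discussed above
print(survival_probability(t=10.0, R=R, g=g, omega0=omega0))
```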
In this way we have a clear picture of how the time behaviour of P(t) goes from oscillatory, for finite R, to an almost exponential decay in free space: as R → ∞, the period of oscillation goes to infinity. Although in general the spontaneous decay increases when R is increased and vice versa, there are some values of R for which this behaviour can be different. Consider the case in which R takes a value at which the cavity is in resonance with the frequency of the atom, i.e., ω_0 = ω_k = πk/R, k = 1, 2, .... To be specific, we consider g = ω_0/274 and the minimum resonant value of R, Rω_0 = π. We obtain the result shown in Fig. 7, where we also depict the behaviour of the R → ∞ case for comparison. We note from Fig. 7 that, although in this case the probability for the atom to remain in its first excited level is an oscillatory function of time (we have Rabi oscillations), it decays more rapidly than in the free-space case for initial time values. Therefore, for this resonant cavity we have an enhancement of the spontaneous decay, which is most significant at early times, before the first Rabi oscillation. If we consider other resonant values of R, we reach the same conclusion, but the effect is most appreciable in the case we just considered, Rω_0 = π, as one can conclude from Fig. 8, where we display the P(t) behaviour for other resonant values of R. V. CONCLUSIONS In this paper we considered the dependence of the spontaneous decay of an atom, roughly approximated by a dressed harmonic oscillator, on the size of the cavity in which it is enclosed. For small cavities we obtained analytical expressions, and for cavities of arbitrary size we carried out numerical computations that are in good agreement with the analytical results for sufficiently small cavities and for free space. In general, when the cavity size increases the probability of spontaneous decay of the atom increases, and vice versa. We recovered the well-known experimental result that for sufficiently small cavities the probability of spontaneous decay is greatly suppressed in relation to the free case, whereas for large values of the cavity radius it approaches the free-space case, R → ∞. On the other hand, we found that there are some values of the cavity radius for which the spontaneous decay is enhanced in relation to the free case. This occurs when R = nπ/ω_0, n = 1, 2, 3, ..., the maximum enhancement of the spontaneous decay being achieved for n = 1. From the obtained results, it is not difficult to see that at initial times the atom decays as if it were in free space, for a time interval that increases with the cavity radius. This behaviour can be explained in terms of the time the field quantum takes to travel to the cavity wall and back to the atom. Before the field quantum comes back to the atom, the atom does not "know" that it is confined; therefore the spontaneous decay evolves as in the free case for a time interval of the order ∆t ≈ 2R/c. For example, considering the case Rω_0 = 400 we get ∆tω_0 = 800 (in c = 1 units), and for Rω_0 = 1000 we have ∆tω_0 = 2000, both values in good agreement with our numerical computations depicted in curves 1 and 2 of Fig. 6. Finally, we would like to call attention to the dependence of the spontaneous decay on the coupling constant. Since the dimensionless parameter in our model is gR, if we fix R, all our conclusions remain the same in terms of the coupling constant.
2017-11-01T11:14:06.000Z
2017-11-01T00:00:00.000
{ "year": 2017, "sha1": "755571d9040eb26f9f6ca62297470572057e3301", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "755571d9040eb26f9f6ca62297470572057e3301", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
218889412
pes2o/s2orc
v3-fos-license
Is diffuse intracluster light a good tracer of the galaxy cluster matter distribution? We explore the relation between diffuse intracluster light (central galaxy included) and the galaxy cluster (baryonic and dark) matter distribution using a sample of 528 clusters at $0.2\leq z \leq 0.35$ found in the Dark Energy Survey (DES) Year 1 data. The surface brightness of the diffuse light shows an increasing dependence on cluster total mass at larger radius, and appears to be self-similar with a universal radial dependence after scaling by cluster radius.We also compare the diffuse light radial profiles to the cluster (baryonic and dark) matter distribution measured through weak lensing and find them to be comparable. The IllustrisTNG galaxy formation simulation offers further insight into the connection between diffuse stellar mass and cluster matter distributions -- the simulation radial profile of the diffuse stellar component does not have a similar slope with the total cluster matter content, although that of the cluster satellite galaxies does. Regardless of the radial trends, the amount of diffuse stellar mass has a low-scatter scaling relation with cluster's total mass in the simulation, out-performing the total stellar mass of cluster satellite galaxies. We conclude that there is no consistent evidence on whether or not diffuse light is a faithful radial tracer of the cluster matter distribution. Nevertheless, both observational and simulation results reveal that diffuse light is an excellent indicator of the cluster's total mass. INTRODUCTION Galaxy clusters are permeated by a diffuse component known as the intracluster light (ICL), composed of stars that do not appear to be bound to any cluster galaxies. The existence of ICL in galaxy clusters was first reported in Zwicky (1951), but limited by its very low surface brightness (∼ 30 mag/arcsec 2 ), diffuse intracluster light only started to receive wide attention in the 1990's along with technological advances such as the usage of CCD cameras in astronomy (Uson et al. 1991;Bernstein et al. 1995;Gonzalez et al. 2000;Feldmeier et al. 2003). Given its low surface brightness level, diffuse intraclus-ter light is difficult to observe, but is nevertheless an important component of galaxy clusters. Observations, semianalytical modeling, and simulation studies report that the ICL and cluster central galaxies may make up 10 -50% of the total cluster stellar light (e.g. Feldmeier et al. 2004;Zibetti et al. 2005;Gonzalez et al. 2007;Behroozi et al. 2013;Pillepich et al. 2014;Zhang et al. 2019b). A very interesting new perspective on intracluster light is its connection to the cluster dark matter distribution. Most noticeably, Montes & Trujillo (2019) observed a similarity between the shape of cluster dark matter distribution and diffuse intracluster light, even more than the similarity between dark matter and intracluster gas. A possible explanation is that both dark matter and diffuse intracluster light contain collisionless particles, while the intracluster gas strongly interacts within itself. Thus, diffuse intracluster light is potentially a better tracer of the cluster dark matter distribution and could be an alternative mass proxy for new wide and deep surveys such as LSST (Ivezić et al. 2019). Additional evidence for such a connection between diffuse intracluster light and the underlying cluster dark matter distribution can be found in a few other observation-based studies. 
Many have noticed a correlation between cluster mass (including dark matter mass) and the total diffuse light luminosity or stellar mass, especially at large radius (e.g., Zibetti et al. 2005; Kluge et al. 2020; DeMaio et al. 2020; Huang et al. 2018b,a). Moreover, Zhang et al. (2019b, hereafter Z19) discovered that the ratio between diffuse light surface brightness and a weak-lensing measurement-based cluster mass-density model appears to be flat with cluster radius outside 100 kpc, and that diffuse intracluster light radial profiles are "self-similar", i.e., independent of cluster mass after scaling by cluster radius. This possible connection between diffuse light and cluster dark matter has also triggered interest in modeling this elusive component with simulations by several groups. Recently, Alonso Asensio et al. (2020) investigated 30 clusters of galaxies with a narrow range of masses (10^14 < M_200/M_⊙ < 10^15.4) simulated in the Cluster-EAGLE suite and found that the galaxy clusters' stellar and total matter distributions are even more similar than in Montes & Trujillo (2019). Probing a larger range of halo masses, Cañas et al. (2019) found in the Horizon-AGN simulation that the stellar diffuse light mass fraction increases with halo mass, while its scatter decreases with mass. Earlier, Pillepich et al. (2018) already examined the diffuse light distribution in the IllustrisTNG simulations and pointed out that the diffuse light stellar density beyond 100 kpc of the cluster center has a radial slope similar to that of the cluster dark matter. In this paper, we further explore this connection between diffuse intracluster light and the cluster total matter distribution using data from the Dark Energy Survey (DES, Dark Energy Survey Collaboration et al. 2016), a wide-field optical imaging survey in g, r, i, z, Y using the 4-meter Blanco telescope and the Dark Energy Camera (DECam, Flaugher et al. 2015). The analysis of diffuse light in galaxy clusters greatly benefits from extremely wide-field surveys like SDSS (e.g. Zibetti et al. 2005, hereafter Z05) and DES (e.g. Z19), as they allow improved statistical analyses. Z19 successfully detected the diffuse intracluster light using DES data out to a cluster radius range of 1 - 2 Mpc at redshift ∼ 0.25 by averaging ∼ 300 clusters.
Table 1. Nomenclature used in this paper (Name: Definition).
Cluster Central Galaxy, Central Galaxy, CG: The cluster central galaxy identified by the redMaPPer algorithm. Qualitatively, these names refer to the light/stellar mass contained within the inner ∼ 30 kpc of the galaxy center.
Cluster Satellite Galaxies: The light or stellar mass contained in the non-central cluster galaxies, each defined within a Kron aperture observationally, or at twice the individual object's stellar half mass radius in simulations.
Intracluster Light, Diffuse Intracluster Light, ICL: The diffuse light beyond the outskirts of the central cluster galaxy not associated with any cluster satellite galaxy. Qualitatively, these names refer to the light/stellar mass not already contained in the cluster central galaxy or the cluster satellite galaxies.
Diffuse Light, Diffuse Stellar Mass: The light or stellar mass combination of intracluster light and the cluster central galaxy.
Cluster Total Light, Total Cluster Light, Cluster Total Stellar Mass, Total Cluster Stellar Mass: Total light or stellar mass contained in the galaxy cluster within a cluster radial range specified in the context. This is the combination of diffuse light and cluster satellite galaxies.
We use the Z19 methods to further examine the relation between diffuse light and galaxy cluster mass and to update the Z19 analysis with a larger sample (528 galaxy clusters). Given the difficulties in separating diffuse intracluster light and the cluster central galaxy, we follow the convention in Pillepich et al. (2018) to analyze diffuse intracluster light and the cluster central galaxy together as "diffuse light", while "intracluster light" or ICL is reserved to qualitatively describe the unbound light that is not contained within a few tens of kpc around cluster central galaxies. Tab. 1 summarizes the definitions used in this paper. This paper is organized as follows: in Sec. 2 we describe the DES data (e.g., images, source catalog, and galaxy cluster catalog) and our analysis methods. In Sec. 3 we explore how diffuse light profiles behave as a function of galaxy cluster mass; we also investigate whether the profiles are self-similar for different cluster masses, and examine their ratio to the cluster total light. In Secs. 4 and 5 we explore the main question of this paper, whether or not diffuse light can be used as a tracer of the cluster matter distribution: first by comparing the diffuse light radial distribution to that of the cluster total matter measured with weak lensing in Sec. 4, and then by analyzing the diffuse light properties in the IllustrisTNG hydrodynamic simulations (Pillepich et al. 2018) in Sec. 5 to compare with our measurements. Finally, we discuss and summarize the results in Sec. 6. In agreement with Z19, cosmological distances are calculated with a flat ΛCDM model with h = 0.7 and Ω_m = 0.3. DATA AND METHODS Our analysis is based on the observations collected and processed by the Dark Energy Survey. In this paper, we closely follow Z19 in terms of the adopted data products and diffuse light measurement methods. This section provides a brief review and notes any differences from Z19. The redMaPPer cluster sample As in Z19, we use the galaxy cluster sample identified by the red-sequence Matched-filter Probabilistic Percolation (redMaPPer) algorithm in DES Year 1 data. Each identified cluster is assigned a richness value, denoted as λ, which has been shown to be an excellent low-scatter indicator of cluster mass (e.g. Farahi et al. 2019). To minimize the need for applying redshift-related corrections, we only make use of the clusters in a narrow redshift range (0.2 ≤ z ≤ 0.35). The upper redshift limit is higher than in Z19 to match the weak lensing studies performed on the same cluster sample in McClintock et al. (2019). We further split our sample into four richness bins: 20 ≤ λ < 30, 30 ≤ λ < 45, 45 ≤ λ < 60 and 60 ≤ λ < 150, again following the choice in McClintock et al. (2019). Our selection ends with 538 clusters in total: 305, 149, 52, and 32 clusters in each of the respective richness bins. We use the mass-λ relation from McClintock et al. (2019) to estimate the cluster mass M_200m, defined as the mass contained inside a spherical radius within which the cluster has a 200 times overdensity with respect to the universe mean matter density at the cluster's redshift. The lowest richness value in our cluster sample corresponds to an M_200m value of 1.2 × 10^14 M_⊙, while the highest richness value corresponds to an M_200m value of 1.8 × 10^15 M_⊙. We further follow up the cluster images and note 10 bad images in our cluster sample (for instance, with unmasked objects and very bright regions caused by nearby stars).
We remove them from our analysis, reducing the cluster sample size to 528 in total: 297, 148, 52 and 31 clusters at 20 ≤ λ < 30, 30 ≤ λ < 45, 45 ≤ λ < 60 and 60 ≤ λ < 150, respectively. Figure 1 shows the redshift, richness and mass distribution of the cluster sample. We make diffuse light measurements around the redMaPPer-selected central galaxies, which aim to select the cluster galaxies closest to the peaks of the cluster matter distribution. Studies have found the selections to be correct for ∼ 75 ± 8% of the clusters (Zhang et al. 2019a), but otherwise the redMaPPer algorithm may misidentify a cluster satellite galaxy, or a projected foreground/background galaxy, as the center (Hollowood et al. 2019). Thus, we expect our diffuse light measurements to have lower surface brightness than those from an ideal situation (e.g., in simulation) in which the central galaxy identification is unambiguous. However, mis-centering should have minimal effects on our results that compare diffuse light, total cluster light, and cluster weak lensing measurements, as those are measured around the same central galaxy selections. Figure 1. Richness as a function of redshift for 528 galaxy clusters. The colour represents the mean mass computed for the redMaPPer clusters. We use the mass-richness relation and the best-fit parameters reported in McClintock et al. (2019) to obtain the clusters' mean mass from their richness. Light profile measurement The cluster diffuse light in this analysis is derived from single epoch images from the DES Year 3 processing campaign by the DES Data Management (DES DM) working group (Abbott et al. 2018). For a given cluster, all single epoch images which overlap with the central cluster galaxy (within 9 ) are averaged to reduce variations in the sky background. Because bright stars or nearby galaxies can affect diffuse light measurements, we remove clusters that are anywhere nearer than 526 (equivalent to 2,000 pixels at DECam pixel scale) to these objects (using bad region mask > 2 described in Drlica-Wagner et al. 2018). The single epoch images of each cluster are then coadded together using the SWarp software (Bertin 2010) to create one image of the cluster. Sky background estimations, evaluated over the whole DECam field-of-view using a Principal Component Analysis (PCA) method (Bernstein et al. 2017) for each DECam exposure image, have been subtracted from the single epoch images, and the SWarp sky subtraction function is turned off during the coadding process. To isolate diffuse light from galaxies and foreground or background objects in the cluster field, we use the DES coadded object catalog to mask detected astronomical objects, except the redMaPPer-selected cluster central galaxies. The masks are constructed as ellipses with the inclination, major and minor axes provided by the DES DM coadd catalog described in Abbott et al. (2018). Figure 2 shows an example of three redMaPPer clusters (z ∼ 0.27) analyzed here (top panel) and the masks applied (bottom panel). Unlike Z19, in which object brightness and detection significance cuts are applied before masking and then the faint galaxy contribution is estimated using galaxy luminosity function constraints, we mask all objects to the DES Y3 catalog limit with detection S/N > 1.5 (magerr_auto_i < 0.72, mag_auto_i < 30.0, effectively mag_auto_i < 25).
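As an illustration of the masking step just described, the sketch below builds a boolean mask from elliptical apertures (position, major/minor axes, and inclination as listed in a catalog) and then averages the unmasked pixels in circular annuli around a chosen center. It is a simplified, hypothetical stand-in for the DES pipeline: the array shapes, column names, and aperture parameters are illustrative assumptions, not the survey's actual data model.

```python
import numpy as np

def elliptical_mask(shape, objects):
    """Return a boolean array that is True on pixels covered by any catalog ellipse.

    `objects` is an iterable of dicts with keys x, y (pixel center), a, b
    (semi-major/minor axes in pixels) and theta (inclination in radians);
    these names are illustrative, not the DES column names.
    """
    ny, nx = shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    mask = np.zeros(shape, dtype=bool)
    for obj in objects:
        dx, dy = xx - obj["x"], yy - obj["y"]
        ct, st = np.cos(obj["theta"]), np.sin(obj["theta"])
        u = dx * ct + dy * st        # rotate into the ellipse frame
        v = -dx * st + dy * ct
        mask |= (u / obj["a"]) ** 2 + (v / obj["b"]) ** 2 <= 1.0
    return mask

def annular_profile(image, mask, center, edges):
    """Mean of unmasked pixels in circular annuli defined by radial bin `edges`."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(xx - center[0], yy - center[1])
    prof = []
    for rin, rout in zip(edges[:-1], edges[1:]):
        sel = (r >= rin) & (r < rout) & ~mask
        prof.append(image[sel].mean() if sel.any() else np.nan)
    return np.array(prof)

# Toy example with a flat background plus one masked "galaxy".
img = np.full((200, 200), 0.1) + np.random.default_rng(2).normal(0, 0.01, (200, 200))
objs = [{"x": 140.0, "y": 60.0, "a": 12.0, "b": 6.0, "theta": 0.5}]
m = elliptical_mask(img.shape, objs)
print(annular_profile(img, m, center=(100, 100), edges=np.arange(0, 100, 10)))
```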
This generous detection limit should mask any real objects detected in the images, and we do not apply a faint galaxy contribution correction afterwards, as Z19 demonstrated this component to be insignificant at redshift ∼ 0.25 in DES data. After the masking process, as mentioned in Sec. 2.1, we further visually inspect all the clusters and prune a total of 10 clusters that appear to be incompletely masked (because of image and catalog mismatching) or appear to be badly affected by nearby stars. The diffuse light profiles are then calculated as the average pixel values of the unmasked regions of the images in radial circular annuli, from which we then subtract residual background profiles to acquire the final measurements. The residual background profiles are measured around redMaPPer random points, which uniformly sample the sky coverage of the redMaPPer clusters in DES data. The same measurement process applied to the redMaPPer central galaxies, including masking and averaging pixel values in circular annuli, is applied to the random points. Thus we expect the residual background measurements to contain fluxes of sky background residuals as well as fluxes from undetected foreground and background astronomical sources (Eckert et al. 2020). We require the random points to be at least 5 arcmin away from the cluster centers to avoid over-subtraction, and a total of 3859 random points are used in our analysis. In the further measurement process, the clusters and random points are assigned to 40 regions using the Kmeans algorithm (https://github.com/esheldon/kmeans_radec; Steinhaus 1956; MacQueen et al. 1967), which uses a clustering algorithm to divide the sky coverage of the redMaPPer clusters into regions with approximately the same area. We average the random point radial profiles in each region and use this as an estimation of the sky background of that region. This averaged random profile is subtracted from each of the measured cluster radial profiles in the same region. Each of the subtracted cluster profiles is then corrected to an observer frame at redshift z = 0.275 (the median redshift of the sample), accounting for both distance dimming and angular to physical distance. Finally, we sample the averaged cluster profiles using the jackknife method to estimate their uncertainties. Differently from Z19, we do not subtract the average flux value in the last bin of the image. In Z19, this measurement process was tested by stacking random points and simulated diffuse light profiles, which shows that the random background subtraction and averaging process produces bias-free measurements. Discussions have also been undertaken about the influence of sky background estimations and the effect of the instrument point spread function (PSF) on diffuse light interpretations. We refer the readers to that paper for further details regarding the measurement methods and tests. Surface brightness in luptitude While it is traditional to describe diffuse light surface brightness in units of mag/area, for sky-subtracted low surface brightness measurements near the noise limit, which can be negative in flux, this leads to extremely noisy figures which are hard to interpret. In this paper, we present the surface brightness measurements of diffuse light in terms of asinh magnitudes proposed by Lupton et al. (1999), which are informally known as "luptitudes" (and we use lup as a symbol for this unit).
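A minimal sketch of this asinh-magnitude conversion is given below. It assumes the standard Lupton et al. (1999) form together with the zeropoint (m_0 = 30) and softening parameter (b ≈ 0.0322) quoted in the following paragraph; the exact equations used in the paper are not reproduced here, so this should be read as an illustrative approximation rather than the pipeline implementation.

```python
import numpy as np

A = 2.5 * np.log10(np.e)   # Pogson ratio, ~1.0857
M0 = 30.0                  # zeropoint magnitude quoted in the text
B = 0.03217925             # softening parameter b quoted in the text

def luptitude(flux, flux_err, m0=M0, b=B, a=A):
    """Asinh magnitude (Lupton et al. 1999) and its propagated uncertainty.

    Behaves like m0 - 2.5*log10(flux) at high S/N, but stays finite for
    zero or negative fluxes. The formulas assume the standard asinh form.
    """
    flux = np.asarray(flux, dtype=float)
    mu = m0 - a * (np.arcsinh(flux / (2.0 * b)) + np.log(b))
    mu_err = a * np.asarray(flux_err, dtype=float) / np.sqrt(4.0 * b ** 2 + flux ** 2)
    return mu, mu_err

# High-S/N fluxes agree with the usual magnitude; negative fluxes remain well defined.
for f in [10.0, 0.05, 0.0, -0.02]:
    mu, err = luptitude(f, 0.02)
    print(f"flux={f:+.3f}  luptitude={mu:.3f}  err={err:.3f}")
```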
The luptitude system behaves very closely to the traditional log-based magnitude in the high signal-tonoise (S/N) regime, but has the advantage of robustness in the low signal-to-noise (S/N) regime or even when the flux is negative. We calculate luptitude and its uncertainty from diffuse light flux and uncertainty using the following equations, In these equations, m 0 = 30 is the zeropoint magnitude; b is the softening parameter, or knee of the luptitude function where standard magnitudes and luptitudes begin to significantly diverge, and is defined as b ≡ √ aσ f ≡ 1.042σ f , in which σ f is fixed to be a flux uncertainty at 500 kpc which sets b ≡ 0.03217925; a ≡ 2.5 log 10 e = 1.0857 (Pogson ratio); σ f is the flux ( f ) measurement uncertainty. In Figure 3, we show the diffuse light surface-brightness profiles of the four cluster richness subsets (Sec. 2.1), following the measurement processes described in Sec. 2.2. The measurements are presented in luptitudes (µ, black lines) and also in the traditional magnitude (m, coloured lines) for comparison. The luptitude and magnitude values of the surface brightness measurements are in excellent agreement when the flux measurements have high S/N (within 200 kpc). As we approach larger cluster radius and the uncertainty of diffuse light measurement increases, µ behaves reliably while m shows discontinuities (because of negative fluctuations), justifying the usage of µ in this paper. Flux profile and integrated flux As mentioned in sec. 2.1, we divide the clusters into 4 richness sub-samples following McClintock et al. (2019), 20 ≤ λ < 30, 30 ≤ λ < 45, 45 ≤ λ < 60 and 60 ≤ λ < 150, which correspond to mean masses of 1.6 × 10 14 , 2.7 × 10 14 , 4.3 × 10 14 and 8.0 × 10 14 M . We compute the surface brightness profiles as described in Sec. 2, accounting for both distance dimming and angular-to-proper distance and convert the fluxes to luptitudes with a zero-point of 30 (see Sec. 2.3). These surface brightness profiles are also integrated to derive the total diffuse light luminosity as a function of radius, as in where L(r ) is the flux profile. Figures 4 and 5 show, respectively, surface and integrated brightness diffuse light profiles for different richness ranges. Unsurprisingly, the surface brightness and integrated brightness of diffuse light in richer clusters is brighter, which can be explained given that richer and thus more massive clusters host more satellite galaxies (Gao et al. 2004), and tidal stripping as well as dwarf galaxy disruption have the opportunity to disperse more stars into the intracluster space. However, the surface brightness and integrated brightness of diffuse light in the cluster central region varies little with cluster richness. Such an effect is in agreement with the inside-out growth scenario, which assumes that galaxy centers form early in a single star-burst, and the accreted galaxy stellar content at later times are deposited onto the galaxy outskirts (e.g. van Dokkum et al. 2010; van The surface brightness profiles of diffuse light presented in both magnitude (coloured regions) and luptitude (black lines) of clusters in four richness ranges. We shift the profiles of the cluster richness subsets by 2 lup/kpc 2 from each other to better visualize the differences between systems. Shaded regions represent their corresponding uncertainties, which are computed using the jackknife sampling method. 
Lower panel: We show the difference between magnitude and luptitude, again shifted by 2 mag for each cluster richness subset. These figures demonstrate that luptitude is better suited than magnitude for presenting diffuse light surface brightness. It is in excellent agreement with magnitude when the diffuse light surface brightness measurement has high S/N, and behaves reliably when the measurement has low S/N. We further investigate the mass dependence of the diffuse light integrated fluxes within five radii, 15, 50, 150, 300, and 500 kpc, which range from being dominated by the BCG, to being dominated by the diffuse light. We use the cluster mass estimations modeled from cluster weak lensing measurements in McClintock et al. (2019). Figure 6 shows the integrated flux in these radial ranges as a function of the cluster mass. To examine the steepness of the cluster mass dependence, we perform a linear fit to the logarithmic values of integrated diffuse flux versus M 200m , as where α is the slope and β is the y-intercept. We also estimate the Pearson correlation coefficient (ρ cc ) as, We report the best-fit parameter values and the correlation coefficients in Tab. 2. The slope of the flux-M 200m dependence is insignificant at small radii (15 and 50 kpc), but becomes steeper with enlarging radius and is most pronounced at the largest radius. The correlation between total diffuse light luminosity and cluster mass is excellent at large radius beyond 50 kpc: the fitting slope indicating the diffuse light mass-dependence is steep and significant at 500 kpc; the correlation coefficient values is also significant, reaching ρ cc > 0.9 outside of 300 kpc. We will return to this correlation and further explore the connection between diffuse light and cluster masses in the upcoming sections. Self-Similarity The distribution of dark matter, hot gas, and even member galaxies in galaxy clusters are known to exhibit a large degree of self-similarity (Peebles 1980), so that these cluster components follow a nearly universal radial profile after scaling by a characteristic radius related to the cluster's mass and redshift (e.g., dark matter: Navarro et al. 1997, hot gas: Kaiser 1986, cluster galaxies: Budzynski et al. 2012). These extraordinary properties often mean a low scatter relation that relates the cluster's dark matter, gaseous or satellite galaxy observables to the cluster's total mass. In Z19, it was discovered that cluster diffuse light also appears to be self-similar, i.e., clusters of different masses appear to have a universal diffuse light profile at large radii beyond 100 kpc of the cluster center, after scaling by the cluster's R 200m , indicating a tight relation between diffuse light and cluster mass. In this section, we revisit the diffuse light self-similarity by scaling the surface brightness profiles where M 200m is the mean mass of each sub-sample estimated with the mass-richness relation from McClintock et al. (2019) and z m is the mean cluster redshift, 0.275; ρ m (z m ) = Ω m ρ crit (1+z m ) 3 is the mean cosmic matter density in physical units for z m , ρ crit is the critical density at redshift zero. The R 200m values are estimated to be 1305. 76, 1561.73, 1822.72, 2240.30 kpc at 20 ≤ λ < 30, 30 ≤ λ < 45, 45 ≤ λ < 60 and 60 ≤ λ < 150, respectively. Figure 7 shows the diffuse light profiles after scaling by R 200m . We observe self-similarity between all the richness bins outside 0.05 R 200m and up to 0.8 r/R 200m within 1σ. 
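The scaling radius follows directly from the overdensity definition, M200m = (4/3) π R200m³ × 200 ρm(z). A short astropy-based sketch, with an illustrative flat ΛCDM cosmology that may differ slightly from the parameters adopted in this work, reproduces values close to those quoted above.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from astropy import units as u

# illustrative flat LCDM cosmology; the exact parameters used in the paper may differ
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def r200m_kpc(m200m_msun, z):
    """Radius (physical kpc) enclosing 200x the mean matter density,
    from M200m = (4/3) * pi * R200m^3 * 200 * rho_m(z)."""
    rho_m = (cosmo.Om0 * cosmo.critical_density0 * (1 + z) ** 3).to(u.Msun / u.kpc ** 3)
    r_cubed = 3.0 * m200m_msun / (4.0 * np.pi * 200.0 * rho_m.value)  # kpc^3
    return r_cubed ** (1.0 / 3.0)

# lowest-richness bin: mean mass ~1.6e14 Msun at the median redshift z = 0.275
print(r200m_kpc(1.6e14, 0.275))   # ~1.3e3 kpc, close to the 1305.76 kpc quoted above
```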
Cluster total light

We also derive the radial profiles from the cluster images without masking any objects (as shown in the top panels of Figure 2). When none of the objects are masked, the cluster images contain not only the diffuse light, but also the light from the rest of the cluster galaxies. The images also contain light from foreground and background structures, although these contributions are eliminated later by subtracting light profiles derived from random images. Throughout this paper, we refer to the light profiles derived from the unmasked images as the cluster total light profiles. For the computation of these cluster total light profiles, we follow the same procedure described in Sec. 2.2, with the exception that we use the unmasked images for both clusters and random points and we re-bin the cluster total light profiles. When computing the sky brightness level using the unmasked random images, the sky brightness level obtained is higher than that from the masked random images, because we are observing the contribution of all the components of the image. We apply the subtraction between the unmasked cluster images and the random images to derive the cluster total light profiles. We find these profiles to be much noisier at radii larger than r = 27.5 kpc. Thus, for the regions beyond 27.5 kpc, we re-bin the profiles using wider annuli to improve the signal-to-noise; we use 15 radial bins in logarithmic space beyond 27.5 kpc. The uncertainties of the cluster total light profiles are sampled with the jackknife method applied to the re-binned individual profiles.

Figure 8. Cluster total light (red dashed line) and diffuse light (blue solid line) profiles in different cluster richness bins, in terms of surface brightness (upper panels) and integrated fluxes (lower panels). The profile uncertainties are represented by the shaded regions and estimated using jackknife sampling. In both the surface brightness and the integrated profiles, the contribution from the cluster total light is higher than that from the diffuse light.

Figure 8 displays the cluster total light profiles in comparison to the diffuse light profiles. Both the diffuse light and cluster total light profiles become fainter as the radius increases, with the total light surface brightness reaching ∼ 31 lup/kpc 2 at r = 500 kpc. Since the cluster total light is completely dominated by the BCG light within r ∼ 10 kpc, the cluster total light and diffuse light profiles coincide at this radial range. The bottom panels of Figure 8 further show the integrated radial profiles of the diffuse light and total light. The total light in the richest clusters reaches a brightness of 15 lup/kpc 2 at r = 1 Mpc, and the cluster total light deviates significantly from the diffuse light beyond ∼ 100 kpc.

Figure 9. Cluster mass dependence of the integrated cluster total light flux. We compute the integrated fluxes within 5 radii (15, 50, 150, 300 and 500 kpc) and show them as a function of the cluster mass. The dotted lines show the best linear fits to the logarithmic values of the flux and the cluster mass. The different heights of the fits indicate that the cluster total light becomes more luminous with increasing radius; the slope also steepens at larger radii, showing that the increase in luminosity is even larger in more massive clusters.

As in Sec.
3.1, we derive the integrated flux of cluster total light in five radial ranges, and study their mass dependence as shown in Figure 9. A linear fit to the logarithmic values between the integrated flux and the cluster mass, M 200m , is performed and the best-fit parameters are reported in the lower section of Tab. 2. The cluster total light flux also shows increasing mass dependence at larger radius, but the trend is not as robust as the trend of diffuse light. At 500 kpc, the cluster total light flux also shows strong mass dependence with a significant correlation coefficient value ρ cc > 0.9 with a steep mass dependence slope of 0.432±0.055. The results show that cluster total light is also well correlated with cluster total mass, but the correlation is not as strong as that of diffuse light. Diffuse light to cluster total light fraction A very important quantity in diffuse light studies is the fraction of diffuse light in total cluster light. We measure this quantity by dividing the surface brightness profiles and the integrated profiles of the diffuse light by the corresponding profiles of total cluster light, which is the diffuse light fraction and cumulative diffuse light fraction respectively. Figure 10 shows these fractions in different cluster radial and richness ranges. We report the diffuse light and integrated ratio at 50, 300, 700 and 1000 kpc for 20 ≤ λ < 30, 30 ≤ λ < 45, 45 ≤ λ < 60 and 60 ≤ λ < 150 in Tab. 3. Within 50 kpc, diffuse light makes up most of the cluster total light, and the cumulative fraction is above 80% regardless of cluster richness. Beyond 50 kpc, given the faster increase of cluster total light with radius than diffuse light, the cumulative diffuse light fraction steadily decreases with increasing radius, which reaches around ∼24% at 700 kpc regardless of cluster richness. We do not notice obvious cluster richness/mass dependence of the diffuse light and integrated flux ratios, especially beyond 200 kpc. Many previous studies have measured diffuse light fraction, but the results seem to be at tension possibly caused by different analysis choices. An important consideration is that the diffuse light fraction changes with the analysis radius, as our measurement demonstrates. Previously, Krick & Bernstein (2007) found that the diffuse light fraction is between 6±5% and 22±12% at one-quarter of the virial radius using r−band, while Montes & Trujillo (2018) found this fraction to be between 8.6±5.6% and 13.1±2.8% at R 500 , and Z19 measured a diffuse light fraction of 44±17% at 1 Mpc. Our results of diffuse light fraction being ∼ 24% at 700 kpc agrees with the range in the previous work. How the diffuse light fraction changes with cluster mass is another interesting topic in diffuse light studies. Efforts with semi-analytical studies suggested an increasing diffuse light fraction with cluster mass, reaching around 50% in clusters of 1.42 × 10 15 M mass (Lin & Mohr 2004). Observationally Zibetti et al. (2005) found no evidence of mass dependence of the diffuse light fraction. Figure 10 shows our results demonstrating the mass (in)dependence of the diffuse light fraction in three radii. There is no outstanding difference in the diffuse light fractions between cluster richness subsets within 300 kpc, which is in agreement with Zibetti et al. (2005). However, since our results are derived in physical radius, the diffuse light fractions will likely change with cluster mass when derived in terms of the normalized clus- Figure 10. 
Upper panels: Diffuse light fraction in the cluster total light, as a function of radius (left panel) and the cluster total mass measured. Lower panels: Cumulative diffuse light fraction in the cluster total light, as a function of radius (left panel) and cluster total mass (right panel). The mass dependence increases mildly with radius, whereas the diffuse light ratio at 50 kpc presents no trend with increasing of mass and a mild trend at 300 kpc; while integrated flux ratio presents no trend at 50 kpc and 300 kpc and a mild trend at 1000 kpc. ter radius, such as with respect to R 200m . In addition, at large radius, we notice a low significance increase of the diffuse light fraction with mass, although higher signal-to-noise measurements will be needed to confirm this trend. COMPARISON TO WEAK LENSING Recent studies have presented significant evidence of a connection between diffuse light and the cluster dark matter (or total mass) distribution -diffuse light profiles have similar radial slopes with the total cluster dark matter density distribution (e.g. Pillepich et al. 2014;Pillepich et al. 2018;Montes & Trujillo 2018); The diffuse light surface brightness contours are highly similar to the cluster mass density contours (e.g. Montes & Trujillo 2019; Alonso Asensio et al. 2020). In Z19 and this paper (Figure 7), we also note the diffuse light surface brightness to be self-similar, appearing to have a universal radial profile after scaling by cluster R 200m radius. These analyses raise an interesting question -does diffuse light trace the cluster dark matter, and thus trace the cluster total matter distribution? In this section, we explicitly explore this question by comparing the diffuse light ra-dial dependency with that of the cluster total matter measured through weak lensing. Weak-lensing measurements The cluster total matter radial distribution are derived through the tangential shear measurements from weak lensing around the clusters of interest. The azimuthally averaged tangential shear is related to the 2-dimensional surface density as: whereΣ(< R) is the average cluster surface mass density inside the radius of R,Σ(R) is the surface mass density at the radius of R, and Σ crit is given as, where z and D denote the redshift and the distance to the object, respectively, and the subscripts s and l, the source and the lens. We use ∆Σ(R) for the following analyses, as described ahead. Figure 11. Upper Panel shows the cluster mass profiles in four richness bins measured through WL tangential shear estimations. Lower panel shows the validation cross-component shear measurements around the same clusters, which is consistent with a null signal and indicates the shear measurements to be relatively bias-free. The cluster mass profiles measured through WL tangential shear estimations is compared to the diffuse light profiles in Sec. 4. The shape catalog (Zuntz et al. 2018) used for the weak lensing measurements in this paper is produced by Metacalibration (Sheldon & Huff 2017). In contrast to other shear estimation algorithms, Metacalibration adopts galaxy images themselves to relate the measured ellipticity of galaxies to the true shear through the 2×2 response matrix, R. The response matrix is calculated by deconvolving the point spread function (PSF) from the image, injecting a small artificial shear and re-convolving the image with the representation of the PSF. 
The resultant representation of the mean true shear, γ , can be written as, In practice, we define the average response asR = (R 11 + R 22 )/2. We have checked that given the noise level in our data, using this approximation does not affect our measurement significantly. In addition, there is a second component that contributes to the response matrix, which is due to the selection of the galaxies, R sel . Since the selection response is only meaningful as ensembles of galaxies, we make use of the mean value R sel . For details of Metacalibration, we refer the readers to Sheldon & Huff (2017). In McClintock et al. (2019), it is shown that the optimal estimator for ∆Σ(R), including the response is, for the k-th radial bin, where B(R k ) is the correction factor for contamination from the cluster members and foreground galaxies (boost factor), which we describe in the next paragraph. The summation goes over all the lens (j) -source (i) pairs, and where z MC s i is a random Monte Carlo sample from the full photo-z probability distribution for the i-th source and for which the photometric redshifts of galaxies are estimated with Directional Neighbourhood Fitting (DNF) algorithm (De Vicente et al. 2016). Even with a redshift cushion of 0.1 between the lens and the source, because of photometric redshift uncertainties and contamination from the cluster members, some of the source galaxies we use are in front of the lens clusters. These galaxies do not retain any gravitational shear due to the lens, therefore dilute the weak lensing signal. We correct for this effect following the procedure in Sheldon et al. (2004), where i and j represent the lens-source pairs, and k,l the random-source pairs. In Figure 11, we show the WL measured surface mass density profiles of the four cluster richness subsets used in this paper. These measurements will be used for direct comparison to the diffuse light radial profiles. The validation cross-component shear measurements around the same clusters, are also shown in Figure 11, which is consistent with a null signal and indicates the shear measurements to be relatively bias-free. Multiplicative bias due to shear calibration or redshift calibration bias may still be present, but will not affect the conclusions of the paper as we do not compare the absolute amplitudes of the lensing and diffuse light luminosity measurements. Conversion into annular surface differential density We aim to directly compare the stacked diffuse light radial profiles to the stacked cluster mass profiles measured through weak lensing. However before we start, we need to carefully evaluate what observational quantities to use for such comparison. For the diffuse light profiles, we directly measure their surface brightness as a function of radius, which informs us about the diffuse light surface stellar mass density on the plane of the sky. Therefore, we would like to compare this quantity to the surface mass density of galaxy clusters, Σ(r). However, in DES weak lensing measurements (Sec. 4.1), the direct observable is the cluster tangential shear profile, which probes the cluster's differential surface mass density, ∆Σ(r), and is related to the surface mass density Σ(r) as, Figure 11 shows the cluster ∆Σ(r) profiles derived from weak lensing in each of the richness sub-samples. Although it is possible to derive Σ(r) from the weak Figure 12. Comparison between the Υ DL and Υ WL profiles (upper panels) and the Υ total and Υ WL profiles (lower panels). 
The red solid lines represent the Υ DL and Υ total profiles while the blue dashed lines represent the Υ WL profiles. The uncertainties of the profiles are derived with the jackknife sampling method. Resemblance between diffuse light ADSB, cluster total light ADSB and cluster total mass ADSD profiles is seen in this plot, and diffuse light seems to trace the cluster total matter distribution beyond 300 kpc closer than cluster total light. lensing-measured ∆Σ(r) as is done in Z19, these derivations rely on model assumptions of the cluster mass distribution. To avoid our diffuse light-weak lensing comparison being affected by model choices, we decide to instead, convert the diffuse light and cluster total light surface brightness into a differential surface brightness as Note though that cluster differential surface mass density ∆Σ(R) is inevitably affected by the Σ(R) values at small radius, where the diffuse light profiles have been shown to have significantly different radial slopes than the cluster total mass distribution in Z19. To eliminate the small radial contributions, we further convert ∆Σ(R) into the Annular Differential Surface Density (ADSD: Mandelbaum et al. Figure 13. Upper panel: Ratios between Υ WL and Υ DL as a function of cluster radius. Lower panel: Ratios between the Υ WL and Υ total as a function of cluster radius. Note that the y-axes of the two panels use a combination of linear and log scales, linear within -1 to 1, transitioning into log scales outside of 1 or -1 to show large deviations. 2013), Υ, as In the above equation, R 0 is a chosen radius within which the cluster's surface mass density will not affect the measurements of Υ(R; R 0 ). In this paper we use a R 0 value of 200 kpc. Similarly, we convert the diffuse light and cluster total light differential surface brightness into Annular Differential Surface Brightness (ADSB) as Comparison Result In Figure 12, we show the comparisons between the WLderived cluster mass ADSD and the ADSB of diffuse light, as well as the comparison between the cluster total mass ADSD and the ADSB of cluster total light. Note that the values of the WL-derived cluster ADSD profiles are scaled by the average weak lensing and total-light ratio, ADSD/ADSB, between 550 and 1050 kpc, and the values of the diffuse light ADSB profiles are also scaled by the average diffuse/total-light ratio between 550 and 1050 kpc, so the cluster ADSD(B) profiles are in similar numerical ranges. Overall, we find that the ADSB profiles of diffuse light and the ADSD profiles of cluster total mass have similar radial dependence especially outside 200 kpc, consistent within their 1 σ measurement uncertainty range. However, the ADSB or ADSD profiles, within 200 kpc, start to show some deviations, but the deviation is not significant, and the profiles are still consistent within 1σ. Interestingly, the ADSB or ADSD profiles of cluster total light and cluster mass are also consistent within their 1σ uncertainty ranges, although the ADSB profiles of cluster total light are measured to be much noisier than the ADSB profiles of cluster diffuse light. We further derive the ratios between Υ WL profiles and Υ DL as well as between Υ WL and Υ total , as shown in Figure 13. Again, we note that the ADSB(D) profiles of diffuse light and cluster total mass have consistent radial dependence outside 200 kpc, but show deviations at a low S/N within 200 kpc. 
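For reference, the ADSD/ADSB statistic used in these comparisons, in its standard form Υ(R; R0) = ΔΣ(R) − (R0/R)² ΔΣ(R0) (Mandelbaum et al. 2013), is simple to evaluate on a measured profile. The sketch below uses linear interpolation to evaluate the profile at R0 = 200 kpc, which is one possible implementation choice rather than the exact procedure used here.

```python
import numpy as np

def adsd(r, delta_sigma, r0=200.0):
    """Annular Differential Surface Density (or Brightness):
    Upsilon(R; R0) = DeltaSigma(R) - (R0/R)^2 * DeltaSigma(R0).

    `r` and `r0` must be in the same units (here kpc); DeltaSigma(R0) is
    obtained by linear interpolation of the measured profile at R0."""
    dsig_r0 = np.interp(r0, r, delta_sigma)
    return delta_sigma - (r0 / r) ** 2 * dsig_r0
```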
The ADSB(D) profiles of cluster total light and cluster total mass also appear to have consistent radial dependence, but the comparisons are much noisier. Given these comparisons, we conclude that we see evidence of consistency between diffuse light and cluster total mass radial distributions from weak lensing measurements especially outside 200 kpc of the cluster center. However, given the large uncertainties associated with the ADSD(B) observables, further high S/N measurements of both the cluster weak lensing signals and diffuse light surface brightness will be necessary to distinguish any subtle differences. We will return to this topic of radial resemblance between cluster diffuse light and total mass distribution in Sec. 5.1. DIFFUSE LIGHT PROPERTIES IN SIMULATION In the previous section, we notice similarities between the diffuse light radial profiles and the cluster total mass radial profiles, but can not draw a conclusive statement about their consistency. Diffuse light simulations offer more insight into this aspect. In this section, we turn to the Illustris The Next Generation (IllustrisTNG) hydrodynamic simulation to investigate the similarity between the distributions of the diffuse light and the cluster total mass Nelson et al. 2015). The IllustrisTNG simulation is a powerful, high-resolution hydrodynamic cosmological simulation, which considers gravitational and hydrodynamic effects as well as sophisticated models for processes of radiation, diffuse gas, and magnetic field. We use the IllustrisTNG 300-1 simulation and in particular, the snapshot at redshift 0.27, which matches the median redshift of our redMaPPer cluster samples. We select halos with masses M 200m above 10 14 M /h, which roughly matches the mass range of the redMaPPer clusters analyzed in this paper, and eliminate halos that are within 20 Mpc/h of the snapshot boundaries to avoid boundary effects. These selection criteria yield 110 halos suitable for our analysis. We Figure 14. The radial density profiles in 3D (upper panel) and 2D projected distances (middle panel) of various cluster mass components in the IllustrisTNG 300-1 simulation -dark matter, gas, diffuse light and subhalo stellar mass -as well as the total halo mass densities. Lower panel: The radial derivative of the radial densities in 2D projected space. Throughout the plots, we notice that the halo diffuse light appears to be the most concentrated, while the halo gaseous content appears to have the lowest radial concentration, which is consistent with diffuse light being produced from galaxy stripping/disruption towards the halo center, while gaseous particles experience frequent interactions that flatten out their radial distribution. The most faithful radial tracer of the halo dark matter distribution appears to be the subhalo stellar mass. then derive the densities of the simulation stellar particles, dark matter particles, and gaseous particles as a function of 3D halo radius, and also 2D projected radius on the simulation x/y plane. Does diffuse light have the same radial dependence with cluster total mass? To derive the radial density profiles of the diffuse light in the simulation, we first compute the radial density profiles of the stellar particles contained in subhalos. The stellar mass of subhalos within twice the stellar mass half-radius of the subhalos are used to derive these profiles, although we limit the calculation to the subhalos of stellar mass above 10 9 M (contained within the radius of V max ). 
The subhalo radius and the subhalo mass thresholds are selected to roughly match the galaxy masking radius and depth limit of our measurements. This subhalo stellar profile is then subtracted from the radial density profiles of all the stellar particles around Figure 15. Upper Panel: the Υ density profiles of various cluster mass components -dark matter, gas, diffuse light and subhalo stellar mass -as well as the total halo mass densities, derived from the 2D projected radial density profiles in Figure 14. Lower Panel: the relative ratios between the Υ profiles of halo total mass and the various halo mass components -dark matter, gas, diffuse light and subhalo stellar mass. The Υ profiles of the subhalo stellar mass appears to have the same radial trend with the total halo mass profile, while the Υ profile of the halo diffuse light appears to drop more rapidly with increasing radius than the total halo mass Υ profile. The least radially concentrated halo mass component, i.e., the halo gaseous mass, has a Υ profile that drops least rapidly with increasing radius. the halos, and the subtracted result is considered the diffuse stellar radial distribution. These subtractions are done in both 3D and 2D to derive the 3D and projected 2D radial distribution of the diffuse light. The upper and middle panels of Figure 14 show those radial dark matter, gaseous and stellar profiles, averaged over all the selected halos to reduce noise. In either the 3D radial profiles or the projected 2D radial profiles, the total halo stellar content appears to have the most concentrated radial distribution, while the halo gaseous component appears to have the least concentrated radial distribution due to the high interaction rate between the gaseous particles. Neither the stellar particles or the gaseous particles appear to faithfully follow the radial dependence of dark matter (or halo total mass). However, after separating the total halo stellar content into the diffuse and the subhalo components, we notice that the subhalo stellar component is following the dark matter (or the halo total mass) radial distribution remarkably well, while the diffuse stellar component deviates further from the halo dark matter radial distribution, and becomes the most radially concentrated halo component. The lower panel of Figure 14 shows the 2D radial den-sity derivatives of the various halo components. As noted above, the most faithful radial tracer of the halo dark matter (or the halo total mass) distribution appears to be the subhalo stellar mass. The halo gaseous component has the most mild radial slope among all of the analyzed components, while the halo diffuse stellar component has the steepest radial slope and thus is the most radially concentrated. Since diffuse light is expected to originate from galaxy stripping/disruption, which can only happen at small halo radii after the sub-halos's outer dark matter component is completely destroyed, we consider these simulation findings unsurprising. We further convert the simulation projected 2D radial densities into a Υ radial profile, so as to be more directly comparable to the measured cluster matter/diffuse light density profiles in Section 4.2. The conversion made it less obvious to directly spot the radial concentration of the various halo components as shown in the upper panel of Figure 15, thus we plot the ratios between the various component's Υ profile to the Υ profile of the total halo mass. 
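A minimal sketch of the diffuse-profile estimate described above (the density of all stellar particles minus the density of stars assigned to galaxies) is given below. The particle arrays and the in_galaxy flag are hypothetical stand-ins for the actual IllustrisTNG group-catalogue bookkeeping.

```python
import numpy as np

def radial_density(radii, masses, edges):
    """Mass density in spherical shells: sum of mass divided by shell volume."""
    mass_in_shell, _ = np.histogram(radii, bins=edges, weights=masses)
    vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    return mass_in_shell / vol

def diffuse_stellar_profile(star_r, star_m, in_galaxy, edges):
    """Diffuse stellar density = all stars minus stars assigned to galaxies.

    `in_galaxy` is a boolean flag marking stellar particles that belong to a
    subhalo (e.g. lying within twice its stellar half-mass radius, for
    subhaloes above the adopted stellar-mass threshold)."""
    total = radial_density(star_r, star_m, edges)
    galaxies = radial_density(star_r[in_galaxy], star_m[in_galaxy], edges)
    return total - galaxies
```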
The Υ profile of halo diffuse light drops most quickly with halo radius, while the least concentrated halo gaseous component has a Υ profile that drops the least quickly with halo radius. The Υ profile of the subhalo stellar mass appears to have consistent radial trend with the total halo mass. We conclude that the simulation results do not support that diffuse light is a faithful radial tracer of the cluster total mass, although cluster satellite galaxy stellar content is. We note that in this Υ profile comparison, the radial dependencies of the various halo components appear to be only distinguishable when the measurements are made with high S/N. Given that the Υ ratios between diffuse/total stellar and cluster mass change, at most, by a factor of ∼ 3 from 200 to 1000 kpc, our observational-based measurements in Sec. 4.2 likely does not have enough signal-to-noise to distinguish dissimilarity of radial trends between diffuse light, cluster total light, and total mass -in the future, higher S/N measurements of cluster weak lensing signals as well as light distributions are necessary to confirm our findings in observations. Is diffuse light a good indicator of cluster mass? In Sec. 5.1, we find that simulation diffuse light is not a faithful radial tracer of the cluster matter distribution, but our analysis as well as previous studies have clearly noted a strong correlation between diffuse light and the cluster's total mass (e.g. Pillepich et al. 2014;Pillepich et al. 2018;Montes & Trujillo 2018), such as the similar shape in the radial density contour lines between diffuse light and cluster mass (e.g. Montes & Trujillo 2019;Alonso Asensio et al. 2020) and the self-similarity of the diffuse light radial profiles (Z19). It is possible that although diffuse light does not faithfully trace the cluster matter distribution, it simply follows a different radial distribution that still has a strong dependence on cluster total mass. Thus, even if the diffuse light profiles can not be used to directly map out the dark matter distribution inside clusters, its total luminosity can still serve as a strong cluster mass indicator. In this subsection, we examine the correlations between halo mass and the various halo baryonic mass components, including the diffuse light, the sub-halo stellar mass and the gaseous mass. For each halo in the simulation that are at least 20 Mpc/h away from the simulation boundaries (to avoid the results being affected by the boundary of the simulation), we derive their diffuse stellar masses, subhalo stellar masses, total stellar masses (diffuse + subhalo) and gaseous masses, integrated over 3D radial ranges. The relations between the masses of those components and the cluster's total mass is shown in Figure 16. In the radial ranges above 50 kpc, all the cluster baryonic mass components show clear correlations with cluster mass. From 15 kpc to 50 kpc, the cluster subhalo stellar mass do not show significant correlation with cluster mass; the diffuse and the gaseous mass still show correlations, but the mass dependence is milder than the other radial ranges as measured by the slope of the component-mass/halo-mass relations. A particularly interesting quantity is the scatters of these component-masses at fixed halo mass. In the lower panel of Figure 16, we show the mean scatter of the component-masses around their mean values in a fixed halo mass range. As well known (e.g., Voit 2005;Kravtsov et al. 
2006) in previous studies, the halo gaseous mass is an excellent low-scatter indicator of halo mass, showing the lowest scatter in our examination -around 0.1 dex throughout the 15 kpc to the 500 kpc radial range. The diffuse stellar mass, appears to be the next best low-scatter mass indicator with a scatter around 0.2 to 0.25 dex in the radial range of 15 kpc to 300 kpc. However, the scatter of the diffuse stellar mass does increase with radius caused by the rapid decrease of diffuse stellar density with radius. The halo total stellar mass has consistent scatter with the halo diffuse mass within 300 kpc, but this is likely due to the domination of halo diffuse light in the total halo stellar content within this radius range. The subhalo stellar mass has the highest scatter among all of the probed components, around 0.2 to 0.5 dex depending on the radial or halo-mass range. Outside of 300 kpc, subhalo stellar mass starts to have similar scatter with the diffuse mass, and meanwhile becomes a bigger contributor to the halo total stellar mass over the diffuse stellar mass. Comparing those stellar mass components, we highly recommend using halo diffuse stellar mass, or halo total stellar mass within 500 kpc as a robust, low-scatter halo mass indicator. The halo total stellar mass estimation must include halo diffuse light to minimize the scatter within 300 kpc, which has not been studied in previous analyses (Palmese et al. 2020;Anbajagane et al. 2020). This simulation analysis conclusion is also in agreement with our observational result in Sec. 3.1, in which we find strong correlation between diffuse light luminosity and cluster total mass, which is even more evident than the correlation between cluster total light and mass. Note though, these simulation conclusions are derived with halo components mass enclosed within 3D radii, while observations are almost always measured in 2D projected radii, and thus affected by foreground and background structures. We find that using halo stellar mass enclosed within 2D projected radii increases its scatter, but it may be possible to reduce such a scatter in real observations with imaging colour information, which is not reliably available in the Il-lustrisTNG simulation (we notice that the satellite galaxies in massive halos in the simulation do not display obvious red sequence features and thus cannot decide on a reasonable colour cut). Given the vital importance of developing lowscatter halo-mass indicators in cluster cosmological studies, it would be interesting to carry out observational studies of cluster total stellar mass or cluster diffuse stellar mass, especially using multi-wavelength data that can observationally evaluate the scatter of cluster mass indicators (e.g., Farahi et al. 2019;Palmese et al. 2020). Additional diffuse light properties As a qualitative comparison to the observational results presented in Sec. 3, we derive additional diffuse stellar mass properties in the IllustrisTNG simulation and show those in Figure 17. We demonstrate the 2D-projected radial profiles of the diffuse stellar mass and the halo total stellar mass, in two halo mass ranges (limited by the small size of the Illus-trisTNG cluster sample), and then derive the ratios between the two as the diffuse stellar fraction in the simulation. These two results are qualitatively comparable to the observational results shown in Figure 4, 8 and 10. 
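The scatter values discussed above can be estimated, for instance, by binning haloes in log mass and measuring the dispersion of the logarithmic proxy mass within each bin. The following is a schematic version of such a calculation, not the exact estimator used for Figure 16.

```python
import numpy as np

def scatter_at_fixed_mass(halo_mass, proxy_mass, n_bins=5):
    """Scatter (in dex) of a baryonic mass proxy at fixed halo mass.

    Bins haloes in log10(M200m) and, in each bin, returns the standard
    deviation of log10(proxy) around the bin mean."""
    logm = np.log10(halo_mass)
    logp = np.log10(proxy_mass)
    edges = np.linspace(logm.min(), logm.max(), n_bins + 1)
    scatter = np.full(n_bins, np.nan)
    for i in range(n_bins):
        sel = (logm >= edges[i]) & (logm < edges[i + 1])
        if sel.sum() > 2:
            scatter[i] = np.std(logp[sel] - logp[sel].mean())
    return 0.5 * (edges[:-1] + edges[1:]), scatter
```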
We find that diffuse stellar/light is more abundant in more massive clusters, and the diffuse stellar/light fractions do not appear to change with cluster mass. However, the diffuse stellar fractions appear to be significantly higher in the simulation than in observations, as high as ∼40% at 1 Mpc of the cluster center, while the observational measurements are around ∼ 30%. It is possible that diffuse light has been over-produced in the simulation. We also averaged the diffuse stellar mass profiles after scaling by cluster radius (R 200m ). In good agreement with our observational finding (Sec. 3.2), the diffuse stellar mass profiles also display self-similarity, that their radial profiles appear to be uniform after scaling by cluster radius R 200m . CONCLUSIONS In this paper, we present for the first time a direct comparison of the radial dependence of the diffuse light surface brightness and the weak-lensing measured cluster matter distribution, for a statistically large cluster sample with high S/N diffuse light measurements to a cluster radial range of 1 Mpc. We also present both observational and simulation evidence for a strong correlation of diffuse light luminosity with cluster mass. Specifically, the findings can be summarized as the following. • Strong correlation between diffuse light brightness and cluster mass at large radius: We observe that more massive Figure 17. Diffuse stellar mass properties in the simulation: the diffuse stellar mass and cluster total stellar mass radial profiles (upper panel), the diffuse stellar fraction (middle panel) and the cluster R 200m -scaled diffuse stellar mass and cluster total stellar mass radial profiles (bottom panel). These diffuse stellar properties as well as the cluster total stellar properties are in qualitative agreement with the observational measurements in this paper. clusters have more diffuse light in the regions outside 20 kpc of the cluster center, and the mass dependence becomes steeper with increasing cluster radius. The total stellar luminosities contained within 15 kpc of the cluster centers are almost indistinguishable between clusters of different richnesses/masses, but the total stellar luminosities contained around ∼ 300 kpc of the cluster radius show significant correlation with cluster total mass. • Self-similarity of the diffuse light radial profiles: the diffuse light surface brightness radial profiles appear to have a universal distribution at intermediate and large radius after scaling by cluster R 200m . • Mass (in)dependence of the diffuse light fraction: we derive the diffuse light fraction in total cluster stellar luminosity as a function of cluster radius and mass. The cumulative diffuse light fraction drops with enlarging cluster radius, reaching ∼ 24% at ∼ 700 kpc. Interestingly, we do not find diffuse light fraction to be dependent on cluster mass within 1 Mpc of the cluster radius, possibly because the cluster growth is well correlated with diffuse light accretion within this radial range. • Comparison to weak lensing matter distribution: we directly compare the radial density distribution of diffuse light to that of the cluster total matter (including dark matter) measured through weak lensing. We find that the diffuse light radial distributions indeed show some level of resemblance with the cluster matter distributions. In addition, the radial distribution of cluster total stellar mass also appears to have a similar, but noisier similarity with cluster matter. 
• Diffuse light properties in the IllustrisTNG simulation. In the IllustrisTNG simulation, the diffuse light radial distribution is more concentrated towards the center than the cluster mass (including dark matter mass) distribution, while the radial profile of the cluster subhalo stellar mass appears to well match that of the cluster mass. We do find that the total stellar mass of diffuse light at large radii scales remarkably well with the cluster mass with a low scatter, comparable to the scaling relation of cluster gaseous mass within 150 kpc, and outperforms the cluster subhalo stellar mass throughout the 0 to 500 kpc radial range. This result is consistent with our observation that diffuse light has an excellent scaling relation with cluster mass. Given our results, is diffuse intra-cluster light a good tracer of the galaxy cluster matter distribution (including dark matter)? Maybe. Observationally, we find that the diffuse light radial profile shows some resemblance with that of cluster matter measured through weak lensing, but simulation analysis suggests that they are not tracing each other faithfully. However, the diffuse light luminosity at large radius scales extraordinarily well with cluster total mass with a power-law like relation in both observation and simulation. We hence recommend developing the diffuse light observable as a potential low scatter mass indicator for cluster astrophysics and cosmology studies. Such mass proxies can be particularly useful for low-mass clusters where multiwavelength data is scarce and accurate cluster mass estimation is challenging (e.g., see discussion in in DES Collaboration et al. 2020), but existing wide-field optical survey programs like DES offer deep enough data to acquire accurate measurements of diffuse light. Moving forward, these interesting findings can enjoy a better understanding with higher S/N measurements. The next generation of wide-field survey programs such as the Legacy Survey of Space and Time 3 (LSST) based at the Vera Rubin Observatory and the Euclid Wide Survey 4 provide great opportunities to further investigate the properties of cluster diffuse light. Moreover, we have not explored the effect of cluster relaxation process on diffuse light production, or studied the correlation between cluster morphological parameters (smoothness, cuspiness, asymmetry, and concentration) and diffuse light. Meanwhile, simulation studies still need to explain the origin of diffuse light and present evidence that matches diffuse light properties in observations. We advocate that continuing to study diffuse light with both observations and simulations will have much to contribute to understanding galaxy and galaxy cluster evolution. 68048, SEV-2016-0588, SEV-2016-0597, and MDM-2015-0509, some of which include ERDF funds from the European Union. IFAE is partially funded by the CERCA program of the Generalitat de Catalunya. Research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Program (FP7/2007(FP7/ -2013
2020-05-27T01:01:07.431Z
2020-05-25T00:00:00.000
{ "year": 2020, "sha1": "04a22aff5aef90a7a6e3b069658a9be248a6a52e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2005.12275", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "04a22aff5aef90a7a6e3b069658a9be248a6a52e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
255125305
pes2o/s2orc
v3-fos-license
HD 213258: a new rapidly oscillating, super-slowly rotating, strongly magnetic Ap star in a spectroscopic binary

We report on HD 213258, an Ap star that we recently identified as presenting a unique combination of rare, remarkable properties. Our study of this star is based on ESPaDOnS Stokes I and V data obtained at 7 epochs spanning a time interval slightly shorter than 2 years, on TESS data, and on radial velocity measurements from the CORAVEL database. We confirm that HD 213258 is definitely an Ap star. We found that, in its spectrum, the Fe II λ6149.2 Å line is resolved into its two magnetically split components. The mean magnetic field modulus of HD 213258, ~3.8 kG, does not show significant variations over ~2 years. Comparing our mean longitudinal field determinations with a couple of measurements from the literature, we show that the stellar rotation period must likely be of the order of 50 years, with a reversal of the field polarity. Moreover, HD 213258 is a rapidly oscillating Ap (roAp) star, in which high-overtone pulsations with a period of 7.58 min are detected. Finally, we confirm that HD 213258 has a mean radial velocity exceeding (in absolute value) that of at least 99% of the Ap stars. The radial velocity shows low-amplitude variations, which suggests that the star is a single-lined spectroscopic binary. It is also a known astrometric binary. While its orbital elements remain to be determined, its orbital period is likely one of the shortest known for a binary roAp star. Its secondary is close to the borderline between stellar and substellar objects. There is a significant probability that it may be a brown dwarf. While most of the above-mentioned properties, taken in isolation, are observed in a small fraction of the whole population of Ap stars, the probability that a single star possesses all of them is extremely low. This makes HD 213258 an exceptionally interesting object that deserves to be studied in detail in the future.

Introduction

The peculiarity of HD 213258 (also known as BD+35 4815) was first reported by Bidelman (1985). He assigned it to a new group of upper main-sequence chemically peculiar (CP) stars that he had recently identified (Bidelman 1981): the F str λ 4077 stars. Originally, this classification referred to stars whose spectra resemble those of Am stars but show an abnormally strong Sr ii λ 4077 Å line. However, as noted by North (1987), a specific definition of the criteria that allow one to distinguish F str λ 4077 stars from Am, Ap Sr, or λ Boo stars was missing at the time 1. This left some ambiguity in the classification. As a matter of fact, in their Table 3, North & Duquennoy (1991) flagged HD 213258 as a possible Ap star. Nevertheless, it was not included in the Catalogue of Ap, HgMn, and Am stars (Renson & Manfroid 2009). In this short note, we report that HD 213258 has a quasi-unique combination of rare, remarkable properties with respect to its magnetic field (Sect. 2), its rotation (Sect. 3), its space velocity and its binarity (Sect. 4), and its pulsation (Sect. 5). As a conclusion, in Sect. 6 we show why this combination of properties makes HD 213258 an object of exceptional interest that deserves to be studied in detail in the future.

1 Later, North & Duquennoy (1991) found that at least half of F str λ 4077 stars are main-sequence Ba stars, which owe their chemical peculiarity to binary evolution.
Magnetic field

The ESPaDOnS (Echelle SpectroPolarimetric Device for the Observation of Stars) spectrograph at the Canada-France-Hawaii Telescope (CFHT) was used to record Stokes I and V spectra of HD 213258 at seven epochs between November 2020 and October 2022. They cover the spectral range 3700−10 000 Å, at a resolving power R ∼ 65 000. They were reduced by the CFHT team using the dedicated software package Libre-ESPrIT (Donati et al. 1997). The resulting S/N in Stokes I reaches its maximum of about 400 or more in echelle order #32 (7080 Å). A portion of one of these spectra is shown for illustration in Fig. 1. The sharpness of the spectral lines is striking. One can see that the Fe ii λ 6149.2 Å line is resolved into its magnetically split components. This is indicative of a very low projected equatorial velocity v sin i and of the presence of a strong magnetic field. The quantitative determination of the latter is discussed below. One can also note in the spectrum the presence of strong lines that are characteristic of typical Ap stars. This resolves the ambiguity that affected the original classification of HD 213258 and indicates that it is definitely an Ap star. On the other hand, all the lines present in the observed spectrum show clear Stokes V signatures. They reveal the presence of a sizeable component of the magnetic field along the line of sight, which does not average out over the stellar hemisphere that was visible at the epoch of the observation.

The mean magnetic field modulus, B, defined as the line-intensity-weighted average over the visible stellar hemisphere of the modulus of the magnetic vector, was determined from the wavelength separation of the resolved components of the Fe ii λ 6149.2 Å line. The magnetic splitting pattern of this line is a doublet. The value of B is derived via application of the following formula:

B = (λ_r − λ_b) / (g ∆λ_Z).   (1)

In this equation, λ_r and λ_b are, respectively, the wavelengths of the red and blue split line components; g is the Landé factor of the split level of the transition (g = 2.70; Sugar & Corliss 1985); ∆λ_Z = k λ_0^2, with k = 4.67 × 10^−13 Å^−1 G^−1; λ_0 = 6149.258 Å is the nominal wavelength of the considered transition. The procedure used to measure the wavelengths λ_b and λ_r of the Fe ii λ 6149.2 Å split line components has been described in detail by Mathys & Lanz (1992) and by Mathys et al. (1997). As in many other Ap stars, the Fe ii λ 6149.2 Å line in HD 213258 is blended on the blue side with an unidentified rare-earth line. We disentangled its contribution from that of the two Fe ii λ 6149.2 Å line components by fitting three Gaussians to them. As shown by Mathys et al. (1997), this represents a very effective way to achieve consistent determinations of the wavelengths λ_b and λ_r, and hence of the value of B. The difficulty in estimating the uncertainty affecting the derived values of the mean magnetic field modulus was discussed in detail by Mathys et al. (1997). In the present case, since the measurements obtained thus far have only sampled a fraction of the stellar rotation cycle (see Sect. 3), we followed the prescription of these authors and estimated the uncertainty of the B determinations in HD 213258 to be of the order of 40 G from a comparison of the profile of the Fe ii λ 6149.2 Å line in HD 213258 with its profile in other stars for which this uncertainty is better constrained.
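For illustration, formula (1) translates directly into code. The component wavelengths in the example call are invented numbers, chosen only to show that a separation of ∼0.18 Å corresponds to a field of ∼3.8 kG.

```python
K = 4.67e-13          # Zeeman constant, in units of 1/(Angstrom * Gauss)
LAMBDA0 = 6149.258    # Angstrom, nominal wavelength of Fe II 6149.2
G_LANDE = 2.70        # Lande factor of the split level

def mean_field_modulus(lambda_blue, lambda_red,
                       g=G_LANDE, lambda0=LAMBDA0, k=K):
    """Mean magnetic field modulus (in gauss) from the wavelengths (in Angstrom)
    of the blue and red magnetically split components of Fe II 6149.2."""
    delta_lambda_z = k * lambda0 ** 2          # splitting per gauss
    return (lambda_red - lambda_blue) / (g * delta_lambda_z)

# a separation of ~0.18 Angstrom gives ~3.8 kG (illustrative values only)
print(mean_field_modulus(6149.17, 6149.35))
```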
The mean longitudinal magnetic field, B z, is the line-intensity-weighted average over the visible stellar hemisphere of the component of the magnetic vector along the line of sight. It is determined from the wavelength shift of the spectral lines between the two circular polarisations via application of the following formula:

λ_R − λ_L = 2 ḡ ∆λ_Z B_z (with ∆λ_Z = k λ^2, where λ is the wavelength of the diagnostic line and k is the constant defined above),

where λ_R (respectively λ_L) is the wavelength of the centre of gravity of the line in right (respectively left) circular polarisation and ḡ is the effective Landé factor of the transition. The value of B z is determined through a least-squares fit of the measured values of λ_R − λ_L by a function of the form given above. The standard error, σ z, that was derived from that least-squares analysis was used as an estimate of the uncertainty of the obtained value of B z.

The results of the magnetic measurements of HD 213258 are presented in Table 1.

Table 1. Mean magnetic field modulus, mean longitudinal magnetic field, and heliocentric radial velocity measurements.

Column 1 gives the heliocentric Julian Date of the observation and Column 2 the mean magnetic field modulus B. The next three columns (Cols. 3 to 5) present the results of the determination of the mean longitudinal magnetic field: its value, B z, its uncertainty, σ z, and the number, N, of diagnostic lines that were measured. They are lines of Fe i, which span the wavelength range 4175−6180 Å. The same lines were also used to derive the radial velocity of HD 213258, from a least-squares fit of the wavelength shifts of the centres of gravity of the Stokes I line profiles with respect to their laboratory values against these laboratory values, using the same diagnostic lines as for the B z determinations. The derived values of the heliocentric radial velocity, v r, and their uncertainties, σ v (that is, the standard error of the least-squares analysis), are given in Cols. 6 and 7 of Table 1.

Rotation

The magnetic measurements of Sect. 2 are plotted against time in Fig. 2. In the upper panel, the mean longitudinal magnetic field shows an apparently linear trend from the first epoch of observation to the most recent one, from more negative to less negative values. All B z values are contained within a narrow range, between −1000 G and −885 G. However, two earlier mean longitudinal magnetic field determinations from the literature, performed by North et al. (1992), yielded values close to 0 G, well outside this range.

The magnetic measurements obtained so far have sampled the rotation cycle too sparsely to constrain the shapes of the variation curves of either B or B z. Even trying to characterise these shapes under the frequently made assumption of a magnetic field structure that is to first order dipolar would represent an over-interpretation of the available data. Therefore, the discussion below is based on a linear approximation, which is sufficient for setting meaningful constraints on the lower limit of the rotation period. Indeed, from one extremum to the next, and away from both extrema, a straight line does not drasti-
Extrapolating the linear trend of variation in the mean longitudinal magnetic field observed over the past two years (690 days), it would take HD 213258 of the order of 5060 days from the most recent ESPaDOnS observation to get back to a value of B z ∼ 0 G, which is close to the value derived in 1989.This suggests a minimum value of ∼5750 days for half the rotation period.The full rotation period should be at least twice as long, that is, have a minimum value of about 11 500 days, or 31.5 yr.However, the time elapsed between the observations of North et al. (1992) and the first of the present ESPaDOnS observations is 11 432 days.These observations were definitely obtained at very different rotation phases, so they are inconsistent with values of the rotation period close to 11 500 days.The actual period must be much longer.Accordingly, the B z values from 1989 and 2020 cannot both be close to the extrema of the mean longitudinal magnetic field.Either the negative B z extremum must have an absolute value greater than 1 kG, or B z must become positive over part of the rotation cycle and reach a positive extremum, or both extrema must be outside the range of B z values that have been observed so far.For instance, if B z was close to its negative extremum in 2020, and if its variation curve is approximately symmetric about this extremum, the mean longitudinal magnetic field could plausibly have been positive between ∼1989 and ∼2002, and it could have gone through its positive extremum around 1995 or 1996.This suggests that the minimum value of the rotation period should be at least of the order of 50 yr.The period might even be a few years longer if over part of it B z reached values more negative than −1.0 kG.While consideration of the mean magnetic field modulus sets an upper limit to the maximum absolute value of the mean A72, page 3 of 6 The lower panel of Fig. 2 shows the measurements of the mean magnetic field modulus of HD 213258.This field moment does not show any significant variation over the 2-yr time interval spanned by the ESPaDOnS spectra.This is not unexpected, as 2 yr probably represents less than 0.04 rotation periods.The relative amplitude of the variations in B does not exceed 30% in most Ap stars (Mathys 2017), so such variations may not be detectable over too small a fraction of a rotation cycle. Radial velocity and binarity The spectral line measurements carried out to determine the mean longitudinal magnetic field were used to derive the heliocentric radial velocity of HD 213258 at the seven epochs of observation with ESPaDOnS.The average of the seven individual values, −87.95 km s −1 , is consistent with the average radial velocity computed by North & Duquennoy (1991) from six CORAVEL (CORrelation-RAdial-VELocities) measurements, −86.8 km s −1 .These six CORAVEL values, together with four later ones, are listed in Table 2.All the radial velocity values of Tables 1 and 2 are also plotted in Fig. 
Radial velocity and binarity

The spectral line measurements carried out to determine the mean longitudinal magnetic field were used to derive the heliocentric radial velocity of HD 213258 at the seven epochs of observation with ESPaDOnS. The average of the seven individual values, −87.95 km s⁻¹, is consistent with the average radial velocity computed by North & Duquennoy (1991) from six CORAVEL (CORrelation-RAdial-VELocities) measurements, −86.8 km s⁻¹. These six CORAVEL values, together with four later ones, are listed in Table 2. All the radial velocity values of Tables 1 and 2 are also plotted in Fig. 3. The CORAVEL and ESPaDOnS datasets appear mutually consistent, in line with the conclusion reached by Mathys (2017) for other stars observed with these two instruments. We also confirm the claim by North & Duquennoy (1991) that the radial velocity of HD 213258 is definitely variable. In particular, the difference between the measurement obtained on JD 2459361 and the six other contemporaneous radial velocity determinations, although small (of the order of 1 km s⁻¹), is highly significant. North & Duquennoy (1991) contemplated the possibility that the radial velocity variations that they detected reflect the changing aspect of the visible hemisphere of a spotted star over its rotation period. This interpretation can be ruled out given the abovementioned strong evidence that the rotation period of the star is of the order of decades. Thus, HD 213258 must almost certainly be part of a single-lined spectroscopic binary system. Indeed, no obvious evidence of a secondary was found in the spectra.

There are too few radial velocity measurements and the epochs of CORAVEL and ESPaDOnS observations are too far apart from each other to allow the orbital elements to be determined. Consideration of Fig. 3 suggests that the orbital period may be of the order of weeks, or possibly longer. Indeed, radial velocity values determined from observations obtained on two to three consecutive or nearly consecutive nights do not show any significant variations, but variations occur between such groups of observations spaced from each other by a few weeks. In any case, the orbital period of HD 213258 must be much shorter than its rotational period. This occurs frequently for ssrAp stars in binaries (Mathys 2017). More observations are needed to characterise the system better.

Furthermore, from analysis of the proper motion anomaly between the Hipparcos catalogue and the Early Third Data Release of the Gaia catalogue, Brandt (2021) and Kervella et al. (2022) also show that HD 213258 is an astrometric binary. Assuming a circular orbit observed face-on and using the mass estimate m_1 = 1.70 M_⊙ for the Ap primary, Kervella et al. (2022) derived estimates of the mass, m_2, of the secondary: m_2 = 129.33 M_J for an orbital radius r = 3 au (corresponding to an orbital period P_orb ∼ 4.0 yr), m_2 = 69.56 M_J for r = 5 au (P_orb ∼ 11.2 yr), and m_2 = 87.97 M_J for r = 10 au (P_orb ∼ 31.6 yr). This puts the secondary of HD 213258 close to the borderline between stellar and substellar objects. There is a significant probability that it is a brown dwarf.

The average value of the heliocentric radial velocity of HD 213258, of the order of −88 km s⁻¹, is exceptionally large (in absolute value) for an Ap star. For instance, in the systematic study of a sample of 186 Ap stars by Levato et al. (1996), the values of the heliocentric radial velocity range from −40 to 42 km s⁻¹. However, as noted by North & Duquennoy (1991), the space velocity of HD 213258 is essentially radial, so while large, it is not extreme. We are not aware of any modern study of the space velocity distribution of Ap stars, so we cannot at present put the high radial velocity of HD 213258 in perspective, nor understand its implications (if any) for the other stellar properties.
Pulsation

The star HD 213258 was observed twice, in Sectors 16 and 56, by the space telescope TESS (Transiting Exoplanet Survey Satellite) during the first 4 yr of its operation. Employing the utilities of the Lightkurve Python package designed for analysis of Kepler and TESS data (Lightkurve Collaboration 2018) and a Python code developed by Jonathan Labadie-Bartz (priv. comm.), the TESS images of this target observed in Sector 56 with a cadence of 158 s were downloaded from the Mikulski Archive for Space Telescopes (MAST). The light curve was extracted from the TESS full frame images in a manner similar to that of Labadie-Bartz et al. (2022). The cut-out images, with a size of 24 × 24 pixels, were used to infer the raw light curve of HD 213258 and to remove the sky background using the principal component analysis detrending method. The derived flux was transformed to stellar magnitudes and analysed for periodic variability using a Lomb-Scargle periodogram (VanderPlas 2018).

The Lomb-Scargle periodogram clearly shows a distinguishable triplet at frequencies around 190 d⁻¹ (see Fig. 4). The central mode has the highest amplitude and corresponds to the period P = 7.579(2) min. There are three frequencies with significant amplitudes that are split with an average frequency separation of δν = 2.38(5) d⁻¹. This kind of high-overtone pulsation is typically observed in rapidly oscillating Ap (roAp) stars (Kurtz 1982). The detection of roAp-type pulsations and a significant magnetic field in HD 213258 (see Fig. 2) are strong pieces of evidence that it is a CP roAp star with extremely slow rotation (see Sect. 3).
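A minimal Python sketch (not the authors' pipeline, which relied on a custom code with PCA-based background removal) of how such a Sector 56 cutout can be retrieved and searched for the ∼190 d⁻¹ signal with Lightkurve; the target-name resolution, cutout size, and simple threshold aperture are assumptions made for the example.

```python
import lightkurve as lk

# Download a 24 x 24 pixel TESScut cutout of the Sector 56 full frame images
search = lk.search_tesscut("HD 213258", sector=56)
tpf = search.download(cutout_size=24)

# Simple aperture photometry (the published analysis used PCA detrending instead)
aperture = tpf.create_threshold_mask(threshold=5)
lc = tpf.to_lightcurve(aperture_mask=aperture).remove_nans().normalize()

# Lomb-Scargle periodogram restricted to the frequency range of interest (1/d)
pg = lc.to_periodogram(method="lombscargle",
                       minimum_frequency=180, maximum_frequency=200,
                       oversample_factor=10)
f_peak = pg.frequency_at_max_power
print(f"peak frequency: {f_peak:.3f}  ->  period: {1440.0 / f_peak.value:.3f} min")
```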
Conclusion

The Ap star HD 213258 presents a quasi-unique combination of remarkable properties, each of which is observed in isolation in only a small fraction of the entire population of Ap stars. A few percent of these stars have rotation periods longer than 1 yr (Mathys 2017; Shultz et al. 2018). The longest period that has been accurately determined for an Ap star is that of HD 50169, which is 29 yr (Mathys et al. 2019). The only Ap star for which a lower limit of the period value greater than 50 yr has been set is γ Equ (Bychkov et al. 2016). If HD 213258 has a rotation period of the order of 50 yr, as we contend, it is one of the most promising candidates for detailed study of extremely slow rotation in Ap stars.

The lack of rotational broadening of the spectral lines of the ssrAp stars lends itself well to the resolution of magnetically split lines. But the number of stars that show such resolution remains small, of the order of a few percent of all Ap stars (Mathys 2017), as the threshold of detection of magnetically resolved lines in the visible is of the order of 2 kG and magnetic fields of several kilogauss are rare. Thus, the fact that the Fe ii λ 6149.2 Å line is resolved into its magnetic components in HD 213258 is a distinctive trait.

While the rate of occurrence of roAp stars among ssrAp stars, which may reach ∼20%, is considerably higher than the fraction of roAp stars among all Ap stars (Mathys et al. 2022), roAp stars seldom belong to binary systems (Hubrig et al. 2000; Schöller et al. 2012; Hey et al. 2019). Such binary systems are wide: the shortest orbital period that has been accurately determined for a pair containing an roAp star, HD 42659, is 93.2 d (Hartmann & Hatzes 2015). The radial velocity measurements obtained thus far for the roAp star HD 213258 (see Tables 1 and 2) do not rule out the possibility of a shorter period: it will be very valuable to determine v_r at additional epochs to constrain its orbital elements. Even more remarkably, there is a significant probability that the secondary of the pair is substellar. If confirmed, this would make HD 213258 the first roAp star known to have a brown dwarf companion. In any event, it is certainly the roAp star with the least massive companion known to date.

Furthermore, it is rather uncommon for roAp stars to have very strong magnetic fields. For instance, of the 44 roAp stars for which an averaged value of the magnetic field is given in Table 1 of Smalley et al. (2015), only 8 (18%) have field values greater than 3.5 kG. Admittedly, this statistic is very approximate since it is based on inhomogeneous literature sources and magnetic field measurements of differing completeness and quality. But this does not detract from the fact that, with B ∼ 3.8 kG, HD 213258 ranks among the most strongly magnetic roAp stars.

Finally, as illustrated in Sect. 4, the mean radial velocity of HD 213258 appears larger in absolute value than that of more than 99% of Ap stars.

In summary, each of the above-mentioned properties of HD 213258 taken in isolation puts it in a minority group of Ap stars that may represent between ∼20% and less than 1% of the whole population of Ap stars. What makes HD 213258 especially remarkable is that it presents a combination of all these rare properties. Such a combination must have a very low probability of occurring, and to the best of our knowledge, HD 213258 is as of now the only known Ap star possessing it. For instance, γ Equ is an roAp star that has a magnetic field of the same order of magnitude as that of HD 213258 and a rotation period that is probably longer than that of HD 213258. But while the importance of the much higher space velocity of HD 213258 is unclear, the fact that its companion is much closer than that of γ Equ and that it has a very low mass and may plausibly be a brown dwarf marks HD 213258 as truly unique. The purpose of this note is to bring this star to the attention of the stellar astrophysics community so that it receives the scrutiny it deserves. For instance, this star is an excellent workbench for studying the effects of a strong magnetic field and of a companion on the atmospheric and pulsation properties of Ap stars. We are currently working to determine its fundamental parameters, the chemical abundances in its atmosphere, and their vertical stratification. The results of this detailed analysis will be the subject of a future paper.

Fig. 1. Portion of the spectrum of HD 213258 recorded on HJD 2,459,180.728 in Stokes I (top) and V (bottom), showing the resolved magnetically split line Fe ii λ 6149.2 Å. A few other lines that are typical of Ap stars are identified. The spectrum has been normalised to the continuum (I_c), and the wavelength scale has been converted to the laboratory reference frame.

Fig. 2. Mean longitudinal magnetic field (top) and mean magnetic field modulus (bottom) of HD 213258 against time.

Fig. 3. Heliocentric radial velocity of HD 213258 against time. Black diamonds identify the measurements based on CORAVEL observations and red dots those based on ESPaDOnS spectra.
Fig. 4. High-overtone pulsations of HD 213258 detected during the analysis of photometric data in Sector 56 provided by TESS. Vertical dotted red lines specify the position of three significant signals in the Lomb-Scargle periodogram.
2022-12-27T06:42:08.340Z
2022-12-24T00:00:00.000
{ "year": 2022, "sha1": "4145dc84dfd1c1c2e28ba9fd9cc72f1a68fcf3ff", "oa_license": "CCBY", "oa_url": "https://www.aanda.org/articles/aa/pdf/2023/02/aa45568-22.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "de81c490d8be7bb745ac84249a89d328fb6b918d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
85546638
pes2o/s2orc
v3-fos-license
Cohomology and base change for algebraic stacks

We prove that cohomology and base change holds for algebraic stacks, generalizing work of Brochard in the tame case. We also show that Hom-spaces on algebraic stacks are represented by abelian cones, generalizing results of Grothendieck, Brochard, Olsson, Lieblich, and Roth–Starr. To accomplish all of this, we prove that a wide class of Ext-functors in algebraic geometry are coherent (in the sense of M. Auslander).

Introduction

Our first main result is the following version of cohomology and base change.

Theorem A. Fix a morphism of locally noetherian algebraic stacks f : X → S which is separated and locally of finite type, M^• ∈ D^-_Coh(X), and N ∈ Coh(X) which is properly supported and flat over S. For each integer q ≥ 0 and morphism of noetherian algebraic stacks g : T → S, there is a natural base change morphism:

b^q(T) : g^* Ext^q(f; M^•, N) → Ext^q(f_T; L(g_X)^*_Q M^•, g_X^* N),

where f_T : X_T → T denotes the pullback of f by g and g_X : X_T → X denotes the pullback of g by f. Now fix s ∈ |S| such that b^q(s) is surjective.
(1) Then, there exists an open neighbourhood U ⊆ S of s ∈ |S| such that for any g : T → S factoring through U, the map b^q(T) is an isomorphism.
(2) The following are equivalent:
(a) b^{q+1}(s) is surjective;
(b) the coherent O_S-module Ext^{q+1}(f; M^•, N) is free at s.

In the proof of Theorem A for projective morphisms [EGA, III.7.7.5], the essential point was to corepresent the relevant functors by bounded above complexes of coherent sheaves on S. Thus the following notion was indispensable: for a noetherian scheme S, a functor QCoh(S) → QCoh(S) is corepresentable by a complex if it is of the form J ↦ H^0(RHom_{O_S}(Q^•, J)), where Q^• ∈ D^-_Coh(S). Brochard noticed [Bro12, App. A], however, that corepresenting the cohomology functors of a non-tame stack by complexes is quite subtle: the problem is that these stacks tend to have infinite cohomological dimension. In particular, the "Mumford Lemma" [Mum70, Lem. 5.1] is no longer applicable. Our next main result shows that this problem can be circumvented when one has duality.

Theorem B. Fix a morphism of noetherian algebraic stacks f : X → S that is proper, where S admits a dualizing complex, M^• ∈ D^-_Coh(X), and N^• ∈ D^b_Coh(X). If N^• has finite tor-dimension over S, then there exists E^•_{M^•,N^•} ∈ D^-_Coh(S) together with a quasi-isomorphism, natural in I^• ∈ D^+_QCoh(S):

RHom_{O_X}(M^•, N^• ⊗^L_{O_X} Lf^*_Q I^•) ≃ RHom_{O_S}(E^•_{M^•,N^•}, I^•).

If N^• ≃ N[0], where N ∈ Coh(X) is flat over S, then the formation of E^•_{M^•,N^•} is compatible with base change.

Any separated scheme of finite type over a Gorenstein ring (e.g. Spec Z or a field), as well as the spectrum of any maximal-adically complete noetherian local ring, admits a dualizing complex. Thus, Theorem B covers most cases encountered in practice (in particular, it generalizes [BF97, Lem. 6.1]). We wish to point out, however, that the collection of functors QCoh(S) → QCoh(S) which are corepresentable by a complex is poorly behaved. Indeed, it is not even closed under direct summands [Har98, Prop. 4.6 & Ex. 5.5]. The following generalization of functors which are corepresentable by a complex was considered by M. Auslander [Aus66] in order to correct such deficiencies. For an affine (not necessarily noetherian) scheme S, a functor F : QCoh(S) → Ab is coherent if there exists a morphism of quasicoherent O_S-modules K_1 → K_2, such that for all I ∈ QCoh(S), there is a natural isomorphism of abelian groups:

F(I) ≅ coker(Hom_{O_S}(K_2, I) → Hom_{O_S}(K_1, I)).

Note that if S is noetherian, the coherent functors of R. Hartshorne [Har98] are precisely those coherent functors of M. Auslander [Aus66] which preserve direct limits.
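As a standard illustration of this definition (this example is not taken from the paper): for a ring $A$ and an element $x \in A$, the functor $I \mapsto I/xI$ on $A$-modules is coherent, with presentation given by the morphism $K_1 \to K_2$ equal to multiplication by $x$ on $A$:
\[
I/xI \;\cong\; \operatorname{coker}\bigl(\operatorname{Hom}_A(A,I) \xrightarrow{\;\cdot x\;} \operatorname{Hom}_A(A,I)\bigr).
\]
Likewise, $I \mapsto I[x] = \ker(x \colon I \to I)$ is coherent, since it is isomorphic to $\operatorname{Hom}_A(A/xA, I)$.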
In any case, the collection of coherent functors is very well-behaved: it is an abelian category which is closed under extensions and inverse limits, precisely the sort of properties that are convenient to have at one's disposal when performing induction arguments. Thus, using Theorem B, we can prove the following theorem.

Theorem C. Fix an affine scheme S, a morphism of algebraic stacks f : X → S, which is separated and locally of finite presentation, M^• ∈ D_QCoh(X), and N ∈ QCoh(X), with N of finite presentation, properly supported and flat over S. Then, the functor

QCoh(S) → Ab, I ↦ Hom_{O_X}(M^•, N ⊗_{O_X} f^* I),

is coherent.

We wish to emphasize that Theorem C eliminates from Theorem B the finiteness hypotheses on S (i.e. S is permitted to be non-noetherian) and on M^• (i.e. M^• can be unbounded with quasicoherent cohomology). Moreover, the employment of Auslander's definition of coherent functors is essential for the proof and truth of Theorem C. If S is noetherian and M^• ∈ D^-_Coh(X), then Theorem C implies that the functor is coherent in the sense of Hartshorne (see Lemma 1.1). Theorem A is proved by combining a clever result of A. Ogus and G. Bergman [OB72, Cor. 5.1], Theorem C, and some general vanishing results for coherent functors.

An interesting application of Theorem C is the following: given a scheme S, an algebraic S-stack X, and M, N ∈ QCoh(X), we define the S-presheaf Hom_{O_X/S}(M, N) as follows:

Hom_{O_X/S}(M, N) : (T → S) ↦ Hom_{O_{X×_S T}}(τ_X^* M, τ_X^* N),

where τ_X : X ×_S T → X is the projection. Then, we prove

Theorem D. Fix a scheme S and a morphism of algebraic stacks f : X → S, which is separated and locally of finite presentation. Let M, N ∈ QCoh(X), with N of finite presentation, flat over S, with support proper over S. Then, Hom_{O_X/S}(M, N) is representable by an abelian cone over S (which is, in particular, affine over S). If M is of finite presentation, then Hom_{O_X/S}(M, N) is finitely presented over S.

We wish to emphasize that Theorem D is completely elementary once Theorem C is known. Using techniques different from those employed for Theorem C, we prove a coherence result when nothing is assumed to be flat (at the expense of making the diagonal finite).

Theorem E. Fix an affine and noetherian scheme S, a morphism of algebraic stacks f : X → S which is locally of finite type with finite diagonal, M^• ∈ D_QCoh(X), and N^• ∈ D^b_Coh(X). If N^• has properly supported cohomology sheaves over S, then the functor

QCoh(S) → Ab, I ↦ Hom_{O_X}(M^•, N^• ⊗^L_{O_X} Lf^*_Q I),

is coherent.

We wish to emphasize that Theorem E is independent of Theorem B.

Relation with other work. In [Hal12], coherent functors featured prominently in a criterion for algebraicity of stacks. Thus Theorem C can be used to show that certain stacks are algebraic [op. cit., §§8-9]. Theorem D can also be used to show that many algebraic stacks of interest have affine diagonals [Hal12, §§8-9], generalizing and simplifying existing work of M. Olsson. The results of [Bro12, Prop. A.4.3] demonstrate that Theorem A holds when f is proper, tame, and flat, and M is flat over S as well as being the cokernel of a map of vector bundles. For example, if f is projective, or more generally, X is tame and has the resolution property, Theorem A is known to hold for any coherent sheaf M which is flat over S. After completing this paper we also located in the literature two very nice papers of H. Flenner addressing similar results for analytic spaces. In particular, if S is excellent and of finite Krull dimension, then Theorem A follows from the results of [Fle81, §7] and Theorems B and D are the main results of [Fle82] (this is in the analytic category, however, thus M is assumed to be coherent in Theorem D).
Without further assumptions on f and M q (e.g. M q and X are flat over S when q ≥ 1 [AK80, §1]) we cannot see how Theorem A can be easily reduced to the case where S meets Flenner's hypotheses. In fact, we use coherent functors to accomplish this descent, which is effectively the content of Theorem C. This has no counterpart in the analytic category where everything is excellent, coherent, and admits a dualizing complex. We do not believe that Theorem E has been considered previously. Assumptions, conventions, and notations. For a scheme T , denote by |T | the underlying topological space (with the Zariski topology) and O T the (Zariski) sheaf of rings on |T |. For t ∈ |T |, let κ(t) denote the residue field. Denote by QCoh(T ) (resp. Coh(T )) the abelian category of quasicoherent (resp. coherent) sheaves on the scheme T . Let Sch/T denote the category of schemes over T . The bigétale site over T will be denoted by (Sch/T )É t . For a ring A and an A-module M , denote the quasicoherent O Spec A -module associated to M by M . Denote the abelian category of all (resp. coherent) Amodules by Mod(A) (resp. Coh(A)). We will assume throughout that all schemes, algebraic spaces, and algebraic stacks have quasicompact and quasiseparated diagonals. Derived categories of sheaves on algebraic stacks In this section, we review derived categories of sheaves on algebraic stacks. Fix an algebraic stack X. We take Mod(X) (resp. QCoh(X)) to denote the abelian category of O X -modules (resp. quasicoherent O X -modules) on the lisse-étale site of X [LMB, 12.1]. Take D(X) (resp. D QCoh (X)) to denote the unbounded derived category of Mod(X) (resp. the full subcategory of D(X) with cohomology in QCoh(X)). Superscripts such as +, −, ≥ n, and b decorating D(X) and D QCoh (X) should be interpreted as usual. In addition, if X is locally noetherian, one may consider the category of coherent sheaves Coh(X) and the derived category D Coh (X). If X is a Deligne-Mumford stack, there is an associated smallétale site. We take Mod(Xé t ) (resp. QCoh(Xé t )) to denote the abelian category of O Xé t -modules (resp. quasicoherent O Xé t -modules). There are naturally induced morphisms of abelian categories Mod(X) → Mod(Xé t ) and QCoh(X) → QCoh(Xé t ). Set D QCoh (Xé t ) to be the triangulated category D QCoh(Xé t ) (Mod(Xé t )). Then, the natural functor D QCoh (X) → D QCoh (Xé t ) is an equivalence of categories. If X is a scheme, the corresponding statement for the Zariski site is also true. For generalities on unbounded derived categories on ringed sites, we refer the reader to [KS06,§18.6]. We now record for future reference some useful formulae. If M q and N q ∈ D(X), then there is the derived tensor product M q ⊗ L OX N q ∈ D(X), the derived sheaf Hom functor RHom OX (M q , N q ) ∈ D(X) and the derived global Hom functor RHom OX (M q , N q ) ∈ D(Ab). For all P q ∈ D(X) we have a functorial isomorphism: as well as a functorial quasi-isomorphism: Set RΓ(X, −) = RHom OX (O X , −), then there is also a natural quasi-isomorphism: QCoh (X) (resp. D + Coh (X)). These results are all consequences of [Ols07, §6] and [LO08,§2]. Fix a morphism of algebraic stacks f : X → Y . We let Rf * : If the morphism f is quasicompact and quasiseparated, then the restriction of Rf * to D + QCoh (X) induces a functor Rf * : .20]. If X and Y are Deligne-Mumford stacks, then the restriction of Rf * to D + QCoh (X) coincides with the restriction of the derived functor of (fé t ) * : Mod(Xé t ) → Mod(Yé t ) to D QCoh (Xé t ). 
A consequence of [LO08, Ex. 2.1.11] is that if in addition f is representable, then the restriction of Rf * to D QCoh (X) induces a functor Rf * : A morphism of algebraic stacks f : X → Y , however, does not necessarily induce a left exact morphism of corresponding lisse-étale sites [Beh03,5.3.12], thus the construction of the correct derived functors of f * : QCoh(Y ) → QCoh(X) is somewhat subtle. There are currently two approaches to constructing these functors. The first, due to M. Olsson [Ols07] and Y. Laszlo and M. Olsson [LO08], uses cohomological descent. The other approach appears in the Stacks Project [Stacks]. The latter approach is more widely applicable, but uses a completely different formulation (big sites) and requires significant amounts of technology that many people may not be familiar with. In this article, where there is always some finiteness at hand, the approach of Olsson and Laszlo-Olsson is sufficient, thus is the method that we will employ. In any case, there exists a functor Lf * In addition, if f is quasicompact and quasiseparated, I q ∈ D QCoh (Y ), and N q ∈ D + QCoh (X), then there is a natural isomorphism: Thus, Lf * Q is the left adjoint of Rf * . In the situation where X and Y are Deligne-Mumford stacks, there also exists a derived functor Lf * et : If f : X → Y is a quasicompact morphism of locally noetherian algebraic stacks, If f : X → Y is a representable and quasicompact morphism of algebraic stacks, then the isomorphism (1.4) can be extended to all N q ∈ D QCoh (X). In this situation-which is covered in [HR12c], generalizing [Nee96,Prop. 5.3]-there is also the projection formula, which gives a functorial quasi-isomorphism for all N q ∈ D QCoh (X) and I q ∈ D QCoh (Y ): For a morphism of algebraic stacks f : . We conclude this section with the following easily proven lemma. Lemma 1.1. Fix an affine noetherian scheme S and a morphism of noetherian algebraic stacks f : Coh (X). If N q has finite tor-dimension over S, then the following functor preserves filtered colimits: We now briefly review homotopy limits in a triangulated category T admitting countable products. Fix for each i ≥ 0 a morphism in T, t i : T i+1 → T i . Set t : Π i≥0 T i → Π i≥0 T i to be the composition of the product of the morphisms t i with the projection Π i≥0 T i → Π i≥1 T i . We define holim i T i via the following distinguished triangle: The category of lisse-étale O X -modules is a Grothendieck abelian category, thus D(X) admits small products. Moreover, we wish to point out that the functors RΓ(X, −), RHom OX (M q , −) for M q ∈ D(X), and Rf * for a morphism of algebraic stacks f : X → Y all preserve homotopy limits because they preserve products. The following result is well-known and appears in [Stacks, 08IY] (albeit in a slightly different, but equivalent formulation). Lemma 1.2. Let X be an algebraic stack and fix N q ∈ D(X). Then, the projections N q → τ ≥−i N q induce a non-canonical morphism: If N q ∈ D QCoh (X), then any such φ is a quasi-isomorphism. Note that the main result of [Nee11] produces-in positive characteristic-complexes N q ∈ D(QCoh(BN a )) with the property that there are no quasi-isomorphisms: We wish to emphasize that this does not contradict Lemma 1.2. Indeed, while the categories D + (QCoh(BG a )) and D + QCoh (BG a ) are equivalent [Lur04, Thm. 3.8], this equivalence does not extend to the unbounded derived categories. Corepresentability of RHom-functors In this section we prove Theorem B. 
Before we get to this we require the following two easily proven lemmas. Our first lemma gives two important maps used to prove Theorem B. Lemma 2.1. Fix a morphism of noetherian algebraic stacks f : (2) There is a natural quasi-isomorphism in D + QCoh (X): Our next lemma will give the compatibility of the corepresenting object in Theorem B with base change-the result for schemes boils down to the well-known tor-independent base change [FGI + 05, Thm. 8.3.2]. Lemma 2.2. Fix a 2-cartesian square of noetherian algebraic stacks: We can now prove Theorem B. Proof of Theorem B. For background material on dualizing complexes we refer the reader to [Har66, V.2]. For the convenience of the reader, however, we will recall the relevant results. A complex K q ∈ D b Coh (S) is dualizing if it is locally of finite injective dimension and for any F q ∈ D Coh (S), the natural map: is a quasi-isomorphism. For notational convenience we set D(−) = RHom OS (−, K q ). Two useful facts about dualizing complexes are the following: (1) the functor D(−) interchanges D − Coh (S) and D + Coh (S); (2) for F q , G q ∈ D Coh (S), there is a natural quasi-isomorphism: Now fix I q ∈ D + Coh (S), then we have the following sequence of natural quasiisomorphisms: (2)). The final quasi-isomorphism is a consequence of the following sequence of observa- . Thus (2) applies. Hence, we have produced a natural quasi-isomorphism for all I q ∈ D + Coh (S): . We now need to extend the quasi-isomorphism (2.1) to I q ∈ D + QCoh (S). First, we note that because N q is bounded, there exists an r such that for all n and all I q ∈ D QCoh (S) the natural map: is a quasi-isomorphism. Hence, by Lemma 1.2 there exist maps for all I q ∈ D Coh (S): Note, however, that the maps above depend on M q , N q , and I q in a non-natural way (this is because holim n is constructed as a cone, thus is not functorial). In any case, corresponding to the identity map . Now take I q ∈ D + QCoh (S), then Lemma 2.1(1) provides a natural sequence of maps: By (2.1), the map above is certainly a quasi-isomorphism for all I q ∈ D + Coh (S). To show that it is a quasi-isomorphism for all I q ∈ D + QCoh (S), by the "way-out right" results of [Har66,I.7.1], it is sufficient to prove that it is a quasi-isomorphism for all quasicoherent O S -modules. We may now reduce to the case where S is an affine and noetherian scheme. Hence, it is sufficient to prove that the natural transformation of functors from QCoh(S) → Ab: ) is an isomorphism. By Lemma 1.1, both functors preserve filtered colimits and the exhibited natural transformation is an isomorphism for all I ∈ Coh(S). Since any I ∈ QCoh(S) is a filtered colimit of objects of Coh(S), we deduce the result. It now remains to address the compatibility of E q M q ,N q with base change in the situation where N q ≃ N[0] and N ∈ Coh(X) is flat over S. So, we fix a morphism of noetherian algebraic stacks g : T → S such that T admits a dualizing complex and form the 2-cartesian square of noetherian algebraic stacks: By Lemma 2.2 and what we have proven so far, there is a quasi-isomorphism, natural in I q ∈ D + QCoh (T ): By trivial duality (1.5) we thus obtain a quasi-isomorphism, natural in I q ∈ D + QCoh (T ): By (1.3), we thus see that we have a quasi-isomorphism, natural in I q ∈ D + QCoh (T ): By Lemma 1.2 and the above we obtain a sequence of quasi-isomorphisms: Lemma 3.1. Fix a ring A, then the forgetful functor: Finitely generated and coherent functors is an equivalence of categories. 
In particular, for a ring A, set T = Spec A and note that Lemma 3.1 shows that the category of additive functors QCoh(T ) → Ab is equivalent to the category of A-linear functors Mod(A) → Mod(A). We will use this equivalence without further mention to translate definitions between the two categories. A functor Q : Mod(A) → Sets is finitely generated if there exists an A-module I and an object η ∈ Q(I) such that for all A-modules M , the induced morphism of sets Hom A (I, M ) → Q(M ) : f → f * η is surjective. We call the pair (I, η) a generator for the functor Q. The notion of finite generation of a functor is due to M. Auslander [Aus66]. We refer to the data (f : I → J, η) as a presentation for F . For accounts of coherent functors, we refer the interested reader to [Aus66,Har98]. We now have a number of examples. Example 3.5. Given an exact sequence of additive functors H i : Mod(A) → Ab: where for i = 3, we have that H i is coherent. Then, H 3 is coherent. In particular, the category of coherent functors is stable under kernels, cokernels, subquotients, and extensions. This follows from [Aus66, Prop. 2.1]. Example 3.6. Fix a ring A and an A-algebra B. If F : Mod(A) → Ab is a coherent functor, then analogously to Example 3.4, the restriction F B : Mod(B) → Ab is also coherent. An A-linear functor of the form M → Ext i A (Q • , M ) is said to be corepresentable by a complex. By Example 3.7, such functors are coherent and half-exact, and were intially studied by M. Auslander [Aus66], with stronger results-in the noetherian setting-obtained R. Hartshorne [Har98]. In [HR12a], it is shown thatétale locally any half-exact, coherent functor is corepresentable by a complex. Coherence of Hom-functors: flat case Proof of Theorem C The proof of Theorem C will be via an induction argument that permits us to reduce to the case of Theorem B. The following notation will be useful. Notation 4.1. For a morphism of algebraic stacks f : X → S, and M q , N q ∈ D QCoh (X), set: to be the full subcategory having objects those M q with the property that H M q ,N q [n] is coherent for all n ∈ Z. We begin with two general reductions. Lemma 4.2. Fix an affine scheme S and a morphism of algebraic stacks f : X → S and N q ∈ D QCoh (X), then the subcategory T N q X/S ⊂ D QCoh (X) is triangulated and closed under small direct sums. In particular, if (1) D − QCoh (X) ⊂ T N q X/S , or (2) N q has finite tor-dimension over S and M ∈ T N q X/S for all M ∈ QCoh(X), Proof. Certainly, T N q X/S is closed under shifts. Next, given a triangle M , we obtain an exact sequence of functors: . By Example 3.9 we conclude that M q ∈ T N q X/S . For (1), by [LO08,Lem. 4.3.2], given M q ∈ D QCoh (X), there is a triangle: By hypothesis, τ ≤n M q ∈ T N q X/S for all n ≥ 0, and so M q ∈ T N q X/S . For (2) by (1), it is sufficient to prove that D − QCoh (X) ⊂ T N q X/S . Since N q has finite tor-dimension over S, there exists an integer l such that the natural map , then for any integer n we have a natural isomorphism of functors: QCoh (X) ⊂ T N q X/S . Working with truncations and cones gives the result. Lemma 4.3. Fix an affine scheme S, a representable morphism of algebraic Sstacks p : X ′ → X which is quasicompact, and G ′ q ∈ D QCoh (X ′ ). Then, Proof. Fix M q ∈ D QCoh (X), then: with the penultimate isomorphism given by the projection formula (1.6). We now have our first induction result. Lemma 4.4. 
Fix an affine scheme S, a morphism of algebraic stacks f : X → S which is of finite presentation, N q ∈ D QCoh (X), and an integer n ∈ Z. Suppose that the functor H M,N q [r] is coherent in the following situations: (1) for all r < n and M ∈ QCoh(X); (2) r = n and M ∈ QCoh(X) of finite presentation. Then, the functor H M,N q [n] is coherent for all M ∈ QCoh(X). Proof. Fix M ∈ QCoh(X), then we must prove that H M,N q [n] is coherent. The morphism f is of finite presentation, so by [Ryd09, Thm. A], the quasicoherent O X -module M is a filtered colimit of O X -modules M λ of finite presentation. Let Q 1 = ⊕ λ M λ , Q 2 = ⊕ λ≤λ ′ M λ and take θ : Q 2 → Q 1 to be the natural map with Since N 0 is S 0 -flat, the natural transformation of functors H τ ≥0 L q ,N → H L q ,N is an isomorphism for all L q ∈ D QCoh (X). Hence, if l ≥ 0 and r ≤ 0: Moreover, if l < 0, then τ ≥−l M q ≃ 0, so V r,l ≡ 0. Thus, it remains to show that H M q ,N[l] is coherent for all l. Let I ∈ QCoh(S), then Lemma 2.2 gives a natural isomorphism H M q ,N[l] (I) ∼ = H M0,N0[l] (g * I). Example 3.6 now gives the result. We can now prove Theorem C. Proof of Theorem C. We must prove that D QCoh (X) = T Coherence of Hom-functors: non-flat case In this section we prove Theorem E. For this section we also retain the notation of §4. We begin by dispatching Theorem E in the projective case. Lemma 5.1. Fix an affine and noetherian scheme S, a morphism of schemes f : X → S which is projective, and N q ∈ D b Coh (X). Then, T N q X/S = D QCoh (X). Proof. By Lemma 4.2(1), it is sufficient to show that D − QCoh (X) ⊆ T N q X/S . Since f is a projective morphism and S is an affine and noetherian scheme, f has an ample family of line bundles. It now follows from [SGA6, II.2.2.9] that if M q ∈ D − QCoh (X), then M q is quasi-isomorphic to a complex Q q whose terms are direct sums of shifts of line bundles. Thus, by Lemma 4.2, it is sufficient to prove that if L ∈ Coh(X) is a line bundle, then L[0] ∈ T N q X/S . Fix n ∈ Z, then we have natural isomorphisms: , the latter isomorphism is given by the projection formula (1.6). Since L is O Xflat and N q ∈ D b Coh (X), it follows that Proof of Theorem E. By inducting on the length of the complex N q , it is sufficient to prove the result when N q is only supported in cohomological degree 0. In particular, there exists a closed immersion i : Y → X, such that the composition f • i is proper, together with a coherent O Y -module N 0 and a quasi-isomorphism i * N 0 [0] ∼ = N q . By Lemma 4.3, it suffices to prove that T Y /S = D QCoh (Y ). Hence, we have reduced the claim to the case where the morphism f is proper and where N q ≃ N[0] for some N ∈ Coh(X). Now let C X/S ⊂ Coh(X) denote the full subcategory with objects those N ∈ Coh(X) such that T X/S = D QCoh (X). By the 5-Lemma, it is plain to see that C X/S is an exact subcategory (in the sense of [EGA,III.3.1]). We now prove by noetherian induction on the closed substacks of X that C X/S = Coh(X). By virtue of Lemma 4.3 and the technique of dévissage [EGA, Proof of III.3.2], it is sufficient to prove that C X/S = Coh(X) when X is integral and T So, we fix N ∈ Coh(X). Combining Chow's Lemma [EGA,II.5.6.1] with [EHKV01, Thm. 2.7], there exists a morphism p : X ′ → X that is proper, surjective, and generically finite such that X ′ is a projective S-scheme. The diagonal of X is finite, thus X ′′ := X ′ × X X ′ is also a projective S-scheme, denote by q : X ′′ → X the induced morphism. 
By Lemma 5.1 we deduce that T Next, by Lemma 4.3, T Rp * p * N X/S = D QCoh (X). Also, p is generically finite, thus generically affine, so the support of the cohomology sheaves of τ ≥1 (Rp * p * N) vanishes generically. In particular, by noetherian induction, we deduce that T τ ≥1 (Rp * p * N) X/S = D QCoh (X), thus T p * p * N[0] X/S = D QCoh (X). An identical analysis for q also proves that T q * q * N[0] X/S = D QCoh (X) and so p * p * N and q * q * N ∈ C X/S . Hence, the equalizerÑ of the two maps p * p * N ⇒ q * q * N belongs to C X/S . Of course, there is also a natural map θ : N →Ñ. But X is integral, thus there is a dense open U ⊂ X such that p −1 (U ) → U is flat. By flat descent, θ is an isomorphism over U . So the exactness of the subcategory C X/S ⊂ Coh(X) and dévissage now prove that N ∈ C X/S . 6. Applications 6.1. Representability of Hom-spaces. As promised, Theorem D is now completely elementary. Proof of Theorem D. The latter claim follows from the former by standard limit methods. Now, Hom OX /S (M, N) is anétale sheaf, thus it is sufficient to prove the result in the case where S is affine. By Theorem C, the functor: is coherent. The flatness of N over S also shows that this functor is left-exact, so by Example 3.10, it is corepresentable by a quasicoherent O S -module Q M,N . So, fixing an affine S-scheme (T τ − → S), there are natural isomorphisms: Proof. Fix a generator (I, η) for F and a prime ideal p ⊳ A such that F Ap ≡ 0. For a ∈ A there is the localisation morphism l a : I → I a (resp. l p : I → I p ) and we set η a = (l a ) * η (resp. η p = (l p ) * η). Since F (I p ) = 0, it follows that η p = 0. However, as I p = lim − →a/ ∈p I a and the functor F commutes with direct limits of A-modules, there exists a / ∈ p such that η a = 0 in F (I a ). Since the pair (I a , η a ) generates F Aa , we have that F Aa ≡ 0. We now record for future reference a result that is likely well-known, though we are unaware of a reference. Proof. First, assume that the A-module M is finite free. The functor F commutes with finite products, thus the natural transformation δ F,M induces an isomorphism. For the general case, by Lazard's Theorem [Laz64], we may write M = lim − →i P i , where each P i is a finite free A-module. Since tensor products commute with direct limits, as does the functor F , the Proposition follows from the case already considered. Corollary 6.3. Fix a noetherian ring R and a bounded R-linear functor G : Mod(R) → Mod(R) that commutes with direct limits. (1) For any quasi-finite R-algebra R ′ , the functor G R ′ is bounded. (2) For any p ∈ Spec R, the functor G Rp is bounded. Proof Remark 6.4. Corollary 6.3(2) also holds for the henselization and strict henselization of R p . An R-linear functor G : Mod(R) → Mod(R) is universally bounded if for any noetherian R-algebra R ′ , the functor G R ′ : Mod(R ′ ) → Mod(R ′ ) is bounded. To combine Proposition 6.1 and Corollary 6.5, it is useful to have the following easily proven Lemma at hand. Lemma 6.6. Fix a ring A and an A-linear functor F : Mod(A) → Mod(A) preserving direct limits. If the functor F is finitely generated, then there exists a generator (I, η) with I a finitely presented A-module. In particular, if the ring A is noetherian, then the functor F is universally bounded. Combining Proposition 6.1, Corollary 6.5, and Lemma 6.6, we obtain the vanishing result we desire. Corollary 6.7. 
Fix a noetherian ring R and an R-linear, half-exact functor F : Mod(R) → Mod(R) which is finitely generated and preserves direct limits. If q ∈ Spec R and F (κ(q)) = 0, then there exists r ∈ R − q such that F Rr ≡ 0. Proof. By Corollary 6.5, q ∈ V(F ). By Proposition 6.1, the set V(F ) is Zariski open, thus there exists r ∈ R − q such that p ∈ Spec R r ⊂ V(F ). Let N ∈ Mod(R r ) and p ∈ Spec R r , then by Proposition 6.2 it follows that F (N ) p = F (N p ). But p ∈ V(F ) and so F (N ) p = 0. Since F (N ) is an R r -module, the result follows. We now combine Corollary 6.7 with the exchange property proved by A. Ogus and G. Bergman [OB72, Cor. 5.1]. Some notation: for a ring A, a pair of A-linear functors F 0 , Let q ∈ Spec R and suppose that φ 0 (κ(q)) is surjective. Then, (1) there exists r ∈ R − q such that for all M ∈ QCoh(R r ), the map φ 0 (M ) is an isomorphism. Proof of Theorem A. Throughout we fix a 2-cartesian diagram of noetherian algebraic stacks: The results are all smooth local on S and T , thus we may assume that S = Spec A and T = Spec B and g is induced by a ring homomorphism A → B. We may also work with the global Ext-groups instead of the relative Ext-sheaves. For an A-module I set E q (I) = Ext q OX (M q , N ⊗ OX f * I), which gives an A-linear functor Mod(A) → Mod(A). By Theorem C, the functor E q is coherent and by Lemma 1.1 the functor preserves filtered colimits (in particular, it preserves direct limits). By Lemma 2.2, if J is a B-module, there is a natural isomorphism: In particular, taking J = B we obtain a natural map: The result now follows from Corollary 6.8.
2013-03-15T15:52:08.000Z
2012-06-19T00:00:00.000
{ "year": 2014, "sha1": "a64e15300a95dc623a3c1bfd82cc0d5edca62b44", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1206.4179", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a64e15300a95dc623a3c1bfd82cc0d5edca62b44", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
3334290
pes2o/s2orc
v3-fos-license
Talin1 is critical for force-dependent reinforcement of initial integrin–cytoskeleton bonds but not tyrosine kinase activation Cells rapidly transduce forces exerted on extracellular matrix contacts into tyrosine kinase activation and recruitment of cytoskeletal proteins to reinforce integrin–cytoskeleton connections and initiate adhesion site formation. The relationship between these two processes has not been defined, particularly at the submicrometer level. Using talin1-deficient cells, it appears that talin1 is critical for building early mechanical linkages. Deletion of talin1 blocked laser tweezers, force-dependent reinforcement of submicrometer fibronectin-coated beads and early formation of adhesion sites in response to force, even though Src family kinases, focal adhesion kinase, and spreading were activated normally. Recruitment of vinculin and paxillin to sites of force application also required talin1. FilaminA had a secondary role in strengthening fibronectin–integrin–cytoskeleton connections and no role in stretch-dependent adhesion site assembly. Thus, force-dependent activation of tyrosine kinases is independent of early force-dependent structural changes that require talin1 as part of a critical scaffold. Introduction The generation of force on the ECM is required for cell viability, differentiation (Huang and Ingber, 2000), and migration (Lauffenburger and Horwitz, 1996). Forcedependent effects involve the spatial and temporal regulation of integrin interactions with both ECM proteins and the actin cytoskeleton. Forces applied to these nascent ECM-integrincytoskeleton connections induce strengthening of the integrincytoskeleton interactions (Choquet et al., 1997) leading to focal complex initiation and stabilization (Rottner et al., 1999;Galbraith et al., 2002). Further force generation is responsible for maturation of focal complexes to focal adhesions , which also require sustained forces for their stabilization . However, the proteins that compose the early mechanosensory system within precursors of focal complexes and the force-sensing phenomenon are poorly defined (Geiger and Bershadsky, 2002). At least two processes are known to be involved in early mechano-sensing: (1) the activation of signaling pathways including the Src family kinases (SFKs; von Wichert et al., 2003) and (2) the recruitment of structural proteins on a local scaffold leading to assembly of focal complexes at sites of cell-matrix adhesion (Galbraith et al., 2002). The importance of signaling pathways in early mechanosensing is shown by studies indicating that tyrosine phosphorylation and dephosphorylation are critical for rapid formation and turnover of adhesion sites (for review see Schoenwaelder and Burridge, 1999). Activations of SFKs and FAK are early, enzymatically linked events that immediately follow integrin engagement (Miyamoto et al., 1995;Cary et al., 2002), and tyrosine phosphorylation events are linked to the force applied on integrins (Pelham and Wang, 1997). FAK-deficient cells are unable to reorient their movement and form new adhesion sites in response to external forces on collagen-coated flexible substrates (Wang et al., 2001), and Src kinase activity weakens ␣ v ␤ 3 / integrin-cytoskeletal linkages (Felsenfeld et al., 1999) and may stimulate focal adhesion turnover. 
Furthermore, the tyrosine phosphatase receptor-like protein phosphatase ␣ (RPTP ␣ ) has been shown to act as a transducer of early mechanical force on fibronectin (FN)-integrin-cytoskele-ton linkages through ␣ v ␤ 3 /integrin-dependent activation of SFKs (von Wichert et al., 2003). The second important aspect of mechano-sensing is the recruitment of proteins to sites of force generation mediated by binding to components of the adhesion site that are structurally altered by force. For example, when detergentinsoluble cytoskeletons are mechanically stretched, adhesion site-associated proteins will bind independent of kinase and phosphatase activity (Sawada and Sheetz, 2002). The molecular nature of the structural component involved in force sensing is still elusive, but it is logical to look first at proteins connecting integrins with the cytoskeleton because regions of contact between integrins and matrix are the sites of greatest stress, and there is local recruitment of cytoskeletal and adhesion site proteins. In vitro studies show that integrins can be coupled to the actin cytoskeleton through several connector proteins: ␣ -actinin, tensin, filamin, and talin (Liu et al., 2000). Both filaminA and talin1 bind directly to integrins (Liu et al., 2000) and to the actin cytoskeleton (Hemmings et al., 1996;Stossel et al., 2001), and, therefore, might be involved in transmitting force on integrins to the cytoskeleton. Indeed, mechanical forces locally reinforce linkages between ␤ 1 integrins and the cytoskeleton through actin and filaminA recruitment, an effect not observed in fil-aminA-deficient melanoma cells (Glogauer et al., 1998). Deletion of talin1 inhibits adhesion site formation in mouse embryonic stem (ES) cells (Priddle et al., 1998) and formation of focal adhesion-like structures during Drosophila melanogaster embryogenesis (Brown et al., 2002). However, a talin1-deficient "fibroblast-like" cell line derived from talin1 ( Ϫ / Ϫ ) ES cells was able to assemble vinculin-and paxillincontaining adhesion structures (Priddle et al., 1998), suggesting that other actin-binding proteins such as filamin, ␣ -actinin, tensin, or talin2 (Monkley et al., 2001) can compensate to a certain extent for talin1 deficiency. We have focused here on the roles that talin1 and fila-minA play in the reinforcement of integrin-cytoskeleton connections leading to initiation and stabilization of early adhesion sites in response to force. We have also addressed whether tyrosine kinase activation can be separated from the structural changes needed for reinforcement in response to matrix-generated forces. In the talin1-deficient cells, the force-dependent activation of SFKs and FAK were normal, whereas there was no reinforcement of integrin-actin connections at early times. The separation of enzymatic from structural changes induced by force provides the first evidence that these processes can be activated independently. Results Talin1 is not necessary for cell spreading and force-induced, integrin-mediated signaling in talin1 ( Ϫ / Ϫ ) cells Because the talin1 head domain has been shown to interact with the cytoplasmic domains of integrin ␤ 1 and ␤ 3 subunits (Calderwood et al., 1999) and FAK (Critchley, 2000), we assayed a mouse talin1 ( Ϫ / Ϫ ) fibroblast-like cell line for ECM-activated integrin functions. For comparison, the cells were transiently transfected with an HA-tagged mouse ta-lin1 cDNA (talin1 ( Ϫ / Ϫ )WT cells). Efficient expression of Figure 1. 
Integrin-and force-dependent activation of SFKs and FAK is normal during spreading of talin1-deficient cells on FN. (A) After 30 min of spreading on FN 120 kD, talin1 (Ϫ/Ϫ) cells transiently cotransfected with HA-talin1 and paxillin-GFP were fixed; paxillin-GFP and HA-talin1 were visualized by fluorescence and immunofluorescence, respectively. (B) After 10 min of spreading on FN 120 kD, talin1 (Ϫ/Ϫ) cells or cells transiently cotransfected with talin1 and EGFP (talin1 (Ϫ/Ϫ)WT) cells were scored for flat, intermediary, or round morphology. Results represent the mean Ϯ SD of three experiments. (C) Talin1 (Ϫ/Ϫ) and talin1 (Ϫ/Ϫ)WT cell suspension or cells allowed to spread for 10 min on either FN 120 kD or VN were lysed, and the protein was analyzed by Western blotting using a phosphospecific anti-SFK (SFK Tyr-416), a phosphospecific anti-FAK (FAK Tyr-397), and a talin antibody. The total amount of proteins was verified using an anti-Src antibody. (D) Talin1 (Ϫ/Ϫ) and talin1 (Ϫ/Ϫ)WT cells allowed to spread for 10 min on FN in the presence or absence (cont) of 20 mM of the myosin inhibitor BDM or in suspension (sus) were lysed; the protein was analyzed by Western blotting using a phosphospecific anti-SFK (SFK Tyr-416), a phosphospecific anti-FAK (FAK Tyr-397), and a talin antibody. The total amount of proteins was verified using an anti-Src and an anti-FAK antibody. (C and D) Results shown are representative of three independent experiments. talin1 (2,541 amino acids) was confirmed by Western blotting ( Fig. 1, C and D); the residual talin immunoreactive protein in talin1 ( Ϫ / Ϫ ) cells is likely to be talin2, as determined using talin1-and talin2-specific antibodies (Craig, S.W., personal communication, unpublished data). The correct localization of HA-talin1 to adhesion sites was confirmed by immunostaining of talin1 ( Ϫ / Ϫ )WT cells cotransfected with paxillin-GFP ( Fig. 1 A). The early spreading efficiency of talin1 ( Ϫ / Ϫ ) cells and talin1 ( Ϫ / Ϫ ) WT cells on FN was similar (e.g., 10 min after plating; Fig. 1 B). The expression level of integrins ␣ 5 , ␣ v , ␤ 1 , and ␤ 3 , which are all involved in adhesion and spreading on FN, was comparable in deficient and rescued cells (for review see Priddle et al., 1998;unpublished data). Integrin-dependent activation of tyrosine phosphorylation events (Pelham and Wang, 1997), and particularly FAK (Wang et al., 2001) and SFKs (Felsenfeld et al., 1999;von Wichert et al., 2003), has been linked to adhesion site formation during spreading and force-dependent signaling. Interestingly, in talin1 ( Ϫ / Ϫ ) cells, SFK and FAK activation appeared normal during the initial spreading (10 min) on FN or vitronectin (VN). With antibodies specific for autophosphorylation of SFKs (such as c-Src, Fyn, and c-Yes) on Tyr416, and for autophosphorylation of FAK on Tyr397 ( Fig. 1 C), we observed a similar increase in phosphorylation after cell binding to FN-or VN-coated surfaces in both talin1 ( Ϫ / Ϫ ) and talin1 ( Ϫ / Ϫ )WT cells. Next, we tested whether forces generated by talin1 ( Ϫ / Ϫ ) and talin1 ( Ϫ / Ϫ ) WT cells during the spreading are involved in SFK and FAK activation (Fig. 1 D). Inhibition of myosin-dependent contractility by 2,3-butanedione monoxime (BDM; 20 mM) significantly decreased both FAK and SFK activation during spreading, suggesting that tyrosine phosphorylation of SFK and FAK resulted from force-induced, integrinmediated signaling. 
The normal spreading and SFK and FAK activation on FN for talin1 ( Ϫ / Ϫ ) cells suggest that early, force-dependent, matrix-integrin signaling processes do not directly involve talin1. Initiation of adhesion site assembly is delayed in talin1 ( Ϫ / Ϫ ) cells Focal complex/adhesion formation is another major function involving integrins and force generation, and was previously reported to be decreased in undifferentiated talin1 ( Ϫ / Ϫ ) ES cells deficient for talin1. However, upon differentiation of these cells into the fibroblast-like cells used here, focal adhesions appeared normal at least at later times (Priddle et al., 1998). The dynamic reorganization of integrin-associated protein complexes during focal complex formation (Galbraith et al., 2002) prompted us to characterize the temporal dependence of adhesion site formation in talin1 ( Ϫ / Ϫ ) cells, and in cells expressing either the full-length mouse talin1 cDNA (talin1 ( Ϫ / Ϫ )WT cells) or a talin1 polypeptide (residues 1-2299) lacking the highly conserved COOH-terminal actinbinding site (talin1 ( Ϫ / Ϫ ) ABS; Hemmings et al., 1996). Visualization of adhesion sites was performed 1 and 24 h after plating on FN using paxillin-GFP as a marker. Although after 24 h there appeared to be little difference, after 1 h there was a dramatically lower percentage of cells displaying paxillin-GFP containing contacts in talin1( Ϫ / Ϫ ) and talin1( Ϫ / Ϫ )ABS cells than in the talin1 ( Ϫ / Ϫ )WT cells (Fig. 2, A and B), implying that formation of linkages between talin1 and the actin cytoskeleton are critical for the recruitment of paxillin-GFP and formation of adhesion sites. We tested if the lower percentage of adhesion sites in talin1 ( Ϫ / Ϫ ) cells after 1 h spreading is due to a slower formation of focal adhesions or to a delay in the initiation of focal complexes slowing down the formation of focal adhesions. We performed total internal reflection fluorescence (TIRF) microscopy to detect the initiation of paxillin-GFP clusters in talin1 ( Ϫ / Ϫ )WT (Fig. 2 C) and talin1 ( Ϫ / Ϫ ) cells (Fig. 2 D). In talin1 ( Ϫ / Ϫ )WT, focal complexes were initiated in concert with protrusions and retractions of the lamellipodia (Fig. 2 C). The mean time necessary for initiation and stabilization of the paxillin-GFP clusters was 123 Ϯ 55 s ( n ϭ 31 adhesion sites; seven cells). In talin1 ( Ϫ / Ϫ ) cells, despite the presence of active lamellipodia, no initiation of distinct adhesion sites was detected during Ͼ 30 min of data acquisition (Fig. 2 D) in 80% of cells. In ‫ف‬ 20% of the cells, adhesions were observed, and the rate of adhesion site formation in those talin1 ( Ϫ / Ϫ ) cells was similar to talin1 ( Ϫ / Ϫ )WT cells (116 Ϯ 54 s; n ϭ 17 adhesion sites; three cells). This suggests that the adhesion site formation was an all or none process perhaps triggered by compensation of a talinlike gene (e.g., talin2 or tensin). Nevertheless, talin1 ( Ϫ/Ϫ) cells were able to apply force on ECM-integrin contacts as indicated by (a) the force-dependent activation of FAK and SFK ( Fig. 1 D); (b) the ruffling of protrusions; and (c) the movement of FN-coated beads out of the laser trap (Fig. 3). Thus, early focal complex formation appears to depend on talin1, which raises the possibility that talin1 is involved in the initial linkages between integrin and the actin cytoskeleton, but is not essential for integrin signaling or cell spreading. 
Talin1 is required for reinforcement of integrin-cytoskeleton linkages Forces exerted on integrin-cytoskeleton connections result in their strengthening (Choquet et al., 1997) and are associated with initiation and stabilization of focal complexes (Galbraith et al., 2002). We used laser tweezers to assess the role of talin1 in regulating the strength of early connections between FN and the cytoskeleton in a reinforcement assay (Fig. 3). To specifically focus on focal complex initiation and stabilization, we used a fragment of FN type III domains 7-10 (FNIII7-10), which contains the cell-binding domain (RGD) and the synergy site. FNIII7-10-coated beads bound to cell surface integrins, and the retrograde moving actin cytoskeleton normally pulled the beads out of the trap by applying a significant force to the FNIII7-10integrin-cytoskeleton linkages (Choquet et al., 1997). Later, when the force of the tweezers was applied again to the bound bead, reinforced beads could not be moved back toward the leading edge, whereas unreinforced beads were moved (Choquet et al., 1997). Talin1 (Ϫ/Ϫ) and talin1 (Ϫ/Ϫ)ABS cells showed abnormal behavior in the reinforcement assay at several levels. Initially, a lower fraction of beads was able to escape the laser trap (56 Ϯ 19% and 55 Ϯ 18%) compared with talin1 (Ϫ/Ϫ)WT cells (79 Ϯ 10%). Furthermore, the time required Restrained beads (gray strips, turn on of the laser trap) able to escape the laser trap (movement outside the first gray strip) were tested with a second pulse (second gray strip) after turning off (white strip) and repositioning the laser trap (Ͻ0.5 m behind the bead). Beads were scored reinforced if they were unable to be dislocated (no changes of the bead trajectory Ͼ100 nm, just after the beginning of the second gray strip, test pulse). Most beads were not displaced by the test pulse (retrap [reinforced]), suggesting that the cell regulates, in response to the rigidity of the laser trap, the strength of integrin-cytoskeleton linkages. (B) Representative trace showing displacement of trimeric FNIII7-10-coated beads from its initial position over time on the surface of talin1 (Ϫ/Ϫ) cells. Trimeric FNIII7-10-coated beads, which were not reinforced, were displaced by the laser trap test pulse (second gray strip; retrap [loose]) after initial escape from the laser trap (movement outside the first gray strip). (C) Summary of reinforcement assay results, showing the percentage of experiments in which beads were reinforced (black bars) or loose (white bars). Transfected cells were identified by EGFP cotransfection. Results represent the mean Ϯ SD of at least three experiments. for a bead to move 50 nm from the trap center in talin1 (Ϫ/Ϫ) cells (10 Ϯ 8 s) was significantly longer than in talin1 (Ϫ/Ϫ)WT (3 Ϯ 5 s) cells (Fig. 3, compare A with B), suggesting that talin1 is involved in strengthening initial integrin connections with the cytoskeleton. When tested for reinforcement, there was an even greater difference. In talin1 (Ϫ/Ϫ)WT cells, 58 Ϯ 11% of the beads that were able to escape the trap were reinforced and did not move in response to the tweezers' force, whereas in talin1 (Ϫ/Ϫ) cells only 10 Ϯ 8% of the escaped FNIII7-10-coated beads were reinforced (Fig. 2 C). Transient expression of a truncated talin1 lacking the COOH-terminal actin-binding site was unable to restore normal reinforcement (10 Ϯ 6%), suggesting that the interaction between talin1 and actin filaments is necessary for the reinforcement process. 
Although integrin coupling to the actin cytoskeleton is possible without talin1, it occurs much more slowly, and the integrin-cytoskeleton connections cannot be strengthened in response to force. The nearly sixfold difference in reinforcement was not related to a difference in FNIII7-10 bead binding. In the bead binding assay, FNIII7-10-coated beads were placed for 3 s on the upper surface of lamellipodia (<0.5 μm from the leading edge of the cell) and the trap was turned off. When beads were coated with high levels of FNIII7-10, little difference was found in the binding frequency between talin1(-/-) cells (86 ± 5%) and talin1(-/-)WT cells (97 ± 3%). We compared the reinforcement process in talin1(-/-) and talin1(-/-)WT cells at similar adhesion strength. To do this comparison, we reduced the percentage binding of FNIII7-10-coated beads on talin1(-/-)WT cells to 75 ± 8% (by adding BSA with FN, FNIII7-10/BSA 1/1). Under these conditions, the reinforcement in talin1(-/-)WT cells was not impaired (57 ± 8%, n = 51 beads), which demonstrated that, at the same adhesion strength between the ECM and integrins, integrin-cytoskeleton strengthening was dependent on talin1. Interestingly, we observed that after >25 passages, the talin1(-/-) cells lost their severe impairment in the reinforcement process (35 ± 16% showed reinforcement). Later passages of talin1(-/-) cells were characterized by up-regulation of expression of filaminA and talin2 (unpublished data). From the analysis of the role of filaminA in reinforcement (see Fig. 6), it appears that the up-regulation of talin2 restored the reinforcement process.

The talin1 force-dependent reinforcement of FN-integrin-cytoskeleton linkages involves αvβ3 integrin

The αvβ3 integrin, originally described as the VN receptor, binds to a variety of plasma and ECM proteins including VN and FN (Boettiger et al., 2001). Because αvβ3 integrin-dependent activation of SFK is involved in early adhesion site formation on FN and reinforcement of FNIII7-10-integrin-cytoskeleton connections (von Wichert et al., 2003), we tested if αvβ3 integrin was implicated in the reinforcement process in talin1(-/-)WT cells by adding the cyclic peptide GPenGRGDSPCA (GPen; 0.5 mM; Fig. 3 C). At this concentration, the peptide was shown to be a selective, competitive inhibitor of the αvβ3 integrin, and did not block binding of FN to its receptor (α5β1 integrin; Pierschbacher and Ruoslahti, 1987). GPen treatment reduced the binding of FNIII7-10-coated beads on the surface of talin1(-/-)WT cells from 97 ± 3% to 55 ± 5%. Of the beads bound in the presence of GPen, 55 ± 16% escaped from the laser trap and only 4 ± 8% were reinforced (Fig. 3 C). Because we had found no significant effect of αvβ3 integrin inhibition by GPen under different conditions (c-Src(+/+) cells after 24 h of spreading with FNIII7-10 monomer; Felsenfeld et al., 1999), we tested the effect of GPen in our experimental conditions. In c-Src(+/+) cells, we found that our current FNIII7-10-coated beads show inhibition of reinforcement in the presence of the GPen inhibitor. These results indicated that binding of FN to αvβ3 integrin and recruitment of talin1 were both essential for reinforcement of FN-integrin-cytoskeleton connections.

Talin1 causes tighter integrin-cytoskeleton connections

Without reinforcement, FNIII7-10-coated beads on talin1(-/-) cells may be less rigidly attached to the actin cytoskeleton and may show a greater diffusion rate perpendicular to the direction of the rearward movement.
We used the mean square displacement (MSD) of the bead diffusion (Qian et al., 1991) as a reporter of integrin diffusion after the bead escaped the laser trap (Fig. 4). Although beads moved toward the nucleus in all cases, on talin1(-/-) cells there was a high MSD perpendicular to the direction of movement (395 ± 170 nm²; Fig. 4, A and C) immediately after escaping the laser trap compared with reinforced beads in talin1(-/-)WT cells (52 ± 29 nm²; Fig. 4, B and C). Interestingly, when reinforcement was inhibited by GPen treatment, the MSD (157 ± 154 nm²) in talin1(-/-)WT cells was increased compared with nontreated talin1(-/-)WT cells, but reached talin1(-/-) cell values only after a period outside the trap (4 s; Fig. 4 C). This suggested that although αvβ3 integrin binding to FN was required for the reinforcement process, talin1 could link other integrins, probably β1, to the actin cytoskeleton (Felsenfeld et al., 1996). These data define talin1 as a key component in establishing a stable connection between integrins and the cytoskeleton, even in the absence of reinforcement (in the presence of GPen).

Talin1 induces recruitment of paxillin and vinculin at sites of force generation

Accumulation of paxillin and vinculin at matrix contacts on large beads is associated with the force-dependent reinforcement of integrin-cytoskeleton connections and focal complex initiation and stabilization (Galbraith et al., 2002; von Wichert et al., 2003). Thus, we tested if talin1 was required for paxillin and vinculin recruitment to sites of force generation (Fig. 5). We centrifuged large beads (5.9-μm diam) coated with FNIII7-10 onto talin1(-/-) and talin1(-/-)WT cells. In contrast to the small beads used for laser trap experiments, large beads do not require generation of external forces to induce accumulation of paxillin and vinculin, but rather myosin activity is important (Galbraith et al., 2002). The percentage of cells displaying paxillin-EGFP (Fig. 5 A) and EGFP-vinculin (Fig. 5 B) assembly underneath and around beads (bound within 10 μm from the edge) was more than threefold higher in talin1(-/-)WT cells (~60%) than in talin1(-/-) cells (~17%; Fig. 5 C). To exclude the possibility of a volume effect around the beads, we transfected cells with EGFP alone, which did not cause an increase in signal intensity around the beads in any case (unpublished data). Therefore, talin1 is involved in the recruitment of paxillin and vinculin, events correlated with strengthening of linkages between integrins and the cytoskeleton and focal complex initiation and stabilization.

FilaminA is not involved in force-dependent reinforcement of integrin-cytoskeleton connections

Filamin, like talin, binds to both integrins and actin filaments. Force-dependent rigidification of collagen-coated bead contacts was decreased in filaminA-deficient human melanoma (M2) cells compared with M2 cells rescued by stable expression of filaminA (A7 cells; Glogauer et al., 1998). Therefore, we tested reinforcement in M2 and A7 cells (Fig. 6). A lower level of reinforcement of FNIII7-10-coated beads was found in M2 (28 ± 22%) cells than in A7 cells (71 ± 23%; Fig. 6, A-C), which is consistent with previous observations (Glogauer et al., 1998). However, a substantial fraction of M2 cells (28%) displayed normal reinforcement, unlike talin1-deficient cells, which had only background levels of reinforcement (10%; Fig. 3 C). Moreover, unlike talin1-deficient cells (Fig. 4 C),
the average MSD of nonreinforced beads in filaminA-deficient M2 cells just after escaping from the laser trap (135 ± 111 nm²) was not significantly different from that of filaminA-expressing A7 cells (118 ± 127 nm²; Fig. 6 D). No difference in spreading on FN or VN was observed between M2 and A7 cells (unpublished data), suggesting that interaction of the integrins α5β1 and αvβ3 with ligand is normal. Despite the participation of full-length filaminA in the reinforcement process, it seems that the filaminA linkage with the actin cytoskeleton is not as critical as that of talin1. Therefore, this rules out a simple mechanism whereby integrins are bridged by filaminA to the actin cytoskeleton to form a force-sensing module.

Talin1 is involved in stretch-dependent adhesion site formation at early, but not late, times

Another role for talin1 may be in the response to stretching forces, where the recruitment of proteins such as paxillin into adhesion sites is promoted by force on the cytoskeleton (Sawada and Sheetz, 2002). To test this hypothesis, we used a stretchable, FN-like (pronectin)-coated silicone substrate to apply external forces to spreading cells. After 10 min of spreading, talin1(-/-) and talin1(-/-)WT cells expressing paxillin-GFP displayed no visible adhesion sites, and variation of the fluorescence at the cell edge corresponded to ruffling of the lamellipodia (Fig. 7, A and B, before stretch). Stretching of talin1(-/-)WT cells for 2 min induced the formation of new adhesion sites (55 ± 8%; Fig. 7 B, during stretch; and Fig. 7 D); however, the formation of new adhesion sites in talin1(-/-) cells was greatly reduced (11 ± 3%; Fig. 7 A, during stretch; and Fig. 7 D). Moreover, inhibition of αvβ3 integrin with GPen prevented the formation of these stretch-induced adhesion sites in talin1(-/-)WT cells (Fig. 7 D). In contrast, filaminA-null M2 cells showed no defect in early stretch-dependent formation of adhesion sites compared with rescued A7 cells (Fig. 7, C and D). Thus, talin1, but not filaminA, has a significant role in the force-sensing mechanisms leading to rapid stabilization of nascent integrin-cytoskeleton connections necessary for initiation of focal complexes and their stabilization. After 1 h of spreading on the pronectin-coated stretchable substrate, ~17% of talin1(-/-) cells had detectable adhesion sites without stretching (Fig. 7 F), which was similar to FN-coated glass (Fig. 2 B). Unlike the 10-min time point, stretching induced the formation of adhesion sites in ~48% of the cells (Fig. 7 E, after stretch; and Fig. 7 F), similar to the control (talin1(-/-)WT) cells after 10 min of spreading. The percentage of talin1(-/-)WT cells having adhesion sites after 1 h of spreading on the pronectin-coated substrate was already high (~60%); after stretching, 86% of talin1(-/-)WT cells displayed adhesion sites. That talin1 was not required for stretch responsiveness after 1 h was consistent with the fact that a similar percentage of talin1(-/-) and talin1(-/-)WT cells had adhesion sites after 24-h plating. Therefore, talin1 was critical in rapid force sensing by the early adhesions, but might be replaced by a slower accumulation of other actin-binding proteins around nascent ECM-integrin-cytoskeleton connections over time.

Discussion

Talin1(-/-) fibroblast-like cells appear normal in their spreading behavior and the activation of SFKs and FAK on FN.
Furthermore, inhibition of contractility inhibited activation of SFKs and FAK during spreading of both talin1(-/-) and talin1(-/-)WT cells, suggesting that talin1 is not necessary for integrin-dependent signaling in response to force. However, force-dependent reinforcement of FNIII7-10-integrin-cytoskeleton connections and stretch-induced initiation of focal complex assembly are decreased to nearly background levels. The normal phenotype is rescued by expression of full-length talin1, but not a talin1 polypeptide lacking the COOH-terminal actin-binding site. In contrast, the depletion of filaminA has a milder effect on reinforcement and no effect on the early stretch response. Reinforcement of submicrometer bead contacts has been linked to the recruitment of paxillin and vinculin, a response which is also missing in talin1(-/-) cells. Because of its ability to bind integrins, F-actin, vinculin, and FAK, and the lack of known enzyme functions, we suggest that talin1 plays a structural scaffolding role in focal complex initiation and assembly in response to force. Although it has been suggested that the formation of focal complexes is independent of force (Geiger and Bershadsky, 2002), recent studies have demonstrated that forces are required for the initiation and stabilization of focal complexes (Galbraith et al., 2002; von Wichert et al., 2003). Focal complexes are dissociated by inhibitors of myosin II-dependent contractility, but not by an inhibitor of Rho-kinase (Rottner et al., 1999). Accumulation of paxillin and vinculin around large FN-coated beads is dependent on Rac but not Rho activity, and is inhibited by the myosin light chain kinase inhibitor ML-7, which indicates that forces are involved in the initiation of focal complexes (Galbraith et al., 2002). In laser tweezers experiments, reinforcement of integrin-cytoskeleton connections is correlated with the assembly of paxillin/vinculin around FNIII7-10-coated beads experiencing external forces (Galbraith et al., 2002; von Wichert et al., 2003). Because talin1(-/-) cells are impaired in the force-dependent responses at early times, we suggest that talin1 is critical in the initiation and stabilization of focal complexes in response to forces.

Figure 7. Talin1 is critical for the early integrin- and stretch-dependent formation of adhesion sites. (A) Representative stretching (10%) of talin1(-/-) cells transiently transfected with paxillin-GFP. After 10 min of spreading, cells were stretched, held for 2 min, and subsequently relaxed by allowing the stretched silicone membrane to return to its original size. Before stretch (left), 2 min after stretch (middle), and after relaxation of stretch (right). Boxed areas are shown enlarged (right panels). (B) Representative stretching of talin1(-/-) cells transiently transfected with the full-length talin1 and paxillin-GFP. As for talin1(-/-) cells, no adhesion sites were visible in talin1(-/-)WT cells before stretch (left). However, after a 2-min stretch, adhesion sites were formed at the cell periphery, visualized by paxillin-GFP fluorescence (arrows). Boxed areas are shown enlarged (right panels). (C) M2 (filaminA-null) or A7 cells (stably expressing filaminA) were allowed to spread for 10 min and stretched for 2 min before fixation with 3.7% formaldehyde/PBS. M2 and A7 cells were subjected to immunostaining using anti-paxillin antibody.
Increased diffusion of FN beads on talin1(-/-) cells, as well as the slower rate of attachment to the cytoskeleton, are both consistent with a structural role for talin in the linkage between the FN-integrin complex and the cytoskeleton (Priddle et al., 1998; Brown et al., 2002). Nevertheless, the movement of FNIII7-10-coated beads out of the laser trap in talin1(-/-) cells implies that other cytoskeletal proteins can couple integrins to the cytoskeleton (Liu et al., 2000), even though those linkages are weaker and cannot be strengthened by force application. In a separate study, we found that talin1 is required to create a discrete, but weak, mechanical linkage between the FNIII7-10 trimer-integrin complex and the cytoskeleton. The bond can slip and form new bonds repeatedly (Fig. 8 A). Forces generated during reinforcement can break this linkage, suggesting that the formation of multiple discrete linkages, or the recruitment of additional proteins, is needed to strengthen integrin-cytoskeleton connections. At a molecular level, the talin1 dimer contains several binding sites for F-actin, vinculin, and integrins (Critchley, 2000; Xing et al., 2001). Therefore, talin1 may link as many as four liganded integrins to the cytoskeleton; with assistance from vinculin (Bass et al., 1999) or paxillin (Turner, 2000), talin1 may form the basis of a large complex that would be reinforced through multiple interprotein bonds. We suggest that the talin1-dependent recruitment of paxillin, vinculin, and other proteins to sites of force application enables the strengthening of linkages between integrins and the cytoskeleton (Fig. 8 B). Recent studies show that the deletion of RPTPα from cells inhibits αvβ3 integrin-dependent SFK activation and reinforcement very effectively (von Wichert et al., 2003). In talin1(-/-)WT cells, inhibition of αvβ3 integrin signaling by GPen treatment also inhibits the reinforcement process but does not totally suppress linkages to the cytoskeleton. Likewise, the initial talin1-dependent connections between integrin and the cytoskeleton form normally in RPTPα(-/-) cells. In the case of talin1(-/-) cells, the activation of SFKs is normal, with no difference in the amount of SFK and FAK activation in response to forces at early times (Fig. 8 D). This suggests that force acts at two levels in reinforcement of integrin-cytoskeleton connections. At early times, force activates parallel enzymatic and structural changes, and talin1 is involved in structural sensing of force on integrin-cytoskeleton linkages. Consistent with our suggestion, recent work has shown that talin is not required for integrin-mediated signaling to regulate gene expression during D. melanogaster embryogenesis (Brown et al., 2002). We propose that FN-integrin-talin1-actin connections and αvβ3 integrin-dependent activation of RPTPα/SFKs comprise two major elements in a minimum reinforcing module, with RPTPα/SFKs being the regulatory component and talin1-actin being the scaffolding modified in response to force (Fig. 8). Functional analyses using microinjection of talin antibodies (Nuckolls et al., 1992; Bolton et al., 1997) or recombinant domains of talin (Hemmings et al., 1996) show that talin is involved in stabilization of adhesion sites. At later times, talin1(-/-) cells are able to form adhesion sites (Priddle et al., 1998; Fig. 1) and are responsive to matrix stretching (Fig. 7), which indicates that other proteins can substitute for talin1 in building later integrin-cytoskeleton connections.
Figure 8. Talin1 acts as a scaffold but does not support the signaling in the reinforcement of integrin-cytoskeleton interactions. (A) FN binding to αvβ3 integrin induces the rapid formation of a weak, slipping connection between talin1 and the rearward-moving cytoskeleton. (B) Sustained force applied on the FN-integrin-cytoskeleton connection induces αvβ3 integrin-dependent activation of RPTPα/SFKs, which is responsible for paxillin and vinculin (green triangles) recruitment to the talin1/cytoskeletal interface. Assembly of this complex leads to stabilization of the talin1-cytoskeleton connection and is responsible for focal complex initiation and stabilization. (C) In the absence of talin1, protein X is envisaged to support the delayed interaction of integrins with the actin cytoskeleton. (D) Protein X supports the force-induced activation of RPTPα/SFKs, but this does not result in efficient recruitment of paxillin and vinculin, and this precludes reinforcement of the initial linkage. The interaction of integrins with the actin cytoskeleton is not stabilized and focal complexes fail to form. In either case, RPTPα-SFK pathway activation is involved in stimulation of cell spreading.

Indeed, when talin localization to adhesion sites is altered by antibody injection (Nuckolls et al., 1992) or sequestration of phosphoinositides (Martel et al., 2001), there is no simultaneous disruption of mature adhesion sites. However, antibody injection disrupts newly formed adhesion sites or prevents their formation, which further emphasizes the critical role of talin during early formation of adhesion sites, rather than maturation or stabilization of older adhesion sites. Tensin, which colocalizes with paxillin, seems a better candidate to functionally replace talin1 than filaminA or α-actinin (unpublished data). Talin has been described as an early component of adhesion site precursors and has been localized at the distal margins of lamellipodia (Izzard, 1988), as have high-affinity integrin (Kiosses et al., 2001; Nishizaka et al., 2000) and RPTPα (von Wichert et al., 2003). Therefore, components of the early force-sensing apparatus are concentrated at the leading edge of the cell and respond rapidly to localized changes in the stiffness of ECM proteins (Choquet et al., 1997). In terms of a general model, we suggest that talin1 is a critical part of the minimum specific complex linking integrins to the cytoskeleton. Cells build on this complex in response to force, as emphasized by the dramatic effect of talin1 disruption on cell migration at gastrulation (Monkley et al., 2000). Although the activation of signaling pathways by force exerted on integrins is important for cellular processes such as motility, proliferation, apoptosis, and morphogenesis, the structural scaffold provided by talin also appears to be a critical factor.

Spreading assays

Spreading assays were performed as described previously with VN- and FN-coated surfaces (von Wichert et al., 2003). For paxillin distribution assays, cells were transiently transfected using Fugene 6 with paxillin-GFP, plated as described in the previous paragraph, and subsequently analyzed by confocal microscopy (100×; model Fluoview 300; Olympus). All cells displaying at least one distinct adhesion site were scored positive. The criterion used to classify an adhesion site was its intensity compared with the surrounding region.
Preparation of FN-coated silica beads

0.64-μm silica beads (Bangs Laboratories, Inc.) were coated with the trimer of FN as described previously.

Laser trap experiments

For bead binding assays, beads were held for 3 s on the cell surface 0.2-0.5 μm from the leading edge using a 100-mW (20 pN/μm) optical gradient laser trap setup (model Axiovert 100TV; Carl Zeiss MicroImaging, Inc.) (Felsenfeld et al., 1996; Choquet et al., 1997). Beads were scored as attached if they remained in focus in the plane of the membrane for 10 s after inactivating the trap. For MSD assays, ligand-coated beads were held in the 100-mW laser trap on the cell surface until the bead had moved >500 nm from the trap center. x and y coordinates were determined from video micrographs using single-particle tracking routines performed with Isee software (Invision Corporation) running on a Silicon Graphics O2 workstation. The MSD was calculated using an algorithm modified from Qian et al. (1991). For reinforcement assays, ligand-coated beads were held in a 100-mW laser trap on the cell surface for up to 30 s or until the bead had moved >600 nm from the trap center. Beads still in the trap after 30 s were scored as "no escape." Beads were tested with a second pulse of the trap (100 mW) positioned <0.5 μm behind the bead (toward the leading edge). Beads were scored as "reinforced" if they could not be rapidly (within <100 ms) displaced by >100 nm after the 100-mW test pulse.

Large bead assays

Large bead assays were performed as described previously (von Wichert et al., 2003).

Living cell stretching experiments

Talin1(-/-) or talin1(-/-)WT cells transiently transfected with paxillin-GFP were plated on the pronectin-coated silicone membrane (Flexcell International) for 10 min and stretched biaxially (10% in each dimension) for 2 min. For living cell experiments, the GFP fluorescence was observed by fluorescence microscopy (model BX50; Olympus; using a 60×, 0.9 NA water immersion objective); 5 min after stretch, the silicone substrate was relaxed to its original size, and the fluorescence was recorded. Otherwise, for determination of the percentage of responsive cells, talin1(-/-), talin1(-/-)WT, M2, and A7 cells (transfected as indicated in the figure legends), cultured on silicone membranes, were stretched for 2 min and fixed with 3.7% formaldehyde/PBS. After fixation, the cells were permeabilized with 0.1% Triton X-100/PBS and subjected to immunostaining as indicated.

TIRF microscopy

FN-coated coverglass was placed on the TIRF microscope at 37°C with a cell suspension in 0.5% serum media. A cooled CCD camera (model CoolSnap fx; Roper Scientific) recorded digital grayscale images from the microscope to a computer running custom image-capture software operating as a plugin within the free ImageJ (http://rsb.info.nih.gov/ij) software package. TIRF images were captured every 10 s. The TIRF laser was synchronously shuttered with the CCD camera to mitigate phototoxicity and photobleaching.
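The Methods state that the MSD was computed with an algorithm modified from Qian et al. (1991) from single-particle-tracking coordinates, but the implementation itself is not given. The sketch below is a generic, time-averaged MSD for one coordinate (for example, the bead position perpendicular to the direction of rearward movement); the frame interval, trace format, and the max-lag rule of thumb are assumptions, not the authors' code.

```python
import numpy as np

def msd_1d(pos_nm, max_lag=None):
    """Time-averaged mean square displacement of a 1-D coordinate trace.

    pos_nm: positions (nm) sampled at a constant frame interval, e.g. the
        bead coordinate perpendicular to the direction of rearward movement.
    Returns an array msd[lag - 1] for lag = 1 .. max_lag (in frames).
    """
    pos = np.asarray(pos_nm, dtype=float)
    n = len(pos)
    if max_lag is None:
        max_lag = n // 4  # common rule of thumb; long lags are noisy
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = pos[lag:] - pos[:-lag]
        msd[lag - 1] = np.mean(disp ** 2)
    return msd

# Toy traces: a loosely coupled bead wanders more, so its MSD grows faster
rng = np.random.default_rng(0)
loose = np.cumsum(rng.normal(0, 15, 200))   # ~15 nm random steps per frame
stiff = np.cumsum(rng.normal(0, 5, 200))    # ~5 nm random steps per frame
print(msd_1d(loose, 5).round(1))
print(msd_1d(stiff, 5).round(1))
```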
2014-10-01T00:00:00.000Z
2003-10-27T00:00:00.000
{ "year": 2003, "sha1": "7cb8b4f7e4bf05c9eb821e04e9587d1c571db47d", "oa_license": "CCBYNCSA", "oa_url": "http://jcb.rupress.org/content/163/2/409.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "7cb8b4f7e4bf05c9eb821e04e9587d1c571db47d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
132711739
pes2o/s2orc
v3-fos-license
Role of homoeopathic medicine in the treatment of infantile haemangioma Infantile haemangioma is a benign vascular tumour of childhood, characterised by endothelial cell proliferation. It usually develops shortly after birth and grows most rapidly over the first 6 months. However, it may keep growing for up to 12–18 months. After that, it undergoes regression or involution, and 50% of all infantile haemangiomas have completed involution by the age of 5 years, 70% by the age of 7 years and 90% by the age of 9–12 years. However, in a small percentage of patients in whom the haemangioma does not disappear completely, residual fatty tissue or superficial skin telangiectasias remain. These patients may require drug therapy (propranolol/timolol/steroids/vincristine), surgery and/or laser therapy, often during childhood, involving certain risks or side effects. However, homoeopathic medicine can quickly, safely and effectively diminish proliferative growth and hasten resolution without any side effects. Two children with infantile haemangioma were treated with homoeopathic medicines, selected on the basis of their totality of symptoms and repertorisation. Each child was followed up at 2–4-week intervals, and photographs were taken to assess/compare the vascularity, height (thickness), pliability and pigmentation according to the Vancouver Scar Scale chart. In the 1st case, the score reduced from 9 to 1 in about 10 months of follow-up and showed 88.8% improvement. In the 2nd case, the score reduced from 9 to 0 in about 10 months of follow-up and showed 100% improvement. These case reports show that early treatment of infantile haemangioma with homoeopathic medicine can diminish proliferative growth and hasten resolution as early as possible without any side effects. Case 1 A 5-month-old male child presented with a large, firm, reddish, vascular tumour on the nose [ Figure 1a]. At birth, the tumour was not observed, but within a month, reddish discolouration appeared on the nose. Gradually, it increased in size, looked like a soft reddish, bloody cap and gradually became firm. The child's mother consulted a paediatrician who advised her to drain the blood through a surgical procedure at the age of 3 months, but she did not follow this advice. The umbilicus was enlarged and increased further in size during crying. The other symptoms were: • Discharge from the left ear for 2 months • Hiccoughs lasting a long time and recurring frequently • Wept all the time but was quiet when carried • Tendency to catch cold easily. The baseline assessment of the patient was done according to the Vancouver Scar Scale on the first visit; the score was 9, categorised as 'severe' [ Table 2]. Case Analysis Based on the symptoms, repertorisation of this case was done according to the Kentian method. On repertorial analysis, both 'Calcarea carb' and 'Calcarea fluor' covered most of the rubrics (4 out of 6), but Calcarea carb scored the highest marks (9), whereas 'Calcarea fluor' scored the next highest marks (8) [ Table 3]. The Calcarea carb patient presents the guiding symptom of 'Profuse sweat on the occiput, wetting the pillow' and a constitution of flabby muscles and a large head, but the patient did not present with this type of symptoms or constitution. On the other hand, Calcarea fluor is more effective in treating bloody tumours of newborns than Calcarea carb. Calcarea fluor covers the physical, general and particular symptoms, and these were compared in the Materia Medica of different authors.
Hence, the most appropriate remedy selected for this case was 'Calcarea fluor'. Prescription and follow-up Two doses of Calcarea fluor in centesimal potency (1M) were administered to the patient at a time. One globule (poppy-seed size) of the medicine in 1M potency was dissolved in 10 ml of distilled water containing 0.2 ml (2% v/v) of dispensing alcohol, pre-mixed in it, followed by ten uniformly forceful downward strokes given against the bottom of the phial. This solution was given to the patient with instructions regarding the dosage. Eventually, improvement was observed within 2 weeks of treatment, and within 4 months the hiccough was relieved, the otorrhoea subsided, and the umbilicus was restored to normal size. Medication was not repeated as long as the improvement in the patient's symptoms continued. When improvement did not continue, the dose was repeated by gradually increasing the potency, i.e., the power of the medicine, such as 10M, 50M and CM step by step [5] when required [ Table 4]. After 10 months of the treatment, the haemangioma reduced considerably [ Figure 1b]. In this case, the VSS score reduced from 9 to 1 in 10 months of follow-up and showed 88.8% improvement [ Table 4]. This result shows the usefulness of the homoeopathic medicine Calcarea fluor in treating haemangioma. Case 2 A 6-month-old female child presented with a large, firm, reddish, vascular tumour on the left cheek [ Figure 2a]. The patient was the 1st child and was delivered normally with a history of low birth weight. At birth, the tumour was not observed, but within a month, reddish discolouration on the cheek appeared. Gradually, it increased in size, looked like a soft reddish, bloody cap and gradually became firm. Other symptoms were: • Profuse sweat on the head • Suppuration of both ears • Passes copious urine • During eating, abdomen distended. The baseline assessment of the patient was done according to the VSS on the first visit; the score was 9, considered as severe [ Table 5]. Case Analysis Based on the symptoms, repertorisation of the case was done as per the Synthesis repertory. On repertorial analysis, 'Silicea' scored the highest marks (6), covering most of the rubrics (4 out of 6) [ Table 6]. The physical, general and particular symptomatologies were well covered by this remedy and compared according to the Materia Medica of different authors. Hence, the most appropriate remedy for this case was 'Silicea'. Prescription and follow-up Sixteen doses of Silicea in fifty millesimal potency (LM2) were administered. One globule (poppy-seed size) of the medicine in 0/1 potency was dissolved in 80 ml of distilled water containing 1.6 ml (2% v/v) of dispensing alcohol, premixed in it, followed by ten uniformly forceful downward strokes given against the bottom of the phial. This solution was given to the patient with instructions regarding the dosage. Improvement was observed within 2 weeks of the treatment, and within 3 months the otorrhoea subsided, sweat reduced, and the firmness of the haemangioma slightly improved. The doses were repeated by gradually increasing the potency, 0/2, 0/3 and 0/4 [6]. However, after that, no improvement occurred. Therefore, after careful consideration of the symptom 'Cicatrices (scar tissue)' and the corresponding repertory rubric, Thiosinamine 0/1 was prescribed. After prescribing this medicine, remarkable improvement occurred, and the dose was repeated by gradually increasing the potency, as 0/2, 0/3, 0/4, 0/5 and 0/6 step by step [ Table 7].
Within 10 months of the treatment, the haemangioma reduced [ Figure 2b]. In this case, the VSS score reduced from 9 to 0 in 10 months of follow-up and showed marked improvement [ Table 7]. This result shows the utility of the homoeopathic medicine as well as fifty millesimal potency in the treatment of haemangioma. There was no scarring, ulceration or infection resulting from homoeopathic therapy in this study. Much more time than usual was required only because the doses were not administered properly as instructed. Discussion In these case reports, the patients showed improvement not only in the haemangioma but also in other associated complaints with the prescribed homoeopathic medicine. The results of these cases demonstrate that early treatment of infantile haemangioma with homoeopathic medicine can diminish the growth and hasten resolution without any side effects. Homoeopathic therapy shows positive results in the treatment of haemangioma, and these results need further validation by conducting clinical trials.
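The percentage improvements quoted for the two cases (88.8% and 100%) follow from the Vancouver Scar Scale scores if the usual (baseline − final)/baseline formula is assumed; the report does not state the formula explicitly, so this is a hedged reconstruction of the arithmetic:

```python
def vss_improvement(baseline_score, final_score):
    """Percentage improvement relative to the baseline Vancouver Scar Scale score."""
    return (baseline_score - final_score) / baseline_score * 100

print(round(vss_improvement(9, 1), 1))  # Case 1: 88.9 (the report rounds this to 88.8%)
print(round(vss_improvement(9, 0), 1))  # Case 2: 100.0
```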
2019-04-26T13:36:14.221Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "6a8631633b82dc868224b6f2cc6fce2969f880cc", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/ijrh.ijrh_53_18", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "8dfdb6b0061dfc0136043a1c44fc2f7846d373a2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232775308
pes2o/s2orc
v3-fos-license
Evaluating the Effect of a Neonatal Care Bundle for the Prevention of Intraventricular Hemorrhage in Preterm Infants Germinal matrix intraventricular hemorrhage (IVH) remains a severe and common complication in preterm infants. A neonatal care bundle (NCB) was implemented as an in-house guideline at a tertiary neonatal intensive care unit to reduce the incidence of IVH in preterm infants. The NCB was applied either to preterm infants <1250 g birth weight or <30 weeks gestational age or both, and standardized patient positioning, nursing care, and medical procedures within the first week of life. A retrospective cohort study was performed to investigate the effect of the NCB and other known risk factors on the occurrence and severity of IVH. Data from 229 preterm infants were analyzed. The rate of IVH was 26.2% before and 27.1% after implementing the NCB. The NCB was associated neither with reducing the overall rate of IVH (odds ratio (OR) 1.02; 95% confidence interval (CI) 0.57–1.84; p = 0.94) nor with severe IVH (OR 1.0; 95% CI 0.67–1.55; p = 0.92). After adjustment for group differences and other influencing factors, amnion infection syndrome and early intubation were associated with an increased risk for IVH. An NCB focusing on patient positioning, nursing care, and medical interventions had no impact on IVH in preterm infants. Known risk factors for IVH were confirmed. Introduction Germinal matrix intraventricular hemorrhage (IVH) is a common form of brain damage in preterm infants <32 weeks of gestation [1]. IVH occurs mainly within the first days of life and can spread from the germinal matrix region into the lateral ventricles. If the bleeding obstructs terminal veins, periventricular hemorrhagic infarction (PVHI) can occur [2,3]. While even mild IVH (grade 1 and 2) may increase the risk of neurodevelopmental impairment, severe IVH (grade 3 and PVHI) is particularly associated with a poor neurological outcome [4]. Since there are few therapeutic options after diagnosing IVH [5], prevention is of crucial importance. Prenatal preventive measures include avoidance of preterm delivery, antenatal corticosteroid therapy, and transfer of mothers in preterm labor to centers specialized in high-risk delivery [6,7]. Postnatal preventive interventions aim to reduce stress and hemodynamic fluctuations through appropriate respiratory support and adequate blood pressure management. Increasing evidence suggests that patient posture, i.e., keeping the preterm infant's head in midline and elevated position, decreases IVH incidence [2,[8][9][10][11][12][13][14]. Based on average performance concerning (severe) IVH reported on a German nationwide information portal on the quality of care for very-low-birth-weight infants [15], in 2012, the study site's neonatal intensive care unit (NICU) introduced a neonatal care bundle (NCB) as part of an in-house guideline and quality improvement initiative aiming to reduce the incidence of IVH. It focused primarily on patient positioning and minimal handling during the first week of life and applied either to infants <1250 g birth weight or <30 weeks gestational age or both. 
The NCB encompassed several aspects for which associations with IVH have been reported or are thought to influence IVH risk: avoidance of head rotation and tilting the incubator to promote cerebral venous drainage [16][17][18]; specifying parts of nursing care and medical procedures (for example, using closed suction catheters and slow arterial blood sampling/flushing) in order to minimize rapid cerebral blood flow fluctuations [11,19]. The NCB was based on a treatment concept developed at the perinatal center of the Children's University Hospital Ulm, leading to a reduction in IVH incidence [20]. To investigate whether the NCB affected the incidence of IVH, we conducted a retrospective cohort study. Study Design This retrospective analysis of two historical cohorts was carried out at the NICU of the Children's Hospital Singen, a tertiary perinatal center. It covered a period between July 2006 and June 2017. The NCB was implemented in June 2012. Digital and paper patient records were analyzed. The study was registered at the German Register of Clinical Trials (trial no. DRKS00018859) and was approved by the ethics committee of the Albert-Ludwigs-University of Freiburg (application no. 350/19). Due to the anonymized data and retrospective design, the ethics committee granted a waiver of informed consent. Patients The analysis included all preterm infants with either <1250 g birth weight or <30 weeks gestational age or both admitted to the NICU within the first 72 h of life. Infants with complex congenital malformations (i.e., cardiac, renal, or neurological) or palliative care were excluded. Routine Care and Neonatal Care Bundle for the Prevention of IVH During the study period, respiratory care was performed according to international standards [21]. Preterm infants with respiratory distress syndrome received nasal continuous positive airway pressure or mechanical ventilation when necessary. The target range for CO 2 was between 40 and 60 mmHg with a pH > 7.25. The oxygen saturation target was 89-95%. Surfactant was administered when FiO 2 requirement was >0.3 to 0.4. Arterial hypotension, defined by mean arterial pressure lower than gestational age in mmHg combined with poor peripheral perfusion, was treated by administering isotonic fluid boluses or inotropic agents (dopamine, noradrenaline, or norepinephrine) as needed. Indomethacin and ibuprofen were used for pharmacologic closure of hemodynamically significant persistent ductus arteriosus (PDA). Fluid management followed the European Society for Paediatric Gastroenterology, Hepatology, and Nutrition guidelines on pediatric parenteral nutrition [22]. Prenatal management by the study site's Obstetrics and Gynecology department consisted of tocolysis, antenatal corticosteroids, and treatment of amniotic infection syndrome (i.e., clinical chorioamnionitis) with antibiotics. Before implementing the NCB, preterm infants were placed in varying positions, i.e., prone, supine, or lateral position, without explicit specifications for head positioning. Elevation of the bed's head, i.e., tilting the incubator, was also not performed routinely. Nursing interventions and medical/diagnostic procedures were performed according to nurses' and physicians' discretion; drawing blood from arterial lines was not standardized. After a three-month development and training period between March and May 2012, the NCB was introduced as an in-house guideline at the study site in June 2012. 
It standardized patient positioning, nursing care, and medical procedures within the first week of life ( Table 1). The main focus was set on patient posture: (i) maintaining a supine midline position for the first three postnatal days, (ii) neutral head position with avoidance of prone position until the seventh day of life, and (iii) tilting the incubator to 10 to 20 degrees in the first week of life. Weight was checked on the first and fourth day of life; measurement of head circumference and length and routine nursing interventions such as cleaning the incubator and extended body wash had to be performed not earlier than the fourth day of life. All invasive medical procedures within the first week of life had to be performed by experienced neonatologists. Medical procedures and nursing measures had to be adapted to the care rounds; inadequate exposure to light and noise had to be avoided; signs of stress and pain were continuously monitored using eligible pain scales. Table 1. Neonatal care bundle for the prevention of IVH (based on [20]). Review of Practice -All cases with IVH ≥ grade 3 and PVHI should be discussed within case conferences. Neuroimaging Before implementing the NCB, cranial ultrasound scanning (CUS) was performed on the first and fourth day of life. Afterwards, CUS had to be performed not earlier than the fourth day of life, except for diagnostic workup of severe anemia. In both cohorts, follow-up CUS was performed depending on respective findings at least once every two weeks until discharge. IVH was classified according to Volpe and Deeg [23,24] by three neonatologists. Study Objectives The study endpoint was the rate of IVH before and after implementing the NCB and investigating the influence of group differences in the cohorts and known/potential risk factors on IVH rate. Included influencing/risk factors were (i) treatment according to the NCB, (ii) amniotic infection syndrome, (iii) absence or incomplete antenatal corticosteroid therapy for fetal maturation, (iv) <28 weeks gestational age, (v) birth weight <1000 g, and (vi) arterial hypotension with the need for catecholamines. Statistical Analysis Basic patient data and the presence of risk factors for IVH were reported descriptively and stratified by the treatment according to the NCB (before vs. after). Possible differences between groups were evaluated using t-test in normally distributed numerical factors and Mann-Whitney U-test in non-normally distributed ones, as well as chi-square test for categorical items. Statistical significance was set at p-value <0.05. Statistical analyses were performed using logistic regression (outcome: occurrence of IVH) to adjust for group imbalances and influencing/risk factors for IVH factors (each coded yes vs. no depending on presence or absence) and generalized logit models (outcome: severity of IVH). Possible collinearity and interactions of the influencing/risk factors were tested. No adjustment for multiple testing was made. All p-values are descriptive. Analyses were performed using Statistical Package for Social Sciences Version 25 (IBM Corp., Armonk, NY, USA) and SAS Version 9.2 (SAS Institute, Cary, NC, USA). Results In total, 247 infants with either a birth weight <1250 g or <30 weeks of gestation or both were eligible for participation in this study. 
Eighteen cases met exclusion criteria (palliative care n = 9; chromosomal aberration with severe heart defects n = 5; patient charts not evaluable n = 2; antenatally diagnosed intracranial bleeding n = 2; admittance to the study site's NICU >72 h of age n = 1), which led to 229 infants for further analysis. Two infants died before cranial ultrasound was performed. Respective data were reported but not considered as a competing endpoint in the analysis. Patient characteristics are shown in Table 2. Prophylactic indomethacin treatment for PDA and amnion infection syndrome was reported more often before implementing the NCB, while intubation within the first 72 h of life was reported more often afterwards. Implementing the NCB did not reduce the rate of IVH in general or its respective grades of severity ( Figure 1). Preterm infants <500 g birth weight and <26 weeks gestational age showed higher numbers of IVH after the implementation, while a decrease in IVH rate was observed in the remaining groups stratified by birth weight and gestational age, except for groups 1000-1249 g and 28-29 weeks, respectively (Table 3a,b). In univariate analysis, treatment according to the NCB was not associated with a reduction in overall IVH (odds ratio (OR) 1.02; 95% confidence interval (CI) 0.57-1.84; p = 0.94). This was also the case for mild IVH (IVH grade 1 or 2: OR 1.0; 95% CI 0.7-1.44; p = 0.98) and severe IVH (IVH grade 3 or PVHI: OR 1.0; 95% CI 0.67-1.55; p = 0.92). Neither mortality (OR 0.95; 95% CI 0.39-2.28; p = 0.90) nor IVH-free survival (OR 1.06; 95% CI 0.60-1.86; p = 0.84) were affected by implementing the NCB. Using multivariable logistic regression to adjust for the reported group differences and risk factors for IVH, we included the influencing/risk factors mentioned in Section 2.5, and early intubation and prophylactic PDA treatment with indomethacin in the regression model. After adjusting for these factors, treatment according to the NCB was not associated with reducing overall IVH (adjusted (a) OR 1.0; 95% CI 0.42-2.2; p = 0.90). Amniotic infection syndrome and early intubation within the first 72 h of life were associated with an increased risk of suffering from an IVH (a OR 3.2; 95% CI 1.4-7.1; p < 0.01 and a OR 12.6; 95% CI 4.3-36.9; p < 0.01, respectively). The remaining influencing/risk factors showed p-values > 0.05 in the regression model. Discussion This retrospective analysis evaluated the effect of an NCB to reduce the incidence of IVH in very preterm infants. Implemented as an in-house guideline at the study site, the NCB aimed to reduce rapid cerebral blood flow fluctuations within the first days of life through standardizing patient positioning, nursing care, and medical interventions. We failed to detect any impact on both the overall rate of IVH and the respective grades of severity after implementing the NCB, as they remained virtually unchanged. We identified amniotic infection syndrome and endotracheal intubation within the first 72 h of life as risk factors for IVH, but the different incidences of these factors in our study cohorts before and after implementing the NCB did not explain the lack of effect of our NCB. While the mean gestational age was the same in our cohorts, the mean birth weight was slightly lower after implementing the NCB (907.8 g vs. 952.3 g). After stratification by birth weight and gestational age, preterm infants <500 g and <26 weeks of gestation showed higher rates of IVH after implementing the NCB. 
Amnion infection syndrome and prophylactic indomethacin treatment for PDA were reported more often before implementing the NCB, while intubation during the first 72 h of life had to be performed more often afterwards. These discrepancies regarding potential influencing factors on IVH applied to the entire study population ( Table 2) but were also true for the mentioned subgroup of preterm infants <500 g and <26 weeks (not shown). Whereas amniotic infection syndrome represents a detrimental factor and prophylactic indomethacin a potential protective factor in the pathogenesis of IVH [12,25], the need for early intubation may indicate more clinically unstable infants after implementing the NCB. Additionally, early intubation and mechanical ventilation are risk factors for IVH [26]. With the simultaneous presence of counteracting influential factors on IVH, we have no explanation for the higher IVH rate in the subgroup of preterm infants <500 g and <26 weeks after implementing the NCB other than this being a finding by chance related to the small sample size. Apgar scores at five and ten minutes were lower after implementing the NCB. However, Apgar scores above six at five minutes were not associated with IVH in a recent cohort study [1]. Dalili et al. did not find significant associations between a low conventional Apgar score and IVH [27]. Furthermore, in Schmid et al., the cohort with reduced IVH rates had significantly lower five-and ten-minute Apgar scores [20]. As part of our statistical analysis, we included the five-and ten-minute Apgar scores in the regression model without altering the results (treatment according to the NCB: a OR for IVH 1.0; 95% CI 0.45-2.3; p = 0.94). Considering the already extensive model, we excluded Apgar scores from the final model. After adjusting for the mentioned group differences and influencing/risk factors using multivariable logistic regression, treatment according to the NCB was not associated with a lower (or higher) risk for IVH. We identified amniotic infection syndrome and early intubation within 72 h of life as risk factors for IVH at our study site, which is in concordance with other publications concerning this topic [26,28]. Besides the mentioned group differences, poor protocol adherence could have influenced the outcome as we did not quantify or record it. However, adherence to the NCB was assessed regularly during repeated daily ward rounds. In 2013, Schmid and co-workers reduced IVH rates in preterm infants by implementing a comprehensive treatment concept and prospectively monitoring risk factors for IVH [20]. Our observed overall IVH rates of 26.2% (before NCB) and 27.1% (after NCB) are higher than those published by Schmid et al. regarding their cohort before introducing the treatment concept to prevent IVH (22,1%). However, in the cohort of Schmid et al., 23.2% of infants had a birth weight of 1250-1499 g, whereas this was only 8.2% in our cohort. The incidence of overall IVH and severe IVH for infants <1250 g was equal in our cohort (25.9% and 12.5%, respectively) and the cohort of Schmid et al. (25.3% and 11.9%). While the Schmid study included a much larger number of patients, the authors gave no explanation as to why the cohort after implementing the NCB was smaller (n = 191), and the recruitment period extended over a shorter time (23 months) than the cohort before NCB implementation (n = 265; 31 months). Although mean gestational age was significantly higher after implementing the NCB, Schmid et al. 
addressed this imbalance by adjusting for gestational age, which resulted in non-significant differences between the two cohorts concerning survival without IVH and severe IVH, especially in preterm infants with a birth weight <1000 g. The study authors also noted that interventions such as delayed cord clamping or cord milking might have influenced neonatal outcomes in the cohort after implementing the NCB (neither intervention was performed during our study). Likely, the effect of a particular treatment concept cannot easily be transferred to another center even though its elements have been adopted unchanged. In their publication, the authors pointed out that the center-specific measures can only be applied to other hospitals to a limited extent. Schmid et al. instead provided information on how a center with a high incidence of IVH could proceed to reduce the incidence of IVH, including interdisciplinary cooperation, identification of specific risk factors for IVH, comparison of the center's approach preventing IVH with a center with a low IVH incidence, and development of an individual treatment concept and verification of its adherence [20]. Along with the work of Schmid et al., at least one other study has shown that similar interventions can lead to a decrease in IVH incidence. In 2019, de Bijl-Marcus et al. evaluated the effect of a nursing intervention bundle on IVH incidence, which was remarkably similar in its essential points to our NCB. The authors focused on posture (maintaining midline head position; tilting the incubator 15 to 30 degrees; avoiding head down positions and sudden elevation of the legs; avoiding rapid blood sampling from arterial lines; avoiding rapid intravenous/arterial flushes) during the first 72 postnatal hours. They observed a significant decrease in "new or progressive IVH" within the first 72 h after birth (a OR 0.34; 95%CI 0.20-0.56; p < 0.001) as well as in their primary composite outcome, "new or progressive IVH, mortality or cystic periventricular leukomalacia" (a OR 0.42; 95% CI 0.2-0.65; p < 0.001) [13]. These results are contrary to ours as De Bijl-Marcus and colleagues observed a decrease in IVH rate, especially in the group of very immature preterm infants, while we observed an increase. The cause of this discrepancy remains unclear since the two NCBs were quite similar in terms of their main components, but it may also be due to our smaller sample size. With one key point in patient positioning (i.e., head in elevated midline position) during the first days of life, the discussed NCB implemented pathophysiological considerations on the genesis of IVH into the clinical routine. Based on observations, the mentioned head position prevents venous congestion as a potential contributory cause of IVH [16][17][18]29]. Although this has been recommended for years [12], the clinical effect of specific patient and head positioning to prevent IVH in preterm infants during the first days of life is still under evaluation, with varying results [30][31][32][33]. Recently, Kochan et al. contributed to a possible clarification of this issue with a randomized controlled trial in which they found fewer PVHI in preterm infants when they were nursed in a supine position with the bed elevated at 30 degrees (which is considerably more than in our NCB) [34]. However, this study had several limitations as baseline characteristics differed among the groups, with more pre-eclampsia in the treatment group and more prolonged rupture of membranes in the control group. 
There was no difference in the overall rate of IVH and IVH grades of severity 1 to 3 [35]. Since the issue of optimal postnatal positioning is not yet conclusively clarified, it must also be taken into account that, for example, by strictly avoiding prone position during the first days of life, potential positive effects of prone position on ventilation and food tolerance are withheld from preterm infants [36][37][38]. This single-center retrospective cohort study has several limitations. The patient number may have been too small to detect an effect of the NCB on IVH, although no trend could be observed. Possibly, other factors crucial to the preterm infants' care were not sufficiently analyzed by this study. There may have been lingering changes in care practices that we were not aware of but may explain the lack of effect of the NCB on the IVH rate. Concerning the logistic regression interpretation, it has to be considered that, inherently, several influencing/risk factors were concomitantly present in our cohorts, despite our testing for collinearity and interactions. For example, prophylactic indomethacin for PDA was typically applied only to very immature infants, mainly intubated within 72 h of life, while inherently having the highest risk for IVH. These factors may have led to nonsignificant results for obvious risk factors for IVH, such as low gestational age, when applying the full regression model. To reduce stress, and as CUS during the first three days of life has no impact on treatment (exception defined in Section 2.4), we performed the first CUS on the fourth day of life. Therefore, this study could not evaluate the NCB's effect on the occurrence of IVH during the first three days of life. Our approach also hampered differentiation between IVH and prenatally occurring intracranial hemorrhage. However, the latter is relatively rare and should have had little impact on the detected IVH rates [39]. Transferring NCBs that have successfully reduced the rate of IVH elsewhere does not guarantee the same results in another center. Therefore, as with every intervention, the proof of concept always requires additional studies. As our study is, like others, not randomized, confounding factors for this multifactorial-triggered morbidity, so far unrecognized, may explain the lack of an effect of the NCB in our center. We decided not to further adhere strictly to the NCB but to apply more individualized care and treatment. Conclusions Implementation of an NCB focusing on patient positioning and minimal handling was not associated with a reduction in IVH rate in a mid-sized German NICU. Known risk factors for IVH were confirmed. Regarding the effectiveness of NCBs to reduce IVH, our study's result is in contrast to the results of others [13,20]. It is conceivable that transferring a successful strategy from one NICU to another does not necessarily lead to similar results as local conditions may require a more individualized approach. Thus far, there is no convincing evidence from randomized controlled trials that special care bundles affect IVH outcome. All studies, including ours, have only a hypothesis-generating character and have no confirmatory power. Therefore, as IVH is one of the major morbidities in very preterm infants, larger, multicenter, randomized, controlled trials are needed to evaluate the discussed measures' effectiveness to reduce IVH. 
Informed Consent Statement: The local ethics committee waived patient consent due to the use of anonymized data and retrospective design. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
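The statistical analysis section reports unadjusted and adjusted odds ratios with 95% confidence intervals (e.g., OR 1.02, 95% CI 0.57–1.84, for IVH after vs. before the NCB). As a worked illustration only, the sketch below computes an unadjusted odds ratio and Wald confidence interval from a 2×2 table; the counts are hypothetical because the per-cohort denominators are not reported here, and the adjusted ORs in the paper additionally require a multivariable logistic model, which is not shown.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table.

    a: exposed with outcome, b: exposed without outcome,
    c: unexposed with outcome, d: unexposed without outcome.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts only -- not the study data
print(odds_ratio_ci(29, 78, 31, 91))
```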
Cerebellar radiological abnormalities in children with neurofibromatosis type 1: part 1 - clinical and neuroimaging findings

Background Many children with neurofibromatosis type 1 (NF1) have focal abnormal signal intensities (FASI) on brain MRI, whose full clinical impact and natural history have not been studied systematically. Our aims are to describe the clinical and neuroradiological features in children with NF1 and cerebellar FASI, and report on the natural history of FASI that display atypical features such as enhancement and mass effect. Method A retrospective review of the hospital charts and brain MRIs was performed on children from Manitoba diagnosed between 1999 and 2008 with NF1, who also had cerebellar FASI on MRI. Results Fifty patients (mean age: 16.1 y, minimum-maximum: 6.4-30 y, 27 M) were identified. Mean duration of follow up was 10.1 y. Developmental delay, learning disabilities, tumors, and visual signs occurred commonly. Cerebellar signs were not reported. Mean age of the patients at baseline MRI was 7.8 (SD: 4.5) years. FASI occurred in several brain locations and were rarely confined to the cerebellum. FASI displayed mass effect and enhancement infrequently but were associated with malignancy only once. The number of FASI at baseline MRI was significantly less in patients with attention deficit hyperactivity disorder and more if a first degree relative had NF1 or if they had decreased visual acuity. Discussion Patients with NF1 and cerebellar FASI do not have motor or consistent non-motor (e.g. developmental delay or learning disabilities) cerebellar features. The number of FASI may correlate with some clinical features. FASI may display enhancement and mass effect but they rarely become malignant.

Background
Neurofibromatosis type 1 (NF1) is an autosomal dominant neurocutaneous disorder with an estimated incidence of 1/3500 live births [1]. Roughly half of all cases are inherited, while the other half are due to de novo mutations. Diagnosis requires the presence of at least two of seven major criteria: six café-au-lait spots, axillary or inguinal freckling, two neurofibromas or one plexiform neurofibroma, two Lisch nodules, an optic glioma, a distinctive osseous lesion, or a first-degree relative with NF1. These features tend to manifest in a characteristic sequence throughout infancy, childhood and adolescence [1]. NF1 exhibits complete penetrance but extremely variable expressivity [2]. This, combined with the infrequent correlation between specific genetic mutations and clinical phenotype [2,3], makes it difficult to predict the severity of NF1 for a given patient. Patients with NF1 are at increased risk of developing both benign and malignant tumors compared to those in the general population [4]. The most common intracranial neoplasms in patients with NF1 are optic pathways gliomas followed by low-grade astrocytomas of the posterior fossa [5][6][7]. These tumors are usually less aggressive than comparable lesions in children without NF1 [6,7]. Tumors of the optic pathways may present in early childhood with vision loss, proptosis or precocious puberty [1,2,7]. However, many are asymptomatic and are discovered incidentally on MRI, as are the majority of tumors in the posterior fossa with rare exceptions [5,6,8]. Screening for these tumors in asymptomatic individuals is controversial [5,7]. Patients with NF1 may have behavioral impairment and cognitive dysfunction.
While many studies have been done on the subject, results are often conflicting and no clear NF1 cognitive profile has emerged. Visuospatial and fine motor deficits have long been considered hallmark neurological features of NF1 [9,10]. Multiple studies have reported that children with NF1 have normal intellectual ability, but as a group tend to score below-average on IQ tests when compared to controls [9][10][11]. These children often have widespread academic difficulties and it is estimated that between 35 and 65% have a learning disability [9]. In addition, half of them meet the criteria for attention-deficit hyperactivity disorder [9]. Many attempts have been made to correlate these cognitive and neurological deficits with radiologic findings. The most common abnormalities found on brain MRI are regions of increased signal visible on T2weighted images i.e., focal abnormal signal intensities (FASI). These are believed to be present in 60-80% of pediatric patients with NF1 [12], though some studies have reported a prevalence as high as 93% [5,13]. FASI are most common in the basal ganglia, but are also found in the thalamus, brainstem, cerebellum and occasionally cerebral hemispheres [13][14][15]. They do not typically exhibit mass effect or enhancement, and are not associated with focal neurological defects [10,13]. Studies examining the clinical significance of FASI have contradictory findings, though most agree that there is no correlation between neurological deficits and the presence or absence of FASI [16]. There is evidence to suggest that their location, but not their number or size, may have neuropsychological correlates. Specifically, FASI located in the thalamus are associated with cognitive impairment [11,12]. Despite being poorly understood, there are some who feel that FASI in children are pathognomonic of NF1 and recommend their presence be incorporated as one of the major diagnostic criteria of the disease [17]. The cerebellum plays an important role in the processing of non-motor in addition to motor tasks. Many studies support non-motor roles for the cerebellum in cognition [18]. Children with cerebellar disorders have been documented to have cognitive and neuropsychiatric disorders. In addition, developmental delay, learning disabilities, and behavioral problems have been commonly reported in children with developmental cerebellar disorders [18]. Therefore, we hypothesized that children with cerebellar FASI were likely to have cerebellar motor signs and developmental delay and/ or learning disabilities. The aims of this study are to: Describe the clinical features in children with NF1 and cerebellar FASI on MRI, correlate the clinical features, especially developmental delay, learning disabilities, and examination findings, with baseline neuroradiological abnormalities, and describe the natural history of FASI that display atypical features including mass effect and enhancement. Methods Electronic MRI neuroimaging reports stored in the radiology information system at our Health Sciences Centre (HSC) for the 10-year period starting January 1, 1999 were read and flagged for the occurrence of posterior fossa abnormalities. Patients with NF1 who had cerebellar radiological abnormalities mentioned on any brain MRI report when they were less than 17 years old between 1999 and 2008, were selected. All patients were seen in Winnipeg Children's Hospital for a clinical evaluation. Patients from neighboring provinces who attended Winnipeg Children's Hospital were also included. 
Ethical approval for the study was given by the Research Ethics Board of the University of Manitoba. Inclusion criteria for the study were: (1) Patients were less than 17 years old during the study period, (2) patients had NF1, and (3) patients had cerebellar abnormalities on MRI. Exclusion criteria were: (1) Patients with solid central nervous system tumors unrelated to NF1, (2) trauma involving the posterior fossa, (3) significant brain malformations and anomalies unrelated to NF1, (4) patients who had brain radiotherapy or chemotherapy before their first brain MRI, and (5) patients with other unrelated brain disease e.g. demyelinating disorders. The hospital chart, pediatric neurology clinic letters, and genetic clinic letters were reviewed in each of these patients. The relevant data available until 2013 were extracted and entered into a purpose-specific database. The data consisted of demographic information, duration of follow-up period, pregnancy, birth and perinatal history, family history of NF1, presenting symptoms, presence of features seen in NF1 including café-au-lait spots, axillary freckling, neurofibromas, bone dysplasia, NF1 related tumors; other diagnoses, developmental milestones, learning disabilities, and physical exam findings especially ocular and central nervous system examination abnormalities. Brain MRI was acquired on 1.5-or 3-Tesla MRI scanner (GE) using standardized protocol with sagittal T1-weighted, axial/ coronal T2-weighted, and axial/ coronal fluid-attenuated inversion recovery (FLAIR) images. Supplemental imaging sequences were performed as needed including T2*, DWI, ADC (apparent diffusion coefficients) maps, fast spoiled gradient echo (FSPGR) images, and MRA. Contrast with Gadolinium was given at the discretion of the radiologist. All initial and subsequent brain MRI images available that were completed by June 2014 were reviewed independently by two pediatric radiologists with expertise in neuroimaging. Disagreements were resolved by consensus. Age at the time of the MRI scan, structural and signal abnormalities in the cerebellar vermis, cerebellar hemispheres, brainstem, and supratentorial structures including the cortex and white matter, basal ganglia, thalami, hypothalamus, and optic nerves/ chiasm were recorded, as well as the presence of cerebellar hypoplasia (small size but normal shape) or cerebellar atrophy (shrunken size with prominence of the cerebellar folia). Optic nerve(s) and/ or chiasm gliomas (optic pathways gliomas) were diagnosed when these structures were enlarged/ thickened on MRI. Detailed information was collected on FASI on FLAIR images including their number, locations, diameter (defined as a straight line passing from side to side through their center and thus representing their maximum length), signal characteristics, uptake of contrast on T1-weighted images, and mass effect on adjacent structures. Brain locations were divided as follows: cerebellum, brainstem (midbrain, pons, medulla, cerebellar and cerebral peduncles), thalamus/ hypothalamus, basal ganglia/ internal capsule (these structures were combined since lesions in the internal capsules tended to involve the basal ganglia and it was difficult to separate the two, as was also reported previously) [19], and cerebrum (cortex, subcortical and periventricular white matter, hippocampus, corpus callosum, and fornix). The data were converted to numerical variables using a numerical coding scheme to render the data suitable for statistical analysis and modelling. 
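The coding step described above can be pictured with a small, purely hypothetical sketch; the field names, categories, and numeric codes below are illustrative only and are not the authors' actual scheme. The idea is simply that binary chart findings become 0/1 indicators and per-region FASI counts stay as integers so that they can enter statistical models directly.

    # Hypothetical illustration of a numerical coding scheme (not the study's actual variables).
    def encode_patient(record):
        """Convert one patient's chart and MRI findings into numeric variables."""
        fasi = record.get("fasi_counts", {})  # e.g. {"cerebellum": 2, "brainstem": 1}
        return {
            "male": 1 if record.get("sex") == "M" else 0,
            "first_degree_relative_nf1": 1 if record.get("first_degree_relative_nf1") else 0,
            "learning_disability": 1 if record.get("learning_disability") else 0,
            "optic_pathway_glioma": 1 if record.get("optic_pathway_glioma") else 0,
            "fasi_cerebellum": int(fasi.get("cerebellum", 0)),
            "fasi_brainstem": int(fasi.get("brainstem", 0)),
            "fasi_total": int(sum(fasi.values())),
        }

    # Example use:
    # encode_patient({"sex": "M", "learning_disability": True,
    #                 "fasi_counts": {"cerebellum": 2, "brainstem": 1}})
    # -> {"male": 1, ..., "fasi_cerebellum": 2, "fasi_brainstem": 1, "fasi_total": 3}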
Mean and median were used to describe the normally distributed and skewed data, respectively. Results Fifty patients fulfilling the inclusion criteria were identified. Their mean age at the end of the study period was 16.1 years (minimum: 6.4, maximum: 30 years). There were 27 males and 23 females. Mean duration of follow up was 10.1 years. Table 1 shows further demographic information. There were four deaths, all caused by malignant tumors ( Table 2). Table 3 shows selected information on maternal health, pregnancy and delivery. Most patients were born near or at term and no patient was born before 33 weeks gestation. The presence of café au lait macules and a positive family history of NF1 were the most common reasons for the initial assessments of these patients (Table 4). Table 5 shows the clinical features in our cohort. Tumors, developmental delay, and learning disabilities occurred commonly. Eye movement exam was documented poorly. Saccadic smooth pursuit was reported in four patients. Cerebellar motor signs were not reported including head titubation, dysarthria, dysmetria, dysdiadochokinesia, intention tremor, rebound, gait ataxia, and wide-base gait. None of the patients had dyskinesia. Neuroimaging A few brain MRI scans during the early part of the study period were not available for review. Fifty baseline (or the first available) brain MRI scans were reviewed. In addition, if any FASI displayed atypical features then all subsequent available MRI scans were reviewed. Mean age of the patients on the first MRI was 7.8 years (N = 50, SD: 4.5 years, median: 7.3 years, minimum: 1.2, maximum: 18.2 years), where SD is the standard deviation. Table 6 shows the number of patients with FASI in different brain locations on the first MRI available. Total FASI count is also displayed. FASI was most commonly seen in the brainstem and basal ganglia in addition to the cerebellum. One patient had two basal ganglia FASI on the first scan but developed FASI in the middle cerebellar peduncle on the second MRI. The rest of the patients had cerebellar FASI on the first MRI. A summary of the distribution pattern of FASI is displayed in Table 7. In many patients, FASI was present concurrently in multiple brain locations. FASI was first seen in only four patients below the age of 2 years (8% of all patients). The youngest was 14 months old. In those four patients FASI was present in one or more other brain locations (i.e., basal ganglia, thalamus, brainstem, and cerebrum) in addition to the cerebellum. A single patient (#54) had widespread numerous FASI in the cortex and subcortical white matter of both cerebral hemispheres. To minimize extreme skewness of the data, we excluded this patient from the analysis involving cerebral FASI counts. The longest diameter of FASI The longest diameter of FASI irrespective of its brain location among our patients in the first scan had a median length of 1.6 cm (N = 49, minimum: 0.8, maximum: 5.6 cm). In one patient the image was degraded by motion artifact hence no measurement was done. FASI with the longest diameter were mostly located in the cerebellum on the first MRI (in 22 of 49 patients). In several instances FASI with the longest diameter, especially those > 2.5 cm, was caused by several confluent FASI that could not each be measured individually and separately. Table 8 shows details of patients with FASI that displayed mass effect, contrast enhancement, or both, and their outcome. Figures 1, 2 and 3 show examples of these atypical features. 
Atypical features of FASI - mass effect and contrast enhancement
Mass effect was infrequently seen. Of the 50 patients, mass effect from FASI was noted in 7 (14%) at any scan. Of those 7 patients, the mass effect resolved on subsequent scans in 5 (patients #8, 13, 20, 33, 41 in Table 8), while one patient showed persistence of the mass effect (patient #44 in Table 8). In one patient mass effect was only present on their last scan when FASI became malignant (patient #50 in Table 8). Contrast enhancement of FASI was also seen infrequently. Of the 50 patients, 2 had only one MRI and contrast was not given. Another 4 patients had no contrast given in only one of their scans. Eight patients showed contrast enhancement at any scan. Of those 8 patients, 2 showed resolution of the contrast enhancement (patients #8 and 13 in Table 8) and 6 showed persistence of the contrast enhancement (including one patient who was thought to have had only a single scan but a repeat scan during the study period was located later [patient #7 in Table 8]).

Location of FASI and age at symptom onset/first clinic visit
There was a significant relationship between the occurrence of FASI in the basal ganglia and age of the patients at symptom onset. FASI in the basal ganglia occurred at a significantly later age (2 years or older) in comparison with the younger age groups (0-0.9 and 1-1.9 years) at symptom onset (p = 0.043). On the other hand, there was a trend for the hypothalamus to be involved in the youngest patients (age 0-0.9 year) in comparison with older patients, i.e., 1-1.9, 2-4.9, and 5 years or older at their first clinic visit (p = 0.055).

The relationship between the clinical features and FASI location(s) and total number of FASI
Learning disabilities were significantly less likely in the presence of FASI in the thalamus (p = 0.033). A similar trend was also found for developmental delay and thalamic FASI (p = 0.08). On the other hand, a trend between developmental delay and the presence of FASI in the hypothalamus was present (p = 0.075). Otherwise, no other relationship between FASI location and other clinical symptoms was seen. In addition, none of the clinical symptoms showed any significant relationship with various combinations of FASI in different brain locations. As for clinical signs, univariate analysis revealed that strabismus was significantly more likely to be associated with basal ganglia FASI (p = 0.009). In addition, abnormal visual fields were significantly less likely with thalamic FASI (p = 0.044), while abnormal visual fields and fundi exam were significantly more likely with cerebral FASI. When various combinations of brain locations of FASI were correlated with clinical signs, there was a trend for abnormal visual fields not to be associated with the presence of FASI in all of the following locations: cerebellum, thalamus or basal ganglia, and brainstem (p = 0.079), while abnormal fundi were significantly associated with widespread involvement of FASI in all brain regions (p = 0.009). However, on multivariate analysis where funduscopy and visual fields were paired, the latter association was no longer significant. When the clinical features were correlated with the total number of FASI on baseline MRI, significant associations were found in patients with attention deficit hyperactivity disorder (ADHD) (p = 0.01) and in patients with impaired visual acuity (p = 0.019) after adjusting for all other clinical features.
The expected number of FASI at baseline scan for patients with ADHD was 39.3% lower than for those without ADHD after adjusting for all other variables (i.e., there was a negative association). In addition, the expected number of FASI at baseline scan for patients with impaired visual acuity was 60.1% higher than for those with normal visual acuity after adjusting for all other variables (i.e., there was a positive association). There was only an overall trend (p = 0.076) for a positive history of NF1 in any family member to be associated with the total number of FASI on baseline MRI. However, this association became significant if a first degree relative rather than a more distant relative had NF1 (p = 0.029). The expected number of FASI at baseline scan for patients with a first degree relative with NF1 was 56.1% higher than for those with no family history of NF1 after adjusting for all other variables. Optic pathways gliomas and their relationship with visual signs, FASI locations, and FASI counts Of the 50 patients, 18 had optic pathways gliomas on their initial MRI scan. Abnormalities in the visual acuity, visual fields, and funduscopy exam were significantly associated with optic pathways gliomas on baseline brain MRI (p = 0.0006, p = 0.011, and p < 0.0001, respectively). There was a significant association between the presence of optic pathways gliomas on baseline MRI and the number of patients with FASI in the cerebrum but not in other brain locations. Cerebral FASI was present significantly more commonly in patients with optic pathways gliomas, than in patients without optic pathways gliomas on baseline MRI (58.8% versus 41.2%, p = 0.019). Furthermore, the mean number of FASI in the supratentorial region and in the cerebrum (but not in other brain locations) were each significantly higher in patients with optic pathways gliomas (N = 18), than in patients without optic pathways gliomas (N = 31) on baseline MRI (mean FASI count = 4.46 versus 2.93 for the supratentorial region, p = 0.008; The association between maternal age at conception and the number of FASI and clinical features There was a significant relationship between maternal age at conception (available in 41 patients) and the number of infratentorial FASI on baseline MRI (p = 0.029). With every one-year increase in maternal age at conception, the expected number of infratentorial FASI at baseline MRI was 2.75% lower. Similarly, there was a significant relationship between maternal age at conception and the number of brainstem FASI on baseline MRI (p = 0.029). With every one-year increase in maternal age at conception, the expected number of brainstem FASI at baseline MRI was 3.98% lower. There was only a similar Two trends were found for associations between decreasing maternal age at conception and the increased presence of learning disabilities (p = 0.065) and ADHD (p = 0.081) in children with NF1. Discussion Our investigation focuses on a subgroup of patients with NF1 and cerebellar FASI on their MRI scans during childhood. We selected them a priori to investigate cerebellar FASI and their clinical impact. Recently, there has been much interest in the expanding roles of the cerebellum and its important role in non-motor in addition to motor tasks [18]. Our patients represent what is commonly seen in clinical practice since many patients with NF1 have FASI [12,13,20], and 23.5-84% of these patients also have FASI in the cerebellum/ brainstem [13,14,17,21,22]. The diagnosis of NF1 was made early in our cohort. 
A positive family history of NF1, reported commonly in our patients (48%), and similar to other studies (e.g. 35% [17], and 54.8% [23],), and the frequent presence of café-au-lait macules likely prompted an early referral and diagnosis. Tumors (especially neurofibromas), developmental delay, and learning disabilities occurred commonly in our patients, while headaches, ADHD, and seizures were less frequent clinical features. Their prevalence is similar to prior reports [24,25]. The prevalence of other features including café-au-lait spots, axillary and inguinal freckling, Lisch nodules, and sphenoid dysplasia were also similar to other studies [17,26]. Visual signs e.g. decreased visual acuity, abnormal visual fields, and optic discs pallor, detected commonly on examination, were important markers associated with optic pathways gliomas. Indeed, optic pathways gliomas occurred commonly and were mostly present on the initial MRI scans in our study. These anticipated findings have been reported previously and are consistent with impaired optic nerve function in some patients with NF1 and optic pathways gliomas [27]. There was high prevalence of optic pathways gliomas (44.7%) in our cohort in comparison with several studies (4-25%) [17,24,25,27]. A large study in patients with NF1 reported a significant association between the presence of FASI and optic pathways gliomas (odds ratio: 2.1, 95% CI: 1.2-3.6) [26], which may account for our finding since all our patients had FASI. However, in a study of 31 children with NF1 who had brain MRI, there was no correlation between FASI (present in 27 patients) and optic pathways gliomas, which similar to our study was noted in 14 (45%) of the patients [23]. In another study investigating the natural history of FASI, optic pathways gliomas were present in 33% of 46 patients with NF1 [13]. The higher prevalence of optic nerve gliomas in our study may also be due to selection bias, since such patients are more likely to be seen in a tertiary hospital and have neuroimaging. Similar referral and selection bias may explain the higher proportion of our patients with NF1 who developed other neoplasms (24%) than the prevalence of neoplasms reported in other studies (4-10.7%) [24,25]. On the other hand, cerebellar motor signs were absent in our cohort, consistent with a normal cerebellar motor function despite the presence of cerebellar FASI. While clumsiness, lower manual dexterity score, and impaired fine motor skills or coordination has been reported in NF1 patients, the occurrence of frank cerebellar ataxia has not [11,12,20,28,29], unless other complicating factors are present, such as an expanding posterior fossa or spinal cord tumor or high cervical cord lesions compressing the cord [1,6]. The nature of FASI is unclear but may be related to increased fluid within the myelin associated with hyperplastic or dysplastic glial proliferation as suggested in a study using newer MRI techniques, e.g. multi-exponential T2 relaxation and diffusion MRI including diffusion tensor imaging and diffusion kurtosis imaging [30]. Developmental anomalies such as hamartomas, dysplasias, and heterotopia would not be anticipated to produce reversible signal abnormalities [13,21]. Studies documenting the neuroimaging-pathological correlation of FASI are rare. In one study, three pediatric patients with NF1 had autopsy which showed that the pathological correlation of FASI were areas of fluid-filled vacuolar or spongiotic change [31]. 
There was no evidence of demyelination, inflammation, gliosis, stainable material, or axonal damage. Diffusion tensor imaging on 50 children with NF1 showed higher ADC values not only in FASI but also in normal appearing white matter in patients in comparison to control patients without NF1, reflecting an increase in the magnitude of water molecules diffusion and microstructural damage. The specifically higher diffusivity (measured in eigenvalues) found only in FASI was consistent with microstructural abnormalities caused by decreased axonal packing, intramyelinic edema, vacuolation, or fluid accumulation [32]. Fractional anisotropy (FA) was significantly lower in FASI located in the cerebellar white matter only. In another study of 27 children and young adults with NF1, FA decreased in the cerebellum, thalamus, and basal ganglia but only in NF1 patients whose FASI decreased in number and volume in those regions, suggesting persistent microstructural damage even when FASI disappear [15]. Similar conclusions were reported in 15 children with NF1, where ADC values were high in normal appearing brain and highest in FASI in comparison to healthy controls. The ADC values in the locations where FASI regressed in some patients, were higher than normal appearing brain in these patients, suggesting that macroscopic resolution of FASI on MRI does not necessarily lead to the full resolution of the microstructural abnormalities [33]. Brain MRI scans in our children with cerebellar FASI showed that FASI rarely occur in isolation and most commonly occur in multiple brain regions in addition to the cerebellum, especially the brainstem and basal ganglia as noted before [15,17,19,22], i.e. FASI showed no predilection to a specific brain region in our patients. There was a suggestion that age at symptom onset or first clinic visit may correlate with where FASI develops, since FASI that developed in the basal ganglia occurred in patients who were relatively older at age of symptom onset. There was a tendency to develop hypothalamic FASI in patients who were relatively younger at the age of their first clinic visit. FASI developed below the age of 2 years in different brain locations in a few of our patients, which is not considered to be typical by some authors [21], and especially as myelination is incomplete [17,33]. The total or regional number of FASI was independent of the age of symptom onset or age at first clinic visit, consistent with their asymptomatic development. Prior studies have shown that cognitive deficits were more common in patients with NF1 and thalamic FASI [11,12], especially if the lesions are well circumscribed [34]. However, the association is controversial since other studies showed no such association with executive/ cognitive function in terms of the presence, number, size, or location of FASI [20,22,23]. In our investigation, learning disabilities, reported in 30-65% of children with NF1 [20], and developmental delay (for which there was only a trend), occurred significantly less commonly in patients with NF1 and thalamic FASI. However, our patients did not have formal neuropsychological testing. In addition, our study design is different from the aforementioned studies [11,12], since all our patients also had cerebellar FASI as part of the inclusion criteria. We only found a trend for developmental delay to occur more commonly in patients with hypothalamic FASI. 
Furthermore, there was no significant relationship between any of the clinical symptoms and several combinations of brain locations where FASI were present. A longitudinal study of cognitive function and FASI with long term follow up showed that patients with NF1 who had FASI had lower IQ during childhood, but their IQ increased to average values when FASI disappeared in early adulthood [35]. One study reported that patients with cerebellar FASI had significantly lower full scale IQ and verbal IQ scores in comparison with NF1 patients without cerebellar FASI [10]. We were not able to investigate this finding since our patients did not undergo detailed cognitive testing and all had cerebellar FASI. However, several of our patients did not have developmental delay or learning disabilities, suggesting that having cerebellar FASI per se does not necessarily lead to impaired development or learning. As for clinical signs, visual fields and especially funduscopic abnormalities were associated more commonly with patients who had cerebral FASI. This can be explained through confounding with the presence of optic pathways gliomas on baseline MRI, since cerebral FASI was seen significantly more commonly in patients whose baseline MRI also showed optic pathways gliomas. Strabismus was significantly associated with the presence of FASI in the basal ganglia. We found no studies that directly link strabismus with disorders of the basal ganglia. In one study of 213 children with cerebral palsy, strabismus was reported in 3 of 15 children with dyskinetic cerebral palsy. Abnormalities in the basal ganglia are implicated in dyskinetic cerebral palsy [36]. Our finding may have arisen by chance. Further corroboration is needed before any definitive conclusions can be made. The total number of FASI at baseline MRI was significantly less in patients with ADHD and more if a first degree relative had NF1 and also more in patients with decreased visual acuity. This latter association may have arisen through the occurrence of optic pathways gliomas, which themselves were associated with higher mean numbers of FASI in the supratentorial regions and more specifically in the cerebrum. The former two associations, i.e., ADHD and family history of NF1, may be caused by genetic factors/predisposition, but this is speculative. FASI was not associated with the presence of ADHD in 76 NF1 patients, who were preselected and did not have epilepsy or optic nerve glioma [34]. The relationships between decreasing maternal age at conception and the increasing number of FASI at baseline (statistically significant) or the increasing presence of learning disabilities and ADHD (statistical trends) are intriguing, albeit small in magnitude. They raise the question of whether decreasing maternal age at conception per se, or through confounding factors such as socioeconomic status or educational achievement [37], somehow influences the development of FASI, especially in the brainstem region, or adversely affects learning and behavior. These relationships will need further validation, and their clinical significance in patients with NF1 is unclear at this point, since adverse health outcomes (including cognitive disability and ADHD) and maternal age at conception, whether in younger or older mothers, occur independently of NF1 [37][38][39]. In one study in patients with NF1, learning disability was not associated with maternal age at conception. The presence of ADHD in the cohort was not reported [40].
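To make the percentage figures above concrete, here is an illustrative calculation; it assumes the counts were modeled with a log-link count regression (e.g., Poisson or negative binomial), a common choice for lesion counts that is not stated explicitly in this excerpt. Under that assumption, a coefficient \(\beta\) for a predictor translates into a percent change in the expected FASI count of

\[
\text{percent change} = \left(e^{\beta} - 1\right) \times 100 .
\]

A reported 2.75% lower expected count per year of maternal age corresponds to a rate ratio of roughly \(e^{\beta} \approx 0.9725\); because the effect is multiplicative, a 10-year difference in maternal age would correspond to about \(0.9725^{10} \approx 0.76\), i.e., roughly 24% fewer infratentorial FASI. Similarly, the 39.3% lower count reported for ADHD corresponds to a rate ratio of about 0.61, and the 60.1% higher count for impaired visual acuity to a rate ratio of about 1.60.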
A few FASI showed contrast enhancement, mass effect, or both. Such findings are reported to occur rarely [13,14,41,42], but are not considered to be typical of FASI in NF1 [13,21]. They are concerning for the development of neoplasms [5]. Mass effect was seen mostly initially and tended to resolve in our patients. On the other hand, contrast enhancement tended to persist on repeat MRI scans. In a few of our patients contrast enhancement was still present after 16 months to 11 years of follow-up. On only one occasion, a single FASI located in the periventricular region developed a mass effect later and became malignant (patient #50 in Table 8). In two other patients, a cerebellar FASI enlarged and displayed contrast enhancement; their biopsies revealed a benign ganglioma and a low-grade pilocytic astrocytoma (patients #8 and 41 in Table 8). Tumors in the cerebellum and cerebral hemispheres are uncommon in patients with NF1 [7], and most are benign [6,8]. However, they occur more commonly in patients with FASI [26]. In 5 of 46 patients with NF1, FASI enlarged, showed mass effect, and enhanced [13]. Biopsy was performed in one other patient, whose brain MRI with contrast was normal initially but who then developed several FASI and also a large enhancing mass in the splenium of the corpus callosum. When the latter increased further in size, a biopsy revealed grade II astrocytoma. The rest of the patients with enhancing FASI were observed and no further follow up information on their outcome was given. The authors advised a wait-and-watch approach in such otherwise concerning cases, since current medical wisdom dictates that mass effect and enhancement of a lesion are suggestive of a tumor but, at the same time, tumor regression without treatment is recognized in NF1 [2,5,13]. In summary, enlargement in the size of FASI is well documented. However, infrequently a few FASI tend to also develop a transient mass effect and enduring contrast enhancement with time [14,41,42]. Sometimes this enduring contrast enhancement resolves [41], and only rarely does malignancy develop. Therefore, we support the clinical practice of repeating brain MRI in patients with NF1 who develop new clinical symptoms or signs (e.g. those suggestive of raised intracranial pressure or hemiparesis [43]) or in patients whose FASI show mass effect and enhancement, since very rarely they can become malignant. If FASI becomes cystic, as we found in one of our patients, a tumor is highly likely [5,43]. It is still unknown whether the same neuropathology occurs in the lesions that enhance or show mass effect in comparison with FASI that do not enhance or show mass effect [13]. Limitations of our study include incomplete ascertainment of patients, missing data, variable durations of follow up, variable ages at which the MRI scans were done, non-uniform MRI magnetic strength and protocols used in these patients, difficulties associated with measuring the diameter of ill-defined FASI, and reliance on parental report of learning disabilities, which was not formally verified through comprehensive cognitive assessment.

Conclusions
Patients with NF1 and cerebellar FASI do not have cerebellar motor symptoms or signs. If ataxia develops then neuroimaging is indicated. Patients with NF1 and cerebellar FASI may or may not have learning disabilities or developmental delay and therefore, cerebellar FASI cannot be implicated directly as a causative factor for non-motor cerebellar symptoms.
FASI occurred in several brain locations and were rarely confined to the cerebellum in our preselected patients. FASI have the potential, though uncommonly, to show mass effect and/or contrast enhancement. Rarely, such features herald a malignant change. The total number of FASI at baseline MRI was significantly less in patients with ADHD and more if a first degree relative had NF1 and also more in patients with decreased visual acuity. The latter association is likely caused by the presence of optic pathways gliomas in our cohort. The influence of maternal age at conception on the development of FASI, or its association with a few clinical features, requires further study.
Low-density lipoprotein receptor–related protein-1 dysfunction synergizes with dietary cholesterol to accelerate steatohepatitis progression

Reduced low-density lipoprotein receptor–related protein-1 (LRP1) expression in the liver is associated with poor prognosis of liver cirrhosis and hepatocellular carcinoma. Previous studies have shown that hepatic LRP1 deficiency exacerbates palmitate-induced steatosis and toxicity in vitro and also promotes high-fat diet–induced hepatic insulin resistance and hepatic steatosis in vivo. The current study examined the impact of liver-specific LRP1 deficiency on disease progression to steatohepatitis. hLrp1+/+ mice with normal LRP1 expression and hLrp1−/− mice with hepatocyte-specific LRP1 inactivation were fed a high-fat, high-cholesterol (HFHC) diet for 16 weeks. Plasma lipid levels and body weights were similar between both groups. However, the hLrp1−/− mice displayed significant increases in liver steatosis, inflammation, and fibrosis compared with the hLrp1+/+ mice. Hepatocyte cell size, liver weight, and cell death, as measured by serum alanine aminotransferase levels, were also significantly increased in hLrp1−/− mice. The accelerated liver pathology observed in HFHC-fed hLrp1−/− mice was associated with reduced expression of cholesterol excretion and bile acid synthesis genes, leading to elevated immune cell infiltration and inflammation. Additional in vitro studies revealed that cholesterol loading induced significantly higher expression of genes responsible for hepatic stellate cell activation and fibrosis in hLrp1−/− hepatocytes than in hLrp1+/+ hepatocytes. These results indicate that hepatic LRP1 deficiency accelerates liver disease progression by increasing hepatocyte death, thereby causing inflammation and increasing sensitivity to cholesterol-induced pro-fibrotic gene expression to promote steatohepatitis. Thus, LRP1 may be a genetic variable that dictates individual susceptibility to the effects of dietary cholesterol on liver diseases.

Nonalcoholic fatty liver disease (NAFLD) is rapidly emerging as a major health issue due to the increasing prevalence of obesity and insulin resistance worldwide (1). An estimated 17-30% of the Western population and 2-4% worldwide are afflicted with NAFLD today (2). Furthermore, NAFLD is anticipated to surpass hepatitis C as the leading cause for liver transplantation by 2020 (2). However, it is important to note that NAFLD describes a wide spectrum of liver diseases ranging from simple steatosis, which is benign, to liver steatosis with inflammation and fibrosis classified as nonalcoholic steatohepatitis (NASH). Only ~10-20% of NAFLD cases proceed to NASH, and only 10-20% of NASH patients eventually progress into liver cirrhosis, which significantly increases the risk for hepatocellular carcinoma (1,2). Steatosis is a prerequisite, but a second hit involving environmental and/or genetic factors is necessary for disease progression to NASH, liver cirrhosis, and hepatocellular carcinoma (3)(4)(5). Unfortunately, identifying NAFLD patients who are prone to proceed to NASH and liver cirrhosis proves to be difficult because the etiology and mechanism underlying the progression of NAFLD to NASH and liver cirrhosis remain unclear.
Hence, better understanding of mechanisms and identification of genetic factors that modulate NAFLD disease progression are necessary to develop effective alternative therapies as well as reduce unnecessary liver transplants to conserve the limited number of healthy donor livers for subjects who have progressed to the end stage. One recognized risk factor associated with increased risk of liver cirrhosis and cancer is high dietary cholesterol intake (3). A recent study also revealed an association between low levels of the LDL receptor-related protein-1 (LRP1) and poor prognosis of hepatocellular carcinoma, suggesting that LRP1 may be one genetic factor that regulates NAFLD progression (6). The LRP1 protein is a multifunctional receptor responsible for cellular uptake of lipid nutrients and plasma clearance of macromolecules, such as apolipoprotein E-containing lipoproteins and protease-protease inhibitor complexes. The protein also serves cell-regulatory functions via the integration of numerous signal transduction events (7)(8)(9). Hence, dysfunction of this receptor impacts the development and progression of a wide spectrum of diseases spanning from cardiovascular disease and obesity/diabetes to neurodegenerative disorders and tumor invasion and metastasis (7)(8)(9). The mechanism underlying LRP1 modulation of disease risk varies in a tissue-specific manner. In the liver, LRP1 not only complements the LDL receptor for chylomicron remnant clearance, but its deficiency in hepatocytes also lowers plasma cholesterol levels when the animals are maintained on a low-fat, low-cholesterol chow diet due to reduction of high-density lipoprotein. Nevertheless, plasma triglyceride, nonesterified fatty acids (NEFA), glucose, and insulin levels are similar to those of hLrp1+/+ mice (10). Deficiency of LRP1 in hepatocytes also promotes lipid accumulation and lipotoxicity in response to excessive fatty acids through lysosomal-mitochondrial permeabilization and endoplasmic reticulum stress (11). Thus, mice with liver-specific LRP1 gene deletion (hLrp1−/− mice) exhibited robust dyslipidemia and hepatosteatosis secondary to hepatic insulin resistance when placed on a high-fat diet (12). However, liver disease in mice fed a high-fat diet without cholesterol supplementation is restricted to steatosis with minimal inflammation and fibrosis (13)(14)(15). These latter observations are consistent with human population studies illustrating that dietary fat with high-cholesterol intake, but not dietary fat alone, is a major risk factor for advanced liver disease (3). Hence, whether LRP1 deficiency accelerates progression of NAFLD to NASH remains unknown. The current study compared hLrp1+/+ and hLrp1−/− mice fed a high-fat, high-cholesterol diet to assess the importance of LRP1 expression in NASH progression.

Hepatic LRP1 deficiency synergizes with dietary cholesterol to promote liver disease progression
When the animals were fed a 60% high-fat diet, the hLrp1−/− mice showed a ~33% increase in hepatic triglyceride accumulation compared with that observed in hLrp1+/+ mice, indicating that hepatic LRP1 deficiency increased sensitivity to diet-induced hepatosteatosis (12). However, collagen 1 and collagen 3 expression levels were similar between hLrp1+/+ and hLrp1−/− mice after feeding the 60% high-fat diet.
These results are consistent with previous reports that a high-fat diet without added cholesterol is insufficient and that excessive dietary cholesterol is necessary to promote hepatic inflammation and fibrosis in mice (13)(14)(15). An additional study was performed to compare hLrp1+/+ and hLrp1−/− mice on a low-fat, low-cholesterol chow or a high-fat diet supplemented with 1.25% cholesterol (HFHC) to examine the influence of LRP1 and dietary cholesterol in progression of NAFLD to NASH. Both hLrp1+/+ and hLrp1−/− mice displayed similar food intake under both dietary conditions (Fig. 1A). In contrast to mice fed the low-fat, low-cholesterol chow or the high-fat diet without cholesterol supplementation (12), hLrp1+/+ and hLrp1−/− mice fed the HFHC diet displayed comparable plasma cholesterol and triglyceride levels (Fig. 1, B and C). Both hLrp1+/+ and hLrp1−/− mice showed a similar increase in plasma glucose levels and reduction of plasma NEFA levels when fed the HFHC diet (Fig. 1, D and E). Body weight and fat content were also similarly increased in HFHC diet-fed hLrp1+/+ and hLrp1−/− mice (Fig. 1, F and G). However, significant hepatomegaly with a ~50% increase in liver weight was observed in HFHC-fed hLrp1−/− mice compared with hLrp1+/+ mice (Fig. 1H). The increased weight of the hLrp1−/− mouse livers was due to robust triglyceride and cholesterol accumulation as well as a 2-fold increase in hepatocyte cell size (Fig. 2, A-E). Additionally, a 2-fold increase in the number of cells displaying keratin-containing Mallory-Denk bodies was observed in HFHC diet-fed hLrp1−/− mice compared with hLrp1+/+ mice (Fig. 2F).

[Figure 1 legend: hLrp1+/+ (filled bars) and hLrp1−/− (open bars) mice were fed a low-fat, low-cholesterol chow diet or the HFHC diet for 16 weeks. Food intake was measured every day over a 2-week period and averaged (A). Plasma cholesterol (B), triglyceride (C), glucose (D), and NEFA (E) levels were determined after an overnight fast. Body weight (F), fat content (G), and liver weight (H) were determined after 16 weeks on the HFHC diet. Data are mean ± S.E. from 16 hLrp1+/+ and 14 hLrp1−/− mice; *, significant difference from hLrp1+/+ mice at p < 0.01.]

Hepatic LRP1 deficiency suppresses HFHC-induced expression of LXR-responsive genes
Previous studies have shown more triglyceride accumulation in the livers of hLrp1−/− mice compared with hLrp1+/+ mice due to reduced VLDL secretion (12). However, hepatic cholesterol levels were similar between high-fat-fed hLrp1+/+ and hLrp1−/− mice (12). To identify the mechanism by which the additional cholesterol in the diet also caused cholesterol accumulation in the livers of hLrp1−/− mice, we focused our attention on LXR-responsive genes that are known to be regulated by LRP1 in other cell types, including macrophages and smooth muscle cells (16,17). Results showed that whereas hLrp1+/+ mice displayed increased hepatic expression of ABCG5 and ABCG8 after HFHC feeding, expression levels of these genes along with Cyp7a were significantly reduced in the livers of hLrp1−/− mice after HFHC feeding (Fig. 3, A-C). The reduction in Cyp7a, which encodes cholesterol 7α-hydroxylase, the rate-limiting enzyme for cholesterol conversion to bile acids, also led to reduction of bile acid levels in the intestine and in the excrement of HFHC-fed hLrp1−/− mice (Fig. 3, D and E).
However, bile acid levels in the liver were comparable between HFHC-fed hLrp1+/+ and hLrp1−/− mice (Fig. 3F). The hepatic bile acids in HFHC-fed hLrp1−/− mice may be derived from the alternative pathway (18). Because ABCG5 and ABCG8 are responsible for cholesterol excretion into the bile, their reduced expression along with decreased bile acids found in the intestine and feces indicated that the robust hepatic cholesterol accumulation observed in hLrp1−/− mice is due to reduced excretion of cholesterol and its metabolic products to the bile.

Hepatic LRP1 deficiency does not affect autophagic flux
The underlying mechanism responsible for the excessive Mallory-Denk body accumulation in the livers of HFHC-fed hLrp1−/− mice was explored with a focus on the autophagy-lysosome pathway for degradation of protein aggregates (19). The comparison of autophagic flux in HFHC-fed hLrp1+/+ and hLrp1−/− mice showed that leupeptin-induced p62 and LC3II accumulation was similar between the two groups of mice (Fig. 4). These data suggested that the Mallory-Denk bodies accumulated in livers of HFHC-fed hLrp1−/− mice could not be explained by changes in autophagy activity but were probably the result of lysosomal defects due to LRP1 deficiency, as we have shown previously (11).

Hepatic LRP1 deficiency accelerates HFHC-induced liver damage
Our in vitro data also showed that LRP1-deficient hepatocytes were more susceptible to palmitate-induced lipotoxicity (11). Hence, we determined whether the hLrp1−/− mice are also more sensitive to HFHC diet-induced liver injury. For these experiments, caspase-3 activity was assessed in protein lysates prepared from HFHC diet-fed hLrp1+/+ and hLrp1−/− mice. Results showed a ~3-fold elevation of activated caspase-3 in hLrp1−/− mouse livers (Fig. 5A). Consistent with these results was the detection of a ~3-fold increase in the number of apoptotic cells in livers from hLrp1−/− mice compared with hLrp1+/+ mice (Fig. 5B). Although serum AST levels were higher in hLrp1−/− mice compared with hLrp1+/+ mice, these differences did not reach statistical significance (Fig. 5C). Nevertheless, the hLrp1−/− mice displayed ~3-fold higher serum ALT levels compared with hLrp1+/+ mice (Fig. 5D). Taken together, these data indicated that hepatic LRP1 inactivation increases sensitivity to HFHC diet-induced liver damage.

Hepatic LRP1 deficiency accelerates HFHC-induced liver inflammation
The increased liver injury observed in HFHC-fed hLrp1−/− mice is consistent with the hypothesis that hepatic LRP1 deficiency accelerates the progression of NAFLD to NASH. Indeed, immunofluorescence staining of liver sections prepared from HFHC-fed hLrp1+/+ and hLrp1−/− mice showed an increased number of F4/80+ cells as well as CD68+ cells in hLrp1−/− mice (Fig. 6). Flow cytometry analysis of nonparenchymal cells revealed an increase in both CD68− and CD68+ subsets of Kupffer cells/macrophages (Fig. 6), suggesting that both inflammatory cytokine and reactive oxygen production may be enhanced in the livers of hLrp1−/− mice (20). Additional experiments analyzing gene expression by quantitative RT-PCR showed that the HFHC diet increased F4/80 and CD68 mRNA levels in the livers of hLrp1−/− mice more dramatically compared with the increase observed in hLrp1+/+ mice (Fig. 7, A and B).
The hLrp1−/− mouse livers also showed a higher CD14 expression level, indicative of increased lymphocytes in addition to the increase in Kupffer cells/macrophages (Fig. 7C). The increased presence of inflammatory cells in HFHC-fed hLrp1−/− mice is consistent with increased inflammation, as demonstrated by the elevated expression of inflammatory cytokines.

[Figure 3 legend (in part): The gene expression data represent the mean ± S.E. from 12 mice in each group. Bile acid levels were determined from tissues isolated from seven hLrp1+/+ and six hLrp1−/− mice. * and **, significant differences from hLrp1+/+ mice at p < 0.01 and p < 0.001.]

[Figure 5 legend (in part): A, caspase-3 activity was measured from liver extracts prepared from eight mice in each group. B, liver sections were stained with TUNEL reagents to identify apoptotic cells in each field (n = 20 mice in each group). Liver cirrhosis was estimated based on AST (C) and ALT (D) activities in serum of 14 mice in each group. The data represent mean ± S.E.; *, significant difference from hLrp1+/+ mice at p < 0.01.]

Hepatic LRP1 deficiency accelerates HFHC-induced liver fibrosis
The liver sections of HFHC diet-fed hLrp1+/+ and hLrp1−/− mice were also stained with Sirius Red to assess fibrosis. Whereas Sirius Red staining was observed only sporadically in hLrp1+/+ mouse livers, significant Sirius Red staining indicative of fibrosis was consistently observed in the portal triad region of hLrp1−/− mouse livers (Fig. 8, top panels). Immunofluorescence staining of smooth muscle α-actin also revealed more activated hepatic stellate cells in the livers of hLrp1−/− mice compared with hLrp1+/+ mice (Fig. 8, middle panels). These histology observations were corroborated with RT-PCR data showing increased expression of fibrotic genes, including collagen-1 and osteopontin (Fig. 8, bottom panels).

Hepatic LRP1 deficiency synergizes with cholesterol to activate the hedgehog signaling pathway in promotion of fibrosis
The underlying mechanism by which cholesterol enrichment and LRP1 dysfunction in hepatocytes promote fibrosis was explored by incubating primary hepatocytes from hLrp1+/+ and hLrp1−/− mice with cholesterol in vitro. Results showed that cholesterol loading of hLrp1+/+ hepatocytes had minimal effects on expression of hedgehog pathway genes, whereas cholesterol loading of hLrp1−/− hepatocytes led to elevated expression of Sonic and Indian hedgehogs, smoothened, Gli2, and Gli3 (Fig. 9, A-E). Additionally, osteopontin expression was also found to be higher in hLrp1−/− hepatocytes compared with hLrp1+/+ hepatocytes in a manner that was independent of cholesterol loading (Fig. 9F). Because hedgehog pathway activation in hepatocytes has been shown to trans-activate fibrogenic genes in hepatic stellate cells (21,22), these results suggested that the increased fibrosis observed in HFHC-fed hLrp1−/− mice was due to cooperative effects of cholesterol loading and LRP1 deficiency in activating hedgehog pathway genes in hepatocytes. Additional experiments were performed to substantiate the hypothesis that sonic hedgehog pathway activation in cholesterol-loaded hLrp1−/− hepatocytes is responsible for the transactivation of hepatic stellate cells to cause fibrosis. In these experiments, conditioned media from hLrp1+/+ and hLrp1−/− hepatocytes incubated with or without 20 µM cholesterol for 16 h were added to HSC T6 hepatic stellate cells in culture.
Cell activation was assessed by expression levels of smooth muscle α-actin, desmin, and TGF-β mRNAs. Results showed no differences in expression levels of these stellate cell activation genes between hLrp1+/+ and hLrp1−/− hepatocytes under basal conditions. Cholesterol loading had minimal impact on expression of these genes in hLrp1+/+ hepatocytes, whereas their expression was significantly elevated in hLrp1−/− hepatocytes after cholesterol loading. Importantly, inhibition of hedgehog signaling by two distinct inhibitors, cyclopamine and GANT-58, reduced the expression of smooth muscle α-actin, desmin, and TGF-β.

Discussion
Results of the current study showed that hepatic LRP1 deficiency accelerates HFHC diet-induced progression of NAFLD to NASH. Examination of liver histology revealed increased hepatic steatosis, inflammation, and fibrosis in HFHC diet-fed hLrp1−/− mice compared with similarly fed hLrp1+/+ mice. Additionally, hepatocyte cell size and liver weight were also significantly increased in hLrp1−/− mice. Interestingly, plasma cholesterol, triglyceride, and NEFA levels were not significantly different between hLrp1+/+ and hLrp1−/− mice after HFHC feeding. Thus, the accelerated pathology observed in the livers of hLrp1−/− mice is independent of plasma dyslipidemia and is most likely a result of aberrant intracellular lipid processing due to LRP1 deficiency in hepatocytes. Similar to results reported previously when the mice were fed a 60% high-fat diet without cholesterol (12), no difference in plasma cholesterol levels was observed between HFHC-fed hLrp1+/+ and hLrp1−/− mice. However, in contrast to the earlier study, which showed lower plasma triglyceride levels in hLrp1−/− mice compared with hLrp1+/+ mice when fed the 60% high-fat diet (12), the current study showed that their plasma triglyceride levels were comparable when fed the HFHC diet. The different results most likely reflect differences in the composition of the diet used in the two studies. In particular, lard is the primary fat source in the 60% high-fat diet, whereas the HFHC diet contains primarily cocoa butter and soybean oil as the fat source. It is of note that a lard-based diet promotes hypertriglyceridemia due to elevated VLDL secretion in mice (23). Thus, the reduced secretion of VLDL in hLrp1−/− mice prevented the high-fat diet-induced VLDL secretion seen on the lard-based diet, leading to plasma triglyceride levels similar to those observed in chow-fed mice (12). In contrast, neither cocoa butter nor soybean oil increases plasma triglyceride levels in mice (23); hence, both hLrp1+/+ and hLrp1−/− mice displayed normal triglyceride levels despite the chronic feeding of the HFHC diet. Importantly, both the 60% high-fat diet and the HFHC diet resulted in significantly more triglyceride accumulation in the livers of hLrp1−/− mice compared with hLrp1+/+ mice. Thus, the results from the current HFHC diet study strengthened the conclusion of the earlier study with the 60% high-fat diet that LRP1 expression in the liver is protective against diet-induced hepatic steatosis. The current study showed that hepatic LRP1 deficiency also accelerates HFHC diet-induced liver disease progression with hepatocyte cell death, inflammation, and fibrosis that are hallmarks of NASH.
Typically, HFHC diet-induced liver disease progression from simple steatosis toward inflammation and fibrosis in WT mice requires a prolonged feeding period of ≥30 weeks (15) but can be accelerated in genetically modified mouse models that are predisposed to severe hypercholesterolemia (13, 14). Interestingly, NASH development was observed in hLrp1−/− mice after only 16 weeks of HFHC feeding. Moreover, the accelerated NAFLD transition to NASH in hLrp1−/− mice was independent of plasma lipid levels but was related to a ~10-fold increase in cholesterol accumulated in the livers of hLrp1−/− mice. As shown in Fig. 3, the robust cholesterol accumulation observed in the livers of hLrp1−/− mice is the result of synergism between excessive dietary cholesterol intake and the reduced expression of cholesterol excretion and bile acid synthesis genes as a consequence of LRP1 inactivation. Importantly, these results also indicated that intracellular cholesterol accumulation instead of plasma hypercholesterolemia may be the key determinant in HFHC diet-induced NAFLD transition to NASH. Excessive cholesterol has been shown to induce oxidative stress and exacerbate lipotoxicity in the liver to signal inflammatory macrophage infiltration (14). Cholesterol accumulating in hepatocytes also sensitizes mitochondria to inflammatory cytokine-induced injury, thereby perpetuating the vicious cycle of liver injury to promote tissue inflammation (24). A second mechanism by which LRP1 deficiency accelerates liver inflammation may be the increased sensitivity of LRP1-deficient hepatocytes to steatosis-induced cell death. In previous studies, we showed that hepatic LRP1 deficiency impairs lipophagic lipid hydrolysis in the lysosomes, leading to the augmentation of palmitate-induced oxidative stress that ultimately results in cell death (11). [Figure legend fragment: conditioned media from these cell cultures were added to the T6 hepatic stellate cells, and incubation was continued for an additional 24 h in the presence or absence of a 10 μM concentration of the hedgehog signaling inhibitor cyclopamine or GANT-58. Total cellular RNA was isolated for RT-qPCR analysis of smooth muscle α-actin (A), desmin (B), and TGF-β (C). The data were analyzed using cyclophilin mRNA levels as control. Expression levels observed when T6 cells were incubated with hLrp1+/+ hepatocyte conditioned medium in the absence of cholesterol were set as 1.0. The data represent mean ± S.E. from duplicate experiments, each performed with three biological replicate samples. *, significant differences at p < 0.05.] In the current study, we showed the accumulation of Mallory-Denk bodies in hepatocytes of hLrp1−/− mice after HFHC feeding, indicating that LRP1 deficiency also impairs lysosomal degradation of protein aggregates. The combination of robust accumulation of cholesterol-rich lipid droplets and Mallory-Denk body protein aggregates increases hepatotoxicity, leading to increased immune cell infiltration and the accelerated inflammation observed in the livers of HFHC-fed hLrp1−/− mice. In addition to liver inflammation, another hallmark of NAFLD transition to NASH is hepatic stellate cell activation and fibrosis. In this study, we showed that feeding an HFHC diet promotes liver fibrosis in hLrp1−/− mice, whereas fibrosis was only sparingly observed in hLrp1+/+ mice during the 16-week time course of this study.
The in vitro study revealed that cholesterol loading of hLrp1−/− hepatocytes induced expression of hedgehog pathway genes that are responsible for stellate cell activation (21). Whereas the excessive cholesterol accumulated in hLrp1−/− hepatocytes compared with that found in hLrp1+/+ hepatocytes may be responsible for induction of hedgehog pathway genes, it is likely that an additional mechanism(s) may be involved. It is of note that dietary promotion of NASH and hedgehog pathway induction has been attributed to elevated activity of the Hippo pathway transcriptional activator TAZ (22). In view of the recent reports linking Hippo signaling to liver size and hepatocellular carcinoma (25), the relationship between low LRP1 levels and hepatocellular carcinoma progression may also be due to TAZ activation in the absence of LRP1. Additional studies are warranted to test this possibility. As summarized schematically in Fig. 11, results of the current study showed that dietary cholesterol and hepatic LRP1 deficiency act synergistically to promote the transition of NAFLD to NASH. The contributory role of LRP1 deficiency is probably mediated by excessive accumulation of cholesterol-rich lipid droplets and protein aggregates to increase lipotoxicity and inflammation as well as induction of hedgehog pathway genes to activate hepatic stellate cells. These results from mouse models are consistent with human epidemiology studies that have revealed dietary cholesterol consumption as an independent risk factor for liver cirrhosis and hepatocellular carcinoma (3) and revealed that low LRP1 levels are also associated with poor outcome of hepatocellular carcinoma after curative resection (6). Taken together, these results suggest that LRP1 expression level may be a genetic variable that dictates susceptibility of individuals to the effect of dietary cholesterol on liver diseases.

Antibodies

Antibodies against LRP1 were made in rabbits using a peptide sequence corresponding to the C terminus of the 85-kDa subunit of human LRP1 and affinity-purified against the synthetic peptide antigen. Specificity of the LRP1 antibodies was verified based on reactivity with a single 85-kDa band on Western blotting with liver lysates from hLrp1+/+ mice, with no reactivity detected with liver lysates from hLrp1−/− mice. All other antibodies as well as primers used in the current study were obtained commercially as listed in Tables 1 and 2.

Body composition and lipid measurements

Body weight measurements were performed biweekly, and body fat mass was measured in conscious mice using 1H magnetic resonance spectroscopy (EchoMRI-100, Echo-medical Systems) as described previously (27). Plasma was prepared from blood samples obtained from mice after an overnight fast. Plasma cholesterol, triglyceride, and NEFA were quantified.

Figure 11. Schematic summary of dietary triglyceride (TG) and cholesterol (CH) handling by hLrp1+/+ and hLrp1−/− hepatocytes. In normal hepatocytes, triglyceride and cholesterol are stored in lipid droplets (LD), and excessive cholesterol is excreted into the bile via ABCG5/8 as well as converted to bile acids (BA) for excretion. In hepatocytes with LRP1 deficiency, where ABCG5/8 and cyp7a expression is reduced, excessive cholesterol increases CD14+ cell and monocyte/macrophage/Kupffer (F4/80) cell activation, leading to liver inflammation.
Excessive cholesterol in hLrp1−/− hepatocytes also promotes hedgehog (Hh) signaling activation, resulting in transactivation of hepatic stellate cells (HSC) with elevated fibrosis.

Liver tissue preparation and analysis

Mice were anesthetized and then perfused with 3 ml of 10% formalin in PBS. Liver tissues were then excised and weighed before storage in 10% formalin for 48 h before cryopreservation. Cryosections of 5-μm thickness were stained with hematoxylin and eosin for histological examination. Cryosections were also stained with Sirius Red (Abcam) to determine fibrosis. Immunofluorescence detection of antigens was performed with primary antibodies against F4/80, CD68, and smooth muscle α-actin, followed by secondary antibodies conjugated to Alexa Fluor 488 or 594. Immunohistological analysis of Mallory-Denk bodies was performed by staining with anti-cytokeratin 8/18 and visualized using a VectaStain Elite ABC kit (Vector). Additional cryosections were permeabilized to detect apoptotic cells with the TUNEL staining kit (Roche Diagnostics). All sections were counterstained with 4′,6-diamidino-2-phenylindole, and images were obtained with an Olympus BX61 microscope. Images were analyzed and quantified by ImageJ software.

Bile acid analysis

Bile acids were measured enzymatically using the mouse bile acid assay kit (80470, Crystal Chem) as described (28). Briefly, preweighed tissue and ground dried feces were extracted in 75% ethanol at 50°C for 2 h. After centrifugation, 100 μl of the supernatant was then diluted with 400 μl of PBS for the assay.

Autophagic flux assay

Autophagic flux was measured in vivo as described by Haspel et al. (29). Briefly, HFHC-fed hLrp1+/+ and hLrp1−/− mice were fasted overnight before receiving an intraperitoneal injection of saline or leupeptin (40 mg/kg body weight) dissolved in saline. The mice were euthanized after 4 h, and livers were obtained and homogenized in buffer containing 10 mM Tris-HCl, pH 8.0, 5 mM EDTA, and 250 mM sucrose. Lysosome-enriched fractions were obtained by centrifugation for 10 min at 700 × g at 4°C, followed by centrifugation of 2 mg of the homogenate for 30 min at 20,000 × g. The pellets were washed twice and solubilized in SDS gel sample buffer for analysis.

Flow cytometry

Mice were anesthetized, and livers were perfused with 10% Krebs-Henseleit buffer containing 0.5 mM EGTA before infusion with a digestion solution of Krebs-Henseleit buffer containing 125 units/ml collagenase (Sigma) and 2% BSA. The livers were excised and capsule-stripped in RPMI medium containing penicillin and streptomycin. The liver homogenates were sterile-filtered and then centrifuged to remove cell debris before pelleting hepatocytes and Kupffer cells by differential centrifugation at 90 × g and 300 × g, respectively. The remaining cells were plated in Dulbecco's modified Eagle's medium for 2 h and then detached with Accutase (Sigma) into flow cytometry buffer (Hanks' balanced salt solution containing 2% BSA and 0.1% sodium azide). The cells were incubated with conjugated anti-F4/80 or anti-CD68 with secondary antibodies conjugated to Alexa Fluor 488 and then analyzed using the Guava EasyCyte™ 8HT flow cytometry system (Millipore). Data were analyzed using Guava Incyte software (Millipore).

Western blots

Livers were perfused and then homogenized in ice-cold radioimmune precipitation assay buffer (Thermo Fisher Scientific) containing protease and phosphatase inhibitor mixture (Roche Diagnostics).
Proteins in whole-cell lysates were resolved by SDS-PAGE and then transferred to polyvinylidene difluoride membranes (Bio-Rad). The membranes were blocked in Odyssey blocking buffer (LI-COR) with 0.1% Tween 20 for 1 h at 4°C and then incubated for 90 min with a 1:1000 dilution of primary antibodies (Table 1). The membranes were washed and then incubated with horseradish peroxidase-conjugated secondary antibodies and visualized using enhanced chemiluminescence reagents (Pierce). Densitometry analysis was completed using ImageJ software.

Quantitative real-time RT-PCR

Livers were homogenized in Qiazol (Qiagen), centrifuged twice to remove lipid contents, and then subjected to Qiazol column purification of total cellular RNA. Reverse transcription was performed using an iScript kit (Bio-Rad), and PCR was performed on a Bio-Rad SyberOne RT-qPCR thermocycler using primers as indicated in Table 2.

Caspase activity

Mice were anesthetized using isoflurane, and livers were perfused with 3 ml of ice-cold PBS. Whole livers were excised and flash-frozen with liquid nitrogen. Tissue was thoroughly homogenized using buffer provided in the caspase activation kit (Pierce), and fluorescent caspase activity was determined via fluorescence at 496/520 nm. Sample fluorescence was normalized based on protein content (Life Technologies, Inc.).

Cholesterol enrichment of primary hepatocytes

Primary hepatocytes were isolated from hLrp1+/+ and hLrp1−/− mice as described previously (11). After plating in culture dishes for 4 h to achieve adhesion, the cells were treated with or without cholesterol-methyl-β-cyclodextrin (Sigma, catalogue no. C4951) in serum-free Williams medium for 16 h before harvesting for RNA isolation. Gene expression was assessed by quantitative RT-PCR as described above.

Hepatic stellate cell activation

Conditioned media were collected from hLrp1+/+ and hLrp1−/− hepatocytes after a 16-h incubation with or without 20 μM cholesterol-methyl-β-cyclodextrin and added to the T6 hepatic stellate cells for an additional 24-h incubation. The hedgehog inhibitors cyclopamine (Cayman Chemicals) and GANT-58 (Cayman Chemicals) were included in selected cultures at 10 μM concentrations. At the end of the incubation period, cellular RNA was isolated for RT-qPCR analysis.

Statistical analysis

Statistical analysis was performed using SigmaPlot version 13.0 software (Systat Software, San Jose, CA). All data were expressed as mean ± S.E. Normality was examined using the Shapiro-Wilk test. Data with equal variance based on Levene's analysis were evaluated by Student's t test or analyzed by two-way analysis of variance. When analysis of variance demonstrated significant differences, individual mean differences were analyzed with the Student-Newman-Keuls test. Differences at p < 0.05 were considered statistically significant.
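The two-group branch of this decision flow can be sketched in a few lines of analysis code. The snippet below is illustrative only: it is not the authors' code, the measurements are hypothetical placeholders, and the non-parametric fallback is a common convention rather than a procedure stated in the paper. It simply mirrors the sequence described above (Shapiro-Wilk normality check, Levene's test for equal variance, then Student's t test); the factorial comparisons would instead be handled by two-way analysis of variance with Student-Newman-Keuls post hoc tests.

```python
# Illustrative sketch (not the authors' code) of the two-group statistical workflow
# described above; the measurement values below are hypothetical placeholders.
import numpy as np
from scipy import stats

wt = np.array([3.1, 2.8, 3.4, 3.0, 2.9, 3.2])   # e.g., hLrp1+/+ liver measurements
ko = np.array([4.2, 4.6, 3.9, 4.4, 4.8, 4.1])   # e.g., hLrp1-/- liver measurements

normal = all(stats.shapiro(g).pvalue > 0.05 for g in (wt, ko))   # Shapiro-Wilk normality
equal_var = stats.levene(wt, ko).pvalue > 0.05                   # Levene's test

if normal and equal_var:
    stat, p = stats.ttest_ind(wt, ko)        # Student's t test, as described in the text
else:
    stat, p = stats.mannwhitneyu(wt, ko)     # common non-parametric fallback (assumption)
print(f"p = {p:.4g}; significant at p < 0.05: {p < 0.05}")
```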
2018-05-21T20:27:39.735Z
2018-05-11T00:00:00.000
{ "year": 2018, "sha1": "8a4a3666a38b56d71c0aad87ae4a8121d3d34628", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/293/25/9674.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "e599452a42f95ce161fde49185e2a09418f16a6b", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270141821
pes2o/s2orc
v3-fos-license
INTEGRATION OF GREEN OPEN SPACES IN TOURISM ACCOMMODATION GLAMPING RICHLAND BATURITI BALI

Glamping tourism is currently experiencing rapid growth. This is because this type of tourism offers a unique natural tourism experience with modern and comfortable facilities. This research focuses on green open spaces at Richland Glamping Baturiti Bali. Richland Glamping Baturiti is a glamping accommodation that provides green open space in an area of outstanding natural beauty. This allows the development of green open spaces that harmonize optimally with the surrounding environment. Hopefully, this research can contribute to realizing glamping tourism which provides a unique and exciting tourist experience, preserves nature and supports sustainable development. This research uses qualitative methods, taking as its object the Richland glamping case study in Bali, which has green open spaces. Primary data was collected through observation and documentation, while secondary data was obtained from the library. All information and data are analyzed using relevant theories and then explained descriptively. Glamping offers luxury accommodations in nature with a commitment to cleanliness and personal service. The green open space is designed with ecological principles to improve the quality of life, groundwater, and ecosystem balance. Through these environmental design principles, an analysis was carried out on the green open space at Glamping Richland Bali. The Richland Glamping Baturiti Bali case study shows how these principles can be applied by choosing the right plants and location, creating a harmonious and environmentally friendly experience for visitors.

Introduction

The tourism industry in Indonesia has experienced rapid growth in recent years. Glamping tourism has become an increasingly popular trend among the various developing types of tourism. Glamping offers a unique and different nature tourism experience compared to traditional hotels, where tourists can enjoy the beauty of nature and feel the sensation of camping with more modern and comfortable facilities. According to (Sinaga & Fitri, 2022), the term glamping is an abbreviation of the words "glamorous" and "camping", which means a camping style with complete facilities that is more luxurious than traditional camping activities in general. However, the popularity of glamping also raises concerns about its environmental impact. Glamping is synonymous with tourist accommodation close to nature; its main attraction is its location, which offers stunning natural scenery. Integrating green open spaces into the glamping concept presents an opportunity to create a balanced environment between humans and nature. This can positively benefit the psychology of the occupants (civitas) and support environmental sustainability practices. The integration of green open spaces is closely related to the interior design of the glamping building itself. As a core element in design, space plays a central role in creating an unforgettable experience for visitors. The use of space in the design of glamping buildings affects not only the visual aspect but also the emotions and perceptions of visitors toward their environment.
Based on this explanation, this research aims to examine the impact of green open spaces on visitor satisfaction and how these green open spaces influence the design of glamping buildings. This research takes as its case the green open space at Richland Glamping Baturiti Bali. Richland Glamping Baturiti is one of the recommended glamping sites in Bali, providing green open space in an area famous for its natural beauty. With its potential for green open space, this case is an interesting object for further study. It is hoped that this research will contribute to realizing glamping tourism that provides unique and exciting tourist experiences, preserves nature and supports sustainable development.

Research Method

This research employs a qualitative methodology emphasizing exploration and in-depth understanding of the Richland Bali glamping case phenomena. Qualitative data was collected through document analysis, observation of visitor interactions with green open spaces, and semi-structured interviews conducted with visitors and staff at the glamping accommodation. Additionally, secondary data was obtained from written sources such as scientific articles, books, media publications, and trusted websites. Thematic analysis was then conducted on the collected data to identify recurring themes and patterns, aiming to reveal insights into the relationship between green open spaces, glamping design, and visitor satisfaction.

A. Glamping Tourism Concept

Glamping is an abbreviation of "glamorous camping", offering a different camping sensation with a touch of modern luxury and comfort. In contrast to simple traditional camping, glamping provides a variety of complete facilities that far exceed ordinary camping standards. An article (Adamovich et al., 2021) says that the tradition of camping as natural recreation has been known for a long time. This activity is usually carried out by setting up a tent in the open air for a certain period. Tracing its history, according to (Vrtodušić Hrgović et al., 2018) the concept of camping has existed since the time of the Ottoman Empire. During the Ottoman era, luxury tents equipped with various facilities were erected for the Sultan, and during the time of Genghis Khan, Mongolian tribes used yurts (circular portable tents) as comfortable places to live. Yurts can be easily disassembled and assembled, so they are very suitable as residences for tribes with a nomadic lifestyle (a lifestyle that moves from place to place). The transformation of camping activities continues to develop. Glamping is not limited to tents but also offers more comfortable accommodations such as caravans and various other types of temporary shelter. This is in line with the opinion of (Utami, 2020) in previous research, which states that glamping accommodation can be divided into several types, namely:
• Treehouse: Offers a unique experience of staying at a height with beautiful natural views.
• Bubble: A transparent tent that allows guests to enjoy unobstructed views of nature.
• Tent: A classic accommodation option with modern luxury and comfort.
• Van: Offers mobility and flexibility for adventurers exploring various places.
• Bungalow: Comfortable and private accommodation with an attractive design.
• Cabin house: A small wooden house that provides a warm and comfortable atmosphere.

Different from the concept of camping in general, glamping has unique characteristics that make it more popular today. According to (Juniarta et al., 2022) glamping has five main factors, namely:
• Has complete facilities. Even though many glamping buildings take the form of tents or non-permanent structures, they still have complete facilities inside. These facilities can include a comfortable bed, a cupboard for storing clothes or other items, electricity, air conditioning, WiFi, cooking utensils, and even television and other entertainment facilities. Regarding facilities, whereas other glamping sites may use an outside or shared bathroom, at Richland Bali each room is equipped with its own bathroom. Camping is generally synonymous with activities that require a lot of preparation, so only some people are able and willing to do it. In glamping, by contrast, the manager provides all the facilities, so visitors can skip preparing their own equipment. This allows glamping tourism to be enjoyed by all levels of society, from children to older people.
• Attractive natural environment. Glamping sites are generally located in open areas with attractive views, such as mountains, forests, beaches, or lakes; likewise Richland, which has views of the lake and hills. Beautiful views, such as towering mountains, dense forests, or stunning waterfalls, can provide calm and peace for visitors. Being in an attractive natural environment has many benefits for health and well-being. Fresh air, calming natural sounds, and beautiful views can help reduce stress, improve mood, and improve sleep quality.
• Service. Glamping staff usually consist of friendly, professional people who are always ready to help guests. They will ensure guests feel comfortable and satisfied during their stay at the glamping. The services generally provided are regular accommodation cleaning, laundry, food and drinks, and other services.
• Cleanliness of the surrounding area. Cleanliness is an essential factor in glamping. The glamping area, including accommodation, bathrooms, and public places, must always be kept clean so that guests feel comfortable and avoid illness. Glamping staff are usually responsible for cleaning the glamping area regularly. Guests are also expected to maintain cleanliness by throwing rubbish in the right place and not damaging existing facilities.
• The building has an attractive shape. Besides choosing a location with a superior view, glamping accommodations also compete to provide a memorable staying experience through unique and beautiful building design. This attractive and unique building shape is a special attraction for glamping visitors. In this case, there are three types of tents offered by Richland Bali with different shapes, as seen in the picture below. Regarding the unique shape of the building, one source states that architectural form is a visual characteristic that can become an identity and differentiate a building from others (Toddy & Noorwatha, 2019).

The various factors and types of glamping accommodation are the main attraction for visitors. As a new tourist trend, glamping can also combine history and culture packaged with a modern touch. Various unique and innovative glamping accommodation options are a special attraction for tourists who want to enjoy the beauty of nature in a distinctive way.
B. Principles of Green Open Space Design

According to Pancawati (Peramesti, 2017), open space can be divided into green open space and non-green open space. Green open space refers to an area allocated and designated as a public space wholly or partially covered with trees, vegetation, and other natural elements. In his writing, (Afaar, 2015) said that physically green open space can be divided into two types, namely natural and non-natural green open space. Natural green open spaces can be protected areas, national parks, and all areas with natural wild habitats. Meanwhile, non-natural green open space is open space that is deliberately created for specific purposes, such as parks, fields, and others.

The existence of green open space has many benefits. In line with this statement, (Dharmadiatmika, 2017) said that green areas are needed to improve the quality of life. Green open space is planned to create a place or space for interaction in an open area. Additionally, green open space can improve air quality, reduce air temperature in the surrounding environment, and increase oxygen levels. Another opinion (Afaar, 2015) divides the function of green open space into four categories: ecological, socio-cultural, architectural, and economic.
• Ecologically, green open spaces are said to increase groundwater through the absorption of rainwater. Rainwater that seeps into the ground can help improve groundwater quality in the surrounding area. Rainwater absorption can also reduce water flow on the surface, reducing the risk of flooding. Regarding ecological factors, plants in green open spaces can absorb carbon dioxide (CO2) in the air, thereby maintaining air quality and ecosystem balance.
• Socioculturally, green open spaces can facilitate communication and interaction. Their existence can also allow users to relax, enjoy nature and reduce stress. Trees, flower gardens and green views create a calming atmosphere. Green open spaces can also reflect local identity, for example by including elements of sculpture, use of local plants, and other elements of local traditions in green open space elements.
• Architecturally, green open space can be a supporting aesthetic element for architectural buildings. The existence of buildings and the green areas around them does not stand alone; each influences and supports the other. The presence of green landscapes, flowers, and fountains will add to the appeal and support the visuals of the building as a whole.
• Economically, green open space can be used as agricultural land or plantations. Alternatively, green open spaces can also be developed into tourist locations that attract tourists. According to (Sinatra et al., 2022), efforts to develop green open space as a place for recreation and tourism aim to prevent and minimize damage, the majority of which is caused by human behavior that is not environmentally friendly.

Green open space is essential in glamping designed based on ecological design principles to benefit the environment and visitors. According to (Syarapuddin et al., 2016) the principles of environmental design are:
• Understanding of local communities, especially socio-cultural aspects;
• The planned design can maintain the ecosystem;
• Able to reduce energy and material use;
• Creating a harmonious relationship between culture and nature;
• Able to maintain aspects of the natural environment (soil, plants, etc.).

Adding to this opinion, a source (Monica et al., 2023) in previous research said that ecological design principles are needed to respond to the needs of glamping tourism. Ecological design principles focus on creating environments that are in harmony with nature and minimizing human environmental impact. In this context, there are several things to note:
• Integration with Local Ecological Conditions: the design must consider the unique characteristics of the local environment, including flora, fauna, and soil conditions.
• Response to Micro and Macro Climate: the micro climate refers to conditions within the project area, while the macro climate involves regional weather and climate patterns.
• Site Conditions and Building Program: site conditions, such as topography and drainage, influence the design. The building program (functions and needs) must also be well integrated.
• Concept and Aesthetics: ecological design must create harmony between function and beauty. The design concept must combine natural and human elements.
• Minimize Energy Use: the design should minimize energy consumption.
C. Analysis of Green Open Space at Glamping Richland Bali

Integration with local ecological conditions means utilizing the potential of the local environment in planning and developing green open spaces. This principle refers to including and combining surrounding natural elements in the planning and development process of open areas. The aim is to create harmony between glamping facilities and the surrounding natural environment. By understanding local environmental conditions, we can optimize the benefits of green open space. According to Purnomohadi (Lestari et al., 2013) vegetation is the most essential element that plays a role in green open space. The choice of plant will affect the function and impression it creates. Adding to this opinion, (Afaar, 2015) said that plant selection must consider several things such as climate, soil, and local flora. Suitable plants will grow well and support the ecosystem. According to Ministerial Regulation no. 5 of 2008, the criteria for selecting vegetation for environmental and city parks are:
a. Not poisonous or thorny; the branches do not break easily, and the roots do not disturb the foundation;
b. The canopy is quite leafy and compact but not too dark;
c. The height of the plants varies; the green color with other color variations is balanced;
d. The stature and shape of the crown are quite beautiful;
e. Medium growth speed;
f. In the form of a habitat for local plants and cultivated plants;
g. Annual or seasonal plant type;
h. Plant spacing is semi-close to produce optimal shade;
i. Resistant to plant pests and diseases;
j. Able to adsorb and absorb air pollution;
k. As far as possible, a plant that attracts birds.

Glamping Richland Bali is located in Candikuning Village, Baturiti District, Tabanan Regency, precisely on the shore of Lake Beratan, at an altitude of 1,239 meters above sea level. The location is also surrounded by hills, giving it a beautiful panorama and cool air. Richland Bali has quite an ample green open space at the back of its site, directly facing the lake. As seen in the picture below, the space looks spacious and airy because it is only filled with several types of ornamental plants that are not too tall and a stretch of green grass. Spacious green open spaces allow visitors to carry out outdoor activities freely. In addition, the green open space design appears to blend with the mountainous background, showing integration with local geographic characteristics. A Salvia farinacea plant, better known as blue salvia, is found on the edge. This plant has characteristic purple flowers with a shape resembling lavender flowers. It is a type of shrub that usually grows in clumps, can grow up to 90 cm high and has an upright growth direction. These characteristics make this plant suitable as a border plant (Salvia farinacea (Mealy Cup Sage), n.d.). In line with this explanation, this plant is used as a border between the land and the water/lake area at Glamping Richland Bali. It grows along the edge of the Richland Bali Glamping area, forming a border, and adds to the beauty of the lake panorama. The combination of purple flowers and the lake view in the background is one of this glamping accommodation's most popular photo spots.
The Richland Bali glamping area primarily features short ornamental plants. Shade plants are absent, and even the green open spaces are dominated by grass. Interviews revealed that, despite the cool highland location, the lack of shade makes visitors uncomfortable, especially on hot days. According to Ridwan (2022), previous research suggests that trees can function as shade plants, lowering temperatures and reducing the impact of solar radiation. Areas shaded by trees can experience air temperatures 30°C-40°C cooler than surrounding areas exposed to direct sunlight. Factors that must be considered when choosing shade plants are tree height and leaf density. Trees exceeding 15 meters in height with high leaf density are said to be most effective at capturing radiation and creating comfortable shaded areas. Based on this analysis, Glamping Richland Bali should consider providing shade trees in its green open areas. This would allow visitors to carry out outdoor activities comfortably during the day. Furthermore, at the front of the tent, the ornamental plants used are Dypsis lutescens, Philodendron selloum, and Syzygium oleana, which are types of shrubs that can grow up to 2 meters high. Unlike the other plants, which tend to be short, these ornamental plants have another function apart from being garden decoration, namely as dividers between tents. At Richland Bali, each tent has a terrace as a transition space between the tent and the outside area. Each terrace has chairs and tables so visitors can use them to chat or enjoy the view. Because these ornamental plants can grow quite tall, they can become a visual barrier between tents, providing privacy for visitors who are active in the terrace area. Concepts and aesthetics in ecological design refer to the harmonization of function and beauty. In environmental design, function refers to the ecological, social, and cultural roles and benefits produced by an element or system. Meanwhile, beauty refers to the visual and aesthetic aspects that enrich the user experience.
The green open area at the glamping site consists of plot areas separated by paths. Each tent has a front open space covered with short grass. While a campfire stove is provided, there are no garden chairs or other amenities in the green open area. Chairs are only available on the glamping unit's terrace. The current setup offers a unique experience, aligning with the glamping concept of picnics and camping activities. However, using the mats and small tables, which are provided only upon request, becomes uncomfortable for extended periods. Furthermore, although chairs are available on the terrace, visitors may prefer to relax in the center of the green open area. According to Mumcu & Yılmaz (2016), the success of green open spaces hinges on how well they facilitate interaction and activities, both social and individual. Research suggests a lack of seating can significantly hinder social interaction in green spaces. Seating directly impacts a person's comfort level and the duration of interaction. The more comfortable the seating options, the more likely the community is to spend time outdoors engaging in conversation, increasing the frequency, duration, and variety of outdoor activities.

Apart from lakes and mountains, Richland Bali is also surrounded by agricultural land. The agricultural land uses fertilizer, which can attract flies and be a nuisance. In line with one of the principles of green open space, the integration of the building and the surrounding environment must be well-established and mutually supportive. Buildings are physical structures built as shelter for humans. In the case study of Glamping Richland Bali, the building is in the form of a non-permanent tent. The problem was that visitors were reluctant to open the tent because of the flies, even though the beautiful view outside can only be enjoyed from inside the tent if the door is opened. To overcome this, the management installed an additional layer of netting on the tent door so visitors could still enjoy the natural panorama without being disturbed by flies inside the tent. This aligns with one of the ecological design principles, which states that the building program is essential in creating harmony between the natural environment and the building itself. The building program refers to the goals and space requirements of the facility. In the context of glamping accommodation, the building program must consider all facilities that support the tourist experience. By applying ecological design principles, glamping can be a tourism option that is fun and beneficial for nature conservation and visitors' health.

Conclusion

Glamping offers a unique and sustainable tourism experience combining luxury, comfort, and natural charm. Green open spaces within glamping accommodations provide various benefits, including improved air quality, groundwater quality, and the balance of the surrounding ecosystem. These spaces can also facilitate social interaction and complement the design of glamping buildings. The selection of appropriate vegetation, utilization of the surrounding environment's potential, and a harmonious design are crucial factors for a successful green open space. Glamping Richland Bali, for example, uses short vegetation to ensure unobstructed views of the natural panorama. However, the current green open space needs shade plants, shade structures, and comfortable seating, especially for daytime use. This finding aligns with previous studies that emphasize the importance of visitor comfort in outdoor spaces. By addressing these shortcomings, glamping accommodations can optimize the utilization and enjoyment of their green spaces, leading to increased visitor satisfaction and a stronger reputation for eco-friendly services.
Glamping Richland Bali utilizes non-permanent tent structures for its accommodations. These tents integrate seamlessly with the surrounding green open space, with strategically placed openings maximizing views and access to the area. While this research highlights the benefits of incorporating green open spaces in glamping accommodations, further research is necessary. This additional research should explore long-term environmental impacts and investigate visitor perceptions of design elements like seating arrangements and vegetation types. The information from such research will be valuable for developing future green open spaces in glamping accommodations.

Figure 2. Type of glamping in Richland Bali (Source: Personal Documents, 2024)

Figure 6. The open area in front of the tent can be a picnic spot (Source: Personal Documents, 2024). To maximize visitor enjoyment of the green open spaces, various types of outdoor seating can be incorporated to support different activities. Long benches encourage communal gatherings, while hammocks, day beds, or sun loungers provide options for relaxation.

Figure 7. Type of outdoor seating facility (Source: Pinterest, 2024). Apart from softscape elements, the green open space of Glamping Richland also contains hardscape elements in the form of paths that function as circulation routes. The paths are continuous and connected and covered with gravel. The footpath is comfortable to walk on and can be walked one after another. Laurie (Setya Mariana, 2008) said footpaths in green open spaces are essential; their existence can form patterns and support the exploration experience.
2024-05-31T15:14:34.893Z
2024-05-29T00:00:00.000
{ "year": 2024, "sha1": "6521d6febbc99f9f48b637153adeafb68cbf623d", "oa_license": "CCBYNCSA", "oa_url": "https://jurnal.isi-dps.ac.id/index.php/lekesan/article/download/2839/1031", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ce3418d05bd14a64fb8e2904540370c5859371c5", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
255839637
pes2o/s2orc
v3-fos-license
Preparation by mandatory E-modules improves learning of practical skills: a quasi-experimental comparison of skill examination results

Until recently, students at UMC Utrecht Faculty of Medicine prepared for practical skills training sessions by studying recommended literature and making written assignments, which was considered unsatisfactory. Therefore, mandatory e-modules were gradually introduced as a substitute for the text based preparation. This study aimed to investigate whether this innovation improved students' performance on the practical skills (OSCE) examination. In both the 2012 and 2013 OSCEs, e-modules were available for some skill stations whereas others still had text based preparation. We compared students' performance, both within and between cohorts, for skill stations which had e-module preparation versus skill stations with text based preparation. We found that performance on skill stations for which students had prepared by e-modules was significantly higher than on stations with text based preparation, both within and between cohorts. This improvement cannot be explained by overall differences between the two cohorts. Our results show that results of skills training can be improved by the introduction of e-modules without increasing teacher time. Further research is needed to answer the question whether the improved performance is due to the content of the e-modules or to their obligatory character.

Background

Before medical students start their first clerkships, they have to be thoroughly trained in physical examination skills. Most medical programmes have introduced practical skills training sessions. Skills training requires practice materials, rooms, and a high teacher-student ratio, which makes skill training very resource intensive. It is important to use the time and resources available for skills training as efficiently as possible, to optimize learning outcomes. The medical curriculum of UMC Utrecht Faculty of Medicine is a six year programme characterized by early patient contact: the medical students in Utrecht start with their first full-fledged clerkship in the third year. To prepare the students for these early clerkships, in the first two years of the programme students learn how to perform the basics of physical examination (e.g., examination of lungs, elbow or neurological examination) and other basic medical skills (e.g., intramuscular injections or resuscitation). In addition to these supervised training sessions, students have the possibility to practice and further improve their skills by unsupervised practice in a skills lab, as well as the possibility to consult the skills teachers with any questions they have, by using the electronic learning environment and by appointment. Even with these additional training possibilities, in all likelihood the regular training sessions are the most important element in mastering the skills, because they are the only form of preparation that involves modeling of a skill by a trainer. Students are required to prepare for the training sessions, in order to keep time for instructions to a minimum, leaving as much time as possible to actually practice the skills under supervision. Until recently, students had to prepare by studying recommended literature and making assignments about the theoretical, practical and clinical background of the skill. In the rest of the paper we will refer to this type of preparation as 'text based preparation'.
Due to time limitations these written assignments were not discussed in detail in the training sessions and students did not receive any feedback on their preparation. Thus, many students lacked motivation to thoroughly prepare (confirmed by course evaluations in which students indicated they did not prepare well). As a consequence, teachers noticed that most students arrived at the training sessions unprepared and valuable practice time was lost introducing the clinical background, the necessary anatomical knowledge and performance of the skill to the students. In addition to the lack of student motivation, the text based preparation might not have been the best means to prepare students for a practical skills training session due to its inherently passive nature (it is impossible to exercise the skill). Therefore, in 2010 the faculty decided to gradually substitute e-modules for text based preparation. These e-modules cover the same content as the text based preparation: information about the theoretical background of the skill and its clinical application, as well as instructions on how the skill is performed in practice. However, in the e-modules this content is delivered in a richer format, including video, sound, animations and exercises with direct feedback. An e-module finishes with a formative self-test and takes students approximately 1-1.5 h to complete. The e-modules were developed specifically for our practical skills training programme. Because the development of e-modules is time and labor intensive, e-modules were developed successively over a prolonged period of time for all 16 physical examination sessions from 2011 to 2014. Thus, from academic year 2011-2012 until 2013-2014, e-modules were available for some skills training sessions, whereas for other skills training sessions students still used text based preparation. Besides being more appealing to students, the digital nature of e-modules allows teachers to check whether the students have completed the e-module. At our faculty, completion of the modules is mandatory before students are allowed to participate in the training session. In contrast, for the text based preparation it was not practically possible to check whether students had prepared for their skill training session. Although completing the e-modules is obligatory, and completion is checked in the electronic learning environment before each training session, no control is exerted as to how thoroughly the students study the issues addressed. Students have to proceed through the e-module linearly and cannot skip topics or interactive elements the first time. After completion of the e-module (which is registered in the electronic learning environment), the e-module remains available to the student. The addition of e-modules to a practical training session is a form of blended learning. Blended learning has consistently been shown to have a high learning satisfaction and either an equal or more positive effect on theoretical knowledge acquisition and clinical reasoning [1][2][3][4][5]. For practical skills training preparation, however, few studies have been published. Bloomfield and Jones [6] compared student satisfaction in learning clinical skills with a blended learning approach versus a traditional approach. In this research, students did think e-modules could be valuable for developing clinical skills, but they did not want to relinquish practical training sessions. Two other studies, by Arroyo-Morales et al. [7] and Orientale et al.
[8], showed that the availability of media material, like videos, in addition to the face-to-face lessons, could improve the performance on skills like palpation of the knee or other physical examination skills. However, their e-learning did not include interactive elements. This study aimed to investigate whether obligatory interactive e-modules as preparation for practical skills training sessions led to better performance of the skills by students than training sessions with text based preparation. The learning outcome was measured by the average scores obtained at a 2-station Objective Structured Clinical Examination (OSCE) for physical examination skills at the end of each academic year. This is a high-stakes examination where all students will try to perform to the best of their abilities, because admittance to their clinical rotations is conditional on passing this OSCE. In the OSCE we measure the final level at which students have mastered these skills. This level does not depend solely on preparation and training sessions, because in addition to re-using the e-module, students have other opportunities to practice these skills to prepare themselves for the OSCE, e.g. exercising the skills on fellow students [9, 10]. The time lag between the practical training sessions and the OSCE ranged from a few weeks up to 20 months. This latter time lag would occur if a student attended the practical training of a particular skill in the beginning of year 1 and was tested for this skill in the OSCE at the end of year 2. Given this time lag and the array of opportunities students had to prepare themselves for the OSCE, it could be expected that we would only be able to find small differences between students' performance on skills they prepared for with text based preparation versus e-modules. However, if we were to find higher OSCE scores for students tested on skills for which e-modules were available than for skills with text based preparation, this would be evidence that use of e-modules could result in better performance of the skills tested by the OSCE. In other words, we predict that the OSCE exam has a higher overall score in 2012-2013 due to the introduction of more e-modules in that year.

Method

Our study covered the academic years 2011-2012 and 2012-2013. In both years e-modules were available for part of the skill training sessions, whereas for other sessions students still used text based preparation. In years 1 and 2, a total of 27 training sessions were scheduled: 16 physical examination training sessions, and 11 other basic medical skills training sessions (see Appendix). Each training session focused on a specific skill or set of skills and started with an instruction and a demonstration by a teacher to groups of eight students. Subsequently, the students practiced the skill in pairs, either on each other or using training materials (medical phantoms and low fidelity simulations, e.g. intravenous practice arms). The duration of each training session was 1 h and 45 min. All e-modules dealt with physical examination skills. As a result, our study was limited to these skills. Participants were all first and second year medical students at the UMC Utrecht who participated in the end-of-year 2-station OSCE. In this OSCE two different skills were tested in a 12 min time frame. The intervention was part of the regular quality improvement cycle, and the collected data were part of the routine educational assessment. Therefore, approval from the ethical committee of the Dutch Society of Medical Education was not necessary.
Data (OSCE scores, see Table 1) were collected from 15 stations in year 1, and 15 stations in year 2, so 30 stations in total. Our main unit of analysis was the individual student score. The individual student score of each station consisted of an average of 3 to 6 sub-items, depending on the station, each sub-item being scored on a 5-point scale. Theoretically the individual student scores could range from 1.00 to 5.00. The total number of students in the dataset was 780, each participating in 2 to 4 stations. In total we were able to collect data from 2010 individual student scores, from which we calculated the average scores per station (see Tables 2 and 3).

For a more detailed analysis of the effect of e-module preparation, scores for stations which had e-module preparation were compared with the scores for stations which had text based preparation. Three main comparisons were made:
1. A within cohort comparison of scores for all skill stations with e-module preparation versus all stations with text based preparation (different set of stations). We expected higher average performance at stations with e-module preparation for the training session.
2. A between cohort comparison of scores before and after introduction of mandatory e-module preparation (same set of stations). To check whether a possible difference could have been caused by a cohort difference in students' performance, we also compared average scores of stations with text based preparation for the training sessions in both years. Because in 2011-2012 only four stations had e-module preparation, we had few data to compare the results of stations that had e-module preparation in both 2011-2012 and 2012-2013. We predicted higher scores for stations which changed from text based preparation to e-module preparation, but no difference in scores for stations for which students had text based preparation in both years.
3. A within participants comparison (different stations). Because at our OSCE students were non-systematically assigned to the stations, it could occur that a student was examined on two stations which both had text based preparation, two stations which both had e-module preparation, or one station with text based preparation and one station with e-module preparation. For the last group of students (332 students) we compared the individual station scores between e-module versus text based preparation. We predicted that these students would have a higher score for the station with e-module preparation.

Data-analysis

The distribution of station scores was tested for normality. As this distribution departed significantly from normality, both for the text based preparation group (W = 0.988, df = 554, p < 0.0001) and for the e-module preparation group (W = 0.980, df = 502, p < 0.0001), the appropriate test would be a non-parametric test. However, as the data covered a wide range of decimal values on the 5-point scale and the distribution was unimodal, we decided nonetheless to perform independent T-tests [11]. A p-value of <0.05 is considered significant. The within participants comparison is analysed with a paired T-test.

Results

The average score was significantly higher in the academic year in which more e-modules were available to prepare for training sessions. In our subsequent analyses we tried to unravel whether this difference could be attributed to the introduction of blended learning.
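The testing procedure described under Data-analysis can be illustrated with a short script before turning to the individual comparisons. The sketch below is not the study's actual analysis code and uses hypothetical station scores; it simply shows the Shapiro-Wilk normality check followed by the independent t-test used for comparisons 1 and 2 and the paired t-test used for comparison 3, with SciPy assumed as the statistics library.

```python
# Illustrative sketch (not the study's code) of the tests named under Data-analysis.
# The station scores below are hypothetical; real scores ranged from 1.00 to 5.00.
import numpy as np
from scipy import stats

text_based = np.array([3.2, 3.6, 3.4, 3.9, 3.1, 3.7, 3.5, 3.3])
e_module   = np.array([3.5, 3.8, 3.6, 4.1, 3.4, 3.9, 3.8, 3.6])

print(stats.shapiro(text_based))               # normality check of the score distribution
print(stats.ttest_ind(text_based, e_module))   # independent t-test (comparisons 1 and 2)
print(stats.ttest_rel(text_based, e_module))   # paired t-test (comparison 3, same students)
```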
Comparison of scores for skill stations with e-module preparation and stations with text based preparation (within cohort, different stations)

Students from the 2012-2013 cohort obtained an average score of 3.58 ± 0.67 (n = 554) on stations with text based preparation versus 3.67 ± 0.71 (n = 502) on stations with e-module preparation. The average scores of stations with e-module preparation were significantly higher than the average scores of stations with text based preparation (t = 2.14, p = 0.03; Cohen's d = 0.13) (see Table 2).

Comparison of scores before and after introduction of mandatory e-module preparation (between cohorts, same station)

The average scores on stations at the 2012 (cohort 2011-2012) and 2013 (cohort 2012-2013) examinations showed that at the 2013 examination, students obtained a higher score on the stations which had moved from text based preparation in 2012 to e-module preparation in 2013: 3.44 ± 0.63 (n = 308) and 3.68 ± 0.70 (n = 314), respectively (t = 4.97, p < 0.01; Cohen's d = 0.36). This difference was significant (see Table 3). The results showed no significant difference between the two cohorts on stations for which students had prepared by text based preparation in both academic years: 3.57 ± 0.67 (cohort 2011-2012, n = 467) and 3.61 ± 0.67 (cohort 2012-2013, n = 517) (t = 1.47, p = 0.34). Thus, the higher score on the stations with e-module preparation was not a result of the cohort as a whole performing better on all stations.

Within participants comparison (within student, different stations)

For 332 students we had two station scores with different preparation: one station with text based preparation and one with e-module preparation. The within-subject comparison showed a station average of 3.50 ± 0.68 for the stations with text based preparation and an average of 3.60 ± 0.66 for stations with e-module preparation.

Discussion

Our results showed that students obtained on average a higher score for stations with an e-module preparation before the training session than on stations with text based preparation before the training session. We have confidence in our results, because we performed several comparisons that support this conclusion. To begin with, we demonstrated that students at the 2013 OSCE, when more e-modules were available to prepare for the skills training sessions, scored on average higher than at the previous year's OSCE, when fewer e-modules were available. Though this could theoretically be caused by the 2012-2013 cohort being better in general than its predecessor, within-cohort comparisons also revealed better scores on OSCE stations with e-module preparation than those without. As this comparison included different OSCE stations (different skills), we needed to corroborate these results by investigating OSCE scores of the same skills that moved from text based preparation to e-module preparation between 2012 and 2013 (between cohorts comparison). Again, we found better scores for skills with e-modules than for skills with written assignments. Finally, we compared within participant differences for those students who were tested on one skill with text based preparation versus a second skill with e-modules. Though this comparison involved different skills, the results were in line with the other findings: students showed better performance on skills for which they had prepared by e-modules.
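As a quick arithmetic check, the effect sizes quoted in these comparisons can be recomputed from the reported means, standard deviations, and group sizes using the pooled-standard-deviation form of Cohen's d. The short snippet below is illustrative only, and small discrepancies can arise because the published summary statistics are rounded.

```python
# Recomputing Cohen's d from the summary statistics reported above (illustrative check).
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd

# Within-cohort comparison (2012-2013): text based vs e-module stations
print(round(cohens_d(3.58, 0.67, 554, 3.67, 0.71, 502), 2))   # about 0.13, as reported

# Between-cohort comparison on stations that moved to e-module preparation
print(round(cohens_d(3.44, 0.63, 308, 3.68, 0.70, 314), 2))   # about 0.36, as reported
```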
Our findings add to an earlier evaluation of mandatory e-modules as preparation for training sessions, which reports high satisfaction in both students and teachers [12]. In addition to this higher satisfaction, the current study found more objective evidence of e-module preparation actually being more effective in improving learning outcomes from skill training sessions. The 2-station OSCE is a high-stakes examination for which students will prepare extensively. Students must pass this examination before they are allowed to start with their clinical rotations. In this context, it would not be very realistic to expect large effects of the substitution of text based preparation by mandatory e-modules, which is a relatively small intervention given that students have many more options to practice the skills. Yet, we were still able to demonstrate a significant effect. And although the average improvement may be quantitatively small, ranging from 0.09 (comparison 1) to 0.24 (comparison 2) on a 5-point scale, a difference of this size for the whole cohort may have a relevant impact on numbers of students failing the exam. It might be expected that in a large number of students many students score around the pass/fail cutoff. Therefore, a small improvement can result in an increase in the number of students passing the exam. It might be argued that our results were an artifact of e-modules being developed for relatively easy skills, on which students already showed better performance across the board. However, our results suggested the opposite, i.e. e-modules were developed for the more difficult stations. In 2011-2012 students obtained lower scores on the OSCE stations for which e-module preparation was developed in 2012-2013 than on the stations for which no e-modules were yet developed in 2012-2013 (Table 3). This was an accidental finding, because there was no faculty policy to prepare e-modules for more difficult skills first, but it substantiated our results. This suggests that our study may underestimate the actual improvement caused by the introduction of e-module preparation. Our study has a few limitations, though. First, the ideal study design to test an effect of this form of blended learning would be a controlled trial. However, for practical reasons such a design would not be possible as it would imply randomly assigning half of the students to an e-module preparation condition, and the other half to a text based condition. Such randomization should be made per training group to optimize expected gain in training time, but in practice students frequently change groups. Besides, students exchange information and it would be impossible to prevent students from using e-modules anyway. In an effort to overcome this limitation, we performed the abovementioned checks on the results. Because the study was not a controlled trial, the OSCE examiners were not blinded with respect to the availability of e-modules for the skills they assessed. Theoretically, whether or not students had e-module preparation could have influenced the scores the examiners assigned. Four of the seven examiners were also the developers of the e-modules. However, this would be far-fetched, if only because at the time of the examination the examiners were unaware that the results would be used to assess the effectiveness of the preparation for skill training sessions. We have no reason to expect any relationship between e-module preparation and students' assignment to OSCE stations. 
Second, the within student comparison showed better performance on stations which had e-module preparation versus text based preparation. However, it should be noted that the 2 OSCE stations were included in one OSCE time slot. Hence, one skill was always tested first, followed by the second skill. This implied that if a student took much time to perform the first skill, less time would be available to perform the second skill. We found after further analysis of our data that in this comparison, skills with e-module preparation were more often tested first than skills with text based preparation and hence, these latter skills had a somewhat higher probability of not being completed by a student because of lack of time, which might result in poorer performance. In this within participant comparison, our results indeed showed lower performance on skills tested as second station, which can, at least partly, be attributed to lack of time to complete the second station. This effect only occurred in the within students comparison; the other comparisons were not affected by this potential bias, because first and second tested skills had equally often e-module and text based preparation. Third, our intervention can be conceived as the introduction of a form of blended learning, and we cannot distinguish between the effects of three different aspects of the intervention: the fact that the E-module preparation has been made "obligatory" (i.e. unlike the text based preparation, we checked whether students went through the module), the richer format and the organization of the e-module (which was quite different from the text based preparation), and the fact that trainers changed their approach during the actual training sessions when the students had prepared by e-modules. For example, trainers had a strong impression they had to spend less time explaining the skill to the students and as a result, students had more time available to practice the skills. It is very likely that all these elements, mentioned above, reinforced each other and together contributed positively to the results. That is, we saw a spectacular increase in the number of students who indicated in the student evaluations that they prepared well (from 72 % not preparing to 2 % not preparing), and also the time they spent for the preparation increased substantially (from an average of 40 min per student in case of text based preparation to 69 min when they used e-modules). In addition, students pointed out receiving immediate feedback on their answers as very positive aspect of the e-modules. They missed this in the written assignments of the text based preparation. This could have improved their motivation to prepare more thoroughly. Teacher evaluations indicate that the training sessions became more effective, since the students were better prepared and more motivated, because they were already familiar with the physical examination skills, at least at a theoretical level. Thus, it remains an open question what caused the improvement, whether it was the content of the e-modules or some collateral factor, such as motivation or time on task. Skills education has an important place in medical education. With the growing attention to the importance of patient safety and qualified personnel in healthcare, its role is expected to increase even further. Extensive practical skills training is often limited due to resources; this type of education has a high teacher-student ratio and is resource intensive. 
Increasing the effectiveness of skills training without prolonging costly training time can therefore be very valuable for medical education. Our results show that obligatory e-modules before practical skills training sessions improve learning of physical examination skills in first- and second-year medical students, and hence contribute to better preparation of students for their first clerkships. Based on these results, it is certainly worthwhile for medical faculties to consider introducing a blended learning approach for practical skills training to enhance efficient use of this time- and labour-intensive type of education. Further research is needed to answer the question whether the improved performance is due to the content of the e-modules or to their obligatory character, and to find the most effective and efficient way to teach practical medical skills. Conclusion In conclusion, our results show that the outcomes of skills training can be improved by the introduction of e-modules without increasing teacher time.
2023-01-16T14:55:50.749Z
2015-06-10T00:00:00.000
{ "year": 2015, "sha1": "6786f20afec41fee5db34db7ef38582dcbbd569f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s12909-015-0376-4", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "6786f20afec41fee5db34db7ef38582dcbbd569f", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [] }
181935693
pes2o/s2orc
v3-fos-license
High-voltage vertical GaN-on-GaN Schottky barrier diode using fluorine ion implantation treatment This paper reports on a high-voltage vertical GaN Schottky barrier diode (SBD) using fluorine (F) ion implantation treatment. Compared with the GaN SBD without F implantation, this SBD effectively enhanced the breakdown voltage from 155V to 775V and significantly reduced the reverse leakage current by 10^5 times. These results indicate that the F-implanted SBD showed improved reverse capability. In addition, a high Ion/Ioff ratio of 10^8 and high Schottky barrier height of 0.92 eV were also achieved for this diode with F implantation. The influence of F ion implantation in this SBD was also discussed in detail. It was found that F ion implantation to GaN could not only create a high-resistance region as effective edge termination but be employed for adjusting the carrier density of the surface of GaN, which were both helpful to achieve high breakdown voltage and suppress reverse leakage current. This work shows the potential for fabricating high-voltage and low-leakage SBDs using F ion implantation treatment. © 2019 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). https://doi.org/10.1063/1.5100251 Gallium Nitride (GaN) power devices have attracted worldwide attention due to the huge potential for high voltage and high power application. 1 Recently, due to the breakthrough of bulk GaN growth, there has been a chance for GaN power devices to be grown homoepitaxially, which accelerates the progress of vertical GaN devices.
2 Compared to lateral devices, achievements can be realized easier in vertical devices including high power in small size, superior thermal management and high reliability. 3 With the merits of low turn-on voltage and high-speed switching, vertical GaN Schottky barrier diodes (SBDs) are highly desired for various high power application in electronic circuits. However, vertical GaN SBDs always suffer from high reverse leakage current and premature breakdown voltage. To solve the above challenges, one solution is to lower the doping concentration of drift layer. [4][5][6] It has been known that reducing the doping concentration of the top drift layer could reduce the peak electric field at the junction and thus enhance the breakdown voltage. Although controlling the doping concentration of GaN epilayer has been realized to a certain extent by metalorganic chemical vapor deposition (MOCVD), the ultra-low doping concentration (about 10 15 cm -3 or below) is still hardly obtained restricted by growth condition. [7][8][9] Another solution is to design effective edge termination. [10][11][12] It has been found that electric field crowding occurs at the edge of Schottky contact, and edge termination technique could distribute electric field to achieve high breakdown voltage. In particular, ion implantation technique is a way to form guard rings or create high-resistance region to enhance reverse capability, and various groups have investigated implantation of different species ARTICLE scitation.org/journal/adv (e.g. Mg, Ar, N and H) to be employed for GaN diodes. Compared with conventional SBD, Mg-implanted GaN rectifiers can obtained lower leakage current with breakdown voltage of 500-600V, while the efficiency of Mg activation is very low. 13 The breakdown voltage for vertical GaN device can be effectively increased by Argon implant edge termination, while argon ions would also contribute to reverse leakage current. 12,14 Meanwhile, fluorine (F) ion implantation technique has attracted great interests in AlGaN/GaN HEMT because F ion can deplete the 2DEG to form enhanced-mode AlGaN/GaN HEMT 15 and improve the off-state breakdown voltage in AlGaN/GaN HEMT. 16 This technique is also employed for active device isolation. 17 In addition, the underlying physics and stability related with F ion implantation in GaN were also investigated by various groups. [18][19][20][21] In this work, a vertical GaN SBD using F ion implantation treatment was fabricated. Compared with the GaN SBD without F ion implantation, this SBD showed significantly improved reverse capability. The SBD not only lowered the reverse current by about 5 orders of magnitude at high bias but also boosted the breakdown voltage from 155V to 775V. In addition, this SBD using F ion implantation treatment obtained a high Ion/I off ratio of 10 8 and high Schottky barrier height of 0.92 eV. Before device fabrication, an n --GaN epilayer was grown on an n + -GaN bulk substrate material ([Ge] = 10 18 cm -3 ) by hydride vapor phase epitaxy (HVPE) from Nanowin. Compared with Sidoped bulk GaN, Ge atoms is a shallow donor and would cause less lattice distortion. 22,23 Ge-doped GaN substrate material is beneficial to further grow a smoother, fewer defects and lower stress epilayer. 24 The GaN epilayer was analyzed by a series of characterization experiments, and the results were presented in Fig. 1. The doping concentrations of GaN epilayer were analyzed by secondary ion mass spectroscopy (SIMS) measurement shown in Fig. 1(a). 
The concentration of Si was about 10 17 cm -3 , and the concentrations of C and O impurities were close to the detection limit. The surface morphology of the GaN epilayer was characterized by atomic force microscopy (AFM) shown in Fig. 1(b). The surface presented a typical step-flow morphology and the root-mean-square (RMS) roughness of a 10×10 µm 2 scanning area of the epilayer was 0.38 nm. The dislocation density of the GaN epilayer was 1.04×10 6 cm -2 , which was estimated by planar-sectional panchromatic cathodoluminescence (CL) images as shown in Fig. 1(c). The crystal quality was characterized by highresolution X-ray diffraction (HRXRD). The rocking curves (RCs) of the (002) and the (102) plane of the GaN are represented in Fig. 1(d). The full width at the half maximum (FWHM) of the RCs were 39.2 and 52.6 arcsec for the (002) and the (102) plane, respectively. The thickness of the n --GaN epilayer was about 8.4 µm checked by crosssectional CL images as shown in Fig. 1(e), taking advantage of the image contrast caused by different carrier density. These characterization results indicate the homoepitaxial GaN layer grown by HVPE was high-quality and smooth with well-controlled impurities. The devices fabrications started with a 1-µm-deep mesa isolation etch by inductively coupled plasma (ICP). Fig. 2(a) and (b) shows the schematic structure of the SBD without F ion implantation as a reference (diode A) and the SBD using F ion implantation treatment (diode B). Diode B used AZ5214 photoresist (∼1.4 µm) as a mask to define the implanted region. The F ions was implanted by three energies successively (40 keV, 80 keV and 140 keV), and the F profile as a function of depth was estimated by Srim software based on Monte Carlo simulation as shown in Fig. 2(c). The simulation result indicated F implantation would create a 350nmdeep box-like profile with a total ion concentration of 10 18 ∼10 19 cm -3 . After implantation and photoresist removal, a 100-nm-thick SiO 2 film was deposited on both diodes by plasma enhanced chemical vapor deposition (PECVD). Schottky barrier diodes with a diameter of 100 µm were defined by opening circle window after standard photolithography and BOE wet etching. Prior to metallization, the samples were dipped in HCl solution for 1 minute to remove the oxide film. Pt (40nm)/Au (250nm) Schottky contacts were deposited by e-beam evaporation and patterned by lift-off. Afterward, rapid thermal annealing at 400 ○ C in N 2 atmosphere for 10 minutes was carried out for improving Schottky contacts cohesion and recovering the damage caused by ion implantation. 15 Finally, Ti (50nm)/Pt (100nm)/Au (50nm) Ohmic contacts were deposited by e-beam evaporation at the backside of GaN bulk substrates treated by inductively coupled plasma (ICP) etching. For diode B, the actual F concentration as a function of depth without mask was also measured by SIMS after SiO 2 film removal by HF solution as shown in Fig. 2(c). The actual F ion concentration was 2×10 18 -10 19 cm -3 underneath the surface 300-nm depth in accordance with the simulation result, but the actual F concentration profile formed a long tail into n-GaN. The difference between simulation and SIMS result was mainly due to the channeling effect or diffusion by annealing. The roughness of GaN surface after F ion implantation was checked by AFM. The surface still kept a step-flow morphology as as-grown GaN shown in Fig. 
1(b), and the roughness was almost identical (RMS=0.46 nm), indicating negligible surface damage by F ion implantation. C-V measurements of two diodes were carried out at a frequency of 1 MHz shown in Fig. 3(a). The net doping concentration where ε 0 is the permittivity of the vacuum (8.854 × 10 −12 F/m), εr is the relative permittivity of GaN (9.5), and A is the effective area of the diode. The net doping concentration as a function of junction depth can be plotted as shown in the inset of Fig. 3(a). For diode A, the net doping concentration of drift layer always maintained around the level of 10 17 cm -3 at the whole range, which agreed reasonably with the Si doping concentration analyzed by the above SIMS result of epilayer. However, for diode B, the net doping concentration near the junction decreased significantly and returned the normal value beyond the depth of 500 nm. To investigate the surprising phenomenon of C-V measurement of diode B, the region beneath the Schottky contact of diode B was checked by SIMS measurement as shown in Fig. 3(b) after removal of the Au cap layer. Compared with Si concentration, relatively low concentration of F ion (5×10 15 cm -3 -1×10 15 cm -3 ) was detected within 500-nm depth to GaN surface. Thus, the surprising phenomenon of C-V measurement of diode B probably resulted from F ion introducing. Due to the strongest electronegativity, F ions would capture free electrons to form fixed negative charges in active area. 18,25 In addition, F ions 19 The forward characteristics for diode A and diode B are shown in Fig. 4. Compared with diode A, the forward current of diode B was lower as shown in Fig. 4(a). The turn-on voltages (Von, extracted from I-V curves by linear extrapolation) of diode A and diode B were ∼0.51V and ∼0.64V. The on-resistance (Ron) of diode A and diode B were 0.20 Ω⋅cm 2 and 0.27 Ω⋅cm 2 , respectively. Besides, according to the thermionic emission model, where A * is the Richardson constant (26.4A cm -2 K -2 for n-GaN), and η is the ideality factor and Bn0 is Schottky barrier height (SBH), 26 η and Bn0 of diode A and diode B were 1.22, 0.70eV and 1.07, 0.92eV by drawing lnJ-V curves as shown in the inset of Fig. 4(a). Besides, high Ion/I off ratio of electronic devices is key for future high voltage and high power application. 11,[27][28][29] As shown in Fig. 4(b), diode B had a high Ion/I off ratio in the order of 10 8 , while the ratio of diode A was 10 4 . Table I summarizes the device parameters of forward characteristics for two diodes. These results indicate that diode B using F ion implantation treatment obtained a higher Ion/I off ratio and SBH but increased on-resistance slightly. Fig. 5 presents the reverse characteristics of diode A and diode B. The breakdown voltage of diode B was 775V, while diode A broke down at 155V. Diode B had lower reverse leakage current by about 10 5 times than diode A at the reverse bias of 150V. The rate of the reverse current rise of diode B was also slower than diode A, which indicated that more leakage paths may exist in diode A. 30 These results showed diode B performed better reverse capability. Besides, for a given doping concentration (ND), the breakdown voltage (BV) of vertical GaN SBD could be estimated with this power By substituting the ND with 10 17 cm -3 , the ideal breakdown voltage of 446.3V was calculated. The measured breakdown voltage of diode B was higher than the ideal breakdown voltage with drift layer of uniform doping concentration of 10 17 cm -3 . 
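The two fitting steps used above — extracting the net doping profile from the 1 MHz C-V sweep and the ideality factor and Schottky barrier height from the forward lnJ-V curve — follow the standard Schottky-diode relations N_D = 2 / (q ε0 εr A^2 |d(1/C^2)/dV|) and J ≈ A*T^2 exp(-qΦ_B/kT) exp(qV/ηkT). A minimal sketch of both extractions is given below; the array names and helper functions are illustrative, not the authors' code.

```python
import numpy as np

Q, K_B = 1.602e-19, 1.381e-23          # elementary charge [C], Boltzmann constant [J/K]
EPS0, EPS_GAN = 8.854e-12, 9.5         # vacuum permittivity [F/m], GaN relative permittivity

def doping_from_cv(v, c, area):
    """Net doping vs depth from a Schottky C-V sweep (v in V, c in F, area in m^2)."""
    d_invc2_dv = np.gradient(1.0 / c**2, v)
    n_d = 2.0 / (Q * EPS0 * EPS_GAN * area**2 * np.abs(d_invc2_dv))   # [m^-3]
    depth = EPS0 * EPS_GAN * area / c                                  # [m]
    return depth, n_d

def barrier_from_jv(v, j, temp=300.0, a_star=26.4e4):
    """Ideality factor and barrier height from forward J-V data under the thermionic
    emission model (V >> kT/q); a_star = 26.4 A cm^-2 K^-2 expressed in SI units."""
    slope, intercept = np.polyfit(v, np.log(j), 1)
    eta = Q / (slope * K_B * temp)
    j0 = np.exp(intercept)
    phi_b = (K_B * temp / Q) * np.log(a_star * temp**2 / j0)   # barrier height [V ~ eV]
    return eta, phi_b
```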
It indicates the carrier density in the surface of GaN was decreased in accordance with the above C-V results. In particular, the catastrophic damages of both diodes occurred at the edge of Schottky contacts due to edge electric field crowding, as shown in the optical microscopy image in the inset of Fig. 5. Thus, the maximal electric field should be occurred at the edge of Schottky contacts for both diodes and edge termination is critical for GaN SBDs. The electric field distributions of two diodes were also simulated by using Silvaco TCAD software as shown in Fig. 6(a) and (b). For diode A, a uniform doping concentration of 10 17 cm -3 for drift layer was set. For diode B, the carrier density in the region of 600 nmdepth beneath SiO 2 film was assumed to be fully compensated forming a high-resistance region, where the concentration of F was higher than Si according to SIMS result in Figure 2(c). The net doping concentration in the active area as a function of depth used the data calculated by C-V measurement. The bias voltages were set to the measured breakdown voltage, which were -155V for diode A and -755V for diode B. By plotting the electric field near the surface of GaN drift layers in diode A and diode B shown in Fig. 6(c) and (d), the electric field crowding both happened at the edge of Schottky contacts for two diodes, which agreed with the above optical image in Fig. 5. The maximal electric field were 2.97 MV/cm for diode A and 3.73 MV for diode B respectively when they broke down, which both close to the previous report (3.75MV/cm). 6 Besides, the significantly reduced leakage current of diode B could be attributed to the high-resistance region at the edge of Schottky contact and the elevated Schottky barrier height. In summary, a vertical GaN Schottky barrier diode (SBD) using F ion implantation treatment was fabricated, and the device performance was analyzed by comparing with the SBD without F ion implantation. This SBD using F ion implantation treatment not only boosted breakdown voltage significant from 155V to 775V but also lowered reverse leakage current by 10 5 times. Thus, F ion implanted SBD showed significantly improved reverse capability but increased on-resistance from 0.20 Ω⋅cm 2 to 0.27 Ω⋅cm 2 . In addition, this SBD had higher Ion/I off ratio and Schottky barrier height. This work shows the potential for obtaining highvoltage and low-leakage GaN SBDs using F ion implantation treatment. This work was supported by National Key R&D Program of China (no. 2017YFB0404100).
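As a rough consistency check on these field values, the simulated peak field of diode A can be compared with a one-dimensional abrupt-junction estimate that ignores the edge crowding discussed above. The sketch below uses only the nominal drift doping of 10^17 cm^-3 and the measured 155 V breakdown of diode A; it is illustrative and not part of the authors' analysis.

```python
import math

Q, EPS0, EPS_GAN = 1.602e-19, 8.854e-12, 9.5
EPS_S = EPS0 * EPS_GAN

def peak_field_1d(n_d_cm3, v_r):
    """Peak field [MV/cm] and depletion width [um] of a one-sided abrupt junction,
    valid while the depletion region stays inside the 8.4 um drift layer."""
    n_d = n_d_cm3 * 1e6                                  # cm^-3 -> m^-3
    w = math.sqrt(2.0 * EPS_S * v_r / (Q * n_d))         # depletion width [m]
    e_max = 2.0 * v_r / w                                # triangular field profile [V/m]
    return e_max / 1e8, w * 1e6

e_max, w = peak_field_1d(1e17, 155.0)
print(f"E_max ~ {e_max:.1f} MV/cm at ~{w:.1f} um")
# ~2.4 MV/cm over ~1.3 um: below the 2.97 MV/cm from the 2-D TCAD simulation,
# consistent with additional field crowding at the Schottky contact edge.
```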
2019-06-07T21:51:06.468Z
2019-05-20T00:00:00.000
{ "year": 2019, "sha1": "c4912e189b4d232ba2489928d663278f2eb941de", "oa_license": "CCBY", "oa_url": "https://aip.scitation.org/doi/pdf/10.1063/1.5100251", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "1dc6380a69cfc93f5705931c5fa9e1fdde57caff", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
8763245
pes2o/s2orc
v3-fos-license
Inter-dependence of the volume and stress ensembles and equipartition in statistical mechanics of granular systems We discuss the statistical mechanics of granular matter and derive several significant results. First, we show that, contrary to common belief, the volume and stress ensembles are inter-dependent, necessitating the use of both. We use the combined ensemble to calculate explicitly expectation values of structural and stress-related quantities for two-dimensional systems. We thence demonstrate that structural properties may depend on the angoricity tensor and that stress-based quantities may depend on the compactivity. This calls into question previous statistical mechanical analyses of static granular systems and related derivations of expectation values. Second, we establish the existence of an intriguing equipartition principle - the total volume is shared equally amongst both structural and stress-related degrees of freedom. Third, we derive an expression for the compactivity that makes it possible to quantify it from macroscopic measurements. The statistical mechanical formalism, introduced to describe granular materials [1][2][3], was expected to be a platform for derivations of experimentally measurable equations of state and constitutive relations. It has not yet lived up to its full potential due to several difficulties that traditional thermodynamic theories do not suffer from: lack of ergodicity, uncertainty over the identities and number of degrees of freedom (DoF), and the difficulty to realise a simple analog of a thermometer -a 'compactometer'. Whilst these problems reflect more on the application of the theory rather than on the theory itself, a more serious concern has arisen from recent suggestions of an absence of an equipartition principle [4,5] in agitated systems. Here we derive a number of significant results. First, we show that the correct phase space consists of both structural and force DoF, thus calling into question much of the results in the literature, obtained from either ensemble alone. Second, we show the existence of an equipartition principle in two-dimensional static systems. Third, we show that, in such systems, the compactivity can be quantified directly from macroscopic mean volume measurements. The initial statistical mechanical approach was based on a volume partition function of N (>> 1) grains [1], where W is a volume function that sums over all the possible volumes that basic volume elements can realise and X 0 is the compactivity -a measure of the fluctuations in the ensemble of realisations that is the analogue of temperature. The structural DoF (SDF), identified explicitly below, are all the independent variables that determine the structure of an assembly of grains in mechanical equilibrium, given the mean number of forcecarrying contacts per grainz [6]. The partition function Z v enables almost closed thermodynamics. For example, once W and the SDF have been identified (see below), the mean volume 2D can be computed. Nevertheless, Z v is unable to specify the macroscopic state of the system completely, since the entropy remains only partially accounted for, it leaves out an entire set of microstates -those of different stress states. These microstates are described by a different partition function, Z f [7][8][9], an idea supported later numerically [9,10]. 
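Before the stress ensemble is developed in detail below, it is worth recording the canonical Boltzmann-like forms that Eqs. (1) and (2) presumably take, since the explicit expressions did not survive text extraction:

$$
Z_v = \int e^{-W/X_0}\,\Theta\;\mathrm{d}(\mathrm{DoF}),
\qquad
Z_f = \int e^{-X_{\alpha\beta}^{-1}\mathcal{F}_{\alpha\beta}}\,\Theta'\;\mathrm{d}(\mathrm{forces}),
$$

with $W$ the volume function, $X_0$ the compactivity, $\mathcal{F}_{\alpha\beta}$ the force moment function, $X_{\alpha\beta}$ the angoricity tensor, and $\Theta$, $\Theta'$ the constraints restricting the integrals to mechanically stable states.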
The stress ensemble gives rise to the partition function Here α, β run over the Cartesian components x, y and F αβ are the components of the force moment function, from which the stress σ αβ is derived, Here the sum runs over pairs of grains in contact gg ′ , R gg ′ is the position of the contact point, measured from the centroid of grain g, F gg ′ is the force that g ′ applies to g and V g is the volume associated with grain g. X αβ = ∂F αβ /∂S has been named 'angoricity' -a tensorial analogue of the temperature and the compactivity [7,11,12] -and S is the entropy, defined as the log of the number of both structural configurations and stress states. Note that integrating this partition function, any expectations values would be a function of all R gg ′ . The volume and stress ensembles are treated in the literature as independent, leading to a total partition function Z = Z v Z f . Consequently, results have been derived solely from the statistics of one ensemble or the other. Here, we challenge this view. We argue that such derivations are misguided and we outline the calculation of the correct partition function. Using the correct ensemble we demonstrate derivation of a number of expectation values in two dimensions (2D), including the expected intergranular force distribution, and derive a surprising equipartition principle for static systems. We put forward the following three arguments. 1. The volume ensemble alone is insufficient to describe the entropy of mechanically stable granular systems. A volume ensemble implies the exact same boundary forces. However, many-grain experiments cannot reproduce the same grain configurations, nor the precise forces on every boundary grain. Only global boundary stresses can be controlled, i.e. averages over boundary force components. Thus, the statistics of the boundary forces must be taken into consideration. 2. The stress ensemble alone is insufficient to describe the entropy of mechanically stable granular systems. The stress ensemble comprises a fixed granular configuration, to which all combinations of boundary forces are applied. The ensemble is subject to constraints, e.g. that the total boundary stresses are fixed. Such a system cannot be realised experimentally in very large assemblies (albeit possible in numerical experiments). Indeed, any integration over Z f remains a function of the SDF. 3. The volume and stress partition functions are interdependent, Z = Z v Z f . This statement follows from the above two argumentscorrect calculations of expectation values must be based on a combined ensemble of all structural arrangements and all boundary forces. Specifically, this is a consequence of the explicit dependence of both the volume function in Z v and the force moment function in Z f on the structural DoF (SDF). The above arguments hold in any dimension and we proceed to illustrate them explicitly in 2D. Consider an ensemble of 2D N -grain systems (N ≫ 1), each of mean contact numberz. The systems are in mechanical equilibrium under M external compressive forces, acting on the boundary grains. We disregard body forces, in the absence of which 'rattlers' can also be ignored, as they do not affect the stress states in static piles. It was proposed to use 'quadrons' [13,14] as the elementary volumes, both in two and in three dimensions. These are space-tessellating (generically) quadrilateral elements (figure 1). 
The quadron is constructed on two vectors as its diagonals: r q connects contact points around a grain in the clockwise direction and R q extends from the centroid (mean position) of the contacts around the grain to the centroid of the contacts around a neighbour cells. In terms of these, the volume function is W = q v q = 1 2 | r q × R q | (summation implied over repeated indices) and the partition function is Here N sdf is the number of SDF, discussed below. The vectors R q can be expressed as linear combinations of the r q [8] and, since the latter close loops, only Nz/2 of them are independent [6,8], leading to N sdf = Nz. Defining the vector ρ ≡ r 1 x , r 2 x , ..., r Nz/2 x , r 1 y , r 2 y , ..., r Nz/2 y , W becomes exactly quadratic and we have Here p, q run over quadrons, α, β run over vector components x, y and A is a matrix whose elements are q, p ≤ Nz/2 a qp xy q ≤ Nz/2 , p > Nz/2 a qp yx p ≤ Nz/2 , q > Nz/2 a qp yy q, p > Nz/2 We can now evaluate Z v . Assuming a uniform measure of the DoF and that the contribution of very large r magnitudes is negligible, (5) can be calculated explicitly The stress partition function consists of all the possible combinations of compressive forces on the boundary grains, g m (m = 1, 2, ..., M ), subject to the constraint that the total stress on the boundary is fixed [6,11,12]. Since the configuration is presumed fixed, only boundary forces that do not drive the system out of mechanical equilibrium are permissible, i.e. the boundary stresses must be below the yield surface. It has been 000000000 000000000 000000000 000000000 000000000 000000000 000000000 000000000 000000000 000000000 000000000 000000000 000000000 000000000 000000000 000000000 000000000 000000000 000000000 000000000 shown [13,14] that the force moment function (3) can be written as where f q α is the α component of a loop force of the cell containing the quadron q (see figure 2). The loop forces are defined in terms of the contact forces [13], e.g. figure 2, and they conveniently satisfy force balance conditions. In 2D assemblies of arbitrarilyshaped frictional grains, there are N (z/2 − 1) loop forces (a consequence of Euler relation), of which N/2 can be determined from the torque balance conditions. For clarity, we next focus on isostatic systems (z = 3); extension to hyperstatic assemblies (z > 3) is possible. Substituting (7) into (2) gives where the integration runs over all the independent boundary forces g m . Since quadrons sharing the same cell have the same loop force (figure 2) and the loop forces depend linearly on the M boundary forces, then only N/2 of the Nz quadron forces are independent. It is therefore convenient to define the loop forces vector The solution for φ in terms of the boundary forces can be written as where α, β = x, y, c = 1, 2, . . . , N/2 runs over all cells, m = 1, 2, . . . , M runs over all boundary forces and C is N × 2M matrix. In terms of these, f q = E φ, where E is a Nz × N matrix. Defining further B qp αβ = X −1 αβ δ qp , with δ qp being the delta function, we finally obtain Z f To compute the total partition function, we need to integrate over the combined volume-stress phase space, where we have defined, for brevity, Q = B T ·E ·C. This expression establishes our claim that Z is not the product of Z v and Z f of eqs. (6) and (10). Integrating (11) is straightforward due to the integrand's Gaussian form and we use it next to calculate several expectation values. 
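Because several of the intermediate equations above were garbled in extraction, it helps to restate the ingredient that makes the combined integral Gaussian. The quadron volume function and its quadratic form in the structural degrees of freedom are, as described in the text,

$$
W=\sum_q v_q=\tfrac{1}{2}\sum_q\left|\,\mathbf{r}_q\times\mathbf{R}_q\right|,
\qquad
W=\rho^{\mathrm{T}}A\,\rho ,
$$

where $\rho$ collects the $N\bar z/2$ independent $x$- and $y$-components of the $\mathbf{r}_q$ vectors and $A$ is the connectivity matrix whose blocks are given in the text; it is this quadratic form, together with the term linear in $\rho$ coming from the boundary forces, that allows Eq. (11) to be integrated in closed form.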
The exponential contains a linear and a quadratic term in ρ and, completing to quadrature and changing variables toρ = ρ+A −1 Q g, we can separate the variables in the exponent, where we define for shorter notation P = Q T · A −1 · Q. Calculating the mean volume we obtain: which separates into two gaussian integrals, giving This result is significant for several reasons. First, it is independent of the details of the connectivity matrix A and of the particular stress state. Second, it reveals a striking equipartition principle: the mean volume is shared equally among thezN structural and the 2M force DoF, each having on average X 0 /2. It is analogous to the mean energy per DoF in thermal systems 3k B T /2, but we emphasize that no energy is involved in this formalism. Third, it quantifies the compactivity X 0 in terms of measurable quantities. An important consequence of this finding is that it makes possible to start analyses 'inductively' by assuming that the volume per DoF is X 0 /2, as done as standard in thermal systems analogously with k B T . Note that using only Z v (eq. 5) gives V v =zN X 0 /2, which overestimates the compactivity. All relevant expectation values can be expressed in terms of ρ and g and hence evaluated, albeit with more algebra, Y is the matrix that diagonalises P , p i are the eigenvalues of P , and η ij are straightforward functions of E and C. Note that both results (13) and (16) are directly relevant to experimental measurements [15,16]. These exact results do more than demonstrate the utility of the combined ensemble, they reveal unexpected dependences of these expectation values on the compactivity and angoricity. For example, ρ · ρ is not only proportional to X 0 , as one would expect, but it also depends on X αβ via a homogeneous function (HF) of order 0. Also unexpectedly, the mean inter-granular force magnitude square f · f is both a HF of order 2 of X αβ and linear in X 0 . Yet F αβ is, unsurprisingly, a HF of order 1 of the angoricity and independent of X 0 . These results show the significance of using both the stress and the volume ensembles. To conclude, we have presented three main results. First, we have shown that the compactivity-based volume ensemble and the angoricity-based stress ensemble are dependent and need to be used simultaneously. We reiterate, the entropy is the log of all the microstates, which include both the SDF and the stress states -because Z v and Z f are dependent, it is not simply the sum of the configurational and stress entropies. This calls into question the large body of work, obtained from either ensemble alone. We have used the combined partition function to obtain explicitly the expectation values of: the mean volume, force moment, distance between intra-grain neighbour contact points, and contact force magnitude. We find, surprisingly, that V depends explicitly on the force degrees of freedom, that ρ · ρ depends on the angoricity, and that f · f on the compactivity. Second, the calculation of V reveals the existence of an equipartition principle -the mean volume of static systems is shared equally amongst both structural and force-related DoF, with each getting a volume of X 0 /2. This result shows that, although equipartition is questionable in dynamic dense systems [4,5], it exists for static ones. 
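Written out, the equipartition statement and the compactivity estimator it implies read

$$
\langle V\rangle=\left(\bar z N+2M\right)\frac{X_0}{2}
\quad\Longrightarrow\quad
X_0=\frac{2\,\langle V\rangle}{\bar z N+2M},
$$

i.e. each of the $\bar z N$ structural and $2M$ boundary-force degrees of freedom carries a mean volume $X_0/2$, so the compactivity follows directly from the measured mean volume, the mean coordination number and the number of boundary forces.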
Moreover, since static granular systems are the equivalent of "zero temperature" granular fluids, this result gives hope that an equipartition principle may be found for dense dynamic systems by extending dynamic descriptions to include structural and force DoF. Third, we have derived an expression for the compactivity in terms of measur-able quantities -the mean volume and the mean contact number and the loading forces. A significant implication of our results is that the compactivity and angoricity are not in themselves the conjugate variables of volume and force moment, as previously believed. Instead, it is the expression in eq. (12) that represents a convolution of the volume and force moment functions with the compactivity and angoricity. It would be interesting to test our analysis experimentally and numerically, e.g. by constructing assemblies at different compactivities and angoricities and examining expectation values as functions of these parameters. Furthermore, since the arguments establishing the interdependence of the volume and stress ensembles hold in any dimension, it should be possible to extend our analysis to 3D systems, at least numerically.
2012-10-21T17:07:48.000Z
2012-04-13T00:00:00.000
{ "year": 2012, "sha1": "b9fb229ef14e94596af8bec05260c16c2178c7f5", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1204.2977", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "5753571b60fb4f3ea99e45ab05d0810d275b534f", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Physics", "Mathematics", "Medicine" ] }
252683570
pes2o/s2orc
v3-fos-license
Energy-Rate-Quality Tradeoffs of State-of-the-Art Video Codecs The adoption of video conferencing and video communication services, accelerated by COVID-19, has driven a rapid increase in video data traffic. The demand for higher resolutions and quality, the need for immersive video formats, and the newest, more complex video codecs increase the energy consumption in data centers and display devices. In this paper, we explore and compare the energy consumption across optimized state-of-the-art video codecs, SVT-AV1, VVenC/VVdeC, VP9, and x.265. Furthermore, we align the energy usage with various objective quality metrics and the compression performance for a set of video sequences across different resolutions. The results indicate that from the tested codecs and configurations, SVT-AV1 provides the best tradeoff between energy consumption and quality. The reported results aim to serve as a guide towards sustainable video streaming while not compromising the quality of experience of the end user. I. INTRODUCTION Over the past years, video network traffic is rapidly increasing and currently accounts for the highest Internet-exchanged traffic [1]. In addition, the recent COVID-19 pandemic contributed to the rapid adoption of digital online services. As a result, live and on-demand video exchange becomes the norm for daily work and leisure activities [2]. Popular examples are on-demand streaming platforms (Netflix, Apple TV, HBO, Amazon Prime, etc.) and live video conferencing and collaborative online workspaces (Zoom, Webex, MS Teams, etc.). Furthermore, the accessibility to powerful and affordable devices, and the advances in cloud-computing technologies, enable users to create and share live or on-demand short usergenerated content clips over social media/sharing platforms (Instagram, TikTok, YouTube, etc.). Associated with the demands and drivers described above, the content creation and video communications pipelines contribute significantly to global energy consumption. Cloud computing services, data centers, display devices, and video delivery are the main contributors to this increased energy expenditure. While most climate change organizations focus on the transport and energy sectors' emissions, it is essential to recognize that ICT technologies also generate a considerable carbon emissions footprint [3]. Hence, efficiency must improve as technology usage increases if sustainability targets are to be met. According to a recent study by Huawei [4], data centers currently consume about 3% of global electricity. This, This work has been supported by Bristol+Bath Creative Industry Cluster. however, is expected to rise to over 8% by 2030, a figure larger than the energy consumption of some nations. While estimates vary, there is a consensus of an impending major global issue. Another critical issue is the energy required from the users to capture, transmit, and display the video data. Recent research has shown that the energy consumed on the user side is much higher than on the provider side [5] given that a single encoding is delivered to thousands of viewers. While video streaming companies are highly engaged in optimizing their algorithms to offer the highest quality of experience, the energy consumption is not part of this process yet. Each new generation of video codec reduces the amount of data transmitted over the network at the cost of increased computational complexity. 
A ∼50% efficiency gain of each new codec usually comes with a vast increase in computational complexity [6], [7] yielding to significantly increased encoding times. However, decoding has been kept relatively low complying with the requirement for smooth play-outs without rebuffering. With this growth in computational load, video providers, like Netflix, BBC, and others, are working towards assessing the environmental cost and committing towards net zero emissions [8], [9]. Various research activities have focused on modeling and predicting the energy consumption at the decoder side [10], [11]. As a result, tools that analyze the encoding statistics and estimate the decoding energy consumption for H.264/Advanced Video Coding (AVC) [12], High Efficiency Video Coding (HEVC) [13], and Versatile Video Coding (VVC) [14] are currently being developed. Similarly, researchers in [7], [15] explore both the encoder and the decoder on the latest VVC standard and compare it against HEVC. Challenged by the above, in this work, we investigate the energy, quality, and bitrate tradeoff across different stateof-the-art codecs, particularly, x.265, VVenC/VVdeC, VP9, and Scalable Video Technology AV1 (SVT-AV1). The energy is measured both at the encoder and the decoder side. We selected these production-optimized versions of codecs instead of the reference software implementations as these are usually deployed by the industry. After collecting the quality, rate, and energy statistics, we compare their tradeoffs. Although previous studies have performed codec comparisons in terms of delivered quality and compression effectiveness [16], [17], to the best of our knowledge, this is the first work comparing these codecs with regard to their energy-rate-quality tradeoffs. For the evaluation of the results, a new metric to reflect the energy cost for the required bits is proposed. The reported results aim to serve as a guide for the development of sustainable optimization solutions for video compression and streaming algorithms. The remainder of this paper is organized as follows. Section II describes the proposed method and metrics employed. Section III presents the experimental setup, the test configurations, and discussed the results. Finally, conclusions and future work are outlined in Section IV. II. METHODOLOGY In this section, we provide details on the video codecs used, the measurement setup, and the defined metrics. A. Video Codecs H.264/MPEG-4-AVC [12] was launched in 2004 and remains one of the most widely deployed video coding standards, even though the next generations of standards, H.265/HEVC [13], [18], [19] and VVC provide enhanced encoding performance [14]. H.265/HEVC was finalized in 2013, and H.266/VVC was released in 2020 with impressive coding gains of over 30% compared to H.265/HEVC. Besides the activities reported above, there has been increased activity over the past three years in the development of open-source royalty-free video codecs, particularly by the Alliance for Open Media (AOMedia). AOMedia used VP9 [20], which was earlier developed by Google, as a basis for AV1 [21], [22]. AV1 is currently the primary competitor for the current MPEG video coding standards, especially in the context of streaming applications. The commercial deployment depends on the hardware and the specifications of the display devices. 
Moving towards an extended parameter space, namely higher bitdepth (up to 16 bits) and higher spatio-temporal resolutions, the latest standards are expected to roll out to more use-cases and applications. Based on the above standards, optimized implementations of these codecs have been developed and served as part of the FFmpeg software suite [23]. Similarly, SVT-AV1 encoder was developed and optimized for CPU platforms improving the quality-latency tradeoffs for a wide range of video coding applications. It supports multi-dimensional parallelism, multi-pass partitioning decision, multi-stage/multi-class mode decision, and more [24]. Shortly after the finalization of VVC, an open-source optimized implementation of the VVC encoder (VVenC) and decoder (VVdeC) for randomaccess high-resolution video encoding was released [25], [26]. VVenC was designed to achieve faster runtime than the VVC reference software (VTM). VVenC also supports additional features like multi-threading, rate control and more. B. Measurements We propose to measure the energy consumption on both the encoder and the decoder ends. The encoding side is a good representation of the energy consumption at the video provider's side (e.g. in data centers), while the decoding end reflects directly to the decoding at the end-user devices (typically Codec Command Invocations mobile devices, laptops, or TVs). We formulate and perform two basic measurements. The first power measurement is performed during encoding, P enc , and the second during decoding, P dec . A third power measurement P idle quantifies the idle mode of our system. The energy consumption during the encoding and decoding is derived from the measured power minus the idle time measurements. Thus, over an observed time interval, the encoding and decoding energy is obtained by: where T enc and T dec are the encoding and decoding times. Our performance investigation is based on the integrated power meter in Intel CPUs, the Running Average Power Limit (RAPL) [27]. This tool is used in other similar research activities (e.g., [10], [28]) and accurately measures the power demand of the CPU, the DRAM, and the whole integrated circuit at 100ms intervals. Background processes of the operating system can skew our measurements. Therefore, we started our experimentation with a small-scale study assessing the energy consumption at an idle state, E idle . Later, by profiling the power consumption during the encoding and decoding process, we can assess the energy requirements for both processes. We repeat until the confidence intervals of the distribution of the measurements are tight validating their precision. For the encoding process, a smaller number of encoding iterations was required to converge, while for the decoding a greater number of decoding loops was necessary. This is attributed to the significant difference in the time duration of the encoding and decoding processes. C. Metrics 1) Quality performance: To assess the quality of the encoded test sequences, we selected full reference metrics typically used over the last years in the video technology research community: the Peak Signal to Noise Ratio (PSNR) averaged over all color components (YUV) and Video Multi-Method Assessment Fusion (VMAF) [29]. The latter exhibits a higher correlation with perceptual quality. 2) Compression performance: The performance of video coding algorithms is usually assessed by comparing their rate-distortion (or rate-quality) performance on various test sequences. 
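Referring back to the measurement procedure above, the energy expressions amount to E_enc = (P_enc - P_idle)·T_enc and E_dec = (P_dec - P_idle)·T_dec, i.e. the idle baseline is subtracted from the measured power before integrating over the run time. A minimal sketch of that bookkeeping with the 100 ms RAPL sampling interval is given below; the sample arrays are placeholders.

```python
import numpy as np

RAPL_DT = 0.1  # RAPL reporting interval [s]

def task_energy(power_samples_w, p_idle_w):
    """Net energy [J] of an encode or decode run from RAPL power samples,
    after subtracting the idle-system baseline: E = sum((P_i - P_idle) * dt)."""
    p = np.asarray(power_samples_w, dtype=float)
    return float(np.sum((p - p_idle_w) * RAPL_DT))

# With uniform sampling this equals (mean(P_run) - P_idle) * T_run.
```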
Objective quality metrics or subjective opinion measurements are normally employed to assess compressed video quality, and the overall Rate-Quality (RQ) performance difference between codecs can be then calculated using Bjøntegaard Delta (BD) measurements [30] on objective metrics. We computed the BD metrics for the PSNR-Rate and VMAF-Rate curves. 3) Energy performance: To compare the energy performance across different codecs, we need to express the required energy as a function of the bitrate. By observation, the Rate-Energy (RE) curves both for encoding and decoding are very close to linear (see Fig. 3). Therefore, based on the slope of the RE line, we define another metric, the Energyto-Bitrate Ratio (EBR). EBR expresses the required energy expenditure for different compression levels. The lower the slope value (close to zero), the lower the EBR between the tested compression levels and the more energy-efficient the compression technology. In order to compute the slope of the RE curves, we first fitted the RE points into a linear model, where α, β ∈ R + .Ẽ is the estimated energy either for encoding or decoding. It is well known that in first-order polynomials α expresses the slope. Thus, EBR values are equal to α. The close-to-linearity behavior of the RE curves is confirmed through testing on the dataset described below. All R-squared values are higher than 0.92 indicating a very good fit. III. EVALUATION In this section, we describe the test sequences employed for the evaluation, the basic codec configurations, and report on the findings from our experiments. A. Test Data The selection of test content is important as compression is content-dependent and should provide a diverse and representative coverage of the video parameter space. For our experiments, we selected the SDR CTC test sequences [31] reported in Table II. These sequences have been used for many video codec evaluations, as they cover a typically used range of spatial resolutions {Class D: 416×240, Class C: 832×480, Class B: 1920×1080, Class A: 3840×2160}, frame rates -from 30 to 60-, and bit depths -from 8 to 10 bits. Besides this, the content also covers a representative range of spatial and temporal characteristics, as indicated by the spatial and temporal information [32], [33] scattered in Fig. 1. In the presented results, we have considered the sequences with resolutions up to 1920x1080 (class B). These results are adequate to convey the energy consumption trends. . For x.265, we selected the fastest preset in order to create an anchor that would represent an efficient codec that has low energy consumption. Note that the different codec configurations are switching on and off coding tools with a direct impact on the computational complexity and, thus, on the energy consumption. Here, we examine only a subset of the available configurations. All experiments were executed on the same workstation with a Hexacore Intel Comet Lake-S CPU at 3300MHz and 64GB RAM. More technical details can be found in the project page 1 . C. Results The computed performance metrics, BD and EBR, for the tested codecs are reported in Table III. For the BD metrics computation, the x.265 was considered the anchor codec. The BD metrics were computed for both types of RQ curves, Rate-PSNR and Rate-VMAF. Only the BD-PSNR (in dB) and BD-VMAF are reported in pairs. 
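The two metrics reported in Table III can be reproduced from per-codec (rate, quality, energy) operating points: EBR is the slope of the first-order rate-energy fit defined above, and the BD-quality values follow the usual Bjøntegaard procedure of fitting quality against log-rate and averaging the difference over the overlapping rate range. The sketch below is a generic implementation, not the authors' code.

```python
import numpy as np

def ebr(rates_kbps, energies_j):
    """Energy-to-Bitrate Ratio: slope (with R^2) of the linear rate-energy fit."""
    r, e = np.asarray(rates_kbps, float), np.asarray(energies_j, float)
    alpha, beta = np.polyfit(r, e, 1)
    resid = e - (alpha * r + beta)
    r2 = 1.0 - np.sum(resid**2) / np.sum((e - e.mean())**2)
    return alpha, r2

def bd_quality(rates_anchor, q_anchor, rates_test, q_test):
    """Bjontegaard delta quality (e.g. BD-PSNR in dB) of a test codec vs the anchor."""
    lr_a, lr_t = np.log10(rates_anchor), np.log10(rates_test)
    p_a = np.polyfit(lr_a, q_anchor, 3)        # cubic fit of quality vs log10(rate)
    p_t = np.polyfit(lr_t, q_test, 3)
    lo, hi = max(lr_a.min(), lr_t.min()), min(lr_a.max(), lr_t.max())
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    return (int_t - int_a) / (hi - lo)          # average quality gain over the range
```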
We did not include the BD-Rate results in this table, because differences in quality scales in most cases are so big that the extrapolation across the bitrate axis to compute the integral differences would not be accurate. It is clear from the BD-PSNR values, that SVT-AV1 offers more gains over x.265 compared to the other codecs. On the other hand, according to BD-VMAF VVenC/VVdeC offers on average the highest perceptual gains. Overall, VVenC/VVdeC seems to be offering a good tradeoff for the achieved quality, especially at low bitrate ranges. These results are confirmed by the plots of the average RQ curves in Fig. 2. The observed improved compression performance of SVT-AV1 and VVenC/VVdeC in terms of quality comes at the cost of higher complexity and, thus, energy consumption. This is confirmed by the RE curves in Fig. 3 for both encoding and decoding. It is also noticeable from these figures and the EBR values in Table III that the two latest codecs, SVT-AV1 and VVenC/VVdeC, have an almost equivalent slope in decoding, although VVdeC requires more energy for decoding. Regarding encoding, SVT-AV1 is performing significantly better than AV1 with EBR enc values comparable to those of x.265. It is also worth mentioning that although VP9 demonstrates the second best EBR dec and average decoding energy consumption, its encoding energy consumption is the highest on average compared to the other codecs. Another interesting view of the tradeoffs between quality and encoding/decoding energy can be derived from Fig. 4, where the average PSNR and VMAF are plotted against the encoding and decoding energy required. It is observed from these plots that SVT-AV1 appears to offer the best tradeoff on the encoding side in terms of quality and required energy, as it achieves on average a very high quality (over 90 in terms of VMAF). All these results reflect the expected energy consumption of these four codecs in a peer-to-peer scenario and for the specific codec configurations. It is expected that the codecs could behave differently under different settings. IV. CONCLUSION In this paper, we presented a study on the energy consumption of four state-of-the-art codecs, SVT-AV1, VP9, VVenC, and x.265, that are optimized to be used in production. The experimental setup was built based on the codecs CTCs and using a video dataset with sequences with spatial resolution 1 https://angkats.github.io/Sustainability-VideoCodecs/ On the other hand, for low-energy solutions, x.265 seems to be the best choice at the cost of lower video quality on average. Future work will include further experimentation with the different codec configurations. Moreover, we plan to extend this use case to include an estimation of the networking energy consumption to study the impact of the audience size on the energy consumed by the end user (decoding energy). Furthermore, the concept of energy-driven codec selection associated with the content and audience size under the constraint of maintaining a high user experience will also be explored.
2022-10-04T06:42:08.598Z
2022-10-02T00:00:00.000
{ "year": 2022, "sha1": "db5a4124411afda1f252c54f14314049ade984c2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "db5a4124411afda1f252c54f14314049ade984c2", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Engineering", "Computer Science" ] }
73715066
pes2o/s2orc
v3-fos-license
Data-Driven Predictive Control Applied to Gear Shifting for Heavy-Duty Vehicles In this paper, the data-driven predictive control method is applied to the clutch speed tracking control for the inertial phase of the shift process. While the clutch speed difference changes according to the predetermined trajectory, the purpose of improving the shift quality is achieved. The data-driven predictive control is implemented by combining the subspace identification with the model predictive control. Firstly, the predictive factors are constructed from the input and output data of the shift process via subspace identification, and then the factors are applied to a prediction equation. Secondly, an optimization function is deduced by taking the tracking error and the increments of inputs into accounts. Finally, the optimal solutions are solved through quadratic programming algorithm in Matlab software, and the future inputs of the system are obtained. The control algorithm is applied to the upshift process of an automatic transmission, the simulation results show that the algorithm is in good performance and satisfies the practical requirements. Record Type: Published Article Submitted To: LAPSE (Living Archive for Process Systems Engineering) Citation (overall record, always the latest version): LAPSE:2018.0660 Citation (this specific file, latest version): LAPSE:2018.0660-1 Citation (this specific file, this version): LAPSE:2018.0660-1v1 DOI of Published Version: https://doi.org/10.3390/en11082139 License: Creative Commons Attribution 4.0 International (CC BY 4.0) Powered by TCPDF (www.tcpdf.org) Introduction Compared with the manual transmission, the automatic hydraulic transmission (AT) not only reduces the intensity of driver's work, but improves the labor productivity for heavy duty vehicles, especially in characteristics with power shifting ability [1].The traditional AT is a kind of clutch-to-clutch transmission which the original clutch is separated and the on-coming clutch combined during shifting process.The clutch alternation process should avoid the power interruption, shift overlapping and shocks [2].The shift process is usually defined as including torque phase and inertia phase.The transmitted torque would change with the oil pressure between the off-going clutch and the on-coming clutch in the torque phase.When the torque transmitted by off-going clutch transfers to the on-coming clutch completely, finally the inertia phase occurs.In this phase, the accurate oil pressure would make the output speed and input speed of transmission synchronous.Based on the experimental data the input speed and output speed changes a little during the torque phase, whereas, it varies intensely in the inertia phase.Therefore, it is crucial to control the clutch speed during the inertial phase for improving shifting quality. For improving shifting quality several researchers have carried on a large amount of research about the clutch speed tracking control during the inertia phase, and they have already reported successes.As we all known that the mature PID method has been widely used in industry control.Meng et al. 
[3] used the PID method to improve the system characteristics with a robust 2-D controller that defines optimal control parameters and tracks the desired speed difference trajectory. Mishra and Srinivasan [4] combined adaptive feedback linearization control and sliding mode control to track the reference trajectory accurately in the inertia phase; the adaptive feedback linearization controller calculated the reference oil pressure of the clutch from the desired and real clutch speeds, and the measured pressure together with the reference one was used as input. Gao et al. [5] designed a nonlinear controller with stable input performance using a backstepping method that accounts for system nonlinearity and uncertainty in the inertia phase. Depraetere et al. [6] designed a two-level controller for shifting: the low-level controller solved for the control variables by building an optimization problem with a constrained objective function, whereas the high-level controller used ILC (Iterative Learning Control) to update the model and constraints based on the measured data, after which the control variables were recalculated by the low-level controller. Dutta et al. [7][8][9] solved the clutch engagement control problem using Model Predictive Control and discussed in depth the learning strategy for clutch engagement. Shi et al. developed a way of keeping the shift quality of automatic transmissions consistent in mass production and with mileage accumulation by means of adaptive control [10]. In that work, the authors developed model-based learning control (two-level nonlinear model predictive control, two-level iterative learning control and iterative optimization) and model-free learning control (genetic algorithm and reinforcement learning), and those controllers were verified by experiments with good control effectiveness.

Most of the mentioned methods for shifting control depend on a detailed model. Since shifting is a complicated and nonlinear process, it is hard to build an accurate theoretical model, and the resulting higher-order system is also difficult to solve, even when a precise model is obtained [11]. With the rapid development of computer science and technology, a large amount of data has been collected during system operation, and this data contains the system information. When a precise theoretical model is hard to build, data-driven methods can be used to extract the characteristic information from off-line and real-time data in order to realize optimization control, forecasting and evaluation of the system process, which has received a great deal of attention in the control community [12][13][14][15]. In addition, owing to some natural advantages, model predictive control is being applied to more and more control systems.
In this paper, subspace identification and model predictive control are combined, and a data-driven predictive controller is designed for gear shifting of ATs. Firstly, the dynamic equation of the planetary gearbox is derived by the Lagrange method. The controller is a kind of model-free control that obtains the system characteristic information needed to establish the predictive controller from the input and output data by means of subspace identification. In the process of clutch speed tracking control, it is necessary to balance conflicting control requirements. Therefore, the tracking control task is recast as a multi-objective optimization problem, solved by predictive control while considering the constraints of the real mechanical system, and an improved particle swarm optimization algorithm is utilized for searching for the optimal solution.

Topology Structure of Powertrain System

In the simulation model, the powertrain system is divided into several subsystems: the diesel engine model, torque converter, transmission, drive shaft, longitudinal vehicle model and hydraulic model.

Diesel Engine Model

Using a linear function, a sine function and a hyperbolic function, a continuous speed regulation characteristic function model covering the full operating range was constructed and applied to the engine model shown in Figure 1 [16].

Torque Converter Model

Based on the pump capacity factor λp and torque ratio K, the characteristic curve of the torque converter is determined from test data, which is then used in the torque converter model [17].

Transmission Model

In this study the automatic transmission consists of four planetary systems, two clutches and four brakes. The main work of modelling the AT focuses on the definition of the clutch friction model and the dynamic behaviour of the planetary gear sets. For the clutch friction model, the Woods static and dynamic friction model is used, which is a three-state logic function related to the friction torque and the reference speed of the clutches. For the planetary system, the planetary dynamic equations are obtained by the Lagrange method and the virtual work principle, which are introduced in detail for modelling purposes in [18,19].

The structure of the automatic transmission is shown in Figure 2, where the torque converter, hydraulic retarder, planetary system, clutches/brakes, hydraulic system and control unit can be seen. The planetary systems are represented by P1, P2, P3 and P4; CL, CS and CH indicate the clutches, and BS, BM, BL and BR represent the brakes. When the control unit sends signals to the hydraulic system, the clutches and brakes operate in different combinations to obtain the desired ratio. The schedule is shown in Table 1 (columns: Gear, CS, BS, CH, BM, BL, BR, Ratio), where '√' means engaging.

According to the structure of the AT, the Lagrange method was used to establish the dynamic equations of the planetary system. Because the fourth planetary set does not work during shifting from 1st gear to 2nd gear, the reverse gear was neglected in the derivation to simplify the modelling; therefore, the number of planetary sets was reduced to three. The connection of the planetary sets is shown in Figure 3. It can be seen that the ring gear of planetary set P1 is connected to the sun gears S2 and S3 (this component is named r1s2s3), while the carrier C2 is connected to the ring gear R3 (named c2r3).

Based on the meshing relationships, the kinematical equations of the planetary system were obtained (Equations (1) and (2)), where the terms α1, β1 and γ1 represent the angular displacements of the sun gear, carrier and pinion in the P1 planetary set, respectively, θ1 represents the angular displacement of the ring gear in P1 and of the sun gears in P2 and P3, β2 represents the angular displacement of the carrier in P2 and of the ring gear in P3, θ2 represents the angular displacement of the carrier in P2, γ2 represents the angular displacement of the pinion in P2, β3 represents the angular displacement of the carrier in P3 and γ3 represents the angular displacement of the pinion in P3.

For convenience of operation, the rates α̇1, β̇1 and β̇3 were selected as the independent coordinates. After the above six constraint equations are solved, the kinematics can be written in matrix form with a coefficient matrix A.

During gear shifting, the external forces from the hydraulic system act on the planetary system, and the virtual work generated by the external torques in the virtual displacements is formed. The external torques change when shifting to a different gear. The torque on the sun gear in P1, Ts1, is codetermined by the friction torques Tf,CS and Tf,BS induced by the clutch CS and the brake BS. The torque on the carrier in P1, Tc1, is related to the input torque of the turbine, Tin, and the friction torque from the clutch CS. Because the ring gear of P1 is connected with the sun gears of P2 and P3, the external torque on this component is determined by the friction torque from clutch CH. The carrier in P2 and the ring gear in P3 are connected together, and their torque is determined by the friction torque of brake BL. The external torque on the ring gear in P2 is codetermined by the friction torques of clutch CS and brake BM. Finally, the torque on the carrier in P3 is related to the output torque of the transmission, Tout.

Applying the Lagrange equation to the system, the total energy is differentiated with respect to time and the partial derivatives are evaluated, and the dynamic equation of the AT500 planetary system is obtained in terms of the generalized coordinates α̇1, β̇1 and β̇3 (Equation (7)). According to Equation (7), the angular speeds of the coordinate components can be solved when the friction torques of the clutches and brakes are known. The kinematic relationship of all components of the AT500 is shown in Figure 4.
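Since Equation (7) itself is not reproduced in the extracted text, the following generic form of the Lagrange equations, written for the three independent coordinates named above, may help the reader follow the derivation; it is a standard textbook form and not the paper's exact Equation (7):

\[
\frac{d}{dt}\left(\frac{\partial T}{\partial \dot{q}_i}\right) - \frac{\partial T}{\partial q_i} = Q_i, \qquad q_i \in \{\alpha_1, \beta_1, \beta_3\},
\]

where T is the total kinetic energy of the gear set and Q_i is the generalized torque acting on coordinate q_i, obtained from the virtual-work expression described above.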
Output Shaft and Longitudinal Vehicle Model

For modelling convenience, the longitudinal vehicle model was simplified [20]. The main reducer is treated as a simple transmission, and the connection between the reducer and the vehicle body is modelled as a torsion spring with a certain stiffness, with the vehicle body reduced to a mass block with a certain moment of inertia.
Control Scheme

To achieve speed synchronization of two relatively rotating components, Meng et al. [1] used a piecewise function to fit the speed difference curve, with a quadratic function applied at the two ends and a linear function in the middle part. This keeps the difference curve differentiable and, furthermore, keeps the rate of change of the relative speed at zero at the beginning and at the final moment. In addition, several papers have reported that a cubic function can be used as the reference curve [21][22][23][24][25]. Across these different choices of difference curve, the aim is to guarantee that the rate of change of the relative speed is zero at the endpoints and that the shifting time fulfils the requirement, since in tracking problems the error between the reference trajectory and the real speed matters more than the particular choice of reference trajectory. In this paper, a cubic function is used as the reference curve (Equation (8)), where Δω* represents the reference trajectory of the speed difference, Δω0 represents the speed difference at the start of the inertia phase, and t0 and tf represent the start and end times, respectively.

This paper focuses on the shifting process from first gear to second gear, which is the alternation between the off-going clutch CS and the on-coming brake BS. When the gear shift starts, the pressure of brake BS is increased gradually; the external torque applied to the AT500 is transferred step by step from the clutch CS to the brake BS and is split automatically between them. When the clutch CS is fully disengaged, the inertia phase begins. Therefore, the target of the pressure control of brake BS is to track the reference trajectory deduced from Equation (8), which is shown in Figure 5. To optimize the shift quality, the current of the electric valve controlling the brake BS, i, and the throttle angle θ are the control variables, which fulfil 0 ≤ i ≤ 0.6 A and 0 ≤ θ ≤ 1. Since an incremental method is used in the controller design, the ranges of the control variable rates are set to 0 ≤ di/dt ≤ 3 A/s and 0 ≤ dθ/dt ≤ 4%/s, and the increment constraints for the control variables are calculated from the simulation step length and these rates of change.
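The cubic reference curve of Equation (8) is not reproduced in the extracted text; a common choice that satisfies the stated boundary conditions (speed difference Δω0 at t0, zero at tf, and zero slope at both endpoints) is the following sketch, which should be read as an illustration rather than the paper's exact equation:

\[
\Delta\omega^{*}(t) = \Delta\omega_{0}\left[\,1 - 3\left(\frac{t-t_{0}}{t_{f}-t_{0}}\right)^{2} + 2\left(\frac{t-t_{0}}{t_{f}-t_{0}}\right)^{3}\right], \qquad t_{0}\le t\le t_{f},
\]

so that Δω*(t0) = Δω0, Δω*(tf) = 0, and dΔω*/dt = 0 at both t0 and tf.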
Shift Controller Design

Looking into the detailed structure of the system model is unnecessary: subspace identification only needs linear algebra tools such as the QR decomposition, without iterative optimization. It is therefore easy to implement and has developed rapidly [26]. The main idea is to design an appropriate excitation signal for the system and use it to acquire input and output data. After data normalization, the Hankel matrices built from the input and output data are factorized by the QR method and singular value decomposition to estimate the state vectors of the system; finally, the state-space matrices A, B, C and D are solved by the least squares method. Model predictive control, in turn, is usually used to calculate the future control sequence that minimizes a cost function based on the latest measurement at each sampling time, given a known model. The first element of the obtained control sequence is applied to the system, and the procedure is repeated at the next sampling time until the end of the shift. To utilize the data-driven predictive method for shifting control, subspace identification and model predictive control are combined as presented in Figure 6.

From the past and future input and output data, the subspace predictors in the prediction equations are acquired by the least squares method [27][28][29][30][31], and the prediction equation is then optimized to obtain the future control sequence. Compared with subspace identification and model predictive control, the data-driven predictive control can be seen as a model-free predictive control method that does not solve for the coefficient matrices of the state-space model; instead, the prediction equation is acquired from the input and output data directly.

The data-driven predictive control method is described as follows: (1) Design an appropriate input to acquire the input data u and output data y; (2) Formulate the Hankel matrices Up, Uf, Yp and Yf from these input and output data; (3) Solve the subspace prediction equation by the least squares method to acquire the predictive factors Lw and Lu; (4) Rewrite the prediction equation in incremental form and build the initial past-data vector ∆wp (stacking ∆yp and ∆up) and the reference output sequence Re; (5) Build the objective function J and define the initial constraints; (6) Solve the objective function by the QP method to acquire the sequence of future control increments ∆uf, the first of which is applied as the system input.
(7) Update the past-data vector ∆wp (from ∆yp and ∆up) and the reference sequence Re based on the system output; (8) Return to step (6) until the end of shifting.

Data-Driven Shift Predictor

For the optimal control of the inertia phase of shifting, the current of the electric valve and the throttle angle are regarded as the system inputs u, and the relative speed of the clutch ∆ω is taken as the system output y. To excite the dynamic characteristics of the system, a simulation of the inertia phase of shifting was carried out using random input signals. The linear time-invariant system is usually described in state-space form, where uk, yk and xk represent the inputs, outputs and state variables, respectively. Assuming the inputs uk and outputs yk are known at every moment, the Hankel matrices can be defined as introduced in [22]. Following the system identification, the output estimate matrix is deduced (Equation (10)). The predictive factors Lw and Lu are acquired by solving the constructed least squares problem. To verify the effectiveness of the predictive factors, the first column of the future output matrix calculated by Equation (10) was compared with the test output data from the simulation model. The simulation results show that the error between the estimated value and the test value is very small, which illustrates that the predictor is useful for estimating the future output data.

Data-Driven Shift Controller

The future output sequence is substituted into the objective function together with the constraints, and the optimization problem is handled as a quadratic programming problem (Equation (11)). The increment sequence of the future control variables is the solution of this quadratic programming problem. The first increment, added to the control variable of the previous step, is applied to the system, and this is repeated until the end of shifting.

Simulation Results and Discussion

The controlled object and the controller were built in Matlab/Simulink. Because the process is similar for the different gears, we focus only on the shift from first gear to second gear under various working conditions in order to verify the effectiveness of the controller. In the simulations, the parameters of the vehicle and the road change with the different conditions, while the parameters of the controller are kept at the same values.

The main parameters of the shifting controller are the number of rows of the Hankel matrix i = 50, the number of columns j = 1100, the control horizon of the predictor Nu = 4 and the prediction horizon Np = 50. The control sequence and increment constraints are the same as mentioned before. The weight vectors Γu and Γw are row vectors after normalization, of dimensions 8 and 50, respectively. Because of the high dimension of the weight vector Γw, only Γu is shown:

Γu = [0.2789 0.2789 0.1317 0.1317 0.0609 0.0609 0.0285 0.0285]  (12)

where Γu reflects the impact of the different elements of the future control sequence on the objective function: the closer a future control move is to the current time, the more easily it influences the cost function. We assume that the influence of the throttle angle and of the electric valve current is the same at the same moment. In this simulation, the start moment of the inertia phase is set to t0 = 4 s and the phase lasts about 0.5 s.
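As an illustration of steps (2)–(4) of the Data-Driven Shift Predictor described above, the following Python sketch builds block-Hankel matrices from input/output data and solves the least-squares problem for the predictive factors Lw and Lu. The function names, data layout and the choice of NumPy are ours, not the paper's, and in the paper the resulting predictor feeds a constrained QP rather than the plain prediction shown here.

```python
import numpy as np

def hankel(data, rows, cols):
    """Block-Hankel matrix from (n_signals, N) data; block (k, c) is data[:, k + c]."""
    return np.vstack([np.hstack([data[:, k + c:k + c + 1] for c in range(cols)])
                      for k in range(rows)])

def subspace_predictor(u, y, i, j):
    """
    Least-squares subspace predictors from input/output data.
    u: (m, N) inputs, y: (l, N) outputs, with N >= 2*i + j - 1.
    The predictor approximates  Yf ~= Lw @ Wp + Lu @ Uf,  Wp stacking past inputs and outputs.
    """
    Up, Uf = hankel(u, i, j), hankel(u[:, i:], i, j)
    Yp, Yf = hankel(y, i, j), hankel(y[:, i:], i, j)
    Wp = np.vstack([Up, Yp])
    # solve Yf = [Lw Lu] [Wp; Uf] in the least-squares sense
    L, *_ = np.linalg.lstsq(np.vstack([Wp, Uf]).T, Yf.T, rcond=None)
    L = L.T
    Lw, Lu = L[:, :Wp.shape[0]], L[:, Wp.shape[0]:]
    return Lw, Lu

# prediction of a future output sequence for a given past window wp and candidate inputs uf:
#   yf_hat = Lw @ wp + Lu @ uf
```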
Figure 7 shows the simulation results for the empty-load condition without slope, where the vehicle mass is mv = 55,000 kg, the road slope is α = 0°, the initial throttle angle is θ = 0.9, the initial electric valve current is i = 370.2 and the initial relative speed difference of the brake BS is ∆ω0 = 118.8. According to the simulation results, the designed controller fulfils the requirements, and the maximum tracking error of the clutch angular speed is about 5.3 rad/s. The throttle angle and the electric valve current fluctuate at the beginning of the inertia phase, which causes the unsmooth portion of the tracking error curve. Reference [22] has shown that the turbine speed is easily affected by the control sequence at the beginning of the inertia phase. Thereafter, the control sequence and the speed difference ∆ω become smooth. In particular, there is no error between the simulated speed difference and the reference curve at the end of the inertia phase. Compared with the limited rate of change of speed at the beginning and end of the inertia phase, the maximum tracking error occurs in the middle part. Since the requirement applies only to those two end parts, the controller achieves the desired effect and can be used for shifting.

Since the vehicle works under time-varying conditions, the controller should also be tested under other conditions. The simulation results for the empty-load condition on a slope are shown in Figure 8, where the vehicle mass is mv = 55,000 kg and the road slope is α = 5°. In this condition, the initial electric valve current changes to i = 423.2 and the initial clutch speed difference is ∆ω0 = 48.35 rad/s. The control variables and the output show large fluctuations and take about 0.06 s to become smooth; the maximum tracking error is 4.2 rad/s. The results for the full-load condition on a slope are presented in Figure 9, where the vehicle mass increases to mv = 72,000 kg, the road slope is α = 2°, the initial throttle angle is θ = 0.9, the initial electric valve current changes to i = 413.5 and the initial clutch speed difference is ∆ω0 = 73.84 rad/s. The simulation results in this condition fall between the two conditions mentioned above, with a maximum error of 3.3 rad/s.

Conclusions

To reduce the dependency on a detailed model, subspace identification and model predictive control are combined in this paper. The data-driven predictive control method is applied to optimize the shift quality in the inertia phase, turning the optimization into the tracking of a desired reference curve. The model-free algorithm uses the input and output signal data to implement predictive control through the prediction equation.

The control variables θ and i are solved so as to balance the two contradictory indicators, shift time and shift impact, and the objective function is designed with constraints on the control variables and their rates of change. The controller was implemented in Matlab/Simulink, where the objective function is solved by QP with constraints. The solved control variables are used as the inputs for the shift during the inertia phase, and the simulation results show that the speed of the on-coming clutch tracks the reference curve accurately. Moreover, the simulation results verify the effectiveness of the controller and its robust performance when the mass and road slope change.
In conclusion, the data-driven predictive control not only improves the shift quality but also provides a theoretical basis for real-time testing on hardware, and it can be applied to shifting problems for conventional, hybrid and electric vehicles without building a transmission model. The results also lay a technical foundation for hardware-in-the-loop tests and real vehicle tests.

Figure 2. Diagram of the automatic transmission.
Figure 4. Dynamic characteristic solver of the gear set.
Figure 5. The reference trajectory of the clutch speed difference.
Figure 6. Principle of the data-driven predictive control algorithm.
Figure 7. Simulation results (vehicle mass m = 55 t; road slope α = 0°).
Table 1. Combined clutch/brake schedule.
2018-12-23T16:11:45.532Z
2018-08-16T00:00:00.000
{ "year": 2018, "sha1": "f93e72b57de5b439453b3cf3a715d5ca7fbf4e47", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/11/8/2139/pdf?version=1534423817", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "7a1ba0b77945d010eff696fabfbfbbc8d2b48aca", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
269078164
pes2o/s2orc
v3-fos-license
Research on Energy Management Strategy for Series Hybrid Tractor under Typical Operating Conditions Based on Dynamic Programming : In response to the issues of hybrid tractors' energy management strategies, such as reliance on experience, difficulty in achieving optimal control, and incomplete analysis of typical tractor operating conditions, an energy management strategy based on dynamic programming is proposed that covers several typical tractor operating conditions. This is aimed at providing a reference for the modeling and energy management strategies of series hybrid tractors. Taking a series hybrid tractor as the research object, tractor dynamics models under three typical working conditions of plowing, rotary tillage, and transportation were established. With the minimum total fuel consumption of the tractor as the optimization target, the engine power as the control variable, and the state of charge of the power battery as the state variable, an energy management strategy based on a dynamic programming algorithm was established and simulation experiments were conducted. The simulation results show that, compared with the power-following energy management strategy, the energy management strategy based on the dynamic programming algorithm can reasonably control the operating state of the engine. Under the three typical working conditions of plowing, rotary tillage, and transportation, the battery SOC consumption increased by approximately 8.37%, 7.24%, and 0.77%, respectively, while the total fuel consumption decreased by approximately 25.28%, 21.54%, and 13.24%, respectively.

Introduction

Tractors occupy an important position in the development of modern agriculture and are one of the main types of agricultural power machinery. However, traditional diesel tractors have problems such as high pollutant emissions and poor fuel economy [1][2][3]. Facing increasingly strict emission standards around the world, the agricultural machinery industry has an increasingly urgent demand for environmentally friendly and energy-saving agricultural tractors [4,5]. Pure electric tractors can achieve zero pollution, but, owing to current battery technology limitations, they have a short continuous operating time and are unable to perform high-load agricultural work for extended periods [6,7]. The series hybrid tractor adds an engine and a generator to the pure electric tractor. By reasonably controlling the operating state of the engine, it can achieve the same power performance as a traditional diesel tractor while reducing fuel consumption [8][9][10].

The energy management strategy has an important impact on the fuel economy, power performance, and power source lifespan of a hybrid tractor [11,12]. Currently, energy management strategies fall into three categories: rule-based, learning-based, and optimization-based strategies [13]. Rule-based energy management strategies are simple to develop and highly feasible, and were the first energy management strategies applied to hybrid vehicles [14]. Chen et al. [15] designed an adaptive fuzzy energy management strategy for extended-range electric vehicles using a BP neural network optimized by an improved genetic algorithm, which effectively improved the fuel economy of the whole vehicle. Yang et al.
[16] designed an energy management strategy that combines constant temperature control, power following, and fuzzy rules; this strategy reduces equivalent hydrogen consumption while increasing the lifespan of the system. Zou et al. [17] proposed an energy management strategy for fuel cell hybrid vehicles that uses fuzzy logic to optimize the power-following control strategy; this control strategy optimizes the output power of the hydrogen fuel cell while reducing hydrogen consumption. However, rule-based energy management strategies usually require a great deal of tuning to determine suitable parameters, rely on the developer's design experience, and find it difficult to achieve optimal control [18].

Learning-based energy management strategies are control strategies with adaptive learning capabilities and good robustness [19]. Xu et al. [20] proposed a supervised-learning-based driving cycle pattern recognition method that can accurately predict road conditions and improve the fuel economy of hybrid vehicles. Wu et al. [21] proposed an energy management strategy based on deep deterministic policy gradients, which approaches the globally optimal performance of dynamic programming and can achieve optimal energy allocation for vehicles in continuous spaces. Wang et al. [22] combined computer vision with deep reinforcement learning, enabling the algorithm to autonomously learn the optimal control strategy from visual information collected by on-board cameras, which reduced fuel consumption and achieved 96.5% of the performance of globally optimal dynamic programming. However, learning-based energy management strategies require a large amount of training data, have high computational requirements, and lead to relatively complex control strategies [23].

Optimization-based energy management strategies use cost functions as optimization objectives and measure the optimization effect by minimizing the cost function [24]. Zhao et al. [25] proposed an energy management strategy based on the principle of maximizing external energy efficiency, which significantly reduces equivalent hydrogen consumption and improves the overall efficiency of the hybrid tractor. Dou et al. [26] proposed an energy management strategy based on an equivalent fuel consumption minimization algorithm; this strategy can adaptively distribute the required torque based on the load condition, resulting in better fuel consumption than rule-based energy management strategies. Curiel-Olivares et al. [27] proposed a model-predictive-control-based energy management strategy for series hybrid tractors; this strategy outperforms rule-based energy management strategies in terms of fuel consumption while also optimizing the operating state of the battery.
This article takes a series diesel-electric hybrid tractor as the research object and proposes a globally optimal hybrid energy management strategy based on dynamic programming (DP) [28][29][30]. By reasonably controlling the operating state of the engine and optimizing its output power, it is possible to reduce the total fuel consumption of the tractor under the three typical working conditions of plowing, rotary tillage, and transportation while ensuring the tractor's power performance. The remainder of this article is organized as follows. Section 2 introduces the topological structure and main performance parameters of the power system of the series hybrid tractor. Section 3 explains the simulation models of the various components of the series hybrid tractor. Section 4 designs two energy management strategies, namely those based on dynamic programming and on power following (PF). Section 5 verifies and analyzes the energy management strategies through simulation experiments. Section 6 discusses the results of the simulation experiments and outlines future research directions. Section 7 summarizes the research content and experimental results of this paper.

Structural Parameters of Tractor's Power System

Figure 1 shows the topological structure of the power system of the series diesel-electric hybrid tractor. The tractor uses a drive motor as the power source, and the torque output by the drive motor is transmitted to the drive wheels and the power takeoff (PTO) through the transmission system. The power battery delivers power to the drive motor through a power converter. When the power battery's charge is insufficient, the engine drives the generator to generate electricity, which is then delivered to the power battery through the power converter to charge the battery. The main component parameters of the series diesel-electric hybrid tractor are shown in Table 1, including the rated power and rated speed of the diesel engine and the drive motor, as well as the rated capacity and rated voltage of the power battery and other specification parameters.
Figure 1. Topological structure diagram of the power system of the series hybrid tractor.

Driver Model

Based on the principle of forward modeling, this article uses the difference between the target vehicle speed and the current vehicle speed as the input, and the accelerator pedal opening and brake pedal opening as the output, to build a driver model based on PI control. The principle of the driver model is given by Equations (1) and (2) [31], where kp is the proportional coefficient; ki is the integral coefficient; e is the difference between the target velocity of the tractor and the current velocity of the tractor, km/h; Op is the pedal opening, with Op ∈ (0,1) indicating the accelerator pedal opening and Op ∈ (−1,0) indicating the brake pedal opening; vref is the target velocity of the tractor, km/h; and vact is the current velocity of the tractor, km/h.
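A minimal sketch of the PI driver model described above (speed error in, pedal opening out) is given below. The saturation to [−1, 1], the discrete integration step, and the numeric gains in the usage line are our assumptions; kp, ki, e, and Op follow the notation of Equations (1) and (2).

```python
class PIDriver:
    """PI driver model: positive output = accelerator pedal opening, negative = brake pedal opening."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.e_int = 0.0  # running integral of the speed error

    def step(self, v_ref, v_act):
        e = v_ref - v_act                      # speed error e, km/h
        self.e_int += e * self.dt
        op = self.kp * e + self.ki * self.e_int
        return max(-1.0, min(1.0, op))         # Op in (0,1): accelerator; Op in (-1,0): brake

# illustrative use with made-up gains:
# driver = PIDriver(kp=0.4, ki=0.05, dt=0.01)
# op = driver.step(v_ref=10.0, v_act=9.2)
```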
Generator Set Model

In the structure of a series hybrid system, the engine drives the generator through a mechanical connection, and the engine and generator are not connected to the transmission system, making them relatively independent of the power system of the whole vehicle. Therefore, the engine and generator are usually considered as a whole, namely the generator set. Research on hybrid energy management strategies focuses on the fuel economy of the tractor, so a numerical modeling method is adopted that considers only the input and output relationships of the generator set, as shown in the following equation [32]:

Pe = ne Te / 9549  (3)

where Pe is the engine power, kW; ne is the engine speed, r/min; Te is the engine torque, N·m; PG is the generator set power, kW; and ηG is the generator efficiency.

Over the entire set of operating conditions, the total fuel consumption of the engine is expressed in terms of the engine power and fuel consumption rate (Equation (6)), where E is the total fuel consumption of the engine, L; be is the fuel consumption rate of the engine, g/kWh; and ρf is the density of diesel fuel, g/L.

In order to achieve optimal fuel economy for the engine, the engine MAP and optimal operating line (OOL) were fitted using engine bench test data, as shown in Figure 2. Figure 2 includes the engine fuel consumption rate MAP, the optimal operating line of the engine, and the external characteristic curve of the engine. The fitted engine characteristic curves in Figure 2 provide the data needed for the engine to operate in its optimal state and for accurately obtaining the current fuel consumption rate of the engine.

Drive Motor Model

The drive motor model also employs a numerical modeling approach. The relationship between motor speed and motor torque is determined from drive motor bench test data using a look-up table method. Torque control is adopted, and the motor output is controlled through the accelerator pedal opening signal. The mathematical modeling principle of the drive motor is shown in Equation (7). Based on the drive motor bench test data, the drive motor's external characteristic curve is obtained, and the motor efficiency MAP is then obtained through interpolation fitting, as shown in Figure 3 [33]. In Equation (7), nm is the drive motor speed, r/min; Tm is the drive motor torque, N·m; Tm_max is the maximum torque at the current drive motor speed; kac is the accelerator pedal opening; ηm is the drive motor efficiency; and Pm is the drive motor power, kW.
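The torque-control logic described for Equation (7) (torque bounded by the external characteristic at the current speed and scaled by the accelerator pedal opening) can be illustrated as follows. The look-up table values are placeholders, and the linear interpolation is our choice; only the structure mirrors the model described above.

```python
import numpy as np

# hypothetical external characteristic: speed (r/min) -> maximum torque (N*m)
SPEED_PTS  = np.array([0, 500, 1000, 1500, 2000, 2500, 3000])
TORQUE_MAX = np.array([600, 600, 600, 420, 320, 250, 210])

def drive_motor(n_m, k_ac):
    """Drive motor torque (N*m) and mechanical power (kW) for speed n_m and pedal opening k_ac."""
    t_max = np.interp(n_m, SPEED_PTS, TORQUE_MAX)   # T_m_max at the current speed
    t_m = k_ac * t_max                               # torque scaled by the accelerator pedal opening
    p_m = t_m * n_m / 9549.0                         # P = n*T/9549, same convention as Equation (3)
    return t_m, p_m
```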
Transmission System Model

The power source of the series hybrid tractor is the drive motor, and the torque generated by the drive motor acts on the drive wheels and the PTO through the transmission system. The transmission system model is described by the following equations, where Ftr is the forward traction force produced at the wheels by the drive motor torque through the transmission system, N; ig is the gear ratio of the transmission; i0 is the gear ratio of the final drive; ηT is the efficiency of the transmission system; Rw is the radius of the drive wheel, m; Fbr is the braking force of the brake, N; kbr is the brake pedal opening; and Fbr_max is the maximum braking force of the brake, N. The drive motor speed can also be calculated from the tractor's current vehicle speed and the transmission system parameters, as shown in Equation (10).

Tractor Plowing Condition Dynamics Model

Under the plowing condition, the tractor's driving resistance is mainly determined by the plowing resistance and the rolling resistance, calculated as shown in the following equations, where Ft is the driving force, N; FL is the plowing resistance, N; Ff is the rolling resistance, N; Z is the number of plowshares; b is the width of a single plowshare, cm; h is the plowing depth, cm; k is the specific resistance of the soil, N/cm²; m is the operating mass of the tractor, kg; g is the acceleration of gravity, m/s²; f is the rolling resistance coefficient; and α is the slope angle, (°). The current vehicle speed of the tractor can also be calculated from the driving force, as shown in Equation (14).

Tractor Rotary Tillage Condition Dynamics Model

When performing rotary tillage operations, the series hybrid tractor can neglect the effects of air resistance and acceleration resistance. Because the formula for calculating rotary tillage power is complex and has many influencing factors, this paper uses empirical formulas for the calculation. The power balance is shown in the following equation [34,35], where Pdrive is the tractor's driving power, kW; Fi is the slope resistance, N; Pr is the power of the rotary tiller, kW; B is the width of the rotary tillage area; and ηr is the transmission efficiency of the rotary tiller unit.

Tractor Transportation Condition Dynamics Model

When the tractor performs transportation operations, the balance between the driving force and the driving resistance is expressed by the following equation, where Fac is the acceleration resistance, N; Faf is the air resistance, N; δ is the tractor mass conversion coefficient; a is the tractor acceleration, m/s²; CD is the wind resistance coefficient of the tractor; and A is the windward area of the tractor, m².
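A worked numerical sketch of the plowing-condition force balance described above (plowing resistance of the form Z·b·h·k plus rolling resistance on a slope) is shown below. The exact equations are not reproduced in the extracted text, so this follows the common textbook formulation, and all numeric values are illustrative only.

```python
import math

def plowing_resistance(Z, b_cm, h_cm, k):            # F_L = Z * b * h * k  (N)
    return Z * b_cm * h_cm * k

def rolling_resistance(m, f, alpha_deg, g=9.81):      # F_f = m * g * f * cos(alpha)  (N)
    return m * g * f * math.cos(math.radians(alpha_deg))

# illustrative values: 5 plowshares, 35 cm share width, 25 cm plowing depth,
# soil specific resistance 7 N/cm^2, tractor mass 5000 kg, rolling coefficient 0.1, level ground
F_L = plowing_resistance(Z=5, b_cm=35, h_cm=25, k=7)   # 30625 N
F_f = rolling_resistance(m=5000, f=0.1, alpha_deg=0)   # about 4905 N
F_t = F_L + F_f                                        # required driving force on level ground
print(F_L, F_f, F_t)
```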
Power Battery Model

In research on energy management strategies for hybrid electric vehicles, the battery model mainly reflects the relationship between the battery power and the power of the other power systems in the vehicle. This article therefore treats the power battery as an ideal voltage source, ignores the effect of temperature on the battery voltage, and adopts the Rint model. The dynamics of the battery state of charge (SOC) and the battery power are

dSOC/dt = −(U_oc − √(U_oc² − 4 R_int P_B)) / (2 R_int Q_B)    (22)

P_B = (P_m + P_e) η_B  when (P_m + P_e) < 0;   P_B = (P_m + P_e) / η_B  when (P_m + P_e) > 0    (23)

where U_oc is the open-circuit voltage of the power battery, V; R_int is the internal resistance of the power battery, Ω; P_B is the battery power, kW; Q_B is the rated capacity of the battery, A·h; and η_B is the charge and discharge efficiency of the power battery. When (P_m + P_e) is less than 0 the power battery is charging; when (P_m + P_e) is greater than 0 the power battery is discharging.

Tractor Simulation Model

By studying the working characteristics and structural composition of a series hybrid tractor and combining these with the modeling requirements of the energy management strategy, a tractor simulation model is built in Matlab/Simulink. The model includes a driver model, a drive motor model, a transmission system model, a power battery model, tractor (plowing, rotary tillage, and transportation) dynamics models, a generator set model, and an energy management strategy model. The model structure is shown in Figure 4. Based on the difference e between the target vehicle speed and the current vehicle speed under the current operating conditions, the driver model outputs the accelerator pedal opening and brake pedal opening (k_ac, k_br). These signals drive the transmission system model, the dynamics model of the tractor under the given operating condition, and the drive motor model, from which the drive motor power P_m is obtained. Meanwhile, the power P_G of the generator set model is allocated according to the established control strategy through the energy management strategy model. Finally, the battery power P_B and the SOC are calculated through the power battery model.

Dynamic Programming Energy Management Strategy Model

For a series hybrid tractor, the optimization goal of the energy management strategy is to control the operating state and output power of the engine so that the tractor achieves optimal fuel consumption under the current working conditions. The DP algorithm is a multi-stage decision optimization algorithm that divides the multi-stage decision-making process into multiple single-stage problems based on the Bellman optimality principle. By defining appropriate control variables, state variables, and an objective function, the DP algorithm solves the single-stage problems by reverse calculation to obtain the optimal control.
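Before setting up the dynamic programming formulation, the Rint battery update described above can be summarised in a short numerical sketch; the open-circuit voltage, internal resistance, capacity, and efficiency used here are placeholder values, not the parameters of Table 1.

```python
# Illustrative sketch of the Rint battery model of Eqs. (22)-(23): piecewise
# charge/discharge efficiency and an SOC update over one 1 s stage.
import math

def battery_step(soc, P_net_kw, dt=1.0, U_oc=350.0, R_int=0.08,
                 Q_B=100.0, eta_B=0.95):
    """P_net_kw is the net power demand on the battery (kW); negative = charging."""
    P_B = P_net_kw * eta_B if P_net_kw < 0 else P_net_kw / eta_B
    P_W = P_B * 1000.0                                        # convert to W
    # Terminal current of the Rint model, from U_oc*I - I^2*R_int = P_W
    I = (U_oc - math.sqrt(max(U_oc**2 - 4.0 * R_int * P_W, 0.0))) / (2.0 * R_int)
    return soc - I * dt / (3600.0 * Q_B)                      # Q_B in A*h

soc_next = battery_step(soc=0.60, P_net_kw=25.0)              # small discharge step
```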
During the entire set of operating conditions of a tractor, its overall state changes over time. Therefore, when establishing the dynamic programming algorithm, the entire set of operating conditions is divided into N stages at 1 s intervals based on the tractor's operating conditions. State variables reflect the change process of the controlled object. For a series hybrid tractor, the state of charge (SOC) of the power battery can represent the state changes of the tractor over the entire set of operating conditions, so the SOC of the power battery is selected as the state variable. During the operation of the tractor, the main factor affecting the change in the SOC of the power battery is the output power of the engine, so the engine power is selected as the control variable.

The state variables and control variables are discretized as shown in Equation (24), where N is the dimension of the discrete space and j is the number of discrete points. From Equation (22), the state transition equation follows by integrating the SOC dynamics over each 1 s stage, expressing SOC(k+1) in terms of SOC(k) and the control P_e(k).

Taking the total fuel consumption of the engine over the entire set of operating conditions as the optimization objective, the objective function follows from Equation (6) as

J = Σ_{k=0}^{N−1} b_e(k) P_e(k) Δt / (3600 ρ_f)

with Δt = 1 s. To ensure that all components of the tractor operate within a reasonable range, the following constraints are added:

SOC_min ≤ SOC ≤ SOC_max,
P_B_min ≤ P_B ≤ P_B_max,
P_e_min ≤ P_e ≤ P_e_max,
n_e_min ≤ n_e ≤ n_e_max,
T_e_min ≤ T_e ≤ T_e_max    (27)

where SOC_min and SOC_max are the minimum and maximum allowable values of the SOC of the power battery; P_B_min and P_B_max are the minimum and maximum power of the power battery during operation; and P_e_min, P_e_max, n_e_min, n_e_max, T_e_min, and T_e_max are the minimum and maximum power, minimum and maximum speed, and minimum and maximum torque of the engine during operation, respectively.
The Solution Process of the Dynamic Programming Algorithm

The energy management strategy based on dynamic programming optimizes the operating state and output power of the engine over the entire set of working conditions using the dynamic programming algorithm, aiming to achieve the best fuel economy for the tractor. The solution process is shown in Figure 5. The specific steps are as follows:

1. The solution process is divided into N stages based on the operating conditions. The operational parameters of each component of the tractor at each stage are calculated using the tractor's mathematical model, and all admissible control variables are enumerated. To further optimize the fuel consumption of the tractor, each candidate control, that is, the engine power P_e with its corresponding engine speed n_e and engine torque T_e, is selected from the OOL fitted in Figure 2.
2. Select the control variables that satisfy the constraints of the current stage and use dynamic programming to solve for the state variable and control variable values that yield the minimum value of the objective function J for that stage.
3. Let k = k − 1 and proceed to the previous stage of the solution. This process continues until k = 0, at which point the optimal control variables and the corresponding SOC dataset are obtained and the solution is complete. (A structural sketch of this backward recursion is given below.)
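The backward recursion of steps 1–3 can be sketched as follows; the SOC grid, control grid, fuel-rate function, and SOC transition are simplified placeholders standing in for the fitted OOL data and the battery model of Eqs. (22)–(23), so the sketch illustrates only the structure of the solution, not the paper's implementation.

```python
# Toy backward dynamic-programming recursion illustrating steps 1-3 above.
import numpy as np

soc_grid = np.linspace(0.30, 0.80, 51)       # discretized state (SOC)
pe_grid = np.linspace(0.0, 60.0, 13)         # discretized control (engine power, kW)
N = 600                                      # number of 1 s stages (placeholder)
p_demand = np.full(N, 30.0)                  # demanded drive power per stage (placeholder)

def fuel_rate(pe):                           # litres per second along the OOL (placeholder)
    return 210.0 * pe / (3600.0 * 840.0)

def soc_step(soc, pe, p_dem):                # crude placeholder SOC transition
    return soc - (p_dem - pe) * 1.0e-4

J = np.zeros_like(soc_grid)                  # terminal cost-to-go
policy = np.zeros((N, soc_grid.size))

for k in range(N - 1, -1, -1):               # Bellman backward recursion
    J_new = np.full(soc_grid.size, np.inf)
    for i, soc in enumerate(soc_grid):
        for pe in pe_grid:
            soc_next = soc_step(soc, pe, p_demand[k])
            if not (soc_grid[0] <= soc_next <= soc_grid[-1]):
                continue                     # enforce the SOC constraint of Eq. (27)
            cost = fuel_rate(pe) + np.interp(soc_next, soc_grid, J)
            if cost < J_new[i]:
                J_new[i], policy[k, i] = cost, pe
    J = J_new
```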
Energy Management Strategy Based on Power Following

The power following-based energy management strategy is a rule-based control strategy. In a series hybrid tractor, this strategy uses the demand power of the drive motor and the remaining charge of the power battery to determine the start-stop state and output power of the engine. The principle of this strategy is shown in Figure 6, where P_m_req is the required power of the drive motor and P_m_req_max is the maximum required power of the drive motor. The specific rules are as follows:

1. When SOC ≤ SOC_min, the engine starts.
2. When SOC_min ≤ SOC ≤ SOC_max, if P_m_req ≥ P_m_req_max the engine starts; otherwise, the engine maintains its state from the previous moment.
3. When SOC ≥ SOC_max, if P_m_req ≥ P_m_req_max the engine maintains its state from the previous moment; otherwise, the engine shuts down.

To allow a fair comparison of the two control strategies, the engine control parameters output by the power following-based energy management strategy are also selected from the OOL fitted in Figure 2.
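The three rules can be written compactly as a hysteresis function of the SOC and the demanded motor power; the thresholds below are placeholders, not the paper's calibration.

```python
# Illustrative encoding of the three power-following rules listed above.
def engine_on(soc, p_m_req, engine_was_on,
              soc_min=0.3, soc_max=0.8, p_m_req_max=45.0):
    if soc <= soc_min:
        return True                                               # rule 1
    if soc >= soc_max:
        return engine_was_on if p_m_req >= p_m_req_max else False  # rule 3
    return True if p_m_req >= p_m_req_max else engine_was_on       # rule 2

state = engine_on(soc=0.55, p_m_req=50.0, engine_was_on=False)     # -> True
```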
Plowing Condition

As shown in Figure 7, the simulation model tracks the target vehicle speed under the plowing condition. The results indicate that, under the plowing condition, the simulation model can effectively track the target vehicle speed with a maximum error of no more than 0.24 km/h, meeting the test requirements.

When the tractor performs plowing operations, the changes in drive motor power and battery power under the two energy management strategies are shown in Figure 8. As can be observed from Figure 8a, under the plowing condition the peak power of the drive motor is approximately 49.66 kW. As can be observed from Figure 8b, under the energy management strategy based on power following the battery power takes negative values as the load of the plowing condition increases, indicating that the generator set is activated to charge the battery during each plowing cycle. Under the energy management strategy based on dynamic programming, the battery power remains positive for approximately the first 2118 s. After approximately 2118 s the battery power begins to decrease, at which point the generator set starts to operate and charge the battery. During the periods from approximately 2471 s to 2499 s and from 2971 s to 2999 s, the battery exhibits negative power, indicating that the entire tractor's load power is being supplied by the generator set.

The operating state of the engine under the plowing condition is shown in Figure 9. As can be observed from Figure 9a, under the energy management strategy based on power following the engine starts and stops multiple times throughout the plowing condition, with relatively short continuous operating times, and the peak power of the generator set is approximately 53.76 kW. Under the energy management strategy based on dynamic programming, the engine starts to operate around 2118 s and does not start and stop frequently, with relatively concentrated operating times and a peak power of approximately 41.03 kW. As shown in Figure 9b, under both energy management strategies the engine operates along the optimal operating curve, but under the energy management strategy based on dynamic programming the engine operates within a wider range.
As can be observed from Figure 10a, the final SOC under the energy management strategy based on power following is approximately 50.54%, while the final SOC under the energy management strategy based on dynamic programming is approximately 46.31%. Compared to the energy management strategy based on power following, approximately 8.37% more battery SOC is consumed under the energy management strategy based on dynamic programming. As shown in Figure 10b, the total fuel consumption under the energy management strategy based on power following is 2.65 L, while the total fuel consumption under the energy management strategy based on dynamic programming is 1.98 L. With the energy management strategy based on dynamic programming, the total fuel consumption is reduced by approximately 25.28%.
Rotary Tillage Condition

As shown in Figure 11, the simulation model tracks the target vehicle speed under the rotary tillage condition. The results indicate that, under the rotary tillage condition, the simulation model can effectively track the target vehicle speed with a maximum error of no more than 0.22 km/h, meeting the test requirements.

When the tractor performs rotary tillage operations, the changes in drive motor power and battery power under the two energy management strategies are shown in Figure 12. As can be observed from Figure 12a, under the rotary tillage condition the peak power of the drive motor is approximately 46.10 kW. As can be observed from Figure 12b, under the energy management strategy based on power following the trend in battery power is basically consistent with that of the plowing condition: in each rotary tillage cycle the battery exhibits negative power, causing the generator set to start and charge the battery. Under the energy management strategy based on dynamic programming, the battery power remains positive for approximately the first 2145 s. After approximately 2145 s the battery power begins to decrease, at which point the generator set starts to operate and charge the battery. During the periods from approximately 2464 s to 2506 s and from 2963 s to 3000 s, the battery exhibits negative power, indicating that the entire tractor's load power is being supplied by the generator set.
The operating state of the engine under the rotary tillage condition is shown in Figure 13. As can be observed from Figure 13a, under the energy management strategy based on power following the number of engine starts and stops during the rotary tillage condition is lower than under the plowing condition, but the starts and stops remain relatively frequent, with short continuous operating times, and the peak power of the generator set is approximately 50.17 kW. Under the energy management strategy based on dynamic programming, the engine starts to operate around 2145 s and continues to operate until the end of the rotary tillage operation, with relatively concentrated operating times and a peak power of approximately 35.39 kW. As shown in Figure 13b, under both energy management strategies the engine operates along the optimal operating curve; however, under the energy management strategy based on dynamic programming the engine operates within a wider range.

As shown in Figure 14a, the final SOC under the power-following energy management strategy is approximately 48.76%, while the final SOC under the dynamic programming-based energy management strategy is approximately 45.23%. Compared to the power-following energy management strategy, approximately 7.24% more battery SOC is consumed under the dynamic programming-based energy management strategy. As shown in Figure 14b, the total fuel consumption under the energy management strategy based on power following is 2.46 L, while the total fuel consumption under the energy management strategy based on dynamic programming is 1.93 L.
With the energy management strategy based on dynamic programming, the total fuel consumption is reduced by approximately 21.54%.
Transportation Condition

The tractor transportation condition is based on the EUDC_Man driving cycle and, given the tractor powertrain system parameters, the maximum vehicle speed has been adjusted to 25.5 km/h. As shown in Figure 15, the simulation model tracks the target vehicle speed under the transportation condition. The results indicate that, under the transportation condition, the simulation model can effectively track the target vehicle speed with a maximum error of no more than 0.27 km/h, meeting the test requirements.

During the transportation operation of the tractor, the changes in the drive motor power and battery power under the two energy management strategies are shown in Figure 16. As can be observed from Figure 16a, under the transportation condition the peak power of the drive motor is approximately 112.40 kW. As shown in Figure 16b, the trends in battery power are basically consistent under both the power-following and the dynamic programming-based energy management strategies, and no negative battery power occurs. Under the power-following energy management strategy, the generator set operates and the battery power decreases during approximately 60 s to 118 s and 200 s to 368 s. Under the dynamic programming-based energy management strategy, the generator set operates and the battery power decreases during approximately 53 s to 113 s and 197 s to 362 s.

The operating state of the engine under the transportation condition is shown in Figure 17. As can be observed from Figure 17a, the number of engine starts and stops during the transportation condition is the same under both the power-following and the dynamic programming-based energy management strategies. Under the power-following energy management strategy, the peak power of the generator set is approximately 52.00 kW; under the dynamic programming-based energy management strategy it is approximately 37.74 kW. As shown in Figure 17b, under both energy management strategies the engine operates along the optimal operating curve; however, under the dynamic programming-based energy management strategy the engine operates within a wider range.
As shown in Figure 18a, the final SOC under the power-following energy management strategy is approximately 73.01%, while the final SOC under the dynamic programming-based energy management strategy is approximately 72.45%. Compared to the power-following energy management strategy, approximately 0.77% more battery SOC is consumed under the dynamic programming-based energy management strategy. As shown in Figure 18b, the total fuel consumption under the power-following energy management strategy is 0.68 L, while the total fuel consumption under the dynamic programming-based energy management strategy is 0.59 L. The total fuel consumption decreased by approximately 13.24% under the dynamic programming-based energy management strategy.

Discussion

The results of this study emphasize the impact of the proposed energy management strategy for hybrid tractors on fuel economy under various typical operating conditions. Using the dynamic programming algorithm, the operating state of the engine in a series hybrid tractor was optimized under three typical operating conditions: plowing, rotary tillage, and transportation. Based on the simulation test results, this study provides a comprehensive reference for future related research. The main findings of the discussion are as follows.
The optimization effect of the energy management strategy based on the dynamic programming algorithm is closely related to the operating conditions of the tractor. Analysis of the simulation results for the three typical operating conditions, namely plowing, rotary tillage, and transportation, shows that the larger the load of the operating condition, the better the fuel-saving effect of the dynamic programming algorithm compared to the power-following energy management strategy.

According to the simulation test results, the power-following energy management strategy leads to frequent engine starts and stops in both the plowing and rotary tillage test conditions. In actual tractor operation, frequent engine starts and stops further increase fuel consumption. The dynamic programming-based energy management strategy, by contrast, does not exhibit frequent engine starts and stops. Future research on energy management strategies should therefore also consider the issue of engine starts and stops.

To better compare the control effects of energy management strategies, it is necessary to consider not only fuel consumption but also the cost of the battery energy consumed. In the simulation tests conducted in this study, the dynamic programming-based energy management strategy optimizes the operating state of the engine over the entire set of working conditions. Under the same working conditions it is difficult to reach the same final SOC value as the power-following energy management strategy, which affects the assessment of the optimization effect of the energy management strategy. Future work should therefore also consider electricity costs.

Conclusions

This study describes an energy management strategy for a series hybrid tractor, aiming to achieve optimal fuel consumption over the entire operating cycle of the tractor. First, the SOC of the power battery is taken as the state variable and the engine power as the control variable. Then, the total fuel consumption of the engine over the entire set of operating conditions is taken as the objective function. Finally, a series hybrid tractor energy management strategy based on a dynamic programming algorithm is designed. The main conclusions are as follows.

Under the plowing, rotary tillage, and transportation operating conditions, the total fuel consumption values for the power following-based energy management strategy are 2.65 L, 2.46 L, and 0.68 L, respectively. For the dynamic programming-based energy management strategy, the total fuel consumption values are 1.98 L, 1.93 L, and 0.59 L, respectively. Compared to the power-following energy management strategy, the dynamic programming-based energy management strategy consumes approximately 8.37%, 7.24%, and 0.77% more battery SOC for the plowing, rotary tillage, and transportation operations, respectively. At the same time, the total fuel consumption of the tractor decreases by approximately 25.28%, 21.54%, and 13.24% for the respective operations.
In this paper, the total workload of the tractor is highest during plowing operations, followed by rotary tillage operations, with transportation operations having the lowest workload. The trend in total workload is consistent with the reduction in total fuel consumption observed in the simulation results: the greater the total workload, the better the fuel-saving effect achieved by the dynamic programming-based energy management strategy compared to the power following-based strategy.

Compared to the power following-based energy management strategy, the dynamic programming-based strategy can better adjust the operating state of the engine, reasonably control the start-stop behaviour and output power of the engine, and keep the engine operating in a high-efficiency range.

Figure 1. Topological structure diagram of the power system for a series hybrid tractor.
Figure 4. Simplified diagram of the tractor simulation model.
Figure 5. The solution process of dynamic programming.
Figure 7. Vehicle speed tracking effect under the plowing condition.
Figure 11. Vehicle speed tracking effect under the rotary tillage condition.
Figure 15. Vehicle speed tracking effect under the transportation condition.
Table 1. Parameters of main components of the hybrid tractor.
Shear viscosity of strongly interacting fermionic quantum fluids

Eighty years ago Eyring proposed that the shear viscosity of a liquid, $\eta$, has a quantum limit $\eta \gtrsim n\hbar$ where $n$ is the density of the fluid. Using holographic duality and the AdS/CFT correspondence in string theory Kovtun, Son, and Starinets (KSS) conjectured a universal bound $\frac{\eta}{s}\geq \frac{\hbar}{4\pi k_{B}}$ for the ratio between the shear viscosity and the entropy density, $s$. Using Dynamical Mean-Field Theory (DMFT) we calculate the shear viscosity and entropy density for a fermionic fluid described by a single band Hubbard model at half filling. Our calculated shear viscosity as a function of temperature is compared with experimental data for liquid $^{3}$He. At low temperature the shear viscosity is found to be well above the quantum limit and is proportional to the characteristic Fermi liquid $1/T^{2}$ dependence, where $T$ is the temperature. With increasing temperature and interaction strength $U$ there is significant deviation from the Fermi liquid form. Also, the shear viscosity violates the quantum limit near the crossover from coherent quasi-particle based transport to incoherent transport (the bad metal regime). Finally, the ratio of the shear viscosity to the entropy density is found to be comparable to the KSS bound for parameters appropriate to liquid $^{3}$He. However, this bound is found to be strongly violated in the bad metal regime for parameters appropriate to lattice electronic systems such as organic charge transfer salts.

I. INTRODUCTION

The viscosity of a fluid is a measure of its resistance to externally applied shear or tensile stress. The shear viscosity measures the resistance of a fluid to shear flows, in which adjacent layers of the fluid move parallel to each other but with different speeds. The differential speed between the layers gives rise to friction that resists their relative motion; this is known as the viscous drag. For example, the viscous drag force per unit area in the x-direction, $\tau_{xy}$, due to a velocity gradient $\partial u_x(y)/\partial y$ in the perpendicular y-direction is

$$\tau_{xy} = \eta\,\frac{\partial u_x(y)}{\partial y} \qquad (1)$$

where $\eta$ is the coefficient of shear viscosity. The SI unit of shear viscosity is the pascal-second (Pa s), equivalent to newton-seconds per square meter (N s m$^{-2}$). The shear viscosity of water is about $10^{-3}$ Pa s at room temperature, whereas the shear viscosity of highly viscous fluids such as glasses near the glass transition temperature can be as large as $10^{13}$ Pa s. For classical fluids $\eta$ can be measured through viscous drag measurements in particle tracking experiments. For quantum fluids like $^3$He, $\eta$ can be measured through Stokes' law for sound attenuation,

$$\alpha = \frac{2\eta\omega^2}{3\rho c_s^3} \qquad (2)$$

where $\alpha$ is the rate of attenuation, $\rho$ is the fluid density, and $\omega$ and $c_s$ are the frequency and velocity of sound in the medium, respectively.

The shear viscosity of an electron gas in a metal, calculated from the solution of the Boltzmann equation, is

$$\eta = \frac{1}{5}\, n\hbar k_F \ell \qquad (3)$$

where $n$ is the density of electrons, $k_F$ is the Fermi wavevector, and $\ell$ is the electronic mean free path. In the quasi-particle regime of transport $k_F \ell \gg 1$, i.e., the mean free path is much larger than the lattice spacing, $a \sim k_F^{-1}$. Hence, in analogy with the Mott-Ioffe-Regel (MIR) limit for the minimum metallic conductivity, $\sigma_{\rm MIR} = e^2/(h a)$, we can conjecture a lower limit for the shear viscosity,

$$\eta_q \sim \frac{1}{5}\, n\hbar \qquad (4)$$

corresponding to the case where the electronic mean free path becomes comparable to the lattice spacing.
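As a rough orientation for the magnitudes involved, the quantum limit in Eq. (4) can be evaluated for a number density typical of a dense quantum liquid; the value used below is only an illustrative order-of-magnitude figure, not data from this work.

```python
# Order-of-magnitude estimate of the quantum limit eta_q ~ n*hbar/5 of Eq. (4),
# assuming an illustrative density n ~ 1.6e28 m^-3 (roughly the scale of a dense
# quantum liquid such as 3He at low pressure).
hbar = 1.054571817e-34          # J s
n = 1.6e28                      # m^-3 (assumed, illustrative)
eta_q = n * hbar / 5.0          # Pa s
print(f"eta_q ~ {eta_q:.1e} Pa s")   # roughly 3e-7 Pa s
```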
Also, a comparable limit $\eta \sim n\hbar$ for the shear viscosity was proposed by Eyring 2 almost 80 years ago.

For a large class of strongly correlated systems, such as 3d transition metal oxide compounds and organic charge transfer salts like κ-(BEDT-TTF)$_2$X, the MIR limit is violated 3,4 and the coherent quasi-particle picture of transport breaks down, i.e., ℓ < a. Similarly, we might expect that in the incoherent regime of transport the shear viscosity, η, could violate the quantum limit to coherent transport, i.e., η < η_q. Recently a string theory based approach has been proposed to understand incoherent quantum transport in strongly correlated electron systems, especially the strange metal regime of doped cuprates [5-8]. The key idea of this method is to map a strongly coupled conformal field theory (CFT) to weakly coupled gravity in anti-de Sitter (AdS) space in one higher dimension 9. This is known as holographic duality or the AdS/CFT correspondence. Furthermore, the event horizon dynamics of a black hole in anti-de Sitter space can be mapped to the dynamics of a classical fluid. Using the AdS/CFT correspondence, Kovtun, Son, and Starinets 10 calculated the ratio, η/s, of the shear viscosity η to the entropy density s in a specific string theory model (type IIB) and proposed a universal lower bound for this ratio. The bound is found to be well respected in classical fluids like water and in quantum fluids like the quark-gluon plasma created at the Relativistic Heavy Ion Collider (RHIC) 11, cold degenerate Fermi gases in the unitary limit of scattering 12, and in theoretical calculations for graphene 13. However, except near a quantum critical point, condensed matter systems are neither relativistic nor conformal 14. In a recent calculation we tested a related but distinct bound on the charge diffusivity, $D \gtrsim \frac{\hbar v_F^2}{k_B T}$, proposed by Hartnoll 8. We found 15 clear violation of Hartnoll's proposed bound in the strong coupling (bad metal) regime of the Hubbard model.

In the present paper we calculate the shear viscosity of a single band Hubbard model and explore possible violations of the conjectured quantum bounds on η and η/s. The organization of the paper is as follows. In Sec. II we introduce the Kubo formula for the shear viscosity obtained from linear response theory. In Sec. III we briefly describe the Dynamical Mean-Field Theory (DMFT) approach for calculating properties of a single band Hubbard model and the iterated perturbation theory (IPT) based approach used to treat the DMFT self-consistency. In the same section we introduce the calculation of the shear viscosity and entropy density within DMFT. In Sec. IV we first briefly review experimental results for the temperature dependence of the shear viscosity of liquid $^3$He and its possible description by a Hubbard model. We then show our results for the Hubbard model on the Bethe and hypercubic lattices at half filling. Similar results are obtained for both lattices.

II. SHEAR VISCOSITY

Nonrelativistic simple fluids are characterized by the conserved mass density ρ, the momentum density π, and the energy density E. These quantities satisfy the conservation laws 16

$$\partial_t \rho + \partial_i \pi_i = 0 \qquad (5)$$
$$\partial_t \pi_i + \partial_j \Pi_{ij} = 0 \qquad (6)$$
$$\partial_t E + \partial_i j^E_i = 0 \qquad (7)$$

where $\Pi_{ij}$ is the momentum current density and $j^E$ is the energy current; the following discussion shows that $\Pi_{ij}$ is central to the shear viscosity. As a consequence, in analogy with Ohm's law for the electrical conductivity, $j^e_\alpha = \sigma_{\alpha\beta} E_\beta \equiv -\sigma_{\alpha\beta}\,\partial_\beta \phi(\mathbf{r})$, the generalized Newton's law for shear flow is

$$\Pi_{\alpha\beta} = -\eta_{\alpha\beta\gamma\delta}\,\partial_\gamma u_\delta \qquad (8)$$

where $\eta_{\alpha\beta\gamma\delta}$ is a viscosity tensor.
In particular, the momentum current density $\Pi_{xy}$ in the presence of a transverse velocity gradient $\partial u_x(y)/\partial y$ is given by 17

$$\Pi_{xy} = -\eta\,\frac{\partial u_x(y)}{\partial y} \qquad (9)$$

where $\eta \equiv \eta_{xyxy}$ is the coefficient of shear viscosity of an isotropic fluid. The velocity field $u_x(y)$ gives rise to a perturbation with Hamiltonian $\hat{H}'$, given in Eq. (10). It is important to mention that to derive Eq. (10) we have used the conservation law in Eq. (6) and $\hat{\pi}(\mathbf{r}, t) = e^{-i\omega t}\hat{\pi}(\mathbf{r})$. The momentum current density $\Pi_{xy}$ induced by $\hat{H}'$ can be calculated from linear response theory. The shear viscosity is then obtained from the ω → 0 limit of $\Xi(\omega)/\omega$ [Eq. (11)], with

$$\Xi(\omega) = -\frac{i}{\nu}\int d^3r\int dt\; e^{i\omega t}\,\theta(t)\,\big\langle\big[\hat{\Pi}_{xy}(\mathbf{r}, t),\hat{\Pi}_{xy}(0, 0)\big]\big\rangle \qquad (12)$$

where $\nu = a^3$ is the unit cell volume and $\theta(t)$ is the Heaviside step function. This formula is the analogue of the expression for the electrical conductivity involving the current-current correlation function.

For a Fermi gas with a quadratic energy dispersion the momentum current density operator is 17

$$\Pi_{xy} = \sum_{\mathbf{k}} \hbar k_x\,\frac{\hbar k_y}{m}\,\delta f_{\mathbf{k}} \qquad (13)$$

where $\delta f \equiv f_{\mathbf{k}} - f^0_{\mathbf{k}}$ is the deviation of the distribution function from local equilibrium. For Bloch electrons in a crystal lattice 18, where $\epsilon_n(\mathbf{k})$ is the energy dispersion of the n-th energy band, the corresponding expression is written in terms of the Bloch velocity $v_{\mathbf{k}\alpha} = \frac{1}{\hbar}\frac{\partial \epsilon_{\mathbf{k}}}{\partial k_\alpha}$ as $\Pi_{xy} = \sum_{n,\mathbf{k}} \hbar k_x\, v_{n\mathbf{k}y}\,\delta f_{n\mathbf{k}}$. Using deformation potential theory 19, a similar result was found by Khan and Allen 20 when investigating sound attenuation by electrons in metals.

III. DYNAMICAL MEAN FIELD THEORY

We consider the single band Hubbard model with nearest neighbor hopping, described by the Hamiltonian

$$H = -t\sum_{\langle ij\rangle\sigma}\big(c^\dagger_{i\sigma}c_{j\sigma} + {\rm h.c.}\big) - \mu\sum_{i\sigma} n_{i\sigma} + U\sum_i n_{i\uparrow}n_{i\downarrow}$$

where $n_{i\sigma} = c^\dagger_{i\sigma}c_{i\sigma}$, $t$ is the hopping amplitude, $\mu$ is the chemical potential, and $U$ is the Coulomb repulsion when a given site is doubly occupied by two fermions with opposite spins. Despite its simplicity this model has no exact solution except in one dimension, and its study in higher dimensions involves various approximations. However, as in the case of classical mean field theory for the nearest neighbour Ising model, in the limit of large dimension, $d \to \infty$, the model reduces to an effective single site model provided the hopping is scaled as $t \to t^*/\sqrt{2d}$ on a d-dimensional hypercubic lattice 21. Under this approximation all spatial fluctuations are neglected while the local quantum dynamics is fully retained. The self-energy $\Sigma_{ij}(\omega)$ of the lattice model then becomes local, i.e. $\Sigma_{ij}(\omega) = \Sigma(\omega)\delta_{ij}$. This is known as the Dynamical Mean-Field Theory 22 (DMFT) approximation. DMFT gives a good description of the correlation driven Mott metal-insulator transition observed in 3d transition metal oxides and of the crossover from a coherent Fermi liquid to an incoherent bad metal state with increasing temperature 3. Furthermore, DMFT provides a quantitative description of the resistivity 23 and the frequency dependent optical conductivity 24 of organic charge-transfer salts that can be described by a half-filled two-dimensional Hubbard model on an anisotropic triangular lattice 25. DMFT combined with electronic structure calculations based on density functional theory (DFT) has given an excellent description of a large class of transition metal and rare earth compounds 26.

Within DMFT the lattice problem can be mapped onto an effective single impurity Anderson model 22,

$$H_{\rm AIM} = \sum_{l\sigma}\tilde{\epsilon}_l\, c^\dagger_{l\sigma}c_{l\sigma} + \sum_{l\sigma} V_l\big(c^\dagger_{l\sigma}d_{0\sigma} + {\rm h.c.}\big) - \mu\sum_\sigma n_{d0\sigma} + U\, n_{d0\uparrow}n_{d0\downarrow}$$

where $n_{d0\sigma} = d^\dagger_{0\sigma}d_{0\sigma}$ and $V_l$ is the hybridization between the impurity and the bath. The operators $d^\dagger_{0\sigma}$ and $d_{0\sigma}$ describe a local site, and $\{c^\dagger_{l\sigma}, c_{l\sigma}\}$ describe the effective bath arising from the fermions on all other sites. It is important to mention that the fictitious bath dispersion $\tilde{\epsilon}_l$ has no relation to the lattice dispersion $\epsilon_{\mathbf{k}}$.
The solution of the impurity problem is the hardest part and usually requires numerical methods such as quantum Monte Carlo (QMC), exact diagonalization (ED), or the numerical renormalization group (NRG). We use iterated perturbation theory (IPT) 27,28 because it is semi-analytical, easy to implement, and computationally cheap and fast, yet it captures the essential physics in the parameter regime $U < 0.8\,U_c$, where $U_c$ is the critical value of $U$ at which the zero temperature Mott metal-insulator transition occurs. Except in close proximity to the Mott transition, IPT is in good agreement with results from other impurity solvers such as the numerical renormalization group (NRG) 29 and continuous time quantum Monte Carlo (CT-QMC) 30. In the next sub-section we discuss the DMFT self-consistency using IPT.

A. Iterated perturbation theory

The irreducible self-energy in IPT is approximated by the second order (in U) polarization bubble built from the fully interacting bath Green's function $G_0(\omega)$. The self-energy in this approximation can be shown (using a moment expansion of the interacting density of states) to interpolate smoothly between the atomic limit $t = 0$ and the weak-coupling limit $U \to 0$. In the following we briefly describe the DMFT self-consistency with IPT as the impurity solver. Since we are interested in transport properties we work directly with real frequencies, avoiding the analytical continuation of imaginary frequency data that an imaginary frequency formulation would require.

(i) For a given lattice density of states $N_0(\epsilon)$ and self-energy $\Sigma(\omega)$, the local Green's function is

$$G(\omega) = \int d\epsilon\; \frac{N_0(\epsilon)}{\omega^+ + \mu - \Sigma(\omega) - \epsilon} \qquad (19)$$

where $\mu$ is the chemical potential and $\omega^+ = \omega + i\delta$ with $\delta > 0$.

(ii) From $G(\omega)$ and the local self-energy $\Sigma(\omega)$ the bath Green's function $G_0(\omega)$ is obtained from the Dyson equation

$$G_0^{-1}(\omega) = G^{-1}(\omega) + \Sigma(\omega) - \mu + \mu_0 \qquad (20)$$

where $\mu_0 = \mu - U n$ is the bath chemical potential, which vanishes at half filling in the particle-hole symmetric case.

(iii) The new self-energy is calculated from the IPT ansatz 28 [Eq. (21)], which interpolates between the weak-coupling and atomic limits; here $n$ and $n_0$ are the local and bath particle numbers, respectively, and $\Sigma^{(2)}(\omega)$ is the second order perturbation theory self-energy [Eq. (22)], constructed from $\rho_0(\omega) = -\frac{1}{\pi}{\rm Im}[G_0(\omega^+)]$ with $\delta \to 0^+$.

We iterate (i)-(iii) until the desired self-consistency in the self-energy and other physical quantities is achieved. Here we consider the particle-hole symmetric case at half filling, $n = 1$, for which $\mu = U/2$ for all $U$ and $T$.

B. Shear viscosity in DMFT

Using the self-consistent self-energy we can calculate the shear viscosity. In the limit $d \to \infty$ all vertex corrections to two-body correlation functions drop out 31, and the temperature dependent shear viscosity $\eta(T)$, given by the Kubo formula [Eq. (11)], can be calculated from the simple polarization bubble [Eq. (23)]. Here $\nu = a^d$ is the volume of the unit cell of a d-dimensional hypercubic lattice with lattice constant $a$,

$$A(\epsilon, \omega) = -\frac{1}{\pi}\,{\rm Im}\Big[\frac{1}{\omega + \mu - \epsilon - \Sigma(\omega)}\Big] \qquad (24)$$

and $f(\omega)$ are the spectral density and Fermi function, respectively, and

$$\Theta_{xy}(\epsilon) = \frac{1}{N}\sum_{\mathbf{k}} \big(\hbar k_x\, v_{\mathbf{k}y}\big)^2\,\delta(\epsilon - \epsilon_{\mathbf{k}}),$$

with $v_{\mathbf{k}\alpha} = \frac{1}{\hbar}\frac{\partial \epsilon_{\mathbf{k}}}{\partial k_\alpha}$, is the transport density of states for the shear viscosity; $N$ is the number of lattice sites.

Following a procedure similar to that in Ref. 32, one can show that the transport density of states for the shear viscosity of a d-dimensional hypercubic lattice with nearest neighbour hopping is given by Eq. (27), where $\gamma = \frac{m a^2}{2\hbar^2}$ and $N_0(\epsilon) = \sum_{\mathbf{k}}\delta(\epsilon - \epsilon_{\mathbf{k}})$ is the density of states. In the Appendix we give a detailed derivation of this important result.
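The structure of the loop (i)–(iii) can be illustrated by the following schematic sketch for the half-filled Bethe lattice; the second-order self-energy is deliberately left as a stub, since its full real-frequency convolution [Eq. (22)] is not reproduced here, so the sketch shows only the flow of the self-consistency, not a working IPT solver.

```python
# Schematic sketch of the real-frequency DMFT self-consistency (i)-(iii) for the
# half-filled Bethe lattice; Sigma^(2) is a stub, so this is not a full solver.
import numpy as np

W, U, delta = 1.0, 2.0, 1e-3
w = np.linspace(-4.0, 4.0, 1601)                       # real-frequency grid
eps = np.linspace(-W, W, 801)
deps = eps[1] - eps[0]
N0 = 2.0 / (np.pi * W**2) * np.sqrt(np.maximum(W**2 - eps**2, 0.0))  # semicircular DOS
mu = U / 2.0                                           # particle-hole symmetric point

def local_green(sigma):
    """Step (i): Hilbert transform of the lattice DOS, as in Eq. (19)."""
    z = w + 1j * delta + mu - sigma
    return np.sum(N0[None, :] / (z[:, None] - eps[None, :]), axis=1) * deps

def sigma_2nd(G0):
    """Stub for Sigma^(2)[G0], built in practice from rho0 = -Im(G0)/pi; see Eq. (22)."""
    return np.zeros_like(G0)

sigma = np.full(w.shape, U / 2.0, dtype=complex)       # start from the Hartree term
for it in range(100):
    G = local_green(sigma)                             # step (i)
    G0 = 1.0 / (1.0 / G + sigma - mu)                  # step (ii): Dyson equation, mu0 = 0 at half filling
    sigma_new = U / 2.0 + sigma_2nd(G0)                # step (iii): half-filled IPT ansatz (Hartree + 2nd order)
    if np.max(np.abs(sigma_new - sigma)) < 1e-8:
        break
    sigma = 0.5 * sigma + 0.5 * sigma_new              # damped update
```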
In the following sub-sections we explicitly evaluate this expression for the Bethe lattice and the hypercubic lattice cases. Bethe lattice case We consider the Bethe lattice (Cayley tree) with coordination number z. In the limit of infinite coordination number (z → ∞), the density of states has semicircular form 33 : where θ(x) is the Heaviside step function, W = 2t * is the half bandwidth and the nearest neighbour hopping amplitude (t) in this case is scaled as t → t * / √ z. For a Bethe lattice with coordination number z the connectivity K = z − 1 while that for a d-dimensional hypercubic lattice is 2d. So, in the limit of large coordination number we can always do the mapping z ↔ 2d. Because of its tree like structure the Bethe lattice has no closed loop and hence no energy dispersion with Bloch wavevector k. However, by invoking the f -sum rule we can still calculate Θ xy (ǫ). For the density of states we then have the following exact integrals Then by replacing these exact analytical integrals into the expression in Eq. (27) for Θ xy (ǫ) and using W = 2t √ 2d we get It is interesting to mention that the constant term as well as the tan −1 ǫ √ W 2 −ǫ 2 term cancels out in the final expression for Θ xy (ǫ). The expression in Eq.(23) for the shear viscosity for the Bethe lattice is then given by with m B = 2 a 2 W andω = ω/W is the dimensionless energy. Hypercubic lattice case for the density of states. It is important to mention that the chosen density of states in the limit d → ∞ requires the scaling : 2t → t * / √ d. The transport density of states for the shear viscosity is then given by are the dimensionless integrals andǫ = ǫ/t * is the dimensionless energy. In Fig. 1 (b) we show the scaled dimensionless transport density of states for shear viscosity,Θ(ǫ) : From Fig. 1 (b) it can be seen that and we confirm this numerically. Interestingly, the transport density of states for electrical conductivity also follows a similar relation. The shear viscosity is then given by where, with m H = 2 a 2 t * andω = ω/t * is the dimensionless energy. C. Entropy density The internal energy in DMFT is given by 34 43) where N e is the total number of electrons and N 0 (ǫ) is the non-interacting density of states and A(ǫ, ω) is the spectral density, as defined in Eq. (24) From E(T ) we can calculate the specific heat using C v (T ) = ∂E(T ) ∂T v and then we can calculate the local entropy density, s(T ), as D. Quantum limits The quantum limit of the shear viscosity, η q = 1 5 n , is based on the free particle dispersion E k = 2 k 2 2m in the continuum limit. For a discrete lattice model we need to derive an appropriate quantum limit for shear viscosity. For temperatures and frequencies much less than the coherence scale (i.e. T ≪ T K , ω ≪ k B T K where T K is the coherence temperature which is of the order of the Kondo temperature for the corresponding single impurity Anderson model) the self energy, Σ(ω), has the Fermi liquid form : where Z is the quasi-particle renormalization factor and C is a positive constant. Following the procedure as in Ref. 
3 we can show that at low temperature (T ≪ T K ) the shear viscosity for the hypercubic lattice is given by where k BT = kB T t * is the dimensionless temperature and Using the definition of transport density of states for conductivity we can define 15 Fermi velocity, v F , in the limit of d → ∞ as Then for the hypercubic lattice in the limit of d → ∞ we have 34 Then we can identify Ct * (k BT ) 2 /I 01 as the dimensionless inverse mean free time, τ −1 . The quantum limit to shear viscosity will then correspond to τ ∼ 1 and we will have the quantum limit to shear viscosity The dimensionless scaled shear viscosity, η * (T ) ≡ η(T )/η lat q , is then given by Finally we define as the scaled dimensionless ratio of shear viscosity(η) and entropy density (s). η s KSS = 4πkB is quantum limit to this ratio proposed by Kovtun, Son, and Starinets. It is important to mention that η s KSS was derived for a model with conformal symmetry and hence does not depend on any energy or length scale. On the other hand the lattice model that we consider here has no conformal symmetry. As a result η/s and the scaled ratio ζ(T ) depends on the free particle mass m, the energy scale t * , and length scale a and we will need to study ζ(T ) with particular values for specific systems. IV. RESULTS We consider the case of half-filling n = 1, i.e. each site on the average is occupied by one fermion. We study shear viscosity and entropy density as a function of correlation strength U and temperature T (enters as k B T with dimension of energy). A. Parameters for liquid 3 He We consider liquid 3 He because of the availability of extensive experimental data for the temperature and pressure dependence of the shear viscosity, recently reviewed and parametrised by Huang et al. 35 . First, we review how liquid 3 He might be described as a lattice gas with a Hubbard model Hamiltonian. Low temperature properties of liquid 3 He can be described by Landau's Fermi liquid theory. The effective mass of the quasi-particles [as deduced from the specific heat] is about 3 times the bare mass m at 0 bar pressure and increases to 6 times at 33 bar, when the liquid becomes solid. The compressibility is also renormalised and decreases significantly with increasing pressure. This led Anderson and Brinkman to propose that 3 He was an "almost localised" Fermi liquid. Thirty years ago, Vollhardt worked this idea out in detail, considering how these properties might be described by a lattice gas model with a Hubbard Hamiltonian. 36 The system is at half filling with U increasing with pressure, and the solidification transition (complete localisation of the fermions) has some connection to the Mott transition. All of the calculations of Vollhardt were at the level of the Gutzwiller approximation (equivalent to slave bosons). A significant result from the theory is that it describes the weak pressure dependence and value of the Sommerfeld-Wilson ratio of the spin susceptibility to the specific heat [which is related to the Fermi liquid parameter F a 0 ]. At ambient pressure U was estimated to about 80 per cent of the critical value for the Mott transition. Vollhardt, Wolfle, and Anderson 37 also considered a more realistic situation where the system is not at half-filling. Then, the doping is determined by the ratio of the molar volume of the liquid to the molar volume of the solid (which by definition corresponds to half filling). 
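Before the detailed comparison with liquid 3He below, it is worth putting a number on the Kovtun-Son-Starinets scale that enters the definition of zeta(T). The short check below is ours, a simple unit evaluation rather than anything taken from the paper.

```python
import numpy as np

# Numerical value of the KSS scale (eta/s)_KSS = hbar / (4 * pi * k_B), SI units.
hbar = 1.054571817e-34    # J s
k_B = 1.380649e-23        # J / K
print(f"(eta/s)_KSS = {hbar / (4 * np.pi * k_B):.2e} K s")   # about 6.1e-13 K s
```

A fluid with entropy density s therefore needs a shear viscosity of at least roughly 6e-13 K s times s to respect the bound.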
Later Georges and Laloux 38 argued 3 He is a Mott-Stoner liquid, i.e., one also needs to take into account the exchange interaction and proximity to a Stoner ferromagnetic instability. If this Mott-Hubbard picture is valid then one should also see a crossover from a Fermi liquid to a "bad metal" with increasing temperature. Specifically, above some "coherence" temperature T K , the quasi-particle picture breaks down. For example, the specific heat per atom should increase linearly with temperature up to a value of order k B around T K , and then decrease with increasing temperature. Indeed one does see this crossover in experimental data (compare Figure 1 in Ref. 39). Extension of the Vollhardt theory to finite temperatures was done by Seiler, Gros, Rice, Ueda, and Vollhardt. 40 We now consider what Hubbard model parameters are appropriate for 3 He. First we consider the Bethe lattice case characterised by a bounded density of states. We can estimate the half bandwidth, W , and lattice constant, a, from experimental data on 3 He. From Ref. 41 we find, the Fermi energy, E F ≃ 1K and the density n/N A ≃ 1/26 cm 3 (where N A = 6.023 × 10 23 is Avagadros number). Then if we take W ≃ E F ≈ 1 K, we calculate that a ≃ 3.50Å, and ma 2 W 2 = 7.65. It is important to mention that the free particle mass m = 3.01 amu = 5.008 × 10 −27 kg of 3 He is much larger than the mass of electrons in metals. We can explicitly show that for d = 3, C H η = 1 2 C B η . This will correspond to an effective half-band width W = √ 2t * for the hypercubic lattice in the limit of d → ∞. In the following sections we compare some of our cal-culations of the shear viscosity with experimental data for 3 He. Huang et al. 35 showed that the shear viscosity of saturated liquid 3 He from 3 mK to 0.1 K follows the Fermi liquid relation η ∝ 1/T 2 . Furthermore, they showed that the shear viscosity data in the range from 3 mK to near the critical point at 3.31 K, collected over the past 50 years from various experimental groups can be fitted to the empirical form : with c 1 = 2.897 × 10 −7 Pa.sK 2 , c 2 = −7.02 × 10 −7 Pa.sK 1.5 , c 3 = 2.012 × 10 −6 Pa.sK and c 4 = 1.323 × 10 −6 Pa.s. They also note that the viscosity decreases by a factor of at most ten as the pressure increases from 1 kPa to 3 MPa [the melting pressure] for all temperatures below 1 K. We note that at low temperatures Eq.(54) has a Fermi liquid term. Note that at high temperatures Eq.(54) has the asymptotic value of c 4 which is comparable to n . B. Low temperature behaviour In Fig. 2 (a) and Fig. 2(b) we show the shear viscosity as a function of temperature for various interaction strength U , for the Bethe lattice and hypercubic lattice, respectively. We have used parameters for 3 He derived in the previous section in order to compare against experimental results. Since we chose W = 1K, temperature, T , on the horizontal axis in Fig. 2 (a) will also have units of K. Similarly, a choice of t * = 1K will translate to W = √ 2K and the temperature, T , on the horizontal axis in Fig. 2 (b) will also have units of K. Our calculated shear viscosity show qualitative as well as quantitative behaviour consistent with experiments for both the cases. Interestingly, our calculated shear viscosity for U = 1 for the hypercubic lattice nearly fits with the experimental results. 
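For concreteness, the experimental reference curve and the quantum scale quoted above can be checked directly from the numbers given in the text. The sketch below is ours (not from the paper): it evaluates the empirical fit of Huang et al. with the quoted coefficients c1 to c4 and compares the high-temperature asymptote with n*hbar at the quoted density n = N_A/26 per cm^3, assuming that n*hbar is indeed the quantum scale the text refers to.

```python
import numpy as np

# Illustrative check of the quoted 3He numbers (not the paper's code).
c1, c2, c3, c4 = 2.897e-7, -7.02e-7, 2.012e-6, 1.323e-6   # coefficients as quoted

def eta_he3(T):
    """Empirical fit of Huang et al. for saturated liquid 3He, T in kelvin."""
    return c1 / T**2 + c2 / T**1.5 + c3 / T + c4           # Pa s

for T in (0.01, 0.1, 1.0, 3.0):
    print(f"T = {T:5.2f} K   eta = {eta_he3(T):.2e} Pa s")

# Quantum viscosity scale n*hbar at the quoted density n = N_A / 26 cm^-3.
N_A, hbar = 6.022e23, 1.0546e-34
n = N_A / 26e-6                                            # number density in m^-3
print(f"n*hbar   = {n * hbar:.2e} Pa s   (same order as c4)")
print(f"n*hbar/5 = {n * hbar / 5:.2e} Pa s  (continuum quantum limit quoted earlier)")
```

At 10 mK the 1/T^2 term dominates, while above about 1 K the fit is essentially flat at a value of order n*hbar, which is the sense in which the high-temperature viscosity of the liquid already sits near the quantum scale.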
Our calculation suggests 3 He is a weakly correlated system with U/U c ∼ 0.25 [U c ∼ 4.0 for hypercubic lattice] as against Volhardts suggestion 36 of 3 He being a nearly localized Fermi liquid with U/U c ∼ 0.8. Fig. 3 clearly shows that the shear viscosity follows Fermi liquid characteristic 1/T 2 behaviour in the low temperature region (T ≪ T K ). Also, the range of Fermi liquid behaviour decreases with increasing U . This is simply because the coherence scale [and Kondo temperature for the corresponding single impurity Anderson model] decreases with increasing correlation strength U . The 1/T 2 behaviour is similar to the low temperature behaviour of electrical conductivity and the quantum transport in this region can be characterised by coherent quasiparticle states. C. High temperature behaviour In the high temperature region, T ≫ T K , the shear viscosity shows significant deviation from the low temperature Fermi liquid behaviour as can be observed from Fig. 2 and Fig. 3. The quantum transport in this region is incoherent in nature. The temperature scale at which the deviation from 1/T 2 behaviour happens is closely related to the coherence temperature, T K . The range of Fermi liquid behaviour decreases with increasing correlation strength, U . For the weakly and moderately correlated systems the deviation is smooth and monotonic but for strongly correlated systems for U = 2.5 and above the deviation is much sharp and non-monotonic. This is due to sharp crossover between the Fermi liquid fixed point and the local moment fixed point in the strongly correlated regime. A similar non-monotonic temperature dependence is seen in the electrical resistivity from DMFT calculations and in organic charge charge transfer salts close to the Mott insulator. 3,15,23 D. Quantum limits Finally we consider violation of quantum limit of shear viscosity. In Fig. 4 we show scaled dimensionless shear viscosity η * (T ) = η(T )/η lat q as a function of temperature, T , for various correlation strength, U . In the weakly correlated system U = 0.5 the shear viscosity is always above quantum limit, η lat q but as we increase correlation strength, U , the shear viscosity smoothly goes below quantum limit with increasing temperature, T . This corresponds to the fact that at low temperatures (T ≪ T K ) the quantum transport is due to coherent quasi-particle states but at high temperatures (T > T K ) the transport becomes incoherent in nature. In Fig. 5 we show the entropy density, s(T ), as a function of temperature for various interaction strengths. At high temperatures the entropy density approaches ln(4) which arises due to local charge and spin fluctuations. As the temperature decreases charge fluctuations freezes out and the model can be described by localised weakly interacting spin 1/2 objects with characteristic entropy density ln (2). Finally in the Fermi liquid state the local spin degrees of freedom are dynamically screened and the entropy density vanishes linearly in temperature, T . For weakly and moderately correlated electron system the entropy shows violation of quantum limit, η lat q with increasing U . This is consistent with the picture that the transport becomes incoherent with increasing correlation strength. The calculation is for the hypercubic lattice case and both T and U are measured in units of t * . density smoothly crosses over ln (2). But for strongly correlated electron systems with U = 2.5 and above a kink like feature develops. 
This corresponds to formation of poorly screened local moment. The position of the kink in the specific heat versus temperature curve is related to the coherence temperature, T K . 42 For extremely correlated systems with U = 3.0 and above the entropy density in iterated perturbation theory (IPT) is under estimated. Consequently the specific heat in the coherent-incoherent crossover region becomes negative, which is unphysical. This is due to wrong total energy estimate in IPT which has been reported in ear-lier literature. (See for example, Figure 7 in Ref. 43) 44 . In the unphysical temperature range we set the specific heat to zero and the calculated entropy density which is an integrated quantity will as best deviate by not more than 5% from the actual value. Such a small error has little effect on whether the KSS bound is violated. Finally we consider the scaled shear viscosity, η(T ), en- As already mentioned ζ(T ) depends on material properties t * and a, particularly through the prefactor C η . In Fig. 6 we show ζ(T ) for parameters appropriate for 3 He, cuprates and organic charge transfer salts. For cuprates 45 t ≃ 0.18eV, a = 3.9Å and for organic charge transfer salts 25 t ≃ 0.05eV, a = 8Å. This will give C η = 0.26π for cuprates and C η = 0.35π for organics, as compared to C η = 2.44π for 3 He. As a result the shear viscosity for these electronic systems will be smaller by a factor of about 10 than for the charge neutral fermionic fluid 3 He. From Fig. 6 we can clearly see ζ(T ) for all U except U = 3.5 for 3 He parameters is well above the Kovtun-Son-Starinet (KSS) limit. For extremely correlated system U = 3.5 there is strong violation of the limit in the crossover region but even for this system at high temperature the bound is well respected and it seems that in the high temperature region the scaled ratio is approaching some limit. If we consider U = 1.0 for 3 He then the limit is well respected for all temperatures. For electronic systems such as cuprates and organic superconductors the limit is violated in the region T > T K . This is due to reduction of shear viscosity by a factor of 10 compared to 3 He parameters. The violation is more than 500% for these systems. Unfortunately there is no direct measurement of shear viscosity or the ratio η/s for these charged systems. However, recently and indirect estimate of η/s was made from ARPES experiments in cuprates 46 giving a value comparable to the KSS limit. V. CONCLUSIONS We have studied the shear viscosity, entropy density and their ratio for a single band Hubbard model using single site dynamical mean field theory. We considered Bethe lattice in the limit of large coordination number (z → ∞) characterised by a bounded density of states and hypercubic lattice in the limit of large dimensionality (d → ∞) characterised by Gaussian a density of states. Similar results were obtained for both density of states. We compared our results for the temperature dependence of the shear viscosity to experimental results for liquid 3 He. At low temperatures the shear viscosity is proportional to 1/T 2 corresponding to coherent quasi-particle based transport in the Fermi liquid state. At high temperatures the shear viscosity shows significant deviation from Fermi liquid state behavior. This corresponds to crossover from coherent quasi-particle based transport to incoherent transport (the "bad metal"). 
With increasing interaction strength U the shear viscosity becomes less than conjectured quantum limits of shear viscosity, of the order of n . Finally we consider the scaled dimensionless ratio between shear viscosity and entropy density. This ratio in the Hubbard model depends on the energy scale t * , length scale a, and the free fermion mass m. This is in contrast to the universal limit 4πkB predicted by Kovtun, Son, and Starinets using the AdS/CFT correspondence in a conformally symmetric field theory model. For 3 He parameters the ratio is above the universal bound but for parameters appropriate for electronic lattice systems, such as cuprate and organic metals, this bound is found to be strongly violated. APPENDIX: Transport density of states for shear viscosity In the limit d → ∞ the viscosity will involve the following transport function where γ ≡ ma 2 2 . Using relations for Bessel functions we can rewrite Y (ω) as We can Fourier transform back to calculate Θ xy (ǫ) as Using the convolution theorem we can easily show that each term of Y (ω) has the following form where, and Y α (ω) =F α (ω)G α (ω). For the first term where we have used Finally we get where we have used +∞ −∞ z 2 N 0 (z)dz = k ǫ 2 k = 2t 2 d and +∞ −∞ z 3 N 0 (z)dz = 0. A similar exercise for the second term will give For the third term we have and G 3 (ǫ) = − 1 2π Finally, we have
2015-08-18T01:33:08.000Z
2015-04-27T00:00:00.000
{ "year": 2015, "sha1": "4e314933a5c7a10c1ac573eaf19a1c4e55b50599", "oa_license": null, "oa_url": "https://espace.library.uq.edu.au/view/UQ:369811/UQ369811_OA.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4e314933a5c7a10c1ac573eaf19a1c4e55b50599", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
249237039
pes2o/s2orc
v3-fos-license
Legumain/pH dual-responsive lytic peptide–paclitaxel conjugate for synergistic cancer therapy Abstract After molecule targeted drug, monoclonal antibody and antibody–drug conjugates (ADCs), peptide–drug conjugates (PDCs) have become the next generation targeted anti-tumor drugs due to its properties of low molecule weight, efficient cell penetration, low immunogenicity, good pharmacokinetic and large-scale synthesis by solid phase synthesis. Herein, we present a lytic peptide PTP7-drug paclitaxel conjugate assembling nanoparticles (named PPP) that can sequentially respond to dual stimuli in the tumor microenvironment, which was designed for passive tumor-targeted delivery and on-demand release of a tumor lytic peptide (PTP-7) as well as a chemotherapeutic agent of paclitaxel (PTX). To achieve this, tumor lytic peptide PTP-7 was connected with polyethylene glycol by a peptide substrate of legumain to serve as hydrophobic segments of nanoparticles to protect the peptide from enzymatic degradation. After that, PTX was connected to the amino group of the polypeptide side chain through an acid-responsive chemical bond (2-propionic-3-methylmaleic anhydride, CDM). Therefore, the nanoparticle (PPP) collapsed when it encountered the weakly acidic tumor microenvironment where PTX molecules fell off, and further triggered the cleavage of the peptide substrate by legumain that is highly expressed in tumor stroma and tumor cell surface. Moreover, PPP presents improved stability, improved drug solubility, prolonged blood circulation and significant inhibition ability on tumor growth, which gives a reasonable strategy to accurately deliver small molecule drugs and active peptides simultaneously to tumor sites. Introduction Paclitaxel (PTX), as one of the most widely used chemotherapeutic agents, has been extensively used for the treatment of lung, breast and ovarian cancers (Kubota et al., 2014;Fleming et al., 2017;Kim et al., 2017). However, the limitations of PTX, such as highly hydrophobic, drug resistance as well as severe side effects have hindered its clinical applications (Singla et al., 2002). In order to enhance the physical and chemical properties of small molecule drugs, nano drug delivery systems (NDDSs) have been eagerly developed that feature passive or active targeting, enhanced stability in blood, improved bioavailability, high tumor accumulation and reduced toxic side effects (Davis et al., 2008;Song et al., 2019;Mirza & Karim, 2021). Although lots of researches about assembling chemotherapy drugs into nanoparticles have been explored (Yang et al., 2012;Wu et al., 2014;Piao et al., 2019), therapeutic peptides were seldom used to conjugate with chemotherapy drugs. Peptides are considered as a series of amino acids and penetrate easily into the tissue (Bolhassani et al., 2017). PTP-7 (FLGALFKALL) is a cationic lytic peptide which possesses primary target site on the cell membrane. Anticancer activities have been demonstrated in three thermodynamic steps: (1) the electrostatic attraction of cationic peptides to anionic cell membranes; (2) the accumulation of peptides on the cell surface; (3) the insertion of aggregated peptides into cell membranes to cause cell lysis (Tu et al., 2009;Chen et al., 2012a;Rafferty et al., 2016). Similarly, it can cause severe tissue damages due to its inability to make a distinction between tumor cells from normal cells (Chen et al., 2012b). 
Compared to small-molecules, PTP-7 is prone to degradation by proteases and has short half-life in the circulation so that have not been successfully applied to clinical cancer treatments (Tu et al., 2007;Luo et al., 2020). Peptide-drug conjugates (PDCs) are special drug delivery system which comprises a therapeutic peptide, a small molecule drug and a linker. Compared with single drug loaded drug delivery system, PDCs can enhance anti-tumor effect, reduce drug resistance and modified with virous linkers (Deng et al., 2021;Zhu et al., 2021). Therefore, constructing PTX/PTP-7 co-loaded polymeric nanoparticles could be a creative strategy which can overcome the above shortcomings. Given weak acidic pH is a special feature of tumor microenvironment, acid-sensitive linkages play an important role in establishing prodrugs with efficient transformation in tumor (Wang et al., 2011;Saadat et al., 2021). Li et al. utilized CDM as an acid-sensitive linkage to control prodrug release in tumor (Li et al., 2016). Moreover, Ding et al. designed a PEGylated PTX prodrug using acid-sensitive acetone-based acyclic ketal as the linkage which showed good antitumor efficacy (Mu et al., 2020). Legumain, a lysosomal/vascular cysteine protease, is strictly specific to the hydrolysis of peptide bonds with asparagine or aspartic acid (Morita et al., 2007). Legumain was demonstrated overexpressing in various cancers, such as breast cancer, colorectal cancer, and prostate cancer, whereas rarely expressed in normal tissues (Ohno et al., 2010;Haugen et al., 2015). According to the unique function and characterization of legumain, several legumain-targeted NDDS have been explored to increase drug bioavailability and minimize or eliminate side effects (Liu et al., 2003;Lin et al., 2015;He et al., 2018). In view of their own disadvantages of both therapeutic peptide PTP-7 and small molecule drug PTX, overcoming one individual obstacle was not sufficient to optimize therapeutic outcomes. Thus, we designed weak acidic-responsive and legumain-cleavable PTX/PTP-7 co-loaded nanoparticles (mPEG-PTP-7-PTX, PPP). To be specific, PTP-7, as hydrophobic segments, was conjugated with maleimide-polyethylene glycol by using four amino acid CAAN, a substrate of legumain (Liu et al., 2003), and CDM-PTX was connected to the side chain amino group of PTP-7 via ring-opening and addition reaction. Once PPP encounters the acidic tumor microenvironment, PTX will directly fell off PPP and released in the tumor matrix or released in the cells after PPP endocytoses into lysosomes. Then, the legumain-sensitive site of PTXreleased PPP (mPEG-PTP-7) was exposed and cut by legumain overexpressed in breast carcinoma and colorectal carcinoma, such as MCF-7, 4T1, and HCT116 cancer (Liu et al., 2014;Zhang & Lin, 2021), and finally releases PTP-7 to kill the tumor cell by cracking cell membrane (Figure 1). Our expectations are possible that PPP enhances the water solubility of PTX and enables PTX to target precise delivery to the tumor cells. What is more, PPP could protect PTP-7 from being degraded and prolong its half-life in the blood, subsequently achieve efficient synergistic anti-tumor effect of PTX and PTP-7. Synthesis of PPP nanoparticles Maleimide acid was attached to mPEG by esterification reaction. mPEG-OH (1 mmol), Mal-COOH (4 mmol), EDCÁHCl (4 mmol), and DMAP (0.1 mmol) were dissolved into dry dichloromethane (DCM). The solution was stirred at room temperature for 72 h under nitrogen atmosphere. 
Then, the mixture was diluted with DCM, washed with HCl (0.01 M), saturated sodium bicarbonate and saturated brine in turn, then dried over anhydrous sodium sulfate and evaporated, precipitated in excessive ice diethyl ether and dried under vacuum. mPEG-Mal was finally obtained. CAAN-PTP-7 (1 mmol) and TCEP (2 mmol) were dissolved with DMF for 1 h in order to break the disulfide bond which could be oxidized by two sulfhydryl groups of cysteine. After that, 1 mmol mPEG-Mal was added into the solution and the mixture was stirred at room temperature for 4 h. The polymeric solution was subject to dialysis against 2 L deionized water at 4 C for 48 h with 3500 Mw dialysis bag (Biotopped, Beijing, China). Then, the solution was lyophilized. PTX (1 mmol), 2-propionic-3-methylmaleic anhydride (CDM, 1 mmol), EDCÁHCl (3 mmol), and DMAP (0.1 mmol) were dissolved into dry DCM and stirred overnight under nitrogen atmosphere in room temperature. The reaction was monitored by thin-layer chromatography of DCM and ethyl acetate (3:1, v/v). Finally, the mixture was diluted with DCM, washed with HCl (0.01 M), saturated sodium bicarbonate and saturated brine in turn. After dried by sodium sulfate, filtered, concentrated and dried under vacuum, the crude product includes CDM-PTX that was obtained as a white solid. PPP was constructed using PEG as the hydrophilic segment and PTP-7 as the hydrophobic segment. mPEG-PTP-7 (1 mmol), CDM-PTX (3 mmol), and DMAP (0.1 mmol) were dissolved in dry DMSO and stirred at 30 C for 24 h under nitrogen atmosphere. The reaction was purified by dialysis against DMSO at room temperature for 48 h and further dialysis against PBS for 12 h with 3500 Mw dialysis bag. After washed with deionized water by ultrafiltration centrifuge tube (MWCO 10,000 Da), the solution was frozen at À80 C for 4 h and lyophilized to afford the dried faint yellow powder of PPP. To obtain the PPP micelle, the solid powder of PPP was dissolved in THF and added dropwise to PBS and stirred overnight until THF volatilized completely. After filtrated by 450 nm filter, the PPP NPs were stored at 4 C before use. 1 H NMR (400 MHz) (Bruker AVANCE DRX-400 NMR spectrometer, Bruker, F€ allanden, Switzerland) was used to confirm the obtained compounds. Determination of critical micelle concentration (CMC) Solution of Nile red in CH 2 Cl 2 in brown bottles was evaporated under vacuum. PPP micelles with different concentrations ranging from 0.1 mg/mL to 150 mg/mL were added to the bottles and stirred in the dark for 12 h. A microplate instrument (Flexstation 3, Molecular Devices LLC, Sunnyvale, CA) was used to record the fluorescence of each solution at an emission wavelength of 620 nm and excitation wavelength of 579 nm. CMC of PPP micelle was calculated by plotting the ratios of intensity and concentration (Cai et al., 2020). Characterization of nanoparticles The size, polydispersity index (PDI), and f-potential of PPP NPs were measured by dynamic laser scattering (DLS) (Zetasizer, Nano-ZS90, Malvern, Malvern, UK) and the morphology was observed by transmission electron microscopy (TEM) (Jeol Ltd., Akishima, Japan). In order to assess the stability, PPP NPs incubated in 10% FBS PBS solution were monitored by DLS for 13 days. In vitro hydrolysis of PPP One milliliter aliquot of PPP NPs formulation containing 0.1 mg PTX in dialysis bags (MWCO ¼ 3500 Da) was soaked in 29 mL PBS (pH 7.4 and 6.5) with 0.1% w/v Tween-80. The sample was incubated at 37 C and shaking was done at 100 rpm. 
Two hundred microliters released mediums were collected and equal PBS was added at specific time points. The samples were added with 200 mL acetonitrile, then delivered with water/acetonitrile (v/v ¼ 45/55) to HPLC system (Agilent, Santa Clara, CA) at a flow rate of 1.0 mL/min and detected at 227 nm by using a reversed-phase column (Inertsil C18, 5 mm, 4.6 mm  250 mm, Shinjuku, Japan) at 27 C (Cai et al., 2020). Cytotoxicity assay MCF-7 cells, HCT116 cells, and 4T1 cells were seeded in 96well plates at a density of 3  10 3 /well. After 12 h, cancer cells were incubated with free PTX and PPP micelles with a PTX concentration from 0.001 mg/mL to 10 mg/mL in 200 mL culture mediums for 72 h. One hundred microliters of culture medium containing 10 mL MTT (5 mg/mL) was added to each well for another 4 h in the dark. The absorbance was measured at 570 nm by a microplate reader (Flexstation 3, Molecular Devices LLC, Sunnyvale, CA). Comparing treated wells with controlled wells, the cell viability was analyzed as mean ± SD (n ¼ 5). In order to confirm the enzyme responsiveness of AAN between the mPEG and PTP-7, MTT assay was used to indirectly observe the cytotoxicity of PTP-7 cleaved by legumain. One microgram of recombinant mouse legumain (Abcam, Cambridge, UK) was activated in a buffer solution (pH 4.5) which contains 50 mM critic acid, 1 mM DTT, and 1 mM EDTA (Zhou et al., 2017). Then, PP was dissolved in the legumain solution and incubated at 37 C for 12 h. Next, culture medium containing 0.1-100 mg/mL of free PTP-7 or legumain treated mPEG-PTP-7 was added to cancer cells. After incubation for 24 h, the cell viability was calculated as above. Study of peptide-cell membrane interaction To obtain PTP-7 completely cleaved by legumain, PP was dissolved in the buffer solution at a concentration of 5 mg/mL legumain and incubated at 37 C for 12 h. After adjusting pH to 8, the solution was treated with 0.4 mmol of fluorescein isothiocyanate (FITC) (RHAWN, Shanghai, China) and stirred in the dark overnight to label PTP-7. MCF-7 cells were plated at 2  10 5 cells per well in 12-well plates for 12 h and exposed to FITC-labeled PTP-7 at a concentration of 120 mg/ mL PTP-7. The cells were washed with PBS for three times and the cell membranes were stained with Dil (10 mM) for 20 min. After being washed with PBS for three times to remove residual Dil, PTP-7 accumulation on cell membranes was investigated by fluorescent microscopy (Olympus IX73, Shibuya, Japan). 2.2.7. Three-dimensional (3D) tomography of live cells MCF-7 cells were seeded in a 35-mm glass-bottom dish for 24 h and then treated with PPP at a concentration of 20 mg/ mL for 3 h. The live morphology of MCF-7 cells was detected using a 3D cell holographic tomography microscope (Nanolive 3D Cell Explorer, Tolochenaz, Switzerland) and analyzed by STEVE software. During imaging, cultivation environment was maintained with sufficient amount of CO 2 at 37 C. Cellular uptake To verify cellular uptake of PPP NPs, 2  10 4 cells of MCF-7 cells, HCT116 cells and 4T1 cells were seeded in 24-well plates. One milligram PPP and 0.01 mg coumarin (C6) were dissolved in THF, then the mixture was added dropwise to 1 mL PBS and stirred overnight. Filtered with a 220 nm filter to ensure the unencapsulated C6 was removed, C6-loaded PPP NPs (PPP-C6) were obtained and the C6 concentration was measured using a fluorescent microscopy (Olympus IX73, Shibuya, Japan) with excitation wavelength of 466 nm and emission wavelength of 504 nm. 
The encapsulation efficiency of C6 was 0.168% which showed the effective loading of PPP-C6. Next, the cells were treated with PPP-C6 at 10 mg/ mL for 2 h. The medium was removed and the cells were washed with PBS for three times, fixed with 4% paraformaldehyde for 15 min. Hoechst 33342 was used to stain cell nuclei for 20 min. After washed with PBS for three times, cells were presented by fluorescent microscopy (Olympus IX73, Shibuya, Japan). Inhibition ability on tumor spheroid MCF-7 cells and HCT116 cells were dispersed in culture medium containing 10% methylcellulose (0.24%, w/v) at a density of around 1  10 5 cells/mL. Twenty microliters of the above medium was equably dropped on the lid of a 10 mm 2 cell culture dish and incubated at 37 C for 24 h. When the tumor spheroids were formed, they were removed to 96-well plates which were pre-coated with agarose. After incubation for 72 h, MCF-7 tumor spheroid was treated with PPP and free PTX at equivalent 0.1 mg/mL PTX and HCT116 tumor spheroid was treated with PPP and free PTX at equivalent 0.05 mg/mL PTX. The spheroid was observed for nine days and the diameter change of each spheroid was measured by Image J software (Bethesda, MD). Measured with maximum diameter (d max ) and minimum diameter (d min ), the tumor spheroid volume was calculated by the following equation: V¼(pÂd max Âd min )/6, and the negative control tumor spheroids grew up in culture medium. Pharmacokinetic evaluation The pharmacokinetics of PPP NPs was tested on female ICR mice ($5 weeks) compared with the clinically used Taxol. The mice were randomly divided into two groups (n ¼ 4) and intravenously injected with PPP NPs and Taxol at a PTX dosage of 7.5 mg/kg. At 0.25, 0.5, 1, 2, 4, and 6 h, blood of postorbital venous plexus was collected by capillaries into 1.5 mL heparinized EP tubes and centrifuged immediately for 10 min at 4 C, 5000 rpm. Then, 50 lL of serum was mixed with 150 lL acetonitrile and the mixture was centrifuged for 10 min to precipitate the proteins at 4 C, 5000 rpm. The supernatant was taken out and pretreated with 1 M hydrochloric acid overnight to completely release the PTX of PPP. After filtered with a 220 nm filter, the samples were analyzed by HPLC using water/acetonitrile (v/v ¼ 45/55) as the mobile phase. The analyzed flow rate was 1 mL/min and detected at 227 nm. In vivo tissue biodistribution studies The in vivo tissue biodistribution studies of PPP NPs were carried out in female BALB/c nude mice. To establish MCF-7 tumor-bearing model in the right flank of mice, each mouse was injected with 1  10 7 cells. The tumor volumes were calculated by the following equation: V¼length  width 2 /2. When the tumor grew up to $200 mm 3 , PPP NPs and Taxol (10 mg/kg PTX equiv.) were injected intravenously. After injection for 8 h, the heart, liver, spleen, lung, kidney, and tumor of each mouse were dissected, weighed, and homogenized with three times volumes of deionized water. The homogenization and acetonitrile were mixed at a ratio of 1:3. After vortexed for 10 min, the mixture was centrifuged at 5000 rpm, 4 C for 10 min. The supernatant was collected and dried by vacuum pump. Next, the residuum was redissolved with 80 lL of mobile phase (water/acetonitrile: 45/55) and treated with 1 M hydrochloric acid to completely release the PTX. The HPLC analyzed method was consistent with the method described above. Antitumor efficacy and safety evaluation Female BALB/c nude mice were ($5 weeks) implanted with 1  10 6 4T1 cells in the right flank. 
The 4T1 tumor-bearing mice were casually divided into three groups (n ¼ 4) when tumor volume reached about 150 mm 3 . All mice were intravenously injected with 200 mL PBS, Taxol, and PPP NPs (PTX, 7.5 mg/kg) every two days for five times. Tumors diameter and body weights were recorded every two days and the relative tumor growth was expressed as a rate of volume and initial volume. When all treatments were accomplished, three groups of mice were euthanized for dissecting. Tumors and major organs were collected, washed, and fixed with paraformaldehyde for further histological examinations. Statistical analysis All data analyzed for statistic were performed with GraphPad Prism (San Diego, CA) as mean ± SD. One-way ANOVA was used to evaluate the significant comparison between two groups. Statistical significance was indicated as à p<.05 and high significance was marked as Ãà p<.01. Characterization of PPP NPs Synthesis route of PPP which consisted of mPEG-OH, Mal-COOH, CAAN-PTP-7, CDM, and PTX is shown in Figure 2. To maximize the interactions between PTP-7 and tumor cells, pro-peptide CAAN-PTP-7 (CAANFLGALFKALLAAN) was prepared by modifying with cysteine-alanine-alanine-asparagine (CAAN), a particular legumain substrate, to the phenylalanine of PTP-7. First, mPEG5000-Mal was synthesized by esterification using mPEG5000-OH and the yield of mPEG5000-Mal was 63%. Fig. S1-A indicates the successful connection of maleimide group (d ¼ 6.7 ppm) to mPEG. Then mPEG-Mal was connected with CAAN-PTP-7 by addition reaction between the double bond of maleimide group and the sulfhydryl of cysteine (Fig. S1-B) and the yield of mPEG-PTP-7 was 78%. mPEG5000-OH was served as hydrophilic segments to prolong the circulation in blood and PTP-7 was served as hydrophilic segments. For PPP synthesis, PTX was reacted with CDM to prepare PTX-CDM and the yield of product was 51%. As shown in Fig. S1-D, new peaks appeared at 2.1 ppm and 2.8 ppm in comparison with PTX spectrum in Fig. S1-C. Then, the product was further coupled to mPEG-PTP-7 through a ring-opening and addition reaction between the CDM anhydride residue and the amino groups of PTP-7. The yield of PPP was 62% and the resultant amide bond would be hydrolyzed in the acid tumor microenvironment and then liberate PTX. The 1H NMR of PPP (Fig. S1-E) showed new hydrogen spectrum peaks of CDM-PTX compared to 1H NMR of PP (Fig. S1B) which indicated a 14% PTX loading content of PPP. Calculated by 1H NMR of PPP, the molecular weight of PPP was 7074.38. The CMC is a critical property for micelles as it indicates the polymer's capacity to form micelles in aqueous solutions. As shown in Figure 3(A), the CMC of PPP was 45.46 mg/mL which stated that PPP was easy to transform micelles. The results of DLS displayed that the average particle size of PPP was about 91.3 nm with a narrow size distribution, which was the proper size to penetrate effectively into the tumor site due to EPR effect (Figure 3(B)) (Zhang et al., 2018). The f-potential of PPP was approximately À3.97 mV (Fig. S2) which imparted longer blood circulation time to PPP. To test the stability of NPs in the mimicked physiology, the PPP NPs were incubated in PBS with 10% FBS and its size and PDI were observed by DLS. The average size and PDI were hardly changed in 13 days which indicated an excellent stability of PPP (Figure 3(E)). 
As indicated in Figure 3(C,D), PPP NPs were spherical and uniform in normal condition and the average particle size of PPP in the TEM image measured by Image J software (Bethesda, MD) was about 90.8 nm, while the nanostructure became less compact and transformed into a lysis that consisted of mPEG-PTP-7 in acidic condition. The results preliminaries verified the pH-responsive of PPP. In vitro release of PPP NPs and dual-responsive assays In vitro release of PTX underwent in two different pH dissolve mediums (pH 6.5 and 7.4) to further reveal the acidresponsiveness of resultant amide bond. As shown in Figure 3(F), PPP NPs in a weak acid medium presented a relatively quicker release with a 72.5% cumulation at 96 h than PPP in normal medium (47.4%). On a consequence of the different pH between normal tissues and tumor microenvironment, NPs with less cumulation release at pH 7.4 were favorable to the stability in blood circulation and low toxicity to normal tissues, while greater release of PTX at pH 6.5 pointed out an accurate targeting in tumor. The cytotoxicity of PTP-7 cleaved by legumain was measured by MTT assay to determine enzyme responsiveness of mPEG-PTP-7 (PP) (Figure 4(A)). Compared with free PTP-7, PP incubated with legumain showed considerable cell-killing ability on MCF-7 and HCT116 cells. In contrast, PP in the absence of legumain had little influence on cell viability. The inhibitory effects revealed that PTP-7 could be released from PP and produced membrane lytic activities in the tumor environment. Peptide-cell membrane interaction To further assess the cytotoxicity mechanism of PTP-7, interactions between FITC-labeled PTP-7 and Dil-labeled cell membranes were studied (Figure 4(B)). After 90 min of incubation, green fluorescence (FITC-labeled PTP-7) appeared on the cell surfaces and some cell membranes color changed from red to yellow (a mixture of green fluorescence and red fluorescence), which indicated that PTP-7 formed an extensive layer on cell surfaces and inserted into the lipid bilayer by a detergent-like manner. As the incubation time increased to 180 min, cell membranes were severely damaged and peptide aggregates could be clearly seen in the cytoplasm, demonstrating the cell lysis mechanism of PTP-7. Cell cytotoxicity and cellular uptake The cytotoxicity of PPP in MCF-7, HCT116, and 4T1 cells was investigated by MTT assay (Figure 4(C)). The half maximal inhibitory concentration (IC 50 ) of PPP NPs and free PTX was calculated to be 0.066 and 0.028 mg/mL on MCF-7 cells, 0.044 and 0.006 mg/mL on HCT116, 0.212 and 0.01 mg/mL on 4T1 cells which indicated a lower cell growth inhibition ability than free PTX. The result may be owing to the extremely stable property of PPP and low legumain expression in vitro leading to incomplete release of PTX and PTP-7 (Edgington et al., 2013). Moreover, the holographic microscopy studies showed the changes in cellular morphology of MCF-7 cells after treated with PPP for 3 h. Notably, PPP led to the morphological features of apoptosis such as shrinkage, losing contact with neighboring cells and floating relative to control (Figure 4(D)). Next, cell uptake of PPP NPs was studied against MCF-7, HCT116, and 4T1 cells by fluorescent microscopy. As shown in Figure 5(A), judging by the blue fluorescence, the cytoplasm was stained with a distinct green fluorescence, which means PPP NPs could be endocytosed by cells after incubated with PPP-C6 NPs for 2 h. The results clearly showed the rapid cell association with PPP NPs. 
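As context for the IC50 values reported above, viability curves from an MTT assay are commonly summarized by a four-parameter logistic (Hill) fit. The paper does not describe its fitting procedure, so the sketch below is only a generic illustration; the concentrations, viability values, and starting guesses are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic four-parameter logistic fit of MTT viability data (hypothetical numbers;
# the paper does not report its raw curves or fitting method).
def four_pl(log_c, bottom, top, log_ic50, hill):
    return bottom + (top - bottom) / (1.0 + 10.0 ** (hill * (log_c - log_ic50)))

conc = np.array([0.001, 0.01, 0.1, 1.0, 10.0])           # ug/mL (hypothetical)
viability = np.array([0.98, 0.90, 0.55, 0.25, 0.12])     # fraction (hypothetical)

params, _ = curve_fit(four_pl, np.log10(conc), viability,
                      p0=[0.1, 1.0, -1.0, 1.0], maxfev=10000)
print(f"estimated IC50 ~ {10.0 ** params[2]:.3f} ug/mL")
```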
Inhibition of tumor spheroid growth The tumor spheroids were used to mimic tumor microenvironment (Sung & Beebe, 2014) and the spheroids volume change was measured to further evaluate the cell viability ( Figure 5(B)). The PPP treatment led to a 25.49% inhibition of HCT116 spheroids while free PTX treatment was 20.51% suppression at 9th day. Meanwhile, the inhibition of PPP NPs and PTX was 52.68% and 47.88% on MCF-7 spheroids, respectively. At equivalent PTX concentration, the tumor spheres treated with PPP demonstrated slightly effective inhibition compared to the spheres treated with free PTX. In a way, we could own part of inhibition of PPP to PTP-7 cleaved by legumain. Pharmacokinetics and biodistribution Recent researches have established that PTX encapsulated in formulation still have the disadvantage of rapid blood elimination (Koudelka & Tur anek, 2012;Nehate et al., 2014). Pharmacokinetics of PPP NPs and Taxol were tested on female ICR mice (6-8 weeks) to determine whether NPs can prolong the blood circulation of PTX. 7.5 mg/kg PTX dosage of Taxol and PPP were administrated through tail vein injection. As Figure 6(A) and Table 1 show, compared with Taxol, PPP NPs displayed a dramatically longer blood circulation time. As displayed in Table 1, the circulation half-life and concentration-time curve (AUC 0-1 ) of the PPP were calculated to be 51.7 h and 134.3 mg L À1 h À1 , which were 56.5 and 7.5 times as much as that of Taxol. We speculated that superior body accumulation of PPP was attributed to its enhanced solubility, stability, and stealth performance of pegylated nanoparticles surface (Indoria et al., 2020;Kim et al., 2020). The biodistribution of PPP in heart, liver, spleen, lung, kidney, and tumor was further investigated in MCF-7 tumor-bearing BALB/c nude mice (Figure 6(B)). The PPP treatment showed dramatically high accumulation in spleen than Taxol treatment probably owing to its long blood retention and subsequently nonspecific uptake by macrophages in spleen through the mononuclear phagocyte system (Zhang et al., 2016). Moreover, PPP NPs and Taxol showed high accumulation in liver on account of strong clearance of liver phagocytic cells (Tsoi et al., 2016). Due to the lack of targeted ligand modification, PPP NPs exhibited no significant difference accumulation in tumor to Taxol. In vivo antitumor efficacy and safety evaluation In view of the performance of PPP NPs in vitro and superior pharmacokinetic in vivo, the in vivo anti-cancer efficacy was performed on 4T1 tumor overexpressing legumain proteases in vivo (Liu et al., 2014;He et al., 2018). It is observed that all of tumor volumes were suppressed with treatment of PPP and Taxol compared to PBS (Figure 6(C)). Noted that PPP NPs exhibited more distinct inhibition than Taxol, and average relative tumor volumes of PPP NPs and Taxol treated mice reached to 217% and 514% on day 9 compared to the first day, respectively. The satisfactory antitumor effects of PPP NPs should be put down to the drugs programmed release and the co-therapy of PTX and PTP-7. When PPP NPs were carried to tumor tissue, the acid-responsive CDM linker was triggered in the weak acid tumor microenvironment to Figure 6. (A) Plasma concentration of PTX in ICR mice after administration of Taxol and PPP (7.5 mg/kg PTX equiv., n ¼ 5). (B) The biodistribution of PTX in BALB/c nude mice bearing MCF-7 tumor at 8 h after administration of Taxol and PPP (10 mg/kg PTX equiv., n ¼ 3). (C-E) Antitumor efficacy of Taxol and PPP. 
Relative tumor volume change with treatment for 12 days in (C) and the arrows signify the time of intravenous administration, body weight changes of BALB/c nude mice bearing 4T1 tumor during treatment in (D) and the H&E and TUNEL assay of tumor tissues from three groups in (E). Data are described as mean ± SD (n ¼ 4, à p<.05, Ãà p<.01). Bar ¼ 100 mm. release PTX. Next, PTP-7 peptide contained in PPP NPs was specifically activated by highly expressed legumain, thereby achieving the lytic efficiency on tumor cells. In vivo therapeutic safety of PPP NPs was observed by monitoring the changes of mice body weight. Compared with PBS group, none of mice treated with PPP NPs or Taxol showed significant body weight loss (Figure 6(D)) indicating no systemic toxicity for both groups. Furthermore, TUNEL of tumors and H&E staining of major organs at the end of treatments were further used to evaluate therapeutic safety ( Figure 6(E)). H&E staining indicated that cell morphology and nuclear in tumors were almost intact with PBS treating group. On the contrary, PPP NPs treated group displayed the largest quantity of cell apoptosis compared to Taxol and PBS groups which can be seen more noticeable in TUNEL images. It is worth mentioning that some PBS treated mice suffered from splenomegaly, secreted more inflammatory cells and Taxol treated group showed mild shriveled glomerular and myocardial fiber breakage, while the PPP treated mice was normal. Beyond that none of mice presented significant tissue damages and abnormal behaviors manifesting nontoxicity of PPP ( Figure S3). Conclusions In summary, we successfully constructed a PTP-7 conjugated PTX nanoparticles to accomplish the cascade drug release in unique tumor microenvironment. The obtained PPP NPs possessed perfect stability in vitro and had longer blood circulation in vivo that were desirable as NDDS. When delivered to tumor microenvironment, PPP NPs could be activated in tumor extracellular matrix or intracellular lysosomes. The specificity of acid pH-stimuli and legumain-sensitive endows PPP NPs with precise drug release in the tumor site and synergistic therapy of PTX and PTP-7. Afterwards, the enhanced therapeutic efficacy on tumors produced by synergistic treatment of PTP-7 and PTX has justified the applicability of dualresponsive strategy. Given above, the rational design of PPP NPs has transmitted a strategy for environment-responsive drug delivery of overcoming drug barriers and also provided ideas of therapeutic peptide for broader clinical application.
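Relating to the pharmacokinetic parameters reported earlier (terminal half-life and AUC(0-inf)), a standard way to obtain such numbers from sparse plasma sampling is non-compartmental analysis: trapezoidal integration of the concentration-time curve plus a log-linear fit of the terminal phase. The sketch below is a generic illustration on hypothetical concentrations; it is not the analysis actually used in the study, which is not described in detail.

```python
import numpy as np

# Generic non-compartmental PK sketch on hypothetical plasma concentrations
# (sampling times follow the study design; the concentration values are invented).
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 6.0])        # h
c = np.array([42.0, 35.0, 28.0, 20.0, 11.0, 6.5])    # ug/mL (hypothetical)

auc_last = np.trapz(c, t)                             # linear trapezoidal AUC(0-tlast)
slope, _ = np.polyfit(t[-3:], np.log(c[-3:]), 1)      # terminal log-linear phase
kel = -slope                                          # elimination rate constant, 1/h
t_half = np.log(2.0) / kel
auc_inf = auc_last + c[-1] / kel                      # extrapolation to infinity

print(f"t1/2 ~ {t_half:.1f} h, AUC(0-inf) ~ {auc_inf:.0f} ug*h/mL")
```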
2022-06-02T06:22:53.468Z
2022-05-31T00:00:00.000
{ "year": 2022, "sha1": "919c7012b0a1c8bd37d067995b0906b6e3372bf4", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/10717544.2022.2081380?needAccess=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fbf0cb1db0d3b820282c9fb0c503695fca3b627d", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
252762100
pes2o/s2orc
v3-fos-license
Quantifying Political Bias in News Articles Search bias analysis is getting more attention in recent years since search results could affect In this work, we aim to establish an automated model for evaluating ideological bias in online news articles. The dataset is composed of news articles in search results as well as the newspaper articles. The current automated model results show that model capability is not sufficient to be exploited for annotating the documents automatically, thereby computing bias in search results. INTRODUCTION Search engines are ubiquitous. As reported by SmartSights [3], in 2017 46.8% of the world population accessed the internet and by 2021, the number is expected to reach 53.7%. Currently, on average 3.5 billion Google searches are done per day [2]. These statistics advocate that search systems are "gatekeepers to the Web" for many people [6]. As information seekers search the Web more, they are also more influenced by Search Engine Result Pages (SERPs) and their influence -negative included-potentially become visible in a wide range of areas. However, as with all software-based systems, search platforms do not lack of human influence, thereby they may suffer from embedded bias, i.e., corpus or algorithmic bias. Experimental studies suggest that particular information types or sources might be retrieved more or less, or might not be well represented [4]. For instance, during the elections, it is known that people issue repeated queries on the Web about political candidates and events such as "democratic debate", "Donald Trump" and "climate change" [10]. Epstein and Robertson [7] claim that SERPs returned in response to these queries may influence the voting decisions of the users and report that manipulated search rankings can change the voting preferences of undecided individuals at least by 20%. As individuals rely to a greater extent on the SERPs for their decision making, there is a thriving demand for these systems explainable. Empirical evidence has shown that individuals trust more sources ranked at higher positions in SERPs, but the ranking criteria may rather depend more on user satisfaction than the factual information, which jeopardizes the phenomenon of providing reliable information in exchange for satisfying users [4]. In spite of the given research findings, the majority of online users tend to believe that search engines provide neutral results, i.e., serving only as facilitators in accessing information on the Web due to their automated operations [9]. However, this romanticised view of search platforms does not reflect reality and there seems to be a growing skepticism related to objectivity and credibility of these platforms. To illustrate that, a recent dispute between the U.S. President Donald Trump and Google can be given, where Mr. Trump accused Google of presenting only negative news about him when his name is searched. Google refuted this claim by saying that: "When users type queries into the Google Search bar, our goal is to make sure they receive the most relevant answers in a matter of seconds" and "Search is not used to set a political agenda and we don't bias our results toward any political ideology" 1 . In this work, we hope to shed some light on that debate, by not specifically examining queries regarding Donald Trump but by fulfilling an in depth analysis of retrieved search answers to a broad set of queries related to controversial topics based on concrete evaluation measures. Bias implies undue emphasis. 
For a retrieval system, it can be defined as the balance and representativeness of Web documents retrieved from a database for a set of queries [11]. When a user issues a query to a search engine, documents from different sources are collected, ranked, and presented to the user. Assume that a user searches for 2016 presidential election and the top-n ranked results are displayed. In such a search scenario, the retrieved results may exaggerate or downplay particular perspectives and thereby provide an unbalanced picture of the given query as claimed by Mr. Trump, though without any scientific support. Hence, the potential undue inclusion or exclusion of specific perspectives in the retrieved results lead to bias [11]. Note that the existence of bias is different from relevance. In the presented scenario, even though the retrieved list of documents may all be judged as relevant with respect to the given query, if the selection of the documents is skewed or slanted, i.e., emphasizing one perspective over another, then the corresponding search engine is biased due to an imbalanced representation of the perspectives towards the query's topic. Bias is especially important if the query topic is controversial having opposing perspectives as described in the given scenario. The bias in SERPs can be used by search engines to inform their users about the bias by making themselves more accountable which is one of the crucial attributes that a retrieval system should possess [4]. In this work, we focus on the SERPs coming from the news sources and investigate two major search engines (Bing and Google) in terms of political bias. Our analysis has mainly three sides where we evaluate the political bias of the search engines separately, then make a comparison among them as well as track the source of this bias in the SERPs. The bias may come from the data, which may contain biases (input bias) or the search algorithm, which contains sophisticated features (algorithmic bias). In the scope of this work, we concentrate on the input bias that is intrinsically embedded in the data itself. In short, we aim to answer the following research questions: RQ1: On a conservative-to-liberal scale, do search engines return politically biased SERPs in response to queries related to controversial topics? RQ2: Are search engines significantly different from each other towards controversial topics? RQ3: Does the source of bias come from the input data? We address these research questions for controversial topics representing a broad range of issues in SERPs of Google and Bing through content analysis, i.e. analysing the textual content of the retrieved documents. We focus on content bias and describe our SERP politically content bias quantification framework in which we propose three different measures of bias based on common Information Retrieval (IR) utility-based evaluation measures: Precision at cut-off (P@n), Rank Biased Precision (RBP), and Discounted Cumulative Gain at cut-off (DCG@n). While the first measure quantifies bias considering only a weak ranking criterion, i.e. the first returned documents as in SERPs, the other two measures incorporate stronger ranking bias. In order to answer RQ1, we measure the degree of deviation of the ranked SERPs from an ideal distribution where different political perspectives are equally likely to appear [11]. 
To detect political bias which results from the imbalanced representation of politically different points of view, we label the documents' political perspectives with classification and use these labels for bias evaluation. To address RQ2, we compare the political bias in the SERPs of the two search engines to see if they show similar bias for the corresponding controversial topics. To answer RQ3, we measure the political bias in a similar manner as we do to answer the RQ1. The only difference is that this time we measure the bias in the whole corpus, i.e., all retrieved SERPs from Google and Bing, instead of the partial of it, as top documents. Then, we compare the bias of the whole corpus with the partial bias that we compute for the first part of our analysis to check if they are consistent, or not. Our main contributions in this work can be summarized as follows: (1) We propose a novel generalizable search bias evaluation framework to measure the bias in the SERPs. (2) Our fairness-aware set of measures are explainable utility-based IR measures which take into account of relevance as well, while evaluating bias in ranked results. (3) We apply our framework to compare the relative bias in the SERPs content of two search engines for controversial queries. To the best of our knowledge, this is one of the first works to compare the two search engines in terms of content bias. (4) We utilize the framework to track the source of bias in SERPs, i.e., checking if the existing bias comes from the corpus or not. Lastly, a preliminary version of the present framework has been published earlier [8]. This work expands and surpasses the previous endeavor mainly in the following points: (i) We used a deep learning framework that is widely-used in NLP tasks for classification, i.e., to predict the political perspectives of the retrieved documents from the two popular search engines, instead of obtaining these labels via crowdsourcing. (ii) We also crawled a different dataset of articles from the newspapers' websites to fine-tune the deep learning model specifically for the news articles, thereby achieve a better classification model in perspective detection. (iii) Finally, we included a new investigation on the source of bias in our experiments through examining if the measured bias in the top-n documents comes from the data. The remainder of the paper is structured as follows. In Section 2 we present the related work. The search bias evaluation framework is developed in Section 3. In Section 4 we detail the experimental setup, and present the results. BACKGROUND & RELATED WORK Studies indicate that online IR platforms have influenced public by causing an increase in polarization; for instance it has been shown that social media accounts with non-Western names are more likely to be indicated as fraudulent since most probably the system has been trained on Western names. Therefore, proposing new metrics can be useful in evaluating and understanding such ethical issues in IR systems [4]. Regarding this assertion, there have been a growing interest in quantifying bias in the recent years POLITICAL BIAS EVALUATION FRAMEWORK 2In this section, to answer the research questions asserted in the introduction, we propose our political bias evaluation framework. The first question focuses on detecting political bias (if exists) in SERPs and the second one examines if the search engines show the same level of bias. On the other hand, the third question investigates if the source of bias comes from the data. 
In this framework, we initially identify the political perspectives of the news articles automatically via classification. Then, we present the measures of bias and the proposed protocol to identify political bias and investigate the source of bias in SERPs. Preliminaries Our first goal is to detect political bias with respect to the distribution of political perspectives expressed in the contents of the SERPs. Let S be the set of search engines and Q be the set of queries about controversial topics. When a query q ∈ Q is issued to a search engine s ∈ S, the search engine returns a SERP r_q. We define the political perspective of the i-th retrieved document d_i with respect to q as p(d_i). A political perspective can have the following values: conservative, liberal, both or neither. A document's political perspective can be: conservative (c) when the document content is in favour of the conservative political agenda; liberal (l) when the document content is in favour of the liberal political agenda; both or neither (o) when the document content is in favour of both or neither perspective. For our analyses, we deliberately use controversial topics such as abortion, medical marijuana, gay marriage, and the Cuba embargo, which contain opposing viewpoints, since complicated concepts concerning identity, religion, or political leaning are precisely the points where search engines are more likely to provide biased results [12]. Our second goal is related to the first one, as we compare the search engines in terms of the political bias that they show in their SERPs. Our third goal is to find the source of political bias by checking whether the bias comes from the data (input bias). We do this by measuring the bias also in the whole corpus of the SERPs, in addition to the partial dataset, i.e. the top 10 or 100 documents of the corpus, and comparing the corpus bias (if it exists) with the partial one to see whether they are consistent. Measures of Bias On the basis of the definition of bias given in Section 1, bias can be quantified by measuring the degree of deviation of the distribution of documents from the ideal one. Giving a generic definition of an ideal list in the field of bias evaluation poses problems; but in the scope of this work, we can assess the existence of political bias in a ranked list retrieved by a search engine if the presented information significantly deviates from true likelihoods [14]. Using this definition reversely, we can adopt the assumption that the ideal list is the one that minimizes the difference between two opposing political perspectives, which we indicate here as c and l. 3.2.1 Bias Score of a SERP. Formally, we measure the political bias in a SERP r_q as the difference B(r_q) = f(r_q, c) − f(r_q, l), where f is a function that measures the likelihood of r_q in satisfying the information need of the user about the view c and the view l, respectively. Mean Bias (MB). The mean bias averages the bias scores over all queries, MB = (1/|Q|) Σ_{q∈Q} B(r_q). An unbiased search engine would produce a mean bias of 0. A limitation of MB is that if a search engine is biased towards the c perspective on one topic and biased towards the l perspective on another topic, these two contributions will cancel each other out. In order to avoid this limitation we also define another metric. Mean Absolute Bias (MAB). The mean absolute bias takes the absolute value of the bias for each query, MAB = (1/|Q|) Σ_{q∈Q} |B(r_q)|. An unbiased search engine produces a mean absolute bias of 0. Although this measure solves the limitation of MB, MAB says nothing about the perspective towards which the search engine is biased, making these two measures of bias complementary.
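A minimal sketch of the two aggregate measures (again ours, not the authors' code); the per-query bias scores would come from one of the utility functions introduced in the next subsection.

```python
def mean_bias(bias_scores):
    """MB: signed average; opposite leanings on different topics cancel out."""
    return sum(bias_scores) / len(bias_scores)

def mean_absolute_bias(bias_scores):
    """MAB: average magnitude; insensitive to the direction of the bias."""
    return sum(abs(b) for b in bias_scores) / len(bias_scores)

scores = [0.3, -0.3, 0.1]        # one signed bias value per query
mean_bias(scores)                # 0.033..., close to zero despite per-topic bias
mean_absolute_bias(scores)       # 0.233..., reveals the per-topic bias magnitude
```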
Retrieval Evaluation Measures. In IR, the likelihood of a SERP in satisfying the information need of users is measured via retrieval evaluation measures. Among these measures we selected 3 utility-based evaluation measures. This class of evaluation measures quantifies a SERP in terms of its worth to the user, and they are normally computed as a sum of the information gain over the relevant documents retrieved in the SERP. The 3 IR evaluation measures used in the following experiments are: P@n, RBP, and DCG@n. P@n: P@n for a given view is formalized in [8]. However, differently from the previous definition, where only two outcomes were possible, here the perspective function can return any of the labels associated with a political perspective (c, l, and o). Hence, only conservative and liberal documents, that is, documents relevant to the topic, are taken into account, since the perspective function returns o (both or neither) otherwise. Replacing the utility function in the bias score of Section 3.2.1 with P@n, we obtain the first measure of bias: Bias_P@n(r_q) = P@n(r_q, c) − P@n(r_q, l). The main limitation of this measure of bias is that it has a weak concept of ranking, i.e. the first n documents contribute equally to the bias score. The next two evaluation measures overcome this issue by defining discount functions, and the corresponding bias scores are obtained analogously by replacing the utility function with RBP and with DCG@n. Since we are evaluating web users, for P@n and DCG@n we set n = 10, and for RBP we set the persistence parameter p = 0.8. Although this last formulation looks similar to the rND measure, it does not suffer from the first four limitations introduced in Section 2. In particular, all of the presented measures of bias: 1) do not focus on one group; 2) use a binary score associated with the document stance or political perspective, similar to the way these measures are used in IR when considering relevance; also, like in IR, 3) can be computed at each rank; 4) exclude non-relevant documents from the measurement of bias; and 5) provide various user models associated with the 3 IR evaluation measures: P@n, DCG@n, and RBP. Quantifying Political Bias Using the measures of bias defined in Section 3.2, we quantify the political bias of the two search engines, Bing and Google. Then we compare them in terms of the measured bias. Below, we describe each step of the proposed mechanism used to quantify political bias in SERPs. Dataset Construction. After having crawled all the SERPs for both search engines and all queries Q, for each returned document we obtain the political perspective of the document with respect to the corresponding query, both in the top 10 documents and in the whole corpus. Both are obtained automatically via classification, since we already have a dataset labelled by crowdsourcing for training. Bias Evaluation. We compute the bias measures of every SERP with all three IR-based measures of bias, P@n, RBP, and DCG@n, for the first 10 documents, and only with P@n for the whole corpus, because the other measures of bias do not provide meaningful results there due to the rank information used in these measures. We then aggregate the results using the two measures of bias, MB and MAB. Statistical Analysis. To verify that the measured bias does not stem from randomness, for the bias measure MAB we compute a one-sample t-test: the null hypothesis is that no difference exists and that the true mean is equal to zero. If this hypothesis is rejected, this means that there is a significant difference; we claim that the evaluated search engine is biased. Then, we compare the bias measured across the two search engines using a two-tailed paired t-test: the null hypothesis is that the difference between the two true means is equal to zero. If this hypothesis is rejected, this means that there is a significant difference, so we claim that there is a difference in bias between the two search engines.
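The two significance tests of this protocol can be run with SciPy as sketched below; the per-query MAB values are placeholders, not measured data.

```python
from scipy import stats

mab_engine_a = [0.4, 0.2, 0.5, 0.1, 0.3]   # placeholder MAB per query, engine A
mab_engine_b = [0.3, 0.2, 0.4, 0.2, 0.2]   # placeholder MAB per query, engine B

# One-sample t-test: is the true mean bias of an engine different from zero (RQ1)?
t1, p1 = stats.ttest_1samp(mab_engine_a, popmean=0.0)

# Two-tailed paired t-test: do the two engines differ in bias on the same queries (RQ2)?
t2, p2 = stats.ttest_rel(mab_engine_a, mab_engine_b)
```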
Dataset In this work, we used two datasets for political bias evaluation. The first dataset contains SERPs crawled from two search engines, Google and Bing. The second dataset, on the other hand, is composed of articles published in well-known newspapers. News Articles in SERPs. We obtained the controversial queries issued for searching from ProCon.org [13] and applied some filtering steps to the initial query set. After filtering, the final query set size became 57. We submitted each query in the final query set to the US News search engines of Google and Bing using a US proxy. Then, we extracted the whole corpus returned by both engines in response to all the queries in the set. Note that the data collection process was done in a controlled environment such that the queries were sent to the search engines at the same time. For more details about the selection of the queries and the crawling of the SERPs, please refer to our previous work [8]. After having crawled all the SERPs returned from both engines and extracted their contents, we annotated the top 10 documents. We obtained the stance label of each document with respect to the queries via crowdsourcing. To label the political perspective of queries, we also used crowdsourcing. To obtain the political perspective of documents, we transformed the stance labels into political perspectives based on the perspective of their corresponding queries. The details about our crowdsourcing campaigns as well as the transformation process can also be found in [8]. News Articles in Newspapers. To further enrich the pre-trained BERT model for the perspective detection task, we crawled the article contents of selected newspapers. To choose the newspapers to be crawled, we prepared a list by utilizing the media bias chart of AllSides.com [1]. AllSides.com is a US-based online website that shares information and ideas from all sides of the political spectrum with the purpose of fighting filter bubbles and polarization. The organization has researched media bias since 2012, uses multiple methodologies to rate it, and even holds a patent in this area. Furthermore, AllSides only focuses on American sources, and its media bias rating scale is based on American politics for online news articles. For all these reasons, we chose the media bias chart of AllSides, displayed in Figure 2, to determine the political perspective of the newspapers. Subsequently, we crawled the textual contents of the news articles of the corresponding newspapers in the chart by using a Python library called feedparser, which parses feeds in all known formats such as Atom, RSS, and RDF. Then, we labelled the crawled news articles by weak supervision, labelling the articles of left/lean-left newspapers as liberal, right/lean-right as conservative, and center as neutral. We further used this newspaper dataset to fine-tune the BERT model after the first fine-tuning done with the training data, so that the model learns the topic keywords and semantic relations in the articles and thereby detects the political perspectives more accurately.
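The crawling and weak-supervision labelling step can be sketched as follows; the feed URL and the exact rating-to-label mapping are illustrative assumptions, not the actual configuration used.

```python
import feedparser

RATING_TO_LABEL = {"left": "liberal", "lean-left": "liberal",
                   "center": "neutral",
                   "lean-right": "conservative", "right": "conservative"}

def crawl_outlet(feed_url, allsides_rating):
    """Fetch one outlet's feed and weakly label its articles by the outlet's rating."""
    label = RATING_TO_LABEL[allsides_rating]
    feed = feedparser.parse(feed_url)          # handles RSS, Atom, and RDF feeds
    return [{"title": entry.get("title", ""),
             "text": entry.get("summary", ""),
             "label": label}
            for entry in feed.entries]

# Hypothetical usage; the URL is a placeholder.
articles = crawl_outlet("https://example.com/politics/rss", "lean-left")
```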
Classification To answer the first research question, we used the political perspective labels of the top 10 documents in the first dataset, previously obtained via crowdsourcing. In the scope of this work, we used this labelled dataset to obtain those perspectives automatically. For automatic perspective detection, we used the pre-trained Bidirectional Encoder Representations from Transformers (BERT) model proposed by Google, which presents state-of-the-art results in a wide variety of NLP tasks [5]. BERT's key difference from other deep models is its attention mechanism, the Transformer, which applies bidirectional training to language modelling, as opposed to directional models. In this way, BERT learns the contextual relations of a given word from all of its surroundings (left and right). We split the labelled dataset into train and test sets and fine-tuned the pre-trained BERT model on the training set. Moreover, to help BERT learn the semantic cues in the SERPs better, in this work we crawled the newspapers' contents as a second dataset, which is similar to the news articles in the SERPs, labelled it with weak supervision, and used it to fine-tune the BERT model again. For this second fine-tuning of the already fine-tuned BERT model, we used the weak-supervision labels of the news articles, as depicted in Figure 1 (a schematic fine-tuning sketch is also given after the experiment list below). As the lower-quality labels were being updated by the existing BERT model, the model was expected to simultaneously learn linguistic clues such as semantic relations and important keywords that belong to a specific category (conservative, liberal, or neutral). The second question is highly related to the first one, so we directly used the results of the first question, which showed the levels of political bias in the SERPs of Bing and Google, and then compared the two engines using the statistical analysis described in Section 3.3. To address the third question, we used the whole corpus in the first dataset to investigate whether input bias exists. To track the source of bias, we made use of the final (fine-tuned twice) BERT model to label all the SERPs in the first dataset and checked whether the bias in the top 10 documents is consistent with that of the whole corpus. Results Exp. 1: Fine-tuning BERT on only perspective data using the following inputs: a) only document content with title; b) concatenated query and document content with title; c) only document title; d) concatenated query and document title. Exp. 2: Fine-tuning BERT on only news data using the following inputs: a) only document content with title; b) only document title.
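A schematic of the two-stage fine-tuning with the Hugging Face implementation of BERT is given below; the checkpoint, hyper-parameters, and dataset plumbing are illustrative choices, not the settings reported in the experiments.

```python
from transformers import (BertTokenizerFast, BertForSequenceClassification,
                          Trainer, TrainingArguments)

LABELS = ["conservative", "liberal", "neutral"]
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=len(LABELS))

args = TrainingArguments(output_dir="bert-perspective",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)

# Stage 1: fine-tune on the crowdsourced SERP perspective labels.
# Stage 2: repeat the same call with the weakly labelled newspaper articles.
# trainer = Trainer(model=model, args=args, train_dataset=crowdsourced_ds)
# trainer.train()
# trainer = Trainer(model=model, args=args, train_dataset=weakly_labelled_ds)
# trainer.train()
```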
Cardiac Anomalies in Liveborn and Stillborn Monochorionic Twins Background Cardiovascular anomalies are more common in monochorionic twins, especially with twin-twin transfusion, compared to other twin types and to singletons. Because previous studies are based on fetal and neonatal echocardiography, more information is needed to study prevalence of cardiac anomalies in twin miscarriages, stillbirths, and children after the immediate neonatal period. Methods With specific attention to cardiac anomalies, we reviewed the medical records of 335 selected liveborn twin pairs from the Marshfield Clinic Twin Cohort (enriched for twin-twin transfusion) and all twins (175 pairs) identified in the Wisconsin Stillbirth Service Program cohort of late miscarriages and stillbirths. Results Structural cardiac defects occurred in 12% of liveborn monochorionic twin infants and 7.5% of stillborn infants with twin-twin transfusion compared to only 2% of liveborn dizygotic twins and no stillborn dizygotic infants. The most common cardiac lesion in liveborn twins was ventricular septal defect, which was usually isolated and discordant, preferentially affecting the smaller twin in monochorionic pairs. Among stillborn and miscarried monochorionic twins, the most common cardiac lesion was acardia. Conclusions Monochorionic twins, particularly those with TTT, are at increased risk for a spectrum of structural cardiac malformations which we suggest may be related to asymmetry of the inner cell mass resulting in a smaller poorly perfused twin. In severe cases, limited cardiac and circulatory development in the affected twin leads to acardia. In less severe cases, the smaller infant has deficient septal growth that sometimes results in ventricular septal defect. Previous studies have suggested several reasons for discordant cardiac lesions in MC pairs. Cardiac problems related to circulatory overload such as ROTO occur exclusively in recipients of TTT 1-3 and often improve following fetoscopic laser therapy. 9, 10 Herberg et al 10 in recipient twins, while atrial septal defects (ASD) occurred at similar frequencies in donors and recipients. Van den Boom et al 11 reported increased frequency of coarctation of the aorta among donor twins in pairs with severe TTT, suggesting that aortic coarctation may be an effect of hypovolemia. Al Rais et al 6 discovered ROTO and mitral dysplasia exclusively in TTT recipients, but in pairs without TTT, a variety of cardiac abnormalities were noted, mostly in the smaller twin. These observations suggest that ROTO, pulmonic stenosis, and mitral dysplasia in the recipient and possibly coarctation of the aorta in the donor result from unequal perfusion, while ASD and other cardiac malformations presumably have different causes. Since MC twin embryos develop in close proximity, even minor differences in developmental timing could allow laterality determining signaling pathways to function normally in one member of an MC pair while disrupting laterality development of the co-twin. Observations of increased heterotaxy, atrial isomerism, and looping defects affecting only one member of an MC pair support this hypothesis. 6 Concordance of MC pairs for laterality defects is rare, although Thacker et al 12 reported an interesting monozygotic (MZ) pair with right-and left-sided isomerism, respectively. Another possible mechanism for discordant cardiac anomalies in MC twins is asymmetry of the inner cell masses. 
13 Discordant growth of the inner cell masses with corresponding changes in growth factor expression were observed by Noli et al. 14 Benirschke 15 pointed out that unequal division of the inner cell mass can initiate a sequence of delayed cardiac development leading to lower blood pressure and decreasing placental share for the smaller twin, who then presents later in pregnancy as the "donor" in TTT. In more severe cases, this "donor" twin may develop reversed circulation and degenerate to an acardiac fetus. Asymmetric growth of the fetuses and their placentae with deceased angiogenesis in the smaller twin may also contribute to cardiac malformations in the smaller twin, even without evidence of TTT. 4 This could also account for the increased prevalence of cardiac anomalies in both MZ and DZ twins observed by Herskind et al, in the Danish Twin Registry. 16 Studies based on prenatal echocardiography leave gaps in understanding cardiac defects in twins at both ends of the severity spectrum. Without autopsy-based studies of twin miscarriages, cardiac defects that cause demise prior to 20 weeks are missed. Less severe cardiac anomalies such as ASD and small ventricular septal defects (VSD), which are difficult to diagnose on fetal echocardiography, may also be underestimated in the absence of long-term follow-up. In one of the few studies with follow-up after the immediate neonatal period, Herberg et al 10 found structural cardiac malformations in 10/89 survivors of laser surgery for TTT. Analysis of cardiac anomalies in stillborn twin and liveborn twin cohorts with long-term follow-up data may contribute to understanding of the full spectrum of cardiac defects present in twin pairs and suggest novel mechanisms for the development of cardiac abnormalities in twin pregnancies. During an unrelated study of renal disease, detailed manual medical record review of all twin pairs with an ICD9 or ICD10 coded diagnosis of twin-twin transfusion in the Marshfield Clinic Twin Cohort (MCTC) yielded an incidental finding of VSD in 5/39 (4 donors and 1 recipient). Because this was a much higher prevalence of VSD's than observed in previous studies of congenital heart disease in twins, we decided to perform a similarly detailed manual record review in larger samples of same sex and opposite sex twins from the MCTC. Because the MCTC includes only liveborn pairs, TTT is likely underrepresented, so we also manually reviewed the records of all twin pairs referred to the Wisconsin Stillbirth Service Program for etiologic evaluation looking for differences in the types and prevalence of congenital cardiac malformations between the MCTC cohort, our stillborn cohort, and the published, primarily perinatal cohorts. Materials and Methods The Marshfield Clinic Twin Cohort (MCTC) is composed of 8,722 presumed twin pairs identified based on last name, birthdate, address, billing accounts, and/or natural language processing from 30 years of electronic health records (EHR). 17 These are liveborn individuals with follow-up from 1 to 30 years. During a previous study, electronic query for ICD9 and ICD10 codes indicating TTT revealed 39 pairs, each with TTT confirmed by manual record review. For each of the initial 39 TTT pairs, four additional same sex and four opposite sex pairs (matched for age, duration of medical record available and gender of same sex pairs) were selected for further study. 
Manual of the entire medical record of these additional cases identified an additional 6 TTT cases, 17 MZ pairs with no evidence of TTT (based on monochorionicity and/or zygosity testing), and 6 DZ pairs (discordant for genetic disorders) among the selected same sex pairs. Sixteen pairs proved not to be twins on detailed review and were excluded. The medical records of the 335 remaining twin pairs (45 TTT, 17 other MZ, 149 DZ, and 124 same sex pairs of unknown zygosity [UZ]) were manually reviewed with specific attention to cardiac anomalies, other birth defects, and growth parameters ( Figure 1A). Incomplete records were evaluated on the basis of available data. Cardiac anomalies were diagnosed by echocardiography during routine care, usually after a murmur was heard. Structural defects such as VSD or valvular pulmonic stenosis identified by echocardiography were counted, regardless of whether surgery was required. Transient peripheral pulmonic stenosis, patent foramen ovale, and/or patent ductus in preterm infants were not counted as structural heart disease. Infants with neonatal echocardiograms showing biventricular or right ventricular hypertrophy or clinically significant cardiomyopathy were combined under the term "cardiomegaly/cardiomyopathy." The Wisconsin Stillbirth Service Program (WiSSP) database of 3,137 stillbirths and second trimester miscarriages referred for etiologic evaluation between 1983 and 2017 includes 175 twin gestations in which one or both twins died during the second or third trimester. Of these, 95 pairs were identified as MZ (primarily on the basis of monochorionicity), 2 pairs were identifiable as DZ due to opposite gender, and the zygosity of the remaining 56 pairs was unknown ( Figure 1B). All records submitted at the time of initial evaluation (maternal records, photographs, radiographs, placental pathology reports, autopsy, and chromosomal or other genetic studies) were reviewed with particular attention to cardiac anomalies, other birth defects, and growth parameters. Incomplete records were evaluated on the basis of available data. Structural cardiac anomalies were diagnosed by autopsy, except in two cases identified prenatally by fetal echocardiography without postmortem evaluation of the heart. Pairs with abnormalities attributable to unequal perfusion such as polyhydramnios/ oligohydramnios, marked discrepancy in cardiac weight, twintwin disruption (TTD), and twin arterial reversed perfusion/ acardia were included in the TTT/TTD category whether or not placental anastomoses were studied. Very small amorphic or acephalic fetuses were counted in the "acardiac" category. Cardiomegaly was diagnosed based on heart weight above the 95th percentile at autopsy using the standards of Maroun & Graem 18 or description of "enlarged" heart on autopsy or prenatal ultrasound if no weight was available. Statistical comparisons were carried out on 2x2 contingency tables with using Fisher's Exact Test of Chi-squared with Pearson correction as appropriate for sample size. Structural Cardiac Defects In the MCTC liveborn twin cohort, structural cardiac defects occurred in 15/124 (12%) MZ twin subjects (10/90 TTT and 5/34 other MZ twins) compared to only 6/298 (2%) DZ and 9/248 (4%) UZ subjects. The most common cardiac lesion was VSD accounting for 21/30 or 70% of structural cardiac malformations. VSD occurred in 12/124 (10%) of MZ twins, usually as an isolated defect. In twin pairs with TTT, 6/7 VSDs occurred in the donor twin. 
Among UZ twins, VSD was less common (2%) and always occurred in association with other cardiac anomalies, most frequently ASD. In the DZ group, VSD occurred in only 4/298 (1.3%), usually as part of a recognizable syndrome. More detail regarding structural cardiac anomalies in liveborn twins is provided in Table 1. In the WiSSP cohort, twinning occurred in 175/3,137 (5.5%) of referrals, which exceeds the liveborn twinning rate of 3.4% reported by Martin et al. 19 The increase in twinning among miscarriages and stillbirths is due primarily to the large number with TTT/TTD (73 pairs compared to only 22 other MZ, 56 UZ, and 24 DZ pairs). Cardiac anomalies were identified in 11/146 (7.5%) TTT/TTD infants and 4/112 (3.6%) UZ infants. The rate of cardiac anomalies per twin pregnancy (8.6%) is comparable to the rate of cardiac anomalies in the entire WiSSP cohort (8.5%) reported by Jorgensen et al, 20 as well as the rate of cardiac anomalies in MC twin pairs (9.1%) reported by Manning and Archer. 2 Occurring in about 1.0% of livebirths 21 and 2.6% of unselected stillbirths, 19 VSD was found in only 2/348 stillborn twins, both in association with other malformations. Among stillborn twins, the most frequent defect was acardia (eight fetuses, all with reduction of the head, upper limbs, and/or thorax). One acardiac fetus had monosomy 21, while his pump twin appeared healthy; another shared multiple anomalies with his pump twin. Among the TTT pairs without acardia, there were only two cardiac defects. One larger twin had ectopia cordis presumably due to TTD, and one recipient had a secundum ASD similar to reports of ASD in living recipients of TTT. A more detailed listing of cardiac anomalies in stillborn twins is provided in Table 2. Ventricular Septal Defect Because the frequencies of VSD are dramatically different from expectations in both liveborn and stillborn groups, we made some comparisons with published data as well as between our liveborn and stillborn cohorts. The occurrence of VSD in 3.1% of the MCTC cohort is approximately three times a recent estimate for singletons. 21 Table 3. Cardiomegaly/Cardiomyopathy In the liveborn cohort, cardiomegaly/cardiomyopathy was diagnosed in 9/45 (20%) of the TTT recipients, but did not occur in donors. These nine pairs appeared to have particularly severe effects of TTT, since 4/9 recipients with cardiomegaly also had structural heart defects, and almost half of their donor co-twins had cardiovascular issues (two with structural heart defects and two with anomalies attributable to poor perfusion). None of the DZ or UZ twins had clinically significant ventricular hypertrophy and/or cardiomyopathy on postnatal echocardiography (Table 4). Among stillborn TTT/TTD pairs, 13/73 larger twins and 0/73 smaller co-twins had cardiomegaly (Table 4). Among the stillborn/miscarried twins with cardiomegaly, 12/13 had structurally normal hearts weighing >95th percentile for gestational age or described as "enlarged" at autopsy, while one had structural heart disease along with extracardiac anomalies, which he shared with his smaller acardiac co-twin. Although the other co-twins had normal hearts, several had other anomalies attributable to poor perfusion. In the UZ group, 2/112 infants had cardiomegaly, including one with a single ventricle who was the larger twin on ultrasound prior to demise and one hydropic infant with a structurally normal heart. Both infants had apparently healthy same sex dichorionic co-twins. None of the stillborn DZ twins had cardiomegaly. 
Among pairs dying from TTT, 5/9 pairs with documented cardiomegaly had twin-twin body weight differentials < 20% suggesting that cardiac size may be more sensitive than body weight for identifying pairs with life-threatening TTT. Discussion Our study is the first to specifically examine the type and prevalence of cardiac malformations in stillborn and miscarried twins, and one of very few studies to include follow-up after one year of age for liveborn twins. The detailed manual review of entire medical records provides an opportunity to recognize monozygosity and TTT, even when these are not accurately reflected in the ICD9 or ICD10 diagnostic codes, and to identify individuals with congenital heart disease, even when the diagnosis is made late. It also allows detailed evaluation of twintwin concordance for both cardiac and other anomalies. This approach does, however, result in some limitations. Manual review of medical records is extremely labor intensive, making it impossible to review the entire MCTC cohort. As with any retrospective review, some medical records will be more complete and contain more detail than others. Because most subjects did not undergo genetic zygosity testing, superficial similarity in appearance is not useful in identifying zygosity in stillborn twins, and most medical records of living individuals do not contain data regarding phenotypic similarity such as facial appearance or eye color; MZ pairs are identifiable primarily through placental pathology or due to complications of monochorionicity such as TTT. The discovery of 6 additional TTT pairs and 17 non-TTT MZ pairs among the subset of 153 like-sex pairs reviewed in detail suggests that the entire cohort likely contains about 865 MZ pairs, including 225 TTT pairs that could be identified by manual review. Because manual record review of the 39 electronically identified TTT pairs confirmed that they would have been detected through manual review of the entire MCTC, and the prevalence and types of cardiac malformation appear similar in the 39 electronically identified pairs and the 6 pairs identified through additional manual review, we believe that all 45 TTT pairs included in our study are representative of TTT pairs in the entire MCTC cohort. The enrichment for TTT pairs could have introduced some bias in estimates for risk for MZ pairs, since the number of non-TTT MZ pairs is clearly underestimated, but recalculation excluding the 39 electronically identified TTT pairs yielded a similar (actually slightly higher) prevalence of structural cardiac defects (7/46 = 15%) among monochorionic pairs in the same sex "control" group. Therefore we decided to include all identifiable monochorionic pairs in our analysis. The high prevalence of cardiac anomalies applies only to monochorionic pairs raising the possibility that shared circulation may be a risk factor whether or not TTT has been recognized. Dichorionic MZ pairs cannot be clearly distinguished from same-sex DZ pairs and are combined in the unknown zygosity group which has a much lower prevalence of cardiac malformation. Among the liveborn twins, some cardiac anomalies could have been missed because cardiology evaluation and echocardiogram were performed only when clinically indicated, and therefore, not all of the subjects had echocardiography. In the stillborn group, some cardiac defects could have been missed if small fetal size or severe maceration (as in a fetus papyraceous) limited evaluation of the heart or if the parents declined autopsy. 
Being retrospective and initially exploratory, this study was designed primarily to provide information on the type, concordance, and frequency of congenital heart disease in understudied twin populations. The high frequency of VSD in living TTT and other MC twins, especially in comparison to previous studies based on prenatal ultrasound, indicates a need for more careful postnatal follow-up for MZ twins, particularly MC pairs. The stark difference between the liveborn and stillborn groups, however, prompts speculation about possible mechanisms that could lead to the observed patterns. Because acardia is easily explained as a complication of MC twinning, the high frequency in stillborn and miscarried MC twins is expected; however, the increased prevalence of VSD among liveborn twins, particularly MC pairs, requires further explanation. As the most common structural cardiac anomaly in the MCTC twins, VSD occurs at an incidence of 3.1%, which is approximately three times a recent estimate for singletons. 21 Among liveborn twins, the highest risk for VSD is in MC pairs; however, despite a very high proportion of MC pairs among stillborn and miscarried twins, the prevalence of VSD is lower than expected. If the 9.7% frequency of VSD in liveborn MC twins was applied to the 190 stillborn twins, one would expect 18 twins with VSD, but only one VSD was observed. Different methods of ascertainment cannot explain this discrepancy, since it seems unlikely that echocardiography of symptomatic infants only in the liveborn cohort would be more efficient than autopsy in the stillborn cohort for detection of VSD. Though not intrinsically lethal, VSD occurred in 2.6% (54/2082) of the entire WiSSP cohort, 20 usually as part of a fatal multiple anomaly syndrome. If this frequency of VSD was applied to the 350 stillborn twins in the WiSSP cohort, we would expect 9 fetuses with VSD, but only 2 were observed. As observed in the stillborn singletons, we would expect that most of these VSDs would occur in association with other more lethal anomalies. To explain the dearth of VSD in stillborn MZ twins, we need to consider a mechanism that causes VSD in liveborn MC twins with or without TTT but not in severely affected pairs dying from TTT or other complications of MC twinning. We propose a mechanism of inequality of the twinning process that results in cardiac abnormalities for the smaller twin ranging from acardia in the most severe cases to VSD in less severely affected liveborn MC twins. Inequality of the twinning process and the type and distribution of vascular anastomoses contribute to the severity of complications in MC twins. Arteriovenous anastomoses, present in almost all MC pregnancies, are deep and unidirectional and cause complications if imbalance in the size and/or number of anastomoses results in a net transfusion. 22 To prevent shunting through arteriovenous anastomoses, laser surgery for TTT should obliterate entire cotyledons, but following successful laser surgery, scarring is frequently only superficial, which suggests plasticity of placental development continuing throughout pregnancy. 15 To explain this observation, Benirschke 15 proposed a model in which varying degrees of unequal division of the inner cell mass followed by differential perfusion result in a spectrum of phenotypes ranging from entirely normal MC twins to selective intrauterine growth restriction, TTT, twin reversed arterial perfusion, and acardia. 
We propose an extension of Benirschke's model, whereby an initially unequal division of the inner cell mass results in a smaller twin with fewer precursor cells. 15 In contrast to a localized loss of cardiac progenitors that can be compensated by regeneration, 23 a generalized deficiency of precursor cells may result in delayed growth and development for all organs including the heart and placenta with no extra cells available to replace those missing from vital organs. Due to reduced blood volume and delayed or less robust cardiac development, the small fetus has lower blood pressure, which delays the folding of capillaries to form placental villi thereby further reducing placental share and contributing to the development of unbalanced arteriovenous anastomoses. Delayed placental development may also result in hypoxia of the smaller fetus as noted by Yang et al. 24 Placental insufficiency as a mechanism for congenital heart disease may also be applicable to dichorionic twins, regardless of zygosity, if there is severe asymmetry of placental development as briefly discussed by Bahtiyar. 4 Acardia, which is the most frequent cardiac anomaly in stillborn and miscarried twins, fits easily into Benirschke's model as a complication resulting from decreased placental share if both arterioarterial and venovenous anastomoses are present, creating bidirectional connections between the fetal circulations, which, in the presence of a significant differential in blood pressure, may result in reversed perfusion with regression of cardiac development in the smaller twin along with a risk of high output heart failure and life-threatening hydrops fetalis in the larger pump twin. Mathematical models based on relative size of the twins can predict onset of reversed perfusion/acardia in the small twin 25 and hydrops in the pump twin. 26 In our cohort, however, the death of 6/8 pump twins from other causes prior to onset of hydrops demonstrates other risks inherent in severely unequal twinning. The very early cessation of cardiac development prevents structural evaluation of the transiently beating heart in the "acardiac" fetus. In the absence of arterioarterial and venovenous anastomoses, reversal of circulation is not possible, but imbalance in placental share and perfusion via arteriovenous anastomoses may result in TTT. Van dem Wijngaard et al 27 proposed a positive feedback mechanism where renin/angiotensin mediators produced in response to hypotension and hypovolemia in the donor twin reach the recipient via placental anastomoses further exacerbating existing hypervolemia and hypertension and contributing to high output heart failure and hydrops. Recipient twins, with or without hydrops, are at risk for acquired cardiac lesions including cardiomyopathy and ROTO due to venous hypertension and high cardiac output. In our group of severely affected TTT pairs in which at least one infant was stillborn (excluding acardia), 12/87 (14%) of the recipients had significant cardiomyopathy. Among the liveborn pairs with presumably less severe TTT, 9/45 (20%) of the recipients had clinically recognized cardiomyopathy in the newborn period. The increased incidence of structural cardiac malformations, especially VSD, in donor twins requires an earlier developmental explanation, since cardiac morphogenesis is usually completed before TTT becomes clinically evident. 
We postulate that in addition to causing acardia or TTT, underlying inequality in the twinning process adversely affects cardiac development early in the embryonic period by at least two possible mechanisms: (1) lack of progenitor cells in the smaller fetus, and (2) decreased oxygenation of the smaller twin. In an embryo with deficient cardiac progenitors and limited availability of stem cells for regeneration, deficient or delayed development of the second heart field could plausibly contribute to future septal defects in the donor heart. 28 Furthermore, given the developing septum is sensitive to hypoxia, 29 limitation of oxygenation in a fetus with decreased placental share may contribute to failure of septal development. The observation of decreased placental and fetal weight in liveborn singletons with major VSD 30 provides further evidence for interaction between placental and ventricular septal development. If deficiency in cardiac progenitors and/ or hypoxia due to placental insufficiency contributes significantly to defective septal development, an increase in VSD affecting the smaller twin in severely unbalanced MC twin pairs would be expected. Among stillborn pairs, however, an increase in VSD might be masked by even more severe anomalies such as acardia. While acardia occurs too early for effective intervention, earlier recognition and treatment of less severe degrees of unequal placental perfusion may be lifesaving for twins who do not yet meet the classical criteria for TTT. Cardiomegaly, even in the absence of major differences in body weight, may be a marker for life-threatening TTT. Due to the high prevalence of structural cardiac anomalies in liveborn MC twins, observation for possible cardiac defects, especially VSD, is important at all stages of pregnancy and should continue after delivery.
Shape-conditioned Image Generation by Learning Latent Appearance Representation from Unpaired Data Conditional image generation is effective for diverse tasks including training data synthesis for learning-based computer vision. However, despite the recent advances in generative adversarial networks (GANs), it is still a challenging task to generate images with detailed conditioning on object shapes. Existing methods for conditional image generation use category labels and/or keypoints and give only limited control over object categories. In this work, we present SCGAN, an architecture to generate images with a desired shape specified by an input normal map. The shape-conditioned image generation task is achieved by explicitly modeling the image appearance via a latent appearance vector. The network is trained using unpaired training samples of real images and rendered normal maps. This approach enables us to generate images of arbitrary object categories with the target shape and diverse image appearances. We show the effectiveness of our method through both qualitative and quantitative evaluation on training data generation tasks. Introduction Generating realistic images is a central task in both computer vision and computer graphics. Despite recent advances in generative adversarial networks (GANs), it is still challenging to fully control how the target object should appear in the output images. There have been several approaches for conditional image generation which introduce additional conditions to GANs such as class labels [23,31] and keypoints [21,25]. However, previous approaches still suffer from an inability to control detailed object shapes and lack generalizability to arbitrary object categories. Training data synthesis is one of the most promising applications of conditional image generation. Since the recognition performance of machine learning-based methods heavily depends on the amount and quality of training images, there is an increasing demand for methods and datasets for training recognition models using synthetic data [33,24,26,12]. However, when synthetic training images are rendered with off-the-shelf computer graphics techniques, the trained estimators still suffer from an appearance gap from actual, often degraded test images.
Fig. 1: The proposed shape-conditioned image generation network (SCGAN) outputs images of an arbitrary object with the same shape as the input normal map, while controlling the image appearances via latent appearance vectors.
GANs have also been used to modify synthetic data into more realistic training images, and it has been shown that such data can improve the performance of learned estimators [27,28,2]. These methods use synthetic data as a condition on image generation so that output images remain visually similar to the input images and therefore keep their original ground-truth labels. In this sense, the aforementioned limitation of conditional image generation severely restricts the application of such training data synthesis approaches. If the method allows for more fine-grained control of object shapes, poses, and appearances, it can open a way for generating training data for, e.g., generic object recognition and pose estimation. In this work, we propose SCGAN (Shape-Conditioned GAN), a GAN architecture for generating images conditioned by input 3D shapes. As illustrated in Fig.
1, the goal of our method is to provide a way to generate images of arbitrary objects with the same shape as the input normal map. The image appearance is explicitly modeled as a latent vector, which can be either randomly assigned or extracted from actual images. Since we cannot always expect paired training data of normal maps and images, the overall network is trained using the cycle consistency loss [39] between the original and back-reconstructed images. In addition, the proposed architecture employs an extra discriminator network to examine whether the generated appearance vector follows the assumed distribution. Unlike prior work using a similar idea for feature learning [7], this appearance discriminator allows us to not only control the image appearance, but also to improve the quality of generated images. We demonstrate the effectiveness of our method in comparison with baseline approaches through qualitative analysis of generated images, and quantitative evaluation of training data synthesis performance on appearance-based object pose estimation tasks. Our contributions are twofold. First, to the best of our knowledge, we present the first GAN architecture which uses normal maps as the input condition for image generation. This provides a flexible and generic way for generating shape-conditioned images without relying on any assumption on the target object category. Second, through experiments, we show that the proposed method allows us to generate training data for appearance-based object pose estimation, with better performances than synthetic data generated by baseline GAN architectures. Related work Our method aims at generating shape-conditioned images with realistic appearances, related to prior methods on conditional image generation GANs. One of the potential applications of our method is generating realistic training data, and hence our method is further related to methods applying GANs for bridging the gap between synthetic training data and real images. GANs for Conditional Image Generation Generative Adversarial Networks (GANs) have made considerable advances in recent years [9,22,18,17], and have been successfully applied to various tasks such as image super-resolution [20,16], inpainting [15], and face aging [37]. GANs consist of mainly two networks, generator and discriminator, which are trained in an adversarial manner. The generator generates images so that they are recognized as real ones, while the discriminator learns to discriminate generated images from real images from a training dataset. The generator usually receives a vector of random numbers sampled from an arbitrary probability distribution as input, and outputs an image through the network. However, as discussed earlier, most of the standard GAN architectures do not allow for fine-grained control of the output images. To address this limitation, much research has been conducted on GAN architectures for conditional image generation. There have been several approaches to use class labels as a condition on generated images and to specify which object category to be drawn in the output image [23]. Similarly, some prior work proposed to control the generated images by conditioning them on human-interpretable feature vectors built in an unsupervised manner [5,29]. To increase the flexibility of image generation, some works further used input features indicating where and how the target object should be drawn, such as bounding box [25] and keypoints [21]. 
Alternatively, iGAN [38] and the Introspective Adversarial Network [3] take an approach to use user drawings as a condition for image generation. However, the conditions used in these methods still have a limitation that precise 3D shape control is only possible with specific object categories with hand-designed keypoint locations. In contrast, our method allows for direct control on arbitrary object shapes using normal map rendering, without requiring paired training data. Learning with Simulated/Synthesized Images Due to the limited availability of fully-labeled training images for diverse computer vision tasks, there is an increasing attention on synthetic training data. Computer graphics pipelines have been employed to synthesize images with desired groundtruth labels. Such a learning-by-synthesis approach is especially efficient for tasks whose ground-truth labels require costly manual annotation, such as semantic segmentation [24,26] and eye gaze estimation [33]. However, synthetic images still suffer from a large gap from real images in terms of object appearance and often degraded imaging quality, and hence the learned estimator cannot directly achieve desired performance on real-world input images. To fill the gap between training (synthetic) and test (real) image domains, there have been proposed many domain adaptation techniques. In addition to research attempts on the learning process [36,30,32,1,8], GANs have been also shown as promising tools for bridging the domain gap. Shrivastava et al. proposed the SimGAN that modifies the input synthetic images to be visually similar to real images, and showed that such an approach improves the baseline performances on tasks like hand pose and gaze estimation [27]. RenderGAN [28] takes a similar approach to convert simple barcode-like input images into realistic images. CycleGAN [39] architecture provides a way to mutually convert images from two different domains without requiring paired images, and also be applicable to the domain adaptation task. Bousmalis et al. proposed the pixel-level domain adaptation (PixelDA) approach which transfers source images to the target domain under the pixel-level similarity constraint. Essentially, synthetic images were used as a strong constraint on output images in these methods, and GANs were restricted only to modify the imaging properties of the target object. In contrast, since our method uses texture-less normal maps to provide purely shape-related information to the generator, it allows for a full flexibility to control object and background appearances. SCGAN: Shape-conditioned Image Generation Network The goal of SCGAN is to generate images of arbitrary object categories, with the same shape as the input normal map. While the training process requires an access to both normal maps and real images of the target object, in practice it is almost impossible to assume paired training data. To this end, SCGAN adopts the idea of cycle-consistency loss [39] and the whole network is trained using unpaired training images. Furthermore, to maximize the flexibility of object appearances, the image generator also takes an appearance vector as input, in addition to the normal map. By training the network so that appearance-related information is represented only with the appearance vector, our method realizes the shape-conditioned image generation task more efficiently and accurately. Network Architecture As illustrated in Fig. 2, the proposed architecture consists of five convolutional neural networks. 
G_I is an image generator that takes an appearance vector z ∈ R^n and a normal map N ∈ R^(m×m) as input, and outputs an image I ∈ R^(m×m). Conversely, G_N and G_z are the normal map and appearance vector generators, with partially shared network weights, that convert an image I into a normal map N and an appearance vector z. Each data modality has its own discriminator: D_I, D_N, and D_z. While D_I and D_N judge whether the input image and normal map are real or generated, D_z judges whether the input appearance vector is one sampled from a Gaussian distribution or not. As described earlier, the proposed network is designed to be trained on unpaired training samples using the cycle-consistency loss [39]. While the main goal of our approach is to train the image generator G_I, the normal map and appearance generators (G_N and G_z) are also trained and used to back-reconstruct each modality and compare it with the original input. However, if we only consider the generators and discriminators of images and normal maps, the generators tend to satisfy the cycle-consistency loss by embedding hidden information in the intermediate data. For example, if the image generator learns to embed the input information in the output image, the normal map generator can recover the original normal map without taking into account the object shape in the intermediate image. To avoid such situations, we also enforce the network to learn to separate shape and appearance information by introducing the appearance generator and discriminator. The proposed network effectively generates shape-conditioned images by modeling the appearance variation in the training data as a Gaussian appearance vector, while also allowing us to explicitly sample appearance information from actual images using the appearance generator G_z. Training Loss We train discriminators and generators using the WGAN-GP loss [10], which is based on the Wasserstein-1 distance between real and generated data distributions. The loss functions L_d and L_g for discriminators and generators, respectively, are defined as

L_d = E_{x̂ ∼ P_x̂}[D(x̂)] − E_{x ∼ P_x}[D(x)] + λ_gp E_{ẋ ∼ P_ẋ}[(‖∇_ẋ D(ẋ)‖_2 − 1)^2],   (1)
L_g = −E_{x̂ ∼ P_x̂}[D(x̂)],

where x is real data (image, normal map, appearance vector), x̂ is generated data from the corresponding generator, and ẋ is a randomly weighted sum of real and generated data. P_x, P_x̂, and P_ẋ indicate the distributions of each kind of data, and E represents the expectation over the distribution. The third term of Eq. (1), the gradient penalty weighted by λ_gp, has the effect of stabilizing the adversarial training [10]. In our implementation, while the three discriminators are trained using their individual discriminator losses, all generators are jointly trained by minimizing a joint loss function L(G_I, G_N, G_z) that also takes the cycle-consistency losses into account:

L(G_I, G_N, G_z) = L_g(G_I) + L_g(G_N) + L_g(G_z) + λ_I L_cyc(I) + λ_N L_cyc(N) + λ_z L_cyc(z),   (2)

where λ_I, λ_N, and λ_z are weights for the cycle-consistency loss terms, each of which is defined as the distance between the input and the back-reconstructed output. These weights are required to balance the discriminator and cycle-consistency losses in each domain, and they control how strictly the model should maintain the input shapes. The image I and normal map N are sampled from the distributions of real data P_I and P_N, and z is an appearance vector sampled from a zero-mean Gaussian distribution N(0, σ^2).
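A generic PyTorch sketch of the critic loss in Eq. (1) is shown below; it implements the standard WGAN-GP formulation cited from [10] and is not the authors' released code. The same loss form applies to each of the three discriminators D_I, D_N, and D_z; the penalty weight of 10 is the value commonly used with WGAN-GP and is an assumption, since the text does not state it.

```python
import torch

def critic_loss(D, real, fake, gp_weight=10.0):
    fake = fake.detach()  # the critic update does not back-propagate into the generator
    wasserstein = D(fake).mean() - D(real).mean()
    # Gradient penalty on random interpolates between real and generated samples.
    eps = torch.rand([real.size(0)] + [1] * (real.dim() - 1), device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(D(interp).sum(), interp, create_graph=True)[0]
    penalty = ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
    return wasserstein + gp_weight * penalty

def generator_adversarial_loss(D, fake):
    # Adversarial part of the joint generator objective in Eq. (2).
    return -D(fake).mean()
```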
Figure 3 shows the details of the generator/discriminator networks. The architecture of the generator network follows Zhu et al. [39]; the network mainly consists of convolution (Convolution-Pixelwise normalization-ELU) blocks, deconvolution (Deconvolution-Pixelwise normalization-ELU) blocks, and ResNet blocks [11]. As described earlier, the parameters of the first six convolution blocks of the normal map generator G_N and the appearance vector generator G_z are shared. The discriminator networks for images and normal maps also consist of convolution blocks and output a scalar value indicating the discrimination result through a fully connected layer. The appearance discriminator network consists of a fully connected layer followed by an Instance normalization-ELU block, and also outputs a scalar value through a fully connected layer. The size of the input images and normal maps is set to 64 × 64 pixels, and the feature maps are downsampled to 16 × 16 before the ResNet blocks. During training, each discriminator was trained independently with respect to its corresponding discriminator loss. Then the generators were trained according to Eq. (2).
Fig. 3: Details of the generator/discriminator networks. N, I, and z indicate normal map, real image, and appearance vector. Parameters of the convolutional layers are indicated as CcSsKk, i.e., a feature map is convolved into C channels with stride S and kernel size K.
Experiments We demonstrate the performance of the proposed SCGAN architecture through both qualitative analysis and quantitative evaluation. As a qualitative analysis, we compare shape-conditioned generated images from the proposed method and other baseline methods in terms of both the accuracy of the object shape and the diversity of object appearances. In addition, we show some ablation studies to analyze the efficiency of the proposed network design. As a quantitative evaluation, we further compare the performance of an appearance-based object pose estimator using the generated images from different methods as training data. Training Datasets In both qualitative and quantitative experiments, we take three object classes as examples: cars, sofas, and chairs. Table 1 shows details of the training datasets. Each dataset consists of both real images and normal maps. Real images were sampled from the LSUN dataset [35] with a simple filtering process to select images showing a single and sufficiently large target object. Using a pre-trained object detector [14], we accepted images with only one bounding box of the target class whose area is larger than 25% of the whole image. After the filtering process, there were in total 83,765, 151,758, and 386,370 images for sofa, chair, and car, respectively. These images were extended to a 1:1 aspect ratio by zero-padding the borders. Figures 4 (a), (b), and (c) show samples of the sofa, chair, and car images used for training. The top row is real images from the LSUN dataset after post-processing, and the bottom row is normal maps rendered using models from the ShapeNet dataset. As can be seen in cases such as the top-middle example in Fig. 4 (b) and the top-left example in (c), the real images still contain some occlusions and mismatched object poses compared to the normal maps even after the automatic filtering, which illustrates the fundamental difficulty of handling unpaired data. We used 3D models taken from the ShapeNet dataset [4] to render the normal maps. Using 3,173, 6,778, and 3,385 models for sofa, chair, and car, the normal maps were rendered so that the pose distribution roughly resembles the real image dataset. Table 1 lists the ranges of camera poses for each object, where the virtual camera was placed with increments of 5 degrees. In total, there were 114,228, 515,128, and 257,260 normal maps for sofa, chair, and car. Since the position of the object also differs in the real images, during training we also applied random shifting and scaling to these normal maps.
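The image filtering rule described above reduces to a simple predicate; the sketch below is our paraphrase of it, and the detection format (a list of class/box pairs from the pre-trained detector) is an assumption.

```python
def keep_image(img_w, img_h, detections, target_class, min_area_ratio=0.25):
    """Accept an image only if it contains exactly one target-class bounding box
    covering at least `min_area_ratio` of the image area."""
    boxes = [box for cls, box in detections if cls == target_class]
    if len(boxes) != 1:
        return False
    x0, y0, x1, y1 = boxes[0]
    return (x1 - x0) * (y1 - y0) >= min_area_ratio * img_w * img_h
```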
Table 1 lists the ranges of camera poses for each object, where the virtual camera was placed with increments of 5 degrees. In total, there were 114,228, 515,128, and 257,260 normal maps for sofa, chair, and car. Since the position of the object also differs in the real images, during training we also applied random shifting and scaling to these normal maps. Baseline Methods Although there is no other method directly addressing the same task of shape-conditioned image generation, we picked two closely related approaches as baseline methods: SimGAN [27] and the pixel-level domain adaptation (PixelDA) network [2]. The network architectures, discriminator losses, and training hyper-parameters of these baseline methods were set to be the same as in our method (SCGAN) for a fair comparison, while the method-specific losses stayed the same as in the original papers. Following the original method, SimGAN does not have an input appearance vector and there is no mechanism to change the appearance of generated images. Since these methods were designed to modify rendered images of textured 3D models, we also evaluated them using textured renderings as the input condition. The textured images were rendered with the same settings as the normal maps. Figure 5 shows examples of generated images from each method. Figures 5 (a), (b), and (c) correspond to the cases of sofa, chair, and car, respectively. In each figure, the first row shows the input normal maps, and the second and third rows show the output from SCGAN and SimGAN using these normal maps as input. Comparison of Generated Images It can be seen that SCGAN generates more naturalistic images than the baseline methods. SimGAN could not successfully modify normal maps and failed to generate realistic images in most cases. In addition, there are many cases where the baseline methods failed to generate realistic backgrounds in Fig. 5 (a). This illustrates the advantage of our method, which does not rely on a strong constraint, unlike the baseline methods that minimize the distance between the generated and input images. Figure 6 further shows more output examples of SCGAN, using the same normal map but with different appearance vectors. Figures 6 (a), (b), and (c) are the input normal map and generated images of sofa, chair, and car, respectively. In each figure, the first column shows the input normal map and textured image. The rest of the first row shows the generated images from the normal map using SCGAN, and the second row shows the output images from the textured image using SimGAN and PixelDA. Since SimGAN cannot control the output image appearance, only one example is shown for it. While the baseline methods cannot control object shapes separately from the appearance, SCGAN can generate images with the same shape and diverse appearances. It is noteworthy that in Fig. 6 (a) the output images also keep the cushion placed on the sofa, which is not an easy case for keypoint-based methods. Fig. 7: Generated images without the real image reconstruction error and without the appearance discriminator. The first rows show input normal maps, and the remaining rows show output images generated by SCGAN, by SCGAN without the real image reconstruction loss, and by SCGAN without the appearance discriminator loss. Ablation Study In Fig. 7, we further show the effectiveness of the individual loss terms in Eq. (2).
To demonstrate the effect of the proposed architecture with its separate appearance modeling and cycle-consistent real image reconstruction loss, we evaluated models trained without the real image reconstruction error and without the appearance discriminator. Figures 7 (a), (b), and (c) correspond to the cases of sofa, chair, and car, respectively. In each figure, the first row shows the input normal maps. The second row shows the output using all losses in Eq. (2). The third row corresponds to the training result without the real image reconstruction error (λ_I was set to zero), and the fourth row corresponds to the case trained without the appearance discriminator. These examples show that the proposed approach improves the overall image quality by using these losses. The real image reconstruction error significantly contributes to the realism of generated images, and the results without the image reconstruction error mostly failed to generate object appearances. When the network was trained without the appearance discriminator, the generated images sometimes became highly distorted, as can be seen in the middle columns of Fig. 7 (a). Appearance Representation As a consequence of the cycle-consistent training, the appearance generator G_z can also be used to extract appearance vectors from real images for generating new images. Figure 8 shows some examples of images generated using appearances sampled from real images. Figures 8 (a), (b), and (c) correspond to the cases of sofa, chair, and car, respectively. As can be seen in these examples, SCGAN can generate shape-conditioned images with an appearance similar to that of the source images. This illustrates the potential of SCGAN for modifying the pose and shape of objects in existing images. Handling Unknown 3D Shapes Another advantage of our method is that it can take an arbitrary normal map as input, even one rendered from hand-crafted objects. In Fig. 9, we further show the output from the sofa image generator using hand-crafted sofa objects and shapes from the other object classes. The hand-crafted models were created by a person who has never experienced 3D modeling, and consist of basic 3D shapes without any texture. Each block corresponds to the result of one 3D model, with the same three appearances. The first rows are input normal maps, and the second rows are generated images. Even when the object shape is significantly different from ordinary sofa shapes, SCGAN successfully generates their corresponding images. As can be seen in the bottom-right blocks, the proposed method tries to map the object texture to the input shape even when the shape comes from completely different object categories. Fig. 9: Examples of generated images from unknown 3D shapes. Each block corresponds to one 3D model. The first rows are input normal maps, and the second rows are generated images. Training Data Generation for Object Pose Estimation Since SCGAN also keeps the object pose the same as in the input normal maps, the generated images can serve as training data for appearance-based object pose estimation. In this section, we compare the effectiveness of SCGAN as a training data generation framework by comparing the accuracy of the trained pose estimator with the cases using generated images from the baseline methods. The architecture of the pose estimation network follows the DenseNet [13], while the last fully connected layer is modified to output 3-dimensional pose parameters.
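As a minimal sketch of the pose estimator just described, a DenseNet backbone whose final fully connected layer is replaced to regress three pose parameters, the following PyTorch snippet is illustrative only; the choice of densenet121 and the particular distance function are assumptions consistent with, but not specified by, the text.

import torch
import torch.nn as nn
import torchvision

# DenseNet backbone with the classification head replaced by a 3-dim pose regressor
# (azimuth, elevation, theta).
model = torchvision.models.densenet121(pretrained=True)
model.classifier = nn.Linear(model.classifier.in_features, 3)

def pose_loss(pred, target):
    # Euclidean distance between predicted and ground-truth Euler angles.
    return torch.norm(pred - target, dim=1).mean()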
The network weights were pre-trained on ImageNet [6], and the whole network including the last layer was trained on each target object dataset. Object poses are represented as Euler angles (azimuth, elevation, theta), and the loss function is set to be the Euclidean distance between the ground-truth and estimated poses. Test data were taken from the ObjectNet3D dataset [34], which consists of images annotated with pose-aligned 3D models. We selected images with the corresponding object annotations and whose object poses stay within the pose range set for the training data. In total, we used 886, 2,547, and 3,939 test images for sofas, chairs, and cars, respectively. We compare the performance of the pose estimator with that of estimators trained using data generated by SimGAN [27] and PixelDA [2]. As in the training of the image generators, random shifting and scaling were also applied to the normal maps. As an indicator of the upper-bound accuracy of this task, we also trained the same pose estimator using the test data via 5-fold cross-validation. In addition, we evaluated the pose estimator trained directly on the textured images to show the estimator performance without any domain adaptation. Similarly, to show the baseline performance of each task, we also evaluated a naïve estimator which always outputs the mean pose in each object category. Table 2 lists pose estimation errors for each method and object category. The estimation error was evaluated as the geodesic distance between the ground-truth rotation matrix R_t and the estimated rotation matrix R, computed as ||log(R^T R_t)||_F / √2 [34]. The first column (Target data) shows the upper-bound performance obtained via cross-validation. The second and third columns show the result using the dataset generated from normal maps, with SCGAN and SimGAN, respectively. Similarly, the fourth and fifth columns show the result using the dataset generated from textured images, with SimGAN and PixelDA, respectively. The sixth column (No op.) additionally shows the result directly using the original textured images. The seventh column shows the naïve baseline performance of the average predictor. The results show that SCGAN achieved better pose estimation performance than the SimGAN-based training results using normal maps, and better or comparable performance in comparison with SimGAN- and PixelDA-based training using textured 3D model images. SCGAN significantly improved the pose estimation performance especially in the case of the chair dataset. This is mainly because chair images have larger appearance gaps from the textured images, and SCGAN successfully generated training images closer to the actual test images. Conclusion In this work, we proposed SCGAN, a GAN architecture for shape-conditioned image generation. Given a normal map of the target object category, SCGAN generates images with the same shape as the input normal map. The network can be trained without relying on paired training data by using cycle-consistency losses, and it is able to generate images with diverse appearances through the latent modeling of image appearances. Unlike prior work on conditional image generation, our method does not rely on any object-specific keypoint design and can handle arbitrary object categories. The proposed method therefore provides a flexible and generic framework for shape-conditioned image generation tasks. We demonstrated the advantage of SCGAN through both qualitative and quantitative evaluations.
SCGAN not only improves the quality of generated images while maintaining the input shape, but also efficiently handles the training data synthesis task for appearance-based object pose estimation. In future work, we will further investigate applications of the proposed method, including a wider range of learning-by-synthesis approaches, together with a more detailed human evaluation of the generated images.
2018-11-29T07:20:29.000Z
2018-11-29T00:00:00.000
{ "year": 2018, "sha1": "16e6b917057c47434efaa71199745c1b5ca3d876", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1811.11991", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "bd4647a52cf76878ef9ee2f80b2ddf056d638c48", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
271302334
pes2o/s2orc
v3-fos-license
Formulation, preparation of niosome loaded zinc oxide nanoparticles and biological activities In this study, zinc oxide nanoparticles (Zn-NPs) were prepared by the green synthesis method and loaded inside niosomes as a drug release system and their physicochemical and biological properties were determined. Zn-NPs were prepared by the eco-friendly green strategy, the structure, and morphological properties were studied and loaded into niosomes. Subsequently, different formulations of niosomes containing Zn-NPs were prepared and the optimal formulation was used for biological studies. Scanning electron microscope (SEM) and dynamic light scattering (DLS) were used to investigate the morphology and size of nanoparticles. Fourier transform infrared spectroscopy (FTIR) and UV–Vis were used to confirm the synthesis of Zn-NPs. Energy dispersive X-ray spectrometer (EDS) determined the elemental analysis of the Zn-NPs synthesis solution and the crystalline structure of Zn-NPs was analysed by XRD (X-Ray diffraction). Furthermore, Zn-NPs were loaded inside the niosomes, and their structural characteristics, entrapment efficiency (EE%), the release profile of Zn-NPs, and their stability also were assessed. Moreover, its antimicrobial properties against some microbial pathogens, its effect on the expression of biofilm genes, and its anticancer activity on the breast cancer cell lines were also determined. To study the cytocompatibility, exposure of niosomes against normal HEK-293 cells was carried out. In addition, the impact of niosomes on the expression of genes involved in the apoptosis (Bcl2, Casp3, Casp9, Bax) at the mRNA level was measured. Our findings revealed that the Zn-NPs have a round shape and an average size of 27.60 nm. Meanwhile, UV–Vis, FTIR, and XRD results confirmed the synthesis of Zn-NPs. Also, the EE% and the size of the optimized niosomal formulation were 31.26% and 256.6 ± 12 nm, respectively. The release profile showed that within 24 h, 26% of Zn-NPs were released from niosomes, while in the same period, 99% of free Zn-NPs were released, which indicates the slow release of Zn-NPs from niosomes. Antimicrobial effects exhibited that niosomes containing Zn-NPs had more significant antimicrobial and anti-biofilm effects than Zn-NPs alone, the antimicrobial and anti-biofilm effects increased 2 to 4 times. Cytotoxic effects indicated that when Zn-NPs are loaded into niosomes, the anticancer activity increases compared to Zn-NPs alone and has low cytotoxicity on cancer cells. Niosomes containing ZnNPs increased the apoptosis-related gene expression level and reduced the Bcl2 genes. In general, the results show that niosomes can increase the biological effects of free Zn-NPs and therefore can be a suitable carrier for targeted delivery of Zn-NPs. 
such as food, health, environment, and medical industries.Moreover, Zn-NPs have antimicrobial applications against a wider range of bacteria and have received much attention in the last two decades 4 .The mechanism of antimicrobial action of Zn-NPs is similar to other metallic nanoparticles and acts mainly through the destruction of the bacterial cell wall 5 .The exact mechanism of Zn-NP action is not yet known, but there are many reports that these effects are caused by the high surface-to-volume ratio of nanoparticles, as well as entering the cell due to its small size, which destroys the membrane structure and DNA molecule 6 .In general, there are various chemical methods for the synthesis of Zn-NPs, however, the nanoparticles obtained from these methods due to the use of dangerous chemical substances, the methods have harmful effects on the environment and require high costs 7,8 .Therefore, it is necessary to find a suitable, inexpensive, and eco-friendly strategy without its residues being non-degradable, for the synthesis of Zn-NPs 9 .One of the methods that has recently attracted the attention of researchers is the biological method, and one of these biological methods is green synthesis, in which plant extract has the role of reduction and synthesis of nanoparticles 10 .One of the advantages of using plants is their harmlessness to the environment and their cheapness 11 .Also, because many plant species are natively available to researchers in many countries and can be grown in bulk without the need for specific nutrients, it is very easily available 12 .Another advantage of using plants is that the extract of plants has secondary compounds such as terpenes and terpenoids, which have reducing properties and accelerate the synthesis process of Zn-NPs 13 .Also, another challenge of using Zn-NPs is their targeted release, which can use nanocarriers such as niosomes 14 .Also, another pharmaceutical challenge is the targeted delivery of the drug to the cells, slow-release, reducing the side effects of the drugs, and increasing the stability and half-life of the drugs 15 . Niosomes are formed as a new drug delivery system by self-assembly of non-ionic surfactants in aqueous medium 16 .Niosomes as drug carriers reduce toxicity and significantly increase therapeutic indicators.Niosomes can enclose the drug and reduce its toxicity in the body, enhance the stability of the drug, and use a lower dose of the drug 17 .Moreover, the advantages of using niosomes compared to other nanocarriers such as liposomes are the ability to load hydrophilic and hydrophobic drugs, lower price, and more stability 18 .So far, few studies have been conducted on the pharmaceutical use of metal nanoparticles in niosomes 19 .In one of the studies conducted by Federica Rinaldi et al., silver nanoparticles were trapped inside the niosome structure, their chemical and biological properties were investigated 20 .Their results demonstrated that the entrapment efficiency of AgNPs is around 1-4% and the rate of release of AgNPs from niosomes is slow.Another report is by Farideh Rezaie Amale, et al. who determined the properties of gold nanoparticles synthesized by the green synthesis method and made several formulations of niosomes containing gold nanoparticles.In this study, the entrapment efficiency was reported 34.49% ± 0.84, and the authors stated that with the increase in the concentration of niosomes, the cytotoxic effects also increased 21 . 
One of the goals of this research is the preparation of Zn-NPs through the green synthesis route, the investigation of their physical and chemical properties, and finally, the loading of Zn-NPs into niosomes.The characteristics of the synthesized niosomes including size and morphology, their antimicrobial and anticancer effects were evaluated. Materials and methods The surfactants (including Span 60, and Tween 60), 96% ethanol, DMSO, agar, Mueller Hinton broth, crystal violet solution, and chloroform were obtained from Merck.Cholesterol, penicillin, streptomycin, zinc nitrate, cell culture medium, Trypan blue, and MTT dye were prepared from Sigma Aldrich.Bacterial cells and cell lines were taken from the center of biological and genetic resources of Iran.RNA extraction and cDNA synthesis kits were purchased from Qiagen, United States. Plant collection and extraction In this study, the aerial parts of Artemisia scoparia were obtained from the plant bank of Iran Biological Reserves Center with herbarium number 1326 and were approved by an expert botanist.First, plant parts should be dried in an environment away from light and powdered using a grinder.10 g of herbal powder was mixed with 50 ml of distilled water and extracted by maceration method.The prepared extract was filtered by filter paper (Whatman, Germany). Synthesis of zinc oxide nanoparticles (Zn-NPs) by green method We claim that all experiments (including plant collection) complied with relevant institutional, national, and international guidelines and legislation.To obtain Zn-NPs with high purity, 100 ml of zinc nitrate with a concentration of 1.5 mM was added to 20 ml of the aerial part extract of A scoparia, and then it was treated with 10 ml of sodium hydroxide (1 M).The resulting mixture was maintained at a temperature of 60 °C and on a stirrer (MS300HS, Jhal Tehiz, Iran).After 24 h, the formation of a white color was observed, which indicates the appearance of Zn-NPs.For further purification, the resulting solution was centrifuged at 13,000 rpm for 5 min (Sigma, model 2.16, USA), washed with distilled water, and then with ethanol and dried 22 . Preparation of niosome-loaded Zn-NPs To prepare the niosome-loaded Zn-NPs using the thin layer hydration method, cholesterol and surfactants (Span 60 and Tween 60) with different Mol ratios were dissolved in 10 ml chloroform (Merck, Germany) were placed in a rotary evaporator (113, Evaplus, Italy) at 55 °C and 120 rpm for 20 min under vacuum conditions (Table 3).Then, in the second step, sterile distilled water and 1 mg/ml Zn-NPs were added to the balloon with a final volume of 5 ml, and the rotation was set at 120 rpm and 55 °C for 20 min without vacuum conditions.Finally, the formed lipid vesicles containing the ZnNPs were stored in glass vials at room temperature for 24 h.Then, to reduce the size of the manufactured niosomes, it was placed in a sonicator (Ultrasonic Homogenizer App, 100 H, Hilscher, Germany) with a power of 100 W for 4 periods of 15 s 23 .The final concentration of Zn-NPs in the formed niosome-loaded Zn-NPs was 1 mg/ml and based on different molar ratios of Span 60 and Tween 60, 6 formulations of niosomes were synthesized (Table 3). 
Evaluation of the entrapment efficiency (EE%) The entrapment efficiency of the ZnNPs into niosomes was measured by centrifugation (using a Wit Gamb laboratory technology device, Korea) at 13,000 rotations per minute (rpm) for 90 min. The free Zn-NPs remained in the supernatant and the trapped drug remained in the sediment. In the next step, to calculate the percentage of Zn-NPs entrapment in niosomes, the obtained sediment was dissolved with 5 ml of isopropyl alcohol to disrupt the niosomes and release the entrapped drug. Then, 2 ml of the obtained solution was diluted to a volume of 25 ml with sterile distilled water, and its absorbance, as well as the absorbance of the free drug in the supernatant, was read with an ultraviolet-visible spectrophotometer at 430 nm (Lambda 25, PerkinElmer, USA). Using the following formula, the entrapment efficiency percentage was calculated 24 : Entrapment percentage = (amount of primary Zn-NPs − amount of free Zn-NPs) / amount of primary Zn-NPs × 100. Characterization of nanoparticles After 120 min of reaction time following the addition of the A. scoparia extract to zinc nitrate and the color change of the reaction, ultraviolet-visible spectroscopic analysis of Zn-NPs was carried out using a UV-Vis spectrometer (Agilent Cary 300 spectrophotometer, USA) between 200 and 700 nm. To check the appearance of ZnNPs and niosome-loaded Zn-NPs, micrographs of the nanoparticles were prepared by transmission electron microscopy (TEM). To perform TEM, a drop of Zn-NPs was placed on a copper grid. After that, the samples were washed with distilled water and stained with 2% uranyl acetate. Finally, imaging of the samples was done at 80 kV. In addition, the size and morphology of Zn-NPs synthesized by the green synthesis method and of niosome-loaded Zn-NPs were studied via scanning electron microscopy (SEM). For SEM, nanoparticles were diluted with deionized water (1:100) and a drop was placed on a silicon wafer and dried for 24 h at a suitable temperature. After that, the samples were coated with a 100 Å layer of gold for 3 min under argon at a pressure of 0.2 atm and studied under a field emission SEM. XRD analysis of the Zn-NPs was done to determine the crystalline phases of the Zn-NPs as well as to measure their crystal constants. XRD allows us to know the crystallographic form of Zn-NPs, and the XRD test was carried out with an XRD machine (Explorer, GNR, Italy) with CuKα radiation, scanned over a 2θ range of 2° to 80°. To determine the functional groups of the molecular compounds and, as a result, the possible structure of the compounds, the FTIR test was considered. The FTIR spectra of Zn-NPs and niosome-loaded Zn-NPs were investigated in KBr discs using a PerkinElmer FTIR spectrophotometer (Spectrum Two, USA). FTIR analysis was conducted in a scanning range of 4000 to 400 cm⁻¹ at a fixed resolution of 4 cm⁻¹ at room temperature. A solid sample was examined by the KBr method, and percent transmission was recorded.
To determine the elemental composition of the sample, the purity of the product in selected regions, and the distribution of elements across the imaged area, EDS analysis was used. EDS analysis on single particles was carried out using an Oxford Instruments INCAx-act detector equipped with SEM (JEOL FE-SEM). The particle size, polydispersity index (PDI), and zeta potential were measured using a Zetasizer DLS instrument (Malvern Panalytical, Malvern, UK) at 25 °C. A small vessel was used for testing 1 ml of the specimen. Samples were diluted with a dilution factor of 1:1000 using double-distilled water; size measurements were performed in triplicate and data are expressed as mean ± SD. In vitro drug release studies The release studies of niosomes were conducted using a cellulose acetate dialysis bag (molecular weight cutoff 12 kDa). 2 ml of niosome-loaded ZnNPs and free ZnNPs solutions were placed in separate dialysis bags. Each dialysis bag was suspended in a graduated cylinder with 100 ml phosphate-buffered saline (PBS) and placed on a stirrer. The vessel was placed over a magnetic stirrer (50 rpm) and the temperature was maintained at 37 °C ± 0.5 °C. 1 ml of the sample was withdrawn to determine the Zn-NP concentration and replaced with the same amount of phosphate buffer. The optical absorption was read at 340 nm and the drug release chart was drawn 25 . Stability of niosomes One of the methods of checking the stability of niosomes is to monitor specific characteristics over a defined period of time. Briefly, 1 ml of niosome solution containing Zn-NPs was examined for the size and EE% criteria for one month at two temperatures, 4 and 25 °C 26 . Microdilution method To investigate the antimicrobial effects, the broth micro-dilution test was used. The strains used were Enterococcus faecalis ATCC 29212, Pseudomonas aeruginosa ATCC 15442, Escherichia coli ATCC 25922, and Staphylococcus aureus ATCC 700698. These strains were obtained from the microbial bank of the Pasteur Institute of Iran. For this purpose, different dilutions of free Zn-NPs and niosomes containing Zn-NPs were prepared (7.8-1000 µg/ml) and 100 µl of the dilutions were poured into the wells of a 96-well plate. Then, 5 µl of a 0.5 McFarland suspension of the studied bacteria and 95 µl of broth medium were added to the wells. After overnight incubation at 37 °C, the absorbance of the wells was read at a wavelength of 600 nm. The first well in which no growth was observed was taken as the MIC concentration. Also, the lowest concentration for which the culture of that well on solid culture medium was negative was taken as the MBC concentration. In this test, culture medium without inoculation of microbial cells and wells containing culture medium plus microbial cells were used as negative and positive controls, respectively. In addition, this test was repeated three times 27 . Time kill assay This test was conducted according to the method of Mansouri et al. (2021). In summary, a suspension of 10⁶ CFU/mL was prepared from each of the desired strains, added to a 96-well plate, and treated with a 1/2 MIC concentration; optical density was read at a wavelength of 600 nm at different time points 28 .
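To make the dilution scheme above concrete, the following short Python sketch generates the two-fold dilution series from 1000 down to 7.8 µg/ml and reads off the MIC as the lowest concentration whose well shows no growth; the no-growth threshold applied to the OD600 readings is an illustrative assumption, not a value given in the text.

def twofold_series(top=1000.0, bottom=7.8):
    # Two-fold serial dilutions: 1000, 500, 250, ... down to about 7.8 µg/ml.
    series = []
    c = top
    while c >= bottom:
        series.append(round(c, 2))
        c /= 2.0
    return series

def mic_from_od(concentrations, od_readings, no_growth_od=0.05):
    # MIC = lowest concentration with no visible growth (OD600 below the assumed threshold).
    candidates = [c for c, od in zip(concentrations, od_readings) if od <= no_growth_od]
    return min(candidates) if candidates else None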
Anti-biofilm activity To quantitatively study the biofilm inhibitory effects of Zn-NPs encapsulated by niosome and free ZN-NPs, the microtitre plate method-based crystal violet assay was used.Then, a 24 h culture was prepared from each strain, and then inoculated into a Tryptic Soy Broth (TSB) medium containing 0.2 glucose, a microbial suspension was obtained, whose turbidity was consistent with 0.5 McFarland.Then, 100 µl of each bacterial suspension were added to each of the wells of the plate, followed by 50 µl of niosome-loaded Zn-NPs and free Zn-NPs at sub-MIC concentrations, and then the plate was incubated for 24 h at 37 °C.After 24 h, the solution on the wells was removed and each well was washed 3 times with sterile physiological serum.Then, the bacteria attached to the wall were fixed with 250 µl of 96% ethanol.After 15 min, the contents of the well were emptied.After drying the plates, 200 µl of crystal violet dye were added (for 15 min).After this time, the extra dyes were washed using sterile distilled water.After drying the plates, biofilm was quantitatively measured by adding 200 µl of 33% acetic acid to each well, and its absorbance at 570 nm wavelength was read by ELISA Reader (JASCO, V-530, Japan) 28 . Biofilm gene expression analysis To measure the expression of biofilm genes that were exposed to nanoparticles, first, the microbial cells were treated with ½ MIC concentration of nanoparticles and the total RNA of the cells was extracted using the high pure kit (Roche, Germany) according to the relevant instructions.The quality and quantity of RNA was determined by spectrophotometry of Nanodrop.Due to the instability of the extracted RNA, the extracted RNA was quickly converted to cDNA by the cDNA synthesis kit (Fermentase, Lithuania).After that, the Real-Time PCR method was used to evaluate the expression of ndvB, fimH, icaD, and agg biofilm genes.During PCR, by increasing the number of DNA copies, the light emitted from the fluorescent markers increases, which can be measured, and since the expression of the 16S rRNA gene in the cell is stable, we use this gene as a housekeeping gene.The sequence of the primers used is listed in Table 1 29 : Cell culture In this study, breast cancer cell lines including MDA-MB 361, MCF-7 and T47D cells and normal HEK-293 cells were purchased from the Pasteur Institute cell bank of Iran.These cell lines were cultured in a CO 2 incubator at a temperature of 37 °C with 5% CO 2 (passage number 1).Also, the contamination of the cells was checked in terms of mycoplasma and they were negative.To culture cells, from fresh medium (RPMI1640) containing 10% FBS and 1% antibiotic (penicillin/streptomycin) was used.When the cells reached 90-95% confluence, the medium was aspirated and the cell monolayer was washed three times with sterile phosphate-buffered saline. Cell monolayers were treated with 1 mL of 0.25% (w/v) trypsin-EDTA, briefly incubated at 37 °C, and observed microscopically to ensure complete cell detachment. 
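For the Real-Time PCR analyses described above (biofilm genes normalized to 16S rRNA, and, in the later sections, apoptosis genes normalized to GAPDH), relative expression is typically computed with the 2^-ΔΔCt method; the following sketch assumes that method and uses purely illustrative Ct values, since the text does not spell out the calculation.

def relative_expression(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    # 2^-ΔΔCt: fold change of the target gene relative to the housekeeping gene,
    # comparing treated cells against untreated controls.
    delta_treated = ct_target_treated - ct_ref_treated
    delta_control = ct_target_control - ct_ref_control
    return 2.0 ** -(delta_treated - delta_control)

# Illustrative values only: a treated sample in which the target gene appears down-regulated.
print(relative_expression(26.0, 15.0, 24.5, 15.2))  # about 0.3-fold versus control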
Cytotoxicity assay and biocompatibility The MTT method was used to determine the growth-inhibitory effects of niosomes on cancer cells.Briefly, after filling the flask containing the breast cancer cell lines including MDA-MB 361, MCF-7, and T47D cells, the cells were trypsinized and 5000 cells were poured into the wells of the cell culture flask (100 μl per well) and placed in a CO 2 incubator.After removing the supernatant solution, the wells were exposed to various concentrations of niosomes (31.25, 62.5, 125, 250, 500, and 1000 μg/ml) and incubated again for 24 h.After that, they were treated with MTT dye and incubated in the dark for 4 h.Finally, the OD of the wells was read using an ELISA reader at 540 nm wavelength, and the survival rate of the cells was calculated using the following formula 30 : Percentage of cell survival = optical density of control cells/optical density of treated cells.Also, to check the cytocompatibility of the synthesized niosomes, its cytotoxicity was investigated on the normal HEK-293 cell line.www.nature.com/scientificreports/ Measurement of apoptosis gene expression Investigating the expression of some genes related to apoptosis at the mRNA level is one of the ways to study apoptosis induction.In this study, breast cancer cells were treated with niosomes containing Zn-NPs and Zn-NPs alone, and the expression levels of apoptotic genes Bax, Bcl2, Casp3, and Casp9 were investigated using the Real-Time PCR method.In this study, the GAPDH was considered as a housekeeping gene, and the primers of the studied genes are given in Table 2 27 : Statistical analysis The data obtained from this study were put into GraphPad Prism software and the One-Way ANOVA test was used according to the normality of the data distribution, the comparison of the treatments and the control group was analyzed and the P-value was calculated.The significance level was considered at p < 0.05. Ethics approval The authors of this article state that all methods are reported in accordance with ARRIVE guidelines (https:// arriv eguid elines.org).All protocols were performed by the Ethical Committee and Research Deputy of the Islamic Azad University, Science and Research Branch, Iran (IR.IAU.REC.1401.128). Table 2. The primer sequence of the target genes that were used to evaluate the expression of apoptotic genes. Preparation of Zn-NPs and niosomes loaded ZnNPs During the Zn-NPs phytosynthesis process, the A. scoparia extract is mixed with zinc nitrate salt and causes its reduction.One of the characteristics of the reduction of zinc nitrate salt and the synthesis of Zn-NPs is the color change of the reaction and spectrometry 22 .The existence of a peak at the wavelength of 430 nm for Zn-NPs was confirmed by a UV-vis spectrometer (Fig. 1) (The concentration of Zn-NPs used for the UV test was 1 mg/ ml).In this study, to achieve the optimal formulation of niosomes, based on variables such as the molar ratio of Tween 60, Span 60, and cholesterol, the synthesis of niosomes was formulated and 6 formulas of niosomes were formed with a sonication time of 6 min (Tables 3, 4).According to the results obtained in different niosomal formulations, the F2 formulation was considered the optimal formulation in terms of size and EE%, and the rest of the study was carried out with this formulation. Characterization of Zn-NPs and niosomes loaded Zn-NPs The size and morphology of Zn-NPs and niosomes containing Zn-NPs were evaluated through SEM and DLS. 
In some places micrographs of SEM, nanoparticles can be seen in clumps, which can be removed by sonication.Also, TEM results confirmed the spherical structure of Zn-NPs (Fig. 3b).The EDS method was used for the elemental analysis of the synthesized Zn-NPs, and the results show that 76.1% of the reaction mixture contains zinc (Fig. 3a).XRD analysis was performed to prove the crystalline structure of Zn-NPs.The spectrum pattern obtained from the X-ray at θ angle had peaks of 46.56, 36.29,34.45, 33.28, and 33.8, which is consistent with the X-ray diffraction pattern of ZnO (Fig. 3c).The additional peaks observed in XRD may be due to the plant extract that was used in the synthesis of nanoparticles.In the FTIR results, peaks are observed that are related to the constituent compounds of niosomes.The peak at 475-616 cm −1 corresponds to ZN-NPs formation and is attributed to the metal-oxygen component.The peaks of 3555 and 3013 cm −1 are related to the hydroxyl groups in the plant extract, which are possibly related to the phenolic and alcoholic compounds of the extract.The 2922 cm −1 peak corresponds to the C-H stretching, which can be seen in the Span 60 structure (Fig. 3e,d). Drug release test The dialysis bag method was used for the study of the release pattern of Zn-NPs from niosomes, and the release medium was PBS medium, and this test was completed within 72 h.In this study, the dialysis bag was placed in the PBS solution to take samples at different times and check the release pattern of Zn-NPs.The results showed that the rate of release of Zn-NPs in niosomal form was lower than that of Zn-NPs in the free form during 72 h.In addition, 86.15% of free Zn-NPs were released in the medium during the first 8 h, but for niosome containing Zn-NPs, 31.26% of the Zn-NPs were released within 8 h (Fig. 4).As can be seen in Fig. 2, the release of Zn-NPs from niosomes is done in 2 steps: as the results show, in the early hours, the release of Zn-NPs is continuous and happens more than in other hours.Gradually, in other hours until the end (72 h), the release becomes slower and the amount of release decreases.In general, there is no direct relationship between EE% and drug release.It means that the lower the EE%, the faster the release 31 .In this study, because the size of Zn-NPs (89.6 ± 3.8 nm) is small, the loading rate is reduced (which was equal to 31.26%), which leads to the rapid release of Zn-NPs.Studies show that the rate of drug release enclosed in niosomes also depends on the niosome composition 32 .The more rigid the niosomes are, the slower the release, and there are many reports that the niosomes prepared using Span 60, Tween 60, and cholesterol have a tighter structure and the drug release occurs slowly 33 .One of the similar researches has been done by Rinaldi et al., who trapped silver nanoparticles inside niosomes and studied their structural characteristics.The findings of this study showed the low EE% of silver nanoparticles loading and their release becomes much slower.Also, niosomes made of Span 20 and Tween 20 were stable in water, bovine serum, and human serum mediums 34 .The low percentage of loading has also been reported by other researchers, Rezaie Amale et al. 
reported that the loading rate of gold nanoparticles inside niosomes was equal to 34.49 ± 0.84% and the in vitro drug release test showed that after 8 h, 59 ± 2.0% of gold nanoparticles were released from niosomes.However, the free gold nanoparticles were released by 95 ± 1.0% during this time, which shows that niosomes can slow down the release 21 .www.nature.com/scientificreports/ Stability test One of the characteristics of niosomes is their stability over time, and for this reason, the stability test for niosomes was carried out for 30 days at two temperatures of 4 °C and 25 °C, and the two characteristics of size and EE% were considered.The results of our study showed that the niosomes stored at the temperature of the refrigerator 4 °C, had more stability than the niosomes that were stored at the temperature of 25 °C, that is, their size and EE% had low changes.The noteworthy point is that over time, the instability of niosomes was obtained due to the increase in size and decrease in EE%.One of the mechanisms of increasing the size of niosomes with www.nature.com/scientificreports/increasing time is the fusion of niosomes together.Also, studies show that times for hydration, sonication and introducing cholesterol into the niosome structure have a significant effect on its stability.In general, according to the results shown in Fig. 5, it can be concluded that niosomes are more stable at 4 °C fewer structural changes were observed and it is suggested to use this temperature to store niosomes.However, niosomes are more stable compared to other drug delivery systems such as liposomes, but as mentioned, the type of synthesis method, the type of surfactant, and the cholesterol content are effective in their stability.www.nature.com/scientificreports/ Microdilution test In this study, the antimicrobial effects of Zn-NPs containing niosomes, free Zn-NPs, and free niosomes were investigated against the microbial strains including S. aureus, E. faecalis, E. coli, and P. 
aeruginosa.The results revealed that free Zn-NPs had an MIC between 31.25 and 62.5 µg/ml, However, niosomes loaded Zn-NPs had an MIC between 7.8 and 15.62 µg/ml, which indicated an increase in antimicrobial effects by at least 2 times.Also, the MBC value of niosomes was much lower than the MBC of free Zn-NPs.The results showed that free niosomes did not have significant antimicrobial effects against pathogenic bacteria (Table 5).Research studies have shown that the mechanism of Zn-NPs antimicrobial effects is due to the creation of pores on the cell membrane of microbial cells, or some researchers have indicated that Zn-NPs can create toxic oxygen radicals (ROS) inside microbial cells 35 .One of the other mechanisms of the antimicrobial effects of Zn-NPs is to destroy the balance of the entry and exit of minerals due to the entry of Zn-NPs across the cell membrane, as well as the leakage of www.nature.com/scientificreports/intracellular proteins and enzymes, inhibition of cell growth and death 36 .It is noteworthy that when Zn-NPs are loaded into niosomes, the antimicrobial effects increase.Numerous reports show that one of the reasons for decreased MIC value for niosomes and therefore the enhancement of bactericidal activity is due to the interaction of the bacterial membrane with the lipid structure of niosomal vesicles and the targeted delivery of Zn-NPs into the bacterial cell.Bacterial cell membrane interaction with niosomes takes place through three mechanisms: membrane integration, contact diffusion, and adsorption 28 .Iqbal et al., synthesized Zn-NPs by plant extract and investigated their antimicrobial effects on some microbial pathogens.The findings of this research showed that the MIC of Zn-NPs was 75 µg/ml for P. aeruginosa and 37.5 µg/ml for other bacteria studied 37 .Umavathi et al., synthesized Zn-NPs using the green synthesis method by aqueous extract of Parthenium hysterophorus and evaluated its activity on pathogenic bacteria and fungi.The researchers observed that Zn-NPs had significant antimicrobial effects against E. coli and stated that the antimicrobial potential of Zn-NPs is due to the attachment of the Zn-NPs to the cell membrane which has a negative charge and is attached to sulfur-containing amino acids and can disrupt the electron transport chain and DNA replication 38 . Time kill assay To further study the kinetics of the antibacterial potential of niosomes loaded Zn-NPs and Zn-NPs alone, the time kill assay was considered.As you can see in Fig. 6, as a result of the treatment of bacteria with niosomes, the optical density is lower than other treatments in a period of 72 h and can inhibit the growth of microbial pathogens. Anti-biofilm activity In this test, the biofilm inhibition effects of niosome's optimal formulation were compared with Zn-NPs alone.Biofilm formation is one of the pathogenic mechanisms of microbial pathogens and in this test, which used the crystal violet colorimetric method, bacterial cells were exposed to ½ MIC values.We observed that niosomes can destroy the biofilm formed by microbial cells significantly (2-4 times) compared to free Zn-NPs (Fig. 7).The anti-biofilm effects of niosomes have been reported in many studies.The effects of biofilm inhibition by streptomycin-containing niosomes were investigated by Mansouri et al. 
who observed that niosomes can have two to fourfold anti-biofilm effects and even at concentrations 4 to 8 times lower than the MIC of niosomes, can eradicate biofilms.Some studies have shown that one of the mechanisms of the anti-biofilm effects of niosomes is the electrostatic binding of niosomes that have a positive charge to the surface of biofilms that have a negative charge and targeted drug release into the biofilm structure 39 . Impact of niosome on biofilm gene expression The expression of biofilm genes in bacteria plays an important role in the formation of biofilm, so to investigate the impact of niosomes on the expression of biofilm genes, methods based on gene expression at the mRNA level such as RT-PCR can be used.In this test, the strains were exposed to ½ MIC value of nanoparticles, and the expression level of biofilm-related genes was studied.Our findings demonstrated that a significant decrease in the expression of biofilm genes by niosomes has occurred, and this can confirm the reduction of the ability of biofilm formation in the studied strains (Fig. 8).Our findings are consistent with many studies in that niosomes inhibit biofilm formation.Khaleghian et al. investigated the biofilm inhibitory effect niosomes encapsulated curcumin and the effects on the expression of genes involved in biofilm formation.Another finding of the study is a significant reduction in biofilm formation potential in a targeted manner and significantly reduced the expression of biofilm-forming genes in S. aureus strains 29 .Studies have shown that some metal nanoparticles can increase and inhibit the transcription of cell factors, and through blocking the transcription of biofilm-forming genes, the expression of genes decreases.Shakerimoghaddam et al. investigated the anti-biofilm effects of Zn-NPs on uropathogenic E. coli biofilm-forming bacteria.The point that the researchers of this research stated was that Zn-NPs can reduce biofilm formation and reduce the flu gene expression, which is effective in E. coli for initiation of attachment to surface and biofilm formation 40 . Cytotoxicity evaluation and cytocompatibility In this study, the cytotoxic effects of niosome-loaded Zn-NPs and free Zn-NPs against MDA-MB 361, MCF-7, and T47D cancer cells were investigated.The findings showed that niosomes containing Zn-NPs could inhibit cell growth more than Zn-NPs alone and the increase in cytotoxic effects was dose-dependent and the cytotoxic effects increased with increasing concentration.The greatest cytotoxicity was related to the concentration of 1000 µg/ml and at this point, the cell survival rate was equal to 25.19 ± 1.21, 31.24 ± 1.67 and 29.64 ± 2.31 in T47D, MCF-7, and MD-MBA 361 cell lines, respectively.In the case of cells treated with free Zn-NPs, the cell survival rate was equal to 61.13 ± 1.33, 49.61 ± 1.59 and 43.13 ± 2.31 in T47D, MCF-7, and MD-MBA 361 cell lines, respectively (Fig. 9).One of the reasons for increasing the cytotoxic effects of niosomes containing Zn-NPs is the binding of niosomes with the cell membrane, the controlled and targeted release of the drug into the cell 41 .Rezaie Amale, et al., evaluated the cytotoxicity of AuNPs loaded into niosomes with free AuNPs on cancer cells.The researchers stated that the IC50 value in niosomes was much lower than that of nanoparticles (2.62 ± 200 µg/ ml and 3.25 ± 155 µg/ml, respectively) and the results exhibited that the effects of niosomes containing AuNPs have increased 21 .Haddadian et al. 
compared the effects of selenium nanoparticles loaded in niosomes with free selenium nanoparticles.The results showed that when selenium nanoparticles are loaded into niosomes, their anticancer properties increase, which is due to the targeted delivery of the drug into the cell 27 . In addition, the cytocompatibility of synthesized niosomes, niosomes containing Zn-NPs, free Zn-NPs, and free niosomes on the normal HEK-293 cell line was evaluated.As the results show, empty niosomes failed to inhibit cell growth in the HEK-293 cell line, and also when Zn-NPs are enclosed inside niosomes, the cytotoxicity is greatly reduced, which shows the biocompatibility of niosomes.it is important that when the concentration of niosomes containing Zn-NPs increases, it can have toxicity for cells, and therefore it can be concluded that lower concentrations should be used for cytotoxicity studies.Most reports show that Zn-NPs can enter the cell through membrane channels, cell membrane, or transport proteins, also through endocytosis, and can endanger the function of some organelles such as mitochondria.Most of the studies attribute the cytotoxicity of Zn-NPs to oxidative stress, lipid peroxidation, and ROS generation, which leads to DNA mutation, breakage, and finally DNA destruction 42 . Impact of niosomes on apoptotic genes One of the methods of studying the induction of apoptosis in cells is to examine the expression of apoptoticrelated genes.In the current study, 4 genes related to the process of apoptosis including Bax, casp3, casp9, and Bcl2 were selected and their expression levels were investigated after treatment with niosomes containing Zn-NPs and free Zn-NPs.We observed that Casp3 and Casp9 apoptotic gene expression was increased in treated cells.Generally, when caspase is activated, it can activate other genes related to the caspase family, which leads to rapid apoptosis in the cell.As seen in Fig. 10, when cancerous cells are exposed to niosomes loaded Zn-NPs, the expression levels of Bax, Cas3, and Casp9 genes up-regulated (2.41 ± 0.2, 1.87 ± 0.1 and 1.49 ± 0.3, respectively), and Bcl2 gene expression decreases by 0.39 ± 0.05, which indicates the induction of apoptosis.The point that is worth mentioning is that many studies consider the cause of apoptosis to be the creation of ROS in treated cells, which destroys the structure of DNA and cellular components 43 . Pouresmaeil et al., showed that Zn-NPs can induce apoptosis in cancer cells through ROS, and RT-PCR results showed that Zn-NPs can upregulate the Bax mRNA level (p < 0.001) and decrease Bcl2 genes (p < 0.05).Akhtar et al., showed that Zn-NPs can increase the Bax and P53 gene expression in HepG2 cells and downregulate the antiapoptotic Bcl2 genes mRNA level, and also Zn-NPs can lead to the activation of caspase 3, activate toxic oxygen radicals and oxidative stress 44 .Cheng et al., prepared Zn-NPs by the green synthesis method and investigated their apoptotic effects on bone cancers.The results demonstrated that Zn-NPs can increase ROS generation and decrease MMP gene expression.Also, the expression of apoptotic genes Bax, Casp3, and Casp9 increases, which indicates the induction of apoptosis by Zn-NPs nanoparticles 45 .Efati et al., synthesized Zn-NPs using Lepidium sativum L. 
seed extract and investigated its anticancer and apoptotic effects against human colorectal cancer cells.The researchers found that the Zn-NPs increased the expression of apoptotic genes (P53) and decreased the expression of Bcl2 gene 46 .The findings are consistent with the results of our research, which shows the induction of apoptosis and the increase in the expression of apoptotic genes by Zn-NPs. Conclusion In this study, using the green synthesis method, zinc oxide nanoparticles (Zn-NPs) were prepared and loaded into the niosomes.Subsequently, 6 formulations of niosomes loaded Zn-NPs were synthesized and the formulation that had a smaller size and more EE% was considered as the optimal formulation.The optimal formulation was evaluated in terms of stability and release.The synthesized niosomes were characterized by SEM, FTIR, and DLS tests and the results showed that the synthesized niosomes have suitable characteristics.Also, the optimized form of synthesized niosome showed controlled drug release.Its antimicrobial efficiency on some microbial pathogens was investigated, and its cytotoxic effects on different types of breast cancer cells were also evaluated.The results of the antimicrobial tests showed that the synthesized niosomes can increase the antimicrobial effects by 2 to 4 times.Also, the results of cytotoxicity tests showed that synthesized niosomes can have significant cytotoxicity on T47D, MCF-7, and MD-MBA 361 breast cancer cell lines including.Also, the synthesized niosomes did not have significant cytotoxic effects against normal HEK-293 cell lines which shows its biocompatibility.Overall, our findings exhibited that niosomes enhanced the biological potential of free Zn-NPs and by targeting niosomes, drugs can be directed to the target cell with a higher concentration and therefore can be a suitable carrier for targeted delivery of Zn-NPs. 30 Figure 1 . Figure 1.UV-Vis analysis of green synthesized Zn-NPs.The existence of a peak at the wavelength of 430 nm for Zn-NPs was confirmed by a UV-vis spectrometer. Figure 4 . Figure 4. Release of Zn-NPs from niosomes compared with free form.This test was repeated 3 times and as can be seen, Zn-NPs loaded in niosomes have a slower release pattern compared to free Zn-NPs. Figure 5 . Figure 5. Stability of synthesized niosome samples at 4 °C and 25 °C based on EE% and vesicle size criteria.The test is repeated 3 times.*p < 0.05. Figure 6 . Figure6.The antimicrobial effects of niosomes containing Zn-NPs and free Zn-NPs and free niosomes by time-kill assay method in a period of 72 h against E. coli (A), P. aeruginosa (B), S. aureus (C), E. fecalis (D).The results showed that niosomes containing Zn-NPs had more growth inhibitory effects.The tests were repeated triplicate and the mean ± SD was used. Figure 8 . Figure 8. Biofilm gene expression analysis in pathogenic strains after treatment with niosome-loaded Zn-NPs, free Zn-NPs, and free niosome.The gene used as a control in this test was 16S rRNA.The results are repeated triplicate.(p < 0.001***, p < 0.05*, n = 3). Figure 10 . Figure 10.Measurement of gene expression in cells exposure to niosome-loaded Zn-NPs, free Zn-NPs, and free niosomes.As expected, the apoptotic gene expression in the cells that were exposed to niosomes increased significantly and can lead the cell to apoptosis.(p < 0.001***, p < 0.05*, n = 3). Table 1 . Primer sequence of genes involved in biofilm formation used in this research. Table 5 . The MIC and MBC values of niosome loaded Zn-NPs and free Zn-NPs.
2024-07-21T06:17:24.194Z
2024-07-19T00:00:00.000
{ "year": 2024, "sha1": "81058019a9e7277fd509f804142e2567a9f53eda", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1038/s41598-024-67509-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8f39df7ab9842ebad2b0af31330e4d5fc02e8c7d", "s2fieldsofstudy": [ "Medicine", "Materials Science", "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
251117426
pes2o/s2orc
v3-fos-license
Why empathy is an intellectual virtue ABSTRACT Our aim in this paper is to argue that empathy is an intellectual virtue. Empathy enables agents to gain insight into other people’s emotions and beliefs. The agent who possesses this trait is: (i) driven to engage in acts of empathy by her epistemic desires; (ii) takes pleasure in doing so; (iii) is competent at the activity characteristic of empathy; and, (iv) has good judgment as to when it is epistemically appropriate to engage in empathy. After establishing that empathy meets all the necessary conditions to be classified as an intellectual virtue, we proceed to discuss Battaly’s argument according to which empathy is a skill rather than a virtue. We contend, contra Battaly, that the agent who possesses the virtue of empathy: (a) sometimes foregoes opportunities to engage in the activity characteristic of empathy because it is the virtuous thing to do, (b) does not make deliberate errors, and (c) her actions are always ultimately aiming at epistemic goods. I. Introductory remarks Most people share the belief that empathy is a valuable quality for both the agent that possesses it as well as those around them.There is something undeniably positive in being able to attain knowledge of another's mental state.The empathetic agent is both willing and able to determine the state of mind of other agents -for example, -she is able to gain insight into why another person is angry at her.This is valuable for both the agent who is able to understand others as well as for the people around her that feel (and actually are) understood.The question we seek to address in this paper is whether one would be warranted to classify empathy as an intellectual virtue.To achieve this, we examine the concept of empathy through the framework of virtue epistemology. Roughly put, virtue epistemology involves the study of epistemological issues through the concept of intellectual virtue (see Brady & Pritchard, 2006).There are two distinct groups of scholars working in virtue epistemology: virtue responsibilists (such as Baehr, 2006Baehr, , 2011;;Code, 1987;Montmarquet, 1993;Roberts & Wood, 2007;Zagzebski, 1996) and virtue reliabilists (such as Greco, 1993Greco, , 2010;;Pritchard, 2005Pritchard, , 2018;;Sosa, 1980Sosa, , 2007)).One fundamental difference between these two camps is that the former understands intellectual virtues as traits of character (characterbased virtues) while the latter conceives of them as faculties of the mind (faculty-based virtues).In this paper, we focus on the virtue responsibilist project since our goal is to argue that empathy is a responsibilist kind of epistemic virtue. Virtue responsibilists maintain that intellectual virtues are epistemically valuable acquired traits of character.They model their understanding of intellectual virtue after Aristotle's conceptualization of moral virtue (Battaly, 2011, p. 289 1 ; see, also, 2008, p. 645).For them, open mindedness, intellectual courage, and intellectual tenacity are typical examples of epistemic virtues 2 (see, Greco & Turri, 2013).On this view, the openminded agent is predisposed to see "others' ideas as plausible" (Montmarquet, 1993, p. 24; see also, Baehr, 2011, p. 
153).For virtue responsibilists, an agent needs to have a strong desire to acquire epistemic goods (such as knowledge and truth) in order to possess intellectual virtues.Without epistemic motivations, an agent cannot possibly be intellectually virtuous -they lack the necessary drive that is required for the acquisition and development of intellectual virtues.For instance, it is due to their strong epistemic desires that the person who possesses the virtue of open-mindedness is motivated to take under serious consideration the plausibility of other people's viewpoints (Battaly, 2011, p. 289). Despite numerous recent studies on virtue responsibilism, some of which seek to characterize various character traits as epistemic virtues (see, for example, Battaly, 2017;Hazlett, 2012;Kotsonis, 2021a;Watson, 2015), very few virtue theorists have looked into the concept of empathy from the viewpoint of virtue epistemology (for notable exceptions, see, Battaly, 2011;Simmons, 2014).Battaly (2011) is one of the few scholars who has considered the possibility that empathy could be a virtue.Her study is one of the most well-known and influential studies on the concept from a virtue theory perspective.She argues that empathy is not a virtue and should instead be classified either as a capacity or a skill.In this paper, we want to argue contra- Battaly (2011) that empathy is a virtue of the intellect.We seek to foreground this virtue in contemporary discussions of virtue epistemology discussions and forensically appraise its distinctive epistemic value. Our plan for this paper is the following: in the next section we argue that (cognitive) empathy is an epistemic virtue.To accomplish this, we discuss Baehr's (2016) conditions for intellectual virtues and proceed to show that empathy meets all of them.We argue that empathy enables the agents who possess it to gain insight into other people's emotions and beliefs.Having posited that empathy meets all the necessary conditions for a trait to be classified as an intellectual virtue, we proceed in the third section to discuss Battaly's three reasons for not classifying empathy as a virtue: (i) foregoing opportunities, (ii) deliberate errors and (iii) not aiming at the good.We contend, contra Battaly, that the agent who possesses the epistemic virtue of empathy: (a) sometimes foregoes opportunities to engage in the activity characteristic of empathy because it is the virtuous thing to do, (b) does not make deliberate errors, and, (c) her actions are always aiming at the good.In the fourth section, we discuss Annas (1995Annas ( , 2003Annas ( , 2011) and Stichter's (2011Stichter's ( , 2016) ) conceptualization of virtue as skill and draw attention to how their viewpoint challenges Battaly's argument, according to which, skills and virtues are mutually exclusive. II. 
Empathy as an intellectual virtue The term empathy is used to describe a multitude of distinct but related phenomena (Batson, 2009;Battaly, 2011;Cuff et al., 2016;Hall & Schwartz, 2019).Batson (2009), for example, identifies eight different uses of the term empathy: (i) cognitive empathy, (ii) motor mimicry, (iii) coming to feel as another person feels, (iv) projecting oneself into another's situation, (v) imagining how another is thinking and feeling, (vi) imagining how one would think and feel in the other's place, (vii) feeling distress at witnessing another person's suffering, and (viii) feeling for another person who is suffering.Our focus, in this paper, is cognitive empathy -viz., the process by which "one attains a cognitive grasp, belief about, or knowledge of another's mental states" (Battaly, 2011, p. 287).This is because we believe, and want to argue, that cognitive empathy is an intellectual virtue. According to the cognitive understanding of empathy, the defining characteristic of empathy is that it enables the agent to gain insight into other people's emotions and beliefs (see e.g., Boisserie-Lacroix & Inchingolo, 2021;Goldie, 2000;Hodges & Myers, 2007;Smith, 2017;Spaulding, 2017;Stueber, 2006Stueber, , 2012)).The activity characteristic of empathy entails employing available information in order to make judgments regarding what others experience in a given situation (see e.g., Harrelson, 2020;Hodges & Myers, 2007;Stueber, 2012).Cognitive empathy yields epistemic outputs: engaging in successful empathy enables the agent to know the target's mental state (Goldie, 2000, p. 195;Steinberg, 2014).Cognitive empathy should not be conflated with sympathy which involves caring for others (Goldie, 2000, p. 215;Coplan, 2004, p. 146).The agent who is good at empathizing with others does not necessarily care for them (e.g., she does not necessarily have the impulse to help them - Coplan, 2004, p. 146).For example, a good counselor may be able to understand that her client is feeling underappreciated by her partner but that does not necessarily mean that they sympathize with the client (e.g., the counselor might dislike this particular client).Framed this way, empathy is the process by which an agent comes to gain insight into another person's mental state.This does not require that they share the target person's mental state, such as, feeling sad because they are feeling sad, or indeed, care for the other person.What's more, cognitive empathy involves understanding what is going on for the other as another and does not necessarily involve perspective-taking, interpreted here in terms of imagining how one would think and feel in the other's place.For the rest of the paper, unless otherwise specified, we use the term "empathy" to refer specifically to cognitive empathy. In this section of this paper, we proceed to argue that empathy is an intellectual virtue.If the activity characteristic of this virtue is carried out successfully, it enables the agent to acquire epistemic goods about their environment -i.e., know what people such as their partner, children, parents, neighbors, coworkers and/or boss feel and think at a given time.In order to illustrate that empathy is an intellectual virtue, we discuss the conditions that are identified by virtue responsibilists as jointly necessary and sufficient for a trait to be classified as an epistemic virtue.We focus primarily on the conditions identified by Baehr (2016) and demonstrate that empathy meets all of them. 
3 A) The virtue of empathy: The motivational and the affective dimensions As already noted in the introductory remarks, virtue responsibilists argue that epistemic motivations are an integral component of every intellectual virtue.Montmarquet (1993, p. 30) characterizes intellectual virtues as the qualities a truth-desiring agent would want to possess.In a similar vein, Zagzebski (1996, p. 167) argues that intellectual virtues entail a motivation to have "a cognitive contact with reality", while Roberts and Wood (2007, p. 305) maintain that love for knowledge is a necessary condition for the possession of all other intellectual virtues.In more formal terms, Baehr (2016, p. 87) notes that, "A subject S possesses an intellectual virtue V only if S's possession of V is rooted in a 'love' of epistemic goods". The agent who does not have a desire to obtain epistemic goods lacks the epistemic drive that is necessary for the possession of epistemic virtues.Consider, for example, an agent who has no interest in obtaining truth about her environment.Suppose for a moment that he prefers to sit on the couch all day and play videogames.Such an agent does not possess intellectual virtues -he is not interested in obtaining intellectual ends.Epistemic motivations are necessary for an agent to possess intellectual virtues even if the agent seems to consistently act in accordance with a specific virtue.For instance, the person who acts in accordance with the epistemic virtue of curiosity 4 because she is interested in acquiring the means that will enable her to beat her business competitors, does not possess the virtue in question.She is not ultimately driven to act due to her desire for epistemic goods but rather because she wants to make money. In line with the motivational dimension of intellectual virtues, the agent who possesses the virtue of empathy is motivated to act out of their desire for the acquisition of epistemic goods.She is motivated to gain insight into other people's emotions and beliefs out of her epistemic desires (rather than some other ulterior non-epistemic motive -say, making money 5 ).The activity characteristic of empathy might (or might not) spark in them the desire to sympathize with the target, but it is their desire to know the truth that ultimately motivates them.Consider for instance, the following case.Carla, who is a most empathetic person, goes to medical school.During a class, she witnesses one of her professors have a panic attack, in part triggered by a combination of stage-fright, poor sleep quality and an ongoing mid-life crisis.Given that she excels at empathizing, Carla has the epistemic motivations to acquire the truth about her surrounding environment.Empathy allows her to gain insight into her professor's inner state of mind.Nevertheless, because this professor was overly strict when marking her mid-term essay, Carla dislikes them, and as a result, is not feeling any sympathy for them. Besides the motivational dimension of intellectual virtues, Baehr (2016) also argues that intellectual virtues are characterized by an affective dimension.According to Baehr (2016, p. 
89), "S possesses an intellectual virtue V only if S takes pleasure in (or experiences other appropriate affections in relation to) the activity characteristic of V".Baehr gives two main reasons as to why he distinguishes the affective dimension of intellectual virtues from the motivational component.First, there are instances where an agent is motivated to pursue epistemic goods out of a sense of duty rather than out of genuine affection for epistemic goods -for Baehr -this would indicate that the agent lacks epistemic virtues.Secondly, Baehr understands the motivational component as the initial spark of the inquiring process, and this does not guarantee that the agent will enjoy the process through which they come to acquire epistemic goods -i.e., one might desire epistemic goods but may perceive the process through which such epistemic goods are acquired as not worth the effort, dull and/or painful. Whether one understands the affective principle as part of the motivational component of intellectual virtues or as a distinct component, it still remains the case that a person who excels in empathizing takes pleasure in (or experiences other appropriate affections in relation to) the activity characteristic of empathy -i.e., gaining insight into other people's emotions and beliefs.In other words, such a person is not simply driven by their epistemic desires to acquire epistemic good via empathy but also enjoys the activity characteristic of this trait.Consider, for example, again, the case of Carla.Carla does not only have the epistemic motivations to gain insight into her professor's emotions and beliefs, she likewise enjoys the process through which she comes to obtain such goods, 6 . 7 B) The virtue of empathy: The competence and the judgment dimensions Having the motivation to acquire epistemic goods and taking pleasure in the activity characteristic of virtue X do not suffice for an agent to possess virtue X.One must also be competent at the activity characteristic of this virtue.Baehr (2016) calls this the competence dimension of intellectual virtues.For him, "S possesses an intellectual virtue V only if S is competent at the activity characteristic of V" (Baehr, 2016, p. 91).Accordingly, agents who possess strong epistemic motivations but lack the ability to acquire epistemic goods cannot be categorized as intellectually virtuous.More specifically, an agent cannot possibly be considered open-minded if they are incompetent at the activity that is characteristic of this virtue, that being, "able to transcend a default cognitive standpoint in order to take up or take seriously the merits of a distinct cognitive standpoint" (Baehr, 2011, p. 153). An agent who excels in empathizing does not only possess the motivation to acquire epistemic goods and take pleasure in the process.They are also competent at the activity characteristic of empathy -i.e., gaining insight into other people's emotions and beliefs.Going back to Carla's example, Carla has the motivation to acquire the truth about her professor's panic attack, enjoys the process through which she comes into the possession of such goods, and is competent at the activity that is characteristic of this trait.It is due to her competence that she is able to acquire the truth about her professor's inner state of mind.Had Carla being incompetent at the activity characteristic of empathy, we would not consider her an empathetic person. 
Closely related to the competence dimension of intellectual virtues is the judgment dimension that characterizes every agent that possesses intellectually virtues.According to Baehr (2016, p. 92), "S possesses an intellectual virtue V only if S is disposed to recognize when (and to what extent, etc.) the activity characteristic of V would be epistemically appropriate".In other words, the motivation to acquire epistemics goods, the enjoyment of the process through which such goods are acquired and the competence at the activity characteristic of virtue X are not sufficient for the possession of intellectual virtues.The agent must also be able to judge well regarding when it is epistemically appropriate to engage in the activity characteristic of this virtue.Put another way, they must be "able to judge when, for how long, toward whom, and in what manner" to engage in the activity characteristic of virtue X (Baehr, 2016, p. 93).For instance, an agent should not engage in activity characteristic of open-mindedness when discussing with agents that want to indoctrinate or brainwash her. 8 A person who excels in empathizing is characterized by her good judgment concerning the epistemic appropriateness of the activity characteristic of empathy.Consistent with this, Carla judges correctly that it is epistemically appropriate to engage in the activity characteristic of empathy when her professor is having the mental breakdown.In addition, Carla also knows that it is not epistemically appropriate to engage in the activity characteristic of this virtue during normal class time, notably, when her professor is lecturing on human anatomy.In the latter case, trying to gain insight into other people's emotions and beliefs is epistemically inappropriate since: (i) there is no insight to be gained through the activity characteristic of empathy at this time; (ii) it would distract Carla from acquiring epistemic goods through other means, (i.e., giving her full attention to what the professor is saying about human anatomy), which are epistemically more fruitful given the situation. To sum up, the character trait of empathy satisfies all the necessary conditions, identified by Baehr (2016), as jointly necessary and sufficient for the possession of intellectual virtues.The excellent empathizer is: (i) driven to engage in acts of empathy by her epistemic desires; (ii) takes pleasure; (iii) is competent at the activity characteristic of empathy; and (iv) has good judgment as to when it is epistemically appropriate to engage in empathy.Therefore, we argue that we should categorize empathy as an intellectual virtue, 9 . 10 C) The virtue of empathy: The reliability objection and low-grade goods One could argue that empathy lacks epistemic reliability in the sense that the epistemic outcomes of the activity characteristic of empathy are often inaccurate.For instance, Carla may think that she has understood the reasons behind her professor's mental breakdown, but she might be mistaken.This relates to Zagzebski's (1996) success condition of intellectual virtues.According to her, for an agent to possess an intellectual virtue X, they need to be reliably successful in acquiring epistemic goods (see, e.g., Zagzebski, 1996, p. 270).Hence, since the activity characteristic of empathy does not allow even the most skilled empathizer to successfully acquire epistemic goods on a reliable basis, one could argue that perhaps empathy should not be classified as a virtue. 
First, one could challenge the view that empathy lacks epistemic reliability.For instance, an experienced counselor who is competent at the activity characteristic of empathy is reliably successful in acquiring the truth about her client's inner state of mind (see, e.g., Barone et al., 2005 on how accurate empathy can be educated for).Secondly, the success condition of epistemic virtues put forward by Zagzebski (1996) has been challenged by scholars such as Baehr (2011Baehr ( , 2016) ) and Watson (2015) who maintain that reliability is not a necessary condition of intellectual virtues. 11These scholars build on Montmarquet's (1987) arguments against the view that intellectual virtues require epistemic reliability.To illustrate his argument, Montmarquet discusses cases of hostile epistemic environments (e.g., evil demon cases) in which an agent, despite possessing intellectual virtues, is unable to acquire epistemic goods on a reliable basis.For Montmarquet, this does not show that agents stop being intellectually virtuous in hostile environments, but that reliability is not necessary for the possession of epistemic virtues.Aligning with scholars such as Montmarquet (1987), Baehr (2011Baehr ( , 2016) ) and Watson (2015), we maintain that reliability is not a necessary condition for the possession of intellectual virtues.In the case of empathy, the agent who possesses this intellectual virtue is not characterized by her ability to acquire epistemic goods reliably but by her competence in the activity characteristic of this virtue.This should not be taken to imply, however, that the virtuous empathetic agent is not reliably successful at acquiring epistemic goods in non-hostile epistemic environments (viz., one cannot be a virtuously empathic agent and yet consistently fail to correctly infer the mental states of others under normal conditions).It simply shows that reliability is not a necessary condition for empathy to be a virtue. 12 Besides the reliability objection, one could also argue that empathy does not have significant epistemic value since it only yields low-grade epistemic goods.The empathetic agent is not able to acquire important truths (e.g., the chemical composition of oxygen, the second law of thermodynamics) but only low-level goods such as the inner state of mind of a given person (e.g., that John is feeling angry toward the agent because he feels neglected).Still, the kind of epistemic goods acquired through empathy are quite important for the wellbeing of agents both individually and collectively (see, Steinberg, 2014).Being able to understand the emotions and beliefs of other people is crucial for harmonious co-existence (Morris, 2019); and being listened and feeling understood is what a lot of people lack in their lives.From such perspective, it seems that epistemic goods acquired through the activity characteristic of empathy are on par with the goods acquired by virtues such as open-mindedness and intellectual tenacity.Being able to gain insight into the reasons why one's partner is angry at them could make the difference between a successful and unsuccessful relationship -hence such epistemic goods are not of detrimental value.The truth-desiring agent would certainly want to possess such goods. III. 
Empathy as an intellectual virtue: Virtue versus skill Having posited that empathy is an intellectual virtue, we proceed in this section to discuss Battaly's (2011) recent argument according to which empathy should not be classified as a virtue.Battaly puts forward three reasons as to why she believes this to be the case: (i) foregoing opportunities, (ii) deliberate errors and (iii) not aiming at the good.We discuss all three reasons and argue, contra Battaly, that the agent who possesses the epistemic virtue of empathy: (a) sometimes foregoes opportunities to engage in the activity characteristic of empathy because it is the virtuous thing to do, (b) does not make deliberate errors and (c) her actions are always ultimately aiming at the possession of epistemic goods.Battaly (2011) dismisses the idea that empathy is a virtue. 13She argues that, depending on one's understanding of this concept, empathy is either a capacity or a skill.As already discussed, we understand empathy as enabling agents to gain insight into other people's emotions and beliefs.For Battaly, when framed this way, empathy is a skill, and one important reason for that centers on foregoing opportunities, since one may possess the skill but choose when to use it -or for that matter -repeatedly not use it.For her, an empathetic person can sometimes forego the opportunity to exhibit empathy, and this shows that they lack the motivation that is required for the possession of intellectual virtues.Hence, she argues that empathy is a skill. We believe that Battaly is wrong to classify empathy as a skill due to foregoing opportunities.As she herself notes, an agent may fail to exhibit a specific virtue in a given situation but that does not mean that the agent has ceased to possess this virtue.She goes on to point out that, one could "fail to help a friend in need, and still be benevolent" (Battaly, 2011, pp. 293-294).Similarly, we maintain that an empathetic person may fail to gain insight into another person's emotions and beliefs and still possess the virtue of empathy.Nonetheless, Battaly rightly highlights the fact that the reasons for the agent's failure are quite important.She believes that for a virtuous agent to fail and still be virtuous, the reasons of their failure must be due to unforeseeable and uncontrollable events. 
We maintain that the agent who possesses the virtue of empathy sometimes foregoes opportunities to engage in the characteristic activity of empathy because it is the virtuous thing to do.Virtue does not automatically confer superhuman powers on the possessor.An empathetic person does not have boundless empathy.How could they?They would end up in an early grave and be of no use to man nor beast.We argue that the empathetic person is not authentically empathetic if they "have to be" empathetic all the time and toward everyone.Being authentically empathetic toward everyone would be the excess of the virtue of empathy.It might even lead the agent to acute mental distress (whereupon they might be forced to "switch off" or "temper" their empathy as a defense mechanism).If empathy is something an agent cannot have some measure of control over, there's a real risk that they will burnout; and if they have to be empathetic all the time, they are a slave to the virtue.This dovetails with the idea of empathy regulation (see, e.g., Ray & Gallegos de Castillo, 2019) whereby the virtuous empathetic person has the ability to control their empathy and exhibits it only when it is the virtuous thing to do.Different people and different situations merit different levels of empathy (some do not merit any empathy at all), and the agent needs to be practically wise to determine the mean where the intellectual virtue of empathy lies.This relates to Baehr's (2016) judgment component of intellectual virtue: the intellectually virtuous person is good at judging when (and to what extent, etc.) the activity characteristic of empathy would be epistemically appropriate -it is to be expected therefore that they will forego the opportunity to engage in the activity characteristic of empathy when foregoing it is the virtuous thing to do 14 . Battaly ( 2011) also discusses deliberate errors as a further reason for regarding empathy as a skill.She notes that an empathetic person can deliberately engage in "a process that she knows will produce botched effects and false beliefs" (Battaly, 2011, p. 297).For Battaly, this does not show that the agent has forfeited her ability to engage in skillful empathy, but rather that the agents might have decided to willingly produce poor results.According to her, this shows that the skillful empathizer lacks the motivation that is necessary for virtue, therefore, empathy should be classified as a skill.But why should we surmise that empathy is not a virtue on the basis of the fact that certain agents have the capacity to engage in actions that are characteristic of this trait but nevertheless sometimes decide not to do so in a skillful manner due to a lack of epistemic motivations?Lacking the motivations necessary for virtue might simply show that the agent in question lacks the virtue of empathy -instead of being considered as evidence that empathy is a skill.Very few people possess the virtue of empathy and this is, to a large extent, due to the fact that most of us have imperfect epistemic motivations -more precisely -we are not interested in acquiring epistemic goods when doing so would inconvenience us.We prefer to engage in a process that may "produce botched effects and false beliefs" than go out of our way to acquire epistemic goods. Here it might be helpful to discuss the example that Battaly (2011, p. 
298) uses.Jackie and Joan are sisters.While Jackie is able to step into Joan's shoes and see things from her perspective (e.g., understand the reasons why Joan is frustrated with her job and her marriage), Joan lacks this ability -i.e., she is unable to gain insight into her sister's emotions and beliefs.Being frustrated by her sister's inability to see things from her perspective, Jackie decides not to engage in a competent empathetic understanding of her sister's feelings, willingly does a poor job and hence ends up forming false beliefs about her sister's emotions.Battaly argues that Jackie has not lost the skill to be empathetic, for she can truly understand her sister's emotions if she chooses to do so.According to Battaly (2011), Jackie simply lacks the motivation to be truly empathetic and this shows that empathy is a skill rather than a virtue. On the contrary, we believe that, though Battaly readily surmises that her example illustrates that empathy is a skill rather than a virtue, she fails to consider the possibility that empathy could be a virtue and that Jackie does not possess this virtue because, although she is competent at the activity characteristic of this trait, she lacks the epistemic motivation that is required for it.To better explain our argument, let us consider again the case of John.John has the competence that is required for an agent to be open-minded.He is able to consider other people's perspectives and take them under serious consideration.However, John despises his brother Michael.Blinded by his hatred, John is consciously not allowing himself to seriously consider what his brother says.He knows that Michael is an expert in sports, but he purposely denies seriously listening to anything Michael has to say.John clearly lacks the virtue of open-mindedness.He has the competence to evaluate alternative viewpoints with an open mind but, in the case of his brother, lacks the epistemic motivation to do so competently.On the contrary, he only does a poor job at being open-minded and that leads to poor results.Similar to John, Jackie has the competence to engage in empathetic understanding, but when it comes to her sister, she lacks the motivation to do so competently.Correspondingly, Michael lacks the virtue of open mindedness and Jackie lacks the virtue of empathy. Lastly, Battaly discusses not aiming at the good as a final reason for categorizing empathy as a skill.For her, the virtuous person aims at what appears to be good to them and the ends at which they aim are objectively good (Battaly, 2011, p. 
299).She argues that empathy is a skill -and not an intellectual virtue -because the person who engages in the activity characteristic of empathy does not necessarily do so out of her desire for the acquisition of epistemic ends.For example, one could attempt to gain insight into another person's emotions and beliefs for some ulterior nonepistemic end such as getting rich.But why should we conclude that empathy is a skill simply because some agent who is competent at the activity characteristic of this trait engages in it out of her desire for wealth rather than the truth?Could we not simply posit instead that the agent who is competent at the activity characteristic of empathy but lacks the proper epistemic motivation for doing so, does not possess the intellectual virtue of empathy because she lacks the necessary epistemic motivation that is required for the possession of this virtue?After all, according to most (if not all) virtue responsibilists, the goodness of intellectual virtues is located in the goodness of epistemic motivations (see, e.g., Baehr, 2016;Roberts & Wood, 2007;Zagzebski, 1996), and one cannot possibly possess intellectual virtues (irrespectively of whether they are competent at the characteristic activity of a given virtue or not) if one does not possess perfect epistemic motivations.Battaly (2011) discusses an example to back up her argument.Katie, who is a therapist, is quite good at "stepping into her clients" shoes' and seeing things from their perspective.However, Katie does not do so out of her desire for the truth but out of a non-epistemic motive -her desire to earn money (i.e., be paid by her clients).Battaly (2011, p. 300) concludes, on the basis of this example, that "since truth is an objectively good end, one cannot be empathic, so construed, without having at least one end that is in fact epistemically good.But this is insufficient for virtue possession because empathizers may also have competing or ulterior motives that are epistemically bad". But is it not the same with open-mindedness?One may keep an open mind out of an ulterior motive to earn money (rather than acquire epistemic goods).Consider, for example, an agent named Christin who is characterized by her competency to engage in the activity characteristic of openmindedness.She is willing and able to seriously consider alternative viewpoints to her own.However, there are many instances where Christin exhibits an open mind for some ulterior motive other than the truth -i.e., the truth is not the end goal of her actions.For instance, Christin keeps an open mind during a business meeting in order to be able to evaluate and make the most profitable choice.She is interested in knowing the truth (e.g., that the stock market is going to skyrocket) but is not interested in the truth for its own sake but for the purpose of making money.In other words, she is acting in a very similar manner to Katie who has an ulterior motive for acquiring the truth via empathy.Our view is that both Christin and Katie meet the competence conditions of open mindedness and empathy correspondingly but lack the motivation necessary for the possession of these virtues. IV. 
Empathy as an intellectual virtue: Virtue and skill Thus far we have taken for granted the view that skills and virtues are mutually exclusive -i.e., skills cannot be virtues 15 .Battaly (2011) argues for this view on the basis that skills lack the motivation that is necessary for a quality to be classified as a virtue.Battaly's three reasons for concluding that empathy is a skill rather than a virtue narrow down to a lack of epistemic motivations: the agent is not motivated to acquire epistemic goods and hence (i) foregoes opportunities to do so, (ii) makes deliberate mistakes and (iii) has non-epistemic ulterior motives. Still, it is wrong to assume that all scholars working in virtue theory accept this sharp distinction between skills and virtues.For Stichter (2011Stichter ( , 2016) ) challenges the view that virtues involve certain motivations while skills merely entail a capacity to act well.He argues that skill acquisition requires certain motivations.This is also true in the case of empathy.For an agent to become skilled in gaining insight into other people's emotions and beliefs, it is not enough that she has the capacity to develop this skill -she also needs to have the necessary epistemic drive for doing so.Moreover, Stichter argues that the performance of a certain skill by a given agent can be evaluated on whether they are "committed to achieving the ends of their practice" (Stichter, 2016, p. 435).Thus, in the case of the therapist who exhibits empathy for some ulterior motive other than the acquisition of truth, one could maintain that they lack the skill of empathy because they are not committed to achieving the ends of this practice.Stichter's (2011Stichter's ( , 2016) ) viewpoint builds on Annas (1995Annas ( , 2003Annas ( , 2011) arguments according to which every intellectual and moral virtue involves the possession of certain skills.Annas' position 16 also partly informs Baehr's (2016, p. 91) understanding of the competence principle of intellectual virtues.As already noted in the previous section, according to Baehr, for an agent to possess an intellectual virtue, they need to be competent at the activity characteristic of this virtue -they need to possess a certain competence/skill.Baehr also points out that it is due to the different skills/competences that are characteristic of each virtue that we can differentiate between virtues such as open-mindedness, attentiveness, and curiousness.Every epistemic virtue ultimately aims at the possession of epistemic goods, but each one is characterized by a different skill/competence -i.e., "an openminded person is competent or skilled at one type of virtue-relevant activity, while an intellectually attentive person is skilled at a different type of activity, and the curious person at yet a different type" (Baehr, 2016, p. 91).The idea that a certain skill is necessary for the possession of an epistemic virtue undermines Battaly's (2011) argument that skills and virtues are mutually exclusive. 
In the previous section, in spite of taking for granted the view that skills and virtues are mutually exclusive, we have argued that empathy is an intellectual virtue.Still, instead of refuting one by one Battaly's reasons for maintaining that empathy is a skill, one could simply side with those scholars challenging the viewpoint that virtues and skills are mutually exclusive -and hence argue, contra-Battaly, that being a skill does not preclude empathy from being considered an intellectual virtue (viz.empathy is an intellectual virtue characterized by the skill to gain insight into other people's emotions and beliefs). V. Concluding remarks To sum up, our main goal in this paper was to classify empathy as an intellectual virtue.Following scholars such as Goldie (2000) and Hodges and Myers (2007), we argued that empathy is a trait that enables the agents who possess it to gain insight into other people's emotions and beliefs.The agent who possesses the virtue of empathy is (i) driven to engage in acts of empathy by her epistemic desires, (ii) takes pleasure in the activity characteristic of empathy, (iii) is competent at the activity characteristic of this trait and (iv) has good judgment as to when it is epistemically appropriate to engage in empathy.Having established that empathy meets all the necessary conditions for a trait to be classified as an intellectual virtue, we proceeded to discuss Battaly's (2011) reasons for maintaining that empathy is a skill rather than a virtue: (i) foregoing opportunities, (ii) deliberate errors and (iii) not aiming at the good.We discussed all three reasons and argued, contra Battaly, that the agent who possesses the epistemic virtue of empathy (a) sometimes foregoes opportunities to engage in the activity characteristic of empathy because it is the virtuous thing to do, (b) does not make deliberate errors and (c) her actions are always ultimately aiming at epistemic goods.Lastly, we discussed Stichter's (2011Stichter's ( , 2016) and Annas's (1995Annas's ( , 2003Annas's ( , 2011) ) arguments according to which moral and intellectual virtues involve certain skills and highlighted the fact that one could use their arguments in order to challenge Battaly's (2011) sharp distinction between virtues and skills. Notes 1. "Like Aristotelian moral virtues, the intellectual virtues require that one perform virtuous actions, possess virtuous motivations, and hit the mean".2. We are using the terms "intellectual virtue" and "epistemic virtue" interchangeably throughout the paper.3. Baehr (2016) identifies four components that are necessary and jointly sufficient for a trait to be classified as an intellectual virtue: (i) the motivational dimension, (ii) the affective dimension, (iii) the competence dimension and (iv) the judgment dimension.These four components are the building blocks of every intellectual virtue.4. For more on the virtue of curiosity see, Ross (2020) and Watson (2015).5.One might use their competency in the characteristic activity of empathy to inflict harm -e.g., a torturer employing it to discover their victim's "weaknesses" in order to inflict greater pain to them.This is a case of vicious employment of empathy.The agent is motivated to act out of some vicious motive (i.e., harm others) rather than for the acquisition of epistemic goods.6.In contrast, for example, to a counselor who is motivated to acquire the truth about their client's state of mind through empathy but nonetheless finds no enjoyment in the process. 7. 
The virtuous person takes pleasure in the activity characteristic of (cognitive) empathy, and this pleasure does not stem from and/or hinge on sharing the feeling of others and/or caring for them.8.This relates to Battaly's (2018aBattaly's ( , 2018b) ) recent arguments according to which closedmindedness is a virtue in hostile epistemic environments.9.The virtue of empathy is an acquired trait.One is not born possessing this virtue.One acquires it through practice and experience.It may be the case that the capacity for empathy is part of the human biological endowment (viz., we are born with this capacity) but been born with a certain capacity and employing this capacity virtuously are two different things (among other things, the latter requires practical wisdom and experience).10.It is important to note that scholars such as Prinz (2011) and Bloom (2016) have recently criticized empathy.Still, their critiques are aimed at emotional empathy (sharing in feelings and thoughts and caring for others), and as such our approach is immune from their criticisms.We do not make the claim that emotional empathy motivates us to do good and/or brings about good (which is the claim/argument that Prinz and Bloom criticize) but that cognitive empathy is an intellectual virtue (with its value stemming from the agent's motivation to acquire epistemic goods).11.Scholars such as Watson (2015) have identified intellectual virtues that are not characterized by the agent's ability to acquire epistemic goods on a reliable basis but by the agent's competence in the characteristic activities of these virtues.12.One may disagree with us and insist that reliability is a necessary condition of intellectual virtues.This, however, would not preclude them from accepting our overall argument according to which empathy is an intellectual virtue.It would simply require one to challenge the view that empathy lacks epistemic reliability (see the 1 st paragraph of section II.C on how one may go about doing so) and argue that those who are not reliably successful at acquiring epistemic goods through the characteristic activity of the virtue of empathy do not possess the virtue in question.13.For more on this see, also Kristjánsson (2014) who seems to agree with Battaly's (2011) arguments.14.It is worth noting that one can forego the opportunity to engage in the activity characteristic of empathy although it would be virtuous to be empathetic (e.g., ignoring a person in distress because the agent is late for work).Having the practical wisdom to determine when to exhibit empathy and when to forego the opportunity to exhibit empathy is a feature that distinguishes between agents who possess the virtue of empathy and agents who do not possess it.The virtuous empathetic person would be, in the vast majority of cases (if not always), in a position to recognize when they should engage in the activity characteristic of empathy and would engage in it even if it inconveniences them -e.g., even if they end up being late for work.15.Besides Battaly (2011), this view is upheld by scholars such as Wallace (1978), Zagzebski (1996), Rees andWebber (2014), andKlein (2014).16.Annas (see, e.g., Annas, 2003) is putting forward a modernized version of the Platonic and Stoic understanding of virtue as a skill -as opposed to Battaly (2011) who is following the Aristotelian conception of virtue and understands virtue and skill as mutually exclusive.For more on the Platonic conception of virtue and its significance for virtue epistemology, see, 
Kotsonis (2021b).
2022-07-28T15:08:02.115Z
2022-07-26T00:00:00.000
{ "year": 2024, "sha1": "ff523a476b66e69c2ff82a914930357d7361101a", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/09515089.2022.2100753?needAccess=true", "oa_status": "HYBRID", "pdf_src": "TaylorAndFrancis", "pdf_hash": "692523aad567047f34c77a855259d50d095fb6a8", "s2fieldsofstudy": [ "Philosophy", "Psychology" ], "extfieldsofstudy": [] }
244525714
pes2o/s2orc
v3-fos-license
Adaptive Threshold Scheme for Pulse Detection under Condition of Background Nonstationary Noise The paper describes a new adaptive threshold scheme for detecting pulses in high-frequency signals against a background of non-stationary noise. The result of the scheme operation is to determine the pulse boundaries by comparing the signal amplitude-time parameters with the threshold. The threshold value is calculated in non-overlapping windows of fixed length and depends only on the background noise level. The detected pulses undergo additional shape checking, taking into account their characteristics. The parameters of the algorithms for detecting pulses and checking their shape can be adjusted for any type of high-frequency pulse signals. This threshold scheme is tuned to detect pulses in high-frequency geoacoustic emission signals. The results of the scheme operation on an artificial signal and on fragments of a geoacoustic signal are given, and a comparison is made between the proposed scheme and the previously used (outdated) one. The new threshold scheme proposed by the authors is less sensitive to the choice of the initial threshold value and it is more stable in operation. When processing 15-minute fragments of a geoacoustic signal, the new scheme correctly detects, on average, 5 times more pulses. Introduction A signal that is a sequence of short bursts over a finite period of time is called a pulse signal. Often, the analysis of such signals is the basis of methods for solving scientific problems in radar, medicine, geophysics, and other fields. At the same time, their various characteristics are studied: total duration, duration of the leading and/or trailing edges, frequency, pulse rate, etc. To do this, it is necessary to learn how to determine the boundaries of 'useful' signals (pulses) as accurately as possible and with the required reliability. In simple cases, for example, in the absence of interference, a simple signal form and a small dynamic range, such tasks are effectively solved by using segmentation methods [1][2][3][4], frequency-selective filtering [5,6], threshold schemes [5,6,7]. However, the process of detecting pulses is much more complicated if the signals are distorted or noisy. The presented paper describes an adaptive threshold scheme designed to detect signal fragments containing pulses against the background of constantly present non-stationary noise. The scheme is designed and configured for the analysis of high-frequency geoacoustic emission (GAE) signals [8,9]. The GAE signal is a combination of pulses of different amplitude and duration, with a steep leading edge and a smooth decay. The analysis of GAE signals is significantly complicated by strong noise. Figure 1 shows a fragment of a geoacoustic signal containing pulses. According to the characteristics of geoacoustic pulse emission, the stress-strain state of rocks at the observation point is assessed. The results of these studies are described in detail in [8][9][10][11][12][13][14]. Adaptive threshold pulse detection scheme Previously, the threshold scheme described in [15] was used to detect pulses from geoacoustic signals. The threshold values were calculated relative to the standard deviation (SD) of the signal in nonoverlapping fixed-length windows. Long-term application of this method has shown that the scheme is sensitive to the choice of the initial threshold value.
In the considered scheme, the initial value is selected proportionally to the SD of the entire signal σ. If the signal contains a large number of pulses, σ significantly exceeds the background noise SD, so the scheme detects only a small part of the high-amplitude pulses. It therefore became necessary to modernize this method. The authors propose an improved method for detecting pulses, which is based on a comparison of the signal amplitude-time parameters with an adaptive threshold. The threshold value is calculated in non-overlapping windows of a fixed length of n samples. The current value of the threshold is calculated relative to the mathematical expectation and SD of the previous n signal samples by the formula: Sk = Mk-1 + B·σk-1, (1) where Sk is the threshold, Mk-1 is the mathematical expectation and σk-1 is the standard deviation of the previous n samples, and B is an experimentally determined parameter. In order to ensure that the threshold depends only on the background level, the samples included in the detected pulses are excluded from the sequence of n samples. As a result of the computational experiment, it was found that for the studied signals the parameter n lies in the range from 200 to 400 samples (for GAE signals with a sampling frequency of 48 kHz, n = 250 samples). With a lower value of n, the number of false alarms of the detector (type II errors) increases; with a higher value, the number of cases of missing a target (type I errors) increases. The threshold value is valid until enough data has accumulated to calculate a new value. An example of adapting the threshold is shown in Figure 2. Pulse detection consists in defining the pulse boundaries. The leading edge (beginning) of the pulse is fixed when the average signal amplitude calculated in a window with a duration of 0.1 ms exceeds the threshold Sk (Figure 3, section a). Averaging of the amplitude is used to eliminate false detection of the pulse beginning resulting from isolated peaks, usually caused by noise. Hereinafter, when converting time into a number of samples, rounding upward to the nearest integer value is performed. After detecting the pulse beginning, the pulse end is searched for. It is fixed when the signal amplitude falls below the threshold Sk. In the final part of the pulse, sufficiently low frequencies can be observed, so its amplitude will be below the threshold for a significant time period. To avoid premature determination of the pulse end, the local maximum of the signal amplitude is determined in a sliding window with a duration of 0.7 ms (Figure 3, section b). The window duration is chosen in such a way that at least two local signal maxima with a frequency of more than 4 kHz fall into it. This condition guarantees that a signal fragment between an adjacent pair of local maxima will not be taken as the pulse end. Simultaneously with the detection of the pulse end, the search for the beginning of the next pulse is carried out. To do this, the average signal amplitudes are compared in the current and next windows with a duration of 0.35 ms (Figure 3, sections c and d). If the average amplitude increases by more than 1.5 times, the end of the current pulse and the beginning of the next pulse are recorded. Figure 4 shows examples of detecting a single GAE pulse and a pulse included in a group.
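The boundary-detection procedure described above can be summarized in a short sketch. This is an illustrative simplification under stated assumptions rather than the authors' code: the function and variable names are invented, the value B = 4.0 is only a placeholder (the paper states that B is determined experimentally but does not give its value), the first n samples are assumed to be noise-only in order to seed the threshold, and the 0.35 ms split of merged pulses is omitted for brevity.

```python
import numpy as np

FS = 48_000                               # sampling frequency, Hz (as in the paper)
N_BG = 250                                # background window length n, samples
W_LEAD = int(np.ceil(0.1e-3 * FS))        # 0.1 ms window for leading-edge averaging
W_TAIL = int(np.ceil(0.7e-3 * FS))        # 0.7 ms look-ahead guarding the pulse tail


def detect_pulses(x, B=4.0):
    """Return (start, end) sample indices of candidate pulses in signal x."""
    amp = np.abs(np.asarray(x, dtype=float))
    pulses = []
    seed = amp[:N_BG]                                 # assumption: first n samples are noise
    thr = seed.mean() + B * seed.std()                # S_k = M_{k-1} + B * sigma_{k-1}
    bg = []                                           # noise-only samples since last update
    i = N_BG
    while i + W_LEAD <= len(amp):
        if amp[i:i + W_LEAD].mean() > thr:            # leading edge: 0.1 ms average above S_k
            start = i
            j = start + W_LEAD
            # keep extending while any sample in the next 0.7 ms is still above S_k,
            # so low-frequency tails do not end the pulse prematurely
            while j < len(amp) and amp[j:j + W_TAIL].max() > thr:
                j += 1
            pulses.append((start, min(j, len(amp) - 1)))
            i = j                                     # pulse samples are excluded from
        else:                                         # the background statistics
            bg.extend(amp[i:i + W_LEAD])
            i += W_LEAD
        if len(bg) >= N_BG:                           # enough noise-only data: update S_k
            block = np.asarray(bg[:N_BG])
            thr = block.mean() + B * block.std()
            bg = bg[N_BG:]
    return pulses
```

The comparison of adjacent 0.35 ms windows that separates merged pulses, and the exact bookkeeping of when a background block is complete, are left out here; adding them would not change the overall structure of the loop.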
According to the graph shown in Figure 4b, it can be seen that the signal amplitude does not fall below the threshold, but the difference in the average signal amplitudes at a time of about 15 ms is correctly interpreted by the algorithm as the end of the current pulse and the beginning of the next one. To increase the number of detected pulses, it is advisable to slightly lower this parameter but to introduce an additional check of the pulse shape (Figure 5). This check is based on the fact that the pulses have a sharp front and a pronounced maximum of the envelope. For this, the pulse is divided into 3 equal parts, within which the average amplitude is determined. A fragment of a signal is considered a pulse if the average amplitude of one of the parts exceeds the rest by more than 1.3 times. At the same time, the maximum pulse amplitude (Figure 3, section m) must exceed the threshold Sk by at least 2 times, and the minimum pulse duration is 0.8 ms. If necessary, the parameters of the pulse detection and pulse shape checking algorithms can be adjusted for any type of high-frequency pulse signals. Testing the threshold scheme The threshold scheme was tested in two stages. At the first stage, an artificial signal containing 100 pulses was processed (Figure 6). Figure 6. Artificial signal fragment. Analytically, the GAE pulse model is specified as a sinusoidal oscillation under an amplitude envelope, where A is the pulse amplitude; t is the time; pmax is the position of the envelope maximum, set relative to tend; n(pmax, tend) is the minimum value of the parameter affecting the pulse envelope steepness; Δ is the coefficient responsible for the pulse envelope steepness (the greater the value of Δ, the steeper the envelope); f is the pulse modulation frequency; and φ is the initial phase. The following parameters were chosen for the experiment: • sampling frequency, Fs, was 48000 Hz; • amplitude, A, was from 0.1 to 1 relative units; • pulse filling frequency, f, was from 4000 to 10000 Hz; • pulse duration, tend, was from 100 to 400 samples; • position of the envelope maximum, pmax, was from 5% to 30% of the pulse duration; • coefficient Δ was from 1 to 2; • initial phase, φ, was 0 rad. To obtain signals with a different signal-to-noise ratio (SNR), Gaussian noise of various amplitudes was superimposed on the model signal, simulating background noise. The results of the experiment are presented in tables 1-2. Since the pulse repetition rate in the test signal is 21 pulses per second, the standard deviation calculated for the signal as a whole significantly exceeds the background noise level, so the initial value of the threshold has to be artificially lowered (table 3). Based on the results of the experiment, it can be concluded that the pulse detection algorithm proposed by the authors works more accurately than the previously used one [13] for signals with an SNR of at least 5 dB (according to the obtained estimates, the SNR of geoacoustic signals is 8-9 dB). Also, the developed algorithm works more stably than the outdated version [13], since it does not require manual adjustment of the initial parameters. At the second stage of testing, we used a model signal composed of fragments of a real geoacoustic signal containing pulses (Fs = 48000 Hz). The signal consisted of 135 pulses with amplitudes from 0.002 to 1 rel. units. 75 pulses were detected by the proposed method, and 40 by the outdated version.
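For concreteness, the pulse-shape check applied to every candidate pulse in both test stages can also be sketched. Again, this is a simplified illustration and not the authors' implementation: the names are invented, thr is assumed to be whatever Sk value was in force when the candidate was found, and the three parts are taken as nearly equal via array_split.

```python
import numpy as np

FS = 48_000
MIN_LEN = int(np.ceil(0.8e-3 * FS))       # minimum pulse duration of 0.8 ms, in samples


def passes_shape_check(pulse, thr, ratio=1.3, peak_factor=2.0):
    """Apply the shape criteria to a candidate pulse (1-D array of samples)."""
    amp = np.abs(np.asarray(pulse, dtype=float))
    if amp.size < MIN_LEN:                            # too short to be a GAE pulse
        return False
    if amp.max() < peak_factor * thr:                 # peak must exceed 2 * S_k
        return False
    means = np.array([p.mean() for p in np.array_split(amp, 3)])
    k = int(means.argmax())                           # part with the largest mean amplitude
    rest = np.delete(means, k)
    return bool(np.all(means[k] > ratio * rest))      # one part dominates by > 1.3 times
```

A candidate span (start, end) returned by the detection sketch above would then be kept only if passes_shape_check(x[start:end + 1], thr) is true for the threshold value that was active at that moment.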
Table 4 shows the statistics of the number of detected pulses in 15-minute fragments of the geoacoustic signal using the developed and outdated schemes. The proposed method detects an order of magnitude more pulses. Conclusion The method of automatic pulse detection proposed by the authors is an adaptive threshold scheme that adjusts the threshold value to the changing background noise level. The parameters of the method were selected in such a way as to ensure the effective detection of pulses from geoacoustic emission signals. It should be noted that the proposed scheme provides a minimum of detector false alarms and of the errors that are most critical from the point of view of further signal analysis. In comparison with the previously used scheme, the threshold scheme proposed by the authors is less sensitive to the choice of the initial threshold value and is therefore more stable in operation. When processing 15-minute fragments of a geoacoustic signal, the new scheme detects, on average, 5 times more pulses. In general, the described method for detecting pulses, with the appropriate parameter settings (sliding window length n, parameter B, segment lengths a, b, c, d, m, Figure 3), can be applied to various high-frequency pulse signals.
2021-11-24T20:07:36.852Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "342eddba3a0fda3bd57692ae156e36f8bb1be1c3", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/2096/1/012019", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "342eddba3a0fda3bd57692ae156e36f8bb1be1c3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
253993310
pes2o/s2orc
v3-fos-license
Lupus Nephritis Associated With Renal Cell Carcinoma A 43-year-old man was found to have hematuria and proteinuria during a regular checkup. Workup revealed a renal mass on a CT scan. He continued to have worsening proteinuria and hematuria, with declining renal function, so a renal biopsy was performed, which revealed lupus nephritis class IV. He underwent partial nephrectomy, which confirmed the diagnosis of renal cell carcinoma on pathology. He was started on glucocorticoids, hydroxychloroquine, and cyclophosphamide for lupus nephritis. Introduction Systemic lupus erythematosus (SLE) is an uncommon autoimmune condition that can affect almost any organ of the body. It is more commonly seen in young women but can also be seen in men. When SLE presents in men, it tends to have a more severe presentation with worse long-term outcomes [1]. Lupus nephritis (LN) can be seen in up to 50% of patients with SLE, often resulting in end-stage renal disease despite immunomodulatory therapies [1,2]. Renal cell cancer (RCC) originates from the renal epithelium and accounts for >90% of all cancers in the kidney, with incidence increasing with advancing age [3]. RCC accounts for 2% of all cancers and is noted to be more common in men [4]. Case Presentation A 43-year-old man was found to have microscopic hematuria and proteinuria on work-related medical clearance. A month later, he underwent cystoscopy, which was unrevealing, so a CT scan of the abdomen and pelvis was performed, revealing a 1.5 cm mass concerning for renal cell carcinoma (RCC). The patient's initial presentation with hematuria and proteinuria was attributed to possible RCC, which was visualized on a CT scan. The mass was further characterized with an MRI of the abdomen ( Figure 1). FIGURE 1: MRI of the abdomen showing renal mass (arrow) Further workup was pursued when his protein/creatinine ratio worsened from 4.7 to 8.5. He was unable to provide a 24-hour urine sample so proteinuria was followed by protein/creatinine ratios. He was found to be negative for anti-nuclear antibody (ANA) but surprisingly had elevated double-stranded DNA antibody (dsDNA Ab) at 12 IU/mL (normal range 0-9). He was noted to have low C3 (10 mg/dL, normal range 14-44 mg/dL) and C4 (57 mg/dL, normal range 82-167 mg/dL), CH50 was normal at >60 units/ml (normal value >41 units/mL). His anti-neutrophilic cytoplasmic antibodies (ANCAs) were negative. His other workup, including antibodies to extractable antigens, Sjogren antibodies, scleroderma antibodies, and anti-phospholipid antibodies, was negative. His creatinine had increased up to 5.5 mg/dL (normal range 0.6-1.3 mg/dL), and glomerular filtration rate (GFR) decreased to 11 mL/min at the time of diagnosis; creatinine has stabilized at 2 mg/dL and GFR at 35 mL/min after treatment. He underwent a renal biopsy due to worsening renal function. Pathological findings were consistent with diffuse lupus nephritis (LN) class IV with an activity score of 13/24 and a chronicity score of 3/12 ( Figure 2). FIGURE 2: Kidney biopsy pathology with findings of lupus nephritis The decision was made to proceed with partial nephrectomy for renal mass before initiating potentially carcinogenic immunomodulatory treatment for LN class IV. Tissue examination from partial nephrectomy revealed chromophobe renal cell carcinoma with a clear margin. In consultation with the patient's urologist, nephrologist, and rheumatologist, the decision was made to proceed with partial nephrectomy. 
A 2.5 cm mass with clear margins was excised, and pathology confirmed the diagnosis of RCC. He was started on cyclophosphamide 500 mg IV every two weeks per Euro-Lupus protocol for lupus nephritis, prednisone, and hydroxychloroquine, which led to a partial response. Following cyclophosphamide Euro-Lupus protocol for three months, he was switched to Mycophenolate Mofetil with further improvement in proteinuria. Discussion SLE is an autoimmune disorder that often affects multiple organs. People with SLE are shown to have a higher malignancy risk. The most common malignancy in SLE is non-Hodgkin's lymphoma (NHL); a higher risk of vulva, lung, thyroid, breast, and possibly liver malignancy has also been noted in prior studies [5,6]. People with SLE often require immunosuppressive medications, which also predisposes them to various malignancies [7]. It is unusual to have SLE with negative ANA; many previously reported negative SLE cases were thought to be due to sera were checked with older nonspecific testing methods [8]. Many ANA-negative patients have also been reported to have Anti-SSA/Ro antibodies, which were negative in this patient [9]. C2 deficiency can lead to SLE with negative ANA; this was ruled out by normal CH50 levels [10]. Proteinuria leading to immunoglobulin loss, prozone effect, and longer disease duration are some of the other possible etiologies of ANA-negative SLE reported in literature [11]. Chromophobe renal cell carcinoma is a rare subtype of RCC, seen in about 5% of RCC patients and often presenting in earlier stages [12]. It can be seen with a familial form of RCC Birt-Hogg-Dubé syndrome that was not suspected in his case, so no genetic testing was pursued. This patient has been monitored with serial imaging without any signs of recurrence. This patient had an unusual presentation of hematuria and proteinuria, which were initially attributed to RCC suspected on imaging. However, when his hematuria and proteinuria progressed, a kidney biopsy was pursued, which revealed the findings of lupus nephritis. There are limited reports of SLE patients presenting with RCC. Goupil et al. reported a case of LN presenting with renal mass who did not have underlying malignancy, concluding that it would be helpful to keep this in differential when evaluating a patient of SLE with a renal mass [13]. Matsuda et al. reported a case of an LN patient who had undergone treatment with steroids and cyclophosphamide, who was later to be found to have RCC [14]. Wong reported a patient with LN who was found to have RCC due to persistent hematuria despite completing treatment with steroids, cyclophosphamide, and Mycophenolate. It was postulated that the RCC was likely present before or at the time of diagnosis of LN [15]. Gopalkrishnan et al. reported a case of SLE who had been in remission with prednisone and azathioprine for eight years, found to have RCC when she underwent ultrasound for evaluation of pyelonephritis [16]. Our patient was found to have RCC and LN concurrently. Malignancies have been shown to increase autoimmunity, and autoimmune disorders have been shown to have increased malignancy risk [5,6,7,17]. An atypical presentation of SLE or malignancy should alert one to the possibility of this complex overlap, which can help with early diagnosis. Conclusions SLE is an uncommon autoimmune disorder with increased malignancy risk. Atypical presentation due to malignancy and SLE overlap can occur. 
SLE manifestations like lupus nephritis carry the risk of high mortality and morbidity; early identification is essential so that prompt treatment can be started. When the clinical picture is conflicting, imaging and biopsy can provide useful data for further management. SLE patients should be monitored closely for concurrent malignancies; early diagnosis can lead to improved clinical outcomes.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study. ANTHC Health Research Review Committee issued approval N/A. IRB approval was not required, but the manuscript was reviewed and approved by the ANTHC Health Research Review Committee. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Effects of cryo-EM cooling on structural ensembles

Structure determination by cryo electron microscopy (cryo-EM) provides information on structural heterogeneity and ensembles at atomic resolution. To obtain cryo-EM images of macromolecules, the samples are first rapidly cooled down to cryogenic temperatures. To what extent the structural ensemble is perturbed during cooling is currently unknown. Here, to quantify the effects of cooling, we combined continuum model calculations of the temperature drop and molecular dynamics simulations of a ribosome complex before and during cooling with kinetic models. Our results suggest that three effects markedly contribute to the narrowing of the structural ensembles: thermal contraction, reduced thermal motion within local potential wells, and the equilibration into lower free-energy conformations by overcoming separating free-energy barriers. During cooling, barrier heights below 10 kJ/mol were found to be overcome, which is expected to reduce B-factors in ensembles imaged by cryo-EM. Our approach now enables the quantification of the heterogeneity of room-temperature ensembles from cryo-EM structures.

Reviewer #1: Remarks to the Author:

The manuscript by Bock & Grubmüller describes a detailed, multi-pronged computational study of the complex and important effects of cooling during sample preparation for cryo-EM. The paper is generally easy to read, appears technically sound and provides relatively clear results that will be of interest both to theoreticians and practitioners of cryo-EM.

Over the last 10 years cryo-EM has delivered increasingly high-resolution structures that in some cases now rival those of e.g. X-ray crystallography. In addition to examining the structures of macromolecules, cryo-EM may also provide more detailed insights into their energy landscapes because it in principle is a single-molecule technique that enables the visualization of the conformational distribution of molecules.

These advances leave two questions that have been difficult to answer. First, to what extent does the "average" structure under cryogenic conditions reflect the ambient temperature "average" structure [realizing that the term average here is somewhat misleading, the authors will understand what is meant]. Second, to what extent does the distribution of conformations (conformational ensemble) present in the cryo-EM sample reflect the distribution at ambient temperatures [leaving aside the technical difficulties of determining structural models of these ensembles from experiments]. While the first question can to a certain extent be answered by comparing structures solved at cryo-conditions with those at ambient temperature (by crystallography), the second question lies at the heart of the utility (and large potential) for cryo-EM to study conformational ensembles.

This study provides welcomed data in an area that has been lacking detailed and quantitative modelling, and where experiments are difficult. The results are promising in the sense that they support the idea that cryo-EM can to a large extent capture conformational ensembles at ambient temperatures. Importantly, the study provides a framework to think about these problems in a more quantitative manner that will hopefully spur additional experiments and analyses.

Specific comments:

Major

p. 4: I must admit that I found the RMSF-based analysis somewhat difficult to follow in places. First, just to be sure, could the authors confirm that in each case the RMSF is calculated "locally", that is, using an average over the specific simulation as reference. Second, when I look at Fig. S1 it appears that there are still some changes in the RMSF curves even towards the end of the simulations that are of the same magnitude (but in the opposite direction) as those observed during cooling. Is that correct or am I looking at the figure in the wrong way? Also, while I realize that it is difficult to boil down a complex ensemble to one or a few numbers that can be tracked, it would be useful with alternative ways of looking at the ensembles. Are there local differences that are not captured by RMSF? What about rotamer distributions? I will leave it up to the authors whether to explore these issues further in this paper.
p. 11: In terms of future experimental studies, what kinds of tests of the models could the authors envision? For example, the authors discuss work by Chen et al (Ref 24) on differences depending on the starting conditions. Does the authors' analytical model capture such effects? Do the authors' results lead to specific criteria for selecting good model systems to test the effects of cooling on conformational ensembles?

p. 11/12: Maybe the authors could also briefly discuss the relationship to other techniques that rely on (rapid) cooling, including ssNMR and EPR. I realize that the cooling process is different, but it might still be worth speculating on how the approaches and models the authors present could be extended to other situations. In this context I'd also like to point out relevant work from Rob Tycko studying protein folding by ssNMR with rapid injection into a cold isopentane bath (https://dx.doi.org/10.1021/ja908471n).

Minor

p. 1/2: In the discussion of molecules settling into the lowest free energy minima at slow cooling rates, it might be worth making it clear that these minima may well be different from the minima at ambient temperatures.

p. 4: In the T-quenching MD simulations I couldn't easily find whether the simulations were performed using pressure control and if so how.

p. 6: "the atoms are subjected to harmonic potentials with a force constant c which are uniformly distributed in an interval from −d to d" makes it sound like it is the force constants that are between −d and d. Consider rephrasing.

p. 6: "Model3 is a combination of model2 and model3," should be, I guess, "Model3 is a combination of model1 and model2,"

p. 6: It is not clear what value of the pre-exponential factor the authors use. I did not go through the maths, but I would assume that the choice would affect the "effective" barrier heights, e.g. in Fig. 4. It would be useful if the authors would clarify this, given that there has been/is some discussion about what pre-exponential factors are relevant for conformational changes in biomolecules.

p. 11: The authors write "Biomolecules can thermodynamically access more conformations at room temperature than at the cryogenic temperature". While that is probably mostly true, examples such as cold-denaturation suggest it isn't universally true.

I have discussed this review with Iris Young and James Fraser, UCSF, who have posted an independent review of the authors' preprint on bioRxiv. Their comments will be posted in the comments section at bioRxiv at https://doi.org/10.1101/2021.10.08.463658. I would recommend that the authors also examine the constructive and overall positive comments in that review, and note in particular a concordance with the point described above on boiling down complex properties of an ensemble to a few numbers.

Kresten Lindorff-Larsen, University of Copenhagen

Reviewer #2: Remarks to the Author:

This paper concerns the effects of inevitable and essential quench-cooling of biomolecular samples for cryo-EM examination. With the growing recognition of, and interest in, so-called structural heterogeneity, the topic is timely. Recent work on mapping structural variability has further heightened the need for a better understanding of the effects of rapid cooling in cryo-EM sample preparation. The authors combine an interesting set of theoretical techniques to examine and characterize the possible ways in which sample quenching can alter the conformational spectrum. The models presented in this paper involve many adjustable parameters.
Fortunately, their results appear to show better agreement when the number of adjustable parameters is somewhat reduced. All in all, the models address a complicated question, and deserve serious attention. The somewhat disappointing aspect of the work concerns the paucity of significant new insights from the carefully developed and described models. The primary conclusions are, in essence, what one would have guessed: sufficiently rapid cooling "freezes in" a conformational spectrum reminiscent of the uncooled spectrum, while slow cooling allows the system to relax to the minimum-energy conformation. It is left open whether the conformational spectra can be modeled in terms of a temperature at all. In recognition of the seriousness and timeliness of the work, I recommend the authors consider the comments above, and also highlight the new insights, which warrant publication in Nature Communications rather than an excellent archival journal.

Reviewer #3: Remarks to the Author:

This paper has discussed and quantified the generalized impact of plunge-freezing on the native ensemble of biomolecules, which is performed to obtain EM images of the relevant population. The authors have carried out extensive temperature-quenching MD simulations of the bacterial ribosome to investigate the contribution of the cooling rate to the structural ensembles and reported that the cooling rate largely impacts equilibration into different local minima separated by 8-10 kJ/mol barrier heights. The authors have also reported that thermal contraction of the biomolecule due to cooling and equilibration within local minima are mostly cooling-rate independent effects of the freezing procedure. The robust discussion of the underlying physics of the plunge-freezing procedure might be useful for optimizing the EM imaging process as required.

There are a few major concerns as listed:

1. RMSF is a commonly used metric for the amount of fluctuation occurring in an atom over time relative to its average position. However, the RMSF spectrum fails to capture many significant yet subtle and small movements, as it focuses on the amount of change rather than the quality of change. As the majority of high-frequency fluctuations correspond to thermal motions whereas moderate-to-low frequency fluctuations mostly capture conformational rearrangements, it would be better if the authors could comment on the effect of cooling rate on different frequency ranges.

2. How does cooling affect the dynamics of water (/water models), given that a protein's thermal fluctuations will also be influenced by the solvent's dynamics?

There are a few minor concerns as listed below:

1. The RMSD of the predicted rmsf values from model-1 (Fig. 3e) didn't vary much even when trained with more simulation data. However, the RMSD value was consistently low when trained with all spans of simulation data, even with shorter spans (0.1-8 ns). Can the authors elaborate on this trend?

2. For Q5, all the models have either overestimated or underestimated the rmsf when compared with the simulation data corresponding to all cooling rates investigated. Any reason why this quantile behaved differently?

Authors' response to the reviewers' comments

Reviewer 1: Kresten Lindorff-Larsen

The manuscript by Bock & Grubmüller describes a detailed, multi-pronged computational study of the complex and important effects of cooling during sample preparation for cryo-EM. The paper is generally easy to read, appears technically sound and provides relatively clear results that will be of interest both to theoreticians and practitioners of cryo-EM.
Over the last 10 years cryo-EM has delivered increasingly high-resolution structures that in some cases now rival those of e.g. X-ray crystallography. In addition to examining the structures of macromolecules, cryo-EM may also provide more detailed insights into their energy landscapes because it in principle is a single-molecule technique that enables the visualization of the conformational distribution of molecules. These advances leave two questions that have been difficult to answer. First, to what extent does the "average" structure under cryogenic conditions reflect the ambient temperature "average" structure [realizing that the term average here is somewhat misleading, the authors will understand what is meant]. Second, to what extent does the distribution of conformations (conformational ensemble) present in the cryo-EM sample reflect the distribution at ambient temperatures [leaving aside the technical difficulties of determining structural models of these ensembles from experiments]. While the first question can to a certain extent be answered by comparing structures solved at cryo-conditions with those at ambient temperature (by crystallography), the second question lies at the heart of the utility (and large potential) for cryo-EM to study conformational ensembles. This study provides welcomed data in an area that has been lacking detailed and quantitative modelling, and where experiments are difficult. The results are promising in the sense that they support the idea that cryo-EM can to a large extent capture conformational ensembles at ambient temperatures. Importantly, the study provides a framework to think about these problems in a more quantitative manner that will hopefully spur additional experiments and analyses. Specific comments: Major p. 4: I must admit that I found the RMSF-based analysis somewhat difficult to follow in places. First, just to be sure could the authors confirm that in each case the RMSF is calculated "locally" that is using an average over the specific simulation as reference. Second, when I look at Fig. S1 it appears that there are still some changes in the RMSF curves even towards the end of the simulations that are of the same magnitude (but in the opposite direction) as those observed during cooling. Is that correct or am I looking at the figure in the wrong way? The understanding of the reviewer is correct. To calculate the rmsf values of an ensemble of structures, the structures were first aligned and then the rmsf is calculated with respect to the average structure of this ensemble. For example, the rmsf values before cooling were calculated from all 41 starting structures relative to the average of these 41 starting structures. The rmsf values after cooling were calculated from the final structures of 41 T-quench simulations with respect to their average. To clarify, we have extended the description of the rmsf calculation in the Results section: "To that aim, we extracted structures of the ribosome complex, which contain all atoms resolved in the cryo-EM structure, at intervals of 50 ns from the 3.5-µs simulation at 277.15 K (Fig. S1a). Then, we grouped the 41 structures from the time points between 0 µs and 2 µs into an ensemble, aligned the structures, and then calculated the rmsf of each atom with respect to the average structure of the ensemble." Indeed, the RMSF distributions are still changing with simulation time and are slowly increasing towards the end, which is, we agree, opposite of the expected cooling effect. 
This at first sight non-intuitive behavior is discussed in section "Effects of different cooling rates on the ensemble" on page 4 of our original manuscript: "For later time intervals, the rmsf values increase with a decreasing slope (intervals 0.4-2.4μs to 1.5-3.5μs). The observed behavior is consistent with a slight adaptation of the initial ribosome structure obtained at cryogenic temperatures to near room temperature, with subsequent room temperature fluctuations. For the cooling simulations, we expected a decrease in rmsf values. To observe this effect, it is crucial to start cooling from an ensemble of structures which does not show a decrease in rmsf values in the absence of cooling. We therefore chose the ensemble of 41 structures in the 1-3μs interval (Fig. S1b,red histogram)." Unfortunately, there was a "not" missing (highlighted in red) which we believe created the confusion, understandably. It is now added. However, the reviewer's comment also prompted us to have a closer look and providing further analyses of the following issue: Inevitably, any MD equilibration of any non-trivial biomolecular system will exhibit a slight remaining drift towards the equilibrium ensemble of the used force field, as often reflected by a slight increase of the usual rmsd with respect to the starting structure. This drift also contributes to the rmsf values. Taking an ensemble of starting structures from the 277.15-K equilibration run, which under further sampling shows an increase in rmsf, results in a positive rmsf contribution also in the T-quench simulations, which might counteract the rmsf decrease due to cooling. This effect is particularly pronounced for slower cooling due to longer sampling. Overall, this partial cancellation can result in an underestimation of the cooling effect and, hence, the numbers we provided are actually conservative lower bounds. To also provide an upper bound for this additional rmsf contribution, we have now in the revised manuscript measured the rmsf increase in the 277.15-K simulation for different time spans. As an example, consider the T-quench simulations with a cooling time span c of 1 ns started from the 41 structures at times between 1 and 3 µs (1000 ns, 1050 ns, …, 3000 ns). To calculate how much the rmsf increases during an additional sampling of 1 ns at 277.15 K, we calculated the rmsf of 41 structures shifted by 1 ns (1001 ns, 1051 ns, ... , 3001 ns). We then subtracted this increase from the rmsf observed in the T-quench simulations. This procedure was repeated for all T-quench simulations and time points. The results are now shown in the newly added Fig. S8. As expected, the resulting rmsf values (red) are generally lower than the rmsf values obtained from the T-quench simulations (black), but within the statistical uncertainty. Notably, because of the decreasing temperature in the T-quench simulations, the rmsf increase due to the additional sampling is expected to be smaller in the T-quench simulations compared to the 277.15-K simulations. Therefore, the rmsf values obtained from the subtraction (red) provide a lower limit of the expected rmsf decrease during cooling, thus now providing 'brackets' of upper (black curves) and lower limits (red), with the true values somewhere in between. To be on the conservative side, we used the T-quench values for all further analyses to make sure that we do not overestimate the effect of cooling. 
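For concreteness, here is a minimal numpy sketch of the rmsf protocol described above (align an ensemble to its average structure, compute per-atom fluctuations about that average, and optionally subtract the extra-sampling drift estimated from time-shifted frames). This is an illustration under our own conventions, not the authors' code; the array names are hypothetical.

```python
import numpy as np

def kabsch_superpose(P, Q):
    """Return P (n_atoms, 3) optimally rotated and translated onto Q."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return Pc @ R.T + Q.mean(axis=0)

def ensemble_rmsf(coords, n_iter=3):
    """coords: (n_structures, n_atoms, 3). Iteratively align all structures
    to their running average, then return the per-atom rmsf about it."""
    ref = coords[0]
    for _ in range(n_iter):
        aligned = np.array([kabsch_superpose(x, ref) for x in coords])
        ref = aligned.mean(axis=0)
    return np.sqrt(((aligned - ref) ** 2).sum(axis=-1).mean(axis=0))

# Hypothetical usage: 'starts' are the 41 structures the T-quench runs were
# started from, 'ends' the corresponding final structures, and 'starts_shifted'
# the structures shifted by the cooling time span in the 277.15-K run.
# rmsf_before = ensemble_rmsf(starts)
# drift       = ensemble_rmsf(starts_shifted) - rmsf_before
# rmsf_after  = ensemble_rmsf(ends) - drift   # lower-bound estimate (cf. Fig. S8)
```

The iterative re-alignment to the running average is one common convention; the exact alignment protocol and atom selection used in the manuscript may differ.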
We have now added a section to the Supplementary Results ("Effect of additional sampling on rmsf during T-quench simulations") to describe these results and added an explanatory sentence to the manuscript (p. 4): "The small rmsf increase for later time intervals does not significantly affect the results of the T-quench simulations (for details, see Supplementary Results)." Also, while I realize that it is difficult to boil down a complex ensemble to one or a few numbers that can be tracked, it would be useful with alternative ways of looking at the ensembles. Are there local differences that are not captured by RMSF? What about rotamer distributions. I will leave it up to the authors whether to explore these issues further in this paper. We agree that describing the structural ensemble by only the distribution of the rmsf values of all atoms may provide a somewhat limited perspective. Clearly, spatial or chemical information as well as information on local relaxation motions will not be seen. Motivated by the reviewer's comment, we now provide additional, orthogonal analyses in the revised manuscript: To provide chemically resolved information, we divided the ribosome-complex atoms into subsets, one containing only RNA atoms, one comprising protein atoms, and separately calculated the rmsf distributions before and after the 128-ns T-quench simulations. Further, to provide spatially resolved information, we divided the set of atoms into five shells around the center of mass of the ribosome, where each set contains the same number of atoms. For each of these sets, we calculated rmsf distributions before and after the longest T-quench simulations with c =128 ns (Fig. S2). Indeed, there are large differences in the absolute values of the rmsf quantiles, before and after cooling, between RNA and protein atoms with the protein atoms showing larger rmsf values (Fig. S2a). Also the atoms in the different shells around the center of the ribosome show different rmsf distributions. Mainly, as expected, the outer shells are more flexible than the inner shells. However, and perhaps more surprisingly, the rmsf reduction during cooling, i.e. the difference between the rmsf before and after cooling, is remarkably similar for all the sets of atoms (Fig. S2b). This observation suggests that the rmsf distribution of all atoms provides, to a first approximation, a valid observable to capture the narrowing of the ensemble during cooling. To also provide information on local relaxation motions, we now investigate in the revised manuscript to what extent the backbone conformations of the proteins are affected by the cooling rate. To this end, we extracted Φ and Ψ angles of the ensembles before cooling, as well as during and at the end of all the T-quench simulations. Ramachandran plots of the ensemble before cooling and at the end of the 128-ns T-quench simulation are now shown in the new Fig. S4a. Indeed, the distribution of angles narrows markedly during cooling. To quantify this effect, we calculated Shannon entropies for these distributions (Fig. S4b). The entropies decreased significantly during the T-quench simulations which nicely quantifies the narrowing of the protein backbone conformation distributions. Interestingly, slower cooling results in more pronounced entropy decreases, confirming the overall trend. 
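A minimal sketch of one way to compute such an entropy from extracted backbone dihedrals is given below; the 10-degree bin width is a placeholder choice, not necessarily the binning used for Fig. S4, and the function names are our own.

```python
import numpy as np

def dihedral_entropy(phi, psi, bin_width=10.0):
    """Shannon entropy (in nats) of the joint (phi, psi) distribution,
    estimated from a 2D histogram with the given bin width in degrees."""
    edges = np.arange(-180.0, 180.0 + bin_width, bin_width)
    counts, _, _ = np.histogram2d(phi, psi, bins=[edges, edges])
    p = counts / counts.sum()
    p = p[p > 0]                      # convention: 0 * log(0) = 0
    return -(p * np.log(p)).sum()

# A narrower Ramachandran distribution after cooling yields a lower entropy:
# S_before = dihedral_entropy(phi_before, psi_before)
# S_after  = dihedral_entropy(phi_after,  psi_after)
```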
This cooling-rate dependence suggests that some of the free-energy barriers which determine the cooling relaxation further analyzed below in the main manuscript actually involve backbone fluctuations and, likely, rearrangements. We have carried out a similar analysis for rotamer angles χ 1 and χ 2 of the amino acids that contain these rotamers (Fig. S4c). As was seen for the backbone dihedrals, also their distributions become narrower during cooling, and this effect is also more pronounced with slower cooling. We thank the reviewer for prompting these additional analyses which in our view provide valuable insights into the shock freezing relaxation dynamics of biomolecules. We have now added a paragraph to the Results section to discuss these results (p. 6). p. 11: In terms of future experimental studies, what kinds of tests of the models could the authors envision? For example, the authors discuss work by Chen et al (Ref 24) on differences depending on the starting conditions. Do the authors' analytical model capture such effects? Do the authors' results lead to specific criteria for selecting good model systems to test the effects of cooling on conformational ensembles? Our kinetic model does not capture the scenario that the relative free-energy difference between conformational states can change with the temperature. We assume that this is probably underlying the dependence of the trapped conformation on the starting conditions in the work by Chen et al. Our simulations also do not capture a situation like this, because the ensemble, from which the T-quench simulations were started, is generated only at one temperature. Therefore, such a model cannot be validated against our simulation data. In our view, a good test of this assumption would be to generate ensembles at several temperatures, check if the (Boltzmann corrected) free-energy differences between conformational states differ for different temperatures, and finally start T-quench simulations from these ensembles to see how well the ratios of the conformational states are preserved during cooling. A good test system would have relatively few states, separated by free-energy barriers that can be overcome during MD simulations. Further, the free energy of some of the states should be dominated by enthalpy while that of others should be entropic, such that a strong dependency on temperature would be seen. We think that such follow-up studies are out of the scope of the present manuscript. We have now added a paragraph to the discussion: "We hypothesize that the dependence of the trapped conformation on the temperature prior to cooling observed in some cryo-EM experiments 24,25,26 might be due to temperature dependent free-energy differences between conformational states. Although our MD simulations fully capture how free energies change with temperature and even the associated non-equilibrium effects, we here started all simulations from an ensemble at one (ambient) temperature, such that this effect cannot be explored. For this reason, we have also not included temperature dependent free-energy differences between conformational states within our simple kinetic models. One might test this hypothesis by generating ensembles at several temperatures, then starting T-quench simulations from these ensembles, and quantifying the occupancy of conformational states during cooling." p. 11/12: Maybe the authors could also briefly discuss the relationship to other techniques that rely on (rapid) cooling including ssNMR and EPR. 
I realize that the cooling process is different, but it might still be worth speculating on how the approaches and models the authors present could be extended to other situations. In this context I'd also like to point out relevant work from Rob Tycko studying protein folding by ssNMR with rapid injection into a cold isopentane bath (https://dx.doi.org/10.1021/ja908471n).

Good idea! As suggested, we have now added the following sentence and references to the introduction: "Rapid cooling, with the freeze-quench method 27, is also used in electron paramagnetic resonance (EPR) spectroscopy experiments to trap intermediate states 28 and in combination with solid-state NMR experiments allows the identification of transient folding intermediates 29." We did not expand too much on this subject, because the main focus of our study is clearly on the interpretation of cryo-EM data.

Minor

p. 1/2: In the discussion of molecules settling into the lowest free energy minima at slow cooling rates, it might be worth making it clear that these minima may well be different from the minima at ambient temperatures.

We fully agree, and have now changed the sentence in the introduction discussing the work of Chen et al. to make clear that the minima can depend on the temperature. We also added a reference to Singh et al. Nat Struct Mol Biol 2019 (as suggested by reviewers Iris Young and James Fraser): "The observation that captured conformations of a ketol-acid reductoisomerase and of temperature-sensitive TRP channels differ dramatically for different temperatures prior to cooling suggests that, in these cases, the minimal free-energy conformations depend on the temperature and that the conformations are preserved during rapid cooling 24,25,26."

p. 4: In the T-quenching MD simulations I couldn't easily find whether the simulations were performed using pressure control and if so how.

Thanks for pointing out the missing information; we have now added a sentence to the Methods Section: "The pressure was coupled to a Parrinello-Rahman barostat 88 (τp = 1 ps)."

p. 6: "the atoms are subjected to harmonic potentials with a force constant c which are uniformly distributed in an interval from −d to d" makes it sound like it is the force constants that are between −d and d. Consider rephrasing.

We have changed the sentence to "In this model, the atoms are subjected to harmonic potentials with a force constant c. The minima of the potentials are uniformly distributed in an interval from −d to d (Fig. 3)."

p. 6: "Model3 is a combination of model2 and model3," should be, I guess, "Model3 is a combination of model1 and model2,"

Yes, that was indeed a typo and is now corrected. Thanks!

p. 6: It is not clear what value of the pre-exponential factor the authors use. I did not go through the maths, but I would assume that the choice would affect the "effective" barrier heights, e.g. in Fig. 4. It would be useful if the authors would clarify this, given that there has been/is some discussion about what pre-exponential factors are relevant for conformational changes in biomolecules.

We used a temperature-dependent pre-exponential factor κ(T/Th)^α, where T is the current temperature and Th is the temperature prior to cooling. For the temperature exponent α, we tested the values 0.5, 1, and 2. The choice of α did not affect how well the kinetic models agree with the MD simulation data (Figure 3e), and we used α = 1 for the further analysis. As assumed by the reviewer, the choice of the scaling factor κ indeed affects the barrier height.
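For readers who want to see how this prefactor enters the kinetics, the following is a minimal sketch (not the authors' code) of an Arrhenius-type barrier-crossing rate with the pre-exponential factor taken as κ(T/Th)^α; the exact functional form, the stepwise cooling profile, and all numerical values here are illustrative assumptions.

```python
import numpy as np

R = 8.314e-3  # gas constant in kJ/(mol*K)

def crossing_rate(T, dG_barrier, kappa=1.0, alpha=1.0, T_h=277.15):
    """Arrhenius-type rate (1/ns) for hopping over a free-energy barrier
    dG_barrier (kJ/mol) at temperature T (K), with the temperature-dependent
    pre-exponential factor kappa * (T/T_h)**alpha (kappa in 1/ns)."""
    return kappa * (T / T_h) ** alpha * np.exp(-dG_barrier / (R * T))

# Fraction of molecules that cross a 10 kJ/mol barrier while the temperature
# steps down from ambient to cryogenic values (placeholder profile), spending
# dt nanoseconds at each step.
temperatures = np.linspace(277.15, 77.0, 200)
dt = 1e-3  # ns per step
p_not_crossed = np.prod(np.exp(-crossing_rate(temperatures, 10.0) * dt))
print(f"fraction crossed: {1.0 - p_not_crossed:.3f}")
```

In such a model, a larger κ allows higher barriers to be crossed during the same temperature drop, which is why fits performed with κ = 1 ns−1 yield lower bounds for the barrier heights.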
Therefore, for model3, we tested values of κ between 0.01 ns−1 and 10 ns−1, obtained the model parameters with Metropolis sampling, and calculated the deviation between the rmsf values obtained from the MD simulations and the rmsf values obtained from the model (we have now added Supplementary Fig. S9). The deviation decreases with increasing κ and stays constant once 1 ns−1 is reached, suggesting that κ ≥ 1 ns−1. Since larger values of κ result in larger barrier heights, our results using a value of 1 ns−1 provide a lower limit of the barrier height. We have now added these results to the Supplementary Information (section "Pre-exponential factor for model2 and model3").

p. 11: The authors write "Biomolecules can thermodynamically access more conformations at room temperature than at the cryogenic temperature". While that is probably mostly true, examples such as cold-denaturation suggest it isn't universally true.

We agree that this statement, as written, is not universally true, as cold denaturation shows. However, we are not aware of any case where cold denaturation has compromised cryo-EM structure determination, presumably because upon plunge-freezing the glass-transition temperature is reached on timescales shorter than those typically expected for unfolding, as our calculations also have shown. We note that any potential or partial cooling-induced unfolding events would also show up in our simulations. We have rephrased the respective sentence on p. 11 to make this issue clearer: "Generally, biomolecules thermodynamically access more conformations at room temperature than at the cryogenic temperature and rates of conformational changes are determined by free-energy barriers and temperatures 22. For very rapid cooling, low temperatures that prevent barrier crossing are quickly reached and the conformations prior to cooling are expected to be kinetically trapped. In contrast, during very slow cooling, most barriers are expected to be overcome and low free-energy conformations are predominantly occupied, resulting in an ensemble that is more homogeneous than the room-temperature ensemble prior to cooling. However, to what extent the rapid cooling perturbs the ensembles is currently unknown." Further down in the discussion (p. 12), we have added a paragraph with a reference to Dias et al. Cryobiology 2010 (reference 76): "We note that during cold denaturation a transient increase of the accessible number of conformations can occur during cooling 76, which, however, is not relevant to cryo-EM studies, because during plunge-freezing the biomolecule typically reaches the glass-transition temperature on timescales much shorter than those expected for unfolding. In our simulations, we have not observed unfolding events either, which provides additional support for this notion."

I have discussed this review with Iris Young and James Fraser, UCSF, who have posted an independent review of the authors' preprint on bioRxiv. Their comments will be posted in the comments section at bioRxiv at https://doi.org/10.1101/2021.10.08.463658. I would recommend that the authors also examine the constructive and overall positive comments in that review, and note in particular a concordance with the point described above on boiling down complex properties of an ensemble to a few numbers.
We were glad to see the review by Iris Young and James Fraser on bioRxiv and have addressed all of their comments at the end of this reply letter (reviewers 4).

Kresten Lindorff-Larsen, University of Copenhagen

Reviewer 2:

This paper concerns the effects of inevitable and essential quench-cooling of biomolecular samples for cryo-EM examination. With the growing recognition of, and interest in, so-called structural heterogeneity, the topic is timely. Recent work on mapping structural variability has further heightened the need for a better understanding of the effects of rapid cooling in cryo-EM sample preparation. The authors combine an interesting set of theoretical techniques to examine and characterize the possible ways in which sample quenching can alter the conformational spectrum. The models presented in this paper involve many adjustable parameters. Fortunately, their results appear to show better agreement when the number of adjustable parameters is somewhat reduced. All in all, the models address a complicated question, and deserve serious attention.

We appreciate the positive and encouraging comments of the reviewer. Indeed, as the reviewer points out, the finding that, of the two cooling-rate dependent models we studied, the one with the fewest adjustable parameters (model 3) best reproduced and predicted the rmsf quantiles derived from the atomistic cooling simulations is important, as it clearly speaks against possible overfitting. As the wording of the reviewer's comment might be read as suggesting a remaining possibility of overfitting, we would like to summarize why we are convinced that overfitting is not an issue here. Generally, it is not the mere number of adjustable parameters that points to overfitting, but their ratio to the number of independent data points to which they are fitted. Here, our kinetic model 3 reproduced and predicted 605 rmsf values, involving only 9 free parameters (while other model variants required up to 25 parameters). Clearly, not all of these 605 rmsf values are fully statistically independent, but because the model is actually able to describe the evolution of rmsf values over four orders of magnitude in cooling time spans (0.1-128 ns), we think the reviewer will agree that these data contain far more information than is actually needed for a stable fit. As an additional guard against overfitting, we performed careful cross-validation by excluding parts of the data from the Bayes fitting (Fig. 3e) and by testing different sets of parameters (Fig. S5), akin to calculating free R factors in X-ray crystallography.

The somewhat disappointing aspect of the work concerns the paucity of significant new insights from the carefully developed and described models. The primary conclusions are, in essence, what one would have guessed: sufficiently rapid cooling "freezes in" the conformational spectrum reminiscent of the uncooled spectrum, while slow cooling allows the system to relax to the minimum-energy conformation.

In light of the appreciation of our new insights by all other reviewers, this comment was a bit of a surprise. We were actually quite happy to see that our simulations confirm the qualitative expectation that rapid cooling prevents the equilibration into lower free-energy minima; had they not, our manuscript would have been rightly questioned due to conflict with experimental data.
However, as appreciated by reviewers 1 and 3 as well as by Young and Fraser, our approach provides new quantitative data on how fast the cooling proceeds and on how much, for a given cooling speed, the ambient-temperature ensemble narrows, as well as a new quantitative relationship between the heights of the barriers that are overcome during cooling and the temperature drop during cooling. This relationship informs the interpretation of cryo-EM experiments and quantifies the limitations of current cooling procedures. In an increasing number of cryo-EM experiments, multiple distinct conformations are routinely resolved, and our results suggest that these conformations are separated by barriers larger than ~10 kJ/mol with current cooling protocols. To resolve conformations separated by lower barriers, more rapid cooling would be required. Besides this cooling-rate dependent effect, we quantified the cooling-rate independent effects of equilibration within local minima and thermal contraction. We are confident that the detailed understanding we have gained of how structural ensembles are affected by the cooling rate will inspire experiments in which both the temperature prior to cooling and the cooling rate are varied. Then, analysis of the observed occupancies of different conformations based on our framework has the potential to quantify not only the free-energy differences between conformations at room temperature but also the heights of the separating barriers. We have re-written parts of the discussion (p. 11-13) to better highlight these novel findings.

It is left open whether the conformational spectra can be modeled in terms of a temperature at all.

We have tried hard to understand this comment, but have to admit that we are still not quite sure if we fully captured its meaning. In case we misunderstood this comment, we would very much welcome a clarification and the opportunity to reply. One way we read the reviewer's comment is to ask whether or not there is a one-to-one relationship between the (width of?) conformational spectra and temperature, i.e., that the temperature uniquely determines the conformational ensembles. But because our simulations clearly show that the cooling process is far from equilibrium, the answer is 'no' and, hence, not left open as claimed. We therefore think it is more likely that the reviewer criticized a lack of detail in our characterisation of how the conformational ensembles change with temperature. To address this concern, we have now obtained the dominant conformational modes and checked whether the distribution along these modes is affected by the cooling. To that end, we first carried out a principal component analysis (PCA) of structures extracted at 5 ps intervals from the trajectory at T = 277.15 K (1-3 µs). To calculate the covariance matrix, we used the coordinates of the P and CA atoms of the RNA strands and proteins, respectively. The eigenvalues of each eigenvector of the covariance matrix correspond to the variance of the projections onto this eigenvector, such that eigenvectors with large eigenvalues describe dominant conformational modes of motion. To restrict the analysis to those conformational modes which contribute most to the overall motion, we chose the 212 eigenvectors whose eigenvalues are larger than 1 Å for further analysis. Next, for each eigenvector, we projected the structures from which the T-quench simulations were started onto the eigenvectors and calculated the standard deviation of the projections, σ_before.
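For concreteness, a schematic numpy version of this projection analysis is sketched below; σ_after for the post-quench ensembles, described next, is obtained with the same projection. This is an illustration only, not the authors' code: the variable names are hypothetical and the eigenvalue threshold is passed in as a plain number.

```python
import numpy as np

def pca_modes(coords, min_eigval=1.0):
    """coords: (n_frames, n_atoms, 3) aligned P/CA coordinates.
    Returns the mean structure and the covariance eigenvectors whose
    eigenvalues exceed min_eigval (the dominant conformational modes)."""
    X = coords.reshape(len(coords), -1)        # (n_frames, 3*n_atoms)
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)       # ascending eigenvalues
    keep = eigval > min_eigval
    return mean, eigvec[:, keep], eigval[keep]

def projection_spread(coords, mean, modes):
    """Standard deviation of the projections of an ensemble onto each mode."""
    X = coords.reshape(len(coords), -1) - mean
    return (X @ modes).std(axis=0)

# Hypothetical usage with aligned coordinate arrays:
# mean, modes, lam = pca_modes(equilibration_coords)           # 277.15-K run
# sigma_before = projection_spread(start_coords, mean, modes)  # T-quench starts
# sigma_after  = projection_spread(end_coords, mean, modes)    # after cooling
# delta_sigma  = (sigma_after - sigma_before) / sigma_before
```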
For each cooling time span c , we then projected the structures at the end of the T-quench simulations and calculated the standard deviation σ after . The normalized difference between the standard deviations Δσ=(σ after -σ before )/σ before for the eigenvectors as a function of the eigenvalues is now shown in Fig. S10. For rapid conformational changes, one would expect the distributions of projections to become narrower upon cooling (Δσ<0) and that this effect is more pronounced for longer cooling time spans. We do not see a clear trend of narrowing, which could be explained by the conformational changes being too slow or a consequence of limited statistics. For the more local changes measured by rmsf as well as the narrowing of backbone dihedral angles and side-chain rotamers, however, we see a clear effect of the cooling (see answer to comment by reviewer 1). Together, these results suggest that the local changes converge faster than the correlated motions involving the whole ribosome, which provides additional information on the structural determinants underlying the different barrier heights derived from model 3. We now describe these new results in an additional section in the Supplement "Effect of cooling on conformational modes". It would be interesting to see if with more statistics a narrowing is also observed for distribution of the conformations along the conformational modes. However, testing this on the large ribosome complex is computationally not feasible. It would be better to carry out extensive simulations on a small protein with more rapid conformational modes instead. This, however, would be a project on its own and is outside of the scope of the presented manuscript. In recognition of the seriousness and timeliness of the work, I recommend the authors consider the comments above, and also highlight the new insights, which warrant publication in Nature Communications rather than an excellent archival journal. We appreciate this recognition and have followed the recommendation as described in reply to the reviewer's comment above. This paper has discussed and quantified the generalized impact of plunge-freezing on native ensemble of biomolecules done to obtain EM images of the relevant population. The authors have performed extensive temperature quenching MD simulation of bacterial ribosome to investigate the contribution of cooling rate on the structural ensembles and reported that cooling-rate largely impacts equilibration into different local minima separated by 8 -10 kJ/mol barrier heights. Authors have also reported that thermal contraction of the biomolecule due to cooling and equilibration within local minima are mostly cooling-rate independent effect of the freezing procedure. The robust discussion on the underlying physics of the plunge-freezing procedure might be useful for optimizing EM imaging process as required. We thank the reviewer for this positive assessment. There are a few major concerns as listed: 1. RMSF is a commonly used metric for the amount of fluctuation occurring in a atom over time relative to its average position. However, RMSF spectrum fails to capture many significant yet subtle and small movements as they focus on the amount of change rather than the quality of change. 
As we know that majority of high frequency fluctuations correspond to thermal motions whereas moderate-to-low frequency fluctuations mostly captures the conformational rearrangements, it would be better if authors can comment on the effect of cooling-rate on different frequency ranges. We thank the reviewer for this comment. This was in fact also suggested by reviewers 1 and 4, so we have carefully selected several new metrics, which we have now analyzed and included within the revised manuscript . For our detailed reply, please see our response to reviewer 1, where we discuss that effects of cooling are indeed seen for different kinds of motion which are occurring on different time scales. Briefly, the distributions of the protein side-chain rotamers and of the protein backbone dihedrals were observed to become narrower in a cooling-rate dependent manner, as observed for the rmsf. As detailed in our response to reviewer 2, we have now also included an analysis of the correlated motions of the ribosome complex obtained using principal component analysis (PCA). Interestingly, and in contrast to the above metrics, we do not see a clear trend of narrowing for the distributions of these large conformational changes. An explanation of this finding could either be that the motions are too slow to be equilibrated during cooling, or limited statistics. Indeed, from cryo-EM experiments (e.g. Fischer et al. 2013) we know that several large-scale motions, such as the intersubunit rotation, are too slow to be equilibrated during cooling. However, the amount of simulations required to distinguish between these possibilities, is currently computationally not feasible. In future work, extensive simulations of a small protein with rapid conformational changes could help to address this question. 2. How cooling affects the dynamics of water (/water models) because proteins thermal fluctuation will also be influenced by the solvent's dynamics? We certainly agree that the biomolecular fluctuations are strongly affected by the solvent and, hence, this is clearly an important question. Note, however, that (at least for smaller proteins) the coupling between solvent and protein dynamics at different temperatures, in particular in relation to the protein glass transition, has already been extensively investigated using neutron scattering experiments (Réat et al. PNAS 2000), MD simulations with different water models (Steinbach et al. PNAS 1993, Norberg et al. PNAS 1996, Vitkup et al. Nat Struct Biol 2000, Tournier et al. Biophys J 2003, and X-ray crystallography (Sartor et al. J Phys Chem 1993, Tilton et al. Biochemistry 1992, Warkentin et al. J Appl Crystallogr 2009). There are also two reviews on this topic (Ringe et al. Biophys Chem 2003, Doster Biochim Biophys Acta Proteins Proteom 2010. All these papers are cited in our introduction. To better point the reader to this literature, we have now changed the sentence introducing these papers to "The effects of low temperatures on protein dynamics and the coupling between solvent and protein dynamics have been studied extensively using…" Based on the extensive discussion in the literature and to keep the focus of the manuscript on protein/RNA dynamics, we would prefer not to include an analysis of water dynamics within our manuscript. There are a few minor concerns as listed below: 1. The RMSD of the predicted rmsf values from model-1 (Fig-3e), didn't vary much even when trained with more simulation data. 
However, the RMSD value was consistently low while trained with all spans of simulation data, even with shorter spans (0.1 -8ns). Can authors elaborate the trend? Thank you for this observation. We have now added a sentence to the results (p. 8) to discuss this observation: "The deviations are similar, because the parameter distributions obtained from the different training sets were similar, suggesting that the contribution to the rmsf decrease described by this model, namely the equilibration in local harmonic potentials, is indeed cooling-rate independent." 2. For Q5, all the models have either overestimated or underestimated the rmsf while compared with simulation data correspond to all cooling-rates investigated. Any reason for why this quantile behaved differently? We thank the reviewer for pointing this out. We have added the following paragraph on page 8: "All models either underestimate or overestimate the median values of the rmsf quantile Q5. However, the rmsf values obtained from the models lie very well inside the confidence intervals (Fig. 3d, gray area). Q5 quantifies the rmsf values of the atoms with the largest heterogeneity in the ensemble. We expect the underlying large conformational changes to be slower than conformational changes resulting in smaller rmsf values. Therefore, we expect the conformational changes underlying Q5 to be less equilibrated, which is supported by the larger confidence intervals." 3. Page-6, line-223: "Model3 is combination of model2 and model3". Do authors mean model2 and model1? Thank you, that was indeed a typo and is now corrected. Can authors elaborate what are "free parameters" and its implication? We assume the reviewer suggested a more detailed description of precisely which free parameters we have chosen, which we now have added to the manuscript (see below). But because the reviewer's question might also be read as what we mean by the term "free parameters", and to be sure we are talking about the same thing, let us first provide an explanation: Free parameters are the 'unknowns' if you wish, model parameters that enter the likelihood function; their probability distribution is calculated (see Fig. S6) by applying the Bayes theorem and Metropolis sampling of the probabilities. This probability distribution yields their expectation values and uncertainty estimates. We have adjusted the text to make this clear (p. 8): "The free model parameters optimized by the Bayes approach are d and c for model1, ∆G ‡ , ∆x, ∆G for model2, as well as d, c, ∆G ‡ , ∆G, and ∆x for model3." Reviewers 4: Iris Young and James Fraser (via biorxiv) The capability of cryo-electron microscopy (cryoEM) to capture multiple and native-like conformations of large macromolecules is transforming structural biology. This manuscript explores intricacies of the cooling process as it relates to structural ensembles. Specifically, how do variations in starting sample conditions (water layer thickness, water/sample starting temperature) and cooling (ethane layer thickness, rate of cooling) affect the distribution of structural states captured in the resulting micrographs? Can we be confident that the results of "time-resolved" cryoEM experiments are representative of barriers and basins we hope to capture? By a combination of molecular dynamics simulations and cryoEM experiments, the authors guide us to an empirical understanding of these questions. 
To understand the relationship between cooling and structural ensembles, we must begin with the thermodynamic principles in play. Ensembles represent the many possible conformational states of a structural unit, and the occupancies of the individual states depend on the energy landscape across which they are related. At the extremes, we may imagine an ensemble cooled instantaneously to 0 K, whose component structures would not be able to traverse the energy landscape in any direction, as well as an ensemble injected with enough energy to overcome any energy barrier on the landscape (i.e. a system at thermal equilibrium), whose component structures would move freely to occupy all possible states. In the latter case we would not expect all states to be occupied by the same number of particles, however -particles with exactly enough energy to breach a particular energy barrier are equally likely to fall to either side of it, but particles at different starting points with the same starting energy have different likelihoods of escaping their local energy minima. In aggregate, this produces the Boltzmann distribution, in which the populations of different states depend entirely on their relative energies. For intermediate temperatures, it is useful to speak in shorthand of energy barriers and a system's ability to overcome them. We believe the introduction of this manuscript deserves a more complete illustration of the fundamental principles, however, and would encourage the authors to possibly even add a diagram or two to aid in this effort. It is too easy to confuse the fact that cooled structures move toward global energy minima with the idea that cooling gives them the energy needed to overcome energy barriers, which is precisely the opposite of the truth. The abstract in particular ought to be reworded to avoid this misinterpretation. As suggested by the reviewers, we added a schematic of how cooling may affect ensembles (Fig. 1a). We also improved the abstract and introduction text to make it clearer that it is the additional time spent at higher temperatures that allows for barrier crossing into lower free-energy minima. The design of the computational and wet lab experiments is carefully geared toward isolating the relevant variables and reproducing the relevant states and processes. For example, to choose the equilibration times to use in MD simulations, the authors first simulated how long it would take for samples of various thicknesses to vitrify. By and large we are satisfied with the parameters of these experiments, but we question one unsupported assumption: the authors enforced an ethane bath outer boundary held at constant temperature. This could be possible if the ethane bath remains in contact with a heat sink and the equilibration time between ethane and the heat sink is negligible compared with that between ethane and the sample, but we do not see this discussed or justified, nor is this standard practice when freezing grids, as the ethane bath is isolated from the standard liquid nitrogen heat sink after reaching the desired temperature to prevent the ethane from freezing solid. We were confused by the plot of equilibration times for ethane layers of different thicknesses, and hypothesize that the unexpected (to us) trend is a result of this boundary condition: we would have imagined a thicker ethane layer to allow quicker absorption of heat from the water layer, but the opposite is shown. 
Moreover, there is often a cold gas layer (see work by Rob Thorne on hyperquenching: https://pubmed.ncbi.nlm.nih... and https://journals.iucr.org/m.... While this complication might be very difficult to simulate, it should be explained how it might affect the interpretation of results. Concerning the wet lab experiments mentioned in the first sentence of the reviewer's comment, we think there was a misunderstanding: We did not perform any wet lab experiments, and what the reviewer is likely referring to (Fig. 1b, we assume) are continuum solutions of the heat equation for the described system. Upon re-reading the abstract, we realized that unclear phrasing in the abstract might have caused the misunderstanding. We have now changed the abstract accordingly. Concerning the motivation for the thickness and the heat bath coupling of the ethane bath, we would like to clarify that in our work we have used three layers of modeling. (1) estimation of the temperature drops during plunge-freezing (solution of the heat equation), (2) calculation of the response of a macromolecular system to different cooling rates (MD simulations of ribosome complex), and, (3) several kinetic models trained against the MD simulation data and then applied to the temperature drops estimated by model layer (1). We have now expanded the respective summary of these three layers in the Discussion of our revised manuscript to avoid any misunderstanding. As pointed out by the reviewers, for the estimation of the temperature drop the temperature at the outer boundaries, which are in contact with the ethane, is indeed kept constant. In contrast to what the reviewers seem to have assumed, however, we have not chosen to model the walls of the ethane container as close as possible to the experimental situation -which is different, as the reviewers correctly point out. The reason for this choice is that in the real experiments the ethane container is much larger than any reasonable grid size of the numerical continuum calculation would allow us to implement. To solve the heat equation, however, a boundary condition is nevertheless necessary, which raises the question to which extent the too narrow ethane bath affects the results. Because of the small water-layer width, low temperatures (< glass transition temperature) in the water layer are expected to be reached well before the temperature increase in the ethane layer reaches the walls of the ethane container and we therefore do not need to include the whole ethane container in the model. It suffices to make the ethane layer wide enough such that the temperature drop in the water layer is not affected by the ethane-layer width. The various chosen ethane layer thicknesses aim to answer this question and help to extrapolate to the real situation of essentially 'infinite' thickness (for which the boundary condition becomes irrelevant). In particular, to test how wide the ethane layer should be, we successively increased the width (100 nm up to 3.2 µm). As can be seen, the difference between the temperature drops for widths 1.6 µm and 3.2 µm is very small and only occurs after around 10 -5 s which is much longer than the time it takes to reach the glass transition temperature (< 10 -6 s). We therefore chose the temperature drops obtained for 3.2-µm width for the following analysis. This reasoning was not clear enough from the text, and we have now modified it accordingly. 
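To make explicit how the fixed-temperature outer boundary and the ethane-layer width enter a calculation of this kind, here is a schematic 1D explicit finite-difference sketch of the heat equation; the geometry, diffusivities, and grid settings are placeholder values chosen for illustration, not the parameters of the manuscript's continuum model.

```python
import numpy as np

# Placeholder 1D geometry: ethane | water | ethane, symmetric about the center.
ethane_w, water_w, dx = 1.6e-6, 50e-9, 2e-9          # widths and grid spacing (m)
n_eth, n_wat = int(ethane_w / dx), int(water_w / dx)
alpha = np.full(2 * n_eth + n_wat, 8e-8)              # thermal diffusivity (m^2/s), ethane (placeholder)
alpha[n_eth:n_eth + n_wat] = 1.3e-7                    # water layer (placeholder)

T = np.full(alpha.size, 90.0)                          # ethane bath near 90 K
T[n_eth:n_eth + n_wat] = 277.0                         # sample layer starts warm

dt = 0.4 * dx**2 / alpha.max()                         # explicit-scheme stability limit
for step in range(20000):
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2       # discrete Laplacian
    T[1:-1] += dt * alpha[1:-1] * lap
    T[0], T[-1] = 90.0, 90.0                           # fixed-temperature outer boundary
center_T = T[n_eth + n_wat // 2]                       # slowest-cooling point (layer center)
print(f"center of water layer after {20000 * dt:.2e} s: {center_T:.1f} K")
```

Widening the ethane region in such a sketch until the water-layer temperature drop no longer changes mimics the convergence test with ethane widths of 100 nm to 3.2 µm described above.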
We also thank the reviewers for pointing us to the papers by Robert Thorne, and we have now included the following in our discussion: "Above the liquid cryogen, a cold gas layer with a thickness of several mm was observed [Warkentin et al. 2006, Engstrom et al. 2021]. To what extent the cryo-EM samples are already cooled when they move through the gas layer during plunging is not clear. The slower temperature drop due to precooling by the cold gas layer would allow biomolecules to overcome higher barriers, resulting in more homogeneous ensembles." Although it does not impact the methods or results of this paper, we are also unconvinced that the use of any vitrification bath held at a lower temperature than the commonly used ethane bath would necessarily result in faster freezing, as heat transfer is also dependent on heat capacity (hence the selection of ethane for plunge-freezing rather than liquid nitrogen!). Propane does indeed appear to have a higher heat capacity than ethane at similar cryogenic temperatures, so in the case of an ethane-propane mixture, this assumption does hold, but we would prefer the authors include this detail. We agree with this assessment, and in the interest of staying within the word limit of Nature Communications, we have decided to remove this part, which is not essential for our main conclusions. As for the results of the study, it is well-evidenced and clearly presented that conformational distributions present before plunge-freezing are reflected in vitrified samples when a standard vitrification protocol is followed, and that the rate of cooling does indeed impact the degree to which these distributions are preserved. We especially applaud the authors' careful wording around what interpretations are supported or suggested by the data, leaving open the remote possibility of other explanations; they draw very clear distinctions between observations and analyses. This is good science! Finally, we find the implications of the study meaningful. The selection of a ribosomal complex as an example particle perfectly illustrates the biologically relevant range of flexibilities and temperature-dependent conformational ensembles. This example gives us an intuitive measuring stick for other types of structures. Taken as a whole, the analyses inform future "time-resolved" studies using cryo-EM and the design of other experiments that depend on the preservation of a conformational ensemble by rapid cooling. This is a very exciting direction of inquiry that we will continue to watch with great interest! We thank the reviewer for these kind words. Minor points: -The authors could make the introduction even more clear and accessible by specifying that liquid specimens present a challenge because their vapor pressure is incompatible with high vacuum. We have changed the corresponding sentence to: "In general, biomolecules perform their functions in solution. However, the direct study of specimens in liquid solutions using EM is impeded because the high vacuum required by EM is incompatible with the vapor pressure of liquid solutions." -We favor the wording "most often liquid ethane" over "mostly liquid ethane" for describing the standard vitrification setup. We agree and the sentence is now changed. -Sentences such as the second to last sentence in the second paragraph of the introduction could be broken into multiple sentences or otherwise simplified to avoid confusion among the several "which," "from" and "and" clauses.
We have now carefully gone over the revised manuscript to avoid too long sentences and ambiguous 'which' etc references. -Sobolevsky's work on TRP channels and vitrification probably deserves a mention in the intro alongside the other examples, esp. because those probe a natural temperature sensor! (https://www.nature.com/arti..., https://www.nature.com/arti… We thank the reviewers for pointing us to this amazing work. We have added both references to the introduction: "The observation that captured conformations of a ketol-acid reductoisomerase and of temperature-sensitive TRP channels differ dramatically for different temperatures prior to cooling suggests that, in these cases, the minimal free-energy conformations depend on the temperature and that the conformations are preserved during rapid cooling 24,25,26 ." -Tomography can be used to resolve position in ice layer, "Apart from the water-layer thickness, the temperature drop also depends on the position within the layer with the slowest drop in the center (Fig. 1a), which is relevant, because in the time between the spreading of the sample onto the grid and the plunging, the biomolecules tend to adsorb to the air-water interface 62." (https://pubmed.ncbi.nlm.nih... The introduction already referred to this work; we have now added an additional reference at this position. -The authors could comment on whether the effects of being in different parts of the ice layer would affect RMSF values. If the effects are not too small, reconstructions from different layers could yield B-factors that could deconvolute different effects! We thank the reviewers for this nice idea and have added the following to the discussion: "Finally, we would expect cooling rates to depend on the position in the ice layer, e.g., depending on the distance to the surface of the sample layer. Reconstructions from different positions could further help to disentangle the effects." -Are there local effects to their RMSF kinetic/thermodynamic models in the ribosome? For example, could they subdivide RNA/Protein, small/large subunit, exterior/interior sites, etc? Are there any regions that increase conformational heterogeneity upon cooling (as we have seen often in multi temperature crystallography)? Using a global RMSF metric may be leaving out interesting phenomena. This has also been a suggestion by reviewers 1 and 3, which led us to include a number of additional analyses to this effect. Please see our reply to the comment of reviewer 1 for a detailed description. -Are the frames and code deposited somewhere for others to examine? We will make the structures of the ensembles before and after cooling available for download (zenodo.org), with a link provided in the Data Availability statement. The code can be obtained from the authors upon request. Iris Young and James Fraser (UCSF)
PLOD2 Is a Prognostic Marker in Glioblastoma That Modulates the Immune Microenvironment and Tumor Progression This study aimed to investigate the role of Procollagen-Lysine, 2-Oxoglutarate 5-Dioxygenase 2 (PLOD2) in glioblastoma (GBM) pathophysiology. To this end, PLOD2 protein expression was assessed by immunohistochemistry in two independent cohorts of patients with primary GBM (n1 = 204 and n2 = 203, respectively). Association with the outcome was tested by Kaplan–Meier, log-rank and multivariate Cox regression analysis in patients with confirmed IDH wild-type status. The biological effects and downstream mechanisms of PLOD2 were assessed in stable PLOD2 knock-down GBM cell lines. High levels of PLOD2 significantly associated with (p1 = 0.020; p2 < 0.001; log-rank) and predicted (cohort 1: HR = 1.401, CI [95%] = 1.009–1.946, p1 = 0.044; cohort 2: HR = 1.493; CI [95%] = 1.042–2.140, p2 = 0.029; Cox regression) the poor overall survival of GBM patients. PLOD2 knock-down inhibited tumor proliferation, invasion and anchorage-independent growth. MT1-MMP, CD44, CD99, Catenin D1 and MMP2 were downstream of PLOD2 in GBM cells. GBM cells produced soluble factors via PLOD2, which subsequently induced neutrophils to acquire a pro-tumor phenotype characterized by prolonged survival and the release of MMP9. Importantly, GBM patients with synchronous high levels of PLOD2 and neutrophil infiltration had significantly worse overall survival (p < 0.001; log-rank) compared to the other groups of GBM patients. These findings indicate that PLOD2 promotes GBM progression and might be a useful therapeutic target in this type of cancer. Introduction GBM is the most common and fatal malignant primary brain tumor in adults [1]. Despite the extensive standard of care therapy including maximal safe surgical resection followed by radiation and chemotherapy, the relative 5-year survival of GBM patients is less than 7% with a median survival of 14 months [1][2][3]. The diffuse infiltration in the surrounding brain parenchyma makes complete surgical resection impossible and is the main reason for recurrence and therapy resistance of GBM (reviewed in [4]). Therefore, the focus of research regarding therapeutic approaches for GBM has shifted towards immunotherapy and individualized therapy. In particular, numerous vaccine approaches, oncolytic viruses and immune-checkpoint inhibitors are in preclinical and clinical trials [3,5]. Furthermore, specific targeted therapies gained importance through a better understanding of the underlying molecular heterogeneity of GBM. Despite these multimodal approaches, GBM remains an incurable disease at present. Thus, there is still an urgent need to identify novel cellular and molecular mechanisms that control the progression of GBM and could serve as therapeutic targets in this type of cancer. Accumulating evidence indicates that the interplay between tumor cells and the tumor microenvironment (TME) plays a crucial role in tumor migration, invasion and progression [6,7]. The TME is largely determined by the extracellular matrix (ECM) with collagen as the most abundant protein [8]. During tumor progression, increased collagen crosslinking promotes stiffening of the extracellular matrix, thus enhancing invasion and metastasis [9,10]. The main enzyme mediating stabilized collagen crosslinks is Procollagen-Lysine,2-Oxoglutarat 5-Dioxygenase 2 (PLOD2) [11]. 
This membrane-bound homodimeric enzyme hydroxylates lysine residues in the telopeptides of procollagens and thus, plays a crucial role in the post-translational modification of collagen biosynthesis [11]. The resulting hydroxyl groups are essential for the formation of stable crosslinks by lysyl oxidases [12]. PLOD2 is upregulated in various cancers and is associated with poor outcomes in bladder cancer [13], hepatocellular carcinoma [14,15] and breast cancer [16] (and reviewed in [17]). Exploratory studies on a small cohort of 28 GBM patients indicated that PLOD2 was also associated with poor survival in this type of cancer [18]. Furthermore, Xu et al. found that high gene expression of PLOD2 was significantly associated with poor overalland progression-free survival in glioblastoma patients [19]. The same study also showed that increasing PLOD2 protein levels were associated with increasing tumor grade in glioma [19]. At the molecular level, PLOD2 induces epithelial-mesenchymal transition [20] and activates the PI3K-Akt [20], JAK-STAT [21] and FAK [19] signaling pathways. Although the exact mechanisms are still largely unclear, PLOD2 can be expected to affect key signaling pathways in tumor cells, thereby modulating tumor progression. The TME also contains a variety of infiltrating and resident immune cells that interact with the tumor cells and modulate their biology and functions. Recent studies showed that the GBM microenvironment hosts a large number of tumor-infiltrating neutrophils, which are actively recruited by GBM cells through the expression of IL-8 and IL-1b [22,23] (and reviewed in [24]). Importantly, the presence of infiltrating neutrophils in GBM was significantly associated with a poor outcome in these patients (reviewed in [24]). These findings suggest that neutrophils are substantially involved in the progression of GBM. As an important modulator of the extracellular matrix and TME, PLOD2 may also activate the tumor-infiltrating neutrophils. The role of PLOD2 in the pathophysiology of GBM still requires extensive characterization. This study aimed to determine (1) the association between PLOD2 expression and the clinical outcome of IDH wild-type GBM patients; (2) the involvement of PLOD2 in the modulation of GBM tumor cell functions and (3) the effect of PLOD2 on the biology and function of neutrophils. PLOD2 Associates with and Predicts Poor Overall Survival of GBM Patients Previous studies by Xu et al. using the TCGA database showed that high gene expression of PLOD2 is significantly associated with a poor outcome in GBM patients [19]. Here, we investigated whether the protein levels of PLOD2 in tumor tissue are associated with overall survival (OS) or progression-free survival (PFS) of GBM patients with confirmed IDH wild-type (IDH WT) status. To this end, the levels of PLOD2 were assessed by immunohistochemistry (see Material and Methods section) in two independent patient cohorts. PLOD2 expression was subsequently dichotomized into "low" and "high" based on the median-split method. The survival curves were plotted according to the Kaplan-Meier method and the statistical significance was assessed with the log-rank test. In the Hannover cohort, GBM patients with high tumor levels of PLOD2 (PLOD2 high ) had a significantly shorter OS compared to patients with low levels of PLOD2 (PLOD2 low ) (p = 0.020; log-rank) ( Figure 1A). These findings were confirmed in the Magdeburg cohort of GBM patients (p < 0.001; log-rank) ( Figure 1B). 
In both cohorts, PLOD2 high patients had a shorter PFS compared to their PLOD2 low counterparts, but statistical significance was only reached in the Magdeburg cohort (p = 0.001; log-rank) (Figure 1C,D). We further analyzed the OS and PFS of IDH WT GBM patients using Cox proportional-hazard models adjusted for factors known to influence the patients' outcome, such as age [25], Karnofsky Performance Scale (KPS) [26], extent of surgical resection [27], therapy [28] and MGMT methylation status [29]. Initial analysis of the time-dependent covariate (T_COV_) for PLOD2 showed that the proportional hazard assumption of these models had been satisfied. For OS, PLOD2 high patients had a significantly increased hazard ratio compared to PLOD2 low patients in both cohorts (cohort 1: HR = 1.401, CI [95%] = 1.009-1.946, p = 0.044; cohort 2: HR = 1.493, CI [95%] = 1.042-2.140, p = 0.029) (Figure 2A). For PFS, PLOD2 high patients had an increased hazard ratio compared to PLOD2 low patients, but statistical significance was only reached in the Magdeburg cohort (HR = 1.645, CI [95%] = 1.040-2.603, p = 0.033) (Figure 2B). These data indicate that PLOD2 could serve as an independent prognostic biomarker, at least for the overall survival of GBM patients.

PLOD2 Promotes the Invasion, Proliferation and Anchorage-Independent Growth of GBM Cells

Using U87 and U251 GBM cells, recent studies showed that PLOD2 enhanced the aggressiveness of GBM cells by promoting tumor invasion [19,20]. Here we investigated the effect of PLOD2 on the biology and functions of GBM using the H4 GBM cell line. To this end, the cells were stably transfected with a sh-RNA plasmid to downregulate the levels of PLOD2 (sh-PLOD2) or with a control plasmid (sh-control) (see Material and Methods section). All functional assays were performed in the absence of the selection antibiotic puromycin. Control western blot analysis confirmed that the PLOD2 knock-down remained stable until at least day 10, which was the last time point of the longest assay (Supplementary Figure S1). Tumor invasion was assessed by the degree of "gap" closure (red line) in a 3D collagen matrix, using the Oris TM system (Figure 3A). The results showed that PLOD2 knock-down significantly reduced the invasiveness of H4 GBM cells (Figure 3B). We additionally determined the activity of matrix metalloproteases (MMP2 and MMP9), since MMPs are critical for tumor invasion in many types of cancer, including GBM [30]. To this end, sh-control and sh-PLOD2 GBM cells were incubated in a culture medium and the supernatants were collected 48 h later. A culture medium without cells was used as a control.
We found that H4 GBM cells released MMP2 but only negligible levels of MMP9 (Figure 3C). Importantly, PLOD2 knock-down cells released significantly lower levels of MMP2 compared to their control-transfected counterparts (Figure 3D). These findings indicate that PLOD2 enhances the invasiveness of GBM cells, possibly via MMP2. In further studies, we investigated the role of PLOD2 in GBM proliferation by assessing the metabolic activity of the transfected H4 cells using the MTT assay. To this end, the concentration of metabolized MTT was measured at different time points in one setup with 2000 cells and another with 4000 cells. The results showed that PLOD2 knock-down decreased the metabolic activity of H4 cells in both setups (Figure 4A,B). However, statistical significance was only reached for certain time points.
We additionally determined the anchorage-independent growth of transfected GBM cells by allowing the cells to form colonies in low-gelling agarose for 10 days (Figure 4C). We found that PLOD2 knock-down cells formed significantly fewer colonies than their sh-control counterparts (Figure 4D). Taken together, these data indicate that PLOD2 promotes the invasion, proliferation and anchorage-independent growth of GBM cells. In all studies, statistical analysis was performed with the paired t-test.

PLOD2 Modulates the Expression of Catenin D1, CD44, CD99 and MT1-MMP in GBM Cells

As shown above, PLOD2 modulates the biological functions of GBM cells. To obtain further insight into the molecular mechanisms downstream of PLOD2 in GBM cells, we assessed by western blot the protein expression of several markers associated with tumor proliferation and invasion, such as Catenin D1, CD44, CD99, CDK6, EGFR, HIF1-beta, Integrin beta-1, MT1-MMP and PRAS40. We found that the levels of Catenin D1, CD44, CD99 and MT1-MMP were significantly lower in PLOD2 knock-down cells compared to their control-transfected counterparts (Figure 5A-E).

GBM-Associated PLOD2 Induces Neutrophil Granulocytes to Acquire a Pro-Tumor Phenotype

Accumulating evidence indicates that the GBM microenvironment contains significant numbers of neutrophils and that high neutrophil infiltration is associated with poor outcomes in GBM patients (reviewed in [24]). Furthermore, very recent studies found a correlation between high tumor levels of PLOD2 and high neutrophil infiltration in cervical [31] and hepatocellular carcinoma [14] tissues. Based on these findings, we hypothesized that GBM cells modulate the biology and functions of neutrophils via PLOD2. To test this hypothesis, we produced conditioned supernatants (SN) from sh-control and sh-PLOD2 GBM cells. Subsequently, we stimulated peripheral blood neutrophils with these supernatants and determined neutrophil survival as well as the release of MMP9, both indicators of a pro-tumor neutrophil phenotype (Figure 6A). The results showed that the sh-control SN from H4 cells prolonged the survival of neutrophils at 24 h post-stimulation (Figure 6B). This effect was significantly lower upon stimulation with sh-PLOD2 H4 SN (Figure 6B). To confirm these findings, we additionally stimulated neutrophils with SN from a second GBM cell line (U251).
Similar to the H4 SN, the sh-control U251 SN prolonged neutrophil survival while sh-PLOD2 U251 SN had a significantly weaker effect (Figure 6B). To test whether GBM cells induce neutrophils to release MMP9, we stimulated neutrophils with sh-control or sh-PLOD2 SN for 1 h and determined MMP9 release by gelatin zymography. We found that the sh-control SN from both H4 and U251 cells induced neutrophils to release MMP9 (Figure 6C,D). The release of MMP9 by neutrophils was significantly lower upon stimulation with sh-PLOD2 SN (Figure 6C,D). GBM SN without neutrophils had only negligible levels of MMP9 (data not shown). To exclude potential clonal effects, we repeated this set of studies with SN derived from different sh-PLOD2 clones, for both H4 and U251 cells, and obtained similar results (Supplementary Figure S2A,B). Taken together, these findings indicate that GBM cells release soluble factors via PLOD2, which stimulate neutrophils to acquire a tumor-promoting phenotype. To test the clinical relevance of these findings, we stained GBM tissues against the neutrophilic marker CD66b. The patients were subsequently divided into four groups according to the combined expression of CD66b and PLOD2: CD66b low/PLOD2 low, CD66b low/PLOD2 high, CD66b high/PLOD2 low and CD66b high/PLOD2 high. Kaplan-Meier analysis revealed that CD66b high/PLOD2 high patients had the shortest overall survival of all GBM patients (Figure 6E). These results were confirmed in a multivariate Cox regression model adjusted for age, KPS, therapy, resection efficiency and MGMT status, where CD66b high/PLOD2 high patients had a significantly increased hazard ratio compared to the other groups of GBM patients (HR = 1.703, CI [95%] = 1.067-2.720, p = 0.026) (Figure 6F).

Discussion

An increased effort has been made to identify the cellular/molecular factors that modulate the pathophysiology of GBM and that could provide information regarding diagnosis, prognosis and therapy in this type of cancer. PLOD2 is a promising biomarker and a target for cancer therapy, but its exact role in GBM still requires characterization. Several studies found an association between PLOD2 overexpression and poor outcome in multiple types of cancer, such as sarcoma [32], breast cancer [16], hepatocellular carcinoma [14] and bladder cancer [13]. Previous studies on a small (n = 28) cohort of GBM patients suggested that PLOD2 may serve as a biomarker in this type of cancer [18]. Song et al. found an association between high PLOD2 expression and poor outcomes in glioma patients. However, their study did not distinguish between high-grade and low-grade gliomas [20]. In a comprehensive study on glioma and GBM patients, Xu et al. demonstrated that increasing PLOD2 protein levels are associated with increasing tumor grade. Furthermore, the gene expression of PLOD2 was significantly higher in GBM than in healthy tissues and correlated with overall and progression-free survival [19]. Using two independent cohorts of IDH WT GBM patients, our study shows that high protein levels of PLOD2 (PLOD2 high) are significantly associated with and predictive of poor overall survival of these patients.
Together with the study by Xu et al., these findings indicate that PLOD2 could be a robust biomarker for the survival of GBM patients. We next sought to characterize the biological functions of PLOD2 in GBM cells. We found that PLOD2 promoted the invasiveness of H4 GBM cells. These data are in line with previous studies showing that PLOD2 modulates the migration and invasion of glioma cells. Specifically, Song et al. showed that PLOD2 knock-down suppressed the migration and invasion of U87 and U251 GBM cells, while Xu et al. showed in the same cell lines that the depletion of PLOD2 decreased invasion in vitro and in vivo, possibly by remodeling the stiffness of the ECM and decreasing the focal adhesion plaques [19,20]. We additionally found that PLOD2 promoted the release of ECM-degrading MMP2-a mechanism associated with enhanced invasiveness and worse outcomes in different types of cancer, including glioma [33][34][35]. Furthermore, we demonstrate that PLOD2 promotes the metabolic activity and the anchorage-independent growth of GBM cells. The effect of PLOD2 on the anchorage-independent tumor growth was especially striking since PLOD2 knock-down led to almost a complete inhibition of colony formation in GBM cells. Together, these findings are of particular importance for the pathophysiology of GBM, since high tumor invasiveness into the adjacent brain tissue and rapid growth are the main reasons why these tumors remain incurable at present. It should be mentioned at this point, that regulation of the ECM and tumor invasion is extremely complex. PLOD2 alone seems to play multiple roles in this process, since it can both degrade the basement membrane via MMP2 release (our own data), as well as induce collagen crosslinking/stiffening, thereby creating a "highway" for local invasion and activating different signaling pathways by mechanotransduction [6,17,19]. Furthermore, as elegant studies by Georgescu et al. recently showed, there are many other factors modulating the ECM program in the microenvironment of GBM [36]. Thus, the mechanisms involved in GBM invasion and the exact role of PLOD2 in this process still require further characterization. Previous studies found an association between PLOD2 and epithelial-mesenchymal transition (EMT), hypoxia-induced activation of PI3K-Akt signaling, as well as FAK phosphorylation in GBM cells [19,20]. Here, we found that PLOD2 knock-down cells had decreased levels of MT1-MMP, which is known to be a key regulator of cell migration and invasion in GBM [37][38][39]. Furthermore, MT1-MMP is an important activator of MMP2 leading to an enhanced invasion and progression in different tumor types including GBM [37,[40][41][42]. These findings support our data on the PLOD2-mediated release of MMP2 in GBM cells (see above). We additionally found that PLOD2 knock-down decreased the levels of CD44 indicating that PLOD2 regulates CD44 expression. CD44 is known to promote tumor formation through interactions with the tumor microenvironment and is involved in various cellular processes including invasion, proliferation and apoptosis in many types of cancer including GBM (reviewed in [43]). Increased CD44 expression was associated with worse survival in GBM [44]. By which mechanisms PLOD2 affects CD44 expression in GBM remains unclear. However, previous studies in laryngeal carcinoma suggested that PLOD2 enhanced CD44 expression via activation of the Wnt-signaling pathway [45], which may be a possible explanation for GBM as well. 
Our study further found that PLOD2 knock-down decreased the levels of CD99 in GBM cells. CD99 is known to alter the structure of the cytoskeleton, thus facilitating cell migration [46,47]. Indeed, studies on the GBM cell line U87 showed that CD99 overexpression increased the migration and invasion of these cells [46,47]. The exact mechanisms of CD99 regulation by PLOD2 are, however, currently unknown and remain to be characterized in future studies. Finally, our data showed that Catenin D1 levels were lower upon PLOD2 knock-down. Catenin D1 has been previously linked to oncogenic signaling pathways important for anchorage-independent cell growth [48]. This supports our findings that PLOD2 knock-down cells were almost completely unable to form colonies in soft agar clonogenic assays. Increasing evidence suggests that the interplay between tumor cells and the immune system is a key modulator of tumor biology and determines cancer pathogenesis and progression. Recent studies found a significant correlation between PLOD2 and infiltrating immune cells including neutrophils in cervical, hepatocellular and lung cancer [14,31,49]. These findings suggest that PLOD2 is an important modulator of the tumor immune microenvironment. The GBM microenvironment contains high numbers of infiltrating neutrophil granulocytes [23], which were found to associate with a poor outcome in GBM patients [22] (and reviewed in [24]). Neutrophils release various factors in the microenvironment, which can promote tumor progression. For instance, neutrophils have strong pro-angiogenetic activity via the release of MMP9 and vascular endothelial growth factor (VEGF). Additionally, neutrophils promote tumor motility, migration and invasion via the release of neutrophil elastase, cathepsin G, proteinase 3, MMP8 and MMP9. Under physiological conditions, neutrophils rapidly undergo apoptosis. However, their lifespan can be prolonged by tumor-derived factors resulting in enhanced local inflammation and, ultimately, tumor progression [50]. Our study shows that PLOD2 controls the production of (currently unidentified) soluble factors by GBM cells, which subsequently enhance neutrophil survival and the release of MMP9. Moreover, patients with synchronous high expression of PLOD2 and the neutrophilic marker CD66b had a significantly shorter overall survival compared to the other groups of GBM patients. These data suggest that PLOD2 modulates the immune microenvironment of GBM leading to the progression of this cancer. All of the above supports the fact that PLOD2 promotes GBM progression and, thus, might serve as a potential therapeutic target. Several pharmacological inhibitors of PLOD2 are available at present. In particular, Minoxidil was found to reduce sarcoma migration and metastasis in vitro and in vivo by inhibiting PLOD family members [32]. Similarly, inhibition of PLOD2 by Minoxidil reduced tumor migration in lung carcinoma and might prevent metastasis in this type of cancer [51]. Interestingly, several studies additionally found that Minoxidil enhanced drug delivery, including that of Temozolomide, by permeabilizing the blood-tumor barrier (BTB) in GBM [52,53]. It would be, therefore, tempting to speculate that GBM patients with high expression of PLOD2 might benefit from individualized therapies with PLOD2 inhibitors, such as Minoxidil. In summary, our study identifies PLOD2 as an independent prognostic biomarker in GBM. 
Furthermore, we demonstrate that PLOD2 mediates important biological functions of GBM cells, such as proliferation, invasion and anchorage-independent growth. We additionally show that PLOD2 regulates the expression of MMP2, MT1-MMP, CD44, CD99 and Catenin D1 in GBM cells. Importantly, we link PLOD2 with the immune modulation of neutrophils in the microenvironment of GBM. The main results of our study are summarized in Figure 7. These findings contribute to a better understanding of GBM pathophysiology and may ultimately foster the development of novel therapeutic strategies against this type of cancer.

Study Subjects

In this study, we retrospectively analyzed tissues from two independent cohorts of adult patients with histopathologically confirmed, newly diagnosed GBM. The tumors were clinically classified as primary GBM, as no lower grade glioma had been documented in the patient's medical history. The clinical characteristics of all patients are summarized in Supplementary Table S1. The survival analysis was performed only on patients with confirmed IDH wild-type (WT) status.
The clinical characteristics of the IDH WT patients are separately summarized in Supplementary Table S1.

Tissue Microarrays (TMA): Immunohistochemistry and Scoring

TMA blocks were built using the Arraymold kit E (Riverton, UT, USA) as previously described [54,55] and cut into 2 µm sections. The sections were incubated with 667 ng/mL PLOD2-specific polyclonal antibodies (Proteintech Europe, Manchester, UK) or 500 ng/mL anti-human CD66b antibodies (BioLegend, San Diego, CA, USA) at 4 °C overnight. Secondary and colorimetric reactions were performed using the UltraVision TM Detection System according to the manufacturer's instructions (Thermo Scientific, Fremont, CA, USA). Nuclei were counterstained with Haematoxylin (Carl Roth, Karlsruhe, Germany) and the sections were covered with Mountex ® embedding medium (Medite, Burgdorf, Germany). All stained TMAs were digitized with an Aperio VERSA 8 high-resolution whole-slide scanner and the digital images were viewed with the Aperio ImageScope software (Leica Biosystems, Nussloch, Germany). Authors C.A.D., H.S. and N.K. independently performed blinded histological analysis. PLOD2 exhibited mainly a cytoplasmic subcellular localization. The expression intensity of the marker was categorized as "weak", "medium" or "strong" and assigned 1, 2, or 3 points, respectively (Supplementary Figure S3A). As a number of samples exhibited heterogeneous staining, the expression was subsequently graded using the H-Score according to the formula: H-Score = 1 × (% of cells with weak staining) + 2 × (% of cells with medium staining) + 3 × (% of cells with strong staining), yielding values between 0 and 300. CD66b (neutrophilic marker) was assessed by counting the number of positive cells at 20× magnification in at least two different fields per TMA spot. Samples with an average of ≤5 cells/field were considered as "CD66b low" and samples with >5 cells/field as "CD66b high" (Supplementary Figure S3B).

Cell Lines and Stable Transfection

Both H4 and U251 cell lines were a kind gift from Prof A. Temme (University Hospital Dresden) but are also commercially available. The main characteristics of these cells are shown in Supplementary Table S2. The cells were cultured in Dulbecco's Modified Eagle Medium (Gibco ® DMEM; Thermo Fisher Scientific, Dreieich, Germany) supplemented with 10% fetal calf serum (FCS; Pan Biotech, Aidenbach, Germany), and 1% Penicillin-Streptomycin (Gibco ® DMEM; Thermo Fisher Scientific). The cells were transfected with the OmicsLink TM shRNA clone HSH013271-nH1-b against PLOD2 or with CSHCTR001-nH1 as a control (both from GeneCopoeia, Rockville, MD, USA) using Panfect A-plus transfection reagent (Pan Biotech) according to the manufacturer's protocol. Transfected cells were selected with 1 µg/mL Puromycin (InvivoGen, Toulouse, France) and then maintained in cell culture medium containing 0.3 µg/mL Puromycin. The efficiency of PLOD2 knock-down (sh-PLOD2) compared to control transfection (sh-control) was assessed both at protein and mRNA levels (Supplementary Figure S4A-D). Based on these results, clone 6 (C6) for H4 and clone 3 (C3) for U251 cells were used in all subsequent experiments. To exclude potential clonal effects, selected experiments were performed using clone 4 (C4) for H4 and clone 1 (C1) for U251 cells (Supplementary Figures). In all figures depicting in vitro studies, "+" indicates the sample where the respective cells (either sh-PLOD2 or sh-control) were used.

SDS-PAGE and Western Blot

GBM cells were lysed with a commercially available buffer containing Triton X-100 and protease/phosphatase inhibitors (both from Cell Signaling Technology, Frankfurt am Main, Germany).
Cell debris was removed by centrifugation and the lysates were incubated with an SDS-Loading buffer containing 4% glycerin, 0.8% SDS, 1.6% beta-mercaptoethanol and 0.04% bromophenol blue (all from Carl Roth). Samples were separated by SDS-PAGE followed by transfer to Immobilon-P (Merck Millipore) or Roti ® -Fluoro (Carl Roth) PVDF membranes. The membranes were incubated with the following primary antibodies: anti-Catenin D1, anti-CD44, anti-CD99 (from Cell Signaling Technology), anti-MT1-MMP and anti-PLOD2 (from Proteintech) overnight at 4 °C. Secondary reactions were performed for 1 h at room temperature using HRP-, AlexaFluor ® 488- or AlexaFluor ® 647-coupled antibodies (all from Cell Signaling Technology). All antibodies were diluted as recommended by the respective manufacturer using the SignalBoost™ Immunoreaction Enhancer Kit (Merck Millipore). Signal detection was performed on a ChemoStar imaging system (Intas Science Imaging, Göttingen, Germany).

Gene Expression Analysis

The mRNA from sh-PLOD2 and sh-control GBM cells was isolated with the InnuPREP RNA Mini Kit 2.0 (Analytik Jena AG, Jena, Germany) according to the manufacturer's instructions. Subsequently, reverse transcription was performed with the LunaScript RT SuperMix Kit (New England Biolabs, Frankfurt am Main, Germany). The samples were incubated with primers against PLOD2 or GAPDH in the presence of Luna Universal qPCR Mix (New England Biolabs). The following primers were used:
PLOD2 forward 5′-CATGGACACAGGATAATGGCTG-3′
PLOD2 reverse 5′-AGGGGTTGGTTGCTCAATAAAA-3′
GAPDH forward 5′-AGGGCTGCTTTTAACTCTGGT-3′
GAPDH reverse 5′-CCCCACTTGATTTTGGAGGGA-3′

MTT Assay

GBM cells were seeded at a density of 2000 cells/well and 4000 cells/well in 96-well plates. At the indicated time-points, fresh medium containing 10% MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) (Carl Roth) was added and the samples were incubated for 4 h at 37 °C to allow for the formation of formazan crystals. After lysis with a solution containing isopropanol and hydrochloric acid (both from Carl Roth), colorimetric detection was performed at OD540 − OD690 on a TECAN plate reader (Tecan, Männedorf, Switzerland).

Soft Agar Clonogenic Assay

Ninety-six-well plates were coated with 1% high-gelling agarose (Carl Roth). GBM cells (1000 cells/well) were mixed with low-gelling agarose (Carl Roth) at a final concentration of 0.3% and were added on top of the first layer. The low-gelling agarose was allowed to solidify for 1 h at 4 °C and culture medium was added to each well. The samples were incubated at 37 °C for 10 days with medium change every 3-4 days. The samples were subsequently stained with a solution containing 0.05% Crystal Violet (Carl Roth). Colonies with a diameter of at least 50 µm were counted using a BZ-X810 microscope (Keyence, Neu-Isenburg, Germany).

Invasion Assay

The invasion of GBM cells was assessed with the ORIS TM cell invasion system (Platypus Technologies LLC, Madison, WI, USA) according to the manufacturer's instructions. The GBM cells were allowed to invade for 72 h in a matrix containing 1 mg/mL collagen I. The degree of "gap-closure" was quantified with the ImageJ 1.48v software.

Gelatin Zymography

The release of matrix metalloproteases (MMPs) by GBM cells was analyzed by gelatin zymography, as described previously [56]. Briefly, 10⁵ cells/mL were incubated at 37 °C in DMEM medium, supplemented as above. As serum-supplemented cell culture medium also contains MMPs, medium without cells was used as control.
The supernatants were collected at 48 h and mixed with Zymogram sample buffer at a final concentration of 80 mM Tris pH 6.8, 1% SDS, 4% glycerol and 0.006% bromophenol blue. Proteins were separated by SDS-PAGE containing 0.2% gelatin 180 Bloom and then renatured in 2.5% Triton-X-100 for 1 h at room temperature. The enzymatic reaction was performed overnight at 37 °C in a buffer containing 50 mM Tris pH 7.5, 200 mM NaCl, 5 mM CaCl₂ and 1% Triton-X-100. The gels were stained with a solution containing 0.5% Coomassie blue, 30% methanol and 10% acetic acid for 1 h at room temperature. Finally, the gels were de-stained with 30% methanol and 10% acetic acid until the digested bands became visible. All chemicals were from Carl Roth (Karlsruhe, Germany). The gelatinolytic bands were quantified with ImageJ 1.48v software. The release of MMPs by neutrophils was assessed as above, except for using a different cell number (10⁶ cells/mL) and duration of stimulation (1 h).

Isolation of Neutrophils from Peripheral Blood

Diluted blood (1:1, v/v in phosphate buffered saline (PBS)) was subjected to density gradient centrifugation using Pancoll (Pan Biotech). The mononuclear cell fraction was discarded, and the neutrophil fraction was collected in a fresh test tube. Erythrocytes were removed by sedimentation with a solution containing 1% polyvinyl alcohol (Sigma-Aldrich, Burlington, MA, USA) and, subsequently, by lysis with pre-warmed Aqua Braun (B. Braun, Melsungen, Germany). The resulting neutrophils were cultured in DMEM medium supplemented as above. The purity of the neutrophil population after isolation was routinely >98%.

Apoptosis Assays

Neutrophils (10⁶ cells/mL) were stimulated as indicated and were stained 24 h later with FITC Annexin V/propidium iodide according to the manufacturer's instructions (BioLegend). Quantification was performed with a BD FACSCanto II flow cytometer (BD Biosciences, Heidelberg, Germany).

Statistical Analysis

Clinical data were analyzed with the SPSS statistical software version 26 (IBM Corporation). Survival curves (5-year, 3-year or 1-year cut-off) were plotted according to the Kaplan-Meier method. Significance was initially tested by univariate analysis using the log-rank test. Multivariate analysis was subsequently used to determine the prognostic value of selected variables using Cox's proportional hazard linear regression models adjusted for age, Karnofsky Performance Scale (KPS), therapy, extent of surgical resection and MGMT methylation status. The in vitro data were analyzed with the paired Student's t-test. In all studies, the level of significance was set at p ≤ 0.05.

Informed Consent Statement: All studies involving material from GBM patients were performed retrospectively on tissues removed during regular surgical and diagnostic procedures. The Ethics Committees provided a waiver for the need for informed consent. Neutrophil granulocytes were isolated from the peripheral blood of healthy volunteers after informed consent.

Data Availability Statement: Data from the in vitro studies are contained within the article and Supporting Information files. The datasets involving GBM patients are available from the corresponding author on reasonable request.
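As an illustration of the survival workflow described in the Statistical Analysis paragraph (median-split dichotomization, Kaplan-Meier curves with a log-rank test, and a covariate-adjusted Cox proportional-hazards model), the following is a minimal, hypothetical sketch using the Python lifelines package. The study itself used SPSS 26; the file name, column names and covariate encodings below are assumptions for demonstration only.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table: survival time (months), event indicator,
# PLOD2 H-score and numerically encoded covariates (age, KPS, resection,
# therapy, MGMT methylation status).
df = pd.read_csv("gbm_cohort.csv")

# Median-split dichotomization of the PLOD2 H-score into "low"/"high"
df["plod2_high"] = (df["plod2_hscore"] > df["plod2_hscore"].median()).astype(int)

# Kaplan-Meier estimates and log-rank test for overall survival
high, low = df[df["plod2_high"] == 1], df[df["plod2_high"] == 0]
km_high = KaplanMeierFitter().fit(high["os_months"], high["death"], label="PLOD2 high")
km_low = KaplanMeierFitter().fit(low["os_months"], low["death"], label="PLOD2 low")
ax = km_high.plot_survival_function()
km_low.plot_survival_function(ax=ax)
lr = logrank_test(high["os_months"], low["os_months"],
                  event_observed_A=high["death"], event_observed_B=low["death"])
print(f"log-rank p = {lr.p_value:.4f}")

# Multivariate Cox proportional-hazards model adjusted for known prognostic factors
covariates = ["plod2_high", "age", "kps", "resection", "therapy", "mgmt"]
cph = CoxPHFitter()
cph.fit(df[covariates + ["os_months", "death"]],
        duration_col="os_months", event_col="death")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% confidence intervals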
Innovation in Road Freight Transport: Quantifying the Environmental Performance of Operational Cost-Reducing Practices

Road transportation is a key mode of transport when it comes to ensuring the hinterland connection of most European ports. Constrained by low profit margins and having to be active in a highly competitive market, companies active in this sector seek multi-dimensional innovative solutions that lower their operational costs. These innovative initiatives also yield positive environmental effects. The latter, however, are poorly recognized. This paper investigates the characteristics of different types of chassis used to transport containers from and to the terminals in the port areas and looks into the details of operational planning practices. It analyses the cost-effectiveness of these innovative solutions, highlighting both the costs and the environmental emissions they save. Transport data from a road hauler serving the hinterland connection of a port in Western Europe is used to build up a case study. Results show that by using special types of chassis, which enable the combination of transport tasks in round-trips, the operational costs are reduced by 25% to 35%, and equally the CO2 emissions are also decreased by 34% to 38%.

Introduction

Road haulage is a key mode of transport when it comes to ensuring the hinterland connection of most European sea-ports. Loads such as containers, break bulk, dry bulk, or liquid bulk cargo depend on road transport as a part of their supply chain. Figure 1 compares ports' modal split in Europe and shows the dominance of road transport. As a consequence, it is clear that port traffic growth is challenging more and more the capacity of road networks developed around ports [1]. In this context, road transport companies are forced to seek their position in a highly competitive market. Equally, road freight transport is a sector with low profit margins. For these reasons, road hauliers are now challenged to innovate and to find ways to achieve cost-effective road transport solutions. In most cases, these solutions are triggered by policies on sustainable development (e.g., road pricing). However, the conceptual development and implementation are done using private financial capabilities. Releases of the European Commission, such as its White paper [8], acknowledge the importance of road transport and create a clear context for its integration among all other transport modes. Moreover, European legislation is making efforts to create new ways of stimulating the use of environment-friendly solutions for transport. In this context, road transport innovators are confronted with the task of keeping a positive balance between their operational costs and policy decisions regarding environmental emissions. In practice, even if the environmental advantages are proven, the best economic solution still rules the decisions to invest in innovation dedicated to enhance the ports' activity [9]. Ex-post evaluation studies that use real data to show the effects of innovation in this field are scarce. Giuliano et al.
[10] show that only a limited set of studies exist that use methods that quantify the costs and benefits in analyses of innovation in the transport sector. Conducting analyses like cost-benefit, total cost, or cost-effectiveness requires a considerable amount of data. Information regarding investment by companies in innovation and its outcome is not always available. A reason for this gap is the confidentiality status of this type of data. Innovation is driven by experimental thoughts and the nature of decisions in the supply chain that are sometimes made based on gut feeling [9], for which recordings are private. The latter brings limitations to the methods and studies that can be developed. Nakamura [11] notes that the roots of scientific evaluation of projects in the transport sector date back to the 19th century. Since then, different types of methods and forms of evaluation have been developed to shed light on different aspects of effectiveness for investment and innovation projects developed for the transport sector. For this matter, methods following descriptive, qualitative or quantitative approaches have been developed and applied for the analysis of innovation in the transport sector. The present research develops an application of the cost-effectiveness analysis (CEA) to investigate whether the innovative practices used in road transport are bringing both operational and environmental cost advantages. The CEA is adapted to analyze decisions in which expected outcomes are clearly identified, but whose direct or indirect monetary benefits are not easily quantifiable [12]. CEA has a long tradition that dates back to the 1960s, when scientists developed a method to assist the United States military in making allocation decisions [13]. This method, among others, has later been successfully applied in the medical sector. Here, the benefits of decisions or interventions could not be quantified beyond the number of lives saved or persons cured [14][15][16].
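To make the evaluation logic explicit before it is applied to the case study, the incremental comparison of an innovative practice against a reference practice can be summarized by a cost-effectiveness ratio of the following generic form; the notation here is an illustration rather than the exact formulation used later in this paper.

\[
\mathrm{CER} = \frac{C_{\text{innovative}} - C_{\text{reference}}}{E_{\text{innovative}} - E_{\text{reference}}}
\]

where C denotes the operational cost of carrying out a given set of transport orders and E a measure of effect, for example the tonnes of CO2 emitted. When the innovative practice both costs less and emits less, it dominates the reference scenario, and the ratio can be read as the operational cost saved per tonne of CO2 avoided.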
Staying competitive is the main objective of road transporters. In order to do so, developing cost-effective innovation is seen as the future of transport hauliers. Academia [17][18][19] provides multiple definitions of innovation. The present research follows the definition provided by Arduino et al. [20], who identify innovation as "the technological or organizational (including cultural and marketing as separate subsets) change to the product (or service) or production process that either lowers the cost of the product (or service) or production process or increases the quality of the product (or service) to the consumer". From a logistics chain and port-related perspective, Vanelslander et al. [21] show that stakeholders encompass multi-dimensional innovation and put forward a comprehensive list of innovation types. Their typology (shown in Appendix A) is applicable as well to innovation developed by road transport operators. Blauwens et al.
[22] put forward three classical examples of reducing the operational cost in a road transport firm. The first one refers to determining the shortest routes for its vehicles. Another practice is performing round trips that combine two or more transport requests in one journey. The third way of reducing transport cost is assigning transport tasks to vehicles that have their starting point as close as possible to the end location of a previous one. These three solutions set the goal of minimizing the total distance that the transport vehicles travel per day. One should bear in mind that the total "distance", depending on the scope of the optimization problem, can also be translated into time or cost, besides kilometers [22]. To put this into practice, road transporters together with chassis constructors look for innovative solutions. This cooperation resulted in the development of new trailers (or chassis) that extend the range of container transport tasks that can be consolidated in one trip. As a result, road hauliers' cost-effectiveness is improved. The number of kilometers driven in productive trips increases and the labor time decreases. Simultaneously, the use of these newly developed chassis enables transport tasks to be performed with lower environmental emissions. These results are enabled through both an innovative planning process and innovative chassis development. These changes are introduced through a new managerial and cultural (change of mind-set) approach. In essence, this is thus an example of multi-dimensional innovation that falls under innovation type III (technological changes that also consist of changes at managerial, organizational and cultural level for a specific business) as put forward by Vanelslander et al. [21]. Although these solutions are in use and known to the stakeholders active in this sector, this innovation in road transport does not get immediately adopted by the entire road transport sector and it remains unknown to the wide public. For this reason, road transport stakeholders often get the label of an un-innovative sector.

The results of this multi-dimensional innovation are analyzed by this study through a CEA. This research quantifies the number of kilometers that vehicles drive, the working hours of the employees that are active, the tolls that are paid, and the vehicle usage costs incurred. These elements are finally used as costs that a transport haulier saves when innovative chassis are in use. The associated emission savings are considered extra benefits resulting from the more cost-effective activity.

The empirical analysis developed by this study reflects the perspective of a road haulier active in container transport. In this setting, the focus is on transport of 20' import/export containers within the hinterland area of a seaport, as this specific setting enables container transport task consolidation in round trips.

The research questions addressed by this study are twofold: RQ1: What is the cost-effectiveness of the innovative chassis-fleet management practice adopted by a road transport firm in order to minimize its operational costs? and RQ2: What is the cost-effectiveness of these solutions with respect to environmental emissions?

To answer these questions, the following paper structure is pursued. Section 2 presents the technical innovation present in road transport and the cost-effectiveness analysis as the main method used to achieve the results of this study. The latter are detailed in Section 3. Finally, Section 4 concludes this paper.
Materials and Methods

This section provides a detailed description of the multi-dimensional innovative solutions analyzed by this research and delivers the background of the CEA.

Innovative Practices in Road Transport of Containers

This research thus focuses on pointing out the effect of efficient use of innovative trailers. To do so, the practice of carrying out individual transport tasks is first explained. The subsequent sub-subsections detail the technical characteristics of innovative chassis, the conditions considered when carrying containers via road, and the decision algorithm for setting up round-trips.

Carrying out Individual Transport Tasks

The working practice considered by this analysis as a reference scenario in carrying out transport tasks has the following characteristics. Firstly, the transport company does not have the tools to comprehensively bundle data with regard to its transport orders in a centralized database. This type of working practice is typical for a company that uses multiple communication channels (e.g., email requests, phone, and shipping agents' platforms) to collect orders and their related data. Secondly, it is assumed that a transport company does not have the technical capacity to carry out more than one transport task at once. This characteristic applies to road transport of 20' containers, where one trailer can accommodate only one container.

Considering the above characteristics, a transport company conducts the following working practice to deliver its orders. This working practice is referred to as the reference scenario by the current research. For each transport order, shown in Figure 2 with a full line, the transport company completes two trips: one trip corresponding to the transport order and another trip corresponding to the empty return of the transport equipment (trailer). This situation occurs for orders carried either from or towards point A. The extra time and the extra fuel consumption used to relocate the truck to its origin need to be reduced. This reference scenario is used as a comparison situation for the other alternative solutions chosen to carry out the full transport tasks.
Transport tasks are characterized by the following elements: origin/destination, time of pick-up/drop-off, type of container, and weight. These are the basic details needed by a transport planner to estimate which tasks can be carried out in a combined round-journey and which type of chassis should be used for this combination of tasks. The types of assets and the types of round-journeys enabled through multi-dimensional innovation are detailed below. These options are applicable when 20' containers or swap bodies similar in size without side doors are transported and when no transhipment or container swap movement with equipment (e.g., cranes, reach stackers) is available at hinterland locations (points B or C).
A. Using Mini Eco-Combi Chassis

Under 'mini eco-combi chassis' is understood a combination of two 20' chassis that can be attached together so as to form a single 40' unit (Figure 3). The advantage of this type of chassis is that, depending on the round-journey characteristics, the two chassis can also be used as individual transport units. Equally, the mini eco-combi, with its flexibility, allows transporters to consider more combinations of journeys in their planning operations, eliminating technical limitations (conditions). The conditions imposed by safety regulations with regard to weight distribution for road transport vehicles are set by the European Commission (2015). According to EC 2015/719, the traction axle of the towing tractor must carry a minimum of 25% of the total carried load. The decoupled chassis enables transporters to carry a fully loaded 20' container, previously loaded on the rear side of the chassis, further to its destination. Involving a regular 40' chassis in this practice would defy the regulation. A further explanation of container weight distribution is given later in Table 1. A derived advantage of this type of chassis is the time saved due to the faster actions needed to couple/uncouple the two chassis.
B. Operating Multipurpose Chassis

The multipurpose chassis is used to complete combinations of two transport tasks in one round-trip. The multipurpose chassis category refers to round transport journeys carried out with the following types of chassis: steer-chassis, legs-mounted chassis, and hydraulic load-shifting chassis. The use of these solutions is not as flexible as the mini eco-combi chassis due to a fixed weight distribution and loading/unloading order constraints. Nevertheless, these options are still suitable for combining different transport tasks in one trip, as shown in Figure 4. In addition, the amount of time saved by using this alternative is lower than in the case of the mini eco-combi chassis, due to extra operations that have to be done at each loading/unloading point. Larger time windows used to position the containers according to safety procedures are counted in. For example: the new steer-chassis needs extra time to reposition the container from the back end of the chassis to the front and to extend the steering third axle; the legs-mounted chassis needs extra time to expand/retract the mounted legs, to set in use the pneumatic suspension of the chassis, and to reposition the container on the chassis; the classical chassis requires extra equipment (e.g., crane or reach stacker) to be used for the container to be repositioned.
C. Offering Empty Containers for Re-Use

The 're-use' of empty containers has as its main advantage the extra financial gain that the company makes from offering an empty container to be re-filled. This practice is driven by an innovative mind-set on managing a business unit with multiple technological solutions. Although the re-use of empty containers seems to be an extra income, a certain balance has to be achieved between the benefit of the re-use and the extra fees charged by the container owners who commit to this option. The route created in a re-use round-journey is similar to the one presented in Figure 4.

D. Handling Transport Tasks with Tilt Chassis

The tilt chassis (Figure 5) represents another alternative that joins the innovative practices which reduce the costs for picking up/dropping off cargo in road transport. This type of chassis enables the planner to form round-trips that reduce the distance travelled. A tilt chassis is a fast solution for cargo to be unloaded. Not all containers can be unloaded using a tilt method; however, this type of chassis increases the chances of creating a combined round-trip by offering the same container to be re-filled.

Operational Conditions Considered when Combining Single Transport Tasks

The weight of each load is a key criterion in the process of creating round journeys. Due to road tonnage restrictions and road safety regulations, several conditions must be taken into account. Table 1 shows the possibilities for two 20' containers to be loaded on chassis in accordance with the directives given by the European Commission (2015). The transport planner must take these conditions into account when the transport tasks are attributed to each round-journey. The maximum load is calculated by adding the truck, chassis, container, and cargo weight.
Table 1. Possibilities for loading two 20' containers on a chassis (Condition / Observation; F denotes a full container, E an empty container).

Combinations of round-journeys that respect the following conditions are allowed:
F1 + F2 < Max load (1): two loaded containers, the total weight of which is not higher than the total allowed weight.
F1 + E2 < Max load; E1 + F2 < Max load; E1 + E2 < Max load: one loaded container and one empty container, the total weight of which is not higher than the total allowed weight; or two empty containers meeting the same condition.
25% * F1 = load on traction axle: one loaded container of which at least 25% of its weight is distributed on the traction axle.

Combinations of round-journeys that lead to one of the following situations are not allowed:
F1 + F2 > Max load: two loaded containers, the total weight of which is higher than the total allowed weight.
100% * F1 = load on rear axles: one loaded container, the total weight of which is distributed on the rear chassis axles.

Note: 1. Maximum allowed weights for road freight transport according to each European country's regulation and in accordance with Directive (EU) 2015/719 [23].
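To make the weight conditions of Table 1 concrete, the sketch below checks whether two 20' container loads may be combined on one chassis. It is a minimal illustration in Python; the 44-tonne maximum combination weight, the tare weights, and the helper names are assumptions introduced for the example, not values taken from the studied company or the directive.

```python
# Minimal sketch of the Table 1 loading conditions (assumed example values).
MAX_COMBINATION_WEIGHT_T = 44.0   # assumed maximum allowed weight (truck + chassis + containers + cargo)
TRUCK_AND_CHASSIS_TARE_T = 16.0   # assumed tare weight of tractor plus chassis
CONTAINER_TARE_T = 2.3            # assumed tare weight of one 20' container

def combination_allowed(cargo1_t: float, cargo2_t: float,
                        share_on_traction_axle: float = 0.25) -> bool:
    """Return True if two 20' loads may be combined on one chassis.

    cargo1_t / cargo2_t: cargo weight inside each container, in tonnes (0 for an empty container).
    share_on_traction_axle: share of the front (first) container's weight resting on the traction axle.
    """
    total = (TRUCK_AND_CHASSIS_TARE_T
             + 2 * CONTAINER_TARE_T
             + cargo1_t + cargo2_t)
    # Condition F1 + F2 (or F + E, E + E) < Max load: total weight within the allowed maximum.
    if total > MAX_COMBINATION_WEIGHT_T:
        return False
    # Condition 25% * F1: at least 25% of the loaded container's weight must rest on the
    # traction axle; distributing 100% of it on the rear axles is not allowed.
    if cargo1_t > 0 and share_on_traction_axle < 0.25:
        return False
    return True

# Hypothetical example: an 18 t export load combined with a 4 t import load.
print(combination_allowed(18.0, 4.0))   # True under the assumed tare weights
print(combination_allowed(20.0, 12.0))  # False: exceeds the assumed 44 t maximum
```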
The use of each chassis is conditioned by the order in which the transport loads are delivered. Table 2 shows the types of round journeys that can be performed by each type of chassis. A parallel comparison between classical, multipurpose, and mini eco-combi chassis has been made. Moreover, a separate column is added for the re-use of containers and the use of tilt chassis.

The following notations are used to define the types of possible round transport journeys by using each chassis, where P, D, and Pa denote operations involving classical, multipurpose, or eco-combi chassis, respectively, and L and U denote operations involving the re-use of containers and tilt chassis, where cargo is loaded into or unloaded from the carried containers:

P - pick (or loading) of a container on a chassis; D - drop (or unloading) of a container from a chassis; Pa - park or decoupling of chassis and park; L - loading a container; U - unloading a container.

As can be seen in Table 2, the alternative solutions for carrying out transport requests are divided into three categories. These categories are created based on the characteristics of each chassis and the type of round-trips that they can perform:
• Round-transport tasks using multipurpose chassis (includes chassis which allow loading two 20-foot containers, one export and one import, a legs-mounted chassis, and a new steer-chassis);
• Round-transport tasks using the mini eco-combi chassis; and
• Round-transport tasks using tilting chassis (re-using the same 20' containers).
The round-journey types listed in Table 2 are expressed as operation sequences using the notations above, for example PDPD, PPDD, and PULD.
Cost-Effectiveness Analysis

The main purpose of a CEA is to identify the economically most efficient way of fulfilling an objective [26]. The European Commission endorses this type of analysis for evaluations of programs or projects, to assess choices in the allocation of resources, to determine strategy-planning priorities, or to be used in argumentative debates. CEA can be conducted in the context of ex-ante, intermediary, or ex-post evaluations. Contrary to the CBA monetizing practice, CEA puts in balance two elements: the cost of achieving one objective and the level of achievement of that objective. In other words, a cost-effectiveness analysis makes a ratio between the inputs in monetary terms and the outcomes in non-monetary quantified terms [27]. Input data for the CEA, as well as for the CBA, are difficult to collect. The main reasons for this difficulty are the availability of this type of data and their confidentiality status. Nevertheless, applying CEA has its advantages. CEA is a tool that assesses a decision by using a single dimension of its output. CEA can measure the technical effectiveness of a project due to its particularity of comparing the costs of a project with its immediate outputs. Furthermore, CEA can also be used as a tool for comparing several projects or investment options. In this case, it must be taken into consideration that the evaluated criteria should be the same over the entire range of options [26].

CEA: Outcomes from a Literature Review of Transportation Studies

A non-exhaustive literature review was conducted, focusing on studies using CEA for transport applications in the 2000-2015 period. From a transport mode perspective, it is clear that the road transport sector benefits from a lot of attention from researchers applying CEA. The scope of studies in this case was to prove the effectiveness of different measures on either social or environmental matters [28,29]. Furthermore, the rail sector also benefits from the attention of CEA applications. The review of studies shows that the scope of CEA in this case was mixed, being focused on either social, economic, or environmental matters [30-32]. Applications of CEA have also been used to determine the effect of maritime strategies on environmental matters. Worth mentioning here are the effects of speed limitations on maritime transport emissions [33-35] and the consequences of policies regarding container repositioning [35]. CEA has not been the preferred method for analyzing matters dedicated to air transport. The authors of [36] use CEA only as an extension of a wider CBA to point out the cost-effectiveness of measures taken by several airports against terrorist attacks. Their findings indicate that any additional measures against terrorist attacks would be too expensive to be justified by their effectiveness.

The literature review shows that the scope of CEA in transport studies is to determine the effectiveness of measures for which a concrete estimation of all benefits is difficult to make [27,37]. For this reason, most CEA analyses concern environmental impact assessments [28,32,38,39] or the quantification of costs for measures addressing social benefits in the short term [29]. In these cases, researchers quantify the average cost of emission mitigation measures or of actions aimed at decreasing the number of road fatalities. In some cases, CEA can also use monetary values as outcome indicators; these studies have a purely economic purpose [30,40,41].
The CEA outcome is most of the time expressed in units of monetary value spent to achieve one unit of an abatement measure. The units used as measurements of effectiveness are chosen depending on the purpose of each analysis. More specifically, the effectiveness of pollution control measures is expressed as the average cost of the amount of emissions avoided (CO2, NOx, PMx, or SOx) [28,32-34]. In the case of measures with a social impact, the immediate outcomes are quantified in the number of averted deaths or the value of life [29]. Some of the research studies on determining the cost-effectiveness of projects in the transport sector do not end by calculating a CEA ratio. These studies are structured in three parts. Firstly, the costs of the analyzed measures are inventoried and quantified. Secondly, the implications and the immediate benefits of these measures are also calculated. Following these calculations, a discussion is conducted based on the costs and the project outcomes previously determined [41-43].

Cost-Effectiveness Analysis Steps

This section summarizes the general steps that have to be undertaken to conduct a CEA. Figure 6 offers a brief overview of the steps that are undertaken and their specific outputs for a road transport case analysis.

In order to apply the cost-effectiveness analysis, a first step is to define the framework and the scope of the evaluation. At this stage, the reference scenario and the alternative(s) are also described. As a second step, the costs for the reference scenario and for each alternative operational solution are calculated. In particular, for a road transport case analysis, this step refers to the calculation of operational expenses such as fuel consumption, tolls, labor cost, or other fees that are borne by the transport company [22]. Thirdly, the benefits in terms of distance, hours of labor, or amount of emissions saved by each alternative are also calculated. These savings are calculated with respect to the reference scenario previously defined. Finally, an analysis comparing the differences in costs and emissions for each alternative is made.
CEA Output

CEA is adapted for the analysis of actions in which expected outcomes are clearly identified. If the outcome of a project cannot be clearly defined, or if homogeneous and quantifiable units cannot be determined, the use of cost-effectiveness analysis should be avoided [26]. For example, when an investment aims at reducing the amount of air pollutants released into the atmosphere, the effectiveness criterion for that investment could be the decrease in the daily average amount of air pollutants emitted. Other criteria may be more relevant depending on the context of the project.

As such, the CEA output is a ratio between the costs and the outcomes of a project. One has to consider the actualization of investments and depreciation. A more elaborate way of using the CEA is to calculate the ratio between the incremental costs ∆C and the incremental outputs ∆E of a project, as shown in Equation (1):

CEA ratio = ∆C / ∆E, with ∆C = Cf − Ci    (1)

This ratio gives information about the cost difference which is paid to receive the extra, beneficial output ∆E (EC, 2008). The incremental cost in some cases can be substituted with the total expenses (costs) Cf of each project. This situation can occur when the outputs of different variants are "compared" with a zero scenario (when Ci is equal to zero): in this case, the resulting ratio is interpreted as the total cost paid to receive the full benefit ∆E [12].

CEA Computation and Data

The road transport sector is a very competitive market with low profit margins. Due to this, values regarding the costs and benefits of private operators have a confidential status. For this reason, this section presents in detail the computation steps needed to conduct the CEA and the publicly available information with regard to this case.

Cost Calculation of Road Transport

Blauwens et al.
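As an illustration of Equation (1), the short sketch below computes the incremental cost-effectiveness ratio of an alternative practice against a reference scenario. The figures used in the example call are invented placeholders, not values from the studied company.

```python
def cea_ratio(cost_alternative: float, cost_reference: float,
              outcome_alternative: float, outcome_reference: float) -> float:
    """Incremental cost-effectiveness ratio, Equation (1): delta C / delta E.

    Costs are in monetary units; outcomes in non-monetary units
    (e.g., kg of CO2 avoided). A reference outcome and cost of zero
    reduces the ratio to the total cost per unit of full benefit.
    """
    delta_cost = cost_alternative - cost_reference
    delta_effect = outcome_alternative - outcome_reference
    if delta_effect == 0:
        raise ValueError("No incremental effect: the CEA ratio is undefined.")
    return delta_cost / delta_effect

# Hypothetical example: an alternative round-trip costs 310 EUR instead of 420 EUR
# and avoids 150 kg of CO2 relative to the reference scenario.
print(round(cea_ratio(310.0, 420.0, 150.0, 0.0), 2))  # -0.73 EUR per kg CO2 avoided (a saving)
```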
[22] define time and distance costs as the main elements contributing to the expenditures a transport firm makes to perform its activity. They present the total cost of a transport task as being equal to uU + dD + k + h. The notations used are as follows: u - time coefficient; U - total time a vehicle needs for a task; d - distance coefficient; D - total distance covering the outward and return journey; k - tolls, equipment use fees, port dues, etc.; h - cost elements which depend on both time and distance (equipment depreciation and maintenance costs). Each cost element and coefficient is further detailed and examples are given.

Gérard et al. [44] give a complete overview of the cost calculation for road freight transport. The coefficients determined by Blauwens et al. [22] have an average composition and do not completely serve the aim of this study. Because this study focuses on the activity of one transport firm, making use of these normalized coefficients would flatten out the differences in costs created by each type of chassis. To avoid this limitation, the main categories of costs are kept as defined by Blauwens et al. [22], but the cost coefficients are refined taking into account the specific characteristics of the case study and the outcome of interviews.

Finally, the cost for one individual transport journey is calculated with the following formula:

Ci = u * Ui + dl * Di,l + ki + hci

where: Ci - cost of journey i; u - time coefficient; Ui - amount of hours necessary to complete transport journey i; dl - distance coefficient calculated according to the truck's load as in Table 3, where l ranges from 1 to 4 and dl = gl * p/100; Di,l - distance according to vehicle load class l for journey i; ki - amount payable in road tolls for journey i; hci - truck and chassis usage costs for the category of chassis c necessary to complete journey i.

The theoretical framework developed by Blauwens et al. [22] puts forward an in-depth methodology for calculating the time and distance coefficients when determining the operational cost of a transport company. Their model shows that the hour coefficient varies according to the wages of the crew, the annual insurance premiums for vehicles, the rent for working spaces, and even the general administrative costs of running a fleet of vehicles. Similarly, the distance coefficient depends on fuel consumption, maintenance, eventual fines, and damage liabilities. For the purpose of this research only, the allocation of cost elements is done for the time and distance coefficients. This approach is validated through interviews with representatives of the company providing the data for this research, and it differs from the method proposed by Blauwens et al. [22] in that the costs recognized as neither time- nor distance-related are included in an extra cost element h, referred to as the truck and chassis usage cost. This approach considers the time coefficient u as dependent only on the staff cost, while the driver cost is calculated as the average hourly driver salary within the studied company.
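The journey cost formula above can be read as a small computation. The sketch below implements its structure in Python; the fuel consumptions per load class, the fuel price, the hourly rate, and the chassis usage costs are invented placeholders (the actual coefficients in Tables 3 and 4 are confidential), so the example only illustrates how the cost elements combine.

```python
# Illustrative coefficients only; the real values (Tables 3 and 4) are confidential.
FUEL_PRICE_EUR_PER_L = 1.30                                 # p, assumed
FUEL_L_PER_100KM = {1: 24.0, 2: 28.0, 3: 32.0, 4: 35.0}     # g_l per load class, assumed
HOURLY_RATE_EUR = 35.0                                      # u, assumed average driver cost per hour
CHASSIS_USAGE_EUR_PER_DAY = {"classical": 40.0, "multipurpose": 55.0, "eco-combi": 60.0}  # h_c, assumed

def journey_cost(hours: float, km_per_load_class: dict, tolls_eur: float, chassis: str) -> float:
    """Cost of one journey: C_i = u*U_i + sum over legs of d_l*D_i,l + k_i + h_c,i."""
    time_cost = HOURLY_RATE_EUR * hours
    distance_cost = sum(
        (FUEL_L_PER_100KM[load_class] * FUEL_PRICE_EUR_PER_L / 100.0) * km  # d_l = g_l * p / 100
        for load_class, km in km_per_load_class.items()
    )
    return time_cost + distance_cost + tolls_eur + CHASSIS_USAGE_EUR_PER_DAY[chassis]

# Hypothetical round-trip: 9 h of work, 180 km loaded (class 3) and 120 km empty (class 1),
# 25 EUR of tolls, performed with a mini eco-combi chassis.
print(round(journey_cost(9.0, {3: 180.0, 1: 120.0}, 25.0, "eco-combi"), 2))
```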
The distance coefficient d is directly proportional to the fuel consumption of the vehicle (gl) and the fuel price (p). To compare the effectiveness of different chassis, several distance coefficients therefore need to be defined according to the carried loads. The practice of reducing the driven distance by consolidating several transport tasks in a single round-journey implies carrying different loads during the round-trip. As such, the loading factor has a direct influence on the total fuel consumption of the vehicle. Table 3 presents four distance coefficients calculated according to the vehicle's loading factors, as validated by the firm's representative. The cost coefficient k incorporates costs that depend on the route selection, such as tolls.

The truck and chassis usage cost h considers depreciation, fines, insurance premiums, maintenance, and general administration costs. According to the example presented by Blauwens et al. [22], taking into account a sufficiently long period (five to six years for road transport equipment), these elements can be estimated. For this reason, and only for the purpose of this study of comparing the use of different chassis, these costs are determined from the daily estimated operational costs put forward by the company for each type of chassis. Table 4 gives an overview of these costs.

Decision Logic Scheme for Using Alternative Chassis

The fleet management practice used by the trucking company to reduce its operating costs is based on a mixed decision process. This process is partially based on the transport planner's experience and a structured decision scheme. The goal is to combine individual transport journeys in one round-trip in order to reduce unproductive movements (empty trips), as shown in Figure 7. The first step is to seek transport tasks that have a similar or neighboring origin and destination. In other words, the distance "D B-C" (see Figure 7) is a predefined maximum distance range between two origins/destinations, set by a planner to look up follow-up transport tasks (condition 1). Further steps are set to assign specific chassis to carry out round trips. These steps take into account the characteristics of the two combined tasks (distances between origins and destinations, hours of loading/unloading, quantities to be transported, etc.), as detailed in Figure 8. A common example for this case is the combination of import-export orders (not exclusively), which on a regular basis requires the use of two separate trips to bring, respectively collect, the containers. Further in the decision scheme, condition 2 refers to the drop-off times of two consecutive transport tasks i and j, respectively. This condition determines which type of round trip is going to be chosen and which chassis needs to be used for this combination of tasks. Condition 3 verifies that the time interval between the drop-off and the next pick-up is sufficient, and the latter condition 4 indicates whether the distance between the drop-off location and the pick-up is close enough for the combination of transport journeys to be profitable.
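A minimal sketch of the four conditions of the decision scheme (Figures 7 and 8) is given below. The threshold values, the data fields, and the simplified reading of condition 2 are assumptions made for illustration; the actual scheme also weighs the planner's experience and the weight constraints of Table 1.

```python
from dataclasses import dataclass

@dataclass
class Task:
    pickup_hour: float    # planned pick-up time (decimal hours)
    dropoff_hour: float   # planned drop-off time (decimal hours)

# Assumed thresholds, for illustration only.
MAX_NEIGHBOUR_DISTANCE_KM = 30.0   # condition 1: search range "D B-C" between the two task locations
MIN_TURNAROUND_H = 1.0             # condition 3: minimum interval between drop-off and next pick-up
MAX_DETOUR_KM = 25.0               # condition 4: detour that still keeps the combination profitable

def can_combine(task_i: Task, task_j: Task,
                dist_between_locations_km: float,
                detour_km: float) -> bool:
    """Decide whether two transport tasks can be combined into one round-trip.

    dist_between_locations_km: distance between the hinterland locations of the two tasks (B and C).
    detour_km: extra distance driven between the drop-off of task i and the pick-up of task j.
    """
    # Condition 1: the two tasks' hinterland locations lie within the planner's search range.
    if dist_between_locations_km > MAX_NEIGHBOUR_DISTANCE_KM:
        return False
    # Condition 2: the drop-off order of the two tasks must allow a feasible round trip
    # (simplified here: task j is dropped off after task i).
    if task_j.dropoff_hour <= task_i.dropoff_hour:
        return False
    # Condition 3: enough time between the drop-off of task i and the pick-up of task j.
    if task_j.pickup_hour - task_i.dropoff_hour < MIN_TURNAROUND_H:
        return False
    # Condition 4: the extra distance between drop-off and next pick-up stays small enough
    # for the combined journey to remain profitable.
    return detour_km <= MAX_DETOUR_KM

# Hypothetical example: an import drop-off at 09:30 followed by an export pick-up at 11:00,
# with the two hinterland locations 12 km apart and an 8 km detour.
print(can_combine(Task(7.0, 9.5), Task(11.0, 14.0), 12.0, 8.0))  # True under the assumed thresholds
```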
Emission Calculation

Road transport is usually pointed to as the cause of most external emissions. The literature shows that there is a series of elements that need to be taken into account when calculating the emissions of transport activity. In this matter, a comprehensive review is given by Kok, Annema and van Wee [45]. From a road transport perspective, Piecyk and McKinnon [46] put forward the results of a large-scale Delphi survey in which they build three scenarios with respect to CO2 emission levels in freight transport by 2020. Demir, Bektaş, and Laporte [47] analyze the models used to determine emissions in road transport. Obviously, the total emission is a function of the travelled distance. They point out that, among others, the total vehicle weight, the type of fuel used, and the speed are the most commonly used variables in academic research addressing road emissions.

Researchers [48-50] have developed several methods to calculate the emissions coming from transport and point to different types of environmental emissions such as carbon dioxide (CO2), total hydrocarbons (THC), non-methane hydrocarbons (NMHC), nitrogen oxides (NOx), or particulate matter (PM). Appendix B gives a non-exhaustive overview of reports and the methods used to determine the CO2 emissions of road transport. A first observation is that CO2 emissions in road transport are calculated either with respect to the weight transported over a distance, according to the distance travelled, or as a function of the amount of fuel consumed (thus as a function of the equipment used). For each method, an emission factor is used in the computations. As well, a distinction is made as to whether the method is applied in a web tool [51,52] or in research studies [53-55]. In the former case, the input method allows for more flexibility and the outcome is given in relation to the weight transported. Secondly, it is clear that the emission factor calculated only as a function of distance has evolved over time. Regulation regarding emissions in road transport has become more severe. In parallel, manufacturers from the automotive industry have made important steps in developing more fuel-efficient engines that are also less polluting. The overview in Appendix B of emission factors used in road transport studies shows that researchers have followed the milestones introduced by the Euro norms [56].
Conditioned by data availability, this research proposes to quantify the emissions of road transport for each transport journey taking into account the emission coefficient expressed in grams of CO2 per vehicle-kilometer. Facing constraints with regard to data availability, research studies such as Protocol [57] or Cefic [53] present emission results as averages over the distance driven by each vehicle. This way, the total amount of CO2 emissions for a transport journey is determined with the following formula [53,57]:

E i-j = e * D i-j

where: E i-j - total amount of CO2 emissions for transport journey i-j; e - CO2 emission coefficient; its recommended value for road transport operations is 62 g CO2/tonne*km, based on an average load factor of 80% of the maximum vehicle payload and 25% of empty running [53]. For the purpose of this study, it is chosen in accordance with [53,54], so the following values are used: 212 g CO2/vehicle*km for vehicles carrying loads between 0-5 tons, 646 g CO2/vehicle*km for loads between 5-12 tons, 1056 g CO2/vehicle*km for loads between 12-32 tons, and 1254 g CO2/vehicle*km for loads up to 44 tons. D i-j - distance corresponding to journey i-j.

Savings Gained Using Innovative Road Solutions (Operational Costs and Emissions)

The calculations with regard to the savings gained by using innovative road solutions are done as part of the CEA. The scope of the analysis is to determine whether the decrease in operational costs also leads to drops in environmental emissions. These calculations take into consideration the distance travelled in each transport journey, the amount of time necessary, the road tolls, the loads that need to be transported, and the cost of using each type of chassis. Table 5 details how the costs and emissions are calculated for each alternative. Each calculation practice is derived from Equation (1).

The transport cost of each transport journey, including the reference scenario, is calculated independently. Taking into account the type of chassis being used, specific fuel consumption averages are differentiated between loaded and empty transports. Other costs, like tolls or chassis usage, are also taken into consideration in the cost formula.

After determining the total cost for carrying out each round-trip, the analysis focuses on determining the savings. The savings are calculated in comparison with the reference scenario. Table 5 shows, for each round-trip, how the cost and emission savings are calculated. The notation used in the formulas should be read as follows: D i-j - distance from i to j; dl - distance cost coefficient according to carried load; u - hour cost coefficient; k i-j - toll fees for the route from i to j; hci - cost of using chassis c for route i; ui - loading/unloading time (for multipurpose chassis) at point i; ui,eco - loading/unloading time (for mini eco-combi chassis) at point i; e - emission coefficient in g CO2/vehicle*km according to carried load; v - average speed; i and j stand for origin and destination locations, indicated as locations A, B, or C in the reference scenario.

Case B. Multipurpose chassis - the combination of two containers is transported using a multipurpose chassis (e.g., a new steer-chassis). Case C. Tilt chassis container - the same container is used to respond to the second transport task. Case D. Re-use of empty container - the same container is used to respond to the second transport task.
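To illustrate the emission formula and the way savings are measured against the reference scenario, the short sketch below applies the per-vehicle-kilometre coefficients quoted above to a hypothetical pair of transport tasks. The distances and load classes in the example are invented; only the emission coefficients (212, 646, 1056, and 1254 g CO2/vehicle*km) are taken from the text.

```python
# Emission coefficients in g CO2 per vehicle-km, by carried load, as quoted in the text [53,54].
EMISSION_G_PER_KM = {"0-5t": 212, "5-12t": 646, "12-32t": 1056, "up-to-44t": 1254}

def journey_emissions_kg(legs):
    """E_{i-j} = e * D_{i-j}, summed over the legs of a journey.

    legs: list of (distance_km, load_class) tuples.
    """
    return sum(EMISSION_G_PER_KM[load] * km for km, load in legs) / 1000.0

# Hypothetical reference scenario: two separate tasks, each with a loaded leg and an empty return.
reference = journey_emissions_kg([(80, "12-32t"), (80, "0-5t"),    # task 1: loaded out, empty back
                                  (90, "12-32t"), (90, "0-5t")])   # task 2: loaded out, empty back
# Hypothetical combined round-trip: both tasks served in one journey with a short connecting leg.
combined = journey_emissions_kg([(80, "12-32t"), (15, "0-5t"), (90, "12-32t")])

saving_kg = reference - combined
print(round(reference, 1), round(combined, 1), round(saving_kg, 1))  # 215.6, 182.7, 32.9 kg CO2
```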
Results

The CEA applied in this paper is based on a real set of data coming from a road transport company. The data set used contains information on the transport tasks that have been combined in round journeys. The data refer to transport deliveries of 20' containers. Detailed information is used regarding the loaded/unloaded status, location, time of pick-up/drop-off, and the type of chassis used for each journey. One must be aware that the hard data with regard to each individual trip is the object of a bilateral agreement between the data provider and the researchers conducting this study, and is thus not publicly available.

Data covering two periods was used in two analyses. The first analysis focuses on a long period of five months, from 1 March to 31 July 2015. This period is used as the main sample and it ensures that the combination of transport tasks is consistent. These transport journeys are used to derive the main conclusions of this paper. In addition, the second analysis focuses on a shorter time interval of two weeks, from 14 to 25 April 2014. This period overlaps with data for the same time interval from the initial sample. The latter will show whether the use of the same practice has changed from one year to another and whether there is an impact of seasonality on transport practices.
In this study, the data collected shows that in the period of five months, a total of 2992 round transport journeys with an average length of 462 km were performed. Figure 9 gives an overview of the used transport practices for the road journeys. The highest proportion of tasks has been done by using mini eco-combi chassis (56%), followed by multipurpose chassis (23%), re-use of empty containers (19%) and tilt chassis (2%). A main remark is that the mini eco-combi chassis is the most popular option for adding two or more transport tasks in one journey. The re-use of empty containers and the multipurpose chassis have an approximately equal share in the daily operation of the studied data set.

Figure 10 presents a detailed overview of the operational cost and CO2 emissions for each transport practice. The main elements that take part in the cost structure of road transport are labor and fuel consumption costs. These costs and CO2 emissions are determined as a function of each journey length. As it can be seen in Figure 10, these two outcomes vary similarly for each type of transport practice.
A more detailed overview of the savings generated by using innovative solutions in road transport operations is presented in Table 6. The percentages showing the cost and emission savings are calculated relative to the reference scenario. From Table 6, two conclusions can be derived with regard to the savings generated by using each transport practice. Firstly, combining transport tasks in one journey reduces the operational cost by between 25% and 35%. Secondly, the CO2 emissions are reduced as well. On average, the CO2 emissions are lower by 34% to 38% in the case of round journeys. The highest cost and CO2 emission reduction is achieved by using chassis from the multipurpose category, where the computed values are on average 35% lower for costs and 38% lower for CO2 emissions. In contrast, the outcomes with respect to the eco-combi show that the operational costs are lower by 25% and the CO2 emissions by 34%.

These conclusions are further confirmed by the CEA outcomes. Table 7 presents, for each category of round-trips, the costs and the amount of CO2 saved. A further ratio between the cost of a round-trip and the CO2 emissions saved determines the cost-effectiveness ratio with respect to each transport practice. Using the multipurpose chassis is the most cost-effective practice, followed by the re-use of empty containers and the eco-combi chassis. In the case of the eco-combi chassis, the distance between intermediary loading and unloading points increases. In other words, for this type of chassis, regardless of the year considered, the transport tasks chosen to be added in the same round journey were located farther from each other. This stretch, having a higher share of the total travelled distance, lowered the performance with regard to fuel consumption and labor costs. As these results demonstrate, choosing to combine transport tasks for which the successive destination and origin are not located in the vicinity of each other has a negative effect on the cost effectiveness. This remark is supported by the opposite results in the case of the multipurpose chassis and the re-use of empty containers. For these practices, the transport tasks that have been chosen have closer destination and origin points, and by comparison, a better cost-effectiveness outcome.
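The savings and cost-effectiveness ratio described above can be sketched in a few lines. This is a minimal illustration assuming per-trip records of the cost and CO2 emissions of the combined round journey and of the reference scenario (two separate journeys); the field names and the sample figures are hypothetical placeholders, not values from Tables 6 and 7.

```python
# Minimal sketch of the savings and cost-effectiveness calculation for a
# round-trip, relative to the reference scenario (two separate journeys).
# Field names and the sample figures are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Trip:
    practice: str            # e.g. "multipurpose", "mini eco-combi", ...
    cost_reference: float    # EUR, two separate journeys (reference scenario)
    cost_combined: float     # EUR, combined round journey
    co2_reference_kg: float  # kg CO2, reference scenario
    co2_combined_kg: float   # kg CO2, combined round journey

def savings(trip: Trip):
    cost_saved = trip.cost_reference - trip.cost_combined
    co2_saved = trip.co2_reference_kg - trip.co2_combined_kg
    # Cost-effectiveness ratio: cost of the round-trip per kg of CO2 saved
    cer = trip.cost_combined / co2_saved if co2_saved > 0 else float("inf")
    return cost_saved, co2_saved, cer

trip = Trip("multipurpose", cost_reference=620.0, cost_combined=405.0,
            co2_reference_kg=240.0, co2_combined_kg=150.0)
cost_saved, co2_saved, cer = savings(trip)
print(f"saved {cost_saved:.0f} EUR and {co2_saved:.0f} kg CO2, CER = {cer:.2f} EUR/kg saved")
```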
Conclusions

Road transport operators are forced to innovate by a competitive environment and a sector with low profit margins. Most innovation developments in the road sector focus on economic benefits. Nonetheless, environmental objectives are also necessary nowadays. Innovation in the road transport sector has positive consequences for the environment as well. However, this type of impact is achieved incidentally and is not always acknowledged. To address this shortcoming, the paper applies a cost-effectiveness analysis to a road transport case study to demonstrate the environmental merits. Such technological innovation (e.g., innovative chassis use), even though not subject to incentives from policy makers, has a high potential for environmental benefits. Moreover, these technological achievements have a high impact on the global effects of supply chains and freight transportation systems. After an in-depth literature review of studies dedicated to CEA, an investigation of the characteristics of innovative solutions used in road transport of containers in the hinterland of a port in Western Europe is carried out. These solutions are the object of the CEA, whose goal is to gain more insight into the decision-making process and to evaluate how these decisions ultimately impact environmental quality.

This study adds to the current state of science through its detailed methodological approach. The methodology puts forward, in a detailed theoretical model, the process through which dispatchers take operational decisions. Moreover, the calculation of costs, monetary savings, and emission reductions generated by innovation in transport operations is presented in a detailed overview. As opposed to similar studies, the cost-effectiveness ratio shows, in a composite unit, the performance of several transport practices.

Moreover, this research provides evidence on the tendency of a firm active in the road transport sector to adopt clean transportation planning practices and technologies if the outcomes are boosted by economic performance and/or governmental financial support. This hypothesis is also tested by the current approach. Starting from a real set of data, this paper thus investigates whether innovative road transport practices that have purely economic objectives also bring environmental benefits.

The case study analysis focuses on practices used for the transport of 20' containers in import/export operations in the hinterland of a seaport where the consolidation of tasks into round-trips is possible. A further assumption is that transhipment or container swap-movements are not possible at hinterland locations, as no equipment (e.g., cranes, reach stackers) is available. Therefore, four road transport practices are analyzed in relation to a reference scenario. These technological innovation initiatives are introduced into a chassis management scheme whose purpose is to combine two or more transport tasks in one round journey. The practices involved are the multipurpose chassis, the mini eco-combi, the tilt chassis and the re-use of empty containers. For each type of chassis, the cost and emission savings brought by the round journey are quantified using particularized functions.
The results show that by using different innovative chassis and new planning procedures to form round journeys, a transport company achieves positive results with respect to both costs and environmental emissions. While the costs are reduced on average by 25% to 35%, the environmental emissions are lowered by 34% to 38%. Moreover, the use of the multipurpose chassis represents the most cost-effective practice that addresses environmental emissions. A further conclusion is made with regard to the location of successive origins and destinations of transport tasks that are combined in one round journey: increasing the distance between successive origins and destinations makes the combination of transport journeys less effective.

This research is relevant for industry as well as for researchers conducting studies in road transport. The methodology presented, the results and their interpretation represent a basic foundation on which operational decisions in road transport can be made. Moreover, this research shows policy makers that technological innovation in road transport brings environmental benefits as well; the latter could benefit from higher appreciation. The results are given from the perspective of one road transport operator, for practices applicable to the transport of 20' containers and for operations in the hinterland area of a seaport that involve import/export container movements. The methodology used can be generalized globally for this niche of road transport operations.

Therefore, further research is required to validate the findings from this paper in more depth and for other cases. The expansion of the dataset to a longer period would allow for consistency testing. Similar research can be carried out by extending the round-trip possibilities enabled by including 40' containers; with eco-combi chassis, three 20' containers or one 40' and one 20' container could be transported in the same journey. Equally, the opportunity cost in relation to human resources could be included when calculating the cost of different transport options. It would also be useful to see whether the findings of this paper are confirmed by a larger set of transport journeys or other innovative transport practices.

2.1.2. Multi-Dimensional Innovation Set in Use by Road Haulers for Round-Trips

Figure 6. The steps undertaken for applying the cost-effectiveness methodology. Source: own composition based on [26].
Figure 7. Theoretical method for combining two transport journeys in one trip. (a) Individual transport journeys per task, (b) Round journey using multipurpose chassis, (c) Round journey using mini eco-combi chassis.
Figure 8. Logical scheme representing the decision process for building up round journeys.
Figure 9. Share of practices used by the road transport company in its operations. Source: own calculation.
Figure 10. Total cost and emissions of each transport practice. Source: own calculations.
Table 1. Loading conditions considered when creating round journeys. Source: own composition based on interviews. (Column headings include: ... of Combined Trip; Classical Chassis; Multipurpose Chassis; Rear Wheel Steering Chassis; Legged Mounted Chassis; Hydraulic Chassis; Mini Eco-combi; Tilt Chassis and Re-use.)
Table 3. Distance coefficient used in the further analysis. Source: own compilation based on interviews.
Table 4. Usage and maintenance costs for each type of chassis used. Source: own compilation based on interviews.
Table 5. Comparison of chassis usage: calculating the costs and emissions savings. Source: own composition. (Notation notes: distances between the first pick-up and drop-off location, between two successive pick-up and drop-off locations, and between the first drop-off point and the second pick-up location, respectively; t j op.drop - extra operational time needed at drop-off for task j; t j op.pick - extra operational time needed at pick-up for task j.)
Table 6. Costs and emissions saving for each trip category. Source: own calculations.
2019-05-21T13:05:48.761Z
2019-04-12T00:00:00.000
{ "year": 2019, "sha1": "c64073791acb6974caa7166291b3258bfb00e8dc", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/11/8/2212/pdf?version=1555068098", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "c64073791acb6974caa7166291b3258bfb00e8dc", "s2fieldsofstudy": [ "Environmental Science", "Business" ], "extfieldsofstudy": [ "Economics" ] }
232138725
pes2o/s2orc
v3-fos-license
A distinct cognitive profile in individuals with 3q29 deletion syndrome Background: 3q29 deletion syndrome is associated with mild to moderate intellectual disability. However, a detailed understanding of the impact on cognitive ability is lacking. The goal of this study was to address this knowledge gap. A second goal was to ask whether the cognitive impact of the deletion predicted psychopathology in other domains. Methods: We systematically evaluated cognitive ability, adaptive behavior, and psychopathology in 32 individuals with the canonical 3q29 deletion using gold-standard instruments and a standardized phenotyping protocol. Results: Mean FSIQ was 73 (range 40-99). Verbal subtest score (mean 80, range 31-106) was slightly higher and had a greater range than nonverbal subtest score (mean 75, range 53-98). Spatial ability was evaluated in a subset (n = 24) and was lower than verbal and nonverbal ability (mean 71, range 34-108). There was an average 14-point difference between verbal and nonverbal subset scores; 60% of the time the verbal subset score was higher than the nonverbal subset score. Study subjects with a verbal ability subtest score lower than the nonverbal subtest score were 4 times more likely to have a diagnosis of intellectual disability (suggestive, p-value 0.07). The age at which a child first spoke two-word phrases was strongly associated with measures of verbal ability (p-value 2.56e-07). Cognitive ability was correlated with adaptive behavior measures (correlation 0.42, p-value 0.02). However, though group means found equivalent score, there was, on average, a 10-point gap between these skills (range -33 to 33), in either direction, in about 50% of the sample suggesting that suggesting that cognitive measures only partially inform adaptive ability. Cognitive ability scores did not have any significant relationship to cumulative burden of psychopathology nor to individual neurodevelopmental or psychiatric diagnoses. Conclusions: Individuals with 3q29 deletion syndrome have a complex pattern of cognitive disability. Two-thirds of individuals with the deletion will exhibit significant strength in verbal ability; this may mask deficits in non-verbal reasoning, leading to an over-estimation of overall ability. Deficits in verbal ability may be the driver of intellectual disability diagnosis. Cognitive ability is not a strong indicator of other neurodevelopmental or psychiatric impairment; thus individuals with 3q29 deletion syndrome who exhibit IQ scores within the normal range should receive all recommended behavioral evaluations. comparing individuals of the same chronological age without any reference for differences in developmental level. Second, the potential for etiological differences across individuals with IDD and thus unique phenotypic presentations. Lastly, the social and environmental factors influencing an individual's cumulative life experience (For literature review see Zigler and Hodapp, 1986;Burack et al., 2001Burack et al., , 2012. Over time, an understanding of the methodological weaknesses to this approach emerged and a more inclusive manner of thinking about IDD began. This conceptual and methodological shift, focused on using a developmental approach to study what Zigler called the "whole child" (Zigler and Hodapp, 1986). 
Simply put, a way to more precisely describe an individual's pattern of strengths and weaknesses across a variety of developmental domains (e.g., cognitive, social, adaptive) while also considering the various contexts (e.g., family, community, society) likely to impact the individual. Within this model, understanding cognitive skills remained important but were not the sole focus of characterizing individuals with IDD. Additionally, as technology in the field of genetics advanced, the ability to describe IDD based on known etiologic differences was further realized. Given some of the extensive past problems of overgeneralizing findings of individuals with genetic syndromes secondary to lack of precision in defining participant groups to be studied, Zigler and his colleagues articulated the need of the developmental approach in understanding individuals with IDD (Zigler, 1967(Zigler, , 1969. Thus the study of a specific rare variant, in the case of this manuscript those with 3q29 deletion syndrome, can help us both understand the unique strengths and vulnerabilities of those individuals affected by the disorder as well as help our awareness of withinand between-group differences to better our understanding of the entire range of developmental problems, risk-potentiating factors, and protective characteristics (Burack, 1997). To help expand knowledge of IDD, we took lessons from the developmental formulation postulated by Zigler in the late 1960s, and the premise that "if the etiology of the phenotypic intelligence (as measured by an IQ) of two groups differs, it is far from logical to assert that the course of development is the same, or that . CC-BY-NC 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. (which was not certified by peer review) The copyright holder for this preprint this version posted March 8, 2021. ;https://doi.org/10.1101https://doi.org/10. /2021 even similar contents in their behaviors are mediated by exactly the same cognitive process" (Zigler, 1969, p. 553). Here, we apply this knowledge to understand the cognitive profile of individuals with 3q29 deletion syndrome. 3q29 deletion syndrome (Hamosh, 2012) is caused by a hemizygous 1.6 Mb deletion containing 21 genes (Willatt et al., 2005;Ballif et al., 2008). The deletion is typically de novo, though inherited cases have been reported (Cox and Butler, 2015;Murphy et al., 2020). Early case reports describe mild to moderate intellectual disability as a common syndromic phenotype, with developmental delay, including speech delay, as the initial presentation (Girirajan et al., 2012;Cox and Butler, 2015). It is now understood that the 3q29 deletion is associated with a range of neurodevelopmental and psychiatric disabilities that manifest throughout the lifespan, including intellectual disability, autism, anxiety disorders, ADHD, and a 40-fold increased risk for schizophrenia (Mulle, 2015;Sanders et al., 2015;Glassford et al., 2016;Mulle et al., 2016;Marshall et al., 2017;Pollak et al., 2019;Sanchez Russo et al., 2021). In a recent study among individuals with the 3q29 deletion who were systematically evaluated with a deep-phenotyping protocol, only 34% qualified for a diagnosis of intellectual disability (Murphy et al., 2018;Sanchez Russo et al., 2021). IQ measures among study subjects ranged from 40-99, thus individuals with the 3q29 deletion may have cognitive ability that is well within the average range. 
This range of cognitive abilities tends to differ when intellectual functioning is described in case report studies which indicate a much greater percentage of individuals with intellectual disability. It also likely argues against prior reports estimating that 92% of individuals with the 3q29 deletion demonstrate a mild to moderate intellectual disability (Cox and Butler, 2015). These findings suggest the not only the possibility of an ascertainment bias, but may also conflate intellectual disability and learning disability, which are not one in the same. These biases may have had an outsized influence, overestimating the impact of the 3q29 deletion on cognitive ability. The current study aims to carefully examine the cognitive profiles of individuals with 3q29 deletion syndrome, using cognitive data that has been uniformly measured among a case series of . CC-BY-NC 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. Using Zigler's developmental framework, we aim to identify the unique strengths and vulnerabilities of these individuals, as a group and individually, to better understand their developmental trajectory as well frame our findings onto other genetic disabilities or developmental disorders such as autism spectrum disorder. Methods: Study Participants. Individuals with 3q29 deletion syndrome were recruited from the 3q29 registry (3q29deletion.org) housed at Emory University to participate in an in-person deep phenotyping study. The phenotyping protocol has been described previously (Murphy et al., 2018). Eligibility criteria were: a validated clinical diagnosis of 3q29 deletion syndrome where the subject's deletion overlapped the canonical region (hg19, chr3:195725000-197350000) by >80%, and willingness and ability to travel to Atlanta, GA. Funding for travel was provided to increase the diversity of participants. Exclusion criteria were: any 3q29 deletion with less than 80% overlap with the canonical region; non-fluency in English, and age younger than six years. One exception to the age criterion was made; a 4.85-year-old who was part of a previously-described multiplex family was included in the current study (Murphy et al., 2020). Prior to study participation, an informed consent session was conducted, and repeated in-person at the beginning of the study visit. This study was approved by the Emory Institutional Board (IRB000088012). Instruments. Cognitive ability was evaluated using the Differential Ability Scales, Second Edition (DAS-II ages <18, n = 24) or the Wechsler Abbreviated Scale of Intelligence, Second Edition (WASI-II, >18, n = 7) which was determined based on the participants' age. The DAS-II (Elliott, 2007) is an assessment of cognitive abilities for children ages 2 years, 6 months to 17 years, 11 months. The DAS-II is comprised of individual subtests that evaluate overall verbal, nonverbal and spatial abilities. A General Conceptual Ability (GCA) composite score is generated that reflects an overall estimate of cognitive functioning. The WASI-II (Wechsler, 2011) is an abbreviated measure . CC-BY-NC 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. (which was not certified by peer review) The copyright holder for this preprint this version posted March 8, 2021. 
; https://doi.org/10.1101/2021.03.05.21252967 doi: medRxiv preprint of cognitive functioning for individuals ages 6-90 and provides an estimate of verbal and nonverbal abilities and generates an overall Full Scale IQ (FSIQ). Of the individuals administered the DAS, 6 children aged 6 and younger were administered the DAS-II Early Years Battery (DAS-EY); one additional study subject, an 11 year old with limited verbal ability, was also administered the DAS-EY and age equivalents and ratio IQs were obtained. Seventeen children between ages 7-18 years were administered the DAS-II School Age Battery (DAS-SA). The other 8 individuals were administered the WASI-II. Instruments were administered by clinical psychologists (CS, CK, SW) early in the day to support the participants' active engagement. Adaptive behavior was assessed with the Vineland Adaptive Behavior Scales, Third Edition (Vineland-3; (Sparrow, Cicchetti and Saulnier, 2016). The Vineland-3 is a measure that assesses overall adaptive ability from birth to age 90 by gathering information about day-to-day activities across three adaptive domains of socialization, daily living skills, and communication skills. An Adaptive Behavior Composite (ABC) comprised of scores from each subdomain (Socialization, Daily Living Skills and Communication Skills) provides an overall estimate of an individual's adaptive functioning. The Comprehensive Parent/Caregiver Form was utilized for this study. The Vineland-3 was completed by the parent or caretaker electronically via publisher websites (i.e., Pearson Q-global) either before or during the visit. Instruments were scored according to publisher's instruction. Neurodevelopmental and psychiatric diagnoses were made using gold-standard instruments administered by experienced, trained professionals, as previously described (Sanchez Russo et al., 2021). Diagnoses were made by expert clinicians (CS, CK, SW, JB, LB, EW). To receive a diagnosis of ID, both cognitive and adaptive behavior scores needed to be approximately 2 or more standard deviations below the mean, consistent with diagnostic criteria in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5; APA, 2013). Data Analysis. All data were captured in a custom local REDCap database. Data were exported for analysis and visualization in R. After confirming there were no test-specific score differences in standard scores for GCA/FSIQ, verbal ability (VIQ), and non-verbal ability (NVIQ) . CC-BY-NC 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. Profile of cognitive and adaptive abilities -overall group differences. Mean FSIQ across all study subjects was 73 (median 75.5, range 40-99). VIQ was slightly higher than FSIQ (mean = 80) and had greater variance across study subjects (median 85, range 31-106). NVIQ (mean 75, median 75, range 53-98) and spatial ability (mean 71, median 68, range 34-108) were both lower than verbal ability, though these differences were not statistically significant ( Figure 1). Spatial ability was measured only with the DAS-II (n = 24 subjects) and was not analyzed further. There was a suggestive relationship between sex, for both FSIQ and VIQ, where female study subjects were on average 9 points higher for FSIQ (p-value 0.051) and 10 points higher for VIQ (p-value 0.08, Table S2). 
Because of the small sample size in this study (n = 12 females), we did not conduct any sexstratified analyses, but we note that study of sex-related differences are an important future direction that will require a larger sample size. Age was not significantly associated with FSIQ, VIQ, or NVIQ (Table S3). The overall Adaptive Behavior Composite as measured by the Vineland-3 was a mean of 74 (median 71, range 48-110). Eleven individuals (34%) qualified for a diagnosis of intellectual disability (ID), where cognitive and adaptive behavior scores were >2 SD below the expected mean (< 70 on both evaluations). Relationship between cognitive ability and comorbid disorders. The 3q29 deletion is associated with a high burden of neurodevelopmental and psychiatric illness (Quintero-Rivera, Sharifi-Hannauer and Martinez-Agosto, 2010;Marshall et al., 2017;Pollak et al., 2019;Sanchez Russo et al., 2021). We examined whether FSIQ, VIQ, or NVIQ were associated with comorbid neurodevelopmental or psychiatric diagnoses common to our study subjects, including Autism . CC-BY-NC 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. (which was not certified by peer review) The copyright holder for this preprint this version posted March 8, 2021. ; https://doi.org/10.1101/2021.03.05.21252967 doi: medRxiv preprint Spectrum Disorder (ASD), Attention Deficit Hyperactivity Disorder (ADHD), any anxiety disorder, or psychosis. There was no relationship between FSIQ or VIQ and any individual diagnostic category (Tables S4, S5). For NVIQ, individuals with ASD had lower scores than those without ASD (69 vs 79, p = 0.02, Table S6). On the other hand, individuals with an anxiety disorder had higher NVIQ measures than those without an anxiety disorder (80 vs 72, p = 0.05, Table S6). These data suggest there may be complex interactions between cognitive ability and comorbid conditions, and a larger sample size will be required to untangle these relationships. We also sought to ask whether cognitive disability is related to the cumulative burden of neurodevelopmental and psychiatric illness. For each individual, we summed overall possible diagnoses (ASD, ADHD, anxiety disorder, psychosis). The total possible sum is 4; the range in our sample was 0-3 (0 diagnoses: 4 individuals; 1 diagnosis, 13 individuals; 2 diagnoses, 9 individuals; 3 diagnoses, 6 individuals). Our data ( Figure 2) do not suggest any relationship between cognitive ability and overall burden of illness (p-value 0.92); individuals with 3q29 deletion syndrome who escape a substantial cognitive impact (with cognitive ability scores in the normal range) remain at significant risk for additional neurodevelopmental and psychiatric comorbidity. Differences in IQ subtest scores in 3q29 deletion syndrome -individual subject results. In the majority of our study subjects, at an individual data level, there were large discrepancies between verbal and nonverbal subtest scores ( Figure 3A and 3B). The average absolute difference between subtest scores was 14 points. In 59% of our sample (n = 19), the verbal subtest score was 5 or more points higher than the nonverbal score (range 5-42). For 31% of our sample (n = 10), the verbal subtest score was 5 or more points lower than the nonverbal subtest score (range 7-34). For 9% of our samples (n = 3), verbal and nonverbal subset scores were within 5 points of one another. 
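The two classification rules applied here, the intellectual disability criterion (cognitive and adaptive composite standard scores both below 70, i.e., at least 2 SD below the normative mean of 100 with SD 15) and the 5-point verbal/nonverbal discrepancy grouping, can be expressed in a few lines of code. This is an illustrative sketch only, not the study's analysis code (which was run in R); the function and argument names are assumptions.

```python
# Illustrative sketch of the classification rules described in the text.
# Standard scores assume the conventional mean of 100 and SD of 15, so a
# score below 70 is at least 2 SD below the normative mean.

def meets_id_criterion(fsiq: float, vineland_abc: float) -> bool:
    """ID requires BOTH the cognitive and the adaptive composite to be below 70."""
    return fsiq < 70 and vineland_abc < 70

def viq_nviq_group(viq: float, nviq: float) -> str:
    """Group subjects by the verbal/nonverbal discrepancy using the 5-point rule."""
    diff = viq - nviq
    if diff >= 5:
        return "VIQ > NVIQ"
    elif diff <= -5:
        return "VIQ < NVIQ"
    return "within 5 points"

# Hypothetical example values (not study data)
print(meets_id_criterion(fsiq=65, vineland_abc=62))   # True
print(viq_nviq_group(viq=85, nviq=68))                # "VIQ > NVIQ"
```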
We examined the relationship between VIQ/NVIQ disparity and comorbid diagnoses. Of 10 individuals with VIQ lower than NVIQ, 7 (70%) had a diagnosis of ID, compared to 4 out of 22 individuals (18%) with VIQ higher than or equal to NVIQ (odds ratio 9.5, p-value 0.01). This analysis reveals that among individuals with 3q29 deletion syndrome, those with VIQ lower than NVIQ are 9.5 . CC-BY-NC 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. (which was not certified by peer review) The copyright holder for this preprint this version posted March 8, 2021. ; https://doi.org/10.1101/2021.03.05.21252967 doi: medRxiv preprint times more likely to have a diagnosis of intellectual disability, as compared to individuals with VIQ equal to or higher than NVIQ. There were no significant relationships between the VIQ/NVIQ difference and other comorbid phenotypes ( Figures S4A-D). Predictors of verbal ability. Verbal ability was a noted strength for many of our study subjects. This is a slight paradox, as many individuals with 3q29 deletion syndrome exhibit speech delay. In our study sample, the average age that children began saying first words was 1.6 years (range 0.5 -4 years), and the average age for speaking in two-word phrases was 3.2 years (range 1.2-10 years). We evaluated whether verbal ability was "in process" and still developing for our study subjects, by assessing whether the time in years from talking (first words or two-words phrases) to the time of our evaluation was associated with verbal ability. There was no apparent relationship between these variables, revealing that the VIQ-NVIQ disparity was not an artifact of developmental time ( Figure S5). We next evaluated whether absolute age at talking (single words or two-word phrases) was associated with verbal ability. Age at which the child spoke two-word phrases was strongly associated with verbal ability (p-value < 0.0001) with an estimated decrease of 8 points in VIQ for every year that speaking in two-word phrases was delayed. This relationship was specific to two-word phrases; the age at which single words were spoken was not associated with VIQ (p-value 0.93). Speaking in two words-phrases had at best a suggestive relationship to NVIQ (p-value 0.06), with an estimated loss of 2 NVIQ points for every year two-word phrases were delayed. Relationship between cognitive ability and adaptive behavior. FSIQ and adaptive behavior standard scores were well-correlated (correlation coefficient 0.42, p-value 0.02, Figure 5). However, 50% of our study sample had a departure of half a standard deviation (7.5 points) or more (average mean absolute difference between cognitive and adaptive standard scores = 10.9). Six study subjects (19%) had lower adaptive behavior scores than would be expected based on their cognitive ability; 10 study subjects (31%) had higher adaptive behavior scores than would be expected based on cognitive ability. There was no relationship between comorbid diagnosis and cognitive/adaptive disparity. VIQ standard scores were on average 5 points lower than adaptive behavior standard . CC-BY-NC 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. (which was not certified by peer review) The copyright holder for this preprint this version posted March 8, 2021. 
; scores (range 27 to -38, absolute mean difference 13 points, Figure 5), and many study subjects with high verbal ability perform lower than expected for adaptive behaviors. NVIQ was also likely to depart from adaptive behavior standard scores, but was less likely to systematically be lower (NVIQ on average 1 point lower than adaptive behavior standard scores, range 29 to -30, absolute mean difference 11 points). Discussion. Overall Cognitive Profile. Individuals with 3q29 deletion syndrome demonstrate a unique profile of cognitive strengths and weaknesses. While the average overall IQ across participants was in the range of what would be considered mild intellectual disability, there was variability across individual subjects ranging from moderate intellectual disability to average cognitive functioning. This finding supports the variability observed in case reports, particularly when the 3q29 deletion is inherited (Li et al., 2009;Cobb et al., 2010;Khan et al., 2019;Murphy et al., 2020). Additionally, in line with the developmental perspective, looking at differences across verbal, non-verbal, and spatial skills reveals that this overall IQ may be less meaningful in understanding the functioning level of individuals with 3q29 deletion syndrome as there are specific strengths and weaknesses noted in these individual domains. For example, verbal ability tended to be an area of relative strength across participants whereas nonverbal and spatial skills (where collected) were areas of relative weakness. This pattern of strengths and weaknesses may set the stage for possible learning difficulties or disabilities in this group. Mild to moderate learning disabilities were similarly reported in an estimated 92% of individuals with the 3q29 deletion syndrome (Cox and Butler, 2015). However, learning disabilities are often diagnosed in context of academic difficulties and so these assumptions are provided tentatively and as an area that requires further investigation. Although anecdotally many of the children in the study did require academic supports through their local school district, receiving some level of special education services. With regard to specific cognitive profiles, verbal ability was a noted strength for many of our study subjects despite the reported history of speech delay highlighting the difference between . CC-BY-NC 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. (which was not certified by peer review) communication skills and verbal cognitive abilities. With each year of delay in speaking in phrases, individuals with 3q29 had an estimated decrease of 8 points in their verbal cognitive abilities. No other developmental factor or comorbid psychiatric or developmental disability was related to cognitive abilities or splits between verbal and nonverbal abilities. In addition, there was no relation between age, sex or age at first words indicating that the verbal/nonverbal cognitive disparity was not an artifact of developmental time. Cognitive Functioning and Adaptive behavior. In our sample, 11 individuals met criteria for some level of Intellectual Disability demonstrating scores on both cognitive and adaptive measures significantly below the average range (i.e., >2 SD below the expected mean). This is a 31-fold increase over the expected population prevalence of 1.1% (Zablotsky et al., 2019). 
Although overall cognition and adaptive behavior scores were well-correlated in our sample, individual profiles of cognition and adaptive behavior varied significantly across participants with 3q29 deletion syndrome. While some individuals skills were consistent across measures, 50% of the sample presented with adaptive skills that were either slightly higher than expected (31%) or lower than expected. Furthermore, many individuals with higher overall verbal skills on cognitive assessments performed lower on adaptive measures. Given that verbal ability is readily apparent and easily observed in many settings, there is a risk that overall ability may be overestimated for these individuals. These data underscore that mean scores and scores presented in isolation (i.e. just one construct) are limited in informing about overall profiles and outcomes for individuals with 3q29 deletion syndrome. Cognitive Functioning and Comorbid Disorders. The 3q29 deletion is associated with a high burden of comorbid neurodevelopmental and psychiatric diagnoses. In our study sample, there was no association between overall cognitive abilities or verbal IQ and the assessed diagnostic categories of Autism Spectrum Disorder (ASD), Attention Deficit Hyperactivity Disorder (ADHD), any anxiety disorder, or psychosis. Individuals with ASD had lower nonverbal IQ scores than those without ASD, and individuals with anxiety disorder had higher nonverbal IQ than those without an . CC-BY-NC 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. (which was not certified by peer review) The copyright holder for this preprint this version posted March 8, 2021. ; https://doi.org/10.1101/2021.03.05.21252967 doi: medRxiv preprint anxiety disorder. These data suggest there may be complex interactions between cognitive ability and comorbid conditions, and a larger sample size will be required to untangle these relationships. We also sought to ask whether cognition is related to the cumulative burden of neurodevelopmental and psychiatric conditions. Overall, there was no relationship between intellectual disability and the overall burden of co-occurring psychiatric conditions in individuals with 3q29 deletion syndrome. However, individuals without cognitive delay remain at significant risk for having co-occurring neurodevelopmental and psychiatric conditions. Thus, individuals with 3q29 deletion syndrome with and without cognitive impairment should receive all recommended behavioral evaluations. Summary. These findings underscore the importance of taking a developmental approach in the assessment and characterization of individuals with 3q29 deletion syndrome. Individuals with 3q29 deletion syndrome do not show a simple phenotypic pattern but instead are a complex interplay of cognitive and adaptive profiles, language development, and co-occurring anxiety, ASD, ADHD, and psychosis. Overall IQ is not the best predictor of outcomes in these individuals, particularly given the cognitive splits in the majority of individuals. These gaps in cognitive skills may also be an indicator of the reported rates of learning disabilities and challenges faced in an academic setting. Limitations of this study include the relatively small sample size, particularly when examining cognitive profiles in psychiatric comorbidities. That being said, this is the largest sample of individuals with 3q29 deletion syndrome assessed to date. 
In addition, despite funding for travel provided by the study, the sample includes those individuals who were willing and able to travel to Atlanta. This may represent ascertainment bias; future studies are needed to measure IQ and psychiatric comorbidities in a virtual manner or with a study design where phenotypic experts can travel to the homes of participants rather than requiring individuals to travel to Atlanta. Additional studies will also explore the impact of executive functioning on cognitive profiles as well as profiles of academic functioning. . CC-BY-NC 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. . CC-BY-NC 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity.
2021-03-08T20:01:15.958Z
2021-03-08T00:00:00.000
{ "year": 2022, "sha1": "e23bdbf505d7b0849dff2682d2cb4c9736058e9a", "oa_license": "CCBYNC", "oa_url": "https://www.medrxiv.org/content/medrxiv/early/2021/03/08/2021.03.05.21252967.full.pdf", "oa_status": "GREEN", "pdf_src": "MedRxiv", "pdf_hash": "e23bdbf505d7b0849dff2682d2cb4c9736058e9a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
238478293
pes2o/s2orc
v3-fos-license
Elevated Neutrophil Gelatinase-Associated Lipocalin Is Associated With the Severity of Kidney Injury and Poor Prognosis of Patients With COVID-19 Introduction Loss of kidney function is a common feature of COVID-19 infection, but serum creatinine (SCr) is not a sensitive or specific marker of kidney injury. We tested whether molecular biomarkers of tubular injury measured at hospital admission were associated with acute kidney injury (AKI) in those with COVID-19 infection. Methods This is a prospective cohort observational study consisting of 444 consecutive patients with SARS-CoV-2 enrolled in the Columbia University emergency department (ED) at the peak of the pandemic in New York (March 2020–April 2020). Urine and blood were collected simultaneously at hospital admission (median time: day 0, interquartile range: 0–2 days), and urine biomarkers were analyzed by enzyme-linked immunosorbent assay (ELISA) and a novel dipstick. Kidney biopsies were probed for biomarker RNA and for histopathologic acute tubular injury (ATI) scores. Results Admission urinary neutrophil gelatinase-associated lipocalin (uNGAL) level was associated with AKI diagnosis (267 ± 301 vs. 96 ± 139 ng/ml, P < 0.0001) and staging; uNGAL levels >150 ng/ml had 80% specificity and 75% sensitivity to diagnose AKI stages 2 to 3. Admission uNGAL level quantitatively associated with prolonged AKI, dialysis, shock, prolonged hospitalization, and in-hospital death, even when admission SCr level was not elevated. The risk of dialysis increased almost 4-fold per SD of uNGAL independently of baseline SCr, comorbidities, and proteinuria (odds ratio [OR] [95% CI]: 3.59 [1.83–7.45], P < 0.001). In the kidneys of those with COVID-19, NGAL mRNA expression broadened in parallel with severe histopathologic injury (ATI). Conversely, low uNGAL levels at admission ruled out stages 2 to 3 AKI (negative predictive value: 0.95, 95% CI: 0.92–0.97) and the need for dialysis (negative predictive value: 0.98, 95% CI: 0.96–0.99). Although proteinuria and urinary (u)KIM-1 were implicated in tubular injury, neither was diagnostic of AKI stages. Conclusion In the patients with COVID-19, uNGAL level was quantitatively associated with histopathologic injury (ATI), loss of kidney function (AKI), and severity of patient outcomes. are apparent, evidence of tubular injury may be lacking, for example in the presence of volume depletion, 12,13 a common presentation in patients with COVID-19-associated diarrhea. 10,14 A rise in SCr level may also be confounded by the incidence of rhabdomyolysis in COVID-19, which enhances creatinine production. 15,16 The burden of kidney injury during surges of COVID-19 has strained hospital resources, including Emergency Medicine, Critical Care, and Nephrology, creating an urgent need to limit delays in the identification of individuals at risk for kidney injury, loss of kidney function, and renal replacement therapy. As a result, triage decisions for patients and for resource allocations would benefit from the use of a rapidly responsive, sensitive, and specific noninvasive marker of kidney injury and its attendant outcomes common in COVID-19 infection. Previous research in human and mouse models revealed that 2 molecular markers of tubular injury, uNGAL and uKIM-1, were derived from different segments of the kidney and allow for sensitive detection and real-time distinction between volume sensitive and volume insensitive intrinsic forms of tubular injury. 
12,[17][18][19] Previous evidence of the utility of these markers in the detection of AKI on presentation to the ED 12,20-22 raises the question of whether they might be of value in patients with COVID-19. The need for new diagnostic testing is particularly evident because many patients with COVID-19 are presenting to the hospital for the first time and do not have previous records of SCr. Here, we study the performance of uNGAL and uKIM-1 in a large cohort of patients with acute COVID-19 presenting to the Columbia University ED at the peak of the pandemic in New York City (March 2020-April 2020). We tested the association of uNGAL and uKIM-1 with the subsequent diagnosis, duration, and severity of AKI as defined by the Acute Kidney Injury Network (AKIN) criteria and with in-hospital death, dialysis, shock, respiratory failure, and length of hospital stay. We used both standard ELISA methods and a novel dipstick that can measure uNGAL at the bedside. Finally, we probed whether NGAL and KIM-1 RNA correlated with histopathologic metrics of ATI in kidney biopsy findings from the patients with COVID-19. Human Subjects The Columbia COVID-19 Biobank recruited consecutive COVID-19 cases (positive result from nasopharyngeal SARS-CoV-2 polymerase chain reaction test), regardless of age, sex, or race/ethnicity who received care at the Columbia University Irving Medical Center. The Biobank stored residual blood and urine samples after clinical testing from every patient with COVID- 19 including an initial urine sample and serum obtained at hospital admission, which were used for biomarker measurements. For our analysis, we identified 444 consecutive patients who were admitted between March 24, 2020, and April, 27, 2020, with COVID-19. This included 371 patients who had not only a urine sample from presentation but also information on baseline SCr and complete SCr measurements in the hospital required to determine the stage and duration of AKI ( Figure 1). In addition, 4 patients with end-stage renal disease were excluded. Kidney biopsies were accessioned by the Columbia University Irving Medical Center Renal Pathology Laboratory, including 13 kidney biopsies from COVID-19 cases 23 and 4 non-COVID-19 specimens analyzed by in situ hybridization for NGAL and KIM-1 RNA. The COVID (þ) cohort was compared with a COVID (À) cohort that was recruited in an identical fashion in the Columbia University Irving Medical Center ED between June 2017 and January 2019. 20 Similar to the COVID (þ) cohort, 426 consecutive COVID (À) patients were enrolled at presentation, regardless of age, sex, or race/ethnicity. This included 318 patients with complete SCr measurements in the hospital required to determine stage and duration of AKI. Urine samples were collected within 24 hours of arrival in the ED. Patients with end-stage renal disease were excluded. Definition of Primary Outcomes Primary outcomes included SCr-based AKI, AKIN stage, and the duration of elevated SCr level consistently in both the COVID (þ) and the COVID (À) cohorts. In both cases, baseline SCr level was defined using a standardized algorithm, as previously described by Stevens et al.,20 in the following order: Median SCr from 365 to 31 days before urine collection. If not available, then: Minimum SCr from 30 days before urine collection to the day of collection. If not available, then: Minimum SCr from urine collection to 7 days in the hospital. Patients were classified as "Unknowns" if none of the above-mentioned criteria were met. 
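The ordered fallback used to define baseline SCr lends itself to a short sketch. The following is a minimal illustration of the rule as stated, assuming SCr measurements are available as (day, value) pairs where the day is counted relative to urine collection; the data layout and function name are assumptions, not the authors' code.

```python
# Minimal sketch of the ordered baseline-SCr definition described above.
# `scr` is assumed to be a list of (day, value) pairs, where `day` is the
# number of days relative to urine collection (negative = before collection).
from statistics import median

def baseline_scr(scr):
    prior_year = [v for d, v in scr if -365 <= d <= -31]
    if prior_year:
        return median(prior_year)          # rule 1: median SCr, 365 to 31 days before
    prior_month = [v for d, v in scr if -30 <= d <= 0]
    if prior_month:
        return min(prior_month)            # rule 2: minimum SCr, 30 days before to collection
    in_hospital = [v for d, v in scr if 0 <= d <= 7]
    if in_hospital:
        return min(in_hospital)            # rule 3: minimum SCr, collection to hospital day 7
    return None                            # "Unknown" baseline

# Hypothetical example (mg/dl)
print(baseline_scr([(-200, 0.9), (-90, 1.1), (1, 1.6), (3, 2.2)]))  # 1.0 (median of 0.9 and 1.1)
```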
AKIN 23 stages were classified as follows: AKIN stage 1: $0.3 mg/dl increase in SCr within a 48hour window OR 1.5-to 2-fold increase in SCr compared with baseline. AKIN stage 2: >2to 3-fold increase in SCr compared with baseline. AKIN stage 3: $0.5 mg/dl increase in SCr within a 48hour window when SCr $4.0 mg/dl OR >3-fold increase in SCr compared with baseline. The peak SCr level measured within 48 hours after urine collection was used to diagnose AKIN stage. In select cases, the day 1 AKIN score was imputed when the preceding and subsequent AKIN scores were identical. Further categorization was based on the duration of SCr elevation above the baseline: No AKI (AKIN 0)-not meeting AKIN criteria within 2 days of presentation (must have SCr values for both days). Transient AKI-met AKIN criteria on day 0 or 1 of presentation but normalized below AKIN detection thresholds within 2 days after first detection (total AKI duration <72 hours). Sustained AKI (sAKI)-met AKIN criteria within 2 days of presentation but normalized below the AKIN detection thresholds only after 2 days from the first detection (total AKI duration >72 hours). Unknown-missing baseline SCr, or insufficient SCr measurements to determine SCr kinetics, or missing measurements on day 0 or 1 that could not be imputed owing to discrepant AKIN scores ( Figure 1). Further categorization of elevated SCr level was based on the duration of SCr elevation above the baseline, including transient AKI rapidly normalizing SCr level (<72 hours) and sustained (sAKI) delayed normalization of SCr level (>72 hours). Urine output was not used for AKI definition because of the variable use of Foley catheters and incomplete recording of urine output in the ED. Definition of Secondary Outcomes Secondary outcomes included dialysis, shock, respiratory failure, length of hospital stay, and in-hospital death. Shock was defined by the need for vasopressors, and respiratory failure was defined by the need for either invasive or noninvasive positive pressure ventilation. Staging of chronic kidney disease was determined using the Chronic Kidney Disease Epidemiology Collaboration estimated glomerular filtration rate formula and the baseline creatinine as described previously. 24 Laboratory Analysis All urinary measurements were blinded to clinical data. NGAL and uKIM-1 were measured by ELISA (KIM-1: Enzo, ADI-900-226-0001; NGAL: BioPorto, KIT036). Proteinuria was detected with Chemstrip 10 SG (Roche Diagnostics). Urine was centrifuged (12,000 rpm; 10 minutes) and applied to NGAL gRAD dipsticks (Bio-Porto), and color development was compared with a color scale marking NGAL (ng/ml) concentration by 2 independent readers. Urinary cell pellets were analyzed for LRP2 and UMOD by immunoblot with SDS-PAGE (Bio-Rad Laboratories), rabbit anti-LRP2 (1:1000, Abcam, ab76969), sheep anti-UMOD (1:2000, Meridian Life Science, K90071C), and polyclonal secondary antibodies conjugated to HRP (1:10,000, Jackson ImmunoResearch). Confirmatory Renal Ischemia-Reperfusion Injury Analysis Male and female wild-type C57Bl/6 mice, aged 8 to 10 weeks (Jackson Labs) were anesthetized with isoflurane and placed on a warming The COVID-19 cohort includes all patients including "unknown." The COVID-19-negative cohort is a historical comparison cohort as we previously published in Stevens et al. 20 AKI, acute kidney injury; AKIN, Acute Kidney Injury Network; CKD, chronic kidney disease; NA, data not available; NS, not significant; SCr, serum creatinine. 
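The AKIN staging rules listed above map directly onto a small decision function. Below is a minimal sketch assuming the baseline SCr, the peak SCr within 48 hours of urine collection, and the largest absolute rise within any 48-hour window have already been computed; the thresholds follow the standard AKIN criteria quoted in the text, and the function signature is an assumption rather than the study's code.

```python
# Minimal sketch of SCr-based AKIN staging as described above.
# Inputs are assumed pre-computed: baseline SCr, peak SCr within 48 h of urine
# collection, and the largest SCr rise within any 48-hour window (all mg/dl).

def akin_stage(baseline: float, peak_48h: float, rise_48h: float) -> int:
    fold = peak_48h / baseline
    if fold > 3 or (peak_48h >= 4.0 and rise_48h >= 0.5):
        return 3   # >3-fold rise, or SCr >= 4.0 mg/dl with an acute rise >= 0.5 mg/dl
    if fold > 2:
        return 2   # >2- to 3-fold rise over baseline
    if fold >= 1.5 or rise_48h >= 0.3:
        return 1   # 1.5- to 2-fold rise, or >= 0.3 mg/dl rise within 48 h
    return 0       # does not meet AKIN criteria

# Hypothetical example: baseline 1.0 mg/dl, peak 2.4 mg/dl within 48 h
print(akin_stage(baseline=1.0, peak_48h=2.4, rise_48h=1.4))  # 2 (2.4-fold increase)
```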
Statistical Analysis Normality of continuous variables was tested using Shapiro-Wilk test. Normally distributed continuous variables were compared using a 2-sample t test and summarized as mean AE SD. Non-normally distributed continuous variables were summarized as medians and ranges and compared using nonparametric Mann-Whitney U test. To improve interpretability of effect estimates from multivariate regression models, all nonnormally distributed predictors (including uNGAL and uKIM-1 levels) were log-transformed and standardnormalized before statistical testing. The effect sizes for biomarkers were expressed per SD unit of normalized predictors. Categorical variables were compared using c 2 or Fisher exact test. For testing binary outcomes, we used logistic regression. Ordinal outcomes, such as AKIN stage, were tested using ordinal logistic regression. Ordinal predictors, such as urine dipstick category or proteinuria grade, were tested under the assumption of linear effects using a slope test within the framework of a generalized linear model tailored to the outcome of interest (e.g., logistic or ordinal logistic for binary or ordinal outcomes, respectively). We used Cox proportional hazards model for the time-to-event analyses for death outcome. We used competing risks regression model for the analysis of the hospital length of stay, with death as a competing risk. 25,26 The proportional hazards assumption was verified by testing scaled Schoenfeld residuals for each predictor against observation time. Associations of urinary biomarkers with clinical outcomes were adjusted for the following covariates: age, sex, race, ethnicity (minimally adjusted model), baseline SCr, and preexisting obesity, diabetes, hypertension, transplant (any organ), cancers, cardiovascular disease (coronary artery disease, heart failure, cerebral infarction), pulmonary disease (asthma, chronic obstructive pulmonary disease, interstitial pulmonary disease, primary pulmonary hypertension, idiopathic pulmonary fibrosis) (fully adjusted model 1), and proteinuria (fully adjusted model 2). In the analysis of primary outcomes, we considered 2-sided P < 0.05 as statistically significant. In the analysis of secondary outcomes, we considered P < 0.01 as statistically significant (Bonferroni-corrected for 5 independent outcomes tested). We additionally stratified patients by both uNGAL ($150 ng/ml) and SCr-based AKI into the following 4 groups: NGALÀAKIÀ, NGALþAKIÀ, Subjects We analyzed urine samples from 444 ED patients with COVID-19 admitted for inpatient care. The samples were collected prospectively at a median time of day 0 (hospital admission, interquartile range: 0-2 days), within 1 day of a positive result from SARS-CoV-2 test in 70% of the patients (Figure 1). The cohort was diverse in age, sex, race, ethnicity (43.9% female, 20.5% African American, 54.1% Latinx), and preexisting comorbidities (Table 1). There were 4 patients with end-stage renal disease who were excluded from the study, and 69 patients had incomplete SCr records and were called "unknown" and analyzed separately ( Figure 1). The association of uNGAL with the primary outcomes was independent of age, sex, race, ethnicity, baseline creatinine, and other comorbidities and was also independent of proteinuria measured in the same urine sample (Supplementary Table S1). We performed a subgroup analysis of 198 patients with COVID-19 who did not have evidence of AKI on presentation to the ED (AKIN 0 stage). 
uNGAL levels were higher in patients with AKIN 0 who subsequently developed AKIN stages 1 to 3 within 7 days of admission (n = 51; mean, 158 ± 237 ng/ml) compared with those who did not develop AKI (n = 147; mean, 74 ± 65 ng/ml; P < 0.05). In this subgroup, the association of admission uNGAL level with subsequent recognition of AKI remained significant in our multivariable model (adjusted OR [95% CI]: 1.68 per SD of uNGAL [1.06-2.80], P < 0.05). uNGAL level measured using a novel rapid point-of-care semiquantitative dipstick20 deployed to the bedside correlated with ELISA measurements (Spearman's correlation r = 0.84, P < 0.0001) and reproduced all of the associations of uNGAL with AKI, sAKI, and AKIN stages (Figure 3a and b and Supplementary Figure S2).

Association of uNGAL With Critical Illness

In addition to AKI metrics, admission uNGAL level was associated with the subsequent initiation of acute dialysis (adjusted OR [95% CI]: 3.59 [1.83-7.45]). In a time-to-event analysis, a dose-dependent association of uNGAL with 90-day mortality was observed in both ELISA and dipstick measurements. Within 30 days of admission, the highest uNGAL ELISA tertile (>128 ng/ml) and the highest uNGAL dipstick category (>150 ng/ml) both revealed lower survival probability, from 86.3% to 71.2% and from 81.8% to 66.7%, respectively, comparing the lowest and highest tertiles (Figure 4b and c). In Supplementary Figure S3 and Supplementary Table S2, we summarized secondary outcome comparisons for patient subgroups defined by a combination of uNGAL levels (marker of tubular injury) and SCr levels (marker of functional AKI). These analyses reveal that high uNGAL levels (≥150 ng/ml) provide additional prognostic information beyond AKIN AKI criteria, consistent with our primary analysis. In contrast to uNGAL, uKIM-1 levels were not associated with primary outcomes of AKI, sAKI, or AKIN stage in the COVID-19 cohort (Figure 2; log-transformed data in Supplementary Figure S1), nor with secondary outcomes including 90-day mortality (Figure 4a and Supplementary Table S1).

Comparison of COVID-19 (−) and COVID-19 (+) ED Cohorts

To determine whether our findings were specific to COVID-19, we evaluated a second cohort of comparable size (426 patients) admitted through the same ED (June 2017-January 2019)20 before the COVID-19 pandemic in New York City and analyzed using identical methods (Table 1). The COVID-19 cohort was older and enriched in Latinx patients, but the burden of chronic kidney disease was similar in both cohorts. Notably, patients with COVID-19 were 2.6 times more likely to present with AKI (35.2% vs. 13.6%, P < 0.0001), 3.9 times more likely to have sAKI (17.5% vs. 4.5%, P < 0.0001), and 1.8 times more likely to have more severe disease (AKIN 2-3, 12.5% vs. 6.8%, P < 0.01) compared with our historical cohort (Figure S4). Proteinuria was also increased in COVID-19 (+) compared with COVID-19 (−) AKIN 0 cases (P < 0.0001; Supplementary Figure S5). In addition, evaluation of the cellular pellets of the AKIN 0 urine samples revealed that the shedding of proximal tubule cells (marked by the proximal tubule gene LRP2) was more prominent in the COVID-19 (+) samples. (Figure 4 caption: higher urinary NGAL levels are associated with critical illness and death in patients with COVID-19. Panel a: urinary NGAL was associated with AKI and sustained AKI [>72 hours] in minimally adjusted [age, sex, race, ethnicity; n = 371] and fully adjusted [plus baseline SCr and preexisting comorbidities] models, and with death, dialysis, shock, and respiratory failure [N = 440]; uKIM-1 was associated only with respiratory failure; ORs and HRs are expressed per 1 SD of biomarker distribution. Panels b and c: Kaplan-Meier survival differed by ELISA tertile and by dipstick level of urinary NGAL [unadjusted P values; N = 440].)
NGAL RNA Expression Correlates With Histopathology

To explore the responses of different nephron segments, we evaluated the transcriptomic patterning of the biomarkers in kidney biopsies from 13 patients with COVID-19 and 4 non-COVID-19 controls with ATI.23 In both COVID-19 and non-COVID-19 biopsies, KIM-1 was found to be expressed in the proximal tubule, whereas NGAL was prominently expressed in the limbs of Henle and collecting ducts. The distributions were confirmed by simultaneously probing for segment-specific markers, LRP2 (proximal tubule) and AQP2 (collecting duct). Surprisingly, in addition to its canonical distribution, NGAL transcripts were expressed in additional nephron segments in COVID-19 biopsy samples. At maximum extent of ATI (>50% of tubules), KIM-1 was expressed in 27% (3322 of 12,123) of the tubules, whereas NGAL was expressed in 65.7% (6580 of 10,111), including significant coexpression with KIM-1 in 85% of COVID-19 kidneys and with the proximal marker LRP2 in 77% of the kidneys (Figure 2d and e). Furthermore, 62.5% of the tubules in high-ATI biopsy samples and 20% of the tubules in low-ATI biopsy samples (P < 0.05) were found to have coexpression of KIM-1 and NGAL, implying that the severity of ATI drives NGAL RNA patterning.

Confirmation That NGAL Expression Correlates With Histopathology in Models of Injury

To confirm that the patterning of NGAL RNA in COVID (+) biopsy samples reflected ATI, we evaluated a classical model of ischemia-reperfusion injury in the mouse. Similar to human kidneys, increasing degrees of ATI also resulted in broadening of NGAL RNA expression in mouse kidneys (Figure 5a-f). In the setting of prolonged arterial ischemia, the entirety of the corticomedullary junction, medulla, papilla, and KIM-1+ proximal tubules expressed NGAL RNA. These findings are consistent with a recent report describing NGAL expression in proximal tubule cells by single-cell sequencing after ischemic injury29 and reveal that the expansion of NGAL RNA in COVID (+) biopsies is a reproducible consequence of severe kidney injury.

DISCUSSION

We have identified a quantitative association between uNGAL and both functional kidney failure (AKI) and ATI in patients admitted to the hospital with COVID-19 infection. The level of uNGAL was associated in a stepwise fashion with clinical metrics and outcomes of AKI, such as dialysis, whereas renal biopsy findings correlated NGAL RNA with more severe forms of ATI. Consequently, admission uNGAL measurements provided prognostic data relevant to kidney injury and dysfunction in the COVID (+) patients. As diagnostic tools, uNGAL and SCr have many different characteristics.12
NGAL is expressed within 2 to 3 hours of injury,13,30,31 whereas the elevation of SCr level is delayed by 24 to 48 hours,32,33 depending on mechanisms that enhance its excretion (the renal reserve) or limit its production.34 In addition, NGAL is detected in the urine after small wedge infarctions13 and unilateral kidney disease,35 whereas SCr is insensitive to focal or subtotal injury. In fact, we found that elevated uNGAL level was associated with AKI and clinical outcomes even when patients with COVID-19 presented at admission without AKI (AKIN 0), confirming that admission SCr level underestimated the evolution of COVID-19-associated kidney disease and revealing the differential sensitivity of the 2 analytes. The stepwise association of an injury marker, NGAL, and a functional marker of glomerular filtration, SCr, can be explained by the progressive induction of NGAL RNA in the kidney with greater degrees of histopathologic injury (Figure 2). Increasing severity of injury broadened the classical patterning of NGAL RNA expression in the limb of Henle and collecting ducts to encompass multiple segments of the nephron, from the proximal tubule to the papilla. The dose responsiveness of NGAL RNA and its patterning were reproducible beyond human biopsies to include classical models of tissue damage created by ischemia-reperfusion injury in the mouse (Figure 5). Indeed, elevated uNGAL level is associated with inflammatory, ischemic, toxic, and obstructive uropathies, which injure the tubule, rather than reversible hemodynamic challenges (e.g., volume depletion, diuretics, and heart failure) that induce little, if any, response by different injury biomarkers.12,32,36 Hence, we reveal for the first time that the level of NGAL mirrors the severity of ATI in human kidney biopsies, including severe forms of ATI in COVID-19 kidneys. In light of this, the association of uNGAL with functional stages of AKI is likely due to its quantitative association with ATI, which in severe cases limits the clearance of SCr. The progressive increase in the area under the receiver operating characteristic curve for uNGAL, from 0.70 to 0.93 with increasing AKIN stage, highlights that severe dysfunction (AKI) is found in cases of severe injury (ATI). The association of uNGAL with AKI and ATI provides the possibility of a sensitive diagnostic strategy that bypasses the delays and insensitivity of SCr. We reveal that accurate testing for COVID-19-associated kidney injury is possible in the ED using rapid point-of-care dipsticks.20,37 The NGAL dipstick correlated closely with ELISA-based measurements (r = 0.84, P < 0.0001), but the dipstick limits the risk of handling infectious body fluids. The dipstick may be particularly helpful in the setting of the high patient volumes witnessed in EDs during COVID-19 surges, providing prognostic information in real time. We also suggest that our diagnostic strategy may be further enhanced by measuring proteinuria, which is indicative of kidney injury even without elevation of SCr (AKIN 0) and is associated with a number of adverse clinical outcomes independently of NGAL. When measured together, NGAL and proteinuria may offer a comprehensive evaluation of kidney injury in COVID-19, especially in patients who have not reached criteria for AKIN staging. Our study has a number of limitations. Similar to previous studies,38,39 we were limited by the use of SCr as the gold standard for AKI.
Notably, many patients with COVID-19 did not have previous health records, making it difficult to establish their baseline SCr values, to detect AKI, and to calculate its stage. As a consequence, we were unable to define AKI in 69 patients (15.5%) with missing SCr measurements at baseline (no records) or follow-up (early death or discharge). We were also not able to collect urine samples on subsequent days because of the severity of illness, precluding comparative studies of the kinetics of urinary biomarkers, and it remains possible that subsequent measurements of uNGAL, uKIM-1, and proteinuria could have provided additional prognostic information, suggested to be important by Nugent et al.40 in understanding the impact of COVID-19.

In conclusion, a large cohort of patients with COVID-19 had a dose-responsive relationship of uNGAL with ATI and AKI and severe clinical outcomes, independently of other established risk factors. These relationships were found even in patients who did not meet the SCr-based AKIN criteria in the ED. Conversely, the absence of elevated uNGAL level essentially ruled out the need for dialysis and identified patients at lower risk for death. The utility of a dipstick further underscores the value of uNGAL for rapid triage decisions, given the recent resource challenges that the COVID-19 pandemic has created for Emergency Departments, Nephrology, and Critical Care services.41

AUTHOR CONTRIBUTIONS

KK, KX, and JB conceived and designed the study. The Columbia University COVID-19 Biobank collected residual blood and urine samples after clinical testing from every COVID-19 patient diagnosed at the Columbia University Irving Medical Center. KX, JB, and UN conducted laboratory measurements, including urinary biomarker and protein measurements and in situ hybridizations. NS implemented electronic AKI definitions and analyzed the Electronic Health Records to define clinical outcomes and covariates. AL, AC, AY, KX, and JB quantified the distribution of RNA expression. RVS performed mouse experiments. VDA and SK analyzed human kidney biopsy tissue to diagnose AKI and quantify tubular injury. JS and SM contributed to the comparative analysis of the COVID-19-negative data set. KX and KK performed statistical analyses. KX, KK, SM, JB, and AMM wrote the manuscript draft. All authors reviewed and edited the manuscript. Drs. Kiryluk, Xu, and Barasch had full access to the data and take responsibility for its integrity and the accuracy of the results. All data analyses and results are described in this article and the Supplementary files. The primary deidentified data are available from the corresponding authors on reasonable request, if approved by the Institutional Review Board of Columbia University. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or Columbia University.
Medical professionalism among emergency physicians in South Korea: a survey of perceptions and experiences of unprofessional behavior

Objective: The purpose of this study was to analyze the current situation concerning professionalism among emergency physicians in South Korea by conducting a survey regarding their perceptions and experiences of unprofessional behavior.

Methods: In October 2018, the authors evaluated the responses to a questionnaire administered to 548 emergency physicians at 28 university hospitals. The participants described their perceptions and experiences concerning 45 unprofessional behaviors classified into the following five categories: patient care, communication with colleagues, professionalism at work, research, and violent behavior and abusive language. Furthermore, the responses were analyzed by position (resident vs. faculty). Descriptive statistics were generated on the general characteristics of the study participants. To compare differences in responses by position and sex, the chi-square and Fisher exact tests were performed.

Results: Of the 548 individuals invited to participate in this study, 253 responded (response rate, 46.2%). For 34 out of 45 questionnaire items, more than half of the participants reported having experienced unprofessional behavior despite their negative perceptions. Eleven perception questions and 38 experience questions on unprofessional behavior showed differences by position.

Conclusion: Most emergency physicians were well aware of what constituted unprofessional behavior; nevertheless, many had engaged in or observed such behavior.

INTRODUCTION

The Accreditation Council for Graduate Medical Education in the US recommends the inclusion of six essential competencies, including medical professionalism, in a medical curriculum.1 In 2008, the Korea Institute of Medical Education and Evaluation emphasized the reinforcement of medical professionalism and social competence through a study on the common curriculum for residents in South Korea.2 Subsequently, in 2014, the Korean Medical Association and the Ministry of Health and Welfare of South Korea issued a publication in which they described various future-oriented behaviors that physicians should exhibit. Physician roles were categorized into five areas: patient care, communication and cooperation, social accountability, professionalism, and education and research. The need to maintain a high level of work ethics based on professional job norms and self-regulation is emphasized in the area concerning professionalism, in which four competencies are further detailed: treatment based on ethics and autonomy, patient-physician relationships, self-regulation led by specialists, and professionalism and self-management.3 A revised annual training curriculum for residents, announced by the Ministry of Health and Welfare in February 2019, identified additional competencies required for all residents; these were divided into the following eight categories: respect, ethics, patient safety, society, professionalism, excellence, communication, and teamwork.4 The professionalism category among them includes professional integrity and self-management. Although the most important competencies may differ by field, the heads of various training hospitals officially announced that all residents should strive to master all competencies during training.
However, despite these efforts by the Ministry of Health and Welfare of South Korea, training in medical professionalism has not been established in the curriculum of the medical societies that train all medical residents in South Korea.4 The 2019 Emergency Medicine Model Review Task Force also identified professionalism as a requirement for emergency medicine (EM) physicians, who often face disasters and other chaotic situations in which they must make decisions almost instantly.5 These situations can involve moral dilemmas such as violence or ethical conflicts; therefore, professionalism is an essential competency for emergency physicians in particular. The 3rd- and 4th-year curricula for the residents of EM departments include emergency medical treatment, ethics, the patient-physician relationship, and other competencies.4 However, these competencies are neither taught nor evaluated in the tests required for professional accreditation. Fortunately, a tool for evaluating medical professionalism is currently being developed, and studies examining professionalism among residents have been conducted.6-9 Alternative and complementary education aimed at improving the physician's understanding and management of unprofessional behaviors may be more useful than direct education targeting medical professionalism. Most physicians consider themselves professionals and are likely to perceive training in professionalism as unnecessary, or even insulting. Moreover, although it is difficult to quantify professional behavior, it is both possible and desirable to identify and quantify unprofessional behavior.10 Therefore, the purpose of this study was to examine the current perceptions and experiences of professionalism among EM residents and faculty members through a survey regarding unprofessional behaviors and to analyze the necessity of promoting medical professionalism based on the results.

Study design, setting, and protocol

A total of 548 faculty members and residents from university hospital emergency departments in South Korea were invited to take part in the survey, which was conducted from October to November 2018. Twenty-eight universities were selected, including at least one from each metropolitan council. Survey Monkey (https://ko.surveymonkey.com/) was used to administer the survey. The first page of the survey stated the purpose of the study, the voluntary nature of participation therein, the guaranteed anonymity of the participants, the confidentiality of the data, and the research purposes for which the data would be used. Only respondents who provided informed consent were included in the study. This study was reviewed and approved by the institutional review board of Soonchunhyang University Bucheon Hospital (No. 2018-08-004-004).

Survey questionnaire response categories and key outcomes

The questionnaire used in this paper was originally designed by Nagler et al.11 It was adapted for use in South Korea by five experts, all professors with more than 10 years of educational experience in a college of medicine as EM specialists.
The survey contains 45 questions in five categories: patient care (e.g., 'explaining to patients the test results that the physician could not interpret,' 'not providing care, or providing discriminatory care, based on the patient's social background'); communication with colleagues (e.g., 'not notifying senior staff regarding the medical mistakes made by colleagues'); professionalism at work (e.g., 'attending a dinner or social party provided by a pharmaceutical/medical company,' 'surreptitiously leaving the emergency room while on duty because of personal issues'); research (e.g., 'asking the author of a paper to name a person who did not contribute to it as a co-author,' 'manipulating research data to get the desired results'); and violent behavior and abusive language (e.g., 'inflicting verbal or physical violence on students or juniors'). Each questionnaire item pertains to a behavior inconsistent with medical professionalism. Respondents were questioned regarding their perceptions and experiences of each type of unprofessional behavior. 'Experiences' comprise both observation of and engagement in the behavior. Questions about perceptions of unprofessional behaviors were accompanied by four answer options: 'must not be done,' 'should not be done,' 'can be done depending on the circumstances,' and 'usually can be done.' The responses 'must not be done' and 'should not be done' were considered indicative of a perception of unprofessional behavior. There were three answer options for questions about experiences of unprofessional behavior: 'neither observed nor engaged in,' 'observed,' and 'engaged in.' The responses were analyzed by position (resident vs. faculty).

Data analysis

Descriptive statistics were generated on the general characteristics of the study participants. To compare differences in responses by position and sex, the chi-square and Fisher exact tests were performed (an illustrative sketch of this comparison follows the overview of responses below). The data were analyzed using IBM SPSS Statistics ver. 20 (IBM Corp., Armonk, NY, USA), and the significance level was set at P < 0.05.

RESULTS

A majority of the responses to the questions about perceptions of unprofessional behavior were either 'must not be done' or 'should not be done.' Regarding questions about experiences, the responses were mostly either 'observed' or 'engaged in.' For 34 out of 45 questionnaire items, more than half of the participants reported having experienced unprofessional behavior despite their negative perceptions. The items with the highest level of agreement among respondents were 'manipulating research data to get the desired results' and 'prescribing drugs to oneself, or prescribing antipsychotic drugs regardless of any other medical treatment'; 100% of respondents agreed that these actions constituted unprofessional behavior. The items with the lowest levels of respondent agreement were 'dating a resident by a professor' (perceived as unprofessional by 78.0%) and 'experiencing pleasure when a patient decided not to receive emergency care' (perceived as unprofessional by 77.6%). The highest rate of engagement in an unprofessional behavior was found for 'experiencing pleasure when a patient decided not to receive emergency care' (86.9%). Thus, respondents often engaged in this behavior despite the perception that it is unprofessional.
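As referenced under Data analysis above, the by-position (or by-sex) comparison for any single questionnaire item reduces to testing a 2 × 2 contingency table. The following is a minimal sketch in Python using SciPy rather than SPSS; the counts are hypothetical and are not the study's data.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical counts for one item: rows = position (residents, faculty),
# columns = perceived the behavior as unprofessional (yes, no).
table = [[95, 25],   # residents
         [80, 53]]   # faculty

chi2, p_chi2, dof, expected = chi2_contingency(table)

# Fisher's exact test is preferred when expected cell counts are small.
if expected.min() < 5:
    _, p_value = fisher_exact(table)
    test_used = "Fisher exact"
else:
    p_value, test_used = p_chi2, "chi-square"

print(f"{test_used} test: P = {p_value:.4f}")
```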
The behaviors that respondents were least likely to cite having experience with were 'touching a body part that is not necessary for diagnosis when examining a patient' (0.0%) and 'prescribing drugs to oneself or prescribing antipsychotic drugs regardless of any other medical treatment' (0.5%). Respondents did not engage in these behaviors because they were perceived as unprofessional. Regarding unprofessional behaviors related to patient care, most respondents perceived 'touching a body part that is not necessary for diagnosis when examining a patient' and 'photographing or taking a video clip of a lesion to post it on a social networking site without the patient's consent' as unprofessional. Nevertheless, 32.6% of respondents stated that they had engaged in the latter behavior (Table 1). Regarding unprofessional behaviors related to communication with colleagues, many respondents perceived 'intentionally hiding information or transmitting false information so that a patient is sent to another hospital' (98.4%) and 'not reporting suspected child abuse or domestic violence to senior staff or relevant organizations' (97.8%) as unprofessional. Nevertheless, 54.4% of respondents stated that they had experience with the former behavior (Table 2). Regarding professionalism at work, most respondents perceived 'prescribing drugs to oneself or prescribing antipsychotic drugs regardless of any other medical treatment' and 'surreptitiously leaving the emergency room while on duty because of personal issues' as unprofessional behaviors. However, 78.0% of the respondents responded to the item 'dating a resident by a professor' with 'can be done depending on the circumstances' (Table 3). Regarding unprofessional behaviors related to research, most respondents agreed that 'manipulating research data to get the desired results' was unprofessional. In addition, a large percentage of respondents agreed that 'asking the author of a paper to name a person who did not contribute to it as a co-author' (87.6%) and 'not deleting patient information used in a presentation from a public computer' (81.7%) were unprofessional behaviors; nevertheless, engagement in these behaviors was reported by 66.5% and 67.3% of respondents, respectively (Table 4). Regarding violent behavior and abusive language, a large percentage of respondents (99.5%) agreed that 'ignoring or insulting the patients or their family in front of them' was unprofessional behavior, despite which 77.4% of respondents reported that they had done this themselves. 'Blaming and gossiping about colleagues or other health professionals in front of students or junior physicians' was also perceived as unprofessional behavior, but 81.9% of respondents reported having engaged in it (Table 5). There were 11 perception questions and 38 experience questions on unprofessional behavior that showed differences by position (Tables 1-5 and Supplementary Tables 2, 3). Although 80.9% of the faculty members considered 'writing prescriptions not necessary for the patient but which benefit the physician or hospital' to be unprofessional, 30.0% indicated that they had engaged in this behavior.
Similarly, 81.0% of the faculty members responded that 'asking the author of a paper to name a person who did not contribute to it' was unprofessional behavior; nevertheless, 34.7% had engaged in it. The faculty members had a less negative view of 'admitting patients at their request despite it being medically unnecessary' and 'accepting a gift from a patient' than did the residents and were more likely to have engaged in these behaviors. The residents identified 'treating and prescribing while using another colleague's identification to hide overtime' as unprofessional behavior, although 25.5% had engaged in it. There was a difference for one item pertaining to perceptions of unprofessional behavior and six pertaining to experience of unprofessional behavior between the male and female respondents (Supplementary Tables 4, 5). Unprofessional behaviors that the men had more experience with included 'inflicting verbal or physical violence on students or juniors' (18.3% male vs. 5.7% female) and 'body odor arising from poor self-hygiene' (39.2% male vs. 20.6% female). Unprofessional behaviors that the women had more experience with included 'not deleting patient information used in a presentation from a public computer' (51.4% female vs. 39.5% male) and 'neglecting or gossiping about patients or their family with colleagues' (71.4% female vs. 49.1% male).

DISCUSSION

The purpose of this study was to examine perceptions and experiences of unprofessional behaviors among emergency physicians and to determine whether they differed by position. Most emergency physicians were well aware of what constituted unprofessional behavior; nevertheless, many had engaged in or observed such behavior. Faculty members had more experience of unprofessional behaviors than residents. EM residents and faculty members working at 28 university hospitals in South Korea were enrolled. Although not all emergency physicians nationwide were included, the contribution of this study is valuable because physicians working in hospitals offering residency training were enrolled, and perceptions of unprofessional behavior among senior faculty members were described. Furthermore, unlike previous studies, this survey questionnaire was administered to all EM physicians, from those in their first year of residency up to faculty members.6,11,12 This made it possible to identify differences in perceptions by position. One of the strengths of our survey was its anonymous nature, with data being collected in an automated manner rather than face-to-face, so that respondents could express their opinions freely. Despite the emphasis on professionalism in medical education, the hidden curriculum exposes residents to unprofessional behavior.13 Instead of allowing residents to follow role models heedlessly, they should be encouraged to consider the implications and consequences of unprofessional behavior. Unprofessional behavior is strongly related to noncompliance with practice guidelines, mortality and morbidity of patients, demoralization and increasing turnover of employees, medical mistakes and adverse consequences, and lawsuits for medical malpractice. Physician leaders must act when unprofessional behavior is identified; unfortunately, there is often a failure to do so because of a lack of sufficient training. Moreover, such behavior may also be emulated by others.10 The respondents largely considered the 45 behaviors covered in our survey questionnaire as unprofessional.
Nevertheless, many respondents stated that they had witnessed or engaged in such behaviors. Thus, despite being aware that certain behaviors are unprofessional, some respondents appeared unable to avoid engaging in them. This behavior can be attributed to inadequate systems within training hospitals, a lack of education on professionalism, a strongly hierarchical structure, and poor character.14 There were a few differences in perceptions of unprofessional behavior depending on position. Faculty members were more likely than residents to report engaging in many of the unprofessional behaviors described in the study. This likely reflects the current lack of education promoting medical professionalism. An understanding of professionalism is acquired at medical school and by working in the field.9 Medical staff learn about professionalism from professors when they are in school. In the field, their understanding of this concept is influenced by the behavior of colleagues, including senior staff.15 If the teaching staff are involved in unprofessional behavior, the education aimed at promoting medical professionalism will be undermined. Developing an attitude of introspection regarding unprofessional behavior may be the first step toward obtaining a better understanding of medical professionalism. Some previous studies demonstrated that men engage in more unprofessional behaviors than women and are more likely to justify these behaviors.11,16,17 In this study, a statistically significant difference in perception between men and women was noted for only one unprofessional behavior. This contrast between our results and those of previous studies might have been caused by cultural differences in the backgrounds of the participants. A qualitative study with additional interviews is warranted to investigate these distinctions. While role modeling is the most common method used to promote medical professionalism, mentoring can also aid in developing it.18-20 However, before taking these steps, it is necessary to discuss the definition of medical professionalism and its integration into the formal/hidden curriculum.21 Therefore, professors who teach the concept of medical professionalism should understand it fully before developing strategies for improving knowledge thereof among students.22 Presenting examples of unprofessional behavior complements role modeling as a method for improving the understanding of medical professionalism.6 Medical institutions and schools may be able to improve professionalism by establishing institutional norms around unprofessional behavior, teaching medical staff to recognize these behaviors, and informing staff about the consequences of violating them.10,23 These institutions should also explore ways to address unprofessional behaviors when they arise. Inclusion of medical professionalism in the standard EM education program in South Korea would help improve the education of EM residents. This aspect may also be included as part of resident evaluation in the future. The authors suggest developing vignettes of cases of unprofessional behavior related to emergency care based on the results of this study. Furthermore, we recommend that the Korean Society of Emergency Medicine manage a medical professionalism curriculum for residents and faculty members using methods such as ethics grand rounds and vignette-based consensus workshops. This study has some limitations.
First, the participants included EM residents and faculty members drawn from only 28 university hospitals, and the response rate to the survey was very low. Therefore, the findings may not be representative of all emergency physicians in South Korea. In 2018, there were 1,843 EM faculty members and 630 EM residents in the country. Our sample of 253 respondents corresponds to only 10.4% of all the emergency physicians. However, to address this limitation, we included at least one university from each metropolitan council in the study. Second, our survey did not include questions regarding respondents' experience of education in medical professionalism, which may be associated with perceptions and experiences of unprofessional behavior. Third, this study was conducted based on the 2018 survey results; therefore, there may be some differences from the current situation. The training curriculum for medical residents in South Korea is revised almost every year, which warrants periodic re-evaluation in future studies. Fourth, we did not perform additional analyses according to the faculty members' tenure. In future research, it would be worthwhile to examine how this difference influences medical professionalism. Fifth, because several questions were worded broadly, respondents may have differed in how they understood and answered them. Sixth, faculty staff may be more exposed to conflicting situations because their clinical experience is longer than that of residents, which can result in an apparently higher rate of engagement in unprofessional behavior than among residents. However, this may also reflect limited opportunities for self-reflection, considering that medical curricula and guidelines in the past were limited. Further research is needed to improve training courses aimed at promoting medical professionalism. In conclusion, most emergency physicians were well aware of what constituted unprofessional behavior; nevertheless, many had engaged in or observed such behavior. Perceptions and experiences of unprofessional behavior among emergency physicians differed by position. Faculty members engaged in unprofessional behavior more frequently than did residents.

CONFLICT OF INTEREST

No potential conflict of interest relevant to this article was reported.

ACKNOWLEDGMENTS

The authors would like to acknowledge support from the Soonchunhyang University Research Fund and thank the emergency physicians who participated in this study.

(Supplementary table note: values are presented as median (interquartile range) or number (%). Supplementary Table 2 lists the physician behaviors showing a significant difference in perception between residents and faculty, for example 'writing prescriptions not necessary for the patient but which benefit the physician or hospital (e.g., prescribing specific fluids or MRI to receive incentives).')
A parasitological evaluation of edible insects and their role in the transmission of parasitic diseases to humans and animals

On 1 January 2018, Regulation (EU) 2015/2238 of the European Parliament and of the Council of 25 November 2015 came into force, introducing the concept of "novel foods", including insects and their parts. Among the most commonly used insect species are mealworms (Tenebrio molitor), house crickets (Acheta domesticus), cockroaches (Blattodea) and migratory locusts (Locusta migratoria). In this context, a poorly explored issue is the role of edible insects in transmitting parasitic diseases that can cause significant losses in their breeding and may pose a threat to humans and animals. The aim of this study was to identify and evaluate the developmental forms of parasites colonizing edible insects in household farms and pet stores in Central Europe and to determine the potential risk of parasitic infections for humans and animals. The experimental material comprised samples of live insects (imagines) from 300 household farms and pet stores, including 75 mealworm farms, 75 house cricket farms, 75 Madagascar hissing cockroach farms and 75 migratory locust farms. Parasites were detected in 244 (81.33%) of the 300 (100%) examined insect farms. In 206 (68.67%) of the cases, the identified parasites were pathogenic for insects only; in 106 (35.33%) cases, parasites were potentially pathogenic for animals; and in 91 (30.33%) cases, parasites were potentially pathogenic for humans. Edible insects are an underestimated reservoir of human and animal parasites. Our research indicates the important role of these insects in the epidemiology of parasites pathogenic to vertebrates. The parasitological examinations conducted here suggest that edible insects may be the most important parasite vector for domestic insectivorous animals. Our findings indicate that future research should focus on the need for constant monitoring of insect farms for pathogens, thereby increasing food and feed safety.

Introduction

The growing demand for easily digestible and nutritious foods has contributed to the emergence of new food sources in agricultural processing. Edible insects are one such category.

Materials

The experimental material comprised samples of live insects (imagines) from 300 household farms and pet stores, including 75 mealworm farms, 75 house cricket farms, 75 Madagascar hissing cockroach farms and 75 migratory locust farms from Czechia, Germany, Lithuania, Poland, Slovakia and Ukraine. Owners/breeders of household farms and cultures from pet stores gave permission for the study to be conducted on their insect farms. The studies were carried out in the years 2015-2018. Up to 3 farms were tested from a single location (e.g., city). Farm stock was purchased from suppliers in Europe, Asia and Africa. Forty insects were obtained from every mealworm and cricket farm, and they were pooled into 4 samples of 10 insects each. Ten insects were sampled from every cockroach and locust farm, and they were analyzed individually.

Methodology

Insects were immobilized by inducing chill coma at a temperature of -30°C for 20 minutes. Hibernation was considered effective when legs, mandibles and antennae did not respond to tactile stimuli. Hibernating insects were decapitated and dissected to harvest digestive tracts.
Digestive tracts were ground in a sieve and examined by Fulleborn's flotation method with Darling's solution (50% saturated NaCl solution and 50% glycerol). The samples were centrifuged at 3500 x for 5 minutes. Three specimens were obtained from every sample, and they were examined under a light microscope (at 200x, 400x and 1000x magnification). The remaining body parts were examined for the presence of parasitic larvae under a Leica M165C stereoscopic microscope (at 7.2x-120x magnification). The remaining body parts were also analyzed according to the method proposed by Kirkor, with some modifications, by grinding body parts in a mortar with a corresponding amount of water and 0.5 ml of ether. The resulting suspensions were filtered into test tubes to separate large particles and were centrifuged at 3500 x for 5 minutes. After loosening the debris plug, the top three layers of suspension were discarded. Three specimens were obtained, and they were analyzed according to the procedure described above. Parasites were identified to genus/species level based on morphological and morphometric parameters with the use of an Olympus image acquisition system and the Leica Application Suite program, based on reference sources in PubMed [18-36]. Parasites were identified to species level by Ziehl-Neelsen staining [37]. The owners of farms where human parasites were detected were advised to eliminate their stock. Farm owners were surveyed with the use of a questionnaire to elicit information about the origin of insects (to determine whether the stock was supplemented with insects from other farms, whether the farm was a closed habitat, and whether stock was obtained only from Europe or also from Asia/Africa), insect nutrition (whether insects were fed specialized feeds, fresh products, kitchen discards or locally collected sources of feed), and contact with other animals or animal feces.

Statistical analysis

The prevalence of parasitic species was determined for every insect species. The data were tested for normal distribution with the Kolmogorov-Smirnov test. The assumptions of linearity and normality were tested before statistical analysis. Linearity was analyzed based on the two-dimensional distribution of the evaluated variables with the use of histograms and normal probability plots of the residuals. The significance of the associations between insect species and questionnaire data was analyzed in a logistic regression model, where the dependent variable was dichotomous (0 or 1, presence/absence of parasites) and the independent variables were: origin of insects (insects purchased in Europe only/insects imported from Asia and Africa), insect stock rotation system (insects from the evaluated farm only, i.e., closed rotation/the farm was supplemented with insects from other farms, i.e., open rotation), nutrition (insects fed only fresh products or specialized feeds/insects fed kitchen discards), and direct/indirect contact with animals (yes/no). The correlations between the identified parasites were analyzed with the use of Yule's Q and Cramer's V, depending on the number of the evaluated variables. The examined associations were weak when the value of V/Q approximated 0, and the correlations were stronger when V/Q approximated +1/-1. The results were processed statistically in the Statistica 13.1 program with a StatSoft medical application.
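The dichotomous-outcome model described above can be sketched as follows. This is an illustration of the structure of the analysis only, written in Python rather than Statistica; the data frame, column names, coding, and counts are hypothetical and are not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats.contingency import association

# Hypothetical farm-level records: each factor coded 1 = present, 0 = absent.
farms = pd.DataFrame({
    "parasites_present":   [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    "imported_outside_eu": [1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0],
    "open_rotation":       [1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0],
    "kitchen_discards":    [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0],
    "animal_contact":      [0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1],
})

# Logistic regression: presence/absence of parasites against the four
# farm-management factors described in the text.
model = smf.logit(
    "parasites_present ~ imported_outside_eu + open_rotation"
    " + kitchen_discards + animal_contact",
    data=farms,
).fit(disp=0)
print(np.exp(model.params))  # odds ratios per factor

# Strength of co-occurrence between two parasite taxa across farms
# (2 x 2 contingency table of detected/not detected).
co_table = [[40, 12],
            [9, 39]]
print("Cramer's V:", round(association(co_table, method="cramer"), 2))
```

For a 2 × 2 table, Yule's Q could equally be computed directly as (ad − bc)/(ad + bc) from the four cell counts.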
Probability of parasite occurrence

The risk of Cestoda, Acanthocephala and Acaridae infections was significantly higher in insects imported from Africa and Asia than in insects purchased from European suppliers. Farms whose stock was supplemented with insects from other farms were more frequently colonized by Nosema spp., Isospora spp., Cryptosporidium spp., Entamoeba spp., Cestoda, Pharyngodon spp., Gordius spp., Physaloptera spp., Thelastoma spp. and H. diesingi than closed farms. The risk of infection with Cryptosporidium spp., Gregarine spp., Balantidium spp., Entamoeba spp., Steinernema spp., Gordiidae, H. diesingi and Acaridae was higher in insects fed kitchen discards and locally collected feed sources than in insects fed only fresh products or specialized feeds. Insects that came into direct or indirect contact with animals were at higher risk of exposure to Isospora spp., Cryptosporidium spp., Cestoda, Pharyngodon spp., Physaloptera spp., Thelastoma spp. and H. diesingi, but at lower risk of exposure to Nyctotherus spp. The statistically significant results of the logistic regression are presented in Table 2.

Coinvasions

Significant correlations were observed between the presence of Nosema spp. and Isospora spp.

Discussion

Due to the lack of a registration obligation, we are currently unable to estimate the exact number of such farms in the surveyed area. The number of farms obtained for the experiment resulted from an indicatively calculated minimum number of samples. To obtain the most reliable results, up to 3 farms were tested from a single location (e.g., city). The insect species were selected for the study because of how widespread they are among breeders. When insects were found to come from the same supplier, the farm was not examined further. The survey questions about the examined farms reflected practices observed among breeders. Breeders wanting to set up or enlarge their farms often order insects from the countries of origin or from places where importing such insects is cheaper than buying them in Europe. In our opinion, this practice poses a considerable threat because such stock may include animals caught in the wild, thus introducing new parasites that are pathogenic for insects as well as for humans and animals. Some amateur breeders are not interested in the quality of feed introduced into the farm. They obtain insect feed from the environment (green fodder, wild fruit trees) or use leftovers from feeding other animals. Edible insects may also have direct or indirect contact with animals. Such practices include returning insects to the farm after they were offered to, but not eaten by, an animal. Insects moving around the animal habitat (e.g., terrariums) can mechanically introduce potential pathogens specific to animals. During the research on individual farms, we observed unethical practices of individual breeders, such as feeding insects with animal feces from a pet shop, with the corpses of smaller animals, or with moldy food and even raw meat. These practices significantly reduce the quality of the final product and undermine the microbiological/parasitological safety of such food. Currently, however, there are no regulations regarding zoohygienic conditions and the welfare of these animals as potential food animals. Although the research was conducted on amateur insect farms, most were not found to be seriously flawed.
Breeding edible insects in places not intended for this purpose (houses) can pose additional dangers for humans. In the course of the study, we recorded individual cases of insects escaping from farms, which resulted in room infestations, e.g., by cockroaches or crickets. Another example is the possibility of aerogenic transmission of parasites such as Cryptosporidium spp. to humans; if farms are poorly secured or hygiene during contact with insects is lacking, such invasions may occur.

Parasites pathogenic for insects

The analyzed farm samples were colonized by developmental forms of parasites that are specific to insects, including Nosema spp., Gregarine spp., Nyctotherus spp., Steinernema spp., Gordiidae, H. diesingi, Thelastomatidae, and Thelastoma spp. These pathogens constitute the most prevalent parasitic flora, and massive infections can compromise insect health and decrease farm profits [38,39]. According to Van der Geest et al. [40] and Johny et al. [41], the above pathogens have been implicated as pseudo-parasites of humans and animals. However, the impact of insect-specific parasites on humans has not yet been fully elucidated. Pong et al. [42] argued that Gregarine spp., a parasite specific to cockroaches, could cause asthma in humans. The results of the survey conducted in our study indicate that insect farming can increase human exposure to pathogens and allergens [43,44]. Nosemosis is a disease caused by microsporidian parasites, and it can compromise the health of crickets and locusts. However, Nosema parasites also control cricket and locust populations in the natural environment [45-47]. Lange and Wysiecki [48] found that Nosema locustae can be transmitted by wild locusts over a distance of up to 75 km. This parasite is also readily transmitted between individual insects, which can contribute to the spread of infections in insect farms. Johnson and Pavlikova [49] reported a linear correlation between the number of Nosema spp. spores in locusts and a decrease in dry matter consumption. The results of our study indicate that Nosema spp. infections can decrease profits in insect farming. Gregarine spp. are parasitic protists which colonize the digestive tracts and body cavities of invertebrates. According to Kudo [50], gregarines are non-pathogenic commensal microorganisms, but recent research indicates that these protists are pathogenic for insects. Nyctotherus spp. is a parasite or an endosymbiont which colonizes the intestinal system of insects. Gijzen et al. [53] found a strong correlation between the size of the N. ovalis population and carboxymethyl-cellulase and filter-paper-digesting activity in cockroach intestines, which was in turn correlated with those insects' ability to produce methane. The results of our study indicate that the ciliate N. ovalis should be considered commensal microflora of the cockroach gastrointestinal tract. Nyctotherus spp. were less likely to be detected in insects that had previous contact with animals. The above could imply that insects whose digestive tracts are colonized by these parasites are more readily consumed by animals. Nyctotherus ovalis is rarely pathogenic for animals. Satbige et al. [54] reported two turtles in which N. ovalis infection caused diarrhea, dehydration and weight loss. Gordiidae, also known as horsehair worms, are parasitic worms with a length of up to 1.5 m that colonize invertebrates.
When consumed by insects, parasitic larvae penetrate the intestinal wall and are enveloped by protective cysts inside the gut. Gordius spp. are generally specific to insects, but these worms have also been detected in humans and animals. Several cases of parasitism and pseudo-parasitism by gordiid worms from various locations, including France, Italy, Bavaria, Dalmatia, East Africa, Southeast Africa, West Africa, Transvaal, Chile, the United States and Canada, have been described in the literature [55]. Horsehair worms were also identified in vomit and feces [56,57]. However, none of the described parasitic invasions were pathogenic for humans. In the present study, these parasites were detected in insects fed kitchen discards or locally collected food sources. Hammerschmidtiella diesingi, Thelastoma spp. and Thelastomatidae are nematodes specific to invertebrates. Nematodes colonizing insect digestive tracts are generally regarded as commensal organisms. Taylor [58] demonstrated that Leidynema spp. exerted a negative effect on hindgut tissues in insects. Similarly to the pathogens identified in our study, Leidynema spp. belong to the family Thelastomatidae. Capinera [59] demonstrated that these nematodes can increase mortality in cockroach farms. In our study, insects colonized by H. diesingi and Thelastoma spp. were characterized by lower fat tissue content. McCallister [60] reported a higher prevalence of H. diesingi and T. bulhoes nematodes in female and adult cockroaches, but did not observe significant variations in differential hemocyte counts or hemolymph specific gravity. Steinernema spp. are entomopathogenic nematodes whose pathogenicity is linked with the presence of symbiotic bacteria in the nematode's intestine. These nematodes are used in agriculture as biological control agents of crop pests [61], which can promote the spread of infection to other insects. In our study, insects infected with Steinernema spp. were probably fed plants contaminated with parasite eggs.

Parasites pathogenic for humans and animals

Cryptosporidium spp. are parasites that colonize the digestive and respiratory tracts of more than 280 vertebrate and invertebrate species. They have been linked with many animal diseases involving chronic diarrhea [62-64]. According to the literature, insects can serve as mechanical vectors of these parasites. Flies may be vectors of Cryptosporidium spp., carrying oocysts in their digestive tract and contaminating food [65,66]. Earth-boring dung beetles [67] and cockroaches [68] can also act as mechanical vectors of these parasites in the environment. However, the prevalence of Cryptosporidium spp. in edible insects has not been documented in the literature. In our study, Cryptosporidium spp. were detected in the digestive tract and other body parts of all evaluated insect species. In our opinion, insects are an underestimated vector of Cryptosporidium spp., and they significantly contribute to the spread of these parasites. Isospora spp. are cosmopolitan protozoa of the subclass Coccidia which cause an intestinal disease known as isosporiasis. These parasites pose a threat to both humans (in particular immunosuppressed individuals) and animals. The host becomes infected by ingesting oocysts, and the infection presents mainly with gastrointestinal symptoms (watery diarrhea). According to the literature, cockroaches, houseflies and dung beetles can act as mechanical vectors of Isospora spp. [69,70].
In our study, insect farms were contaminated with this protozoan, which could be the cause of recurring coccidiosis in insectivores. Isospora spp. were detected on the surface of the body and in the intestinal tracts of insects. In our opinion, the presence of Isospora spp. in edible insects results from poor hygiene standards in insect farms. Balantidium spp. are cosmopolitan protozoans of the class Ciliata. Some species constitute commensal flora of animals, but they can also cause a disease known as balantidiasis. According to the literature, these protozoans are ubiquitous in synanthropic insects [68,71]. In some insects, Balantidium spp. is considered a part of the normal gut flora, and it can participate in digestive processes [72]. Insects can be vectors of Balantidium spp. pathogenic for humans and animals [73]. In our study, potentially pathogenic ciliates were detected even in insect farms with closed habitats. Entamoeba spp. are amoeboids of the taxonomic group Amoebozoa which are internal or commensal parasites in humans and animals. The majority of Entamoeba spp. identified in our study, including E. coli, E. dispar and E. hartmanni, belong to non-pathogenic commensal gut microflora. However, pathogenic E. histolytica [74] and E. invadens were also detected in the present study. Entamoeba histolytica can cause dysentery in humans and animals, whereas E. invadens is particularly dangerous for insectivorous animals such as reptiles and amphibians. Other authors demonstrated that E. histolytica is transmitted by insects in the natural environment [68,75]. Cestoda colonize insects as intermediate hosts. Cysticercoids, the larval stage of tapeworms such as Dipylidium caninum, Hymenolepis diminuta, H. nana, H. microstoma, H. citelli, Monobothrium ulmeri and Raillietina cesticillus, have been identified in insects [76-78]. Insects have developed immune mechanisms that inhibit the development of these parasites [78,79]. Tapeworms can induce behavioral changes in insects, such as a significant decrease in activity and photophobic behavior [80]. Behavioral changes can prompt definitive hosts to consume insects containing cysticercoids. Our study demonstrated that insect farms which are exposed to contact with animals and farms which are supplemented with insects from external sources are at greater risk of tapeworm infection. Similar results were reported in studies of synanthropic insects [81,82]. In our study, both cysticercoids and eggs were detected, which suggests that farms can be continuously exposed to sources of infection. However, the correlations between edible insects and the prevalence of taeniasis in humans and animals have never been investigated in detail. Temperature has been shown to significantly influence the development of tapeworm larvae in insects [83,84]. In our opinion, maintaining a lower temperature in insect farms could substantially decrease the reproductive success of tapeworms, and edible insects could be thermally processed before consumption to minimize the risk of tapeworm infection. The results of our study indicate that edible insects play an important role in the transmission of tapeworms to birds, insectivorous animals and humans. Pharyngodon spp. are parasitic nematodes that colonize exotic animals in both wild and captive environments [85,86]. These parasites are more prevalent in captive pets than in wild animals [85,86], which could be linked to the consumption of edible insects.
In our study, insects that had previous contact with animals were significantly more often vectors of Pharyngodon spp. our results indicate that insects act as mechanical vectors for the transmission of the parasite's developmental forms. The role of insects as definitive hosts for Pharyngodon spp. has not been confirmed by research. Human infections caused by Pharyngodon spp. had been noted in the past [87], but these nematodes are no longer significant risk factors of potential zoonotic disease. Physaloptera spp. form cysts in the host's hemocoel approximately 27 days after ingestion [88]. Cawthorn and Anderson [89], demonstrated that crickets and cockroaches can act as intermediate hosts for these nematodes. Our study is the first ever report indicating that Physaloptera spp. can be transmitted by mealworms and migratory locusts. Insects can act as vectors in the transmission of these parasites, in particular to insectivorous mammals. Despite the above, definitive hosts are not always infected [88,89]. Cockroaches play an important role in the transmission of the discussed parasites, including zoological gardens [90]. A study of experimentally infected flour beetles (Tribolium confusum) demonstrated that Spirurids can also influence insect behavior [91]. Behavioral changes increase the risk of insectivores selecting infected individuals. Spiruroidea are parasitic nematodes which require invertebrate intermediate hosts, such as dung beetles or cockroaches, to complete their life cycle [92]. In grasshoppers, Spirura infundibuliformis reach the infective stage in 11-12 days at ambient temperatures of 20-30˚C [93]. Research has demonstrated that these insects are reservoirs of Spiruroidea in the natural environment [94]. These parasites form cysts in insect muscles, hemocoel and Malpighian tubules. They colonize mainly animals, but human infections have also been reported. According to Haruki et al. [95], Spiruroidea can infect humans who accidentally consume intermediate hosts or drink water containing L3 larvae of Gongylonema spp. (nematodes of the superfamily Spiruroidea). The prevalence of Spiruroidea in insects has never been studied in Central European insects. In our study, these nematodes were identified mainly in farms importing insects from outside Europe. Acanthocephala are obligatory endoparasites of the digestive tract in fish, birds and mammals, and their larvae (acanthor, acanthella, cystacanth) are transmitted by invertebrates. The prevalence of these parasites in wild insects has never been studied. In cockroaches, Acanthocephala species such as Moniliformis dubius and Macracanthorhynchus hirudinaceus penetrate the gut wall and reach the hemocoel [96]. The outer membrane of the acanthor forms microvilli-like protuberances which envelop early-stage larvae [97]. The influence of acanthocephalans on insects physiology has been widely investigated. The presence of Moniliformis moniliformis larvae in cockroach hemocoel decreases immune reactivity [98], which, in our opinion, can contribute to secondary infections. Thorny-headed worms influence the concentration of phenoloxidase, an enzyme responsible for melanin synthesis at the injury site and around pathogens in the hemolymph [99,100]. There are no published studies describing the impact of acanthocephalans on insect behavior. A study of crustaceans demonstrated that the developmental forms of these parasites significantly increased glycogen levels and decreased lipid content in females [101]. 
Thorn-headed worms also compromise reproductive success in crustaceans [102]. Further research into arthropods is needed to determine the safety of insects as sources of food and feed. Acanthocephalans have been detected in insectivorous reptiles [103], which could indicate that insects can act as vectors for the transmission of parasitic developmental forms. Pentastomida are endoparasitic arthropods that colonize the respiratory tract and body cavities of both wild and captive reptiles [104]. Pentastomiasis is considered a zoonotic disease, in particular in developing countries [105]. The presence of mites, which resemble pentastomid nymphs during microscopic observations, should be ruled out when diagnosing pentastomiasis in insect farms. The role of insects of intermediate hosts/vectors of pentastomid nymphs has not yet been fully elucidated. However, Winch and Riley [106] found that insects, including ants, are capable of transmitting tongue worms and that cockroaches are refractory to infection with Raillietiella gigliolii. Esslinger [107], and Bosch [108], demonstrated that Raillietiella spp. rely on insects as intermediate hosts. Our study confirmed the above possibility, but we were unable to identify the factors which make selected insects the preferred intermediate hosts. The choice of intermediate host is probably determined by the parasite species. We were unable to identify pentastomid nymphs to species level due to the absence of detailed morphometric data. Our results and the findings of other authors suggest that insects could be important vectors for the transmission of pentastomids to reptiles and amphibians [106,109]. Prevalence The prevalence of parasitic infections in insects has been investigated mainly in the natural environment. Thyssen et al. [110] found that 58.3% of German cockroaches were carriers of nematodes, including Oxyuridae eggs (36.4%), Ascaridae eggs (28.04%), nematode larvae (4.8%), other nematodes (0.08%) and Toxocaridae eggs (0.08%). Cestoda eggs (3.5%) were also detected in the above study. Chamavit et al. [68] reported the presence of parasites in 54. and Taenia spp. were not detected in our study, which suggests that the analyzed insects did not have access to the feces of infected humans. In a study of wild cockroaches in Iraq, the prevalence of parasitic developmental forms was nearly twice higher (83.33%) than in our study [82]. Iraqi cockroaches carried E. blatti (33%), N. ovalis (65.3%), H. diesingi (83.3%), Thelastoma bulhoe (15.4%), Gordius robustus (1.3%), Enterobius vermicularis, (2%) and Ascaris lumbricoides (1.3%). Unlike in our experiment, H. diesigni was the predominant nematode species in Iraqi cockroaches. The cited authors did not identify any developmental forms of tapeworms. Tsai and Cahill [111] analyzed New York cockroaches and identified Nyctotherus spp. in 22.85% of cases, Blatticola blattae in 96.19% of cases, and Hammerschmidtiella diesingi in 1.9% of cases. The results of our study suggest that farmed edible insects are less exposed to certain parasites (Ascaridae, Enterobius spp.) that are pathogenic for humans and animals. The absence of human-specific nematodes and roundworms could be attributed to the fact that the analyzed farms were closed habitats without access to infectious sources. In the work of Fotedar et al. [112], the prevalence of parasites was determined at 99.4% in hospital cockroaches and at 94.2% in household cockroaches. 
The percentage of infected cockroaches was much higher than in our study, which could indicate that environmental factors significantly influence the prevalence of selected parasite species. Our observations confirm that the risk of parasitic infections can be substantially minimized when insects are farmed in a closed environment. The high prevalence of selected developmental forms of parasites in the evaluated insect farms could be attributed to low hygiene standards and the absence of preventive treatments. Parasitic fauna in insect farms have never been described in the literature on such a scale. A study of cockroaches from the laboratory stock of the Wrocław Institute of Microbiology (Poland) revealed the presence of ciliates in all insects and the presence of nematodes in 87% of insects [113]. These results could be attributed to the fact that all examined insects were obtained from a single stock, which contributed to the re-emergence of parasitic infections. Similar observations were made in several insect farms in the current study. Edible insect processing, such as cooking or freezing, may inactivate parasitic developmental forms. Tanowitz et al. [114] reported that Taenia solium is killed by cooking the pork to an internal temperature of 65˚C or freezing it at −20˚C for at least 12 hours. Smoking, curing or freezing meat may also inactivate protozoa like Toxoplasma gondii [115]. The use of microwaves may be ineffective [115]. In the case of Anisakis simplex, it has been shown that cooking and freezing can significantly improve food safety in relation to this nematode [116]. Boiling insects for 5 min is also an efficient process for eliminating Enterobacteriaceae [117]. Simple preservation methods such as drying/acidifying without use of a refrigerator were tested and considered promising [117]. However, there is a need for a thorough evaluation of insect processing methods, including cooking/freezing temperatures and times, to prevent possible parasitic infections. Despite food preparation processes, parasite allergens may still be detected [116]. Insects may also be a bacterial vector/reservoir, but currently there are no data available from bacteriological tests in farmed insects. It has been proven that insects can be an important epidemiological factor in the transmission of bacterial diseases [3]. Among the most important bacteria transmitted by insects are Campylobacter spp. [118] and Salmonella spp. [119]. Kobayashi et al. [120] showed that insects may also be vectors of Escherichia coli O157:H7. Free-living cockroaches harbored pathogenic organisms such as Escherichia coli, group D Streptococcus, Bacillus spp., Klebsiella pneumoniae, and Proteus vulgaris [121]. In vitro studies have shown that some species of insects may also be reservoirs of Listeria monocytogenes [122]. In our opinion, further research should also focus on the microbiological safety of edible insect breeding. Because the identification of parasites was based on morphological and morphometric methods, further molecular research should focus on the precise determination of the individual species of the identified parasites in order to determine the real threat to public health. The results of this study indicate that edible insects play an important role in the epidemiology of parasitic diseases in vertebrates. Edible insects act as important vectors for the transmission of parasites to insectivorous pets.
Insect farms that do not observe hygiene standards or are established in inappropriate locations (e.g., houses) can pose both direct and indirect risks for humans and animals. Therefore, farms supplying edible insects have to be regularly monitored for parasites to guarantee the safety of food and feed sources. Because the parasite burden is related to the development of human and animal diseases, quantitative studies of parasite intensity in insect farms should be performed in the future. In our opinion, the most reliable method for such quantitative research would be real-time PCR. Insect welfare standards and analytical methods should also be developed to minimize production losses and effectively eliminate pathogens from farms.
2019-07-10T13:05:07.622Z
2019-07-08T00:00:00.000
{ "year": 2019, "sha1": "ebe9be7db024ad5d23fdb4358ff9a5657407ef0a", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0219303&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ebe9be7db024ad5d23fdb4358ff9a5657407ef0a", "s2fieldsofstudy": [ "Environmental Science", "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
302252
pes2o/s2orc
v3-fos-license
Locally Correct Frechet Matchings The Frechet distance is a metric to compare two curves, which is based on monotonous matchings between these curves. We call a matching that results in the Frechet distance a Frechet matching. There are often many different Frechet matchings and not all of these capture the similarity between the curves well. We propose to restrict the set of Frechet matchings to"natural"matchings and to this end introduce locally correct Frechet matchings. We prove that at least one such matching exists for two polygonal curves and give an O(N^3 log N) algorithm to compute it, where N is the total number of edges in both curves. We also present an O(N^2) algorithm to compute a locally correct discrete Frechet matching. Introduction Many problems ask for the comparison of two curves. Consequently, several distance measures have been proposed for the similarity of two curves P and Q, for example, the Hausdorff and the Fréchet distance. Such a distance measure simply returns a number indicating the (dis)similarity. However, the Hausdorff and the Fréchet distance are both based on matchings of the points on the curves. The distance returned is the maximum distance between any two matched points. The Fréchet distance uses monotonous matchings (and limits of these): if point p on P and q on Q are matched, then any point on P after p must be matched to q or a point on Q after q. The Fréchet distance is the maximal distance between two matched points minimized over all monotonous matchings of the curves. Restricting to monotonous matchings of only the vertices results in the discrete Fréchet distance. We call a matching resulting in the (discrete) Fréchet distance a (discrete) Fréchet matching. See Section 2 for more details. There are often many different Fréchet matchings for two curves. However, as the Fréchet distance is determined only by the maximal distance, not all of these matchings capture the similarity between the curves well (see Fig. 1). There are applications that directly use a matching, for example, to map a GPS track to a street network [8] or to morph between the curves [4]. In such situations a "good" matching is important. We believe that many applications of the (discrete) Fréchet distance, such as protein alignment [9] and detecting patterns in movement data [3], would profit from good Fréchet matchings. Results. We restrict the set of Fréchet matchings to "natural" matchings by introducing locally correct Fréchet matchings: matchings that for any two matched subcurves are again a Fréchet matching on these subcurves. In Section 3 we prove that there exists such a locally correct Fréchet matching for any two polygonal curves. Based on this proof we describe in Section 4 an O(N 3 log N ) algorithm to compute such a matching, where N is the total number of edges in both curves. We consider the discrete Fréchet distance in Section 5 and give an O(N 2 ) algorithm to compute locally correct matchings under this metric. Related work. The first algorithm to compute the Fréchet distance was given by Alt and Godau [1]. They also consider a non-monotone Fréchet distance and their algorithm for this variant results in a locally correct non-monotone matching (see Remark 3.5 in [6]). Eiter and Mannila gave the first algorithm to compute the discrete Fréchet distance [5]. Since then, the Fréchet distance has received significant attention. Here we focus on approaches that restrict the allowed matchings. Efrat et al. 
[4] introduced Fréchet-like metrics, the geodesic width and link width, to restrict to matchings suitable for curve morphing. Their method is suitable only for non-intersecting polylines. Moreover, geodesic width and link width do not resolve the problem illustrated in Fig. 1: both matchings also have minimal geodesic width and minimal link width. Maheshwari et al. [7] studied a restriction by "speed limits", which may exclude all Fréchet matchings and may cause undesirable effects near "outliers" (see Fig. 2). Buchin et al. [2] describe a framework for restricting Fréchet matchings, which they illustrate by restricting slope and path length. The former corresponds to speed limits. We briefly discuss the latter at the end of Section 4. Preliminaries Curves. Let P be a polygonal curve with m edges, defined by vertices p 0 , . . . , p m . We treat a curve as a continuous map P : [0, m] → R d . In this map, P (i) equals p i for integer i. Furthermore, P (i + λ) is a parameterization of the (i + 1)st edge, that is, P (i + λ) = (1 − λ) · p i + λ · p i+1 , for integer i and 0 < λ < 1. As a reparametrization σ : [0, 1] → [0, m] of a curve P , we allow any continuous, nondecreasing function such that σ(0) = 0 and σ(1) = m. We denote by P σ (t) the actual location according to reparametrization σ: P σ (t) = P (σ(t)). By P σ [a, b] we denote the subcurve of P in between P σ (a) and P σ (b). In the following we are always given two polygonal curves P and Q, where Q is defined by its vertices q 0 , . . . , q n and is reparametrized by θ : [0, 1] → [0, n]. The reparametrized curve is denoted by Q θ . Fréchet matchings. We are given two polygonal curves P and Q with m and n edges. A (monotonous) matching µ between P and Q is a pair of reparametrizations (σ, θ), such that P σ (t) matches to Q θ (t). The Euclidean distance between two matched points is denoted by d µ (t) = |P σ (t) − Q θ (t)|; we write d µ [a, b] for the maximum of d µ (t) over a ≤ t ≤ b. The Fréchet distance δ F (P, Q) is the infimum of d µ [0, 1] over all matchings µ, and a Fréchet matching is a matching attaining this value. Free space diagrams. Alt and Godau [1] describe an algorithm to compute the Fréchet distance based on the decision variant (that is, solving δ F (P, Q) ≤ ε for some given ε). Their algorithm uses a free space diagram, a two-dimensional diagram on the range [0, m] × [0, n]. Every point (x, y) in this diagram is either "free" (white) or not (indicating whether |P (x) − Q(y)| ≤ ε). The diagram has m columns and n rows; every cell (c, r) (1 ≤ c ≤ m and 1 ≤ r ≤ n) corresponds to the edges p c−1 p c and q r−1 q r . To compute the Fréchet distance, one finds the smallest ε such that there exists an x- and y-monotone path from point (0, 0) to (m, n) in free space. For this, only certain critical values for the distance have to be checked. Imagine continuously increasing the distance ε starting at ε = 0. At so-called critical events, which are illustrated in Fig. 3, passages open in the free space. The critical values are the distances corresponding to these events. Locally correct Fréchet matchings We introduce locally correct Fréchet matchings, for which the matching between any two matched subcurves is a Fréchet matching. Definition 1 (Local correctness). Given two polygonal curves P and Q, a matching µ = (σ, θ) is locally correct if for all 0 ≤ a ≤ b ≤ 1 it holds that d µ [a, b] = δ F (P σ [a, b], Q θ [a, b]). Note that not every Fréchet matching is locally correct. See for example Fig. 2. The question arises whether a locally correct matching always exists and if so, how to compute it. We resolve the first question in the following theorem. Theorem 1. For any two polygonal curves P and Q, there exists a locally correct Fréchet matching. Existence. We prove Theorem 1 by induction on the number of edges in the curves.
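Before turning to the base-case lemmata, it may help to make the free-space primitive concrete. The sketch below is my own illustration, not taken from the paper; the function name and the use of NumPy are assumptions. It computes the free interval of one cell boundary, i.e. the portion of an edge of Q that lies within distance ε of a vertex of P; intervals of this kind are the building blocks of the free space diagram, and the ε values at which such intervals first open or touch are exactly the critical values mentioned above.

import numpy as np

def free_interval(p, q0, q1, eps):
    # Free-space interval on one cell boundary: all lambda in [0, 1] with
    # |p - ((1 - lambda) * q0 + lambda * q1)| <= eps.
    # Solving |(q0 - p) + lambda * (q1 - q0)|^2 <= eps^2 gives a quadratic
    # a*lambda^2 + b*lambda + c <= 0.  Returns (lo, hi) clipped to [0, 1],
    # or None if the edge stays farther than eps from p.
    p, q0, q1 = (np.asarray(v, dtype=float) for v in (p, q0, q1))
    d = q1 - q0
    w = q0 - p
    a = d.dot(d)
    b = 2.0 * w.dot(d)
    c = w.dot(w) - eps * eps
    if a == 0.0:                                  # degenerate edge: q0 == q1
        return (0.0, 1.0) if c <= 0.0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    root = disc ** 0.5
    lo = max(0.0, (-b - root) / (2.0 * a))
    hi = min(1.0, (-b + root) / (2.0 * a))
    return (lo, hi) if lo <= hi else None

# Example: vertex (0, 0) against the edge from (-2, 1) to (2, 1) at eps = 1.5
# yields the part of the edge inside the disc of radius 1.5 around the vertex.
print(free_interval((0, 0), (-2, 1), (2, 1), 1.5))   # (0.22..., 0.77...)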
First, we present the lemmata for the two base cases: one of the two curves is a point, and both curves are line segments. In the following, n and m again denote the number of edges of P and Q, respectively. Lemma 1. For two polygonal curves P and Q with m = 0, a locally correct matching is (σ, θ), where σ(t) = 0 and θ(t) = t · n. Proof. Since m = 0, P is just a single point, p 0 . The Fréchet distance between a point and a curve is the maximal distance between the point and any point on the curve: δ F (P, Q) = max 0≤s≤n |p 0 − Q(s)|. Hence, for any 0 ≤ a ≤ b ≤ 1, d µ [a, b] = max a≤t≤b |p 0 − Q θ (t)| = δ F (P σ [a, b], Q θ [a, b]). This implies that the matching µ is locally correct. Lemma 2. For two polygonal curves P and Q with m = n = 1, the linear matching µ = (σ, θ) with σ(t) = t and θ(t) = t is locally correct. Proof. The free space diagram of P and Q is a single cell and thus the free space is a convex area for any value of ε. Since µ = (σ, θ) is linear, we have that d µ [a, b] = max{d µ (a), d µ (b)} for any a ≤ b; since any matching of the subcurves must match their endpoints, this maximum is also a lower bound on δ F (P σ [a, b], Q θ [a, b]). We conclude that µ is locally correct. For induction, we split the two curves based on events (see Fig. 4). Since each split must reduce the problem size, we ignore any events on the left or bottom boundary of cell (1, 1) or on the right or top boundary of cell (m, n). This excludes both events of type A. A free space diagram is connected at value ε if a monotonous path exists from the boundary of cell (1,1) to the boundary of cell (m, n). A realizing event is a critical event at the minimal value ε such that the corresponding free space diagram is connected. Let E denote the set of concurrent realizing events for two curves. A realizing set E r is a subset of E such that the free space admits a monotonous path from cell (1, 1) to cell (m, n) without using an event in E\E r . Note that a realizing set cannot be empty. When E contains more than one realizing event, some may be "insignificant": they are never required to actually make a path in the free space diagram. A realizing set is minimal if it does not contain a strict subset that is a realizing set. Such a minimal realizing set contains only "significant" events. Lemma 3. For two polygonal curves P and Q with m > 1 and n ≥ 1, there exists a minimal realizing set. Proof. Let E denote the non-empty set of concurrent events at the minimal critical value. By definition, the empty set cannot be a realizing set and E is a realizing set. Hence, E contains a minimal realizing set. The following lemma directly implies that a locally correct Fréchet matching always exists. Informally, it states that curves have a locally correct matching that is "closer" (except in cell (1, 1) or (m, n)) than the distance of their realizing set. Further, this matching is linear inside every cell. In the remainder, we use realizing set to indicate a minimal realizing set, unless indicated otherwise. Lemma 4. If the free space diagram of two polygonal curves P and Q is connected at value ε, then there exists a locally correct Fréchet matching µ = (σ, θ) that is linear inside every cell and satisfies d µ (t) ≤ ε for every t at which (σ(t), θ(t)) lies outside cell (1, 1) and outside cell (m, n). Proof. We prove this by induction on m + n. The base cases (m = 0, n = 0, and m = n = 1) follow from Lemma 1 and Lemma 2. For induction, we assume that m ≥ 1, n ≥ 1, and m + n > 2. By Lemma 3, a realizing set E r exists for P and Q, say at value ε r . The set contains realizing events e 1 , . . . , e k (k ≥ 1), numbered in lexicographic order. By definition, ε r ≤ ε holds. Suppose that E r splits curve P into P 1 , . . . , P k+1 and curve Q into Q 1 , . . . , Q k+1 , where P i has m i edges and Q i has n i edges. By definition of a realizing event, none of the events in E r occur on the right or top boundary of cell (m, n). Hence, for any i (1 ≤ i ≤ k + 1), it holds that m i ≤ m, n i ≤ n, and m i < m or n i < n.
Since a path exists in the free space diagram at ε r through all events in E r , the induction hypothesis implies that, for any i (1 ≤ i ≤ k + 1), a locally correct matching µ i = (σ i , θ i ) exists for P i and Q i such that µ i is linear in every cell and satisfies the distance bound of Lemma 4 at value ε r . Combining these matchings with the events in E r yields a matching µ = (σ, θ) for (P, Q). As we argue below, this matching is locally correct and satisfies the additional properties. The matching of an event corresponds to a single point (type B) or a horizontal or vertical line (type C). By induction, µ i is linear in every cell. Since all events occur on cell boundaries, the cells of the matchings and events are disjoint. Therefore, the matching µ is also linear inside every cell. To show that µ is locally correct, suppose for contradiction that there are values a ≤ b with d µ [a, b] > δ F (P σ [a, b], Q θ [a, b]). If a, b are in between two consecutive events, we know that the submatching corresponds to one of the matchings µ i . Since these are locally correct, this contradicts the assumption. Hence, suppose that a and b are separated by at least one event of E r . There are two possibilities: d µ [a, b] ≤ ε r or d µ [a, b] > ε r . If d µ [a, b] ≤ ε r holds, then a matching exists that does not use the events between a and b and has a lower maximum. Hence, the free space connects point (σ(a), θ(a)) with point (σ(b), θ(b)) at a lower value than ε r . This implies that all events between a and b can be omitted, contradicting that E r is a minimal realizing set. Now, assume d µ [a, b] > ε r . Let t′ denote the highest t for which σ(t) ≤ 1 and θ(t) ≤ 1 holds, that is, the point at which the matching exits cell (1,1). Similarly, let t″ denote the lowest t for which σ(t) ≥ m − 1 and θ(t) ≥ n − 1 holds. Since d µ (t) ≤ ε r holds for any t′ ≤ t ≤ t″, d µ (t) > ε r can hold only for t < t′ or t > t″. Suppose that d µ (a) > ε r holds. Then a < t′ holds and µ is linear between a and t′. Therefore, d µ [a, t′] = max{d µ (a), d µ (t′)} = d µ (a). Since any matching of the subcurves must match P σ (a) to Q θ (a), this maximum is a lower bound on the Fréchet distance, contradicting the assumption that d µ [a, b] is larger than the Fréchet distance. Matching µ is therefore locally correct. Algorithm for locally correct Fréchet matchings The existence proof directly results in a recursive algorithm, which is given by Algorithm 1. Fig. 1 (left), Fig. 2 (left), Fig. 5, Fig. 6, and Fig. 7 (left) illustrate matchings computed with our algorithm. This section is devoted to proving the following theorem. Theorem 2. Algorithm 1 computes a locally correct Fréchet matching of two polygonal curves P and Q with m and n edges in O((m + n)mn log(mn)) time.
Algorithm 1 ComputeLCFM(P, Q)
Require: P and Q are curves with m and n edges
Ensure: A locally correct Fréchet matching for P and Q
1: if m = 0 or n = 0 then
2–5: handle the base cases using the matchings of Lemmas 1 and 2
6: Find event er of a minimal realizing set
7: Split P into P1 and P2 according to er
8: Split Q into Q1 and Q2 according to er
9: µ1 ← ComputeLCFM(P1, Q1)
10: µ2 ← ComputeLCFM(P2, Q2)
11: return concatenation of µ1, er, and µ2
Using the notation of Alt and Godau [1], L F i,j denotes the interval of free space on the left boundary of cell (i, j); L R i,j denotes the subset of L F i,j that is reachable from point (0, 0) of the free space diagram with a monotonous path in the free space. Analogously, B F i,j and B R i,j are defined for the bottom boundary. With a slight modification to the decision algorithm, we can compute the minimal value of ε such that a path is available from cell (1, 1) to cell (m, n). This requires only two changes: B R 1,2 should be initialized with B F 1,2 and L R 2,1 with L F 2,1 ; the answer should be "yes" if and only if B R m,n or L R m,n is non-empty. Realizing set.
By computing the Fréchet distance using the modified Alt and Godau algorithm, we obtain an ordered, potentially non-minimal realizing set E = {e 1 , . . . , e l }. (Fig. 5 shows a locally correct matching produced by Algorithm 1, with the free space diagram drawn at ε = δ F (P, Q).) The algorithm must find an event that is contained in a realizing set. Let E k denote the first k events of E. For now we assume that the events in E end at different cell boundaries. We use a binary search on E to find the r such that E r contains a realizing set, but E r−1 does not. This implies that event e r is contained in a realizing set and can be used to split the curves. Note that r is unique due to monotonicity. For correctness, the order of events in E must be consistent in different iterations, for example, by using a lexicographic order. Set E r contains only realizing sets that use e r . Hence, E r−1 contains a realizing set to connect cell (1, 1) to e r and e r to cell (m, n). Thus any event found in subsequent iterations is part of E r−1 and of a realizing set with e r . To determine whether some E k contains a realizing set, we check whether cells (1, 1) and (m, n) are connected without "using" the events of E\E k . To do this efficiently, we further modify the Alt and Godau algorithm. We require only a method to prevent events in E\E k from being used. After L R i,j is computed, we check whether the event e (if any) that ends at the left boundary of cell (i, j) is part of E\E k and necessary to obtain L R i,j . If this is the case, we replace L R i,j with an empty interval. Event e is necessary if and only if L R i,j is a singleton. To obtain an algorithm that is numerically more stable, we introduce entry points. The entry point of the left boundary of cell (i, j) is the maximal i′ < i such that B R i′,j is non-empty. These values are easily computed during the decision algorithm. Assume the passage corresponding to event e starts on the left boundary of cell (i s , j). Event e is necessary to obtain L R i,j if and only if i′ < i s . Therefore, we use the entry point instead of checking whether L R i,j is a singleton. This process is analogous for horizontal boundaries of cells. Earlier we assumed that each event in E ends at a different cell boundary. If events end at the same boundary, then these occur in the same row (or column) and it suffices to consider only the event that starts at the rightmost column (or highest row). This justifies the assumption and ensures that E contains O(mn) events. Thus computing e r (Algorithm 1, line 6) takes O(mn log(mn)) time, which is equal to the time needed to compute the Fréchet distance. Each recursion step splits the problem into two smaller problems, and the recursion ends when mn ≤ 1. This results in an additional factor m + n. Thus the overall running time is O((m + n)mn log(mn)). Sampling and further restrictions. Two curves may still have many locally correct Fréchet matchings: the algorithm computes just one of these. However, introducing extra vertices may alter the result, even if these vertices do not modify the shape (see Fig. 6). This implies that the algorithm depends not only on the shape of the curves, but also on the sampling. Increasing the sampling further and further seems to result in a matching that decreases the matched distance as much as possible within a cell. However, since cells are rectangles, there is a slight preference for taking longer diagonal paths.
Based on this idea, we are currently investigating "locally optimal" Fréchet matchings. The idea is to restrict to the locally correct Fréchet matching that decreases the matched distance as quickly as possible. We also considered restricting to the "shortest" locally correct Fréchet matching, where "short" refers to the length of the path in the free space diagram. However, Fig. 7 shows that such a restriction does not necessarily improve the quality of the matching. Locally correct discrete Fréchet matchings Here we study the discrete variant of Fréchet matchings. For the discrete Fréchet distance, only the vertices of curves are matched. The discrete Fréchet distance can be computed in O(m · n) time via dynamic programming [5]. Here, we show how to also compute a locally correct discrete Fréchet matching in O(m·n) time. Grids. Since we are interested only in matching vertices of the curves, we can convert the problem to a grid problem. Suppose we have two curves P and Q with m and n edges respectively. These convert into a grid G of non-negative values with m + 1 columns and n + 1 rows. Every column corresponds to a vertex of P , every row to a vertex of Q. Any node of the grid G[i, j] corresponds to the pair of vertices (p i , q j ). Its value is the distance between the vertices: G[i, j] = |p i − q j |. Analogous to free space diagrams, we assume that G[0, 0] is the bottom-left node and G[m, n] the top-right node. Matchings. A monotonous path π is a sequence of grid nodes π(1), . . . , π(k) such that every node π(i) (1 < i ≤ k) is the above, right, or above/right diagonal neighbor of π(i−1). In the remainder of this section a path refers to a monotonous path unless indicated otherwise. A monotonous discrete matching of the curves corresponds to a path π such that π(1) = G[0, 0] and π(k) = G[m, n]. We call a path π locally correct if for all 1 ≤ t 1 ≤ t 2 ≤ k, max t1≤t≤t2 π(t) = min π′ max 1≤t′≤k′ π′(t′), where π′ ranges over all paths starting at π′(1) = π(t 1 ) and ending at π′(k′) = π(t 2 ). Algorithm. The algorithm needs to compute a locally correct path between G[0, 0] and G[m, n] in a grid G of non-negative values. To this end, the algorithm incrementally constructs a tree T on the grid such that each path in T is locally correct. The algorithm is summarized by Algorithm 2.
Algorithm 2 ComputeDiscreteLCFM(P, Q)
Require: P and Q are curves with m and n edges
Ensure: A locally correct discrete Fréchet matching for P and Q
1: Construct grid G for P and Q
2: Let T be a tree consisting only of the root G[0, 0]
3: for i ← 1 to m do
4:   Add G[i, 0] to T
5: for j ← 1 to n do
6:   Add G[0, j] to T
7: for i ← 1 to m do
8:   for j ← 1 to n do
9:     AddToTree(T, G, i, j)
10: return path in T between G[0, 0] and G[m, n]
We define a growth node as a node of T that has a neighbor in the grid that is not yet part of T : a new branch may sprout from such a node. The growth nodes form a sequence of horizontally or vertically neighboring nodes. A living node is a node of T that is not a growth node but is an ancestor of a growth node. A dead node is a node of T that is neither a living nor a growth node, that is, it has no descendant that is a growth node. Every pair of nodes in this tree has a nearest common ancestor (NCA). When we have to decide what parent to use for a new node in the tree, we look at the maximum value on the path in the tree between the parents and their NCA (excluding the value of the latter).
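For orientation, the following sketch (my own illustration, not part of the paper) shows the grid G and the classic Eiter–Mannila dynamic program [5] over it. The dynamic program returns only the discrete Fréchet distance, i.e. the optimal bottleneck value from G[0, 0] to G[m, n]; tracing an optimal path back through it yields some discrete Fréchet matching, but not necessarily a locally correct one, which is exactly what the tree construction of Algorithm 2 guarantees.

import math

def build_grid(P, Q):
    # G[i][j] = |p_i - q_j| for vertex lists P = [p_0, ..., p_m] and
    # Q = [q_0, ..., q_n], each vertex given as a coordinate tuple.
    return [[math.dist(p, q) for q in Q] for p in P]

def discrete_frechet(P, Q):
    # Classic O(mn) dynamic program: D[i][j] is the best achievable
    # bottleneck value over monotone paths from G[0][0] to G[i][j].
    G = build_grid(P, Q)
    rows, cols = len(P), len(Q)
    D = [[0.0] * cols for _ in range(rows)]
    D[0][0] = G[0][0]
    for i in range(1, rows):
        D[i][0] = max(D[i - 1][0], G[i][0])
    for j in range(1, cols):
        D[0][j] = max(D[0][j - 1], G[0][j])
    for i in range(1, rows):
        for j in range(1, cols):
            D[i][j] = max(G[i][j],
                          min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1]))
    return D[rows - 1][cols - 1]

# Example: two short polygonal curves given by their vertices.
P = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
Q = [(0.0, 1.0), (1.0, 1.2), (2.0, 1.0)]
print(discrete_frechet(P, Q))   # 1.2, the bottleneck of the diagonal matching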
A face of the tree is the area enclosed by the segment between two horizontally or vertically neighboring growth nodes (without one being the parent of another) and the paths to their NCA. The unique sink of a face is the node of the grid that is in the lowest column and row of all nodes on the face. Fig. 8 (a-b) shows some examples of faces and their sinks. Shortcuts. To avoid repeatedly walking along the tree to compute maxima, we maintain up to two shortcuts from every node in the tree. The segment between the node and its parent is incident to up to two faces of the tree. The node maintains shortcuts to the sink of these faces, associating the maximum value encountered on the path between the node and the sink (excluding the value of the sink). Fig. 8 (b) illustrates some shortcuts. With these shortcuts, it is possible to determine the maximum up to the NCA of two (potentially diagonally) neighboring growth nodes in constant time. Note that a node g of the tree that has a growth node as parent is incident to at most one face (see Fig. 8 (c)). We need the "other" shortcut only when the parent of g has a living parent. Therefore, the value of this shortcut can be obtained in constant time by using the shortcut of the parent. When the parent of g is no longer a growth node, then g obtains its own shortcut. Extending the tree. Algorithm 3 summarizes the steps required to extend the tree T with a new node. Node G[i, j] has three candidate parents, G[i − 1, j], G[i, j − 1] and G[i − 1, j − 1]. Each pair of these candidates has an NCA. For the actual parent of G[i, j], we select the candidate c such that for any other candidate c′, the maximum value from c to their NCA is at most the maximum value from c′ to their NCA, both excluding the NCA itself. We must be consistent when breaking ties between candidate parents. To this end, we use a fixed preference order on the candidate parents. Since paths in the tree cannot cross, this order is consistent between two paths at different stages of the algorithm. Note that a preference order that prefers G[i − 1, j − 1] over both other candidates or vice versa results in an incorrect algorithm. When a dead path is removed from the tree, adjacent faces merge and a sink may change. Hence, shortcuts have to be extended to point toward the new sink. Fig. 9 illustrates the incoming shortcuts at a sink and the effect of removing a dead path on the incoming shortcuts. Note that the algorithm does not need to remove dead paths that end in the highest row or rightmost column. Correctness. To prove correctness of Algorithm 2, we require a stronger version of local correctness. A path π is strongly locally correct if for all paths π′ with the same endpoints max 1<t≤k π(t) ≤ max 1<t′≤k′ π′(t′) holds. Note that the first node is excluded from the maximum. Since max 1<t≤k π(t) ≤ max 1<t′≤k′ π′(t′) and π(1) = π′(1) imply max 1≤t≤k π(t) ≤ max 1≤t′≤k′ π′(t′), a strongly locally correct path is also locally correct. Lemma 5 implies the correctness of Algorithm 2. Lemma 5. Algorithm 2 maintains the following invariant: any path in T is strongly locally correct. Proof. To prove this lemma, we strengthen the invariant. Invariant. We are given a tree T such that every path in T is strongly locally correct. In constructing T , any ties were broken using the preference order. Initialization. Tree T is initialized such that it contains two types of paths: either between grid nodes in the first column or in the first row. In both cases there is only one path between the endpoints of the path. Therefore, this path must be strongly locally correct.
Since every node has only one candidate parent, T adheres to the preference order. Maintenance. The algorithm extends T to T by including node g = G[i, j]. This is done by connecting g to one of its candidate parents ( , the one that has the lowest maximum value along its path to the NCA. We must now prove that any path in T is strongly locally correct. From the induction hypothesis, we conclude that only paths that end at g could falsify this statement. Suppose that such an invalidating path exists in T , ending at g. This path must use one of the candidate parents of g as its before-last node. We distinguish three cases on how this path is situated compared to T . The last case, however, needs two subcases to deal with candidate parents that have the same maximum value on the path to their NCA. The four cases are illustrated in Fig. 10. For each case, we consider the path π i between the first vertex and the parent of g in the invalidating path (i.e. one of the three candidate parents). Note that π i need not be disjoint of the paths in T . Slightly abusing notation, we also use a path π to denote its maximum value, excluding the first node, i.e. max 1<t≤k π (t). We now show that for each case, the existence of the invalidating path contradicts the invariant on T . Case (a). Path π i ends at the parent of g in T . Path π is the path in T between the first and last vertex of π i . Since (π i , g) is the invalidating path, we know that max{π i , g} < max{π, g} holds. This implies that π i < π holds. In particular, this means that π, a path in T , is not strongly locally correct: a contradiction. Case (b). Path π i ends at a non-selected candidate parent of g. Path π 2 ends at the parent of g in T and path π 3 ends at the last vertex of π i . Let nca denote the NCA of the endpoints of π 2 and π 3 . The first vertex of π i is nca or one of its ancestors. Both path π 2 and π 3 start at nca. Let π 1 be the path from π i (1) to nca. Since the endpoint of π 2 was chosen as parent over the endpoint of π 3 , we know that π 2 ≤ π 3 holds. Furthermore, since (π i , g) is the invalidating path, we know that max{π i , g} < max{π 1 , π 2 , g} holds. These two inequalities imply max{π i , g} < max{π 1 , π 3 , g} holds. This in turn implies that π i < max{π 1 , π 3 } must hold. Since (π 1 , π 3 ) is a path in T and the inequality implies that it is not strongly locally correct, we again have a contradiction. Case (c). Path π i ends at a non-selected candidate parent of g. Path π 2 ends at the parent of g in T and path π 3 ends at the last vertex of π i . Let nca denote the NCA of the endpoints of π 2 and π 3 . The first vertex of π i is a descendant of nca. Path π 2 starts at π i (1) and π 3 starts at nca. Let π 1 be the path from nca to π i (1). In this case, we must explicitly consider the possibility of two paths having equal values. Hence, we distinguish two subcases. Case (c-1). In the first subcase, we assume that the endpoint of π 2 was chosen as parent since its maximum value is strictly lower: max{π 1 , π 2 } < π 3 holds. Since (π i , g) is the invalidating path, we know that max{π i , g} < max{π 2 , g} holds. Since π 2 ≤ max{π 1 , π 2 } always holds, we obtain that max{π i , g} < max{π 3 , g} must hold. This in turn implies that π i < π 3 holds. Similarly, since π 1 ≤ max{π 1 , π 2 }, we know that π 1 < π 3 must hold. Combining these last two inequalities yields max{π 1 , π i } < π 3 . 
Since π 3 is a path in T and the inequality implies that it is not strongly locally correct, we again have a contradiction. (Note that with max{π 1 , π 2 } ≤ π 3 , we can at best derive max{π 1 , π i } ≤ π 3 which is not strong enough to contradict the invariant on T .) Case (c-2). In the second subcase, we assume that the endpoint of π 2 was chosen as parent based on the preference order: the maximum values are equal, thus max{π 1 , π 2 } = π 3 holds. We now subdivide π i into two parts, π ia and π ib . π ia runs from the first vertex of π i up to the first vertex along π i that is an ancestor in T of the candidate parent of g that is used by π i . At the same point, we also split path π 3 into π 3a and π 3b . We now obtain two more cases, π 3a < max{π 1 , π ia } and π 3a ≥ max{π 1 , π ia }. In the former case, we obtain that max{π 3a , π ib } < max{π 1 , π 2 } holds and thus (π 3a , π ib , g) is also an invalidating path. Since this path starts at the NCA of π 2 and π 3 , this is already covered by case (b). In the latter case, we have that either path π 3a (which is in T ) is not strongly locally correct (contradicting the induction hypothesis) or there is equality between the two paths (π 1 , π ia ) and π 3a . In case of equality, we observe that (π 1 , π ia ) and π 3a arrive at their endpoint in the same order as π 2 and π 3 arrive at g. Thus T does not adhere to the preference order to break ties. This contradicts the invariant. Execution time. When a dead path π d is removed, we may need to extend a list of incoming shortcuts at π d (1), the node that remains in T . Let k denote the number of nodes in π d . The lemma below relates the number of extended shortcuts to the size of π d . The main observation is that the path requiring extensions starts at π d (1) and ends at either G[i−1, j] or G[i, j−1], since G[i, j] has not yet received any shortcuts. Lemma 6. A dead path π d with k nodes can result in at most 2·k−1 extensions. Proof. Since π d is a path with k nodes, it spans at most k columns and k rows. When a dead path is removed, its endpoint is G[i−1, j−1]. Let π e denote the path of T that requires extensions. We know that both paths start at the same node: π d (1) = π e (1). The endpoint of π e is at either G[i−1, j] or G[i, j −1], since G[i, j] has not yet received shortcuts when the dead path is removed. Also, note that if the endpoint of π e is not the parent of G[i, j] and has outdegree higher than 0, then it is a growth node and its descendants are also growth nodes. Hence, these descendants have a parent that is a growth node and thus have shortcuts that need to be extended. Fig. 11 illustrates these situations. Hence, we know that π e spans either k + 1 columns and k rows or vice versa. Therefore, the maximum number of nodes in π e is 2 · k, since it must be monotonous. Since π e (1) does not have a shortcut to itself, there are at most 2 · k − 1 incoming shortcuts from π e at π d (1). Hence, we can charge every extension to one of the k − 1 dead nodes (all but π d (1)). A node gets at most 3 charges, since it is a (non-first) node of a dead path at most once. Because an extension can be done in constant time, the execution time of the algorithm is O(mn). Note that shortcuts that originate from a living node with outdegree 1 could be removed instead of extended. We summarize the findings of this section in the following theorem. Theorem 3. Algorithm 2 computes a locally correct discrete Fréchet matching of two polygonal curves P and Q with m and n edges in O(mn) time.
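As a sanity check for implementations of Algorithm 2, the definition of a locally correct path can be tested directly: every contiguous subpath must attain the optimal bottleneck value between its endpoints. The brute-force oracle below is my own test utility, not part of the paper, and is far slower than O(mn); it is only intended to make the definition executable on small grids.

from functools import lru_cache

def is_locally_correct(path, G):
    # Check local correctness of `path`, a list of (i, j) grid nodes, on a
    # grid of values G (nested lists).  For every contiguous subpath, the
    # bottleneck (maximum grid value, endpoints included) must equal the
    # optimal bottleneck over all monotone paths between the same endpoints.

    @lru_cache(maxsize=None)
    def best(a, b):
        # Optimal bottleneck over monotone paths from node a to node b.
        (i, j), (k, l) = a, b
        if (i, j) == (k, l):
            return G[i][j]
        steps = [(i + 1, j), (i, j + 1), (i + 1, j + 1)]
        options = [best(s, b) for s in steps if s[0] <= k and s[1] <= l]
        return max(G[i][j], min(options))

    for s in range(len(path)):
        for t in range(s, len(path)):
            bottleneck = max(G[i][j] for (i, j) in path[s:t + 1])
            if bottleneck > best(path[s], path[t]):
                return False
    return True

# On this grid the diagonal path is a bad matching, while the detour is
# locally correct.
G = [[0, 5, 1],
     [5, 9, 5],
     [1, 5, 0]]
print(is_locally_correct([(0, 0), (1, 1), (2, 2)], G))                   # False
print(is_locally_correct([(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)], G))   # True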
Conclusion We set out to find "good" matchings between two curves. To this end we introduced the local correctness criterion for Fréchet matchings. We have proven that there always exists at least one locally correct Fréchet matching between any two polygonal curves. This proof resulted in an O(N 3 log N ) algorithm, where N is the total number of edges in the two curves. Furthermore, we considered computing a locally correct matching using the discrete Fréchet distance. By maintaining a tree with shortcuts to encode locally correct partial matchings, we have shown how to compute such a matching in O(N 2 ) time. Future work. Computing a locally correct discrete Fréchet matching takes O(N 2 ) time, just like the dynamic program to compute only the discrete Fréchet distance. However, computing a locally correct continuous Fréchet matching takes O(N 3 log N ) time, a linear factor more than computing the Fréchet distance. An interesting question is whether this gap in computation can be reduced as well. Furthermore, it would be interesting to investigate the benefit of local correctness for other matching-based similarity measures, such as the geodesic width [4].
2012-06-27T06:16:11.000Z
2012-06-27T00:00:00.000
{ "year": 2012, "sha1": "84ffa9896aa3be4c831eaa52bf47c9ae6f2566a0", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1206.6257", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "83becfb04c71ed6bd663217ae75840b3231360cb", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
10538874
pes2o/s2orc
v3-fos-license
Plant parts substitution based approach as a viable conservation strategy for medicinal plants: A case study of Premna latifolia Roxb. Background Rapid population growth and destructive harvesting methods of wild medicinal plants, especially trees, result in over-exploitation of natural sources, and their management is the need of the hour. Dashamoolarishta, a popular Ayurvedic fixed-dose-combination (FDC) formulation, is an amalgam of the roots of ten plants, with the root of Premna latifolia Roxb. as one of its ingredients. Presently, its populations, like those of many other trees, are under threat due to extensive use of the roots by the herbal drug industry. Objective With an aim to conserve biodiversity, a systematic study based on a rational approach of substituting root/root bark with alternative and renewable parts was conducted. Materials and methods The fingerprint profile together with the anti-inflammatory and analgesic effects of different parts of the plant was established for comparison. Results The results based on the chemical and biological study indicated close similarity between the roots and the leaves and suggest the possible use of the latter over root/root bark. Conclusion The study proposes that the substitution of the root with alternate renewable parts of the same plant shall form the best strategy towards conservation of trees like P. latifolia. Introduction Drugs of natural origin are well known for their beneficial role in the health care system and continue to play a significant role in the treatment of many diseases worldwide. A majority of the population believes in traditional herbal medicine because of easy accessibility and lesser side effects. But regardless of their importance, medicinal plants are still misused with no concern for their conservation. Over-exploitation of medicinal plants is a result of rapid population growth and increasing urbanization, which affect ecosystems and biodiversity, especially for slow-growing plants like trees. Further, the illegal and indiscriminate harvesting methods of medicinal plants result in the depletion of natural resources [1]. The different possible strategies available to conserve the natural resources are: (i) Establishment of more conservation areas and enforcement of laws against bark and root collection. (ii) Promoting awareness about plant diversity and its role in sustainable livelihood and healthcare. (iii) Large scale cultivation including that of slow-growing plants. (iv) Wherever possible, use of alternative vegetative renewable plant parts such as leaves, young stems and fruits in place of bark and underground parts like root, rhizome etc. There are a few reports focussing on the last suggestion of plant or plant part substitution. To implement this policy, there is a dire need to do more case studies and create awareness about its practical usefulness towards conservation and sustainability of medicinal plants. The first such study was carried out on Warburgia salutaris (G.Bertol.) Chiov. of South Africa. The authors demonstrated how the strategy of plant part substitution can be carried out in the laboratory and also showed that the potential for plant part substitution is highly plant specific [2]. Most of the subsequent reports were also from South Africa on Pelargonium sidoides DC. (aerial parts in place of underground roots and tubers) [3,4], Hypoxis hemerocallidea Fisch., C.A.Mey. & Ave-Lall. (leaf in place of corms) [5], Curtisia dentata (Burm.f.) C.A.Sm. (leaf in place of stem bark) [6], and one of the reports on W. salutaris (G.Bertol.) Chiov.
is published simply on phytochemical basis [7]. The concept of only phytochemical approach has been recently studied in Aegle marmelos (L.) Correa (stem in place of root) [8]. Of the two relevant studies from India, Venkatasubramanian et al. reported that Cyperus rotundus L. (Musta) can be substituted in place of Aconitum heterophyllum Wall. ex Royle (Ativisha) on the concept of both phytochemical and pharmacological approach [9]. The second report provides useful information wherein Ayurvedic practitioners use Cryptocoryne spiralis (Retz.) Fisch ex Wydler (Naattu Atividayam) and Cyperus scariosus R. Br. (Nagaramusta) as substitutes for A. heterophyllum Wall. ex Royle (Ativisha) and C. rotundus L. (Musta), respectively [10]. These studies demand an equal awareness coming from other countries as well and strongly emphasize the need of evaluating plant part substitution both chemically and biologically. Hence, in the present study, an indigenous tree Premna latifolia Roxb. (Verbenaceae) was selected which is one of the ten ingredients of an Ayurvedic formulation "Dashamoolarishta". Dashamoolarishta is popularly regarded as immune modulator and general restorative tonic in geriatrics and for women having problem with conception and pregnancy [11]. The methanolic extract of leaves is reported to have anti-inflammatory [12], anticalculogenic [13], and antifeedant activity [14]. An allencompassing literature survey publicized that a lot of phytochemical work has been done on root bark and leaves from which iridoids [15], sesquiterpenoids [16], diterpenoids [17], furanoid [18], and flavonoids [19], have been reported. The plant is under threat due to extensive use of its roots in herbal formulations and requires an obvious necessity to reexamine the fixed-dose-combination used in Ayurveda following suitable scientific approach. The roots of the plant are collected by destructive method which reduces the opportunity for rejuvenation and affects plant demography. A number of possible strategies are proposed from time to time for plants under-threat to curtail overharvesting, among which the most promising is the use of renewable plant parts as alternative to bark, roots and rhizomes for medicinal purposes. With an aim to accomplishing this objective and to conserve the biodiversity, a replacement of root/root barks of P. latifolia Roxb. with its alternative and renewable vegetative parts (young stems, leaves) was studied chemically and pharmacologically. In this study, we determined how different or close various parts of P. latifolia Roxb. are based on the TLC fingerprinting together with anti-inflammatory and analgesic activities. The results are in a definitive direction to promote the use of aerial parts as a suitable initiative towards plant conservation and ensuring sustainable harvesting. Plant material The plant material comprising of different parts of P. latifolia was collected during the month of July 2009 from Hamirpur district, Himachal Pradesh, India. The identity of the plant sample was confirmed on the basis of detailed study of taxonomic characters and by comparison with authentic sample available at Medicinal Plants Garden of University Institute of Pharmaceutical Sciences, Panjab University, Chandigarh. The identity of the sample was further confirmed by NISCAIR vide Ref-no. NISCAIR/RHMD/Consult/-2010-11/1456/54. 
A specimen of the plant has been deposited in the Museum-cum-Herbarium of University Institute of Pharmaceutical Sciences, Centre for Advanced Studies, Panjab University, Chandigarh, India, under voucher no 1465. Preparation of extracts All extractions were carried out at room temperature by macerating 100 g of coarsely powdered plant material with 500 ml of methanol till exhaustion. The methanolic extract of whole root (11 g), root bark (8.5 g), root wood (7 g), young stem (9.5 g) and leaf (10 g) was obtained by evaporating the solvent under reduced pressure in a rotary vacuum evaporator and used for chemical and pharmacological studies. Statistical analysis Statistical analysis was performed using Jandel Sigma Stat statistical software. Significance of difference between two groups was evaluated by two-way analysis of variance (ANOVA) followed by Bonferroni test. Results were considered significant if p values were < 0.05. Chemical studies 2.4.1. Comparative chemical profiling The comparative TLC chromatograms and fingerprint profiles were developed using pre-coated silica gel F254 plates [E. Merck (India) Ltd., alumina base, 0.2 mm thickness]. A large number of combinations of solvent systems were tried during the present studies and the best resolution was obtained in a solvent system of toluene:ethyl acetate:acetic acid (7.4:2.4:0.2). Extracts were applied as bands using a Camag Linomat 5 available in the laboratory. The running distance was kept at 8 cm and anisaldehyde-sulphuric acid reagent was used as derivatizing agent, followed by heating at 110 °C for 5 min or till the bands developed colour. TLC fingerprint profiles were recorded as images under UV at 254 and 366 nm before spray and under white light after derivatization on a Camag Reprostar fitted with a DXA 252 16 mm camera. Animals and experimental design Animals were procured from the Central Animal House of Panjab University, Chandigarh, India and kept under controlled environmental conditions with room temperature (22 ± 2 °C) and humidity (50 ± 10%). The studies were approved by the Institutional Animal Ethics Committee (IAEC) vide ref no 978 dated 08.02.2010 and animals were used according to the CPCSEA (Committee for the Purpose of Control and Supervision of Experiments on Animals) guidelines. Experiments were performed on male Wistar rats (200–220 g) and LACA mice (20–30 g). Animals were randomized into nine groups, each consisting of six animals, with free access to food and water ad libitum. A 12 h light/dark cycle was maintained throughout the study. Animals were acclimatized for one week prior to the commencement of the experiments. The doses were administered orally with the help of an oral cannula fitted on a tuberculin syringe. Anti-inflammatory activity and analgesic activity were evaluated by the carrageenan-induced paw oedema model and the tail flick method, respectively [20]. The experimental animals were divided into control, standard and test groups of six animals each. The control group animals received only the vehicle, the standard group animals received ibuprofen [21] or pentazocine [22] for comparison, and the test group animals received the test materials (extracts). Evaluation of anti-inflammatory activity Anti-inflammatory activity was determined using carrageenan-induced paw oedema in rats [23]. The animals were divided into different groups of control, standard (ibuprofen) and test of six animals each. All animals were starved overnight.
Acute oedema was induced in the left hind paw of rats by injecting 0.1 ml of a freshly prepared working solution of carrageenan (1%) under the plantar region. The control group received only the vehicle while the standard and test groups received ibuprofen and the test substances, respectively. The paw was marked with ink at the level of the lateral malleolus and immersed in solution up to this mark for noting the paw volume. The complete experimental design is outlined in Scheme I. The increase in paw volume was measured using a plethysmometer (water displacement, UGO-Basile, Varese, Italy) at 0, 1, 3, 6, 9 and 24 h after carrageenan challenge. Percent protection of paw oedema was calculated from the change in paw volume, where (V0) = volume of the paw before treatment with carrageenan and (Vt) = volume of the paw after carrageenan administration at 1, 3, 6, 9 and 24 h. Evaluation of analgesic activity Analgesic activity was determined by the tail flick method [24] using an analgesiometer (Imcorp, Ambala, India). Mice were divided into different groups of six animals each. All the animals in the respective groups were individually exposed to the analgesiometer maintained at 55 °C. The baseline tail flick latency averaged 2–4 s in all animals. The tail withdrawal from the heat (flicking response) was taken as the end point. Pentazocine (15 mg/kg) was used as a standard drug for comparison. The cut-off time was set at 10 s. The response of the drug was observed at time intervals of 0, 15, 30, 45, 60 and 120 min after drug administration and expressed in terms of maximum protection effect (MPE), which was calculated from the following formula: % Maximum protection effect = [(Reaction time − Basal reading)/(Cut-off time − Basal reading)] × 100, with cut-off time = 10 s. The complete experimental design is outlined in Scheme II. Results The TLC fingerprinting of the methanolic extract of the whole root, root bark, root wood, young stem and leaf of P. latifolia Roxb. was developed for comparative chemical profiling. The best resolution for all the plant parts was obtained in a solvent system of toluene:ethyl acetate:acetic acid (7.4:2.4:0.2) and the recorded fingerprint profiles are shown in Fig. 1(A–C). The different plant parts showed distinct patterns under UV light at 254 and 366 nm as well as after derivatizing with anisaldehyde-sulphuric acid reagent. The chromatogram of the whole root was characterized by fifteen bands, including nine major bands (R f values starting from 0.18). The comparative anti-inflammatory and analgesic activity of the methanolic extract of the different plant parts was evaluated at doses of 100, 200 and 400 mg/kg. In the paw oedema and tail flick models, all the parts showed statistically significant activity (p < 0.05), and the effects were found to be dose dependent. In the carrageenan-induced paw oedema model, the various plant parts at different doses showed a significant decrease in paw volume at various time intervals, as shown in Table 1. The whole root and leaf showed the same level of inhibition of oedema (48%) at 6 h at a dose of 400 mg/kg, in comparison to the 59% inhibition shown by ibuprofen, while the young stem was less active. In the tail flick method, all the test samples at oral doses of 100, 200 and 400 mg/kg showed a significant (p < 0.05) increase in latency time at the different time intervals (15, 30, 45, 60 and 120 min) relative to baseline values. The whole root and leaf significantly increased the percentage reaction time by 43 and 42% at the 400 mg/kg dose level, as compared to 62% shown by the pentazocine group (Table 2).
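The two response measures can be made explicit in a few lines of code. This is my own illustration: the tail-flick function follows the MPE expression given above, whereas the exact paw-oedema expression was not preserved in the text, so the percent-inhibition function below is only the commonly used form based on the increase in paw volume and is given as an assumption.

def percent_mpe(reaction_time, basal_reading, cutoff=10.0):
    # % maximum protection effect (MPE) for the tail-flick test, as defined
    # in the text: [(reaction time - basal reading) / (cut-off - basal)] * 100,
    # with a 10 s cut-off.
    return (reaction_time - basal_reading) / (cutoff - basal_reading) * 100.0

def percent_inhibition(v0_ctrl, vt_ctrl, v0_test, vt_test):
    # % inhibition of carrageenan-induced paw oedema.  Assumed form, based on
    # the increase in paw volume (Vt - V0) of the control vs. treated groups.
    return (1.0 - (vt_test - v0_test) / (vt_ctrl - v0_ctrl)) * 100.0

# Example: a basal tail-flick latency of 3 s and a post-dose latency of 6 s
# give an MPE of about 43%, in the range reported for the 400 mg/kg
# whole-root and leaf extracts.
print(round(percent_mpe(6.0, 3.0), 1))   # 42.9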
Discussion
TLC is an established tool of choice for handling complex tasks involving herbal drug standardization, owing to its simplicity, flexibility, reliability and cost-efficient separation. The comparative TLC analysis revealed a significant similarity among the different plant parts under UV and in daylight, as observed from the intensity of the major bands. The analogous fingerprint profiles of the roots and leaves indicated the presence of similar types of compounds. In the carrageenan-induced paw oedema and tail flick models, both the whole root and the leaf exhibited nearly the same level of protection. The stem too showed significant anti-inflammatory and analgesic activities as compared to the standard drug, but lesser than the whole root and leaf. An assessment of the similarities and differences among various parts of a plant by chemical and pharmacological methods acts as a significant tool towards the suggested strategies of plant part substitution for the conservation of biodiversity. A careful look at the results obtained in the present investigations clearly indicates that the leaf and root are comparable chemically and biologically, and that the root can be safely substituted or replaced with the leaf on the basis of the similarities in chemical profile together with the biological activity. Regardless of all the progress in modern healthcare systems, plants are still an essential source of medicinal preparations, drug discovery and development. Most of the plants are collected from wild sources, and the majority of the developing countries believe in herbal remedies. International agencies such as the World Health Organization, the World Wildlife Fund, the International Union for Conservation of Nature and the International Plant Genetic Resources Institute, together with many national agencies, play an increasing role in medicinal plant conservation and cultivation. The present time therefore offers a unique opportunity for implementing scientific research methods and policies to regulate the conservation, cultivation, processing and marketing of medicinal plants, which will further bridge the gap between affordable healthcare and the conservation of diversity.
Conclusion
The study suggests that the substitution of the root with the leaf, a renewable plant part, shall form the best strategy towards the conservation of bioresources and sustainable bioprospecting, especially of medicinal plants.
Obesity-associated gut microbiota is enriched in Lactobacillus reuteri and depleted in Bifidobacterium animalis and Methanobrevibacter smithii Background: Obesity is associated with increased health risk and has been associated with alterations in bacterial gut microbiota, with mainly a reduction in Bacteroidetes, but few data exist at the genus and species level. It has been reported that the Lactobacillus and Bifidobacterium genus representatives may have a critical role in weight regulation as an anti-obesity effect in experimental models and humans, or as a growth-promoter effect in agriculture depending on the strains. Objectives and methods: To confirm reported gut alterations and test whether Lactobacillus or Bifidobacterium species found in the human gut are associated with obesity or lean status, we analyzed the stools of 68 obese and 47 controls targeting Firmicutes, Bacteroidetes, Methanobrevibacter smithii, Lactococcus lactis, Bifidobacterium animalis and seven species of Lactobacillus by quantitative PCR (qPCR) and culture on a Lactobacillus-selective medium. Findings: In qPCR, B. animalis (odds ratio (OR)=0.63; 95% confidence interval (CI) 0.39–1.01; P=0.056) and M. smithii (OR=0.76; 95% CI 0.59–0.97; P=0.03) were associated with normal weight whereas Lactobacillus reuteri (OR=1.79; 95% CI 1.03–3.10; P=0.04) was associated with obesity. Conclusion: The gut microbiota associated with human obesity is depleted in M. smithii. Some Bifidobacterium or Lactobacillus species were associated with normal weight (B. animalis) while others (L. reuteri) were associated with obesity. Therefore, gut microbiota composition at the species level is related to body weight and obesity, which might be of relevance for further studies and the management of obesity. These results must be considered cautiously because it is the first study to date that links specific species of Lactobacillus with obesity in humans. Introduction Obesity, defined as a body mass index (BMI) over 30 kg m À2 (ref. 1) and a massive expansion of fat, is related to a significantly increased mortality and is a risk factor for many diseases, including diabetes mellitus, hypertension, respiratory disorders, ischemic heart disease, stroke and cancer. 2,3 Obesity can be considered as a transmissible disease because maternal obesity predisposes children to adulthood obesity. 4 Its prevalence is increasing steadily among adults, adolescents and children, and has doubled since 1960; and obesity is now considered a worldwide epidemic as, for example, over 30% of the population of North America is obese. The WHO data indicate that obesity currently affects at least 400 million people worldwide and 1.6 billion are overweight. The WHO further projects that by 2015, B2.3 billion adults will be overweight and more than 700 million will be obese. 5 The causes behind the obesity epidemic appear to be complex and involve environmental, genetic, neural and endocrine origins. 6 More recently, obesity has been associated with a specific profile of the bacterial gut microbiota, including a decrease in the Bacteroidetes/Firmicutes ratio 7-10 and a decrease in Methanobrevibacter smithii, the leading representative of the gut microbiota archaea. 11 Since these pioneering studies, significant associations were found between the increase of some bacterial groups and obesity (Lactobacillus, 12 Staphylococcus aureus, [13][14][15] Escherichia coli, 15 Faecalibacterium prausnitzii 16 ). 
Conversely, other groups have been associated with lean status, mainly belonging to the Bifidobacterium genus. 11,[13][14][15][16] To date, controversial studies make it clear that the connection between the microbiome and excess weight is complex. 17 As many probiotic strains of Lactobacillus and Bifidobacterium are marketed in products for human consumption, altering the intestinal flora 18 and stimulating indigenous lactobacilli and bifidobacteria strains, 19 we hypothesized that widespread ingestion of probiotics may promote obesity by altering the intestinal flora. [20][21][22] However, this remains controversial. 23,24 In a first step to elucidate the interactions between probiotics for human consumption and obesity, only a few studies have compared the obese and lean subjects by focusing on the Lactobacillus and Bifidobacterium genera at the species level 13,16 and they have not been able to demonstrate significant differences probably because of a too small sample size. As a result, by increasing the sample size, we analyze the composition of the digestive microbiota for Firmicutes, Bacteroidetes, the archaea M. smithii, Lactobacillus genus, L. lactis, and explore the relationships between seven selected species of Lactobacillus and one species of Bifidobacterium, used elsewhere in marketed probiotics for human consumption and obesity. Materials and methods Ethics, participants and samples All aspects of the study were approved by the local ethics committee 'Comité d'éthique de l'IFR 48, Service de Médecine Légale' (Faculté de Médecine, Marseille, France) under the accession number 10-002, 2010. Only verbal consent was necessary from patients for this study. This is according to the French bioethics decree Number 2007-1220, published in the official journal of the French Republic. Obese patients, as defined by a BMI430 kg m À2 (BMI: weight over height squared (kg m À2 )), were selected from two endocrinology units (Hopital La Timone and Hopital Sainte Marguerite, Marseilles, France) from a group of patients attending the clinic for excessive body weight. BMI provides the most useful population-level measure of overweight and obese, as it is the same for both sexes and for all ages of adults. 5 However, it may not correspond to the same degree of fatness in different individuals (The Y-Y paradox). 25 Control subjects were healthy volunteers over 18 years of age with BMIs between 19 and 25 kg m À2 . Only a few patients had participated in the previous study conducted by our laboratory. 12 The control subjects were predominantly Caucasian and were approached in different geographical locations using a snowball approach. This approach was helpful in making the period of recruitment of cases and controls comparable. The exclusion criteria included the following: non-assessable BMI value, BMIo19 kg m À2 , BMI425 kg m À2 and o30 kg m À2 , gastric bypass, history of colon cancer, bowel inflammatory diseases, acute or chronic diarrhea in the previous 4 weeks and antibiotic administration o1 month before stool collection. Clinical data (gender, date of birth, clinical history, weight, height and antibiotic use) were recorded using a standardized questionnaire. The samples, collected using sterile plastic containers, were transported as soon as possible to the laboratory and frozen immediately at À80 1C for later analysis. For Firmicutes, Bacteroidetes, M. 
smithii and Lactobacillus species, analyses were first performed on the whole population and then after exclusion of common subjects with our previous study. 12 Analysis of gut microbiota Culture on specific Lactobacillus medium (LAMVAB medium). After thawing at room temperature, 100 mg of stool was suspended in 900 ml of cysteine-peptone-water solution 26 and homogenized. A serial dilution was undertaken in phosphate buffered saline. Samples diluted to 1/10 and 1/1000 were inoculated using a 10 ml inoculation loop on LAMVAB medium. 27 After a 72-hour incubation in jars (AnaeroPack, Mitsubishi Gas Chemical America, Inc., New York, NY, USA) in an anaerobic atmosphere (GasPak EZ Anaerobe, Becton Dickinson, Heidelberg, Germany) at 37 1C, the number of morphotypes were identified and 1-4 colonies per morphotype were placed on four spots of an MTP 384 Target plate made of polished steel (Bruker Daltonics GmbH, Bremen, Germany) and stored in trypticase cases in soy culture medium (AES, Bruz, France). Lactobacillus strain collection and MALDI-TOF spectra database. The Lactobacillus strain collection of our laboratory has been completed by the strains from the Pasteur and DSMZ collections, and reference spectra have been created from those missing in the Bruker database. Bacterial identification was undertaken with an Autoflex II mass spectrometer (Bruker Daltonik GmbH). Data were automatically acquired using Flex control 3.0 and Maldi Biotyper Automation Control 2.0. (Bruker Daltonics GmbH). Raw spectra, obtained for each isolate, were analyzed by standard pattern matching (with default parameter settings) against the spectra of species used as a reference database. An isolate was regarded as correctly identified at the species level when at least one spectrum had a score X1.9, and one spectrum had a score X1.7. 28 The reproducibility of the method was evaluated by the duplicate analysis of 10 samples. Quantitative real-time PCR for M. smithii, Bacteroidetes, Firmicutes and Lactobacillus genus. DNA was isolated from stools as described in Dridi et al. 29 The purified DNA samples The duplex real-time PCR was executed as described above and in Armougom et al. 12 The specificity was tested on the DNA of the reference strains reported in Supplementary table 1. The stool-purified DNA was analyzed in samples that were pure, diluted at 1/10, and diluted at 1/ 100 to confirm the absence of inhibitors. Negative controls were included on each plate. The different lactobacilli, B. animalis and Lactococcus lactis were quantified using a plasmid standard curve from 10 7 -10 copies per assay. Statistical Analysis First, the results of Lactobacillus-specific culture and quantitative PCR (qPCR) were compared in the two groups (obese and control group) using the Fisher's exact test when comparing proportions, and the Mann-Whitney test when comparing bacterial concentrations. A difference was considered statistically significant when Po0.05. In order to identify which qPCR bacterial groups (Bacteroidetes, B. animalis, Lactococcus lactis, L. acidophilus, L. casei/paracasei, L. fermentum, L. gasseri, L. plantarum, L. reuteri, L. rhamnosus) was most associated with the likelihood of being obese while taking into account possible confounders like age or gender, a logistic regression model was used. Variables with a liberal Po0.20 in the univariate logistic regression analysis were considered eligible for the multiple logistic regression analyses. 
30 A secondary analysis based on logistic regression analysis was used to identify which culture variables (Lactobacillus species concentration) where associated with obesity. Data analyses were conducted using SPSS v.9.0 (SPSS Inc., Chicago, IL, USA). Patients In total, 115 subjects (68 obese patients and 47 controls) were included. Thirteen obese subjects and nine controls were part of the previous study conducted in our laboratory. 12 The two populations were homogeneous in sex and height, but not in age (Table 1). Culture In total, 68 obese and 44 controls samples were analyzed. Table 2) and non-parametric quantitative comparison of the concentration of Lactobacillus species between obese subjects and controls has been achieved for the species present in at least six individuals. L. paracasei was found more frequently in controls (17/44 vs 10/68, Fisher's exact test, P ¼ 0.004). L. reuteri was found more frequently in obese patients (6/68 vs 1/44, Fisher's exact test, P ¼ 0.15), although this was not significant. L. plantarum was found only in Table 3). In total, 64 obese samples and 43 control samples were analyzed. The presence of B. animalis was associated with normal weight (Table 4, Fisher's exact test, P ¼ 0.007), and L. reuteri was associated with obesity (Fisher's exact test, P ¼ 0.03). Comparison using non-parametric statistics found that levels of B. animalis were lower (Mann-Whitney test, P ¼ 0.004) and that of L. reuteri were higher in obese people (Mann-Whitney test, P ¼ 0.02) ( Figure 3). By comparing the culture and the Lactobacillus species-specific PCR, the sensitivity was higher for all seven tested species by PCR vs culture except for L. acidophilus, which was not found by culture or species-specific PCR. Logistic regression analysis The results of the logistic regression analysis on the qPCR results are presented in Table 5. Variables eligible for the final model were L. casei/paracasei, L. reuteri, L. gasseri, B. animalis, M. smithii and age. The final multiple logistic regression model showed that after adjustment for age, L. reuteri, B. animalis and M. smithii were significantly associated with Discussion To our knowledge, we report the largest case-control study comparing human obese gut microbiota to controls focusing on Archaea, Bacteroidetes, Firmicutes, Lactobacillus genus, Lactococcus lactis and B. animalis and, for the first time, we used a culture-dependent and culture-independent method to compare the Lactobacillus population at the species level between obese and normal-weighted humans. Our results confirm global alteration in obese gut microbiota with a lower level of M. smithii as already reported in the literature, 11 and newly report lower levels of B. animalis, L. paracasei, L. plantarum and higher levels of L. reuteri in obese gut microbiota. The qPCR system used in this study to detect and quantify Bacteroidetes, Firmicutes, Lactobacillus genus and M. smithii in human feces has already been evaluated and validated. 12,29 LAMVAB-selective media has also been used successfully to Gut microbiota and obesity M Million et al identify and enumerate lactobacilli from human feces. 27 As in our previous study, 12 we found an increase in Lactobacillus in obese patients using the same Lactobacillus genus-specific PCR system. However, we found that its sensitivity profile was heterogeneous among the Lactobacillus species found in human feces by culture (data not shown). 
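As a concrete illustration of the statistical workflow used above — Fisher's exact test on detection frequencies, Mann–Whitney comparisons of concentrations, and age-adjusted logistic regression yielding odds ratios — here is a minimal Python sketch. Only the 6/68 vs 1/44 L. reuteri culture detection counts are taken from the text; the concentrations, ages and all printed values are simulated for illustration (the original analysis was run in SPSS) and are not the study's results.

```python
import numpy as np
from scipy.stats import fisher_exact, mannwhitneyu
import statsmodels.api as sm

# 2x2 table for L. reuteri detection in culture (counts from the text): obese 6/68, controls 1/44
table = np.array([[6, 62],    # obese: positive, negative
                  [1, 43]])   # controls: positive, negative
odds_ratio, p_fisher = fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, P = {p_fisher:.3f}")

# Mann-Whitney comparison of (simulated) log10 concentrations between obese and controls
rng = np.random.default_rng(0)
conc_obese = rng.normal(6.5, 1.0, 68)
conc_control = rng.normal(5.8, 1.0, 47)
u_stat, p_mw = mannwhitneyu(conc_obese, conc_control, alternative="two-sided")
print(f"Mann-Whitney: U = {u_stat:.0f}, P = {p_mw:.3g}")

# Age-adjusted logistic regression: obesity ~ log10 concentration + age (simulated ages)
y = np.r_[np.ones(68), np.zeros(47)]                     # 1 = obese, 0 = control
age = rng.normal(45.0, 12.0, 115)
X = sm.add_constant(np.column_stack([np.r_[conc_obese, conc_control], age]))
fit = sm.Logit(y, X).fit(disp=0)
print(f"Adjusted OR per unit log10 concentration: {np.exp(fit.params[1]):.2f}")
```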
We subsequently developed a novel Lactobacillus species-specific qPCR system targeting species associated with obesity or normal weight in our preliminary culture study, and targeting other species present in marketed probiotics products as Lactococcus lactis and B. animalis. Species-specific Lactobacillus PCR based on the Tuf gene and designed for this new study showed good reproducibility, sensitivity and specificity. However, we found significant discrepancies between culture and Lactobacillus species-specific PCR species. First, L. gasseri and L. acidophilus could not be identified in culture due to the presence of vancomycin in the LAMVAB medium. Conversely, although qPCR was much more sensitive than culture to detect selected species of Lactobacillus, we showed that the two methods were consistent for L. casei/paracasei, L. plantarum and L. reuteri. For these three Lactobacillus species, both techniques resulted in the same effect direction with human obesity gut microbiota enriched in L. reuteri, and depleted in L. casei/paracasei and L. plantarum. Gut microbiota and obesity M Million et al The decrease of Bacteroidetes was historically the first alteration significantly associated with obesity as reported by Ley and Turnbaugh,8 in mice and in North American individuals, 7,9 and by Santacruz et al., 15 who observed overweight pregnant women in Spain. We found the same correlation in our previous study, 12 and the same effect direction in the present study with the same PCR system on the whole population and after the exclusion of common subjects. Schwiertz et al. 11 reported opposite results, but the methodology was objectionable because the Bacteroidetes proportion was obtained by summing Bacteroides and Prevotella genera. Other studies found no interaction between the relative or absolute abundance of Bacteroidetes and obesity. [31][32][33] In our previous study, 12 abundance of M. smithii was significantly higher in patients with anorexia but not in lean controls. In this new study, we found that M. smithii was less frequent and significantly less abundant in obese patients on the whole population and after the exclusion of common subjects. Schwiertz et al. 11 using a specific qPCR for Methanobrevibacter species, found similar results in a German population. These results are in contradiction to those of Zhang et al. 33 who found that Methanobacteriales was present only in obese individuals using a qPCR but only three obese vs three controls were compared. In this study, we report an association between lower levels of B. animalis and obesity for the first time. Five studies reported a decreased number of Bifidobacterium representatives in the feces of obese subjects at the genus level. 11,[13][14][15][16] At the species level, Kalliomaki et al. 13 using a Bifidobacterium speciesspecific PCR, found that Bifidobacterium longum and Bifidobacterium breve were higher in normal weight controls, but this result was not significant probably because of a small sample size. Experimental data report that administration of a B. breve strain to mice with high-fat diet-induced obesity led to a significant weight decrease. 34 Administering four different Bifidobacterium strains to high-fat diet induced obese rats, Yin et al. 35 reported that one strain increased body weight gain, another induced a decrease and the two other strains lead to no significant change in body weight but species were not mentioned in this study. In this way, Cani et al. 
36 reported that high-fat feeding was associated with higher endotoxaemia and lower Bifidobacterium species cecal content in mice. The selective increase of bifidobacteria by oligofructose, improving mucosal barrier function, significantly and positively correlated with improved glucose tolerance, glucose-induced insulin secretion and decreased endotoxaemia. L. plantarum and L. paracasei were associated with normal weight in culture, consistent with experimental models in the literature reporting an anti-obesity effect of L. plantarum in mice. 37 Other Lactobacillus strains have shown an antiobesity effect in animals and humans similar to the L. gasseri SBT2055 (LG2055) strain in lean Zucker rats 38 and in humans. 39 This anti-obesity effect may be linked to the production of specific molecules that can interfere with host metabolism, such as conjugated linoleic acid (CLA) for L. plantarum or L. rhamnosus. 37,40 In vivo and in vitro analyses of physiological modifications imparted by CLA on protein and gene expression suggest that CLA exerts its delipidating effects by modulating energy expenditure, apoptosis, fatty acid oxidation, lipolysis, stromal vascular cell differentiation and lipogenesis. 37 Authors who have investigated the mechanisms linking conjugated linoleic acid and antiobesity effects have reported the upregulated expression of genes encoding uncoupling proteins (UCP-2), which could be a primary mechanism through which CLA increases energy expenditure and produces an anti-obesity effect. 40 L. reuteri has been associated here with obesity. L. reuteri has been one of the most studied probiotic species especially for its ability to inhibit the growth of other potentially pathogenic microorganisms by secreting antibiotic substances such as reuterin. 41 When introduced in pigs, turkeys and rats, L. reuteri led to a significant weight gain and was isolated in higher concentrations from feces after probiotic administration. [42][43][44] The mechanism by which L. reuteri is able to support the healthy growth of these animals is not entirely understood. It is possible that L. reuteri simply serves to protect livestock against illness caused by Salmonella typhimurium and other pathogens. However, other studies have revealed that L. reuteri can also help when the growth depression is caused entirely by a lack of dietary protein and not by contagious disease. 45 This raises the possibility that L. reuteri somehow improves the intestines' ability to absorb and process nutrients, and increase food conversion. 46 As a theoretical basis for the causal link between the gut microbiota alterations and obesity, several mechanisms have been suggested. First, the gut microbiota could interact with weight regulation by hydrolysis of indigestible polysaccharides to monosaccharides easily absorbable activating lipoprotein lipase. Consequently, glucose is rapidly absorbed producing substantial elevations in serum glucose and insulin, both factors that trigger lipogenesis and fatty acids excessively stored with de novo synthesis of triglycerides derived from liver, these two phenomena causing weight gain. 47 Second, the composition of gut microbiota has been shown to selectively suppress the angiopoietin-like protein 4/fasting-induced adipose factor in the intestinal epithelium, known as a circulating lipoprotein lipase inhibitor and regulator of peripheral lipid and glucose metabolism. 
48 Third, it has been suggested that bacterial isolates of gut microbiota may have pro-or anti-inflammatory properties, impacting weight as obesity, having been associated with a low-grade systemic inflammation corresponding to higher plasma endotoxin lipopolysaccharide concentrations defined as metabolic endotoxaemia. [49][50][51][52] Fourth, extracting crude fat in feed and excreta, Nahashon et al. 53 reported that feeding laying Leghorn with Lactobacillus improved significantly retention of fat with increased cellularity of the Peyer's patches of the ileum, which indicated ileal immune response. Conversely, Bifidobacterium and Lactobacillus species have been cited to deconjugate bile acids, which may decrease fat absorption. 54 Gut microbiota and obesity M Million et al Finally, specific strains of Lactobacillus and Bifidobacterium fed to farm animals have been shown to increase daily weight gain, 55 and this fact has been used for decades in agriculture to increase feed conversion. In this context, one cannot exclude that the 'growth promoter' effect in animals associated with oral administration of specific probiotics strains is similar to the mechanisms involved in human obesity. For instance, Abdulrahim et al. 56 reported that L. acidophilus significantly increased abdominal fat deposition in female chickens when administered alone and up to 31% when it was associated with zinc bacitracin. Further studies are therefore mandatory in exploring the interactions between probiotics and weight regulation. Conclusion In conclusion, reduced levels of M. smithii has been confirmed as being associated with obesity. In addition, higher levels of B. animalis, L. paracasei or L. plantarum were associated with a normal weight whereas higher levels of L. reuteri were associated with obesity, suggesting a possible interrelationship between certain probiotic species, marketed elsewhere for human consumption, and obesity. These results must be considered cautiously because it is the first study to date that links specific species of Lactobacillus with obesity in humans. This issue will be of critical importance in the management of the twenty-first century worldwide epidemic that is obesity and especially considering the booming market of probiotics.
The relation between star formation, morphology and local density in high redshift clusters and groups We investigate how the [OII] properties and the morphologies of galaxies in clusters and groups at z=0.4-0.8 depend on projected local galaxy density, and compare with the field at similar redshifts and clusters at low-z. In both nearby and distant clusters, higher-density regions contain proportionally fewer star-forming galaxies, and the average [OII] equivalent width of star-forming galaxies is independent of local density. However, in distant clusters the average current star formation rate (SFR) in star-forming galaxies seems to peak at densities ~15-40 galaxies Mpc^{-2}. At odds with low-z results, at high-z the relation between star-forming fraction and local density varies from high- to low-mass clusters. Overall, our results suggest that at high-z the current star formation (SF) activity in star-forming galaxies does not depend strongly on global or local environment, though the possible SFR peak seems at odds with this conclusion. We find that the cluster SFR normalized by cluster mass anticorrelates with mass and correlates with the star-forming fraction. These trends can be understood given a) that the average star-forming galaxy forms about 1 Msun/yr in all clusters; b) that the total number of galaxies scales with cluster mass and c) the dependence of star-forming fraction on cluster mass. We present the morphology-density (MD) relation for our z=0.4-0.8 clusters, and uncover that the decline of the spiral fraction with density is entirely driven by galaxies of types Sc or later. For galaxies of a given Hubble type, we see no evidence that SF properties depend on local environment. In contrast with recent findings at low-z, in our distant clusters the SF-density relation and the MD-relation are equivalent, suggesting that neither of the two is more fundamental than the other.(abr.) INTRODUCTION The star formation activity and other fundamental galaxy properties such as morphology and gas content vary systematically with redshift, galaxy mass and environment. While redshifts and galaxy masses are at least conceptually clearly defined, the definition of a galaxy environment is arbitrary and its optimal choice would require an a priori knowledge of the very thing we are trying to identify, that is the physical environmental driver or drivers of galaxy formation and evolution. Most studies nowadays define the "environment" either in terms of the local galaxy number density (the number of galaxies per unit volume or projected area around the galaxy of interest), or the virial mass of the cluster or group to which the galaxy belongs, when this applies, because these are the two most easily measurable quantities from spectroscopic or even imaging surveys. In spite of environment being an elusive and arbitrary concept, the fact that galaxy properties depend on environment was recognized earlier than the dependence on galaxy mass and redshift (Hubble & Humason 1931). The first quantitative measurement of systematic differences with local environment was the so-called morphology-density relation (MDR) in nearby clusters. 
The MDR is the observed variation of the proportion of different Hubble types with local density, with ellipticals being more common in high-density regions, spirals being more common in low-density regions, and S0s making up a constant fraction of the total population within the cluster virial radius regardless of density (Dressler 1980, as revisited in Dressler et al. 1997. It was subsequently found that a similar MDR also exists in nearby groups (Postman & Geller 1984 also revisited in Postman et al. 2005) and that a qualitatively similar, but quantitatively different, MDR is present in galaxy clusters at redshifts up to 1 (Dressler et al. 1997, Treu et al. 2003, Postman et al. 2005. Galaxy stellar populations have also long been known to vary systematically with environment (Spitzer & Baade 1951): denser environments have on average older stellar populations. At least at some level, this must be related to the higher incidence of early-type galaxies in high density regions, i.e. to the morphology-density relation. At low redshift the best characterization of the "star formation-local density" (SFD) relation has come from large redshift surveys. These studies have conclusively demonstrated that the average galaxy properties related to star formation depend on local density even at large clustercentric radii; at low densities in clusters; and outside of clusters, in groups and the general field (Hashimoto et al. 1998, Lewis et al. 2002, Gomez et al. 2003, Kauffmann et al. 2004, Balogh et al. 2004, see also Pimbblet et al. 2002). Moreover, they have highlighted the dependence on both galaxy mass and local environment, showing strong environmental trends at a given galaxy mass (Kauffmann et al. 2004, Baldry et al. 2006. To understand the origin of the SFD relation, it is essential to answer two separate questions: at any given galaxy mass, 1) how does the proportion of star-forming galaxies vary with density, and 2) how does the star formation activity in starforming galaxies vary with density? At low-z, the evidence for a change in the relative numbers of red/passively-evolving and blue/star-forming galaxies with local environment is overwhelming, but it remains an open question whether starforming galaxies of similar mass have star formation histories that depend on local density (Balogh et al. 2004a,b, Hogg et al. 2004, Gomez et al. 2003, Baldry et al. 2006. Deep redshift surveys have recently extended the study of the SFD relation in the general field to high redshift. They have revealed that the number ratio of red to blue galaxies increases with local density out to z > 1 and that we might be witnessing the establishment of the color-density relation at z approaching 1.5 (Cucciati et al. 2006, Cooper et al. 2007, Cassata et al. 2007). This suggests that the transition from a star-forming phase to a passive one occurs for a large number of massive galaxies in groups at z ∼ 2 (Poggianti et al. 2006). In apparent but not substantial contradiction, the average SFR per galaxy at z = 1 increases instead of decreasing with local density. Therefore the SFD relation is inverted with respect to the local Universe (Elbaz et al. 2007, Cooper et al. 2008. Thus, the MDR up to z = 1 is well studied in clusters and has started to be explored in the field (Capak et al. 2007), and the SFD relation is now being investigated in the general field over a similar redshift baseline. In clusters, Moran et al. 
(2005) have presented the EW(OII)s of ellipticals and S0 galaxies as a function of local density for a cluster at z = 0.4. However, a detailed study of the relation between star-formation and local environment in distant clusters has not yet been carried out. As a consequence, a comparison of the MDR and the SFD relation has not been possible to date in clusters at high redshift. This is due to the limited number of well-studied distant clusters with homogeneous data. Deep galaxy redshift surveys have recently made it possible to characterize the global as well as the local environment of galaxies and to study significant samples of groups at high-z (Wilman et al. 2005, Gerke et al. 2005, Balogh et al. 2007, Gerke et al. 2007, Finoguenov et al. 2007). Groups are typically identified as galaxy associations with masses < 10 14 M ⊙ corresponding to velocity dispersions of 400 km s −1 . Even the largest field surveys, however, include only very few distant systems above this mass (Finoguenov et al. 2007, Gerke et al. 2007). On the other hand, until recently, distant cluster surveys have studied primarily massive clusters. Only the lat-est surveys of optically-selected samples target structures of a wide range of masses, down to the group level (Hicks et al. 2008, Milvang-Jensen et al. 2008, Halliday et al. 2004. Nowadays groups are therefore the meeting point of field and cluster studies at high-z, and it has become possible to study the dependence of the SFD relation on global environment (clusters, groups and the field), approaching the question from both perspectives. In this paper, we investigate the relation between star formation activity, morphology and local galaxy density in z = 0.4 − 0.8 clusters and groups observed by the ESO Distant Cluster Survey (EDisCS). The EDisCS data set permits an internal comparison with galaxies in poor groups and the field at the same redshifts in a homogeneous way. To compare our results with clusters in the nearby Universe, we use a cluster sample drawn from the Sloan Digital Sky Survey (SDSS). After presenting the data set ( §2) and the definition of our cluster, group and field samples ( §3), we outline our method for measuring the local galaxy density at high-z ( §4) and describe the low-z cluster sample we use as local comparison ( §5). The average trends of the fraction of star-forming galaxies and of the [OII] equivalent width with local density in clusters ( §6.1) is compared with those found in lower density environments in §6.2 and to those in low-z clusters in §6.4. The dependence of the star-forming fraction on global cluster properties is presented in §6.3. We then analyze the behavior of the average and specific star formation rates in §7, summarizing the similarities and differences of the EW-density and SFR-density relations in §7.1. Cluster-integrated star formation rates are derived in §7.2 where we show their relationship with cluster mass and other global cluster properties. Galaxy morphologies are discussed in §8, where we present the morphology-density relation, the star-forming properties of each Hubble type as a function of local density, and the link between the MD and the SFD relations. The latter is compared with results at low redshift in §8.1. Finally, we summarize our conclusions in §9. All equivalent widths and cluster velocity dispersions are given in the rest frame. All quantities related to star formation are given uncorrected for dust. We use proper (not comoving) radii, areas and volumes. 
We assume a ΛCDM cosmology with (H 0 , Ω m , Ω λ ) = (70,0.3,0.7). THE DATASET The ESO Distant Cluster Survey (hereafter, EDisCS) is a multiwavelength survey of galaxies in 20 fields containing galaxy clusters at z = 0.4 − 1. Candidate clusters were selected as surface brightness peaks in smoothed images taken with a very wide optical filter (∼ 4500-7500 Å) as part of the Las Campanas Distant Cluster Survey (LCDCS; Gonzales et al. 2001). The 20 EDisCS fields were chosen among the 30 highest surface brightness candidates in the LCDCS, after confirmation of the presence of an apparent cluster and of a possible red sequence with VLT 20 min exposures in two filters . For all 20 fields EDisCS has obtained deep optical photometry with FORS2/VLT, near-IR photometry with SOFI/NTT, multislit spectroscopy with FORS2/VLT, and MPG/ESO 2.2/WFI wide field imaging in V RI. ACS/HST mosaic imaging in F814W of 10 of the highest redshift clusters has also been acquired (Desai et al. 2007). Other follow-up programmes include XMM-Newton X-Ray observations (Johnson et al. 2006), Spitzer IRAC and MIPS imaging (Finn et al. in prep.), Hα narrow-band imaging (Finn et al. 2005), and ad-ditional optical imaging and spectroscopy in 10 of the EDisCS fields targeting galaxies at z ∼ 5 (Douglas et al. 2007). An overview of the survey goals and strategy is given by White et al. (2005), who also present the optical groundbased photometry. This consists of V , R and I imaging for the 10 highest redshift cluster candidates, aimed at providing a sample at z ∼ 0.8 (hereafter the high-z sample) and B, V and I imaging for 10 intermediate-redshift candidates, aimed to provide a sample at z ∼ 0.5 (hereafter the mid-z sample). In practice, the redshift distributions of the high-z and the mid-z samples partly overlap (Milvang-Jensen et al. 2008). Spectra of > 100 galaxies per cluster field were obtained, with typical exposure times of 4 hours for the high-z sample and 2 hrs for the mid-z sample. Spectroscopic targets were selected from I-band catalogs (Halliday et al. 2004). At the redshifts of our clusters, this corresponds to ∼ 5000 ± 500 Å rest frame. Conservative rejection criteria based on photometric redshifts (Pelló et al. in prep.) were used in the selection of spectroscopic targets to reject a significant fraction of nonmembers while retaining a spectroscopic sample of cluster galaxies equivalent to a purely I-band selected one. A posteriori, we verified that these criteria have excluded at most 1-3% of cluster galaxies (Halliday et al. 2004 andMilvang-Jensen et al. 2008). The spectroscopic selection, observations, and catalogs are presented in Halliday et al. (2004) and Milvang-Jensen et al. (2008). In this paper we make use of the spectroscopic completeness weights derived by Poggianti et al. (2006). Here we only give a brief summary of the completeness of our spectroscopic sample, referring the reader to the previous paper for details. Given the long exposure times, the success rate of our spectroscopy (number of redshift/number of spectra taken) is 97% above the magnitude limit used in this study. A visual inspection of the remaining 3% of the galaxies reveals that most of these are bright, featureless low-z galaxies. 
Moreover, in our previous paper we computed the spectroscopic completeness as a function of galaxy magnitude and position within the cluster (Appendix A), verified the absence of biases in the completeness-corrected sample (Appendix B) and found that incompleteness has a negligible effect on the [OII] properties of our clusters. In this paper, we analyze 16 of the 20 fields that comprise the original EDisCS sample. We exclude two fields that lack several masks of deep spectroscopy (cl1122.9-1136 and cl1238.5.114; see Halliday et al. 2004 andWhite et al. 2005). We also exclude two additional systems (cl1037.9-1243 and cl1103.7-1245), each of which has a neighboring rich structure at a slightly different redshift (Milvang-Jensen et al. 2008) that is indistinguishable on the basis of photometric properties alone. The names, redshifts, velocity dispersions, and numbers of spectroscopic members for the remaining 16 clusters are listed in Table 1. The EDisCS spectra have a dispersion of 1.32 Å/pixel or 1.66 Å/pixel, depending on the observing run. They have a FWHM resolution of ∼ 6 Å, corresponding to rest-frame 3.3 Å at z=0.8 and 4.3 Å at z=0.4. The equivalent widths of [OII] were measured from the spectra using a line-fitting technique, as outlined in Poggianti et al. (2006). This method includes visual inspection of each 1D spectrum. Each line detected in a given 1D spectrum was confirmed by visual inspection of the corresponding 2D spectrum; this is especially useful to assess the reality of weak [OII] lines. We do not attempt to separate a possible AGN contribu- Halliday et al. (2004) and Milvang-Jensen et al. (2008). tion to the [OII] line, or to exclude galaxies hosting an AGN. We are unable to identify AGNs in our data, as the traditional optical diagnostics are based on emission lines that are not included in the spectral range covered by most of our spectra. AGN contamination will be most relevant for our study if there are non-starforming, red galaxies in which the [OII] emission originates exclusively from processes other than star formation. We note that only 13% of the spectroscopic sample used here is composed of red galaxies 1 with a detected [OII] line in emission. About 40% of these are spirals of types Sa or later. Among local field galaxies, about half of the red [OII]-emitting galaxies are LINERs whose source of ionization is still debated, while the rest are either star-forming or Seyferts/transition objects in which star formation dominates the line emission (Yan et al. 2006). Adopting the distribution of AGN types observed locally in the field and conservatively assuming that all LINERs are devoid of star formation, we estimate that the contamination from pure AGNs in our sample is at most 7%. Moreover, we have verified that the fraction of red emission-line galaxies is not a function of local density. It remains true, however, that all the trends we observe, and their evolution, may reflect a combination of the variations in the level of both star formation activity and AGN activity. This should be kept in mind throughout the paper and when comparing our results with any other work. With these caveats in mind, we conveniently refer to galaxies interchangeably as "star-forming" or "[OII] galaxies" whenever their EW([OII] )> 3 Å , adopting the convention that EWs are positive in emission. The detection of the [OII] line above this EW limit is essentially complete in our spectroscopic sample (see Poggianti et al. 2006 for details). 
THE DEFINITION OF THE VARIOUS ENVIRONMENTS As can be seen in Table 1, EDisCS structures cover a wide range of velocity dispersions, from massive clusters to groups. For brevity, we will refer collectively to these structures as "EDisCS clusters". In addition, within the EDisCS data set it is possible to investigate the spectroscopic properties of galaxies in even less densely populated environments, at the same redshift as our main structures. In a redshift slice within ±0.1 in z from the cluster/group targeted in each field, where we are sure the spectroscopic catalog can be treated as a purely I-band selected sample with no selection bias, we have identified other structures as associations in redshift space as described in Poggianti et al. (2006) . These associations have between 3 and 6 galaxies and will hereafter be referred to as "poor groups". We did not attempt to derive velocity dispersions for these systems, given the small number of redshifts per group. In total, our poor group sample comprises 84 galaxies brighter than the magnitude limits adopted for our analysis (absolute V magnitude brighter than -20, see below). Finally, within the same redshift slices, any galaxy in the spectroscopic catalogs that is not a member of our clusters, groups or poor group associations is treated as a "field" galaxy. This field galaxy sample is composed of 162 galaxies brighter than our limit and should be dominated by galaxies in regions less populated and less dense than the clusters and groups, although will also contain galaxies belonging to poor structures that went undetected in our spectroscopic catalog. Our field sample is therefore far from being similar to the galaxy sample in general "field" studies, which will be dominated by a combination of group, poor group, and field galaxies according to our environment definition. The median redshift is 0.58 for the field sample and 0.66 for the poor group sample. Redshift and EW([OII]) distributions of our poor group and field samples are given in Poggianti et al. (2006) . Computing galaxy masses as outlined in §7, we find that the mass distribution of galaxies varies significantly with environment, progressively shifting towards higher masses from the field to the poor groups to the clusters. This corresponds to a difference in the galaxy luminosity distributions, which was shown in Poggianti et al. (2006) . We build field and poor group samples that are matched in mass to the cluster sample (hereafter the "mass-matched" field and poor group samples), drawing for each cluster galaxy a field or poor group galaxy with a similar mass. In the following, we will present the results for both mass-matched and unmatched samples, but show only mass-matched values in all figures. PROJECTED LOCAL GALAXY DENSITIES The projected local galaxy density is computed for each spectroscopically confirmed member of an EDisCS cluster. It is derived from the circular area A that in projection on the sky encloses the N closest galaxies brighter than an absolute V magnitude M V lim . The projected density is then Σ = N/A in number of galaxies per square megaparsec. In the following we use N=10, as have most previous studies at these redshifts. For about 7% of the galaxies in our sample, the circular region containing the 10 nearest neighbors extends off the chip. Since the local densities of these sources suffer from edge effects, they were excluded from our analysis. 
Densities are computed both by adopting a fixed magnitude limit M V lim = −20 and by letting M V lim vary with redshift between -20.5 at z = 0.8 and -20.1 at z = 0.4 to account for passive evolution. Absolute galaxy magnitudes are derived as described in Poggianti et al. (2006) . We have used two different radial limits to derive the mean properties of galaxies in each density bin. First, we tried using only galaxies within R 200 (the radius delimiting a sphere with an interior mean density of 200 times the critical density, approximately equal to the cluster virial radius). We also tried including all galaxies in our spectroscopic sample, regardless of distance from the cluster center. The values of R 200 computed for our clusters, as well as sky maps showing R 200 relative to the extent of the spectroscopic sample, are given in Poggianti et al. (2006). For most clusters, our spectroscopy extends to R 200 , while severe incomplete radial sampling occurs for one cluster, Cl1232, which will be treated separately when relevant, e.g. in §7.2. The results do not change whether we confine our analysis to R 200 or use our full spectroscopic sample, nor whether we use a fixed or a varying magnitude limit. Therefore, to maximise the number of galaxies we can use and to minimize the statistical errors, we show the results for M V lim = −20 and with no radial limit, unless otherwise stated. We apply three different methods to identify the 10 cluster members that are closest to each galaxy. These yield three different estimates of the projected local density, which we compare in order to assess the robustness of our results. In the first method, the density is calculated using all galaxies in our photometric catalogs and is then corrected using a statistical background subtraction. The number of field galaxies within the circular area A and down to the magnitude limit adopted for the cluster is estimated from the I-band number counts derived for a 4 deg x 4 deg area by Postman et al. (1998). In the other two methods we include only those galaxies that are considered cluster members according to photometric redshift estimates. As described in detail in Pelló et al. (in prep.), photometric redshifts were computed for EDisCS galaxies using two independent codes: a modified version of the publicly available Hyperz code (Bolzonella, Miralles & Pelló 2000), and the code of Rudnick et al. (2001) with the modifications presented in Rudnick et al. (2003). We use two different criteria to retain cluster members and reject probable non-members. In the first case, a galaxy is accepted as a cluster member if the integrated probability that the galaxy lies within ±0.1 in z from the cluster redshift is greater than a specific threshold for both photometric redshift codes. The probability threshold is chosen to retain about 90% of the spectroscopically confirmed members in each cluster. In the other method a galaxy is retained as a cluster member if the best photometric estimate of its redshift from the Hyperz code is within ±0.1 in z from the cluster redshift. The projected local density distributions obtained with the three methods are shown in Fig. 1. We note that throughout the paper we use only proper (not comoving) lengths, areas, and volumes. For example, our local densities are given as the number of galaxies per Mpc 2 , as measured by the rest-frame observer. This choice is dictated by the fact that with local densities we are investigating vicinity effects, and gravitation depends on proper distances. 
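A minimal sketch of this nearest-neighbour estimator is given below (Python). The positions are simulated, the photometric-redshift membership selection and statistical background subtraction described above are omitted, and the edge-effect exclusion is implemented as a simple rectangular field cut; it is meant only to illustrate Σ = N/A for the N = 10 closest neighbours.

```python
import numpy as np

def projected_density(x, y, n_neighbors=10, xlim=None, ylim=None):
    """Sigma_N = N / (pi * d_N**2): projected local density (galaxies per proper Mpc^2)
    from the circular area enclosing the N nearest projected neighbours of each galaxy.
    x, y: projected positions (proper Mpc) of members brighter than the magnitude limit.
    Galaxies whose enclosing circle extends beyond the field limits are returned as NaN."""
    x, y = np.asarray(x), np.asarray(y)
    sigma = np.full(x.size, np.nan)
    for i in range(x.size):
        d = np.hypot(x - x[i], y - y[i])
        d_n = np.sort(d)[n_neighbors]          # d[0] = 0 is the galaxy itself
        if xlim is not None and ylim is not None:
            if (x[i] - d_n < xlim[0] or x[i] + d_n > xlim[1] or
                    y[i] - d_n < ylim[0] or y[i] + d_n > ylim[1]):
                continue                        # edge effects: exclude from the analysis
        sigma[i] = n_neighbors / (np.pi * d_n ** 2)
    return sigma

# Toy example: 200 members scattered over a 4 x 4 proper Mpc field
rng = np.random.default_rng(1)
x, y = rng.uniform(0.0, 4.0, 200), rng.uniform(0.0, 4.0, 200)
sigma10 = projected_density(x, y, n_neighbors=10, xlim=(0.0, 4.0), ylim=(0.0, 4.0))
print(f"median Sigma_10 = {np.nanmedian(sigma10):.1f} gal Mpc^-2, "
      f"{int(np.isnan(sigma10).sum())} galaxies excluded for edge effects")
```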
LOW REDSHIFT SAMPLE Using the SDSS, we have compiled a sample of clusters and groups at 0.04 < z < 0.1. This sample serves as a low-redshift baseline with which we can compare our high-z results. The SDSS cluster sample is described in Poggianti et al. (2006) 2 and comprises 23 Abell clusters with velocity dispersions between 1150 and 200 km s −1 , with an average of 35 spectroscopically confirmed members per cluster. To approximate the EDisCS spectroscopic target selection, which was carried out at rest-frame 5000 ± 500 Å , we used a g-selected sample extracted from the SDSS spectroscopic catalogs. Local densities were computed for spectroscopic cluster members (within 3σ from the cluster redshift) that lie within R 200 from the cluster center. For Sloan, the radial cut is necessary to approximate the EDisCS areal coverage, which reaches out to about R 200 . We choose a galaxy magnitude limit of M V < −19.8, which maximizes the number of galaxies we can use in our analysis and would correspond to −20.1 at z = 0.4 and −20.5 at z = 0.8 under the assumption of passive evolution. As for EDisCS, local densities were derived using the circular area encompassing the 10 nearest neighbors. Two methods were employed to obtain two independent estimates of local densities. In the first method, we find the distance to the tenthnearest projected neighbor considering only spectroscopically confirmed members brighter than M V < −19.8. In the second method, we include the 10 nearest projected neighbors that are within the Sloan photometric catalog and that have an estimated absolute magnitude satisfying M V < −19.8. Absolute magnitudes were derived from observed magnitudes assuming that all galaxies lie at the cluster redshift and using the transformations of Blanton et al. (2003). Both the spectroscopic and the photometric densities were computed from catalogs of galaxies located over an area much larger than R 200 to avoid edge effects. Since the spectroscopic completeness within R 200 of the SDSS clusters is on average about 84%, the first method is likely to underestimate the "true" density. Using only the photometric catalog, we ignore the fact that some of the 10 galaxies might be in the background or foreground of the cluster. The second method therefore overestimates the value of the density. Thus, the spectroscopically-based and photometrically-based local densities represent lower and upper limits to the local density. Density values that have been corrected for spectroscopic incompleteness and for background contamination will lie between these two values. The local density distributions derived with the two methods are shown in the right panel of Fig. 1. As expected, the distribution of spectroscopically-based densities is shifted to slightly lower densities than the distribution based on photometry, but in the following we will see that the conclusions reached by the two methods are fully consistent. Compared to the density distribution at high-z shown in the left panel of the same figure, the low-z distribution is shifted to lower densities. In Poggianti et al. (in prep.) we show this is due to the fact that high-z clusters are on average denser by a factor of (1 + z) 3 compared to nearby clusters, with possible strong consequences on galaxy evolution. The values of EW ([OII] ) measured with the method we used for EDisCS spectra are in very good agreement with those measured by Brinchmann et al. (2004a,b) for starforming galaxies in the SDSS (see Fig. 3 in Poggianti et al. (2006)). 
However, for EDisCS galaxies, the EW([OII] ) was measured only when the line was present in emission, and a value EW([OII] )=0 was assigned when no line was present. In addition, each 1D and 2D spectrum was visually inspected. In contrast, the SDSS [OII] measurements of Brinchmann et al. (2004a,b) are fully automated and can even yield a (small) value in absorption that is compatible with 0 within the error and that cannot be ascribed to the [OII] 3727 line. To take this into account, for SDSS clusters we have used the EWs provided by Brinchmann et al. (2004a,b), but have forced the EW([OII] ) to be equal to 0 when the value provided by Brinchmann et al. is EW < 0.8 Å in emission. Moreover, to be compatible with EDisCS, we do not exclude AGNs from our SDSS analysis. Finally, the redshift range of our Sloan clusters was chosen as a compromise to minimize aperture effects while still sampling sufficiently deep into the galaxy luminosity function. The 3 ′′ SDSS fiber diameter covers the central 2.4-5.4 kpc of galaxies depending on redshift, compared to the 1 ′′ EDisCS slit covering 5.4-7.5 kpc at high redshift. In the following, we assume that [OII] equivalent widths do not change significantly over these different areas. [OII] strength and star-forming fractions as a function of local density at high-z We first investigate how the strength of the [OII] line varies in EDisCS clusters as a function of projected local density. Figure 2 shows the mean equivalent width of [OII] measured over all galaxies that are spectroscopically confirmed members of EDisCS clusters (black symbols), in bins of local density. The three different estimates of density described in §3 (empty and filled circles and crosses) yield similar results within the errors. In the following, errors on mean equivalent widths are computed as bootstrap standard deviations, and errors on fractions are computed from Poissonian statistics. The mean EW computed over all galaxies is consistent with being flat up to a density ∼ 70 gal/Mpc 2 (the Kendall's probability for an anticorrelation in the three lowest density bins is only 40%), then decreases at higher densities. These trends arise from a combination of the incidence of star-forming galaxies and the relation between density and EW in starforming galaxies. As shown in fig. 3, the proportion of star-forming galaxies tends to decline at higher density. The fraction of star-forming galaxies decreases from about 60% to ≤30%. Using the average of the values given by the 3 membership methods, the Kendall test gives a 95% probability of an anticorrelation. In contrast, fig. 4 shows that the mean EW([OII] ) computed only for galaxies with emission lines does not correlate with local density (the Kendall's correlation probability is 38%). It is consistent with being flat over most of the density range, except for the highest density bin centered on ∼ 450 galaxies per Mpc 2 , where it drops by a factor 2 to 3. As will be shown in §8, the highest density bin is populated only by elliptical galaxies, whose weak [OII] may be related to the presence of an AGN. It is thus not surprising that this bin stands out from the other bins, where star-forming spirals dominate the mean behavior. The constancy of the average [OII] equivalent width in starforming galaxies at most densities suggests that as long as star formation is active, it is on average unaffected by local environment, at least in clusters. 
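The sketch below (Python, with entirely simulated inputs) illustrates the three statistical ingredients used in this section: a bootstrap standard deviation for the mean EW([OII]) of star-forming galaxies, a Poissonian error on the star-forming fraction, and a Kendall rank test for an (anti)correlation with local density. Converting the Kendall p-value into a "correlation probability" is only an approximation of the convention quoted in the text.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(42)

def mean_ew_bootstrap(ew, n_boot=1000):
    """Mean EW([OII]) of star-forming galaxies and its bootstrap standard deviation."""
    means = [np.mean(rng.choice(ew, size=ew.size, replace=True)) for _ in range(n_boot)]
    return np.mean(ew), np.std(means)

def sf_fraction_poisson(n_sf, n_tot):
    """Star-forming fraction with a simple Poissonian error on the numerator."""
    return n_sf / n_tot, np.sqrt(n_sf) / n_tot

# Simulated EWs for the [OII] emitters in one local-density bin (EW > 3 A by construction)
ew = rng.exponential(15.0, size=40) + 3.0
print("mean EW([OII]) = %.1f +/- %.1f A" % mean_ew_bootstrap(ew))
print("f_[OII] = %.2f +/- %.2f" % sf_fraction_poisson(n_sf=40, n_tot=90))

# Kendall rank test between binned density and star-forming fraction (invented trend)
log_density = np.array([0.8, 1.2, 1.6, 2.0, 2.4])
f_oii = np.array([0.60, 0.55, 0.48, 0.40, 0.28])
tau, p_value = kendalltau(log_density, f_oii)
print("Kendall tau = %.2f, two-sided P = %.3f  (~%.0f%% 'probability' of an anticorrelation)"
      % (tau, p_value, 100.0 * (1.0 - p_value)))
```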
Similarly, when considering only galaxies with ongoing star formation, the mass-matched field and poor group values are comparable to those at most densities in clusters ( fig. 4). The mean EW of [OII] field and poor group galaxies is 17.2±1.5 Å and 14.5±2.2 Å , respectively (18.0±1.5Å and 18.9±2.2Å in the unmatched samples). Finally, the fraction of [OII] galaxies in the field is 62±8% (matched, fig. 3) and 72±8% (unmatched). In the poor groups, the mass-matched [OII] fraction is 80.0 ±10% (matched) and 77±10% (unmatched). The star-forming fraction in the field is compatible with that observed at most densities in clusters, while the poor group fraction is slightly higher. Therefore, relative to clusters, the unmatched poor groups and field have higher average EWs and star-forming fractions. Our results indicate that this is primarily due to differences in the galaxy mass distribution with environment. Using galaxy samples with similar mass distributions, we find that the EW properties of star-forming galaxies do not differ significantly between clusters, poor groups, and the field, neither with local density within clusters as shown in the previous section. [OII] -local density relation as a function of cluster mass To assess whether the relation between star formation and density is the same in structures of different mass, we divide our cluster sample into different velocity dispersion bins and show the correlation found above, between the star-forming fraction and density, in Fig.5. The analysis is now done using only galaxies within R 200 . As above, errors on fractions are computed from Poissonian statistics. The two most massive clusters (σ > 800 km s −1 ) exhibit a flatter relation. i.e. have lower [OII] fractions in the three lowest density bins, than clusters with σ < 800 km s −1 . 3 In contrast, we find that at low-z this relation is indistinguishable in clusters with σ above and below 800 km s −1 , in agreement with previous works that found no dependence of the correlation between the star-forming (Hα-emitting) fraction and density from the cluster velocity dispersion in the local universe (Lewis et al. 2002, Balogh et al. 2004. We note that although the relation between [OII] fraction and local density varies with cluster mass at high-z, the relation between star formation and physical three-dimensional space density may be constant, since the distribution of physical densities in each projected 2D density bin varies with cluster mass (Poggianti et al. in prep.). Dividing the high-z sample into finer velocity dispersion bins, we do not find a continuous trend with velocity dispersion (Fig.5). Systems with 600 < σ < 800 km s −1 may lie at larger or comparable [OII] fractions than systems with σ < 600 km s −1 , but our errorbars are too large to draw any conclusion. In Poggianti et al. (2006) we studied how the fraction of [OII] galaxies depends on the cluster velocity dispersion σ. In an [OII] fraction-σ diagram, one can identify three groups of structures: a) high-mass structures, all with low [OII] fractions; b) low-mass structures with high [OII] fractions and c) low-mass structures with low [OII] fractions (the so-called "outliers" in Poggianti et al. (2006) ). In addition to having a low [OII] fraction, the outliers have a low fraction of blue galaxies ), a high fraction of early-type galaxies given the measured velocity dispersions (Simard et al. 2008), and peculiar [OII] equivalent width distributions (Poggianti et al. (2006) ). 
Hence, galaxies in the outliers resemble those in the cores of much more massive clusters. The presence of low-mass structures with low [OII] fractions in our sample could be responsible for a non-monotonic trend of the [OII] fraction-density relation with cluster mass. We study the dependence of the [OII] fraction on local projected density for the three groups separately in the right panel of Fig.5. Except for the lowest density bin where the results of all three groups are compatible within the errors, the trend with local density is different in the three groups. At any given density, the star-forming fraction in low-mass, low-[OII] groups is significantly lower than those in low-mass high-[OII] systems. In Poggianti et al. (2006) we found that the global [OII] fraction in distant clusters relates to the system mass, but not solely to the system mass, at least as estimated from the observed velocity dispersion. Here we find that the [OII] fraction does not depend solely on projected local density but also on global environment, and that variations in the star-forming fraction-density relation do not depend uniquely on cluster mass. In principle, the correlations with system mass and local density could have a single common origin, from i.e. a correlation between galaxy properties and physical density in three-dimensional space. In a separate paper (Poggianti et al. in prep.), we use numerical simulations to investigate the relations between projected local density, physical 3D density, and cluster mass. The aim of that paper is a simultaneous interpretation of the observed trends with local density and cluster mass presented in this paper and in Poggianti et al. (2006) . The EW([OII])-density relation at low redshift The SDSS results are shown as red symbols (triangles) in fig. 2, fig. 3 and fig. 4. At low-z, the mean EW([OII] ) computed for all galaxies continuously and smoothly decreases with local density ( fig. 2). This trend is driven by the decrease in the star-forming fraction with local density ( fig. 3). Using the average of the values obtained with the two density estimates, the Kendall's test yields a 98.5% probability of an anti-correlation. As at high-z, the mean [OII] strength of [OII] -galaxies does not vary significantly over most of the density range (Kendall's probability 82.6%), except for a decrease in the highest density bin (fig. 4). These findings are in agreement with previous low-z results based on SDSS and 2dFGRS (Lewis et al. 2002, Balogh et al. 2004. From a quantitative point of view, blindly comparing the high-z and the low-z results, at any projected density in common we observe a lower average EW([OII]) at low-z. This is due to both a lower average [OII] fraction and a lower average EW([OII]) in star-forming galaxies at low-z. Taken at face value, this indicates that both the proportion of star-forming galaxies and the star formation activity in them decrease with time at a given density. However, it is worth stressing that observing similar projected densities at different redshifts does not imply similar physical densities, since the correlation between projected and 3D density varies with redshift (Poggianti et al. in prep.). Hence, a quantitative comparison of results at different epochs at a given projected density cannot be interpreted as a direct measure of the decline of the star formation activity with time for similar "environmental" physical conditions. 
Moreover, we stress again that to the low-z density distribution is shifted to lower densities compared to clusters at highz. This is due to the fact that high-z clusters are on average denser by a factor of (1 + z) 3 compared to nearby clusters, as discussed in Poggianti et al. (in prep.). From a qualitative point of view, the only difference between high-and low-z observations is the fact that the high-z EW(OII) averaged over all galaxies is consistent with being flat up to a density ∼ 70 gal/Mpc 2 (40% probability for an anticorrelation), while the low-z trend smoothly declines towards higher densities with no discontinuity (98.5% probability for an anticorrelation). For the rest, the trends of [OII] equivalent widths and starforming fraction with local density are qualitatively very similar at z = 0 and z = 0.8, showing that an [OII] fraction-density relation similar to that observed locally is already established in clusters at these redshifts, and that the activity in starforming cluster galaxies, when assessed from the EW of the [OII] line, does not appear to depend strongly on local density at any redshift. STAR FORMATION RATES The star formation rate (SFR) of a galaxy can be roughly estimated from the [OII] line flux in its integrated spectrum. The equivalent width is the ratio between the line flux and the value of the underlying continuum. Thus, it is not directly proportional to the star formation rate. For example, a faint late-type galaxy in the local Universe usually has a higher equivalent width, but a comparable or lower star formation rate than a more luminous spiral. For this reason, the analysis presented above does not yield information on absolute star formation rates, but only on the strength of star formation relative to the galaxy ∼ U restframe luminosity, which itself depends on the current star formation activity. We have derived star formation rates of galaxies with EDisCS spectra by multiplying the value of the observed equivalent width by the value of the continuum flux estimated from our broad-band photometry. For the latter we have used the total galaxy magnitude as estimated from spectral energy distribution (SED) fitting (Rudnick et al. 2008), assuming that stellar population differences between the galactic regions falling in and out of the slit are negligible (see §5). 4 We use the conversion SFR(M ⊙ yr −1 ) = L([OII]) erg s −1 /(1.26 × 10 41 ) (Kewley et al. 2004), adopting an intrinsic (with no dust attenuation) flux ratio of [OII] and Hα equal to unity with no strong dependence on metallicity, as found by Moustakas et al. (2006). At this stage we do not attempt to correct our star formation estimates for dust extinction. Locally, the typical extinction of [OII] relative to Hα is a factor 2.5, and the typical extinction at Hα is an additional factor 2.5-3, so our SFR estimates would be corrected by a factor ∼ 7 for extinction, 5 but there are large galaxy-to-galaxy and redshift variations and they are hard to derive using only optical spectra. Dust-free SFRs based on 4 We do not attempt to compare with SFRs in Sloan, as SFRs are more sensitive to aperture effects than EWs, and dishomogeneity in observations and photometry between the two datasets would render a quantitative comparison highly uncertain. 5 A comparison of our [OII]-based SFRs with those derived from Hα narrow-band photometry from Finn et al. (2005) shows that the relation between the two does not strongly deviate from that derived using the local typical factor ∼ 7 for extinction. 
Spitzer data of the EDisCS clusters will be presented in Finn et al. (in prep). Adopting the same criteria and galaxy sample as in Sec. 4, we derive the mean star formation rate for galaxies in bins of local projected density and compute errors as bootstrap standard deviations. The lower panel of Fig. 6 shows the results including all cluster members as black symbols. The mean SFR is ∼ 1 − 1.2 M ⊙ yr −1 in low-density regions and declines towards denser regions. The mean SFR might present a maximum at a density between 15 and 40 galaxies/Mpc 2 , though within the errors the values of the two lowest density bins may be consistent. The corresponding mean SFR for all galaxies in the field and poor groups is 0.8-1.2 (0.9-1.3 in the unmatched samples), comparable to the average in cluster low-density regions. Considering only star-forming galaxies, the trend with local density in clusters remains similar and is shifted to higher SFR values with a maximum of ∼ 1.8 M ⊙ yr −1 between 15 and 40 galaxies/Mpc 2 (blue points in Fig.6). Errorbars are larger here due to the reduced number of galaxies. Nevertheless, the presence of a peak is hinted at by the data at the 1-2σ level. To further assess the significance of the peak, we have computed mean and median SFR values for star-forming galaxies in 5, 4, and 3 equally-populated density bins. The latter are shown in the top panel of Fig.6, together with the SFR values for individual galaxies. The equally-populated bins confirm the presence of the peak at the 2 to 4σ level. A KS test rejects the null hypothesis of similar SFR distributions in star-forming galaxies in the peak density bin and in each one of the other bins with a 98.4% and 98.2% probability. The galaxy mass distribution varies only slightly from one density bin to another. In any case, we have verified that the significance of the peak in the mean and median SFR remains the same when matching the mass distribution of galaxies in the lowest and highest density bin to that in the bin with the peak. The distribution of individual points in the top panel of Fig. 6 is also visually consistent with higher SFRs for a significant number of galaxies at densities between 15 and 50 galaxies/Mpc 2 . The peak SFR is higher than the mean values of 1.17 ± 0.14 M ⊙ yr −1 for mass-matched field star-forming galaxies, but is compatible within the errors with the mass-matched poor group value of 1.44 ± 0.25 (blue lines in Fig.6). The unmatched samples yield similar results (1.2 − 1.3). Figure 7 shows the average specific star formation rate (sSFR), defined as the SFR per unit of galaxy stellar mass, as a function of local galaxy density. Galaxy stellar masses were computed from rest-frame absolute photometry derived from SED fitting (Rudnick et al. 2008), adopting the calibrations of Bell & De Jong (2001), which are based on a diet Salpeter IMF. Cluster trends are similar to the SFR-density diagram, reinforcing the picture of a peak and a declining trend on both sides of the peak. The average specific star formation rates in mass-matched samples of star-forming field galaxies (3.9 ± 0.4 10 −11 yr −1 ) and of poor groups (2.6 ± 0.3 10 −11 yr −1 ) are comparable to those found in the low-density regions of clusters. Interestingly, using the unmatched samples, the field would be markedly distinct from the other environments, having higher specific star formation rates by a factor of two or more. 
This shows that on average our star-forming field galaxies are forming stars at more than twice the rate per unit of galaxy mass than star-forming galaxies in any other environment we have observed, and that this is due to their average lower galaxy mass. Our results show that in distant clusters the average star formation rate and the specific star formation rate per galaxy, computed both over all galaxies and only among star-forming galaxies, may not follow a continuously declining trend with density. The most striking result is the significance of the peak in the SFR of star-forming galaxies discussed above. The average star formation rate over all galaxies decreases with density in the general field at z = 0 (Gomez et al. 2003), but distant field studies have found that the relation between average star formation rate over all galaxies and local density was reversed at z = 1, when the SFR increases with density, at least up to a critical density above which it may decrease again (Cooper et al. 2007, Elbaz et al. 2007). These high-z surveys sample different regions of the Universe (the general "field") and slightly higher redshifts than our survey (z ∼ 0.75 − 1.2). The range of projected densities in these studies is likely to overlap with our range only in their highest density bins, but a direct comparison is hampered by the different measurement methods of local density. It is compelling, however, that both we and these studies find a possible peak plus a possible decline on either side of the peak. Unfortunately, none of these studies sample a sufficiently broad density range to be sure of the overall trend. It is possible that the SFR per galaxy at redshifts approaching 1 presents a maximum at intermediate densities (corresponding to the groups/filaments that are common to all of these studies), and declines both towards higher and lower density regions. Large surveys sampling homogeneously a wide range of environments and local densities at z = 0.5 − 1 should be able to address this question. Comparing the SFR-density and the EW-density relations To summarize the results presented in the previous sections, there are some notable differences between the "star formation-density" relation as depicted by the observed equivalent widths (EW([OII])-density relation), and that portrayed by the measured star formation rates (SFR-density relation). The main differences are best seen by comparing Fig. 4 with Fig. 6, and can be described as follows. The "strength" of star formation in star-forming galaxies, when assessed from the EW([OII]), is consistent with being flat with density in clusters (except for the strong depression in ellipticals in the densest regions), and to be rather similar in equally massive field and poor group galaxies. The "strength" of star formation in star-forming galaxies, when represented by the SFR, possibly peaks in clusters at ∼ 30 galaxies/Mpc 2 , exceeding the field value. This finding appears robust to any statistical test we have applied. However, data for larger galaxy samples will be needed to confirm this result. From the EW([OII])-density relation one would conclude that on average the star formation activity in currently starforming galaxies is invariant with both local and global environment, while from the SFR-density relation one may conclude that the star formation rate is possibly boosted by the impact with the cluster outskirts, as several studies have suggested (see e.g. Milvang-Jensen et al. 2003, Bamford et al. 2005. 
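To make the SFR estimates described earlier in this section concrete, the sketch below strings together the three steps used there: converting EW([OII]) plus a broad-band continuum estimate into a dust-uncorrected SFR via the Kewley et al. (2004) calibration, averaging in equally populated density bins, and checking the significance of the peak with a two-sample KS test. The assumed cosmology, the neglect of (1+z) bookkeeping in the EW-to-flux step, and all function and variable names are illustrative assumptions rather than the actual pipeline.

```python
import numpy as np
from scipy.stats import ks_2samp
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)   # assumed cosmology, for illustration only

def sfr_from_oii(ew_oii, continuum_flux_density, z):
    """Dust-uncorrected SFR in Msun/yr from the observed EW([OII]) (Angstrom)
    and the continuum flux density near [OII] (erg s^-1 cm^-2 A^-1) estimated
    from broad-band photometry."""
    line_flux = ew_oii * continuum_flux_density            # erg s^-1 cm^-2
    d_l = cosmo.luminosity_distance(z).to(u.cm).value
    l_oii = 4.0 * np.pi * d_l ** 2 * line_flux             # erg s^-1
    return l_oii / 1.26e41                                  # Kewley et al. (2004) calibration

def means_in_equal_bins(density, sfr, n_bins=3):
    """Mean and median SFR in n_bins equally populated density bins."""
    density, sfr = np.asarray(density), np.asarray(sfr)
    order = np.argsort(density)
    return [(density[idx].mean(), sfr[idx].mean(), float(np.median(sfr[idx])))
            for idx in np.array_split(order, n_bins)]

def peak_vs_other_bin(sfr_peak_bin, sfr_other_bin):
    """Probability (1 - p) that the SFR distributions in the peak density bin
    and in another bin differ, from a two-sample KS test."""
    return 1.0 - ks_2samp(sfr_peak_bin, sfr_other_bin).pvalue
```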
Variations in star formation histories and dust extinction with density must play a role in causing the differences between the EW and SFR trends, and may conspire to keep the EW([OII]) relation flat. We have instead verified that variations in the galaxy mass distributions are not responsible for the SFR peak (see above). The relation between line EW and local density is often considered equivalent to the SFR-density relation, but we have shown here that they provide different views of the dependence of the star formation activity on environment. Cluster integrated SFRs We derive cluster-integrated SFRs by summing up the SFRs of individual galaxies within the projected R 200 . We derive the individual SFRs from the [OII] line flux as described in the previous section, and weight each galaxy for spectroscopic incompleteness as outlined in §2. We do not attempt to extrapolate to galaxy magnitude limits fainter than the spectroscopic limit adopted for this paper, thus SFRs in galaxies fainter than M V = −20 are not included in our estimate. The cluster-integrated SFR, normalized by the cluster mass (SFR/M) is shown as a function of cluster mass in Fig. 8. The cluster mass has been obtained from the cluster velocity dispersion using eqn. 4 in Poggianti et al. (2006). Error bars are computed by propagating the errors on the observed velocity dispersion and the typical 10% error on the [OII] flux. We reiterate that these SFR estimates are not corrected for extinction. From the Millennium Simulation, we find that mass and radius estimates based on observed velocity dispersions critically fail for systems below ∼ 300 km s −1 , yielding masses that are up to a factor of 10 lower than the true virial mass of the system (Poggianti et al. in prep.). As a consequence, the masses and the mass-normalized SFRs for the two lowest velocity dispersion systems in our sample (CL1119 and Cl1420) are likely to be blatantly incorrect, and will not be used in the analysis. Nevertheless, for completeness we do show the Cl1119 point in the diagrams. Cl1420 has no galaxies showing [OII] emission. It therefore has SFR=0 and is not visible in the plots. All of our other structures have SFR/M between 5 and 50 M ⊙ yr −1 per h −1 10 14 M ⊙ . Having excluded Cl1119 and Cl1420, the Kendall test gives a 95.7% probability for an anticorrelation between SFR/M and cluster mass (Fig. 8). Again without Cl1119 and Cl1420, the average SFR/M is 30.4 and 12.4 M ⊙ yr −1 per h −1 10 14 M ⊙ for systems below and above 2 × 10 14 h −1 M ⊙ , respectively. At redshift ≥ 0.4 there are very few other clusters in the literature with which we can compare. Cluster-integrated SFRs corrected for incompleteness, within a clustercentric distance = R 200 /2, are presented by Finn et al. (2005) based on Hα studies for two additional clusters, Cl0024 6 at z = 0.4 (Kodama et al. 2004) and CL J0023 at z = 0.85. A similar analysis was carried out by Homeier et al. (2005) for a cluster at z = 0.84, except that it was based on [OII] fluxes. Both of these works, when including lower redshift clusters, find a possible anti-correlation between the mass-normalized cluster SFR and the cluster mass similar to ours, although it is impossible to separate the redshift dependence from the mass dependence in such small samples. An overall evolution of the mass-normalized SFR and a large cluster-to-cluster scatter are also found by Geach et al. (2006) using mid-to farinfrared data. 
An upper limit in the mass-normalized SFR versus mass plane has been found to exist for clusters, groups and individual galaxies by Feulner, Hopp & Botzler (2006). The right-hand panel of Fig. 8 shows SFR/M versus M for the three clusters from the literature that were the subject of emission-line studies, plotted alongside the EDisCS points restricted to the same radius (=R 200 /2). The SFRs for the non-EDisCS clusters were corrected to account either for slightly different SFR-[OII] calibrations or for the extinction of [OII] relative to Hα (a factor of 2.5). Including the three clusters from the literature and excluding Cl1119 and Cl1420 as above, we find that the average SFR/M is 33.4 and 9.8 M ⊙ yr −1 per h −1 10 14 M ⊙ for systems below and above 2 × 10 14 h −1 M ⊙ , respectively. The Kendall test yields an anticorrelation probability of 99.2%. In contrast, as shown in the left panel of Fig. 9, the clusterintegrated SFR does not correlate with cluster mass (60% probability), and there is a large scatter in the mass range occupied by the majority of our clusters (1 − 5 × 10 14 h −1 M ⊙ ). Moreover, the right panel of Fig. 9 shows that the SFR per unit mass follows the star-forming fraction (98%). We caution that the anticorrelation between SFR/M and M presented in Fig. 8 could be entirely due to the correlation of errors. We tested this possibility by generating 100 realizations of the dataset used in Fig. 9 (i.e. 100 mass-SFR pairs), drawn from Gaussians with the same means and intrinsic rms, and by adding Gaussian errors as observed. In 41 out of 100 cases the Kendall test gave a probability larger than 95.7% that an anticorrelation between mass and SFR/M exists. Therefore the observed anticorrelation could be mainly driven by correlated errors, although this test cannot rule out the existence of an intrinsic anticorrelation. Although our sample increases the number of available cluster-integrated SFRs by a factor of four, larger cluster samples, in particular clusters at the highest and lowest masses, are clearly needed to verify our three findings: the weak anticorrelation of SFR/M with M, the lack of a correlation between the integrated SFR and M, and the presence of a correlation between SFR/M and star-forming fraction. To further investigate the robustness and the possible origin of these three results, in the left panel of Fig. 10 we show that the integrated star formation is linearly proportional to the number of star-forming galaxies N SF . In fact, the integrated SFR is equal to the number of star-forming galaxies, because the average SFR per star-forming galaxy is roughly constant in all clusters at about 1 M ⊙ yr −1 . The correlation between the integrated SFR and the number of star-forming galaxies is much tighter than the relation between the SFR and the total number of cluster members, also shown in Fig. 10 as empty circles. In Poggianti et al. (2006) we discovered that the starforming fraction in distant clusters generally follows an anticorrelation with cluster mass, with some noticeable outliers, while in nearby clusters the average star-forming fraction is constant for σ > 500 km s −1 , and increases towards lower masses with a large cluster-to-cluster scatter. The starforming fraction is given by f [OII] = N SF /N tot . In Fig. 10 we examine the mass dependence of both the numerator and denominator of this expression. 
We show that in distant clusters the number of star-forming galaxies N_SF does not depend on cluster mass (central panel), while the total number of cluster members N_tot grows with cluster mass (right panel) according to a least-squares fit,

log(N_tot) = 0.56 × log M(h^-1 10^14 M⊙) + 1.73 .    (1)

At z = 0, the star-forming fraction in systems with σ > 500 km s^-1 is constant. If the average star formation activity in star-forming galaxies in these clusters is independent of cluster mass, as it is at high redshift, then the cluster-integrated star formation rate at z = 0 should be not only linearly proportional to the number of star-forming galaxies, but also to the total number of cluster members, as indeed found by Finn et al. (2008). Moreover, in low-z clusters the relation between the total number of cluster members and cluster mass is (triangles in the right panel of Fig. 10)

log(N_tot) = 0.66 × log M(h^-1 10^14 M⊙) + 1.17 .    (2)

As a consequence of eqn. (2) and of the constancy of the star-forming fraction presented in Poggianti et al. (2006), the number of star-forming galaxies in clusters with σ > 500 km s^-1 at low-z, as well as the total integrated star formation rate, must increase with cluster mass. This is at odds with what we find at high-z, where both the number of star-forming galaxies and the total SFR are independent of cluster mass (Fig. 10 and Fig. 9). The different behavior at z = 0.6 and z = 0 is simply due to the different trends of the star-forming fraction with cluster mass at the two redshifts (Poggianti et al. 2006). In systems with velocity dispersions below 500 km s^-1 at z = 0, N_SF/N_tot is no longer independent of cluster mass; its average dependence on mass is given by eqn. (3), taken from Poggianti et al. (2006). Based on eqns. 2 and 3 and the relation between cluster mass and σ, we predict that the average number of star-forming galaxies for low-mass systems at z = 0 should be equal to between 4 and 6 galaxies regardless of group mass, for masses between 2 × 10^13 and 2 × 10^14 h^-1 M⊙. If the average star formation rate per star-forming galaxy is independent of group mass at low-z, as it is at high-z, then the average total group star formation rate in the mass range 2 × 10^13 − 2 × 10^14 h^-1 M⊙ should also be constant, with a very large scatter from group to group at a given mass reflecting the large scatter in the star-forming fraction. Low-z group samples should be able to verify these predictions, which are based purely on the observed correlations presented in this paper and in Poggianti et al. (2006).

Because the integrated SFR is equal to the number of star-forming galaxies in distant clusters, the former is by definition proportional (with a proportionality factor that happens to be equal to 1) to the star-forming fraction multiplied by the total number of cluster members, SFR = N_SF = f([OII]) × N_tot. In distant clusters, the best-fit relation between f([OII]) and cluster mass (eqn. 4) was given by Poggianti et al. (2006). From this, from the fact that the total number of cluster galaxies correlates with cluster mass (eqns. 4 and 1), and given that the integrated SFR is equal to the number of star-forming galaxies (Fig. 10), one can analytically conclude that the SFR/M should correlate with the star-forming fraction, as indeed we observe in Fig. 9.
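Two of the numerical checks described in this section are easy to sketch: the least-squares fit behind eqns. (1) and (2), and the mock realizations used to ask whether correlated errors alone can produce an apparent anticorrelation between SFR/M and M. The parameter values and the one-sided use of Kendall's tau below are illustrative assumptions, not the exact settings used for the paper.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)

def fit_log_ntot_log_mass(mass_1e14, n_tot):
    """Least-squares fit of log(N_tot) = a * log(M / 10^14 h^-1 Msun) + b,
    the functional form of eqns. (1) and (2)."""
    a, b = np.polyfit(np.log10(mass_1e14), np.log10(n_tot), 1)
    return a, b

def spurious_anticorrelation_fraction(n_clusters, mean_logm, rms_logm,
                                      mean_sfr, rms_sfr, err_logm, err_sfr,
                                      n_real=100, threshold=0.957):
    """Fraction of mock realizations in which masses and SFRs drawn
    independently, once observational errors are added, still yield a Kendall
    'anticorrelation probability' between M and SFR/M above the threshold."""
    n_hit = 0
    for _ in range(n_real):
        logm = (rng.normal(mean_logm, rms_logm, n_clusters)
                + rng.normal(0.0, err_logm, n_clusters))      # intrinsic + error
        sfr = np.abs(rng.normal(mean_sfr, rms_sfr, n_clusters)
                     + rng.normal(0.0, err_sfr, n_clusters))
        mass = 10.0 ** logm
        # the mass error enters both axes, which is what correlates the errors
        tau, p = kendalltau(mass, sfr / mass)
        if tau < 0 and 1.0 - p / 2.0 > threshold:
            n_hit += 1
    return n_hit / n_real

# illustrative call (made-up numbers, masses in 10^14 h^-1 Msun, SFRs in Msun/yr):
# print(spurious_anticorrelation_fraction(14, 0.3, 0.35, 25.0, 20.0, 0.15, 8.0))
```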
To summarize, in distant clusters we have observed a weak anticorrelation between SFR/M and cluster mass, the lack of any correlation between cluster-integrated SFR and mass, and the presence of a correlation between SFR/M and starforming fraction. These findings can be explained, and actually predicted, on the basis of three observed quantities: a) the constancy of the average SFR per star-forming galaxy in all clusters, found in this paper (left panel of Fig. 10); b) the correlation between cluster mass and number of member galaxies shown in this paper (eqn. 1 and right panel of Fig. 10); and c) the previously observed dependence of star-forming fraction on cluster mass (Poggianti et al. (2006) ). Observation (a), that the average SFR per star-forming galaxy is constant for clusters of all masses, suggests that either clusters of all masses affect the star formation activity in infalling star-forming galaxies in the same way, or that, if/when they cause a truncation of the star formation, they do so on a very short timescale. In the latter case, star-forming galaxies of a given mass have similar properties inside and outside of clusters. Observation (b), the correlation between cluster mass and number of cluster members, stems from the mass and galaxy accretion history of clusters. These results can be used to test the predictions of simulations. More importantly, comparisons with simulations can allow to explore how our results are linked with the growth history of clusters, which should play an important role in establishing the star-forming fraction. The relative numbers of star-forming versus non-star-forming galaxies (observation (c) above) and, above all, its evolution, remain the key observations that display a strong dependence on cluster mass. Ultimately, understanding the observed trends comes down to finding out why the relative proportion of passive and star-forming galaxies varies with "environment', the latter being either cluster mass or local density. In Poggianti et al. (2006) we proposed a schematic scenario in which there are two channels that cause a galaxy to be passive in clusters today: one due to the mass of the galaxy host halo at z > 2 (a "primordial" effect), and one due to the effects related to the infall into a massive structure (a "quenching" mechanism). The results of this paper are consistent with that simple picture. AGE OR MORPHOLOGY? For 10 EDisCS fields we can study galaxy morphologies from visual classifications of HST/ACS images (Desai et al. 2007) and thus compare our star formation estimates with galaxy Hubble types. In particular, we are interested in knowing whether the trend of SF with local density can be partially or fully ascribed to the existence of a morphology-density relation (MDR). Do the SF trends simply reflect a different morphological mix at different densities, with the SF properties of each Hubble type being invariant with local density? Or do the SF properties of a given morphological type depend on density? Can the lower average SF activity in denser regions be fully explained by the higher proportion of early-type galaxies in denser regions? Desai et al. (2007) have published visual classifications in the form of Hubble types (E, S0, Sa, Sb, Sc, Sd, Sm, Irr). However, for this paper we consider only four broad morphological classes: E (ellipticals), S0s (lenticulars), early-spirals (Sa's and Sb's) and late-spirals (Sc's and later types). 
We note that irregular galaxies (Irr) represent only 10% of our late-spiral class and therefore do not dominate any of the latespiral results we present below. The morphology-density relation for EDisCS spectroscopically-confirmed cluster members brighter than M V = −20 is shown in Fig. 11. We find clear trends similar to what has been observed before in clusters both at high-and low-z (Dressler et al. 1997, Postman et al. 2005. The fraction of spirals decreases, the fraction of ellipticals increases, and the fraction of lenticulars is flat with local density. Previous high-z studies have not considered early-spirals and late-spirals separately. We find that the spiral trend is due to the fraction of late-spirals strongly decreasing with density, while the distribution of early-spirals is rather flat with density (top panel of Fig. 11). The early-spiral density distribution is thus very similar to that of S0 galaxies, suggesting that these objects are the best candidates for the immediate progenitors of the S0 population, which has been observed to grow between z = 0.5 and z = 0 (Dressler et al. 1997, Fasano et al. 2000, Postman et al. 2005, Desai et al. 2007. We consider three observables related to the star formation activity: the [OII] EW and the SFR derived from the [OII] flux described in the previous sections, and the break at 4000 Å. The last is defined as the difference in the level of the continuum just bluer and just redder than 4000 Å. It can be thought of as a "color" and in fact it usually correlates well with broad band optical colors, though it spans a smaller wavelength range than broad bands and is thus less sensitive to the dust obscuring those stars that dominate the spectrum at these wavelengths. We use the narrow version of this index, sometimes known as D4000n, as defined by Balogh et al. (1999), and refer to it as D4000 in the following. The observed values of SFR, EW(OII), and D4000 are plotted as a function of local density for each of our four morphological classes in Fig. 12 (Es, S0s, early-spirals, late-spirals from bottom to upper row). The same figure presents the local density distribution of each morphological class. The highest density bin is only populated by ellipticals, as previously discussed. Two main conclusions can be drawn from this figure: a) Neither the SFR, nor the EW(OII), nor the D4000 distributions of each given morphological class vary systematically with local density. Dividing each morphological class into two equally-populated density bins, we find statistically consistent mean and median values of SFR, EW([OII]), and D4000. The only exception may be a possible deficiency of galaxies with high SFR among early spirals at the highest densities. However, the mean and median SFR in the two density bins differ only at the 1σ level. Thus, as far as it can be measured in our relatively small sample of galaxies, the SF properties of a given morphological class do not depend on density. b) While the great majority of Es and S0s are "red" (==have high values of D4000, and null values of EW(OII) and SFR) and the great majority of late-type spirals are "blue" (==have low D4000 values, OII in emission and ongoing SF), earlyspirals are a clearly bimodal population composed of a red subgroup (D4000 > 1.5) and a blue subgroup (D4000 < 1.3). Approximately 40% of the early-spirals are red with absorption-line spectra and 40% are blue with emission-line spectra, and the rest have intermediate colors. 
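For reference, a minimal implementation of the narrow 4000 Å break used above is sketched below. The band limits (3850-3950 Å and 4000-4100 Å) and the use of flux per unit frequency follow the commonly quoted Balogh et al. (1999) definition; treat them as assumptions to be checked against that reference rather than a guaranteed reproduction of our measurement code.

```python
import numpy as np

def d4000_narrow(wave, flux_lambda):
    """Narrow 4000 A break index: mean flux density redward of the break
    (4000-4100 A) over the mean blueward of it (3850-3950 A).  The average is
    taken in F_nu, obtained from F_lambda through F_nu proportional to
    lambda^2 * F_lambda; the spectrum is assumed to be in the rest frame."""
    wave = np.asarray(wave, dtype=float)
    f_nu = np.asarray(flux_lambda, dtype=float) * wave ** 2
    blue = (wave >= 3850.0) & (wave <= 3950.0)
    red = (wave >= 4000.0) & (wave <= 4100.0)
    return f_nu[red].mean() / f_nu[blue].mean()

# a galaxy with D4000 > 1.5 would fall in the "red" early-spiral subgroup,
# one with D4000 < 1.3 in the "blue" subgroup discussed below
```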
Most of the intermediate-color galaxies have some emission, but at least two out of 10 have recently stopped forming stars (have k+a post-starburst spectra; Poggianti et al. 2008 in prep.) and therefore are observed in the transition phase while moving from the blue to the red group. For their starformation properties, the red early-spirals can be assimilated to the "passive spirals" observed in several previous surveys (Poggianti et al. 1999, Goto et al. 2003, Moran et al. 2007). The early-spiral bimodality is not due to the red subgroup being composed mainly of Sa's and the blue subgroup consisting mainly of Sb's, as the proportion of Sa's and Sb's is similar in the two subgroups. Interestingly, the relative fractions of "red" and "blue" early-spirals does not strongly depend on density, as might have been expected, but there is a tendency for the intermediate color galaxies to be in regions of high projected local density. We now want to calculate whether the observed star formation-density relations can be accounted for by the observed morphology-density relation, combined with the average SF properties of each morphological class. To obtain the trend of star-forming fraction with density expected from the MDR, we compute the fraction of starforming galaxies in each morphological class and combine this with the fraction of each morphological class in each density bin (i.e. the MDR). 7 The result is compared with the observed star-forming fractions in Fig. 13. Similarly, to compute the expected SFR-density relation given the MD-relation, we combine the mean SFR in solar masses per year for each morphological class (0.23±0.1 for Es, 0.15±0.1 for S0s, 1.03±0.16 for early-spirals and 2.71±0.43 for late spirals) with the fraction of each morphological class in each density bin (i.e. the MDR), and compare it with the observed SFR-density relation in Fig. 13. This figure shows that the MDR is able to fully account for the observed trends of star-forming fraction and SFR with density (and viceversa). 8 Hence, we find that the MD relation and the "star formation-density relation" (in the different ways it can be observed) are equivalent. These observations indicate that at least in clusters, for the densities, redshifts, galaxy magnitudes, SF and morphology indicators probed in this study, these are simply two independent ways of observing the same phenomenon, and that neither of the two relations is more "fundamental" than the other. Comparison with low redshift results The equivalence between the SFD and the MD relations that we find in EDisCS clusters is at odds with a number of studies at low redshift. In local clusters, Christlein & Zabludoff (2005) have found a residual correlation of current star formation with environment (clustercentric distance in their case) for galaxies with comparable morphologies and stellar masses. Using the Las Campanas Redshift Survey, Hashimoto et al. (1998) demonstrated that the star formation rates of galaxies of a given structure depend on local density, and that "the correlation between...star formation and the bulgeto-disk ratio varies with environment". More recently, a series of works on other field low-z redshift surveys have concluded that the star formation-density relation is the strongest correlation of galaxy properties with local density, suggesting that the most fundamental relation with environment is the one with star formation histories, not with galaxy structure (Kauffmann et al. 2004, Blanton et al. 2005, Wolf et al. 
2007, Ball, Loveday & Brunner 2008. Most of these stud-ies are based on structural parameters such as concentration or bulge-to-disk decomposition that are used as a proxy of galaxy "morphology". Visual morphologies such as those we use in this paper are known to be related both to structural parameters and star formation. In fact, from a field SDSS galaxy sample at low-z, van der Wel (2008) concludes that structure mainly depends on galaxy mass and morphology depends primarily on environment, and that the MD relation at low-z is "intrinsic and not just due to a combination of more fundamental, underlying relations". Similarly, Park et al. (2007) argue that the strongest dependence on local density is that of morphology, when morphology is defined by a combination of concentration index, color and color gradients. Interestingly, having fixed morphology and luminosity, these authors find that both concentration and star formation related observables are nearly independent of local density. The difference between a classification based on structural parameters and one obtained from visual morphology may be responsible for the differences between our high-z results and most, but not all, low-z results. Using visual morphologies of galaxies in the supercluster A901/2, Wolf et al. (2007) find that the mean projected density of galaxies of a given age does not depend on morphological class, and conclude there is no evidence for a morphology-density relation at fixed age. In their sample, except for the latest spirals, which are all young, galaxies of every other morphological type span the whole range of ages, i.e. there are old, intermediate-age, and young Es, S0s, and early spirals. In contrast, as discussed previously, our morphological classes correspond to a strong segregation in age: practically all ellipticals and S0s are old, all late spirals are young, and only early spirals are a bimodal population in age. To facilitate the comparison with Wolf et al. (2007), in particular with their Fig. 5c, in Fig. 14 we present our results as mean projected density for galaxies of different stellar "ages" as a function of morphological class. The notation "old" and "young" separates galaxies with red and blue D4000 (> / < 1.3). Figure 14 shows that we find an "MD-relation" at fixed age, i.e. a difference in mean density for galaxies of the same age but different morphological type, for example between old Es and old S0s. Since age trends with density are observed only at faint magnitudes by Wolf et al., the fact that their galaxy magnitude limit is 2 mag deeper than ours may partly or fully explain the discordant conclusions. The SFD and the MD relations may be equivalent at bright magnitudes, and decoupled at faint magnitudes. Additionally, it is possible that we are observing an evolutionary effect, with star formation and morphology equally depending on density at high-z, but not at low-z. This might be the case if at high-z most galaxies still retain the morphological class they had "imprinted" in the very early stages of their formation, and if at lower redshifts progressively larger number of galaxies are transformed, having their star formation activity and morphology changed. Such transformations are known to have occurred in a significant fraction of local cluster galaxies. In fact, approximately 60% of today's galaxies have evolved from star-forming at z ∼ 2 to passive at z = 0 according to the results of Poggianti et al. (2006) . 
If the changes in star formation are more closely linked with the local environment than the related change in morphology, while the latter retains some memory of the initial structure at very high-z (mostly dependent on galaxy mass), a progressive decoupling between the SFD and the MD relations would take place at lower redshifts (see also Capak et al. 2007). Since the changes in star formation activity and morphology involve progressively fainter galaxies at lower redshifts in a downsizing fashion (Smail et al. 1998, Poggianti et al., 2001a,b, 2004, De Lucia et al. 2004, the decoupling at low-z should be prominent at faint magnitudes. In this scenario, the differences between our analysis and Wolf's results would be both an evolutionary and a galaxy magnitude limit effect, the two being closely linked. At low-z it should now be possible to fully address these questions and investigate the galaxy magnitude and global environment dependence of the SFD-MD decoupling. Ours is so far the only study comparing the SFD and the MD relations at high redshift, so other future works may help clarify the redshift evolution of the link between the two relations in clusters, groups, and the field. SUMMARY We have measured the dependence of star formation activity and morphology on projected local galaxy number density for cluster, group, poor group, and field galaxies at z = 0.4 − 0.8, comparing with clusters at low redshift. At highz, our 16 main structures have measured velocity dispersions between 160 and 1100 km s −1 , while for our poor groups we did not attempt a velocity dispersion measurement. The field sample comprises galaxies that do not belong to any of our clusters, groups, or poor groups. Our analysis is based on the [OII] line equivalent widths and fluxes and does not include any correction for dust extinction. All galaxies with an EW([OII]) greater than 3 Å are considered to be currently star-forming. Although the contamination from pure AGNs is estimated to be modest (7% at most), all the trends shown might reflect a combination of both star formation and AGN activity. Our main conclusions are as follows: 1) In distant as in nearby clusters, regions of higher projected density contain proportionally fewer galaxies with ongoing star formation. Both at high and low redshift, the average star formation activity in star-forming galaxies, when measured as mean [OII] equivalent width, is consistent with being independent of local density. 2) At odds with low-z results, we find that the correlation between star-forming fraction and projected local density varies for massive and less massive clusters, though it is not uniquely a function of cluster mass. Some low-mass groups can have lower star-forming fractions at any given density than similarly or more massive clusters. 3) In our clusters, the average current star formation rate per galaxy and per star-forming galaxy, as well as the average star formation rate per unit of galaxy mass, do not follow a continuously decreasing trend with density, and may display a peak at densities ∼ 15 − 40 galaxies Mpc −2 . The significance of this peak ranges between 1 and 4 σ depending on the method of analysis. This result could be related to the recent findings of an inverted, possibly peaked SFR-density relation in the field at z = 1. The EW-density and the SFR-density relations thus provide different views of the correlation between star formation activity and environment. 
The former suggests that the star formation activity in star-forming galaxies does not vary with local density, while the latter suggests the existence of a density range in which the star formation activity in star-forming galaxies is boosted by a factor of ∼1.5 on average. 4) When using galaxy samples with similar mass distribu-tions, we find variations not larger than 1 σ in the average EW and SFR properties of star-forming galaxies in the field, poor groups, and clusters. Higher average EWs, SFRs, and star-forming fractions in the unmatched field and poor group samples compared to clusters are primarily due to differences in the galaxy mass distribution with global environment. As an example, star-forming field galaxies form stars at more than twice the rate per unit of galaxy mass compared to starforming galaxies in any other environment. Together with point 1) above, this suggests that the current star formation activity in star-forming galaxies of a given galaxy mass does not strongly depend on global or local environment. 5) By summing the ongoing SFR of individual galaxies within each cluster we obtain cluster-integrated star formation rates. We find no evidence for a correlation with cluster mass. In contrast, the cluster SFR per unit of cluster mass anticorrelates with mass and correlates with the star-forming fraction, although we caution that the anticorrelation with mass could be mainly driven by correlated errors. The average starforming galaxy happens to form about one solar mass per year (uncorrected for dust) in all of our clusters, making the integrated star formation rate in distant clusters just equal to the number of star-forming galaxies. These findings can be understood in the light of three additional results that we show: a) the cluster integrated SFR is linearly proportional (equal) to the number of star-forming galaxies; b) the total number of cluster members scales with cluster mass as N ∝ M 0.56 and c) the star-forming fraction depends on cluster mass in distant clusters as presented in Poggianti et al. (2006) . Given the invariance of the average star formation with cluster mass, as well as with global and local environment (see points 1 and 4 above), the most important thing that remains to be explained is the cause of the cluster-mass-dependent evolution of the relative number of star-forming versus non-star-forming galaxies. 6) Defining galaxy morphologies as visually classified Hubble types from HST/ACS images, we find a morphologydensity relation similar to that observed in previous distant cluster studies. In addition, we find that the trend of declining spiral fraction with density is entirely driven by late-type spirals of types Sc and later, while early spirals (Sa's and Sb's) have a flat distribution with local density as S0s do. 7) The star formation properties (ongoing SFR, EW(OII), and D4000) of each morphological class do not depend on local density. Galaxies of a given Hubble type in distant clusters have similar star formation properties regardless of the local environment. 8) Essentially all Es and S0s have old stellar populations and all late spirals have significant young stellar populations, while early spirals are a clearly bimodal population, with 40% of them being red and passively evolving and 40% being blue and having ongoing star formation. 
The bimodality of the early spirals, together with their resemblance to S0s as far as the morphology-density distribution is concerned, once more suggests that early spirals are the most promising candidates for the progenitors of a significant fraction of the S0 population in clusters today (see also Moran et al. 2007).

9) From the combination of the morphology-density relation and the average properties of each morphological class, we are able to recover the star formation-density relations we have observed. The morphology-density and the star formation-density relation are therefore equivalent in our distant clusters, and neither of the two relations is more fundamental than the other. This is at odds with recent results at low-z. Among the possible reasons for the discordant conclusions are differences between visual morphologies and structural parameters, the fainter galaxy magnitude limit reached in low-z studies, and possibly evolutionary effects that can produce a progressive decoupling of the SFD and the MD relations at lower redshifts.

We would like to thank the referee, Arjen van der Wel, for the constructive and careful report that helped us improve the paper. BMP thanks the Alexander von Humboldt Foundation and the Max Planck Institut für Extraterrestrische Physik in Garching for a very pleasant and productive stay during which the work presented in this paper was carried out. The Millennium Simulation databases used in this paper and the web application providing online access to them were constructed as part of the activities of the German Astrophysical Virtual Observatory. The Dark Cosmology Centre is funded by the Danish National Research Foundation. BMP acknowledges financial support from the FIRB scheme of the Italian Ministry of Education, University and Research (RBAU018Y7E) and from the INAF-National Institute for Astrophysics through its PRIN-INAF2006 scheme.

Facilities: VLT (FORS2), HST (ACS).

FIG. 2.—Mean EW([OII]) as a function of projected local density; the mean EW([OII]) is computed over all galaxies. The high-redshift points (EDisCS), described in §6.1, are shown in black as empty circles (statistical subtraction), filled circles (photo-z probable members) and crosses (photo-z within ±0.1 from z_clu). The low-redshift points (SDSS), described in §6.4, are shown in red as empty triangles (density computed using only spectroscopic members) and filled triangles (full photometric catalog). Errors are computed as bootstrap standard deviations from the mean using 100 realizations. The horizontal dashed and dotted lines delimit the values found in the field and in poor groups at high redshift using mass-matched samples (see text).

FIG. 5.—Fraction of galaxies with [OII] in emission versus local density for different subsets of our cluster sample. Left: clusters in different velocity dispersion bins, σ > 800 km s^-1 (thick solid line) and σ < 800 km s^-1 (thick dashed line); a small shift in density has been applied to allow better visibility of the errors; the red dotted lines indicate clusters with 600 < σ < 800 km s^-1 and the blue, long-dashed lines represent systems with σ < 600 km s^-1. Right: the two most massive clusters (σ > 800 km s^-1, cl1232 and cl1216); the three outliers in the [OII]-σ relation (cl1119, cl1420 and cl1202); and all the remaining clusters. Densities have been computed using the high-probability photometric-redshift membership (sec. 4).

FIG. 6.—Bottom: mean SFR in solar masses per year for galaxies in different density bins. All galaxies: black symbols. Only star-forming galaxies: blue symbols. Different symbols indicate the values obtained for the 3 membership criteria, as in Fig. 2. Large squares represent the average of the values for the 3 membership methods. A small shift around the center of each density bin has been applied to the different points to avoid confusion. The dashed and dotted lines delimit the 1σ error around the value for field and group galaxies, respectively. For clarity, only the star-forming field and poor group values for the mass-matched samples are shown. Top: SFR in solar masses per year for all individual galaxies; the mean (filled circles) and median (empty circles) SFR in star-forming galaxies are shown for three equally populated density bins (see text).

FIG. 7.—Mean specific SFR (SFR/M, where M is the galaxy stellar mass) in yr^-1 for galaxies in different density bins. All galaxies: black symbols. Only star-forming galaxies: blue symbols. Different symbols indicate the values obtained for the 3 membership criteria, as in Fig. 2. Large squares represent the average of the values for the 3 membership methods. A small shift around the center of each density bin has been applied to the different points to avoid confusion. The dashed and dotted lines delimit the 1σ error around the value for field and group galaxies, respectively. For clarity, only the star-forming field and poor group values for the mass-matched samples are shown.

FIG. 8.—Integrated cluster SFR per unit of cluster mass plotted as a function of cluster mass. The large cross identifies our lowest velocity dispersion group (Cl1119), whose SFR/mass estimate is unreliable (see text). Left: EDisCS clusters over a radius equal to R_200; Cl1232 has not been included because observations cover only out to R_200/2. Right: EDisCS clusters (filled dots) and literature data (empty dots, see text) over a radius equal to R_200/2.

FIG. 9.—Total integrated SFR as a function of cluster mass (right) and SFR/mass versus fraction of star-forming galaxies (left). All quantities are computed within R_200. The large cross identifies our lowest velocity dispersion group (Cl1119), whose SFR/mass estimate is unreliable (see text). Errors are computed by propagating errors on velocity dispersions and SFRs.

FIG. 10.—Left: integrated cluster SFR versus number of star-forming cluster members (filled circles) and total number of cluster members (empty circles). The line represents the 1:1 relation, SFR = N_SF, which implies that the average SFR per star-forming galaxy is roughly equal to 1 M⊙ yr^-1 in all clusters. Numbers are computed using spectroscopically-confirmed members and correcting for spectroscopic incompleteness. Error bars are omitted in this panel for clarity. All quantities are computed within R_200 to the galaxy magnitude limits adopted in this paper. Center: number of star-forming members versus cluster mass. Right: number of members within R_200 versus cluster mass. For EDisCS clusters, numbers are computed from photo-z membership (stars), statistical subtraction (crosses) and from the number of spectroscopic members corrected for incompleteness (filled circles). The least-squares fit for EDisCS clusters is shown as a solid line and is given in eqn. 1. Sloan clusters at low-z are shown as empty triangles, and their least-squares fit as a dashed line (eqn. 3).

FIG. 13.—Fraction of star-forming galaxies (top) and mean SFR among all galaxies (bottom), with symbols as in Fig. 3 and Fig. 6. The large, solid squares represent the values expected given the morphology-density relation and the mean star-forming fraction and SFR of galaxies of each morphological class.

FIG. 14.—Mean local density for each morphological type, from left to right: ellipticals, S0s, early spirals, and late spirals. The notation "old" and "young" here separates galaxies with red and blue D4000 (> / < 1.3). Practically all ellipticals and S0s are old, and all late spirals are young, while early spirals are cleanly divided into two age populations with similar mean densities. Errors are bootstrap standard deviations.
Knowledge and Awareness of COVID-19 in Uttar Pradesh: An Exploratory Data Analysis

Coronaviruses, the family of viruses to which the agent of COVID-19 belongs, can cause diseases ranging from the common cold to Severe Acute Respiratory Syndrome (SARS). Worldwide, COVID-19 is affecting 210 countries and territories around the world and two international conveyances. As of 2 June 2020, 6,408,869 confirmed cases, 2,935,368 recoveries and 378,317 deaths from coronavirus disease had been reported in the world, and India is not untouched by this situation. India has currently reported 190,535 infections and 5,394 deaths due to COVID-19 (https://covid19.who.int/region/searo/country/in). The COVID-19 pandemic was first confirmed in the Indian state of Uttar Pradesh on 4 March 2020, with the first positive case in Ghaziabad. As of 1 June 2020, the state had 8,361 confirmed cases, resulting in 222 deaths and 5,030 recoveries. The situation is getting worse day by day as COVID-19 outbreaks and patient numbers are increasing every minute and have become the most important issue for the whole world, so assessing knowledge and awareness among the people is very important. In the present study, using exploratory data analysis, we tried to demonstrate the knowledge and awareness of individuals about the COVID-19 pandemic in Uttar Pradesh, the most populous state of India. The findings of the present study can be utilized by researchers and policy makers to handle this situation.

On 24 March 2020, the Government of India ordered a nationwide lockdown for 21 days, limiting movement of the entire 1.3 billion population of India as a preventive measure against the 2020 coronavirus pandemic in India. It was ordered after a fourteen-hour voluntary public curfew on 22 March, followed by enforcement of a series of regulations in the country's COVID-19 affected regions. The lockdown was placed when the number of confirmed positive coronavirus cases in India was approximately 500. This lockdown enforced restrictions and self-quarantine measures. The nationwide lockdown was later extended till 3 May, with a conditional relaxation promised after 20 April for the regions where the spread had been contained by then. As of 27 April 2020, the total cases reported in India were 29,435, with 6,869 recoveries and 934 deaths (Covid-19.in, 2020). Hospital isolation of all confirmed cases and tracing and home quarantine of their contacts are ongoing. However, the rate of infection in India is lower compared to other countries.

Common signs and symptoms of COVID-19 infection include symptoms of severe respiratory disorders such as fever, coughing and shortness of breath. The virus that causes COVID-19 is mainly transmitted through droplets generated when an infected person coughs, sneezes, or exhales. These droplets are too heavy to hang in the air, and quickly fall on floors or surfaces. The average incubation period is 5-6 days, with the longest incubation period being 14 days. In severe cases, COVID-19 can cause pneumonia, acute respiratory syndrome, kidney failure, and even death. The clinical signs and symptoms reported in the majority of cases are fever, with some cases having difficulty breathing, and X-rays show extensive pneumonia infiltrates in both lungs (Holshue et al., 2020; Perlman, 2020). The clinical symptoms of severe and critical patients with COVID-19 are likely similar to the clinical symptoms of SARS and MERS (Wang et al., 2020b).
Preventive measures for COVID-19 include maintaining social distancing, washing hands frequently, and avoiding touching the mouth, nose, and face (WHO, 2020). Knowledge and awareness among the people of India have played an important role in protecting them from COVID-19; the health ministries have also stated that awareness is one of the important tools for fighting COVID-19 and remaining safe. Authors such as Singh et al. (2020) and Sharma et al. (2020) have studied the importance of knowledge and awareness among Indian citizens. They reported that urban societies were less affected, both in terms of numbers and severity, compared with rural areas because of their education level and awareness. It is a fact, however, that testing in rural areas was limited, and the reported numbers may therefore run counter to the analysis and recommendations of the literature.

Uttar Pradesh, the most populous state of India, holds more than 166 million people, and we therefore focused our study on this state. Because of the heterogeneity of the population, the estimates show variation with respect to certain background characteristics such as age group, caste and marital status. Therefore, in order to get a complete picture of the scenario, it is important to study these estimates with respect to their background characteristics. There are few studies which have discussed the knowledge, perception, attitude and risk behaviour related to COVID-19. This study attempts to understand how knowledge and awareness vary among the people of Uttar Pradesh with respect to certain background characteristics, including gender-wise variation. It also tries to capture risk behaviour regarding COVID-19. With the aforesaid background we restricted ourselves to the following two objectives:
• To assess the knowledge and awareness among people living in Uttar Pradesh.
• To assess the risk behaviour regarding COVID-19 among people having different occupations.

Data and Methods

Our study was cross-sectional, carried out by a convenience (non-probability) sampling technique in India. A semi-structured questionnaire was developed in easily understandable English using a Google Form. The questionnaire was disseminated through WhatsApp, e-mails and social media platforms to known contacts. The participants showed enough interest to give their responses and also forwarded the form to their contacts, which resulted in responses from across the state. Participants possessing smartphones and internet access took part in the study, both of which are very common in today's society. Participants above 15 years of age who were comfortable with English and willing to participate gave their inputs. In total we received 533 responses, but some were incomplete, so we eliminated them; finally, we analysed 448 responses to draw our results. The socio-demographic profile of respondents was assessed through the questionnaire, which included gender, age, education, place of residence, domicile, marital status, etc. The questionnaire given to respondents contained separate sections on how they commute and interact with people and on their trusted sources of information, 2 questions to evaluate the threat level of the virus, one dichotomous question on awareness about health facilities, 6 questions to estimate the awareness level of coronavirus in society, 11 questions on symptoms, and 12 questions on perceptions about prevention from coronavirus. The process of data collection was carried out from 11 April 2020 to 28 April 2020.
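To illustrate the kind of exploratory summary used in this analysis, the sketch below tabulates the share of respondents agreeing with each knowledge item, overall and by gender. It is a minimal Python/pandas example built on an assumed CSV export of the Google Form with hypothetical column names (gender, q_virus_cause, and so on); it is not the authors' actual processing code.

import pandas as pd

# Hypothetical export of the Google Form responses; the file name and
# column names are illustrative assumptions, not taken from the survey.
df = pd.read_csv("responses.csv")

# Keep only complete records, mirroring the reduction from 533 to 448 responses.
df = df.dropna()

knowledge_cols = ["q_virus_cause", "q_person_to_person", "q_preventable",
                  "q_same_as_cold", "q_seasonal", "q_worse_for_diabetics"]

# Percentage answering "Yes" to each knowledge item, overall and by gender.
yes = df[knowledge_cols].eq("Yes")
overall = yes.mean().mul(100).round(1)
by_gender = yes.groupby(df["gender"]).mean().mul(100).round(1)

print("Overall % answering 'Yes':")
print(overall)
print("% answering 'Yes' by gender:")
print(by_gender)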
Definition of Variables

This section describes the definition of the variables constructed and used in the analysis.
• Place of Residence: Place of residence has been divided into two categories, Rural and Urban.
• Marital Status: Marital status has been classified into four categories, Single/Never married, Married, Divorced/Separated, and Widow(er).
• Qualification: Qualification has been divided into five parts, UG, PG, Ph.D., Other, and Diploma.
• Occupation: Occupation has been divided into ten categories, Salaried Government, Salaried Private, Business/Self Employed, Doctor/Health care provider, Agriculture worker, Working for a non-profit/government organisation, Student, Housewife/Home maker, Retired, and Others.
• COVID-19 Related Knowledge: The knowledge of COVID-19 has been judged using the following statements: corona is caused by a virus, it can spread from one person to another, it can be prevented, it is the same as the common cold, it occurs at a certain period of the year, and COVID-19 symptoms are worse among diabetics.
• Knowledge about COVID-19 Symptoms: The knowledge regarding COVID-19 symptoms has been checked using the following symptoms: headache, sore throat, vomiting, persistent cough, running nose, sneezing, muscle ache, abdominal pain, fever, diarrhoea, and feeling tired.

Most of the people (males and females) who actively participated in the survey lie in the age groups 15-24 and 26-35. However, the percentage of female participation is slightly higher compared with males for both age groups. Most of the people are from urban areas. In terms of marital status, most of the people are either married or single; the percentage of divorced/separated or widowed respondents is very low. The major part of the respondents (both males and females) are graduates or postgraduates; the participation of Ph.D., Diploma or other qualifications is low compared with graduates and postgraduates. In terms of occupation, the major part of the respondents are students, followed by salaried private employees, salaried government employees and business/self-employed respondents.

Knowledge Regarding COVID-19

Table 2 presents the knowledge of COVID-19 in UP. To assess this knowledge, questions regarding the cause of coronavirus, whether it can spread from one person to another, whether it can be prevented or not, its symptoms, whether it is periodic, and its effect among diabetic patients were asked of the 448 respondents. From Table 2 it can be observed that, out of 448 respondents, 422 (94.2% of the total) agreed that it is caused by a virus. 98.4% of people responded that it can spread from one person to another, and 77.9% think that it can be prevented. Around 58.9% of people state that COVID-19 symptoms are worse among diabetics, which may be due to the low immunity of diabetic patients. On the other hand, 14.5% of people think that COVID-19 symptoms are not worse among diabetic patients, and around 25% state that they do not know whether COVID-19 symptoms are worse or not among diabetic patients, which reflects a misconception among the people. Only 28.6% of respondents state that the symptoms of COVID-19 are the same as the common cold, while 62.7% state that the symptoms of COVID-19 are not the same as the common cold and 8.7% do not know. Around 11.8% of people think that COVID-19 occurs at a certain period of the year, which reflects a misconception about COVID-19 among the people. Summarising these results, it is observed that most of the people are aware that corona is caused by a virus, that it can spread from one person to another, and of how it can be prevented.
On the other hand, many people also have misconceptions regarding COVID-19, as they think that it occurs at a certain period of the year, that symptoms are not worse among diabetic patients, and that the symptoms are not the same as the common cold.

Knowledge Regarding COVID-19 Symptoms

To assess the knowledge regarding the common symptoms of COVID-19, responses were collected from 448 respondents; the results are presented in Table 3. From Table 3, it is observed that only 61.2% of people think that headache is a common coronavirus symptom. 86.6% of people think that sore throat is a common symptom, 23.4% think that vomiting is a common symptom, 87.1% think that persistent cough is a common symptom, 70.3% think that running nose is a common symptom, 79.5% think that sneezing is a common symptom, 47.1% think that muscle ache is a common symptom, 27.2% state that abdominal pain is a common symptom, 98.4% think that fever is a common symptom, 29.2% think that diarrhoea is a common symptom, and 75.2% think that feeling tired is a common symptom. Concluding from these results, it is observed that the major part of the population thinks that fever is the main common coronavirus symptom, followed by persistent cough, sore throat, sneezing, feeling tired, running nose and headache. On the other hand, people also have some misconceptions regarding the common coronavirus symptoms, as they think that muscle ache, diarrhoea, abdominal pain and vomiting are also coronavirus symptoms.

Source of Information

The question regarding the source of information about COVID-19 was asked of the 448 respondents from UP; the responses so obtained are presented in Table 4. From Table 4 it is clear that most of the people are getting information about COVID-19 from family members (84.2%), followed by television (68.1%), official government websites (60%) and newspapers (45.8%), as the percentage of people getting information through these sources is higher compared with the remaining sources. Around 6.5% of people are getting information through office colleagues, 16.1% from doctors/nurses, 6% through neighbours, 9.4% from the radio, 19% through Facebook/Twitter/Instagram, and 16.3% through WhatsApp.

Table 5 reveals the awareness of COVID-19 among the people of UP. To assess awareness, people's opinions were asked on whether, if someone develops symptoms of COVID-19, they should be quarantined and treated in the hospital, quarantined and sent home, socially restricted to prevent cross-infection, or given rapid access to counselling services. The responses were collected on a Likert scale between 1 and 5, where 1 reflects strongly agree and 5 reflects strongly disagree. The people were also asked to rate various ways to prevent COVID-19, such as exercise and yoga, avoiding face-to-face meetings with people, avoiding travelling by any medium, washing hands and face with soap for a certain period of time, avoiding going to the market, avoiding morning walks, avoiding going to the office, always wearing a mask and sanitizing hands while stepping out, eating ginger/garlic/hot chilli pepper, drinking warm water, avoiding going out in cold weather, and social distancing, again between 1 and 5, where 1 reflects strongly agree and 5 reflects strongly disagree.
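Since the awareness and prevention items are scored on a 1-5 Likert scale and summarised later through their weighted means (values near 1 or 2 indicating agreement), the short sketch below shows how such weighted means could be computed. The item names and response counts are purely illustrative assumptions and are not figures from Table 5.

import numpy as np

# Likert coding: 1 = strongly agree ... 5 = strongly disagree.
scale = np.array([1, 2, 3, 4, 5])

# Hypothetical counts of respondents choosing each option for two prevention items
# (illustrative numbers only, not the survey's actual tallies; each sums to 448).
counts = {
    "Social distancing": np.array([310, 90, 30, 10, 8]),
    "Eating ginger/garlic/chilli": np.array([80, 150, 120, 60, 38]),
}

for item, n in counts.items():
    weighted_mean = (scale * n).sum() / n.sum()
    print(f"{item}: weighted mean = {weighted_mean:.2f}")

# A weighted mean close to 1 or 2 indicates overall agreement that the measure
# helps prevent COVID-19; values near 3 indicate a neutral response.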
The people were also asked to give their opinion about who is at greater risk of getting infected by COVID-19 among doctors and nurses, other hospital staff, children, senior citizens, diabetics, airline crew, other airport staff, frequent air travellers, train/bus drivers, rickshaw pullers, hawkers, persons with pre-existing morbid conditions, pregnant women, girls during their menstrual period, people who eat non-vegetarian food, people who work in banks, people who drink alcohol, and people who consume tobacco.

Table 5 shows that most of the people agree that a person who develops symptoms of coronavirus should be quarantined and treated in hospital. People also agree with socially restricting people who develop symptoms of coronavirus in order to prevent cross-infection, and a few people also agree with providing rapid access to counselling services for such a person. Most of the people are neutral about quarantining a person who develops symptoms of coronavirus and sending them home.

Awareness

The knowledge regarding prevention from coronavirus was assessed by asking respondents to rate various ways of prevention between 1 and 5. The results reveal that most of the people agree that social distancing, always wearing a mask and sanitizing hands while stepping out, and washing hands and face with soap for a certain period of time are possible ways of preventing coronavirus. People also agree that avoiding face-to-face meetings with people, avoiding travel by any medium, avoiding going to the market, avoiding morning walks, avoiding going to the office, and drinking warm water are possible ways to prevent coronavirus. A few people also agree that exercise and yoga, eating ginger/garlic/hot chilli pepper, and avoiding going out in the cold are possible ways to prevent coronavirus, as their weighted means are close to 2.

Questions were also asked to assess people's opinions about who is at greater risk of getting infected. From Table 6 it is clear that around 95.8% of people state that senior citizens are at higher risk of getting infected, whereas women/girls during their menstrual period are perceived to be at risk by only 31.5% of respondents.

Conclusion

It has been observed that the majority of people have knowledge about COVID-19. On the other hand, a few people also have some misconceptions regarding COVID-19 and its symptoms, as they think that it occurs at a certain period of the year, or that muscle ache, diarrhoea, abdominal pain and vomiting are also coronavirus symptoms. People know about the symptoms of COVID-19; according to them, fever is the main common coronavirus symptom, followed by persistent cough, sore throat, sneezing, feeling tired, running nose and headache. Analysing the primary data collected from individuals, we observed that family members are the main source of information about COVID-19, followed by television, official government websites and newspapers; fewer people are getting information from social media such as Facebook/Twitter/Instagram and from other sources such as nurses/doctors, the radio, office colleagues, etc. People agree that a person who develops symptoms of coronavirus should be quarantined and treated in hospital. People also agree with socially restricting people who develop symptoms of coronavirus to prevent cross-infection, and a few people also agree with providing rapid access to counselling services for such a person.
Most of the people are neutral about quarantining a person who develops symptoms of coronavirus and sending them home. People agree that avoiding face-to-face meetings with people, avoiding travel by any medium, avoiding going to the market, avoiding morning walks, avoiding going to the office, and drinking warm water are possible ways to prevent coronavirus.

Prayas Sharma is currently working as an Assistant Professor in the area of Decision Sciences at the Indian Institute of Management Sirmaur, Paonta Sahib, Himachal Pradesh. He has more than 9 years of academic experience, both in teaching and in research. His research interests include survey sampling, estimation procedures using auxiliary information and measurement errors, predictive modelling, business analytics and operations research. Dr. Sharma has published more than 40 research papers in reputed national and international journals, along with one book and two internationally published book chapters. He has more than 300 citations with an h-index and i-index of 12. Dr. Sharma has a keen interest in reading, writing and publishing; he serves 7 reputed journals as editor/associate editor and more than 30 journals as a reviewer, and has reviewed more than 150 research papers for prestigious journals.
2022-08-05T15:16:16.486Z
2022-08-03T00:00:00.000
{ "year": 2022, "sha1": "d40414c8a0636b46a86cd3a9496d65c18d07e931", "oa_license": null, "oa_url": "https://www.journal.riverpublishers.com/index.php/JRSS/article/download/3683/2343", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "167d042776f94d4803c846319d84ad24fcfd3f63", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
261101150
pes2o/s2orc
v3-fos-license
Some Finiteness Results on Oda's Question for Pairs of Line Bundles

In this work we prove that on a given smooth toric threefold all but finitely many ample line bundles are projectively normal.

Introduction

Let X be a smooth projective toric variety, L_1 an ample and L_2 a nef line bundle on it. In [17] Oda asks whether the multiplication map (1.1) is surjective. Recall that one can associate a polytope P_L to a nef line bundle L on X, and the lattice points in P_L form a basis of H^0(X, L). In terms of polytopes, the surjectivity of (1.1) is equivalent to the surjectivity of the map (1.2) defined by Minkowski sum, where M is the group of characters of the torus acting on X. Obviously, when L_1, L_2 take values in {L^k}_{k≥1} for some given ample line bundle L, the surjectivity of (1.1) implies that L is projectively normal. Conversely, if one could prove that Φ_{L,L} is surjective for any ample line bundle on any smooth toric variety X [12, Remark 4.3], then the surjectivity of (1.1) for pairs of ample line bundles follows readily.

Concerning the projective normality of very ample line bundles on projective varieties, the first positive result can be traced back to Castelnuovo and Mumford [16], where it is proved that a line bundle L on a curve of genus g is projectively normal when deg L ≥ 2g + 1. For projective normality on varieties of higher dimensions, a complete survey can be found in [15, 1.8D]. When it comes to toric varieties, projective normality and higher syzygies are investigated by Ewald et al. [4] and Hering et al. [13] respectively. As for other varieties, the line bundles that are verified to be projectively normal are required to be powers of some other ample line bundles; in addition, the dimension of the base variety also appears in these powers, bounding the positivity of the ample line bundle.

The map (1.1) is obviously surjective for projective spaces, which are among the simplest toric varieties. As a generalization of projective spaces in another vein, flag varieties are also proved to be projectively normal [19], in which the technique of diagonal splitting of the Frobenius plays a key role. Unfortunately, this method cannot be applied to toric varieties, as they are not diagonally split in general [18]. Probably due to the fact that toric varieties are much more varied, a proof of the surjectivity of (1.1) in its final form is more difficult. So far, Oda's question is still widely open; see [9] for a recent survey.

On the positive side, the equivalent reformulation (1.2) makes Oda's question more approachable if treated from an intuitive point of view. The surjectivity of (1.1) is proved to hold for smooth toric surfaces in [19] and for general, not necessarily smooth, toric surfaces in [11]. The methods used in these works rely heavily on the dimension of the base, which makes them difficult to generalize. A more recent major breakthrough is achieved in [8], where an ample line bundle on a toric variety of dimension d is proved to be projectively normal provided the lattice length of the associated polytope is no less than 4d(d + 1). This bound is improved to 2d(d + 1) in [10]. In spite of the importance of the accomplishment made in [8], Oda's question is still not fully answered. Firstly, in [8] only the special case when L_1, L_2 are powers of a third line bundle is considered.
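The displayed formulas for (1.1) and (1.2) did not survive extraction; the LaTeX below is a plausible reconstruction based on the surrounding description (multiplication of global sections, and addition of lattice points of the associated polytopes), and should be checked against the original paper.

% (1.1): the multiplication map on global sections for an ample L_1 and a nef L_2
\[
H^0(X, L_1) \otimes H^0(X, L_2) \longrightarrow H^0(X, L_1 \otimes L_2). \tag{1.1}
\]
% (1.2): the Minkowski-sum map on lattice points of the associated polytopes
\[
\Phi_{L_1, L_2} \colon (P_{L_1} \cap M) \times (P_{L_2} \cap M) \longrightarrow P_{L_1 \otimes L_2} \cap M,
\qquad (m_1, m_2) \longmapsto m_1 + m_2. \tag{1.2}
\]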
Towards this direction Haase and Hofmann make progress in [10] by requiring that one of the polytopes has each of its edges at least d times larger than the corresponding edge of the other. Secondly, for a given toric variety there might be infinitely many ample line bundles outside the scope of these works (loc. cit.), as the lattice lengths of their associated polytopes might not reach the lower bounds mentioned above. With these questions in mind, we prove the following results.

Theorem 1. If X is general in the sense that no two edges of the polytope associated to an ample line bundle on it are parallel, then dim coker Φ_{L_1,L_2} can be uniformly bounded for any pair of nef line bundles L_1, L_2.

Theorem 2. For a smooth projective toric threefold X there exists a finite subset of Amp(X) such that the morphism (1.1) is surjective for any ample line bundle L_1 not in that subset and any nef line bundle L_2. In addition, dim coker Φ_{L_1,L_2} can be uniformly bounded.

Strategy of the Proof and Outline of the Paper. First of all, instead of (1.2), in this work we will mainly investigate the map (1.3), where L_1 is ample and L_2 is nef as before. It is evident that the surjectivity of (1.3) implies that of (1.2). Our starting point is the following easy observation. For each vertex of P_{L_1⊗L_2} we have a lattice translation of P_{L_1} lying inside P_{L_1⊗L_2} such that this vertex and its corresponding vertex of P_{L_1} coincide. If the difference of the lattice lengths of the edges of P_{L_1} and P_{L_1⊗L_2} is small compared with their sizes, the points of P_{L_1⊗L_2} not lying in a translation of P_{L_1} as above can only be found near the boundary of P_{L_1⊗L_2}. By applying Carathéodory's theorem we can prove (see Lemma 3.2) that in this setting all these translations of P_{L_1}, obtained by fitting the corresponding vertices, suffice to cover P_{L_1⊗L_2}. Following this line we can show (Proposition 3.1) that the map (1.1) is surjective as long as L_1 is sufficiently ample, but not necessarily a power of another line bundle.

Another essential ingredient in our proofs lies in the fact that the nef cone of a projective toric variety is a rational polyhedral cone and is endowed with a natural partially ordered filtration consisting of linear subsets (see Definition 2.3). In the proof of Theorem 1 the polytopes associated to those sufficiently ample line bundles can be used as fillers to 'cover' polytopes associated to other nef line bundles. Although such coverings are not perfect, the space left out can be uniformly bounded, and we will call them quasi-coverings (see Claim 3.7). The first main result then follows from our proof that the polytopes associated to all but finitely many nef line bundles on a general toric variety admit such quasi-coverings.

In the case of smooth projective toric threefolds we first show that for a given ample line bundle the dimension of the cokernel of the map (1.1) is bounded. The proof of our second main result is built on Proposition 4.5, which says that for a given nef line bundle there always exists a finite subset of Amp(X) such that the map (1.1) is surjective for any ample line bundle not in this subset and the given nef line bundle. In the proof of the latter we make use of Theorem 3, which is nothing but the surjectivity of (1.3) in dimension 2. After proving our main results we elaborate on the dimension lowering process, which already appears in the proof of Lemma 2.5. With this decisive input, we can make an explicit estimation (see Proposition 5.2 and Proposition 5.3) as a complement to the finiteness results.
Theorem 3 is proved by induction on the Picard number of the base toric surface in the final section. The most delicate part of the proof occurs while choosing a suitable subset of P_{L_1^{-1}⊗L_2} ∩ M (we follow the notations in that proof) such that the union of the translations of P_{L_1} by the vectors in that subset covers the region missed in the inductive step. In the two situations, i.e. (I1) and (I2), where such choices must be made, the numbers of such translation vectors turn out to be one and four. In the first case, although one translation of P_{L_1} would sustain our induction, we need to select a set of translation vectors first and then prove that one of them suffices. A case-by-case argument for determining the admissible types of adjacent translation vectors in this situation seems unavoidable, and we make frequent use of the condition that the underlying toric surface is smooth.

Conventions and Notations.
X = X(Σ): a toric variety X defined by a fan Σ;
Σ(k): the set of cones of Σ of dimension k;
|Σ|: the support of Σ;
Σ_L: the normal fan of the polytope P_L associated to a nef line bundle L;
ρ ∈ Σ_X(1): a one-dimensional cone of Σ_X, and also the primitive vector parallel to this cone;
D_ρ: the invariant divisor corresponding to ρ ∈ Σ_X(1);
F_α(P_L): the facet of P_L corresponding to a cone α of Σ_L;
H_α(P_L): the minimal affine subspace which contains F_α(P_L);
v_σ(P_L): the vertex of P_L corresponding to a full-dimensional cone σ of Σ_L;
e_τ(P_L): the edge of P_L corresponding to a codimension one cone τ of Σ_L;
u_τ: a primitive vector in M parallel to the edge e_τ(P_L);
u_τ(σ): the primitive vector in M parallel to e_τ(P_L) such that v_σ(P_L) + εu_τ(σ) ∈ e_τ(P_L) for all ε > 0 sufficiently small;
AB: the line passing through A and B;
AB (overlined): the segment with endpoints A and B;
l_v(p): the line passing through p and parallel to v;
r_v(p): the ray with initial point p and parallel to v;
rin(S): the relative interior of a point set S.

For convenience we will occasionally endow M_R with a Euclidean norm ‖·‖. Following [8], the lattice length E(L) of a nef line bundle L on X is defined to be the minimum of the lattice lengths of the edges of P_L. For two polytopes P_1 and P_2 with the same normal fan, the number E(P_1/P_2) is defined to be the minimum of the ratios of the (lattice) lengths of the corresponding edges of P_1 and P_2. When P_1 = P_{L_1}, P_2 = P_{L_2} for two line bundles L_1, L_2, we will denote E(P_{L_1}/P_{L_2}) simply by E(L_1/L_2).

Preliminary Materials

In this section we collect some standard facts as well as some less standard definitions and results that will be used in our proofs.

Partial Orders on the Nef Cone. For a projective toric variety X, recall that its nef cone is a rational strongly convex polyhedral cone in Pic(X)_R = N^1(X), and each lattice point in the nef cone corresponds to a nef line bundle on X. In this work we will denote by Nef(X) the semigroup of nef line bundles, by Amp(X) ⊂ Nef(X) the set of ample line bundles, and by Nef(X)_Q the nef cone. The semigroup Nef(X) is endowed with a natural partial order '≺'; more generally, for a convex subcone C ⊆ Nef(X)_Q, we can define a partial order ≺_C on Nef(X)_Q. To define another partial order '≺_c' on Nef(X), recall that to any nef line bundle L we can associate a convex polytope P_L, the convex hull of the corresponding set of characters. Information about the line bundle can be read from that of the associated polytope and vice versa.
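The displayed definitions following '≺', '≺_C' and the set of characters defining P_L were lost in extraction. The LaTeX below is a hedged reconstruction based on the standard conventions in this area and on how these notions are used later in the text; the exact formulations should be verified against the original paper.

% Presumed definition of the natural partial order on Nef(X):
\[
L \prec \widetilde{L} \iff \widetilde{L} \otimes L^{-1} \in \mathrm{Nef}(X).
\]
% Presumed definition of the partial order attached to a convex subcone C of Nef(X)_Q:
\[
L \prec_{C} \widetilde{L} \iff \widetilde{L} - L \in C .
\]
% Presumed description of the polytope P_L as the convex hull of the characters
% whose associated sections span H^0(X, L):
\[
P_L = \mathrm{conv}\{\, m \in M \mid \chi^m \in H^0(X, L) \,\}.
\]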
For instance, when L is big P L has the highest possible dimension. When we translate P L in M by a character χ, the corresponding line bundle would be L(−D χ ) with D χ = χ, ρ D ρ , which is linearly equivalent to L. Now we define for any L,L ∈ Nef(X) L ≺ cL ⇔ each point of PL lies in some lattice translation of P L which is contained in PL. On the set Amp(X) we can define another partial order ≺ o by L ≺ oL iff L ≺L and coker Φ L,L −1 ⊗L = 0. Then it is obvious that we have the following implication and Oda's question can be reformulated as L ≺L ⇒ L ≺ oL . Remark 2.1. Unlike the partial order ≺, we do not whether the following implication is true It is not know either whether (2.2) can obtained with one of or both of the relations L ≺ c L ⊗ L ′ , L ≺ cL ⊗ L ′ . More generally, there are also implications without tensors unclear to us such as . What is even more obscure is when we consider implications as above with some of '≺' on the left or/and '≺ c ' on the right replaced by ≺ o . The following lemma is the starting point of all our results in this work, the proof of it is apparent and we omit it. Lemma 2.2. Let L,L be two ample line bundles such that L ≺ cL , then the map (1.1) is surjective for the pair (L, L −1 ⊗L). Now it is clear to see Oda's question has an affirmative answer if for any pair of ample line bundles L,L, we can prove However, this implication is not always true as one notices by taking L = O P 2 (1). The following observations for a pair of nef line bundles L ≺ L will be used later. Let Σ L (resp. Σ L ) be the normal fan of P L (resp. P L ) and n L = dim Σ L , n L = dim Σ L . Note that for any t > 0, Σ L is also the normal fan of P L + tP L . Then for anyσ ∈ Σ L (n L ) we can define the following subset Σ L (n L )σ of Σ L (n L ) from which one sees easily Σ L (n L ) = σ∈ΣL(nL) Σ L (n L )σ. 2.2. Linear Subsets of the Nef Cone. Definition 2.3. Let {C τ } τ ∈I be the set of invariant curves on X, then a linear subset of Nef(X) Q is a subset of the following form ii) for each τ ∈ I\J the image of the following map is infinite b be a nonempty linear subset, then 1) L J b is reduced iff it can be written in the form L + L J 0 , where L is a nef line bundle and L J 0 is a reduced subcone of Nef(X); 2) there exists finitely many reduced linear subsets Proof. The first part follows from definition and the second part from [1, Theorem 2.12]. The following lemma will be used in the proofs of our main results. Note that it is still valid if we replace Nef(X) by Amp(X). Lemma 2.5. Let P be a property defined on Nef(X). Suppose for each reduced linear subset such that the property P is true for all line bundles in L J b ′ , then P is true for all but finitely many nef line bundles. Proof. To prove the lemma, we note that By Lemma 2.4 each linear subset on the right side can be written as a union of finitely many reduced ones. On the other hand, again by Lemma 2.4 any reduced linear subset is a translation of a reduced subcone of Nef(X), hence the dimension of each of these reduced linear subsets mentioned above is strictly smaller than that of L J b . We start with J = ∅ and b = 0 and apply the condition in the lemma and (2.6) repeatedly. Since the dimension of L ∅ 0 = Nef(X) is finite, after finitely many steps we will arrive at finitely many reduced linear subsets each of which has zero dimension, i.e. contains only one nef line bundle. It is easy to see any line bundle belonging to one of the reduced linear subsets with positive dimensions appearing in the process satisfies P. 
A Class of General Toric Varieties. In this work we will consider the following class of general toric varieties. Note that our definition differs from [6, Definition 2.1], where the condition is exerted on codimension one facets. Definition 2.6. A projective toric variety X is said to be general if the polytope associated to an ample line bundle on it has no paralleling edges. To prove the proposition above, we need the following Lemma 2.7. Let P ⊂ M R be a strongly convex lattice polyhedron with dim P = dim M R = n such that no two edges of P are parallel, then 1) there exists a one-to-one correspondence between the infinie edges (resp. codimension one facets) of P and its recession cone C; 2) dim C = n. Proof. We first prove 1) for the case when dim C = n. Let Σ P , Σ C ⊆ N R be the normal fans of P and C respectively, then by [2, Theorem 7.1.6] |Σ C | = |Σ P | = C ∨ and the cones of Σ C with lower dimensions are contained in ∂C ∨ . In particular, we have τ ∈ΣC (n−1) τ = ∂C ∨ . On the other hand, since Σ P is a refinement of Σ C and there are no paralleling edges on P , Σ C (n − 1) can be regarded as a subset of Σ P (n − 1). Then each infinite edge of C corresponds uniquely to an infinite edge of P . Next we prove Σ C (n − 1) = Σ P (n − 1). Let τ ∈ Σ P (n − 1) such that it is not contained in ∂C ∨ , then we can find ρ ∈ Σ P (1) in the interior of C ∨ such that ρ τ . However, for such ρ it is easy to see the corresponding facet is bounded, hence the edge corresponding to τ contained in it is also finite, a contradiction. It remains to show there also exists a correspondence between the codimension one facets. We only need to consider the one-dimensional cones of Σ P lying on the boundary as those in the interior of |Σ P | correspond to bounded facets. By abuse of notation, let ρ ∈ Σ P (1) ∩ ∂|Σ P |, then since ∂C ∨ = τ ∈ΣC (n−1) τ , one can find τ ∈ Σ C (n − 1) ⊆ Σ P (n − 1) such that ρ is contained in τ . As Σ P is a refinement of Σ C and there are no paralleling edges on C we must have ρ τ and ρ ∈ Σ C (1). Next we prove C must be full-dimensional. If dim C < dim C ∨ , then |Σ C | = U + N, where U is a strongly convex cone and N ⊂ N R is the nonzero subspace consisting of vectors vanishing on C. On the other hand, by the proof loc. cit. we have where V is the set of vertices of P and σ v is dual of the cone generated by P − v. Then we get We claim Indeed, let rin(σ) be the relative interior of σ, then it suffices to show rin(σ) However, this would imply x, ρ > 0, which contradicts with our assumption that ρ ∈ N, i.e. x, ρ = 0 for all x ∈ C. From (2.7) one deduces N is spanned by In particular, there are at least two cones σ 1 , σ 2 ∈ Σ P contained in N such that dim σ 1 = dim σ 2 = dim N. Then dim F σ1 (P ) = dim F σ2 (P ) = dim C and the minimal affine subspaces containing F σ1 (P ) and F σ2 (P ) respectively are translations of each other. On the other hand, one sees easily C is the recession cone of F σi (P ), i = 1, 2, then by what we have proved above for 1), both of the infinite edges of F σ1 (P ) and F σ1 (P ) have a one-to-one correspondence with those of C. By condition, however, no two edges of P are parallel, hence F σ1 (P ) and F σ2 (P ) must coincide and dim C = n. As a result, part 1) of the lemma is also proved. Proposition 2.8. 
Let X be a general toric variety of dimension n, then 1) any nontrivial nef line bundle on X is big; 2) for nontrivial nef line bundles L, L such that L ≺ L, Σ L (n− 1) can be regarded as a subset of Σ L (n− 1); , there is a one-to-one correspondence between the following sets Proof. Let L 1 be an ample and L 2 a nontrivial nef line bundle on X. Take a full-dimensional cone σ of Σ L2 and let P = P L1 + σ ∨ , then by Lemma 2.7 2) we get dim P = dim σ ∨ = dim Σ L2 = n. For the second part, letσ, τ ∈ Σ L (n − 1) such that τ σ . Recall that (2.4) Σ L (n) defines a partition of Σ L (n), then by Lemma 2.7 1) there exists a unique σ ∈ Σ L (n)σ and an edge of P L,σ = P L +σ ∨ through v σ (P L,σ ) that parallels with e τ (σ ∨ ). Thus τ corresponds uniquely to an cone of Σ L (n − 1). As there are no paralleling edges on P L or P L , one sees easily this map from Σ L (n − 1) to Σ L (n − 1) is well-defined and injective. To prove the third part, it suffices to observe that by Lemma 2.7 1) there exists a one-to-one correspondence between the codimension one facets of P L,σ = P L +σ ∨ containing the edge e τ (P L,σ ) and those ofσ ∨ that contain e τ (σ ∨ ). Coverings of Polytopes and Proof of Theorem 1 In this section we will prove Theorem 1 first. Afterwards, we will set forth a local variant of Oda's question, which is evoked by the proof of the theorem. We will see there some essential difficulties will naturally arise for non-general toric varieties even when we consider this weaker question. Coverings of Polytopes via Vertex Fitting. Next we will show the implication (2.3) is true as long as E(L) is large enough. That is, we have the following Proposition 3.1. For a given sufficiently ample line bundle L 1 , the multiplication map Φ L1,L2 is surjective for any nef line bundle L 2 . In order to prove the proposition above, we need the following Lemma 3.2. Let P be a convex polytope with dimension d, then for any d d+1 < c < 1, we have where V is the set of vertices of P . Proof. It is easy to observe that the lemma is true when P is a simplex. More general case can be deduced from this observation by using Carathéodory's theorem [1,Theorem 1.53]. Proof of Proposition 3.1. Since Nef(X) Q is a strongly convex rational polyhedral cone we can find a Hilbert basis {B j } 1≤j≤m for it such that any nef line bundle can be written as a non-negative integral linear combination of B j . Now for each invariant curve C of X we take for any invariant curve C}. Then for any L ∈ D(X) and 1 ≤ j ≤ m, we have E( L L⊗Bj ) > d d+1 hence P L ≺ c P L⊗Bj by Lemma 3.2. Note that L ⊗ B j ∈ D(X), then one can deduce L ≺ c L ⊗ L ′ for any L ′ ∈ Nef(X), which just implies the map Φ L,L ′ is surjective. There are finitely many pairs of nef line bundles on X such that the surjectivity of (1.1) for each of these pairs will yield an affirmative answer to Oda's question. Proof. By a similar argument as in the proofs of Proposition 3.1, one can show for any linear subset Then by Lemma 2.5 one can find finitely many reduced linear subsets {L J k b k } k of Nef(X) such that (3.4) is valid for any two line bundles L 1 , L 2 in one such subset. In particular, let Next suppose the map (1.1) is surjective for any pair consisting of a nef and an ample line bundle from {L k } k , we will prove the same conclusion is also true for other such pairs of line bundles from Nef(X) and Amp(X) respectively. Suppose L 1 is a ample and L 2 is an nef line bundle, then we can find L k1 , L k2 ∈ {L k } k such that L k1 ≺ c L 1 , L k2 ≺ c L 2 . 
Then for some m 1 ∈ M 1 and m 2 ∈ M 2 . Since the map (1.1) is surjective for L k1 and L k2 by assumption, by using [11, Lemma 3.1] again one deduces 3.2. Quasi-coverings of Polytopes via Edge Sliding (along the Long Edges). Given a polytope P , by a quasi-covering of P we mean a finite set of polytopes P i contained in P with the same dimension such that P \ i P i is contained in a union of small neighborhoods (compared with the size of P ) of the vertices of P . The word 'small' would be clear from the statement of Claim 3.7, in which setting it just means bounded for a family of infinitely many polytopes. Next we will show the following Proposition 3.4. With at most finitely many exceptions, the associated polytope of any nef line bundle on a general toric variety X admits a quasi-covering by polytopes corresponding to line bundles from D(X) (see (3.3) for the definition). By Lemma 2.5, we only need to show for any reduced linear subset L J b there exists b ′ > b such that the conclusion of the proposition above is valid for all line bundles in L J b ′ . Before proving this assertion, we introduce some notations. Let L (resp. L 0 ) be a line bundle in the relative interior of L J 0 (resp. L J b ), Σ J be the normal fan of P L , which is evidently independent of the choice of L. LetL,L be two nef line bundles such that L 0 ≺L, L 0 ≺L, and ΣL, ΣL be the normal fans of PL and PL respectively, then by Proposition 2. Proof of Proposition 3.4. LetL 0 ∈ D(X) such thatL 0 ≻ L 0 , we will choose L 2 ≻ L 0 such that P L2 admits a quasi-covering by translations of PL 0 and prove for any line bundle L in L J b such that L ≻ L 2 the polytope P L admits a quasi-covering by translations of PL . The proof will be given in three steps. Hρ(a 2 ) Hρ(a 1 ) Let P L1,σ = P L1 +σ ∨ , then by our argument above we have Take a nonzero vector ρ in the interior ofσ ⊂ N R and let H ρ (a) = {x ∈ M R | x, ρ = a} for each a ∈ R. Then since the recession cone of P L1,σ isσ ∨ , we can find a minimal number a 1 ∈ R such that H ρ (a 1 ) ∩ P L1,σ = ∅ and a 2 > a 1 such that PL 0 (vσ(P L1 )) τ + N (τ )u τ (σ) is lying between the H ρ (a 1 ), H ρ (a 2 ), as is shown in the figure on the right above. For Then PL 0 (vσ(P L1 )) τ + N (τ )u τ is the contained in the polytope bounded by H ρ (a 2 ) and the facets F α (P L1 ), where α σ for some σ ∈ Σ L1 (d)σ. This polytope is obviously contained in P L1 when E(L 1 ⊗ L −1 0 ) ≫ 0. As a result, (3.7) will follow when the inequality ( Take a line bundle L ∈ L J b , L ≻ L 1 and add P L⊗L −1 1 to both sides of (3.7), we get Note that the line bundleL 0 ⊗ L ⊗ L −1 1 falls in D(X). If we can find a suitable line bundle constitutes a quasi-covering of P L , then so does Step 2. Before finding a line bundle L 2 as described above, we will prove the distances between the points of P L in the complement of (3.9) and the border of P L can be uniformly bounded. Step 3. To complete the proof of the proposition, we will show the following in P L is contained in a union of neighorhoods of the vertices of P L whose radii can be uniformly bounded. Then by the proof of Proposition 3.1 we have By Claim 3.6 for any x in the complement of the union (3.12), we have d(x, ∂P L ) < c. Let It is easy to see for any (vσ(P L )) +σ ∨ , then one sees easily . 
Therefore we need only to prove the complement , it suffices to prove the assertion above for the following complement Note that P L0⊗L⊗L −1 2 ,σ = P L0 +σ ∨ , then this can be further reduced to prove the following complement is contained in a neighborhood of y of bounded size for all y ∈ P L0 Moreover, since y +σ ∨ \ τ (y +σ ∨ + N (τ )u τ (σ)) is a translation of the following complement, we only need to prove the same conclusion for it Now as in the proof of Claim 3.5, let ρ be a nonzero vector in the interior ofσ ⊂ N R and for each a ∈ R let H ρ (a) = {x ∈ M R | x, ρ = a}. Then by our choice of ρ, the translation H ρ (0) + λu τ (σ) has nonempty intersection with all infinite edges ofσ ∨ + N (τ )u τ (σ) for each τ when λ is sufficiently large. Moreover, for all large λ 1 , λ 2 the intersection (H ρ (0) + λ 1 u τ (σ)) ∩σ ∨ is a homothety of (H ρ (0) + λ 2 u τ (σ)) ∩σ ∨ by a factor of λ1 λ2 . In particular, we have Therefore, by the proof of Proposition 3.1 we have for all sufficiently large λ, hence the complement (3.16) is contained in a bounded neighborhood of the vertex ofσ ∨ . Therefore, the union of (3.15) for all y ∈ P L0 is also contained in the bounded neighborhood of v σ (P L ), hence Claim 3.7 and the proposition are proved. Proof of Theorem 1. By Proposition 3.4 and Lemma 2.5, there exist finitely many linear subsets {L J k b k } such that for any line bundle from one of them its associated polytope admits a quasi-covering by polytopes associated to line bundles from D(X). Note that (see the proof of Lemma 2.5) still some hence finitely many of these linear subsets might contain only one line bundle, in this case no polytopes will be jammed into the associated polytope to form a quasi-covering. Given contained in a union of neighborhoods of the vertices bounded by some fixed number. It is easy to see one can deduce the same conclusion when one of L 1 , L 2 is the only line bundle in the linear subset that contains it. Therefore, we can get finally a finite number bounding dim coker Φ L1,L2 for all L 1 , L 2 . Local Variants of Oda's Question. It should be perceptible that the following question can be viewed as a local variant of Oda's original one. Question 1. Let L 1 be an ample line bundle on a smooth projective toric variety X, L 2 and L be nef line bundles andσ be a full dimensional cone of Σ L , is the following map surjective The following strengthened version of Question 1 can be regardeded as a counterpart of (1.3) in the local setting. Question 2. Under the same conditions as Question 1, is the following map surjective Note that coker Φ L1,L2⊗L k = 0 for all k sufficiently large would imply the surjectivity of (3.17). On the other hand, by the proof of Theorem 1, the lattice points of P L1⊗L2 +σ ∨ staying away from the image (3.17) is finite if X is general in the sense of Definition 2.6. However, a smooth projective variety is general only if each of the two-dimensional facets of the polytope associated to an ample line bundle is a triangle. Next we consider to what extent the map (3.17) might fail to be surjective in a setting less restrictive than being general. Firstly, we assume dim Σ L = dim Σ X . Moreover, we require that each two-dimensional facet ofσ ∨ corresponds uniquely to a two-dimensional facet of P L1 +σ ∨ , where the former is the recession cone of the latter. 
To make the second condition more explicit, let Σ X,σ be the normal fan of P L1 +σ ∨ and for eachρ ∈σ(n − 2) where n = dim Σ X , we define this condition just means #Σ X,σ (n − 2)ρ = 1 for anyρ ∈σ(n − 2). Proposition 3.8. Suppose dim Σ X = dim Σ L = n and for eachρ ∈σ(n − 2) we have #Σ X,σ (n − 2)ρ = 1, then the lattice points of P L1⊗L2 +σ ∨ lying outside the image of (3.17) are contained in certain tubular neighborhoods of its infinite edges. Proof. Let ι i , 1 ≤ i ≤ t be the (n − 2)-dimensional cones of Σ X,σ corresponding to 2-dimensional infinite facets of P L1 +σ ∨ which contain two infinite edges with different directions. Since X is smooth and L 1 is ample, each ι i is the convex hull of n − 2 one dimensional cones of Σ X . In particular, it can be regarded as an element of n−2 N R and by abuse of notation we still denote this element by ι i . By condition the elements {ι i } 1≤i≤t of n−2 N R are mutually different. Take x in the interior ofσ and let v x ∈ N R be corresponding vector. Then it is easy to see by choosing x suitably we may assume any two of the vectors v x ∧ ι i , 1 ≤ i ≤ t are non-colinear. For simplicity, we write v for v x from now on and for each number a let Then it is easy to see for all a larger than some number a 0 , the hyperplane H v (a) intersects with each infinite edge of P L1⊗L2 +σ ∨ . Moreover, for any a 0 the complement of and the edges of H v (a) ∩ (P L1⊗L2 +σ ∨ ) whose lengths keep invariant with a are just those lying on some two-dimensional facet of P L1⊗L2 +σ ∨ with a one-dimensional recession cone. By our assumption, except for those with fixed lengths all other edges of H v (a) ∩ (P L1⊗L2 +σ ∨ ) have different directions. By a similar argument as in the proof of Theorem 1 and particularly Claim 3.7, one can show that for all a sufficiently large the following union forms a quasi- Then the conclusion of the proposition readily follows from this result. Remark 3.9. The case when dim Σ X > dim Σ L is even more intricate. Recall that the equivariant morphism f : X → Y = Y (Σ L ) can be induced by a map of lattices Suppose the following map is surjective as Question 1 indicates then for each z ∈ P L1|Z the following map is necessarily surjective Note that the fiberg −1 (z) is in general not a lattice polytope. On the other hand, one can show wid ρ (g −1 (z)) is a piecewise linear convex function of z for any ρ ∈ N ′ . If E(g −1 (z)) is sufficiently large for any vertex z of P L1|Z , then by [14,Corollary 2.11] and the convexity of width function one can deduceg −1 (z) is not lattice-free for any z ∈ P L1|Z . However, since even the normal fan of the convex hull of the lattice points of g −1 (z) is unclear to us, the surjecitivty of 3.20 in this general setting cannot be touched by our methods. If the map ψ L splits Σ X in the sense of [2, Definition 3.3.18], then f is a locally trivial fibration hence Z is also smooth. Let m = dim Σ Z , then for each σ ∈ Σ Z (m), there is a section Y σ of f passing through the point of the generic fiber of f corresponding to σ. Moreover, by [3, Proposition 3.8] the polytope P L1 is a twisted Cayley sum of the polytopes P L1|Y σ , σ ∈ Σ Z (m). In this case for a lattice point z ∈ P L1|Z one can show the fiberg −1 (z) is a lattice polytope with the same normal fan as Y . Then the surjectivity of (3.20) in this setting can be reduced to answering the same question for lower dimensions. 
Proof of Theorem 2 To prove (2.3) in the setting of smooth toric threefolds, the method used in the previous section is inadequate -here we need to move P L around while keeping one of its two-dimensional facets touching the corresponding facet of PL. In other words, the translations of P L corresponding to lattice points on two-dimensional facets of P L −1 ⊗L should also be taken into account (see the proof of Proposition 4.5). To prove this proposition we will need the following lemmas, the proof of the first of which is trivial and we omit it. Lemma 4.2. Two neighboring edges of a smooth lattice polygon which contains at least four lattices are also neighboring edges of a unimodular parallelogram that is contained in this polygon. Lemma 4.3. Let P be a pointed infinite polyhedron, ∂ f P be the union of its finite facets, then we have Proof. The lemma will be proved by induction on the dimension of P . The case when dim P = 1 is trivial. Let Q be the convex hull of the vertices of P , then by Minkowski-Weyl decomposition we have P = Q + C. If Q ⊂ ∂P , then it is easy to see Q ⊆ ∂ f P hence the conclusion follows from P = Q + C. In the case when Q ∩ P • = ∅ we will find for any p ∈ Q ∩ P • a point q ∈ ∂ f P such that p ∈ q + C. Let l be a line through p and parallel to a ray r + in C, then there exists a unique q ∈ ∂P such that {q} = ∂P ∩ l and p ∈ q + r + . If q ∈ ∂ f P , then we are done. Otherwise, q is contained in an infinite facet F of P . By induction, we have x ∈ ∂ f F and y ∈ C(F ) such that q = x + y, where C(F ) is the recession cone of F . Since each finite facet of F is also a finite facet of P , we have x ∈ ∂ f P . Moreover, y ∈ C as C(F ) ⊆ C hence the lemma is proved. Lemma 4.4. Let Σ 1 be a two dimensional complete fan and Σ 2 be a noncomplete subfan of Σ 1 with a convex support. Let P 1 (resp. P 2 ) be a lattice polygon (resp. infinite lattice polygonal region) whose normal fan is Σ 1 (resp. Σ 2 ). Suppose the length of each finite edge of P 2 is no smaller than that of the corresponding edge of P 1 , then each lattice point in P 2 can be found in some lattice translation of P 1 which is itself contained in P 2 . Proof. Let C be the recession cone of P 2 , then the normal fan of P 1 + C is also Σ 2 . Moreover, one sees easily there are only finitely many translations of P 1 + C contained in P 2 such that one of the edges with finite lengths of such a translation is contained in the corresponding edge of P 2 . If we denote these translations by where v i is an integral vector, then by Lemma 4.3 we get On the other hand, we can find a nef and big line budle L 3 on the toric surface defined by Σ 1 such that C =σ ∨ for someσ ∈ Σ L3 (2). Then P 1 + C = k≥1 (P 1 + kP 3 ), by the main results of [5] and [11] one deduces easily any lattice point of P 1 + C + v i is contained in certain integral translation of P 1 which itself is contained in P 1 + C + v i . Then the claim in the lemma follows from this result and the equality (4.1). Proof of Proposition 4.1. By Lemma 2.5 we need only to show for any reduced linear subset of the form L 0 + L J 0 there exists a line bundle L ∈ L J 0 such that dim coker Φ L1,L2 can be uniformly bounded for any Moreover, by a similar argument as the proof of Theorem 1 especially Claim 3.7, this can be further reduced to a local question (see Question 1). To be more precise, let Σ J be the normal fan of the polytope associated to an line bundle in the interior of L J 0 , n J = dim Σ J andσ ∈ Σ J (n J ). 
Then we only need to show the lattice point on the right of the map below not contained in the image can be uniformly bounded. Next we will prove our claim for n J = 1, 2 and 3 respectively. Case 1: n J = 1. Recall that (see Remark 3.9) a nef line bundle L in the interior of L J 0 defines a morphism X → Y = Y (Σ L ), which is induced by a surjective map from N to N ′ . We will denote this map by ψ J and the subfan ker where u τ is a primitive vector paralleling with e τ (P L1 ). Since u τ parallels with u τ ′ for any τ ′ ∈ Σ X (2) ∩ Σ J X , then u τ = u τ ′ . As a consequence, by convexity for any x ∈ P L1 we have where l τ (x) is the line passing throught x and paralleling with e τ (P L1 ). To proceed further, let Π ⊂ M be a two-dimensional sublattice such that Zu τ ⊕Π = M . Let π : M R → Π R be the projection with kernel Ru τ . As Σ X is smooth, so is its subfan Σ J X . Then π(P L1 ) and π(P L1⊗L0 + σ ∨ ) are both lattice polygons in Π R . Let H be a hyperplane lying between H α (P L1⊗L0 ) and H −α (P L1⊗L0 ) such that H ∩ M = ∅. It is clear that there are only finitely many such hyperplanes. Next we will show for any such hyperplane H the lattice points of (P L1⊗L0 +σ ∨ ) ∩ H missed out by integral translations of P L1 are contained in certain bounded neighorhoods of the vertices of P L1⊗L0 . If e β (P L1⊗L0 ∩ H) contains a lattice point then so does e β (P L1 ∩ H) as e β (P L1 ∩ H) ≥ u β . If e β (P L1⊗L0 ∩ H) contains no lattice points, we will show some lattice point can be found on some chord of P L1 ∩ H paralleling with e β (P L1 ∩ H). Let c β (P L1 ∩ H) (resp.c β (P L1 ∩ H)) be the chord of P L1 ∩ H that is contained in the intersection of H with the hyperplane passing through e τ0 (P L1 ) and e τn (P L1 ) (resp. throughẽ β (F α (P L1 )) andẽ β (F −α (P L1 ))). Then one sees easily c β (P L1 ∩ H) ≥ u β , c β (P L1 ∩ H) ≥ u β and the region bounded by these chords together with the border of P L1 ∩ H contains a lattice point. Let e ′ β (P L1 ∩H) be the chord of P L1 ∩H nearest to e β (P L1 ∩H) such that it parallels with u β and contains a lattice point. Then this chord lies between e β (P L1 ∩ H) and e ′ β (P L1 ∩ H). Since the Euclidean lengths of all chords of P L1 ∩ H with direction u β lying between e β (P L1 ∩ H) andc β (P L1 ∩ H) are no less than u β , none of them or the line containing one of them has a lattice point by our definition of e ′ β (P L1 ∩ H). By the same token, when e β ′ (P L1 ∩ H) is lattice-free there exists a chord e ′ β ′ (P L1 ∩ H) containing a lattice point and the open region bounded by the lines containing this chord and e β ′ (P L1 ∩ H) contains no lattice points. It's clear that our claim can be deduced from these results. It still remains to prove the case when F α (P L1 ) or F −α (P L1 ) contains exactly three lattice points. If X fails to be be isomorphic to P 1 × P 2 and F α (P L1 ) contains exactly three lattice points, then one sees easily the lattice triangle F ′ α (P L1 ) ⊂ P L1 paralleling with and nearest to F α (P L1 ) contains more lattice points. If we replace F α (P L1 ) by F ′ α (P L1 ) and F −α (P L1 ) by F ′ −α (P L1 ) if necessary, one sees easily the proof above still works. Finally, the surjectivity of (4.2) can be proved directly when X is isomorphic to P 1 × P 2 . Case 3: n J = 3. In this case by Proposition 3.8, we only need to show all but finitely many lattice points in a tubular neighborhood of the infinite edges of P L1⊗L2 +σ ∨ with a fixed direction are contained in certain integral translations of P L1 . 
Let U τ be a tubular neighborhood as described in Proposition 3.8 such that the distance between points in it and the infinite edge e τ (P L1⊗L0 +σ ∨ ) is bounded for some τ ∈σ (2). Since Σ X is smooth, we can take a ρ ∈ N lying in the interior ofσ such that then it is easy to see for all sufficiently large integer a 0 , the lattice points of U τ that are not contained in the following set is finite Besides, it is easy to see the intersections H ρ (a) ∩ U τ for various a ∈ Z are translations of each other by integral multiples of u τ , hence it suffices to prove for a large enough any lattice point of H ρ (a) ∩ U τ is contained in some translation of P L1 . Note that H ρ (0) ∩ M is a complement to Zu τ in M . By abuse of notations, let π : M R → H ρ (0) be the projection with kernel Ru τ , then π(P L1 ) is a lattice polygon on H ρ (0) which is contained in the lattice polygonal region π(P L1⊗L0 + σ ∨ ). Then by applying Lemma 4.4 any lattice point of π(U τ ) ⊂ π(P L1⊗L0 + σ ∨ ) is contained in some integral translation of π(P L1 ), from which our claim follows. Proposition 4.5. For a given nef line bundle L 2 , the map (1.1) is surjective for all but finitely many line bundles L 1 ∈ Amp(X). Proof. Let L 0 +L J b ⊆ Amp(X) be a reduced linear subset, by Lemma 2.5 it suffices to show the map Φ L0⊗L,L2 is surjective for all L ∈ L J 0 such that E(L) ≫ 0. We first prove this claim for the case when dim Σ L2 = 3 by showing the map below is surjective for all L lying sufficiently deep in L J 0 (4.4) Let Σ J be the normal fan of a polytope associated to a line bundle in the relative interior of L J 0 and n J = dim Σ J . Take δ ∈ ( 3 4 , 1) and denote by L δ ∈ Nef(X) Q the line bundle whose associated polytope is δP L . Similar as (3.13) we have Recall that (see (2.4 be the integral translation of P L0⊗L contained in P L0⊗L2L corrsponding to x 0 , then by combining (4.4) and (4.5) it suffices to show for anyσ ∈ Σ J (n J ) The proof of (4.6) given below follows a similar line as our argument in Claim 3.7. Let P L0⊗L,σ (x 0 ) = P L0⊗L (x 0 ) +σ ∨ , P L0⊗L2⊗L,σ = P L0⊗L2⊗L +σ ∨ , and S ′ σ = {ρ ∈ Sσ | the facet of P L0⊗L,σ corresponding to ρ is bounded}. Then it is easy to see (4.6) follows from We first prove (4.7). By Lemma 4.3, we only need to prove for all ρ ∈ S ′ σ the following equality If D ρ is not isomorphic to P 2 or if D ρ is isomorphic to P 2 and F ρ (P L0⊗L (x 0 )) contains more than three lattice points the equality above can be deduced from Theorem 3. Now we consider the case D ρ is isomorphic to P 2 and F ρ (P L0⊗L (x 0 )) contains only three lattice points. If #(F ρ (P L0⊗L⊗OX (−Dρ) ) ∩ M ) > 3, we can still apply Theorem 3 and get (4.10) As a consequence (4.9) can be obtained. If #(F ρ (P L0⊗L⊗OX (−Dρ) ) ∩ M ) = 3, we will have n J = dimσ = 1, from which we can deduce X ∼ = P 2 × P 1 and the conclusion can be proved directly. Proof of Theorem 2. First of all, by the proof of Proposition 3.3 we can write Nef(X) as a union of finitely many reduced linear subsets be a Hilbert basis of L J k 0 and take Next we will prove the following Claim 4.6. For givenL ∈ Amp(X), if the map ΦL ,L k ⊗L is surjective for all L ∈ B J k b k , then ΦL ,L is surjective for allL ∈ L k + L J k 0 . Indeed,L can be written asL for some L ∈ B J b such that the power k j = 0 iff the power of B j in L is equal to 3. The claim will be proved by induction on the powers k j , 1 ≤ j ≤ m. Firstly, if k j = 0 for all j, then we are done by assumption. 
Now assume the map ΦL ,L is surjective, we will prove so is true for ΦL ,L⊗Bj , where j satisfies the condition that the power of B j in L is 3. Given an invariant curve C τ , if B j .C τ = 0 then we have If B j .C τ = 0, then sinceL is ample we have By the proof of Proposition 3.1, the following map is surjective On the other hand, by induction the following map is also surjective hence so is the following map Moreover, sinceL ≺L ⊗ B j and both of them are contained in L J k b k , by (4.12) the map ΦL ,Bj is surjective. As the following diagram is commutative, the map ΦL ,L⊗Bj is surjective. Then by induction Claim 4.6 is proved. Now by Proposition 4.5, to each line bundleL from the following finite set we can assign a finite subset SL of Amp(X) such that the map ΦL ,L is surjective as long asL is contained in Amp(X)\SL. Let S 2 = L ∈S1 SL, then ΦL ,L is surjective for anyL ∈ Amp(X)\S 2 andL ∈ S 1 . By Claim 4.6, the map ΦL ,L is surjective forL ∈ Amp(X)\S 2 and any nef line bundle L. Corollary 4.8. With at most finitely many exceptions, any ample line bundle on a given smooth toric threefold is projectively normal. Explicit Estimations related to Pairs Potentially Violating the Surjectivity of (1.1) In the first part of this section we reexamine the proof of Lemma 2.5 in more detail and formalize the dimension lowering process, which will act as the cornerstone of our estimation. In the second part, we provide some explicit bounds for the pairs of line bundles that fall outside some of the positive results obtained in previous sections. The Dimension Lowering Process for Linear Subsets. The proof of all our finiteness results obtained as far are based on Lemma 2.5, where the number of the line bundles that might violate the property P is implicit. Recall that to apply Lemma 2.5, we need to find for a reduced linear subset L J b a new integer vector b ′ with b ′ > b such that all line bundles in L J b ′ satisfy the property P. Then we come to where each of the finitely many linear subsets LJ b on the right is reduced and moreover dim LJ 0 < dim L J 0 . This dimension lowering process for the linear subsets to acquire the property P will continue until dim LJ 0 = 0, exactly when we arrive at the line bundles that might fail to satisfy P. To extract effective bounds for the potential counter-examples, we need to make this process tractable by finding an explicit upper bound for eachb. Obviously, the way to write L J b \L J b ′ as union of linear subsets with lower dimensions is not unique. For our purpose we will require each component on the right of (5.1) is a maximal reduced linear subset. Then the indexJ must be minimal, i.e. there exists no other reduced linear subset LJ Lemma 5.1. For eachJ ⊂ I in (5.1) there existsbJ ∈ Z I ≥0 such that for any maximal reduced linear subset Proof. Let B = {B j } 1≤j≤m be a Hilbert basis of Nef(X), for any S ⊂ I we define GivenJ ⊂ I as on the right in (5.1), we label the line bundles in B as follows Then we can deduce n j = 0 for any 1 ≤ j ≤ k since B j .C τ ′ = 0 for some τ ′ ∈ J by our definition (5.2). However, combining this with (5.7) one would getL.C τ = L.C τ for allL ∈ L J b hence τ ∈ J as L J b is reduced, which contradicts with our assumption that τ ∈J\J. Next we will find minimal elements of the following set with respect to ≺J , which is defined by the subcone LJ 0 of Nef(X) (see Section 2.1 for the definition) Following the notations above (5.8), sinceL ∈ L J b by the argument above we have n j = 0 for all 1 ≤ j ≤ k. 
On the other hand, letL ∈ SJ b\b ′ , if n j > 0 for some l + 1 ≤ j ≤ m, then it is easy to seeL ⊗ B −nj j ∈ SJ b\b ′ . Hence by the minimality requirement we can just assume n j = 0 for all l + 1 ≤ j ≤ m in (5.8). Then for L ∈ SJ b\b ′ (5.8) can be further written as Note that the property P is satisfied by all line bundles in L J b ′ , hence forL in the form of (5.10) to stay outside L J b ′ , the integer vector (n j ) k+1≤j≤l needs to satisfy the following inequality (recall that b = (b τ ) τ ∈I and b ′ = (b ′ τ ) τ ∈I ) for at least one τ 1 ∈J\J (see also (2.6)) (5.11) Take One sees easily there are only finitely many different terms in the non-decreasing sequence {{τ 1 } (k) } k≥1 and we will denote by {τ 1 } the largest one, then by definition we have We claim {τ 1 } =J\J. Firstly, by definition for any l + 1 ≤ j ≤ m we have B j .C τ1 = 0 hence also B j .C τ = 0 for any τ ∈ {τ 1 }. Let τ ∈ {τ 1 }, then since τ / ∈ J by (5.5) we can find k + 1 ≤ j ≤ l such that B j .C τ > 0, then by (5.6) we get τ ∈J hence {τ 1 } ⊆J\J. On the other hand, by (5.13) for any τ 2 ∈ (I\J)\{τ 1 }, there exists B j ∈ B 0 {τ1} such that B j .C τ2 > 0. Therefore, by definition 2.3 the index set J ∪ {τ 1 } is reduced, hencē J = J ∪ {τ 1 } asJ is minimal by our assumption. Therefore, by our definition (5.3) for any k + 1 ≤ j ≤ l we have for some τ ∈ {τ 1 }. Then similar as (5.11) for each k + 1 ≤ j ≤ l we have for some τ ∈ {τ 1 }. Now by (5.10) and (5.14) for a minimalL and τ ∈ I\J we get where a τ = max Bj ∈B B j .C τ . Then it is easy to see the vectorbJ given by the formula below satisfies the condition in Claim 5.1 With these recursive relations one can derive the inequalities that the line bundles left in the zerodimensional linear subsets need to satisfy. The Estimation. For each J ∈ J (X) let A J be a Hilbert basis of L J 0 ∩ Nef(X) and take c = max When J = ∅, we will just take A ∅ = B, then one sees easily a τ ≤ c for all τ ∈ I. Moreover, by [1, Theorem 2.12] we can find a finite set of ample line bundles {L (i) } i such that (5.17) Amp(X) = i (L (i) + Nef(X)) ∩ Nef(X). For each τ ∈ I we will take Proposition 5.2. The map (1.1) is surjective for any ample line bundle L 1 and nef line bundle L 2 on a projeictive toric variety with dimension d if so is true when L 1 , L 2 satisfy L i .C τ ≤ #Idc 2 for any τ ∈ I, i = 1, 2. Proof. For any nef line bundle L, let P be the property that L ≺L ⇒ L ≺ cL . Then by our proofs of Proposition 3.1 and Proposition 3.3, for any reduced linear subset L J b , the linear subset L J b ′ will satisfy the condition of Lemma 2.5 if where ǫ is any positive number. Then by our argument in (5.16) (note that L J0 b0 = L ∅ 0 ) for any reduced linear subset L J b = L + L J b turning up in this dimension lowering process we have For now on till the end of this section, we denote by X a smooth projective toric threefold. For ρ ∈ Σ X (1) we take r ρ = min Proposition 5.3. Let L 1 be an ample and L 2 a nef line bundle on X, if Proof. For given L 2 let P be the property that coker Φ L1,L2 = 0 for L 1 ∈ Amp(X). Similar as the proof of Proposition 5.2, it suffices to find an explicit b ′ for a reduced linear subset L J b in Amp(X). Following the notations in the proof of Proposition 4.5 for any ρ ∈ Σ X (1)\Σσ(1) we can find τ ∈ I\J such that u τ , ρ = 0. By our proof there for (4.11) to be valid we only need where δ ∈ ( 3 4 , 1). Then we can take otherwise. 
To apply (5.16) we need to start from a reduced subset L (i) + Nef(X) (see (5.17)) hence accordingly Note that the property P is not satisfied by an ample line bundle L 1 only if L 1 is contained in the linear subsets with zero dimension. That is, for any τ ∈ I and δ ∈ ( 3 4 , 1), from which we can deduce the conclusion. Remark 5.4. An estimation for the size the finite subset of Amp(X) in Theorem 2 can also be obtained by combining the results of the propositions above. As for Proposition 4.1, to explicite a uniform bound of dim coker Φ L1,L2 would be challenging even for those L 2 = L k 3 with L 3 a given nef line bundle and k ≥ 1. On the other hand, in this setting the surjecitivity of Φ L1,L k 3 for all k ≥ 1 would imply the surjectivity of the following map for any full-dimensional coneσ of Σ L3 . The surjectivity of this map can be verified by using formal sum of characters, however a moderate bound for the lattices that might fall out of the image seems still desirable allowing for the amount of computation. Covering of Smooth Lattice Polygons Theorem 3. Let L 1 ≺ L 2 be two ample line bundles on a smooth projective toric surface, then the following map is surjective as long as L 1 is not O P 2 (1) Let u be a given nonzero vector, recall that l u (x) is the line through x with direction u. For a convex polygon or an infinite convex polygonal region P such that P ∩ l u (x) is a segment for all x ∈ P , we will denote by L P u or L u the following function P → R ≥0 , x → P ∩ l u (x) and by L u (P ) the number max x∈P L u (x). Let P be a convex polygon, π u : P → R be the projection of P along the direction of u. Then π u (P ) = [a, b] ⊂ R and L P u can be regarded as a function defined on [a, b] by letting L P u (t) = L P u (x), where t ∈ [a, b] and t = π u (x) for some x ∈ P . By investigating the Steiner symmetrization [7, 9.1] of P along the direction of u, one sees easily L P u (t) is a convex unimodal function, which means this function attains a unique maximum in the interval where it is defined. If L u (P ) ≥ u , then we have a nonempty interval Moreover, if L u (P ) = u and there exists exactly one chord of P paralleling with u and with length u , then c = d. Otherwise, we have c < d. Following the notations above, we have the following Definition 6.1. A contact point of P with respect to u (or of u for brevity) is a point x ∈ ∂P ∩ ∂(u + P ) such that π u (x) = c or π u (x) = d and x − ǫu / ∈ u + P for any ǫ > 0. It is easy to see from our definition P has one or two contact points in the direction of u depending on c = d or c < d. The lemma below should be clear from our definition and we omit its proof. Lemma 6.2. Let x, y be the two contact points P in the direction of u, then P ∩ (u + P ) is contained in the closed region bounded by the lines l u (x) and l u (y). For a parallelogram P , each pair of its opposite sides defines an infinite region (including the boundary) bounded by the pair of lines containing these sides. A point set is said to be bounded by one direction of P if it is contained in such a region. More specifically, a point set is said to be bounded by direction AB (or CD) of a parallelogram ABCD if it is contained in the region bounded by the lines AB and CD. A point set is said to be bounded by two directions of P if it is contained in neither of these infinite regions but their union. The following lemma is easy to verify and we omit its proof. Lemma 6.3. 
Given two parallelograms all of whose vertices stay on the boundary of their convex hull, then each of them is bounded by one or two directions of the other. If furthermore they have no common points, their vertices are contained in a pair of paralleling lines. To determine the relative positions of two chords of P will be essential in the proof of Theorem 3. Next we show this question has a simple answer under suitable conditions and that is enough for our purpose. Before stating the result we make some preparations. Given a convex polygon P in the plane and a vertex x 0 of it we choose a coordinate system such that x 0 is the unique lowest point of P . In the following the slopes of vectors or segments are measured in this chosen coordinate system. Let s 1 , s −1 be the two edges of P which share x 0 as their common endpoint. The consecutive edges of P starting from s 1 (resp. s −1 ) will be denoted by s 2 , s 3 , · · · (resp. s −2 , s −3 , · · · ). Let µ i > 0 be the slope of s i , we assume µ 1 > µ −1 , µ i > µ i+1 when i > 0 and µ i < µ i−1 when i < 0. Recall that P can be represented as where H + si is the half plane which is cut out by the line containing the edge s i and contains P . For each pair of positive integers m, n such that µ m > µ −n , we define a polygonal region P m,−n containing P as follows Obviously we have P ⊂ P m,−n ⊆ P m ′ ,−n ′ for m ≥ m ′ , n ≥ n ′ . By abuse of notation, we use s m (resp. s −n ) for the ray, which is part of the boundary of P m,−n , with initial at s m ∩ s m−1 (resp. s −n ∩ s −n+1 ) and containing the edge s m (resp. s −n ) of P . Next we consider the chords of the region P m,−n and P with the same directions and lengths. Firstly let u be a vector such that L P u (x 0 ) = 0, one sees easily the following function L is strictly monotonically increasing when x moves upwards along i≥1 s i or i≤−1 s i . Now suppose P has a chord with direction u and length a > 0. Then it is easy to see we can find minimal m and n such that a chord of P m,−n has direction u, length a and at the same time also a chord of P , i.e. the endpoints of this chord are contained in P . Furthermore, for any m ′ , n ′ such that m ′ ≥ m and n ′ ≥ n this chord is also contained in P m ′ ,−n ′ . On the other hand, by the minimality of m and n, for P m ′ ,−n ′ with m ′ < m or n ′ < n the chord of P m ′ ,−n ′ with direction u and length a would be strictly on the lower side of the common chord of P and P m,−n as mentioned above. Now we can state our Lemma 6.4. Let u, v be two nonzero vectors, x u , x v (resp. y u , y v ) be points on s −1 (resp. s 1 ), which are contained in P 1,−1 . Suppose that 1) the directions of the chords x u y u and x v y v are u and v respectively and x u y u is lying on the upper right side of Then we can find m, n such that P m,−n and P share two chords with directions u, v, and lengths x u y u , x v y v respectively. Moreover, the chord with direction u still lies above the chord with direction v. Proof. By our argument above, we can find a pair of positive integers m, n such that P m,−n shares with P a chord whose direction is v and length x v y v . Moreover, we assume m, n are minimal with this proerpty. To prove the lemma, we only need to show the chord of P m,−n with direction v and length x v y v lies on the lower left of the chord with direction u and length x u y u . We will prove this claim by induction, i.e. 
for any k ≥ 1, l ≥ 1, the chord of P k,−l with direction v and length x v y v lies on the lower left of the chord with direction u and length x u y u . By condition the claim is true for k = l = 1. Assuming now it has been proved for P k,−l , next we will prove it for P k,−l−1 and P k+1,−l . Let x k,−l u ∈ −l≤i≤−1 s i , y k,−l u ∈ 1≤i≤k s i (resp. x k,−l v ∈ −l≤i≤−1 s i , y k,−l v ∈ 1≤i≤k s i ) be the endpoints of the two chords of P k,−l as described in the statement of the claim. Proof of Theorem 3. The proof is given in three steps. Step 1. Reduction to Two Cases. Recall that all smooth projective toric surfaces can be obtained by successively blowing up the fixed points under the torus action. We will prove the proposition by induction on the Picard number r of X. Firstly we observe under the condition of Theorem 3, the surjectivity of (6.1) can be reformulated as the lattice translations of P L1 which are contained in P L2 can cover P L2 . The statement above is easy to verify when r = 2. Now assume it is proved for all toric surfaces with Picard number r = k, next we will prove it for r = k + 1. Moreover, we will choose a basis u h , u v of M R defined by u h , α = u v , β = −1, u h , β = u v , α = 0. For each 0 ≤ i ≤ k and 0 ≤ j ≤ l, we take u αi and u βj to be the primitive integral vectors with non-negative coefficients under the basis {u h , u v } such that u αi , α i = 0 and u βj , β j = 0 respectively, particularly The following lemmas will be used later and proof of the first is omitted. Lemma 6.6. Following notations above, P L1 has two contact points in the directions of u αi and u βj for each 0 ≤ i ≤ k and 0 ≤ j ≤ l. Proof. For 0 ≤ i ≤ k − 1 and 0 ≤ j ≤ l − 1 the conclusion is obviously true as one sees easily If L u α k (P L1 ) = u α k or L u β l (P L1 ) = u β l , then we have −σ(1) ⊆ {−α, −β}. In particular, e α k (P L1 ) and e β l (P L1 ) share a common endpoint, say x. Then by Lemma 4.2, we have u α k + u β l + x ∈ P L1 , hence the conclusion is also true in this case. Step 2.1. The Set of Translation Vectors. By our choice of the integral basis of M , the polygons corresponding to L i , i = 1, 2, can be obtained from those corresponding to L i (D σ ) by cutting off the shading region, as is shown in the figure below. There are two possibilities for the relative positions of the polygons corresponding to L 1 and L 2 . S For the case on the left, there is nothing to prove. For the case on the right, we need to show the shading region S of P L1(Dσ ) is contained in the union of some lattice translations of P L1 , each of which is contained in P L2 . Let A 0 , A ′ 0 be the endpoints of e α+β (P L1 ), if L u α i (A 0 ) ≥ u αi , L u α i (A ′ 0 ) ≥ u αi for some 0 ≤ i ≤ k, then we can deduce S ⊂ u αi + P L1 . We have similar result when α i is replaced by β j for some 0 ≤ j ≤ l. In the following we will work under the assumption below (6.2) S cannot be coverved by either a single u αi + P L1 or u βj + P L1 . Let p 0 ∈ P L −1 1 ⊗L2 ∩ M be the lattice point corresponding to P L1 . In order to find the translations of P L1 which can cover S, we will first prove the following Claim 6.7. There exists lattice points q, q ′ ∈ P L −1 1 ⊗L2 ∩ M such that for any point x on the segment qq ′ (6.3) where p 0 qq ′ is the triangle with vertices p 0 , q and q ′ . We first prove this claim under the condition p 0 / ∈ ∂P L −1 1 ⊗L2 . The proof for the case when p 0 ∈ ∂P L −1 1 ⊗L2 will be given at the end of Step 2.1. 
If λ αi / ∈ Z and particularly when λ αi < 1, the translation of any lattice point of P L1 under λ αi u αi is not a lattice point. Then ρ ∈ Σ X (1) satisfying the relation above is unique, we will denote it by ι(α i ). In the same way we can define ι(β j ). Since the Picard number of the base toric surface is at least three, there exist 0 which is excluded by our assumption at the beginning (6.2). Then the condition (6.6) specializes to (6.7) 0 < λ α k 0 = λ β l 0 < 1. Otherwise ι(α i ) ∈ {α + β, β 0 , β 1 , · · · , β l } for all 0 ≤ i ≤ k. It can be seen easily from the figure above on the right there exist no lattice points of P L −1 1 ⊗L2 lying in the intersection of P L −1 1 ⊗L2 with the angular region bounded by r uα i (p 0 ) and r uα i+1 (p 0 ), hence the intersection of P L −1 1 ⊗L2 with the angular region bounded by r u α 0 (p 0 ) and r u α k (p 0 ) either. On the other hand, we have already found 1 ≤ l 1 ≤ l such that β l1 satisfies (6.9). As a consequence, the slope of u β l 1 is smaller than that of u α k . Then we can deduce the slope of u k is positive hence P L1 has a unique lowest point. As the slope of u β l is strictly larger than that of u α k we must have l 1 < l. Therefore, we can deduce L u β l 1 (P L1 ) > u β l 1 . Then P L1 has two contact points in the direction of u β l 1 , particularly we can find a unique pair of points y 1 , y 2 ∈ ∂P L1 \e β l 1 (P L1 ) such that − − → y 1 y 2 = u β l 1 and y 2 is a contact point of P L1 with respect to u β l 1 . Next we locate the points y 1 and y 2 on ∂P L1 . Firslty by (6.2) we have S ⊂ u β l 1 + P L1 , hence y 2 / ∈ 0≤i≤k e αi (P L1 ). By our argument above, the slope of u β l 1 is small than that of u αi , 0 ≤ i ≤ k, then y 1 / ∈ 0≤i≤k e αi (P L1 ). For slope reasons y 2 is contained in 0≤j≤l1−1 e βj (P L1 ) or e α+β (P L1 ) and y 1 must be contained in l1+1≤j≤l e βj (P L1 ), which would imply − − → y 1 y 2 > e β l 1 (P L1 ) ≥ u β l 1 , a contradiction. As a consequence, (6.8) is proved. Next we prove (6.3). For saving notations, we just assume λ αi > 1, λ βj > 1 from now on. By Lemma 6.6 u αi and u βj satisfy one of the following conditions. 1) L u α i (P L1 ) > u αi or L u β j (P L1 ) > u βj ; 2) L uα i (P L1 ) = u αi , L u β j (P L1 ) = u βj and P L1 has two contact points with respect to any of u αi and u βj . For the latter, we have i = k, j = l, u α k−1 parallels with u β l and u α k parallels with u β l−1 . Then by (6.8) and (6.9) we have S ⊂ u α k + P L1 and S ⊂ u β l + P L1 . Next we assume L u β j (P L1 ) > u βj and the case when L u α i (P L1 ) > u αi can be treated similarly. Take A, B ∈ e αi (P L1 ) such that AB = u αi . If e βj (P L1 ) > u βj , then we can find a segment on e β l 1 (P L1 ) longer than u βj . Otherwise, by assumption we can find a chord of P L1 near e βj (P L1 ) such that it parallels with but is longer than u βj and has empty intersection with AB. In any of the two possibilities we will denote this longer segment by CD. The figures below exhibit these segments and their convex hull under the conditions that AB < CD together with the slope of CD is lareger and smaller than that of AB respectively. The case when AB ≥ CD is omitted as the proof is similar. In these two cases we take D ′ and C ′ on CD such that CD ′ = C ′ D = u βj . From the figures below one sees easily for any point x 1 ∈ D ′ E and x 2 ∈ C ′ E we have Then (6.3) can be proved by taking q = p 0 + u αi and q ′ = p 0 + u βj . 
HE XIN By combining the unimodality of the function L−−→ p0pi (x), x ∈ P L1 and the relation between L−−→ p0pi (A 0 A ′ 0 ) and − − → p 0 p i , one checks easily each translation vector belongs to one of the following types. Let − → p 0 q = au h + bu v , a ≥ 0, b ≥ 0 be a nonzero vector, for convenience in our later argument we will say − → p 0 q is of type g) (resp. type h)) if P L1 has one (resp. zero) contact point with respect to − → p 0 q. Next we will label the contact points for each vector. Let then one can define a real-valued function on γ i (P L1 ) simply by assigning L−−→ p0pi (x) to t for t = γ i (x) and this function is still unimodal. Let A i , A ′ i be the two contact points of P L1 in the direction of − − → p 0 p i we assumẽ Proof of Claim 6.7 for p 0 ∈ ∂P L −1 1 ⊗L2 . If p 0 ∈ e α+β (P L −1 1 ⊗L2 ), then there is nothing to prove as S is not contained in P L2 . If p 0 ∈ ρ∈−σ∩ΣX (1) e ρ (P L −1 1 ⊗L2 ) and not an endpoint of this polygonal chain, then one sees easily ι(α i ) and ι(β j ) are well defined for all 0 ≤ i ≤ k and 0 ≤ β ≤ l. Then the arguement above for p 0 / ∈ ∂P L −1 1 ⊗L2 can still serve here. Next we will prove the case when p 0 is contaiend in 0≤j≤l e βj (P L −1 1 ⊗L2 ). The case when p 0 ∈ 0≤i≤k e αi (P L −1 1 ⊗L2 ) can be proved similarly and we omit it. If p 0 ∈ e β (P L −1 1 ⊗L2 )\e α+β (P L −1 1 ⊗L2 ), then we have S ⊂ u h + P L1 . Besides, if e β (P L −1 1 ⊗L2 ) = 0 and p 0 lie at the top position of the 1≤j≤l e βj (P L −1 1 ⊗L2 ), then there will be nothing to prove since S is not contained in P L2 . Furthermore, if p 0 lies at the lowest position of P L −1 1 ⊗L2 , one sees easily (6.8) and (6.9) are valid for p 0 automatically, hence our previous proof will provide a translation vector with which we can cover S as indicated by (6.4). By the argument above, we can find some 1 ≤ j ≤ l and p 1 ∈ e βj (P L −1 1 ⊗L2 ) such that − − → p 0 p 1 = u βj , as is shown in the figure on the left below. If the open quadrant on the lower right of p 0 contains a lattice point of P L −1 1 ⊗L2 , then one checks easily p 0 + u h ∈ P L1 −1 ⊗L2 hence we are done. Next we will assume all lattice points of P L −1 1 ⊗L2 below l u h (p 0 ) are contained in the cone bounded by r −u h (p 0 ) and r −uv (p 0 ). Let R be the open angular region bounded by r−−→ p0p1 (p 0 ) and r u h (p 0 ), then we claim there must be some lattice point p in P L −1 1 ⊗L2 ∩ R such that − → p 0 p is of type c) or d) or e) if − − → p 0 p 1 is not of type c). If the contrary is true, then for all p ∈ P L −1 1 ⊗L2 ∩ R ∩ M the type of − → p 0 p must be one of the following: a), b), f), g) and h). . Since the slope of − − → p 0 p i is larger than that of − −−− → p 0 p i+1 , the two parallelograms are separate. By Lemma 6.3, their vertices are contained in a pair of paralleling lines. Moreover, each of the two lines contains four points of ∂P L1 hence an edge of P L1 . Note that − − → p 0 p i , − −−− → p 0 p i+1 cannot both parallel with these lines. Therefore, we have at least a pair of points, say A i and A ′ i , lying on one of these lines. However, that would imply one of A i and A ′ i is not a contact point as these two parallelograms have empty intersection, a contradiction. The proof is the same as the case e) i -a) i+1 hence omitted. form a basis of M (these vectors are neither horizontal nor vertical by (6.8) and (6.9) one checks easily We will denote these chords by A 0 B 0 and A ′ 0 B ′ 0 respectively. 
, in order that the four endpoints of these chords are contained on the boundary of their convex hull, C must be contained in both of them. Then B 0 , B ′ 0 are contained in the interior of the convex hull of A 0 , A ′ 0 , A i , A ′ i+1 and C, which is impossible. The proofs for the other cases are similar and we omit them. Therefore the types of The case b) Firstly we consider the case − − → , then one sees easily J is lying between A ′ 0 and I. Moreover, one computes A ′ 0 I = 1 a−c ≤ 1, which implies A ′ 0 I ⊆ e α (P L1 ). Then as J, B ′ i+1 and B ′ i are also contained in ∂P L1 , B ′ i+1 B ′ i is contained in some edge of P L1 . However, this would imply I lies outside P L1 , a contradiction. HE XIN p 0 p i , which contradicts with the condition that − − → p 0 p i is of type a). By abuse of notation, let is lying between C and A 0 as − − → p 0 p i is of type a). Moreover, B ′ i is on the boundary of P L1 , it cannot be contained in the interior of the triangle A i+1 B i+1 C. If it is contained in A i+1 B i+1 , then A i+1 B i+1 would be contained in an edge of P L1 as this chord contains three different points on the border of P L1 , which is impossible. Consequently, the two chords A ′ i B ′ i and A i+1 B i+1 must intersect and the intersection point is different from their endpoints. a+b−c−d ≤ √ 2, hence G and D are contained in the edge e α+β (P L1 ). In particular, this would imply B i+1 D is contained in an edge of P L1 with D one of its endpoints. On the other hand, note that D lies strictly on the upper left of F as A ′ i cannot coincide with A 0 since − − → p 0 p i is of type a), then we would deduce G lies outside P L1 , a contradiction. A 0 Step 3. Proof of (I2) The notations for some points and vectors in the following are different from those used in Step 2. The proof for the cases when −α / ∈ Σ X (1) or −β / ∈ Σ X (1) are the same and next we will assume −β / ∈ Σ X (1). Under the assumption above, by Lemma 6.5 χ + P L1 has a unique lowest point and we will denote it by C. Let u −1 , u 1 (in counter-clockwise order) be the two primitive edge vectors of χ + P L1 with their common initial point at C. Let C h = −u h + C, C v = −u v + C and D = l u1 (C h ) ∩ l u−1 (C v ), then one checks easily D is also a lattice point. Indeed, let (6.14) We claim D and the translation − − → CD + χ + P L1 are contained in P L2 . Let P be the convex hull of −u h + χ + P L1 , −u v + χ + P L1 together with D, then by our argument above P is a lattice polygon and its normal fan is just Σ X . Therefore, to prove our claim it suffices to show D is contained in P L2 . LetC be the lowest vertex of P L2 ,C −1 = l u1 (C h ) ∩ 0≤i≤k e αi (P L2 ),C 1 = l u−1 (C v ) ∩ 0≤j≤l e βj (P L2 ). As is shown in the figure on the left below, D lies inside the convex hull of C h , C v ,C −1 andC 1 , hence contained in P L2 . Now by (6.15), as is shown by the figure on the right below, the lattice points corresponding to −u −1 + χ and −u 1 + χ are contained in the convex hull of the lattice points −u h + χ, −u v + χ, − − → CD + χ as there are no other lattices points in the triangle with vertices χ, −u h + χ and −u v + χ. Therefore we have −u −1 + χ, −u 1 + χ ∈ P L −1 1 ⊗L2 since the same is true for −u h + χ, −u v + χ and − − → CD + χ. For saving notations, we will drop χ from χ + P L1 from now on. Next we label some contact points of P L1 that will be used in the sequel. 
As is shown in the figure on the right below, let A, B be the endpoints of e α+β (P L1 ), then one of the contact point with respect to −u h is A −h = −u h + A. The other contact point will be denoted by E, as is shown in the figure on the left below. Similarly, we let B −v = −u v + B and F be the contact points of P L1 in the direction of −u v , A −v = −u v + A and G be the contact points of −u h + P L1 in the direction of u h − u v . In the figure on the left below, we let C −1 = −u −1 + C and C 1 = −u 1 + C. Note that it is by no means self-evident that E (or F ) should reside in a position lower than G. However, we will prove this is indeed the case. Let P 1,−1 be the angular region bounded by r u −1 (C) and r u 1 (C). By a short computation, one finds easily the chord of P 1,−1 with direction u h − u v and length u h − u v is lying over the chord with direction u h (resp. u v ) and length u h (resp. u v ), then by Lemma 6.4 G lies over both E and F . Now one sees easily the complement of (−u h + P L1 ) ∪ (−u v + P L1 ) in P L2 is contained in the polygon represented by the curvilinear quadrilateral GECF , hence we only need to prove it is contained in (−u −1 + P L1 ) ∪ (−u 1 + P L1 ). As can be seen from the figure on the left above, it suffices to show 1) −u 1 + P L1 and −u h + P L1 (resp. −u v + P L1 ) has a common point lying between E and G (resp. C v and F ). 2) −u −1 + P L1 and −u h + P L1 (resp. −u v + P L1 ) has a common point lying between C h and E (resp. F and G); The proof of the two claims are similar and we will only provide the first one. Let P 1,−1 be the cone bounded by r u −1 (C) and r u 1 (C). One sees easily the chord of P 1,−1 with direction u 1 − u h and length u 1 − u h (resp. direction u 1 − u v and length u 1 − u v ) shares an endpoint with but lies on the upper side (resp. lower side) of the chord with direction u h and length u h (resp. direction u v and length u v ). Furthermore, by using (6.14) and the fact that ad − bc = 1 one checks easily the chord of P 1,−1 with direction u v − u h and length u v − u h lies above the chord with direction u 1 − u h and length u 1 − u h . Then 1) follows from Lemma 6.4, hence the proof of (I2) and Theorem 3 is complete. Following notations in Step 2.2, for any α i , 1 ≤ i ≤ k and β j , 1 ≤ j ≤ l there are always two contact points of P L1 with respect to the corresponding primitive edge vector. In general these vectors are not necessarily to be of type c). However, by our proof for the case when p 0 ∈ ∂P L −1 1 ⊗L2 we can deduce this is indeed the case in certain circumstance. Proposition 6.9. For u αi (resp. u βj ) if there exists u βj (resp. u αi ) such that the following condition are satisfied, then it is of type c). i) the slope of u αi is larger than that of u βj ; ii) u αi < u βj (resp. u βj < u αi ); iii) u αi and u βj forms a basis of M . Proof. We only prove the conclusion for u βj and the other case can be proved similarly. Following the notations of in the proof of Claim 6.7 for p 0 ∈ ∂P L −1 1 ⊗L2 , under the condition of the proposition, there exists only one lattice point in P L −1 1 ⊗L2 ∩ R hence u βj = − − → p 0 p 1 must be of type c). Remark 6.10. As a byproduct of our proof in Step 2.2, the conclusions (6.8) and (6.9) can be reformulated as follows. 
Let P, Q be lattice polygons such that Q is smooth and the normal fan of Q is a refinement of that of P. Then for any lattice point p in the interior of P, there always exist lattice points q 1 , q 2 ∈ P (which might be the same) such that − → pq 1 and − → pq 2 are parallel to two edges of Q. It might be interesting to know whether similar results still hold in three dimensions.
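A minimal worked example of this reformulation, supplied here purely for illustration (it does not appear in the text), can be written down for the projective plane. Take
\[
Q=\operatorname{conv}\{(0,0),(1,0),(0,1)\},\qquad P=3Q=\operatorname{conv}\{(0,0),(3,0),(0,3)\},
\]
so that Q is smooth and its normal fan equals, hence refines, that of P. For the interior lattice point p = (1,1), the lattice points q_1 = (2,1) and q_2 = (1,2) both lie in P, and the vectors \overrightarrow{pq_1}=(1,0) and \overrightarrow{pq_2}=(0,1) are parallel to two edges of Q, as asserted.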
2023-08-25T06:42:19.324Z
2023-08-24T00:00:00.000
{ "year": 2023, "sha1": "f071c04e09bf7e40cdcb053e7d10cf08bb637a43", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f071c04e09bf7e40cdcb053e7d10cf08bb637a43", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
2134817
pes2o/s2orc
v3-fos-license
Calcium-dependent Gating of MthK, a Prokaryotic Potassium Channel MthK is a calcium-gated, inwardly rectifying, prokaryotic potassium channel. Although little functional information is available for MthK, its high-resolution structure is used as a model for eukaryotic Ca2+-dependent potassium channels. Here we characterize in detail the main gating characteristics of MthK at the single-channel level with special focus on the mechanism of Ca2+ activation. MthK has two distinct gating modes: slow gating affected mainly by Ca2+ and fast gating affected by voltage. Millimolar Ca2+ increases MthK open probability over 100-fold by mainly increasing the frequency of channel opening while leaving the opening durations unchanged. The Ca2+ dose–response curve displays an unusually high Hill coefficient (n = ∼8), suggesting strong coupling between Ca2+ binding and channel opening. Depolarization affects both the fast gate by dramatically reducing the fast flickers, and to a lesser extent, the slow gate, by increasing MthK open probability. We were able to capture the mechanistic features of MthK with a modified MWC model. I N T R O D U C T I O N Ion channel proteins open and close their pores in response to various stimuli in order to allow fl ow of ions in or out of cells. Ligand-gated ion channels are modulated when signaling molecules such as Ca 2+ , cAMP, cGMP, acetylcholine, and serotonin bind to specialized regions within the channel. Numerous eukaryotic ion channel proteins fall into this category, including Ca 2+modulated channels: BK, SK, and IK channels (for review see Vergara et al., 1998), cyclic nucleotide-gated channels (CNG and HCN channels) (Kaupp and Seifert, 2001;Kaupp and Seifert, 2002;Matulef and Zagotta, 2003;Zagotta, 1996), acetylcholine receptor channels, and serotonin receptor channels (Devillers-Thiery et al., 1993;Kotzyba-Hibert et al., 1999). Although extensive functional data has been collected for these channels and great strides have been made towards describing the mechanism of ligand gating, no high resolution structural data is yet available for eukaryotic ligandgated channels. The small prokaryotic channel MthK, from Methanobacterium thermoautotrophicum, is the fi rst ligand-gated channel whose structure was determined to angstrom resolution by X-ray crystallography (Jiang et al., 2002a,b). The channel consists of a two-transmembrane region, with a GYGD-containing pore region that undoubtedly places it in the tetrameric K + channel family, and contains an extended cytoplasmic Ca 2+ binding region, called the regulator of K + conductance (RCK) domain, which bears homology to a putative Ca 2+ binding region of eukaryotic Ca 2+ -activated K + (BK) channels. The functional MthK channel has a gating ring composed of eight such RCK domains, four intrinsic RCK domains integral to the channel and four identical, soluble, cytoplasmic RCK subunits. Soluble RCKs and channel RCKs are intricately connected to form the functional unit in the four fold symmetric ligand binding structure (Jiang et al., 2002a). The soluble RCK subunit is a product of the same gene that encodes the MthK channel, via an internal initiation methionine at position 107. Eight Ca 2+ ions appear in the X-ray structure of the channel, bound specifi cally to the cytoplasmic RCK domains. In addition, the channel opens in the presence of millimolar Ca 2+ . Thus, MthK has served as a model for eukaryotic BK channels and as an example of an open K + channel pore (Jiang et al., 2002a). 
However, in some respects, MthK behavior differs from that of BK channels. The millimolar concentration of Ca 2+ reported to open the prokaryotic MthK channel is three to four orders of magnitude larger than that required to open the eukaryotic Ca 2+ -dependent channels SK and BK (Vergara et al., 1998). Additionally, the unitary current voltage (I-V) profi le of MthK (Jiang et al., 2002a) differs from BK (Barrett et al., 1982) in that the inward single-MthK channel current is several fold higher in conductance than its outward current, while for BK, the inward and outward currents have similar amplitudes. Consequently, we fi nd ourselves with a large set of eukaryotic ligand-gated channels whose functions have been studied extensively but whose structures are unknown and, in contrast, with a prokaryotic channel whose structure has been solved but whose functional characterization lags behind. In this article we attempt to even out the structurefunction balance for MthK by investigating its gating characteristics in detail. Upon examination of purifi ed single-MthK channels in planar lipid bilayers, we found that MthK is markedly and cooperatively activated by Ca 2+ in the millimolar range with an unusually high Hill coeffi cient of ‫. 8ف‬ The increase in open probability (Po) of the channel is due mainly to an increase in the frequency of opening rather than an increase in the mean open time, unlike other eukaryotic Ca 2+ -activated K + channels such as BK, which stay open longer and open more frequently upon Ca 2+ application. Ca 2+ is a specifi c opener of MthK channels, as Mg 2+ does not mimic the effects of Ca 2+ on MthK open probability. Transbilayer voltage also regulates channel opening by a different mechanism than Ca 2+ . While changing the membrane potential does not signifi cantly alter MthK open probability (in contrast to BK channels), depolarization greatly lengthens the opening durations. In conclusion, although there must be some similarities in their mechanism of Ca 2+ activation, as MthK is also a K + channel opened by Ca 2+ binding to RCK domains, MthK is not a faithful model for BK channel gating. MthK Channel Purifi cation and Reconstitution into Liposomes The MthK channel gene (a gift from R. MacKinnon, Rockefeller University, New York, NY), cloned in pQE70 vector (QIAGEN) with a carboxy-terminal hexahistidine tag was transformed into XL1-Blue cells and grown at 37ºC in Luria-Bertani media with ampicillin (200 μg/ml) selection. MthK expression was induced by incubating with 400 μM IPTG (Sigma-Aldrich) for 3 h at a cell density (OD 600 ) of 1. Protein was purifi ed from the cell extract according to previously described protocols (Jiang et al., 2002a). In brief, bacterial pellets were resuspended and sonicated with a probe sonicator at 50-75% power in 50 ml breaking buffer (100 mM KCl, 50 mM Tris, pH 7.6) with protease inhibitors (one tablet of Complete EDTA-free Cocktail Inhibitors and 0.17 mg/ml PMSF; Roche). The membranes were extracted for 3 h at room temperature in 50 mM decyl maltoside (DM; Anatrace) and then centrifuged at 40,000 g for 45 min at room temperature. The supernatant was applied to a Co-column (Talon, 1-2 ml slurry/liter culture) pre-equilibrated in buffer B (100 mM KCl, 20 mM Tris, 5 mM DM, pH 7.6). The column was washed with 30 mM imidazole, and MthK protein was eluted with 300 mM imidazole (Fluka) in buffer B. Immediately after purifi cation, 0.5 U Thrombin (Roche) per 3 mg of protein was added and incubated for 5 h to cleave the His tag. 
MthK was then further purifi ed on a Superdex-200 (GE Healthcare) gel fi ltration column in buffer B and concentrated in 50,000 MWCO Amicon concentrators (Millipore). Purifi ed MthK channels were reconstituted into liposomes (Heginbotham et al., 1999;Nimigean and Miller, 2002) made of 3:1 1-palmitoyl-2-oleoyl phosphatidylethanolamine (POPE):phosphatidylglycerol (POPG) synthetic lipids (Avanti Polar Lipids). The lipids were removed from chloroform and solubilized in 34 mM CHAPS (Anatrace) in S buffer (450 mM KCl, 20 mM HEPES, and 4 mM NMG) at a concentration of 10 mg/ml. The protein was mixed with the solubilized lipids (1-20 μg protein/mg of lipid) and applied to a 20 ml Sephadex G50-fi ne (GE Healthcare) column equilibrated in S buffer. The turbid eluates that contain the detergent-free protein liposomes were aliquoted, fl ash frozen in liquid N 2 , and stored at −80 ºC. Single-channel Recording in Lipid Bilayers For channel recording, we used a horizontal lipid bilayer setup (Chen and Miller, 1996;Heginbotham et al., 1999) with two aqueous chambers separated by a partition (transparency fi lm, 5 × 5 mm) with a small hole in the middle (50-90 μm in diameter) made with a butterfl y pin. The lipids, POPE:POPG 3:1 (10 mg/ml, resuspended in decane; Sigma-Aldrich) were painted on this hole to form lipid bilayers that separate the two aqueous chambers, connected by agar bridges (150 mM KCl, 2% agarose) to the recording electrodes. Formation of a bilayer was monitored electrically with a pulse protocol in Clampex (Axon Instruments). The top (cis) chamber was grounded. Using charybdotoxin (CTX), channels whose extracellular side faced the trans chamber were blocked, thus allowing the study of only the channels with extracellular side facing the cis chamber. Single channel currents were recorded in symmetrical 200 mM K + solutions (120 mM KCl, 80 mM KOH, 10mM HEPES, pH 7.0) with an Axopatch 200 (Axon Instruments) sampled at 10 kHz and low-pass fi ltered at 2-5 kHz. Intracellular (trans) solutions contained 100 nM charybdotoxin, a gift from C. Miller (Brandeis University, Waltham, MA), manufactured according to protocol (Park et al., 1991). The Ca 2+ concentration was varied on the intracellular side from 0 to 10 mM, while neither Ca 2+ nor EDTA was added to the extracellular side. Solutions deemed 0 Ca 2+ or 0 Mg 2+ contained no added divalents and 1 mM EDTA, a known divalent ion chelator. Open Probability Calculations Data were analyzed using Clampfi t 9.0 (Axon Instruments). All open and closed events longer than 0.18 ms were measured using the Single Channel Search Module, and the open probability was determined. Due to uncertainty about the total number of channels per bilayer due to low open probability, we looked at the average number of open channels in the bilayer, nPo, defi ned as where Pi is the open probability of the particular level, i, with each level corresponding to the number of open channels. In addition, due to the variable open probabilities (Po values ranging from 0.05 to 0.5) among MthK channels in identical conditions, we defi ned a relative activity (r a ): where nP o,max is the maximal open probability for a given bilayer at 5 mM Ca 2+ (at −200 mV). We used r a to track changes in nPo relative to this maximum activity as Ca 2+ was varied and to average data from different channels. Each experiment began by recording in 5 mM Ca 2+ , followed by perfusion of the trans chamber with a lower Ca 2+ concentration. 
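Both quantities defined above reduce to a short calculation; the sketch below is our own illustration (not part of the original Clampfit analysis), assuming an idealized event list of (level, duration) pairs in which the level is the number of simultaneously open channels. It computes nPo as the time-weighted sum Σ i·P_i and the relative activity r_a of Eq. 2.

```python
# Sketch: nPo and relative activity r_a from an idealized single-channel record.
# Assumed input: (level, duration_ms) pairs; "level" = number of open channels.
def npo(events):
    total = sum(dur for _, dur in events)
    levels = {lvl for lvl, _ in events}
    # P_i = fraction of total time spent with exactly i channels open
    return sum(
        i * sum(dur for lvl, dur in events if lvl == i) / total
        for i in levels
    )

def relative_activity(events, npo_max):
    """r_a = nPo / nPo,max, with nPo,max taken at 5 mM Ca2+ and -200 mV (Eq. 2)."""
    return npo(events) / npo_max

# Hypothetical record: dwell times (ms) at 0, 1, and 2 open-channel levels.
record = [(0, 900.0), (1, 80.0), (2, 20.0), (0, 500.0), (1, 50.0)]
print(npo(record))                     # average number of open channels
print(relative_activity(record, 0.5))  # relative to an assumed nPo,max of 0.5
```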
For several bilayers, the chamber was reperfused to the initial Ca 2+ concentration to ensure no gain or loss of channels. For each bilayer, recordings at −200 mV were analyzed in detail (unless indicated otherwise). Similar results are obtained when a different negative voltage is selected (unpublished data). The dose-response data was fi t with the Hill equation, where K d is the concentration at half maximal activation, r a,max is the maximal relative activity, and n is the Hill coeffi cient. Parameters were free-fi t, with the exception of the Hill coeffi cient, which was either free or constrained to 8, as described in Results (see Fig. 3). Burst Kinetics A critical time has to be determined that separates the gaps between bursts from the gaps within bursts. The closed dwell times are log-binned (McManus et al., 1987) and the distributions are plotted and fi tted (maximum likelihood) with sums of exponential components: where A i is the relative number of events in each component 1 1 , where k i is the microscopic rate constant associated with the process). The critical time, t c , which separates gaps between bursts and closed times within bursts has to be chosen so as to minimize the number of misclassifi ed events (Magleby and Pallotta, 1983). Then, closures longer than t c correspond to gaps between bursts, and closures shorter than t c correspond to gaps within a burst. For burst duration determination, all closed intervals less than t c were ignored during analysis. When two or more channels were open at the same time, the average burst duration was determined by adding the burst times in each level and then dividing by the total number of bursts. To determine the gap between bursts durations, all closures less than t c were ignored and only recordings of 5 min or longer were included. When two or more channels were present, the average duration obtained for the gaps between bursts was multiplied by the number of channels visible in the bilayer. Voltage Dependence The voltage dependence of the closing rate within a burst is given by z, the effective gating charge determined from the following equation: where k fc is the closing rate within a burst (k fc = 1/o f , where o f is the mean open time within a burst), k fc (0) is the closing rate at 0 mV, z is the effective gating charge and R, T, and F have their usual meanings. In order to display the effect of voltage on MthK open probability for several bilayers, the relative activity was calculated in a similar way described above for the Ca 2+ dose-response curve (Eq. 2), only this time nP o,max is replaced by nP o,−200 . Kinetic Models Kinetic models and simulated single-channel records were generated and analyzed using the QuB software for single-channel analysis (version 1.4.0.2, www.qub.buffalo.edu). The rates associated with each transition are in the following form: k = k 0 × C × e bV , where k 0 is the intrinsic rate constant, C is the Ca 2+ concentration, V is the voltage, and b is zF/RT. The values for the rates used for the simulations in the fi gures in this paper were partly obtained from our data analysis (thus constrained) and the rest were assessed by eye with the trial and error method, constrained by key parameters such as the Hill coeffi cient for the Ca 2+ doseresponse curve and the values of the kinetic parameters in Table II. At least 60 s of continuous single-channel activity for each desired condition were simulated, sampled at 10 kHz, and fi ltered at 2 kHz to resemble our real data. 
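The burst classification described above (the actual idealization was done in Clampfit/QuB) amounts to a single pass over the dwell-time list with the critical time t_c; the sketch below is our own simplified illustration with an assumed input format and invented numbers.

```python
# Sketch: split a single-channel record into bursts using a critical time t_c.
# Closures shorter than t_c are flickers within a burst; longer closures are
# gaps between bursts. Assumed input: ('o'|'c', duration_ms) pairs.
def burst_stats(dwells, t_c=10.0):
    bursts, gaps = [], []
    current_burst = 0.0
    for state, dur in dwells:
        if state == 'o' or dur < t_c:
            current_burst += dur          # opening or brief flicker: stay in burst
        else:
            if current_burst > 0:
                bursts.append(current_burst)
            current_burst = 0.0
            gaps.append(dur)              # long closure: gap between bursts
    if current_burst > 0:
        bursts.append(current_burst)
    mean = lambda xs: sum(xs) / len(xs) if xs else float('nan')
    return mean(bursts), mean(gaps)

# Hypothetical record: openings broken by 0.2-0.3 ms flickers, separated by
# long (>100 ms) interburst closures.
rec = [('o', 4.0), ('c', 0.3), ('o', 2.5), ('c', 150.0),
       ('o', 3.0), ('c', 0.2), ('o', 5.0), ('c', 300.0)]
print(burst_stats(rec, t_c=10.0))   # mean burst and mean gap durations (ms)
```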
Scaling was at 320 and the current standard deviations were 0.04 and 0.08 for the simulated records.

MthK, an Inwardly Rectifying K + Channel

Single MthK channel current recordings are shown in Fig. 1 in symmetrical 200 mM K + . MthK channel gating displays bursting behavior with long periods of inactivity. We were able to detect openings, albeit infrequently, in the absence of Ca 2+ so we can compare the MthK conduction properties in 0 Ca 2+ with those in high Ca 2+ (Fig. 1 A). Currents vary widely in amplitude over the range of voltages examined, with the inward current (negative voltages in Fig. 1 B) being much larger in size than the outward current (positive voltages in Fig. 1 B). The current voltage (I-V) curves in Fig. 1 C are characteristic of a classic inwardly rectifying channel. The chord conductances of the current-voltage relationship in the presence of Ca 2+ (filled circles, Fig. 1 C) are 242 pS and 27 pS at −200 and +150 mV, respectively. It appears that some of this rectification is due to Ca 2+ block. If Ca 2+ is removed from the intracellular solution altogether (by replacing it with EDTA to chelate all free Ca 2+ ), the current in the outward direction increases to a chord conductance of 96 pS (Fig. 1 C, open circles, value at 150 mV). This indicates that Ca 2+ is partly responsible for the lower conductance in the outward direction, either by direct fast block or by an electrostatic screening effect. This maneuver did not completely remove inward rectification and also resulted in large open channel noise (Fig. 1), as if blocking events faster than the amplifier sampling time are occurring, indicating that Ca 2+ is not the sole culprit of channel block. Thus, we restricted our gating kinetics analysis to the negative voltage values (inward currents). Occasionally, under identical recording conditions, MthK displays a slightly lower conductance level (~5% of the time). We constrained all our kinetic analysis to channels with the conductance properties discussed above.

Extracellular CTX Block, a MthK Orientation Tool

Lipid bilayer current recordings have one distinct disadvantage compared with patch clamp or whole cell recordings of living cells. Whereas in mammalian cells or oocytes channels are inserted in the cell membrane in a well-defined orientation, in lipid bilayers the purified channel protein will insert randomly in either of the two available orientations: with the cytoplasmic domain facing the top or the bottom chamber. CTX, a well-known extracellular K + channel blocker (Park et al., 1991), was shown to block MthK channels (Jiang et al., 2002a). Due to the sidedness of the block, it can be a useful orientation tool for MthK in bilayers if its blocking affinity is high enough. We investigated the CTX blocking affinity by applying increasing concentrations of the toxin to the cis chamber and examining its effect on MthK gating. As indicated in Fig. 2, CTX blocks the inward current of MthK in a concentration-dependent manner by further increasing the durations of the long gaps between bursts, such that at 72 nM the current is completely blocked. The K d of the CTX block for MthK channels as determined from Fig. 2 B is ~1 nM, a high-affinity slow block, in the same range as for other Ca 2+ -activated K + channels (Miller, 1995). All data were henceforth collected in the presence of 100 nM CTX in the trans bilayer chamber to ensure that we were recording only from channels with the extracellular side facing the ground, a condition similar to normal electrophysiological conventions.
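Assuming a simple one-site equilibrium block, which the slow, high-affinity behavior is consistent with (the actual analysis of Fig. 2 B is not reproduced here), the fraction of unblocked current is 1/(1 + [CTX]/K_d); the sketch below, our own illustration, shows why 72-100 nM CTX effectively silences the channels facing the toxin.

```python
# Sketch: fraction of MthK current remaining under a one-site CTX block,
# assuming equilibrium binding with K_d ~ 1 nM (value read from Fig. 2 B).
def fraction_unblocked(ctx_nM, kd_nM=1.0):
    return 1.0 / (1.0 + ctx_nM / kd_nM)

for ctx in (1, 10, 72, 100):
    remaining = 100.0 * fraction_unblocked(ctx)
    print(f"{ctx:>4} nM CTX -> {remaining:4.1f}% of current remaining")
```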
Ca 2+ Dependence of MthK While millimolar levels of intracellular Ca 2+ decrease outward MthK current, they also increase the open probability of the channel (Jiang et al., 2002a). Fig. 3 shows representative single MthK channel current traces at −200 mV with increasing Ca 2+ concentrations. From 0 to 3 mM Ca 2+ the channel is rarely open. But between 3.5 and 4 mM Ca 2+ there is a sharp increase in Po, with maximal open probability obtained at 5 mM Ca 2+ . No further activation is observed upon raising the Ca 2+ concentration from 5 to 10 mM. The Ca 2+ concentrations required to open MthK channels are unusually high and have yet to be encountered among eukaryotic Ca 2+ -activated K + channels (Vergara et al., 1998). Qualitatively, as was previously reported (Jiang et al., 2002a), it is clear that Ca 2+ increases the open probability of MthK channels. However, MthK channels are diffi cult to investigate quantitatively for three main reasons. First, they display variable open probability in identical conditions from bilayer to bilayer (with values ranging from 0.05 to 0.5; unpublished data) making comparisons between bilayers and averaging over several bilayers challenging. For this reason, to tame the variability in Po, for each Ca 2+ concentration we measured the relative increase in nPo to the maximal open probability at saturating Ca 2+ for each bilayer. Second, MthK channels display maximal open probabilities significantly less than 1 even in the presence of saturating Ca 2+ concentrations, leading to uncertainty about the total number of channels per bilayer. To circumvent this problem, we quantifi ed channel activity as nPo due to uncertainties in estimating the number of channels per bilayer. Third, MthK opens in bursts with relatively long periods of inactivity, thus making it necessary to record long stretches of data (minutes in each condition) to accurately sample each experimental condition investigated. Fig. 3 B shows the plot of relative Po vs. Ca 2+ for 11 different bilayers (the Ca 2+ dose-response for a single representative bilayer is shown in the inset for clarity). Although channel activity and behavior showed some heterogeneity, Ca 2+ elicited the same relative increase in activity across all experiments (the various white symbols represent separate experiments). We obtained a dose-response relation that increases extremely steeply with Ca 2+ (Fig. 3 B, black symbols with error bars for averages and inset for a single bilayer). The open probability increases >10-fold between 2 and 5 mM Ca 2+ (and >100fold over a 5 mM change in Ca 2+ ). This sharp increase in channel activity can be described by the Hill equation, with an n value of ≥8. If the Hill coeffi cient is allowed to vary freely, the dose-response data is best fi t with n = 20, indicating that there are at minimum 20 cooperative binding events leading to channel opening, as shown in Fig. 3 B (dashed line). However, only eight Ca 2+ ions were detected in the MthK structure (Jiang et al., 2002a). Hence, in the spirit of true structurefunction studies, we constrained the Hill coeffi cient to 8 and obtained a visually satisfying fi t (solid curve in Fig. 3 B). Eight, which is the lowest Hill coeffi cient that will satisfy both our functional and structural constraints, is the highest Hill coeffi cient reported for a ligand-gated channel and an indicator of highly cooperative Ca 2+ binding interactions. 
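To make the steepness concrete, the sketch below evaluates the Hill equation (Eq. 3) with the parameters reported for the n = 8 fit (K_d ≈ 3.8 mM, r_a,max ≈ 1; see the Fig. 3 legend); an n = 2 curve is included only as a low-cooperativity contrast and is not a fit to the data.

```python
# Sketch: relative activity predicted by the Hill equation (Eq. 3),
# r_a = r_a,max * [Ca]^n / ([Ca]^n + K_d^n), for the fitted n = 8 parameters.
def hill(ca_mM, kd=3.8, n=8, ra_max=1.0):
    return ra_max * ca_mM**n / (ca_mM**n + kd**n)

for ca in (2.0, 3.0, 3.5, 4.0, 5.0):
    print(f"{ca:3.1f} mM Ca2+: n=8 -> {hill(ca):.3f}   n=2 -> {hill(ca, n=2):.3f}")
```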
In comparison, for BK channels, the highest Hill coefficient for Ca 2+ activation is 3-4 (Nimigean and Magleby, 1999) and for Ca 2+ activation in the presence of Mg 2+ is 5 (Golowasch et al., 1986). MthK has the highest known cooperativity and the lowest Ca 2+ affinity of all known Ca 2+ -activated K + channels to date.

[Fig. 3 legend: Each open symbol is a single bilayer, with 11 different bilayers analyzed. Filled symbols represent mean ± SEM from three separate bilayers with the exception of the 10 mM Ca 2+ point with n = 2. Data points were fit with the Hill equation (Eq. 3) with n = 8 (smooth line, K d = 3.8 ± 0.2 mM, and r a,max = 1.07 ± 0.1) and n = 20 (dashed line, K d = 3.8 ± 0.04 mM, and r a,max = 1.00 ± 0.04). Inset shows a Ca 2+ dose-response from a single representative bilayer. The data (open circles) were fit with the Hill equation with n = 8 (K d = 3.9 ± 0.14 mM, and r a,max = 1.08 ± 0.14). The axes are the same as in B.]

Ca 2+ Modulation of Gating Kinetics of MthK

Through what mechanism does Ca 2+ increase MthK's open probability? To answer this question, we examined the single-channel gating kinetics of MthK. The representative traces in Fig. 1 A and Fig. 3 A display a similar pattern; the channel is closed for long periods of time (gaps between bursts) interrupted by periods of high activity (bursts) in which the channel flickers (rapidly transitions) between the open and closed levels. Thus, the increase in channel Po with Ca 2+ could result from any one or more of the following possibilities: (a) an increase in the duration of individual channel openings, (b) an increase in the duration of bursts, or (c) an increase in the frequency of bursting (equivalent to decreasing the durations of gaps between bursts). In order to examine burst kinetics, we introduced a critical closed time, t c . All closed interval durations smaller than t c are classified as closings within a burst and those larger than t c as gaps between bursts. To determine t c , we fit the closed dwell time histograms from individual bilayers with two exponential components. For one representative bilayer, the time constants of such components were τ 1 = 0.23 and τ 2 = 117.6 ms (Fig. 4 C, 5 mM Ca 2+ ). Despite some variation in the long component (due to different numbers of channels in each bilayer as well as variable open probability amongst channels), all bilayers analyzed displayed a considerable two to four orders of magnitude difference between the time constants of the short and long components, corresponding to closings within bursts and gaps between bursts, respectively. (Decreasing Ca 2+ from 5 to 0 mM magnifies the gap between the two exponential components, as seen in Fig. 4 C.) This allows us to choose a value for t c (10 ms) that will minimize event misclassification (Magleby and Pallotta, 1983). To test whether data analysis depends on our critical time selection, we determined the average durations of the gaps within a burst using three different critical times: 5, 10, and 20 ms. Identical results were obtained (unpublished data). Thus, we chose 10 ms as the critical time for all of our analyses. The results of the burst analysis are shown in Fig. 4 B. The durations of the gaps between bursts decreased on average ~40-fold as Ca 2+ was increased from 0 to 5 mM, while the burst durations increased only threefold. As expected upon visual inspection of the channel traces in Fig.
3 A, the majority of the increase in channel activity results from an increase in burst frequency, which is determined by the duration of gaps between bursts; the shorter the gaps between bursts, the more frequent the bursts. The average open and closed times within a burst remained unchanged irrespective of the amount of Ca 2+ present (Fig. 4 A). This is also illustrated in the bilayer represented in Fig. 4 C; the short closed duration components display identical time constants, while the gap duration components increase ~100-fold when Ca 2+ is increased from 0 to 5 mM. To summarize, the increase in open probability with Ca 2+ is mainly caused by a dramatic increase in the frequency of channel openings (40-fold) and less so by a modest increase in burst durations (threefold), accounting well for the experimentally observed ~100-fold increase in channel activity. It is noteworthy that the fast intraburst gating is Ca 2+ independent.

[Fig. 4 legend, continued: ... and 10 bilayers (solid, 5 mM Ca 2+ ). (C) Closed dwell time histograms for a single bilayer containing two channels. The bilayer was perfused from 5 mM Ca 2+ to 0 Ca 2+ . The fits are with Eq. 4 with two exponential components with: A 1 = 0.97 ± 0.09, τ 1 = 0.23 ± 0.12, A 2 = 0.03 ± 0.06, and τ 2 = 118 ± 3 for 5 mM Ca 2+ , and A 1 = 0.88 ± 0.05, τ 1 = 0.30 ± 0.08, A 2 = 0.12 ± 0.04, and τ 2 = 22552 for 0 Ca 2+ . Inset shows magnification of histogram data for closures ≥10 ms; smooth line is the exponential fit.]

Voltage Dependence of MthK

Besides the increase in open probability with Ca 2+ , MthK also displays an interesting voltage effect; as the membrane potential is made more depolarized, the frequency of the very brief closings (flickers) within the burst decreases dramatically (Fig. 5 A). This leads to a significant increase (~24-fold for a 150-mV depolarization) in mean open time (Fig. 5 B). While this effect is apparent upon examination of single channel traces, the increase in Po resulting from this depolarization is more modest, less than threefold (Fig. 5 D), and the increase in the durations of bursts is insignificant (Fig. 5 C). To gain further insight into the mechanism of voltage gating, we examined the closed dwell time distributions for one bilayer (Fig. 5 E) at two voltages: −200 and −100 mV. These distributions were fit with the sum of two exponential components corresponding to the flickering closings within bursts and the gaps between bursts. As we depolarize the bilayer by 100 mV, the number of events comprising the "flicker" component decreases markedly while the number of gaps between bursts stays relatively constant (or, in other words, the fraction of the total number of closed events represented by fast flickers decreased approximately fivefold as the bilayer is depolarized 100 mV while their durations stayed the same; Fig. 5 E). In contrast, the durations of the long closed component decreased to about 50% of their starting value and their total number did not change (Fig. 5 E). Thus it appears that the modest increase in Po with depolarization is due to a decrease in the duration of gaps between bursts. The disappearance of the flickers, while striking, contributes mainly to the large increase in mean open time and only a negligible amount to the change in open probability. The visible lengthening of the durations of the openings within a burst (Fig. 5 B) is thus a consequence of a decrease in the flicker closing rate with depolarization.
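The effective gating charge quantified in the next paragraph can be extracted from the voltage dependence of the mean open time; the sketch below is our own illustration, assuming the conventional exponential form k_fc(V) = k_fc(0)·exp(−zFV/RT) for Eq. 5 and using invented open times rather than the data of Fig. 5 B.

```python
# Sketch: estimate the effective gating charge z from mean open times at
# several voltages, with k_fc = 1/tau_open and, assuming the usual form of
# Eq. 5, ln k_fc = ln k_fc(0) - z*F*V/(R*T). RT/F is taken as ~25.7 mV.
import numpy as np

RT_over_F = 25.7                                      # mV, room temperature
voltages = np.array([-200.0, -150.0, -100.0, -50.0])  # mV
tau_open = np.array([0.8, 2.4, 6.5, 21.0])            # ms, hypothetical values
k_fc = 1.0 / tau_open                                 # closing rate within a burst

# ln(k_fc) is linear in -V/(RT/F); the slope of the regression is z.
z, ln_k0 = np.polyfit(-voltages / RT_over_F, np.log(k_fc), 1)
print("estimated z =", round(z, 2))
```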
We quantified the voltage dependence (z) associated with the change in the flicker closing rate by fitting the mean open time durations in Fig. 5 B with an exponential function (Eq. 5). We obtained a z value of 0.62 ± 0.04, which traditionally suggests that the flickering process is caused by an entity carrying one elementary charge that moves more than halfway across the membrane on its way to occlude the pore (Woodhull, 1973).
Mg2+ Does Not Mimic Ca2+ in Modulating Po
Can any divalent ion recapitulate the effects of Ca2+ on MthK channels or is the increase in Po Ca2+ specific? We investigated this issue by measuring channel activity in solutions where Mg2+ was substituted for Ca2+. Bilayers were obtained in the presence of 5 mM Ca2+. Then, the intracellular (trans) compartment was perfused with solutions containing 5 mM Mg2+ and no added Ca2+. The effect of removing all divalent cations was examined by perfusing with solutions containing 1 mM EDTA (and no added divalents). 5 mM Mg2+ has a nominal effect on the open probability of MthK as shown in Fig. 6. For comparison, in Fig. 6 B, the effect of Mg2+ on MthK open probability is superimposed onto the Ca2+ dose-response relationship for MthK (dashed line represents the fit with a Hill coefficient of 8 from Fig. 3 B). With increasing Mg2+ concentrations, there is a slight positive slope of the curve (Fig. 6 B), but the increase is much smaller than that seen for Ca2+ and is consistent with a nonspecific divalent effect. This result is in agreement with the finding that Mg2+ does not support the Ca2+-induced octameric RCK domain complexes in solution (Dong et al., 2005).
DISCUSSION
Calcium-activated K+ channels are essential players in cell physiology, as they link intracellular signaling with the electrical properties of the membrane. The mechanism of channel modulation by Ca2+ has been studied in detail for eukaryotic Ca2+-activated K+ channels (BK and SK channels) but an important piece of the puzzle is still missing: structural information. The structure of MthK was postulated to be a model for the structure of BK channels and a starting point for understanding the mechanism of channel opening upon Ca2+ modulation. However, little functional information is actually available for MthK (Jiang et al., 2003). Here, we characterize in detail the gating of MthK channels and we describe the mechanism of Ca2+ modulation in the framework of the available structural information.
MthK and BK Channels: Functional Comparison
What are the similarities between MthK and BK channels? They are both potassium channels with a homologous pore region and a GYG signature sequence for potassium selectivity. The COOH-terminal cytoplasmic domain of both channels is composed of two RCK domains (identical in MthK, different in BK, but homologous between the two species; Jiang et al., 2001), and calcium favors channel opening in both channel types. However, while BK channels are activated by micromolar Ca2+ concentrations (Latorre, 1989), MthK channels do not respond until Ca2+ is raised to millimolar concentrations. One of the most intriguing features of MthK is the very steep activation by Ca2+. We examined MthK activation as Ca2+ was raised from 0 mM (with added EDTA) to 10 mM. We found that the channel activity increased sharply between 3.5 and 5 mM Ca2+. This jump in channel activity was well described by a Hill coefficient of at least 8.
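For readers who want to reproduce this kind of steepness estimate, the fit itself is a one-line model. The sketch below assumes Python with SciPy and the standard Hill parameterization r_a = r_a,max·[Ca2+]^n / (K_d^n + [Ca2+]^n); the paper's Eq. 3 may be written slightly differently, and the data points here are invented placeholders, not the measured dose-response.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(ca, ra_max, kd, n):
    """Standard Hill equation: normalized activity as a function of [Ca2+] (mM)."""
    return ra_max * ca**n / (kd**n + ca**n)

# Invented placeholder dose-response points (mM, normalized activity).
ca = np.array([1.0, 2.0, 3.0, 3.5, 4.0, 4.5, 5.0, 7.5, 10.0])
ra = np.array([0.00, 0.00, 0.01, 0.05, 0.55, 0.90, 1.00, 1.02, 1.05])

popt, pcov = curve_fit(hill, ca, ra, p0=[1.0, 4.0, 8.0], maxfev=10000)
ra_max, kd, n = popt
err = np.sqrt(np.diag(pcov))
print(f"ra_max = {ra_max:.2f}, Kd = {kd:.2f} mM, Hill n = {n:.1f} +/- {err[2]:.1f}")
```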
The fitted Hill coefficient was constrained by the number of Ca2+ ions present in the high resolution structure of the channel; eight Ca2+ ions were found in the MthK structure, two per functional subunit and one per RCK domain. This is the highest Hill coefficient found in any ligand-gated channel; for comparison, the largest Hill coefficient found for the Ca2+ activation of BK channels is 4-5 (Golowasch et al., 1986; McManus and Magleby, 1991; Cox et al., 1997; Nimigean and Magleby, 1999), for cyclic nucleotide activation of CNG channels it is 2-3 (Zagotta and Siegelbaum, 1996; Ruiz et al., 1999), and for Ca2+ activation in SK channels it is 2-4 (Kohler et al., 1996; Vergara et al., 1998). What does this unusually high Hill coefficient mean for the mechanism of Ca2+ modulation of MthK channels? The simplest answer is that MthK almost never opens unless all eight Ca2+ binding sites are occupied. This mechanism is different from the Ca2+ activation of BK channels, in which the channel can open in partially Ca2+-occupied states and the coupling between Ca2+ binding and channel opening, although strong, is not as strong as we observe with MthK (McManus and Magleby, 1991; Horrigan and Aldrich, 2002). An alternative to the above simple explanation is that the extreme steepness of the Ca2+ dose-response is due to multiple overlapping processes that lead to channel opening from closed/dormant/blocked states (i.e., Ca2+ activation plus one or more additional independent processes that modulate MthK channel gating). Inactivation was recently proposed to be the cause for the apparently low open probability of KcsA channels during steady-state gating (Cordero-Morales et al., 2006). However, it is not obvious that an inactivated state exists for MthK channels (which lack the glutamate, E71, proposed to be involved in KcsA inactivation), and, in the absence of supporting data for a more complex process, we favor the simpler mechanism described above for the origin of the steep Hill slope. The apparent all-or-none Ca2+ activation mechanism for MthK is also supported by our kinetic analysis of the open and closed states with Ca2+. First, we found that MthK activity has a "bursty" appearance at the single channel level that we catalogued as two gating modes: slow (bursts and gaps between bursts) and fast gating (flickers within a burst). We found that the durations of the open and closed intervals within a burst (the components of fast gating) are Ca2+ independent and that the increase in Po with Ca2+ (>100-fold) comes mainly from the marked decrease of the durations of the gaps between bursts (40-fold) and to a lesser extent from the increase in the duration of bursts (two to threefold). Ca2+ affects only the slow gating of MthK, mainly by altering the frequency of opening, while the fast gating is Ca2+ independent. For BK channels, while Ca2+ similarly increases the frequency of openings, it also markedly increases the durations of the open intervals (McManus and Magleby, 1991). Therefore, while BK and MthK appear similar on the surface, as they are both activated by Ca2+ and possess RCK Ca2+ binding domains, the details of the mechanism of Ca2+ activation differ between the two channels. This is not surprising, as the structure of the eukaryotic BK channel is likely to be more complex than that of the more primitive MthK. BK channels have an additional five transmembrane domains upstream of the pore region, while MthK has only the two transmembrane regions that form the pore.
In addition, the RCK-containing large COOH terminus of BK channels also includes other functional domains as well as other Ca2+ binding structures/sites (Schreiber and Salkoff, 1997; Shi and Cui, 2001; Zhang et al., 2001; Bao et al., 2002; Xia et al., 2002).
A Modified Monod-Wyman-Changeux Model Describes MthK Ca2+ Gating
Using our kinetic data, we derived a qualitative model to describe the Ca2+ activation of MthK. This model must fulfill the following conditions: (a) the channel opens in 0 Ca2+ (Fig. 1); (b) the fast gating in 0 Ca2+ is identical to the fast gating in saturating Ca2+ (Fig. 4, durations of openings and closings within a burst are Ca2+ independent), and (c) the channel opens more frequently in the presence of Ca2+, which increases the open probability ~100-fold (Fig. 3). The purpose of this modeling effort is not to provide an exact kinetic treatment with precisely determined rate constants, but to show that a minimal allosteric model derived from our experimental findings (with structural constraints) can reasonably reproduce our data. We tested several allosteric schemes based on the Monod-Wyman-Changeux model (MWC; Monod et al., 1965) that were shown to be appropriate for other ligand-gated channels (Cox et al., 1997; Li et al., 1997), and the proposed model in Scheme 1 is a minimal model that satisfies all the above conditions. There must be at least nine closed states and nine open states on the top two rows in order to satisfy the requirement of having a distinct kinetic state for each Ca2+ occupancy state (open and closed states corresponding to 0, 1, 2, 3, 4, 5, 6, 7, and 8 Ca2+ ions bound). These rows of states are connected by rate constants that are Ca2+ dependent and are involved in the slow Ca2+-dependent gating of MthK. The Ca2+ binding equilibrium constant is described by K_Ca (K_Ca = α[Ca2+]/β), where the forward (α) and backward (β) rate constants represent the intrinsic on and off rates of Ca2+ binding to both the closed and the open conformation of MthK. For simplicity, we assume that the Ca2+ binding constants are identical for the open and closed channel conformations (similar results are obtained if we assume they are different). L represents the equilibrium constant (L = k_o/k_c, where k_o and k_c represent the opening and closing rates, respectively, which define the vertical transitions between the top two rows). The vertical transitions between the top two rows are identical for each Ca2+-occupied conformation, with the exception of the eight-Ca2+-bound conformation, which opens with a higher probability conferred by a factor f (to satisfy microscopic reversibility, f also multiplies the equilibrium constant of the final transition O_7Ca to O_8Ca). This maneuver, which distinguishes our proposed model from a classical MWC model, ensures that the channel is favored to open mainly when all Ca2+ binding sites are occupied, as demanded by our results, and at the same time allows a nonzero open probability in the absence of Ca2+. The cooperativity coefficient (μ) allows each sequential Ca2+ binding event to occur with higher probability. In the absence of this cooperativity coefficient, we were unable to simulate a Ca2+ dose-response curve with Hill coefficients higher than 4. The closed states on the bottom row represent "flicker states."
This equilibrium is described by K_f (K_f = k_fo/k_fc, where k_fo and k_fc are rate constants that do not depend on Ca2+ and define the fast Ca2+-independent gating within the burst). We were able to constrain most of these rates by examining our results; Table II lists the key parameters from our experiments that were used to determine the rate constants ("real" and "simu" denote kinetic parameters extracted from experimental and simulated data, respectively). Since the fast gating behavior is Ca2+ independent, k_fo is determined by the inverse of the mean open time within the bursts (~1.5 ms at −200 mV) and k_fc by the inverse of the mean closed time within the burst (~0.4 ms at −200 mV). The rate of leaving the bursts (closing rate, k_c) was estimated as the inverse of the mean burst duration in 0 Ca2+ (~50 ms at −200 mV). The values of these experimentally determined rate constants are in Table I. Thus, although the model in Scheme 1 may appear complex due to the total number of states presented (27), it is actually the simplest model that will satisfy all the structural and experimental constraints listed above. In addition, because we were able to determine some of the microscopic rate constants directly from our kinetic analysis, the 27-state model in Scheme 1 has only five free parameters: the rates of Ca2+ association/dissociation (α and β), the opening rate k_o, the cooperativity factor μ, and the factor f. Using Scheme 1 and the constrained parameters, we fit our data by eye to obtain the rest of the parameters (Table I). We then simulated single channel traces (Fig. 7, A and B) and a dose-response curve (Fig. 7 C) using the QuB software as detailed in Materials and Methods. The simulated data illustrate visually that Scheme 1 recapitulates the salient Ca2+-dependent gating features of MthK (compare simulated single-channel traces in Fig. 7 with the real traces in Figs. 1, 3, and 5). The simulated data were then analyzed to obtain the kinetic parameters (shown together for comparison with the real data in Table II). This comparison shows that using Scheme 1 we were able to capture some detailed characteristics of MthK gating as well. With the rate constants in Table I, the simulated open probability increased ~100-fold from 0 to 5 mM Ca2+, as in our experimental data. Allosteric models similar to the modified MWC model proposed here for the Ca2+ gating of MthK have been used to describe Ca2+ gating in BK channels (McManus and Magleby, 1991; Cox et al., 1997; Rothberg and Magleby, 1998; Rothberg and Magleby, 2000; Horrigan and Aldrich, 2002). While our model is relatively simple, it captures the main features of MthK's Ca2+ activation.
Voltage-dependent Gating of MthK
For the classical voltage-gated channels (Shaker, BK channels), increasing membrane depolarization leads to a dramatic increase in the channel's open probability accompanied by an increase in the durations of the open intervals (Bezanilla and Stefani, 1994; Sigworth, 1994; Fedida and Hesketh, 2001). Since this large voltage dependence is caused by a voltage sensor domain contained within the first four to five transmembrane segments, which are absent in MthK, it is no surprise that voltage does not elicit a similar increase in open probability in MthK. At first glance, voltage appeared only to disrupt the flickering behavior of MthK channels, dramatically reducing the frequency of the short closing events within a burst.
This has a strong visual impact by lengthening the amount of time the channel is open, while keeping the burst lengths constant. After further kinetic analysis, we found that the open probability also increases with voltage (to a lesser extent), but not because the intraburst kinetics change. The open probability increased two to threefold for a 150-mV depolarization, due primarily to a twofold decrease in the durations of the gaps between bursts (Fig. 5). Thus it appears that two distinct mechanisms are responsible for the voltage effects observed here with MthK: a major one that decreases the flickering activity within the burst and a minor one that slightly increases the Po by decreasing the durations of the gaps between the bursts. We will concentrate on the major voltage effect for the rest of the discussion. Can the model described above in Scheme 1 be modified to also capture the major effect of voltage on fast gating? To do this, we assigned voltage dependence to the rate constants corresponding to the vertical flicker transitions (middle to bottom row) by assuming that only the closing rates depend on voltage (Eq. 5). We did not assign voltage dependence to the opening rates (k_fo), since the durations of the closing events within a burst do not vary with voltage (Fig. 5 E). We used z = 0.62, the value obtained from our analysis of the open duration dependence on voltage in Fig. 5 B. Using the parameters listed in Table I, we were able to simulate single-channel currents that capture the voltage effect on MthK gating (Fig. 7 B). As seen in Table II and Fig. 7 B, we succeeded in precisely capturing the increase in mean open time with depolarization by simply attaching voltage dependence (with a z value of 0.62) to the flicker closing rate. We did not attempt to capture the minor effect of voltage on the open probability using the above model.
Fast Gate and Slow Gate of MthK
What does this kinetic analysis tell us in terms of mechanism? If we examine the kinetic model proposed here, we notice that there is a clear separation between the states and transitions involved in the Ca2+-dependent slow gating and those involved in the voltage-dependent fast gating. This effect is not model dependent, as we showed that Ca2+ and voltage act on two distinct pathways because of the distinct open and closed interval populations they affect (Figs. 3 and 5). [Figure 7 legend (fragment): ... Table I for two different Ca2+ concentrations (A), and two different voltages (B), for comparison with real data from Fig. 3 A and Fig. 5 A, respectively. Simulated data was filtered at 1-2 kHz and sampled at 10 kHz, just like the real data. (C) Predicted Ca2+ dose-response from the kinetic model in Scheme 1. Symbols represent Po values, mean ± SEM from four to seven distinct sets of 120 s of simulated single-channel data for each Ca2+ concentration. Solid line represents a Hill fit with Eq. 3 with n = 7.5 ± 0.5, K_d = 4 ± 0.04 mM, and P_o,max = 0.28.] [Table II, Values of Kinetic Parameters from Real and Simulated Data: values are at −200 mV for the first two columns and at 5 mM Ca2+ for the last two columns. The "real" parameters are extracted from the data presented in Fig. 4 (A and B) and Fig. 5 (B and C). The "simu" parameters are from simulation and analysis of individual data sets in the required conditions.] We propose that there are two distinct gates underlying the gating of MthK.
The first is the Ca2+-dependent gate proposed previously (Jiang et al., 2002b), located at the inverted teepee at the cytoplasmic end of the inner helices. It was suggested that when Ca2+ binds, the gating ring formed by eight RCK domains switches conformations and pulls on the linkers that connect the gating ring to the inner helices, mechanically opening the channel. A similar gate was proposed for the BK channels in light of recent findings that different lengths of linkers, at a position homologous to the MthK linker, affect the open-closed equilibrium in a predictable fashion for a "spring"-like model (Niu et al., 2004). This mechanism appears plausible for the slow gate of MthK channels, and our kinetic analysis is indifferent to the actual structural changes that occur upon opening the Ca2+ gate. However, our analysis puts some constraints on the intricacies of this mechanism. First, the Ca2+ gate does not open when exposed to Mg2+; it is a Ca2+-specific phenomenon. Second, voltage actually affects the slow gate as well (see above, the minor effect of voltage on the open probability). This may occur through a mechanism different from that of Ca2+, and not by modulating the binding of Ca2+ to open the gate, if voltage also increases the open probability and the duration of gaps between bursts in nominally 0 Ca2+ (we do not have sufficient data at this time to refute this possibility). Third, our very steep dose-response curve and the low affinity of the binding sites for Ca2+ suggest that this conformational change that "tugs" on the inner helices to open the channel is favored only when all eight Ca2+ ions are bound. Due to calcium's low affinity for MthK, it is also likely that the Ca2+ dissociation rate is very fast (as seen in Table I). If so, then in high Ca2+, entry into the all-eight-Ca2+-bound open states is favored, albeit predicted to be short lived due to the fast Ca2+ dissociation rate coupled with a low probability of opening in partially liganded states. However, once an open state is reached, a second gate comes into play, allowing the channel to flicker very fast between open and closed flickery states (bursting, see Scheme 1) during the time the channel has the first gate open. The second gate is the flicker gate modulated by voltage and responsible for the fast gating behavior of MthK. No indication of such a gate is visible in the structure of MthK (Jiang et al., 2002a). There are several potential candidates for an additional gate. One of them is the actual selectivity filter, which was proposed to be the only gate for several other K+ channels (Flynn and Zagotta, 2001; Bruening-Wright et al., 2002; Kuo et al., 2003). Alternatively, these voltage-dependent flickers could represent open channel block by an external (molecules in the recording solution) or internal (protein structures/side chain residues of the channel proper) charged particle. The only external molecules present are K+, Cl−, Ca2+, Tris, and CTX. Tris, Cl−, Ca2+, and CTX can be disqualified from the race since the first one is uncharged, the second one is an anion that should not be able to get near the pore, and for the last two, recordings in the absence and presence of either (Figs. 1 and 2) show identical flickering behavior. That leaves K+ as a viable cause for the generation of fast gating behavior. Precedents exist for rapid flickery block by permeant ions in CFTR channels (Linsdell and Hanrahan, 1996).
Other possibilities include block by a flexible charged amino acid side chain that is favored to protrude into the pore when the gating ring undergoes a conformational change. We cannot identify the potential culprit side chain due to the fact that the resolution of the channel pore was only high enough to allow backbone tracing of MthK while the amino acid side chains were not resolved (Jiang et al., 2002a).
Conclusion
MthK is a low-affinity Ca2+-gated and voltage-gated inwardly rectifying potassium channel. The MthK Ca2+ dose-response curve has an impressively large Hill coefficient, which suggests that the channel opens mainly when all eight sites are bound by Ca2+. MthK has two distinct gating modes: the slow gate, affected by Ca2+, and a fast gate (flickers), affected by voltage. The increase in MthK activity with Ca2+ is due mainly to an increase in the frequency of channel opening. While voltage does not lead to a significant increase in channel activity, it dramatically alters the frequency of flickering closures within a burst. We proposed a modified MWC kinetic model that approximates the main gating features of MthK.
2016-05-15T11:59:19.527Z
2006-06-01T00:00:00.000
{ "year": 2006, "sha1": "022ae83a838482fd09543925a85e8a0d86481c70", "oa_license": "CCBYNCSA", "oa_url": "http://jgp.rupress.org/content/jgp/127/6/673.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "022ae83a838482fd09543925a85e8a0d86481c70", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
233673340
pes2o/s2orc
v3-fos-license
Analysis on Academic Benchmark Design and Teaching Method Improvement under Artificial Intelligence Robot Technology To let robots in the education field better implement teaching tasks, the curriculum standards for information technology teaching is designed based on artificial intelligence robot technology. First, based on the robot education, the Lego robot (LEGO MINDSTORMS EV3) for the development and adoption of teaching resources in information technology courses is introduced. Then, according to the sensor teaching in robot education, the teaching contents for the information technology course are designed. The teaching method for the information technology course is improved based on sensor teaching. Finally, the feasibility of the existing teaching resources is evaluated, and the effect of information technology teaching is analyzed through a questionnaire survey. The results show that more than 80% of students are interested in robot teaching, and more than 70% of students can master relevant theoretical knowledge in practical operations. It is verified that robot education can arouse students’ interest in information technology courses. The learning methods proposed can enable students to better master relevant theories, and it can improve students’ operational ability and innovation ability. This work provides basic theories and important references for the adoption of artificial intelligence robotics in education. Keywords—Artificial intelligence; robot education; curriculum standard design; educational teaching methods Introduction The advancement of modernization has produced various advanced technologies. In recent years, artificial intelligence has shined in all walks of life [1]. The basic goal of the education field is to cultivate high-quality talents. Under the influence of artificial intelligence, its development model has gradually moved from depending on network and information to depending on intelligence. Artificial intelligence robot technology combines a variety of advanced technologies, and its adoption in education is becoming more and more extensive [2]. At present, robot education still has some problems, its teaching resources are relatively scarce, the teaching goals are not clear enough, and the teaching methods are relatively simple, which have a very negative impact on the teaching effect. For robot education under artificial intelligence, many foreign and domestic studies were carried out. Some scholars applied robots to the education of some special populations. It was found that robot education helped people with disabilities and students with autism well [3]. Some scholars integrated teaching resources via advanced modern technology, which made robot education more comprehensive and cross-subjects [4]. The adoption and research of robot education abroad are relatively advanced. There are many schools that allow students to learn in practice, and self-programming and debugging of robots improves students' operational ability and learning interest [5]. The research on robot education in China mainly focuses on its combinations with innovative classrooms and micro-classes [6]. Some scholars combined STEM theory with teaching practice to form a novel robot teaching model [7]. In addition, some scholars integrated maker education with robot teaching. As a result, they created an innovative teaching model in Normal College [8]. 
It can be inferred that there are many practical adoptions of robot education, but there are limited researches on the curriculum standard design in robot education. In summary, the teaching resources and contents of information technology courses are developed and designed based on artificial intelligence robot technology, so as to improve the teaching methods. Then, the effect of robot education and teaching is evaluated. This study provides an important reference for the adoption of artificial intelligence robot technology in curriculum standard design and teaching method innovation. Basic theories of curriculum standard design based on robot education Artificial intelligence robot technology combines artificial intelligence to program the machine to construct a robot that completes the target operation. Robot education is the adoption of various advanced technologies such as artificial intelligence, mechanical manufacturing, computer technology, and electronic sensors in education and teaching, to cultivate students' hands-on ability and learning interest [9]. Applying robot education can enable students to have a deep understanding of various functions of the robot. Then, through self-programming and robot construction, the scientific literacy and innovation ability of the students are improved. Robot education has a clear teaching goal. First, it teaches students the theoretical knowledge about the functions, organization, and significance of robots, so as to raise students' interest in robot technology [10]. Then, it teaches students to write the operating program of the robot, so that students can independently design and develop software. Such a way can also improve students' ability to analyze and solve problems such as logical thinking. Another goal is to cultivate students' hands-on ability and their innovative ability. To achieve which, students need to build and assemble robots themselves, and divert their thinking to complete various tasks. The overall teaching goal of robot education is based on long-term teaching, so robot teaching during different time period also has different teaching goals. Each stage of teaching has certain teaching difficulties, so the teaching resources and teaching methods need to be adjusted accordingly, which highlights the practicality and comprehensiveness of robot education. Curriculum standard generally includes the nature of the curriculum, the purpose of the curriculum teaching, the resources of the curriculum teaching, the contents of the curriculum teaching, and the recommendations for teaching implementation [11]. The development of information technology courses enriches the teaching purposes. According to the standards stimulated by the country, for the curriculum standard of information technology, the robot education mainly includes the robot's functional results and production design [12]. The core is the design and manufacture of robots. Students coordinate computer software and hardware, perform macro-control on design and planning, and adopt various advanced technologies to implement robot running tasks. Curriculum teaching resources need to be developed and designed according to the teaching purpose, and the teaching contents also need to be planned. Then, the teaching method needs to be improved according to the teaching contents. Finally, the teaching effect is evaluated, so that corresponding suggestions for the implementation of the teaching can be given. 
Development of teaching resources of information technology courses based on robot education The development of teaching resources follows certain principles. The first is the teaching nature of teaching resources: their development requires certain teaching concepts as support, and only then can the design of teaching purposes and contents be carried out. The second is that teaching resources should be scientific, which means the content reflected in teaching resources must conform to the principles of scientific practice [13]. Teaching resources should also be innovative: teaching must closely follow the trend of the times and conform to the trend of innovation and development, and the teaching contents are supposed to cultivate students' innovative thinking and ability. Finally, teaching resources need to be open and available. Open teaching resources can adapt to multiple modes of teaching and also facilitate the integration and utilization of resources. In this work, the Lego EV3 is adopted to develop teaching resources. The LEGO company has developed a variety of robot products, and the EV3 robot is the third-generation product developed by its R&D team [14]. Compared with the previous two generations, it applies various cutting-edge technologies, and the opinions of experts in the robot field were adopted for its development. Therefore, there has been a great improvement in its functional indicators. The EV3 robot has an extremely complex structure and can complete various operations. For students, the EV3 robot, as a novelty, can arouse their interest in exploration, which helps to realize the first goal of robot education. The EV3 robot is increasingly adopted in primary and secondary education in China, which promotes the transformation of educational concepts and the improvement of teaching models. To develop teaching resources for information technology courses based on robot education, LEGO MINDSTORMS Edu EV3 is applied. LEGO MINDSTORMS Edu EV3 is a visual editing software. Its operation is relatively simple and easy to learn; it is also accessible to the public and suitable for students to operate and learn [15]. LEGO MINDSTORMS Edu EV3 has powerful functions and a clear, rich design interface. By writing the corresponding program, the robot can be controlled to perform various complex operations and meet the requirements of the task. The basic principles of its usage and design are shown in Figure 1. As Figure 1 reveals, to design a robot, a corresponding project is created first, and a corresponding project is also created for each functional requirement of the robot. Then, the corresponding modules of these projects are programmed and designed, and the programs are written. Finally, a robot designed according to one's own needs is obtained. Among the related modules, the action module contains the various actions in the control program. The data flow module controls the data processing of the program. The sensor module inputs sensory information from the outside world through the sensors for the program to read, so that the robot can recognize the corresponding stimuli. The data calculation module reads, writes, compares, and calculates variables. The advanced module manages related files, Bluetooth, and other advanced functions. In terms of hardware technology, LEGO MINDSTORMS EV3 has more powerful performance and has been upgraded accordingly in different aspects.
First, an APP is created on a terminal device that can connect via WiFi, which supports various client operating systems. To realize better communication between humans and robots, the microphones and speakers are upgraded and the interaction is improved, so that the control mode is enhanced: the initial program control is supplemented by APP control and a direct command control mode, making the robot more intelligent. Then, the LEGO MINDSTORMS EV3 processor is upgraded and its memory configuration increased, thereby increasing its running speed. LEGO MINDSTORMS EV3 has at least 550 technical components, and users can also purchase corresponding components to install themselves, which gives LEGO MINDSTORMS EV3 more functions to realize more operations. The adoption of sensors has also been improved accordingly: the addition of a new type of sensor to the original version gives LEGO MINDSTORMS EV3 more new functions. Learners can build and assemble according to the instructions. After they are familiar with it, they can design robots that highlight their personalities according to their own interests. They can also divert their thinking and add other robot products or artificial intelligence equipment to LEGO MINDSTORMS EV3 for innovative design. Design of teaching contents of information technology course based on robot education The teaching contents of the information technology courses are designed based on robot education. Sensor teaching is taken as an example to design the teaching contents. Sensor teaching includes software design and behavioral operation, which requires a combination of hand and brain. The teacher explains the programming knowledge of LEGO MINDSTORMS Edu EV3 and introduces the relevant theory of the sensors to the students. While acquiring theoretical knowledge, students need to design and assemble sensor robots by themselves. Through practical operation, the learning of theoretical knowledge is consolidated. By combining knowledge and practice, students' literacy in all aspects is improved, and thereby the sensor teaching goals are achieved. Figure 2 illustrates the frame structure of sensor teaching. Figure 2 suggests that the contents of sensor teaching include LEGO MINDSTORMS Edu EV3 programming learning, learning of the adoption principles of sensors, robot assembly operation, and programming to control robot behavior. Students assemble a robot that combines various sensors and employ LEGO MINDSTORMS Edu EV3 for programming, to control the robot's actions and behaviors. To complete the teaching of these contents, the generally applied teaching mode is shown in Figure 3. As the figure shows, the learning of LEGO MINDSTORMS Edu EV3 is the foundation of the content learning. To familiarize students with the various interfaces in LEGO MINDSTORMS Edu EV3 and with the contents and functions of each module, detailed demonstrations and explanations are given by the teacher. After the students master the basic theory, they operate LEGO MINDSTORMS Edu EV3 themselves. During practice, students communicate and cooperate with each other and constantly think in order to make innovations. While improving hands-on ability, teamwork ability, computational learning ability, and innovation ability, they can also master the relevant theories. Thus, the goal of sensor teaching is effectively achieved.
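In the classroom the programming is done in the graphical LEGO MINDSTORMS Edu EV3 environment, but the same sensor-driven logic that students build there can be expressed in a few lines of text-based code. The sketch below is only an illustration of a typical student task (drive forward until the touch sensor is pressed while reporting the color sensor reading); it assumes the community ev3dev2 Python bindings rather than the LEGO software, and the port assignments are hypothetical.

```python
#!/usr/bin/env python3
# Illustrative EV3 sensor task, assuming the community ev3dev2 Python bindings.
from ev3dev2.motor import MoveTank, OUTPUT_B, OUTPUT_C, SpeedPercent
from ev3dev2.sensor.lego import TouchSensor, ColorSensor

tank = MoveTank(OUTPUT_B, OUTPUT_C)  # hypothetical motor ports
touch = TouchSensor()                # sensors are auto-detected on their ports
color = ColorSensor()

tank.on(SpeedPercent(30), SpeedPercent(30))  # drive forward
while not touch.is_pressed:
    # Logging the reflected light value links sensor theory to the robot's behavior.
    print("reflected light:", color.reflected_light_intensity)
tank.off()
```

Seeing the same stop-on-touch behavior expressed both graphically and as code can help students connect the sensor module of the visual editor with the underlying program logic.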
Information technology teaching method based on robot education The task-driven method is a relatively perfect teaching method based on the constructivist teaching theory. Robot education has its own unique nature. Task-driven teaching is one of the most common methods. The task-driven method can realize expected teaching effect in the entire robot teaching process [16]. Teachers need to take the overall teaching goals as the basis and divide them into small goals in various stages. The achievement of each small goal will bring students a sense of accomplishment. With the sense of accomplishment, students can give full play to their own subjective initiative and complete learning tasks more actively, which will also produce positive effects that students fight for success together. The task-driven method needs to set relevant tasks for students based on the contents of information technology courses. Combined with the characteristics of robot teaching, students' understanding of basic theories and students' learning of basic skills are strengthened. Moreover, according to the students' own situation, the students' qualities and abilities should be cultivated combined with the learning ability, cognitive characteristics, and their own condition. Inquiry teaching is a typical "learning by doing" teaching method in discovery teaching. When teachers teach students the relevant principles and theoretical knowledge, they will not directly explain it, but give related cases and questions to trigger students to think by themselves. Meantime, students are encouraged to apply various methods to figure it out, including discussion, observation, and experiment. In robot teaching, the cooperative inquiry method is also a relatively common method. Students collaborate to complete the assembly of robots and the design of related products. They can also compete in groups and adopt various communication methods to understand the basic theoretical knowledge. In this work, based on robot education, the teaching method of information technology is improved referring to task-driven method and inquiry teaching method. The teaching method is shown in Figure 4. Figure 4 discloses that teachers first set the relevant context according to the teaching contents. Then, they lead to the research topics of the course, to stimulate students' desire to learn and tap the potential of them. Subsequently, the task-driven method is adopted, which issues small goals continuously, so that the students will continue to explore. The students are divided into several groups to cooperate with each other in practical operations. The teacher is responsible for guidance, who needs to make an in-depth summary of each task, and give a corresponding evaluation of the students' own exploration results. Afterwards, when the teaching goal is realized, the teacher should make corresponding summaries and supplementary explanations. Then, the students summarize the achievements, and manage and consolidate the entire learning process. Finally, the teacher continues to guide students to expand their thinking, so that students can carry out independent innovation. In each stage, teachers should pay attention to the learning status of students, and cultivate their hands-on ability, learning ability, innovation ability, and quality in all respects accordingly. 
Teaching resources and teaching effect evaluation methods To comprehensively evaluate the information technology teaching resources, evaluation indexes for information technology teaching resources based on robot education are selected, and the corresponding evaluation index system is constructed, as shown in Figure 5. The evaluation indexes of the teaching resources are selected according to the development principles proposed above. To make the evaluation of the information technology teaching resources more reasonable, 50 teachers specializing in the robot education field are invited to score the evaluation indexes. The scoring standards are shown in Table 1. To analyze the teaching effect of the information technology curriculum standards and teaching methods, a questionnaire survey is implemented. A total of 50 questionnaires are distributed, and 48 valid questionnaires are finally recovered, a recovery rate of 96%. The questionnaire survey mainly involves students' attitudes towards the learning of information technology courses based on LEGO MINDSTORMS EV3, their mastery of the teaching content, and the cultivation of comprehensive abilities. The items of the questionnaire are expressed in Likert-scale form, and the form and scores are shown in Table 2. Evaluation results of information technology curriculum standard design In this work, 4 first-level evaluation indicators of the teaching resources developed with LEGO MINDSTORMS Edu EV3 and 12 second-level indicators are scored. The results are shown in Figure 6. Figure 6 discloses that the evaluation result of the information technology teaching resources proposed in this work is favorable, and the scores of the various indicators are relatively high. However, the diversity and openness of the teaching resources still need to be strengthened. Analysis of the effect of information technology courses Based on the results of the questionnaire survey, the teaching effect of information technology courses based on artificial intelligence robot technology is analyzed. The students' interest in learning and theoretical mastery of the sensor teaching content are shown in Figure 7. Figure 7 shows that most students are interested in the sensor teaching of robot education, accounting for more than 80% in total. Besides, more than 70% of the students have a good grasp of programming knowledge, sensor applications, and the related theories of robot assembly. The cultivation of students' hands-on operation ability and innovation ability, as reflected in the questionnaire survey, is shown in Figure 8. Figure 8 reveals that the proposed teaching method improves the hands-on ability of most students. The teaching method based on robot education has cultivated students' hands-on ability and innovation ability well.
The improved teaching methods also realize ideal teaching effects and can increase students' enthusiasm for learning. Through the combination of theoretical study and practical operation, students' hands-on operation ability and innovative ability are cultivated. This work provides an important reference for the adoption of artificial intelligence robot technology in the field of education and teaching, but there are still some deficiencies. The relevant suggestions for curriculum implementation in the curriculum standard design are not comprehensive enough, and relevant research and discussion shall be carried out in the future.
2021-05-05T00:09:08.368Z
2021-03-16T00:00:00.000
{ "year": 2021, "sha1": "20c6d9455690ac8f2dce6cf8915ab4acda68019f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3991/ijet.v16i05.20295", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f7f254dcc1ba9340b76e2929a7674e59ed745ae0", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Computer Science" ] }
228987344
pes2o/s2orc
v3-fos-license
Research on PID Parameter Tuning Based on Improved Artificial Bee Colony Algorithm Aiming at the difficulty of tuning industrial PID control parameters, an improved artificial bee colony PID tuning method is proposed. The improved artificial bee colony algorithm (IABC) uses a Boltzmann selection mechanism to avoid the premature convergence caused by the roulette selection mechanism; at the same time, a global crossover mechanism is introduced into the neighborhood search, which enhances the directedness of the algorithm; and through chaotic avoiding search, the diversity of the population is increased and the global search ability is improved. Finally, the IABC algorithm is applied to PID parameter optimization of a second-order system. The simulation results show that the IABC algorithm obtains a smaller ITAE and a shorter adjustment time than the basic artificial bee colony algorithm and the Ziegler-Nichols engineering tuning method, and there is no overshoot. The performance of the IABC algorithm is clearly improved. Introduction PID parameter tuning is a difficult problem in industrial control, and the usual approaches include theoretical calculation and engineering tuning. The theoretical calculation method relies heavily on the mathematical model of the system, which requires a full grasp of the system mechanism, while the engineering tuning method is mainly based on engineering experience, so it involves considerable trial and error. In recent years, the artificial bee colony (ABC) algorithm has been widely used in PID parameter tuning. Cai Chao [2] and others used the ABC algorithm to adjust PID parameters to improve the dynamic performance, rapidity, and stability of the control system. Zhou Huaixiang [3] proposed an immune bee colony algorithm, which dynamically balances the diversity of the population and the convergence speed, so as to avoid the algorithm falling into a local minimum. Zhang Dongli [4] and others used an improved artificial bee colony algorithm to design an optimal fractional-order PID controller for the AVR system and achieved good results. However, as a new algorithm, the ABC algorithm model is not yet mature; it still has problems such as easily falling into local minima, low convergence precision, and slow convergence speed. In this paper, the IABC algorithm introduces a Boltzmann selection mechanism to avoid premature convergence and performs a global crossover operation in the neighborhood search, which can effectively guide the search to converge to the optimal result and accelerate the convergence speed. Chaotic avoiding search is used to improve the global searching ability. The application of the IABC algorithm to PID parameter tuning gives the system better control performance and provides a theoretical basis for engineering application. Basic Artificial Bee Colony Algorithm In 2005, the Turkish scholar Karaboga proposed the artificial bee colony algorithm (ABC) [5] on the basis of previous studies and applied it to the optimization of function extrema, with good results. Later, the algorithm and its improved variants have been widely used in image processing, job scheduling, combinatorial optimization [6], path planning, the traveling salesman problem, and other engineering fields. The ABC algorithm is a kind of swarm intelligence model that completes the target search through cooperation among collecting bees, observation bees, and reconnaissance bees [7].
There are three stages in the ABC algorithm: in the first stage, the collecting bee searches in the neighborhood of its food source according to formula (1) and shares the food source information with the observation bees; in the second stage, the observation bee chooses a food source with the probability given by formula (2), according to the honey source richness reported by the collecting bees, and searches for a new honey source near the chosen source; in the third stage, when a honey source is exhausted, the collecting bee becomes a reconnaissance bee and generates a new honey source by random initialization according to formula (3). Through the cycle of these three stages, new and better nectar sources are constantly found, and eventually the optimal nectar source is found. The neighborhood search of the collecting and observation bees follows the standard ABC update, v_ij = x_ij + φ_ij(x_ij − x_kj) (1), where x_k (k ≠ i) is a randomly chosen neighboring food source and φ_ij is a random number in [−1, 1]. The selection probability is p_i = fit_i / Σ_n fit_n (2). In formula (2), fit_i is the fitness function value corresponding to the i-th solution, and p_i is the probability that the observation bee selects the i-th honey source. Random reinitialization follows x_ij = x_min,j + α(x_max,j − x_min,j) (3). In formula (3), x_max and x_min are the maximum and minimum of x, and α is a random number in [0, 1]. Improved Artificial Bee Colony Algorithm (IABC) A. Global crossover neighborhood search mechanism in the ABC algorithm Collecting bees and observation bees carry out the neighborhood search according to formula (1), which shows that the algorithm is good at exploration but neglects exploitation. In order to enhance the exploitation ability of the algorithm, Zhu [8] and others introduced the global optimal solution to guide the direction of the algorithm, as shown in equation (4), v_ij = x_ij + φ_ij(x_ij − x_kj) + β(x_gj − x_ij) (4). In formula (4), the β value is used to balance the algorithm's exploitation and exploration abilities, but this reduces the global optimization ability of the algorithm to some extent. In this paper, a crossover operation is used to improve the global optimization ability of the algorithm, that is, the collecting and observation bees perform a crossover operation with the global optimum after the neighborhood search. However, crossing only with the global optimum would reduce the search ability. Therefore, the search ability is enhanced by adding the β term to prevent the over-exploitation caused by simple crossover with the global optimum, as shown in formula (5), where cr is the crossover probability and x_g is the global optimal solution. B. Boltzmann selection mechanism If the observation bee chooses a honey source according to formula (2), the population diversity will decrease and the algorithm will converge prematurely. Therefore, the Boltzmann selection mechanism is introduced to adjust the selection pressure dynamically. In the early stage, the selection pressure is small, so that poorer individuals can survive and the population diversity is enhanced. In the later stage, the selection pressure is automatically increased to narrow the search range and accelerate the convergence to the optimal solution. The selection probability of a food source by the observation bee is given by formula (6), p_i = exp(fit_i / T) / Σ_n exp(fit_n / T), where T_0 is the initial temperature, T is the variable temperature that decreases as the search proceeds, and c is the number of cycles. C. Chaotic avoiding search The ABC algorithm searches the neighborhood randomly, which is blind to some extent; it cannot inherit the good genes of the original local optimal solution, and repeated searches may occur. Due to the ergodicity of chaotic search, the diversity and search precision of the population can be effectively improved [9].
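Before turning to the details of the chaotic search, the Boltzmann selection step of subsection B can be made concrete with a small numerical sketch. Python with NumPy is assumed purely for illustration, and the linear cooling schedule used below is a placeholder, not the paper's own temperature formula.

```python
import numpy as np

def boltzmann_select(fitness, cycle, max_cycle, t0=100.0, t_min=1.0, rng=None):
    """Pick a food-source index with Boltzmann (softmax) probabilities.

    Early cycles use a high temperature (weak selection pressure, more diversity);
    later cycles use a low temperature (strong pressure, faster convergence).
    The linear cooling below is an assumed placeholder schedule.
    """
    rng = rng or np.random.default_rng()
    t = t0 - (t0 - t_min) * cycle / max_cycle
    scaled = (fitness - fitness.max()) / t  # shift for numerical stability
    p = np.exp(scaled)
    p = p / p.sum()
    return rng.choice(len(fitness), p=p)

# Example: five honey sources; selection sharpens as the cycle counter grows.
fit = np.array([0.2, 0.5, 0.9, 0.4, 0.7])
print(boltzmann_select(fit, cycle=1, max_cycle=100))
print(boltzmann_select(fit, cycle=95, max_cycle=100))
```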
Building on this ergodicity, a candidate solution set is constructed near the current local solution by introducing chaotic sequence perturbations, and the scout bee selects the solution with the highest fitness among the candidates to become the new honey source. This not only allows the search to escape from the local extremum effectively, but also inherits the good genes of the original local extremum to speed up the algorithm. At the same time, the memory function of the avoiding table is used to record the local optimal solutions that have already been searched. If such a solution is encountered again, it is skipped to avoid falling into the local minimum, thus increasing the diversity of solutions. The specific update rule is: if a food source has not been updated after limit consecutive searches, it is abandoned and recorded in the avoiding list; then, according to formula (8), chaotic-sequence candidate solutions are generated near the local solution, and the best candidate solution that is not in the avoiding table is selected as the new honey source. If all the candidate solutions are in the avoiding table, the new solution is generated randomly. The chaotic sequence Y_i is generated by tent mapping and then mapped to the solution space [x_min, x_max] to construct the chaotic vector; the original local solution x is perturbed element by element with this vector to generate the candidate solution newX_i, as given in formula (8). The Euclidean distance between a candidate solution and the elements of the avoiding table is calculated by equation (9); if it is less than the set threshold, the solution is regarded as being in the avoiding table, where tsX_k is the k-th element of the avoiding list. D. IABC algorithm The IABC algorithm combines the Boltzmann selection, global crossover neighborhood search, and chaotic avoiding search mechanisms to form an improved bee colony algorithm. The specific flow of the IABC algorithm is shown in Figure 1. Case Study A. IABC_PID control principle In the IABC_PID algorithm, the PID controller parameters K_p, K_i, K_d to be optimized are taken as honey sources, and a certain performance index is taken as the objective function, forming a new IABC_PID optimization algorithm. The core problem of the algorithm is to find a group of K_p, K_i, K_d parameters by using the artificial bee colony algorithm, so that a certain performance index of the system reaches its minimum value and the controlled quantity quickly reaches the expected target, with very small overshoot and a short adjustment time. A second-order system is selected as the controlled object, and its transfer function is given in equation (10). B. Fitness function Because the integral of time-weighted absolute error (ITAE) performance index, ITAE = ∫ t·|e(t)| dt (formula (11)), has good practicability and selectivity [10], ITAE is selected as the objective function. The fitness function is inversely proportional to ITAE, as shown in equation (12); that is, the smaller the performance index ITAE, the higher the fitness and the better the quality of the honey source. C. IABC_PID algorithm tuning steps The basic steps of the PID tuning algorithm are as follows: Step 1: Initialization: determine the population size, the maximum number of iterations MaxCycle, the search limit, the crossover probability cr, the ranges of the control parameters K_p, K_i, K_d, etc. Step 2: NP groups of feasible K_p, K_i, K_d solutions (honey sources) are generated randomly. The output response of the controlled object under PID control is obtained, and the fitness is calculated. The best 50% of the solutions are taken as the collecting bees.
Step 3: Each collecting bee searches in the vicinity of its original honey source using the global crossover operation, looking for a new honey source, and selects among them according to the greedy criterion. Step 4: According to the Boltzmann mechanism, the probability of each honey source being followed is calculated. Step 5: The observation bees search in the global crossover mode and retain the better nectar sources. Step 6: If a honey source has been searched limit consecutive times without being updated, it is abandoned, the collecting bee is transformed into a reconnaissance bee, and a new honey source is generated through chaotic avoiding search. Step 7: Determine whether the maximum number of iterations MaxCycle has been reached; if not, jump to Step 3; if so, the algorithm terminates. Simulation Experiment and Result Analysis In the simulation experiment and result analysis, three methods, the Ziegler-Nichols engineering tuning method, the ABC algorithm, and the IABC algorithm, are used to tune the PID parameters for the controlled object in equation (10), and the experiment is carried out in MATLAB (fixed-step, ode3 solver). The unit step response is shown in Figure 3. The parameters of the two bee colony algorithms are listed in Table 1. Table 1 and Figure 2 show that the control performance of the IABC algorithm is significantly improved: the ITAE value is smaller, the adjustment time is shorter, and there is almost no overshoot. The optimization process of the two bee colony algorithms is shown in Figure 3. It can be concluded from the figure that the IABC algorithm converges rapidly after 20 iterations, which shows that it has an efficient search ability and a fast convergence speed. Conclusion The IABC algorithm adopts the Boltzmann selection mechanism to increase population diversity at the early stage of the algorithm, effectively avoiding premature convergence, and introduces a global crossover operation, which can effectively guide the search to converge to the optimal solution; the algorithm also uses chaotic avoiding search to improve the global searching ability. The algorithm has strong exploration and exploitation abilities, and improves the convergence accuracy and convergence speed of the original ABC algorithm. The PID controller tuned by the IABC algorithm has no overshoot, fast response, and a small ITAE, so the algorithm has good prospects for engineering application. Due to space limitations, the reliability of the algorithm needs to be further verified in future work.
2020-11-12T09:09:49.078Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "eb491615a9972c9bcdfaed73471076ad175a1959", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1670/1/012017", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "4a126fdda0ae7635c3f3c98ecfb9b3cc524209be", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }