A RSTUDIO-based modeling program for dispersion of particulate matter emissions using Gaussian plume equation Mathematical models allow evaluating the effects of air pollutants on the environment under several conditions, being a relevant and low-cost tool for planning and regulatory purposes. Recent studies have shown the effects of particulate matter on the environment and human health, especially cardiovascular and respiratory diseases. This study aims to evaluate the air quality of Volta Redonda, Brazil, due to particulate matter emitted by stationary point sources of a steel industry, using meteorological data from three monitoring stations. A mathematical model was developed in RStudio®, using the Gaussian dispersion equation and the Google Maps API to visualize the results. The Davidson-Bryant plume rise equation was included. Observed data revealed light prevailing winds from the southeastern, northern, and eastern directions, which were used to simulate particulate matter concentrations for 24-hour periods, under stable and unstable conditions according to the Pasquill-Gifford classification. The results show that the stations exceed the daily standards determined by the legislation in different scenarios, with Santa Cecília station being the one that violated these standards the most, reaching an average daily value of 3673.04 µg m⁻³, with an hourly peak of 7712.76 µg m⁻³, for the scenario of prevailing northwest winds and a wind speed of 1 m s⁻¹. Other stations also violated the standards, with the Retiro station showing better results for the north and northwest wind directions. Introduction Air pollution is one of the biggest environmental concerns of recent decades, due to industrial growth, high volumes of fossil-fuel vehicle traffic, and a significant increase in respiratory, cardiovascular, and neurological diseases attributed to the inhalation of these pollutants [1], causing approximately three million premature deaths per year [2]. 
Besides human health, the intensification of air pollutants in the atmosphere and their consequent deposition on soil and water bodies may cause acidification, affecting photosynthesis capacity, reducing agricultural productivity, changing the natural nutrient balance [3], and being responsible for phenomena such as photochemical smog, stratospheric ozone depletion, and global warming [4]. Mathematical models are currently used to estimate potential impacts on air quality [5]. Dispersion modelling allows assessing the local atmospheric circulation and its influence on pollutant concentrations, and verifying whether legal air quality standards are attained [6]. The Gaussian plume model is the most widely used model for point source emissions; it is based on the transport and diffusion of air pollutant particles, using empirical parameters (sigmas) as a function of atmospheric stability [7]. Gaussian plume models such as the industrial source complex (ISC) model, the AERMIC Model of the AERMOD software, and CALPUFF, developed in the United States, are frequently used for regulatory purposes and environmental licensing processes. Although the ISC model has been replaced by the AERMOD software, the former continues to be extensively used, which can be explained by the unavailability or inaccessibility of the input data required by AERMOD and other more sophisticated models [8]. In the present study, a Gaussian plume model was developed on the RStudio® platform to perform air pollution dispersion studies. The model was used to simulate the dispersion of inhalable particulate matter (PM₁₀) emissions from a steel production plant located in Volta Redonda, Brazil, and an R package based on the Google Maps API was used to visualize the results. Emission data This study was carried out in the city of Volta Redonda, located in Rio de Janeiro State, Brazil. The city is approximately 130 km from the state capital and is surrounded by mountains and valleys. 
The city has a mesothermal climate with an average annual precipitation of 1300 mm and relative humidity around 75% [9]. The main economic activity is the steel industry, sheltering the largest plant in Latin America, whose production reached 912,000 metric tons of steel in the third quarter of 2018 and achieved approximately US$ 1.3 billion in profit in 2018 [10]. As input emission data, the steel plant's point-source emissions inventory of 2017 [11] was used, which includes 35 chimneys and measured data for nitrogen oxides (NO and NO₂ as NOx), sulfur oxides (SOx), and inhalable PM₁₀ from seven different processes within the plant. Figure 1 shows the locations of the sources and monitoring stations surrounding the plant. The average emissions from point sources by productive area are shown in Table 1. Meteorological data comprising wind, temperature, and relative humidity from January 2007 to December 2016 were collected from three meteorological stations placed near the complex. Due to the unavailability of data from Volta Redonda, other meteorological variables such as solar radiation, 24-h accumulated precipitation, and cloud cover were obtained from the National Institute of Meteorology (INMET) conventional and automatic meteorological stations, both located in the nearby city of Resende, Brazil. Dispersion model: structure and configuration The program was developed in RStudio® and divided into three components: variable definitions and model configuration, equation resolution, and post-processing. The first script describes the possible input values and allows the user to set the model configuration. The second defines routines for the wind direction distribution, atmospheric stability pattern, and output data, and subsequently provides concentration values using the Gaussian plume model; the post-processing scripts use the Openair R package for plotting and providing general statistics of observed data, and the RgoogleMaps package to get background maps through the Google Static Maps API. 
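The model itself was written in R; as a language-agnostic illustration, the following Python sketch shows the two equations at the core of the second script: the Davidson-Bryant plume rise and the ground-reflecting Gaussian plume concentration. Function names and all numerical values below are illustrative assumptions, not taken from the study.

```python
import numpy as np

def davidson_bryant_rise(d, v_s, u, T_s, T_a):
    """Davidson-Bryant plume rise (m).
    d = stack diameter (m), v_s = gas exit velocity (m/s),
    u = wind speed (m/s), T_s / T_a = stack-gas / ambient temperature (K)."""
    return d * (v_s / u) ** 1.4 * (1.0 + (T_s - T_a) / T_s)

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (g/m^3) with ground
    reflection. Q = emission rate (g/s), H = effective emission height (m),
    y = crosswind distance (m), z = receptor height (m)."""
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    # reflection term: image source at -H mirrors mass at the ground
    vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative run: 10 g/s source, light 1 m/s wind, arbitrary dispersion sigmas
H = 30.0 + davidson_bryant_rise(d=2.0, v_s=10.0, u=1.0, T_s=400.0, T_a=293.0)
c_centerline = gaussian_plume(Q=10.0, u=1.0, y=0.0, z=0.0,
                              H=H, sigma_y=60.0, sigma_z=30.0)
```

In the paper's configuration, σy and σz would come from the McElroy-Pooler urban fit as functions of downwind distance and stability class, rather than being fixed as here.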
The model was configured according to the wind speed and direction obtained from the monitoring stations, using the inverse standard normal probability distribution to generate a prevailing wind field for each season/station; the coefficients σy and σz were defined using the McElroy-Pooler urban fit; effective emission heights were obtained by the Davidson-Bryant plume rise formula for each stack. All emissions were presumed simultaneous; the model does not consider wind variability, atmospheric turbulence, chemical transformation, wet deposition, inhomogeneous terrain, or fugitive emissions [5,6]. Wind and stability analysis Wind roses were generated using the Openair R package and showed dissimilar wind patterns at each station, both in seasonal variability and in prevailing wind directions. Average wind speeds, however, showed prevailing light winds, ranging between 1.0 ms⁻¹ and 2.5 ms⁻¹. Wind directions ranged from northern (N) to north-western (NW) in summer and autumn, and from eastern (E) to southeastern (SE) during winter and spring. SE winds occurred for 10% of all seasons, indicating the influence of cold front passages, and north-quadrant winds are believed to occur due to the presence of mountain ranges positioned north of the steel complex, as observed by Guimaraes [12]. Figure 2 shows wind roses for each monitoring station. The average daily variation of stability classes, according to the Pasquill-Gifford classification [5], was computed using average wind speed and climatological data from the Resende station. Modeling results It can be seen from Figures 3 to 5 that, for prevailing northwest winds of 2.5 ms⁻¹, Santa Cecília station exceeded the limit PI-1 (120 μg m⁻³) [13], currently the national air quality standard, and consequently the intermediate (PI-2) and final (PF) values, which are both lower than PI-1. 
The exceedance occurred under both stability conditions; however, under more stable conditions, as shown in Figure 6, simulated values were lower, with a daily average of 149.58 μg m⁻³, approaching PI-1, while unstable conditions reached 1480.63 μg m⁻³. Table 2 shows the highest rates of exceedance of the national air quality standard PI-1 of 120 μg m⁻³. Santa Cecília station presented the first and second highest rates, for the northwest and north directions under similar wind speed (1.0 ms⁻¹) and stability conditions. Belmonte had the third highest rate for the prevailing east winds. Conclusion The analysis of the results pointed to a high influence of the prevailing wind directions on particulate matter concentrations, mostly related to the permanent and transient pressure systems that appear over the area throughout the year. Santa Cecília station revealed high concentration results for the N-NW wind directions, while the Retiro and Belmonte stations showed higher concentrations for S-SE winds. The area south of the complex, near Santa Cecília, exhibited the highest concentration results, mainly for NW winds of 1.0 ms⁻¹, under which the highest average concentration was reached. Air quality standards were exceeded under several conditions, with high exceedance rates, indicating the need to improve air quality control in order to meet national and local standards and provide environmental health and protection.
The Back-care Behavior Assessment Questionnaire (BABAQ) for schoolchildren: development and psychometric evaluation Background Back pain is an important public health problem and the leading cause of adult disability worldwide, and it is rising among schoolchildren populations. Despite numerous studies reporting on back-care interventions in pediatric populations, there is currently no existing theory-based instrument to assess the impact and outcomes of these programs. This paper reports on the development and psychometric testing of a theory-based back-care behavior instrument for use among elementary schoolchildren. Methods This was a three-phase study that included the following steps: a) a literature search to review existing instruments that assess healthy spine-related behavior in elementary schoolchildren; b) development of a new instrument, namely the Back-care Behavior Assessment Questionnaire (BABAQ), based on the Social Cognitive Theory and existing instruments; and c) conducting a cross-sectional study to test the psychometric properties of the BABAQ by estimating the content validity ratio (CVR) and the content validity index (CVI), performing confirmatory factor analysis (CFA) and reliability analysis, and assessing convergent validity as estimated by the Average Variance Extracted (AVE). Results First, a questionnaire (the BABAQ) was developed. It contained 49 items tapping into 5 pre-defined constructs (skills, knowledge, self-efficacy, expectation beliefs, and behavior). Then, 610 fifth-grade female schoolchildren were enrolled in a cross-sectional study and completed the BABAQ. The CVR and the CVI of the questionnaire were found to be ≥0.54 and >0.7, respectively. The CFA confirmed the five constructs and showed good fit for the data. The intraclass correlation (ICC) and the Cronbach's alpha coefficients for the BABAQ were 0.84 (P < 0.001) and 0.93, respectively. The convergent validity as measured by the AVE also showed satisfactory results. 
Conclusion The findings suggest that the Back-care Behavior Assessment Questionnaire (BABAQ) is a valid instrument for measuring healthy spine-related behaviors among schoolchildren. Background Musculoskeletal disorders (MSDs), including back pain, are among the most important problems causing excessive absenteeism in the workplace, imposing high economic costs on health care systems, and affecting nearly 540 million people [1][2][3][4][5]. As described by the World Health Organization (WHO), back pain comprises low back and neck pain (mild, moderate, severe, and most severe). An individual who develops back, leg, and arm pain might thus experience difficulty dressing, sitting, standing, walking, turning one's head, holding arms up, as well as lifting things. They might also sleep poorly, have headaches, feel tired and worried, and lose some enjoyment of life [6]. Although the burden of back pain among adults has thus far been well documented, this subject matter in children is underreported. According to WHO statistics in 2015, back pain ranked 9th in years lived with disability in 10-to-14-year-olds and 4th in children and young adolescents aged 15-19 years, even much higher than non-communicable diseases such as cancer and anxiety disorders [7]. It is of note that the lifetime prevalence rate of low back pain (LBP) in children varies from 13 to 51% [8] and increases with age, with a sharp rise evident as the transition occurs from childhood to adolescence, the boundary being approximately at the age of 10-13 years. In addition, previous studies have reported higher prevalence rates among adolescent girls than boys (38.9% vs. 35.0%) [3,9]. As such, the implementation of educational interventions for back care among children and young adolescents is becoming increasingly popular. 
Therefore, it is argued that measuring healthy spine-related behaviors during daily life activities among children, as a key outcome in the evaluation of educational interventions for back care, is of prime importance [10]. Up until now, a number of questionnaires have been developed for such purposes. For instance, Spence et al. [11] and Sheldon [12] introduced written and practical tests to assess pupils' knowledge and performance with regard to correct lifting techniques among 3rd, 5th, 6th, and 8th-grade public-school children. As well, Monfort et al. [1] developed and evaluated the psychometric properties of a health questionnaire on back-care knowledge in daily life physical activities (known as HEBACA-KNOW), consisting of 24 items examining levels of back-care knowledge among adolescents. Similarly, Noll et al. [2] designed the Back Pain and Body Posture Evaluation Instrument (BackPEI) for schoolchildren, relevant to the evaluation of back pain and its associated behavioral risk factors. In addition, Cardon et al. [13][14][15] utilized a battery of questionnaires consisting of different constructs including general and specific back-care knowledge, fear-avoidance beliefs, self-efficacy, attitudes, self-reported behaviors, practical tests, social support, program commitment, and perceived behaviors for children, parents, and teachers. Despite the effectiveness of such questionnaires in advancing knowledge on the subject matter, none has been theory-based. In addition, some discrepancies have also been found in the constructs and psychometric properties of the questionnaires introduced. In fact, the assessment of back-care behavior has been scarcely investigated from a theoretical point of view, and most previous studies have not reflected on construct validity, especially exploratory or confirmatory factor analyses (namely, EFA and CFA). 
To this end, we believe that despite numerous studies reporting on back-care interventions in pediatric populations [8,11,13,14], there is currently no existing theory-based measure to assess the impact and outcomes of these programs. Thus, this study aimed to develop a theory-based back-care behavior assessment questionnaire for pupil populations attending elementary schools. The specific objectives were to evaluate content, face, and structural validity, as well as the reliability of its subsections. Theoretical framework The conceptual framework for this study and the development of the instrument was based on the Social Cognitive Theory (SCT). It has been shown that this theory has good power to predict behavior changes, especially in pupils [16]. According to the SCT, the three main psychological determinants of any behavior change are: self-efficacy (SE); behavioral capability (skills and knowledge to perform a given behavior); and outcome expectation beliefs (behavioral beliefs) [17,18]. The proposed cognitive factors of behavior are an important set of modifiable factors that are assumed to combine in different ways to determine health-related behavior and to distinguish between those performing and not performing behaviors [17,18]. Therefore, we reasoned that an instrument intended to measure back-care behavior among elementary schoolchildren should address the constructs proposed by this theory in order to achieve the desired behavior change of back care during daily activity. Design and procedure This study comprised three parts: a broad literature search in order to review existing questionnaires for assessing healthy spine-related behavior in elementary schoolchildren; compiling items to fulfill pre-defined constructs based on the Social Cognitive Theory; and conducting a cross-sectional study in order to validate the questionnaire among 5th-grade students attending elementary schools in Tehran, Iran. 
Preliminary questionnaire The early version of the Back-care Behavior Assessment Questionnaire (BABAQ) was developed based on the content of other existing questionnaires (Table 1). The draft instrument yielded 55 items in five predefined constructs as follows: 1. A checklist for practical assessment of skills for back-care principles. The checklist consisted of seven tasks and 24 items, covering posture in relation to sleeping, sitting in a chair to write, sitting in a chair to talk, using a computer, and lifting an object from the ground (BackPEI). Each item is rated on a 3-point scale ranging from 0 (not fulfilling the criteria) to 2 (correct completion of the task), giving a score ranging from 0 to 48 points, where higher scores indicate better fulfillment of tasks [14,19]. 2. Back-care knowledge containing 13 multiple-choice questions. Scores on this construct range from 0 to 13, where higher scores indicate better knowledge [12][13][14]19]. 3. Self-efficacy subscale containing 4 items. Each item is rated on a four-point scale (from difficult to easy), giving a score ranging from 4 to 16, where higher scores indicate higher self-efficacy [10,13]. 4. Expectation beliefs containing 6 items. Each item is rated on a five-point scale (strongly disagree to strongly agree), giving a score ranging from 6 to 30, where higher scores indicate stronger beliefs [10,13]. 5. Back-care behavior containing 8 items regarding daily activity. Response categories range from never (1) to always (5), giving a score ranging from 8 to 40, where higher scores indicate better preventive behavior [10,13]. Then, the content and face validity of the preliminary version of the questionnaire were assessed. To determine the content validity, a panel of 13 specialists in health education and health promotion, epidemiology, and physiotherapy reviewed the questionnaire in order to estimate the content validity ratio (CVR) and the content validity index (CVI). 
They rated items based on three evaluation options: unnecessary, useful but unnecessary, and necessary. The CVR was then calculated via the following equation for each item: CVR = (nE - N/2) / (N/2), where nE is the number of specialists who indicate that an item is "essential" and N is the total number of specialists. In order to determine whether to retain or discard specific questions, the CVR value of each item was then compared with the Lawshe table. In the present study, values ≥0.54 were considered reasonable to verify each item [20]. The specialists were also asked to assess the relevance of each question to measure the CVI. To obtain the CVI value, the expert panel rated the relevance of each question as 1 (not relevant), 2 (somewhat relevant), 3 (quite relevant), or 4 (very relevant). The CVI value was then calculated using the following formula: CVI = n/N, where n is the number of specialists who gave a score of 3 or 4 and N is the total number of experts [21]. Values > 70% were regarded as appropriate to verify each question according to Lawshe. At the end of this process, 4 items were removed, yielding a total of 51 items. Then, a qualitative method was used for face validity. A group of six 5th-grade girls were asked to examine the questionnaire and indicate whether they could read and understand the questions. As a result, 2 additional items were removed, yielding a 49-item provisional version of the questionnaire. As such, the total score for the BABAQ ranges from 16 (lowest) to 132 (highest). We assigned the following criteria to interpret the scores: high (above the third quartile, 104-132); intermediate (between the first and third quartiles, 45-103); and low (less than the first quartile). Psychometric evaluation The provisional questionnaire with 49 items [Additional file 1] was then administered to a sample of female students in Tehran, Iran. 
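Under the definitions above, both indices reduce to one-line computations. A minimal Python sketch (the function names are my own, not from the paper):

```python
def cvr(n_essential, n_experts):
    """Lawshe content validity ratio: (nE - N/2) / (N/2)."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

def cvi(n_relevant, n_experts):
    """Content validity index: share of experts rating the item 3 or 4."""
    return n_relevant / n_experts
```

With the study's 13-member panel, an item rated essential by all 13 gives CVR = 1.0, while one rated essential by 10 of 13 gives CVR ≈ 0.538, just below the 0.54 cut-off taken from the Lawshe table.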
Since previous studies reported higher prevalence and incidence among girls than boys (38.9% vs. 35.0%) [3,9], female students were selected from district 22, a district that represents a population with a variety of socio-economic backgrounds. Data analysis Data were analyzed using SPSS version 24; the level of significance was set at p < 0.05. Descriptive statistics were used to present the demographic characteristics of participants and the self-reported back and neck pain prevalence during the last week. To assess the psychometric properties of the questionnaire, the following statistical procedures were applied: Item analysis: In order to analyze the correlation of items and predefined constructs, item-total correlation analysis was performed. As such, the correlation between items and hypothesized constructs was calculated using the Pearson correlation coefficient. Structural validity Confirmatory factor analysis (CFA) was conducted to investigate the predefined constructs of the BABAQ (see Table 1). The CFA is the best method for evaluating the structural validity of an instrument when there is a theoretical approach to analyze the instrument with specified constructs and for the direct representation of a hypothesized factor model, leading to a measure of model fit [21][22][23][24]. Since in most forms of factor analysis the items are assumed to follow a normal distribution [25], and the data in this study were normally distributed, the maximum likelihood (ML) estimator was applied. To test the goodness-of-fit of the model, the Comparative Fit Index (CFI), Root Mean Square Error of Approximation (RMSEA), and Standardized Root Mean Square Residual (SRMR) were examined. The data were analyzed using LISREL 8.80 to test for the significance of item loadings on each related factor and to evaluate the overall model fit intended by the SCT framework. 
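Two of the fit indices above follow directly from the model chi-square, degrees of freedom, and sample size. The following Python sketch (a hypothetical helper; the study itself used LISREL 8.80, and the input values here are made up for illustration) shows the standard formulas:

```python
import math

def fit_indices(chi2, df, n):
    """Return (chi2/df ratio, RMSEA) for a fitted CFA model.
    n = sample size; RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    ratio = chi2 / df
    rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    return ratio, rmsea

# Hypothetical model: chi2 = 150, df = 50, n = 610 respondents
ratio, rmsea = fit_indices(150.0, 50, 610)
```

CFI and SRMR, by contrast, require the full fitted and baseline covariance matrices, so they cannot be reduced to a formula of this kind.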
The following values were considered acceptable for the model fit: χ2/df < 5, CFI > 0.95, RMSEA < 0.10, SRMR < 0.08 [21]. We also used the Average Variance Extracted (AVE) statistic in order to test the convergent validity of the constructs. AVE values above 0.50 indicate adequate convergent validity. Reliability Internal consistency was estimated using the Cronbach's alpha coefficient. A value of 0.70 or above was considered satisfactory [26]. Test-retest reliability was also used to examine stability by calculating the intraclass correlation coefficient (ICC). A sample of 50 students who did not participate in the main study completed the questionnaire twice within a 2-week interval. The ICC was also used to evaluate inter-rater reliability on each group of items for the practical skill domain, as rated by two independent and trained raters. Values higher than 0.70 were considered to indicate excellent agreement [14]. In addition, we estimated the standard error of measurement (SEM). Participants In all, 610 5th-grade girls participated in the study; 50.3% of the participants (n = 307) were the only child in the family, and 74.1% of their fathers (n = 452) and 73.9% of their mothers (n = 451) had secondary or higher education, respectively; about a quarter of the students (n = 144) reported back pain during the last week. The demographic characteristics of the pupils are shown in Table 2. Item-total correlation The correlations between items and predefined constructs are presented in Table 3. As shown, the correlation between each item and its own predefined construct was satisfactory. Reliability The Cronbach's alpha coefficients for all subscales were high, ranging from 0.93 to 0.97. The intraclass correlation coefficients of the four self-reported subscales of the BABAQ ranged from 0.76 to 0.83. Table 4 presents the Cronbach's alpha coefficients, ICC values, SEM, and MDC for the questionnaire. 
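The internal-consistency and convergent-validity statistics reported above can be reproduced with a few lines. A minimal Python sketch of the standard formulas (hypothetical helper names; the study itself computed these with SPSS and LISREL):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

def ave(std_loadings):
    """Average Variance Extracted: mean squared standardized loading."""
    lam = np.asarray(std_loadings, dtype=float)
    return float((lam ** 2).mean())
```

For example, a construct whose standardized loadings are 0.7, 0.8, and 0.9 has AVE ≈ 0.65, above the 0.50 threshold used in the paper.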
Convergent validity The calculated Average Variance Extracted (AVE) values for skills, knowledge, self-efficacy, beliefs, and behavior were 0.54, 0.73, 0.79, 0.49, and 0.86, respectively, indicating adequate convergent validity, although expectation beliefs had an AVE value close to 0.50. In addition, we estimated values for the skills subscale inter-rater agreement (Table 5). Discussion This study is a modest contribution to ongoing discussions on the development and psychometric testing of the Back-Care Behavior Assessment Questionnaire (BABAQ) among 5th-grade girls in some Iranian elementary schools. Particular attention is thus paid to measuring the validity and reliability of the BABAQ sub-scales. For a few reasons, this study has a novel approach and is important. First, the originality of this study lies in the fact that it is a theory-based instrument for evaluating healthy spine-related behaviors in pupils. It is also significant because the BABAQ provides the opportunity to assess behaviors and their determinants according to the Social Cognitive Theory (SCT). As such, the instrument developed might help create theory-based interventions in order to change unsafe behaviors among pupils. Secondly, the psychometric properties of the BABAQ were evaluated with the involvement of four groups: the research team (academics), the 5th-grade girls, their teachers, and health specialists. Thirdly, to the best of the authors' knowledge, this is the first attempt reporting on the construct validity of an instrument for back pain prevention employed for evaluating education programs. Content validity verification in this study indicated that three items associated with the knowledge section, including 'Who is sitting the best way', 'If you have to move equipment in the gym, you should ...', and 'Which posture is the best?', had no acceptable values. As well, one item related to the behavior section, i.e., 'No twisting while moving heavy objects', showed the same result. 
Accordingly, all the mentioned items were removed from the final version. The panelists also believed that these items were irrelevant. These results are in good agreement with Dolphens et al., who used almost similar items in their questionnaires [9]. A further contribution of this study is the use of construct validity testing and CFA to examine multiple variables within a theoretical framework [20]. Moreover, various indicators, such as the Chi-square (χ2)/degree of freedom (df) ratio, the comparative fit index (CFI), the standardized root mean square residual (SRMR), and the root mean square error of approximation (RMSEA), verified the fitness of the models. In addition, the findings demonstrated that each of the five subscales in the BABAQ had an appropriate fit within the SCT framework. Empirical results from the Cronbach's alpha, test-retest, and inter-rater reliability analyses also confirmed that the BABAQ showed acceptable internal consistency (ranging from 0.93 to 0.97) within the five sub-scales, provided reliable results over repeated administrations (ranging from 0.76 to 0.83), and produced significant inter-rater agreement (ranging from 0.73 to 0.95) at the 5th-grade level. Likewise, higher values of the BABAQ scores were associated with greater standard deviations (SDs) (except knowledge), accounting for the remarkably higher standard error of measurement (SEM) scores for each sub-scale. The higher scores for the BABAQ could be due to the small sample size in this study. In previous studies, the reliability of the questionnaires had been assessed only in terms of test-retest stability and internal consistency. For example, Cardon et al. had evaluated different instruments, based on previous literature, indicating reliability ranging from 0.42 to 0.82 [13]. In their study, the Cronbach's alpha coefficient of expectation beliefs was 0.70, and the other intended sections were not applicable to the present study. 
In that study, in order to verify face and content validity, 150 children, 20 parents, and 10 teachers had completed the questionnaire to identify unclear items, which had then been modified; moreover, they had not used panelists. Inter-rater reliability results in the present study are accordingly in relative agreement with the findings reported by Cardon et al., who obtained intra-class correlation coefficients (ICC) for inter-rater agreement on the sum scores of the practical test items ranging from 0.785 to 0.980 [14]. Other results were also better than those of previous studies. It is argued that the BABAQ is suitable for a wide variety of potential applications to measure back-care behaviors and their main determinants among 5th-grade girls. One unique feature of the BABAQ is the reliability and validity of its sub-scales, which cover back-care skills and knowledge, self-efficacy towards proper back-care behaviors, expectation beliefs, and healthy spine-related behaviors. These sub-scales may be measured, evaluated, and modified by potential change strategies, thereby supporting back pain prevention and ultimately back health promotion. Limitations In this study, there are limitations that must be noted. First, data were only collected from 5th-grade girls attending public elementary schools in Tehran's region 22; other independent elementary schools, grades, as well as male pupils were not enrolled in the study; therefore, the generalizability of the outcomes to the overall population may be limited. In addition, to decrease recall bias, back pain reports were limited to the last week. The subscales of the BABAQ were limited to the main psychological determinants of behavior in the SCT, and the other constructs (environmental determinants of behavior) were not used, in order to decrease the questions' burden on participants. In the construct validity verification phase for the skills items, the sample was limited to a smaller population because of the difficulty of assessment. 
However, future studies should test the CFA with an adequate number of participants. Despite these limitations, the BABAQ is a valid and reliable instrument to measure healthy spine-related behavior in girls as young as 11 years of age. Conclusion The Back-care Behavior Assessment Questionnaire (BABAQ) was demonstrated to be a valid instrument to measure healthy spine-related behavior, including behavioral capability (skills and knowledge), self-efficacy, expectation beliefs, and back-care behavior (performance). Future attempts should focus on assessing whether the BABAQ is applicable in diverse pupil populations. Additional file 1. Back-care Behavior Assessment Questionnaire.
Canine Prostate Cancer: Current Treatments and the Role of Interventional Oncology Simple Summary Prostate carcinoma remains a therapeutic challenge in veterinary medicine. Current treatment focuses on locoregional control, ideally while minimizing morbidity, as well as systemic therapy for the management of distant disease progression. Below, the current treatment modalities, including the role of interventional oncology, in the management of prostate carcinoma therapy in dogs are reviewed. Additionally, the role of dogs as a translational model for research in people is acknowledged, as is the consideration of applying therapeutic strategies commonly utilized in people to dogs. Abstract Prostate carcinoma (PC) is one of the most common cancers worldwide in men, with over 3 million men currently living with prostate carcinoma. In men, routine screening and successful treatment schemes, including radiation, prostatectomy, or hormone therapy, have allowed for high survivability. Dogs are recognized as one of the few mammals to spontaneously develop prostate neoplasia and are an important translational model. Within veterinary medicine, treatment options have historically been limited in efficacy or paired with high morbidity. Recently, less invasive treatment modalities have been investigated in dogs and people and have demonstrated promise. Below, current treatment options available in dogs and people are reviewed, along with a discussion of current and future trends within interventional treatment for canine PC. 
Introduction Most tumors arising from the prostate are histopathologically characterized as prostate adenocarcinoma or urothelial carcinomas (UC), although other types have been reported [1]. Prostate carcinoma (PC) may arise from the acinar epithelium, the urothelium lining the prostatic urethra, or the ductal epithelium, and it remains challenging to distinguish prostate-origin carcinoma from urothelial carcinoma arising from the urethra or prostatic ducts and invading the prostate secondarily. Histopathologically, both UC and PC demonstrate a heterogenous appearance, and histologic differences do not appear to be correlated with clinical outcomes in dogs [1][2][3]. Additional histopathological investigation of canine PC and hyperplasia suggested that different cell populations are susceptible to neoplastic transformation (ductal cells) compared to hyperplastic, age-related, steroid-responsive change (basal cells). This finding is also consistent with the notably increased risk of PC in castrated dogs compared to intact male dogs [4,5]. Most PCs are diagnosed in older castrated male dogs [2][3][4][5]. In general, PC is not identified until later stages of the disease, when clinical signs such as dysuria, hematuria, dyschezia, hind limb pain, or ataxia are noted, with urogenital signs typically preceding gastrointestinal and systemic signs [6,7]. Metastatic disease is most frequently diagnosed in the lung, lumbar spine/pelvis, or lumbar lymph nodes [2,3]. The pulmonary metastatic rate at the time of diagnosis ranges from 8 to 50% [5,8], while local metastasis, such as to lymph nodes and bone, ranges from 15 to 72% [5,9], and gross metastasis is reportedly >80% at the time of death [3,5]. 
Diagnosis The diagnosis of PC in men often consists of chemical marker assay screening, such as prostate-specific antigen (PSA), as well as prostate biopsy [10]. Currently, there are no such screening tools able to identify benign or malignant diseases of the canine prostate or distinguish between malignant prostatic or urothelial cell origin, which creates an inherent challenge to diagnosing canine PC [1]. Recently, BRAF gene mutations have been discovered in a majority of canine PC and UC [11]; these are associated with pro-oncogenic properties and can also be detected in urine samples of dogs with BRAF mutation-containing UC or PC [12]. Imaging, including ultrasound, computed tomography (CT), or radiographs, may be performed. Changes such as mineralization, regional lymphadenopathy, loss of parenchymal architecture, and loss of prostate capsule integrity are consistent with canine PC. Mineralization of the prostate gland in neutered dogs is strongly associated with neoplasia; however, this finding in intact dogs is less conclusive [13]. Radiographs or CT may identify bone lesions consistent with distant metastasis or mineralization in the region of the prostate [2,14]. Cytologic diagnosis acquired by ultrasound-guided fine-needle aspiration, diagnostic catheterization, or urine sediment cytology has been reported. There appears to be a strong correlation between cytologic and histopathologic diagnosis [15], although the manner in which cytology is acquired is important. Diagnostic catheterization appears to be highly sensitive and specific for UC/PC cytology, and pathologist review may help improve the sensitivity and specificity of this diagnostic [16]. Importantly, seeding of the abdominal wall following fine-needle aspiration or percutaneous biopsy of UC/PC has been described [17,18], and caution should be used when considering this diagnostic method. 
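The catheterization study cited above reports sensitivity and specificity for UC/PC cytology. As a reminder of how those figures are derived, here is a minimal sketch using a 2x2 confusion matrix; the counts below are purely hypothetical for illustration and are not taken from the cited study:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Compute diagnostic sensitivity and specificity from a 2x2 table.

    tp: truly diseased dogs with a positive cytology result
    fn: truly diseased dogs missed by cytology
    tn: disease-free dogs correctly called negative
    fp: disease-free dogs incorrectly called positive
    """
    sensitivity = tp / (tp + fn)  # fraction of true cases detected
    specificity = tn / (tn + fp)  # fraction of non-cases correctly excluded
    return sensitivity, specificity

# Hypothetical counts for illustration only
sens, spec = sensitivity_specificity(tp=45, fn=5, tn=38, fp=2)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```

Pathologist review, as noted in the text, effectively shifts cases between these cells (e.g., fewer false negatives), which is how it can raise both figures.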
Medical Management The medical treatment of PC in dogs includes the use of non-steroidal anti-inflammatory drugs (NSAIDs) and chemotherapy. The expression of cyclooxygenases (Cox)-1 and -2 has been evaluated in canine PC. Cox-1 was detected in normal and neoplastic prostatic epithelial cells, while Cox-2 was exclusively identified in tumor cells, and both were identified in the majority of tumors evaluated (94% and 88%, respectively). While Cox-1- and -2-positive tumors do not appear to have significantly different clinical courses compared to negative tumors, this does justify the use of NSAIDs in these patients; the clear superiority of one NSAID over another has not been established [9]. Importantly, the anti-tumor effects of Cox inhibitors are likely multifactorial, and they may act on Cox-dependent or -independent pathways [9]. The median survival time (MST) in dogs receiving NSAIDs vs. no treatment was 6.9 mo compared to 0.7 mo, a significant difference [9]. In contrast to NSAIDs, the antitumor effect of chemotherapy in the treatment of PC appears to be generally poor. A retrospective study evaluating mitoxantrone paired with piroxicam in dogs with PC objectively identified no partial or complete responses, although most owners perceived improvement in urination and/or defecation. In this population, MST for all dogs was 155 days [8]. A more recent prospective, open-label, phase III randomized study compared mitoxantrone to carboplatin administered every 3 weeks with concurrent piroxicam in dogs with lower urinary tract tumors, including PC. There was no significant difference between treatment arms, but, similarly, prostatic involvement appeared to negatively impact survival, with a median survival of 109 days compared to urethral, trigonal, or apically located tumors (300, 190, and 645 days, respectively) [19]. The authors suggest that the addition of chemotherapy may prolong survival in dogs based on their results compared to historical published data, despite not having a piroxicam-only treatment arm. A similar suggestion was made following a retrospective evaluation of PC treated with NSAIDs with or without chemotherapy, which found that dogs treated with NSAIDs and chemotherapy had significantly longer MST and time to progression (106 d and 76 d, respectively) compared to NSAIDs alone (51 d and 44 d, respectively) [6]. While this is compelling, there is a lack of prospective randomized controlled trials (RCT) addressing the difference in outcomes between NSAIDs alone and in conjunction with chemotherapy, which softens the recommendation to apply the two concurrently. In patients for which chemotherapy is available and is likely to be well tolerated, concurrent application can be considered. Surgery In people, definitive-intent treatment options for PC include surgery or radiotherapy, with surgery possibly being preferred over radiotherapy for its impact on overall and PC-specific mortality [20]; differences in bowel and genitourinary symptoms may be inconsistent between the two, however [21]. In dogs, prostatectomy has been evaluated. A study of dogs with PC treated with NSAIDs with or without chemotherapy (e.g., toceranib, carboplatin, 5-fluorouracil, and chlorambucil) reported a median survival time of 90 days following diagnosis. This was compared to dogs who underwent surgical treatment (total prostatectomy (TP) or total prostatocystectomy (TPC)), with a median survival time of 337 days; dogs survived significantly longer in the TP group (>500 days) compared to the TPC group (83 d). In that study, most patients (80%) experienced urinary incontinence following surgery [22]. In another report of TP with various reconstructive surgeries described in dogs with PC, MST was 231 days, and permanent incontinence was reported in one-third of dogs [23]. While surgery is a viable option, it remains moderately morbid and reasonably complicated, with a high risk of urinary incontinence. Therefore, the need for alternative loco-regional 
therapies is clear. Radiation Curative-intent external beam radiotherapy or brachytherapy has been performed in men with high cure rates and mixed side effects compared to surgery, with some evidence suggesting improved urinary and sexual outcomes [24]. It has also been used as an adjunct to prostatectomy [24]. Image-guided intensity-modulated radiotherapy (IMIG-RT) has been described as a first-line or salvage procedure, with or without chemotherapy, in dogs with lower urinary tract carcinomas [7,[25][26][27][28]. In studies evaluating the risk of acute radiation effects associated with curative-intent pelvic irradiation, external beam radiation gastrointestinal complications (specifically colitis) were encountered most commonly (38-75%) [26,27]. Studies similarly evaluating late complications in dogs receiving definitive-intent irradiation of pelvic region tumors identified one or more complications in 39-56% of patients, with necrotic drainage/ulceration of the skin and subcutaneous tissues within the radiation field, chronic colitis, strictures, and osteopenia being most commonly reported [26,27]. Interestingly, the perineal location was specifically identified as a riskier location for the development of complications, as was a larger radiation field. Therefore, irradiation in the region of the lower urinary tract is considered lower risk in the scheme of pelvic RT [26]. A retrospective study evaluating the role of radiation therapy with or without concurrent chemotherapy in lower urinary tract carcinomas reported an event-free survival (EFS) of 260 days and an overall survival time (OST) of 510 days. All dogs were retrospectively categorized into three treatment groups: dogs undergoing first-line concurrent chemoradiotherapy (1), dogs receiving first-line chemotherapy >1 mo prior to initiating radiotherapy who did not have evidence of tumor progression (2), and dogs receiving radiotherapy as salvage following locoregional failure (3). Fifty-one dogs with primary genitourinary urothelial carcinoma were included and further categorized into bladder (19), prostate (17), and urethral (4) locations, and eleven were multifocal within the urinary tract. Dogs with prostate involvement were not separately evaluated for factors such as acute or late side effects, although the overall median survival time in dogs with prostate involvement was 341 days, which was significantly worse than for dogs without prostate involvement. Acute radiation effects were predominantly mild but were reported in 65% of treated dogs and included acute colitis most commonly, followed by acute dermatitis and genitourinary effects. Importantly, there was a 31% risk of permanent urinary incontinence, and late effects, including urethral stricture, were documented but uncommon. In all dogs, the median time to local progression was 343 days, and local progression was reported in 59% of dogs. Locoregional failure rates per group were 56%, 50%, and 75% for groups 1, 2, and 3 [25]. In a separate study that retrospectively evaluated the late effects of intensity-modulated image-guided radiotherapy (IMIG-RT) for genitourinary carcinomas in dogs, including PC and UC, late effects were only identified in 19% of dogs and included grade 3 [29] genitourinary and gastrointestinal events. Acute effects occurred in the majority of patients, with gastrointestinal effects (colitis) being most common, followed by integumentary and urinary tract effects. In this study, median EFS and OST were 317 and 654 days, respectively, and the location of the tumor did not appear to significantly affect outcome. Of the owners who completed standardized post-treatment questionnaires, 60% perceived improved quality of life while 30% reported it unchanged [28]. A more recent study solely evaluated definitive-intent intensity-modulated radiation therapy for PC with or without concurrent chemotherapy. The median EFS and OST for all dogs were 220 and 563 days, respectively [7]. Within the treated population, the median time to local progression was 241 d, and 56% of patients had documentation of metastatic disease at a median of 108 days. In this population, the presence of symptoms at the time of diagnosis negatively impacted survival, and EFS was shorter in patients with metastatic disease at diagnosis compared to those without. Patients with the involvement of additional uroepithelial sites beyond the prostate did not have significantly different OST or EFS. Importantly, 60% of patients had grade 1-2 [29] acute toxicity documented, while the estimated rate of late effects at 12 and 18 mo was 8% and 22%, respectively [7]. Metastatic disease was the most common reason for euthanasia, which suggests that aggressive local treatment should be paired with systemic treatment as well. Despite evidence demonstrating prolonged OST and EFS with systemic therapy, the overall prognosis remains guarded, with most animals succumbing to metastatic disease. Interventional Oncology Approaches to Prostate Carcinoma Interventional oncology (IO) is the treatment of cancer using image-guided, minimally invasive techniques. Available IO options include both definitive- and palliative-intent treatments. In veterinary medicine, IO techniques are particularly exciting due to the optimization of quality of life with lowered morbidity. While still emerging in veterinary medicine, prospective and retrospective investigations of outcomes in dogs undergoing these treatments have started to define the role of IO in the treatment of PC. 
Prostate Artery Embolization Prostate artery embolization (PAE) is a minimally invasive technique involving the delivery of embolic material into the arterial blood supply feeding the prostate. The prostate is a bilobed structure with an independent blood supply per lobe. In most dogs, the internal pudendal artery, branching from the internal iliac, gives rise to the main prostatic artery. The prostatic artery also provides a smaller terminal branch, the caudal vesical artery, that courses towards the distal ureter and urethra and provides some supply. Distally, the prostatic artery gives off the small middle rectal artery as well as the three smaller terminal cranial, middle, and caudal prostatic arteries [30]. To adequately embolize the prostate, selection of the left and right prostatic arteries is attempted and, in all published descriptions, femoral or carotid artery access was elected. In people, PAE has been investigated for the amelioration of lower urinary tract signs associated with benign prostatic hyperplasia (BPH) in men due to its minimally invasive nature [31,32]. It was found to be technically safe, with good long-term outcomes for the reduction of adverse symptoms and prostate volume [31,32]. Additionally, there has been some investigation into PAE for prostate bleeding associated with PC or for a tumoricidal effect in patients with localized PC. While technically successful in a majority of cases, there was evidence of incomplete and non-sustainable control of PC [33,34]. The control of bleeding in men with advanced PC appears generally successful [35], although PAE is not considered a standard first-line treatment for PC in men. Prior to translation in people, embolization of canine prostates following the induction of BPH in a research setting was reported. In two early studies, PAE performed with microspheres in research dogs with induced BPH was found to be technically safe and feasible [36,37]. In a similar study evaluating the delivery of polyethylene glycol microspheres sized 400 +/− 75 µm in a spontaneous BPH model in intact beagles, a significant reduction in prostate volume was noted at 2 and 4 weeks post-embolization. A histopathological exam revealed diffuse glandular atrophy and interstitial fibrosis, although partial or complete recanalization of all prostatic arteries was demonstrated 1 mo following initial embolization [38]. Prostate artery embolization with embolic beads in dogs with spontaneous PC has also been evaluated. It was found to be technically successful, and all dogs had a reduction in prostate volume after PAE, with a median decrease in prostate volume of 39.4% as measured on CT 1 mo following treatment. Additionally, the clinical signs of stranguria, tenesmus, and lethargy were significantly less common 30 days after PAE compared to before [14]. Drug-loaded beads (DEB) with docetaxel have been investigated for use in a canine model of spontaneous prostate carcinoma, with evaluation at day 30 and day 60 following embolization [39]. Three of five dogs were unable to reach the study endpoint due to rapid disease progression, although CT demonstrated a decrease in prostate volume in all dogs and no major complications were noted. While this treatment substantially reduced tumor volume, it did not eradicate it, and additional investigations regarding dose and delivery are necessary. It is unknown how effective DEB-PAE or bland PAE is when compared to radiotherapy or surgery; however, it does appear to be substantially less morbid, without the association of significant genitourinary or gastrointestinal side effects. Additionally, it appears to be effective at reducing prostate and tumor size, with varied effects on quality of life and symptoms. While worth consideration, it appears that systemic tumor progression occurs in the face of local tumor control, and further investigation into managing local and distant disease progression continues to be essential. 
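The 39.4% median volume reduction quoted above is a simple pre/post percentage change on CT volumetry. A small helper makes the calculation explicit; the pre- and post-treatment volumes below are hypothetical values chosen only to reproduce the reported median:

```python
def percent_volume_change(pre_cm3, post_cm3):
    """Percent change in prostate volume between pre- and post-treatment CT.

    A negative result indicates shrinkage relative to baseline.
    """
    return 100.0 * (post_cm3 - pre_cm3) / pre_cm3

# Hypothetical CT volumes (cm^3) illustrating the reported median response
change = percent_volume_change(pre_cm3=100.0, post_cm3=60.6)
print(f"volume change: {change:.1f}%")  # a 39.4% decrease
```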
Exciting advances in PAE include the development of radioembolization with 90Y microspheres (TheraSphere; Boston Scientific; Marlborough, MA, USA). The beta-particle emission from radioactive decay results in a more focused distribution of energy delivery into surrounding tissues, with the majority deposited within 5 mm of the emitting particle. This has many potential advantages regarding the avoidance of acute and late-term radiation effects. A study evaluating the feasibility, safety, and absorbed dose distribution of prostate 90Y radioembolization in a canine model of induced BPH was recently completed. Animals were divided into groups based on an escalated dose of the delivered radioembolic, and dogs served as their own controls, as only one prostate lobe underwent treatment. Positron emission tomography/MRI was subsequently performed to evaluate the absorbed dose and volume change. The bladder and rectal wall were exposed to tolerable doses of radiation based on microdosimetry, and a significant volume decrease was noted in all dogs, which correlated positively with the escalated dose. No adverse events were detected in the follow-up period. Additionally, there was no non-target tissue damage when tissues were harvested and evaluated microscopically [40]. While seemingly safe with a limited side effect profile, additional research into radioembolization for the treatment of PC is essential. 
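Because 90Y microspheres act as a permanent implant, the absorbed dose accrues as the activity decays away. Assuming purely physical decay with the standard 90Y half-life of about 64.1 h (a nuclear-data value, not stated in the text), the fraction of the total dose delivered by time t is 1 − exp(−λt), which can be sketched as:

```python
import math

Y90_HALF_LIFE_H = 64.1  # hours; standard nuclear data for yttrium-90
LAMBDA = math.log(2) / Y90_HALF_LIFE_H  # decay constant, per hour

def dose_fraction_delivered(hours):
    """Fraction of the total (infinite-time) absorbed dose delivered by
    `hours` after a permanent 90Y implant, assuming purely physical decay."""
    return 1.0 - math.exp(-LAMBDA * hours)

# Cumulative dose fraction over the first two weeks post-implant
for days in (1, 3, 7, 14):
    print(f"day {days:2d}: {dose_fraction_delivered(24 * days):.1%} of total dose")
```

This illustrates why nearly all of the absorbed dose is delivered within the first two weeks, relevant to the short follow-up windows in which volume change was assessed.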
Currently, there remains limited published experience with PAE in people and dogs, although it remains a compelling treatment option. Unanswered questions include the advantage of chemotherapy-loaded embolics compared to bland or radioembolization, as well as long-term outcomes compared to other treatment modalities (surgery, radiotherapy) in dogs. The pairing of embolization with concurrent therapies such as IA or IV chemotherapy or external beam radiation, and the efficacy of repeat embolization, is unknown. While these questions require additional effort to answer, the clinical role of PAE in dogs with naturally occurring PC is justified and, while technically challenging, appears to be feasible and minimally invasive, with a very low rate of procedure-associated complications. Intra-Arterial Chemotherapy There has been some published experience on the utility of chemotherapy in canine PC. Intra-arterial (IA) chemotherapy is of notable interest for PC due to the increased drug concentration within the tumor following intra-arterial administration, while also sparing systemic exposure and reducing adverse events (Figure 1). There has been increased focus on bladder cancer and intra-arterial chemotherapy. An early investigation in a rabbit model of bladder cancer evaluated outcomes following IA or IV infusion, once a week for three weeks, of carboplatin and pirarubicin. All bladder tumors in the IA group decreased in size or disappeared entirely [41]. In people, there appears to be some evidence that IA infusions of cisplatin may decrease bulky tumors and improve outcomes in bladder carcinoma patients without metastasis [42]. Pairing IA chemotherapy and radiation concurrently, as a primary treatment or in the neoadjuvant setting, has also demonstrated success in improving rates of response while minimizing systemic toxicity for bladder carcinoma [43,44]. Previously, IA chemotherapy (cisplatin) paired with radiation therapy for the treatment of urinary bladder carcinoma in two dogs led to a reduction in tumor size in both and was well tolerated [45]. While encouraging, there is a lack of randomized controlled trials to establish these treatments in canine lower urinary tract carcinomas. In a more recent retrospective study [46], intra-arterial chemotherapy alone was compared to intravenous chemotherapy regarding local short-term effects against spontaneously forming lower urinary tract tumors in dogs. Dogs with prostatic or urothelial carcinomas who received IV or IA carboplatin and an NSAID were included. The ultrasonographic appearance of the tumor prior to and following two doses of IA chemotherapy revealed a significant change in the longest unidimensional measurement, which was not noted in the IV chemotherapy group. While this was performed retrospectively, it is suggestive of some increased benefit to the super-selective delivery of chemotherapy into lower urinary tract carcinomas. An additional prospective study, performed to evaluate serum concentrations of chemotherapeutics, evaluated the IA and IV treatment of lower urinary tract tumors with mitoxantrone, doxorubicin, or carboplatin. The area under the curve (AUC) for serum drug concentration over time was significantly lower after IA mitoxantrone compared to IV, while peak serum concentrations of IA carboplatin were significantly lower compared to IV, with equivocal AUC values. Doxorubicin delivered IA or IV did not demonstrate measurable differences in AUC or peak serum concentrations. While these findings appear mixed, the heterogenous population of treated tumors and patients, as well as various factors impacting tumor uptake of chemotherapy, were not controlled or specifically evaluated for, and additional controlled studies may be helpful [47]. Ultimately, there appears to be some evidence that IA carboplatin has low systemic exposure and may be effective against lower urinary tract tumors, including PC, in dogs. This treatment should be considered a safe and technically feasible treatment 
with some evidence of increased effectiveness against a resilient tumor. Urethral Stenting The local progression of PC can lead to urethral or ureteral obstruction, for which urethral or ureteral stenting is possible. The transurethral placement of permanent urethral stents to treat malignant urethral obstruction in dogs has been described [48]. Balloon-expanded metallic stents or self-expanding metallic stent placement has been described, although self-expanding stents may be preferred and are almost exclusively used at the authors' institutions. The procedure is minimally invasive, and the procedural time tends to be short. In the original report of urethral stents for malignant obstruction, death was not related to urethral obstruction in any dog for which a stent was placed, and nine of twelve dogs were continent or mildly incontinent after stent placement, while the remaining three were severely incontinent (2) or had an atonic bladder (1). Major complications included stent dislodgement in one dog, although none of the dogs had reported tumor in-growth [48]. A second study, retrospectively evaluating a larger population of dogs undergoing palliative urethral stent placement, showed a high rate of technical success, although 26% of all dogs (5/19 females and 6/23 males) were severely incontinent following stent placement [49]. Interestingly, stent length, diameter, and location were not associated with incontinence or stranguria. Ultimately, 95% of dogs were euthanized following stent placement for reasons unrelated to urethral obstruction [49]. Clinically, urethral stents are an excellent option for resolving the life-threatening condition of obstructive neoplasia in the bladder, urethra, and/or prostate. While urinary 
incontinence is a risk of this procedure, it appears to severely affect only a minority of patients. Temporary stents are placed less frequently, typically as a bridging therapy until permanent stents can be placed. These are often rubber or polyurethane and can be temporarily managed by an owner at home [50]. In one study, temporary urethral stents placed for benign or malignant etiologies were successfully placed and well tolerated, but led to urinary incontinence in all dogs in which the stent spanned from the bladder to the urethral orifice, and were associated with complications such as bacteriuria and stent migration [51]. While reasonable as a temporary solution, these are generally not considered adequate as a long-term solution. Ureteral Stenting In people, ureteral stent placement to relieve ureteral obstruction has been utilized for the palliation of urologic malignant disease. Stents are typically preferred over nephrostomy tubes for improved tolerability, although polymeric and metallic stents have had mixed success in maintaining patency following placement [52]. While various devices are available, metallic stents generally resist external compressive forces better than polymeric stents. Patency at the time of placement appears to be very high [52][53][54], and overall stent failure over the 12 months following placement remains tolerable [52,53]. Compared to metallic stents, polymer stents may have comparable patency at 6 months but significantly diminished patency and associated quality of life at 12 months [55]. In contrast to metallic stents, which may be allowed to indwell for longer, polymer stents may be exchanged as frequently as every 3 months, which can be perceived negatively by a patient [55,56]. 
In dogs with ureteral stents placed for benign or malignant ureteral obstruction, stent exchange is often not financially possible or planned for. Additionally, the long-term outcome in dogs following stent placement for malignant obstruction is often poor; therefore, prolonged patency may be less important. In dogs, percutaneous antegrade ureteral stent placement is advocated for the relief of malignant ureteral obstruction [50]. Due to the decreased visibility of or access to the ureterovesicular junctions, these are most often placed percutaneously and antegrade (Figure 2), which is technically challenging and requires substantial experience with ultrasound-guided and fluoroscopic-guided procedures. Reported complications of this procedure include the inability to successfully place stents percutaneously, requiring conversion, the migration of a stent, the disruption of the upper urinary tract, or tumor seeding at skin puncture sites [57,58]. While comparably little is known about how the type of stent device may affect immediate and long-term patency in dogs, and metallic ureteral stents have not been reported in veterinary patients, the long-term indwelling time of metallic stents may align well with the inability to perform stent exchange in veterinary medicine and could be considered as a future point of investigation in dogs. 
Concluding Remarks At this time, there remains a lack of robust trials comparing treatment modalities and long-term outcomes in dogs diagnosed with prostate carcinoma. As such, the role of interventional oncology in the treatment of prostate cancer is still being investigated. Importantly, the practitioner must consider each patient independently while weighing disease burden, stage, and external patient and owner factors. IO techniques remain minimally morbid, with percutaneous access (PAE, intra-arterial chemotherapy, ureteral stenting) or natural orifice access (urethral stenting, ureteral stenting), which is hugely advantageous. While long-term data for dogs undergoing prostate artery embolization or intra-arterial chemotherapy are limited, locoregional 
effectiveness based on tumor volume reduction was documented for both [14,46], which is encouraging. IA chemotherapy may demonstrate improved effectiveness in bulky disease compared to IV administration, although the long-term control of distant disease progression is unknown and there may be a role for both in disease management [19,46]. Additionally, while no data exist comparing embolization to radiotherapy or surgery, the less invasive nature and high tolerability make embolization an exciting treatment option for local prostatic disease. Urethral and ureteral stenting can be used in the treatment of progressive obstructive disease within the lower urinary tract, which may mitigate life-limiting complications that are frequent with locoregional progression of prostate carcinoma [48][49][50][51][58]. Lower urinary tract stenting appears to be well tolerated and is considered a good option in the palliation of this disease. When considering these treatments, owner finances as well as available resources (e.g., fluoroscopy) and procedural experience will play an important role in what can be offered. Additionally, thorough evaluation of disease burden to identify the appropriateness of treatments is essential and may include ultrasound, computed tomography angiography, cystoscopy, and/or cystourethrography. 
In conclusion, canine PC remains a diagnostic and therapeutic challenge with a guarded long-term prognosis. Despite this, the translation of therapies such as stenting for malignant obstruction and prostate artery embolization to clinical canine PC is encouraging. While surgery and radiotherapy remain valid as locoregional therapies, the inability to obtain consistently good long-term outcomes without a moderate risk of side effects may limit the tolerance of the associated morbidity of some of these treatments. The role of interventional oncology in the palliative setting (percutaneous or transurethral stenting procedures) or in the direct treatment of PC in dogs (embolization, intra-arterial chemotherapy) is an exciting field that deserves ongoing focus in the future. While ongoing research is needed, there are encouraging therapeutic developments that may allow the optimization of the treatment of PC in dogs and people.

Figure 1. (A). Lateral digital subtraction angiogram at the level of the prostatic artery demonstrating extensive neovascularization of the prostate, urethra, and trigone (carat) in a dog with progressive prostatic carcinoma. (B). Lateral fluoroscopic view of the same dog during administration of intra-arterial chemotherapy, admixed with contrast (carat), into the prostatic artery via microcatheter (asterisk).

Figure 2. (A). Fluoroscopic image of percutaneous renal pelvic access (obliqued, head left) with a needle (asterisk) for diagnostic pyelogram and for antegrade placement of a wire spanning into the bladder and exiting the urethral orifice for fluoroscopically guided percutaneous ureteral stent placement for obstructive urothelial carcinoma. (B). Fluoroscopic image of through-and-through wire (carat), long access sheath (asterisk), and second wire (plus) coiled within the renal pelvis that spanned the site of ureterovesicular junction obstruction. (C). Fluoroscopic image of ureteral stent (asterisk) in place alongside through-and-through wire (carat) spanning the urinary tract.
Sequence, Secondary Structure, and Phylogenetic Conservation of MicroRNAs in Arabidopsis thaliana
MicroRNAs are small non-coding RNA molecules that are produced endogenously in a cell. They are 18 to 26 nucleotides in length. Due to their evolutionarily conserved nature, most miRNAs provide a logical basis for the prediction of novel miRNAs and their clusters in plants such as sunflower, a member of the Asteraceae family. In addition, they participate in different biological processes of plants, including cell signaling and metabolism, development, growth, and tolerance to (biotic and abiotic) stresses. In this study, profiling, conservation analysis, and characterization of novel miRNAs with a conserved nature in various plants, along with annotation of their targets in sunflower (Asteraceae), were carried out using various computational tools and software. We examined 152 microRNAs in Arabidopsis thaliana that had already been predicted; these 152 non-coding RNAs mediate drought-tolerance stress responses. Following that, we used local alignment to predict novel microRNAs specific to Helianthus annuus. We performed the local alignment with BLAST and selected sequences with an identity of 80% to 100%. MIR156a, MIR164a, MIR165a, MIR170, MIR172a, MIR172b, MIR319a, MIR393a, MIR394a, MIR399a, MIR156h, and MIR414 are the newly predicted miRNAs. We used MFold to predict the secondary structure of the new microRNAs. We carried out conservation and phylogenetic analysis against a variety of organisms, including Gossypium hirsutum, H. annuus, A. thaliana, Triticum aestivum, Saccharum officinarum, Zea mays, Brassica napus, Solanum tuberosum, Solanum lycopersicum, and Oryza sativa, to determine the evolutionary history of these novel non-coding RNAs. Clustal W was used to analyze the evolutionary history of the discovered miRNAs.

Introduction
Sunflower (Helianthus annuus) belongs to the Asteraceae family.
By the cloning method, 700 types of miRNA were identified in plants; in 2012, miRNAs were identified in Arabidopsis thaliana. All processes of miRNA targeting are based on coding and non-coding sequences. 1 Previous studies show that RNA polymerase plays an important role in the transcription of miRNA genes. 2 In recent years, scientists have used high-throughput sequencing and computational analysis techniques for the identification of miRNAs. 3 Almost all scientists have concluded that the microRNAs involved in the regulatory functions of flowering and non-flowering plants are conserved. 4 Plants are damaged by two types of environmental stresses, categorized as biotic and abiotic stress. Damage to living organisms caused by other living organisms such as parasites, bacteria, viruses, and fungi is known as biotic stress, whereas damage to a living organism caused by nonliving factors is called abiotic stress. 5 To describe abiotic stress, we should study the function of different organisms that survive in different environments. Stress always affects the plant's tissues. Plants need enough water for sufficient growth. The movement of water expands plant cells, which drives plant growth. A sufficient amount of water expands the plant's cells and transfers minerals from the soil to the tips of the leaves, but stress causes an imbalance in the plant's routine processes. According to research, every year we lose 50% of our food production due to abiotic stress. Abiotic stress affects plants' fruits, crops, metabolism, respiration processes, and, in the end, the plant's seeds. Seeds are used for the next generation; hence, unhealthy seeds affect further production. 6 H. annuus typically refers to annual species that tend to spread rapidly and can become aggressive. 7 Improvement of the plant family depends on genetically resistant varieties, seed productivity, modern cultivation, and biotic-abiotic stress tolerance.
Plant improvement can be assessed by studying its genetic makeup and by sowing in different locations. 8 For humans, it acts as an important source of nutrients. Nutritionally, it is a main source of vital nutrients including carbohydrates, proteins, and dietary fibers, and provides almost 20% of the dietary energy supply. According to the miRNA database, H. annuus contains a total of 6 precursor and 7 mature microRNAs, which were compared with A. thaliana for the identification of novel miRNAs. 9 A. thaliana contains 205 precursor and 384 mature non-coding RNAs. All data regarding miRNAs are present in miRBase.

Methodology
Many tools are used as comparative genomics approaches to obtain novel and interesting information about miRNAs in plants and animals. In the initial step, query and reference sequences were identified and downloaded from the microRNA Registry database, whose pre-miRNA database (miRBase) is freely available at https://www.mirbase.org/. Potential pre-miRNA candidates were predicted by subjecting the downloaded mature and precursor miRNA sequences to the Basic Local Alignment Search Tool. For this purpose, nucleotide BLAST, freely available at GenBank of the National Center for Biotechnology Information (https://blast.ncbi.nlm.nih.gov/Blast.cgi?PAGE_TYPE=BlastSearch), was used. The miRNA* sequences, both mature and precursor, were subjected to BLAST against H. annuus expressed sequence tags (ESTs) sequentially using the BLASTn program, allowing a maximum of up to 4 mismatches with the miRNAs*. For EST singleton selection, the BLASTn program was used and the parameters were set as follows: expect value, 1000; low complexity, sequence filter; database, others; organism, H. annuus; program selection, somewhat similar sequences; all other parameters, default. To identify the coding part of the miRNAs, we used BLASTx, which highlights coding regions.
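The candidate-filtering criteria described above (80-100% identity, at most 4 mismatches against the ESTs) can be sketched as a simple post-processing step over parsed BLASTn hits. This is not the authors' code; the hit records below are hypothetical dictionaries standing in for parsed BLAST output.

```python
# Minimal sketch of the BLASTn candidate-filtering step: keep hits with
# 80-100% identity and at most 4 mismatches. Hit records are hypothetical
# stand-ins for parsed BLASTn tabular output.

def filter_candidates(hits, min_identity=80.0, max_mismatches=4):
    """Return hits satisfying both the identity and mismatch thresholds."""
    return [
        h for h in hits
        if h["pct_identity"] >= min_identity and h["mismatches"] <= max_mismatches
    ]

hits = [
    {"query": "ath-MIR156a", "subject": "Ha_EST_001", "pct_identity": 95.2, "mismatches": 1},
    {"query": "ath-MIR164a", "subject": "Ha_EST_002", "pct_identity": 72.0, "mismatches": 6},
    {"query": "ath-MIR172b", "subject": "Ha_EST_003", "pct_identity": 83.3, "mismatches": 4},
]

kept = filter_candidates(hits)
print([h["query"] for h in kept])  # ['ath-MIR156a', 'ath-MIR172b']
```

In practice, such a filter would be applied to the tabular output of a BLASTn run rather than to hand-written records.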
10

Prediction of miRNA secondary structure
MFOLD, a secondary-structure prediction tool, was used to produce stem-loop structures for the initially identified potential H. annuus candidates. All initial candidate sequences that failed to form stable secondary structures were discarded. The MFOLD software has been updated as UNAFold (http://www.unafold.org/); from there, MFOLD and then the application (RNA fold form, version 2.3) were selected. 11 UNAfold → MFOLD → Application → RNA fold form version. The MFOLD parameters were set as follows: RNA sequence, linear; folding temperature, 37°C; ionic concentration, 1 mol/L NaCl with no divalent ions; percent suboptimality number, 5; maximum interior loop size, 30.

Conservation and phylogenetic analysis
Clustal W was selected for phylogenetic analysis. Clustal W is used for multiple sequence alignment, i.e., the alignment of more than one sequence, and is available at https://www.genome.jp/toolsbin/clustalw.

New potential miRNAs in sunflower
In this research, we predicted 152 miRNAs that perform regulatory processes against drought stress in A. thaliana.

Phylogenetic analysis of non-coding microRNAs
Phylogenetics is the study of the evolutionary history of a species, a group of organisms, or a particular characteristic of an organism. Here we performed phylogenetic analysis with Clustal W.

Phylogenetic analysis of MIR156a
The phylogenetic analysis of mir156a is described in Figures 1 and 2.

Phylogenetic analysis of MIR164a
The phylogenetic analysis of mir164a is described in Figure 3.

Phylogenetic analysis of MIR172b
The phylogenetic analysis of mir172b is described in Figure 4. According to the phylogenetic analysis of MIR172b, H. annuus with the accession number "XR_004862999" shows a close relation with A. thaliana, Z. mays, S. lycopersicum, and S. tuberosum, and shows a distant relationship with G. hirsutum.

Phylogenetic analysis of MIR172a
The phylogenetic analysis of mir172a is described in Figures 5 to 7.
According to the phylogenetic analysis of MIR172a, H. annuus with the accession numbers "XR_002592285" and "XR_002556375" shows a close relation with A. thaliana, Z. mays, S. lycopersicum, O. sativa, and B. napus, and with "XR_004865012" shows a close relationship with G. hirsutum.

Phylogenetic analysis of MIR319
The phylogenetic analysis of mir319 is described in Figure 8. According to the phylogenetic analysis of MIR319, H. annuus with the accession number "XR_004863001" shows a close relation with A. thaliana, S. lycopersicum, and T. aestivum.

Phylogenetic analysis of MIR393a
The phylogenetic analysis of mir393a is described in Figure 9. According to the phylogenetic analysis of MIR393a, H. annuus with the accession number "XR_002552875" shows a close relation with A. thaliana, G. hirsutum, O. sativa, S. tuberosum, and Z. mays.

Phylogenetic analysis of MIR394
The phylogenetic analysis of mir394 is described in Figure 10.

Phylogenetic analysis of MIR399
The phylogenetic analysis of mir399 is described in Figure 11. According to the phylogenetic analysis of MIR399, H. annuus with the accession number "XR_002574508" shows a distant relationship.

Discussion
Sunflower is the fourth biggest oil-seed crop in the world. The seeds of sunflower are used in food, and its dried stalks are used as fuel. It has previously been used as an ornamental plant and was also used in ancient ceremonies. 12 Moreover, different parts of the sunflower are used in body painting, decorations, and making dyes for the textile industry. Its oil is used in the manufacturing of margarine and salad dressings, and in cooking. A coffee-like drink can be made from the roasted seeds. In industry, it is used in cosmetics and paints. Due to its lack of anti-nutritional factors and high nutritional value, it is a potential source of protein for human consumption.
Due to its metabolic, physiological, and morphological adaptation strategies, the sunflower is one of the most important oil-seed crops and is resistant to various abiotic stresses. This crop is of special interest for its adaptation to limited water availability, high temperatures, high salinity, and heavy-metal concentrations in soil. The dried stems, which are used for fuel, contain potassium and phosphorus, which can be composted and returned to the soil as fertilizer. 13 MiRNAs arise from primary longer RNA transcripts that include a self-complementary fold-back, from which the mature miRNAs are excised. They are short RNA molecules of 19-24 nucleotides in size. 14,15 They are known as regulators of gene expression, binding to open reading frames (ORFs) or untranslated regions (UTRs) of specific mRNAs and targeting them for cleavage or directing translation inhibition at the mRNA level. It has been demonstrated that around 60% of protein-coding genes are targets of miRNAs and are modulated by these small RNAs. miRNAs are derived from hairpin pre-miRNAs, from which both the miRNA and the imperfectly complementary miRNA* strands are released. Their sequences are not conserved between plants and animals, and have not even been seen in fungi. Many miRNAs within the plant kingdom have an ancient origin, some being completely conserved among sunflower, Arabidopsis, rice, and even liverworts, mosses, and hornworts. 16 By regulating gene expression, miRNAs play a vital role in regulating the developmental processes of organisms. 17 The negative regulation of gene expression by miRNAs in both plants and animals has been demonstrated. 18 MicroRNAs have been revealed to modulate diverse developmental processes, including polarity, identity, and organ separation, and to regulate their function and biogenesis. 18 In our study, we used the miRBase database to find the 152 miRNAs from A. thaliana associated with drought-stress tolerance, and then performed local alignment of these miRNAs against sunflower and found 12 novel miRNAs (MIR156a, MIR164a, mir165a, mir170, mir172a, mir172b, mir319a, mir393a, mir394a, mir399c, mir156h, and mir414). The secondary structures of these 12 novel miRNAs, including forward and reverse strands, were predicted with MFOLD software using default parameters. Later, conservation and phylogenetic analyses were done by selecting 10 different organisms: G. hirsutum, H. annuus, A. thaliana, T. aestivum, S. officinarum, Z. mays, B. napus, S. tuberosum, S. lycopersicum, and O. sativa. In our study, all novel miRNAs are present only in A. thaliana. In our study, mir-156a is present in all species; mir-164 is present in all except H. annuus and S. officinarum; 19 mir-165a and

Conclusions
Twelve novel miRNAs (MIR156a, MIR164a, mir165a, mir170, mir172a, mir172b, mir319a, mir393a, mir394a, mir399c, mir156h, and mir414) were identified against drought stress in A. thaliana. We targeted these miRNAs to cope with drought tolerance in sunflowers. In our study, all novel miRNAs are present only in A. thaliana. Moreover, different parts of sunflowers are used in body painting, decorations, and making dyes for the textile industry. Its oil is used in the manufacturing of margarine and salad dressings, and in cooking. A coffee-like drink can be made from the roasted seeds. In industry, it is used in cosmetics and paints. The improvement methods also increase the production of sunflowers and benefit them economically.
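The secondary-structure screening step above — discarding candidates that cannot form a stable stem-loop — can be illustrated with a deliberately simplified model. MFold/UNAFold computes thermodynamic minimum-free-energy structures; the sketch below instead uses a Nussinov-style dynamic program that merely maximizes Watson-Crick/wobble base pairs, which is enough to show why a hairpin-capable sequence folds back on itself. This is an illustration, not the tool the authors used.

```python
# Simplified illustration of RNA secondary-structure evaluation: a Nussinov
# dynamic program that maximizes the number of nested base pairs, with a
# minimum hairpin-loop size. (Real tools such as MFold minimize free energy.)

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def max_base_pairs(seq, min_loop=3):
    """Return the maximum number of nested base pairs in `seq`."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):          # span = j - i
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                  # case: i is unpaired
            for k in range(i + min_loop + 1, j + 1):
                if (seq[i], seq[k]) in PAIRS:    # case: i pairs with k
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + dp[i + 1][k - 1] + right)
            dp[i][j] = best
    return dp[0][n - 1]

# A toy hairpin: a 4-pair G-C stem around a 4-nt A loop.
print(max_base_pairs("GGGGAAAACCCC"))  # 4
```

A sequence with many pairable positions (a high score relative to its length) is hairpin-capable; candidates scoring near zero could not form a stem-loop and would be discarded, mirroring the filtering logic described above.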
Improving the Clinical Application of Natural Killer Cells by Modulating Signals from Target Cells
Relapsed acute myeloid leukemia (AML) is a significant post-transplant complication lacking standard treatment and associated with a poor prognosis. Cellular therapy, which is already widely used as a treatment for several hematological malignancies, could be a potential treatment alternative. Natural killer (NK) cells play an important role in relapse control but can be inhibited by leukemia cells highly positive for HLA class I. In order to restore NK cell activity after their ex vivo activation, NK cells can be combined with conditioning of target cells. In this study, we tested NK cell activity against KG1a (an AML cell line) with and without two types of pretreatment: Ara-C treatment that induced NKG2D ligands (increased activating signal) and/or blocking of the HLA-KIR (killer-cell immunoglobulin-like receptor) interaction (decreased inhibitory signal). Both treatments improved NK cell killing activity. Compared with target cell killing by NK cells alone (38%), co-culture with Ara-C-treated KG1a target cells increased the killing to 80%. Anti-HLA blocking antibody treatment increased the proportion of dead KG1a cells to 53%. Interestingly, the combination treatment improved the killing potential further, leading to the death of 85% of KG1a cells. The combination of Ara-C and ex vivo activation of NK cells has the potential to be a feasible approach to treat relapsed AML after hematopoietic stem cell transplantation.

Introduction
Acute myeloid leukemia (AML) is a severe hematological disorder that is mostly prevalent in the adult population. AML is characterized by expansion of cells arrested at different stages of differentiation of the myeloid and/or monocyte lineage. The current standard treatment is chemotherapy, followed by allogeneic hematopoietic cell transplantation (allo-HSCT).
AML relapse after allo-HSCT is still one of the most serious post-transplant complications, with a very poor prognosis and without a clear treatment strategy. High-dose Ara-C (HiDAC) is mostly used for relapse treatment, either alone or in combination with agents such as mitoxantrone, cladribine, and fludarabine [1]. This re-induction decreases the tumor burden but fails to maintain the response long-term. Donor lymphocyte infusion or a second round of HSCT is a standard immunotherapy after re-induction to improve survival after HSCT; however, this has been without considerable success [2]. Modern cellular therapy has become a widely used treatment for many hematologic malignancies, especially in relapsing or refractory diseases. CD19-specific CAR T-cells have the potential to be a new standard treatment for lymphoid neoplasia, while CAR therapy in myeloid malignancies has the major limitation of the absence of specific targetable cell surface markers [3]. Infusions of ex vivo activated NK cells are a promising treatment for myeloid malignancies.

33-73%) and of NKp44 was 36% (range 6-44%). Mutually exclusive expression of CD25 and NKp44 was observed (p ≤ 0.05). The expression of NKG2D ranged from low to high intensity, with a median mean fluorescence intensity (MFI) of 2498 (range 947-5168, Figure 1B). The presence of inhibitory KIRs differed between donors. Only two of eight donors expressed inhibitory KIR2DL5, with no correlation with the NK cells' cytotoxic functions. All donors expressed KIR2DL1, KIR2DL2, KIR2DL3, KIR3DL1, and KIR3DL2 with high variability (2-40% of positive NK cells, Figure 2A,B). The correlation (Pearson's R) between KIR levels and the number of dead KG1a cells did not show a significant association between KIR expression and killing ability.

Change in the Expression of NKG2D Ligands after Ara-C Application
The expression of NKG2D ligands (ULBP1/2) in KG1a cells was measured by qPCR and normalized to untreated cells and the B2M gene (∆∆CT). A time course experiment showed that after 24 h of Ara-C treatment (0.5 µM), only ULBP2 increased expression (∆∆CT = 1.14). At later time points (48 and 72 h), all other tested genes (ULBP1-3, MICA/B) showed increasing and maintained patterns of expression. At 48 h, both ULBP1 and 2 increased their relative expression almost 2-fold (2.2 and 1.93, respectively). At this time point, MICA/B expression was also elevated (∆∆CT was 1.5 for MICA and 1.23 for MICB) after 0.5 µM Ara-C application. For all 5 genes tested, the highest induction was achieved at 72 h, where ∆∆CT was 3.2 for both ULBP1/2, and ULBP3 also reached higher levels (∆∆CT = 1.6). The kinetics of MICA and MICB induction was similar and reached ∆∆CT = 1.6 or 1.7 (all results are summarized in Figure 3).

Target Cell Preparation and Cytotoxic Potential of NK Cells
NK cell killing activity was estimated against treated and untreated KG1a cells. KG1a cells were pretreated with 0.5 µM Ara-C for 24 or 48 h, respectively, and then co-cultured with NK cells for 8 or 24 h. HLA-ABC blocking antibody was always added 24 h before co-culture. The effect of the blocking antibody was measured using flow cytometry and detection antibodies, which were prevented from binding to cells incubated for 24 h with the blocking antibody (Figure 5). The expression of HLA-ABC was evaluated during the co-culture experiment, where the epitope remained blocked during the entire experiment. KG1a cells were detected according to CD34 expression, and the number of dead cells was estimated as 7AAD-positive cells within the entire CD34-positive population (Figure 6).

The viability of untreated KG1a cells was always higher than 98%. The presence of the anti-HLA class I antibody did not affect the viability of KG1a cells in culture without NK cells. Ara-C increased the number of dead KG1a cells in a time-dependent manner; the values at the co-culture times were: T1 = 9.3%, T2 = 12.8%, T3 = 15.1%, T4 = 38% (Figure 7). The median killing ability of NK cells against untreated cells was: T1 = 8.4%, T2 = 15.1%, T3 = 14.3%, T4 = 21.4%. The combination of chemotherapy and NK cells produced high numbers of dead KG1a cells, with the maximum reached at the last time point (80%). At the previous time points, the numbers of dead cells were 65% for T3, 48% for T2, and 29% for T1. We did not find any correlation between the killing ability and the levels of NK cell activating receptors.

The addition of a blocking antibody positively affected NK cell killing activity and further slightly improved the killing potential when combined with Ara-C (Figure 7). At the first time point, the percentage of dead cells after antibody treatment only was the same as after Ara-C (28.9%). Subsequent time points showed a lower potential of HLA blocking compared to Ara-C. The percentage of dead KG1a cells was 27.3% for T2, 47% for T3, and 53.4% for T4. The combination of both treatments was the most efficient at all time points. Almost all the cells were killed at the last time point, where the percentage of dead KG1a cells was 85%. At the previous time points, the proportions of dead cells were as follows: T1 = 45.6%, T2 = 69.3%, T3 = 75.7%. All results are summarized in supplementary Figure S4 and Figure 7A-D. We did not observe any correlation between inhibitory KIR expression and killing ability. The expression of CD16 did not influence the percentage of dead cells either (data not shown).

Discussion
NK cells are a crucial part of the anti-leukemia immune response after hematopoietic stem cell transplantation. NK cell activity correlates with relapse-free survival in AML patients [20]. These data suggest that NK cells may play a crucial role in the control of leukemia development and relapse [21]; therefore, donor NK cell infusion following HSCT might improve the outcome of patients. The ability of NK cells to kill residual or relapsed leukemia cells depends on the strength of activating and inhibitory signals. Ex vivo activation can induce expression of activating receptors, causing the activating signal to exceed the signal from inhibitory receptors and the full activation of their cytotoxic activity/potential [20]. Many protocols have been developed for preparing NK cell-based medical products. However, optimal product characterization has not been defined yet. The key factors involved in NK cell therapy success are cell dosage and activation status [22]. We developed an ex vivo expansion protocol for preparing NK cells, which was able to provide us with a sufficient number of NK cells with a high activation status. Using cryopreserved mononuclear cells as input material allowed us more flexible timing of NK cell application and treatment with multiple doses of fresh cells. NK cells are very sensitive to cryopreservation and could lose their recovery potential and activating state. Therefore, they still need IL-2 re-activation [23].
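The qPCR normalization reported in the Results (target genes normalized to untreated cells and the B2M reference gene) can be sketched with the standard 2^-∆∆Ct method. The Ct numbers below are hypothetical, chosen only to show the arithmetic; they are not the study's measurements.

```python
# Hedged sketch of qPCR relative quantification (2^-ddCt method): the target
# gene is normalized first to the reference gene (B2M) and then to the
# untreated sample. All Ct values here are hypothetical.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_untreated, ct_ref_untreated):
    d_ct_treated = ct_target_treated - ct_ref_treated      # dCt, treated
    d_ct_untreated = ct_target_untreated - ct_ref_untreated  # dCt, untreated
    dd_ct = d_ct_treated - d_ct_untreated                  # ddCt
    return 2 ** (-dd_ct)                                   # relative expression

# Hypothetical example: an NKG2D-ligand gene after Ara-C vs. untreated KG1a.
fc = fold_change(ct_target_treated=24.0, ct_ref_treated=18.0,
                 ct_target_untreated=25.0, ct_ref_untreated=18.0)
print(round(fc, 2))  # 2.0 -> roughly a two-fold induction
```

A value above 1 indicates induction relative to the untreated control, which is how the ∆∆CT figures above should be read.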
Our in vitro activated NK cells, isolated from cryopreserved mononuclear cells (MNCs), induced key activating receptors such as CD25, NKp44, and NKG2D. CD25 is mainly required for cell proliferation [24]. Our previous findings showed an inverse correlation between CD25 and NKp44 expression: cells with high CD25 expression had low expression of NKp44 and vice versa [23]. CD25 is expressed mainly during the first days after activation and is lost after 2 weeks of culture, whereas NKp44 is stably expressed on the surface of over 50% of cells (data not shown). No correlation between cytotoxicity and the expression of these markers was found. The last-mentioned activating receptor, NKG2D, seems to be the most critical for the NK cell response even when HLA class I molecules are present. AML is a heterogeneous disease with highly variable expression of NKG2D ligands on the cell surface [25]. The level of expression directly influences the clinical outcome of patients through the ability of NK cells to control relapse [26]. These ligands can be easily induced pharmacologically, leading to better susceptibility of tumor cells to NK cells (examples of such treatment include HDAC inhibitors or Ara-C) [19,27]. Ara-C is a cytostatic drug standardly used for the treatment of AML, but some AML cells can be resistant to this chemotherapy. This resistance is usually correlated with cellular stress responses that cause higher expression of DAMPs, including NKG2D ligands [28]. We tested the activity of NK cells from healthy donors against the NK-resistant cell line KG1a, untreated and treated with Ara-C (for induction of NKG2D ligands). Ara-C improved the killing ability at all tested time points, from about 17% to 58%, when co-culture samples with treated and untreated target cells were compared. The strategy of using chemotherapy as an inducer of DAMPs has been applied in other cancers, such as myeloma (low-dose bortezomib [29]), prostate cancer (valproate [18]), and adenocarcinoma (gefitinib [30]).
Chemotherapy pretreatment is feasible within clinical protocols. In the case of AML, Ara-C chemotherapy used as standard (re)induction treatment could be followed by the application of ex vivo activated NK cells. HLA class I-positive cells, such as AML cells, are resistant to NK cell killing because of a strong inhibitory signal and can escape NK cell responses [31]. HLA class I molecules interact with inhibitory KIRs [32]. We tested several donors with differing expression of inhibitory KIR receptors for their cytotoxic potential against the NK-resistant leukemia cell line KG1a, which is highly positive for HLA class I (the ligands of KIR receptors). We did not find any correlation between the level of inhibitory KIRs and the cytotoxic potential of NK cells. However, inhibition of the HLA-KIR interaction increased the NK cell killing potential by about 18% (time point 1) to 31% (time point 4). Improved NK cell killing activity was observed in a previous study, where pan-HLA blocking increased the proportion of dead cells by about 20% [33]. Another strategy to activate the killing potential of NK cells is to block inhibitory KIRs. The first clinical trials blocking KIR-HLA interactions using lirilumab have already been reported [34], with promising results in preclinical testing [8]. However, this antibody binds only to KIR2DL1-3, and the KIR3DL1-3 inhibitory signal is still not blocked, so a high inhibitory signal for NK cells remains possible [35]. The expression of inhibitory KIRs is a dynamic process after hematopoietic stem cell transplantation and seems to be influenced by the presence of HLA ligands [36]. Therefore, the application of lirilumab can be limited by the presence and expression level of inhibitory KIRs. There is no clinical-grade pan-HLA class I blocking antibody. Moreover, HLA class I molecules are expressed on all nucleated cells, and their blocking could cause several side effects.
In clinical practice, haploidentical NK cells with a potential KIR-HLA mismatch are used to reduce KIR-HLA interactions [37,38]. Our study showed that blocking the inhibitory signal is not as efficient as increasing the activating signal. The maximum proportion of dead cells in the blocking experiments was 53%, compared with 80% in the experiments using chemotherapy. The combination of both treatments reached 85%, which represents only a minor further improvement of killing ability. Chemotherapy pretreatment is not dependent on inhibitory KIR expression (either its level or type), which makes it a donor-independent treatment. Our findings proved that NK cells can kill leukemia cells with high expression of HLA class I molecules; however, this has to be combined with sensitization of the target cells. Pretreatment with cytostatic drugs can be crucial for immunotherapeutic protocols in which ex vivo activated NK cells are used. Ara-C, standardly used for AML treatment, could improve NK cells' killing ability and the clinical outcome of patients.

NK Cell Preparation

Preparation of NK cells is based on the approved protocol for a clinical trial with EudraCT number 2018-001562-42. Peripheral blood mononuclear cells (PBMNCs) from 8 healthy donors were isolated by gradient centrifugation with Ficoll-Paque solution (GE Healthcare, UK) and cryopreserved at a concentration of 15 × 10^6/mL in PBS with 10% albumin (Albunorm, Octapharma, Manchester, UK) and 10% dimethyl sulfoxide (DMSO; Cryosure, Wak-Chemie, Steinbach, Germany). NK cells were isolated immediately after thawing of the PBMNCs with an NK cell isolation kit (Miltenyi Biotech, Teterow, Germany), following the manufacturer's instructions. Purified cells were seeded at a concentration of 1 × 10^6 in SCGM medium containing 10% FBS (Gibco, Paisley, Scotland) and IL-2 (1000 IU/mL; Proleukin, Nuremberg, Germany), with irradiated (25 Gy) PBMNCs from healthy donors (10 × 10^6 pooled MNCs from 5 donors) as a feeder.
Cells were cultured for 10 days, and fresh IL-2 was added every 2-3 days. After this period, NK cells were split into 96-well plates at a concentration of 350 × 10^5 in 300 µL of medium containing IL-2. Purity and the expression of activating and inhibitory receptors were evaluated (see the flow cytometry section). The study was approved by the ethics committee (joint committee of the Faculty of Medicine in Pilsen and Faculty Hospital Pilsen) on 4 September 2014. Signed informed consent was obtained from all individual participants included in the study.

Target Cell Preparation

The NK-resistant cell line KG1a (Sigma Aldrich, Germany) was used as the target. This cell line expresses a high level of HLA class I molecules, which bind to inhibitory KIRs (for HLA typing, see the cell line datasheet). Cells were cultured for 7 days in Iscove's Modified Dulbecco's Medium (IMDM; Gibco) with 10% FBS and antibiotics (100 units/mL penicillin, 100 µg/mL streptomycin, and 0.25 µg/mL amphotericin B; Gibco). The cells were then seeded at a concentration of 1 × 10^6/mL. Ara-C (0.5 µM; Cytosar, Pfizer, USA) was added for 24, 48, and 72 h. Anti-HLA class I antibody (clone W6/32, Santa Cruz Biotechnology, Inc., Dallas, USA) was added at a concentration of 20 µg/mL 24 h before addition to the NK cell culture to block KIR-HLA interactions. Blocking efficiency was tested by flow cytometry. The effect of Ara-C on NKG2D ligands was evaluated by qRT-PCR and, in part, also by flow cytometry (for details, see Sections 4.3 and 4.4).

Flow Cytometry Detection of Surface Markers and Cytotoxicity

The purity of NK cells was determined immediately after isolation with a combination of antibodies: CD45-BV510 (BD Bioscience, San Diego, CA, USA) and CD3-FITC-CD16/56-PE (Exbio, Prague, Czech Republic). The expression of surface activating as well as inhibitory receptors was measured on the day of cell splitting.
For activating receptors, cells were stained with anti-CD45-BV510, CD3-Pacific Blue (Beckman Coulter, Brea, CA, USA), CD56-APCCy7, CD25-PECy7, NKp44-APC, NKp46-PerCPCy5.5 (all BioLegend UK Ltd., London, UK), CD16-FITC (Exbio), and NKG2D-PE (eBioscience, San Diego, CA, USA). All the above-mentioned cell suspensions were incubated with the antibodies for 15 min and then washed with PBS (300 g, 5 min). The cell pellet was resuspended in 300 µL PBS and immediately measured on a FACSCanto II flow cytometer (Becton Dickinson, Belgium). KG1a cells were evaluated for the expression of HLA class I and selected NKG2D ligands. HLA class I expression before and after blocking was estimated using an anti-HLA-ABC-PE antibody (Biolegend). MICA/B expression on the surface of KG1a cells before and after Ara-C treatment was evaluated using anti-MICA/B-BV711 (BD Bioscience, USA). In the co-culture experiments, a mix of 7-aminoactinomycin D (7AAD; Exbio) and CD34-PECy7 (for detection of KG1a cells; Beckman Coulter), as well as HLA-ABC-PE (Biolegend), was added to the co-culture suspension. Cell suspensions were incubated with the antibodies for 15 min and then washed with PBS (300 g, 5 min). The cell pellet was resuspended in 300 µL PBS and immediately measured on a FACSCanto II flow cytometer. Analysis of cytometry data was performed using FlowJo software (Tristar, Ashland, OR, USA). The percentages of cells positive for KIRs, CD25, NKp44, 7AAD (dead cells), MICA/B, and HLA-ABC were determined. The median fluorescence intensity (MFI) was used to compare NKG2D expression between individual donors. A complete list of antibodies is summarized in supplementary Table S1.

Evaluation of NKG2D Ligand Expression by qRT-PCR

KG1a cells (culture conditions above) were treated with Ara-C (0.5 µM) for 24, 48, and 72 h. Total RNA was isolated using the RNeasy Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's protocol.
The quality and quantity of the extracted RNA were evaluated using a Synergy HTX (BioTek, Winooski, VT, USA), and samples were diluted to similar concentrations. A QuantiTect Reverse Transcription Kit (Qiagen, Hilden, Germany) was used for reverse transcription, following the manufacturer's instructions. The primers used for detection of ULBP-1, ULBP-2, ULBP-3, MICA, and MICB, and of beta2-microglobulin (B2M) as the reference gene, are listed in Table 1. All PCR reactions were performed on a QuantStudio 5 Real-Time PCR System (ThermoFisher Scientific, Waltham, MA, USA). A QuantiTect SYBR Green PCR Kit (Qiagen, Hilden, Germany) was used as the master mix for all PCR reactions, with a modified manufacturer's protocol (real-time PCR and two-step RT-PCR using Applied Biosystems and other cyclers). For each PCR run, a master mix was prepared on ice in duplicate, with a 1× final concentration of the 2× QuantiTect SYBR Green PCR Master Mix, 1 µM of each primer, and 2 µL of cDNA in a total volume of 25 µL. The thermal cycling conditions included an initial denaturation step at 95 °C for 15 min; 50 cycles of 94 °C for 15 s, 56-59.5 °C (MICA: 56 °C; MICB: 58 °C; ULBP1-3: 59.5 °C) for 30 s, and 72 °C for 30 s; followed by a dissociation curve. Appropriate amplicons were verified on a 2% agarose gel. Threshold cycle (Ct) values were determined using QuantStudio Design & Analysis Software. Relative expression was calculated as ∆∆Ct = (Ct(target, test) − Ct(reference, test)) − (Ct(target, calibrator) − Ct(reference, calibrator)). Fold change in expression was calculated as 2^(−∆∆Ct).

Co-Culture Cytotoxic Assay

NK cells, after 10 days of culture, were seeded into 96-well plates in SCGM with 5% FBS and IL-2 (1000 IU/mL). KG1a cells were treated with Ara-C (0.5 µM) for 24 or 48 h, and anti-HLA-ABC was added 24 h before co-culture with NK cells. The ratio of NK cells to KG1a cells was 1:10 in all experiments.
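The 2^(−∆∆Ct) fold-change calculation described above can be sketched numerically as follows; the Ct values used here are hypothetical illustrations, not the study's measured data.

```python
# Illustrative sketch of the 2^-ddCt relative-expression method described in
# the text. Ct values below are hypothetical, not measured data.

def fold_change(ct_target_test, ct_ref_test, ct_target_cal, ct_ref_cal):
    """Relative expression by the 2^-ddCt method.

    ddCt = (Ct(target, test) - Ct(reference, test))
         - (Ct(target, calibrator) - Ct(reference, calibrator))
    """
    ddct = (ct_target_test - ct_ref_test) - (ct_target_cal - ct_ref_cal)
    return 2 ** (-ddct)

# Hypothetical example: a ligand gene in Ara-C-treated vs untreated cells,
# with B2M as the reference gene.
fc = fold_change(ct_target_test=26.0, ct_ref_test=18.0,
                 ct_target_cal=28.0, ct_ref_cal=18.0)
# ddCt = (26 - 18) - (28 - 18) = -2, so fold change = 2^2 = 4
```

A fold change above 1 indicates up-regulation in the treated sample relative to the calibrator, which is how an Ara-C-induced increase in NKG2D ligand transcripts would appear.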
To exclude an influence of IL-2, KG1a cells were also transferred to SCGM medium with IL-2 without the addition of NK cells. Samples were collected at different time points, as described in Table 2. The number of dead cells was measured using flow cytometry and 7AAD. The specific killing activity was evaluated as the proportion of dead cells in the co-culture experiment minus spontaneous death in a well without NK cells. The control co-culture well (KG1a without treatment) provided the natural killing activity without any sensitization. The control well (NK cells co-cultured with untreated KG1a) and the wells with treated cells were compared to evaluate the effect of treatment.

Data Evaluation

Data were evaluated using MATLAB software (The MathWorks, Inc., USA). The non-parametric Mann-Whitney U test was chosen to determine statistical differences between groups at p < 0.05. Correlation of observed parameters was determined by evaluation of the correlation coefficient (Pearson's R); a coefficient greater than 0.8 with p < 0.05 was considered significant.
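The specific-killing arithmetic described above (co-culture death minus spontaneous death, then treated vs untreated comparison) can be sketched as follows; the percentages are hypothetical, not the study's results.

```python
# Minimal sketch of the specific-killing evaluation described in the text:
# dead-cell proportion in co-culture minus spontaneous death in a well
# without NK cells. All percentages below are hypothetical.

def specific_killing(dead_coculture_pct, dead_spontaneous_pct):
    """Specific killing = co-culture death minus spontaneous death, floored at 0."""
    return max(dead_coculture_pct - dead_spontaneous_pct, 0.0)

def treatment_effect(killing_treated_pct, killing_untreated_pct):
    """Improvement attributable to target-cell sensitization (e.g. Ara-C)."""
    return killing_treated_pct - killing_untreated_pct

# Hypothetical time point: 62% dead in co-culture with treated targets
# (12% spontaneous), 40% dead with untreated targets (8% spontaneous).
treated = specific_killing(62.0, 12.0)        # 50.0
untreated = specific_killing(40.0, 8.0)       # 32.0
gain = treatment_effect(treated, untreated)   # 18.0 percentage points
```

The same subtraction is applied at every sampled time point, so the treated/untreated gap can be tracked over the course of the co-culture.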
Laparoendoscopic single-site surgery for the treatment of different urological pathologies: Defining the learning curve of an experienced laparoscopist

Objectives To define the learning curve of laparoendoscopic single-site surgery (LESS) for an experienced laparoscopist. Patients and methods Patients who underwent LESS, from its implementation in December 2009 until December 2014, were retrospectively analysed. Procedures were divided into groups of 10 and scored according to the European Scoring System for Laparoscopic Operations in Urology. The different LESS indications were all operated on by one experienced laparoscopist. Technical feasibility, surgical safety, and outcome, as well as the number of patients required to achieve professional competence, were assessed. Results In all, 179 patients were included, with a mean (SD) age of 36.3 (17.5) years; 25.4% of the patients had had previous surgeries. Upper urinary tract procedures were done in 65.9% of patients, and 54.7% of the procedures were extirpative. Transperitoneal and retroperitoneal LESS were performed in 92.8% and 7.2% of the patients, respectively. The intraoperative and postoperative complication rates were 2.2% and 5.6% (Clavien–Dindo Grade II 3.9% and IIIa 1.7%), respectively. In all, 75% of intraoperative complications and all conversions occurred during the first 30 LESS procedures, despite the significantly higher difficulty scores of the subsequent LESS procedures. An extra 5-mm port, conversion to conventional laparoscopy, and conversion to open surgery were reported in 14%, 1.7%, and 1.1% of the cases, respectively. At a mean (SD) follow-up of 39.7 (11.4) months, all but one of the reconstructive LESS procedures were successful. Conclusion In experienced hands, at least 30 LESS procedures are required to achieve professional competence. Although LESS is technically demanding, both its conversion and complication rates are low in experienced hands.
Introduction

Laparoendoscopic single-site surgery (LESS) was recently introduced in the field of minimally invasive urological surgery, aiming to further reduce postoperative pain, shorten hospital stay, and improve cosmesis [1-3]. Despite its technical difficulty, which limits its application to experienced laparoscopists, LESS may be regarded as an emerging trend in minimally invasive urological surgery that has evolved significantly and become widely applicable in a relatively short time [3]. In attempts to share experiences of LESS and to outline its technical feasibility, difficulties, complications, and outcomes, multi-institutional studies were recently reported that included most of the centres pioneering LESS worldwide [3-6]. These studies showed that LESS is at least comparable to well-established conventional laparoscopy. However, to date no published report has highlighted the learning curve required for an experienced laparoscopist to achieve professional competence in LESS. Therefore, we present for the first time the learning curve of LESS for an experienced laparoscopist treating different urological pathologies in different age groups.

Patients and methods

This retrospective study included 179 consecutive patients, with different urological pathologies, who were indicated for laparoscopy and were treated with LESS from its implementation at our institute in December 2009 until December 2014. All patients gave informed consent for LESS. Exclusion criteria were absolute contraindications to laparoscopy and age <3 years. Procedures were scored according to the European Scoring System for Laparoscopic Operations in Urology [7]. Data were collected in a standard data sheet, and all procedures were approved by our Ethical Care Committee. All LESS procedures were done by one experienced laparoscopist (A.M.A.) with an advanced laparoscopic background.
To outline the learning curve of the laparoscopist, consecutive procedures were divided into groups of 10; each group was analysed and the groups were compared.

Outcome measures

Demographic data included age, gender, body mass index (BMI), past history of abdominal/pelvic surgery, American Society of Anesthesiologists (ASA) score, associated comorbidities, and indication for LESS. Procedures were classified as either ablative or reconstructive, and as either upper urinary tract or pelvic. The operative data analysed were: operative time, estimated blood loss (EBL), intraoperative complications, and blood transfusion. Data on the surgical procedure included: type of single-port device, type of instruments, access technique (single-port or single-incision/single-site), port-insertion site (umbilical or extra-umbilical), and approach (transperitoneal or retroperitoneal). Adding an extra 5-mm trocar was regarded as conversion to reduced-port laparoscopy [8]. Conversion to conventional laparoscopy or open surgery was also recorded. Postoperative data included: hospital stay, visual analogue scale (VAS) pain score at discharge, and postoperative complications during the hospital stay and within the first 3 months postoperatively. Postoperative complications were graded according to the Clavien-Dindo system [9]. Finally, the functional and oncological outcomes were recorded during the follow-up period.

Statistical analysis

Data were analysed using the IBM Statistical Package for the Social Sciences (SPSS®) software, version 20.0 (SPSS Inc., IBM Corp., Armonk, NY, USA) [10]. Comparisons between groups for categorical (qualitative) variables were assessed using the chi-squared test. The Mann-Whitney test was used to compare groups for non-normally distributed (non-parametric) quantitative variables. A P ≤ 0.05 was considered to indicate statistical significance.
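The learning-curve set-up described above, in which consecutive procedures are divided into groups of 10 and each group is analysed separately, can be sketched as follows; this is an illustrative sketch with hypothetical data, not the authors' code.

```python
# Illustrative sketch (hypothetical data, not the study's) of the
# learning-curve set-up: split consecutive procedures into groups of 10
# and compute a per-group complication rate for comparison.

def group_procedures(procedures, size=10):
    """Split a chronologically ordered list into consecutive groups of `size`."""
    return [procedures[i:i + size] for i in range(0, len(procedures), size)]

def complication_rate(group):
    """Fraction of procedures in the group flagged as complicated."""
    return sum(1 for p in group if p["complication"]) / len(group)

# Hypothetical sequence of 40 procedures with complications clustered early.
procs = [{"complication": i in (4, 17, 28)} for i in range(40)]
groups = group_procedures(procs)                 # 4 groups of 10
rates = [complication_rate(g) for g in groups]   # [0.1, 0.1, 0.1, 0.0]
```

In the study itself, each such group would also carry its mean difficulty score and conversion count, and groups would be compared with the chi-squared or Mann-Whitney tests mentioned above.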
Results

Patient demographics, procedures and instrumentation

Included were 179 patients with a mean (SD) age of 36.3 (17.5) years and a mean (SD) BMI of 28.65 (6.76) kg/m²; 44% of them had a BMI of >30 kg/m². All demographic data are presented in Table 1. The different indications for LESS are shown in Table 2. Of the procedures, 54.7% were extirpative and 65.9% targeted the upper urinary tract. In all, 38 patients (21.2%) were children, with a mean (SD) age of 6.2 (4.2) years. Indications for LESS in the children were: undescended testes (18), varicocele (eight), PUJ obstruction (six), and simple nephrectomy (six). Trans-umbilical transperitoneal access using a multichannel-port technique was used in 92.8% of the patients, whilst retroperitoneal access was used in 7.2%. The most commonly used access device was the TriPort (44.1%; Olympus, NY, USA and Advanced Surgical Concepts, Wicklow, Ireland), followed by the Covidien SILS port (31.8%; Covidien, Chicopec, MA, USA). The QuadPort was used in 22.9% of the procedures (Olympus, and Advanced Surgical Concepts), whilst the X-Cone (Karl Storz, Tuttlingen, Germany) and Ethicon port (Endosurgery, Cincinnati, OH, USA) were each used only once (0.6%). Pre-bent and articulating instruments were used according to port type.

Perioperative outcomes, complications and conversions

Intraoperative complications occurred in four cases (2.2%): intraoperative bleeding in two cases, left gonadal vein injury in one, and respiratory hypoventilation in one. Of the 10 postoperative complications (5.6%), seven were Clavien-Dindo Grade II and three Grade IIIa. One patient developed a retroperitoneal abscess, two had urinary leakage, two had wound infections, two had UTIs, one had anaemia, and two developed umbilical hernias (Table 3). The overall conversion rate was 16.8%, including conversion to reduced-port laparoscopy in 14%, to conventional laparoscopy in 1.7%, and to open surgery in 1.1%.
Perioperative outcome predictors

Univariate analysis of predictors of perioperative outcomes identified age, BMI, female gender, high ASA score, oncological surgical indication, upper urinary tract surgery, and high procedure score as significant. Patients' comorbidities significantly increased EBL and length of hospital stay, whilst previous abdominal or pelvic surgery only significantly extended the operative time. Ablative procedures increased intraoperative EBL. Multivariate analysis identified increased BMI as a significant risk factor for all inferior perioperative outcomes.

Predictors of conversion and complications

Univariate analysis identified female gender, previous abdominal or pelvic surgeries, reconstructive surgeries, pelvic procedures, and procedures with a high difficulty score as significant risk factors for additional port insertion, whilst a high ASA score significantly increased the rate of conversion to conventional laparoscopy. Meanwhile, high BMI significantly increased the risk of conversion to open surgery (Table 4). Despite being statistically non-significant (P = 0.60), all intraoperative complications occurred in female patients. For postoperative complications, univariate analysis identified female gender and increased BMI as significant risk factors. However, multivariate analysis showed that none of the perioperative factors were significant predictors of conversion or complications.

Analysis of learning and training of LESS

In all, 75% of the intraoperative complications (sequential case numbers 5, 18 and 29) and all conversions occurred during the first 30 LESS procedures. Patients were therefore divided into two groups: the first group comprised the first 30 procedures and the second group the subsequent 149 procedures. Comparisons between the two groups for perioperative outcomes, complications, conversion rates, and procedure difficulty scores are shown in Table 5.
Follow-up

At a mean (SD) follow-up of 39.7 (11.4) months, all reconstructive LESS procedures but one complex vesico-vaginal fistula repair were successful (98.1% success rate), whilst patients with renal parenchymal and pelvic tumours showed no recurrence. Three patients were lost to follow-up, whilst one patient with a T1bN0M0 renal parenchymal tumour died after 27 months from a cause other than the original pathology.

Discussion

LESS has been proposed as an evolutionary step beyond conventional laparoscopy, and it has been increasingly adopted by urologists worldwide since its introduction in 2007 [1,11]. Although the recently reported multi-institutional studies included large numbers of patients, all of them applied strict patient selection criteria, with a variety of experienced laparoscopists performing LESS at different institutions with variable settings and in different healthcare systems [2-5,12]. Even in studies reporting single-centre experiences, the LESS procedures were probably performed by more than one laparoscopist [2,13]. In the present study, we report a detailed analysis of the learning curve of an experienced laparoscopist who performed 179 consecutive LESS procedures at a single centre. Only absolute contraindications to laparoscopy and children aged <3 years were excluded, as in younger children it is difficult to use the commercially available LESS instruments, which are designed for adults, and objective evaluation of postoperative pain is inaccurate. Nevertheless, paediatric patients represented 21.2% of the total. In our present study, 44% of the patients had a BMI >30 kg/m², 27.9% had associated comorbidities, and 25.4% had had previous surgeries. In most reported series of LESS, patients were not obese and were of low-grade surgical risk [2-4,13]. Autorino et al. [4] reported the largest multi-institutional study, which included 1163 patients who had LESS at 21 institutions worldwide.
In their study, 85.6% of the procedures targeted the upper urinary tract and 83.4% were extirpative. This trend might reflect the technical difficulty of lower urinary tract and reconstructive LESS procedures, due to the unfavourable ergonomics of LESS. In our present series, upper urinary tract LESS procedures represented 65.9% of the cases, whilst 45.3% were reconstructive LESS procedures. Because of the current technical limitations of LESS, a good laparoscopic background is necessary before practising LESS. With more training, LESS can be widely adopted and applied even to the most complex urological procedures [11,14]. In our present study, all cases were done by one surgeon (A.M.A.) with an advanced laparoscopic background of >10 years. Data analysis showed that 75% of intraoperative complications, as well as all conversions to conventional laparoscopy and open surgery, occurred during the first 30 LESS procedures. Of the first 30 LESS procedures, 60% were considered 'easy' or 'slightly difficult', whilst 64.4% of the subsequent procedures were considered 'fairly difficult', 'difficult', 'very difficult' or 'extremely difficult'. Furthermore, despite the higher technical difficulty in the subsequent group of patients, the incidence of adding an extra port was significantly higher in the first 30 LESS procedures. This may reflect the fact that, with increasing experience of the operating surgeon, professional competence can be achieved, and this probably requires 30 LESS procedures. In an attempt to overcome the current limitations of LESS, the da Vinci® Robot System (Intuitive Surgical, Sunnyvale, CA, USA) was used and mitigated these limitations to some extent [15]. However, as it was not originally developed for LESS, the current da Vinci system has some limitations. Recently, a novel robotic system has been specifically developed for single-port surgery [16].
Although it has been safely used for major urological LESS procedures, the problem of assistant access to the surgical field has not been solved and remains a challenge. As in most of the reported LESS series [1-4], transperitoneal trans-umbilical access was commonly used in our present patients. Ryu et al. [17] described urological retroperitoneal LESS; their results are comparable to ours for complication and conversion rates but inferior for perioperative outcomes. However, their report represents one of the early experiences with single-port retroperitoneal laparoscopy, when LESS was still in its infancy. Overall, as in conventional laparoscopy, both transperitoneal and retroperitoneal approaches have been used for LESS; however, retroperitoneal LESS is less favourable. A wide variety of access devices have been developed for LESS, aiming to allow the simultaneous use of at least three instruments during surgery [18]. However, each device has its own advantages and disadvantages, and the ideal port is not yet available [19]. Five types of multichannel access devices were used in our present study. The use of these different access devices was mainly driven by their commercial availability; however, the most frequently used ones were the most convenient. Both articulating and pre-bent instruments were developed for use with the different access devices in order to overcome the problem of triangulation and to facilitate surgery during the single-port approach [1]. In our present study, both articulating and pre-bent instruments were used. Data analysis showed that neither the access device nor the instrument type was a predictor of perioperative outcomes, conversions or complications. The perioperative outcomes of the present study were favourable compared with those of the two largest multi-institutional studies [3,4]. This may be because the present study included a larger proportion of 'easy', 'slightly difficult', and 'fairly difficult' procedures (55.3%).
Also, all the procedures in the present study were done by one experienced laparoscopist. On the other hand, the previously mentioned multi-institutional studies included a larger proportion of 'difficult' procedures, where LESS was performed by different surgeons with variable levels of experience and surgical skill. Our analysis of predictors of perioperative outcomes correlated with the literature, except that pelvic LESS procedures had significantly better perioperative outcomes than upper urinary tract LESS procedures [4,20]. This could be explained by the higher proportion of 'easy' procedures, namely varicocelectomy and undescended testis, which are categorised as pelvic surgeries. Moreover, no pelvic oncological indications were included in our present series. For LESS to be considered a safe alternative to well-established conventional laparoscopy, the potential risks of conversion and complications must be relatively low and clearly defined [13]. Two case series have specifically evaluated LESS for upper tract procedures. In 125 upper urinary tract LESS procedures, Irwin et al. [13] reported conversion to conventional laparoscopy and complication rates of 5.6% and 15.2%, respectively. Also, Greco et al. [20] reported a 17% complication rate in 192 upper urinary tract LESS procedures. Increasing experience and the proven feasibility of LESS have allowed larger LESS series to be reported, from which more information has accrued. Autorino et al. [4] reported intraoperative and postoperative complication rates of 3.3% and 9.4%, respectively. Their overall conversion rate was 19.6%, with conversion to reduced-port laparoscopy, conventional laparoscopy and open surgery in 14.6%, 4% and 1.1% of LESS procedures, respectively. Compared with what has been published, our present study showed a lower complication rate.
Again, this may be related to the fact that all cases were done by one laparoscopist, and also the other reported studies might have included a larger proportion of technically difficult procedures. However, the conversion rate in our present study was comparable with that reported by Autorino et al. [4], although almost half of our present cases were reconstructive LESS procedures compared with 16.6% of theirs. Furthermore, analysis identified female gender, previous abdominal or pelvic surgeries, reconstructive surgeries, pelvic procedures, and procedures with a high difficulty score as significant risk factors for additional port insertion, which correlates with the literature [4]. Despite being statistically non-significant, all intraoperative complications in the present series occurred in females, which might have been related to the higher BMI of the females (31.29 kg/m² compared with 26.93 kg/m² in males). For postoperative complication predictors, the analysis identified female gender and increased BMI as significant risk factors, in accordance with the literature [4]. The present study is unique for three main reasons. Firstly, it includes a relatively large LESS series covering different genitourinary pathologies in different age groups, operated upon by one skilled laparoscopist. Secondly, it is both descriptive and analytical, providing information on the perioperative outcomes and the risk factors for complications and conversions in LESS. Finally, it is the first to analyse the progression of the learning curve of LESS for an experienced laparoscopist. On the other hand, its limitations include the retrospective design, as although the data were prospectively collected, bias would have remained. Also, our present series represents the outcomes of a surgeon with an extensive laparoscopic background; the results may therefore not be representative of those achieved by less experienced urologists.
Moreover, a comparative analysis with standard laparoscopy and potentially other available 'scarless' options was not performed.

Conclusions

The present study defines the learning curve of an experienced laparoscopist achieving professional competence in LESS for the treatment of different urological pathologies in different age groups. In experienced hands, at least 30 LESS procedures appear to be required to achieve professional competence. Although LESS can be safely applied in urology with low conversion and complication rates, good training and good laparoscopic experience are prerequisites for satisfactory results.

Conflict of interest

None.
The impact of alcohol consumption on patterns of union formation in Russia 1998–2010: An assessment using longitudinal data

Using data from the Russian Longitudinal Monitoring Survey, 1998–2010, we investigated the extent to which patterns of alcohol consumption in Russia are associated with the subsequent likelihood of entry into cohabitation and marriage. Using discrete-time event history analysis we estimated for 16–50 year olds the extent to which the probabilities of entry into the two types of union were affected by the amount of alcohol drunk and the pattern of drinking, adjusted to allow for social and demographic factors including income, employment, and health. The results show that individuals who did not drink alcohol were less likely to embark on either cohabitation or marriage, that frequent consumption of alcohol was associated with a greater chance of entering unmarried cohabitation than of entering into a marriage, and that heavy drinkers were less likely to convert their relationship from cohabitation to marriage.

Introduction

Studies in several countries have shown that higher rates of alcohol consumption can affect the timing of marriage and are also associated with increased rates of cohabitation and union dissolution (Forthofer et al. 1996; Leonard and Rothbard 1999). Until now it has not been possible to ascertain whether similar patterns might be observed in Russia, a country known to have a distinctive drinking culture and rates of alcohol consumption that significantly reduce life expectancy (e.g., Leon et al. 2009; Shkolnikov et al. 2013). In studies undertaken outside Russia it has long been noted that married people usually have better health and follow healthier lifestyles, including more moderate drinking patterns, than those who are unmarried (Gove 1973; Rosengren et al. 1989; Umberson 1992; Joung et al. 1995; Waite 1995).
On the whole this finding is replicated in Russia: most cross-sectional studies have found that there too married people drink less frequently and are less likely to drink to excess (Bobak et al. 1999; Cockerham et al. 2006; Pridemore et al. 2010). Two reasons have been suggested to explain why individuals who are heavy drinkers are more likely to be unmarried than married. One suggestion is that those who drink more heavily are less likely to enter marriage and more likely to exit from it. The other suggestion is that the change from one marital status to another may lead the individuals concerned to change their alcohol consumption (Leonard and Rothbard 1999). Longitudinal data are needed to distinguish which, if either, of these reasons is valid. In the study reported in this paper, we used data from the Russian Longitudinal Monitoring Survey (RLMS) for the years 1998–2010 to investigate the association between alcohol consumption and subsequent union formation or dissolution. It will be helpful if we explain the terminology used throughout the paper. We use the terms 'cohabitation' to refer to non-marital cohabiting unions and 'marriage' to mean marital union. The terms 'union formation' or 'entry to union' refer to the start of either a cohabitation or a marriage. The term 'unmarried' refers to all those who are not in a registered marriage, and includes those in non-marital cohabiting unions. 'Drinking frequency' refers to the frequency of drinking alcohol, whereas 'drinking pattern' can refer to both drinking frequency and intensity of consumption, thereby capturing 'binge drinking' (heavy consumption in a short period) as well.

Alcohol consumption in Russia

Comments on the heavy drinking by many Russians can be found from as early as the tenth century, and Russian culture has long been regarded in the popular imagination as revolving around intense bouts of excessive drinking, particularly by men (Nemtsov 2011).
Early historical accounts of the amounts consumed and the patterns of consumption were based largely on travellers' observations of the drinking habits of the urban elite, at royal celebrations or in Moscow taverns, but it is unclear how much heavy drinking occurred amongst men elsewhere in the country. In the 1700s, Peter I encouraged the practice of drinking large amounts of spirits by introducing daily vodka rations to the Navy, and accounts from that period helped to form the view, still prevalent today, that Russian men are heavy drinkers (Nemtsov 2011). Orlando Figes (2003), in his account of Russian cultural history, notes that the test of a 'true' Russian was the capacity to 'be able to drink vodka by the bucketload', an attitude which still prevails today. Over the twentieth century, Russian drinking continued to involve the consumption of large amounts of spirits, although there were sharp fluctuations, in part as a consequence of wars, but more particularly as a result of a series of prohibition laws. The first of these was passed in the early twentieth century (1914) and the last by Gorbachev in 1985. However, the scale of alcohol consumption in the Soviet Union across the twentieth century is difficult to estimate because the statistics on the subject released by the Soviet state, which manage to conceal any rise in consumption, are regarded as grossly misleading (Segal 1990;White 1996;Treml 1997). This fudging of the true picture can be partly attributed to the fact that excise duty on alcohol brought in significant income for the government, constituting 12-14 per cent of the total revenues received (Treml 1997). In the 1990s, after the fall of Communism, there was a sharp increase in hazardous levels and patterns of alcohol consumption, with a concomitant rise in alcohol-related illnesses, especially among men. 
This can be seen as a reaction to Gorbachev's anti-alcohol stance, but was also driven by the increasing availability of cheap spirits at a time of widespread socio-economic insecurity. Today, the pattern of drinking in Russia still largely conforms to the traditional stereotype. Russia has a relatively high average annual alcohol consumption, at 15.7 litres of ethanol per adult, which is considerably higher than the European average of 12.2 litres (World Health Organisation 2011). In addition, the way that Russians drink alcohol is particularly hazardous for their health. Up to 75 per cent of all the alcohol is drunk in the form of spirits (Popova et al. 2007; Pomerleau et al. 2008; World Health Organisation 2011). Russia is the world's greatest consumer of 'hard liquor'. The country also has a tradition, particularly amongst its men, of periodic binging on vodka, with the express goal of becoming intoxicated (Nilssen et al. 2005; Perlman 2010). In a recent study conducted in the Urals, approximately 10 per cent of men reported going on 'zapoi' (a period of continuous drunkenness lasting several days) in the previous year. Finally, the practice of consuming samogon, a home-made spirit, and non-beverage alcohols, such as medical tinctures, is also relatively common (Bobrova et al. 2009; Gil et al. 2009). Accordingly, in Russia and elsewhere in Eastern Europe, alcohol consumption accounts for a higher proportion of disability-adjusted life years (DALYs, the number of years lost owing to poor health, disability, or early death) than in any other region of the world (Lim et al. 2012). In addition, a comparative study of Eastern European countries has found that Russians are more likely to report that drinking leads to negative social consequences, such as family problems, than the inhabitants of any neighbouring countries (Bobak et al. 2004).
The consumption of alcohol has become normalized and incorporated into everyday life in Russia, with the expectations surrounding drinking heavily informed by traditional notions of masculinity and femininity (Van Gundy et al. 2005; Pietilä and Rytkönen 2008b; Bobrova et al. 2010; Saburova et al. 2011; Hinote and Webber 2012). Heavy drinking is more common in youth and middle age, and among the unemployed, those with low education, and those in poorer households (Carlson and Vagero 1998; Chenet et al. 1998; Bobak et al. 1999; Tomkins et al. 2007; Jukkala et al. 2008; Perlman 2010; Cook et al. 2011). However, the greatest disparity is between the sexes: men drink more frequently, consume more spirits, and are several times more likely to binge drink (Bobak et al. 1999; Malyutina et al. 2001; Nicholson et al. 2005; Pomerleau et al. 2008; Bobrova et al. 2010). Heavy drinking is considered more socially acceptable for men than for women (Van Gundy et al. 2005) and is perceived to play an important role in the social and business life of men, acting as a form of stress management (Mustonen 1997; Ashwin 2006; Pietilä and Rytkönen 2008a; Saburova et al. 2011). In this normative setting heavy drinking amongst men may be seen as an indicator of social 'normality' and a coping strategy, but among women as indicative of a form of dysfunctionality.

Patterns of union formation in Russia

Patterns of union formation in Russia have traditionally followed the model of early, universal marriage identified as characteristic of the region in the 1960s (Hajnal 1965). For most of the twentieth century, the chances of getting married at some point in life remained high in Russia, while rates of cohabitation stayed low (Philipov and Jasilioniene 2010). The period covered by our study, 1998–2010, was, however, characterized by a shift towards a pattern of decreasing rates of marriage, while rates of cohabitation began to increase.
Marriage rates had begun to decline in the 1980s and they fell sharply in the 1990s. Over the same time span rates of cohabitation increased in all relevant age groups, counterbalancing most of the decline in marriage (Vishnevsky 2006; Kostova 2007; Hoem et al. 2009). Despite the increasing popularity of cohabitation as a precursor to marriage (in the 1990s, half of all first unions began as cohabitation), a registered marriage remains the preferred setting for childbearing in Russia (Philipov and Jasilioniene 2010). On the other hand, divorce rates in the country have long exceeded those in Western nations (Avdeev and Monnier 2000; Mills 2004; Jasilioniene 2007) and there is a correspondingly high re-marriage rate. Therefore marriage, divorce, re-marriage, and cohabitation are all common occurrences in the lives of Russia's inhabitants (Avdeev and Monnier 2000; Mills 2004). Two main explanations of recent trends in Russian cohabitations, marital unions, and dissolutions have been put forward. According to the 'crisis' explanation, the economic and social upheaval associated with the break-up of the Soviet Union led couples to postpone marriage and to greater union instability (see, e.g., Vannoy et al. 1999). In contrast, the 'ideational change' explanation suggests that the shift from marriage to cohabitation reflects a late blossoming of the individualistic values associated with what has become known as the second demographic transition (e.g., Zakharov 2008). Proponents of the second demographic transition theory argue that as countries develop socially and economically, people become increasingly individualistic, which leads to more diverse, non-traditional forms of union, with an increased risk of divorce and separation (Van de Kaa 1987; Lesthaeghe 1995). On their own, neither the 'crisis' nor the 'ideational' explanations can adequately explain the changing patterns of union formation seen in Russia.
The 'crisis' explanation has the flaw that cohabitation had started to become popular before the fall of communism (Gerber and Berman 2010), and might have been restrained by Soviet housing allocation rules, which favoured married couples (Zavisca 2012). The 'ideational' explanation is not wholly satisfactory either, because many of the traditional elements of Russian social behaviour-such as early marriage and young and nearly universal childbearing-are still in place and are not consistent with those characteristic of the second demographic transition (Philipov and Jasilioniene 2010). Indeed, the immediate post-Soviet period (early 1990s) saw an even greater emphasis on traditional gender stereotypes within the family (Watson 1993), and comparative studies have shown that there is more support for traditional gender roles in Russia than in other European countries (Motiejunaite and Kravchenko 2008). Further, conditions in society that would favour the proposed ideological 'development', such as nominal female equality in education and the labour force, were achieved many decades ago in Russia (Gerber and Berman 2010), but no corresponding change in patterns of union formation had resulted. For decades, Russia has been distinguished from many Western European countries by a pattern of early, universal childbearing and reliance on abortion as a means of birth control. Despite substantial reductions during the 1990s, the country's abortion rate remains the highest of all Eastern European countries (Sedgh et al. 2007;UNICEF 2013), and was twice as high as in the UK in 2011 (England & Wales Department of Health 2012; UNICEF 2013). In the Soviet era women turned to abortion because modern contraception was neither easily available nor promoted by the State, and the practice has continued into the post-Soviet period for cultural and practical reasons. 
Despite trends towards birth postponement (Frejka and Zakharov 2012), the mean age at first birth remains much lower in Russia than in other European countries: in 2009 the Russian figure was 24.6 years, compared to 27.6 years in the UK and 30.1 years in Italy (UNECE 2012). Fertility in Russia declined rapidly in the 1990s, marking a shift towards a one-child family norm, but rates of voluntary childlessness have remained relatively low (Philipov and Jasilioniene 2010). In Russia, pre-marital pregnancy is a strong determinant of entry to marriage (Cartwright 2000; Jasilioniene 2007; Kostova 2007; Alich 2009; Gerber and Berman 2010). 'Shotgun weddings' persist owing to a combination of the belief that childbearing within marriage is to be preferred and a low level of effective contraceptive use. The use of modern contraception has not increased much in the post-Soviet period (Perlman and McKee 2009), and because women are traditionally reluctant to abort their first pregnancy (Kulakov et al. 1996), pre-marital pregnancy is likely to remain a driver of entry into marriage. Factors routinely found to be associated with union formation elsewhere, such as employment status or income (Jalovaara 2012), have been shown to have inconsistent associations, or no association at all, with union formation in Russia (Gerber and Berman 2010). Studies show that in Russia there are some disparities in type of union by level of education: cohabitation and non-marital childbearing are concentrated amongst less-educated groups, leading some to suggest that non-marital unions reflect a 'pattern of (social and economic) disadvantage' (Kostova 2007; Alich 2009; Gerber and Berman 2010; Perelli-Harris and Gerber 2011; Potârcă et al. 2012).

Alcohol and union formation

Alcohol consumption may affect union formation in a number of ways. Research on the topic has tended to concentrate on entry into marriage, rather than into cohabitation.
Types of behaviour, including heavy drinking, can be incorporated into existing rational choice models of mate selection (Fu and Goldman 1996). For example, heavy drinking may be thought to reflect an individual's cultural tastes, values, and lifestyle, and thus positively or negatively affect their attractiveness to potential partners. Another possibility is that an individual who drinks heavily may currently have poor health, or may be perceived to be at risk of poor health in the future, making him, or her, less desirable as a partner. One review of alcohol and the 'marriage effect' (the consistent finding that unmarried people drink more heavily than the married) has suggested that excessive consumption of alcohol may encourage both early and late entry into marriage (Leonard and Rothbard 1999), and that the effect of consumption on entry to marriage will vary according to age. The few longitudinal studies which have been conducted to explore the relationship between alcohol consumption and marriage have shown associations that are inconsistent between studies. Two studies from the USA, which followed adolescents into adulthood, found that heavier drinking or alcohol abuse was associated with an early age of marriage (Newcomb and Bentler 1987;Forthofer et al. 1996). A possible scenario in explanation of this finding is that heavy drinking, perhaps combined with forms of anti-social behaviour, led to an early exit from schooling, an early sexual debut, and early parenthood which prompted early marriage. However, other longitudinal studies have found that heavy drinking was associated with delayed entry into marriage (Fu and Goldman 1996;Waldron et al. 2011). This may indicate that heavy drinkers also have personal problems, such as an unwillingness to make a lasting commitment, or that they are viewed as being undesirable spouses, so that they take longer, on average, to find a marriage partner. 
A further study was unable to find any effect of drinking habits on the timing of marriage once socio-economic factors had been adjusted for (Martino et al. 2004). The question of whether the relationship between alcohol consumption and cohabitation differs from that between alcohol consumption and marriage has not been thoroughly investigated. In European cross-sectional studies, individuals who cohabit were found to be more likely to be heavy drinkers than those who were married (Plant et al. 2008; Li et al. 2010). The authors speculated that this could be either because heavy drinkers are selected into cohabitation (Plant et al. 2008; Li et al. 2010) or because they are less likely to move on from cohabitation into marriage. However, these two hypotheses have not been investigated thoroughly and the mechanisms involved remain unclear. Horwitz and White (1998) investigated drinking before cohabitation and marriage and found no evidence that cohabiters were heavy drinkers before they moved in with their partners. The effects of alcohol consumption on entry into cohabitation or marriage have not been investigated using data from Eastern Europe or (before our study) Russia, where patterns of both drinking and union formation are substantially different from those seen in Western countries.

Data

Our analysis used data from the Russian Longitudinal Monitoring Survey (RLMS) (Higher School of Economics et al. 1992–present), a Russian household panel survey started in the early 1990s to monitor the effect of the political transition from Soviet to post-Soviet Russia on health and wellbeing. The study was designed as a repeated cross-sectional survey but, because of the way the follow-up procedure was organized, the data also permit longitudinal analysis. The survey was carried out in 'waves', taken in successive years.
We used cross-sectional data from waves 8 to 19 (1998–2010) to construct longitudinal data by observing individuals from one wave (wave t − 1) to the subsequent wave (wave t) across the series of waves until they fell out of observation and were lost to follow-up. Waves 5–7 (1994–97) of the RLMS survey could not be used in our study because data on cohabitation were not collected in those years. Full details of the design and sampling framework of the RLMS are available on the project website (http://www.cpc.unc.edu/projects/rlms-hse). To date, the survey project has comprised two phases. Data from phase 1 (waves 1–4, 1992–94) were not included in this study since they are widely regarded as unrepresentative. Phase 2 comprised waves 5 onwards, and spanned the years 1994 to the present. At the beginning of the second phase, in 1994, a three-stage probability sample was drawn in an attempt to construct a nationally representative sample. Thirty-eight Russian population centres were chosen as primary sampling units (PSUs), the probability of their inclusion being proportional to their size, and villages or census districts were randomly chosen from the PSUs as secondary sampling units (SSUs). From each SSU, the addresses of ten households were randomly chosen from local household registers (in urban areas, the registers were developed by the survey team) and, where possible, all adult members of each selected household were interviewed. The total selected sample consisted of 4,718 households. At least one interview was completed for 84.3 per cent of these households, although the response rate was lower in the Moscow and St Petersburg regions, where it reached only 60.2 per cent. At the beginning of phase 2 (wave 5), data had been collected on 3,975 households and 8,893 adults.
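The multi-stage design just described, with first-stage units drawn with probability proportional to size, can be sketched as follows. This is a simplified illustration only (sampling here is with replacement, and the centre names, sizes, and household labels are hypothetical), not the RLMS team's actual procedure.

```python
import random

def pps_sample(units, sizes, n, seed=0):
    """Draw n units with probability proportional to size (simplified:
    with replacement, unlike a real multi-stage survey design)."""
    rng = random.Random(seed)
    return rng.choices(units, weights=sizes, k=n)

# Hypothetical population centres and their population sizes
centres = ["centre_%d" % i for i in range(1, 201)]
sizes = [1000 + 50 * i for i in range(200)]

# Stage 1: 38 primary sampling units, chosen proportional to size
psus = pps_sample(centres, sizes, n=38)

# Stages 2-3 (sketch): within each selected PSU, list 10 household
# addresses drawn from a local register
households = {psu: ["%s_hh%d" % (psu, h) for h in range(10)] for psu in psus}
```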
For each of the selected households, interviewers collected data on its composition, and attempted to interview every adult resident of the household, using a more detailed individual questionnaire. After the first round of interviews the interviewers returned to the same households approximately annually. Where possible, people leaving the household between annual 'waves' were followed up, and their new household recruited into the study. When, in a later wave, new people were found to have moved into one of the addresses selected for the initial sample, they were invited to join the study. According to the RLMS survey team, the population sampled in wave 5 in 1994 was comparable to the population enumerated in the 1989 census of Russia in distribution of household size, sex ratio, age distribution, and proportion living in urban and rural areas. Our own analysis showed that crude marriage rates within the RLMS population were slightly higher than those published by the Russian government statistics agency but followed the same pattern of increases and decreases over the survey period.

Analysis sample

Both men and women were included in our longitudinal study if they had completed an individual interview as part of any two consecutive waves of the RLMS (wave t − 1 and wave t), and were aged between 16 and 50 at the time they first entered observation at wave t − 1. We refer to the wave in which the outcome was measured as wave/time t, and the preceding wave as wave/time t − 1. Waves further back in relation to the outcome are called wave t − 2, wave t − 3, and so on. Our study focused on the population not married at wave t − 1. Whether an individual was included in the analysis sample depended on which aspect of union formation was being investigated.
For example, for the analysis of entry to union (either cohabitation or marriage) we included all those who were never married, divorced, or widowed (and not cohabiting) at t − 1, and then followed them to the next wave t. For the analysis of conversion of cohabitation to marriage we included all those cohabiting at wave t − 1 and followed them to wave t. The waves over which the individual could be followed are referred to as 'the follow-up period', and the first wave in which they were seen is termed 'the start of the follow-up period'. We restricted our analysis to those aged 16–50 years at the start of the follow-up period because most union formation occurs at these ages.

Outcome variable: union status

For each individual, marital status at wave t − 1 was grouped into four 'union status' categories: 'never married and not cohabiting', 'currently cohabiting', 'divorced', and 'widowed'. Each individual was then categorized as being in one of five potential outcome categories at the subsequent wave t: the four just listed or a fifth, 'married', category. We could then cross-tabulate the 'original' (wave t − 1) and 'destination' (wave t) categories, to provide for 20 potential transitions over the period between waves. For four of these transitions no individual changed union status, leaving 16 potential transitions from one state to another, although two of these (a move from being either divorced or widowed to being never married) were logically impossible. After data cleaning, 62 out of 27,228 of the follow-up periods fell into these two 'impossible' categories, and as irresolvable inconsistencies they were excluded from the analysis. For different sections of the analysis we grouped together marital status outcomes at wave t − 1 and wave t. For example, for the analysis of entry to union, marriage and cohabitation were combined at wave t. This is explained in more detail in the Statistical methods section below.
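The transition accounting above (20 origin-destination pairs, 4 with no change, 16 moves, 2 logically impossible) can be checked with a short enumeration using the paper's own categories:

```python
from itertools import product

origin = ["never married and not cohabiting", "currently cohabiting",
          "divorced", "widowed"]
destination = origin + ["married"]

# All origin-destination pairs between waves t-1 and t: 4 x 5 = 20
transitions = list(product(origin, destination))

# Pairs where union status actually changes: 20 - 4 diagonal pairs = 16
changes = [(o, d) for o, d in transitions if o != d]

# Moves back to 'never married' from divorced or widowed cannot occur
impossible = [(o, d) for o, d in changes
              if d == "never married and not cohabiting"
              and o in ("divorced", "widowed")]

valid = [t for t in changes if t not in impossible]
```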
Alcohol variables

Alcohol use at the previous survey wave (wave t − 1). At each wave of the RLMS, participants were asked about the frequency with which they drank alcohol, the types of beverage they consumed, and the maximum daily volume of each alcoholic beverage they had consumed in the 30 days before they were interviewed. The information on an individual's drinking habits at wave t − 1 was used to derive two alcohol consumption variables for the period beginning with that interview in wave t − 1 and ending at the interview for the next wave (wave t) of the survey. The first variable, an individual's 'drinking frequency', was categorized into five groups according to the number of times they had had an alcoholic drink in the 30 days before their interview at t − 1. The groups were: 'non-drinker', 'had a drink 2-3 times in the month', 'drank once a week', 'drank 2-3 times a week', and 'drank 4 or more times a week'. For women, the last two categories (2-3 and 4 or more times a week) had to be combined owing to small numbers. The second variable for alcohol consumption, 'drinking pattern', classified individuals into 'binge drinkers', 'non-binge drinkers', or 'non-drinkers'. Adopting a criterion used in previous studies conducted in Russia (Malyutina et al. 2001; Bobak et al. 2004), we defined binge drinking as the consumption of more than 80 g of ethanol in a single type of beverage on a single occasion. These two alcohol consumption variables were used in separate models because they were highly correlated.

Changes in alcohol consumption. It might be supposed that a sudden change in alcohol consumption would have a greater effect on an individual's union status than the individual's usual drinking pattern.
To assess this particular hypothesis we conducted further analyses using a subset of observations with data from three waves (referred to as waves t − 2, t − 1, and t), and fitted models that were simultaneously adjusted for drinking frequency and pattern at t − 2 and for substantial changes in drinking frequency and pattern between waves t − 2 and t − 1. Categorical variables were created to indicate a 'substantial increase', 'decrease', or 'relative stability' between successive waves in 'drinking frequency' or in 'drinking pattern' (defined above). Change in 'drinking frequency' was indicated by a shift either upwards or downwards by at least two of our 'drinking frequency' categories; for example, a move from being a non-drinker to drinking once per week, or from drinking 4 or more times per week to drinking 1-3 times per month. Individuals who did not shift by at least two categories were classified as 'stable'. A 'change in drinking pattern' was indicated by a move from being a 'non-drinker' to being a 'binge drinker', and the opposite move was taken to indicate a decreased pattern of drinking; all other cases were classified as having a 'stable' drinking pattern.

Other variables. All covariate data were self-reported in the RLMS surveys and were taken to apply from the point they were reported (wave t − 1) to the next wave of the survey (wave t). We know from previous studies that the following factors could be associated with drinking pattern and union formation and so act as confounders of the true association: age, education, employment, income, health, and pregnancy. With the exception of age (the only variable operative before the start of alcohol consumption), all these factors could be influenced by alcohol consumption and so mediate between alcohol consumption and union formation. Treating these potential mediating variables as confounders in the model could have resulted in over-adjustment and obscured some of the effect of alcohol on union formation.
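The two derived classifications described above, the binge-drinking 'drinking pattern' (more than 80 g of ethanol from a single beverage type on one occasion) and the 'substantial change' in frequency (a shift of at least two ordered categories), can be sketched as simple functions. The category labels follow the text; the beverage volumes, ABV values, and the ethanol-density constant are standard but not taken from the paper.

```python
ETHANOL_DENSITY = 0.789  # grams of ethanol per ml of pure ethanol

FREQ_LEVELS = ["non-drinker", "2-3 times a month", "once a week",
               "2-3 times a week", "4+ times a week"]

def drinking_pattern(max_daily_volumes):
    """Classify 'drinking pattern' from {beverage: (max daily ml, abv)}.

    Binge drinking = more than 80 g of ethanol from a single type of
    beverage on one occasion, the threshold used in the paper.
    """
    if not max_daily_volumes:
        return "non-drinker"
    grams = [ml * abv * ETHANOL_DENSITY for ml, abv in max_daily_volumes.values()]
    return "binge drinker" if max(grams) > 80 else "non-binge drinker"

def frequency_change(freq_t2, freq_t1):
    """Classify the change in drinking frequency between waves t-2 and t-1:
    a shift of at least two ordered categories counts as substantial."""
    shift = FREQ_LEVELS.index(freq_t1) - FREQ_LEVELS.index(freq_t2)
    if shift >= 2:
        return "substantial increase"
    if shift <= -2:
        return "substantial decrease"
    return "stable"
```

For example, 300 ml of vodka at 40 per cent ABV is roughly 95 g of ethanol, so a respondent reporting that maximum daily volume would be classed as a binge drinker.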
However, not adjusting for them ran the risk of leaving residual confounding. Because we were not sure whether these variables were confounders, mediators, or both, we entered them into the models in a stepwise fashion and took extra care in interpreting the effects adjusted for them. Respondent's age at the start of the follow-up period was assigned to a 5-year age group. Calendar time was measured as the year when the survey was conducted. Education was assigned to one of three categories: 'incomplete secondary'; 'secondary, specialist, and professional', which included those who completed secondary education and those who then went on to undertake specialist education or professional and vocational-technical training (forms of applied professional training conducted in colleges, not universities); and 'university and above'. Employment status was also assigned to one of three categories: 'unemployed', 'employed', and 'other', the last including groups such as students and housewives. Household income was adjusted to allow for household size using an OECD (Organization for Economic Cooperation and Development)-modified scale (Hagenaars et al. 1994), and then assigned to the appropriate decile within the overall range of income. The self-assessed health of respondents, which they had reported on a five-point scale from 'very poor' to 'very good', was grouped into three categories: 'very poor and poor', 'fair', and 'good and very good'. A binary variable indicated whether the person interviewed had children under the age of 16 living with them. For women a binary variable indicating their current pregnancy status was also included.
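The OECD-modified equivalence scale cited above (Hagenaars et al. 1994) assigns a weight of 1.0 to the first adult, 0.5 to each further household member aged 14 or over, and 0.3 to each child under 14; equivalised income is household income divided by this scale. A minimal sketch (the function names are ours, not the paper's):

```python
def oecd_modified_scale(n_adults, n_children):
    """OECD-modified equivalence scale: 1.0 for the first adult,
    0.5 for each further member aged 14+, 0.3 for each child under 14."""
    if n_adults < 1:
        raise ValueError("household needs at least one adult")
    return 1.0 + 0.5 * (n_adults - 1) + 0.3 * n_children

def equivalised_income(household_income, n_adults, n_children):
    """Household income adjusted for household size and composition."""
    return household_income / oecd_modified_scale(n_adults, n_children)
```

So a couple with two young children has a scale of 2.1, and a household income of 21,000 roubles corresponds to an equivalised income of 10,000, which would then be assigned to its decile within the overall income distribution.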
To take account of place, a categorical variable divided Russia into four 'areas' based partly on geography and partly on level of urbanization: respondents were classified as living either in the 'metropolitan areas' of Moscow and St Petersburg, or in 'Central, Urals, North, and North-west', 'Volga and the North Caucasus', or 'Siberia and the Far East'.

Statistical methods

We modelled the data using discrete-time hazard models (Fahrmeir 1998) in which the probability of moving between one union state at wave t − 1 and another union state at a succeeding wave of the RLMS survey, t, was expressed conditionally on the union state at time t − 1, and on the values taken by other relevant covariates at time t − 1 and, in some cases, covariates at time t − 2. We fitted several logistic and multinomial logistic regression models. For model 1 we used logistic regression to model entry into either cohabitation or marriage by those individuals who had previously never married or who were divorced or widowed and not cohabiting. For model 2 we used a multinomial logistic regression to model the competing risks of people embarking on either cohabitation or marriage, and for model 3 we applied a multinomial logistic regression to model the likelihood that a cohabiting individual would convert their relationship from a cohabiting union to marriage. We chose to model these particular transitions because they occurred most frequently within the sample population. For the different models, we used subsets of individuals according to whether they were 'at risk' of making the relevant transitions at wave t − 1. We also combined union status variables at time t − 1 and time t; for the model of conversion of cohabitation into marriage, the at-risk group at time t − 1 was 'cohabiting' rather than 'unmarried and not cohabiting', and the outcome at time t was grouped as 'still cohabiting', 'married', or 'neither of these'.
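The risk sets for the three models described above can be sketched as filters over wave-to-wave records; the record structure and field names here are hypothetical, not the paper's data layout.

```python
UNMARRIED = {"never married and not cohabiting", "divorced", "widowed"}

def risk_set(records, model):
    """Select the records 'at risk' of each modelled transition.

    Models 1 and 2 condition on being unmarried and not cohabiting at
    wave t-1 (entry to any union; competing risks of cohabitation vs
    marriage); model 3 conditions on cohabiting at wave t-1.
    """
    if model in (1, 2):
        return [r for r in records if r["status_t1"] in UNMARRIED]
    if model == 3:
        return [r for r in records if r["status_t1"] == "currently cohabiting"]
    raise ValueError("model must be 1, 2, or 3")

# Hypothetical wave-to-wave records: status at t-1 and outcome at t
records = [
    {"status_t1": "never married and not cohabiting", "status_t": "married"},
    {"status_t1": "currently cohabiting", "status_t": "married"},
    {"status_t1": "divorced", "status_t": "currently cohabiting"},
]
```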
In a further analysis to explore the effect of changes in alcohol consumption on union formation, we used a subsample of our data set, restricted to those individuals for whom there were data from three successive waves of the RLMS survey (t − 2, t − 1, and t). The models of entry into a union and that of the conversion from cohabitation into marriage were refitted, and now included 'drinking frequency at t − 2' as a predictor, as well as 'change in drinking frequency' to indicate whether or not an individual's drinking frequency had altered significantly between time t − 2 and time t − 1, as follows:

logit{p(entry to union by time t | at risk of entry to union at t − 1 and t − 2; X_{t−1}, X_{t−2})}

where X_{t−1} represents covariates and 'change in drinking frequency' at time t − 1 and X_{t−2} represents 'drinking frequency' at time t − 2. If a significant effect associated with the 'change in drinking frequency' variable was found, it would suggest that a recent change in drinking frequency was a predictor of union formation over and above an individual's baseline 'drinking frequency', as measured at time t − 2. The same models were also fitted with 'drinking pattern' substituted for 'drinking frequency' and 'change in drinking pattern' for 'change in drinking frequency'. Because of the conditional structure of the model, the log likelihood components from each time point were independent and, assuming that the model holds, they could be summed to allow a single overall fit using standard regression. This method is described as 'pooled logistic regression' in epidemiological and demographic studies (D'Agostino et al. 1990; Grundy and Kravdal 2008). The procedure pools the subjects at risk and the events across all periods of observation, which means that one individual in the panel study may contribute to several periods of observation.
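The pooling described above amounts to expanding the panel into person-period records, one per at-risk interval, before fitting an ordinary logistic regression. A minimal sketch (the function name, record layout, and toy union-state labels are illustrative, not the RLMS coding):

```python
def person_periods(panel, at_risk_states=("unmarried, not cohabiting",)):
    """Expand per-person wave histories into (t-1, t) records for
    pooled discrete-time logistic regression. `panel` maps a person
    id to a wave-ordered list of (wave, union_state, covariates)."""
    rows = []
    for pid, waves in panel.items():
        for (w0, s0, x0), (w1, s1, _x1) in zip(waves, waves[1:]):
            if s0 not in at_risk_states:
                continue  # only those at risk at t-1 enter the risk set
            rows.append({
                "id": pid,
                "wave": w1,
                "entered_union": int(s1 in ("cohabiting", "married")),
                # covariates are lagged: measured at t-1, predicting t
                **{k + "_lag": v for k, v in x0.items()},
            })
    return rows
```

An individual observed over three waves while at risk contributes two observation periods, which is exactly why one person can appear several times in the pooled regression.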
Invariance of the regression parameters over time and age was assessed by testing for interactions with age and calendar time using likelihood ratio tests. Because the RLMS uses a multi-stage sampling design we calculated robust standard errors, which used information on primary sampling units (PSUs) as a cluster variable. In some cases observations were included from individuals living in the same household, and we therefore adjusted the standard errors to allow for dependence induced at the household level. Because of the different drinking habits of men and women we analysed data for the sexes separately. After developing the models using complete cases, with all items of data available, we explored the effect of missing data by fitting multiple imputation (MI) models under the missing-at-random (MAR) assumption (Carpenter and Kenward 2013). According to this assumption, the probability of 'missingness' (that is, that a piece of data has been omitted from the record of an individual) is entirely dependent on and explained by the observed data. We never know if this is a valid assumption, but its application will usually reduce any bias in our models caused by differences between those individuals for whom items of data are missing and those whose data are complete, and may increase the precision of our models by allowing the inclusion of those with missing data in addition to those for whom the data are complete. To determine which variables to include in the MI models, we fitted stepwise logistic regression models that would predict loss to follow-up from time t − 1 to time t using backward selection to deselect any covariates that were insignificant at the 5 per cent level. In addition to those variables in the models we had constructed as part of our earlier analyses, we included covariates of socio-economic status, such as occupational class and asset ownership. We also included indicators of life satisfaction, household size, and whether or not an individual was a smoker.
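Cluster-robust standard errors of the kind described above come from a sandwich estimator in which scores are summed within each PSU before being squared. A minimal NumPy sketch for the OLS case (the CR0 form, with no small-sample correction; the RLMS analysis would apply the analogous formula to the logistic likelihood):

```python
import numpy as np

def cluster_robust_cov(X, resid, clusters):
    """CR0 sandwich: (X'X)^-1 (sum_g s_g s_g') (X'X)^-1,
    where s_g = X_g' e_g is the summed score of cluster g."""
    X = np.asarray(X, dtype=float)
    resid = np.asarray(resid, dtype=float)
    clusters = np.asarray(clusters)
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(clusters):
        Xg, eg = X[clusters == g], resid[clusters == g]
        s = Xg.T @ eg            # summed score within the cluster
        meat += np.outer(s, s)
    return bread @ meat @ bread
```

With each observation in its own cluster this collapses to the ordinary HC0 heteroscedasticity-robust estimator, which makes a convenient sanity check.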
Variables that remained significant after backward selection were included in the MI models. After creating multiple imputed data sets, the analysis was rerun once more and the resulting models combined using Rubin's rules (Rubin 1976), which take into account variation both within and between data sets. The MI procedure was implemented using the Stata commands ice and mim. After the MI procedure all missing values were imputed, including missing values on the outcome at time t (attributable to attrition) and missing values on covariates at time t − 1. Finally, the models based on MI were compared with the analysis of the cases for which all the data were complete. If no non-trivial difference was found between the MI and 'complete case' analyses, we could be more confident, although never completely so, that differences between individuals with 'complete' and those with 'incomplete' data were not causing biased estimates of the association between alcohol consumption and union formation.

A description of the sample used in our main analysis

In the pooled data taken from waves 8 to 19 of the RLMS we were able to observe a total of 15,326 periods between wave t − 1 and wave t for unmarried men and 18,390 such periods for unmarried women at wave t − 1 (here 'unmarried' refers to never married, divorced, widowed, and cohabiting). The sample used to construct our first and second models, from those unmarried at wave t − 1, consisted of 20,853 observation periods, drawn from the records of 7,505 individuals who could be followed from wave t − 1 to wave t. Forty-five per cent of sample members were male. The sample used to construct our third model, used to study those cohabiting at wave t − 1, consisted of 8,137 observation periods between wave t − 1 and wave t, drawn from 3,532 individuals, 47 per cent of whom were male.
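Rubin's rules, used above to combine the refitted models, reduce to two formulas: the pooled estimate is the mean across the m imputed data sets, and its total variance adds the average within-imputation variance to an inflated between-imputation variance. A small sketch for a single coefficient:

```python
from statistics import mean, variance

def rubins_rules(estimates, variances):
    """Combine one coefficient across m imputed data sets.
    Returns (pooled estimate, total variance), where
    total = W + (1 + 1/m) * B."""
    m = len(estimates)
    if m < 2:
        raise ValueError("need at least two imputed data sets")
    q_bar = mean(estimates)      # pooled point estimate
    w = mean(variances)          # W: average within-imputation variance
    b = variance(estimates)      # B: between-imputation variance (n-1)
    return q_bar, w + (1 + 1 / m) * b
```

Note that `statistics.variance` uses the n − 1 denominator, which is the correct between-imputation term.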
Table 1 shows the characteristics of the pooled sample of 'unmarried' men and women at wave t − 1, and compares the 'unmarried and not cohabiting' with the 'unmarried and cohabiting'. The majority of the 'unmarried and not cohabiting' were aged less than 25 years, and this was particularly true for the men. The mean age of those not cohabiting, for both sexes combined, was 26.7 years, whereas those who were 'cohabiting and not married' tended to be older, with a mean age of 31.8 years. Among the 'unmarried and not cohabiting', men were more likely than women to be young, to have never married, not to have completed their secondary education, and to be unemployed. The most common employment status for the 'unmarried and not cohabiting' was 'other', which usually meant they were students. The majority of those in the sample who were cohabiting were employed. In line with previous cross-sectional studies conducted in Russia (Bobak et al. 1999; Jukkala et al. 2008), women were found to be significantly more likely to be non-drinkers, and those who did drink were less likely than men to drink frequently and to indulge in binge drinking. In our sample of unmarried individuals 43 per cent of men and 50 per cent of women reported that they had not had an alcoholic drink in the 30 days before their interview. Amongst the 'cohabiting' individuals in the sample, approximately half the men reported binge drinking in the previous 30 days, but just 20 per cent of the women. Tables 2 and 3 show the odds ratios derived from the logistic regression of 'entry into either a cohabitation or a marriage' according to the drinking habits of men and women before they formed the new union. Table 2 shows the odds ratios by frequency of consumption, and covariates included in the model.
Since there was little change in the odds ratios when the covariates were added in a stepwise fashion, we show only the results mutually adjusted for all the other variables in the model. The most significant of the results in the tables is that those individuals of both sexes who were non-drinkers reduced their odds of entering a partnership by 20-25 per cent. Table 2 shows that as the frequency of an individual's drinking increased, so the odds of their entering a union also increased significantly. Tables 2 and 3 also show that the odds of entering a union between time t − 1 and time t were significantly higher for those men and women who, at time t − 1, were aged less than 35, or were divorced or widowed. The odds were higher for women with low levels of education than for those with higher levels. Both men and women who were not part of the labour market, that is, those in the 'other' employment category, had a greater likelihood of remaining single between waves t − 1 and t than those who were in employment. Amongst women, pregnancy was associated with a six-fold increase in the odds of entering a union. For both sexes, having a child or children under the age of 16 increased the likelihood of entering a union. The associations with the covariates shown in Tables 2 and 3 are broadly in line with findings from the Gender and Generations Survey (GGS) and other sources (Kostova 2007; Gerber and Berman 2010). Using likelihood ratio tests, we tested for interactions between the alcohol variables and each of the following: original union status at time t − 1, age, calendar time, and the other variables in the model; we found no evidence of any. Broadly the same pattern of associations was seen for both men and women. Table 4 shows the odds of the competing risks of entering cohabitation or marriage by wave t for those who were not in a union at wave t − 1, according to their alcohol consumption.
The adjusted results show that for both men and women there was a significant positive association between the frequency with which alcohol was consumed and the odds of entering cohabitation rather than marriage (p = 0.008 and 0.007, respectively). The odds of cohabitation relative to marriage were 4 times higher for men drinking at least 4 times a week than for men drinking 1-3 times a month; for women, the odds were only twice as high. There was, however, no evidence of an interaction between frequency of drinking and sex of the drinker (p = 0.18). Like Table 2, Table 4 also indicates that the chance of entering any kind of relationship was significantly lower for non-drinkers than for non-binge drinkers. There were no significant interactions between any of the variables related to drinking and age, marital status at time t − 1, calendar time, or any of the other variables in the model.

Conversion from cohabitation to marriage

For both sexes, the odds of converting from cohabitation to marriage were highest amongst 'non-drinkers' and decreased as drinking frequency increased (Table 5). Male binge drinkers were significantly less likely to convert from cohabitation to marriage compared to those who drank more moderately. The results changed little when we added in our socio-economic and health variables. There were no significant interactions between the variables related to drinking and any of the other covariates. Alcohol consumption was not associated with the probability of remaining in a cohabiting union vs. a return to single status during the period between wave t − 1 and wave t.

Further analysis of changes in drinking frequency and pattern

We used a subset of men and women for whom we had data from three consecutive waves of the RLMS to assess whether changes in drinking frequency and drinking pattern affected entry into a union.
The individuals forming this subsample, for which there were a total of 18,992 observations, were similar to the main sample in age, education, and reported alcohol consumption. Table 6 shows the associations found between changes in 'drinking frequency' and 'drinking pattern' and the risk of entering into a union for each sex, and Table 7 shows the associations found between the two types of change in drinking and the conversion from cohabitation to marriage, again for each sex. Table 6 shows that if men changed their drinking frequency or pattern this did not significantly affect their chances of forming a union within the follow-up period, but if women increased their drinking frequency in this period their chance of forming a union increased. Table 7 shows that if men who cohabited increased their drinking frequency they were significantly less likely to convert from cohabitation to marriage, but this was not true of women. When cohabiting non-drinkers began to drink, or moderate drinkers living with a partner started to binge drink, they reduced their odds of converting from cohabitation to marriage by 55 per cent.

The analysis of missing data and the use of multiple imputation

In the data from the full sample, shown in Table 1, 22.1 per cent of men and 16.9 per cent of women could not be followed from wave t − 1 to wave t. The numbers lost to follow-up were significantly higher for more frequent drinkers and younger individuals, and attrition was also associated with level of education, employment status, and area of residence in Russia (defined in the Methods section). Union status at wave t − 1 was also associated with attrition: never married, cohabiting, and divorced people were more likely to be lost to follow-up between wave t − 1 and wave t. The possible bias caused by this differential rate of attrition was investigated using MI under the MAR assumption, details of which were given in the Methods section above.
When the same models used to calculate the figures in Tables 2 and 3 were refitted using data obtained from the MI procedure, the association for men between non-drinking and remaining out of union was weakened but did not disappear completely. When the models in Table 4 were refitted, the MI models showed the same pattern of associations for both sexes, but with levels of significance lower than those in Table 4. The MI versions of the models in Table 5 showed the same patterns as those featured in the table, with the levels of association virtually unchanged.

Discussion

Statistically the strongest result from our study was the intriguing finding that in Russia, after adjustment for a range of factors such as age, education, and health, those who consumed alcohol were significantly more likely to enter a union than those who did not drink. In addition, we also found some evidence that people not in unions and who were frequent drinkers were more likely to enter a cohabitation than to get married. Amongst those who were cohabiting, those who did not drink were more likely to convert their union into a marriage than were more frequent or binge drinkers. Thus, frequent or binge drinking seems to have had apparently opposite effects on union status: increasing the likelihood of forming a union in the first place, but lowering the chance of converting a cohabiting union into a marriage. On the whole, the same relationship between patterns of alcohol consumption and union formation was seen for both sexes. There were no significant differences in the effect of alcohol consumption between those in the various union states when first observed at wave t − 1, and little attenuation of the effects of alcohol after adjustment for factors such as education, employment, income, or health. Further analysis of recent changes in drinking behaviour showed that if a woman substantially increased the frequency with which she drank, she became more likely to enter a union.
We also demonstrated that for both sexes a substantial increase in drinking pattern by cohabiters (moving from non-drinking to moderate drinking or moving from moderate drinking to binge drinking) decreased the likelihood that their union would be converted into a marriage. In combination, these associations between drinking behaviour and union formation are likely to have produced the association between greater alcohol consumption and delayed entry into marriage. Together with recent studies which used the RLMS data to show that heavier drinkers were more likely to experience divorce (Keenan et al. 2013), these findings suggest that the levels of alcohol consumption in Russia result in fewer people embarking on, and remaining in, the married state. The odds ratios we report may seem small when considered over the course of a year-long interval, but over a longer period they could have an appreciable cumulative effect on the likelihood of entering a union. Continued heavy drinking over a longer period could have detrimental effects on health, occupation, and socio-economic status, which may all in turn affect the chance of union formation. Alcohol consumption seems to be an important, and often overlooked, factor affecting an individual's pattern of union formation over the life course, and one which may play a particularly important role in a country like Russia where heavy drinking is common. The contradictory nature of some of the associations between alcohol consumption and union formation is in line with findings from studies in Western countries, which indicate that heavy drinking is associated with the early assumption of adult roles, including union formation (Newcomb and Bentler 1987; Forthofer et al. 1996), and that heavy drinkers tend to delay marriage (Fu and Goldman 1996; Waldron et al. 2011).
It should be noted, however, that the association between non-drinking and non-entry into a union was not found in any of the previous studies of the relationship between alcohol consumption and union formation, which all used data from the USA (Newcomb and Bentler 1988; Forthofer et al. 1996; Fu and Goldman 1996). The association between non-drinking and reduced entry to union could be explained by a number of factors. First, some individuals who do not drink alcohol may suffer from the type of health problems previously found to be associated with an increased likelihood of non-drinking, such as anxiety and depression (Rodgers et al. 2000) or cardiovascular disease (Marmot et al. 1981; Malyutina et al. 2002). These health problems may reduce sufferers' chances of forming a union. Alternatively, it is possible that by reducing their intake of alcohol individuals also reduce their opportunities for social interaction, and thus their likelihood of meeting potential partners. The reports of several qualitative studies have stressed the social function of drinking in Russia (Simpura and Paakkanen 1997; Pietilä and Rytkönen 2008b; Saburova et al. 2011), although they tend to concentrate on men's drinking behaviour. The fact that our results are relatively consistent for both sexes, and our finding that women who recently increased the frequency of their drinking were more likely to form a union than those who had not done so, both suggest that a factor such as sociability, which has a positive association with alcohol consumption, rather than poor health, which has a negative association, is responsible for the association between increased drinking and union formation.
The importance of changes in drinking behaviour over the life course to an individual's chances of union formation, and the fact that this has an impact over and above that of recent alcohol consumption, highlights the need to understand drinking behaviour as a dynamic and cumulative process over an individual's lifetime. This is an issue which could be further explored if more comprehensive longitudinal data were available. Our study had several limitations, one of which was the likelihood of bias in the samples we used. It is likely that the individuals selected for study were untypical in their levels of alcohol consumption, because a greater proportion of frequent drinkers and heavy drinkers left observation, and because individuals in these groups were also less likely to have participated in a population survey such as the RLMS in the first place (Jousilahti et al. 2005). When compared with the proportion of single men in previous studies who reported that they had not drunk alcohol in the last month (Bobak et al. 1999; Pomerleau et al. 2008), the proportion of men in our sample who reported not drinking over the previous 30 days was, at 46 per cent of the total, rather high. Another possible indication of bias is that the sample seemed to have had a higher proportion of men than women with lower education. The underrepresentation of heavy or frequent drinkers is a common limitation of population panel data, and one for which it is difficult to correct. However, assuming that the associations reported here hold for the whole population, it is reasonable to suppose that had our sample included more heavy drinkers, the associations we found would have been strengthened. Another possible source of bias is the likelihood of attrition rates being related to outcomes, because entering a cohabitation or marriage might make sample members more likely to drop out of the survey; for example, the couple might go somewhere else to live.
As a result, fewer transitions between union states may have been captured than occurred. However, although we have not shown the analysis here, we calculated that marriage rates of individuals interviewed in the RLMS were broadly comparable with Russia's national crude marriage rate over the relevant period. To attempt to correct for biases in attrition, we used MI models, assuming MAR. These models imply that any missing values are entirely explained by the observed data, which included data on marital status and alcohol consumption. The results from the MI models did not show substantially different patterns of effect, but reliance on the inherently untestable MAR assumption means that it is impossible to know if the individuals who dropped out of the sample did so as a result of unobserved factors. Procedures for dealing with data that are missing but not at random (MNAR) require much more complex solutions than those we could apply in our study, generally involving the use of appropriate sensitivity analysis (see, e.g., Carpenter and Kenward 2013, Chapter 10). The RLMS survey itself had some limitations. Alcohol consumption was self-reported in the RLMS, and as a result was probably under-reported. The questions relating to alcohol within the survey did not permit an individual's total alcohol consumption to be calculated, nor was it possible to investigate the particular aspects of hazardous drinking behaviour characteristic of the Russian drinking pattern considered in previous studies. We were also unable to establish the relationship and union history of individuals before they were interviewed as part of the RLMS survey, and this may have led to some misclassification of individuals' union status. For example, the 'never married' category will have included not only those who had never been in any union, but also those who had previously cohabited but were not doing so when interviewed at wave t − 1.
Similarly, those categorized as cohabiting at t − 1 will have included some individuals who had previously divorced or been widowed. Further, an individual's previous relationships or marital history might affect both their propensity to engage in adverse drinking behaviour, or behaviours, and their likelihood of entering a future union, over and above any effect their current marital status may have on that possibility (Grundy and Tomassini 2010). When working with the multinomial models of cohabitation vs. marriage shown in Table 4, a problem of misclassification could have biased the associations we found. Given the average length of time between observation points t − 1 and t of approximately 1 year, it is likely that some individuals both began to cohabit and then made the transition to legal marriage within the year-long interval. In these cases the individual would have been classified as married at time t, when in fact they could actually have been classified as having been in a cohabitation, or more accurately in both a cohabitation and a marriage. To assess the likely scale of this misclassification in Russia we used another Russian data set, the GGS conducted in 2004. The GGS showed that within a timeframe similar to that of our study, 1998-2004, approximately 32 per cent of new marriages that had begun as cohabitations had been converted to marriages within 12 months or less. If these figures, which imply that 32 per cent of observed marriages were actually misclassified cohabitations, are applied to the RLMS data, the effect would be to increase the number of individuals entering a cohabitation between time t − 1 and time t. The number of cohabitations would increase from 1,109 to 1,368, that is, from 5.6 to 6.9 per cent of those not in union at time t − 1; the corresponding figures for the number entering marriage would decrease from 809 to 550, 4.1 to 2.8 per cent of those not in union at time t − 1.
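The adjustment described above is a simple reallocation: roughly 32 per cent of the 809 observed marriages are reclassified as cohabitations. A short check of the arithmetic:

```python
# Sensitivity reallocation: ~32 per cent of observed marriages are
# assumed to be cohabitations that converted within the 12-month interval.
observed_cohabitations = 1109
observed_marriages = 809
conversion_share = 0.32

reclassified = round(conversion_share * observed_marriages)
adjusted_cohabitations = observed_cohabitations + reclassified
adjusted_marriages = observed_marriages - reclassified
```

This reproduces the adjusted counts quoted in the text: 1,368 cohabitations and 550 marriages.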
How might the under-enumeration of cohabitation or the over-enumeration of marriage bias the associations seen? Unfortunately, the details of interviewees' alcohol consumption were not collected as part of the GGS, so we can only speculate about the direction of bias. If union status had been randomly misclassified (i.e., independently of alcohol consumption), the effect would have been to dilute the associations found. However, if, as seems more plausible, light drinkers converted to marriage more quickly than heavy drinkers, and were therefore more likely to have been misclassified, our analysis will have overestimated the association between heavy drinking and cohabitation, and as a result made the relationship between alcohol consumption and cohabitation appear more distinct from the relationship between alcohol and marriage than was actually the case. The main strength of our study was the use of longitudinal data, which allowed us largely to eliminate the possibility of reverse causality in the relationships we were observing. By using data on alcohol consumption which preceded the transition to a new union state, we could be confident that the change of state was the result of the alcohol-related behaviour and not vice versa. Nevertheless, the analysis does not necessarily imply a causal relationship. There is always the possibility of some residual confounding by unmeasured factors. For example, one such factor might be family background, and others might be personality or lifestyle characteristics not measured in our data set. Models of patterns of union formation, such as those embodied in the theory of the second demographic transition, rarely take into account factors such as alcohol consumption, despite it being an integral part of the culture of most European countries, and one that produces short-term and long-term changes in behaviour that affect the success of individuals' personal relationships.
Moreover, such behaviour changes are not likely to have a uniform effect on personal relationships cross-culturally, because in some societies drunken behaviour is more socially acceptable and 'expected' than in others. The findings reported in this paper suggest that the role of alcohol should be more frequently considered in demographic models of union formation (Fu and Goldman 1996). Our findings also suggest that recent Russian demographic patterns, including increased rates of cohabitation and lower rates of marriage, may be related to the country's high rates of heavy and hazardous drinking and its causes, rather than to changes in societal values about marital behaviour itself. The causes of change in alcohol-related behaviour may be related to underlying socio-economic problems, the consequences of which could include changes in cohabitation and marriage rates, which could, in turn, affect the country's fertility levels. The results from our study also contribute to the more general debate about the reasons why cohabitation, rather than marriage, is associated with a greater risk of adverse outcomes, such as higher rates of domestic violence (Brownridge and Halli 2000; Kenney and McLanahan 2006), lower reported rates of general well-being (Soons and Kalmijn 2009) and psychological well-being (Kim and McKenry 2002; Dush and Amato 2005), and higher rates of depression (Lamb et al. 2003). Because heavy drinkers are more likely to cohabit and alcohol consumption is often associated with these adverse outcomes, it could be acting as a confounding variable or common cause both of adverse health and social outcomes and cohabitation. These potential causal relationships need to be further investigated using appropriate longitudinal analysis. Research of this kind is particularly important given the growth of cohabitation both in Russia and elsewhere over the past few decades.
Of course, it is possible that as cohabitation increasingly becomes the norm in Russia, its association with the adverse outcomes discussed above will weaken. Our study is, to our knowledge, the first to investigate the effect of alcohol on union formation within Russia using longitudinal data, and complements previous work, which studied divorce alone (Keenan et al. 2013). The new study contributes to knowledge on the factors affecting Russian patterns of cohabitation and marriage (Gerber and Berman 2010; Philipov and Jasilioniene 2010) and also to the debates on the interplay between marital status and health in Russia (Pridemore et al. 2010). The results suggest that as well as being associated with negative health effects and increased marital instability, alcohol consumption in Russia also contributes to the country's particular patterns of union formation.
Novel prediction of protein biomarkers in interferon-gamma-stimulated breast cancer cells.

Objective: Proteomics is the large-scale study of localization, identification, structure, and function of the proteome. A proteome is the complete set of proteins expressed and modified by an organism under a specific set of environmental conditions. This study was undertaken to investigate novel protein biomarkers that play a role in breast cancer under inflammatory conditions.

Methods: Two-dimensional gel electrophoresis (2-DE) was applied in the context of the breast cancer model system to investigate the effect of interferon-gamma (IFN-γ) on differential protein expression in the breast cancer-derived cell lines CAMA-1 and MDA-MB-231. Whole cell lysates were prepared from IFN-γ-stimulated and non-stimulated CAMA-1 and MDA-MB-231 cells for 2-DE to obtain information on potential differential protein expression. Protein spots in the gels were visualized by silver staining and analyzed with Progenesis SameSpots. Gels were then scanned using the Epson image scanner with LabScan 6.0 software. The ExPASy tool was used to identify and quantify breast cancer cell membrane proteins expressed in response to IFN-γ.

Results: In the present proteomics study, a series of differentially expressed proteins were analyzed in IFN-γ-stimulated CAMA-1 and MDA-MB-231 cells. While results obtained from this analysis can be used as preliminary data to identify differences between untreated and IFN-γ-treated samples, they were not used for further mass spectrometry analysis.

Conclusion: The data described and discussed here can be utilized for further data validation projects and could assist in the discovery of new breast cancer-related proteins and molecular pathways.

Introduction

Proteomics is the large-scale study of localization, identification, structure, and function of the proteome.
A proteome is the complete set of proteins expressed and modified by an organism under a specific set of environmental conditions. The application of mass spectrometry (MS) in proteomics analysis has made it a powerful tool for protein characterization. [1] The classical method for the quantitative analysis of complex protein mixtures is the separation of proteins by two-dimensional gel electrophoresis (2-DE) and identification of the resolved proteins by MS or tandem MS (MS/MS). [2] 2-DE accommodates a large mass range and permits the analysis of the entire set of proteins. Furthermore, proteins are separated with high resolution by isoelectric point (pI) and molecular mass. Resolved protein spots can be compared among different samples and used for MS analysis. However, the method has some disadvantages, such as gel-to-gel variation, a limited dynamic range, and difficulty in detecting basic or hydrophobic proteins, proteins of low molecular weight, and proteins with pIs outside a limited range. [3] Recent developments in non-gel-based and label-free shotgun proteomics techniques have rendered quantitative protein study faster, cleaner, and simpler. [4][5][6] Proteomics studies provide a global analysis of protein expression and function. In contrast to the genome, the proteome is very dynamic in nature due to post-translational modifications. [7,8] Therefore, to recognize the physiological and pathological events that occur in health and disease, it is important to detect and analyze proteins from their native proteome. Interferon-gamma (IFN-γ) is now well recognized as a pleiotropic cytokine that plays a major role as an effector molecule in antitumor immunity, with the ability to suppress tumor growth. On the other hand, it is also known as a tumor promoter that can support the outgrowth of tumor cells. [9,10] This pleiotropic cytokine is involved not only in cancer but also in various other mechanisms.
[11] Taking into consideration the importance of proteomics, 2-DE was applied in the context of the breast cancer model system to investigate the effect of cytokines such as IFN-γ on breast cancer cells. The ExPASy tool (http://web.expasy.org/tagident/) was used to identify and quantify breast cancer cell membrane proteins expressed in response to IFN-γ. This investigation may aid in developing an understanding of the molecular aspects of breast cancer and could also be helpful for drug design for cancer patients. Preparation of cell lysate The cell lysate was prepared as described previously. [14,15] Untreated and IFN-γ-treated CAMA-1 and MDA-MB-231 cells in 100 mm cell culture dishes (Nunc, UK), at a density of 1×10⁷ cells per sample, were used to make cell lysate. The cells were detached from the culture dishes using Accutase, counted, and washed with phosphate-buffered saline. The cell pellet was resuspended in 1 ml of protein extraction buffer containing 7 M urea, 2 M thiourea, 4% CHAPS, ×1 protease inhibitor cocktail, 20 mM DTT, 1% ampholyte, and benzonase. Cells were vortexed and sonicated for 5 min at 4°C. Cell lysates were spun at 20,000 rpm for 10 min; the supernatant was collected into a clean microcentrifuge tube and stored at −80°C for later use. The protein concentration of cell lysates was estimated using the Coo Assay (Uptima, Interchim, France) following the manufacturer's instructions. For isoelectric focusing (IEF), the IPGPhor was cleaned with strip holder cleaning solution (GE Healthcare Bio-Sciences AB). The Ettan IPGPhor3 was switched on and a connection with the IPGPhor3 control software was established. The IPGPhor manifold was covered with 108 ml of Immobiline PlusOne DryStrip cover fluid, and the rehydrated strips were then placed in individual lanes of the Ettan IPG strip holder (GE Healthcare Bio-Sciences AB) under the fluid using tweezers, with the positive end toward the anode end of the manifold.
Hydrated filter wicks were placed between the IPG strips and the electrodes. The cathodic (−ve) filter wick was rehydrated with 150 µL of 100 mM DTT. The anodic (+ve) filter wick was hydrated with 150 µL of MilliQ water. The lid was closed and the IPGPhor program was run according to the programme below (the step table is not reproduced here). A holding step was added at the end so that the run could be left overnight. At the end of the program, the computer was disconnected and the IPGPhor was stopped. The paper wicks were removed with tweezers and discarded. The IPG strips were placed in a Petri dish, rinsed briefly with deionized water, labeled, and stored at −80°C for later use. Second-dimension gel electrophoresis The second-dimension gel electrophoresis (2D gel electrophoresis) was performed with the BIO-RAD PROTEAN® II assembly as per the instructions given by the manufacturer (Bio-Rad Laboratories Ltd., Watford, Hertfordshire, WD17 1ET). Briefly, the glass plates were cleaned with 70% ethanol, dried, assembled with 2 mm spacers, and clipped into the casting frame. Purite water was poured between the plates to check for leakage. Assemblies that leaked were taken apart and reclipped, and the process was repeated. On establishing a non-leaking system, the water was removed and the system was dried in situ with pressurized airflow. The gel solution for 12% SDS polyacrylamide gel electrophoresis (SDS-PAGE) was made with water, 1.5 M Tris-HCl, and 30% acrylamide solution. These were placed in a flask and degassed for 15 min at ambient temperature. The tetramethylethylenediamine, ammonium persulfate (APS), and SDS were added and mixed by stirring. The gel solution was poured in between the glass plates, avoiding any air bubbles, to 1 cm below the lowest plate. The top of the gels was covered with overlay buffer (water-saturated isopropanol, 80%) and allowed to polymerize overnight. The 2D electrophoresis was performed in the following steps.
Equilibration of the IPG strips Before the 2D gel run, the IPG strips containing the isoelectrically focused proteins were equilibrated and reduced. For each strip, two vials of 10 ml aliquots of the frozen equilibration buffer were thawed at room temperature. To one vial of equilibration buffer, 100 mg of DL-dithiothreitol (DTT) was added, while to the other, 400 mg of iodoacetamide was added and allowed to mix gently. The IPG strips were first equilibrated in equilibration buffer containing 1% DTT and then in a buffer containing 4% iodoacetamide, for 15 min each at room temperature. The IPG strips were rinsed with ×1 electrophoresis buffer before being placed on the second-dimension gel. Assembly and running of 2D gel Agarose sealing solution was heated to liquefy it. IPG strips were trimmed by 0.6 cm from each end, giving a final length of 16 cm. A small square of paper electrode wick (2 × 3 cm, half thickness) was loaded with 10 µl of molecular weight marker and placed on the top left-hand corner of the gel. The IPG strip was placed into the well of the 12% SDS-PAGE gel with the acidic side facing the glass plate hinge and sealed with agarose solution, avoiding any air bubbles. The electrophoresis tank was filled with 1.5 L of gel running buffer. The gels with strips were removed from the casting assembly and clipped onto the core unit of the PROTEAN tank. The core unit was lifted into the tank, running buffer was added to the top of the upper buffer chamber, and air bubbles were removed with a glass rod. The lid was fitted to the tank and the cables were connected to the power supply (Bio-Rad PowerPac 1000). Electrophoresis was carried out first at 50 V for 30 min and then at 150 V for about 4.5 h, or until the bromophenol blue dye front had reached the lower end. The core unit was then removed from the tank and disassembled, and the gels were removed from the clamps. The spacers were loosened, and one edge of the glass plate was lifted up with a spatula.
The gel was then placed in a glass container containing gel fixing solution. Silver nitrate staining Protein spots were visualized by silver nitrate staining as described previously. [16] Briefly, after electrophoresis, gels were fixed for half an hour in fixing solution, sensitized for 30 min, and washed with ultrapure water. Staining was carried out using 2.5% silver nitrate solution for 20 min, followed by a careful wash with ultrapure water for a maximum of 1 min. The gels were developed for 10-15 min until spots appeared, and the reaction was stopped by washing with stop solution for 10 min. The gels were washed 3 times with ultrapure water and stored in gel preserving solution at 4°C. Gel image capture and spot analysis Gels were scanned using the scanner (Epson image scanner III) with LabScan 6.0 software. First, the scanner was calibrated and set to use the transparent settings at 300 dpi with the blue filter. The scanner surface was cleaned with 70% ethanol and a little Purite water was poured on the surface. The gel was placed directly on the scanner and previewed, and any air bubbles present were smoothed out. The scan area of the gel was then selected and scanned. Gel images were saved as mel and tiff files. Scanned gel images were characterized with the Progenesis SameSpot software package (Nonlinear Dynamics Limited, UK). 2D gel analysis of CAMA-1 and MDA-MB-231 cells CAMA-1 and MDA-MB-231 cells were used in this study as a model for breast cancer, and differential expression of the breast cancer proteome in response to pro-inflammatory cytokines such as IFN-γ was investigated by 2-DE followed by protein spot analysis. The total proteome was separated by first-dimension isoelectric focusing on the basis of the isoelectric points (pI) of the various proteins.
The isoelectrically focused proteins were resolved by second-dimension SDS-PAGE. These data were used to gather preliminary results to discover the potential of differential protein expression in response to infections in the model system. An example of a 3D spot graph analyzed by Progenesis SameSpot software is shown in Figure 5. As summarized in Tables 1-3, 19 spots resolved at a given pI and molecular weight were identified in CAMA-1 cells and 17 spots were identified in MDA-MB-231 cells. The 2-DE results were analyzed using the ExPASy tool, which yielded several potentially useful results. The polypeptide spots that were randomly selected and analyzed are highlighted in Tables 1-3. Five of the 19 polypeptide spots generated from the CAMA-1 cell sample were randomly selected and analyzed. For example, polypeptide spot number 1741 correlated with IFN-induced transmembrane protein 10, which has a pI of ~6 and an MW of 25 kDa. Polypeptide spot number 2610, which is believed to be related to programmed cell death protein 5 (PDCD5), was also evaluated. PDCD5 has a pI of ~6 and an MW of 16 kDa, which is very similar to spot number 2610. PDCD5 is widely expressed in most types of normal human tissue and is upregulated in cells undergoing apoptosis. [17] It has since been confirmed that IFN-γ inhibits growth of human carcinoma cells through caspase-1-dependent induction of apoptosis. [18] It was assumed that IFN-γ played a critical role in apoptosis-inducing factors such as PDCD5. The investigation of spot number 1387, which has a pI of ~3 and an MW of 30 kDa, showed a strong similarity to cell cycle checkpoint protein RAD1. Cell cycle checkpoint proteins play an important role in controlling the division of damaged cells. [19] Another interesting polypeptide spot was number 2086, which has a pI of ~9 and an MW of 25 kDa. The search engine showed that this spot might be related to TNF-α-induced protein 8-like protein 2 (TIPE2).
TIPE2 is a novel negative immune regulator and an inhibitor of the oncogenic Ras in mice. However, its function in humans is still unclear. [20] Polypeptide spot number 2361, which has a pI of ~7 and an MW of 20 kDa, showed a correlation with cyclin-dependent kinase 4. Cyclin-dependent kinases (CDKs) play a central role in the orderly transition from one phase of the eukaryotic mitotic cell division cycle to the next. [21,22] CDKs regulate cell proliferation and coordinate the cell cycle checkpoint response to DNA damage. For this reason, it is assumed that the absence of spot number 2361 after IFN-γ treatment may reflect a regulator of tumor cell division. CDK4/6 inhibitors have proven to be attractive antineoplastic agents due to the importance of CDK4/6 activity in regulating cell proliferation. [23] In the same manner, five of the 17 polypeptide spots obtained from MDA-MB-231 cell lysates were randomly selected and analyzed. Polypeptide spot number 932, which has a pI of ~4 and an MW of 110 kDa, was investigated. This spot showed a strong correlation with melanoma-associated antigen C. Melanoma-associated antigens (MAGEs) are classified into two subgroups, I and II. Subgroup I consists of antigens whose expression is generally restricted to tumor or germ cells, whereas Subgroup II MAGEs are expressed in various normal adult human tissues. [24] Another polypeptide spot that was matched using the ExPASy tool was number 905, which has a pI of ~5 and an MW of 55 kDa. The database showed that spot number 905 corresponds to the Myc proto-oncogene protein. Discussion The Myc proto-oncogene is a "master regulator" which controls many functions, including cellular metabolism and proliferation. The Myc oncogene has been shown to induce apoptosis and has, therefore, been targeted to develop novel cancer therapies. [18] It was noticed that the data obtained from 2-DE indicate that the intensity of spot number 905 increased after IFN-γ treatment.
In this context, IFN-γ has been shown to induce apoptosis of human carcinoma cells through a caspase-1-dependent mechanism, which is believed to control the Myc proto-oncogene. [25] Therefore, IFN-γ might increase the apoptosis of breast cancer cells through the Myc proto-oncogene protein in the same manner. TP53-regulated inhibitor of apoptosis is linked to polypeptide spot number 2299, which has a pI of ~5 and an MW of 10 kDa. Polypeptide spot number 1720, which has a pI of ~4 and an MW of 20 kDa, showed a correlation with growth arrest and DNA damage-inducible protein (GADD45β), which has been reported to inhibit apoptosis by attenuating c-Jun N-terminal kinase activation. It has been reported that TNF-α treatment induces GADD45β protein expression through nuclear factor-kappa B-mediated transcription, and transforming growth factor-β also induces its expression. [26] Polypeptide spot number 1493, which is believed to be related to the HLA-DR alpha chain, was then evaluated. The HLA-DR alpha chain has a pI of ~2 and an MW of 30 kDa, which is very similar to spot number 1493. The expression of HLA-DR on cancer cells closely relates to a more favorable prognosis for cancer patients, but the immunological and non-immunological mechanisms are still obscure. [26] Conclusion The current study has found a number of differentially expressed proteins in the breast cancer model system in response to infectious agents. The data presented here could be used as a baseline for further detailed investigation. In this regard, functional assays need to be employed to further validate the data presented. Further analysis by mass spectrometry is also required to identify the sequences of the differentially observed polypeptide spots. Declaration of Interest The authors report no conflicts of interest.
Likelihood Assignment for Out-of-Distribution Inputs in Deep Generative Models is Sensitive to Prior Distribution Choice Recent work has shown that deep generative models assign higher likelihood to out-of-distribution inputs than to training data. We show that a factor underlying this phenomenon is a mismatch between the nature of the prior distribution and that of the data distribution, a problem found in widely used deep generative models such as VAEs and Glow. While a typical choice for a prior distribution is a standard Gaussian distribution, properties of distributions of real data sets may not be consistent with a unimodal prior distribution. This paper focuses on the relationship between the choice of a prior distribution and the likelihoods assigned to out-of-distribution inputs. We propose the use of a mixture distribution as a prior to make likelihoods assigned by deep generative models sensitive to out-of-distribution inputs. Furthermore, we explain the theoretical advantages of adopting a mixture distribution as the prior, and we present experimental results to support our claims. Finally, we demonstrate that a mixture prior lowers the out-of-distribution likelihood with respect to two pairs of real image data sets: Fashion-MNIST vs. MNIST and CIFAR-10 vs. SVHN. Introduction Out-of-distribution detection is an important area of study that has attracted considerable attention [28,11,21,31] as a way to improve the safety and reliability of machine learning systems. Detection methods based on density estimation using a parametric model have been studied for low-dimensional data [28], and deep generative models seem to be a reasonable choice when dealing with high-dimensional data. However, recent work [23,12,31,24,5] has shown that deep generative models such as VAEs [18], PixelCNN [34], and flow-based models [8,16] cannot distinguish training data from out-of-distribution inputs in terms of the likelihood.
For instance, deep generative models trained on Fashion-MNIST assign higher likelihoods to MNIST than to Fashion-MNIST, and those trained on CIFAR-10 assign higher likelihoods to SVHN than to CIFAR-10 [23]. Methods for mitigating this problem have been proposed from various perspectives [12,2,5,24]. We focus on the influence of the prior distribution of deep generative models on the likelihood assigned to out-of-distribution data. Although the typical choice is a standard normal distribution, various studies have analyzed alternatives [7,4,33,35]. However, existing work mainly focuses on the representative ability and the likelihood assigned to in-distribution data when evaluating prior distributions. To the best of our knowledge, no existing work has analyzed the effect that the prior distribution has on the likelihood assigned to out-of-distribution inputs. Here, we consider data sets that can be naturally partitioned into clusters, so that the underlying distribution can be approximated by a multimodal distribution with modes far apart from each other. This assumption is reasonable for many data sets found in the wild, such as Fashion-MNIST, which contains different types of images, including T-shirts, shoes, and bags. If a unimodal prior distribution is used to train generative models on such data sets, the models are forced to learn a mapping between unimodal and multimodal distributions. We consider this inconsistency an important factor underlying the assignment of high likelihood to out-of-distribution areas. We use untrainable mixture prior distributions and manually allocate similar data to each component before training, using the labels of the data sets or k-means clustering. Under these conditions, models trained on Fashion-MNIST successfully assign lower likelihoods to MNIST. Our approach also lowers the likelihoods assigned to SVHN by models trained on CIFAR-10. We provide three explanations for our observations.
First, as mentioned above, a multimodal prior distribution can alleviate the inconsistency between the prior and the data distribution, which is a possible factor underlying the out-of-distribution problem. Second, allocating similar data to each component can reduce the possibility of accidentally assigning undesirable out-of-distribution points to high likelihood areas. Our second-order analysis can theoretically justify this intuition in a manner similar to the work of Nalisnick et al. [23]. Third, out-of-distribution points are forced out of high likelihood areas of the prior distribution when a multimodal prior is used. Somewhat surprisingly, the out-of-distribution phenomenon still occurs when a model with a unimodal prior is trained only on data that would be allocated to one component in the multimodal case. This is a novel observation that motivates further investigation into designing the latent variable space to mitigate the out-of-distribution phenomenon. Related Work Our work is directly motivated by the recent observation that deep generative models can assign higher likelihoods to out-of-distribution inputs [23,5]. The use of prior distributions has been studied independently of this line of work. Out-of-Distribution Detection by Deep Generative Models Although model likelihood is often used to evaluate deep generative models, Theis et al. [32] showed that high likelihood is neither sufficient nor necessary for models to generate high quality images. Remarkably, Nalisnick et al. [23] have reported that deep generative models such as VAEs, flow-based models, and PixelCNN can assign higher likelihoods to out-of-distribution inputs. Similar phenomena have also been reported in parallel studies [5,12]. Solutions have been proposed from various perspectives. Hendrycks et al. [12] proposed "outlier exposure", a technique that uses carefully chosen outlier data sets during training to lower the likelihood assigned to out-of-distribution inputs. Bütepage et al.
[2] focused on VAEs and reported that the method used to evaluate the likelihood and the assumed observation distribution over pixels influence the likelihood assigned to out-of-distribution inputs. Another line of study uses alternative metrics. Choi et al. [5] proposed using the Watanabe-Akaike Information Criterion (WAIC) as an alternative. Nalisnick et al. [24] hypothesized that out-of-distribution points are not located in the model's "typical set", and thus proposed the use of a hypothesis test to check whether an input resides in the model's typical set. Prior distribution A typical choice of prior distribution for deep generative models such as VAEs and flow-based models is a standard Gaussian distribution. However, various studies have proposed alternatives. One line of study selects more expressive prior distributions, such as multimodal distributions [15,7,33,25,14], stochastic processes [25,10,3], and autoregressive models [4,35]. Another option is to use discrete latent variables [29,35]. Previous work on the choice of the prior distribution for deep generative models has focused on the representative ability, natural fit to data sets, and the likelihood or reconstruction of in-distribution inputs.

Figure 1: Motivation for using a multimodal prior distribution from a topological point of view. If the prior distribution is mapped to a distribution with a different topology, the mapped distribution will inevitably have undesirable high likelihood areas. The black and red areas represent the typical sets of the prior and the data distribution, respectively. The gray and yellow areas represent high likelihood areas of the prior and the data distribution, respectively. While the distributions are shown in two dimensions in this figure, the inconsistency between high likelihood areas and typical sets is a problem observed in high dimensional data.
To the best of our knowledge, no previous study has focused on the relationship between the prior distribution and the likelihood assigned to out-of-distribution data. Motivation In this section, we discuss the theoretical motivations for using a multimodal prior distribution: topology mismatch and a second-order analysis. On a related note, we have observed that a multimodal prior distribution can force out-of-distribution points out of high likelihood areas. We explain this effect in Section 5.3. Topology Mismatch We focus on data sets that have "clusters", and adopt the assumption that the underlying distribution can be approximated as a multimodal distribution with components located far away from each other. We analyze deep generative models by approximating them as topology-preserving invertible mappings between a prior distribution and a data distribution. Nalisnick et al. [24] focused on the "typical set" [6] of deep generative models and the data distribution. As suggested by Nalisnick et al. [24], here we assume that deep generative models learn mappings from the typical set of the prior distribution to the typical set of the data distribution. Figure 1 visualizes the intuition behind the mappings from a bimodal data distribution to two different types of prior distributions under our assumptions. If the bimodal data distribution is mapped to a unimodal prior distribution, we cannot eliminate the possibility of the model mapping out-of-distribution inputs to the typical set or high likelihood areas of the prior distribution. We will refer to this issue as the topology mismatch problem. This simple analysis explains the out-of-distribution phenomenon and the results of prior work [24], implying that out-of-distribution inputs can even reside in the typical set. By contrast, if a prior distribution is topologically consistent with the data distribution, there exists a mapping that decreases the possibility of the out-of-distribution phenomenon.
Note that we cannot say that a modification of the prior distribution can single-handedly solve the problem, as the probability density of latent variables under the prior distribution is not the only factor influencing the likelihood of deep generative models such as VAEs and Glow. In addition, it has been reported that deep generative models trained on similar images can generate dissimilar images [27], and thus our analysis of topology mismatch cannot explain this result. However, we later show experimentally that the choice of the prior distribution nonetheless has a significant influence on the likelihoods assigned to out-of-distribution inputs (Section 5). To justify our analysis, we conduct experiments on some simple artificial data sets. Figure 2 shows the likelihoods assigned by flow-based deep generative models trained on points sampled from a bimodal Gaussian mixture distribution. We use a simple model architecture with four affine coupling layers and reverse the features after each layer. We compare a unimodal Gaussian prior and a bimodal Gaussian mixture prior. Figure 2 shows that the contours of the log-likelihood assigned by the model using a standard Gaussian prior distribution have high likelihood areas outside the region where the data points reside. Because the prior distribution is mapped to a distribution with a different topology, the mapped distribution will inevitably have undesirable high likelihood areas. By contrast, the contours of the model using a Gaussian mixture prior successfully separate the two modes and do not have high likelihood areas in out-of-distribution regions. To show that a model with a standard Gaussian prior can assign high likelihood to out-of-distribution inputs even in the low-dimensional case, we compare the likelihoods assigned to out-of-distribution inputs, namely points sampled from a Gaussian distribution with mean zero and variance 0.01.
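This low-dimensional setup can be illustrated with densities alone, without training a flow: fit a single Gaussian to well-separated bimodal data and compare it against a matched two-component mixture. The sketch below is illustrative only (it is not the authors' coupling-layer model; the mode locations, noise scales, and sample sizes are invented for the demo):

```python
import numpy as np

def log_gauss(x, mean, var):
    """Log-density of an isotropic Gaussian in d dimensions."""
    d = x.shape[-1]
    return -0.5 * (d * np.log(2 * np.pi * var)
                   + np.sum((x - mean) ** 2, axis=-1) / var)

def log_mixture(x, means, var):
    """Log-density of a uniformly weighted isotropic Gaussian mixture (log-sum-exp)."""
    comps = np.stack([log_gauss(x, m, var) for m in means])  # shape (K, N)
    m = comps.max(axis=0)
    return m + np.log(np.mean(np.exp(comps - m), axis=0))

rng = np.random.default_rng(0)
means = np.array([[-4.0, 0.0], [4.0, 0.0]])          # two well-separated modes
in_dist = means[rng.integers(0, 2, 1000)] + rng.normal(scale=0.3, size=(1000, 2))
ood = rng.normal(scale=0.1, size=(1000, 2))          # concentrated near the origin

# A single Gaussian fitted to the bimodal data (the "unimodal prior" analogue).
uni_mean, uni_var = in_dist.mean(axis=0), in_dist.var()

print(log_gauss(ood, uni_mean, uni_var).mean())      # comparable to in-dist, or higher
print(log_gauss(in_dist, uni_mean, uni_var).mean())
print(log_mixture(ood, means, 0.3 ** 2).mean())      # far lower under the mixture
```

Under the single fitted Gaussian, the concentrated near-origin points score at least as high as the training data themselves, mirroring the phenomenon described above, while the matched mixture scores them dramatically lower.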
As shown in Figure 2c, the out-of-distribution inputs have minimal overlap with the in-distribution data. However, the mean of the log-likelihoods assigned by the model using a standard Gaussian prior to in-distribution inputs is −3.25, which is similar to the log-likelihood assigned to out-of-distribution inputs (−4.87). By contrast, the mean of the log-likelihoods assigned to out-of-distribution inputs by the model using a multimodal prior is much lower (−40.11). As the dimensionality of the data increases, this phenomenon becomes more pronounced; however, using a multimodal distribution as a prior can significantly alleviate this problem. Further details are presented in Appendix C. Second Order Analysis Nalisnick et al. [23] provided a second-order analysis with implications consistent with their experimental observations, although they made some strong assumptions. One implication of their analysis is that deep generative models with unimodal prior distributions assign higher likelihood if out-of-distribution images have lower variance over image pixels. However, since they use a unimodal prior distribution, their analysis does not apply here. To explain why our proposition may help, we perform a similar analysis under assumptions corresponding to our models. Although we still adopt some strong assumptions and apply coarse approximations in a similar manner to the original analysis, our analysis provides an intuitive explanation for our experimental results. The value we are interested in evaluating is E_q[log p(x; θ)] − E_{p*}[log p(x; θ)], where p is a given generative model, q is the adversarial distribution (out-of-distribution), and p* is the training distribution. If this value is positive, the adversarial distribution is assigned a higher likelihood by the generative model. In the following analysis, we assume that p reasonably approximates p*. Note that the analysis for unimodal prior models suggests that q can be assigned higher likelihood even if p perfectly approximates p*. Nalisnick et al.
[23] approximate the probability density function of the given generative model p by a second-order expansion, which is equivalent to assuming that the generative model can be approximated with a Gaussian distribution. In this work, however, we focus on data sets whose underlying distribution can be approximated by a mixture distribution. Therefore, we assume that p can be approximated as log p(x; θ) ≈ log (1/K) Σ_{i=1}^K p_i(x; θ), where each p_i corresponds to a component approximated by a Gaussian distribution. We assume that each component of the generative model p_i(x; θ) corresponds to a component of the prior distribution p_i(z; ψ). Here, we adopt the assumption that the generative model is constant-volume Glow (CV-Glow), as is done in [23]. The derivation and detailed assumptions are given in Appendix A. Finally, we derive a formula (given in full in Appendix A) whose terms are as follows: σ²_ψ is the variance of the prior distribution (we assume that all components have identical variance); D*_i and D_i correspond to the in-distribution and out-of-distribution data allocated to the i-th component; and w*_i, w_i are the ratios of data allocated to the i-th component, satisfying Σ_{i=1}^K w*_i = 1 and Σ_{i=1}^K w_i = 1. u_{l,c,j} is the weight of the l-th 1×1 convolution, which is fixed for any input. Further, h and w index the input spatial dimensions, c indexes the input channel dimensions, l indexes the series of flows, and j indexes the column dimensions of the C_l × C_l kernel. x̄_i is the elementwise mean of the images generated from the i-th component, and the two matrices are assumed to be diagonal. D_min and D*_max are chosen so that the final expression is maximized. Expanding the formula using CV-Glow may not seem reasonable, as we do not use CV-Glow in our experiments. However, Nalisnick et al. [23] reported that the out-of-distribution phenomenon occurs even with CV-Glow, one of the simplest deep generative models, as with many other more complex deep generative models such as general Glow and VAEs.
Therefore, it is worth considering CV-Glow when analyzing the problem for general deep generative models. Roughly speaking, we can say that if σ_{D_min} takes smaller values than σ_{D*_max}, the likelihood assigned to out-of-distribution data can be larger than that assigned to in-distribution data. However, if this is the case, it indicates that one of the out-of-distribution modes has a mean that is close to the mean of one of the modes of the generative model with small variance. If out-of-distribution data satisfies this condition, such a mode can no longer be considered out-of-distribution, as inputs corresponding to it must be similar to the images corresponding to that mode of the generative model. Note that a mode of the generative model has a small variance and contains similar images under our assumptions. By contrast, the analysis by Nalisnick et al. [23] assumed that the data distribution can be approximated by a unimodal Gaussian distribution with possibly large variance. Therefore, low-variance out-of-distribution data with a mean identical to the in-distribution data can contain completely different images. Our analysis indicates that the squared distance from the mean of each mode is an important factor in likelihood assignment. We later show that our experimental results are consistent with this analysis. Note that this analysis does not provide an exhaustive explanation for our results, as our experiments show that the squared distance is not the only important factor underlying the likelihood assigned to out-of-distribution inputs (Section 5.3). However, our simple analysis provides an intuitive interpretation of our experimental results, similar to the suggestion from the analysis by Nalisnick et al. [23]. Proposed Model We replace the prior distributions of deep generative models with untrainable mixture distributions (1/K) Σ_{i=1}^K p_i, and we assume that all components are uniformly weighted.
Although some previous studies have performed clustering using VAEs with a trainable multimodal prior distribution [7,33], we manually assign each input to a component of the prior distribution before training. We simply use the labels of the data sets or apply k-means clustering to the training data to decide which component each input is assigned to. During training, the likelihood of each input is evaluated using a different unimodal prior distribution p_i (using a different index i for each input), which is the component of the multimodal prior distribution assigned to that input. The test likelihood is evaluated on the mixture prior distribution (1/K) Σ_{i=1}^K p_i, without the component assignment used during training. We use Gaussian distributions and generalized Gaussian distributions as the components of the mixture distributions. The probability density function of a univariate generalized Gaussian distribution is f(x) = β / (2α Γ(1/β)) · exp(−(|x|/α)^β), where Γ is the Gamma function and α, β ∈ (0, +∞) are parameters. The assumption made in our analysis in Section 3.2 suggests that the components of the prior distribution should be far from each other and have small overlap. We observe that Gaussian distributions are too heavy-tailed, so components must be placed far away from each other in order to lower the out-of-distribution likelihood. To deal with this problem, we propose the use of a generalized Gaussian distribution. A generalized Gaussian distribution with parameters β = 2 and α = √2 is equal to a standard Gaussian distribution. A generalized Gaussian distribution with large β is light-tailed, so we use a generalized Gaussian distribution with parameters α = √(Γ(1/β)/Γ(3/β)), β = 4 (Section 5.3). Note that the variance of a generalized Gaussian distribution with α = √(Γ(1/β)/Γ(3/β)) is one. Experiments We evaluate the effect of a multimodal prior distribution on the likelihoods assigned to out-of-distribution inputs on two pairs of real image data sets: Fashion-MNIST vs. MNIST and CIFAR-10 vs. SVHN.
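The generalized Gaussian density and the unit-variance choice of α can be checked numerically. The sketch below (standard-library Python only) verifies that β = 2, α = √2 recovers the standard normal density, and that α = √(Γ(1/β)/Γ(3/β)) gives unit variance for the light-tailed β = 4 case:

```python
import math

def gen_gauss_pdf(x, alpha, beta):
    """f(x) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|x|/alpha)**beta)."""
    coef = beta / (2 * alpha * math.gamma(1 / beta))
    return coef * math.exp(-((abs(x) / alpha) ** beta))

def std_normal_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# beta = 2, alpha = sqrt(2) reduces to the standard Gaussian.
print(abs(gen_gauss_pdf(1.3, math.sqrt(2), 2.0) - std_normal_pdf(1.3)))  # ~0

# alpha = sqrt(Gamma(1/beta) / Gamma(3/beta)) gives unit variance;
# check beta = 4 by a simple Riemann sum (tails are negligible beyond |x| = 8).
beta = 4.0
alpha = math.sqrt(math.gamma(1 / beta) / math.gamma(3 / beta))
var = sum((i * 1e-3) ** 2 * gen_gauss_pdf(i * 1e-3, alpha, beta) * 1e-3
          for i in range(-8000, 8001))
print(var)  # ~1.0
```

The same α expression with β = 2 gives α = √(Γ(1/2)/Γ(3/2)) = √2, so the unit-variance parameterization is consistent with the standard-Gaussian special case.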
Figure 4: "uni" denotes a standard Gaussian prior and "multi" denotes a bimodal Gaussian mixture prior. For Fashion-MNIST, we report likelihoods evaluated on test data. Bimodal priors mitigate the out-of-distribution problem.

Data Sets

We use two pairs of image data sets. The first pair is Fashion-MNIST [36] (training data) and MNIST [20] (out-of-distribution inputs). The second pair is CIFAR-10 [19] (training data) and SVHN [26] (out-of-distribution inputs). For training, we use a small subset of the data sets, as using all images requires a large number of clusters to lower the out-of-distribution likelihood. For CIFAR-10, 10% random width and height shifting is applied during training as data augmentation.

Model Architecture and Training Details

Our implementation of VAE is based on the architecture described in [30,23]. Both the encoder and the decoder are convolutional neural networks. Our implementation of Glow is based on the authors' code hosted at OpenAI's open source repository 1. To remove spatial dependencies on the latent variables, we do not use the multi-scale architecture, and apply 1 × 1 convolution over three dimensions (width, height, channel) after the decoder. Further details are discussed in Appendix B.

Two Labels and Two Modes

We first analyze our model on simple data sets of images. Here, models are trained on images in label 1 and 7 of Fashion-MNIST.

Table 1: Negative log-likelihoods assigned to MNIST by the models trained on Fashion-MNIST. Fashion-MNIST (i) indicates that the model is trained on the images in the i-th label. "uni" denotes a standard Gaussian prior and "multi" denotes a bimodal Gaussian mixture prior. The unimodal prior models trained only on images for one label of Fashion-MNIST still exhibit the out-of-distribution phenomenon for MNIST when compared to multimodal prior models.
Figure 4 shows that the models using multimodal prior distributions correctly assign low likelihood to MNIST, the out-of-distribution data, while the models using unimodal prior distributions assign high likelihood to MNIST.

Multi-Modal Priors Force Out Out-of-Distribution Points

Our analysis in Section 3.2 suggests that multimodal prior models mitigate the out-of-distribution problem because each component is trained on simpler data. However, although the complexity of data allocated to each component is important, unimodal prior models trained on data allocated to a single component still assign high likelihoods to out-of-distribution inputs when compared to multimodal prior models (Table 1). Figure 5 shows that the model with the multimodal prior correctly places MNIST in an out-of-distribution area within the latent variable space. In contrast, MNIST and Fashion-MNIST (label 7) have a large overlap within the latent variable space of the model with a unimodal prior trained only on label 7 of Fashion-MNIST. These results imply that separating in-distribution data in the latent variable space by using a multimodal prior distribution has a strong effect of forcing out-of-distribution points out of high-likelihood areas. This observation suggests a new approach for mitigating the out-of-distribution phenomenon: improving latent variable design.

Figure 6: Relationship between the distance between two components and the mean log-likelihoods assigned to MNIST by models trained on Fashion-MNIST (label 1 and 7). While likelihoods assigned to out-of-distribution inputs are sensitive to the distance between components regardless of component choice, the Gaussian mixture priors require much larger distances to lower the likelihood assigned to out-of-distribution inputs.
The histograms and the mean values of the log-likelihoods are reported in Appendix F.1.

Distance between Two Components

We analyze the relationship between the likelihoods assigned to out-of-distribution inputs and the distance between two components using two types of distributions: Gaussian and generalized Gaussian mixture distributions. Figure 6 shows the mean log-likelihoods assigned to MNIST by the models trained on Fashion-MNIST (label 1 and 7). The likelihoods assigned to out-of-distribution inputs are sensitive to the distance between components regardless of the component choice. However, models using Gaussian mixture priors require larger distances to lower the out-of-distribution likelihoods. A generalized Gaussian prior (β = 4) is particularly effective at assigning low likelihoods to out-of-distribution inputs even with much smaller distances between components. The means of the bimodal distributions are [±d/2, 0, . . . , 0], and the variance is diag([1, . . . , 1]) for all the components. Note that the likelihoods assigned to the test data of Fashion-MNIST (label 1 and 7) are relatively unaffected by the distance between two components (Appendix F.1).

Second Order Analysis

Our analysis in Section 3.2 suggests that the conditional expectation of the squared distances from the mean of the images generated from the modes, i.e., σ²_{D_i,h,w,c}, influences the assigned likelihoods. We show that our experimental results are consistent with the analysis. Note that this is not the only factor that lowers the out-of-distribution likelihood, as mentioned above. Figures 7a and 7c show the per-dimensional variance of images in Fashion-MNIST (label 1 and 7) and MNIST. In contrast to our analysis, the analysis by Nalisnick et al. [23] for unimodal prior models suggests that the models assign higher likelihood to an adversarial distribution if the per-dimensional variance is small.
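The tail effect behind Figure 6's finding can be sketched numerically (an illustration under the unit-variance parameterization, not the trained-model experiment): with two unit-variance components at ±d/2, a β = 4 generalized Gaussian mixture assigns a far lower log-density to a point midway between the components than a Gaussian mixture does at the same distance.

```python
import math

def log_gg(x, mu, alpha, beta):
    # log density of a generalized Gaussian centered at mu
    return math.log(beta / (2 * alpha * math.gamma(1 / beta))) - (abs(x - mu) / alpha) ** beta

def log_mix(x, means, alpha, beta):
    # uniformly weighted two-component mixture, via log-sum-exp
    logs = [log_gg(x, mu, alpha, beta) for mu in means]
    m = max(logs)
    return m + math.log(sum(math.exp(l - m) for l in logs)) - math.log(len(means))

d = 8.0
means = [-d / 2, d / 2]
# unit-variance parameterization: alpha = sqrt(Gamma(1/beta) / Gamma(3/beta))
a2 = math.sqrt(math.gamma(1 / 2) / math.gamma(3 / 2))   # beta = 2: standard Gaussian
a4 = math.sqrt(math.gamma(1 / 4) / math.gamma(3 / 4))   # beta = 4: light-tailed

ood = 0.0   # an "out-of-distribution" point midway between the two modes
ll_gauss = log_mix(ood, means, a2, 2.0)
ll_gg4 = log_mix(ood, means, a4, 4.0)
```

At the same inter-component distance, the light-tailed β = 4 mixture penalizes the midpoint by tens of nats more than the Gaussian mixture, which matches the observation that generalized Gaussian priors need much smaller distances between components.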
As has been reported in [23], most pixels of images found in MNIST have low variance, and this is consistent with the result that MNIST is assigned higher likelihood when a standard Gaussian prior is used. The differences between these two types of histograms provide an intuitive explanation for the difference between likelihoods assigned to out-of-distribution inputs by models using unimodal and multimodal prior distributions.

Results on Complex Data Sets

We evaluate our proposition on more complex data sets. While multimodal priors assign lower likelihoods, the effect is especially limited on Glow. The results on Glow may be affected by the spatial dependencies on the latent variables, and our efforts to remove the dependencies may not be sufficient. Our observations suggest that Glow requires further modifications to solve this problem, so we leave this problem for future work. More investigation into the latent variable space, as well as into the separation of data sets, is required for further improvement. Alternatively, our method can be used in tandem with other techniques such as [12,24]. Figure 8 shows that the models using standard Gaussian priors produce the out-of-distribution phenomenon on MNIST, while the models using Gaussian mixture priors do so to a lesser degree.

CIFAR-10 (label 0 and 4) vs SVHN

We find that images for one label in CIFAR-10 are still too diverse for our method. Therefore, we apply k-means clustering and separate the images in label 0 and 4 of CIFAR-10 into four clusters each, and we use images in one cluster of each label. We compare the models using a standard Gaussian prior and a bimodal Gaussian mixture prior with means [±100, 0, . . . , 0] for VAE, and [±200, 0, . . . , 0] for Glow. Figure 9 shows that the models with multimodal priors assign lower likelihoods to SVHN compared to the models using unimodal prior distributions. However, this effect is limited on Glow.
We hypothesize that images in each cluster are still too diverse for Glow with our settings. These results imply that it is difficult to adopt our method if a data set does not consist of low-variance and distant clusters, and further study is required, particularly for Glow.

Figure 9: The models using standard Gaussian priors assign higher likelihood to SVHN. The models using multimodal priors mitigate this problem, while the effect is limited on Glow.

Conclusion and Discussion

We analyzed the influence of the prior distribution choice of deep generative models on the likelihoods assigned to out-of-distribution inputs. Recent work [23,5] on deep generative models with unimodal prior distributions has shown that these models can assign higher likelihoods to out-of-distribution inputs than to training data. In this paper, we showed that models using multimodal prior distributions lower the likelihoods assigned to out-of-distribution inputs for Fashion-MNIST vs. MNIST and CIFAR-10 vs. SVHN. We also provided theoretical explanations for the advantages of the use of multimodal prior distributions. Unfortunately, our experimental results suggested that it is difficult to apply our method to complex data sets even when we use prior knowledge. Thus, our work demonstrates the limitations of high-dimensional likelihoods yet again, and encourages future work on alternative metrics such as [5,24]. Nevertheless, our work is the first to show that likelihoods assigned to out-of-distribution inputs are affected by the choice of the prior distribution, which has mainly been studied as a way to improve the representative ability of deep generative models for in-distribution data. Our observations motivate further study on the prior distributions of deep generative models, as well as on methods to control the structure of the latent variables to make the model likelihood sensitive to out-of-distribution inputs.
Supplementary Material of "Likelihood Assignment for Out-of-Distribution Inputs in Deep Generative Models is Sensitive to Prior Distribution Choice"

A. Second Order Analysis

In this section, we present detailed explanations for the analysis in Section 3.2. We adopt the assumption that the probability distribution function of the given generative model p can be approximated by a mixture distribution

p(x; θ) ≈ Σ_{i=1}^{K} p_i(x; θ) / K,

where p_i corresponds to each component, which can be approximated by a Gaussian distribution. For simplicity, we assume that the components are assigned uniform weights and have equal variances. In addition, corresponding to the nature of the data sets that we are considering, we assume that the components of the distribution are far from each other and have small variances. Under these assumptions, we can approximate the probability distribution for each input by taking the value from the component that yields the maximum value for the data:

log p(x; θ) ≈ max_i log p_i(x; θ) − log K.

Thus, we can write the expectation under the training data distribution p* as

E_{p*}[log p(x; θ)] ≈ Σ_i w*_i E_{D*_i}[log p_i(x; θ)] − log K,

where D*_i represents in-distribution data allocated to the i-th component and w*_i is the ratio of data allocated to the i-th component, satisfying Σ_i w*_i = 1. We can also expand the expectation under the adversarial distribution q:

E_q[log p(x; θ)] ≈ Σ_i w_i E_{D_i}[log p_i(x; θ)] − log K,

where D_i represents out-of-distribution data allocated to the i-th component, and w_i is the ratio of data allocated to the i-th component, satisfying Σ_i w_i = 1. Since we assume that each component can be approximated by a Gaussian distribution, we use a second order approximation for each component:

log p_i(x; θ) ≈ log p_i(x̄_i; θ) + ∇_{x̄_i} log p_i(x̄_i; θ)^T (x − x̄_i) + ½ (x − x̄_i)^T ∇²_{x̄_i} log p_i(x̄_i; θ) (x − x̄_i).

Here, x̄_i is the mean of images generated from each component. Therefore, ∇_{x̄_i} log p_i(x̄_i; θ)^T (x − x̄_i) ≈ 0, since argmax_x log p_i(x; θ) ≈ x̄_i. Thus we can expand the conditional expectation as

E_{D_i}[log p_i(x; θ)] ≈ log p_i(x̄_i; θ) + ½ Tr(∇²_{x̄_i} log p_i(x̄_i; θ) Σ_{D_i}),

where Σ_{D_i} = E_{D_i}[(x − x̄_i)(x − x̄_i)^T], and it is assumed to be diagonal as in [23]. Furthermore, Σ_{D*_i} = E_{D*_i}[(x − x̄_i)(x − x̄_i)^T]. Note that Σ_{D*_i} and Σ_{D_i} are not variance matrices, as the x̄_i are the mean images of the generative model.
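The max-component approximation used above can be checked numerically for well-separated, low-variance components (a one-dimensional sketch, not part of the paper's derivation): for a point near one mode, the exact mixture log-density and max_i log p_i(x) − log K agree to high precision because the other component's contribution is negligible.

```python
import math

def log_gauss(x, mu, sigma):
    # log N(x; mu, sigma^2) for a scalar x
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

means, sigma, K = [-3.5, 3.5], 0.5, 2   # far-apart, low-variance components
x = 3.2                                  # a point near the second mode

logs = [log_gauss(x, mu, sigma) for mu in means]
m = max(logs)
exact = m + math.log(sum(math.exp(l - m) for l in logs)) - math.log(K)  # log p(x)
approx = max(logs) - math.log(K)         # max-component approximation
```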
Because we assume that the variances of all components are identical, log p_i(x̄_i; θ) can be approximated as identical for all i. Finally, we can write the difference of the two log-likelihoods (Equation 1) in a relatively simple form, in parallel with the first line of Equation 5 in [23]:

E_q[log p(x; θ)] − E_{p*}[log p(x; θ)] ≈ ½ Σ_i Tr(∇²_{x̄_i} log p_i(x̄_i; θ) (w_i Σ_{D_i} − w*_i Σ_{D*_i})).

If we assume that each p_i is precisely a Gaussian distribution, we can simply compute the Hessian in Equation 6. However, because this assumption is too strong, Nalisnick et al. [23] expanded this formula by adopting the assumption that the generative model is constant-volume Glow (CV-Glow). Although we do not use CV-Glow in our experiments, we apply the expression derived by Nalisnick et al. [23]:

E_q[log p(x; θ)] − E_{p*}[log p(x; θ)] ≈ −(1 / (2σ²_ψ)) Σ_i Σ_c (Σ_l Σ_j u_{l,c,j})² Σ_{h,w} (w_i σ²_{D_i,h,w,c} − w*_i σ²_{D*_i,h,w,c}),

where σ²_ψ is the variance of a component of the prior distribution (we assume all components have identical variance) and σ²_{D_i,h,w,c} are the diagonal elements of Σ_{D_i}. u_{l,c,j} is the weight of the l-th 1×1 convolution of Glow, which is fixed for any input. h and w index the input spatial dimensions, c indexes the input channel dimensions, l indexes the series of flows, and j indexes the column dimensions of the C_l × C_l kernel. We assume that each component of the generative model p_i(x; θ) corresponds to a component of the prior distribution p_i(z; ψ). Finally, we arrive at Equation 2.

B. Experimental Settings

We present model architectures and training settings of the experiments shown in Section 5.

VAE

Our implementation of VAE [18] is based on the architecture described in [30,23]. Both the encoder and the decoder are convolutional neural networks described in Tables 2 and 3. We use batch normalization [13] after every convolutional layer except for the last layer of the encoder and the decoder. All the convolutional layers in the decoder use ReLU [22] as an activation function after batch normalization. After the final layer of the decoder, we apply the softmax function, and assume i.i.d. categorical distributions on pixels as visual distributions.
Tables 2 and 3 (columns: Operation, Kernel, Strides, Channels, Pad)

Glow

To alleviate the spatial dependencies on the latent variables, we do not use the multi-scale architecture, which splits the latent variables after squeezing [9]. In addition, we apply 1 × 1 convolution over three dimensions (width, height, channel) after the encoder, and apply the inverse operation before the decoder. In the implementation, we add the code in Listing 1 after the encoder, and add the inverse operation before the decoder. Moreover, we add a small positive value (0.1 in our implementation) to the scale of the affine coupling layers to stabilize the training, as suggested at 3. While Nalisnick et al. [23] remove actnorm and apply their original initialization scheme, we use actnorm and apply the original initialization scheme in OpenAI's code. We perform training for 1,000 epochs using the Adam optimizer in accordance with OpenAI's code. We use a learning rate of 1e−3, which is linearly annealed from zero over the first 10 epochs.

C. Simple Artificial Data

For the artificial data used in Section 3.1, we compare the likelihoods assigned to in-distribution and out-of-distribution data to show that a standard Gaussian prior can assign high likelihoods to out-of-distribution inputs. The in-distribution data is generated from a two-dimensional Gaussian mixture distribution whose means are [±3.5, 0] and variance is diag([0.5, 1]), and the out-of-distribution data consists of sample points from a two-dimensional Gaussian distribution with zero mean and 0.01 variance. Figure 2c shows that the out-of-distribution inputs do not have any overlap with the in-distribution data. However, Figure 10a shows that the log-likelihoods assigned to in-distribution and out-of-distribution inputs by the model using a standard Gaussian prior are similar. In contrast, the model using a multimodal prior distribution assigns much lower likelihoods to out-of-distribution inputs. This phenomenon is more serious for high-dimensional data.
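The qualitative behaviour of this artificial-data experiment can be reproduced with a tiny one-dimensional sketch (an illustration only; the paper trains flow models, whereas here the "unimodal model" is simply a single Gaussian moment-matched to the mixture): the moment-matched unimodal density assigns a higher average log-likelihood to the low-variance out-of-distribution samples than to in-distribution samples, while the true mixture density does the opposite.

```python
import math
import random

random.seed(0)

def log_gauss(x, mu, var):
    # log density of a 1-D Gaussian with variance `var`
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

def log_mix(x):
    # true in-distribution density: mixture with means +-3.5, component variance 0.5
    logs = [log_gauss(x, mu, 0.5) for mu in (-3.5, 3.5)]
    m = max(logs)
    return m + math.log(sum(math.exp(l - m) for l in logs)) - math.log(2)

# a unimodal "model": one Gaussian moment-matched to the mixture
uni_var = 0.5 + 3.5 ** 2   # Var = E[component Var] + Var[component means] = 12.75

in_dist = [random.choice((-3.5, 3.5)) + random.gauss(0, math.sqrt(0.5)) for _ in range(1000)]
ood = [random.gauss(0, 0.1) for _ in range(1000)]   # low-variance OOD data near zero

uni_in = sum(log_gauss(x, 0.0, uni_var) for x in in_dist) / 1000
uni_ood = sum(log_gauss(x, 0.0, uni_var) for x in ood) / 1000
mix_in = sum(log_mix(x) for x in in_dist) / 1000
mix_ood = sum(log_mix(x) for x in ood) / 1000
```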
The in-distribution data is generated from a 10-dimensional Gaussian mixture distribution whose means are [±3.5, 0, . . . , 0] and variances are diag([0.5, 1, . . . , 1]) for both components. The out-of-distribution data is generated from a 10-dimensional Gaussian distribution with zero mean and 0.01 variance. Figures 10b and 10c show that the log-likelihoods assigned to out-of-distribution inputs by the model using a standard Gaussian prior are much higher than those assigned to in-distribution data, although the model using a multimodal prior assigns much smaller likelihoods to out-of-distribution inputs.

Figure 10: (a) Histograms of the log-likelihoods assigned to training and out-of-distribution data by flow-based generative models trained on simple two-dimensional Gaussian mixture data in Section 3.1. "uni" denotes a unimodal prior, and "multi" denotes a multimodal prior. While a model with a unimodal prior assigns relatively high likelihoods to out-of-distribution inputs, a model with a multimodal prior assigns much lower likelihoods to out-of-distribution inputs. (b, c) The histograms of the log-likelihoods assigned by flow-based generative models for 10-dimensional data. The out-of-distribution problem is more serious for high-dimensional data.

Figure 12 shows the images corresponding to the means of the unimodal prior distributions of the VAE and Glow. The mean images of the bimodal prior VAE are similar to the images in each cluster. However, the mean image of the standard Gaussian prior VAE is different from the training data. For Glow with a standard Gaussian prior, while the mean image is similar to in-distribution data, some images from random sampling of Glow are different from the training data.
The results suggest that the models with unimodal prior distributions can assign high likelihoods to out-of-distribution inputs because they can contain out-of-distribution inputs in their high-likelihood areas or typical sets.

D. Mean Images of Clusters

Figure 11: Images corresponding to the means of components of the bimodal prior distributions of VAE and Glow trained on label 0 and 7 of Fashion-MNIST (left). Images in the data set allocated to each cluster (right).

Figure 12: Images corresponding to the means of the unimodal distributions of (a) VAE with a unimodal prior and (b) Glow with a unimodal prior, trained on label 0 and 7 of Fashion-MNIST (left), and images generated from random sampling (right). The mean image of VAE is dissimilar to the training data, while images from random sampling are similar to the training data. While the mean image of Glow is similar to the training data, some images from random sampling of Glow are dissimilar to the training data.

E. K-means Clustering for CIFAR-10

In the experiments reported in Section 5.4, we separate the images in label 0 and 4 of CIFAR-10 by k-means clustering (k = 4), initialized by k-means++ [1], respectively. Figure 13 shows sample images from the clusters and the per-dimensional variance of the images in each cluster. The histograms show that k-means clustering successfully decreases the per-dimensional variance. In our experiments, we use images in the cluster corresponding to the second rows.

Figure 13: (a) Sample images in four clusters of label 0; each row corresponds to one cluster, and we use the images in the second row. (b) Per-dimensional variance of images in each cluster of label 0. (c) Sample images in four clusters of label 4; each row corresponds to one cluster, and we use the images in the second row.

F. Additional Experimental Results

We show additional materials for the results reported in Section 5. The results in these images correspond to those in Figure 6.

F.1.
Distance between Two Components

Figure 14: Histograms of the log-likelihoods assigned by the models trained on Fashion-MNIST (label 1, 7) using different distances between two components.

Figure 14 shows the histograms corresponding to the results reported in Figure 6. The likelihoods assigned to the test data of Fashion-MNIST (label 1, 7) are not significantly affected by the distances between two components, compared to those assigned to MNIST; the likelihoods assigned to the test data are relatively unaffected by the distance between two components. Figure 16 shows the log-likelihoods assigned to MNIST and the test data by models with standard Gaussian priors trained on Fashion-MNIST (label 1 or 7). Although the models assign lower likelihood to MNIST, the effect of alleviating the out-of-distribution behaviour is not significant compared to that of the model using a multimodal prior (Table 1). While the models trained on a simpler data set assign lower likelihoods to out-of-distribution inputs, the models using multimodal distributions assign much lower likelihoods.

Figure 17 shows the histograms of the latent variables of the models with bimodal prior distributions trained on Fashion-MNIST (label 1, 7). For the results on VAE, we show the histograms of the means of posterior distributions. Figure 19 shows the histograms of the latent variables on the dimension whose mean of the latent variables of MNIST is farthest from zero (the mean of the prior distributions); the latent variables of MNIST have a large overlap with those of Fashion-MNIST on the models with unimodal prior distributions, especially on Glow.

Figure 19: Histograms of the latent variables on the models with standard Gaussian prior trained on Fashion-MNIST label 1 or 7. We select the dimension whose mean of the latent variables of MNIST is farthest from zero.

Figure 18 shows the histograms of the latent variables of the models with bimodal prior distributions trained on CIFAR-10 (label 0, 4). In contrast to Fashion-MNIST vs.
MNIST, the latent variables of SVHN are located near the in-distribution areas.
AN OPTIMAL IMAGE SELECTION METHOD TO IMPROVE QUALITY OF RELATIVE RADIOMETRIC CALIBRATION FOR UAV MULTISPECTRAL IMAGES

Radiometric calibration has become an important pre-processing step with the increasing use of unmanned aerial vehicle (UAV) images in various applications. In order to convert digital numbers (DNs) to reflectance, vicarious radiometric calibration is widely used, including relative radiometric calibration. Some UAV sensor systems can measure irradiance for precise relative radiometric calibration. However, most UAV sensor systems cannot measure irradiance, and therefore precise relative radiometric calibration is needed to produce a reflectance map with vicarious calibration. In this study, an optimal image selection method is proposed to improve the quality of relative radiometric calibration. The method, relative calibration by the optimal path (RCOP), uses filtered tie points acquired in geometric calibration, based on optimal image selection by the Dijkstra algorithm. About 100 multispectral images were acquired with a RedEdge-M camera and a fixed-wing UAV. The reflectance map was produced using RCOP and vicarious calibration using ground reference panels. Validation data were processed using measured irradiance for precise relative radiometric calibration. As a result, the RCOP method showed a root mean square error (RMSE) of 0.03-0.10 reflectance against the validation data. Therefore, the proposed method can be used to produce a precise reflectance map by vicarious calibration.

INTRODUCTION

Rapidly advancing technologies are making sensors smaller, more precise, and more popular. Recently, the supply of UAV images has been increasing in various remote sensing fields. UAV images have attracted attention as a means of efficiently observing limited-access areas whenever users want (Tsouros et al., 2019). UAV images are used to acquire three-dimensional spatial information and biophysical information.
In order to acquire location and attribute information from remote sensing images, including UAV images, pre-processing such as geometric and radiometric calibration is required. Radiometric calibration, including atmospheric correction, is an important pre-processing step to obtain biophysical information from spectral reflectance. For radiometric calibration, vicarious calibration and the radiative transfer (RT) model are widely used in spaceborne and airborne remote sensing fields (Del Pozo et al., 2014; Moran et al., 2001; Smith, Milton, 1999; Berni et al., 2014; Garzonio et al., 2017; Honkavaara et al., 2013). The RT model converts DNs into reflectance based on a mathematical model. It does not require ground reference panels, but it has limitations such as complicated parameters and low accuracy (Yang et al., 2017; Hakala et al., 2013; Aasen, Bolten, 2018; Honkavaara, Khoramshahi, 2018; Roosjen et al., 2017; Schneider-Zapp et al., 2019). UAV images are taken at a relatively low altitude compared to aerial or satellite images, and they may not possess significant radiometric distortions. On the other hand, their small field of view makes mosaicking a necessary procedure. Each image may experience different turbulence, a different incidence angle, different illumination, or different signal processing chains. As a result, the radiometric properties of UAV images may vary significantly. Relative radiometric properties among UAV images have to be adjusted so that the mosaic image can have consistent spectral reflectance. Therefore, for UAV images, we need relative radiometric calibration in addition to vicarious calibration, unless we install ground reference panels within the field of view of each image. Images with ground reference panels are calibrated through vicarious calibration.
Images without ground reference panels are calibrated relative to the DNs of the images with ground reference panels, and are then calibrated through vicarious calibration (Mafanya et al., 2018). Some UAV cameras include an irradiance sensor to measure the amount of sunlight for each image, and they use the irradiance measurements for relative radiometric calibration (Mamaghani, Salvaggio, 2019). For UAV cameras without an irradiance sensor, conversion coefficients have been estimated through regression analysis between the DNs of pixels in overlapping regions between two images (Suomalainen et al., 2018). However, this method is prone to geometric distortions and pixels with radiometric anomalies (Xu et al., 2019), and it often results in visual discontinuity between adjacent scenes (Liu et al., 2011). Precise relative radiometric calibration of UAV images is still highly in demand. In this study, an optimal image selection method is proposed to improve the quality of relative radiometric calibration. It uses filtered tie points acquired in geometric calibration, based on optimal image selection by the Dijkstra algorithm. It can minimize error accumulation by reducing the number of steps needed to reach the last image located at the region boundary.

UAV Images

UAV images were acquired from 15:32 to 15:36 on 15 May 2019 with a solar elevation angle of 50 degrees under a clear sky. A total of 108 images were acquired with about 3 cm spatial resolution. Figure 1 shows the study area and the locations of the images acquired. The area is above the campus of the National Institute of Agricultural Sciences in Wanju-gun, Jeollabuk-do, Korea. It includes trees, grass, soil, and sidewalk blocks as its major cover types. The camera captures images in five spectral bands: blue, green, red, red-edge, and near-infrared. It also has an irradiance sensor, a down-welling light sensor (DLS), for radiometric calibration. The measured irradiance from the DLS was used to produce validation data for this study.
Reflectance of Ground Reference Panels

Seven ground reference panels were deployed on the grass of a soccer field for vicarious radiometric calibration of the UAV images (Figure 3a). The panels were made of specially coated fabric sized 1.2 m × 1.2 m to maintain constant reflectance through a spectral range of 435 nm to 1100 nm. The spectral reflectance of each panel was measured using a FieldSpec-3 spectro-radiometer (Figure 3b). The panels used for the experiment provided 3%, 5%, 11%, 22%, 33%, 44%, and 55% reflectance.

Figure 3. (a) Ground reference panels installed for vicarious radiometric calibration of UAV images, and (b) spectral reflectance measured using a spectro-radiometer

Validation Data

For validation, a reference reflectance map was generated using the measured irradiance data and the reference panels. An image map with intensity as its pixel values was generated first through standard processing procedures of tie point extraction, bundle adjustment, digital surface model generation, orthorectification, and image resampling. A reflectance map with reflectance as its pixel values was then generated by converting the DNs of the mosaicked image into reflectance values. For the conversion procedure, the measured irradiance data and the DNs on the ground reference panels were used to calculate coefficients for the conversion. We used commercial software (SW) to produce the reflectance map. Figure 4 shows the reflectance map generated as validation data.

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLIII-B1-2020, 2020 XXIV ISPRS Congress (2020 edition)

Figure 4. A natural color composite of the reflectance map generated for validation

3. METHODS

Relative Radiometric Calibration Procedure

The overall radiometric calibration procedure to generate a reflectance map is shown in Figure 5. The first procedure is vicarious radiometric calibration of a reference image using ground reference panels.
The image with ground reference panels is selected as the reference image. DN values of the reference panels within the reference image are measured, and coefficients for vicarious calibration are estimated through regression analysis. Next, an image network is formed by defining each image as a node. When there is a sufficient number of tie points between two images, a link between the two corresponding nodes is defined. After an image network is formed, we can find an optimal path from one image to another by following links between image nodes. In this experiment, we used the Dijkstra algorithm to obtain the optimal path. Next, tie points among the images within the optimal path are processed to estimate coefficients for relative radiometric calibration. DNs of an image are converted to equivalent DNs in the reference image using these relative calibration coefficients, and eventually, they are converted to reflectance using the vicarious calibration coefficients. Finally, a geometric mosaicking process is carried out on the reflectance images to generate a mosaicked reflectance map.

Figure 5. Overall procedure of radiometric calibration

Vicarious Calibration of Reference Image

DNs of the reference image are converted to spectral reflectance using the vicarious radiometric calibration method. For each ground reference panel, its image location is identified manually. The average DN around the location was calculated per spectral band and per reference panel. DNs of the reference image are converted to reflectance using the following equation:

R = a × DN + b (1)

where R is reflectance, DN is the digital number of an image, a is the absolute radiometric gain, and b is the absolute radiometric offset. These coefficients are estimated for each band via linear regression between the DN and the reflectance of the pixels corresponding to the ground reference panels (Figure 6). All DNs of the reference image were then converted to reflectance using equation (1).

Figure 6.
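The per-band regression for equation (1) can be sketched with an ordinary least-squares fit. The panel reflectances below are the seven values from the experiment; the DNs, the gain 2.5e-5, and the offset 0.01 are purely hypothetical illustration values, not numbers from the paper:

```python
def fit_gain_offset(dns, refls):
    # ordinary least-squares fit of R = a * DN + b (equation (1))
    n = len(dns)
    mx, my = sum(dns) / n, sum(refls) / n
    a = sum((x - mx) * (y - my) for x, y in zip(dns, refls)) / sum((x - mx) ** 2 for x in dns)
    return a, my - a * mx

# seven panel reflectances; DNs generated from an assumed gain of 2.5e-5 and offset of 0.01
panel_refl = [0.03, 0.05, 0.11, 0.22, 0.33, 0.44, 0.55]
panel_dn = [800, 1600, 4000, 8400, 12800, 17200, 21600]

a, b = fit_gain_offset(panel_dn, panel_refl)
reflectance = [a * dn + b for dn in panel_dn]   # DN -> reflectance conversion
```

Because the synthetic DNs are exactly linear in reflectance, the fit recovers the assumed gain and offset; with real panel measurements the residuals of this regression indicate the quality of the vicarious calibration.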
Examples of linear regression analysis between the digital number and the reflectance for (a) blue and (b) red-edge bands

An Optimal Image Selection Method

A method to select optimal images was proposed, namely relative calibration by the optimal path (RCOP). It can minimize error accumulation by reducing the number of steps needed to reach the last image located at the region boundary. Yin and Wang (2010) used the Dijkstra algorithm to find a path that minimizes the sum of weights from one starting point to all other points. In Figure 7, the weight between images was set to 1 when the number of tie points exceeded 100; otherwise, it was set to infinity. The reason is that if the number of tie points between images is too small, it may cause over- or under-estimation of the conversion coefficients. If there are multiple paths with the same weight, the optimal path is selected in the direction where the number of tie points is greater.

Figure 7. Selection of optimal images using the number of tie points between image pairs

Relative Radiometric Calibration

Tie points are obtained from the geometric calibration process. Some tie points that may be geometrically correct may contain radiometric abnormalities due to shadow, saturation, or sun glints. To improve the quality of relative radiometric calibration, the tie points undergo radiometric filtering. In order to remove radiometric outliers from the tie points, the statistical distribution of DN differences between two images is analyzed. The mean and standard deviation of the DN differences are calculated. We accept tie points for which the DN differences between two images are within the range of the mean ± one standard deviation.
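The optimal-path selection can be sketched with a standard Dijkstra search in which unusable links (tie points ≤ 100, i.e. infinite weight) are simply omitted, and ties between equally short paths are broken toward the larger tie-point total. The five-image network and its tie-point counts below are hypothetical, with image 0 standing in for the reference image:

```python
import heapq

def optimal_path(n_images, tie_counts, start, goal, min_ties=100):
    # tie_counts: dict {(i, j): number of tie points between images i and j}
    # edge weight is 1 when the tie points exceed min_ties; otherwise the edge is unusable
    graph = {i: [] for i in range(n_images)}
    for (i, j), t in tie_counts.items():
        if t > min_ties:
            graph[i].append((j, t))
            graph[j].append((i, t))
    # priority: (hops, -total tie points) so shorter paths win, then richer tie counts
    heap = [(0, 0, start, [start])]
    seen = {}
    while heap:
        hops, neg_ties, node, path = heapq.heappop(heap)
        if node == goal:
            return path
        if node in seen and seen[node] <= (hops, neg_ties):
            continue
        seen[node] = (hops, neg_ties)
        for nxt, t in graph[node]:
            if nxt not in path:
                heapq.heappush(heap, (hops + 1, neg_ties - t, nxt, path + [nxt]))
    return None

# hypothetical network: edge (0, 2) has only 50 tie points and is therefore excluded
ties = {(0, 1): 250, (1, 2): 180, (0, 2): 50, (2, 3): 300, (1, 3): 120, (3, 4): 200}
path = optimal_path(5, ties, 0, 4)
```

Here the direct link 0-2 is discarded for having too few tie points, so the search reaches image 4 through the shortest usable chain, keeping the number of sequential DN conversions (and hence accumulated error) minimal.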
Figure 8 shows an example of the distribution of DN values for all tie points (black crosses); the accepted tie points are shown as red crosses.

Figure 8. The distribution of DN values for all tie points (black crosses) and accepted tie points (red crosses)

Using the accepted tie points, coefficients of relative radiometric calibration are estimated using equation (2):

DN' = ar · DN + br    (2)

where DN is the digital number from the original image, DN' is the digital number from the converted image, ar is the relative radiometric gain, and br is the relative radiometric offset. Using the estimated conversion coefficients, DNs of the original image are converted to equivalent DNs of the other image. Using successive image pairs along the optimal path, DNs of the original image are converted sequentially to equivalent DNs of the reference image. They are then converted to reflectance using equation (1).

Visual Interpretation of Calibration Results

Figure 9 shows the natural color composites of the reflectance map produced by RCOP and of the validation data, respectively. In order to compare colors, each band was stretched over the same range of reflectance. The results from RCOP had similar color when compared with the validation data based on irradiance measurement. The mosaic result was smooth, without noticeable color differences at the boundaries between images.

Quantitative Accuracy Analysis

For quantitative validation, reflectance obtained from RCOP was compared to the validation data at the same points. The test samples were collected at a total of 200 pixels by random sampling (Figure 10). Error was calculated as the root mean square error (RMSE) using the following equation:

RMSE = sqrt( (1/n) Σ_{i=1}^{n} (R_cal,i − R_val,i)² )    (3)

where n is the number of samples, and R_cal,i and R_val,i represent the reflectance of the calibrated image and of the validation data, respectively, for the i-th validation point. The RMSE of the visible (blue, green, red) bands is about 0.03, while the red-edge and NIR bands show about 0.06 and 0.10, respectively.
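The filtering, relative regression, and error measure described above can be sketched as follows. This is an illustrative sketch under the stated mean ± one-standard-deviation acceptance rule, not the authors' code.

```python
import numpy as np

def filter_tie_points(dn_a, dn_b, k=1.0):
    """Keep tie points whose DN difference lies within mean +/- k*std."""
    dn_a = np.asarray(dn_a, dtype=float)
    dn_b = np.asarray(dn_b, dtype=float)
    diff = dn_a - dn_b
    keep = np.abs(diff - diff.mean()) <= k * diff.std()
    return dn_a[keep], dn_b[keep]

def fit_relative_coeffs(dn_src, dn_dst):
    """Fit equation (2), DN' = ar*DN + br, from accepted tie points."""
    ar, br = np.polyfit(dn_src, dn_dst, deg=1)
    return ar, br

def rmse(calibrated, validation):
    """Equation (3): root mean square error between two reflectance sets."""
    c = np.asarray(calibrated, dtype=float)
    v = np.asarray(validation, dtype=float)
    return float(np.sqrt(np.mean((c - v) ** 2)))
```

Chaining `fit_relative_coeffs` over successive image pairs along the optimal path converts any image's DNs to the reference image's DN scale, after which equation (1) yields reflectance.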
One reason for the higher RMSE of the red-edge and NIR bands might be the cover types of the study area: most of the area is composed of grass and trees, and vegetation has higher reflectance in the red-edge and NIR bands than in the visible bands.

CONCLUSION

In this study, the RCOP method was proposed for relative radiometric calibration of images that do not have measured irradiance. It can improve radiometric calibration quality using tie points and optimal path selection, and it showed reliable and stable calibration results. Therefore, the proposed method can be used to obtain a precise reflectance map, improving the quality of relative radiometric calibration. Most UAVs acquire images without irradiance measurement and are used in applications where precise reflectance retrieval is crucial. The proposed method should contribute to improving the accuracy of biophysical factor estimation or classification using UAV images. In further studies, precise band alignment will be carried out, and we will check how the exceptions reported in this paper can be improved. An additional validation study will be carried out by installing ground reference panels at various locations in the study area.
Expected Logarithm of Central Quadratic Form and Its Use in KL-Divergence of Some Distributions

In this paper, we develop three different methods for computing the expected logarithm of central quadratic forms: a series method, an integral method and a fast (but inexact) set of methods. The approach used for deriving the integral method is novel and can be used for computing the expected logarithm of other random variables. Furthermore, we derive expressions for the Kullback-Leibler (KL) divergence of elliptical gamma distributions and angular central Gaussian distributions, which turn out to be functions dependent on the expected logarithm of a central quadratic form. Through several experimental studies, we compare the performance of these methods.

Introduction

The expected logarithm of random variables usually appears in the expressions of important quantities like entropy and Kullback-Leibler (KL) divergence [1-3]. The second-kind moment is an important statistical tool used in estimation problems [4,5]. It also appears in an important class of inference algorithms called variational Bayesian inference [6,7]. Furthermore, the geometric mean of a random variable, which has been used in economics [8,9], is equal to the exponential of the expected logarithm of that random variable.

Central quadratic forms (CQFs) have many applications, most of which stem from the fact that they are asymptotically equivalent to many statistics for testing null hypotheses. They are used for finding the number of components in mixtures of Gaussians [10], to test goodness-of-fit for some distributions [11], and as test statistics for dimensionality reduction in inverse regression [12].
In this paper, we develop three algorithms for computing the expected logarithm of CQFs. Special algorithms are needed because CQFs do not have a closed-form probability density function, which makes the computation of their expected logarithms difficult. Although there is a vast literature on many different ways of calculating probability distributions of CQFs (see [13-16]), we have not found any work on calculating their expected logarithms. It is worth noting that one of our three algorithms builds upon work for computing the probability density function of CQFs using a series of gamma random variables [13,14]. We also derive expressions for the KL-divergence of two distributions that are subclasses of generalized elliptical distributions: the zero-mean elliptical gamma (ZEG) distribution and the angular central Gaussian (ACG) distribution. The only term in their KL-divergences that cannot be computed in terms of elementary functions is an expected logarithm of a CQF, which can be computed by one of our algorithms.

The KL-divergence, or relative entropy, was first introduced in [17] as a generalization of Shannon's definition of information [18]. This divergence has been used extensively by statisticians and engineers. Many popular divergence classes like the f-divergence and the alpha-divergence have been introduced as generalizations of this divergence [19]. The KL-divergence has several invariance properties, like scale invariance, that make it an interesting dissimilarity measure in statistical inference problems [20]. It is also used as a criterion for model selection [21], hypothesis testing [22], and merging in mixture models [23,24]. Additionally, it can be used as a measure of dissimilarity in classification problems, for example, text classification [25], speech recognition [26], and texture classification [27,28].
The wide applicability of the KL-divergence as a useful dissimilarity measure motivated us to derive the KL-divergence for two important distributions. One of them is the ZEG distribution [29], which has rich modeling power and allows heavy and light tails and different peak behaviors [30,31]. The other is the ACG distribution [32], a distribution on the unit sphere that has been used in many applications [33-36]. The ACG distribution has many nice features; for example, its maximum likelihood estimator is asymptotically the most robust estimator of the scatter matrix of an elliptical distribution in the sense of minimizing the maximum asymptotic variance [37].

Contributions

To summarize, the key contributions of our paper are the following:
- Introducing three methods for computing the expected logarithm of a CQF.
- Proposing a procedure for computing the expected logarithm of an arbitrary positive random variable.
- Deriving expressions for the entropy and the KL-divergence of ZEG and ACG distributions (the form of the KL-divergence between ZEG distributions appeared in [38], but without its derivation).

The methods for computing the expected logarithm of a CQF differ in running time and accuracy. Two of these, namely the integral and series methods, are exact. The third is a fast but inexact set of methods. The integral method is a direct application of our proposed procedure for computing the expected logarithm of positive random variables. We propose two fast methods that are based on approximating the CQF with a gamma random variable. We show that these fast methods give upper and lower bounds on the true expected logarithm. This leads us to develop another fast method based on a convex combination of the other two fast methods. Whenever the weights of the CQF are eigenvalues of a matrix, as in the case of our KL-divergences, the fast methods can be very efficient because they do not need eigenvalue computation.
Outline

The remainder of this paper is organized as follows. Section 2 proposes three different methods for computing the expected logarithm of a CQF; a theorem stated at the beginning of that section plays a pivotal role in the first two methods. We then derive expressions for the KL-divergence and entropy of ZEG and ACG distributions in Section 3. Afterwards, in Section 4, multiple experiments are conducted to examine the performance of the three methods for computing the aforementioned expected logarithm in terms of accuracy and computational time. Finally, Section 5 presents our conclusions. To improve the readability of the manuscript, the proofs of some theorems are presented in appendices.

Calculating the Expected Logarithm of a Central Quadratic Form

Suppose N_i is the i-th random variable in a series of d independent standard normal random variables, i.e., normal random variables with zero mean and unit variance. Then the central (Gaussian) quadratic form is the following random variable:

U = Σ_{i=1}^{d} λ_i N_i²    (1)

where the λ_i are non-negative real numbers. Note that the N_i² are chi-square random variables with one degree of freedom; therefore, this random variable is also called a weighted sum of chi-square random variables. To the best of our knowledge, the expected logarithm of the random variable U does not have a closed-form expression in terms of elementary mathematical functions. For its calculation, we propose three different approaches, namely an integral method, a series method and a set of fast methods. Each of them has its specific properties and does well in certain situations.
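As a quick numerical baseline for the methods that follow, E[log U] can always be estimated by direct Monte Carlo sampling of (1). This is a minimal sketch (NumPy assumed), useful for sanity-checking the other methods rather than as a method in its own right:

```python
import numpy as np

def elog_cqf_mc(lams, n=200_000, seed=0):
    """Monte Carlo estimate of E[log U] for the CQF U = sum_i lam_i * N_i^2."""
    rng = np.random.default_rng(seed)
    lams = np.asarray(lams, dtype=float)
    z = rng.standard_normal((n, lams.size))   # i.i.d. standard normals
    u = (lams * z**2).sum(axis=1)             # samples of U
    return float(np.mean(np.log(u)))
```

For equal weights λ_i = λ, U is λ times a chi-square with d degrees of freedom, so E[log U] = Ψ(d/2) + log(2λ) exactly; for d = 4 and λ = 0.5 the exact value is Ψ(2) ≈ 0.4228, a convenient check.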
In the following theorem, a relation between the expected logarithms of two positive random variables distributed according to arbitrary densities and the Laplace transforms of these two densities is given. This theorem is used in the integral method and the fast methods. Note that the assumptions of the following theorem are unrestrictive; therefore, it can also be used for computing the expected logarithm of other positive random variables.

Theorem 1. Let X and Y be two positive random variables, F and G be their cumulative distribution functions, and f and g be their probability density functions. Furthermore, suppose that ℱ and 𝒢 are the Laplace transforms of f and g, respectively. If lim_{x→∞} log(x)(G(x) − F(x)) = 0 and lim_{x→0⁺} log(x)(G(x) − F(x)) = 0, then

E[log(X)] − E[log(Y)] = ∫_0^∞ (𝒢(s) − ℱ(s)) / s ds.    (2)

Proof. Using the definition of the Laplace transform together with its integration and frequency-integration properties, and letting s go to zero, the integral in (2) can be rewritten in terms of G(x) − F(x). Applying integration by parts with log(x) and G(x) − F(x) as its parts, and using lim_{x→∞} log(x)(G(x) − F(x)) = 0 and lim_{x→0⁺} log(x)(G(x) − F(x)) = 0, we obtain (5); using (5) in (4) and the definition of expectation, relation (2) is obtained.

Integral Method

In this part, we will use Theorem 1 for computing the expected logarithm of a CQF. To this end, we choose a random variable Y that has a closed-form formula for its expected logarithm and for the Laplace transform of its density. A possible candidate is the gamma random variable. The density of a gamma random variable has the following Laplace transform:

𝒢(s) = (1 + θs)^{−k},

where k and θ are its shape and scale parameters, respectively. Also, the expected logarithm of this random variable is Ψ(k) + log(θ), where Ψ(·) is the digamma function.
Using the convolution property of the Laplace transform, it is easy to see that the density function of the CQF given in (1) has the following closed-form Laplace transform:

ℱ(s) = ∏_{i=1}^{d} (1 + 2λ_i s)^{−1/2}.

Lemmas 2 and 3 show that a CQF and a gamma random variable satisfy the conditions of Theorem 1. For proving Lemma 2, we need Lemma 1. First of all, let us state the following trivial proposition.

Proposition 1. Let X_1, ..., X_n be arbitrary real random variables. Suppose we have two many-to-one transformations Y = h(X_1, ..., X_n) and Z = g(X_1, ..., X_n). If the inequality h(x_1, ..., x_n) ≤ g(x_1, ..., x_n) holds for any x_i in the support of the random variables X_i, then the cumulative distribution functions of the random variables Y and Z satisfy F_Z(y) ≤ F_Y(y) for all y.

Lemma 1. Let F be the cumulative distribution function of a CQF, that is, of Σ_{i=1}^d λ_i N_i², where the λ_i are positive real numbers and the N_i are independent standard normal random variables. Also, let G(x; k, θ) be the cumulative distribution function of a gamma random variable with parameters k and θ. Then the following inequalities hold:

G(x; d/2, 2λ_max) ≤ F(x) ≤ G(x; d/2, 2λ_min),

where λ_max = max{λ_i} and λ_min = min{λ_i}.

Proof. This lemma is an immediate consequence of Proposition 1 and the relation

λ_min Σ_{i=1}^d N_i² ≤ Σ_{i=1}^d λ_i N_i² ≤ λ_max Σ_{i=1}^d N_i²,

knowing that λ Σ_{i=1}^d N_i² is a gamma random variable with shape parameter d/2 and scale parameter 2λ.

Lemma 2. Let G be the cumulative distribution function of an arbitrary gamma random variable and F be the cumulative distribution function of the random variable Σ_{i=1}^d λ_i N_i², where the λ_i are positive real numbers and the N_i are independent standard normal random variables. Then lim_{x→∞} log(x)(G(x) − F(x)) = 0 and lim_{x→0⁺} log(x)(G(x) − F(x)) = 0.

The proof of this lemma can be found in Appendix A.

Lemma 3.
Let 𝒢 be the Laplace transform of the probability density function of an arbitrary gamma random variable and ℱ be the Laplace transform of the probability density function of Σ_{i=1}^d λ_i N_i², where the λ_i are positive real numbers and the N_i are independent standard normal random variables. Then the integral in (2) formed from these transforms is well defined.

The proof of this lemma can be found in Appendix B. According to Lemmas 2 and 3, the conditions of Theorem 1 hold by choosing X to be the CQF given in (1) and Y to be an arbitrary gamma random variable. Therefore, we can use (2) for calculating the expected logarithm of a CQF, and it is given by

E[log(U)] = Ψ(k) + log(θ) + ∫_0^∞ [ (1 + θs)^{−k} − ∏_{i=1}^{d} (1 + 2λ_i s)^{−1/2} ] / s ds.    (13)

The above equation holds for any choice of positive scalars k and θ. To the best of our knowledge, the above integral does not have a closed-form solution, so it must be computed numerically, using any of the variety of techniques available for one-dimensional integrals (see, for example, [39]).

Fast Methods

The integral method explained in the previous part can be computationally expensive for some applications. To this end, we derive three approximations that can be calculated analytically and are therefore much faster.

Using a first- or higher-order Taylor expansion around E[U] to approximate the expected logarithm of U has already been proposed in the literature [6,40]. However, we observed that a low-order Taylor expansion does not give a very accurate approximation. Therefore, we use two different approximations, for which we can show that they provide a lower and an upper bound for the true expected logarithm. Finally, a convex combination of these two is used to get the final approximation.
Two different gamma distributions have been used in [15,41,42] to approximate a CQF. Since the expected logarithm of a gamma random variable has a closed-form solution, we use the expected logarithms of these gamma random variables to approximate the expected logarithm of a CQF. A further justification for this idea can be given based on (13), by choosing the shape and scale parameters of the gamma distribution such that the magnitude of the integral part in (13) becomes smaller.

Since the weights of the CQF in the KL-divergence formulas in Section 3 are eigenvalues of a positive definite matrix Σ, we express the approximations in terms of this matrix. This way of expressing the approximations also elucidates the fact that the eigenvalues do not need to be calculated, which shows a further computational benefit of these approximations. The shape and scale parameters of the first approximating gamma random variable are d/2 and 2tr(Σ)/d, respectively. Therefore, for the first fast approximation we have

E[log(U)] ≈ Ψ(d/2) + log(2tr(Σ)/d).

The shape and scale parameters of the gamma random variable for the second approximation are tr(Σ)²/(2tr(Σ²)) and 2tr(Σ²)/tr(Σ), respectively. Then, we obtain the following formula for the second fast approximation:

E[log(U)] ≈ Ψ(tr(Σ)²/(2tr(Σ²))) + log(2tr(Σ²)/tr(Σ)).

The following theorem shows that these approximations are lower and upper bounds on the true expected logarithm.

Theorem 2. Let U = Σ_{i=1}^d λ_i N_i², where the λ_i are eigenvalues of a positive definite matrix Σ_{d×d} and the N_i are independent standard normal random variables. Then

Ψ(tr(Σ)²/(2tr(Σ²))) + log(2tr(Σ²)/tr(Σ)) ≤ E[log(U)] ≤ Ψ(d/2) + log(2tr(Σ)/d).

The proof of this theorem can be found in Appendix C.
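The integral method (13) and the two fast gamma approximations can be sketched as follows. Equation (13) and the assignment of which approximation is the lower and which the upper bound are reconstructed from the surrounding text, so this is an illustrative implementation rather than the authors' MATLAB code; SciPy's adaptive quadrature stands in for the Gauss-Kronrod rule used in the paper's experiments.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import psi  # digamma function

def elog_cqf_integral(lams, k=1.0, theta=1.0):
    """Integral method, equation (13): E[log U] = psi(k) + log(theta)
    + integral over s of (Gamma LT - CQF LT)/s, with a Gamma(k, theta)
    reference random variable."""
    lams = np.asarray(lams, dtype=float)

    def integrand(s):
        if s == 0.0:                      # limit of the 0/0 form at s = 0
            return float(lams.sum() - k * theta)
        gamma_lt = (1.0 + theta * s) ** (-k)                  # Laplace transform of Gamma(k, theta)
        cqf_lt = np.prod((1.0 + 2.0 * lams * s) ** (-0.5))    # Laplace transform of the CQF
        return (gamma_lt - cqf_lt) / s

    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return float(psi(k) + np.log(theta) + val)

def elog_cqf_fast(Sigma):
    """The two closed-form gamma approximations of E[log U]
    (the bounds of Theorem 2), returned as (lower, upper)."""
    d = Sigma.shape[0]
    t1, t2 = np.trace(Sigma), np.trace(Sigma @ Sigma)
    upper = psi(d / 2.0) + np.log(2.0 * t1 / d)
    lower = psi(t1**2 / (2.0 * t2)) + np.log(2.0 * t2 / t1)
    return float(lower), float(upper)
```

For d = 2 there is a simple closed form, E[log(λ₁N₁² + λ₂N₂²)] = Ψ(1) + log((√λ₁ + √λ₂)²/2); this identity is our own derivation, not quoted from the paper, but it makes a handy correctness check alongside the equal-weight case E[log(λχ²_d)] = Ψ(d/2) + log(2λ).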
From this theorem, we can conclude that some convex combination of the two previously mentioned approximations performs at least as well as each of them, in the sense of being closer to the true expected logarithm. Therefore, we define the third fast approximation to be a convex combination of the first and second fast approximations with mixing parameter l ∈ [0, 1]. To determine the parameter l, we used least squares fitting on thousands of positive definite matrices with different dimensions and unit trace, sampled uniformly according to an algorithm given in [43]. We observed that the fitted value is roughly l = 0.7 and that dimensionality has a negligible effect on the best value of l. For the case of d = 20, the mean squared error for various values of l can be seen in Figure 1.

Series Method

One can represent the probability density function of the CQF given by (1) as an infinite weighted sum of gamma densities [13,14],

f_U(u) = Σ_{j=0}^{∞} c_j g(u; d/2 + j, 2β),

where g(u; d/2 + j, 2β) is the probability density function of a gamma random variable with parameters d/2 + j and 2β, and the coefficients c_j depend on β and the weights λ_i. This result can be used for deriving a series formula for the expected logarithm of U. Ruben [13] analyzed the effect of various values of β on the behavior of the series expansion and proposed the following β as an appropriate one:

β = 2λ_max λ_min / (λ_max + λ_min).

By using this β, Σ_{j=0}^{∞} c_j = 1 holds [13]. Knowing that the expected logarithm of a gamma random variable with parameters d/2 + j and 2β is Ψ(d/2 + j) + log(2β), the expected logarithm of U can be written as

E[log(U)] = Σ_{j=0}^{∞} c_j (Ψ(d/2 + j) + log(2β)).

To approximate this formula, we truncate the series, which means we only use a finite number of terms to evaluate the expectation:

E[log(U)] ≈ Σ_{j=0}^{L} c_j (Ψ(d/2 + j) + log(2β)).    (25)

For this approximation, it is possible to compute an error bound, which is expressed by the following lemma.

Lemma 4. The error of the approximation (25) admits an upper bound (26) whose dominant factor is exponential in L, where Γ(·) is the gamma function and ε = (λ_max − λ_min)/(λ_max + λ_min).

The proof of this lemma is in Appendix D. By using this bound, it is possible to calculate the expected logarithm with any given accuracy by selecting an appropriate L.
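The truncated series (25) can be sketched as follows. The source text does not reproduce the recursion for the mixture coefficients c_j, so the sketch below uses the standard recursion from Ruben's gamma-series expansion, which is an assumption to verify against [13]:

```python
import numpy as np
from scipy.special import psi  # digamma function

def elog_cqf_series(lams, L=200):
    """Series method, equation (25):
    E[log U] ~ sum_{j=0}^{L} c_j * (psi(d/2 + j) + log(2*beta)).

    The c_j follow Ruben's recursion (assumed here, not given in the text):
        c_0 = prod_i sqrt(beta / lam_i)
        c_j = (1/j) * sum_{r=0}^{j-1} g_{j-r} * c_r,
        g_k = (1/2) * sum_i (1 - beta / lam_i)**k
    """
    lams = np.asarray(lams, dtype=float)
    d = lams.size
    beta = 2.0 * lams.max() * lams.min() / (lams.max() + lams.min())
    ratio = 1.0 - beta / lams
    g = np.array([0.5 * np.sum(ratio**k) for k in range(1, L + 1)])  # g_1 .. g_L
    c = np.zeros(L + 1)
    c[0] = np.prod(np.sqrt(beta / lams))
    for j in range(1, L + 1):
        c[j] = np.dot(g[:j][::-1], c[:j]) / j   # sum_r g_{j-r} c_r
    js = np.arange(L + 1)
    return float(np.sum(c * (psi(d / 2.0 + js) + np.log(2.0 * beta))))
```

With this β the coefficients sum to one, so checking Σ_j c_j ≈ 1 after truncation is a quick diagnostic; for equal weights the series collapses to its first term and reproduces Ψ(d/2) + log(2λ) exactly.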
Note that the upper bound given by (26) grows with respect to ε, and ε itself increases with the ratio λ_max/λ_min. As we will see in the simulation studies, when the ratio λ_max/λ_min as well as the dimensionality d are small, this method performs better than the integral method.

KL-Divergence of Two Generalized Elliptical Distributions

In this section, we derive expressions for the KL-divergence and the entropy of two subclasses of generalized elliptical distributions, namely the ZEG and ACG distributions [44]. We first start by reviewing some related material.

Some Background on Elliptical Distributions

Suppose the d-dimensional random vector X is distributed according to a zero-mean elliptically contoured (ZEC) distribution with a positive definite scatter matrix Σ_{d×d}, that is, X ∼ ZEC(Σ, ϕ). The probability density function of X is given by

p(x) = |Σ|^{−1/2} ϕ(xᵀ Σ^{−1} x)

for some density generator function ϕ : R⁺ → R. We know that we can decompose the vector X into a uniform hyper-spherical component and a scaled radial component, X = Σ^{1/2} R U, where U is uniformly distributed over the unit sphere S^{d−1} and R is a univariate random variable given by R = ‖Σ^{−1/2} X‖₂ [45]. Then, the random variable R has the density

f_R(r) = (2π^{d/2} / Γ(d/2)) r^{d−1} ϕ(r²).

Therefore, the square radial component Υ = R² has the following density:

f_Υ(υ) = (π^{d/2} / Γ(d/2)) υ^{d/2−1} ϕ(υ).    (29)

A ZEG is a ZEC whose square radial component is distributed according to a gamma distribution, Υ ∼ Gamma(a, b). A gamma-distributed random variable has the density

f(υ) = υ^{a−1} e^{−υ/b} / (Γ(a) b^a),    (30)

where a is a shape parameter and b is a scale parameter. So the probability density function of a d-dimensional random variable X ∼ ZEG(Σ, a, b) is given by

p(x) = Γ(d/2) / (π^{d/2} Γ(a) b^a) |Σ|^{−1/2} (xᵀ Σ^{−1} x)^{a − d/2} exp(−xᵀ Σ^{−1} x / b),

where x ∈ R^d and Σ ≻ 0 is its scatter matrix; a, b > 0 are shape and scale parameters [31].

When a ZEC random variable is projected onto the unit sphere, the resulting random variable is called ACG and denoted by X ∼ ACG(Σ). This distribution, unlike many other distributions on the unit sphere, has a nice closed-form density given by

p(x) = Γ(d/2) / (2π^{d/2}) |Σ|^{−1/2} (xᵀ Σ^{−1} x)^{−d/2},

where x ∈ S^{d−1} and Σ ≻ 0 is its scatter matrix.
KL-Divergence between ZEG Distributions

Suppose we have two probability distributions P and Q with probability density functions p and q. The KL-divergence between these two distributions is defined by

D(P ‖ Q) = ∫ log(p(x)/q(x)) p(x) dx.    (33)

The negative of the first part, H(X) = −∫ log(p(x)) p(x) dx, is the entropy, and the second part, E[−log(q(X))] = −∫ log(q(x)) p(x) dx, is the averaged log-loss term, where X is a random variable distributed according to P. The following lemma gives a general expression for the KL-divergence between two ZEC distributions. It is then used for deriving the KL-divergence between two ZEG distributions.

Lemma 5. Suppose we have two probability distributions on a random variable Y, P_Y = ZEC(Σ₁, ϕ) and Q_Y = ZEC(Σ₂, ϕ̃). Then the KL-divergence between these two distributions is given by expression (34), where f_Υ and f̃_Υ are the square radial densities of distributions P and Q, respectively. Also, f_wd is the density of Λ = Σ_{i=1}^d λ_i N_i² / Σ_{i=1}^d N_i², where the N_i are independent standard normal random variables and λ₁, ..., λ_d are the eigenvalues of the matrix Σ̃ = Σ₂^{−1/2} Σ₁ Σ₂^{−1/2}.

Proof. The KL-divergence is known to be invariant under invertible transformations of the random variable Y [46]. To simplify the derivations, we apply the linear transformation X = Σ₂^{−1/2} Y, which makes the scatter matrix of the second distribution the identity. By using this change of variable, the problem becomes that of computing the KL-divergence between the transformed distributions. As expressed in (33), the KL-divergence is the subtraction of the entropy from the averaged log-loss. First, let us derive the entropy of X having distribution P_X, that is, H(X) = −∫ log(p(x)) p(x) dx.
Let r = ‖y‖₂ and recall that the area of a sphere in dimension d with radius r equals 2r^{d−1}π^{d/2}/Γ(d/2); thus the entropy can be written as an integral over r. Using the change of variable υ = r² and replacing ϕ by the square radial density f_Υ as expressed in (29), we obtain expression (35). Now, we derive an expression for the averaged log-loss, given by E[−log(q(X))] = −E[log(ϕ̃(XᵀX))]. The argument of the function ϕ̃ is XᵀX; therefore, it is enough to compute the expectation of the function over the new random variable Z = ‖X‖₂². It is easy to see that the random variable Z can equally be written as Z = XᵀΣ̃X, where X ∼ ZEC(I, ϕ). The density of Z with this representation has already been reported in [47] as (37), where f_Υ is the square radial density of p_Y and f_wd is the density of a linear combination of Dirichlet random variable components, Λ = Σ_{j=1}^{s} l_j D_j, where D = (D₁, ..., D_s) is a Dirichlet random variable with parameters (r₁/2, ..., r_s/2), and the l_j are the s distinct eigenvalues of the positive definite matrix Σ̃ with respective multiplicities r_j, for j = 1, ..., s. It is known that if the random variables C₁, ..., C_s are independent chi-square random variables having r₁, ..., r_s degrees of freedom, and C = Σ_{j=1}^s C_j, then (C₁/C, ..., C_s/C) is a Dirichlet random variable with the parameters (r₁/2, ..., r_s/2) [48]. Hence, the random variable Λ in (38) can be expressed as Λ = Σ_{j=1}^s l_j C_j / C. Equivalently, if N₁, ..., N_d are independent standard normal random variables, then Λ can be written as

Λ = Σ_{i=1}^{d} λ_i N_i² / Σ_{i=1}^{d} N_i².

Using (37) in (36) and replacing ϕ by the square radial density f_Υ as expressed in (29), we obtain expression (40) for the averaged log-loss. Subtracting (35) from (40), we obtain (34).

Until now, we have derived an expression for the KL-divergence between two ZEC distributions. We can further simplify the KL-divergence for the case of ZEG distributions to avoid computing a double integration, and the following theorem proves this.

Theorem 3.
Suppose we have two distributions P_Y = ZEG(Σ₁, a_p, b_p) and Q_Y = ZEG(Σ₂, a_q, b_q). Then the entropy of a random variable Y distributed according to P_Y and the KL-divergence between these two distributions are given by expressions (41) and (42), where Ψ(·) is the digamma function and tr(·) is the trace of a matrix. Also, the N_i are independent standard normal random variables, and λ₁, ..., λ_d are the eigenvalues of the matrix Σ̃ = Σ₂^{−1/2} Σ₁ Σ₂^{−1/2}.

Proof. As in the previous lemma, we apply the change of variable X = Σ₂^{−1/2} Y and compute the KL-divergence between the transformed distributions. The expression for the entropy (35) in the case of ZEG distributions becomes (43). Next, recall the gamma function identities (44) and (45) [49]. Using (44) and (45), we can simplify (43) to obtain (46). Since Y = Σ₂^{1/2} X, we can trivially derive the expression for H(Y) given in (41). For deriving the averaged log-loss term, we obtain expression (47) by putting the gamma square radial component (30) into (40). We apply the change of variable µ = υ/r and express the integrals in terms of the new variables µ and r. Using the equalities (44) and (45), we obtain (48), where, similar to the previous lemma, f_wd is the density of the random variable Λ = Σ_i λ_i N_i² / Σ_i N_i², the N_i are independent standard normal random variables, and λ₁, ..., λ_d are the eigenvalues of the matrix Σ̃. Subtracting the entropy from the averaged log-loss leaves the term E[log(Λ)] to be evaluated. The moments of Λ were computed in [47], but we give a simple derivation of the first moment below. It is known that the random variable V_i = N_i² / Σ_{j=1}^d N_j² has a beta distribution (49). The expected logarithm of Λ can be expressed as a difference of two expectations:

E[log(Λ)] = E[log(Σ_i λ_i N_i²)] − E[log(Σ_i N_i²)].

Using the fact that the expected logarithm of a chi-square random variable with d degrees of freedom is equal to Ψ(d/2) + log(2), E[log(Λ)] can be computed by the following equation:

E[log(Λ)] = E[log(Σ_i λ_i N_i²)] − Ψ(d/2) − log(2).    (50)

With substitution of (48) and (50) into (46), we get (42).
KL-Divergence between ACG Distributions

The following theorem gives expressions for the KL-divergence between ACG distributions and for the entropy of a single ACG distribution.

Theorem 4. Suppose we have two probability distributions G_Y = ACG(Σ₁) and J_Y = ACG(Σ₂). Then the entropy of a random variable Y distributed according to G_Y and the KL-divergence between these two distributions are given by expressions (51) and (52), where the N_i are independent standard normal random variables, λ₁, ..., λ_d are the eigenvalues of the matrix Σ̃ = Σ₂^{−1/2} Σ₁ Σ₂^{−1/2}, and σ₁, ..., σ_d are the eigenvalues of the matrix Σ₁.

Proof. Due to the invariance property of the KL-divergence under invertible changes of variable, we use the change of variable Ω = Σ₂^{−1/2} Y. It is easy to verify that Ω is distributed according to a zero-mean generalized elliptical distribution with identity covariance [44]. From the definition of KL-divergence given by (33), and after some simplifications, the divergence reduces to a log-determinant term plus the expectation E[log(Ωᵀ Σ̃^{−1} Ω / Ωᵀ Ω)]. Since projecting any zero-mean generalized elliptical distribution (with identity covariance) onto the unit sphere gives an ACG random variable (with identity covariance) [50], we can substitute E[log(Ωᵀ Σ̃^{−1} Ω / Ωᵀ Ω)] with E[log(Xᵀ Σ̃^{−1} X / Xᵀ X)], where the random vector X is distributed according to a multivariate normal distribution with identity covariance and zero mean. Because Xᵀ Σ̃^{−1} X is a CQF and Xᵀ X is a chi-square random variable, this expectation can be computed with the methods of Section 2, with weights equal to the eigenvalues of Σ̃^{−1}. Additionally, it is easy to verify that |Σ̃^{−1}| = |Σ̃|^{−1} and that the eigenvalues of Σ̃^{−1} are λ_i^{−1}; therefore, (52) holds.

Since one of the terms in the KL-divergence is equal to the negative entropy, we use our derived expression for the KL-divergence between ACG distributions to find a formula for the entropy of an ACG distribution. Define S_Y = ACG(I); then the KL-divergence between G_Y and S_Y can be easily derived from the main definition (33) as expression (55), where H(Y) is the entropy of the random variable Y.
Now, we compute the above KL-divergence using (52), obtaining (56). Equating the right-hand sides of (55) and (56) gives (51).

The following corollary shows a relation between the KL-divergence of ACG distributions and the KL-divergence of ZEG distributions. It is an immediate consequence of Theorems 3 and 4.

Simulation Study

In Section 2, we proposed three different methods for computing the expected logarithm of the CQF given in (1). We assume the weights of the CQFs used in the simulations are eigenvalues of random positive definite matrices. These random matrices are generated uniformly from the space of positive definite matrices with unit trace according to the procedure proposed in [43]. In this section, we numerically investigate the running time and accuracy of these approaches. All methods were implemented in MATLAB (version R2014a, 64-bit), and the simulations were run on a personal laptop with an Intel Core i5 (2.5 GHz) processor under the OS X Yosemite 10.10.3 operating system. Since the series method depends heavily on loops, which are slow in MATLAB, we implemented this method as a MATLAB MEX-file. For the integral method, the integral is numerically evaluated using the Gauss-Kronrod 7-15 rule [51,52]. The absolute error tolerance is given as an input parameter of the numerical integration. In the integral method, the value can be computed with any given accuracy by choosing the absolute error tolerance; therefore, we do not analyze the integral method in terms of calculation error.

Figure 2 investigates the effects of dimensionality on the average running time of the different methods for computing the expected logarithm of the CQF explained in Section 2.
For the integral method (upper-left plot), two curves for two different absolute error tolerances are shown. The integral formula (13) has the parameters k and θ, which can be chosen freely; we choose those given in (14). The different curves for the series method (upper-right plot) correspond to different values of L, the truncation length of the series. The curve for the fast method (lower plot) corresponds to the computation time of the third fast method explained in Section 2. One reason for the lower computation time of the fast method is that it does not require any eigenvalue computation; a curve in the upper-right plot shows the computational time of the eigenvalue computation alone.

The approximation error of all three fast methods for different dimensions can be seen in Figure 3. The plot on the right-hand side of this figure magnifies the curve for the mean error of the third fast method (the blue curve with dots). As can be observed in Figure 3, changing the dimensionality has a negligible effect on the mean and the standard deviation (SD) of the absolute approximation error of the fast methods. The small mean error and SD of the third method indicate its distinct advantage over the other two methods; this method uses a convex combination of the values of the other two approximations, as explained in Section 2. Approximating the expected logarithm of the CQF using the fast methods induces an error in the KL-divergence between ACG distributions given by (52).
Figure 4 shows the mean percentage of relative error and its standard deviation as a function of dimensionality. It can be observed that the relative error decreases as the dimensionality increases. The third fast method is clearly superior to the other two fast methods. The reason for such a small percentage of relative error is the observation that whenever the error is large, the KL-divergence is large too. We do not show results for dimensions less than ten because the error percentage is quite large in that regime. The red curve in the upper-left plot shows the computational time for computing the eigenvalues of random positive definite matrices (using the eig function in MATLAB), needed before applying the integral method or the series method. The different curves in the upper-left plot correspond to the computational time of the integral method for different absolute error tolerances, including the time needed for computing the eigenvalues. The different curves for the series method correspond to the computational time for various values of the truncation length of the series. (Figure 3 legend: error mean and error SD of the first, second, and third fast methods.)

Figure 6 shows how increasing the dimension affects the performance of the series method. The parameter ε is set to 0.9 by choosing the maximum and minimum weights in the CQF to be 1 and 1/19, respectively. The other weights of the CQF are sampled uniformly between the maximum and minimum weights. It can be seen that the dimensionality has a negligible effect on the slope of the curves. This can be predicted from the formula of the upper bound in (26), because the term exponential in L dominates the other terms in the equation, and the slopes of the curves are determined mainly by the parameter ε. In this figure, the standard deviations are due to the different distributions of the weights between the maximum and minimum weights. Figures 5 and 6
demonstrate that the error upper bound is a relatively tight bound for the actual error.

In Figure 7, we investigate the effect of this parameter and of d on the average L needed to achieve an acceptable upper-bound error (here 10^-8). We can see that as the parameter increases, the slopes of the curves increase, and in the limit as the parameter approaches 1, the required L goes to infinity. This figure justifies our previous claim that when the parameter and the dimensionality are small, the series method is very efficient due to the relatively small L needed to achieve an acceptable error.

Conclusions

In this paper, we developed three methods for calculating the expected logarithm of a central quadratic form. The integral method was a direct application of a more general result applicable to positive random variables. We then introduced three fast methods for approximating the expected logarithm. Finally, using an infinite series representation of central quadratic forms, we proposed a series method for computing the expected logarithm. By proving a bound for the approximation error, we investigated the performance of this method.

We also derived expressions for the entropy and the KL-divergence of zero-mean elliptical gamma and angular central Gaussian distributions. The expected logarithm of the central quadratic form appeared in the KL-divergences and the entropy of the angular central Gaussian distribution.

By conducting multiple experiments, we observed that the three methods for computing the expected logarithm of a central quadratic form differ in running time and accuracy. Users can choose the most appropriate method based on their requirements.
The methodologies developed in this paper can be used in many applications. For example, one can use the result of Theorem 1 for computing the expected logarithm of other positive random variables, such as a non-central quadratic form. Another line of research would be to use the KL-divergence between angular central Gaussian distributions with the fast approximations in learning problems that have a divergence measure in their cost functions.

Appendix A

where λ_max = max{λ_i}, λ_min = min{λ_i}, and γ(·, ·) is the lower incomplete gamma function. Adding G(x) to all sides of the above inequality, we get (A2). Since log(x) is positive for x > 1, multiplying all sides of the above inequality by log(x) yields (A3), which holds for all x > 1. For proving the first part of this lemma, namely (A4), it suffices to show that (A5) holds for any positive choices of k, k, θ, and θ, and then to invoke the squeeze theorem by taking the limits of all sides of (A3). From the definition of the lower incomplete gamma function, the left-hand side of (A5) can be rewritten as (A6). Using L'Hôpital's rule, it is easy to see that (A6) is equal to zero, and consequently (A5) and (A4) hold. Now, we want to prove the second statement of the lemma, namely (A7). If we multiply all sides of (A2) by log(x), then for 0 < x < 1 we obtain (A8). Using the same strategy as above, we want to show that (A9) holds for any positive choices of k, k, θ, and θ. Using L'Hôpital's rule, it can be seen that the corresponding limit vanishes; therefore (A9) holds, and from (A8) and the squeeze theorem, we can conclude that (A7) holds.

Appendix B. Proof of Lemma 3

From the expressions of F and G, and for simplicity of notation, we define the function L(σ). We give separate proofs for the cases d > 2k, d < 2k, and d = 2k. For the first case, d > 2k, it can be shown that there exists a number a > 0 such that for all σ ≥ a, the function V(σ) is positive.
Therefore, the integrand of ∫_a^∞ L(σ) dσ is positive on its domain of integration. If we choose p appropriately, then ∫_a^∞ σ^{-p} dσ is convergent and its integrand is positive on its domain, so from the limit comparison test it follows that the integral ∫_a^∞ L(σ) dσ is convergent. Now, we want to show that the integral ∫_0^a L(σ) dσ is also convergent. For the case d < 2k, there exists a number a > 0 such that for all σ ≥ a, the function −V(σ) is positive. Therefore, the integrand of ∫_a^∞ −L(σ) dσ is positive on its domain of integration. If we choose 1 < p < 1 + d/2, then the limit of −L(σ)σ^p as σ → ∞ can be evaluated; knowing that ∫_a^∞ σ^{-p} dσ is bounded, we can conclude by the limit comparison test that ∫_a^∞ −L(σ) dσ is convergent. Now, with the same strategy as in the previous case, we can show that the integral ∫_0^a −L(σ) dσ is convergent, and it is easy to see that ∫_0^∞ L(σ) dσ is also convergent. For 2k = d, excluding the obvious case G(σ) = F(σ), there exists a number a > 0 such that for all σ ≥ a, the function V(σ) is either positive or negative. If it is positive, then we use the proof strategy for the case d > 2k; otherwise, we exploit the strategy for the case d < 2k.

We want to show that P(σ) ≥ 0 for all positive σ, which is equivalent to inequality (C8) for all {x_i, y_i} ∈ R^+, i.e., the Cauchy–Schwarz inequality. So the function P(σ) is increasing, and consequently the second inequality holds.

Appendix D. Proof of Lemma 4

As we can see in [14], a bound of the following form exists for c_i, which is true if i is large enough. Since L > d/(2 − 2 ), it can be observed that the bound applies for i ≥ L; hence, for the total approximation error, we obtain the stated estimate.

Figure 1. The mean squared error of the third fast method for approximating the expected logarithm of a CQF as a function of the parameter l.
Figure 2. The average running time (in milliseconds) of the integral method (a), the series method (b), and the third fast method (c) in different dimensions for computing the expected logarithm of the CQF. The red curve in the upper-left plot shows the computational time for computing the eigenvalues of random positive-definite matrices (using the eig function in MATLAB) needed before applying the integral method or the series method. Different curves in the upper-left plot correspond to the computational time of the integral method for different absolute error tolerances, including the time needed for computing the eigenvalues. Different curves for the series method correspond to the computational time for various values of the truncation length of the series.

Figure 3. The absolute error of the approximation of the expected logarithm of the CQF by the fast methods explained in Section 2, for different dimensions. The third method uses a convex combination of the first two methods. The plot on the right shows a zoomed version of the error mean of the third method.

Figure 6. The relation between L and the error in the series method when the parameter is 0.9.
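The discussion of Figures 6 and 7 suggests that the series truncation error decays roughly geometrically in L. Under that simplifying assumption (error of order ρ^L, with ρ the series parameter; this is a rough caricature, not the paper's exact bound (26)), the truncation length needed for a given tolerance follows directly:

```python
import math

def required_truncation_length(rho, tol):
    """Smallest L with rho**L <= tol, assuming a purely geometric
    error decay rho**L (a simplification of the series error bound)."""
    return math.ceil(math.log(tol) / math.log(rho))

# the setting used in Figures 6 and 7: parameter 0.9, tolerance 1e-8
L = required_truncation_length(0.9, 1e-8)
print(L)  # 175 terms under this simplified model
```

This matches the qualitative behavior reported above: as the parameter approaches 1, log(rho) approaches 0 and the required L blows up, while for small parameter values a short truncation suffices.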
Freezing methods affect the characteristics of large yellow croaker (Pseudosciaena crocea): use of cryogenic freezing for long-term storage

Large yellow croaker (Pseudosciaena crocea) is the main coastal economic fish in China. After harvesting, the fish is rarely traded as a fresh product, but is usually held in cold storage. Therefore, it is important to understand the quality changes occurring during preservation. In this study, freshly collected fish were frozen by cryogenic (cabinet liquid nitrogen freezer at -40, -60 and -80 °C) and forced convection (ultra-low temperature freezer at -40 °C) freezing and stored at -18 °C for 6 months. Drip loss, relative moisture loss (RML), water holding capacity (WHC), color and texture of the frozen fish were evaluated. The results showed that forced convection freezing had significantly higher drip loss and RML values compared to cryogenic freezing. WHC decreased dramatically irrespective of the freezing method employed. Cryogenic freezing at -60 and -80 °C gave the highest yellowness values during storage, but the highest springiness, gumminess and shear force values were obtained only at -60 °C. We conclude that cryogenic freezing at -60 °C is appropriate for long-term storage of large yellow croaker.

Introduction

Large yellow croaker (Pseudosciaena crocea) is a delicious, tender-fleshed marine fish. According to the China Fishery Statistical Yearbook, the national aquaculture production of large yellow croaker in 2017 was 165,496 tons (Guo & Zhao, 2017), making it one of the commercially important marine fish species of the country (Liu et al., 2008). In most cases, freshly collected fish is held in cold storage until supplied to consumers. Therefore, storage of fish between catching and entering the supply chain is imperative.
Freezing is a useful technique to preserve fish and other seafood products without significant loss in quality, because it increases the shelf-life of the product and makes stored products accessible over long distances (Leygonie et al., 2012; Makarios-Laham & Lee, 1993; Tolstorebrov et al., 2016). However, if appropriate techniques are not employed, biochemical reactions (microbiological and enzymatic activities) are not completely halted, and samples continue to deteriorate through lipid oxidation, protein denaturation, etc., affecting the quality of the frozen products. Such stored products become unacceptable for consumption (Leygonie et al., 2012). Improper freezing techniques can result in significant damage to stored fish products. Indeed, previous studies have demonstrated that the freezing rate affects the size, distribution and location of ice crystals formed in frozen samples, consequently damaging muscle tissues and resulting in weight loss and quality changes of frozen food products (Añón & Calvelo, 1980; Kono et al., 2017; Wagner & Anon, 1985). Moreover, freezing rates are affected by freezing method, air speed, operating temperature, and product properties (Espinoza Rodezno et al., 2013). Therefore, different freezing methods have been invented to minimize the damage induced by ice crystal formation, thereby satisfying the demands of various types of food products. One technique that has gained momentum in the recent past is cryogenic freezing, i.e. the use of a cryogenic fluid such as liquid nitrogen and/or carbon dioxide with rapid freezing rates. Cryogenic freezing has been reported to maintain higher muscle integrity (Chen & Pan, 1997; Pan & Yeh, 1993), and to improve the texture, color and sensory properties of frozen food products (Agnelli & Mascheroni, 2002; Qian et al., 2018; Streeter & Spencer, 1973).
However, the usefulness of cryogenic freezing is still debated, because stored fish and/or seafood products suffered significant quality loss in some studies (Chen & Pan, 1997; Jiang et al., 2018; Pan & Yeh, 1993). For example, the shelf-life of grass shrimp (Penaeus monodon) frozen with liquid nitrogen and subsequently stored at -20 °C was less than one month (Pan & Yeh, 1993). Likewise, the shelf-life of tilapia (Oreochromis sp.) frozen with liquid nitrogen and stored at -20 °C was predicted to be only 2.7 months (Chen & Pan, 1997). Furthermore, cryogenic freezing showed no improvement in maintaining the microstructure of northern snakehead (Channa argus) fillets, although it reduced the decreases in pH and salt-soluble protein content. Whilst the search for proper storage of fish continues, there is little information concerning the effect of freezing method and storage duration on large yellow croaker. The aim of this paper is to investigate the feasibility of storing large yellow croaker using cryogenic freezing at different temperatures and forced convection freezing. To recommend an appropriate method for storage, we compared these two freezing techniques, stored fish for 6 months, and studied the freezing loss, drip loss, relative moisture loss, water holding capacity, color and texture of large yellow croaker.

Fish materials

Large yellow croaker was purchased from Ningde Jingli Aquatic Products Co., Ltd. (Fujian Province, China) and transported to the University of Shanghai for Science and Technology, Shanghai, China within 12 hours in iced boxes. Immediately after arrival in the lab (108 fish in total), fish length and weight were determined. Fish samples were wrapped in plastic bags and stored at 0 °C in the fridge for 3 hours before being used in further experiments.
Freezing procedures

Forced convection freezing (FCF) was conducted by forced-air freezing at -40 °C in a vertical ultra-low temperature freezer (DW-86L626, Haier Qingdao Special Electrical Appliance Co., Ltd., China). For cryogenic freezing, three freezing temperatures (-40, -60, and -80 °C) were applied independently in a cabinet cryogenic freezer (Praxair Investment Co., Ltd, Shanghai, China). The cryogenic freezer was equipped with one chamber (100 × 100 × 90 cm) and a stainless steel shelving system holding four master trays at four levels (Figure 1). Each shelving unit measured 48 × 60 × 8 cm to accommodate the fish trays. Liquid nitrogen was injected through a transfer line and sprayer nozzle into the freezing chamber from a pressurized tank (Dura-Cyl 160 MP, Praxair Co., Ltd., Shanghai, China). The pressurized tank stored 110 kg of boiling liquid nitrogen and was monitored by the tank-fitted relief valve. The fan/suction ventilator was located in front of the sprayer nozzle and drove the circulation of the cooling medium throughout the chamber. A computer control algorithm (West 6100, Shenzhen Yitahua Electronics Development Co., Ltd, China) regulated and controlled the injection temperature of nitrogen into the chamber via a solenoid valve. After the freezing temperature was set, the tank valve was opened to a pressure of 2.5 MPa. Liquid nitrogen passed through the solenoid valve to the spray nozzle, and the chamber environment was cooled to the freezing temperature. Fish samples were then put inside. Once the chamber temperature was lower than the predetermined temperature, the solenoid valve closed automatically; otherwise, it opened to atomize the liquid nitrogen and maintain the freezing temperature in the chamber.
As the freezing time increased, the low-temperature nitrogen in the chamber accumulated and increased the pressure; the internal excess nitrogen was discharged with the help of the fan and pressure relief pump. Twenty-seven fish per batch were frozen for each freezing treatment. A type K thermocouple (Yokogawa TX10 series, Yokogawa Test & Measurement (Shanghai) Co., Ltd, China) was inserted into the geometric dorsal center of the fish to monitor temperature changes in the samples. All freezing treatments were finished when the fish core temperature reached -20 ± 0.5 °C. Fish samples were vacuum-packaged in polyethylene bags, transferred to a cold storage refrigerator (BD/BC-518, Fujian Chuanglong Electric Appliance Technology Co., Ltd., China), and stored at -18 °C for 6 months. Quality analyses were performed on samples after 0 (immediately after freezing treatments), 3 and 6 months of storage. Before analysis, frozen samples were thawed under flowing tap water for one hour until the core temperature of the large yellow croaker reached 0 °C. Upon thawing, skin color measurements were conducted. Then, the fish skin was peeled, and dorsal muscle was taken for further experiments.

Freezing rate, freezing loss and drip loss

Freezing rate and drip loss were calculated as given in Equations 1 and 2.

Water holding capacity

Water holding capacity of fish muscle (%) was determined following the method described by Luan et al. (2018) with modification. In brief, 3 g of minced fish samples were put into 3 centrifugation tubes and spun at 3000 rpm for 20 min at 4 °C. After centrifuging, the samples were removed from the tubes and the weight difference was calculated. The difference in weight is expressed as a percentage of fresh weight.

Color measurement

Color of samples was assessed with a Chroma Meter CR-400 (Konica Minolta, Inc., Japan) at the fish abdominal skin (the center point between the ventral fin and the anal fin of the abdomen section).
CIE L*, a*, b* values were measured. L* describes the lightness of the sample (L* > 0), a* the intensity in redness (a* > 0), and b* the intensity in yellowness (b* > 0). Color measurement was conducted in five replicates for each group.

Texture

Fish dorsal muscle was cut into 20 × 20 × 10 mm pieces. Textural values of fish from all groups were measured using a TMS-Pro Texture Analyzer (Food Technology Corporation, USA) with the Texture Lab Pro (TL-Pro) software package. A flat-ended cylindrical plunger (1/2" in diameter) was pressed into the fillets perpendicular to the muscle fibers at a constant speed of 5 mm s-1 until it reached 60% of the fillet height. The maximum force to cut the dorsal muscle was recorded as the shear force (N). All measurements were performed on nine muscle pieces per replicate.

Statistical analysis

The statistical significance of observed differences among treatment means was evaluated using SPSS statistics software version 20.0 (IBM Analytics, US). Significant differences between convection and cryogenic freezing were analyzed using analysis of variance (ANOVA) with an LSD post-hoc test at a 95% confidence level (P < 0.05).

Freezing time, freezing rate and drip loss

Fish storage is imperative, and our current understanding suggests that the quality of frozen fish is affected by the temperature/rate at which the samples are frozen and stored (Añón & Calvelo, 1980; Hiner et al., 1945; Mørkøre & Lilleholt, 2007). Freezing temperature/rate can affect the evaporation of water from the product's surface during freezing (Campañone et al., 2001; Rao & Novak, 1977). When a product is frozen rapidly at a lower freezing temperature, the evaporative loss is minimal (Rao & Novak, 1977). The average length and weight of the fish used in the present study were 30.90 ± 0.87 cm and 364.30 ± 23.76 g, respectively.
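The group comparisons throughout the results below rely on the one-way ANOVA described in the Statistical analysis section above. As a minimal illustration of the F-statistic behind such a comparison (a generic sketch, not the authors' SPSS workflow; the data values are made up):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square
    divided by within-group mean square."""
    k = len(groups)                      # number of treatment groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group sum of squares (group means vs. grand mean)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # within-group sum of squares (observations vs. their group mean)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# hypothetical measurements (%) for three freezing treatments
f_stat = one_way_anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [5.0, 6.0, 7.0]])
print(round(f_stat, 3))  # 13.0
```

A large F indicates that between-treatment variation dominates within-treatment variation; the LSD post-hoc test then identifies which pairs of treatments differ.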
Our freezing results indicate that the freezing times and rates of cryogenic freezing at -40, -60, and -80 °C were 20, 15 and 10 min and 120, 240 and 480 °C h-1, respectively; meanwhile, FCF had a freezing time of 360 min and a rate of 6.67 °C h-1. Drip loss is defined as the loss of weight after thawing (Boonsumrej et al., 2007), and is considered the major problem of frozen fishery products (Rao & Novak, 1977). Lower drip loss means that the ice crystals formed in the samples were smaller, which is beneficial for maintaining quality (Anese et al., 2012). Under disparate freezing rates, drip loss differed significantly during frozen storage (Figure 2). In particular, FCF had significantly higher drip loss (0.96-2.27%) than cryogenic freezing at -40, -60 and -80 °C (0.72-2.09%) (Figure 2). Drip loss of cryogenic freezing at -60 °C at 0, 3 and 6 months was significantly lower than at the two other cryogenic freezing temperatures (Figure 2). Freezing temperature/rate can affect drip loss during thawing of fish products (Añón & Calvelo, 1980; Hiner et al., 1945; Espinoza Rodezno et al., 2013). At low freezing rates, large interfibrillar crystals develop and push the muscle fibers into irregular groups, so that water lost during thawing cannot be reabsorbed through the muscle membrane (Añón & Calvelo, 1980). As the freezing rate increases, smaller ice crystals grow at the expense of intracellular water, which can interact again with the muscle protein after thawing (Añón & Calvelo, 1980). Therefore, freezing temperature/rate is the key factor that should be controlled in the freezing process of fish products. In some studies, however, when the freezing rate continues to increase, the proportion of ice frozen within the fiber becomes too high, presumably rupturing the fiber tissues (Hiner et al., 1945).
This is particularly true in studies employing a lower cryogenic freezing temperature (liquid nitrogen at -120 °C), which showed surface cracking and bigger bundle spacing between the muscle fibers of frozen grass shrimp, compared to higher cryogenic freezing temperatures (-80 and -100 °C) (Pan & Yeh, 1993). Therefore, it is vital to utilize proper temperatures when freezing the samples.

Moisture content and Relative Moisture Loss (RML)

Moisture contents of the samples subjected to cryogenic freezing at -40 °C and FCF decreased significantly after six months of storage (Table 1). In addition, moisture contents after cryogenic freezing at -80 °C also decreased slightly, but did not differ significantly after cryogenic freezing at -60 °C (P > 0.05, Table 1). Similar to our results, Espinoza Rodezno et al. (2013) also found that cryogenic freezing of catfish fillets led to only a slight decrease in moisture content after six months of frozen storage. Moreover, the moisture contents of fish samples subjected to cryogenic freezing at -60 °C were significantly higher than those of the other freezing procedures during frozen storage (P < 0.05, Table 1), whilst FCF resulted in the lowest moisture content of all the freezing procedures (P < 0.05, Table 1). These results support the idea that cryogenic freezing can help to hold water in the fish muscle or fillets. After convection or cryogenic freezing, the relative moisture loss of large yellow croaker increased over the frozen storage time (Figure 3). Moreover, the different freezing treatments in our study (convection or cryogenic freezing) resulted in different relative moisture losses of large yellow croaker muscle (see Figure 3). Cryogenic freezing at -60 °C gave the lowest RML values, and FCF the highest (Figure 3). Nevertheless, in a previous study, the RML of cryogenic freezing was not significantly different from that of air-blast freezing after six months of frozen storage (Espinoza Rodezno et al., 2013).
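The exact forms of Equations 1 and 2 did not survive extraction, but the definitions in the text ("drip loss is defined as the loss of weight after thawing"; RML computed from moisture contents) point to standard mass-balance formulas. A hedged sketch (the formula shapes are assumptions, not necessarily the authors' exact equations):

```python
def drip_loss_pct(weight_before_freezing_g, weight_after_thawing_g):
    """Drip loss (%): weight lost on thawing relative to the
    initial (pre-freezing) weight.  Assumed standard definition."""
    return ((weight_before_freezing_g - weight_after_thawing_g)
            / weight_before_freezing_g * 100.0)

def relative_moisture_loss_pct(moisture_initial_pct, moisture_stored_pct):
    """Relative moisture loss (%): decline in moisture content
    relative to the initial moisture content.  Assumed definition."""
    return ((moisture_initial_pct - moisture_stored_pct)
            / moisture_initial_pct * 100.0)

# e.g. a 100.00 g sample weighing 97.73 g after thawing
print(round(drip_loss_pct(100.0, 97.73), 2))  # 2.27
```

On these definitions, the reported FCF drip loss range of 0.96-2.27% corresponds to a loss of roughly 1-2.3 g per 100 g of fish on thawing.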
Water holding capacity

The WHC of large yellow croaker under the different freezing treatments displayed a decreasing trend over the frozen storage time (Figure 4). The WHC of the freezing treatments did not differ significantly at 0 months, but began to differentiate after three months of storage. At 6 months, the WHC values declined drastically in all treatments; moreover, FCF showed the lowest value. Water holding capacity can be correlated to the sensory attributes of fish muscle (Jensen & Jørgensen, 1997) and can easily be affected by freezing injury resulting from the phase transition of water in muscle fibers (Ueng & Chow, 1998). The WHC values in the present study decreased over the frozen storage, mirroring previous studies (Apata, 2014; Gokoglu et al., 2018). A decrease in WHC can render meat products unacceptable (Huff-Lonergan & Lonergan, 2005; Lan et al., 2016). In our study, cryogenic freezing was observed to give higher WHC values than FCF (Figure 4). This might be related to the higher shear force of cryogenic freezing at -60 °C at 6 months, compared to the other freezing treatments.

Color

Skin color is another vital parameter in judging the physiological, behavioral and sensory attributes and the acceptability of fish products (Pavlidis et al., 2006). Large yellow croaker is a marine fish with a characteristic yellow skin color (Guo et al., 2018). There were no significant differences in the values of L*, a*, b* irrespective of freezing treatment when samples were tested immediately after freezing; however, a* values of cryogenic freezing at -60 and -80 °C were higher than those of cryogenic freezing at -40 °C and FCF (Table 2). During frozen storage, the L* values of FCF decreased, whilst those of cryogenic freezing at -40 and -60 °C showed a slight increase (Table 2). At the end of the 6-month storage, the samples subjected to cryogenic freezing had higher L* values compared to FCF (Table 2).
In addition, a* values decreased throughout the storage time for FCF and cryogenic freezing at -40 °C. At 6 months, the a* values of frozen fish subjected to cryogenic freezing at -60 and -80 °C still remained high, whereas the a* values of cryogenically frozen fish at -40 °C and FCF plummeted (Table 2). Moreover, cryogenic freezing at -40 °C and FCF showed a sharp decline in b* values at 6 months, while cryogenic freezing at -60 and -80 °C showed the smallest declining tendency in b* values. Generally, the L* and b* values of large yellow croaker markedly decreased during frozen storage, and the a* value significantly increased (Table 2). At 6 months, lower values of L* and b* were observed in the FCF samples compared to those observed in cryogenic freezing of large yellow croaker. In terms of cryogenic freezing, the color parameters of large yellow croaker samples did not differ significantly in the first three months of frozen storage. At 6 months, the L* value continued to show no significant difference among the cryogenic treatments, and higher a* and b* values were observed in cryogenic freezing at -60 and -80 °C, compared to -40 °C. From these results, it appears that cryogenic freezing at different freezing temperatures does not affect large yellow croaker skin color, at least up to 6 months. To date, there have been few studies addressing the effect of cryogenic freezing on the skin color of fish (Espinoza Rodezno et al., 2013; Sheehan et al., 1998). In catfish fillets, Espinoza Rodezno et al. (2013) observed higher L* during cryogenic freezing compared to air-blast freezing. Moreover, a* and b* values were reported not to be affected by the different freezing methods in that study of catfish fillets under air-blast and cryogenic freezing (Espinoza Rodezno et al., 2013). Similarly, Sheehan et al.
(1998) also showed that the a* value of raw Atlantic salmon (Salmo salar) flesh increased from 6 weeks to 12 weeks during frozen storage at -20 °C, although those authors did not find a significant change in the carotenoid content of astaxanthin-fed fish after frozen storage for 6 and 12 weeks. The results of all these studies indicate that there is no change in skin color attributable to cryogenic freezing.

Texture

The texture profile of fish can vary depending on the fish species and storage duration (Hashimoto et al., 2016; Hernández et al., 2009; Skjervold et al., 2001). In this study, the texture profile of large yellow croaker was measured by cohesiveness, springiness, gumminess, chewiness and shear force values (Table 3). The cohesiveness values of large yellow croaker under FCF dropped immediately after freezing, and continued to decline until 6 months (Table 3). Meanwhile, the cohesiveness values of cryogenic freezing at -60 and -80 °C remained stable, but increased at the end of the six-month storage. Moreover, cohesiveness showed higher values in cryogenic freezing at -60 and -80 °C, compared to FCF (Table 3). In contrast to our findings, Jiang et al. (2018) reported that freezing method (liquid nitrogen or ultra-low-temperature freezer) did not significantly affect the cohesiveness values of northern snakehead during five months of storage. We found no significant differences in springiness, gumminess and chewiness between cryogenic freezing and FCF in the first three months of storage (Table 3). This finding is in accordance with the texture profile parameters of octopus (Octopus vulgaris), for which non-significant differences in springiness, gumminess and chewiness were reported during frozen storage at -18 °C for 30 days (Gokoglu et al., 2018). Collectively, these results point to the possibility that large yellow croaker has special textural characteristics, as noted in octopus. However, this suggestion warrants further in-depth study.
Nevertheless, after 6 months of frozen storage, our results indicate higher chewiness values of large yellow croaker in cryogenic freezing at -40 and -60 °C (but not at -80 °C). Similar to our study, ultra-low temperature (-80 °C) and liquid nitrogen immersion freezing significantly affected the chewiness of northern snakehead stored at -20 °C for five months. Shear force values of cryogenic freezing and FCF did not differ significantly at 0 and 3 months (Table 3). This trend was similar to that for cryogenic and air-blast freezing of chicken fryer halves (Streeter & Spencer, 1973). However, at 6 months, the shear force values of FCF and cryogenic freezing at -40 and -80 °C decreased, reflecting reduced toughness of the samples (Table 3). This may be due to enzymatic and bacterial activities that can decompose muscle proteins (Lan et al., 2016). Additionally, the phase transition of ice to water can lead to softer muscles and reduces the force needed to shear fish muscle (Leygonie et al., 2012). Compared to the other treatments, the shear force value of cryogenic freezing at -60 °C remained high after 6 months (7.20 ± 0.79 N) (Table 3). This shows the potential of cryogenic freezing at -60 °C for long-term storage of large yellow croaker.

Conclusion

In conclusion, this study showed that frozen storage employing cryogenic freezing at -60 °C, followed by storage at -18 °C, resulted in better quality of large yellow croaker after 6 months than FCF. However, cryogenic freezing at different freezing temperatures produced variations in product quality. At 6 months, cryogenic freezing at -60 and -80 °C had the highest yellowness values. Cryogenic freezing at -60 °C showed the superior texture profile, with significantly higher values of springiness, gumminess and shear force over the storage duration.
Yet, further studies are deemed essential to understand the mechanisms by which cryogenic freezing affects large yellow croaker fillets, specifically focusing on water migration, ice crystal formation, the microstructure of fish muscle, lipid oxidation and sensory modification during storage.

Means with different letters (abc) in the same row are significantly different for different freezing treatments (p < 0.05); means with different letters (AB) in the same column are significantly different over the frozen storage time (p < 0.05).
Pulmonary function tests in the preoperative evaluation of lung cancer surgery candidates. A review of guidelines

Before planned surgical treatment of lung cancer, the patient's respiratory system function should be evaluated. According to the current guidelines, the assessment should start with measurements of FEV1 (forced expiratory volume in 1 second) and DLco (carbon monoxide lung diffusion capacity). Pneumonectomy is possible when FEV1 and DLco are > 80% of the predicted value (p.v.). If either of these parameters is < 80%, an exercise test with VO2 max (oxygen consumption during maximal exercise) measurement should be performed. When VO2 max is < 35% p.v. or < 10 ml/kg/min, resection is associated with high risk. If VO2 max is in the range of 35-75% p.v. or 10-20 ml/kg/min, the predicted postoperative values of FEV1 and DLco (ppoFEV1, ppoDLco) should be determined. The exercise test with VO2 max measurement may be replaced with other tests, such as the shuttle walk test and the stair climbing test. The distance covered during the shuttle walk test should be > 400 m. Patients considered for lobectomy should be able to climb 3 flights of stairs (12 m), and those considered for pneumonectomy, 5 flights of stairs (22 m).

Introduction

Lung cancer is currently responsible for the largest number of "neoplastic" deaths among both women and men [1,2]. Surgical treatment is one of the methods for treating this disease. The option of surgical treatment is largely determined by the stage of the neoplasm according to the TNM (tumor, node, metastasis) classification [3]. However, in some patients, the surgical options are also limited by concomitant respiratory system diseases, which impair lung function. One such disease, which most often limits the possibility of resection, is chronic obstructive pulmonary disease (COPD). It is, moreover, a risk factor for lung cancer independent of smoking [4].
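The decision flow quoted in the abstract can be sketched as a simple triage function. This is a schematic illustration of the thresholds stated above only (the function name and return strings are ours, and the branch for VO2 max above 75% p.v. or 20 ml/kg/min is not specified in this excerpt):

```python
import math

def preoperative_assessment(fev1_pct, dlco_pct,
                            vo2max_pct=None, vo2max_ml_kg_min=None):
    """Schematic triage following the thresholds quoted in the abstract.
    Percentages are % of predicted value (p.v.)."""
    if fev1_pct > 80 and dlco_pct > 80:
        return "pneumonectomy possible"
    # FEV1 or DLco below 80% p.v.: an exercise test with VO2 max is required
    if vo2max_pct is None and vo2max_ml_kg_min is None:
        return "exercise test with VO2max measurement needed"
    pct = vo2max_pct if vo2max_pct is not None else math.inf
    absolute = vo2max_ml_kg_min if vo2max_ml_kg_min is not None else math.inf
    if pct < 35 or absolute < 10:
        return "high surgical risk"
    if 35 <= pct <= 75 or 10 <= absolute <= 20:
        return "estimate ppoFEV1 and ppoDLco"
    return "proceed per full algorithm (beyond this excerpt)"

print(preoperative_assessment(90, 85))                 # pneumonectomy possible
print(preoperative_assessment(70, 85, vo2max_pct=30))  # high surgical risk
```

Any real qualification decision of course follows the full ERS/ESTS algorithm (Fig. 1 of the paper) within a multidisciplinary team, not a two-branch check.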
Establishing the respiratory reserve is an important element of qualifying patients for resection of lung parenchyma. The currently binding recommendations of the European Respiratory Society (ERS) and the European Society of Thoracic Surgeons (ESTS) from 2009 underscore the significance of cooperation within multispecialist teams consisting of pneumonologists, thoracic surgeons, oncologists, and radiation therapists in determining the optimal treatment strategy for individual patients [5]. In the proposed algorithm for conducting the qualification procedure, the subsequent function tests are aimed at increasing both perioperative safety and the percentage of patients qualified for surgical treatment. The aim of the study was to present the current guidelines concerning the assessment of respiratory system function before qualifying patients for the surgical treatment of lung cancer and to highlight the changes in relation to previous recommendations.

Preliminary assessment - spirometry and exercise capacity testing

According to the current guidelines of the ERS and ESTS, both spirometry, aimed at measuring FEV1 (forced expiratory volume in 1 second), and the assessment of the diffusing capacity of the lungs (DLco), performed by measuring gas diffusion within the lungs, are recommended during the first stage of qualifying patients for lung parenchyma resection [5]. Previous guidelines, both those published by the American College of Chest Physicians (ACCP) in 2007 and those published by the British Thoracic Society (BTS) in 2001, recommended only spirometry with FEV1 measurement as the preliminary examination [6,7]. The DLco examination was recommended only for patients with lowered FEV1, postexercise dyspnea disproportionate to FEV1, or suspicion of interstitial lung disease [6,7].
The latest guidelines include the DLco measurement in the preliminary function examination, as numerous studies have demonstrated that reduced DLco levels constitute an independent risk factor for increased mortality and perioperative complications, even in patients with normal FEV1 [8-13]. In a large study encompassing 872 patients qualified for lung parenchyma resection, Brunelli et al. found reduced DLco levels in 508 patients (63%) with normal FEV1, which resulted in an increased frequency of perioperative complications. This is one of the reasons for the recommendation that DLco assessment be performed routinely in all candidates for pulmonary parenchyma resection [9]. Spirometry should be conducted in accordance with the ERS/ATS standards [14]. Previously published guidelines established independent boundary values of FEV1 for planned lobectomy and pneumonectomy (1.5 l and 2 l, respectively) [6]. Later guidelines added that FEV1 should not be lower than 80% of the predicted value (p.v.) for both planned lobectomy and pneumonectomy [7]. The current guidelines employ only a percentage threshold, which is 80% of the predicted value for both FEV1 and DLco (Fig. 1) [5]. The preliminary assessment of respiratory function may be concluded if the values of both indices (FEV1 and DLco) exceed 80% p.v. This means that surgical treatment is not burdened with an increased risk of complications and perioperative mortality [5]. When at least one of the parameters (FEV1 or DLco) is lowered (< 80% p.v.), the next stage of the qualification process for lung parenchyma resection should establish the patient's exercise capacity (Fig. 1).

Exercise tests

Exercise tests are widely used in pulmonary diagnostics (including the process of qualifying patients for the surgical treatment of lung cancer) due to their higher prognostic value compared with FEV1 and DLco measurement.
Engaging large groups of muscles, exercise tests significantly load the circulatory and respiratory systems, enabling the estimation of the physiological functional reserve before the planned surgical procedure [15]. Exercise tests should be performed in patients with FEV1 or DLco lower than 80% p.v. (Fig. 1) [5]. The best among them is the cardiopulmonary exercise test (CPET), also known as ergospirometry. A cardiopulmonary exercise test may be performed on a treadmill or, as recommended for respiratory system diseases, on a cycle ergometer [15]. The measure of exercise capacity in ergospirometry is peak oxygen uptake, expressed by the VO2max parameter [15]. Reduced VO2max results in an increased risk of postresection complications [16]. This pertains especially to patients with VO2max < 65% p.v. (or < 16 ml/kg/min) [17]. Brunelli et al. reported that all deaths after lung resections at least as extensive as lobectomy occurred among patients whose VO2max was < 20 ml/kg/min [18].

Low-cost exercise tests

In Poland, CPET is relatively inaccessible and costly, and requires trained personnel. Other, lower-cost methods of assessing exercise capacity include the 6-minute walk test (6MWT), the shuttle walk test (SWT), and the stair climbing test.

The shuttle walk test

During the test, the patient walks between 2 points, which are 10 m apart, at an increasing speed set by a test-specific sound signal. The distance covered during this test correlates well with VO2max [24-26]. Previous guidelines recommended 250 m as the boundary value for increased complication frequency after lung resection [6]. Later reports demonstrated no significant difference in the distance covered during this test by patients with and without postoperative complications [27,28]. On the other hand, it was revealed that some patients covered distances shorter than 250 m in spite of the fact that their VO2max values exceeded 15 ml/kg/min [28].
In the study group, all patients who walked more than 400 m achieved VO2max values exceeding 15 ml/kg/min [28]. As a result, the shuttle walk test is currently recommended as a screening exercise test for patients qualified for resection due to lung cancer [29]. All patients with results over 400 m should undergo additional CPET in order to establish their VO2max [29].

Stair climbing

Stair climbing is another examination recommended for qualifying patients for lung cancer surgery [30,31]. During this simple test, the patient climbs flights of stairs, thus ascending a certain height over a number of floors. As demonstrated by Brunelli et al., patients who climb less than 12 m (3 floors) are twice as likely to suffer from complications, their mortality increases 13-fold, and the costs of their treatment rise 2.5-fold in comparison with patients who can climb 22 m (5 floors) [32]. Thus, the stair climbing test may be employed as a screening examination for the identification of patients who are able to ascend 22 m (5 floors) of stairs and can, therefore, undergo lung resection up to pneumonectomy without increased risk [32]. Recently, reports have been published pointing to the significance of the speed of stair climbing with regard to the frequency of complications after lung resection procedures (the majority of the studied patients were operated on for non-neoplastic reasons) [33,34]. It was demonstrated that climbing 20 m within 80 s (a speed of ≥ 15 m/min) correlates well with VO2max [33]. All patients who achieved a result of less than 80 s had VO2max above 20 ml/kg/min [33]. Similarly, Ambrozini et al. demonstrated that climbing 12.16 m in a time exceeding 37.5 s indicates a higher risk of complications after thoracic surgery procedures [34]. Perhaps a larger number of reports assessing the stair climbing speed of patients qualified for resection due to lung cancer will result in the preparation of appropriate expert guidelines.
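The speed threshold cited above (20 m in 80 s corresponding to 15 m/min) is simple arithmetic; a minimal helper makes the conversion explicit. The function name is an assumption introduced for illustration.

```python
def climb_speed_m_per_min(height_m, time_s):
    """Stair-climbing ascent speed in metres per minute."""
    return height_m / (time_s / 60.0)

# 20 m climbed in 80 s is exactly the 15 m/min threshold cited in the text.
print(climb_speed_m_per_min(20, 80))  # 15.0
```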
Predicted postoperative values of FEV1 and DLco (ppoFEV1, ppoDLco)

After exercise tests, the next stage of preoperative function evaluation consists of calculating the predicted postoperative values of FEV1 and DLco [5]. Moreover, the latest guidelines include the calculation of predicted postoperative oxygen consumption (ppoVO2max) [5]. This pertains to patients in whom ppoFEV1 or ppoDLco is lower than 30% p.v. (in previous guidelines, the boundary value of ppoFEV1 and ppoDLco for qualifying patients for resection was 40% p.v.) [6,7]. Considering the improving standards of perioperative care and modern, less invasive surgical techniques, the boundary value of ppoFEV1 and ppoDLco was lowered to 30% p.v. [5]. The formulae enabling the calculation of predicted postoperative values are similar for FEV1, DLco, and VO2max [35]. When lobectomy is planned, the values of ppoFEV1, ppoDLco, and ppoVO2max are calculated with consideration of the number of all resected segments (Formula 1) or of obstructed segments, whose number can be assessed using CT or bronchoscopy (Formula 2) [35]. In the case of pneumonectomy, scintigraphy and an assessment of perfusion within the resected lung are required [5-7].

New indicators in the preoperative assessment of lung cancer patients

The assessment of physical activity by means of simple tests evaluating daily energy expenditure correlates with perioperative mortality among adults [36]. Measuring motor activity with a pedometer facilitates the determination of the degree of exercise capacity impairment resulting from COPD or other diseases [37,38]. Its usefulness in predicting negative effects of surgery, including thoracic surgery, requires further study. The slope of the VE/VCO2 curve obtained during CPET is an independent indicator of mortality in patients with moderate and severe COPD undergoing resection due to non-small-cell lung carcinoma [39].
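The text cites Formulae 1 and 2 for predicted postoperative values without reproducing them. The segment-counting form widely used in the literature scales the preoperative value by the fraction of functioning segments that remain after resection, with the lung conventionally counted as 19 segments. The sketch below is a hedged illustration, not the authors' exact formulae; the function name and the assumption that all obstructed segments lie within the resected territory are mine.

```python
TOTAL_SEGMENTS = 19  # conventional total number of bronchopulmonary segments

def ppo_value(preop_value, segments_resected, segments_obstructed=0):
    """Predicted postoperative value (usable for ppoFEV1, ppoDLco, or ppoVO2max).

    Scales the preoperative value by the fraction of functioning segments
    left after resection. Obstructed segments (assessed by CT or
    bronchoscopy) are excluded from both counts, echoing Formula 2 in the
    text; this sketch assumes the obstructed segments are all within the
    resected territory.
    """
    functioning_total = TOTAL_SEGMENTS - segments_obstructed
    functioning_removed = segments_resected - segments_obstructed
    return preop_value * (1 - functioning_removed / functioning_total)
```

For instance, resecting 5 functioning segments from a patient with FEV1 at 60% p.v. gives `ppo_value(60, 5)`, i.e. 60 x 14/19, roughly 44% p.v.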
The slope of VE/VCO2 is a better predictive factor than VO2max with regard to both mortality and perioperative complications in patients undergoing lung resection [40]. The increasing number of reports concerning the usefulness of these indicators in evaluating the risk of complications after lung cancer surgery may well result in their routine use in preoperative evaluation in the future.

Special circumstances in the preoperative assessment of lung cancer patients

It should be remembered that surgical treatment of lung cancer is also possible in patients with significantly reduced lung function parameters (as long as FEV1 and DLco are not lower than 20% p.v.) if the tumor is located in upper lobes affected by emphysematous changes [7,41]. In such cases, concurrent resection of emphysematous bullae serves the role of lung volume reduction surgery in COPD and improves lung ventilation. Another exception occurs when exercise tests cannot be performed by a patient with orthopedic ailments, e.g. advanced degenerative changes of the hip or knee joints; in such a case, only the calculation of predicted postoperative lung function values is required.

Conclusions

The qualification of lung cancer patients for resection is a multistage process. Qualifying for the procedure requires not only meeting histopathological and radiological criteria, but also adequate lung ventilation and diffusion capacity, which in doubtful cases must be verified by assessing the patient's exercise capacity. Proper qualification is aimed at both increasing perioperative safety and raising the percentage of patients qualified for lung resection due to cancer.

Disclosure

The authors report no conflict of interest.
2016-05-12T22:15:10.714Z
2014-09-01T00:00:00.000
{ "year": 2014, "sha1": "6c42142830b14b2a5af6d1ccc0412aa019ed4991", "oa_license": "CCBYNCND", "oa_url": "https://www.termedia.pl/Journal/-40/pdf-23592-10?filename=08_Trzaska-Sobczak.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6c42142830b14b2a5af6d1ccc0412aa019ed4991", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
92997234
pes2o/s2orc
v3-fos-license
Caulobacter crescentus Adapts to Phosphate Starvation by Synthesizing Anionic Glycoglycerolipids and a Novel Glycosphingolipid.

Bacteria adapt to environmental changes in a variety of ways, including altering their cell shape. Caulobacter crescentus adapts to phosphate starvation by elongating its cell body and a polar stalk structure containing both inner and outer membranes. While we generally think of cellular membranes as being composed largely of phospholipids, cellular elongation occurs when environmental phosphate, and therefore phospholipid synthesis, is limited. In order to adapt to these environmental constraints, C. crescentus synthesizes several glycolipid species, including a novel glycosphingolipid. This finding is significant because glycosphingolipids, while ubiquitous in eukaryotes, are extremely rare in bacteria. In this paper, we identify three proteins required for GSL-2 synthesis and demonstrate that they contribute to phage resistance. These findings suggest that bacteria may synthesize a wider variety of lipids in response to stresses than previously observed.

A common bacterial adaptation is the adjustment of membrane fluidity. For bacteria such as Escherichia coli, cells incorporate an increasing proportion of unsaturated fatty acids as temperatures decrease (1,2); the kinks introduced by acyl chain unsaturation decrease membrane viscosity to counteract the effects of lower temperature. Similarly, a variety of Gram-positive and Gram-negative bacteria alter the ratio of phospholipid headgroups in response to osmotic shock (3); E. coli increases the ratio of cardiolipin to phosphatidylethanolamine when osmotically stressed (4). Oligotrophic bacteria require adaptations to stresses associated with nutrient availability. For example, nutrient levels in freshwater lakes experience seasonal fluctuations, and phosphate concentration has been shown to be a limiting factor for bacterial growth (5).
The oligotrophic Gram-negative bacterium Caulobacter crescentus responds to phosphate limitation by dramatically elongating its cell body and a polar stalk structure, a thin extension of the cell envelope consisting of an inner membrane, a peptidoglycan cell wall, an outer membrane, and a surface layer (6) (Fig. 1A). The stalk has been hypothesized to serve as a means to increase phosphate uptake (7), since all four members of the PstSCAB high-affinity phosphate import pathway are found in the stalk (7,8). Additionally, analytical modeling of nutrient diffusion suggests that stalk elongation is the most efficient method of increasing nutrient flux to the cell while minimizing cell surface area and volume (8). Under phosphate-rich growth conditions, cells are approximately 1 µm in length, stalks are very short (~100 nm), and phosphatidylglycerol (PG) accounts for approximately 30% of total lipids (9). Upon phosphate starvation, cell bodies and stalks can grow up to 3.5 µm and 15 µm in length, respectively, which requires significant production of new lipids to build the inner and outer membranes. When phosphate is limited, it is unlikely that this new membrane contains phospholipids; therefore, we hypothesized that C. crescentus synthesizes alternative lipids for cellular and stalk elongation. Several alphaproteobacteria adapt to phosphate limitation by increasing the production of glyceroglycolipids and ornithine lipids. For example, Agrobacterium tumefaciens synthesizes monoglucosyl diacylglycerol (DAG), glucuronosyl diacylglycerol, and diacylglycerol trimethylhomoserine (DGTS) (10,11), while Mesorhizobium loti produces di- and triglycosyldiacylglycerols, DGTS, and ornithine lipid (12). Glycolipids make up a large proportion of the C. crescentus membrane even in phosphate-rich growth media (45% to 62%) (9), but phosphate-mediated changes in lipid composition have not been characterized. We hypothesized that during phosphate limitation C.
crescentus either (i) increases the proportion of existing glycolipids or (ii) synthesizes novel lipid species to replace phospholipids. Analysis of total membrane composition following phosphate limitation revealed that both hypotheses were correct. C. crescentus increases the amount of monohexuronosyl DAG (MHDAG) and synthesizes a novel hexosyl-hexuronosyl-ceramide glycosphingolipid (HexHexA-Cer). This glycosphingolipid (GSL) represents a novel bacterial lipid species. In this report we characterize this GSL, identify the enzymes responsible for initiating ceramide synthesis and its sequential glycosylation, and address the physiological importance of ceramide-based lipids.

RESULTS

Phosphate limitation induces changes in membrane composition. Phosphate starvation induces elongation of both the cell body and stalk in C. crescentus (Fig. 1A). If we approximate the shapes of the cell body and stalk as cylinders, we can estimate that the total surface area of the cell, and thus the synthesis of membrane lipids, increases 6- to 7-fold upon phosphate limitation. This significant increase in membrane area suggests that C. crescentus must be able to produce alternatives to phospholipids during phosphate starvation. Indeed, when we compared the total lipid composition under phosphate-rich (1 mM phosphate) and phosphate-starved (1 µM phosphate) conditions by using normal phase liquid chromatography-tandem mass spectrometry (LC-MS/MS), we saw a dramatic decrease in phosphatidylglycerol and a corresponding increase in several species of glycolipids (Fig. 1B). In particular, phosphate starvation increased the amount of MHDAG and induced the synthesis of HexHexA-Cer (Fig. 1B). These lipid species were identified by exact mass measurement in conjunction with collision-induced dissociation (CID) tandem mass spectrometry. The presence of a GSL was unexpected since sphingolipids, while highly abundant in eukaryotes, are rarely found in bacteria.
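The 6- to 7-fold surface-area estimate above can be reproduced with a back-of-the-envelope cylinder calculation. The lengths come from the text (cell body 1 µm growing to 3.5 µm; stalk ~0.1 µm growing to 15 µm); the radii below are assumptions chosen for illustration only.

```python
import math

def lateral_area(radius_um, length_um):
    """Lateral surface area of a cylinder, in square micrometres."""
    return 2 * math.pi * radius_um * length_um

# Lengths from the text; radii are assumed (~0.4 um cell body, ~0.1 um stalk).
before = lateral_area(0.4, 1.0) + lateral_area(0.1, 0.1)
after = lateral_area(0.4, 3.5) + lateral_area(0.1, 15.0)
print(after / before)  # roughly a 7-fold increase
```

With these assumed radii the ratio comes out near 7, consistent with the 6- to 7-fold range quoted in the text; thinner or thicker stalks shift the estimate within that range.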
Some species in the Bacteroides, Porphyromonas, and Prevotella genera are capable of synthesizing the phosphosphingolipids ceramide phosphorylethanolamine and/or ceramide phosphoglycerol (13). Non-phosphate-containing GSLs have been described only for the family Sphingomonadaceae, in which they can function as a substitute for lipopolysaccharide (LPS) (14). Structural analyses of Sphingomonas GSLs have revealed carbohydrate moieties containing 1, 3, or 4 sugar units (Fig. 1C) (14,15). In contrast, the GSL found in C. crescentus has two sugars and thus represents a novel bacterial GSL species, which we named GSL-2 (Fig. 1C).

Ceramide synthesis in C. crescentus. Sphingolipid synthesis begins with the production of a ceramide molecule, which is then modified with various polar groups. In eukaryotes, ceramide synthesis is a four-step process that begins with the condensation of serine and a fatty acyl coenzyme A (acyl-CoA) to produce 3-oxo-sphinganine in a reaction catalyzed by an oxoamine synthase (16). While each of the enzymatic steps in ceramide synthesis has been well characterized in eukaryotes (17), only the oxoamine synthase enzyme required for the first step appears to be conserved in bacteria. Indeed, bacterial species have been identified encoding one or more oxoamine synthases, including serine palmitoyltransferase (Spt), 8-amino-7-oxononanoate synthase (BioF), 5-aminolevulinate synthase (HemA), and 2-amino-3-oxobutyrate coenzyme A ligase (Kbl) (18). C. crescentus encodes three putative oxoamine synthases: CCNA_01220 (BioF), CCNA_01417 (HemA), and CCNA_01647 (BioF). To assess the role of the candidate oxoamine synthases in ceramide synthesis, total lipid composition was analyzed in wild-type, Δccna_01220, and Δccna_01647 cells. We were unable to obtain a deletion of ccna_01417, since this is an essential gene (19); however, recent biochemical analysis of CCNA_01417 reveals that it is most likely involved in the production and regulation of heme cofactors (20).
Ceramides were completely absent in the Δccna_01220 strain (Fig. 2); therefore, we refer to this gene as ccbF, for Caulobacter crescentus BioF. Complementation of the ccbF deletion restored ceramide synthesis (see Fig. S1A in the supplemental material). Interestingly, deletion of the BioF homolog CCNA_01647 had no effect on ceramide synthesis, despite the fact that the protein has 51% similarity and 35% identity to CcbF (Fig. 2). We note that while the ccna_01647 deletion did not affect ceramide levels, we observed a reduction in MHDAG synthesis (Fig. 2), though the mechanism for this decrease is unknown.

Glycosphingolipid synthesis requires two sequential glycosyltransferases. GSL-2 has a novel glycosylation pattern consisting of a hexose and a hexuronic acid (Fig. 1C). Lipid glycosylation, in both eukaryotes and prokaryotes, is performed by GT-4 family glycosyltransferases; C. crescentus encodes 11 putative GT-4 glycosyltransferases (21). To identify the glycosyltransferases required for GSL-2 synthesis, we narrowed down the list of candidate genes using the following criteria: (i) genes upregulated upon phosphate starvation (22), (ii) nonessential genes (19), and (iii) genes without a direct homolog in E. coli, since E. coli does not produce GSLs. Of the 11 initial candidates, only 2 genes fit all three criteria: ccna_00792 and ccna_00793 (Fig. 3A and Table S4). Both candidate genes are part of the PhoB regulon, which contains genes upregulated under phosphate starvation conditions (Fig. 3D). Complementation of these deletions recovered ceramide glycosylation (Fig. S1B and C). These data support a model of sequential ceramide glycosylation by CCNA_00793 and CCNA_00792, which we name sphingolipid glycosyltransferases 1 and 2 (Sgt1 and Sgt2), respectively (Fig. 3E). Sgt1 functions as a glucuronosyltransferase, adding a hexuronic acid, while Sgt2 is a glycosyltransferase responsible for adding a hexose sugar.
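The three-way candidate filter described above is, in effect, a set intersection followed by two subtractions. The sketch below illustrates that screening logic; the gene sets are placeholders invented for the example (only the two genes named in the text are known to satisfy all three criteria), not the study's actual gene lists.

```python
# Placeholder gene sets illustrating the screening logic from the text.
gt4_glycosyltransferases = {"ccna_00792", "ccna_00793", "ccna_01234", "ccna_02345"}
upregulated_on_phosphate_starvation = {"ccna_00792", "ccna_00793", "ccna_01234"}
essential_genes = {"ccna_01234"}          # criterion (ii): exclude essentials
has_ecoli_homolog = {"ccna_02345"}        # criterion (iii): exclude E. coli homologs

# Keep upregulated GT-4 genes, then drop essential genes and E. coli homologs.
candidates = (gt4_glycosyltransferases
              & upregulated_on_phosphate_starvation) - essential_genes - has_ecoli_homolog
print(sorted(candidates))  # ['ccna_00792', 'ccna_00793']
```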
Sgt1 and Sgt2 appear to specifically glycosylate sphingolipids, as neither deletion affected the glycosylation of DAG-based lipids (Fig. 3C and D). The specificity of Sgt1 and Sgt2 was further confirmed by heterologous expression in E. coli, in which we could detect only nonglycosylated lipids (Fig. S2A and B). While GSL-2 synthesis occurs in response to phosphate starvation, neither ceramides nor GSL-2 appears to be necessary for stalk biogenesis or cell elongation (Fig. S3A). This is likely due to the sufficiency of upregulated DAG lipids under low-phosphate conditions (Fig. 1B). Unlike GSL-2, nonglycosylated ceramides are produced across a wide range of phosphate concentrations, albeit at lower levels in the presence of excess phosphate (Fig. S3B). Surprisingly, ccbF mRNA levels are reduced during phosphate starvation, despite higher levels of ceramide production (Fig. S3C). Restriction of ceramide glycosylation to growth environments in which phospholipid synthesis is limited appears to be critical for membrane homeostasis. Overexpression of Sgt1 and Sgt2 in high-phosphate media results in the production of both HexA-Cer and GSL-2 (Fig. 4A to C). Unlike physiological glycosphingolipid production during phosphate starvation, Sgt1 and Sgt2 overexpression leads to an accumulation of HexA-Cer (compare Fig. 1B to Fig. 4A). Furthermore, we did not detect HexHexA-DAG under these conditions, providing additional evidence that Sgt1 and Sgt2 specifically glycosylate ceramide lipids (Fig. 4C). The production of glycosphingolipids under high-phosphate conditions leads to cell lysis, as assessed by propidium iodide staining (Fig. 4D). GSL-producing Sphingomonas species are resistant to the antibiotic polymyxin B (24); therefore, we tested whether ceramides or GSL-2 conferred similar resistance on C. crescentus. Surprisingly, the deletion of ccbF increased resistance to polymyxin B under both high- and low-phosphate conditions (Fig.
5A and B). Under low-phosphate conditions, the Δsgt1 strain did not have increased polymyxin B resistance, demonstrating that the resistance phenotype was specifically due to the presence of ceramide lipids, regardless of their glycosylation (Fig. 5B). Under both phosphate conditions, complementation of ccbF restored sensitivity to polymyxin B (Fig. 5A and B). Modifying the bacterial envelope structure and composition can also affect cellular interactions with bacteriophages. Phage ΦCr30 infects C. crescentus by attaching to the extracellular surface layer (S-layer) (25). Growth curves of wild-type and ccbF deletion strains infected with ΦCr30 in peptone-yeast extract (PYE) demonstrated that ceramides are important for increasing phage resistance (Fig. 5C); this initial screen was performed in PYE because ΦCr30 infections are generally inhibited in minimal media (26). The increased susceptibility of the ΔccbF strain could be attributed to either enhanced phage adsorption or an increased viral burst size. We tested the ability of ΦCr30 to adsorb to wild-type, ΔccbF, and Δsgt1 cells in Hutner base-imidazole-glucose-glutamate medium (HIGG) with 1 µM phosphate. Both of the mutant strains had an enhanced rate of phage adsorption, which was restored to normal upon complementation (Fig. 5D), suggesting that mature GSL-2 is required to inhibit phage adsorption. Measurements of burst size did not reveal any differences between the strains in low-phosphate medium (Fig. 5E); however, the ccbF and sgt1 deletion strains appear to have shorter latent periods, consistent with faster phage adsorption to these cells (Fig. 5E). Phage ΦCr30 attaches to C. crescentus by binding to the cell envelope S-layer, a crystalline lattice composed of the protein RsaA (27). The S-layer is, in turn, anchored to the cell through interactions with the O-antigen domain of lipopolysaccharide (LPS) (28).
A comparison of LPS and S-layer production in the wild type and GSL mutants did not reveal any remarkable distinctions between the strains (Fig. S4A and B), suggesting that the enhanced phage adsorption observed in the GSL mutants is not due to an increase in S-layer production. The accessibility of the S-layer to phage is restricted by the production of an exopolysaccharide (EPS) capsule (29). To test whether the absence of ceramides disrupts EPS production, C. crescentus strains were grown in HIGG-1 µM phosphate and incubated with fluorescein isothiocyanate (FITC)-labeled dextran (29). The wide zones of exclusion around the wild-type and ΔccbF cells show that they produce EPS, in contrast to the non-EPS-producing ΔMGE strain (29) (Fig. S4C).

DISCUSSION

C. crescentus adapts to phosphate limitation, in part, by dramatically elongating both its cell body and polar stalk appendage (7,30) (Fig. 1A), requiring a significant amount of lipid synthesis. Without the environmental phosphate required for phospholipid synthesis, C. crescentus upregulates the production of several glycolipid species, including a novel glycosphingolipid, GSL-2 (Fig. 1B). In this study, we identified three enzymes involved in GSL production: CcbF is responsible for the first step of ceramide synthesis (Fig. 2), while Sgt1 and Sgt2 sequentially glycosylate ceramide to yield GSL-2 (Fig. 3C to E). Upregulation of glycolipid synthesis in response to phosphate limitation has been previously described for Agrobacterium tumefaciens and Mesorhizobium loti (11,12). In these species, cells produce nonphosphorus glycosyl-DAGs. While C. crescentus also produces mono- and diglycosyl-DAGs, this is the first demonstration of bacterial GSL synthesis in response to phosphate starvation. While GSLs are found ubiquitously in eukaryotic organisms, their presence in bacteria was thought to be limited to species of the family Sphingomonadaceae.
In Sphingomonas species, GSLs are used as a substitute for LPS in the outer membrane and contain 1, 3, or 4 sugar units (14,15). Sphingomonas wittichii strain RW1 produces two different monoglycosylated GSLs in place of LPS (31). Not surprisingly, the gene for serine palmitoyltransferase, which catalyzes the first step of ceramide synthesis, is an essential gene in S. wittichii (32). In contrast, C. crescentus GSL synthesis genes are nonessential, and GSL-2 is produced even in the presence of LPS (Fig. S4A). Furthermore, ablation of ceramide or GSL-2 has no effect on proliferation (Fig. 5A and B) or cellular elongation (Fig. S3A). Thus, we conclude that while GSL production occurs under conditions of phosphate limitation, it is dispensable for cell elongation, stalk synthesis, and survival. This is likely due to the presence of sufficient glycosylated DAGs to compensate for the loss of GSLs. This study is the first to identify bacterial glycosyltransferase enzymes required for ceramide glycosylation. As expected, BLAST homology searches (33) demonstrate that outside the Caulobacteraceae family, Sgt1 and Sgt2 are most homologous to glycosyltransferases in the GSL-producing Sphingomonadaceae family. Unlike many other bacterial glycosyltransferases, which demonstrate a high degree of promiscuity regarding sugar acceptors (34,35), Sgt1 appears to have a high degree of specificity toward ceramide glycosylation. Deletion of sgt1 in C. crescentus has no effect on glycosyldiacylglycerol production (Fig. 3C), and heterologous expression of Sgt1 and Sgt2 in E. coli does not lead to lipid glycosylation (Fig. S2). While Sphingomonas species use GSLs to replace LPS, the role of GSLs in C. crescentus is less clear. Ceramide synthesis occurs over a wide range of phosphate concentrations, yet mature GSLs are produced only during phosphate starvation. Complete deletion of ceramides appears to alter the function of C.
crescentus membranes, resulting in increased resistance to the lipid-interacting antibiotic polymyxin B and increased sensitivity to phage-mediated killing (Fig. 5A to C). These effects occur despite the absence of gross changes to LPS, S-layer, or EPS production (Fig. S4A to C). Resistance to cationic antimicrobial peptides like polymyxin B often occurs by reducing the negative charge of the membrane to prevent binding; for example, in E. coli, lipid A is modified with 4-amino-4-deoxy-L-arabinose (36) to neutralize charge. In C. crescentus, the impact of ceramide or GSL deficiency on total membrane charge is less clear; nonglycosylated ceramides are neutral, while the hexuronic acid found in GSL-2 is anionic. Therefore, the relative abundances of all lipid species would be required to assess the role of membrane charge in antibiotic resistance. The increased susceptibility of ceramide-depleted cells to phage lysis appears to be due to enhanced phage adsorption to the ccbF and sgt1 deletion strains (Fig. 5D). Increased adsorption reduces the phage latency period without affecting the phage burst size (Fig. 5E). Although the abundance of S-layer protein was not affected in the GSL mutants (Fig. S4B), recent biophysical studies have shown that the S-layer protein RsaA can exist on the cell surface in either a crystalline or aggregated state (37). This is consistent with cryo-electron tomography showing distinct regions of S-layer organization in intact C. crescentus cells (38). While we do not know exactly how phage ΦCr30 binds to the S-layer, it is possible that GSLs affect S-layer organization, rather than production, thereby regulating phage interactions.

MATERIALS AND METHODS

Bacterial strains, plasmids, and growth conditions. The strains, plasmids, and primers used in this study are described in Tables S1, S2, and S3, respectively. Details regarding strain construction are available in the supplemental material. C.
crescentus wild-type strain NA1000 and its derivatives were grown at 30°C in peptone-yeast extract (PYE) medium (39) for routine culturing. To control phosphate levels, C. crescentus was grown in Hutner base-imidazole-glucose-glutamate medium (HIGG) with variable amounts of phosphate (1 to 1,000 µM) (40). E. coli strains were grown at 37°C in LB medium. When necessary, antibiotics were added at the following concentrations: kanamycin, 30 µg/ml in broth and 50 µg/ml in agar (abbreviated 30:50) for E. coli and 5:25 for C. crescentus; ampicillin, 50:100 for E. coli; tetracycline, 12:12 for E. coli and 1:2 for C. crescentus; gentamicin, 15:20 for E. coli and 0.5:5 for C. crescentus; and spectinomycin, 50:50 for E. coli and 25:100 for C. crescentus. Gene expression was induced in C. crescentus with either 0.3% (wt/vol) xylose or 0.5 mM vanillate. E. coli gene expression was induced with isopropyl-β-D-1-thiogalactopyranoside (IPTG; 1 mM). Phage titering was performed by adding 1 to 10 µl of ΦCr30 to 100 µl of an overnight culture of NA1000 in PYE. This mixture was added to 4 ml of soft agar (0.3% [wt/vol] agar in PYE) and overlaid on a PYE-agar plate. After solidifying, the plate was incubated overnight at 30°C and plaques were counted.

Microscopy and image analysis. Cells were spotted onto 1% agarose pads made in the corresponding growth medium. Phase microscopy was performed on a Nikon TiE inverted microscope equipped with a Prior Lumen 220PRO illumination system, a Zyla sCMOS 5.5-megapixel camera, a CFI Plan Apochromat 100× oil immersion objective (numerical aperture [NA] of 1.45 and working distance [WD] of 0.13 mm), and NIS Elements software for image acquisition. Cell and stalk dimensions were measured using Morphometrics (41) and ImageJ v. 1.48q (NIH), respectively. To measure membrane permeability, cells were grown in the presence of 1 µg/ml of propidium iodide. EPS production was assessed as previously described (29).
Briefly, 500 microliters of cells grown in HIGG-1 M phosphate were collected by centrifugation (14,000 ϫ g, 5 min), and the pellet was resuspended in 30 l of 0.5ϫ phosphatebuffered saline (PBS). Ten microliters of the cell suspension was mixed with 5 l of FITC-dextran (10 mg/ml; molecular weight [MW], 2 MDa; Sigma) and 1 l of SlowFade Diamond mountant (Thermo Scientific). Two microliters of this mixture was spotted onto a glass slide, coverslipped, and sealed with vaseline-lanolin-paraffin (VALAP; 1:1:1) for imaging. qRT-PCR. RNA was extracted from bacterial cultures using the Qiagen RNeasy kit. Following DNase digestion, RNA (5 ng/l) was reverse transcribed using a high-capacity cDNA reverse transcription (RT) kit (Applied Biosystems). One microliter of cDNA was used as a template in a 10-l quantitative RT-PCR (qRT-PCR) performed with Power SYBR reagent (Applied Biosystems). qRT-PCR was performed on an ABI QuantStudio 6 using the threshold cycle (ΔΔC T ) method. rpoD expression was used as the loading control. Lipid extraction. C. crescentus strains were grown in 500 ml of HIGG with either 1 mM or 1 M phosphate until reaching stationary phase. Sgt1 and -2 E. coli expression strains were grown overnight in 500 ml of LB medium with 1 mM IPTG to induce protein expression. Lipids were extracted by the method of Bligh and Dyer (42). Cells were harvested in glass tubes at 10,000 ϫ g for 30 min, and the majority of the supernatant was removed; stalked C. crescentus organisms are very buoyant and do not form tight pellets, preventing the complete removal of supernatant. The cells were resuspended in the residual supernatant, 3.75 volumes of 1:2 (vol/vol) chloroform-methanol was added, and the samples were mixed by vortexing. Chloroform (1.25 volumes) and water (1.25 volumes) were added sequentially with vortexing to create a two-phase system, and the samples were centrifuged at 200 ϫ g for 5 min at room temperature. 
The bottom, organic phase was transferred to a clean tube with a Pasteur pipette and washed twice in "authentic" upper phase. Subsequently, the residual organic phase with the lipids was collected and dried under argon. LC-ESI-MS/MS. Methods for liquid chromatography-electrospray ionization-tandem mass spectrometry (LC-ESI-MS/MS) have been described previously (43,44). Briefly, normal-phase LC was performed on an Agilent 1200 quaternary LC system equipped with an Ascentis Silica high-performance liquid chromatography (HPLC) column, 5 µm, 25 cm by 2.1 mm (Sigma-Aldrich, St. Louis, MO). The LC eluent (with a total flow rate of 300 µl/min) was introduced into the ESI source of a high-resolution Triple-TOF5600 mass spectrometer (Applied Biosystems, Foster City, CA). Instrumental settings for negative-ion ESI and MS/MS analysis of lipid species were as follows: IS, -4,500 V; CUR, 20 lb/in²; GSI, 20 lb/in²; DP, -55 V; and FP, -150 V. The MS/MS analysis used nitrogen as the collision gas. Data analysis was performed using Analyst TF1.5 software (Applied Biosystems). Growth curve analysis. For polymyxin B sensitivity assays, C. crescentus cells were diluted to an optical density at 660 nm (OD660) of 0.05 in HIGG with 1 mM or 1 µM phosphate, treated with 30 µg/ml of polymyxin B, and incubated with shaking (250 rpm and 30°C). Complementation gene expression was induced with 0.3% xylose. Samples were taken at the desired times for absorbance measurements (OD660). For phage sensitivity assays, cells were grown in PYE and diluted to an OD660 of 0.05. A total of 148 µl of culture was dispensed per well in a 96-well plate. To each well, 2 µl of water (control) or ΦCr30 (final concentration, 5 × 10^5 PFU/ml) was added. To prevent evaporation, each well was overlaid with 100 µl of mineral oil. OD660 was recorded every 20 min on a BMG Labtech CLARIOstar plate reader with incubation at 30°C with continuous shaking. Phage adsorption and burst size quantification.
Phage adsorption and burst size were measured essentially as previously described (25). To measure phage adsorption, cells were grown in HIGG-1 µM phosphate and diluted to an OD660 of 0.2. Cells (1 ml) were aliquoted into glass culture tubes, and 10^5 PFU of ΦCr30 was added. Cultures were incubated with shaking at 30°C; at various time points, 10 µl of culture was removed and diluted into 1 ml of water-chloroform (9:1 [vol/vol]). Ten microliters of this mixture was used to titer the unbound phage as described above. To measure viral burst size, cells were grown in HIGG-1 µM phosphate and diluted to an OD660 of 0.1. Cells (0.5 ml) were infected with 0.5 × 10^5 PFU of ΦCr30 and incubated at 30°C for 15 min. The culture was diluted 1,000-fold into HIGG-1 µM phosphate; a 200-µl aliquot was removed every 15 min for titering as described above. SDS-PAGE and protein staining. E. coli strains were grown overnight in LB medium with 1 mM IPTG to induce protein expression. A total of 500 µl of each strain was collected by centrifugation (6,000 × g for 2 min) and the pellet was resuspended in 100 µl of sample buffer. Protein samples were resolved on a 12% SDS-PAGE gel and stained with Coomassie blue for visualization. LPS purification and analysis. Lipopolysaccharide (LPS) was purified essentially as previously described (45). Briefly, 5 ml of C. crescentus cells grown in HIGG-1 µM phosphate (OD660 = 0.5) was collected and washed once in 10 mM HEPES (pH 7.2). Cells were resuspended in 250 µl of TE buffer (10 mM Tris, 1 mM EDTA [pH 7.2]) and frozen overnight at -20°C. Cells were thawed, treated with 1 µl of DNase (0.5 mg/ml), 20 µl of lysozyme (10 mg/ml), and 3 µl of MgCl2 (1 M), and incubated at room temperature for 15 min. For each sample, 36.25 µl was mixed with 12.5 µl of 4× SDS sample buffer and boiled at 100°C for 10 min. After cooling to room temperature, 1.25 µl of proteinase K (20 mg/ml) was added and samples were incubated at 60°C for 1 h.
LPS samples were resolved on a 12% SDS-PAGE gel and stained with a Pro-Q Emerald 300 LPS stain kit according to the manufacturer's protocol (Thermo Scientific). Images were acquired on a Bio-Rad ChemiDoc MP using UV excitation and a 530-nm emission filter. S-layer (RsaA) purification and analysis. RsaA was purified essentially as previously described (46). Briefly, cells were grown overnight in HIGG-1 µM phosphate and 5 ml (OD660 = 0.6) was collected by centrifugation. The cell pellets were washed twice in 5 ml of 10 mM HEPES (pH 7.2), resuspended in 200 µl of 100 mM HEPES (pH 2), and incubated at room temperature for 10 min. Cells were pelleted (10 min at 5,000 × g), and the supernatant containing RsaA was collected and neutralized with 2.8 µl of 10 N NaOH. RsaA samples in 1× sample buffer were resolved on a 7.5% SDS-PAGE gel without heat denaturing and stained with Krypton protein stain (Thermo Scientific). Images were acquired on a Bio-Rad ChemiDoc MP using green light-emitting diode (LED) excitation and a 605-nm emission filter.
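The qRT-PCR quantification above uses the threshold cycle (ΔΔCT) method with rpoD as the reference gene, which reduces to fold change = 2^(-ΔΔCT). A minimal sketch of that arithmetic (the CT values are invented for illustration, not taken from the paper):

```python
def fold_change(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCT method.

    Each argument is a threshold cycle (CT); the reference gene
    (rpoD in this study) normalizes for input cDNA amount.
    """
    d_ct_test = ct_target_test - ct_ref_test  # dCT of the test sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # dCT of the control sample
    dd_ct = d_ct_test - d_ct_ctrl             # ddCT
    return 2.0 ** (-dd_ct)

# Target crosses threshold 2 cycles earlier in the test sample
# (same reference CTs) -> 4-fold up-regulation
print(fold_change(20.0, 15.0, 22.0, 15.0))
```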
On the hybrid power mean of two kind different trigonometric sums Abstract The main purpose of this paper is to use the analytic method and the properties of trigonometric sums and Gauss sums to study the computational problem of one kind of hybrid power mean involving two different trigonometric sums, and to give an interesting computational formula for it.
Introduction Let p be an odd prime. The quartic Gauss sum C(m, p) = C(m) is defined as C(m, p) = Σ_{a=0}^{p-1} e(ma^4/p), where, as usual, e(y) = e^{2πiy}. Recently, several scholars studied the properties of C(m, p) and obtained some interesting results. For example, Shen Shimeng and Zhang Wenpeng [1] obtained a fourth-order linear recurrence formula for C(m, p). Li Xiaoxue and Hu Jiayuan [2] studied the computational problem of the hybrid power mean and proved an exact computational formula for (1), where c̄ denotes the multiplicative inverse of c mod p; that is, c · c̄ ≡ 1 mod p. In the same paper [2], the authors also suggested calculating the exact value of the Gauss sums G(k, p), where p ≡ 1 mod 4 is a prime, k is any positive integer, ψ denotes a fourth-order character mod p, and τ(ψ) denotes the classical Gauss sum; that is, τ(ψ) = Σ_{a=1}^{p-1} ψ(a) e(a/p). Chen Zhuoyu and Zhang Wenpeng [3] used the analytic method and the properties of the classical Gauss sums to obtain an interesting recurrence formula for G(k, p), which completely solved the computational problem of G(k, p). Some works related to the power mean of trigonometric sums can also be found in references [4]-[8]; they will not be repeated here. Inspired by reference [1], we will consider the following hybrid power mean (2). We naturally ask: does there exist a precise computational formula for (2)? The main purpose of this paper is to answer this question. For convenience, we assume that p is a prime with p ≡ 1 mod 4, χ = (*/p) denotes the Legendre symbol mod p, and α is a quantity closely related to the prime p. In fact, we have the Square Sum Theorem, in which r is any quadratic non-residue mod p (see Theorem 4-11 in [9]).
In this paper, we will use the properties of Gauss sums and the Legendre symbol to study the computational problem of (2), and give an interesting computational formula for it. That is, we will prove the following two conclusions. Theorem 1. If p is a prime with p ≡ 1 mod 4, then we have the identity. Theorem 2. If p is a prime with p ≡ 1 mod 4, then we have the identity, where ψ is any fourth-order character mod p. Note that from the estimates |G( , p)| ≤ √p and |α| ≤ √p, our theorems immediately yield the following two corollaries: Corollary 1. Let p be an odd prime with p ≡ 1 mod 4; then we have the asymptotic formula, where k is any positive integer. Only the calculation becomes more complex when k is large, so we have not given a general conclusion here. Several Lemmas To complete the proofs of our theorems we need four simple lemmas. Here we will use many properties of the classical Gauss sums and the Legendre symbol mod p, all of which can be found in many elementary number theory books, such as reference [11], so the related contents will not be repeated here. First we have the following: Lemma 1. If p is a prime with p ≡ 1 mod 4, then for any fourth-order character ψ mod p, we have the identity. Similarly, we can also deduce a companion identity. On the other hand, from the properties of the fourth-order character ψ mod p we have the stated evaluation, where we have used the identity ψ^4(a) = 1 for any integer a with (a, p) = 1.
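Two facts used throughout the argument above can be checked numerically: the classical Gauss sum attached to the Legendre symbol has absolute value √p, and the quartic sum C(m, p), taken here in its standard form Σ_a e(ma^4/p) (an assumption on our part, since the displayed definition did not survive extraction), is bounded by 3√p by the triangle inequality. A sketch:

```python
import cmath
import math

def e(y):
    """The additive character e(y) = exp(2*pi*i*y)."""
    return cmath.exp(2j * math.pi * y)

def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion (p an odd prime)."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def tau(p):
    """Classical Gauss sum attached to the Legendre symbol mod p."""
    return sum(legendre(a, p) * e(a / p) for a in range(1, p))

def C(m, p):
    """Quartic Gauss sum, assuming the standard definition sum_a e(m*a^4/p)."""
    return sum(e(m * a**4 / p) for a in range(p))

p = 13  # a prime with p = 1 (mod 4)
print(abs(tau(p)))                             # equals sqrt(13), about 3.6056
print(max(abs(C(m, p)) for m in range(1, p)))  # crude sanity bound: below 3*sqrt(p)
```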
Phagocytosis by macrophages mediated by receptors for denatured proteins-dependence on tyrosine protein kinases !" #$ %# & ' ( ( )) ( ( ( & * ( + ) ( ,-." ) & / " ) ) + " " ,-. " ) ) " ) ,-. ( )) ) ) ) ) ( 0 " ) ) ) ( ) + ) 1 ."& Correspondence B. Mantovani Departamento de Bioquímica e Imunologia, FMRP, USP Av. Bandeirantes, 390 Introduction Phagocytosis is usually defined as the uptake by cells of particles visible under the light microscope (<~0.5 µm) and their internalization into vacuoles -a process that requires structural modifications of the cytoskeleton, including actin polymerization (1)(2)(3).In mammals the chief phagocytes are polymorphonuclear leukocytes, monocytes and macrophages.The first phase of the process is the recognition of the particle by the cell, which is a physicochemical interac-tion between components of the cell membrane and the particle, leading to attachment of the particle, and is followed by the internalization phase.Several receptors have been identified in macrophages that mediate binding and ingestion of particles.The interaction between the cells and their targets may occur directly with components of the particle surface (non-opsonic phagocytosis) or may be mediated by opsonins (antibodies, complement components, etc.) supplied by the host (opsonin-dependent phagocytosis) in which immunological receptors are in-volved, such as Fcg for IgG, and CR3 and CR1 for complement components (4). Integrins are another group of proteins present in cell membranes that have been shown to mediate some physiological functions of phagocytes (macrophages and polymorphonuclear leukocytes) and could also play a role in pathological conditions (5)(6)(7)(8). It has been observed that activated monocytes and neutrophils are able to adhere to substrates (surfaces) coated with denatured proteins through interactions mediated by Mac-1 and p150/95 ß 2 integrins (9).This interaction may be the first step for some effector functions of these cells. 
In the present study we investigated the capacity of mouse peritoneal macrophages (normal or activated) to interact with (recognize) and phagocytize sheep red cells covered with denatured bovine serum albumin (BSA). Red cells are used as a convenient particle for this kind of experiment and represent a model for other particles that could bind to macrophages through this type of interaction. We also analyzed the possible participation of tyrosine protein kinases in the signal transduction pathway of the phagocytic process under these conditions by using specific enzyme inhibitors.
Animals and cells. BALB-c mice, 30-35 g body weight, were used as a source of peritoneal macrophages; the cells were harvested with 3 ml Hanks medium. Macrophages from the inflammatory exudate induced by glycogen were obtained as follows: mice received a daily ip injection of a 0.1% glycogen solution (1 ml) and 4 h after the 3rd injection leukocytes were harvested from the peritoneal cavity. Macrophages activated by LPS were obtained 4 days after one ip injection of 50 µg LPS in 0.2 ml PBS. Normal macrophages were the cells obtained from mice that received no treatment. Sheep red cells were obtained from a local animal center in Alsever's medium.
Denaturation of bovine serum albumin. BSA was denatured as described in Ref. 9. The protein solution at 10 mg/ml in 8 M urea, 50 mM Tris-HCl, pH 8.0, was reduced with dithiothreitol for 2 h at 25ºC, and then alkylated with 60 mM iodoacetamide for 2 h at 25ºC in the dark. The mixture was dialyzed extensively against saline and frozen at -20ºC.
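Preparing the denaturation buffer above involves routine molarity arithmetic (mass of solute = molarity × volume × molecular weight); a small helper, with the 8 M urea case as the worked example (the 50-ml volume and the urea MW of ~60.06 g/mol are general-chemistry illustration, not values from the paper):

```python
def grams_needed(molarity, volume_ml, mw):
    """Mass of solute (g) = molarity (mol/l) x volume (l) x MW (g/mol)."""
    return molarity * (volume_ml / 1000.0) * mw

# e.g. 8 M urea (MW ~60.06 g/mol) in 50 ml needs about 24 g of urea
print(grams_needed(8, 50, 60.06))
```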
Procedure for covering sheep red cells with native and denatured BSA. In preliminary experiments several methods for binding BSA to red cells were tested in order to find a procedure that would not alter the red cell surface and thus promote the binding of erythrocytes to macrophages independently of the presence of native or denatured BSA on its surface. We found that the method described by Poston (11), with some small modifications, was suitable for our experiments. A stock solution of 2.25 M CrCl3·6H2O in water was diluted 400 times with saline and left at room temperature for 40 min (this delay is necessary before addition to red cells, or agglutination might occur). Solutions and red cells were added to a test tube in the following order: 0.6 ml of 20 mM PIPES buffer, pH 6.5, 0.1 ml of the protein solution at 10 mg/ml, 0.1 ml of packed red cells, and 0.1 ml of the CrCl3 solution. The mixture was then carefully shaken and left at room temperature for 5 min. The reaction was then interrupted by washing the red cell suspension three times with saline by centrifugation at 700 g for 7 min, and the red cells were finally resuspended at 0.4% (v/v) in Hanks medium for the phagocytosis assays. We determined that native and denatured BSA were attached to red cells by hemagglutination tests with rabbit antibodies prepared against native and denatured BSA.
Phagocytosis experiments. The phagocytosis assays with macrophages were performed as described in Ref. 12.
Briefly, mouse macrophages recently harvested from the peritoneal cavity with 3 ml of Hanks medium were layered onto glass coverslips and allowed to attach for 10 min at room temperature; then, after washing with Hanks solution, the cell monolayers were incubated in plastic chambers in the same medium containing the red cell suspensions of E-BSA (red cells covered with native BSA) or E-BSAd (red cells covered with denatured BSA) at a concentration of 8 x 10^4 red cells/µl. After incubation at 37ºC in air saturated with water vapor for 30 min, the attached and ingested red cells were differentiated by hypotonic shock treatment (five times diluted PBS, 45 s), which lysed only the attached red cells. After cell fixation with glutaraldehyde and staining with Giemsa, the results were quantified by microscopic observation; at least 200 macrophages were counted in each determination. In the experiments with macrophages stimulated with glycogen or activated with LPS, the glass-adherent cells contained some polymorphonuclear leukocytes and lymphocytes besides macrophages; the cells were identified for quantification on the basis of conventional morphologic criteria (13).
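The 400-fold CrCl3 dilution in the coupling procedure above is ordinary C1V1 = C2V2 arithmetic; a minimal sketch (the helper names are ours, for illustration):

```python
def dilute(stock_conc, fold):
    """Concentration after an n-fold dilution (same units as the stock)."""
    return stock_conc / fold

def stock_volume(final_conc, final_volume, stock_conc):
    """C1*V1 = C2*V2 rearranged: volume of stock needed for a target mix."""
    return final_conc * final_volume / stock_conc

# The 2.25 M CrCl3 stock diluted 400-fold gives a 5.625 mM working solution
print(dilute(2.25, 400) * 1000, "mM")
```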
Results In order to assess the capacity of macrophages to bind to red cells covered with denatured BSA, which we have indicated as percent interaction (Figure 1), we measured a) the percentage of macrophages with three or more red cells (total rosettes), b) the percentage with six or more red cells (percent rosettes with great interaction), and c) the red cell/macrophage ratio, including all the macrophages in the microscopic fields. In these experiments we made no distinction between the red cells which were simply attached and those ingested (all of them had obviously interacted with the phagocyte). We observed that normal macrophages residing in the peritoneal cavity without any stimulation can bind very effectively to red cells covered with denatured BSA (a negligible interaction was observed with native BSA). The same result was observed with macrophages stimulated in vivo with glycogen or activated with LPS (in the latter, there was also a small interaction with the native protein). Experiments of competition for macrophage binding between denatured BSA in the fluid phase and E-BSAd were performed to verify whether red cell attachment to phagocytes was mediated by the protein covering the erythrocyte (Table 1). Nearly full inhibition of interaction was observed with denatured BSA in the fluid phase, and the native protein also caused some inhibition (around 50%).
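The interaction measures just described (percent rosettes, percent rosettes with great interaction, and red cell/macrophage ratio) reduce to simple counting over per-macrophage red cell counts; a sketch with invented counts:

```python
def interaction_metrics(red_cells_per_macrophage):
    """Quantify macrophage-red cell interaction as in the text:
    - % macrophages with >= 3 red cells (rosettes)
    - % macrophages with >= 6 red cells (great interaction)
    - red cell/macrophage ratio over all macrophages counted
    """
    n = len(red_cells_per_macrophage)
    rosettes = 100.0 * sum(c >= 3 for c in red_cells_per_macrophage) / n
    great = 100.0 * sum(c >= 6 for c in red_cells_per_macrophage) / n
    ratio = sum(red_cells_per_macrophage) / n
    return rosettes, great, ratio

# Illustrative counts for 10 macrophages (the study counted >= 200 per determination)
counts = [0, 2, 3, 7, 5, 0, 6, 1, 4, 2]
print(interaction_metrics(counts))
```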
The experiments illustrated in Figure 2 show that only the macrophages stimulated in vivo with glycogen or activated with LPS were able to phagocytize red cells by interaction with denatured BSA (a small degree of phagocytosis by macrophages activated with LPS was also observed with red cells covered with native BSA). Figure 3 is an illustration of the phagocytic capacity of macrophages from the inflammatory exudate produced by glycogen. In the experiments presented in Table 2 we analyzed the possible dependence of the phagocytic capacity of activated macrophages on the activity of tyrosine protein kinases, a group of enzymes participating in many cellular processes of signal transduction, using three inhibitors of these enzymes, genistein, quercetin and herbimycin A, at the concentrations usually employed for this purpose in some cells, including macrophages (14)(15)(16). It is evident that the triggering of phagocytosis of E-BSAd is dependent on tyrosine protein kinase activity since marked or practically full inhibition of phagocytosis was obtained with the three enzyme inhibitors.
Discussion We have shown that mouse peritoneal macrophages can recognize (bind to) red cells covered with denatured BSA. With normal macrophages the particles are able only to attach to the phagocyte membrane, but the engulfment phase is not triggered by these means; however, macrophages from the inflammatory exudate induced in vivo by glycogen or activated by LPS can effectively phagocytize these particles. There is evidence that the recognition of denatured proteins by human monocytes and neutrophils is mediated by β2 integrins of the cell membrane, which have been identified as CR3 (Mac-1, CD11b/CD18) and p150/95 (9). These receptors belong to a family of leukocyte adhesion molecules that are heterodimeric proteins sharing a common β subunit, but distinct α subunits (17). The cellular distribution of murine CR3 was found to be similar to the human one (18). It is thus reasonable to assume that CR3 and/or p150/95 are involved in the recognition of red cells covered with denatured BSA. Our experiments showing that denatured BSA in the fluid phase is able to almost fully inhibit the binding of E-BSAd to macrophages provide evidence that denatured BSA is mediating the interaction between the phagocyte and the particle. Some degree of inhibition was also obtained with the native BSA in the fluid phase. Similarity of structure may possibly account for this effect, which also agrees with the observation that native BSA can promote some adherence of monocytes to surfaces, although to a lesser extent than the denatured protein (9).
The interaction of iC3b and denatured BSA with CR3 receptors is similar in one respect: iC3b is a fragment of C3 and could, thus, expose amino acid sequences normally not exposed in C3; this is analogous to what one could expect with denatured BSA. It is known that members of the integrin family recognize the tripeptide amino acid sequence Arg-Gly-Asp (RGD), and iC3b contains such a sequence; however, CR3 must recognize another sequence within the iC3b molecule since a mutated form of iC3b that lacked the RGD triplet was also able to bind to recombinant CR3 (19). BSA does not contain the RGD sequence (20) and therefore the CR3 or p150/95 receptors must recognize another site
in the denatured molecule. Although the capacity of these two types of receptors to bind to reduced and alkylated BSA has been established by experiments of affinity chromatography (9), the mechanism of the interaction remains to be investigated. One may also suppose that a hydrophobic region of the denatured protein could be responsible for the binding independently of any specific amino acid sequence. Also, we cannot exclude the possibility that the alkyl groups (acetamide) introduced at the sulfhydryl groups of the reduced protein may participate in the interaction.
The inhibition of phagocytosis of E-BSAd by tyrosine protein kinase inhibitors (genistein, quercetin and herbimycin A) indicates that these enzymes are involved in the signal transduction pathway of the engulfment phase triggered by CR3 or p150/95, or both, in inflammatory and activated macrophages; this same dependence was observed in phagocytosis mediated by Fcγ receptors using genistein as the enzyme inhibitor (16). We have also confirmed this finding in phagocytosis experiments of red cells covered with IgG antibodies employing the same three enzyme inhibitors used in our experiments with E-BSAd. We observed practically full inhibition of phagocytosis (expressed as red cell/macrophage ratio) with genistein and quercetin (96 ± 2 and 100 ± 0%, respectively, N = 3) and a marked inhibition with herbimycin A (61 ± 4%, N = 4). We should keep in mind, however, that these drugs can possibly have other effects on cells besides inhibition of tyrosine protein kinases; for example, quercetin was shown to also inhibit serine/threonine protein kinases (15). There are indications that genistein could be the most specific one (21) and therefore it is advisable to use more than one such inhibitor, as done here. There is, thus, the same dependence on the activity of tyrosine protein kinases in phagocytosis mediated by these two types of receptors.
If different receptors can trigger phagocytosis it is reasonable to suppose that there should be convergent biochemical pathways leading to the final reaction of polymerization of G-actin to F-actin, which is an essential component of engulfment (2,3). The activation of tyrosine protein kinases could represent one of the points of convergence, although possibly different sets of enzymes and protein phosphorylations might be involved when different receptors trigger the process.
The fact that some kinds of particles can bind to phagocytes but the interaction does not trigger the mechanism of internalization has long been known. Thus, red cells interacting through iC3b bind very effectively to mouse macrophages and polymorphonuclear leukocytes but remain on the cell surface; when IgG (Fcγ receptors) are involved in the interaction the engulfment phase is triggered (22,23). However, when macrophages are activated, the interaction mediated by iC3b (CR3 receptors) leads to phagocytosis (12,24). This situation is analogous to that observed here for the interaction mediated by denatured BSA. The mechanism whereby CR3 receptors become able to trigger phagocytosis in inflammatory or activated macrophages is not clear. Some observations suggest a requirement for receptor clustering (25) but there are also indications that the mechanism may depend on the phosphorylation of serine residues in the β subunit of the receptor (26).
The presence of denatured proteins at a tissue site may be a signal for macrophages to adhere and accumulate at that site. However, effector functions such as phagocytosis require additional signals for activation to enable the cells to perform this function, which physiologically means triggering their destructive powers. The possibility of this control is another example related to the different roles of normal and inflammatory or activated macrophages.
Figure 1. Interaction between red cells covered with native (E-BSA) and denatured bovine serum albumin (E-BSAd) and normal macrophages, macrophages from the inflammatory exudate induced by glycogen, and macrophages activated by LPS. Suspensions of E-BSA or E-BSAd were incubated with macrophage monolayers for 30 min at 37ºC, and the interaction was quantified as follows: percentage of macrophages with three or more red cells (open bars), macrophages with six or more red cells (hatched bars), and red cell/macrophage ratio including all macrophages (gray bars). Results are reported as means ± SD (N = 4, cells from different animals). In the three groups (normal, glycogen and LPS) the differences in red cell/macrophage ratio between E-BSAd and E-BSA were significant (*P<0.05, t-test).
Figure 2. Phagocytosis of red cells covered with native (E-BSA) and denatured bovine serum albumin (E-BSAd) by normal macrophages, macrophages from the inflammatory exudate induced by glycogen, and macrophages activated by LPS. Experimental conditions are the same as described in the legend to Figure 1, with the exception that after 30 min of incubation the cell monolayers were subjected to a hypotonic shock treatment to lyse all the attached red cells, leaving only the ingested ones. Percent phagocytosis is indicated by the percentage of macrophages that ingested at least one red cell (open bars) and the percentage of macrophages that ingested three or more red cells (hatched bars); the red cell/macrophage ratio is the ratio between the number of ingested red cells and the total number of macrophages counted (gray bars). Results are reported as means ± SD (N = 4, cells from different animals). In the glycogen as well as in the LPS groups the differences in red cell/macrophage ratio between E-BSAd and E-BSA were significant (*P<0.05, t-test).
Figure 3. Phagocytosis of E-BSAd (red cells covered with denatured BSA) by macrophages from the inflammatory exudate induced by glycogen. The experimental conditions are the same as in Figure 2. Bar = 33 µm.
Table 1. Effect of fluid-phase denatured bovine serum albumin (BSAd) and native BSA on the interaction between macrophages and red cells covered with denatured BSA (E-BSAd). Normal macrophages attached to glass coverslips were preincubated for 15 min at 37ºC with Hanks' medium, BSAd or BSA, dissolved in Hanks' medium at a concentration of 10 mg/ml. Red cells and macrophages were then incubated for 30 min at 37ºC and the interaction was quantified by determining percent rosettes (percent macrophages with three or more red cells) and red cell/macrophage ratio (the ratio between the number of red cells taken up and the number of macrophages counted). Results are reported as means ± SD (N = 4, cells from different animals).
Table 2. Inhibition of E-BSAd phagocytosis by LPS-activated macrophages caused by treatment with tyrosine protein kinase inhibitors. Macrophages activated in vivo with LPS were preincubated at 37ºC for 10 min with tyrosine kinase inhibitors. Phagocytosis assays were performed in the presence of the inhibitors. Results are reported as percent inhibition relative to the control for each cell preparation (paired experiments), calculated as follows: percent inhibition = 100 x (control - test)/control. Data are reported as means ± SD of 3 to 4 experiments (different animals).
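The percent-inhibition formula quoted in the Table 2 footnote, percent inhibition = 100 x (control - test)/control, is direct arithmetic; a minimal sketch:

```python
def percent_inhibition(control, test):
    """Percent inhibition relative to a paired control, as defined in the text."""
    return 100.0 * (control - test) / control

# e.g. the red cell/macrophage ratio falling from 2.0 (control) to 0.1
# with an inhibitor corresponds to ~95% inhibition
print(percent_inhibition(2.0, 0.1))
```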
Long-term relationship between everolimus blood concentration and clinical outcomes in Japanese patients with metastatic renal cell carcinoma: a prospective study Background Everolimus is an oral inhibitor of mammalian target of rapamycin, approved for metastatic renal cell carcinoma (mRCC). Recently, personalized medicine through therapeutic drug monitoring (TDM) has been recommended in cancer therapy. In this study, the relationship between everolimus blood concentration and long-term clinical outcomes was evaluated in Japanese patients with mRCC. Methods Patients with mRCC were enrolled following treatment with everolimus at Tohoku University Hospital between April 2012 and December 2016. The relationship between everolimus trough blood concentration on day 8 of everolimus therapy and just before discontinuation or dose reduction, and the corresponding adverse events, was evaluated. Patients were divided into two groups based on the median everolimus blood concentration on day 8 of treatment, and the profiles of adverse events and efficacy [time to treatment failure (TTF) and progression-free survival (PFS)] were evaluated. Results The median (range) everolimus blood concentrations on day 8 after starting everolimus administration and just before discontinuation or dose reduction were 15.3 (8.1–28.0) ng/mL and 14.8 (6.4–58.4) ng/mL, respectively, with no significant difference between these values (P = 0.3594). Patients (n = 6) with discontinuation or dose reduction following adverse events in everolimus therapy had significantly higher blood concentrations than patients (n = 4) with dose maintenance on both day 8 (median, 18.0 vs 8.2 ng/mL; P = 0.0139) and just before discontinuation or dose reduction (median, 22.9 vs 9.7 ng/mL; P = 0.0142). Median TTF and PFS of the total patients (n = 10) were 96 days (95% confidence interval [CI], 26–288) and 235 days (95% CI, 28–291), respectively.
Subgroup analysis showed that TTF of the patients with > 15.3 ng/mL (n = 5) was not significantly different from that of the patients with ≤ 15.3 ng/mL (n = 5; P = 0.5622). Similarly, PFS of the patients with > 15.3 ng/mL was not significantly different from that of the patients with ≤ 15.3 ng/mL (P = 0.3436). Conclusions This study demonstrated the long-term relationship between everolimus blood level, clinical outcomes, and adverse events in Japanese patients with mRCC. Thus, TDM in everolimus therapy could be a useful tool for the early prediction of adverse events in Japanese patients with mRCC. Background Tyrosine kinase inhibitors and mammalian target of rapamycin inhibitors (mTORi) are molecular targeted drugs for metastatic renal cell carcinoma (mRCC) [1]. Although these targeted drugs show a higher objective response rate and significantly prolong median progression-free survival (PFS) in mRCC, various adverse events such as diarrhea, fatigue, vomiting, myelosuppression, and interstitial pneumonia are frequently induced [1]. Recently, personalized cancer medicine using therapeutic drug monitoring (TDM) has been recommended to maximize the efficacy of anticancer drugs, and several lines of evidence for TDM of molecular targeted drugs such as imatinib and sunitinib have been demonstrated [2,3]. Everolimus, an mTORi used for mRCC, has already been adopted for TDM in other applications, such as the prevention of organ rejection after transplantation [4,5] and the treatment of tuberous sclerosis complex [6,7] and various forms of cancer [8][9][10]. Everolimus is very effective, but its therapeutic blood concentration range is narrow and the variability of its pharmacokinetics among individuals is high. Therefore, it is appropriate to perform individualized treatment using TDM [11].
In transplantation settings, the trough level of everolimus should be maintained at 3–8 ng/mL when used in combination with other immunosuppressive drugs (calcineurin inhibitor and glucocorticoid) and at 6–10 ng/mL when used without a calcineurin inhibitor [11][12][13][14][15][16]. In the treatment of tuberous sclerosis complex, it is recommended that everolimus concentrations be managed at 5–15 ng/mL [7,11,17]. In cancer, however, there is little evidence of TDM for everolimus in actual clinical practice [11]. Presently, there are several reports on pharmacokinetic/pharmacodynamic studies of everolimus in cancer [11,[18][19][20]. Deppenweiler et al. reported that an everolimus trough level between 11.9 and 26.3 ng/mL was associated with an increase in PFS and a decrease in the risk of toxicity [18]. A meta-analysis by Noguchi et al. demonstrated that the risk of pulmonary adverse events is associated with the administration of everolimus in Japanese patients [19]. Moreover, another meta-analysis reported a relationship between an increase in everolimus trough level and antitumor effect or risk of high-grade adverse events [20]. However, in cancer patients, there has been no report of long-term monitoring of everolimus blood levels. The dose of everolimus may be reduced following the occurrence of clinically significant hematological or other adverse events. In addition, everolimus blood concentration has been reported to be affected by drug interactions [11]. As symptoms progress, drugs that alleviate them are added for cancer patients, and some of these, such as antiepileptic drugs, may cause drug-drug interactions. That is, in clinical practice, events that may affect everolimus blood concentrations often occur even during everolimus treatment. It is therefore important to evaluate the relationship between everolimus blood level and long-term clinical outcomes.
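The indication-specific target ranges quoted above (3–8 ng/mL with a calcineurin inhibitor, 6–10 ng/mL without one, 5–15 ng/mL for tuberous sclerosis complex) can be collected into a small lookup. The function and range table below are an illustrative sketch using only the values cited in the text, not part of the study protocol:

```python
# Illustrative helper: classify an everolimus trough level against the
# indication-specific target ranges cited in the text. The function name
# and structure are hypothetical, not from the study.
TARGET_RANGES = {
    "transplant_with_cni": (3.0, 8.0),      # with calcineurin inhibitor
    "transplant_without_cni": (6.0, 10.0),  # without calcineurin inhibitor
    "tuberous_sclerosis": (5.0, 15.0),
}

def classify_trough(level_ng_ml: float, indication: str) -> str:
    """Return 'below', 'within', or 'above' the target range."""
    lo, hi = TARGET_RANGES[indication]
    if level_ng_ml < lo:
        return "below"
    if level_ng_ml > hi:
        return "above"
    return "within"

print(classify_trough(7.3, "transplant_with_cni"))   # within
print(classify_trough(15.3, "tuberous_sclerosis"))   # above
```

No validated target range for mRCC is defined in the text, which is precisely the gap the study addresses.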
Therefore, in this study, the long-term relationship between everolimus blood concentration and clinical outcomes was evaluated in Japanese patients with mRCC. Patients The subjects of this study were prospectively recruited from mRCC patients for whom everolimus therapy was scheduled at Tohoku University Hospital from April 2012 to December 2016. Chemicals Everolimus and d4-everolimus as internal standard were purchased from Toronto Research Chemicals (Toronto, ON, Canada). Acetonitrile, methanol, ammonium formate, zinc sulfate, and formic acid were obtained from Wako Pure Chemical Industries (Osaka, Japan). Water was purified using a PURELAB Ultra Genetic system (Organo, Tokyo, Japan). Measurement of everolimus blood concentration Everolimus was administered in a fasting state. Whole blood samples were obtained just before everolimus intake, from day 8 onward, by which time everolimus has reached steady state [21,22]; sampling was scheduled weekly during hospitalization. For outpatients, samples were collected at each visit. Everolimus blood concentrations were measured by modifying a previously validated assay [23]. In brief, 100 μL of whole blood sample was mixed with 50 μL of a methanol solution of 100 ng/mL d4-everolimus as an internal standard and preprocessed with 200 μL of methanol and 50 μL of 0.2 M zinc sulfate. The samples were centrifuged at 15,000×g for 5 min, and the supernatants were analyzed by a column-switching liquid chromatography/tandem mass spectrometry system. Analytes were trapped and concentrated at the inlet edge of a Shim-pack MAYI-C8 column (10 mm × 4.6 mm i.d., 50 μm, GL Sciences, Tokyo, Japan) using the mobile phase [2 mM ammonium formate and 0.1% formic acid in water-methanol (41:9, v/v)] at a flow rate of 0.5 mL/min.
Then, analytes were separated on a Luna® phenyl-hexyl column (50 mm × 2 mm i.d., 5 μm, Phenomenex, Torrance, CA, USA) using the mobile phase [2 mM ammonium formate and 0.1% formic acid in water-methanol (1:9, v/v)] at a flow rate of 0.2 mL/min. The analysis was performed in selected reaction monitoring mode: m/z 975.4 to 542.2 for everolimus; m/z 979.5 to 542.2 for d4-everolimus. The quantitative range of everolimus was 1–50 ng/mL. The observed intra-day and inter-day precision and accuracy were below 6.6% and within ±6.8%, respectively. Samples with everolimus blood concentrations above the calibration curve range were diluted in saline. Evaluation of safety Adverse events during everolimus therapy were evaluated according to the Common Terminology Criteria for Adverse Events version 4.0. The relationship between everolimus blood concentration and everolimus discontinuation or dose reduction due to adverse events was assessed, and everolimus blood concentrations on day 8 and just before discontinuation or dose reduction of everolimus therapy were used for the analysis. In addition, the median value of everolimus blood concentration on day 8 was used to classify patients into two groups, a high group and a low group, and the association with adverse events was evaluated. Evaluation of efficacy Time to treatment failure (TTF) was defined as the period from the initiation of everolimus therapy to cessation for any cause (including disease progression or adverse events). Progression-free survival (PFS) was defined as the time from the start of everolimus treatment to the objective detection of disease progression or death. Patients were divided into two groups based on the median everolimus blood concentration on day 8 of treatment, and the efficacy of everolimus (TTF and PFS) was evaluated in the two groups. Statistical analysis The cut-off date for this analysis was March 2017.
Patients whose blood samples were not obtained after day 8 from the start of everolimus treatment were excluded from the analysis. Continuous variables were compared between two groups by the Wilcoxon rank sum test, and categorical variables were compared by the chi-squared test or Fisher's exact test. Correlations between the everolimus blood concentration on day 8 and age, body surface area (BSA), body mass index (BMI), and estimated glomerular filtration rate (eGFR) were evaluated using Spearman's rank correlation coefficient. TTF and PFS were estimated using Kaplan-Meier curves and compared using the log-rank test. Differences were considered significant at P < 0.05. All statistical analyses were performed using JMP pro 13.1.0 software (SAS Institute Inc., Cary, NC, USA). Patients Ten patients with mRCC, who were being administered everolimus, were evaluated in this study. The characteristics of the patients are shown in Table 1. The median (range) everolimus blood concentrations on day 8 after starting everolimus administration and just before discontinuation or dose reduction were 15.3 (8.1–28.0) ng/mL and 14.8 (6.4–58.4) ng/mL, respectively, with no significant difference between these values (P = 0.3594). Fluctuations in the blood level of everolimus were also observed in some patients. Correlation coefficients between concentration/dose (C/D) ratio and age, BSA, BMI, and eGFR are indicated in Fig. 1. No significant correlation between the C/D ratio and any parameter was observed. Safety As shown in Table 1, patients (n = 6) with discontinuation or dose reduction due to adverse events in everolimus therapy had significantly higher blood concentrations than patients (n = 4) with dose continuation, both on day 8 (median, 18.0 vs 8.2 ng/mL; P = 0.0139) and just before discontinuation or dose reduction (median, 22.9 vs 9.7 ng/mL; P = 0.0142).
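The group comparisons and correlations named in the Statistical analysis section (Wilcoxon rank sum test, Spearman's rank correlation) can be sketched as follows. All numeric values below are hypothetical stand-ins, not the study's raw data, and SciPy is used in place of JMP:

```python
# Sketch of the comparisons named in the Statistical analysis section.
# Concentrations and covariates are hypothetical illustrative values.
from scipy.stats import mannwhitneyu, spearmanr

# Day-8 trough levels (ng/mL): dose-reduced/discontinued vs maintained.
reduced = [18.0, 22.0, 15.5, 21.8, 28.0, 13.1]
maintained = [8.2, 8.1, 9.7, 10.5]

# Wilcoxon rank sum test (equivalent to the Mann-Whitney U test).
u_stat, p_value = mannwhitneyu(reduced, maintained, alternative="two-sided")
print(f"U = {u_stat}, P = {p_value:.4f}")

# Spearman's rank correlation, e.g. C/D ratio against eGFR.
cd_ratio = [1.5, 1.1, 2.8, 0.8, 1.9, 2.2, 1.3, 0.9, 3.5, 2.1]
egfr = [55, 72, 40, 88, 51, 47, 66, 80, 35, 49]
rho, p_rho = spearmanr(cd_ratio, egfr)
print(f"rho = {rho:.3f}, P = {p_rho:.4g}")
```

With small samples and no ties, `mannwhitneyu` computes an exact P value, matching the nonparametric approach appropriate for n = 10.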
The profile of adverse events that occurred in this study is indicated in Table 2: eight patients (80%) had adverse events of any grade and five patients (50%) had adverse events of grade 3 or 4. In addition, we divided the patients into two groups (low level group, ≤ 15.3 ng/mL; high level group, > 15.3 ng/mL) on the basis of the median blood concentration of everolimus on day 8, and the safety of the drug was evaluated in the two groups. In the low level group (n = 5), 3 patients (60%) had adverse events of any grade and 2 (40%) had adverse events of grade 3 or 4. In the high level group (n = 5), 5 patients (100%) had adverse events of any grade and 3 (60%) had adverse events of grade 3 or 4. Among the grade 3 or 4 adverse events, pneumonitis and leukopenia were confirmed in two patients, one from the low level group and the other from the high level group. In the high level group, grade 3 hyperglycemia, hypoalbuminemia, and increased γ-glutamyltransferase were observed in one patient, which we have previously reported [24]. Table 3 shows the mean value ± standard deviation (SD) of everolimus blood concentration for each patient, the everolimus blood concentration at the time of discontinuation or dose reduction, and the adverse events that caused discontinuation or dose reduction. Clinical application of measurement of everolimus blood concentration A case of drug-drug interaction detected by measuring the blood concentration of everolimus is indicated in Fig. 3. Pat.1 in Table 3 is a 52-year-old Japanese female diagnosed with renal cell carcinoma 5 years earlier. She underwent a partial right nephrectomy for clear cell carcinoma, and the following year her lung metastasis was discovered and sequentially treated with interferon and sunitinib. The sunitinib therapy was changed to everolimus when she was diagnosed with brain metastasis.
The patient was administered carbamazepine for neurologic symptoms and prednisolone for cerebral edema associated with brain metastasis. Other concomitant medications were lansoprazole, domperidone, rebamipide, sodium ferrous citrate, and probucol. There were few adverse events of grade 2 or more after the initiation of everolimus 10 mg. The patient's average trough concentration of everolimus while on the concomitant medications present at the start of everolimus was 7.3 ng/mL, whereas the mean level of patients treated with 10 mg of everolimus in a clinical trial was 13.2 ng/mL [22]. Therefore, administration of carbamazepine, prednisolone, and lansoprazole was discontinued because of their ability to induce cytochrome P450 (CYP) 3A4 [25][26][27], the main metabolic enzyme of everolimus [11]. Considering lesser interaction with CYP3A4, carbamazepine was switched to levetiracetam [28], lansoprazole was changed to rabeprazole [29], and prednisolone was stopped after dose reduction. After discontinuing these drugs (carbamazepine, prednisolone, and lansoprazole), the blood concentration of everolimus gradually increased, and there were no serious adverse events. Discussion In this study, everolimus blood levels of the patients with discontinuation or dose reduction due to adverse events were significantly higher than those of the patients with continuation (Table 1). Deppenweiler et al. reported that everolimus trough levels higher than 26.3 ng/mL were associated with an increased risk of adverse events [18]. In the patients (Pat.2, Pat.4, Pat.7, Pat.9, and Pat.10) whose average everolimus blood level exceeded 16.4 ng/mL, everolimus therapy was discontinued or dose-reduced due to adverse events (Table 3). Everolimus treatment was discontinued in Pat.3 due to grade 3 pneumonitis even though the everolimus level was 13.1 ng/mL, which was not higher than that of other patients (Table 3).
Subsequently, Pat.3 was diagnosed with interstitial pneumonia, and because symptoms might have continued to develop, steroid pulse therapy was required. The toxic range for everolimus-induced interstitial pneumonia may be lower than for other adverse events; this should be verified in future studies with a larger number of cases. In many cases, TDM of everolimus is considered useful in predicting the occurrence of adverse events. In this study, there was no significant difference between the median blood everolimus concentration on day 8 (15.3 ng/mL) and that just before discontinuation or dose reduction (14.8 ng/mL). These values were almost equal to the mean trough values of 15.99 ng/mL [19] and 15.65 ng/mL [20] in previous reports. However, the everolimus levels fluctuated largely in Pat.4 (21.8 to 58.4 ng/mL) and Pat.9 (28.0 to 35.4 ng/mL), who had serious adverse events leading to dose reduction and discontinuation. In addition, Pat.1 had fluctuations in everolimus levels due to a drug-drug interaction (Fig. 3).

Fig. 1. The relationship between the concentration-to-dose (C/D) ratio of everolimus on day 8 and patients' demographic data: age, body surface area (BSA), body mass index (BMI), and estimated glomerular filtration rate (eGFR); the relationships were analyzed with Spearman's rank correlation coefficient.

Table 2. Relationship between adverse events and everolimus blood concentration just before discontinuation or dose reduction (ng/mL), total n = 10. AST: increased aspartate aminotransferase; ALT: increased alanine transaminase; ALP: increased alkaline phosphatase; γ-GTP: increased γ-glutamyltransferase.

In cancer treatment, various supportive therapies are used, and these may cause drug-drug interactions.
For instance, antiepileptic drugs are sometimes used for symptomatic relief, but because many interactions can occur between drugs, caution is needed in the administration of anticancer drugs [11]. Hence, since intra-individual variation in everolimus pharmacokinetics is large and it is affected by concomitant drugs and food components, routine TDM may be effective for everolimus therapy [11]. In addition, large inter-individual variations were also observed in this study (Fig. 1 and Table 3). It is known that the pharmacokinetics of everolimus is affected by drugs and food, as well as by intra-individual variation [11]. To date, there is not sufficient clinical evidence that inter-individual differences in metabolic enzymes and transporters affect everolimus pharmacokinetics [11]. Ravaud et al. [20] and Deppenweiler et al. [18] reported that everolimus blood level was directly correlated with the antitumor effect, but in this study there was no significant difference between the TTF and PFS of the high everolimus level group and those of the low everolimus level group (Fig. 2). However, there were some differences between this study and the previous ones. The report of Ravaud et al. [20] is based on the results of phase II and III clinical trials, but our patients had worse performance status and more prior systemic therapies than those of the trials. In the study of Deppenweiler et al., the patients mainly had breast cancer (n = 42, 77.8%), with few kidney cancer patients (n = 10, 18.5%) [18], and the relationship between everolimus blood level and antitumor effect may differ depending on the type of cancer. In addition, our study involved only Japanese patients, who were also fewer in number than in the previous studies. The limitations of the present study were that it was a small case series and that, unlike clinical trials, the inclusion of patients with poor performance status or many prior systemic therapies made it difficult to evaluate efficacy.
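The Kaplan-Meier estimation used for TTF and PFS can be illustrated with a minimal product-limit computation. The event times and censoring flags below are hypothetical, and this simple function only sketches what the study computed in JMP:

```python
# Minimal Kaplan-Meier (product-limit) estimator; times and censoring
# flags are hypothetical, illustrating the method named in the text.
def kaplan_meier(times, events):
    """times: follow-up in days; events: 1 = event observed, 0 = censored.
    Returns a list of (time, survival_probability) at each event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    for t, e in data:
        if e == 1:
            # Each event multiplies S(t) by (n-1)/n; sequential handling
            # of tied events reproduces the grouped factor (1 - d/n).
            survival *= (n_at_risk - 1) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= 1  # event or censoring removes one subject at risk
    return curve

# Hypothetical TTF data (days), 1 = treatment failure, 0 = censored.
ttf = [26, 40, 96, 120, 288, 300]
evt = [1, 1, 1, 0, 1, 0]
for t, s in kaplan_meier(ttf, evt):
    print(f"day {t}: S(t) = {s:.3f}")
```

The median survival is the first time at which S(t) drops to 0.5 or below; comparing two such curves would additionally require the log-rank test mentioned in the text.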
Further studies on the pharmacokinetics/pharmacodynamics of everolimus are required to determine the clinical utility of TDM in oncology settings.

Fig. 2. The relationships between everolimus blood concentration and efficacy. Efficacy was evaluated as time to treatment failure (TTF) (A) and progression-free survival (PFS) (B) with the Kaplan-Meier method and the log-rank test.

Moreover, it is necessary to evaluate the significance of everolimus TDM in a randomized comparative study between a TDM group and a non-TDM group. This information would help to maximize the therapeutic potential of everolimus TDM for cancer while minimizing severe adverse events. Conclusions The present study demonstrated the long-term relationship between everolimus blood level and clinical outcomes and showed that everolimus blood level correlates with adverse events in Japanese patients with mRCC. The relation with efficacy was not sufficiently evaluated due to the small number of cases, and further study is necessary. Consequently, TDM in everolimus therapy could be a useful tool for the early prediction of adverse events in Japanese patients with mRCC.
Establishment and Fecundity in Spatial Ecological Models: Statistical Approach and Kinetic Equations We consider spatial population dynamics given by a Markov birth-and-death process with constant mortality and with birth influenced by establishment or fecundity mechanisms. Both density-independent and density-dependent dispersal are studied. On the basis of the general methods of [14], we construct the state evolution of the microscopic ecological systems under consideration. We analyze the mesoscopic limit for the stochastic dynamics under consideration. The corresponding Vlasov-type non-linear kinetic equations are derived and studied. Introduction Complex systems theory is a quickly growing interdisciplinary area with a very broad spectrum of motivations and applications. One may characterize complex systems by such properties as diversity and individuality of components, localized interactions among components, and the use of the outcomes of interactions for replication or enhancement of components. In the study of these systems, the proper language and techniques are delivered by interacting particle models, which form a rich and powerful direction in modern stochastic and infinite-dimensional analysis. Interacting particle systems are widely used as models in condensed matter physics, chemical kinetics, population biology, ecology, sociology, and economics. Mathematical realizations of such models may be considered as dynamics of points in proper state spaces. In some applications the possible locations for the points of the system are structured, e.g., if we consider dynamics on graphs or, in particular, on lattices. Another class of models is characterized by free positions of points in the continuum, e.g., in the Euclidean space R^d. As was shown originally in statistical physics, many empirical effects, such as phase transitions, are impossible in systems with a finite number of points.
Therefore, systems with infinitely many points can be considered as a mathematical approximation for realistic systems with a huge but finite number of elements. Among all infinite systems we will study locally finite ones. Namely, the configuration space over R^d consists of all locally finite subsets (configurations) of R^d:

Γ := { γ ⊂ R^d : |γ_Λ| < ∞ for all Λ ∈ B_b(R^d) }. (1.1)

Here γ_Λ := γ ∩ Λ, the symbol |·| stands for the cardinality of a set, and B_b(R^d) denotes the class of all bounded Borel sets in R^d. Each configuration may be identified with a Radon measure on R^d by the relation γ(Λ) = |γ_Λ|. As a result, Γ can be equipped with the vague topology and the corresponding Borel σ-algebra. Depending on the application, the points of the system may be interpreted as molecules in physics, plants in ecology, animals in biology, infected people in medicine, companies in economics, market agents in finance, and so on. It is supposed that the points of a system evolve in time interacting with each other. In the present paper we focus our attention on dynamics with birth and death mechanisms. Spatial birth-and-death dynamics describe an evolution of configurations in R^d in which points of configurations (particles, individuals, elements) randomly appear (are born) and disappear (die) in the space. Heuristically, the corresponding Markov generator has the following form:

(LF)(γ) = Σ_{x∈γ} d(x, γ\x) [F(γ\x) − F(γ)] + ∫_{R^d} b(x, γ) [F(γ∪x) − F(γ)] dx. (1.2)

Here the functions d and b describe the rates of death and birth, correspondingly (for details see, e.g., [14]). In the present paper we apply the results of [14] to study the question about the existence of the evolution corresponding to (1.2) for a particular choice of the functions d and b. This question can be answered once we are able to construct a semigroup associated with L in a proper functional space.
This semigroup determines the solution to the Kolmogorov equation, which formally (only in the sense of the action of the operator) has the following form:

∂F_t/∂t = L F_t, F_t|_{t=0} = F_0. (1.3)

To show directly that L is a generator of a semigroup in some reasonable functional space on Γ seems to be a difficult problem. This difficulty is hidden in the complex structure of the non-linear infinite-dimensional space Γ. However, in various applications the corresponding evolution of states (measures on the configuration space) already helps to understand the behavior of the process and makes it possible to predict the equilibrium states of the system. In fact, the properties of such an evolution are themselves very important for applications. The evolution of states is heuristically given as a solution to the dual Kolmogorov equation (Fokker-Planck equation):

∂μ_t/∂t = L* μ_t, μ_t|_{t=0} = μ_0, (1.4)

where L* is the adjoint operator to L, defined on some space of measures on Γ, provided, of course, that it exists. Technically, we will study solutions of (1.4) in terms of correlation functions k_t^(n), n ≥ 0, which are symmetric functions on (R^d)^n related to the density of the distribution of each n points of our system (a rigorous definition will be given in Section 2). Among all birth-and-death processes we will consider only those in which new particles appear from existing ones. These processes correspond to models of spatial ecology. In the recent paper [12], we studied the Bolker-Dieckmann-Law-Pacala ecological model, which corresponds to the following mechanism of evolution. Each existing individual can give birth to a new one independently of all other individuals of the system. It may also die, influenced by the global regulation (mortality), again independently of all other members of the population, or it dies because of the interaction with the rest of the population (local regulation). The latter mechanism may be described as a competition (e.g., for resources) between individuals in the population.
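The mechanism just described (independent birth with dispersal, constant mortality, pairwise competition) can be sketched by a naive Gillespie-type simulation in a bounded periodic window. The Gaussian kernels, the window with wrap-around, and all parameters below are our illustrative choices, not taken from the paper:

```python
# Naive Gillespie-type sketch of a spatial birth-and-death mechanism:
# independent birth with Gaussian dispersal, constant mortality m, and
# pairwise competition. All kernels and parameters are illustrative.
import random, math

def gauss_kernel(dx, dy, sigma):
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma)) \
        / (2 * math.pi * sigma * sigma)

def step(gamma, m, kp, km, sigma, side, rng):
    """One Gillespie event; returns the new configuration and waiting time."""
    death = [m + km * sum(gauss_kernel(x - u, y - v, sigma)
                          for (u, v) in gamma if (u, v) != (x, y))
             for (x, y) in gamma]
    birth = kp * len(gamma)            # each individual births at rate kp
    total = sum(death) + birth
    dt = rng.expovariate(total)
    if rng.random() < birth / total:   # birth: parent chosen uniformly
        x, y = gamma[rng.randrange(len(gamma))]
        gamma = gamma + [((x + rng.gauss(0, sigma)) % side,
                          (y + rng.gauss(0, sigma)) % side)]
    else:                              # death: weighted by death rates
        i = rng.choices(range(len(gamma)), weights=death)[0]
        gamma = gamma[:i] + gamma[i + 1:]
    return gamma, dt

rng = random.Random(1)
gamma = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(30)]
t = 0.0
while gamma and t < 2.0:
    # m > kp: on average the population tends to shrink
    gamma, dt = step(gamma, m=1.2, kp=1.0, km=0.2, sigma=0.5, side=10.0,
                     rng=rng)
    t += dt
print(len(gamma))
```

This finite-window simulation only illustrates the mechanism; the paper's subject is the rigorous infinite-volume evolution, where such direct simulation is not available.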
Heuristically, the corresponding Markov generator has the form (1.2) with

d(x, γ\x) = m + κ⁻ Σ_{y∈γ\x} a⁻(x−y), (1.5)
b(x, γ) = κ⁺ Σ_{y∈γ} a⁺(x−y). (1.6)

Here a⁺, a⁻ are probability densities and m, κ⁺, κ⁻ ≥ 0 are constants. In population ecology, the constant m is called mortality and the functions a⁺, a⁻ are known as the dispersion and competition kernels, respectively. By [12], if m = κ⁻ = 0 (free growth model) then the first correlation function (the density of the system) grows exponentially in time. To suppress this growth we may consider the case m > κ⁻ = 0 (contact model, see also [18,20]). Then for m ≥ κ⁺ we obtain a globally bounded density (even decaying in time for m > κ⁺). Nevertheless, locally the system will show clustering: namely, k_t^(n) ∼ n! on small regions for t ≥ 0 (see [12] for details). The main result of [12] may be informally stated in the following way: if the mortality m and the competition kernel κ⁻ a⁻ are large enough, then the dynamics of correlation functions associated with the pre-generator (1.2) preserves a (sub-)Poissonian bound for the correlation functions for all times, i.e., k_t^(n) ≤ C^n. In the present article we introduce new mechanisms of local regulation in the corresponding system, alternative to (1.5). Namely, we set κ⁻ = 0 in (1.5) and consider two different modifications of (1.6). The first one includes the influence of the whole system on the reproduction (fertility, fecundity) of each single individual. The second modification of (1.6) contains a mechanism which reflects the establishment of each individual in the system. The precise descriptions are given in the next section. Such models have been actively studied in the modern ecological literature, see, e.g., [8] and references therein. Here, for the first time, we present a rigorous mathematical description of these evolutions. This article is organized in the following way. In Section 2, we describe the model rigorously, providing the proper spaces for the corresponding functional evolutions.
In Section 3 we apply the general results about birth-and-death dynamics on configuration spaces obtained in [14]. Informally, the main results state that if the mortality m is big enough and the negative influence of establishment or fecundity is dominated by the dispersion, then the corresponding evolution exists. In Section 4, we study the mesoscopic description of our model in terms of the Vlasov scaling. It should also be noted that Vlasov-type scalings for some Markov processes on finite configuration spaces were considered in [2][3][4][5][6]; there, the corresponding limiting hierarchy was obtained at the heuristic level. In the present paper, we prove a weak convergence to the limiting hierarchy in the case of infinite continuous systems for bounded but non-integrable densities. It is worth pointing out that the necessity of a big mortality is a result of the perturbation theory for linear operators, which gives the existence of the corresponding dynamics on the infinite time interval. However, with the help of another technique, considered in [7], [10], we are able to show the existence of the dynamics with any mortality, but only on a finite interval of time. This result will be presented in a forthcoming paper. Description of model We recall that the configuration space Γ is given by (1.1). It is equipped with the vague topology, i.e., the weakest topology for which all mappings Γ ∋ γ → Σ_{x∈γ} f(x) ∈ R are continuous for any continuous function f on R^d with compact support. The space Γ with the vague topology is a Polish space (see, e.g., [16] and references therein). The corresponding Borel σ-algebra B(Γ) is the smallest σ-algebra for which all mappings Γ ∋ γ → |γ_Λ| ∈ N_0 := N ∪ {0} are measurable for any Λ ∈ B_b(R^d), see, e.g., [1]. We write F_cyl(Γ) for the class of all cylinder functions on Γ.
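The local finiteness defining the configuration space Γ recalled above can be illustrated by sampling the restriction of a random configuration to a bounded window. The Poisson-point-process sampler below is a hypothetical illustration; its names and parameters are ours:

```python
# Illustrative sketch: a finite window of a random configuration in R^2,
# sampled as a homogeneous Poisson point process.
import random, math

def sample_configuration(intensity, side, seed=0):
    """Sample a Poisson point process of the given intensity on [0, side]^2."""
    rng = random.Random(seed)
    lam = intensity * side * side      # expected number of points in window
    # Knuth's method for sampling a Poisson random variable
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    n = k - 1
    return [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]

gamma_window = sample_configuration(5.0, 2.0)
# |γ ∩ Λ| is finite for the bounded window Λ = [0, 2]^2, as required in (1.1).
print(len(gamma_window))
```

The Poisson process is the reference (equilibrium-free) state here; the Poisson measure π itself is introduced rigorously just below.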
Each F ∈ F_cyl(Γ) is characterized by the following relation: F(γ) = F(γ_Λ) for some Λ ∈ B_b(R^d). Let 0 ≤ φ ∈ L¹(R^d) be a given even function such that (2.1) holds. As was already mentioned in the Introduction, we would like to study two classes of interacting particle systems (IPS), whose mechanisms of evolution are described by the corresponding heuristically given Markov generators (2.2) and (2.3). The first model reflects the influence of establishment in the system and the second one describes fecundity. Here E_φ(x, γ) := Σ_{y∈γ} φ(x−y) denotes the relative energy, and here and in the sequel the mortality m is always supposed to be strictly positive. One can see that the establishment rate e^{−E_φ(x,γ)} will be smaller if x is inside or close to a dense region of the configuration γ. In its turn, the fecundity rate e^{−E_φ(y,γ\y)} will also be smaller if y is situated in a dense area of γ. The non-negative measurable rate b_0 represents the dispersion of the model. Let 0 ≤ a⁺, b⁺ ∈ L¹(R^d) be given even functions with ∫ a⁺ = 1. We consider two types of dispersion: density-independent and density-dependent. As mentioned above, we will study the evolution of our model in terms of its correlation functions. Below we introduce some basic notions needed to describe the corresponding evolution. The space of n-point configurations in an arbitrary Y ∈ B(R^d) is defined by

Γ^(n)(Y) := { η ⊂ Y : |η| = n }, n ∈ N.

By definition we take Γ^(0)(Y) := {∅}. Hence one can introduce the corresponding Borel σ-algebra, which we denote by B(Γ^(n)(Y)). The space of finite configurations Γ_0(Y) is the disjoint union of the Γ^(n)(Y), n ≥ 0, equipped with the topology of the disjoint union; on Γ_0(Y) we consider the corresponding Borel σ-algebra, denoted by B(Γ_0(Y)). In the case Y = R^d we omit Y from the notation, writing Γ_0 and Γ^(n). The Lebesgue-Poisson measure λ on Γ_0 is given by λ := Σ_{n≥0} (1/n!) m^(n), where m^(n) is the image on Γ^(n) of the product of n copies of the Lebesgue measure. For any Λ ∈ B_b(R^d) the restriction of λ to Γ(Λ) := Γ_0(Λ) will also be denoted by λ. The space (Γ, B(Γ)) can be obtained as the projective limit of the family of spaces (Γ(Λ), B(Γ(Λ))), Λ ∈ B_b(R^d), see, e.g., [1].
The Poisson measure π on (Γ, B(Γ)) is given as the projective limit of the family of measures {π^Λ}_{Λ∈B_b(R^d)}, where π^Λ := e^{−m(Λ)} λ is the probability measure on (Γ(Λ), B(Γ(Λ))) and m(Λ) is the Lebesgue measure of Λ ∈ B_b(R^d); see, e.g., [1]. We denote by B_bs(Γ_0) the set of all bounded measurable functions on Γ_0 with bounded support; the restriction of such a function to Γ^(n) is a B(Γ^(n))-measurable function on Γ^(n). As usual, functions on Γ are called observables and functions on Γ_0 are called quasi-observables. There exists a mapping from B_bs(Γ_0) into F_cyl(Γ) which plays the key role in our further considerations. It has the following form:

(KG)(γ) := Σ_{η⋐γ} G(η), G ∈ B_bs(Γ_0), (2.4)

see, e.g., [15,21,22]. The summation in (2.4) is taken over all finite subconfigurations η ∈ Γ_0 of the (infinite) configuration γ ∈ Γ; we denote this by the symbol η ⋐ γ. The mapping K is linear, positivity preserving, and invertible, with

(K⁻¹F)(η) = Σ_{ξ⊆η} (−1)^{|η\ξ|} F(ξ), η ∈ Γ_0.

Note that if a function F has the special form F(γ) = Σ_{x∈γ} H(x, γ\x), where H(x, ·) is defined point-wisely at least on Γ_0, then K⁻¹F can be found by direct computation. We set also e_λ(f, η) := Π_{x∈η} f(x), η ∈ Γ_0, for any measurable function f on R^d. A measure μ ∈ M¹_fm(Γ) is called locally absolutely continuous with respect to the Poisson measure π if for any Λ ∈ B_b(R^d) the projection of μ onto Γ(Λ) is absolutely continuous with respect to the projection of π onto Γ(Λ). By [15], in this case there exists a correlation functional k_μ : Γ_0 → R_+ such that for any G ∈ B_bs(Γ_0) the following equality holds:

∫_Γ (KG)(γ) dμ(γ) = ∫_{Γ_0} G(η) k_μ(η) dλ(η).

We recall now, without a proof, a partial case of the well-known technical lemma (see, e.g., [19]) which plays a very important role in our calculations:

∫_{Γ_0} Σ_{ξ⊆η} H(ξ, η\ξ) dλ(η) = ∫_{Γ_0} ∫_{Γ_0} H(ξ, η) dλ(ξ) dλ(η),

if both sides of the equality make sense. For arbitrary and fixed C > 1 we consider the functional Banach space

L_C := L¹(Γ_0, C^{|η|} dλ(η)). (2.10)

In the sequel, the symbol ‖·‖_C stands for the norm of the space (2.10).
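On a finite configuration the K-transform (2.4) is simply a sum over all subconfigurations, which makes it easy to verify numerically. For G(η) = Π_{x∈η} f(x), the Lebesgue-Poisson exponent e_λ(f, η), this yields the classical identity (K e_λ(f))(γ) = Π_{x∈γ}(1 + f(x)); the finite-γ check below is an illustration with an arbitrary choice of f:

```python
# Illustrative check of the K-transform (2.4) on a finite configuration:
# (KG)(gamma) = sum over all subconfigurations eta of gamma of G(eta).
from itertools import combinations
import math

def K_transform(G, gamma):
    total = 0.0
    for r in range(len(gamma) + 1):
        for eta in combinations(gamma, r):
            total += G(eta)
    return total

f = lambda x: 1.0 / (1.0 + x * x)                  # any bounded function
e_lam = lambda eta: math.prod(f(x) for x in eta)   # e_lambda(f, eta)

gamma = (0.0, 1.0, 2.0, -1.5)                      # a finite configuration
lhs = K_transform(e_lam, gamma)
rhs = math.prod(1.0 + f(x) for x in gamma)
print(abs(lhs - rhs) < 1e-9)
```

The empty subconfiguration contributes G(∅) = 1, matching the empty product in the identity; for infinite γ the sum in (2.4) requires the bounded-support assumption on G.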
Let dλ_C := C^{|·|} dλ. Then the dual space (L_C)′ = L^∞(Γ_0, dλ_C) is isometrically isomorphic to the Banach space

K_C := { k : Γ_0 → R : k · C^{−|·|} ∈ L^∞(Γ_0, λ) },

with the norm ‖k‖_{K_C} := ‖k · C^{−|·|}‖_{L^∞(Γ_0, λ)}, where the isomorphism is provided by the isometry R_C u := u · C^{|·|}. In fact, one may consider the duality between the Banach spaces L_C and K_C given by the following expression:

⟨⟨G, k⟩⟩ := ∫_{Γ_0} G(η) k(η) dλ(η), G ∈ L_C, k ∈ K_C. (2.11)

In the paper [17], an analytic approach to the construction of non-equilibrium dynamics on Γ was proposed, which deeply uses the harmonic analysis on configuration spaces. In this approach the dynamics of correlation functions corresponding to (1.4) is given by the evolution equation

∂k_t/∂t = L^Δ k_t, (2.12)

where L^Δ is the operator dual, with respect to the duality (2.11), to the K-image L̂ := K⁻¹LK of L; hence, L^Δ = L̂*. In order to construct the evolution of correlation functions we follow the scheme: we show that L̂ is the generator of a C_0-semigroup in a certain Banach space and then consider the dual semigroup, which solves the Cauchy problem (2.12). Functional evolutions Note that B_bs(Γ_0) ⊂ D. In particular, D is a dense set in L_C. In [14], we found sufficient conditions for the operator (L̂, D) to be a generator of a semigroup in L_C. In the case of the Markov generators (2.2) or (2.3), this result may be formulated in the following way. Lemma 3.1 (Theorem 3.2 of [14]). Suppose there exists 0 < a < C/2 such that (3.1) holds. Then (L̂, D) is the generator of a holomorphic semigroup in L_C. It is worth noting that if (3.1) is valid, then the corresponding estimate holds for any G ∈ D. Theorem 3.2. Let 0 ≤ a⁺, b⁺, φ ∈ L¹(R^d) be even functions such that (2.1) holds, with ∫ a⁺ = 1 and B := ∫ b⁺ ≥ 0. Suppose, additionally, that there exist constants A_1, A_2 ≥ 0 such that (3.3) and (3.4) hold, and that the mortality satisfies (3.5). Then (3.1) holds and (L̂_est = K⁻¹ L_est K, D) is the generator of a holomorphic semigroup Û_est(t) in L_C. Remark 3.3. In the density-independent case, b⁺ ≡ 0, the assumption (3.4) holds with A_2 = 0.
Moreover, since B = 0, the condition (3.5) will have the following form: … Before the proof of Theorem 3.2, we give an example of a⁺, b⁺ which satisfy (3.4) in the Lemma below. Lemma 3.4. Suppose that there exist constants E_1, E_2 > 0 and δ > d such that … Proof of Lemma 3.4. Using the obvious inequality …, we obtain …, which proves the statement. Proof of Theorem 3.2. Let us set … To check (3.1), we will estimate the integral … uniformly in x ∈ R^d and ξ ∈ Γ_0. In view of (3.6), one has … Using (2.5)-(2.7), we obtain … Next, let κ = e^{c_φ} C; then, by (2.8), … By (3.3), one has …, where we used the elementary inequality x e^{−x} ≤ e^{−1}, x ≥ 0. Next, by (3.4), we may estimate … To obtain (3.1), it is enough to suppose that D ≤ am, where a/C < 1/2. Hence, we need only that m > 2D/C, that is, (3.5). The theorem is proved. Theorem 3.5. Let 0 ≤ a⁺, b⁺, φ ∈ L¹(R^d) be even functions such that (2.1) holds and ‖a⁺‖_{L¹} = 1, B = ‖b⁺‖_{L¹} ≥ 0. Suppose, additionally, that there exist constants A_1, A_2 ≥ 0 such that, for a.a. x, y, … Then (3.1) holds and (L̂_fec = K^{−1}L_fec K, D) is the generator of a holomorphic semigroup Û_fec(t) in L_C. Let (L′, Dom(L′)) be the operator in (L_C)′ which is dual to the closed operator (L̂, D). Here and below, L̂ means either L̂_est or L̂_fec. We consider also its image on K_C under the isometry R_C, namely, let L̂* … By Proposition 3.5 of [14], for any α ∈ (0; 1), … Under the conditions of Theorem 3.2 or Theorem 3.5, there exists a ∈ (0; C/2) such that (3.1) holds. In the following, let T̂(t) denote either Û_est(t) or Û_fec(t). One can consider the adjoint semigroup T̂′(t) in (L_C)′ and its image T̂*(t) in K_C. By, e.g., Subsection II.2.6 of [9], the restriction T̂^⊙(t) of the semigroup T̂*(t) onto its invariant Banach subspace, the closure of Dom(L̂*) (here and below all closures are taken in the norm of the space K_C), is a strongly continuous semigroup. Moreover, its generator L̂^⊙ will be a part of L̂*, namely, … Theorem 3.7 (Theorem 3.8 of [14]).
For any α ∈ (2a/C; 1), … Therefore, for α ∈ (2a/C; 1), one can consider the restriction T̂^{⊙α} of the semigroup T̂^⊙ onto K_{αC}. This restriction is a strongly continuous semigroup with generator L̂^{⊙α}, which is the restriction of L̂^⊙ onto K_{αC} (see, e.g., Subsection II.2.3 of [9]). Therefore, …, and L̂^{⊙α} coincides with L̂* on Dom(L̂^{⊙α}). Note that for any k ∈ K_{αC} ⊂ D(L̂*), … The explicit expressions can be found using (3.7) or (3.11). Hence, we have a strong solution (in the sense of the norm in K_C) of the evolution equation ∂k_t/∂t = L̂* k_t (3.12), at least on the subspace K_{αC}.

Vlasov scaling

To begin with, we would like to explain the idea of the Vlasov-type scaling. The general scheme describing this scaling for birth-and-death dynamics as well as for conservative ones may be found in [13]. This approach was successfully realized for the Bolker-Dieckmann-Law-Pacala model (1.2)-(1.6) in [11]. Let us now detail how we organize the Vlasov-type scaling. We initially scale the generator L by the scaling parameter ε > 0 in such a way that the following holds. First of all, the K-image L̂_ε of the rescaled operator L_ε has to be a generator of a semigroup on some L_{C_ε}. Consider the corresponding dual semigroup T̂*_ε(t). Let us choose an initial function of the corresponding Cauchy problem depending on ε in such a way that k_0^{(ε)}(η) ∼ ε^{−|η|} r_0(η), ε → 0, η ∈ Γ_0, with some function r_0 independent of ε. Secondly, the scaling L → L_ε has to be performed so as to assure that the semigroup T̂*_ε(t) preserves the order of the singularity: … Moreover, the dynamics r_0 → r_t should preserve coherent states. Namely, if r_0(η) = e_λ(ρ_0, η), then r_t(η) = e_λ(ρ_t, η), and there exists an explicit (in general, nonlinear) differential equation for ρ_t: …, which is called the Vlasov-type equation. Below we realize this approach for the case where the birth rate b is either the one with establishment (see (2.2)) or the one corresponding to the fecundity mechanism.
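The coherent states e_λ above are built from the Lebesgue-Poisson exponent; its standard definition in this literature (reconstructed from the cited references, not from this garbled text) is:

```latex
e_\lambda(\rho,\eta) := \prod_{x \in \eta} \rho(x), \quad \eta \in \Gamma_0 \setminus \{\emptyset\},
\qquad e_\lambda(\rho,\emptyset) := 1 .
```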
Let us consider, for any ε ∈ (0; 1], the following scaling …, with b_ε = b(εa⁺, εb⁺, εφ). Here D^±_x are given by (1.3). We denote by b_{ε,est} and b_{ε,fec} the scaled rates for the corresponding models. We define also the renormalized operator (see [11,13] for details) …, where (R_σ G)(η) = σ^{|η|} G(η) for arbitrary σ > 0. Proof. We begin with the establishment case. Set … By (3.7), we have … Since ε ∈ (0; 1], the estimate for ε^{−|η|}(K_0^{−1} b_{ε,est}(x, ξ ∪ ·))(η) will be almost the same as that for (K_0^{−1} b(x, ξ ∪ ·))(η) in the proof of Theorem 3.2. The changes concern the term |e^{−φ} − 1|, which will be substituted by φ; this leads to a new constant (involving φ) instead of c_φ in the further estimates. The rest of the proof is the same as in the non-scaled case. The same approach may be used for the case of fecundity. Indeed, … Arguments analogous to the establishment case complete the proof. One can get the same result for the fecundity case in a similar way. Let us denote by B̄^∞_c the closed ball of radius c > 0 in the Banach space L^∞(R^d). Using Lemma 4.3, one can easily pass to the limit in (4.1). Therefore, in view of the general results presented in [14], we are able to state now the main theorem of this section. … 3. There exists α_0 ∈ (0; 1) such that for any α ∈ (α_0; 1) the operator L̂^{⊙α}_{V,♯} = L̂*_{V,♯} with the domain … will be a generator of a strongly continuous semigroup Û^{⊙α}_{V,♯}(t) on the space K_{αC}. Moreover, for k ∈ K_{αC}, the Cauchy problem … has a unique solution k_t = e_λ(ρ_t) in K_{αC}, provided ρ_t belongs to B̄^∞_{αC} and satisfies the Vlasov-type equation … Taking into account the explicit expressions for B^{V,♯}_x, one can rewrite (4.8) in a simpler form. Namely, using (2.9), for the establishment case we obtain …, and, by (2.8), we will have … Here and below, ∗ means the usual convolution of functions on R^d.
Analogously, for the fecundity case, we obtain … Of course, we are mostly interested in nonnegative solutions of the Vlasov equation, so that k_t = e_λ(ρ_t) is the correlation function of a non-homogeneous Poisson measure with intensity ρ_t. The existence and uniqueness of such a solution are established by the following propositions. … Then the equation (4.9) with initial condition 0 ≤ ρ_0 ∈ B̄^∞_c has a non-negative solution ρ_t. Moreover, ρ_t ∈ B̄^∞_c and it is the unique solution from B̄^∞_c. Proof. Let us fix some T > 0 and consider the Banach space X_T of all continuous functions on [0; T] with values in L^∞(R^d); the norm on X_T is given by … We denote by X^+_T the cone of all nonnegative functions from X_T. Denote also by B^+_{T,c} the set of all functions u from X^+_T with ‖u‖_T ≤ c. Let Φ be the mapping which assigns to any v ∈ X_T the solution u_t of the linear Cauchy problem … for a.a. x ∈ R^d. Therefore, … It is easy to see that Φv ∈ X_T. Indeed, one can estimate …, where we have used the trivial inequality … (4.14). Clearly, u_t solves (4.9) if and only if u is a fixed point of the mapping Φ : X_T → X_T. We have that v ∈ X^+_T implies Φv ∈ X^+_T. Next, for any v, w ∈ X^+_T, … Taking into account (4.14) and the obvious inequalities x e^{−x} ≤ e^{−1} for x ≥ 0, |e^{−a} − e^{−b}| ≤ |a − b| for a, b ≥ 0, and, moreover, … for any a, b, p, q ≥ 0, we obtain … Using the bound …, we may continue to estimate (4.15) as follows: … For ‖v‖_T ≤ c, ‖w‖_T ≤ c, one can estimate this expression by … Moreover, if ρ_0 ∈ B̄^∞_c and v ∈ B^+_{T,c}, then, by (4.13), …, provided (4.12) holds. As a result, by (4.11) and (4.12), Φ is a contraction mapping on the closed set B^+_{T,c}. Taking, as usual, v^{(n)} = Φ^n v^{(0)}, n ≥ 1, for v^{(0)} ∈ B^+_{T,c}, we obtain that {v^{(n)}} ⊂ B^+_{T,c} is a fundamental sequence in X_T which has, as a result, a unique limit point v ∈ X_T. Since B^+_{T,c} is a closed set, we have v ∈ B^+_{T,c}. Then, according to the classical Banach fixed point theorem, v will be a fixed point of Φ on X_T and the unique fixed point on B^+_{T,c}.
The same considerations may be applied to the Vlasov equation (4.10). To combine these results with the statement of Theorem 4.4, we need additionally that (4.11) and (4.12) hold with c = αC.
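The contraction argument above reduces solvability of the Vlasov-type equation to Picard iteration of the map Φ. The scheme can be illustrated numerically; the sketch below iterates a toy contraction (Lipschitz constant 0.4 in the sup norm), not the paper's actual Vlasov kernels, which are elided in the source.

```python
import math

# Picard iteration u_{n+1} = Phi(u_n) for a toy contraction Phi,
# illustrating the Banach fixed-point scheme used in the proof.
# NOT the paper's Vlasov equation: Phi below is an illustrative map
# with Lipschitz constant 0.4 < 1 in the sup norm.

def picard(phi, u0, tol=1e-12, max_iter=200):
    """Iterate u -> phi(u) until successive sup-norm changes fall below tol."""
    u = u0
    for n in range(1, max_iter + 1):
        v = phi(u)
        if max(abs(a - b) for a, b in zip(u, v)) < tol:
            return v, n
        u = v
    return u, max_iter

# Pointwise map x -> 0.3 + 0.4*exp(-x): a contraction on [0, 1]^5.
phi = lambda u: [0.3 + 0.4 * math.exp(-x) for x in u]
u_star, n_iter = picard(phi, [0.0] * 5)
```

Since the contraction factor is 0.4, the error shrinks geometrically and the iteration terminates in a few dozen steps with a residual below the tolerance.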
Mucinous cystadenocarcinoma of the ovary in a 14-year-old girl: a case report and literature review

Background: Ovarian epithelial tumors are common in adults, and their peak incidence of onset is over 40 years of age. In children, most ovarian tumors are germ cell-derived, whereas epithelial tumors are rare and mostly benign.

Case presentation: This report describes a case of a 14-year-old Chinese girl with ovarian mucinous cystadenocarcinoma. She was admitted with a small amount of bloody vaginal discharge during the past month. Magnetic resonance imaging of the abdomen and pelvis showed a large solid cystic mass lesion in the left ovary. Tumor marker levels were within normal limits (CA-125: 22.3 U/mL, HE4: 28.5 pmol/L, HCG: < 1.20 mIU/ml, AFP: 3.3 ng/ml, CEA: 2.2 ng/ml, CA19-9: < 2.0 U/mL). Laparoscopic exploration revealed a large left ovarian tumor. The patient underwent left salpingo-oophorectomy, and showed no significant issues during follow-up, as well as no evidence of recurrence or metastasis.

Conclusions: We report the first pediatric case of ovarian mucinous cystadenocarcinoma in China. Given the scarcity of reports addressing the clinical management of this condition, the present study provides a useful contribution to its further understanding in light of developing future treatment strategies.

Background

The incidence of epithelial ovarian tumors in children is low, accounting for less than 20% among ovarian tumor types [1]. In addition, the vast majority of ovarian tumors in children are benign [2,3], while malignant ovarian epithelial neoplasms are extremely rare. In this article, we report a case of a Chinese 14-year-old girl who developed ovarian mucinous cystadenocarcinoma after menarche. To our knowledge, this is the fifteenth reported case of cystadenocarcinoma (ninth case of mucinous cystadenocarcinoma) in children worldwide.
The age range of the 14 previously reported cases identified in PubMed using the following search terms: ovary, epithelial carcinoma or tumor, mucinous carcinoma or tumor, adolescent, children or pediatric, was 4-14 years, with eight cases of mucinous cystadenocarcinoma and six cases of other cancer types (Table 1). A 14-year-old girl was admitted to the gynecology outpatient department of Women and Children's Hospital of Chongqing Medical University in April 2022, with complaints of a small amount of bloody vaginal discharge during the past month. Before this event she reported an average menstrual cycle of 25 days, with her menarche at the age of thirteen, and her periods averaging five days, with normal volume and no dysmenorrhea. In addition, the patient's past medical history showed no history of infectious or genetic diseases, trauma, surgery, blood transfusion, drug or food allergies. Pelvic color doppler ultrasound scan revealed a mass with cystic and solid components in the left ovary extending upward to the inferior hepatic rim, downward to below the pubic symphysis, and forward to the anterior axillary line. Magnetic resonance imaging (MRI) scans of the abdomen and pelvis showed a large solid cystic mass lesion in the left ovary that measured 6.7 × 18.9 × 29.3 cm, and the contrasted scan showed an enhanced septum and solid component (Fig. 1). The liver and intrahepatic ducts, gallbladder, pancreas, spleen, both kidneys and ureters were unremarkable. The patient's pre-operative serum levels showed that carbohydrate antigen (CA-125): 22.3 U/mL (normal: < 35 U/mL), human epididymal protein-4 (HE4): 28.5 pmol/L (normal: < 70 pmoI/L), human chorionic gonadotropin (HCG): < 1.20 mIU/ml (normal: < 5.0 mIU/ml), alpha-fetoprotein (AFP): 3.3 ng/ml (normal: 0-7 ng/ml), carcinoembryonic antigen (CEA): 2.2 ng/ml (normal: 0-5 ng/ml), carbohydrate antigen 19 − 9 (CA19-9): < 2.0 U/mL (normal: 0-43 U/mL) were within normal limits. 
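The marker panel and the reference upper limits quoted in the text can be tabulated and checked programmatically; the sketch below is purely illustrative (values censored as "<" in the report are entered at their reporting bound), not part of the clinical workflow.

```python
# Tumor marker panel from the case, checked against the upper reference
# limits quoted in the text. "<" censored values (HCG, CA19-9) are
# entered at their reporting bound for this illustration.

REFERENCE_UPPER = {      # upper limit of normal, per the text
    "CA-125": 35.0,      # U/mL
    "HE4": 70.0,         # pmol/L
    "HCG": 5.0,          # mIU/ml
    "AFP": 7.0,          # ng/ml
    "CEA": 5.0,          # ng/ml
    "CA19-9": 43.0,      # U/mL
}

patient = {"CA-125": 22.3, "HE4": 28.5, "HCG": 1.2,
           "AFP": 3.3, "CEA": 2.2, "CA19-9": 2.0}

def abnormal_markers(values, limits):
    """Return the markers exceeding their upper reference limit."""
    return [m for m, v in values.items() if v > limits[m]]

print(abnormal_markers(patient, REFERENCE_UPPER))  # [] -> all within limits
```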
The patient, with a height of 155 cm and weight of 45.5 kg (BMI: 18.9), presented no particular concerns regarding nutritional status, general vital signs, psychosocial state, or risk factors for stress-related injuries. The results of the complete hemogram (WBC: 7.5 × 10^9/L, RBC: 4. …) were unremarkable. After examination, given the patient's pediatric status and the aim to minimize surgical invasiveness, the patient underwent laparoscopic adnexectomy (single-port surgery), which revealed a large left ovarian tumor measuring 29 × 25 × 15 cm with an intact capsule adhering to the omentum; however, the surrounding peritoneum and the right ovary appeared to be uneventful. A small amount of ascites was found in the abdominal cavity and the peritoneal lavage was sent for cytological examination. A left salpingo-oophorectomy was performed followed by frozen tissue sectioning. Subsequently, the patient underwent appendectomy and omentectomy, and the specimens were sent for routine histopathological examination. As no abnormalities were observed in the right ovary during the laparoscopic exploration, and in an effort to minimize surgical impact on the right ovarian function, we did not remove any part of the right ovary for pathological examination. Frozen sections revealed an enlarged dissected ovary measuring 15 × 9 × 4 cm, with a cystic cut section and presence of solid areas. Cysts were filled with turbid mucus, and their walls were mostly smooth with a few surface irregularities and a 3 × 2.5 × 1.5 cm pink nodular excrescence in the interior (Fig. 2a). No significant abnormalities were found in the fallopian tubes. The patient was followed up after surgery every three months with gynecologic ultrasound and analysis of tumor markers (CEA, AFP, CA19-9, CA-125, HE4). Ultrasound results showed no pelvic mass and tumor markers were within normal range. Her current menstrual cycle is about 15 days, with an average period of 3-4 days, low volume, and no dysmenorrhea.
The patient has so far shown no signs of recurrence and is still being followed up.

Fig. 1 (a) Pelvic color Doppler ultrasound scan revealed a large mass, predominantly cystic, with poor internal echogenicity. It presented as fine, weakly echogenic spots, with numerous unevenly sized septations (green arrow), and several differently sized solid echogenic protrusions into the cystic cavity (red arrow), giving a cauliflower-like appearance. (b) MRI images revealed a large cystic-solid mass, predominantly cystic, with visible septations (red arrow) and wall nodules (green arrow) within.

Discussion

Here we reported a case of ovarian mucinous adenocarcinoma in a 14-year-old girl, which to our knowledge is the first case of ovarian epithelial tumor reported so far in Chinese and East Asian children. Of the eight cases of mucinous adenocarcinoma that have been previously reported in the literature worldwide, seven were from Europe or the United States, and one was from India. Some studies have suggested that ovarian cancer incidence may be related to race and ethnicity, with rates among white adolescents and young adults almost twice as high as those among black women of the same age group [12]. The question of whether racial and ethnic differences are also associated with the incidence of ovarian epithelial tumors in children has been so far unanswered, partly because of the few reported cases. Thus, continued attention and further research are needed in this regard. In previous reports on mucinous carcinoma of the ovary in children under 15 years of age, the clinical and pathological features showed that most of the patients had abdominal pain and distention as their first symptoms; elevated serum tumor marker CA125 was reported in two cases, and mildly elevated HCG was reported in one case. None of these reports had follow-up of the patients' postoperative menstrual status.
In the present case, the patient did not have abdominal pain and distension, but presented with a small amount of bloody vaginal discharge. Through follow-up interviews, we know that her current menstrual cycle is about 15 days, with an average period of 3-4 days, low volume, and no dysmenorrhea. Epithelial tumors of the ovary are very common in women, and most commonly occur during adulthood. Of note, the types of ovarian tumors occurring in children before the age of 15 differ from those diagnosed in adults, with a majority of sex cord-stromal and germ cell tumors, and less than 20% of epithelial tumors. In this line, Van et al. [13] reported ovarian masses in infancy, childhood and adolescence, with the incidence of epithelial and germ cell tumors being 15% and 70%, respectively, in the under-15 age group, and 41% and 43% in the over-15 age group. Similar findings were obtained by Young et al. [12] and Li et al. [14]. Concerning ovarian cancer pathogenesis, the traditional view is that all ovarian cancer subtypes originate at the surface epithelium of the ovary. During ovulation, damage to the ovarian surface is caused by the follicle rupture and consequent oocyte release, and during damage repair, the epithelial cells on the ovarian surface become invaginated and form cortical cysts. Exposure of epithelial cells lining the cortical cyst to a new hormone-rich environment induces their proliferation, and eventually some epithelial cells that happen to harbor remaining DNA damage may become carcinogenic, thus leading to ovarian cancer [15]. This rationale explains those ovarian epithelial malignant tumors occurring after menarche but not the cases that occur before it. Consistently, the characteristics of ovarian epithelial tumors are different between children and adults, with serous tumors being the most common in adults while mucinous tumors are the most common in children.
It has been suggested that some of the mucinous tumors in the age group of 10-14 may originate from monoblastic differentiation of the gastrointestinal mucinous epithelium in teratomas [16]. This is the age at which germ cell tumors are prone to occur, which may explain why a proportion of mucinous tumors occur before menarche. The main mode of presentation of ovarian epithelial tumors was abdominal pain, bloating or menstrual disturbances with vague symptoms that were initially ignored. The girl reported in this case presented with a small amount of bloody vaginal discharge, which opportunely caught her attention. Notably, approximately 21% of patients with ovarian epithelial tumors are asymptomatic [14], which frequently leads to disease progression before the lesion is diagnosed. Both analysis of tumor markers as well as radiological studies are important tools in the diagnosis of ovarian cancer [17]. CA 125 has been widely used as a marker for epithelial ovarian tumors; however, its positive predictive value is debatable.

Fig. 2 (a) Gross examination: The ovarian mass measured 15 × 9 × 4 cm, with a predominantly cystic appearance upon sectioning. The cysts were filled with turbid mucus, and their walls were largely smooth, with occasional surface irregularities and a pink nodular excrescence visible internally (red arrow). (b) Microscopic examination: The glands, exhibiting papillary and cribriform shapes, showed expansile growth with anastomosing architecture and minimal to absent stroma. The lining of most epithelial cells displayed moderate to severe atypia, with diminished or absent mucinous differentiation and conspicuous mitotic figures (red arrow) (H&E, ×400).
Although serum CA 125 levels (> 35 U/mL) are elevated in more than 80% of patients with ovarian epithelial carcinoma, they are also elevated in approximately 1% of non-neoplastic conditions such as endometriosis, cirrhosis, pancreatitis, pelvic inflammatory disease, and advanced abdominal (non-ovarian) malignancies [18]. Furthermore, serum CA 125 has been reported to have only 78.1% sensitivity and 76.8% specificity in detecting primary ovarian epithelial carcinoma [19]. In spite of this, it remains a useful tumor marker to be used in combination with imaging findings, which often show an adnexal mass in the presence of a tumor. Frailty, characterized by increased vulnerability and reduced health response, plays an important role in predicting postoperative complications and survival outcomes in gynecological oncology. This assessment, which correlates with prolonged hospital stay and increased risk of organ failure, mortality, and rehospitalization, should be performed using standard scores such as the Clinical Frailty Scale (CFS-7) and Frailty Index (FI) [20]. This enables personalized therapeutic strategies for patients with gynecological malignancies, thereby improving oncological outcomes. Although we assessed her body mass index (BMI), nutritional status, general vital signs, psychosocial status, and risk factors for stress-related injury and concluded that she was likely to tolerate surgery with minimal risk of serious complications, we did not use specific frailty measures (e.g., CFS-7, FI). We recognize the importance of these assessments and acknowledge this as an area for future improvement. Following surgery, the patient developed a low-grade fever, and a complete blood count (CBC) indicated an elevated white blood cell count and neutrophil percentage, warranting antibiotic treatment. Postoperative pain was managed with pain management therapy, resulting in only mild discomfort, as indicated by a Visual Analogue Scale (VAS) score of less than 3.
No other postoperative complications were observed. The prognosis of ovarian cancers presenting at a young age is variable and depends on the tumor stage upon presentation as well as histological type. In the eight reported cases of ovarian mucinous carcinoma in children to date, five were at stage I. Among these, one case was found to have cancerous thrombi in the vessels and died from recurrent metastasis two years later, while the rest achieved a survival time of over five years. The other three cases were at a higher stage: one died a year after diagnosis, and the other two had either metastasis at the time of diagnosis or cancer cells were found in the ascites, suggesting a less favorable prognosis. Treatment of ovarian epithelial carcinoma in children relies on the experience from treating adult patients, with an emphasis on the preservation of reproductive functions. However, individualized regimens have been developed based on the establishment of comprehensive surgical staging. In patients with low-grade stage IA (serous, endometrioid or mucinous expansile subtype), fertility-sparing surgery (FSS) appears to be a safe option [21][22][23]. FSS can also be considered for stage IC1 tumors, as recurrences, which are often isolated on the remaining ovary, can typically be managed with subsequent surgery. However, it's important to note that recurrence rates tend to be higher in stage IC2, IC3, and grade 3 diseases, mainly in extra-ovarian sites. This suggests that these recurrences may not be directly associated with the fertility-sparing approach. Therefore, comprehensive counseling is crucial when considering FSS in these situations [24]. Various adjuvant chemotherapy regimens have been used in epithelial carcinoma of the ovary to improve survival. A comprehensive cohort study [25] found that chemotherapy not only reduced mortality for high-risk patients but also for those with stage IA/IB, grade 2 ovarian cancer. 
This aligns with previous studies showing no advantage of chemotherapy for women with stage IA and IB, grade 1 tumors. In particular, for histological subtypes like mucinous subtype, the expansile or grade I type, which is linked to a better prognosis, is not recommended for adjuvant chemotherapy, while the infiltrative form has a high relapse risk [26][27][28][29]. Conclusions In summary, ovarian epithelial tumors are very rare in children, and their pathogenesis, especially before menarche, remains unclear, which poses a challenge to the treatment and management of the disease. In this context, the present case report adds valuable information concerning the clinical, serological and imaging characteristics of mucinous cystadenocarcinoma, a rare type of epithelial ovarian cancer, in a young patient. Given the scarcity of reports addressing the clinical management of this condition, the present study provides a useful contribution to its further understanding in light of developing future treatment strategies.
Vaccine effectiveness of ChAdOx1 nCoV-19 against COVID-19 in a socially vulnerable community in Rio de Janeiro, Brazil: a test-negative design study

Objectives: To estimate vaccine effectiveness after the first and second dose of ChAdOx1 nCoV-19 against symptomatic COVID-19 and infection in a socially vulnerable community in Brazil when Gamma and Delta were the predominant variants circulating.

Methods: We conducted a test-negative study in the community Complexo da Maré, the largest group of slums (n = 16) in Rio de Janeiro, Brazil, from January 17, 2021 to November 27, 2021. We selected RT-qPCR positive and negative tests from a broad community testing program. The primary outcome was symptomatic COVID-19 (positive RT-qPCR test with at least one symptom) and the secondary outcome was infection (any positive RT-qPCR test). Vaccine effectiveness was estimated as 1 − OR, which was obtained from adjusted logistic regression models.

Results: We included 10 077 RT-qPCR tests (6,394, 64% from symptomatic and 3,683, 36% from asymptomatic individuals). The mean age was 40 (SD: 14) years, and the median time between vaccination and RT-qPCR testing among vaccinated was 41 (25–75 percentile: 21–62) days for the first dose and 36 (25–75 percentile: 17–59) days for the second dose. Adjusted vaccine effectiveness against symptomatic COVID-19 was 31.6% (95% CI, 12.0–46.8) 21 days after the first dose and 65.1% (95% CI, 40.9–79.4) 14 days after the second dose. Adjusted vaccine effectiveness against COVID-19 infection was 31.0% (95% CI, 12.7–45.5) 21 days after the first dose and 59.0% (95% CI, 33.1–74.8) 14 days after the second dose.

Discussion: ChAdOx1 nCoV-19 was effective in reducing symptomatic COVID-19 in a socially vulnerable community in Brazil when Gamma and Delta were the predominant variants circulating.
Introduction

The impact of coronavirus disease 2019 (COVID-19) is disproportionate on socially vulnerable communities [1–3], which have decreased resilience when confronted by external stresses [4]. Large populations in low- and middle-income countries live in slums or favelas, which are densely populated urban areas with deteriorated or incomplete infrastructure, high risk of infectious diseases transmission, and limited access to health care services and vaccination [5]. Studies that estimate vaccine effectiveness (VE) in neighbourhoods such as favelas are lacking. Brazil has faced one of the worst public health crises worldwide because of COVID-19, which was aggravated by the spread of variants of concern (VoCs), particularly Gamma in 2021 [6]. We estimated the VE of ChAdOx1 nCoV-19 (AstraZeneca/Oxford vaccine, hereafter ChAdOx1) against symptomatic COVID-19 and infection using a test-negative design in an adult population from a large, socially vulnerable community (Complexo da Maré) in Rio de Janeiro, Brazil.

Methods

Complexo da Maré is the largest group of favelas in Rio de Janeiro, composed of 16 favelas with 140 000 residents [7], with 54% of the population age 30 years and a low human development index (0.686; 123rd of Rio's 126 neighbourhoods) in 2010 [8]. From the beginning of the pandemic until November 27, 2021, the region presented high rates of positive cases (7852 per 100 000) and deaths (271 per 100 000) [9]. In July 2020, a community broad testing strategy became available at the Complexo da Maré after an effort of civil society, nongovernmental organizations, and the local community [10]. Testing was free of charge and available in tents located in three different regions in Maré. There have been 213 RT-qPCR tests per 1000 inhabitants since the beginning of the campaign. During the first period, Gamma was the prevalent VoC in Rio de Janeiro, and Delta became dominant after July 2021 (Fig. S1) [11].
This study was approved by the national research ethics committee (CAAE: 49726921.6.0000.5248). The COVID-19 vaccination campaign in Complexo da Maré initially followed the Rio strategy, starting on January 17, 2021 and according to an age-based priority, and by the end of July 2021, 38% of Maré residents had received a first dose. Maré received a mass vaccination campaign, which administered approximately 36 000 first doses of ChAdOx1 between July 29 and August 1, 2021, followed by second doses between October 14 and 16, 2021, achieving 93.4% coverage with two doses in adults (Fig. S2). Our analysis encompasses the period between January 17, 2021 and November 27, 2021. During this period, a total of 83 762 doses (64 352 first, 19 410 second, and 11 217 third doses) of ChAdOx1 were administered. We did not analyze other vaccine platforms because of the small coverage in the area. RT-qPCR tests sampled after the third dose were excluded. We used a test-negative design to estimate the VE of ChAdOx1 against symptomatic COVID-19 (primary outcome). We linked the community-program testing database with the vaccination campaign database. Overall, we followed the methodology reported elsewhere [12–14]. Briefly, we selected all RT-qPCR tests (positive and negative) from symptomatic individuals, defined as presenting with at least one symptom, from RT-qPCR tests sampled within 10 days of symptom onset [13]. We excluded individuals with a previous positive RT-qPCR test and those with a negative and subsequent positive test in the following 14 days. We estimated VE as 1 − OR from adjusted logistic regression models. Our primary analysis was effectiveness against symptomatic COVID-19 21 days after the first dose and 14 days after the second dose of ChAdOx1.
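In a test-negative design, the unadjusted version of VE = 1 − OR reduces to the cross-product ratio of the 2×2 table of vaccination status by test result. The study's estimates were adjusted via logistic regression in R; the sketch below, with made-up counts, only illustrates the arithmetic.

```python
# Unadjusted test-negative VE = 1 - OR from a 2x2 table.
# Counts below are hypothetical, NOT the study's data.

def vaccine_effectiveness(vax_pos, vax_neg, unvax_pos, unvax_neg):
    """VE = 1 - OR, where OR is the cross-product ratio of
    test-positive/negative counts among vaccinated vs unvaccinated."""
    odds_ratio = (vax_pos * unvax_neg) / (vax_neg * unvax_pos)
    return 1.0 - odds_ratio

# Hypothetical example: 70/1000 vaccinated test positive vs 200/1000 unvaccinated.
ve = vaccine_effectiveness(70, 930, 200, 800)
print(f"VE = {ve:.1%}")  # prints: VE = 69.9%
```

The adjusted estimate replaces this cross-product ratio with exp(β) from a logistic regression that conditions on the covariates listed in the Methods.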
We adjusted by time of epidemic (restricted cubic spline on day of the year) and subsequently adjusted by age (restricted cubic spline), sex, self-reported colour/race, Maré residence region, occupation, whether the RT-qPCR test was from routine testing, and six chronic comorbidities (cardiovascular, pulmonary, and liver diseases; diabetes; obesity; and immunosuppressed status). We evaluated the interaction between effectiveness and age groups, divided by the median of the symptomatic population (≤35 years; >35 years). We conducted five sensitivity analyses (Table S1): (a) excluding test-negative cases that reported taste/smell alterations among symptomatic individuals [12]; (b) analyzing individuals with ≥2 symptoms; (c) analyzing symptomatic and asymptomatic cases together; (d) analyzing only asymptomatic cases; and (e) expanding the time groups after the first dose to 14 to 27, 28 to 41, 42 to 56 and >56 days. We considered the period between 0 and 13 days after the first dose as a bias indicator, because we would not expect any protection from the vaccine during this period [12]. We had missing data only for self-reported colour/race (15%), chronic comorbidities (<1%), and region of residence (<1%). We generated 20 multiple imputed datasets using chained equations. We summarized estimates using Rubin's rules. All analyses were conducted in R statistical software, version 4.0.3.

Results

We analyzed 10 077 RT-qPCR test results after applying the inclusion and exclusion criteria (Fig. S3). Overall, 36% of tests were from asymptomatic individuals. The test positivity was 19.4% (1238 of 6394) for symptomatic and 5.7% (198 of 3485) for asymptomatic cases (Figs. S4 and S5). The characteristics of symptomatic cases are shown in Table S2. The mean age was 38 years (standard deviation: 13 years), 65% were female, and 40% were of Brown/Pardo self-reported colour/race. The prevalence of chronic comorbidities was low.
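The pooling across the 20 imputed datasets described in the Methods follows Rubin's rules: the combined point estimate is the mean of the per-dataset estimates, and the total variance adds the within-imputation variance W to the between-imputation variance B inflated by (1 + 1/m). A minimal sketch with hypothetical numbers (not study output):

```python
# Rubin's rules for pooling estimates across m multiply imputed datasets.
# Illustrative only; the study used chained equations in R.
from statistics import mean, variance

def rubins_rules(estimates, variances):
    """Pool point estimates and their variances across imputations."""
    m = len(estimates)
    q_bar = mean(estimates)            # pooled point estimate
    w = mean(variances)                # within-imputation variance
    b = variance(estimates)            # between-imputation (sample) variance
    total_var = w + (1 + 1 / m) * b    # Rubin's total variance
    return q_bar, total_var

# Hypothetical log-odds-ratio estimates from 5 imputed datasets:
q, t = rubins_rules([-1.05, -1.10, -0.98, -1.02, -1.07],
                    [0.04, 0.05, 0.045, 0.05, 0.041])
```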
The median time between vaccination and RT-qPCR testing among vaccinated patients was 41 days (25th–75th percentile: 22–62 days) for the first dose and 35 days (25th–75th percentile: 18–57 days) for the second dose. The characteristics of those with ≥2 symptoms, of symptomatic and asymptomatic cases combined, and of asymptomatic-only cases are shown in Tables S3, S4, and S5. VE of ChAdOx1 is shown in Table 1. Adjusted VE against symptomatic COVID-19 was 31.6% (95% CI, 12.0–46.8%) ≥21 days after the first dose and 65.1% (95% CI, 40.9–79.4%) ≥14 days after the second dose. The period between 0 and 13 days after the first dose (bias indicator) showed no indication of bias. After excluding negative tests from individuals with taste/smell symptoms (n = 5377), the adjusted VE against symptomatic COVID-19 was 65.7% (95% CI, 41.6–79.9%) ≥14 days after the second dose (Table S6). When analyzing those with ≥2 symptoms (n = 5210), the adjusted VE against symptomatic COVID-19 was 62.3% (95% CI, 33.2–78.8%) ≥14 days after the second dose (Table S7). The younger group showed higher effectiveness (Table 1). The adjusted VE increased over the subsequent periods after the first dose, except for >56 days. VE when considering symptomatic and asymptomatic cases together was comparable with the main analysis (Table 1). Adjusted VE among asymptomatic cases was 26.6% (95% CI, −53.8% to 65.0%; Table S8) ≥21 days after the first dose.

Discussion

We observed 31% protection after the first dose and 65% after the second dose of ChAdOx1 against symptomatic COVID-19 in a socially vulnerable community in Rio de Janeiro, Brazil, during a period of mixed Gamma and Delta variant dominance. Our estimates are in accordance with studies of ChAdOx1 effectiveness in the context of the Gamma/Delta variants [14,15]. We observed that VE increased to up to 53.2% during 42 to 55 days after the first dose [14] and decreased afterward in those who did not receive the second dose. The reason for the decrease in effectiveness is not clear.
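Confidence intervals for VE of the kind quoted above are obtained by transforming the confidence limits of the odds ratio on the log scale; because VE = 1 − OR is decreasing in OR, the upper OR limit gives the lower VE limit. A sketch with a hypothetical OR and standard error chosen only to roughly mimic the second-dose estimate:

```python
from math import exp, log

def ve_with_ci(odds_ratio, se_log_or, z=1.96):
    """VE point estimate and 95% CI from an OR and the SE of log(OR)."""
    or_lo = exp(log(odds_ratio) - z * se_log_or)
    or_hi = exp(log(odds_ratio) + z * se_log_or)
    # The decreasing transform VE = 1 - OR swaps the interval endpoints.
    return 1 - odds_ratio, 1 - or_hi, 1 - or_lo

# Hypothetical inputs: OR = 0.349 with SE(log OR) = 0.27
ve, ve_lo, ve_hi = ve_with_ci(0.349, 0.27)
print(f"VE {ve:.1%} (95% CI {ve_lo:.1%} to {ve_hi:.1%})")
```

Note that an asymmetric VE interval (as in the estimates above) is expected, since symmetry holds on the log-OR scale, not the VE scale.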
We can hypothesize that this decrease might occur in part because of increasing Delta dominance and subsequent waning, and it reinforces the need for second-dose uptake. There is limited evidence for protection against infection. Our estimates are comparable to protection against symptomatic cases [15]. However, the low number of events among asymptomatic cases shifts the estimate of the combined analysis toward symptomatic cases. Additionally, there might be some residual confounding related to the reasons for being tested when asymptomatic. A detailed follow-up on asymptomatic cases could improve our understanding of VE against infection. Our study has limitations. We could not evaluate VE against COVID-19 severity. Although the test-negative design can deal with important confounding factors, such as health-seeking behaviour, we cannot rule out residual confounding from other factors (e.g. infection risk exposure) [12–14]. Finally, the estimates might not be generalizable to the entire population, because we analyzed only tested individuals [12–14]. ChAdOx1 was effective in reducing symptomatic COVID-19 in an overall young, socially vulnerable community in a group of favelas in Brazil, predominantly during Gamma/Delta variant circulation. New VoCs are likely to spread (e.g. Omicron); therefore, ChAdOx1 effectiveness should be re-evaluated.

Transparency declaration

This work is part of the Grand Challenges ICODA pilot initiative, delivered by Health Data Research UK and funded by the Bill & Melinda Gates Foundation and the Minderoo Foundation. This study was also supported by the National Council for Scientific and Technological Development, the Coordinating Agency for Advanced Training of Graduate Personnel (finance code 001), the Carlos Chagas Filho Foundation for Research Support of the State of Rio de Janeiro, and the Pontifical Catholic University of Rio de Janeiro. OTR is funded by a Sara Borrell grant from the Instituto de Salud Carlos III (CD19/00110).
All authors reported no conflicts. All authors conducted the research independently of the funding bodies. The findings and conclusions of this article reflect the opinions of the authors and not those of the funding bodies or other affiliations of the authors.

Author contributions

OTR, AABS, ITP, BBPA, JC, SH, and FAB conceptualized and participated in the design of the study. Conduct of the study and data collection were performed by OTR, AABS, ITP, BBPA, TWG, DRS, JC, SH, and FAB. Data analysis was performed by OTR, AABS, ITP, and BBPA. OTR drafted the initial manuscript, and all authors revised subsequent drafts. All authors read and approved the final manuscript for submission. OTR acknowledges support from the Spanish Ministry of Science and Innovation and State Research Agency through the "Centro de Excelencia Severo Ochoa 2019–2023" Program (CEX2018-000806-S) and from the Generalitat de Catalunya through the CERCA Program. In addition, the authors thank the Center for Healthcare Operations and Intelligence research group for their support.

Table 1 notes: VE, vaccine effectiveness. (a) Adjusted by day of the year of RT-qPCR testing (restricted cubic spline). (b) Adjusted by age (restricted cubic spline), sex, cardiovascular disease, respiratory disease, obesity, diabetes mellitus, immunosuppressed status (including cancer), liver disease, occupation, region of residence, self-reported race, reason for testing, and day of the year of RT-qPCR testing using a restricted cubic spline. The fully adjusted model for symptomatic and asymptomatic cases combined included a dummy variable for symptomatic/asymptomatic status. (c) The p-value for interaction was 0.03 for symptomatic cases and 0.04 for symptomatic and asymptomatic cases. The reference group for the vaccine effectiveness estimates was the unvaccinated.
Trema orientale (L.) Blume: A review of its taxonomy, traditional uses, phytochemistry, pharmacological activities and domestication potential

Trema orientale (L.) Blume is an important medicinal plant with multiple applications for treating several disease conditions. This study compiled published data on the botany, traditional uses, phytochemistry, pharmacology, and antimicrobial activity of T. orientale, and discusses its conservation and domestication potential. Data were sourced from databases such as Google Scholar, PubMed, Scopus, Elsevier, Plants of the World Online (Kew Science), the Global Biodiversity Information Facility (GBIF), and World Flora Online (WFO), using the key search terms Trema orientale (or orientalis), phytochemistry, pharmacology, taxonomy, and domestication, with Boolean operators to include and exclude articles for the review. The review indicated that molecular studies have shown T. orientale to be closely related to a sister group of Cannabis through plastome phylogenetic evidence, which accounts for its transfer from Ulmaceae to the family Cannabaceae. T. orientale is distributed across several African countries and has recently been assessed as Least Concern by the IUCN Red List of Threatened Species. Nevertheless, deforestation continues to pose an extinction risk to its populations. Currently, 31 compounds have been isolated from different parts of T. orientale, justifying many of the traditional uses attributed to it. T. orientale is considered a dose-dependent, safe remedy for the treatment of infectious diseases, cancer, cardiovascular diseases and other disease conditions ascribed to it, although continuous administration raises toxicity concerns. This review underscores the domestication potential of T. orientale, including evidence from molecular markers, soil seed banks, and promising germination experiments. This therefore presents significant gains toward the sustainable utilization of Trema orientale.
Introduction

The use of plants as medicine has been the backbone of mankind's existence on Earth. This is particularly important in most tropical countries, where rich plant biodiversity affords people the luxury of selecting from a plethora of plant resources with therapeutic potential. In developing countries, plant medicine is the primary medical remedy for many people, especially among rural populations, for treating and managing various disease conditions [1]. About 80 % of the world's population depends on herbal medicine because it is less costly and readily and easily accessible to consumers [2]. The application of plant medicine in primary healthcare has been heightened in the 21st century [3]. It is now more popular even among the elite, who hitherto doubted the efficacy and safety of plant medicine. This could be attributed to the integration of orthodox drug synthesis into the plant-medicine production system [4]. As a result, many chemical constituents with multiple medicinal potentials have been isolated from plants, confirming the medicinal potency of most plants [5–7]. In addition, in-vitro, ex-vivo [8–11], and clinical trials of some plants to ascertain their efficacies have been conducted [12,13]. The recognition by the World Health Organization (WHO) of the effectiveness of herbal remedies, and its subsequent support to countries to develop and improve herbal medicine, has also assuaged fears and myths associated with the use of plant medicine [14]. The literature reports on the medicinal potency of several plants for treating common ailments like cancer, malaria, fever, asthma, and diarrhoea [15–19]. Trema orientale (L.) Blume
is an evergreen tree in the family Cannabaceae. Its common names include charcoal tree, gunpowder tree, pigeon wood, hop out, Indian charcoal tree, and Indian nettle tree. It has a widespread distribution, particularly in the tropics and in some temperate zones. Almost all parts of the plant have some medicinal application [20]: the stem bark is used for the treatment of malaria [21], and the leaves are used as dewormers for hookworms and roundworms [22] and against inflammatory and respiratory diseases [23]. It thrives well in the humid tropics and is mostly sourced from the wild. Forest biodiversity is an important source of plant resources [24]. However, land-use changes, particularly the conversion of biodiversity-rich areas for farming, physical development, and mining, together with the attendant problem of climate change, pose a serious extinction risk to the sustainable utilization of plants in general and of T. orientale in particular. For instance, it was reported in Uganda that rapid habitat loss has exacerbated the already dwindling population sizes of T. orientale [25]. Although previous authors have widely reviewed the importance of T. orientale in terms of its phytoconstituents [26–28] and its mechanical and wood properties [29–31], there is a dearth of review data on its botany/diversity and domestication potential. Thus, this review sought to highlight the botany, availability, medicinal importance, and domestication potential of Trema orientale, in addition to its pharmacological activities and phytoconstituents. It exemplifies a comprehensive documentation of the available literature on T. orientale in contemporary times.
Methodology

Information for this review was gleaned from Google Scholar, Research Gate, PubMed, Science Direct, Elsevier, Taylor and Francis, and other online resources such as POWO and WFO. The keywords used in the search engines were Trema, taxonomy, phytochemistry, biological activity, traditional uses, and domestication of Trema orientalis. These keywords were used together with other terms relating to the subject of the review and were used interchangeably during the search process. In Google Scholar, Google, Research Gate, Scopus, Elsevier, Taylor and Francis, and PubMed, we searched only for peer-reviewed articles published between 2000 and 2022 with the keyword "Trema". Analysis of the search results revealed that most of the articles reported on the wood/mechanical properties of T. orientale. The next step was to search with Boolean operators (AND, OR, AND NOT, +) to combine or exclude the key terms directly relating to the purpose of this review in all the databases searched. Thus, we used the terms 'taxonomy', 'phytochemistry', 'biological activity', 'traditional uses', and 'domestication potential' of the genus Trema in the searches (Table 1). This excluded some of the documents, leaving articles mainly reporting on the phytochemistry and biological/pharmacological activities of the genus Trema, with little information on the botanical and conservation aspects. Further screening with specific emphasis on Trema orientale using the same keywords excluded more of the articles. We analyzed each of the remaining articles and summarized key points relating to phytochemistry and biological activity. We then resorted to the POWO, WFO Plant List, and GBIF databases to analyze the taxonomic data, using the keywords 'botanical description', 'traditional uses', and 'domestication of T. orientale' together with the Boolean operators (Table 1). All documents meeting the inclusion criteria were analyzed and included in the review.
Taxonomy of Trema orientale

Trema orientale, formerly Trema orientalis, in the kingdom Plantae falls under the taxonomic rank of vascular plants, phylum Tracheophyta, and the subdivision Spermatophytina (spermatophytes, seed plants, or phanérogames). It is categorized under the class Magnoliopsida and the order Rosales. Nested within the Rosales are four families, which in the past were variously delimited in an order Urticales. The Urticales, informally referred to as the urticalean rosids, comprise Ulmaceae, Moraceae, Urticaceae and Cannabaceae. These families are grouped based on shared characteristics, including taxa with small, unisexual, wind-pollinated flowers [32,33]. It is worth noting that the distinctive characteristics of these plant families have led to their worldwide recognition as culturally and economically important plants. T. orientale previously belonged to Ulmaceae but is currently an accepted species in Cannabaceae. Molecular studies have shown that T. orientale is closely related to a sister group of Cannabis through plastome phylogenetic evidence [34]. Further knowledge on the uses of some species, particularly in the genera Cannabis L., Celtis L., Trema Lour. and Humulus L., has been reported in previous studies [35,36]. They contribute immensely to addressing critical challenges facing humanity, such as healthcare, poverty and food security. For instance, Cannabis sativa is used as a bast-fibre plant (hemp) and as the euphoric medicinal plant marijuana [33,37]. The family Cannabaceae consists of 9 genera and 109 accepted species (https://powo.science.kew.org/; accessed on January 25, 2023). The family is composed predominantly of trees and shrubs but is also represented by herbs and vines. They are dioecious or sometimes monoecious; leaves are alternate, opposite, compound or sometimes simple. Other studies reported that Cannabaceae is currently recognized as a distinct family consisting of Cannabis L. and Humulus L.
which was initially separated from the family Moraceae by Rendle [34,37,38]. Molecular studies on Cannabaceae remain unresolved because of limited taxon and character sampling; however, some morphological characters can be used for identification [39].

Distribution, habitat and description of Trema orientale (L.) Blume

T. orientale is a large shrub or medium-sized tree found in tropical and subtropical wet regions. It has an extensive root system that enables it to survive long periods of drought [40]. Its height varies depending on location and climatic conditions. In general, it prefers sites that experience high rainfall, and the form it takes depends on the individual's access to water; thus, it can grow from 8 ft to 20 ft in height [41,42]. It has a short, basally swollen bole, a heavily branching, rounded to spreading crown, and slender branchlets covered with white velvety hairs (Fig. 1c). The stem bark is grey or dark brown, smooth, and marked with parallel longitudinal lines and corky spots (Fig. 1d). When slashed, it produces a creamy-white to light yellow, fibrous latex immediately beneath the bright green layer under the bark [23,43]. The leaves are long, simple, alternate, and stipulate along drooping branches, about 14 cm long; the stipules are three-nerved at the base and of unequal size (Fig.
1b). The leaf surface is papery, glabrous, rough to the touch and dull on the adaxial surface, with short grey hairs below. The edges are finely toothed all around, with an unequal-sided blade [43]. Leaf margins are closely serrated from near the base; juvenile leaves are rough and hairy, up to 15 × 9 cm, but become smooth when mature. The plant is pollinated by bees and flowers from December to February, producing cymes about 5-10 mm long [44]. Flowers are mostly male, with a few female (bisexual) flowers at the top; they are small, inconspicuous and greenish-cream. Bracts are 1 mm long and triangular; pedicels are short or absent. The fruits are small, round drupes that can be either green or dark purple but turn black when ripe [44] (Fig. 1a). T. orientale is a widely distributed tree species that remains abundant despite deforestation in some parts of its habitat. Its native habitat comprises tropical and subtropical regions, covering three continents and 65 countries, particularly in Africa and continental Asia [23] (Fig. 2). It has been introduced to Hawaii, Mauritius and Reunion. The species is not common across its whole range, but it has a large population. It is common in secondary forests, grows well in both heavy and light soils, and can also be established on flood-damaged river banks [40,46]. The species was most recently assessed for the IUCN Red List of Threatened Species in 2017 and is listed as Least Concern. The estimated Area of Occupancy (AOO) is 1560.000 km² and the estimated Extent of Occurrence (EOO) is 26,493,751.440 km² [45,46]. T. orientale is referred to differently in different locations (Table 2).

Traditional medicinal uses

Almost all the parts of T.
orientale have traditionally been used for the treatment of several diseases all over the world, especially in Africa. The leaves have reportedly been used for the treatment of coughs and sore throats, while the bark is used to make cough syrup. Other reported uses include remedies for asthma, bronchitis, gonorrhoea, malaria, yellow fever, toothaches, and intestinal worms [47]. Sometimes the leaves and fruits are processed into infusions and taken for conditions such as bronchitis, pneumonia, and pleurisy [48]. In Ghana, the leaves, alone or with the fruit, are crushed in lemon juice to enrich the recipe for cough treatment [49]. The stems and twigs have been useful in the management and treatment of respiratory disorders, fevers, toothache, and venereal diseases in West Africa [50]. The Ghana Herbal Pharmacopoeia reports its use as a dewormer for hookworm and roundworm treatment [22]. In Nigeria, T. orientale has also been used for restoring tired muscles and aching bones [50].

Phytoconstituents of T. orientale

The major groups of phytochemicals present in Trema orientale are tannins, flavonoids, saponins, cardiac glycosides, phytosterols, fatty acids, carbohydrates, iridoids, xanthones and phenolic compounds [33,52,53]. Some compounds isolated from various parts of the plant, and their structures, are listed in Table 3 below.

Pharmacological activities/studies

There have been several studies over the years on Trema orientale, including in-vitro and in-vivo studies using different extraction methods and/or parts of the plant to establish its pharmacological effects. These reports include anti-inflammatory, antimicrobial, antidiabetic, antioxidant, and antidepressant activities, among others. The main pharmacological studies on T. orientale are abridged and highlighted below [23,33].
Anthelmintic activity

The antiparasitic effects of crude hexane, ethanol and water extracts of Trema orientale bark and wood against Caenorhabditis elegans have been reported. The research confirmed that concentrations higher than 1 mg/mL of the extracts have inhibitory activity against C. elegans within the first 2 h of exposure and lethal activity after 7 days of exposure [32]. Similarly, an in-vitro anthelmintic evaluation of a 70 % ethanolic leaf extract of T. orientalis against nematode larvae of sheep and goats was conducted. To determine nematode larval mortality, larvae were isolated and counted from the faeces of the ruminants using the Baermann set-up/technique and exposed to different concentrations of the extract. The results showed that the concentrations tested had anthelmintic activity against nematode larvae [68].

Cardioprotective activity

An in-vivo study of the antihypertensive effect of ethanol extracts of Trema orientale was conducted in male Wistar albino rats. A crude ethanolic extract of the plant was obtained from its leafy stem. N(G)-nitro-L-arginine methyl ester (L-NAME) was administered to the rats two weeks before the commencement of treatment to induce hypertension. The animals were monitored for four weeks during the treatment period, and their blood pressure (BP) was recorded using a non-invasive BP system. At 500 mg/kg body weight (bw), T. orientale reduced mean arterial pressure from 154.8 ± 7.84 to 103 ± 5.6 mmHg [69]. Likewise, it is indicated that T.
orientale is a potential antihyperlipidemic agent, reducing the risk of cardiovascular diseases. This study was carried out in Wistar albino rats by inducing hyperlipidaemia in the rat models with 25 % fructose for 15 days and with Triton, then treating them with an ethanolic leaf extract of the plant. Blood samples were collected from the animals for biochemical analysis, including but not limited to total cholesterol, high-density lipoprotein cholesterol and low-density lipoprotein cholesterol. The biochemical tests and histopathology studies revealed that the plant has hypolipidemic activity [70].

Anticancer activity

A recent study assessed the anticancer effect of T. orientale aerial parts using methanol for the extraction process. An in-vitro cytotoxicity assay was carried out in eight human cancer cell lines and normal fibroblast cells: HCT116 (colorectal carcinoma), A2780 (ovary adenocarcinoma), MRC5 (normal fetal lung fibroblast), MCF7 (breast adenocarcinoma), HT29 (colon adenocarcinoma), HepG2 (liver cancer), TK10 (kidney renal cell adenocarcinoma), MDA231 (breast cancer), and PC3 (prostate cancer). The leaf extract was highly toxic to HCT116 cells at 2.256 ± 0.85 μg/mL compared with the other parts, and a clonogenicity assay was therefore conducted to determine the growth-inhibitory effects of the plant. The results showed that HCT116 colonies declined significantly, by about 98 % at 7.5 μg/mL after fourteen days of incubation, exhibiting the strongest clonogenic inhibition [71]. Similarly, Kabir et al. [72] studied the anticancer potential of T.
orientale methanolic leaf extracts in Swiss albino mice against Ehrlich ascites carcinoma (EAC). The animals were transplanted with EAC and treated for six days with the crude extract; intraperitoneal EAC cells were then collected, harvested and counted with a haemocytometer. The LD50 of the extract was 3120.650 mg/kg bw, proving its safety at dosages as high as 800 mg/kg bw. The in-vivo evaluation at 400 mg/kg bw showed that the extract inhibited nearly 59 % of tumour cell growth in comparison with the control group, with considerable apoptotic characteristics. To validate this outcome, an in-vitro test using the 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide (MTT) assay was also conducted. In the MTT assay the plant extract was cytotoxic, with an IC50 value of 29.952 ± 1.816 μg/mL, demonstrating that the plant has an anticancer effect.

Antioxidant activity

The FRC assay was established to have the best activity, with 6007.8 ± 175.57 μmol AAE g-1. The scavenging activities against H2O2 and superoxide anion ranged between 88.52 ± 0.68 % and 91.33 ± 4.01 %, while DPPH also showed promising inhibitory activity of 83.86 ± 1.88 and 99.46 ± 0.38 when compared to ascorbic acid. Likewise, methanolic extracts of T. orientale leaf, root, stem and aerial parts were evaluated for antioxidant activity using a DPPH assay [74]. No scavenging activity was noted for the aerial parts, while the IC50 values of the leaf, root and stem extracts and of ascorbic acid were 13.1 ± 0.2, 7.4 ± 0.2, 6.4 ± 0.2 and 1.01 ± 0.4 μg/mL, respectively. Following the same approach with the DPPH assay, Saleh et al. [75] determined the antioxidant activity of an ethanolic extract of T. orientale leaves. The IC50 of the free-radical scavenging activity was 9.27 μg/mL, confirming the antioxidant potential of the plant. The same analysis was conducted for its aqueous extract, which did not show antioxidant activity. Recently, Al-Robai et al.
[71] used DPPH and 2,2′-azinobis-(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) assays to establish the antioxidant activities of a methanolic extract of T. orientale aerial parts, using ascorbic acid, propyl gallate, and Trolox as positive controls. All the extracts exhibited free-radical scavenging activities in both methods. The DPPH method recorded a scavenging activity of 79.6 % and ABTS recorded 84.43 % for the leaf extract; ascorbic acid, propyl gallate, and Trolox recorded 92.86 %, 82.30 % and 78.72 %, respectively, in the ABTS method. This shows that the leaf extract was more effective than propyl gallate and Trolox. Although there was a significant difference between the DPPH and ABTS assay results, the values indicated that the leaves had the strongest activity, followed by the bark, twigs and fruits, in that order.

Antimicrobial activity

A study in India using 15 isolates of Staphylococcus aureus indicated that an acetone extract of T. orientalis inhibits the growth of the organism, with a MIC value of 3.7 mg/mL and a biofilm inhibitory concentration of 2.7 mg/mL [76]. It is also indicated that a methanolic extract from T. orientale leaves has an antibacterial effect against some Gram-positive and Gram-negative bacteria. The mean zone of inhibition for Bacillus subtilis, Micrococcus luteus, Escherichia coli, Salmonella enterica serotype Typhi, Shigella dysenteriae, Klebsiella pneumoniae and Pseudomonas vulgaris was 11.20 ± 0.81 mm, with S. Typhi and S. dysenteriae being more susceptible. However, this study showed that Xanthomonas campestris and Pseudomonas denitrificans were resistant to the extract [77]. A study was conducted to determine the inhibitory activity against Salmonella spp. of wet granulated granules formulated with a methanolic leaf extract of T.
orientalis as the active ingredient. Excipients used in the formulation included sucrose, polyvinylpyrrolidone, sorbitol, lactose and water, and the granules were tested at concentrations of 0.25 mg/200 g, 12.5 mg/200 g and 6.5 mg/200 g, to all three of which the organism was susceptible [78]. This finding is consistent with a study by Rahman et al. [77] conducted on S. Typhi. Similarly, Napiroon et al. [37] sampled T. orientalis from various regions of Thailand to prepare lipophilic extracts and fractions and to evaluate the presence of cannabinoids and their antimicrobial activities. The extracts inhibited Staphylococcus aureus, Pseudomonas aeruginosa, and Acinetobacter baumannii, with minimum inhibitory concentrations (MICs) ranging between 31.25 and 125 μg/mL. Likewise, the antimicrobial efficacy of T. orientalis aerial parts was assessed using methanolic crude extracts of the leaves, fruits, bark and twigs against two Gram-positive bacteria, two Gram-negative bacteria and an opportunistic yeast. Using E. coli, K. pneumoniae, S. aureus, Enterococcus faecalis and Candida albicans as the test organisms, the results indicated inhibitory activity of the leaves against only C. albicans, while the bacteria were resistant to all parts of the plant [71]. This finding contradicts other studies which reported that S. aureus was susceptible to the plant, which could be attributed to the different extraction methods used [37,77]. The susceptibility patterns of S. aureus, P. aeruginosa, E. coli, S. Typhi, S. dysenteriae, Streptococcus faecalis, Proteus mirabilis, haemolytic Streptococcus viridans, Aspergillus niger, C. albicans and Aspergillus flavus to methanol and petroleum ether extracts of T. orientale leaves were also investigated [79]. Salmonella Typhi was the most susceptible organism against both extracts, confirming other studies [77,78]. Nevertheless, S. aureus and S.
dysenteriae exhibited resistance to the extracts, which differs from other reports [37,76,77] but confirms the findings of Al-Robai et al. [71]. These inconsistencies could be attributed to differences in inoculum size, extract concentrations, methods used and the geographical area of the plant, as described by Eloff [80].

Safety and toxicity studies

It is indicated that the methanolic leaf extract of T. orientalis is less toxic to cells, with an LC50 of 170.2 μg/mL, when compared to the vincristine sulphate standard, which has an LC50 of 2.5 μg/mL, using the brine shrimp lethality bioassay [77]. In-vivo research was undertaken to explore how methanolic leaf extract of T. orientale affects the liver of Wistar rats. Cadmium chloride was administered to the animals, followed by T. orientale extract at dosages of 100 and 200 mg/kg bw, and the animals were observed for 21 days. Blood and tissue samples were collected from the rats for biochemical tests and histopathology studies. Total protein and alanine aminotransferase (ALT) in rats treated with cadmium chloride plus the extract were not significantly different from the control group. Histology studies revealed mild toxicity in the liver tissues when the extract was continuously administered, raising important concerns about dosage [81]. Acute and sub-acute toxicity studies in Wistar albino rats were conducted using a methanolic extract of T.
orientale aerial parts. Doses of 1, 2, 3, and 4 g/kg were used for the acute toxicology, and animals were observed for 14 days. For the sub-acute toxicity study, the animals were administered 0.25, 0.5 and 1 g/kg bw of the extract, observed for 28 days and sacrificed for blood and tissue examinations. No deaths, behavioural changes or signs of toxicity were observed during the acute toxicity study, indicating a lethal dose (LD50) greater than 2 g/kg. Alkaline phosphatase, total protein and albumin increased significantly in the groups administered the higher doses [82]. The authors concluded that the plant is safe at lower concentrations, which agrees with the study above [81]. In a similar study, it was established that T. orientale methanolic leaf extract doses of 100 and 200 mg/kg were toxic after continuous administration to Wistar rats. An increase in haematology test parameters was reported both in rats treated with the extract alone and in animals administered both the extract and cadmium. The study also showed that the leaves of T. orientalis have both haematoprotective and haematopoietic properties, which could support patients with anaemia [83].

Domestication potential

Domestication is the introduction of an organism into a new environment other than its natural habitat with conducive conditions
for its survival [84]. The domestication process largely focuses on morphological and genetic assessment of a species against its existing wild relatives [85], and fundamentally aims to improve its suitability and the perpetual delivery of its benefits for human use, especially in yield, taste, cultivation practices, and storage [86]. Among all considerations, the identification of individuals with desirable traits is foremost in selecting species for domestication [87,88], alongside targeting species with improved palatability, increased productivity of harvested parts, and ease of growth and harvest on farmlands [89]. Consequently, an important task is to focus on qualitative traits such as tree form, fruit shape, and sweeter pulp [90], which can result in 60 % of the progeny resembling the mother tree, especially in terms of productivity and ease of cultivation [91]. An equally important factor for successful domestication is the environment and the condition of the seeds. Studies have indicated a significant presence of T. orientale in soil seed banks in different forest ecosystems [92]. Likewise, studies by Hall & Swaine [93] reported on the abundance of T. orientale in Ghanaian forests. In Thailand, viable seeds of T. orientale were found in soil seed banks 175 m from the mother tree [94]. Yet only a small fraction may germinate under favourable conditions, owing to the dormant nature of the seeds [95,96]. Although several species of birds aid in the seed dispersal of T. orientale, fleshy ripe fruits rarely germinate and mostly decay, especially in storage [97–99], hence requiring intervention to preserve the seeds and accelerate the germination process. A recent study by Nugraheni and Yuniarti [100] recommended that the optimum condition to maintain seed viability for at least one month and to speed germination of T.
orientale is to dry it for 24 hr and put it in an air-tight aluminium foil container in an air-conditioned room. Similarly, the highest growth uniformity of 74.75% and 72% before storage and after storage respectively was obtained by subjecting T. orientale seeds to a hydrated-dehydrated procedure [99]. Studies by Rodrigues and Rodrigues [101] indicated the potential of enhancing the seeds of T. orientale through different pre-treatment options. Among other pre-sowing treatments, the authors indicated that soaking and depulping the seeds in concentrated H2SO4 for 2 hr and 15 min respectively yielded the highest germination and healthy plantlets. The prospect of vegetative propagation of T. orientale has also been indicated in the literature. It is reported that rooted cuttings treated with 300 ppm of naphthalene acetic acid exhibited the highest percentage of germination when planted in sandy soil [102]. These outcomes significantly contribute to the sustainable propagation and utilization of T. orientale.

Conclusions

This review documents the traditional uses, taxonomy, domestication potential, pharmacological activities and phytoconstituents of T. orientale in a bid to highlight its medicinal importance. T. orientale forms an important component of the herbal remedy of many people in Africa due to its multiple applications. This has been attributed to its rapid decline coupled with its germination difficulties. Nonetheless, it is established that there is enormous potential to improve seed quality through molecular markers and germination trials (pre-sowing treatments). This review demonstrates the phytochemical and medical properties of T.
orientale and establishes that, at safe dosages, it may be used as a potent herbal remedy for a wide range of health conditions like infectious diseases, cancer and cardiovascular diseases. Phytochemical studies show the presence of phytosterols, flavonoids, tannins, saponins, cardiac glycosides, fatty acids, and phenolic compounds as secondary metabolites. Some compounds have however been isolated from the plant, and there is a need for further studies to specify the compounds accounting for these activities, with their pharmacodynamics and pharmacokinetics.

Table 1 Key search criteria.

Table 3 Structures of compounds isolated from the various parts of T. orientalis.
Successful Esophageal Replacement Surgery in a 3-Year Old with Post-corrosive Esophageal Stricture

Accidental caustic ingestion in children, though entirely preventable, continues to be present in developing countries. Gastrointestinal injuries following caustic ingestion in children range from mild to fatal. Presentation of such children to the medical facility could be early or sometimes late with complications. Management is based on the type of injury and could range from medical conservative management to complex surgical procedures. Such complex surgeries are almost unavailable in developing countries. We present a 3-year old who presented to our facility with an esophageal stricture following accidental caustic ingestion four months prior to presentation. He had a failed stricture dilatation and needed to be managed surgically; he subsequently had a good outcome, which is rare in developing countries.

Keywords: Post-corrosive esophageal stricture, Esophageal replacement surgery

Ann Afr Surg. 2020; 17 (2)

Introduction

Accidental caustic ingestion in children is a worldwide problem (1), but most of the cases are unreported and the true incidence of the condition is not known (2). A wide range of gastrointestinal injuries result following ingestion of caustic substances; the degree of injury depends on the nature, concentration and quantity of the caustic substance ingested. Esophageal stricture is one of the complications in children who present late. These strictures are managed by endoscopic dilatation with a balloon or using rigid Savary bougies. If these fail, surgery in the form of esophageal replacement using stomach or colonic interposition should be considered. Mortality and morbidity following surgery are low in expert hands (3). Late reconstructive surgeries in children with a reportable good outcome are rare in developing countries.

Case report

A 3-year-old boy was suspected to have swallowed a corrosive substance four months prior to admission. Following consumption, he was irritable and had increased secretions and coughing. He had been admitted twice in a local medical facility, where he was stabilized and treated with intravenous fluids and antibiotics for the oral sores. A chest x-ray at that point was unremarkable. Endoscopy showed erosions of the esophagus and the stomach. Upon discharge he developed progressive dysphagia. He was started on proton pump inhibitors for 2 weeks, with no improvement. A barium swallow showed features suggestive of spastic esophagitis. A repeat barium swallow three months later showed features of post-corrosive esophagitis with a tight esophageal stricture 7 cm along the distal esophagus. The child was referred to our facility, Gertrude's Children's Hospital. He underwent esophageal dilatation, following which he developed esophageal rupture. The child had increasing respiratory distress by now. A chest x-ray showed subcutaneous emphysema in the neck, pneumomediastinum and bilateral pleural effusion (Fig. 1). Barium studies confirmed features suggestive of esophageal rupture (Fig. 2). By this time, the child needed oxygen via nasal prongs and inotropes to support the blood pressure. Considering the general condition of the patient, the decision was made for esophageal diversion surgery and gastrostomy. A left cervical end esophagostomy was performed and the esophagogastric junction was banded twice with 6-mm nylon tape at the time the gastrostomy tube was inserted. The child tolerated the procedure well and was stabilized in the intensive care unit. Feeds were initiated through the gastrostomy and the child was subsequently discharged. Three months later, the child was reviewed in readiness for esophageal replacement using the colon as a conduit. Preoperative bowel preparation was done.
Under general anesthesia, the cervical esophagostomy was mobilized via the old neck incision (Fig. 3). Through a laparotomy, the ascending, transverse and descending colon were mobilized (Fig. 4); most of the transverse colon and the distal 4 cm of the ascending colon were used as the conduit (Fig. 5) on a left colonic artery pedicle, and the retrosternal space was developed bluntly from above and below (Fig. 6). The closed end of the colon was delivered into the neck (isoperistaltic) via the retrosternal space; the esophagus was spatulated along the anterior border then anastomosed end-to-side to the posterior aspect of the colon (Fig. 7). A nasogastric tube was passed into the stomach and secured, without changing the gastrostomy site. The child was again received at the intensive care unit intubated. Feeds were initiated through the nasogastric tube, initially as continuous feeds then slowly changed to bolus feeds. Later the nasogastric tube was removed and the boy was allowed to feed orally and was discharged. The child's progress was followed up four months after the surgery. He was feeding and growing well. He will be kept under regular follow-up.

Discussion

The incidence of accidental caustic ingestion in children in developing countries is high, because prevention is lacking. The problem is largely unreported in these settings (2). The average age of children with accidental caustic ingestion in Africa was 3.07±2.02 years (range 1.8-5.4 years), 75 males (0.91%) and 19 females (0.23%), with an F/M ratio of 1:4. In Africa, of children exposed to caustic substances, 33% of accidents were caused by alkaline agents and 33% by acidic agents (4). Acids cause coagulation necrosis, with eschar formation that may limit substance penetration and injury depth. Alkalis combine with tissue proteins and cause liquefactive necrosis and saponification, and penetrate deeper into tissues, causing extensive tissue damage. The degree of a corrosive lesion depends on the nature, concentration, and quantity of the caustic substance ingested.
Determining the severity of damage following caustic substance ingestion is one of the most important initial steps for treatment and for preventing complications. The symptoms may not reflect the severity of the injury. Early manifestations are nausea, abdominal pain, vomiting, and dyspnea. Later manifestations include fever, tachycardia, tachypnea, and dysphagia. The presence of three or more symptoms is an important predictor of severe esophageal lesions (5). In children, 18% to 46% of all caustic ingestions are associated with esophageal burns. The incidence of coexistent gastric injury in the literature ranges from 20% to as high as 62.5% (6). Esophageal stricture is considered a short-term effect, but esophageal perforation, esophageal obstruction and cancer could be some of the long-term effects of ingesting caustic agents. In the initial management, laboratory studies like total white cell counts and C-reactive protein are useful in monitoring and guiding patient management. Shortly after ingestion, a plain chest radiograph may reveal pneumomediastinum, suggesting esophageal perforation, as well as free air under the diaphragm, indicating gastric perforation. Hypaque or gastrografin studies are useful options to confirm the above. A computerized tomography scan likely offers a more detailed evaluation than early endoscopy of the transmural damage of the esophageal and gastric walls and the extent of necrosis (7). Endoscopy is important in the management of caustic ingestion. Every child with suspected caustic ingestion and symptoms/signs (any oral lesions, vomiting, drooling, dysphagia, hematemesis, dyspnea, abdominal pain, etc.) should have an endoscopy within 24 hours of ingestion to identify all the consequent digestive tract lesions. The risk of severe damage increases proportionally with the number of signs and symptoms, and an endoscopy is always mandatory in symptomatic patients.
It can be withheld if the child is asymptomatic and if adequate follow-up is assured. Findings on endoscopy are graded as follows:

Grade 0 - a normal mucosa
Grade 1 - only slight swelling and redness of the mucosa
Grade 2A - presence of superficial ulcers, bleeding, and exudate
Grade 2B - local or encircling deep ulceration
Grade 3A - focal necrosis
Grade 3B - extensive necrosis

Most patients with grade 1 or 2A injuries have a good prognosis (8). In the medical line of management, the use of steroids has been controversial. Different steroids (dexamethasone, prednisolone, methylprednisolone) are used at different doses and through different routes of administration (oral, intravenous) from 7 days to 4-6 weeks in children with corrosive esophagitis. High-dose methylprednisolone used to manage grade 2B esophageal burns may reduce stricture development (9). Intralesional steroid injections increase the efficacy of bougie dilation and decrease the need to repeat it (10). Administration of broad-spectrum antibiotics is usually advised mainly if corticosteroids are initiated, as well as if lung involvement is identified. Mitomycin C, a chemotherapeutic agent with DNA crosslinking activity, when injected or applied topically to the esophageal mucosa, significantly reduced the number of dilatation sessions needed to alleviate dysphagia in patients with caustic esophageal strictures. However, long-term follow-up is needed to prove its efficacy and to evaluate potential long-term side effects (11). Surgical intervention is indicated when attempted dilatation either causes esophageal perforation or fails to relieve stenosis. As esophageal strictures caused by corrosive injury are usually long, dilatation often is unsuccessful and some form of esophageal replacement surgery will be required. When surgery is indicated, the best solution is usually emergent (or urgent) external diversion of the esophagus via a cervical stoma (esophagostomy), and the insertion of a gastrostomy tube for feeding.
Attempts to repair perforations or replace the esophagus at this time are ill advised as the patient usually is not in an optimal condition. With esophageal diversion, a useful procedure, particularly in sick patients, is to band (tie-off) the gastro-esophageal junction (twice) with 3-mm nylon tape (as opposed to transecting it). The tape is tied tight enough to occlude the esophageal lumen but not to totally occlude blood supply to the esophageal wall. With esophageal replacement, several alternatives are available that offer the chance of a good quality of life for a prolonged period; none, however, are as good as the native esophagus. Replacement surgery should be performed once a patient's nutritional status has returned to normal (at least 3 months after starting gastrostomy feeds). Replacement conduits can be fashioned from the stomach, colon or jejunum. Available evidence shows that gastric (that uses the whole stomach) and colonic conduits are the most favored (12). Colonic conduits are more complex to fashion than gastric conduits but have less reflux. In addition, colonic conduits occupy less space in the chest than gastric conduits and so are associated with less respiratory complications. Thus, the colon is considered the optimal conduit (12). It may be placed retrosternally or in the posterior mediastinum. It is usually placed in an isoperistaltic orientation, and an antireflux procedure is not generally indicated when it is placed retrosternally. We normally place a nasogastric tube intraoperatively and aim to retain the feeding gastrostomy for 3 months postoperatively (we remove it once we are satisfied with the quality of swallowing). In the immediate postoperative period, we do not feed the patient for 7 days, and then we perform a water-soluble contrast study on day 8. If there is no leak, we commence oral feed. Cervical leak is quite common in the immediate postoperative period; most leaks are small and will heal spontaneously (12). 
We keep the patient 'nil per oral' until cervical leaks are healed, but we feed the patient through the gastrostomy tube. We keep the nasogastric tube in situ until the cervical leak has healed.

Conclusion

Esophageal replacement offers a robust solution to dysphagia resulting from caustic injury to the esophagus. The procedure can be performed with a relatively low risk of early mortality. The isoperistaltic colon is probably the best choice of conduit in children.
A Modified GAN for Compressed Sensing MRI

Magnetic resonance imaging is a commonly used diagnostic method in medicine. Most reconstruction methods are based on compressed sensing theory, but these are inefficient and time-consuming. In recent years, deep neural networks have developed rapidly, and the GAN architecture has been widely used in various image tasks since its publication. This paper proposes a new MRI reconstruction method, RISEGAN. The generator uses a U-Net structure to extract multi-scale features in the down-sampling and up-sampling modules. Combined with residual learning and squeeze-excitation blocks, the mapping between the under-sampled and the fully sampled image is established. Experimental results show that our method can produce reconstructions of high quality, and the evaluation indicators also improve.

Introduction

Magnetic Resonance Imaging (MRI) is one of the most commonly used techniques in modern clinical medical imaging diagnosis. It is accurate, non-invasive and harmless to the human body. It can detect precise biological information and is used for many organs such as the brain and soft tissue. Although MRI has many advantages, its scanning speed is slow. In order to obtain a clearer image, sufficient scan time is required. During the scan, patients have to lie down and be still, which leads to discomfort and motion artifacts, and the utilization efficiency of the equipment is also limited. In order to optimize the scan time and computational cost, we hope to use less observation data to reconstruct the image as accurately as possible. The theory of compressed sensing (CS) was first proposed by Candes et al. in 2006 [1], providing a new method of signal sampling. Compared with the traditional Nyquist sampling theorem, compressed sensing can recover sparse signals better and improve sampling efficiency. Many MRI methods are therefore based on CS.
The concept of Sparse MRI was proposed by Lustig et al. [2], who studied sparse transforms, non-coherent sampling, reconstruction models and algorithms for MRI. In MRI based on compressed sensing, the reconstruction model and the optimization algorithm are particularly important aspects that determine the reconstruction time and quality. In Sparse MRI, the wavelet transform is used as the sparse basis; scholars then successively used the Dual-Tree Complex Wavelet (DTCWT) [3], Double-Density Complex Wavelet [4], Curvelets [5] and Total Variation (TV) [6] as sparse bases for reconstruction. A fixed sparse basis constrains the image but usually cannot make full use of the image's sparsity, resulting in poor image quality. After that, many researchers began to study adaptive sparse representations, represented by dictionary learning [7]. However, dictionary learning needs to learn from a class of images, which usually takes a lot of time and reduces reconstruction speed. In order to reconstruct MR images faster and better, the optimization algorithm also plays an important role. In Sparse MRI, the Nonlinear Conjugate Gradient Descent algorithm (NLCG) is used to solve the optimization problem. This algorithm is slow and computationally expensive. Later, the Iterative Shrinkage Thresholding Algorithm (ISTA) [8] was proposed, which has a simpler structure and lower complexity. On the basis of ISTA, the Split Augmented Lagrangian Shrinkage Algorithm (SALSA) [9] was derived. The original unconstrained optimization problem is transformed into a constrained optimization problem by variable separation [10]. The alternating direction method of multipliers (ADMM) is also used [11]. This method was later used to deal with image problems based on TV constraints; representative algorithms are Fast Total Variation Deconvolution (FTVD) [12] and Reconstruction from Partial Fourier (RecPF). Traditional MRI methods have the following problems.
Firstly, the widely used fixed sparse basis may not capture complex tissue structure well, affecting the imaging quality. Secondly, it may take many iterative steps to optimize the objective function, which is often time-consuming. Finally, most algorithms pursue higher indicators, which usually leads to excessively smooth images and affects the visual perception of the image. With the development of deep learning, neural networks have been used to solve various image problems including classification, segmentation, object detection and other fields. Recently, deep learning has been introduced into CS-MRI to address the limitations of traditional methods. Wang et al. [13] first used deep learning in CS-MRI, using a convolutional neural network (CNN) to establish the mapping between the zero-filled image and the fully sampled image. ADMM-Net [14] defined a data-flow graph absorbing the idea of ADMM to optimize the CS-based MRI model. Ian Goodfellow et al. proposed Generative Adversarial Networks (GAN) in 2014 [15]. Subsequently, the GAN architecture has been widely used in various image tasks and has achieved breakthrough results. DAGAN [16], from Imperial College London, uses the idea of GAN for MRI; the generator network uses the U-Net architecture for reconstruction. Subsequently, GANCS was proposed [17]; Mardani et al. fused LSGAN and CycleGAN together. The real and imaginary parts of the k-space data are fed in two channels to train the network simultaneously. Ronneberger et al. proposed the U-Net architecture [18], with its multi-resolution characteristics; it is widely used in biomedical image processing. The main characteristic of U-Net is the use of cascaded down-sampling layers to obtain exponentially growing receptive fields and integrate multi-scale details. Residual learning was proposed by Kaiming He et al. to solve the problem of vanishing gradients in very deep networks [19].
It can be used in all kinds of networks because it accelerates their convergence. The Inception module was proposed in GoogleNet [20]; it uses several parallel convolution layers to obtain feature maps of different resolutions and increase the width of the network at the same time. In the Squeeze-Excitation block proposed by Hu et al., an attention mechanism [21] is introduced to explicitly model the interdependence between convolutional feature channels and improve the representation quality of the network. The network performs a recalibration of feature channels and uses global information to selectively emphasize strongly informative features and suppress less useful ones. RISEGAN (Residual-Inception-SEblock-GAN) proposed in this paper has the following innovations: (1) SE-blocks are used in the generator network to apply an attention mechanism to each level of features and recalibrate the information between channels. (2) In the down-sampling process, several convolution kernels of different sizes are used to improve feature extraction under different receptive field sizes, which improves the quality of details in the images.

GAN network

The GAN framework has been utilized in various fields since it was proposed in 2014. It consists of a generator G and a discriminator D. The target of G is to map the hidden variable z to a given distribution of real data to deceive the discriminator D. The goal of the discriminator D is to distinguish the real data x from the fake data generated by G. In this adversarial game, the generator and discriminator train each other. The training process of GAN can be expressed as equation (1):

min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 - D(G(z)))]    (1)

where p_data(x) is the distribution of real data and p_z(z) is the distribution of hidden variables. In this paper, the G network is used to establish the mapping between the under-sampled image and the fully sampled image. The D network is used to determine whether the input image is the result of the G network or a fully sampled image.
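As a quick sanity check on the minimax objective in equation (1), the value function can be evaluated for given discriminator outputs. This is a small numpy sketch; the function name and the toy inputs are ours, not the paper's:

```python
import numpy as np

def gan_value(d_real, d_fake):
    # V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))], estimated over
    # batches of discriminator outputs on real and generated samples.
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# When G fools D completely, D outputs 0.5 everywhere and
# V = -2 log 2, the value at the equilibrium of the game.
v_fooled = gan_value(np.full(16, 0.5), np.full(16, 0.5))
assert np.isclose(v_fooled, -2.0 * np.log(2.0))

# A confident discriminator (D -> 1 on real, -> 0 on fake) raises V.
assert gan_value(np.full(16, 0.99), np.full(16, 0.01)) > v_fooled
```

This mirrors why D tries to maximize V while G tries to minimize it.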
Inception

Inception is an architecture proposed in GoogleNet. This structure uses convolution kernels of different sizes to extract parallel features at the same time, then concatenates the channels to achieve multi-scale feature fusion. This method not only increases the width of the network but also fuses the features of different receptive fields. In MRI reconstruction, the main problem is the restoration of detailed features, and most MR images contain detailed information at multiple scales. So we incorporate the Inception structure into the network to improve the reconstruction quality. In the experiments, 3 * 3, 5 * 5 and 7 * 7 convolution kernels are used to extract feature information at different scales, so that structural information can be captured better in the down-sampling process.

Residual learning

Residual learning was proposed by He et al. It can be used in deeper networks to solve the problem of vanishing gradients during training, which makes it possible to train deeper networks and avoid overfitting. Shortcuts are used to add connections between network layers and avoid network degradation. They also make the network learn the residual instead of the whole mapping relationship, so the network is easier to train and converges faster. In this paper, we make a long residual connection between the input and the output, so that the model can learn the difference between the under-sampled image and the fully sampled image; it only needs to learn the missing part of the image features. In addition, short residual connections are used in the feature extraction stages of each down-sampling block to reduce training difficulty and improve accuracy.

U-Net

U-Net was first proposed by Olaf Ronneberger et al. The network structure includes a contracting path for capturing context and a symmetric expanding path for precise localization. It performs well in medical image segmentation.
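The long residual connection described above amounts to output = input + learned correction, so the branch only has to model what is missing. A minimal numpy illustration, with toy functions standing in for the paper's convolutional layers:

```python
import numpy as np

def residual_block(x, branch):
    # The branch models only the residual (the "missing part"),
    # not the whole input-to-output mapping.
    return x + branch(x)

rng = np.random.default_rng(0)
x = rng.standard_normal(16)        # stand-in for an under-sampled image
target = x + 0.05 * np.sin(x)      # fully sampled image = input + detail

# If the branch learns only the small correction, the block hits the target.
out = residual_block(x, lambda v: 0.05 * np.sin(v))
assert np.allclose(out, target)
```

The same identity-plus-correction structure is what makes the residual mapping easier to learn than the full mapping.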
Because of its structural characteristics, it also performs well in image super-resolution, inpainting and other tasks. In the down-sampling process, the feature maps of each scale are connected to the corresponding level in the up-sampling path. These skip connections directly pass the features of the encoders to the decoders; the decoder fuses the features and obtains more precise results. We therefore choose the U-Net structure as the generator.

SE Block

The Squeeze-Excitation block uses an attention mechanism to allocate computing resources to the most useful parts of the features. It improves the quality of the network representation by explicitly modeling the interdependence between convolutional feature channels. This mechanism allows the network to perform a recalibration of feature channels and learn to use global information to selectively emphasize useful features and suppress useless ones. An SE block consists of three parts (figure 1). The significance of a single channel is described by a global average pooling layer; then channel importance is extracted by a bottleneck structure; finally, the weights are multiplied with the original channels. This block can be integrated into various networks and improves the model. Although a small number of extra parameters are used, the cost is acceptable compared with the improvement. In this paper, an SE block is added before the output of each up-sampling and down-sampling block to adjust the features at that level.

Traditional MRI method

Suppose x is a vector stacked from a 2-dimensional complex MR image of length N. The problem of traditional compressed sensing MRI is to recover the vector x from an under-sampled vector y in k-space [22]. x and y satisfy y = F_u x, where F_u is an under-sampled Fourier transform matrix. Because this is an ill-conditioned problem, we must use prior information to recover x. The problem can be transformed into the following optimization problem, i.e. equation (2):

min_x ‖F_u x - y‖_2^2 + λ R(x)    (2)

where R(x) represents the regularization constraint for x and λ is a regularization coefficient. In compressed sensing, R(x) is usually the ℓ1 norm of x in a sparse domain. The traditional solution needs a suitable optimization algorithm for iteration.

Reconstruction with deep learning

In recent years, with the development of deep learning, methods using deep neural networks to reconstruct the image have appeared [23]. A deep network can be used to directly establish the mapping between the under-sampled zero-filled image and the fully sampled reconstructed image. This can be done by optimizing the objective function in equation (3):

min_θ Σ_i ‖f(x_u^(i); θ) - x^(i)‖_2^2    (3)

where θ is the parameter of the neural network, trained by a gradient descent algorithm, and f maps the zero-filled image x_u to the reconstruction.

Network structure

The whole structure of the networks used in this paper is shown in figure 2. The generator is a modified U-Net. On the left side there are cascaded down-sampling blocks, and on the right side are the symmetric up-sampling blocks. Skip connections and concatenation connect feature maps at the same scale. A skip connection is also used between input and output to learn the missing information. The structure of the i-th down-sampling block is shown in figure 3. In the down-sampling blocks, the output of the (i-1)-th block is followed by three parallel convolution layers of 3 * 3, 5 * 5 and 7 * 7 kernel size, with strides=2 for down-sampling. The three outputs are then concatenated, similar to Inception. After that, two 3 * 3 convolution kernels are used to extract further features. Another branch is used to form a residual block. Because the number of channels increases after concatenation, a 1 * 1 convolution is used to adjust the channels and fuse feature maps of different scales. Finally, an SE block is used to reweight the channels for the attention mechanism. Its output is the output of the i-th down-sampling block. The structure of the i-th up-sampling block is shown in figure 4.
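The combination just described — parallel kernels of different sizes, channel concatenation, SE-style recalibration, and a short residual connection — can be sketched in 1-D with numpy. The averaging kernels below are toy stand-ins for the paper's learned 2-D convolutions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def down_block(x, kernel_sizes=(3, 5, 7)):
    """1-D stand-in for the paper's down-sampling block: Inception-style
    parallel convolutions, channel concatenation, SE-style channel
    reweighting, a 1x1-style fusion and a short residual connection."""
    branches = [np.convolve(x, np.ones(k) / k, mode="same")
                for k in kernel_sizes]
    feats = np.stack(branches)               # (channels, length)
    # SE-style squeeze (global average per channel) and excitation.
    weights = sigmoid(feats.mean(axis=1))    # one weight per channel
    feats = feats * weights[:, None]         # recalibrate channels
    fused = feats.mean(axis=0)               # stand-in for the 1x1 fusion
    return x + fused                         # short residual connection

x = np.linspace(0.0, 1.0, 32)
out = down_block(x)
assert out.shape == x.shape
```

The stride-2 down-sampling and learned weights are omitted to keep the sketch small; the point is the data flow of the block, not its capacity.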
In the up-sampling block, a transposed convolution with strides=2 is applied to the output of the (i-1)-th level. The output of the i-th level down-sampling block is also an input, to obtain all features at this scale. Then two convolution layers with 3 * 3 kernels are used to extract features. Finally, the attention mechanism of the SE block is used to adjust the channel weights, and the result is the output of the i-th up-sampling block. After every convolution layer in the network, a BN layer is connected to adjust the data distribution, and a leaky ReLU function is used as the activation function.

Loss function

To train the generator, its loss function consists of four parts: a pixel-wise term, a perceptual term, a frequency-domain term and an adversarial term, computed between a fully sampled image x and the output of the generator x̂. The calculation flow diagrams of the losses are shown in figure 5. The first term is the Mean Squared Error (MSE), used to measure the pixel difference between x and x̂. The second term is the perceptual loss proposed by Fei-Fei Li et al. [24]; it measures the difference between high-level features of x and x̂ to ensure structural similarity between the images. The third term is the frequency-domain loss: the MSE of the Fast Fourier transforms of x and x̂, for similarity of frequency-domain information. Because the data is sampled in the frequency domain, consistency of frequency information is also necessary. The last term is the adversarial loss calculated by the discriminator network; the purpose of the generator is to minimize this loss to fool the discriminator. The discriminator is an ordinary binary-classification-style network with 10 convolution layers followed by a fully connected layer with a single neuron. The activation function is Sigmoid and the output is the probability that the image is a real image.

Dataset

The network uses the MICCAI 2013 grand challenge dataset for training and testing.
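Two of the four generator loss terms described above can be computed directly with numpy: the pixel-space MSE and the frequency-domain MSE. The perceptual and adversarial terms need the VGG and discriminator networks, so they are omitted here, and the weights `alpha`/`gamma` are placeholders rather than the paper's values:

```python
import numpy as np

def mse(a, b):
    return float(np.mean(np.abs(a - b) ** 2))

def generator_loss(x, x_hat, alpha=1.0, gamma=1.0):
    """Pixel-space MSE plus frequency-space MSE between a fully
    sampled image x and a generator output x_hat (two of the four
    terms of the full generator loss)."""
    pixel_term = mse(x, x_hat)
    freq_term = mse(np.fft.fft2(x), np.fft.fft2(x_hat))
    return alpha * pixel_term + gamma * freq_term

x = np.outer(np.hanning(8), np.hanning(8))   # toy fully sampled image
assert generator_loss(x, x) == 0.0           # perfect reconstruction
assert generator_loss(x, np.zeros_like(x)) > 0.0
```

Since the data are acquired in k-space, penalizing the spectrum as well as the pixels enforces the frequency consistency the text mentions.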
There are 16104 T1-weighted MR images for training, 5024 for validation, and 9854 for testing, taking the mean value as the result. The original data is fully sampled, so we perform under-sampling with 1D and 2D Gaussian-distributed masks. The sampling rates are 10%, 20% and 30% respectively, to achieve 10x, 5x and 3.3x imaging speed improvements. Figure 6 shows the generation of training data. A zero-filled image is generated by multiplying a 10% Gaussian mask with the frequency content of the fully sampled image. We used data augmentation to increase the data and help training.

Training parameters

The model is implemented with the TensorLayer library and trained on a GTX1080 with 12GB memory. The VGG network uses pre-trained ImageNet weights that are publicly available on the internet. Some hyper-parameters used in training are as follows: the batch size is set to 8 and the initial value of the learning rate is . We used a 0.5 times learning rate decay every 5 epochs, but no less than . The Adam optimizer is used for optimization with a parameter of 0.5. For the loss function of the generator, is 16, is and is 0.1. The average training time of each model is 12 hours.

Experimental results

[25]. Bold in the table is the best result. In 1D 10%, 30% and 2D 10%, the complete RISEGAN achieves the best results. In the other sampling patterns, each of the three ablation models performs best in MSE for a certain sampling pattern. However, there are only small differences among these models, and a smaller MSE may cause the reconstructed image to be too smooth, so further judgments need to be combined with other indicators. Observing the SSIM results in table 2, RISEGAN has the best results in all sampling patterns. Compared with ZF and DAGAN, each ablation experiment also shows some improvement. A smaller sampling rate means less available data, and the improvement is larger: SSIM improves by 0.19 under the 1D 10% sampling pattern.
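The training-pair generation described above (a mask applied to the frequency content of a fully sampled image, then inverted) can be simulated with numpy. A uniform random mask is used here as a stand-in for the paper's 1D/2D Gaussian-distributed masks:

```python
import numpy as np

def make_training_pair(image, sampling_rate=0.1, seed=0):
    """Return (zero_filled, fully_sampled): keep a random fraction of
    k-space samples and invert the zero-filled spectrum. The uniform
    random mask stands in for the paper's Gaussian masks."""
    rng = np.random.default_rng(seed)
    k_space = np.fft.fft2(image)                      # fully sampled k-space
    mask = rng.random(image.shape) < sampling_rate    # sampling mask
    zero_filled = np.fft.ifft2(k_space * mask).real   # degraded input
    return zero_filled, image

img = np.outer(np.hanning(32), np.hanning(32))        # toy "anatomy"
zf, full = make_training_pair(img, sampling_rate=0.1)
assert zf.shape == full.shape
# The zero-filled image is a degraded version of the fully sampled one.
assert np.linalg.norm(zf - full) > 0
```

At a sampling rate of 1.0 the mask keeps everything and the pair becomes identical, which is a convenient correctness check.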
The improvement is relatively small when the sampling rate is higher and the information is sufficient. Observing the PSNR results in table 3, RISEGAN achieves the best results in most sampling patterns, and where it is not the best it is close to the highest value. Compared with DAGAN, each ablation model improves by more than 1.5 points, showing that the proposed modules are effective.

Figure 7 shows the reconstruction results for the zero-filled image in the figure above. The first row shows the outputs of the networks and the second row the corresponding error maps, where brighter colors correspond to larger errors. The DAGAN result has a lower PSNR, and its error map shows larger differences. Compared with DAGAN, the results of RIGAN, RSEGAN and ISEGAN contain more details, with PSNR values about 2 points higher, although these three results are similar to one another. RISEGAN gives the best result, with the highest image quality and the best PSNR value.

Conclusion

In this paper, we use a new neural network, RISEGAN, to establish the mapping between undersampled data and the fully sampled image in order to accelerate MR imaging. Compared with the DAGAN architecture, an SE block is integrated into the model as an attention mechanism between channels, combined with the Inception structure to achieve multi-scale feature extraction. Besides, we use residual learning to accelerate the training process. Results show that the proposed model achieves better reconstruction image quality than previous methods. In the future, we can do further research in 3D and dynamic MRI.
Using Social Media to Facilitate Communication About Women’s Testing: Tool Validation Study

Background
Strong participant recruitment practices are critical to public health research but are difficult to achieve. Traditional recruitment practices are often time consuming and costly, and they fail to adequately target difficult-to-reach populations. Social media platforms such as Facebook are well positioned to address this area of need, enabling researchers to leverage existing social networks and deliver targeted information. The MAGENTA (Making Genetic Testing Accessible) study aimed to improve the availability of genetic testing for hereditary cancer susceptibility in at-risk individuals through a web-based communication system, along with social media advertisements to improve reach.

Objective
This paper aims to evaluate the effectiveness of Facebook as an outreach tool for targeting women aged ≥30 years for recruitment in the MAGENTA study.

Methods
In collaboration with patient advocates, we designed and implemented paid and unpaid social media posts, with ongoing assessment, as a primary means of research participant recruitment. Facebook analytics were used to assess the effectiveness of paid and unpaid outreach efforts.

Results
Over the course of the reported recruitment period, Facebook materials had a reach of 407,769 people and 57,248 instances of engagement, indicating that approximately 14.04% of the people who saw information about the study on Facebook engaged with the content. Paid advertisements had a total reach of 373,682; among those reached, just under 15% (54,117/373,682, 14.48%) engaged with the page content. Unpaid posts published on the MAGENTA Facebook page had a total reach of 34,087 and 3131 instances of engagement, indicating that around 9.19% (3131/34,087) of the people who saw unpaid posts engaged.
Women aged ≥65 years reported the best response rate, with approximately 43.95% (15,124/34,410) of reaches translating to engagement. Among the participants who completed the eligibility questionnaire, 27.44% (3837/13,983) had heard about the study through social media or another webpage.

Conclusions
Facebook is a useful way of enhancing clinical trial recruitment of women aged ≥30 years who have a potentially increased risk for ovarian cancer by promoting news stories over social media, collaborating with patient advocacy groups, and running paid and unpaid campaigns.

Trial Registration
ClinicalTrials.gov NCT02993068; https://clinicaltrials.gov/ct2/show/NCT02993068

Background

High participant response rates and recruitment yields are critical to public health research but are difficult to achieve [1][2][3]. Traditional recruitment practices, including radio or newspaper advertising, in-person referrals, and flyers, are often time consuming to implement, costly, and fail to adequately target difficult-to-reach populations [4,5]. The initial net cast using these types of recruitment methods may result in a high number of interested parties; however, such efforts result in proportionately fewer eligible and enrolled participants, and certain demographics are frequently left underrepresented [6]. Social media is well positioned to address many of these issues and improve participant recruitment by providing new platforms for people to learn about public health research [7][8][9][10].
The term social media broadly describes a variety of web-based social networking platforms, or web-based spaces where the public can generate, engage with, and share information, including platforms such as Facebook, Twitter, and Instagram [11,12]. Social media enables researchers to deliver information to a wide audience; target specific groups of people, including hard-to-reach subpopulations; and adapt outreach efforts on an ongoing basis [7][8][9][10]. Current research indicates that social media recruitment methods are an improvement over traditional methods in terms of both cost and effectiveness [13][14][15][16].

Facebook, used by more than three-quarters of adults on the web, is particularly well suited for research recruitment [17,18]. On Facebook, users can engage with user-generated content, publish photos on their Facebook pages, post status updates, and share information with friends and family. Users follow content of interest and engage socially with paid advertisements and other content. Researchers can leverage this environment, creating content tailored to specific populations using online behavioral advertising (OBA) and respondent-driven sampling to improve reach [19].

OBA data can help researchers improve their marketing reach. OBA data include information collected from a broad range of web-based sources about the behaviors that users exhibit on the web [20]. OBA appeals to researchers in public health seeking to improve recruitment tools, offering an alternative outreach method with a broader reach that may overcome certain recruitment barriers, such as geographic limitations [21][22][23][24][25].
Instead of wondering whether a flyer is posted in the right place for the right type of individual to see, researchers can guarantee that their message is displayed to the intended person. This approach is not without its limitations, and some health professionals and researchers have expressed reluctance, citing concerns about the biased sampling or reach that may accompany social media platforms [26] and about privacy [12,27,28].

Facebook also allows public health professionals to leverage existing social networks through snowball sampling [29,30]. Snowball sampling, which has traditionally taken place offline, can capitalize on existing web-based social networks, such as patient advocacy groups [30,31]. By encouraging a small sample of a target population to refer others to a research study, snowball sampling helps researchers access hidden subpopulations that are typically difficult to sample using traditional recruitment methods [30]. From snowball sampling to opportunities to shape the tone, imagery, and content to fit the needs of the intended audience, social media is well positioned to function as a targeted communication tool. With these advantages, social media has the potential to take traditional snowball sampling one step further, enabling researchers to connect with harder-to-reach populations [32]. This quality gives social media recruitment the potential to shift the pattern of health inequities by improving the representation of certain communities in the research arena [33].
Recent reviews indicate that most studies using Facebook to recruit participants for health research have focused on people aged 18 to 30 years [8,34]. In comparison, few studies have evaluated social media as a means of recruiting people affected by cancer who are aged ≥35 years [34], and no studies have explored how social media recruitment performs when targeting women at risk for ovarian cancer. The consensus is that older people may be less likely to adopt new technologies, such as social media [34,35]. Other studies have reported high reach but low engagement among social media users, resulting in a high attrition rate for social media recruitment [36]. However, this research failed to examine advertisement content or take the growth of the social media platform into consideration. As the social media base continues to grow, the profile of the average user evolves, and with it the age of the average Facebook user continues to increase [37]. With this evolution in mind, ongoing assessment is needed to evaluate the effectiveness of social media for research participant recruitment across different demographics, and more research is needed to better understand how Facebook functions as a recruitment tool in the context of ovarian cancer [20].
Study Aims

This research sought to determine whether Facebook is an effective recruitment tool for targeting women aged ≥30 years for recruitment into the MAGENTA (Making Genetic Testing Accessible) study by evaluating innovative methods for the recruitment of research participants using Facebook. To accomplish this objective, a series of posts and advertisements, both paid and unpaid, were published and assessed on an ongoing basis. These materials used a variety of imagery and language and leveraged Facebook's OBA tools to target specific populations and eligible participants. We hypothesized that unpaid Facebook posts and Facebook advertisements would improve the reach of the study material and result in improved study enrollment.

About the MAGENTA Study

The MAGENTA study was a nationwide Stand Up To Cancer initiative that sought to improve access to genetic testing for ovarian cancer. The study recruited and randomized 3839 women from the United States with a potentially increased risk of ovarian cancer. Participants were randomized to 1 of 4 arms, receiving a combination of pretest or posttest telephone genetic counseling and pretest or posttest web-based education with optional telephone counseling [38]. The active recruitment period took place between April 2017 and January 2020. This study received institutional review board approval from the MD Anderson Cancer Center and was a collaborative effort that included several cancer research centers and patient advocacy groups, including the Ovarian Cancer Research Alliance, National Ovarian Cancer Coalition, and Minnesota Ovarian Cancer Alliance.
Once potential participants had learned about the MAGENTA study, they were prompted to visit the study website. From there, interested parties clicked to participate in the web-based communication system, starting with the study information and then moving through the eligibility screen, informed consent, and enrollment (Figure 1). Data were collected at baseline and follow-up using REDCap (Research Electronic Data Capture), an electronic survey tool sponsored by the University of Washington (WA). All outreach materials received institutional review board review through the MD Anderson Cancer Center. The results of the MAGENTA study indicate that electronic genetic education and results released without genetic counseling were noninferior with regard to patient distress. Importantly, the research also found that providing genetic education and results in this capacity was associated with higher test completion and lower distress [38].

Developing a Media Kit

Adapting the methods outlined by Carter-Harris et al [39] and Musiat et al [40], the study media kit was developed in collaboration with key stakeholders. This group comprised health care professionals from cancer care and research centers and patient advocates from advocacy groups across the United States, including those listed previously. Patient advocates were consulted extensively during the development of the study materials, including the media kit described in the following sections. The media kit included Facebook recruitment materials and a list of social media contacts, such as patient advocacy groups and other groups with an apparent interest in breast and ovarian cancer.
The media kit also included different types of posts generated for recruitment purposes, including paid advertisements, unpaid posts, sample tweets, a list of relevant hashtags to incorporate into posts, and a selection of media for use across all social media posts and advertisements (example posts can be reviewed in Figures 2-4). Unpaid Facebook posts and paid advertisements included at least one media component, a brief description of the study, relevant hashtags, and a link to the study home page (Figure 1). A MAGENTA Facebook page was created to develop trust with potential participants [41,42]. The Facebook page provided basic information about the study, served as a platform for sharing unpaid and paid social media posts, and directed potential participants to the study website. Materials from the media kit were assessed by patient advocates and underwent usability testing. Advertisements and posts were created with tone and imagery in mind, focusing on content related to ovarian cancer research that elicited a combination of the following concepts, adapted from Batterham [43]:

1. Content instills a sense of collaboration, conveying the idea that one is participating in research as a member of a team addressing a health problem (in this case, ovarian cancer was framed as the problem).

2. Content instills a sense of independence, conveying the idea that one is addressing the problem of ovarian cancer as an individual through research participation.

3. Content instills a sense of altruism, conveying the idea that the individual is participating in research for the benefit of others.

4. Content instills a sense of self-gain or self-preservation, conveying the idea that the individual is participating in research for personal gain.
Publishing Paid Advertisements and Unpaid Posts

Unpaid posts were published directly on the MAGENTA Facebook page on a regular basis and on patient advocacy Facebook pages. Paid advertisements were published using Facebook's advertising tool. Once the objective or goal of the campaign (eg, post engagement, website clicks, or video views) was set, the audience was identified using Facebook's audience-targeting tool. Targeted populations for the purposes of this study included English-speaking women aged ≥30 years living in the United States. Additional geographic and behavioral targeting was included on a case-by-case basis and is described in greater detail in Table 1. Census data were used to inform additional geographic and socioeconomic targeting and included data surrounding racial-ethnic groups and the rurality of the location. These variables were layered using ArcGIS Pro (version 2.5; Esri) to select the specific geographic targets. ArcGIS is a mapping and analysis tool that allows users to use a geographic information system to capture, manipulate, and analyze geospatial data.

Once the audience was selected, advertising content was uploaded to Facebook, a campaign budget was selected, and a campaign schedule was set. On the basis of the intended audience, Facebook uses OBA approaches to push out content with the above parameters in mind. Although targeting affects results, on social media platforms including Facebook the budget arguably has the most impact on reach, and larger budgets are generally associated with more results, assuming that appropriate targeting is used.
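For illustration, the audience described above (English-speaking women aged ≥30 years living in the United States, plus optional interest targeting) corresponds to a targeting specification of the kind Facebook's advertising tools consume. The sketch below is an assumption for illustration, not the study's actual configuration: the field names follow public Facebook Marketing API conventions as we understand them, and the interest entry is a placeholder.

```python
# Hypothetical targeting spec in the style of the Facebook Marketing API.
# Field names and coded values are assumptions for illustration only and
# do not reproduce the MAGENTA study's actual campaign settings.
targeting = {
    "geo_locations": {"countries": ["US"]},  # United States only
    "age_min": 30,                           # women aged >= 30 years
    "genders": [2],                          # 2 denotes female in the API
    "flexible_spec": [                       # optional behavioral targeting
        {"interests": [{"name": "Ovarian cancer awareness"}]}  # placeholder
    ],
}

def summarize(spec):
    """One-line human-readable summary of a targeting spec."""
    return (f"countries={spec['geo_locations']['countries']}, "
            f"age_min={spec['age_min']}, genders={spec['genders']}")
```

A spec like this is what the audience-targeting step produces before a budget and schedule are attached to the campaign.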
Other Recruitment Efforts

All Facebook recruitment efforts took place alongside traditional recruitment efforts as part of the MAGENTA study. Traditional recruitment efforts included clinician referrals, direct emails from patient advocacy groups, the dissemination of study information at provider and patient advocate conferences, and sharing physical flyers in patient advocacy and clinical settings. Traditional efforts were largely based at the participating cancer research institutes, organizations, and patient advocacy groups. Study recruitment commenced with traditional methods, allowing for a controlled launch and an additional real-time usability assessment of the web-based communication system. In this first round of recruitment, enrollment relied primarily on word of mouth and flyer dissemination, both of which were facilitated by collaborating with patient advocacy groups. Following this controlled outreach, the study team expanded the outreach to include social media posts, as described above, in an effort to extend the reach of the messaging.
Evaluating Paid Advertisements and Unpaid Posts

Facebook analytics captured how users interacted with the MAGENTA social media content. The analytics included, among others, the following: engagement, defined as any time an individual takes action on a post, where an action includes a click, comment, share, or view; results, defined as the number of times an advertisement achieved a specific outcome, delineated by the campaign objective; reach, defined as the number of people who saw the advertisement at least once; impressions, the number of times an advertisement was on a screen; clicks, the number of times someone clicked on the advertisement; and video plays. The study team also reviewed the cost per result, calculated by dividing the total amount of money spent by the number of results (for example, the number of video views or website visits obtained over the course of the campaign). Analytics, including cost, were reviewed daily to assess effectiveness and provide opportunities to adjust campaign content or targeting. The same information was collected for unpaid posts published directly on the MAGENTA study's Facebook page. If at any time the MAGENTA study website or another part of the web-based communication system became overburdened, advertisements were pulled, or turned off, until the traffic subsided.

Ethics Approval

This study, including all outreach materials, received institutional review board review through the MD Anderson Cancer Center (2016-0298).
Overview

Active social media recruitment for the MAGENTA study took place between September 2017 and October 2018. The MAGENTA study relied on traditional recruitment methods from April 2017 until September 2017. Traditional recruitment methods continued throughout the social media recruitment period; however, the study team focused on web-based recruitment efforts in the interest of improving reach across all 50 states. The recruitment timeline can be viewed in Figure 5. During the active social media recruitment period, Facebook materials reached a total of 407,769 users, generating 57,248 instances of engagement, suggesting that approximately 14.04% of the people who saw information about the MAGENTA study on Facebook engaged with the content. These numbers did not identify unique users and excluded posts published on Facebook pages managed by other breast and ovarian cancer groups. During this time, the MAGENTA study home page was shared 1948 times, and the MAGENTA study video was viewed 31,358 times (Table 2).
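The engagement rates quoted in this study are simple ratios of engagement to reach; the helper below (a trivial sketch, names our own) reproduces the reported percentages from the reach and engagement counts.

```python
def engagement_rate(engagement, reach):
    """Percentage of reached users who engaged with the content."""
    return round(100 * engagement / reach, 2)

# Counts reported for the active social media recruitment period:
overall_rate = engagement_rate(57_248, 407_769)  # all Facebook materials
paid_rate = engagement_rate(54_117, 373_682)     # paid advertisements
unpaid_rate = engagement_rate(3_131, 34_087)     # unpaid MAGENTA-page posts
```

These calculations reproduce the 14.04%, 14.48%, and 9.19% figures reported for overall, paid, and unpaid outreach, respectively.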
Users Learn About the MAGENTA Study Over Television and Social Media

Of the 13,983 respondents to the MAGENTA REDCap eligibility questionnaire during the active social media recruitment period, <1% (n=23, 0.16%) indicated that they learned about the study from a magazine, <1% (n=86, 0.62%) from the radio, <2% (n=253, 1.81%) from a health care provider, >3% (n=459, 3.28%) from a patient advocacy group, and <8% (n=1102, 7.88%) from a friend. Approximately 8.64% (1209/13,983) indicated that they learned about the study from a family member, whereas 27.44% (3837/13,983) indicated that they learned about the study on the web, either from social media or another webpage, and 28.16% (3938/13,983) from television. Among those who reported that they learned about the study from the internet, 16.94% (2369/13,983) specifically cited social media. A total of 21.7% (3034/13,983) of individuals who responded to the REDCap eligibility questionnaire did not indicate where they had heard about the study. Some respondents reported learning about the study from more than one source.
Social Media Response

Paid advertisements (Table 3) had a total reach of 373,682 during the active social media recruitment period. Among those reached, 3.57% (13,357/373,682) clicked on a link and 14.48% (54,117/373,682) engaged with the page content. Paid campaigns also generated 19,792 video plays and 9095 website conversions, defined as instances in which a potential participant viewed the content on the MAGENTA home page. Paid advertisements using the study video resulted in a total reach of 54,992 and 28,586 instances of page engagement. Post promotions, or paid advertisements focused on increasing the reach of a post, resulted in a reach of 2666 and 97 instances of engagement. Conversion campaigns resulted in a reach of 268,052, with 35,904 instances of engagement and 9095 conversions. Campaigns seeking to drive traffic to the MAGENTA study website resulted in a reach of 80,120, with 18,158 instances of engagement and 1697 instances of a unique user clicking on the link to the MAGENTA home page. Almost all users engaged with paid advertisements from a handheld mobile device, such as a smartphone or tablet, rather than from a desktop computer. Most users engaged with paid advertisements from an Android device (35,806/373,682, 9.58%), followed by iOS devices (16,

MAGENTA Study Enrollment and Randomization Summary

There were 34,715 unique visitors to the MD Anderson home page during the active social media recruitment period and 22,029 (63.46%) unique clicks. Approximately 63.46% (22,029/34,715) of users who visited the MD Anderson MAGENTA home page during this period clicked on the Get Started link, which directed them to the landing page on the REDCap system. The Submit button on the REDCap landing page received a total of 40.4% (14,025/34,715) of clicks, and the eligibility questionnaire on REDCap was completed 31.35% (10,883/34,715) of the time. Of the completed questionnaires, 14.02% (4887/34,715) were eligible. The enrollment and randomization data from the
active social media recruitment period are summarized in Table 2.

Social Media Campaigns and News Stories Influence Enrollment Response

General recruitment activity following paid advertisements was tracked and compared with periods when paid advertisements were not running. Because of the overlap between campaigns and television news stories, changes in recruitment activity around paid campaigns were not reported for all campaigns, and in some cases the observation period following a campaign was excluded because another campaign was running during that time. There was an uptick in completed eligibility questionnaires following individual and successive paid campaigns. Before 2 paid advertisements that ran back to back in November 2017, eligibility questionnaires were completed at a rate of 5.2 per day. This increased to a rate of 6.9 during these campaigns and in the week following them. During the 2 weeks before another pair of paid advertisements, published sequentially in March 2018, eligibility questionnaires were completed at a rate of 7.4 per day, increasing to a rate of 12.7 during and in the 2 weeks following the campaign.

Enrollment following paid advertisement campaigns with a narrow geographical focus was further assessed. These campaigns included a targeted campaign in WA State (WA Campaign) and a campaign with multiple advertisements in California (CA; CA Campaign 1, CA Campaign 2, and CA Campaign 3), as seen in Table 3. The WA Campaign reached 20,733 people, about 1.43% (298/20,733) of whom clicked on the webpage link, and 0.35% (73/20,733) went on to view content on the MD Anderson MAGENTA page. Throughout this campaign, a total of 32 individuals from WA State completed the eligibility questionnaire, at a rate of 3.7 completed eligibility questionnaires per day. Before the social media campaign, the rate was 0.5 completed eligibility questionnaires daily.
The advertisement campaign targeting CA comprised 3 advertisements (CA Campaign 1, CA Campaign 2, and CA Campaign 3). This campaign had a reach of 95,600. Just over 7% (6806/95,600, 7.12%) of these reaches resulted in a webpage link click, and some (5628/95,600, 5.89%) went on to view content on the MD Anderson MAGENTA page. During and immediately after the campaign, a total of 74 individuals from CA completed the eligibility questionnaire, at a rate of 1.5 completed eligibility questionnaires per day. Before this campaign, the rate was 0.6 completed eligibility questionnaires per day from the state of CA.

During the active social media enrollment period, several television news stories about the MAGENTA study, spearheaded by patient advocates and clinicians affiliated with the study, were broadcast, including a story from WCCO based in Minnesota [44], a Fox 2 Detroit story from Michigan [45], and the King 5 story based in WA State [46]. These news stories were widely shared over social media. In the month following the WCCO story, completed eligibility questionnaires from Minnesota increased from <0.5 per day to almost 123 per day. An increase in completed questionnaires was also observed following the release of the Fox 2 Detroit story: in the month immediately following this story, the number of completed eligibility questionnaires increased from 0.3 per day to 31 per day. Similarly, in the month following the King 5 story, completed eligibility questionnaires from WA State increased from 0.6 per day to 25 per day. These increases in enrollment and recruitment activity are shown in Figure 5. Other increases, specifically those observed in study video views, aligned with paid Facebook advertising campaigns where video views were the campaign objective.
Principal Findings

This study demonstrated that Facebook is a useful way of reaching women aged >30 years who have a potentially increased risk of ovarian cancer through paid advertising, unpaid social media posts, and the promotion of news stories on social media. The key learning points include the following:

1. Campaign objectives that require more participant action to reach the end result generate passive engagement along the way.

2. Multimedia posts, specifically those with a video, create opportunities for engagement.

3. Effective social media outreach requires close collaboration with patient advocacy groups.

4. Web-based behavioral advertising can support targeted message delivery but is limited to those present on a specific platform.

In addition to these lessons, this research highlights other important limitations of social media outreach. Each of these learning points is addressed in greater detail in the following sections.

Campaign Objectives That Require More Action Generate Passive Engagement

More than one-quarter of the participants filling out the eligibility survey had heard about the study through social media, and another 28.16% (3938/13,983) through traditional media sources (ie, television news) that were then amplified by social media. Targeted, regional, paid Facebook advertising resulted in measurable increases in relevant regional enrollment for approximately 2 weeks following each campaign. These recruitment sources were essential to the successful completion of MAGENTA enrollment and resulted in wide national representation, with participants enrolling from all 50 states.
The engagement indicators reported across paid advertising varied by campaign (Table 3). The campaign objective, budget, schedule, duration, and targeted population all influenced the response rate and participant engagement. During the reported recruitment period, demographic targeting was modified by age, geographic location, and expressed interests on an ongoing basis. Campaigns that were more finely targeted by geographic location and prior engagement with cancer information or groups tended to cost more per result than campaigns with broader targeting, presumably because the more customized population was comparatively smaller and more difficult to reach. Similarly, when the objective of the campaign required more action on the part of the participant to meet the objective, the cost per result increased. In other words, if the objective of the campaign was to get the participant to view material on the study website, the advertisement had to appear on the participant's screen, and the participant had to actually see the advertisement, click on it, go to the study home page, and spend a few moments with the study home page open in their browser. As a result, this specific objective required a greater amount of engagement than, for example, a post view would. This also means that any advertisement with a multistep objective requiring more engagement accrued more upstream engagement. In the case of website views, to get a certain number of people to view the website, the Facebook advertising system required more people to see the initial post, spend time viewing that post, click the link, and so on. With this pattern in mind, we found that it was possible to increase post engagement upstream by focusing on downstream objectives that require more interaction to achieve. This incidental engagement also created opportunities for repeated exposure, making it more likely that individual users would see information about the study more than once,
potentially building brand recognition and familiarity.

Multimedia Posts Create Opportunities for Engagement

Multimedia elements, such as the study video, were important for outreach during the study enrollment period. For example, Figure 5 depicts different ways that potential participants could interact with the web-based communication system, illustrating engagement with the study video, among other variables. Study video views fluctuated with the paid campaigns. Although many of the engagement increases observed in Figure 5 were connected to news stories and the subsequent boosting of these stories over social media, there were also increases in study video views related to paid campaigns with a video view objective. Because we did not build a mechanism into the web-based communication system to determine how many participants learned about the study specifically from watching the study video, we were unable to calculate how many video views resulted in enrollment. Despite this limitation, video views likely helped build familiarity among potential participants.

Social Media Outreach Is Only as Strong as Your Relationships With Patient Advocates

This study also demonstrates the importance of including patient advocates as members of a multidisciplinary research team and of using social media to boost patient advocate-spearheaded recruitment efforts. The patient advocacy groups supporting the MAGENTA study were critical to its success. They not only helped facilitate televised and print news stories but also disseminated study information across their established web-based, as well as in-person, social networks. Importantly, patient advocates working with the study team also helped shape targeted advertising campaigns through Facebook's campaign targeting tools, which helped identify and boost content for individuals who followed patient advocacy Facebook pages.
Patient advocates were instrumental in designing accessible recruitment materials, getting news stories published, and supporting story circulation. Following the release of news stories featuring the MAGENTA study, there was a consistent increase in enrollment trends, with 28.16% (3938/13,983) of potential participants reporting that they learned about the study on television, referencing specific news stations featuring news clips about the study. These news segments, spearheaded by patient advocates, played a central role in study recruitment. Although these stories originated via traditional media, either as televised news stories or similar publications, social media still likely played a role in promoting this content. Over social media, more people were able to view and share the news stories, making these news features more accessible. The inclusion of multimedia content, such as videos, appeared to extend this reach further, making web-based content easier to view and share. The advantage of video media is well documented, with other research confirming that videos and other media-rich posts perform better than text-based content alone [47,48].
Given the spikes in page views and engagement that followed each news story, news stories were arguably one of the most effective outreach mechanisms used during the observed recruitment period. They are also one of the most difficult outreach mechanisms to implement, depending on either significant financial resources or existing interpersonal relationships with a news station or anchor. The MAGENTA study benefited from existing relationships between our patient advocate partners and local news anchors. If traditional media outreach such as this can be obtained, it can clearly be instrumental in meeting recruitment goals; however, it is unrealistic to count on it as a primary outreach mechanism. In addition, outreach that is geographically focused, such as news stories released over a specific network, will ultimately be limited to the demographic served by that network. This was certainly the case for the WCCO story, which is discussed in further detail in the following sections.

The WCCO televised story was arguably the most successful individual recruitment effort [44]. This story featured Kim Johnson, a local news anchor with a family history of breast and ovarian cancer. Johnson is an established household name for many of the communities served by WCCO and has spoken publicly about ovarian cancer in the past. It is possible that this story gained the traction it did for the same reasons that web-based information seekers are more likely to use familiar sources: if they can recognize the name, they are more likely to trust it [49]. Comparing these efforts with the enrollment activity following paid advertisements, it appears that although paid advertisements have an impact, collaboration with patient advocacy groups is also important for reaching a target audience. By leveraging existing social networks over social media through patient advocacy groups, Facebook could offer more cost-saving opportunities for research recruitment, particularly
for large-scale studies such as MAGENTA. Considering these opportunities, as the average Facebook user continues to age [50], Facebook is likely to become an increasingly favorable venue for recruiting adults for research. A similar evolution in the average user is also observable across other social media platforms.

Web-Based Behavioral Advertising Supports Message Delivery-But Not to Everyone

OBA made it easy to target information about the study to specific age groups, regions, and expressed and inferred interests. For example, we were able to target people who met the age and regional criteria and who had expressed an interest in various ovarian and breast cancer-related initiatives. Women aged ≥65 years had the best response rate compared with other age groups, with approximately 43.95% (15,124/34,410) of reaches translating to engagement. This response rate suggests that although individuals aged ≥65 years make up a smaller percentage of web-based social media users, they are arguably more responsive to the content they see on social media than younger demographics. Their response rates could potentially be leveraged with a different message. Rather than encouraging them to enroll themselves, future advertisements might implore them to encourage their family members to learn more about the MAGENTA study.
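The age-group comparison above can be reproduced from the counts reported in the results. The following sketch recomputes the engagement rates per age group and runs a standard chi-square test of independence on the engaged versus not-engaged counts; the test statistic itself is not reported in the text (only P<.001), so the hand-rolled chi-square here is an illustration of the kind of test that would yield that result, not the authors' exact analysis.

```python
# (engaged, reached) pairs by age group, taken from the reported results;
# "not engaged" is derived as reached - engaged.
groups = {
    "<54":   (26_752, 268_168),
    "55-64": (12_226, 45_930),
    ">=65":  (15_124, 34_410),
}

# Engagement rate per group, matching the percentages quoted in the text.
for name, (engaged, reached) in groups.items():
    print(f"{name}: {engaged / reached:.2%}")

# Chi-square test of independence on the 3x2 contingency table.
observed = [(e, r - e) for e, r in groups.values()]
col_totals = [sum(row[j] for row in observed) for j in range(2)]
grand = sum(col_totals)
chi2 = 0.0
for row in observed:
    row_total = sum(row)
    for j, obs in enumerate(row):
        expected = row_total * col_totals[j] / grand
        chi2 += (obs - expected) ** 2 / expected

# df = (3-1)*(2-1) = 2; the 0.001 critical value is ~13.82, so a statistic
# far above it is consistent with the reported P<.001.
print(f"chi-square = {chi2:.1f}")
```

With counts this large even modest rate differences are highly significant, which is why the roughly fourfold rate gap between the youngest and oldest groups yields an extreme chi-square value.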
Facebook and other social media platforms certainly present several opportunities for researchers; however, privacy concerns and worries over the use of OBA data make it clear that the drivers of Facebook do not always share the same values as the drivers of research. Paid advertising presents unique opportunities to target specific groups of people; however, unpaid posts published across existing web-based social networks are arguably preferable from an ethical standpoint, particularly with regard to recent data breaches on Facebook and concerns about how social media platforms such as Facebook use and monetize OBA data [51]. Data privacy issues such as these affect consumer trust and may deter users from previously trusted social media platforms, such as Facebook. Importantly, when unpaid posts come from existing social media profiles, such as a patient advocacy Facebook page with an established following, they are likely to function better than a sponsored advertisement, in large part because of this trust factor. When a message comes from a trusted source, patients are more likely to feel comfortable engaging with it. This requires research teams to build relationships with patient advocacy groups, specifically with those whose following meets the intended study eligibility criteria. In the absence of this invaluable resource, paid advertising may offer an effective alternative.
Most MAGENTA participants were White-identifying individuals. This may have been partly because of the geographic locations from which recruitment bursts originated; for example, the Minnesota burst increased enrollment from a region comprising >80% non-Hispanic White individuals. Black and indigenous people of color are chronically underrepresented in clinical research settings [52]. This trend is partly explained by ineffective recruitment mechanisms [53]. The relatively homogenous sample recruited by the MAGENTA study poses a deficit for research, leaving underrepresented communities less likely to benefit clinically from research findings [52]. This problem is not unique to MAGENTA and is not something that social media recruitment alone can resolve.

Prior work suggests that different groups have different response rates where research is concerned, meaning that targeted marketing, even over social media, is likely to leave certain groups underrepresented [34]. Current recommendations highlight the importance of allowing the target population to inform platform choices [26]. Other social media platforms with sufficient representation of Black and indigenous people of color should be explored for recruitment opportunities. Future research should assess the effectiveness of targeted recruitment across varying social media platforms for the purpose of reaching underrepresented populations and explore alternative delivery models to improve access to genetic testing for Black and indigenous communities of color.
A drop-off was observed from initial engagement to enrollment and randomization (Table 2). The drop-off may reflect normal study attrition at each stage; however, it may also be the result of the complex web-based enrollment protocol used. Participants who learned about the study were referred to the study webpage, where they were still several clicks away from the eligibility questionnaire (Figure 1). Eligible individuals then had to note the messaging at the end of the questionnaire telling them to check their email inbox for an email containing the next steps, and ensure that any auto-filtering system in their email inbox did not send the REDCap email directly to their trash or spam box. This issue came up during initial system usability testing and was addressed by adding additional messages at the end of the questionnaire prompting people to check their email inboxes. It is possible that some of the drop-off between completion of the questionnaire and providing signed consent was because of lost emails.
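To illustrate how step-wise losses of this kind compound, here is a minimal sketch of a multi-step enrollment funnel like the one described above. Every step name and retention rate below is hypothetical; the study did not report per-step conversion rates.

```python
# Hypothetical retention rate at each step of a multi-step web-based
# enrollment protocol (fraction of people who completed the previous
# step and also complete this one).
steps = [
    ("visited study webpage", 1.00),
    ("reached eligibility questionnaire", 0.50),
    ("completed questionnaire", 0.70),
    ("opened follow-up email", 0.60),   # lost/filtered emails hit here
    ("provided signed consent", 0.80),
]

entered = 10_000  # hypothetical number who learned about the study
count = entered
for step, retention in steps:
    count = round(count * retention)
    print(f"{step:<34} {count:>6} ({count / entered:.1%} of initial)")
```

Even with no single catastrophic step, a few moderate losses in sequence leave fewer than one in five of the people who started, which is why adding reminder messaging at the weakest step (here, the email step) can pay off disproportionately.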
Limitations

There were several limitations to this study. One of the most prominent issues was the varying definitions of reach and engagement across different web-based platforms. Although Facebook differentiated these variables, REDCap did not, making it difficult to accurately compare numbers across the various platforms included in the web-based communication system. This also made it difficult to determine whether a particular effort was successful or whether the participant finally took action after seeing information about the study for the third or fourth time, a potential trend supported by marketing research indicating that repeated exposure is required for action [54]. Similarly, Facebook does not currently have a way of tracking website conversions through unpaid posts, or a public-facing means of tracking the demographics of engaged users or the platforms from which they access content; thus, this information was not collected for unpaid published materials.

The platform itself also has limitations. First, Facebook, similar to other social media platforms, is a rapidly evolving tool that uses internal user analytics to make changes to its terms and use agreements. This includes routine revisions of advertising platforms. For MAGENTA, this meant that some of the initial targeting variables and content used toward the beginning of the observed recruitment period were no longer available as the outreach continued. Although this is an issue that all social media platforms are likely to face, there are other reasons that researchers should carefully consider their options when choosing a social media platform for recruitment outreach.
Platform selection appears to be one of the most important factors in conducting social media research. The most popular social media platforms currently used for research recruitment are Facebook, Twitter, and Instagram. Each of these platforms has a different user demographic profile, with social media preferences varying by race, ethnicity, and age. Facebook is increasingly becoming a platform that is appropriate for reaching middle-aged and older adults as the average Facebook user ages [18]. Instagram and Twitter, on the other hand, may be better options for reaching younger populations, given that the average Instagram and Twitter user is aged <35 years [18]. The average Twitter user, for example, is a young, affluent, college-educated male of color [18]. Certain racial-ethnic groups also tend toward other preferred social media platforms. For example, the most popular social media platform among Koreans is called KakaoTalk [55]. It is important to choose a social media platform populated by members of the intended population. This requires an understanding of the social media habits of the target population. It is also critical to understand that any social media platform will be subject to sampling bias if used to recruit research participants. Not only will recruitment activities be subject to the bias present on the specific platform but also to the bias that results from internet-based recruitment efforts; that is, the resulting study population will largely be made up of individuals who use the internet, a potential marker of eHealth literacy and technology literacy. Regardless of the research goals, the target population should inform the social media platform choice.

Figure 1. Illustration of the web-based communication system used by the MAGENTA (Making Genetic Testing Accessible) study. REDCap: Research Electronic Data Capture.

Figure 2. An example Facebook post containing a still image, study link, and brief description of the outreach. This type of post was used in both unpaid posting and paid advertising campaigns.

Figure 3. An example of a Facebook post sharing the WCCO news story, which includes a video of the news story and a brief text section. This type of post is an example of a boosted post that was used for unpaid posting.

Figure 4. An example of a Facebook post containing the study video, study link, and a brief description of the outreach. This type of post was used in both unpaid and paid advertising campaigns.

Figure 5. Timeline of enrollment trends and recruitment events captured during the active social media recruitment period (September 2017 to October 2018) and the number of responses received at different steps in recruitment activity.

Table 1. Description of Facebook paid campaign content and audience.

Approximately 9.98% (26,752/268,168) of the women aged <54 years reached by the advertisement content engaged with the advertisement, whereas approximately 26.62% (12,226/45,930) of women aged between 55 and 64 years who saw the paid content engaged, and 43.95% (15,124/34,410) of women aged ≥65 years who saw MAGENTA advertisements engaged with advertisement content. The difference observed between the above age demographics regarding reach to engagement was statistically significant (P<.001). Unpaid posts published on the MAGENTA Facebook page resulted in 34,087 reaches and 3131 engagements. These numbers do not include social media posts published on other non-MAGENTA Facebook groups and pages.

Table 3. Global summary of results for all paid campaigns.
The Pathway towards Social Responsibility in the Italian Wine Sector: The Feudi di San Gregorio S.p.A. Experience

The traditional business view, which assumes that the main contribution of companies to society is the creation of economic value, is being surpassed by a growing awareness of other values, including social and environmental ones. The diffusion of Corporate Social Responsibility (CSR) in all sectors and, in recent times, also in the wine business, has been based on a stakeholder approach. Most of the existing key concepts and tools addressing CSR issues have been developed by, and in the context of, large enterprises. The multitude of Italian wine SMEs (small and medium-sized enterprises) remain at an early stage of environmental and social management, limited in most cases to local activities, characterized by an occasional approach and unrelated to business strategy. However, due to increased pressure from stakeholders, environmental and social concern is gradually growing in SMEs as well. Nevertheless, CSR initiatives developed by large companies frequently fail when adopted by SMEs, including wine SMEs. In recent years, major companies have felt the need to work on CSR and have progressively defined and adopted a wide range of instruments such as codes of conduct, process standards, environmental management standards, cause-related marketing, corporate social reports, sustainability reports, etc. More often than not, these instruments represent the final result of experiments established by large organizations, often operating in distant countries, which have decided to satisfy the multiple requests of the increasingly complex context in which they operate.
Many companies, therefore, have oriented themselves towards new objectives centred not only on profit but also on the creation of value: a broader, "pluri-dimensional" value shared by all stakeholders within a new interdependent dynamism and inspired by a multitude of references (such as the UN Global Compact, the AA1000 Guidelines, the Global Reporting Initiative, SA8000, etc.). The perception of the concept of corporate responsibility and its possible practical application in a wine-growing company is the subject of continuous debate, always in search of a globally shared interpretation. Sustainable and social initiatives require, in fact, planning, monitoring and the enhancement of knowledge: a process that demands an approach tirelessly oriented to continuous improvement. The aim of this paper is to delineate efficient ways of promoting CSR, considering wine SMEs' specific profiles and needs. Moreover, the paper highlights the case of Feudi di San Gregorio S.p.A. The company has always pursued management models capable of generating social value (in conjunction with economic value) at the systemic level (so-called shared value). This allows us to evaluate the contribution that the CSR instruments adopted have made to building the organization's sustainability in its three dimensions: economic, environmental and social.

Introduction

The progressive extension of "social responsibility" to the business world is defining a challenging role for companies. Similarly, stakeholders are now more involved in adopting behaviors in line with the principles of sustainability and, above all, in acquiring propositional and coherent attitudes in processes relevant to concrete social innovation.
Due to the central role of stakeholders, a reinterpretation of social responsibility is being drawn up, from corporate social responsibility to company stakeholder responsibility, through an approach that favors sustainability as a correct, fair and transparent form of value creation for all stakeholders. Therefore, the need has emerged to improve information flows on organizational and socio-environmental performance, in the awareness that traditional accounting documents of an economic nature have significant limits in accounting for the multiplicity of intangible assets such as reputation, trust, and consensus. Although these assets make a significant contribution to the process of wealth creation generated by the company, they do not find adequate accounting in operating budgets. A process has thus started that aims to broaden corporate reporting beyond the narrow and strongly structured confines of economic-financial capital, to involve communication on the social value of the business. All stakeholders, internal and external, financial and non-financial, have expressed this requirement of accountability and participate in it in a concrete and coherent way, in order to promote the transition from models based on corporate social responsibility to models based on company stakeholder responsibility. These models require, however, the involvement of SMEs (small and medium-sized enterprises), through innovative approaches to social responsibility, which do not involve the dissemination of "standard" practices, but promote entrepreneurial action capable of successfully combining responsibility and competitiveness. The aim of this paper is the critical analysis of the evolutionary dynamics that have characterized the concept of corporate social responsibility, in order to examine a best practice among Italian wine SMEs. The work is articulated into four sections.
Initially, the difficulties of SMEs in the wine sector in adopting the tools of Corporate Social Responsibility (CSR) are analyzed. Wine SMEs are often characterized by a concrete social commitment that nevertheless goes unrecognized, as it takes an almost "intimate" form, an expression of the ideal drive of the entrepreneur, who carries out social interventions as an expression of his own personal sphere rather than of the organization. However, the involvement of wine SMEs, and in general of all SMEs, is necessary for the "building" of lasting sustainability, as it is capable of integrating growth in competitiveness, environmental protection and social development. This is because of the extraordinary synergies that may arise from the structured adoption of CSR-oriented pathways and the promotion of local development-oriented dynamics. We then analyze the management tools for CSR adopted by an Italian SME, Feudi di San Gregorio S.p.A., in order to explore the opportunities and the main critical elements that derive from a strategic drive towards improving performance in a holistic approach. In the final section we outline the main considerations, highlighting the main outcomes of the study.

The Socio-Environmental Dimension in Wine SMEs

The established CSR concept requires organizations to pursue socially qualified economic goals by implementing integrated management of the different aspects of cost-effectiveness, environmental protection, improvement of employment conditions, and equity and social cohesion. Social responsibility is a complex concept that is constantly evolving and therefore difficult to operationalize. The actions, models and tools for CSR are varied and, as we have seen, range from ethical codes to environmental management systems, from ethical certification to social marketing to various types of socio-environmental reporting tools [1].
They are essentially designed for large companies, often operating in several countries, and therefore more exposed to stakeholder pressure on environmental and social performance. This set of tools is therefore hardly adoptable by SMEs, which constitute the backbone of the productive fabric of the current enlarged Europe and, as is well known, of our country. Compared with other developed European economies, however, Italian SMEs are characterized by peculiar traits, among which a smaller average size in terms of number of employees, relative financial vulnerability, and a prevailing family-type ownership structure. The involvement of SMEs is therefore indispensable for the "building" of lasting sustainability, as it is capable of integrating growth in competitiveness, environmental protection and social development. This is in consideration of the extraordinary synergies that may arise from the structured adoption of CSR-led pathways and the promotion of dynamic local development. However, a careful analysis of national and international dynamics shows that the orientation of entrepreneurial organizations towards social responsibility inevitably reflects company size. Indeed, the adoption of the relevant instruments is really limited among SMEs, in particular among wine SMEs, while, on the contrary, it represents a prerogative of larger organizations. This is due to a multitude of reasons, among which it is worth highlighting that wine SMEs, on the one hand, show little knowledge of the practices and tools that can be adopted and, on the other, perceive their implementation as excessively difficult. In fact, the proper implementation of social responsibility pathways requires specific skills and high organizational capacity, coupled with a strong focus on adequate and incisive communication policies: all factors generally absent in smaller organizations.
These factors are compounded by others, including the limited availability of financial resources, which makes wine SMEs little inclined to make plans and investments with returns in the medium-long term, as the realization of correct and effective social responsibility strategies and the structured, lasting involvement of stakeholders requires. However, although this aspect is still little explored from a theoretical point of view, investigations into the socio-environmental behavior of wine SMEs in Italy have highlighted a fair degree of attention to aspects linked to CSR. Indeed, such analyses, based on actual behavior towards the main stakeholders rather than on the formal adoption of instruments, affirm that socio-environmental commitment is not a distant or marginal element for wine SMEs and that they are often socially responsible. This is because, alongside their known weaknesses, wine SMEs have strong points, genuine peculiarities of their own, able to facilitate a path oriented toward a Triple Bottom Line approach [2]. SMEs have a strong and immediate identification with the figure of the entrepreneur and his central role in decision making; flexibility and ease of adaptation to changes in the context of reference; and interpersonal relationships characterized by broad involvement, with a strong emphasis on human and personal values, due to the widespread ability to develop a web of positive relationships with the informality typical of small contexts. These strengths essentially consist of a strong attention to the needs of employees and those of the territory, which often comes from the value system of the entrepreneur-owner, or of the family of reference in the case of family businesses.
Moreover, the deep roots of wine SMEs in the local socio-economic context allow a close link between enterprise and territory, recalling the modus operandi that preceded the phenomenon of globalization, a link that feeds informal relationships and forms of mutual help. Productive districts also often have a social value, which transforms them into areas of development of technical and commercial knowledge, entrepreneurial culture, educational and training structures, and services to workers and their families. It is thus necessary to delineate innovative approaches, designed for wine SMEs, wherein CSR is interpreted as company stakeholder responsibility, in order to avoid the diffusion of "standard practices" and to promote "corporate behaviors" capable of effectively combining responsibility and competitiveness [3]. These approaches will have to provide information and awareness-raising able to highlight the potential effects on competitive performance. This creates a stable, systematic and planned link between socio-environmental commitment, stakeholder engagement, and enhancement of communication. Sometimes it is the low propensity to communicate, especially externally, that obscures social engagement in wine SMEs, making them "silent operators" of CSR. Greater dissemination, transparency and communication of good practices and socially responsible behaviors could represent an important multiplier, capable of generating a sort of "domino effect" in a context such as that of wine SMEs, where imitation and reinterpretation are important ways of managing and organizing innovation. Particularly fruitful could be the adoption of network-based approaches or forms of collaboration between groups of undertakings at the sectoral, territorial or district level, suitable for capturing economies of scale and identifying real priorities for local intervention [4].
In fact, in the case of small and very small organizations, the socio-environmental impact of the single actor is often insignificant, but it becomes very significant when considering the district as a whole. In this scenario, although the benefits in terms of image would be smaller, as they are shared among the enterprises involved, the costs (implementation, auditing, monitoring, reporting) would also become more sustainable, due to their distribution among the different actors concerned. Attention to socio-environmental aspects could become, for district businesses, an important lever of qualitative differentiation, capable of enhancing their wealth of intangible resources. Among these are a higher level of staff motivation, reputation among the social partners and, more generally, a renewed image of Made in Italy, where social commitment represents a key factor of competitiveness and differentiation.

The CSR in Italian SMEs: The Feudi di San Gregorio S.p.A. Experience

The Feudi di San Gregorio S.p.A. group is made up of companies operating in the wine sector, which in a holistic way adopt the principles of quality, transparency, fairness, professionalism and hospitality on which the business modus operandi is based [5]. The vision, globally shared, is born of the goal of spreading a wine culture in Italy and the world, oriented to the valorisation of the territory, where wine traditions and innovation, history and the future, coexist and evolve synergistically. Feudi di San Gregorio S.p.A. has a hybrid configuration, a mixture of mechanical bureaucracy, adhocracy and simple structure. In particular, the organization shows a top-down power flow through the line units, a limited support component, and a highly standardized operating core, with continuous interaction and value co-creation. The organizational culture of Feudi di San Gregorio S.p.A. is constructive.
The doctrine identifies the general lines of this type of organizational culture by defining specific rules and, for each of them, the related organizational characteristic. Within the group, two types of subculture coexist, one reinforcing (tradition) and one orthogonal (innovation), and their coexistence makes Feudi di San Gregorio a company able to grasp the needs of the market in its entirety. In recent decades, the pressures emerging from the competitive arena in which Feudi di San Gregorio S.p.A. operates have led management to redefine its approach, strategies and organizational planning in order to achieve better economic, environmental and ethical performance. Feudi di San Gregorio has therefore assumed precise economic and moral responsibilities, based on socially shared ethical principles, in response to a series of demands from the community and the territory, as well as from the institutional and economic context. In a symmetrical way, it has begun over time to progressively develop and subsequently adopt quality and social quality management programs. In this perspective, Feudi di San Gregorio S.p.A. has implemented ISO 22000 and product certifications (e.g. BRC and IFS). Only recently has it equipped itself with social responsibility tools to create true good corporate citizenship, designed to provide guarantees, protection and support for the well-being of the community in which it operates. Corporate management, in fact, aware of the impacts of its activities on society and the environment, has tried to achieve a proper integration between the company's economic goals and social goals through voluntarily and consciously adopted policies. Feudi di San Gregorio S.p.A. acknowledges that the responsibilities arising from the exercise of its activity go beyond the traditionally considered players, i.e.
management, shareholders and creditors, to include a wider circle of stakeholders, including employees, suppliers, consumers and society in general, i.e. all those who in various capacities are affected by the consequences of corporate policies. Therefore, management has rethought corporate strategies in a socially responsible way. It has taken initiatives that enable the organization to strive towards sustainable development through a trade-off between economic performance and social performance. This serves to internalize and make tangible the organization's social dimension, through careful consideration of the expectations of community members and the identification of objectives compatible with stakeholders' interests. It has therefore defined appropriate procedures for the selection, qualification and monitoring of suppliers and subcontractors, based on their ability to comply with the requirements of the standard. This is in order to activate a virtuous circle aimed at involving a growing number of companies in the "ethical path". Feudi di San Gregorio S.p.A. has also envisaged the involvement of civil society in the process of monitoring the behavior of the company, through interviews with local organizations, trade union representatives and other stakeholders, both in the phase of obtaining certification and in the periodic monitoring of its maintenance.

The CSR in Feudi di San Gregorio S.p.A.: Operational Steps

The path to transforming business ethics into a concrete CSR program has been marked by some particularly significant moments, outlined below.

Integration of Social Goals into the Company's Mission

Management has deliberately chosen to include social and ethical values among the goals to be pursued, regardless of any economic benefits that this may entail.
It is in this way that aspects of the environment, the institutions, the centrality of the person and human rights, and the transparency of information and financial transactions become the subject of risk management strategies. To this end, an appropriate management training program on CSR has been implemented at this stage, providing the knowledge required to handle these issues in a conscious, organic and planned way. Given the continuous evolution of the subject, there are plans to organize periodic update programs to keep its strategies modern and innovative.

Formal Assumption of Social Responsibility and Enlargement of Risk Management Strategies

Management is aware that social commitment is not a point of arrival but, above all, a starting point, one that is realized when the pursuit of ethical, social and environmental values is formally enshrined in the acts defining the modalities and aims of the activity. To this end, Feudi di San Gregorio S.p.A. has developed an ethical code as its main instrument of self-regulation and implementation of ethical-social responsibility. With the preparation of the ethical code, Feudi di San Gregorio S.p.A. has defined the behavioral guidelines and the values that inspire its relationships with all stakeholders, both internal and external. It has introduced a clear and explicit definition of the ethical and social responsibilities of each participant in the organizational structure towards stakeholder groups, laying the foundations for a fair and effective management of transactions and human relations, and supporting a reputation that creates a climate of trust and mutual cooperation. A crucial role is therefore played by the constant commitment of Feudi di San Gregorio S.p.A. to promoting knowledge of the ethical code among all members of the organization and the other stakeholders who directly or indirectly interact with it.
Constructive dialogue with the various stakeholders, through listening to their demands and balancing them with the company's strategies, in fact aims to activate a virtuous circle of continuous improvement of its performance. Feudi di San Gregorio S.p.A. has provided for the establishment of a supervisory body (the Ethics Committee) with powers of initiative and control, tasked with monitoring, through appropriate procedures, the operation of and compliance with the code, as well as ensuring that it is kept up to date. The system of implementation and control of the ethical code of Feudi di San Gregorio S.p.A. also envisages the identification of an ethics officer who has operational responsibility for the corporate ethics program, reports hierarchically to the Ethics Committee, executes its decisions and informs it of his or her activities. Feudi di San Gregorio S.p.A. also uses internal ethical auditing, an independent and objective assurance and support activity aimed at improving the effectiveness and efficiency of the organization. The implementation of the ethical code of conduct through concrete management tools thus becomes a further statement of the organization's real will to take on the concerns of stakeholders and to meet ethical and social values which, though primarily addressed to the global community, will in the long run represent real added value, economic value included, for the enterprise.

Identifying Stakeholders

Once social commitment had been integrated, even formally, into the goals to be achieved, it was necessary to implement it first in favor of those individuals, entities or social groups that fall within the notion of stakeholders.
Since the stakeholder concept lends itself to a fairly wide-ranging interpretation, at this stage corporate management first identified in a more detailed way, by defining a stakeholder map, those most likely to bear the negative consequences of the company's activity (Fig. 1). With respect to that community, Feudi di San Gregorio S.p.A. has implemented appropriate policies for the containment and prevention of social costs, and established transparent forms of dialogue and involvement with their representatives. The behaviors that characterize the modus operandi of each participant in the organizational structure of Feudi di San Gregorio S.p.A. are, in fact, based on observance of the law, regulations, statutory provisions and codes of self-discipline, and on ethical integrity and fairness, in a framework of transparency, honesty, good faith and full respect for the rules of competition. The growth strategy of Feudi di San Gregorio S.p.A. is based, in particular, on the following values:

Transparency. In the conduct of its relations, Feudi di San Gregorio S.p.A. provides its interlocutors with clear, complete and timely access to the information needed to interpret correctly and transparently the economic, social and environmental impacts of its business activities.

Correctness. All the actions, operations and negotiations carried out and, in general, the behaviors adopted by the members of the organization in the course of their work are inspired by the utmost fairness, honesty, clarity and legitimacy, in both form and substance.

Consistency. Feudi di San Gregorio S.p.A. adopts and maintains over time a conduct in line with the values, the mission and the operating principles of the company. These are considered the foundation of strategic planning, objectives and operational management, and also contribute to determining its corporate identity and entrepreneurial philosophy.
Similarly, under the same conditions, Feudi di San Gregorio S.p.A. prefers interlocutors whose behavior is consistent with these values.

Professionalism. All Feudi di San Gregorio S.p.A. activities must be carried out with professional commitment and rigor, respecting the roles and responsibilities assigned, in order to protect the prestige and reputation of the company.

The goals of the organization, and the proposal and implementation of projects, investments and actions, are aimed at increasing the company's assets, management, technology and knowledge values, as well as creating value and well-being for all stakeholders in the long term [6]. Bribery, illegal favors, collusive behavior, and pressure exerted directly and/or through third parties to obtain personal and career advantages for oneself or others are prohibited without exception. Feudi di San Gregorio S.p.A. uses multiple tools for stakeholder engagement, ranging from on-line consultations to individual meetings and/or interviews, as well as specific surveys to gauge the concerns of certain stakeholder groups on critical aspects. Identifying stakeholders as recipients of safeguards formalized in codes of conduct, and providing accurate risk management tools for the pursuit of social goals, brings with it a clear awareness of the value and benefits of a prevention-oriented business policy (Fig. 2). The aim is to assess ex ante and avoid conduct that could generate high social costs, for example in environmental or occupational safety matters, where appropriate measures can prevent or minimize the risk of accidents.
[Fig. 2 summarizes the benefits of stakeholder engagement: a better understanding of the environment in which the company operates (new opportunities, possibility of achieving goals not otherwise achievable); understanding the needs and expectations of stakeholders (new processes, new products); growth of confidence in the enterprise and its social legitimacy; improvements in risk management; and better capitalization of relationships with internal and external resources.]

In addition, the responsibility for implementing an internal control system and an effective risk management system is shared at every level of the organizational structure of Feudi di San Gregorio S.p.A. As a result, all members of the organization, within their functions and responsibilities, are committed to defining and actively participating in the proper functioning of the internal control and risk management system.

Internal and External Communication

Feudi di San Gregorio S.p.A. introduces its internal and external stakeholders to its principles and behavioral norms by publicizing the ethical code through its own website and through specific communication activities (such as the delivery of a copy of the code to all internal and external collaborators at the time of hiring or at the start of the collaboration, or the inclusion of an informative note on the adoption of the code in all contracts, etc.). In order to ensure that all employees correctly understand the values and principles set out in the ethical code, Feudi di San Gregorio S.p.A. provides a training activity aimed at creating a shared understanding of the contents of the code, also providing tools for awareness and knowledge of the mechanisms and procedures needed to translate ethical principles into concrete day-to-day behavior.
In particular, the training initiatives are implemented with modalities and content appropriate to the role played in the company by each collaborator, and are developed as a path designed to complete vocational training and enhance personal development.

Conclusion

The paper highlights that there is still space to introduce best practices in a systematic way and to adopt the correct CSR instruments in wine SMEs. Moreover, these must become an integrated part of managerial strategies, with reference to both the internal and the external dimension of companies. As regards the internal dimension, possible interventions principally concern the management of human resources (continuous education and training, flexibility of working hours, equal opportunities) and environmental management (energy saving, co-generation, emission reductions, use of recycled materials). As regards the external dimension, the relationship with the main local communities, commercial partners and suppliers is taken into consideration. It is essential to support wine SMEs in acquiring the knowledge needed to valorise their social commitment. That is feasible through the creation of a stable, systematic and planned connection between socio-environmental commitment and the search for visibility for what has been created. The scarce attention paid to outward communication often hides wine SMEs' social commitment today, characterizing them as "silent operators" of CSR. Moreover, for wine SMEs to acquire competence and capacity in terms of CSR, they should promote forms of collaboration and exchanges of experience, perhaps with large companies that have already established significant knowledge and know-how in the field of CSR.
Abstractive Meeting Summarization Using Dependency Graph Fusion

Automatic summarization techniques on meeting conversations developed so far have been primarily extractive, resulting in poor summaries. To improve this, we propose an approach to generate abstractive summaries by fusing important content from several utterances. Any meeting is generally comprised of several discussion topic segments. For each topic segment within a meeting conversation, we aim to generate a one-sentence summary from the most important utterances using an integer linear programming-based sentence fusion approach. Experimental results show that our method can generate more informative summaries than the baselines.

INTRODUCTION

Meeting summarization helps both participants and non-participants by providing a short and concise snapshot of the most important content discussed in the meetings. A recent study revealed that people generally prefer abstractive summaries [4]. Table 1 shows human-written abstractive summaries along with human-generated extractive summaries from a meeting transcript. As can be seen, the utterances are highly noisy and contain unnecessary information. Even if an extractive summarizer can accurately classify these utterances as "important" and present them to a reader, it is hard to read and synthesize information from such utterances. In contrast, human-written summaries are compact and readable. We propose an automatic way of generating short and concise abstractive summaries of meetings. Any meeting conversation includes dialogues on several topics. For example, in Table 1, the participants converse on two topics: design features and selling prices. Given the most important sentences within a topic segment, our goal is to generate a one-sentence summary from each segment and to append these summaries to form a comprehensive summary of the meeting. Moreover, we also aim to generate summaries that resemble human-written summaries in terms of writing style.
To aggregate the information from multiple utterances, we adapt an existing integer linear programming (ILP) based fusion technique [1]. The fusion technique is based on the idea of merging dependency parse trees of the utterances. The trees are merged on the common nodes, which are represented by word and part-of-speech (POS) combinations. Each edge of the merged structure is represented as a variable in the ILP objective function, and the solution decides whether the edge is preserved or discarded. We modify the technique by introducing an anaphora resolution step and an ambiguity resolver that takes the context of words into account. Further, to solve the ILP, we introduce several constraints, such as the desired length of the output. To the best of our knowledge, our work is the first to address the problems of readability, grammaticality and content selection jointly for meeting summary generation without employing a template-based approach. We conduct experiments on the AMI corpus, which consists of meeting transcripts, and show that our best method outperforms an extractive model significantly on ROUGE-2 scores (0.048 vs 0.026).

PROPOSED APPROACH

Dependency fusion on meeting data requires an algorithm that is robust to noisy data, as utterances often contain disfluencies. Our work applies fusion to all the important utterances within a topic segment to generate the best sub-tree that satisfies the constraints and maximizes the objective function of the optimization problem.
The anaphora resolution step replaces pronouns with the original nouns in the previous utterance that they refer to, in order to increase the chances of merging. Consider, for example, a pair of utterances in which the first mentions a new remote control and the second refers to it with the pronoun it. Without pronoun resolution, these two utterances cannot be merged. Once we apply anaphora resolution, it in the second utterance is replaced with a new remote control, and both utterances can then be fused into a common structure. The utterances are parsed using the Stanford dependency parser. Every individual utterance has an explicit ROOT node. We add two dummy nodes to the graph, a start node and an end node, to ensure defined start and end points of the merged structure. The words from the utterances are iteratively added to the graph; words that have the same word form and POS tag are assigned to the same node.

Ambiguity resolver. Suppose a new word wi has k ambiguous nodes to which it can be mapped; these are referred to as mappable nodes. For every ambiguous mapping candidate, we first find the words to the left and right of the mappable node in its sentences, and then count the words in each direction that are also found in the corresponding direction around wi. Finally, wi is mapped to the node with the highest directed-context overlap.

ILP formulation. Figure 1 shows the sub-graph (marked using blue bold arrows) that we wish to retain from the merged graph structure to generate a one-sentence summary from several merged utterances. All the sentences generated from each meeting transcript are concatenated to produce the final abstractive summary. We need to maximize the information content of the generated sentence while keeping it grammatical. We model the problem as an integer linear programming (ILP) formulation, similar to the dependency graph fusion proposed by Filippova and Strube [1].
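The directed-context tie-breaking used by the ambiguity resolver above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the function names and the fixed context window are our own assumptions, and a real system would operate over (word, POS) nodes in the merged graph rather than plain token lists.

```python
def directed_context(tokens, idx, window=3):
    """Return the sets of words to the left and right of position idx."""
    left = set(tokens[max(0, idx - window):idx])
    right = set(tokens[idx + 1:idx + 1 + window])
    return left, right

def resolve_ambiguity(new_tokens, new_idx, candidates):
    """Map the word at new_idx onto the candidate node whose surrounding
    words overlap most, direction by direction, with the new word's own
    context. candidates: list of (tokens, idx) pairs, one per mappable node.
    Returns the index of the winning candidate."""
    new_left, new_right = directed_context(new_tokens, new_idx)
    best, best_score = None, -1
    for cand_id, (tokens, idx) in enumerate(candidates):
        left, right = directed_context(tokens, idx)
        # Count shared words separately in each direction.
        score = len(left & new_left) + len(right & new_right)
        if score > best_score:
            best, best_score = cand_id, score
    return best
```

For instance, the word "control" in a new utterance would be mapped onto an existing "control" node that appears in a similar remote-control context rather than onto one used in the sense of controlling a budget.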
The directed edges in the graph (binary variables) are represented as x_{g,d,l}, where g, d and l denote the governor node, the dependent node and the label of an edge, respectively. We maximize the following objective function:

maximize f(X) = Σ_x x_{g,d,l} · p(l | g) · I(d) · (p_x / N)  (1)

As shown in Equation (1), we introduce three different terms: p(l | g), I(d) and p_x/N. Each relation in a dependency graph consists of the governing node, the dependent node and the relation type. The term p(l | g) denotes the probability of a label given a governor node g. For every node (word and POS) in the entire corpus, this probability is computed as the frequency of a particular label divided by the sum of the frequencies of all the labels emerging from that node. In this work, we calculate these values using the Reuters corpora [5] to obtain dominant relations from a non-conversational style of text. For example, Table 2 shows the probabilities of outgoing edges from the node (produced/VBN). This term captures the importance of grammatical relations at a node, so that only the relations that are more dominant from a node are preferred. The term I(d) denotes the informativeness of a node, calculated using Hori and Furui's formula [2]. The last term in Equation (1) is based on the idea of lexical cohesion. Towards the end of a segment, more important discussions generally take place, concluding one topic before another starts. To take this into account, we introduce the term p_x/N, where N and p_x denote the total number of extracted utterances in a segment and the position of the utterance (to which the edge x belongs) in the set of N utterances, respectively. To solve the above ILP problem, we impose a number of constraints. Some of the constraints are directly adapted from the original ILP formulation [1]: for example, we use the same constraint restricting each node to one incoming edge, and we impose the connectivity constraint to ensure a connected graph structure.
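The per-edge objective coefficients described above can be assembled from simple frequency counts. The sketch below is illustrative only: function names are hypothetical, the label probabilities are estimated from a toy edge list rather than the Reuters corpora, and the informativeness score I(d) is taken as a given input (the paper computes it with Hori and Furui's formula).

```python
from collections import Counter, defaultdict

def label_probabilities(edges):
    """Estimate p(l | g): for each governor node (word, POS), the fraction
    of its outgoing edges carrying each dependency label.
    edges: iterable of (governor, dependent, label) triples."""
    counts = defaultdict(Counter)
    for gov, _dep, label in edges:
        counts[gov][label] += 1
    probs = {}
    for gov, label_counts in counts.items():
        total = sum(label_counts.values())
        probs[gov] = {l: n / total for l, n in label_counts.items()}
    return probs

def edge_weight(p_l_given_g, informativeness, position, n_utterances):
    """Objective coefficient for one edge: p(l|g) * I(d) * (p_x / N),
    so edges from later utterances in a segment are favoured."""
    return p_l_given_g * informativeness * (position / n_utterances)
```

An off-the-shelf ILP solver would then maximize the sum of these coefficients over the selected binary edge variables, subject to the structural constraints.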
Further, we restrict the subtree to have just one start edge and one end edge. This preserves a single ROOT node and limits the generated subtree to one end node. We also limit the generated subtree to a maximum of 15 nodes, which controls the length of the summary sentence. We add a few linguistic constraints that ensure the coherence of the output, such as allowing each node a maximum of one determiner. We also impose constraints to prevent cycles in the graph structure, since otherwise finding the best path between the start and end nodes might be difficult. The final graph is linearized to obtain a coherent sentence; in the linearization process, we order the nodes based on their original ordering in the utterance.

EXPERIMENTAL RESULTS

The AMI Meeting corpus contains 20 meeting transcripts in the test set, along with their corresponding abstractive (human-written) summaries and annotations of topic segments. ROUGE is used to compare the content selection of the different approaches. We compared the content selection of our approach to an extractive summarizer [3], which serves as a baseline. We also compared our model without anaphora resolution to see the impact of resolving pronouns. All the summaries were compared against the human-written summaries as references. The results in Table 3 show that our method outperforms the other techniques on both ROUGE-2 (R-2) and ROUGE-SU4 (R-SU4) recall scores. Moreover, we computed a coarse estimate of grammaticality using the log-likelihood score (LL) from the parser. Our technique significantly outperforms the extractive method. In future work, we plan to design an end-to-end framework for summary generation from meetings.
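ROUGE-2 recall, the main content-selection metric reported above, reduces to clipped bigram overlap against a reference summary. A minimal single-reference sketch follows; the official ROUGE toolkit additionally supports stemming, stop-word removal and multi-reference aggregation, none of which are modeled here.

```python
from collections import Counter

def bigrams(tokens):
    """Adjacent token pairs of a token list."""
    return [tuple(tokens[i:i + 2]) for i in range(len(tokens) - 1)]

def rouge2_recall(candidate, reference):
    """Fraction of reference bigrams also found in the candidate,
    with per-bigram counts clipped as in standard ROUGE."""
    cand = Counter(bigrams(candidate.lower().split()))
    ref = Counter(bigrams(reference.lower().split()))
    matched = sum(min(count, cand[bg]) for bg, count in ref.items())
    total = sum(ref.values())
    return matched / total if total else 0.0
```

For example, rouge2_recall("the cat sat", "the cat sat on the mat") matches 2 of the 5 reference bigrams, giving 0.4.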
Characterising microbial protein test substances and establishing their equivalence with plant-produced proteins for use in risk assessments of transgenic crops

Most commercial transgenic crops are genetically engineered to produce new proteins. Studies to assess the risks to human and animal health, and to the environment, from the use of these crops require grams of the transgenic proteins. It is often extremely difficult to produce sufficient purified transgenic protein from the crop. Nevertheless, ample protein of acceptable purity may be produced by over-expressing the protein in microbes such as Escherichia coli. When using microbial proteins in a study for risk assessment, it is essential that their suitability as surrogates for the plant-produced transgenic proteins is established; that is, the proteins are equivalent for the purposes of the study. Equivalence does not imply that the plant and microbial proteins are identical, but that the microbial protein is sufficiently similar biochemically and functionally to the plant protein such that studies using the microbial protein provide reliable information for risk assessment of the transgenic crop. Equivalence is a judgement based on a weight of evidence from comparisons of relevant properties of the microbial and plant proteins, including activity, molecular weight, amino acid sequence, glycosylation and immuno-reactivity. We describe a typical set of methods used to compare proteins in regulatory risk assessments for transgenic crops, and discuss how risk assessors may use comparisons of proteins to judge equivalence.

Introduction

Most transgenic crops for commercial use have been genetically engineered to produce new proteins. Among other things, the proteins may improve the crop's resistance to insect attack, confer tolerance to various herbicides, make the crop more nutritious, improve its processing properties, or act as markers to identify the crop.
Risk assessments for the consumption or cultivation of such crops use studies that test relevant properties of the novel protein to predict the likelihood that the crops will harm human health or the environment (Hérouet et al. 2005; Craig et al. 2008; Romeis et al. 2008; Sanvido et al. 2012). Many studies for risk assessment require grams of highly (≥90 %) pure protein. Often it is not possible to prepare the required amount of purified protein from transgenic plants because the proteins are produced in low amounts and their purification from the plant matrix is technically extremely difficult, if not impossible (Hérouet et al. 2005). It is, however, relatively easy to produce sufficient protein of acceptable purity by over-expressing the protein in fermentable microbes, such as Escherichia coli. Microbial proteins can be purified from disrupted bacterial cells using standard methods, including precipitation and chromatography. After purification, the protein is concentrated, desalted and, in most cases, lyophilised. The resulting powder is a microbial test substance, which is used to measure properties of the protein considered relevant for assessing risks. For non-pesticidal proteins, the requirement for large amounts of protein is mainly due to mammalian toxicity studies. A typical study for evaluation of acute oral toxicity (e.g., based on the US Environmental Protection Agency guideline OCSPP (formerly OPPTS) 870.1100) requires a minimum of 5 male and 5 female mice each to be given a single dose of at least 2,000 mg protein per kg body weight. Depending on the weight of the animals, this study alone can use about 2 g of protein. Sometimes, a repeated-dose oral toxicity study is used (e.g., based on OECD guideline 407). Such studies require a minimum of 5 male and 5 female mice each to be given a dose of at least 1,000 mg protein per kg body weight daily for 28 days, which may require over 25 g of active protein (Delaney et al. 2008).
In addition to mammalian toxicology studies, regulations require pesticidal proteins to undergo ecotoxicology testing, and some of the studies may use large amounts of protein. A typical study of acute oral toxicity in birds (e.g., based on US EPA guideline OCSPP 850.2100) requires a minimum of 5 male and 5 female birds (usually bobwhite quails) each to be given a single dose of at least 2,000 mg protein per kg body weight, representing over 2 g of protein. Studies of honey bee brood (Oomen et al. 1992) may expose each of 3 hives to 1 L of sucrose solution containing the protein at 10 times the concentration in the pollen of the transgenic crop. If the pollen contains 50 µg protein per g fresh weight, and the density of sucrose solution is 1.2 g/ml, 1.8 g of protein would be needed for the study. High purity of the protein test substance aids interpretation of the results of studies. If adverse effects are observed in a toxicology or ecotoxicology study, one needs to be confident that the effects are caused by the protein and not by an impurity in the test substance. In some ecotoxicology studies, it is possible to expose animals to high concentrations of protein via diets containing tissue from the transgenic crop. Statistically significant differences between groups of organisms fed material from a transgenic crop and groups fed similar non-transgenic crop tissue are common (e.g., Wandeler et al. 2002; Obrist et al. 2006; Faria et al. 2007; Rosi-Marshall et al. 2007; Bøhn et al. 2010). Interpretation of such results regarding effects of the transgenic protein is difficult because the transgenic and non-transgenic crops differ by more than just the presence or absence of the transgene, and therefore the test materials almost certainly differ by more than just the presence or absence of the protein coded by that transgene (e.g., Parrott 2008).
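The protein amounts quoted above follow directly from dose × body weight × number of animals, or, for the honey-bee study, concentration × solution mass × number of hives. A small sketch of that arithmetic; the function names and the animal body weights used in the usage note are illustrative assumptions, not values taken from the guidelines.

```python
def acute_oral_protein_g(n_animals, body_weight_kg, dose_mg_per_kg):
    """Total protein (g) for a single-dose acute oral study:
    animals x body weight x dose, converted from mg to g."""
    return n_animals * body_weight_kg * dose_mg_per_kg / 1000.0

def honeybee_brood_protein_g(pollen_conc_ug_per_g, factor, volume_l,
                             density_g_per_ml, n_hives):
    """Protein (g) for a honey-bee brood study in which sucrose solution
    is dosed at `factor` times the pollen concentration."""
    conc_ug_per_g = pollen_conc_ug_per_g * factor   # protein per g solution
    solution_g = volume_l * 1000.0 * density_g_per_ml  # mass of 1 batch
    return n_hives * conc_ug_per_g * solution_g / 1e6  # ug -> g
```

With the figures from the text, honeybee_brood_protein_g(50, 10, 1, 1.2, 3) gives the 1.8 g quoted for the bee-brood study, and acute_oral_protein_g(10, 0.1, 2000), assuming 0.1 kg animals for illustration, gives the roughly 2 g quoted for an acute oral study.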
While there are clear advantages of microbial test substances for regulatory studies, it is essential that the substances are properly characterised. Their purity must be estimated so that the correct amount of test substance is used to give the required amount of protein in a given study. Information about solubility is crucial, as many studies require aqueous solutions of the test substance; failure of the test substance to dissolve or remain in solution could invalidate a study. It is also crucial to show that the protein in the microbial test substance is a suitable surrogate for the protein produced by the transgenic crop, because there may be intended or unintended differences between the microbial and plant proteins. Corroboration of the hypothesis of no significant difference between the microbial and plant proteins in relevant properties is taken as evidence that the proteins are functionally and biochemically equivalent for the purposes of studies that inform risk assessments for the transgenic crop. This paper provides an overview of characterisation and equivalence studies for microbial test substances that support their use in risk assessment studies as surrogates for plant-expressed proteins. (Studies may also be conducted to test whether a protein produced in one transgenic event is functionally and biochemically equivalent to the protein produced in a second transgenic event; these are often termed "bridging studies", but are identical to the equivalence studies described here.) Its purpose is to show the variety of data that is produced in order to judge the robustness and applicability to transgenic crop risk assessments of studies that use microbial test substances. The paper does not provide exhaustive detail about experimental design, nor does it provide a complete set of data for a single test substance, such as provided by Fuchs et al. (1993a, b), Gao et al. (2004, 2006b) and Hérouet et al. (2005).
Instead, it concentrates on general principles, summaries of current methods, potential problems and interpretation of multiple lines of evidence to provide an up-to-date review of current practice in establishing the suitability of microbial proteins to act as surrogates for plant-produced transgenic proteins. As discussed below, a microbial test substance need not be identical to the plant protein for which it is a surrogate. Equivalence means that the microbial and plant proteins are deemed sufficiently similar for the purposes of specific studies that contribute data for risk assessments. An important corollary of this definition is that it is not feasible to devise a procedure that will determine equivalence for all test substances for all uses. The methods described below should be regarded as options for a risk assessor to build a weight of evidence to judge whether or not an individual test substance is suitable for a particular use. They are not a series of tests that trigger an objective "pass" or "fail" decision based on universal criteria that distinguish equivalence from non-equivalence.

Solubility

Determination of the solubility of a protein test substance in aqueous solutions is essential for its further characterisation, because most analytical techniques require the substance in solubilised form. The solubility determination of a protein test substance is therefore commonly the first experiment conducted during test substance characterisation. Furthermore, the solubility determination provides important information about the possible delivery vehicles in animal toxicity studies and non-target organism effects studies. The solubility of microbial test substances in water and other aqueous solutions can be determined by a simple optical test.
Defined volumes of the solvent are added to the lyophilised test substance and its solubility, the highest concentration at which the test substance is dissolved completely, is determined by visual inspection, confirmed by the analytical methods of total protein determination described below, or both. For toxicology studies, solubility of the test substance in water is desirable because it eliminates the possibility of side effects from other components of the solvent, such as buffers. Unfortunately, many protein test substances have limited solubility in water, meaning that buffers such as tris(hydroxymethyl)methylamine (TRIS), N-cyclohexyl-3-aminopropanesulfonic acid (CAPS), or various phosphate buffers are often required to dissolve test substances. Other additives, such as ethylenediaminetetraacetic acid (EDTA), dithiothreitol (DTT) or Tween 20, may also be needed to stabilise the protein in solution. For some studies, such as acute oral toxicity in mice, a homogeneous suspension of test substance may be an acceptable alternative to a solution. In studies that require aqueous solutions of the protein test substance, the best buffer is the simplest one that maintains the test substance in solution for the period of use. Where buffers are used, it may be necessary to determine whether they significantly affect the results. In non-target organism effects studies, for example, preliminary experiments to determine the effect of the buffer on the test species are advisable, and inclusion of control groups exposed to buffers in effects studies should be considered (Romeis et al. 2011). Sometimes it is possible to modify study designs to cope with test substances that are difficult to dissolve. The effect of proteins on the development of honeybee brood may be determined by exposing hives to an aqueous solution of sucrose in which the test substance is dissolved.
The sucrose solution is placed in feeders near the hive, and worker bees carry solution back to the hive and feed it to the developing brood (Oomen et al. 1992). For test substances that are easily maintained in solution, the required amount of protein could be delivered to a hive in a single 1 L batch of treated sucrose solution placed near the hive at the beginning of the experiment. For test substances that are difficult to keep in solution over a longer period, each hive could be exposed to 200 ml of treated solution on each of the first 5 days of the study.

Transgenic Res (2013) 22:445-460

Purity

The purity of a microbial test substance is usually determined in two stages. First, the proportion of the test substance that is protein is determined using standard laboratory methods such as BCA™ (bicinchoninic acid) (Hill and Straka 1988; Walker 1996), Bradford analyses (Bradford 1976), or, in cases of highly pure (>90 %) test substance preparations, spectrophotometrically by measuring absorbance at 280 nm (Gill and von Hippel 1989). Secondly, the proportion of protein that is the protein of interest (POI) is determined by sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) of the protein test substance, followed by staining with Coomassie Blue and quantitative densitometry (Fishbein 1971). The proportion of total protein comprising the POI is calculated as the area under the peak representing that protein divided by the total area under all peaks. The purity of the test substance is simply calculated as the proportion of the test substance comprising protein multiplied by the proportion of protein comprising the POI. Figure 1 shows a typical densitometric analysis of a Coomassie Blue stained SDS-PAGE gel. The purity of a protein test substance is reported as per cent POI weight-by-weight.
The purity of the protein test substance provides information for further analysis of the protein test substance, and for accurate dosing of the POI in mammalian toxicity and non-target organism effects studies. Highly pure test substances (>90 % POI) are preferred, because they reduce the probability of adverse effects arising from impurities such as proteins from the E. coli expression system (e.g., Franken et al. 2000). High purity test substances also allow the highest possible dosing of the POI at a minimal volume to animals by gavage where limit doses are required for acute toxicity exposure studies in rodents.

[Fig. 1 a Coomassie stained SDS-PAGE gel. Lanes 1 and 7: molecular weight standard SeeBlue® Plus2 (Invitrogen; bands indicated as kDa); lanes 2, 3, 4, 5 and 6: 0.3, 0.6, 1.2, 2.4 and 4.8 µg protein test substance. The molecular weight of Vip3Aa19 corresponds to ca. 89 kDa. b Densitometric analysis of the Coomassie stained SDS-PAGE gel using a laser densitometer. The signals derived from the individual protein bands are translated into peak areas (indicated by the numbers on the gel and on the densitometry graph). The peak area signals can be used to calculate the percentage of each protein within the total protein fraction of the test substance. The analysis showed that the protein of interest (Vip3Aa19) represented 91.4 % of the total protein fraction in the test substance. (Color figure online)]

Measures of test substance equivalence

Intactness and immuno-reactivity

Analysis of the molecular weight of the microbial and plant proteins provides information on whether they have been truncated or degraded in a sample; therefore, molecular weight is commonly called a measure of intactness. Molecular weight determination can also detect modifications of proteins, such as glycosylation, and insertions or deletions of amino acids. Immuno-reactivity refers to the ability of a protein to bind specific antibodies. Loss of immuno-reactivity may indicate modifications to a protein that change its biochemical or functional properties. Western blot analysis, also known as protein immuno-blotting, is a convenient method for comparing the intactness and immuno-reactivity of protein samples. In western blotting, proteins are separated by SDS-PAGE and transferred from the gel to a membrane in a second electrophoresis step called "blotting" (Burnette 1981). The proteins are immobilised on the membrane, which thereby acquires an exact copy of the original protein gel image. Once blotted, the POI can be detected using specific antibodies, allowing the POI to be identified in complex mixtures such as protein crude extracts from plants. Western blot analysis displays the apparent molecular weight of the POI by comparing its electrophoretic mobility with that of a molecular weight standard. Kurien and Scofield (2006) provide a recent review of western blotting techniques. Western blotting is a powerful technique for analysis of the immuno-reactivity and intactness of proteins of interest from different matrices. Side-by-side comparison of the apparent molecular weight of the proteins from different sources provides compelling evidence for equivalence because major differences in modification of the proteins would result in changes in mobility. Confirmation of the intactness of a protein within its matrix also supports the reliability of associated ELISA analyses, as breakdown of the protein could lead one to over-estimate its concentration. An example of western blotting to compare the intactness and immuno-reactivity of a microbial and plant POI is shown in Fig. 2. The analysis shows that mEPSPS derived from recombinant E. coli and from GA21 maize bind rabbit anti-EPSPS polyclonal antibodies, and have the same apparent molecular weight.
The loss of resolution observed for the mEPSPS protein bands derived from the maize crude extract is explained by the interference from large amounts of protein derived from the plant matrix. The endogenous maize EPSPS in the negative control (non-transgenic) maize extract appears as a faint band because the antibody is not able to discriminate between the native maize EPSPS and mEPSPS.

Intact mass

The determination of the molecular weight of a protein by western blots is relatively imprecise (Sadeghi et al. 2003). Exact estimates of intact protein masses can be obtained by mass spectrometry (MS). Two MS methods can be used to determine the intact mass of both microbial and plant proteins: electrospray MS, often implemented on a quadrupole-time-of-flight (Q-TOF) type mass spectrometer, and Matrix Assisted Laser Desorption Ionisation (MALDI) MS on a MALDI-TOF instrument (Sundqvist et al. 2007). For MS analysis of microbial proteins, Q-TOF analysis is preferred because it achieves higher mass accuracy than MALDI-TOF analysis (Sundqvist et al. 2007). Q-TOF machines are able to distinguish between proteins with single amino acid substitutions, or other low molecular weight modifications, such as methionine oxidation. Such differences would not be detected by MALDI-TOF MS. Plant-produced POIs can in principle be analysed by either Q-TOF or MALDI MS. MALDI is currently the method of choice because its greater sensitivity enables analysis of small amounts of plant POIs that are difficult to obtain in large, pure batches (Hérouet et al. 2005).

[Fig. 2 Western blot comparison of mEPSPS derived from recombinant E. coli and from transgenic maize. Lane 1: molecular weight standard SeeBlue® Plus2 (Invitrogen; bands indicated as kDa); lanes 2 and 3: 7.5 and 15 ng microbial mEPSPS, respectively; lanes 4 and 5: 7.5 and 15 ng mEPSPS from GA21 maize (crude extract), respectively; lanes 6 and 7: 7.5 and 15 ng mEPSPS from GA21 maize (purified using immunoaffinity chromatography), respectively; lanes 8 and 9: 3.5 and 6.9 µg total protein from non-transgenic maize, respectively. The molecular weight of mEPSPS corresponds to about 47.4 kDa]

Obtaining the precise mass of the POI provides direct evidence about the form of the protein present in the transgenic crop, and can make a strong case for sequence identity with the microbial protein; for example, one may be able to demonstrate that the plant protein is processed in a particular way, whether or not it has a leader sequence, or that it has not been unexpectedly glycosylated. No other method can provide such detail about the chemical form of the intact protein within the plant. Analysis of the intact mass of plant-produced POIs by MS presents significant problems. First, sufficient POI must be isolated from the plant and concentrated into a small volume (a few µL). Secondly, the POI must be of high purity, so that peaks from contaminating proteins or other compounds do not confound the analysis. In general, these problems are reduced the higher the concentration of the POI in the transgenic plant. Thirdly, the isolation method must not modify the mass of the POI. Polyphenol oxidation during extraction is a particular problem (Le Bourvellec and Renard 2012) and methods to reduce it, such as including in extraction buffers compounds that adsorb phenols (e.g., Loomis and Battaile 1966), may be necessary.

Protein sequence

The amino acid sequence of a protein provides useful information about its likely structure and function (e.g., Eisenhaber et al.
1995); therefore, amino acid sequence comparisons are conducted as tests of the biochemical and functional equivalence of microbial and plant proteins. Two methods are used routinely: N-terminal sequencing and peptide mass mapping. N-terminal sequences of both microbial and plant-produced POIs can be determined using Edman sequencing (Edman and Begg 1967). The POI is converted to a phenylthiocarbamyl protein by reaction of the N-terminal amino acid with phenylisothiocyanate. The modified N-terminal amino acid is released by cleavage with trifluoroacetic acid and converted to a phenylthiohydantoin, which can be identified after separation by chromatography or electrophoresis. The amount of sequence obtainable is limited because the conversion reactions do not go to completion. However, the sequence of the first 10 amino acid residues can almost always be determined for a microbial protein, and where sequence can be obtained for plant protein the comparison is straightforward. Edman data are semi-quantitative and are able to detect mixed N-terminal forms should they be present in plant or microbial protein samples. Many plant POIs are N-terminally modified into forms that are blocked to chemical sequencing, most commonly by acetylation (Martinez et al. 2008). In these circumstances, N-terminal sequence comparison is not possible and MS provides an alternative approach to confirm the N-terminal amino acid sequence for the plant POI; however, technical hurdles, as indicated below, limit its application in many cases. Peptide mass mapping is the application of MS to the characterisation of plant and microbial protein sequences. Outside plant biotechnology, the term normally refers to the application of MALDI-peptide mass fingerprinting methods for protein identification (Jensen et al. 1997; Ren et al. 2005; Dauly et al. 2006). Protein identification is considered reliable if the coverage is at least 15 % of the sequence and 5 or more peptides are matched (Jensen et al.
1997). Peptide mass mapping has been used to sequence proteins for regulatory submissions for transgenic crops; for example, Gao et al. (2006a) used MALDI-TOF peptide mass fingerprinting to characterise Cry1A.105 and Cry2Ab produced in MON 89034 maize. Each protein was separated on an SDS-PAGE gel and then digested with trypsin (Williams et al. 1997). The masses of the tryptic fragment peptides were measured using MALDI-TOF MS (Billeci and Stults 1993) and were compared with those of the predicted peptides from the expected amino acid sequence of the respective proteins. Where a mass matched that of a predicted peptide, the sequence of the peptide was considered assigned as the expected sequence. The method matched 52 peptides, confirming 43.8 % (516 of 1,177 amino acids) for full-length Cry1A.105, and matched 32 peptides, confirming 44.4 % (283 of 637 amino acids) for full-length Cry2Ab2. Gao et al. (2004) took a similar approach to the characterisation of Cry34Ab1 and Cry35Ab1 expressed in Pseudomonas and transgenic maize, and to Cry1F produced in Pseudomonas and transgenic cotton (Gao et al. 2006b). Scott et al. (2006) used the same gel tryptic digest method but combined it with single ion monitoring on an electrospray single quadrupole instrument to compare 2mEPSPS produced in E. coli and in GHB614 cotton; peptides from the microbial 2mEPSPS were identified in the cotton 2mEPSPS with a coverage of over 90 % and the calculated masses for the peptides were identical. Another approach to identification of tryptic fragments from SDS-PAGE gels uses nano-liquid chromatography-MS/MS (LC-MS/MS) conducted on a Q-TOF instrument (Marvin et al. 2000). Peptide mass mapping MS/MS data are interrogated with the Mascot search tool (Perkins et al. 1999) using a database containing the predicted amino acid sequence of the plant or microbial protein. Data from individual peptides confirm parts of the sequence and together build up a coverage map for the whole protein.
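The coverage arithmetic underlying such maps can be sketched simply: matched peptides map to residue ranges, overlapping residues are counted once, and coverage is the fraction of the full-length protein touched. The function and peptide positions below are hypothetical, not data from the cited studies.

```python
# Sequence coverage from peptide mass mapping: matched peptides are recorded
# as residue ranges, overlapping residues are counted once, and coverage is
# the percentage of the full-length protein touched by any matched peptide.
# The protein length and peptide positions below are hypothetical.

def sequence_coverage(protein_length, peptide_ranges):
    """peptide_ranges: (start, end) residue positions, 1-based inclusive."""
    covered = set()
    for start, end in peptide_ranges:
        covered.update(range(start, end + 1))
    return 100.0 * len(covered) / protein_length

# three matched tryptic peptides on a 100-residue protein; the first two
# overlap, so residues 15-20 are not double counted
print(sequence_coverage(100, [(1, 20), (15, 40), (61, 80)]))  # 60.0
```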
Analysis of the microbial protein is conducted in the same way and the two coverage maps are used to confirm the presence of the same protein in both plant and microbial samples. Figure 3 shows the maps for microbial and plant-derived Vip3Aa19, with 75 % and 71 % coverage of the proteins, respectively. Obtaining MS/MS spectra of peptides at high mass accuracy enables confirmation of the identity of individual peptides from the protein digest. Each MS/MS spectrum contains information about the amino acid sequence within the peptide, which is not provided by the other methods. This provides two advantages: first, protein identity is verified without the need to achieve high levels of sequence coverage; and secondly, peptides from contaminating proteins can be easily shown to not be associated with the protein of interest. In some cases, the N-terminal peptide can also be covered, which provides reliable data regarding the amino acid sequence and would make the separate N-terminal sequence analysis indicated above redundant. No threshold value for percentage of sequence coverage of a protein obtained from nano-LC-MS/MS data has been established; however, in the proteomics field it is common to accept the identification of proteins in a sample where high quality spectra from only two peptides unique to the protein of interest have been recorded (Bradshaw et al. 2006). Indeed, more recently, the need for even the second peptide has been questioned (Gupta and Pevzner 2009). It is common in peptide mass mapping that the coverage for the microbial and plant proteins is not identical. This does not necessarily imply a difference in sequence between the two proteins. For example, in the case of the data for Vip3Aa19 shown in Fig. 3, the coverage of individual peptides and the percentage coverage are similar but not identical for the two proteins. This might have occurred for a number of reasons.
The samples might not have been of the same strength; commonly the plant protein is the weaker sample, showing, as in this case, lower percentage coverage. In LC-MS/MS, the process by which the mass spectrometer selects ions for fragmentation is to some extent random, and the same peptides are not selected for fragmentation in each run, even in comparable runs of an identical sample (Liu et al. 2004; Elias et al. 2005).

Glycosylation

Over half of the proteins in plants are estimated to be glycosylated (Apweiler et al. 1999). Glycosylation typically consists of the addition of complex structures derived from carbohydrates. Glycosylation can alter the physicochemical properties of a protein, such as its tolerance of heat, functional activity, protein folding, transport and half-life (Solá et al. 2007). N-glycosylation is a common glycosylation motif in plant proteins (Strasser et al. 2004; Nagels et al. 2012). The sequence motif Asn-Xxx-Ser/Thr, or in some rare cases Asn-Xxx-Cys, where Xxx is any amino acid except Pro, is required for N-glycosylation. The absence of these sequences can therefore completely exclude N-glycosylation of the protein. For other glycosylation types, programs have been developed using algorithms to predict glycosylation sites with over 90 % accuracy (Hamby and Hirst 2008), and provide useful information regarding the glycosylation potential of a protein. Glycosylation has been much studied in connection with potential increased allergenicity (Wilson et al. 2001). However, increasing knowledge of plant glycosylation has recently led to the conclusion that carbohydrate moieties are probably insignificant as clinically important allergen determinants (Altmann 2007). Nevertheless, plant proteins are analysed for glycosylation status in order to detect possible changes in function as part of protein equivalence assessments. Transgenic proteins in plants are not intended to be glycosylated and recombinant proteins produced in E.
coli are not glycosylated (e.g., Baneyx and Mujacic 2004); therefore, demonstrating the absence of glycosylation adds to the weight of evidence that the POI and the microbial protein are functionally equivalent. Differences in glycosylation status might be regarded seriously owing to potential variation in physicochemical properties of the proteins. The analysis of glycosylation in protein equivalence studies is routinely accomplished using immuno-blot assays. Proteins are separated by gel electrophoresis and electro-transferred to a membrane as in western blot analysis. Once immobilised on the membrane, glycosyl-residues are detected using antibodies or sensitive chemical methods (Haselbeck and Hösel 1990; Westermeier and Marouga 2005) so that only glycosylated proteins result in visible bands. Figure 4 shows an immuno-blot glycosylation analysis of mEPSPS derived from recombinant E. coli and from extracts of leaf material from transgenic GA21 maize. Transferrin, a protein known to be glycosylated, was used as a positive control, and creatinase, a protein known to be non-glycosylated, was used as a negative control. The control proteins were used to confirm the integrity of the assay and to establish its sensitivity. In this analysis, visualisation of glycosylated proteins was achieved by chemical oxidation of glycan moieties, which were then covalently labelled with digoxigenin (DIG), and detected with an alkaline phosphatase-linked antibody sensitive to DIG. Alkaline phosphatase catalysed a colorimetric reaction resulting in stained bands representing glycosylated proteins. Loading different amounts of the positive control allowed the sensitivity of the assay to be estimated. The results indicate that neither mEPSPS protein is glycosylated, or that glycan moieties occur at a frequency of less than one glucose equivalent per molecule of mEPSPS.
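The N-glycosylation sequon rule quoted above (Asn-Xxx-Ser/Thr, rarely Asn-Xxx-Cys, with Xxx being any residue except Pro) lends itself to a simple sequence scan. The sketch below uses one-letter amino acid codes and is illustrative only; it is not a substitute for the prediction programs or the immuno-blot assays described in the text.

```python
# Scan a one-letter amino acid sequence for candidate N-glycosylation sequons:
# Asn-Xxx-Ser/Thr (or, rarely, Asn-Xxx-Cys), where Xxx is any residue except
# Pro. If no sequon is present, N-glycosylation of the protein can be excluded.
import re

def n_glycosylation_sequons(sequence, include_rare_cys=False):
    """Return 1-based positions of the Asn in each candidate sequon."""
    acceptors = "STC" if include_rare_cys else "ST"
    # a lookahead is used so that overlapping sequons are all reported
    pattern = r"(?=N[^P][" + acceptors + r"])"
    return [m.start() + 1 for m in re.finditer(pattern, sequence)]

# 'NAS' starting at position 3 is a sequon; 'NPT' is not (Pro in the Xxx slot)
print(n_glycosylation_sequons("MKNASGNPTL"))  # [3]
```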
Further evidence about glycosylation status may be obtained from the western blot analysis comparing the plant-derived protein with the microbial test substance. Typical plant N-glycosylation patterns are rather complex and increase the mass by between 1 and 2 kDa per glycosylation site (Wilson et al. 2001). Such increases in the total mass of the plant protein would be evident from a side-by-side comparison with the microbial protein on a western blot.

[Fig. 3 Amino acid sequence coverage map for a the microbial Vip3Aa19 and b the plant-derived Vip3Aa19. The sequence highlighted and underlined represents peptides identified. Evidence for 75.2 % of the sequence was obtained by combining the results of analyses using three separate enzymes]

Activity

Many transgenic proteins are enzymes. For these proteins, it is important to test the hypothesis that the activity of the protein in the microbial test substance is equivalent to that of the protein produced in the transgenic crop. Similarity in activity is a good predictor of similar biological interactions, such as mammalian toxicity and effects on non-target organisms. Enzyme activity assays vary depending on the chemistry of the reaction catalysed. Specific activity is reported in units per amount of enzyme. Units are defined for each enzymatic reaction in terms of the product produced, or the substrate used, over time under defined conditions. To calculate specific activity, the concentration of enzyme within the activity assay must be estimated. This is done routinely by enzyme-linked immunosorbent assay (ELISA) (Tijssen 1985), a well-established method to quantify proteins in different matrices. Often ELISA cannot distinguish between active and inactive proteins, which can result in inaccurate estimates of specific activity. Hence activity studies are often conducted on crude plant extracts to avoid inactivation during purification of enzymes produced by the transgenic plant.
Another consideration is that the plant matrix may reduce the specific activity of enzymes because of the action of proteases and reactive secondary metabolites, such as phenols, and other effects such as rapid pH changes during extraction. The best comparison of specific activity of microbial and plant-produced proteins may therefore be between a crude extract from a transgenic plant and the microbial test substance spiked into extract from a suitable non-transgenic control plant. The reduced activity of enzymes in some plant matrices is illustrated by an example of transgenic maize resistant to herbicides containing glyphosate. Glyphosate inhibits 5-enolpyruvylshikimate-3-phosphate synthase (EPSPS), an enzyme in the biochemical pathway that synthesises aromatic amino acids from shikimic acid (Amrhein et al. 1980). Expression of a modified EPSPS (mEPSPS), containing two amino acid substitutions, provides glyphosate resistance in Event GA21 maize owing to reduced binding of glyphosate to the modified enzyme (Dill 2005). The activities of mEPSPS derived from recombinant E. coli and from transgenic maize were determined using an EPSPS-specific activity assay based on the detection of orthophosphate released during the transfer of the enolpyruvate moiety of phosphoenolpyruvate to shikimate 3-phosphate (Stalker et al. 1985). The released phosphate forms a complex with Malachite Green and molybdate under acid conditions, which is detected by spectrophotometry at 660 nm (Itaya and Ui 1966; Lanzetta et al. 1979). The microbial mEPSPS had about 9 times the specific activity of the plant enzyme (Table 1). However, when the microbial enzyme was added to an extract from non-transgenic maize, the specific activity of the microbial enzyme was only about twice that of the plant enzyme, indicating that the maize extract inhibits mEPSPS activity.
This is important information when judging whether the microbial mEPSPS is a suitable surrogate for the plant-produced EPSPS. The activity of pesticidal proteins is measured as the concentration or dose of the toxin that affects a given proportion of individuals of a test organism in a certain time. For insecticidal proteins, such "bioactivity" is usually reported as an LC50, the concentration of the toxin that kills 50 % of a sensitive test insect a certain time after exposure to the toxin. While enzyme activity assays should be reproducible within ranges established during the assay validation, insect bioassays are expected to have greater within- and among-assay variability than enzyme assays in their absolute responses owing to biological variation among the individual insects tested (Robertson et al. 1995). To minimise extraneous variation in tests of biological equivalence of insecticidal proteins from plant and microbial sources, bioactivity should be estimated concurrently under uniform conditions using individuals from the same cohort of insect larvae randomly allocated to treatments (Romeis et al. 2011). Bioactivity is usually assessed using a target pest of one of the transgenic events that the equivalence study will support. If the target pest is difficult to rear in the laboratory, or shows variable responses to a protein, a non-target pest species may provide a more rigorous test of the hypothesis of no difference in bioactivity; for example, the target pest of Cry3Bb1 produced in MON863 maize is western corn rootworm (Diabrotica virgifera virgifera), but Colorado potato beetle (CPB; Leptinotarsa decemlineata) is preferred for bioassays of Cry3Bb1 because of its greater sensitivity to the protein (e.g., Duan et al. 2008).
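The specific-activity arithmetic behind the mEPSPS comparison above can be sketched as follows. The unit definition (release of 1 nmol phosphate per minute) follows Padgette et al. (1987) as cited in the text; the phosphate amounts, times and enzyme masses are hypothetical, chosen only to show the shape of a microbial-versus-in-matrix comparison.

```python
# Specific activity = activity units per mg of enzyme, where one unit of
# mEPSPS activity is the release of 1 nmol of phosphate per minute under
# standard assay conditions (Padgette et al. 1987). The phosphate amounts,
# times and enzyme masses below are hypothetical, chosen only to show the
# arithmetic of a microbial-versus-in-matrix comparison.

def specific_activity(nmol_phosphate, minutes, mg_enzyme):
    """Units per mg of enzyme (nmol phosphate released / min / mg)."""
    return (nmol_phosphate / minutes) / mg_enzyme

microbial = specific_activity(nmol_phosphate=900.0, minutes=10.0, mg_enzyme=0.01)
in_matrix = specific_activity(nmol_phosphate=200.0, minutes=10.0, mg_enzyme=0.01)

# the ratio of specific activities is the figure of interest when judging
# whether the plant matrix inhibits the enzyme
print(round(microbial / in_matrix, 1))  # 4.5
```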
Comparisons of bioactivity may be particularly useful when the microbial protein is known to differ from the plant protein in amino acid sequence because of point mutations in the transgene that occurred during or after plant transformation. An example is Vip3A, an insecticidal protein isolated from the vegetative cells of Bacillus thuringiensis. Vip3A is toxic to several lepidopterous pests of maize and cotton (Lee et al. 2003), and provides control of these pests when produced in transgenic crops. Vip3A produced in VipCot cotton (Kurtz et al. 2007) and Pacha maize (Dively 2005) is a 789 amino acid protein denoted Vip3Aa19. Vip3A produced in MIR162 maize is also 789 amino acids long, but differs from Vip3Aa19 by one amino acid: isoleucine instead of methionine at position 129. The protein in MIR162 maize is denoted Vip3Aa20. The change from methionine to isoleucine at position 129 is a conservative substitution. Both amino acids are uncharged, nonpolar and have similar molecular weights (149 vs 131); thus, the difference in amino acids is unlikely to change the three dimensional structure of the protein. An additional reason for expecting similar properties of Vip3Aa19 and Vip3Aa20 is that the amino acid difference occurs outside the protein tryptic core (Lee et al. 2003). Corroboration of the hypothesis that the microbial and plant test substances do not have different activities is important for interpreting the results of toxicity and ecotoxicology studies. If the activity of the microbial protein is no different from that of the plant protein, then the dose or concentration of microbial protein used in a study can be compared directly with predicted environmental concentrations of proteins that may result from cultivation of the transgenic crop.
Suppose that several representative surrogate non-target organisms are exposed to diets containing a microbial protein at 500 µg/g diet with no observable adverse effects, and that predicted highest exposures of non-target organisms to the protein via cultivation of the transgenic crop are no greater than 50 µg/g diet. Provided that dietary exposure in the test diet is confirmed, it follows that one can infer with high confidence that the no observable adverse effect concentration (NOAEC) of the protein to all species represented by the tested surrogates is greater than or equal to their highest exposure in the field (the worst-case expected environmental concentration or EEC). In risk assessments this may be presented as a hazard quotient (HQ): EEC/NOAEC ≤ 0.1. This would be strong corroboration of the hypothesis that non-target organisms will not be exposed to harmful concentrations of the protein via cultivation of the transgenic crop (e.g., Raybould et al. 2007).

[Table 1 footnote: One unit of mEPSPS activity is defined as the release of 1 nmol of phosphate per minute under standard assay conditions (Padgette et al. 1987)]

If the hypothesis that the microbial and plant proteins do not have different activity were rejected, it would not necessarily mean that the microbial test substance is unsuitable for risk assessment studies. If other studies show that the proteins are equivalent apart from activity, the difference in activity could be allowed for in risk assessments. In the example above, if the microbial protein were found to have half the activity of the plant protein, that is the LC50 of the microbial protein is twice that of the plant protein, then one could correct the estimate of the NOAEC to half the value based on the concentration of the protein in the microbial test substance.
After correction for bioactivity, the HQs would be ≤0.2, still strong corroboration of the hypothesis of no harm to non-target organisms, but giving a little less confidence in a conclusion of negligible risk compared with studies done with a test substance of equivalent activity to the plant protein. Finally, one could argue that greater potency of the protein in the microbial test substance may allow a higher NOAEC to be set for risk assessments of the transgenic crop containing the less potent protein; however, in practice this is unlikely to be convincing as lower activity of the plant protein may be due to effects of the plant matrix or to inactivation during purification, not to intrinsically higher activity of the microbial protein. To date, the desired traits of most commercial transgenic crops are based on the production of proteins that are enzymes or toxins. Methods to measure the activity of these proteins are conceptually straightforward. New traits may be based on proteins that are not so simple to assay for activity. One method of increasing water-use efficiency of maize is the production of a cold-shock protein derived from Bacillus subtilis (CSPB). CSPB binds to single-stranded DNA or RNA. Its binding activity may be revealed in vitro by fluorescence from a labelled double-stranded probe as it becomes opened by the protein (Castiglioni et al. 2008). This assay has been used to determine the equivalence of a microbial and plant-produced CSPB (Pester et al. 2009). Water-use efficiency may also be improved by the production of new transcription factors (e.g., Kasuga et al. 1999). There are in vitro methods for determining the specificity and affinity of transcription factors (Jolma and Taipale 2011). These methods could fulfil the role of functional assays when assessing test substance equivalence in cases where activity assays as described above are not applicable.
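The hazard-quotient reasoning in the worked example above (500 and 50 µg/g diet, with the NOAEC halved when the microbial protein's LC50 is twice the plant protein's) can be expressed compactly. The function below is an illustrative sketch of that arithmetic, not a prescribed regulatory formula.

```python
# Hazard-quotient screen: HQ = EEC / NOAEC, with an optional correction for a
# microbial test substance that is less active than the plant protein (e.g. an
# LC50 twice that of the plant protein halves the usable NOAEC). The 50 and
# 500 µg/g diet figures follow the worked example in the text; the function
# itself is an illustrative sketch, not a prescribed regulatory formula.

def hazard_quotient(eec, noaec, activity_ratio=1.0):
    """activity_ratio = microbial LC50 / plant LC50; values > 1 mean the
    microbial protein is less potent, shrinking the corrected NOAEC."""
    corrected_noaec = noaec / activity_ratio
    return eec / corrected_noaec

print(hazard_quotient(eec=50.0, noaec=500.0))                      # 0.1
print(hazard_quotient(eec=50.0, noaec=500.0, activity_ratio=2.0))  # 0.2
```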
Judging the equivalence of proteins

Microbial and plant proteins may differ in several ways: the differences may be unintended results of changes during transformation or test substance production, or may be intended to assist production of the test substance; and the differences may be single amino acid substitutions, additions of short amino acid tags, or large deletions of parts of the microbial or plant proteins. Unintended differences from the plant protein may arise during production of microbial protein and these differences tend to be minor. One common source of variation is cleavage of the N-terminal methionine when proteins are produced in microbes and its retention in plant-produced proteins. Another source of variation is mutations in the gene for the POI during transformation of the plant or the microbial expression vector; mutations tend to result in differences of one or two amino acid residues between the plant and microbial proteins (e.g., Raybould and Vlachos 2011). Occasionally, the microbial protein is designed to be different from the plant protein. Short tags of 6-10 amino acids such as histidine may be added to the N- or C-terminus of the microbial protein to aid purification (Schmitt et al. 1993). Sometimes, the protein produced in the plant may be hard to produce in microbes. For example, many insect-resistant transgenic crops produce truncated (or "activated") forms of Cry proteins. Producing similarly truncated Cry proteins as microbial test substances is often difficult. The solubility of Cry proteins varies depending on the organism in which they are produced (Khasdan et al. 2003), and although truncated Cry proteins are soluble in plants, they are often insoluble in microbes. In such cases, one option is to produce a full-length microbial protein and truncate it by treatment with a protease such as trypsin (e.g., Porcar et al. 2010).
Alternatively, it may be possible to use the full-length protein in, for example, non-target organism effects studies if the proteins are equivalent in attributes other than length. The suitability of a microbial protein for a risk assessment study depends on whether it is determined to be equivalent to the plant protein in properties relevant to the purpose of the study. Usually, a weight-of-evidence approach is taken; that is, no single study determines whether or not the proteins are equivalent, and equivalence is determined by evaluation of the results of several studies such as those described above and outlined in Table 2. Other lines of evidence, such as whether for single amino acid differences both amino acids are neutral or acidic, or both are hydrophilic or hydrophobic, and whether the substitution has occurred in part of the protein known to determine important properties such as bioactivity, may also be considered. It is important to realise that equivalence does not imply that the plant and microbial proteins are identical. Equivalence is intended to mean that the microbial protein is sufficiently similar biochemically and functionally to the plant protein such that studies using the microbial protein provide reliable information for risk assessment of the transgenic plant.

[Table 2 (excerpt). Glycosylation: glycosylation affects many properties of proteins including stability and function; it has been claimed that glycosylation affects the allergenicity of proteins, although recent work casts doubt on this suggestion; nevertheless, differences in glycosylation status might be regarded seriously owing to potential variation in physicochemical properties of the proteins. Functional activity (enzymatic activity assay, insecticidal bioassay): detection of potential differences in specific catalytic activity (enzymes) or insecticidal bioactivity (toxins); confirmation of equivalent activities confirms equivalent protein folding (tertiary and quaternary structure); depending on the results of other equivalence tests, differences in activity may be acceptable and may be allowed for in safety studies, for example, margins of exposure could be based on comparisons of activity, not concentration. Equivalence is judged separately for each test substance based on a weight of evidence.]

"Sufficiently similar" cannot be defined completely objectively, but is a judgement by risk assessors about whether studies using microbial protein provide reliable and robust tests of risk hypotheses that the cultivation of the transgenic plant will not cause harm. Decisions about the suitability of a particular microbial protein should therefore concentrate on properties that predict harm, and it follows that the microbial protein could be deemed sufficiently similar to the plant protein for some studies but not for others. Those features that are most important should receive the most attention, depending on the intended use of the test substance. For example, equivalent bioactivity may be most important for non-target organism studies, similar glycosylation and immuno-reactivity may be the main requirements for allergenicity studies, while analysis of amino acid sequence may be best for determining suitability for studies that compare enzymatic degradation of proteins.

Conclusions

Safety studies using purified microbial proteins may provide important data to assess the risks to human and animal health and to the environment from the use of transgenic crops (Garcia-Alonso et al. 2006; Delaney et al. 2008; Romeis et al. 2008). The studies are carried out to internationally accepted guidelines that specify factors such as replication, test duration, measurement endpoints and validity criteria, to maintain the repeatability and reliability of the studies (Delaney et al. 2008; Romeis et al. 2011).
The usefulness of a study with microbial protein does not depend solely on the experimental design elements noted above. It is also essential that the microbial test substance is a suitable surrogate for the plant-produced transgenic protein for the purposes of the study. Suitability as a surrogate does not imply that the microbial and plant proteins must be identical, only that relevant properties of the microbial test substance and the plant protein are sufficiently similar, such that studies with the microbial protein reliably predict the probability of harmful effects that may result from human, animal or environmental exposures to the protein via the transgenic crop. Variation in the functions of proteins, the purposes of studies, and the opinions of decision-makers about the relevance of particular differences between proteins means that it is not feasible to define one set of equivalence criteria that applies to all test substances for all uses. The suitability of test substances as surrogates must be judged individually based on a weight of evidence from studies comparing the microbial and plant proteins for properties including activity, molecular weight, amino acid sequence, glycosylation and immuno-reactivity. Establishment of the suitability of the test substance, along with experimental designs that follow international guidelines, will ensure that studies with microbial protein provide reliable information about the risks posed by the use of transgenic crops.
2017-08-02T23:46:27.727Z
2012-10-12T00:00:00.000
{ "year": 2012, "sha1": "1bbec57a3216b0d9f2836950835a51b6fa2ab22e", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11248-012-9658-3.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "7a2941977ad56d8ba6e1c782e87b72763e5141b0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
221668672
pes2o/s2orc
v3-fos-license
Theoretical Assessment of Sustainability Principles for Renewable Smart Air-Conditioning

Environmental Management and Sustainable Development, ISSN 2164-7682, 2020, Vol. 9, No. 3 (http://emsd.macrothink.org)

As quality of life has improved, air-conditioning has come into general use. Nevertheless, environmental and health issues related to the use of air-conditioning occur ever more often. This paper therefore aims to theoretically assess the principles of sustainability for renewable smart air-conditioning. It considers not only the system geometry (i.e., mechanisms and components), fuzzy logic control and proportional-integral-derivative control to which such studies have drawn particular attention, but also a matter that has previously been ignored: the potential of renewable-based options, advanced smart control techniques and profitability measures to reinforce the three pillars of sustainability and their indicators as context-specific transformations. The goal is not only to eradicate indoor health effects and lower energy consumption and carbon emissions, but also to uncover the particular contribution that renewables and smart control make to the sustainability of the system. In meeting this aim and demonstrating the sustainability of the theoretical framework, this paper identifies renewable energy and smart control as the fundamental components of the air-conditioning system, since they reduce energy consumption and carbon emissions while establishing a comfortable and healthy indoor environment; the resulting renewable smart air-conditioning not only tackles poor indoor air quality but also combats global warming and climate change.
Introduction

Sustainability was first characterized more than thirty years ago and is broadly acknowledged as a significant conceptual framework within which to situate municipal development and policy. The balance between the related parts of sustainability (economic, social and environmental) varies with the interpretation of the idea, which has prompted an assortment of municipal structures being described as sustainable (Dempsey et al., 2011). As nodes of material utilization and energy, urban areas are causally connected to accelerating worldwide environmental deterioration and are not sustainable by themselves. Concurrently, urban areas and their occupants can play a significant role in accomplishing worldwide sustainability (Wilhite, 2009). The scholarly investigation of sustainable development currently embraces a variety of perspectives and methodologies, incorporating practices and policies ranging from agrarian utopianism to capital-intensive, large-scale consumer-market growth. Human well-being remains at the centre of the investigation when considering the sustainability and use of renewable air-conditioning. The hottest season in parts of the world already brings daily levels of ambient heat beyond human physiological limits, especially for individuals undertaking physical labour (Kjellström et al., 2009). Climate change will generate excessive heat exposure, and air-conditioning will be required increasingly in densely populated urban zones. The three bottom-line principles of sustainability, namely economic, environmental and social, are significant in considering the sustainability of air-conditioning systems; however, the real issue confronting humankind with respect to these principles is whether human activity is sustainable.
For this paper, the assessment of renewable smart air-conditioning is regarded as sustainable if:

- the most efficient means for re-establishing the system to sustainability may necessitate change in the system; in other words, combining both the renewable and the smart/intelligent system as a unique system for the sustainability of the whole, which in turn improves the adaptation and wellbeing of occupants.

Renewable smart air-conditioning can be defined as a sustainable, innovative system that uses renewable energy and intelligent control procedures to optimize indoor air quality and lower energy consumption and carbon emissions, which in turn combats global warming and climate change. Gugulothu et al. (2015) suggested that buildings are responsible for around forty percent of primary energy use and about thirty-three percent of greenhouse gas emissions on the planet. Similarly, established researchers have committed considerable effort to securing housing energy sustainability in two primary ways: using renewable energy for the remainder, and lessening the external energy supply. In both, solar resources are gaining acceptance since they increase energy autonomy and sustainability simultaneously while having almost zero impact on the environment. Subsequently, reducing energy use while guaranteeing an ideal level of comfort in designing a smart air-conditioning system contributes considerably to the level of total energy utilization. Several research studies suggest that spaces such as assembly rooms, indoor arenas and meeting halls account for as much as seventy-five percent of the overall energy consumption associated with air-conditioning use.
However, Wang et al. (2012) proposed that reducing energy use, waste in buildings and CO2 requires an intelligent control system, because energy use is directly connected with wellbeing and ultimately with operational expenses. A building's essential indoor environmental comfort factors, as indicated by users' preferences, are indoor air quality, visual comfort and thermal comfort (Siddqui et al., 2015). Studies have demonstrated that an intelligent fuzzy logic model produced promising outcomes when applied to a considerable case in buildings, showing a broad overall reduction in energy use and CO2 compared with the existing control system while achieving the ideal comfort level (Wang et al., 2012; Siddqui et al., 2015; Yu and Lin, 2015). Control system designers have therefore been developing different control methodologies for air-conditioning systems so as to optimize air-conditioning performance. Studies reveal that indoor building environment control systems can be broadly characterized into two classes according to the methodologies used: computational intelligence procedures and conventional controllers. The proportional-integral-derivative (PID) controller is one of the most common conventional controllers for air-conditioning systems. Neural networks and intelligent controllers have recently come to be applied as flexible, accurate and fast tools for control design, modelling and simulation of air-conditioning. Through an appropriately structured controller, the performance of an air-conditioning system can be significantly improved. It is therefore worth creating sustainability-based novel control techniques to optimize the energy efficiency and indoor environment quality of a renewable air-conditioning system.
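As a rough illustration of the conventional PID approach mentioned above, the sketch below closes a loop around a crude one-zone room model. This is only a minimal demonstration: the gains, set-point, actuator limits and thermal model are invented for illustration and are not taken from the paper.

```python
# Minimal discrete PID temperature-control sketch (illustrative only).
# The gains, limits and one-zone room model below are hypothetical.

def pid_step(error, state, kp=2.0, ki=0.01, kd=0.5, dt=60.0):
    """One PID update; state = (integral, previous_error)."""
    integral, prev_error = state
    # anti-windup: clamp the integral so saturation does not wind it up
    integral = max(-200.0, min(integral + error * dt, 200.0))
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

def simulate(setpoint=24.0, t_room=30.0, steps=120):
    state = (0.0, 0.0)
    for _ in range(steps):
        error = t_room - setpoint              # positive => room too warm
        cooling, state = pid_step(error, state)
        cooling = max(0.0, min(cooling, 5.0))  # actuator limits (assumed)
        # crude one-zone thermal model: cooling pulls the temperature down,
        # ambient gains push it back toward 30 degrees C
        t_room += -0.05 * cooling + 0.01 * (30.0 - t_room)
    return t_room

print(round(simulate(), 1))  # settles close to the 24 degree set-point
```

In practice the FLC-PID and BPNN-PID schemes discussed later replace or tune the fixed gains above; the loop structure itself stays the same.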
Consequently, Oye et al. (2020) suggested that any project, such as renewable air-conditioning, ought to earn profits over a long enough period to develop and survive. Improvement in living standards and national income is the index of the system's economic progress. For this reason, profit is the legitimate purpose of the system; even so, it must not be fundamentally over-emphasized. Owners of the system may attempt to maximize project profit while considering the welfare of society. Hence, revenue is not just the return to owners; it is likewise associated with the interests of other sectors of society. Profit is the measure for judging not only economic sustainability, but likewise the social purposes, management and efficiency of the system (Oye et al., 2020). In the quest for climate change mitigation, using renewable energy sources, especially solar generation, to power smart air-conditioning so as to eliminate various indoor health effects and reduce energy consumption and carbon emissions is expected to make future development zones more climate resilient. Therefore, this paper complements existing research in the sustainability domain by reporting a theoretical assessment of an air-conditioning system through the bottom-line principles of sustainability. The proposed theoretical framework of renewable smart air-conditioning contributes to industry, renewable energy and sustainability studies by offering solar energy, smart technologies and profitability measures as a reasonable answer to the fundamentals of sustainability for air-conditioning.

Conceptual Issues of Sustainability

Currently, societies and humankind face sustainability challenges which primarily concern the capacity to sustain ecosystems, humanity and societies on the planet.
In the future, these challenges are anticipated to become more substantial. Accomplishing sustainability in air-conditioning is consequently one of the most vital objectives of a society and its people, since air-conditioning has come into general use as a result of improvements in ways of life. The concerns and issues embodied in sustainability are comprehensive and many. They cover such various issues as pollution and climate change, government policies, stability and peace, anthropogenic and natural disasters, social and cultural sustainability, globalization, urbanization, population growth, energy consumption, carbon emissions, indoor air quality, energy efficiency, industrial development, production, technological systems such as smart/intelligent control systems, ecosystem degradation, drought and water quality, loss of biodiversity and species extinction, desertification and land use, sanitation, resource supply (food, mineral, energy, water) and waste management (conventional, radioactive, hazardous toxic). The breadth of matters associated with sustainability suggests that a comprehensive and holistic approach is required. Sustainability actions are progressively entering the agendas and operating strategies of businesses and governments. The sustainability principles (environmental, economic and social) must be combined if actions are to achieve the fundamental goals of sustainability for renewable smart air-conditioning.

Sustainable Framework Assessment

The sustainability assessment framework is significant for making sustainability operational and for monitoring and measuring the progress of renewable smart air-conditioning. Nevertheless, assessing sustainability is challenging, because no generally recognized sustainability technique exists for a system.
There are several reasons for this, including the difficulty of quantifying the key pillars of sustainability (Hacatoglu et al., 2016). For instance, although ozone-depleting substances and greenhouse gas emissions are quantifiable, measuring their social and economic influence is challenging. Likewise, despite living standards frequently being quantified as gross domestic product per capita, quality of life can be a more important measure of satisfaction, comfort and human well-being. In this regard, assessment of sustainability can also be contentious (Morse and Fraser, 2005). However, several approaches for measuring or assessing sustainability have been established, as discussed below:

- A number of assessment methods apply indicators or principles of sustainability, which are characteristically straightforward quantitative proxies that systematically measure environmental, social and economic factors. Some sustainability indicators are integrated, unifying the bottom-line principles (social, environmental and economic), whereas others are not integrated and quantify only a single fragment of sustainability. The integrated method unifying the three bottom-line principles is proposed in this paper because of its suitability and its straightforward measuring proxies for renewable smart air-conditioning.
- Indexes of sustainability have been established based on a composite or aggregate of certain sustainability indicators. A single-value sustainability measure based on an aggregate index is valuable for communication and understanding, owing to its simplicity. Nevertheless, deriving such indicators requires aggregation, weighting and normalization of data, steps that generally lead to a loss of useful information and can be problematic.
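The normalization, weighting and aggregation steps just described can be sketched in a few lines. This is a minimal illustration only: the indicator names, raw values, worst/best bounds and weights below are invented, not figures from the paper.

```python
# Composite sustainability index: normalize each indicator to [0, 1],
# then take a weighted average. All numbers below are hypothetical.

indicators = {                 # raw value, worst case, best case
    "energy_use_kwh_m2": (120.0, 300.0, 50.0),   # lower is better
    "co2_kg_m2":         (25.0,  80.0,  5.0),    # lower is better
    "comfort_hours_pct": (92.0,  60.0, 100.0),   # higher is better
}
weights = {"energy_use_kwh_m2": 0.4, "co2_kg_m2": 0.3, "comfort_hours_pct": 0.3}

def normalize(value, worst, best):
    """Map value onto [0, 1], where 1 is the best case (handles both directions)."""
    return (value - worst) / (best - worst)

index = sum(weights[k] * normalize(*indicators[k]) for k in indicators)
print(round(index, 3))  # -> 0.748
```

The sketch also makes the text's caveat concrete: the single number 0.748 hides which pillar is under-performing, which is exactly the information loss the paragraph warns about.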
A single-value sustainability measure can obscure facts that are essentially connected with the multidimensional nature of sustainability and can therefore be misleading. For this reason, such a quantifying method is not recommended for the framework sustainability assessment of the renewable smart air-conditioning system.

- Daly (1990) established operational principles for sustainable development. Although beneficial, these are restricted to quasi-sustainable use of non-renewable resources, and as a result this method is not adopted in this paper owing to its limited applicability.

Studies show that only a small number of sustainability assessment approaches simultaneously reflect the three bottom-line principles of sustainability, namely economic, environmental and social. Conversely, several assessment methodologies emphasize only one sustainability dimension, such as environmental or economic sustainability. Several instances illustrate this, as highlighted below:

- Biophysical assessment methods are suitable for assessing environmental sustainability, through measuring environmental impact and resource use. Nevertheless, they are usually inappropriate for addressing the economic and social domains of sustainability.
- Financial assessment can be applied to environmental and social capital to evaluate sustainability. Nevertheless, financial assessments of non-market services and goods are not well established and can yield inappropriate appraisals, owing to our limited understanding of ecosystems and the resources they offer.
- The environmental sustainability index ranks countries based on an aggregate of numerous environmental indicators. Nonetheless, such indexes are generally unable to relate economic growth to environmental sustainability.
Table 1. Application areas of sustainability framework concepts in recent years

- Sustainability of technology: Dewulf et al. (2000), sustainability of technology via a range of illustrations.
- Sustainability of energy: Evans et al. (2009), sustainability indicators for renewable energy; Gnanapragasam et al. (2011), sustainability of a national energy conversion system using hydrogen from solid fuels; Gomez-Echeverri et al. (2012), global energy assessment for identification of sustainable routes.
- Sustainability of manufacturing operations: Nazzal et al. (2013), sustainability as a tool for manufacturing decision making.
- Sustainability of infrastructure and buildings: Khalid et al. (2015), sustainable building heating, ventilation and air-conditioning; Russell-Smith et al. (2015), sustainable target value design to improve buildings.
- Sustainability of energy, water and environment systems: Krajacic et al. (2015, 2018), overview of the topic and description of sample studies.
- Sustainability of regions: Gomez-Echeverri et al. (2012) for the world; Gnanapragasam et al. (2011) for countries; Mansoori et al. (2016) for a state; Oye T. T. (2018) for suburbs.

The non-existence of a framework approach is the weakness of numerous existing approaches to assessing sustainability: a framework approach considers the system being measured as a whole and typically accounts for the connections amongst its subsystems. This is significant because attaining a sustainable society is a systems problem, where the economic, social and environmental domains are completely interdependent. Integrated human-environmental systems have connections between diverse subsystems that basically lead to trade-offs; for example, cost reduction can cause a procedure to have lower efficiency or higher emissions. A non-systems approach concentrating on solitary factors can therefore regularly be understood to be insufficient for holistically assessing sustainability.
As noted earlier, biophysical methods, for instance, emphasize environmental sustainability and disregard the social and economic dimensions, whereas approaches grounded in weak sustainability typically emphasize economic factors and disregard the biophysical domain. Clearly, the sustainability of a system needs to be assessed with a systems framework method. Life-cycle analysis is generally a part of such a method, as it identifies the energy and material inputs and outputs of the system or process and uses this information to assess the economic, social and environmental effects of the system framework. In light of this, the sustainability principles can be systematically applied, via a sustainable framework concept, to renewable smart air-conditioning with the purpose of reducing energy consumption and carbon emissions while improving the system's control performance and the inhabitants' comfort. In recent years, the principles of sustainability using framework concepts have been applied to a number of areas, as given in Table 1.

Proposed System Framework Using Sustainability Principles

Figure 1 systematically applies the three pillars of sustainability, namely environment, economic and social, as a whole, arising from the methodical framework, and subsequently re-establishes the system to sustainability, which necessitates transformation in the system. In other words, it combines both the renewable and the smart system as a unique system for the sustainability of the whole, which in turn improves the adaptation and wellbeing of occupants.

Figure 1. Proposed framework using sustainability principles and indicators

Environmental sustainability requires economic sustainability, and social sustainability relies on environmental sustainability.
On the other hand, the three areas of sustainability can be treated equally, as proposed by research studies. Promoting sustainable practice typically means pursuing a balance between environmental, economic and social performance in project applications. That is to say, the connection amongst sustainability, renewable energy and smart control for air-conditioning becomes clear: to reduce energy consumption and carbon emissions while tackling the health effects on inhabitants. In this regard, Figure 2 takes the proposed sustainability indicators further to present a sustainability framework for renewable smart air-conditioning. Figure 2 sets out the theoretical sustainable framework for renewable smart air-conditioning. This analytical grid is augmented by the three pillars of sustainability, namely environment, social and economic, which in turn form the basis of sustainable smart air-conditioning in this paper. Here, the theoretical framework of the proposed system is designed by way of an adaptation strategy of the three bottom-line principles of sustainability, where the environment is technically influenced by both social and economic sustainability. Moving from top to bottom, this indicates two forms of sustainable environment for sustainability-based smart air-conditioning, namely the renewable air-conditioning and the indoor environment. The renewable-based air-conditioning provides the conceptual modelling approach for the system description of the photovoltaic solar air-conditioning, while the mathematical modelling of the indoor environment yields two forms of smart/intelligent indoor environment control, namely fuzzy logic control with proportional-integral-derivative (FLC-PID) and back-propagation neural network with proportional-integral-derivative (BPNN-PID), for the management of indoor temperature and indoor CO2 respectively.
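To make the fuzzy-logic side of an FLC-PID scheme concrete, the sketch below maps a temperature error to a cooling-power command through triangular membership functions and a weighted-average defuzzification. This is a toy illustration only: the membership functions, rule base and crisp output values are invented for demonstration and are not the paper's models.

```python
# Tiny Mamdani-style fuzzy controller: temperature error -> cooling power.
# Membership functions and rules below are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_cooling(error):
    """error = room temperature - setpoint (deg C); returns cooling power in [0, 1]."""
    # fuzzify the error into three linguistic sets
    near_ok = tri(error, -2.0, 0.0, 2.0)   # near setpoint
    warm    = tri(error,  0.0, 2.5, 5.0)   # somewhat too warm
    hot     = tri(error,  3.0, 6.0, 9.0)   # much too warm
    # singleton consequents (low / medium / high cooling), weighted-average defuzzify
    weights = [near_ok, warm, hot]
    outputs = [0.0, 0.5, 1.0]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * o for w, o in zip(weights, outputs)) / total

print(fuzzy_cooling(2.5))  # -> 0.5 (fully "warm" => medium cooling)
```

In an FLC-PID arrangement, an output like this would typically adjust the PID gains or the control effort rather than drive the actuator directly.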
Here, particular attention is drawn to the connection between the environmental and social sustainability of the system, where the PV solar air-conditioning (energy consumption, CO2 emissions and climate change), the indoor environmental quality (temperature) and the indoor air quality (CO2) are in turn horizontally linked to the wellbeing and comfort of the inhabitants or users of the building through human adaptation. Moving horizontally, particular attention is also drawn to the connection between the environmental and the economic sustainability of the system, where the depicted system and components establish the financial profitability of the system, namely net present value, internal rate of return, accounting rate of return and payback period, through the cost efficiency of the proposed theoretical framework. The principles of sustainability (environment, economic and social) for renewable smart air-conditioning are therefore systematically addressed in the following sections.

The Environment

The subsystems of the environment are the social and the economic, which descend from the Earth's entire energy and material interactions. The sustainability of humankind implies guaranteeing the Earth's ability to underpin humans and their associated activities. Populations and human economies have developed such that anthropogenic activities now have long-term and global effects, with numerous consequences; these can reduce the planet's capacity to sustain life. Numerous environmental issues influence sustainability, such as the energy consumption and carbon emissions emanating from the use of air-conditioning systems around the world, causing climate change and health effects on inhabitants.
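The profitability measures named above (net present value, internal rate of return and payback period) can be sketched as follows. The cash flows and discount rate below are invented for illustration; they are not figures from the paper.

```python
# NPV, simple payback and a bisection IRR for a hypothetical renewable
# air-conditioning investment. All cash flows below are made up.

def npv(rate, cashflows):
    """cashflows[0] is the (negative) initial outlay at t = 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def payback_years(cashflows):
    """Years until cumulative cash flow turns non-negative (None if never)."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0.0:
            return t
    return None

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-6):
    """Bisection search for the rate where NPV = 0 (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, cashflows) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

flows = [-10000.0] + [2500.0] * 8      # outlay, then 8 years of energy savings
print(round(npv(0.05, flows), 2))      # positive NPV at a 5% discount rate
print(payback_years(flows))            # -> 4
print(round(irr(flows), 3))            # rate at which the project breaks even
```

The accounting rate of return, the fourth measure in the text, would simply divide average annual profit by the initial investment and needs no discounting.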
There is also loss of biodiversity all over the biosphere, owing to economic growth and related factors such as the constant use of air-conditioning during the hottest period of the year, and this puts further threats on environmental sustainability. Therefore, the conceptual modelling and mechanisms of a solar air-conditioning system to reduce energy consumption and carbon emissions are set out in the next subsection.

Conceptual Modelling and Mechanisms of Solar Air-Conditioning

The adopted building is a household located in Rome, Italy; the location was chosen for its hot temperate climate. The heat capacity is 500 kJ/K/m2 and the proposed building's total heat-loss coefficient is 0.5 W/K/m2, commensurate with a regular, lightly insulated building construction. The key mechanisms of the photovoltaic solar air-conditioning are the photovoltaic modules connected to a DC/AC inverter, an outdoor heat-rejection unit, indoor cold-distribution elements and an electric-driven chiller. The model of the system comprises the outdoor unit for warming the domestic hot water, a cold storage and a hot storage. Figure 3 represents the photovoltaic panel arrangement, which is situated on the roof of the building. The over-current protection is 18 A, connected in parallel, and all the photovoltaic components are connected electrically. The total voltage is 280.8 V and the maximum system voltage is 1000 V. Following the manufacturer's recommendation, the maximum number of photovoltaic components connected in series is sixty. The manufacturer guarantees these photovoltaic units to deliver at least ninety percent of rated performance after ten years of use and eighty percent after twenty-five years. Hence, the gradual power reduction of the photovoltaic modules over time is considered in the calculations.
Likewise, the efficiency is assumed to decline each year by 0.9% of the preceding year's value. The unframed photovoltaic units are to be mounted on the proposed roof area. A 15° inclination to the horizon is advised for mounting the photovoltaic units; the roof examined has a slope of 5°, which suffices for the connection of electric power beneath the elements. Aluminium is considered for the design of the unframed components. The photovoltaic mounting is designed with bending protection against likely deformation owing to thermal expansion: bending protection from above and below is planned with respect to snow loads and winds respectively. Rubber gaskets between the mounting and the photovoltaic elements are included to accommodate thermal expansion.

System Process of Operation

The proposed operation of the solar air-conditioning is improved through various subsystems, and the priority in a cooling season is covering the cooling demand. The cooling machine, a heat pump, transfers heat from a low-temperature environment to a high-temperature environment. The heat pump is power-modulating and is regulated through its own controller box. Heat rejection to the hot storage is limited by the maximum heat-rejection potential; the water temperature at the middle level of the tank and the heat-transfer coefficient of the internal coil are defined in order to administer this potential for the hot storage. Also, the heat-transfer rate of the outdoor unit, the outdoor air temperature and the electricity consumption of the fan are reproduced through the main energy factor. Free (unrestricted) heat rejection from the hot storage is considered during the conceptual modelling of the photovoltaic solar air-conditioning.
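The stated degradation assumption (each year the modules retain 99.1% of the previous year's efficiency) can be checked against the warranty points of ninety percent after ten years and eighty percent after twenty-five years with a short calculation. Only the 0.9% rate and the two warranty points come from the text; the rest is a sketch.

```python
# Compound PV efficiency degradation: each year retains 99.1% of the
# previous year's efficiency (0.9% annual reduction, as stated in the text).

def remaining_fraction(years, annual_loss=0.009):
    return (1.0 - annual_loss) ** years

print(round(remaining_fraction(10), 3))  # -> 0.914, above the 90% warranty point
print(round(remaining_fraction(25), 3))  # -> 0.798, close to the 80% warranty point
```

The compound model therefore comfortably satisfies the ten-year warranty and sits right at the twenty-five-year one, which is consistent with treating 0.9% per year as the design degradation rate.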
Unrestricted heat rejection from the hot storage to the outside air via the outdoor unit operates without running the heat pump. Electricity is still consumed in this procedure, the consumers being the hot-side pump and the outdoor fan component. As a result, unrestricted heat rejection is detached from system operation in accordance with the pre-simulation outcomes. The controller provides for domestic hot water preheating in the non-cooling season. In this case, some mechanisms of the system operate in reverse mode: for instance, the outdoor component acts as a heat source, and its connection is switched to the cold side of the heat pump. The purpose of this control is to cover, as far as possible, the thermal-energy demand of the domestic hot water. The foremost obstacle in the non-cooling season is the low outdoor air temperature, which primarily risks unsafe freezing of the outdoor unit. The low outdoor air temperature also increases the temperature difference between the hot and cold sides of the heat pump, which significantly reduces the coefficient of performance of the heat pump and can make the system less appealing. The heat pump is regulated through integral sensors on the hot-side flow and return temperatures; the brine return temperature may be limited to a minimum if needed. The heat production can be regulated in two ways. Domestic hot water heating is managed on the principle of 'floating condensing', meaning the temperature level required for heating at a given outdoor temperature is formed and directed via accumulated values from the outdoor and flow sensors.
A room-temperature sensor can likewise be used to compensate deviations in room temperature (Cooling Machine Manufactory, 2019). In the other control technique, known as "fixed condensing", the heat pump delivers heat up to a set temperature level; the automated control of the heating operation is then handed over to the outdoor control unit. The representation of the system in the Polysun software program is shown in Figure 5. The performance improvement of the system through the added subsystems was theoretically explored in this paper for the London, Toulouse and Rome districts. Mauthner and Weiss (2014) reported that the majority of solar air-conditioning installations are situated in specific European districts owing to their climatic conditions; hence the examination was constrained to the three locations above. This constraint is associated with their impact on the general performance of the system: the requirement is that the cooling demand must be covered. Supplementary system components can be added until satisfactory improvement is achieved. The system without augmentation has fewer components, which essentially simplifies the design. Driven by the cold demand, the cooling machine generates cold that is hard to meter exactly, so overproduction of cold, and hence excess power consumption, can occur. When the cooling capacity of the system is insufficient, the building set temperature is not reached; the cooling machine power is therefore expanded by means of the subsystem. Table 2 presents the annual performance-improvement results for the three districts. Correspondingly, the key aim of implementing cold storage in the three districts is to diminish the peak electricity demand.
If the system in view is upgraded with a cold storage alone, room-temperature variations can be extensively diminished by shaving the peak load. In solar air-conditioning, the cold storage bridges the gap between the cooling demand and the solar power gain. With a cold storage tank included, the cooling machine operates closer to the ideal driving-temperature range, which marginally increases the heat pump's average efficiency. Moreover, because of this increase, the operating time for cooling generation could be decreased for the present cooling load of each district. The heat-rejection usage through preheating of domestic hot water recycles more than a quarter of the heat rejected. To secure the cooling demand in periods of low solar radiation, a free-cooling strategy can be applied whenever the outdoor temperatures are in a suitable range. This configuration permits the cooling demand to be covered efficiently. The energy flow diagrams in Figure 6, Figure 7 and Figure 8 show that the cooling demand is covered effectively for the three locations; they demonstrate that the proposed system is self-sufficient and able to produce the required energy demand throughout the year. The outdoor unit of the cooling machine is also utilised in the non-cooling season as a heat source for preheating domestic hot water. In most cases, the building cooling demand is influenced by its envelope insulation, by the window-to-wall area ratio, and by the amount of internal load, infiltration and ordinary ventilation. In the cooling season, using materials with lower thermal conductivity implies higher investment costs and may prompt higher cooling energy requirements; indeed, thermal improvement of the envelope raises the annual cooling demand.
Hence, the decrease in energy consumption leads to a significant amount of CO2 savings in each district, as presented in Table 2. The results show that photovoltaic solar air-conditioning can cover the cooling demand in the proposed locations while reducing both energy consumption and carbon emissions. Climate change, one of the key challenges of environmental sustainability arising from energy consumption and carbon emissions, is discussed below.

Climate change: Stabilising the concentrations of greenhouse gases (GHGs) in the atmosphere, in order to avert the damaging consequences of climate change and global warming, is established by most investigations as a significant challenge of the present day. Greenhouse gases absorb infrared radiation emitted at the Earth's surface into the atmosphere, which leads to the greenhouse effect and the related terrestrial warming. The main greenhouse gas is carbon dioxide (CO2); others include nitrous oxide (N2O) and methane (CH4). The foremost anthropogenic sources of greenhouse gas emissions are fossil fuel combustion, enteric fermentation in ruminant animals, and agricultural nitrogen use. The risks of global warming and climate destabilisation are worsened by positive feedback effects, for instance the growth in absorbed solar radiation that follows the loss of reflective surfaces such as ice. Since 1990, the Intergovernmental Panel on Climate Change (IPCC) has published comprehensive assessment reports reviewing the state of the art in climate science and forecasting future trends.
In 2014, the finalised Fifth Assessment Report described the warming trends due to anthropogenic actions as "very likely". Berthiaume and Rosen (2017) noted that the amount of projected warming has intensified as climate models have become more sophisticated and as the associated climate and supplementary influences have been accounted for. Considerable effort at the United Nations and worldwide is concentrated on attaining global treaties or agreements to stabilise greenhouse gas concentrations in the air at a level that avoids dangerous anthropogenic climate change. In line with this movement, this paper presents the use of renewable energy for air-conditioning, which contributes to such goals by diminishing the system's energy consumption and rate of carbon emissions.

The Social

Social sustainability is a comprehensive notion covering cultural development, wellbeing, health, equity and several other factors; a specific definition of social sustainability and its contribution is still being debated universally. The development of sustainability thinking to include a strong social constituent took some time: initial work on sustainability usually concentrated on either economic or environmental sustainability while neglecting the social factors. The importance of both societal and human development has been realised only recently. The health and wellbeing of the users of air-conditioning systems is significant, as studies suggest that the majority of people spend about 80% to 90% of their time at home (Yu and Lin, 2015). Moreover, scenario-based mathematical modelling of the indoor environment is established in this paper to contribute to the sustainability of the proposed system. The proposed material is sustainable brick, harmonising the three bottom-line principles, namely environmental, social and economic influences, to meet the goals of today while considering future effects.
In essence, the scenario modelling of indoor temperature and indoor air quality (CO2) is presented in the subsequent sections.

Scenario Modelling of Indoor Temperature

The indoor temperature is influenced by the indoor air temperature, the heat loss through the wall, the room volume and the heater. On the principle of energy conservation, the indoor temperature can therefore be stated as:

ρ·V·c_p·(dT_in/dt) = Q_h − U·A·(T_in − T_out)    (2)

where t is time (s), T_in is the indoor temperature (°C), U is the total wall heat-transfer coefficient (W/m²·°C), A is the wall area (m²), T_out is the outdoor temperature (°C), Q_h is the heater work rate (W), V is the room volume (m³), c_p is the heat capacity of air (J/kg·°C) and ρ is the air density (kg/m³).

The coefficient U in equation (2) can be calculated from the series thermal resistances of the wall (Heat transfer coefficient, 2019):

1/U = 1/h_in + L/k + 1/h_out

where L is the wall thickness (m), k is the thermal conductivity of the brickwork (W/m·°C), and h_in and h_out (W/m²·°C) are the convective heat-transfer coefficients of the fluids on the two sides of the wall. In this scenario it is assumed that h_in = h_out = h, the convective heat-transfer coefficient of air. Equation (2) can then be rearranged into first-order form (equation (3)) and, utilising the Laplace transform, expressed as equation (4); for constant T_out and Q_h the solution is the standard first-order response

T_in(t) = T_out + Q_h/(U·A) + (T_in(0) − T_out − Q_h/(U·A))·e^(−t/τ),  with τ = ρ·V·c_p/(U·A).

Scenario Modelling of Indoor Air Quality

Hui et al. (2006), Mui et al. (2008) and Wolkoff (2013) suggest that there are diverse kinds of contamination within the indoor environment, and it is impossible to control and monitor all indoor pollutants. Hence one predominant contaminant, the one requiring the greatest amount of fresh air to dilute it to an acceptable level, is typically chosen as the control signal for the control methodology studied; the indicator analysed here is CO2.
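The wall energy balance described above can be checked numerically. Below is a minimal forward-Euler sketch of the lumped model; all parameter values (wall area, room volume, convection coefficients) are illustrative assumptions, not the paper's figures:

```python
def overall_u(h, L, k):
    """Series thermal resistances: 1/U = 1/h_in + L/k + 1/h_out,
    assuming equal convection coefficients h_in = h_out = h."""
    return 1.0 / (1.0 / h + L / k + 1.0 / h)

def simulate_indoor_temperature(T0, T_out, Q_h, U, A, V,
                                rho=1.2, cp=1005.0, dt=1.0, steps=100000):
    """Forward-Euler integration of rho*V*cp*dT/dt = Q_h - U*A*(T - T_out)."""
    C = rho * V * cp  # thermal capacitance of the room air (J/degC)
    T = T0
    for _ in range(steps):
        T += dt * (Q_h - U * A * (T - T_out)) / C
    return T
```

With the heater off, the room relaxes to the outdoor temperature; with the heater on, it settles at T_out + Q_h/(U·A), consistent with the first-order response above.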
By controlling indoor CO2 at the ideal levels, the greater part of the other indoor air contaminants can be kept at acceptable levels. For a single well-mixed zone, the indoor CO2 concentration can be specified by the mass balance (Chao and Hu, 2004):

V·(dC_in/dt) = G + Q·(C_out − C_in)    (10)

where C_in and C_out are the indoor and outdoor CO2 concentrations, G is the indoor CO2 generation rate, Q is the ventilation rate and V is the room volume. Equation (10) is a first-order system whose time constant is τ_CO2 = V/Q.

The Smart Indoor System

Consequently, an intelligent proportional-integral-derivative (PID) controller based on a fuzzy logic controller (FLC) is proposed for overall control of the indoor environment quality. Fuzzy PID controllers can be used in place of plain PID controllers across modern and classical control applications. The scheme converts the error between the reference and the controlled (measured) variable into a command that is applied to the process actuator; for a sound design it is imperative to have data about the equivalent input-output transfer characteristics. The fuzzy PID controller here is designed for both temperature and indoor air quality, and temperature is taken as the control signal in order to describe the controller. The self-tuning fuzzy PID controller comprises two major divisions:
 the fuzzy logic controller, as shown in Figure 9, and
 the PID controller.
The fuzzy logic controller adjusts the PID parameters k_p, k_i and k_d online through fuzzy logic control rules, for improved PID control performance in diverse circumstances. The proposed fuzzy PID controller is an auto-adaptive design based on an incremental fuzzy logic controller.
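The single-zone CO2 balance in equation (10) can likewise be integrated numerically; its steady state is C_out + G/Q and its time constant V/Q. A sketch with illustrative parameter values (not the paper's):

```python
def simulate_co2(C0, C_out, G, Q, V, dt=1.0, steps=200000):
    """Forward-Euler integration of V*dC/dt = G + Q*(C_out - C).
    Steady state: C_out + G/Q; time constant: V/Q seconds."""
    C = C0
    for _ in range(steps):
        C += dt * (G + Q * (C_out - C)) / V
    return C
```

For example, with G/Q = 100 ppm of steady-state rise, a room starting at the outdoor level of 400 ppm converges to 500 ppm after several time constants.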
The PID controller itself performs the control of the indoor environment.

Figure 9. Fuzzy proportional-integral-derivative controller structure

The self-tuning of the fuzzy PID parameters consists mainly in discovering the fuzzy relation between the three PID parameters and the error e and error change ec. Based on the reference r and the measured system output y, e and ec are calculated. For the controlled objects to accomplish improved dynamic and steady-state performance, the fuzzy logic controller tunes the three parameters (k_p, k_i and k_d) through online fuzzy control rules. It is therefore necessary to know the function of each PID parameter; one can then establish the relation between the fuzzy inputs (e and ec) and the fuzzy outputs (k_p, k_i and k_d) and finally construct the fuzzy rule base. The proposed self-tuning fuzzy PID controller seeks to advance the control performance of a plain PID controller while retaining the basic PID structure, so no hardware of the primary control system need be altered for its execution. According to Mui et al. (2008) and Song et al. (2013), exploring all kinds of indoor air contaminants for overall air-quality control and monitoring is impractical. However, Persily (1997), the Committee of European Normalization (1998) and ASTM (2003) advise that measurement and analysis of the indoor CO2 concentration can be beneficial for understanding ventilation and indoor air quality efficiency.
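The online tuning of a gain such as k_p from e and ec can be sketched with triangular memberships and a weighted-average (Sugeno-style) rule evaluation. The three-set partition and the rule consequents below are illustrative assumptions, not the paper's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_gain_increment(e, ec, scale=0.1):
    """Map normalised error e and error change ec onto an increment for k_p:
    large |e| raises k_p for faster response, small |e| lowers it to limit
    overshoot.  Memberships N, Z, P partition the range [-1, 1]."""
    sets = {'N': (-2.0, -1.0, 0.0), 'Z': (-1.0, 0.0, 1.0), 'P': (0.0, 1.0, 2.0)}
    # Rule consequents for delta-k_p, indexed by (e-label, ec-label)
    table = {('N', 'N'):  1.0, ('N', 'Z'):  1.0, ('N', 'P'):  0.0,
             ('Z', 'N'):  0.0, ('Z', 'Z'): -1.0, ('Z', 'P'):  0.0,
             ('P', 'N'):  0.0, ('P', 'Z'):  1.0, ('P', 'P'):  1.0}
    num = den = 0.0
    for le, pe in sets.items():
        for lc, pc in sets.items():
            w = min(tri(e, *pe), tri(ec, *pc))  # rule firing strength
            num += w * table[(le, lc)]
            den += w
    return scale * (num / den) if den else 0.0
```

At each control step the increment would be added to k_p (and analogous tables used for k_i and k_d), which is the self-tuning loop the paper describes.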
The difficulties of indoor air-quality control are measurement errors and time delay. To attain the best system performance regarding small overshoot, interference resistance, system stability and response speed for the optimum sustainability goal, a PID controller combined with a back-propagation neural network based on a weight-update algorithm is proposed. Smart control systems for regulating the indoor air quality of the renewable air-conditioning are vital in achieving a healthy living environment for the building inhabitants. With an appropriate smart PID control technique, the indoor air quality can be significantly improved; the proposed smart control for renewable air-conditioning can accomplish exceptional control performance and improve occupants' comfort.

The Economic

An economy that delivers good living standards, the facilities that individuals need, and employment is essential for a sustainable society. A sustainable society needs continuing economic development rather than unconstrained economic growth. What happens currently is economic growth, frequently measured as growth in gross domestic product, where consumerist economies rely on economic growth to produce prosperity and jobs. For this reason, Aghbashlo and Rosen (2018) point out that the economy functions within a world of limited capacities and resources; over the long term, a constantly growing economy is not necessarily sustainable. Hence, the global economy must function more in a steady-state manner, with zero or little growth.
This fundamentally suggests that the economic assessment of the renewable smart air-conditioning must be framed in steady-state terms, irrespective of the outcome of the financial profitability analysis, in order to promote the system's economic benefits to society, since a perpetually growing economy is not necessarily sustainable.

Profitability Concept

Profitability is the capability to create profit from the entire business pursuits of a firm, company, enterprise, organisation or system. It demonstrates how the management can produce profit by utilising all the resources available in the marketplace. Harward and Upton (1961) suggested that productivity or efficiency is the capability of a given asset to make a return through its utilisation. Nevertheless, the term "efficiency" is not tantamount to the term "profitability". Profitability is basically an efficiency index and is viewed as a measure of efficiency; better management leads to better efficiency. Although profitability is an important standard for determining efficiency, the extent of profitability cannot be taken as conclusive proof of efficiency. Occasionally, acceptable profits can mask inefficiency and, conversely, a suitable degree of efficiency can go together with an absence of profit. The net profit figure basically discloses the balance between values received and values given. Operational productivity is just one of the influences on which an enterprise's profitability relies. Hence, profit is not fundamentally the key variable on which the foundation of an organisation's financial and operational productivity may be judged.
Profitability analysis is contemplated as one of the best methods to quantify the efficiency of the assets utilised and the efficiency of operation.

Using Economic Profitability Procedures

According to Oye et al. (2020), the cost of installing the system is determined on the following grounds:
 the equipment costs are ascertained by equipment category;
 costs are contingent on the specific place and the category of building involved;
 costs also depend on the level of salaries in the engineering domain in the specific environment.
Oye et al. (2020) report that a photovoltaic electricity supply of up to 2.5 kWh/d is sufficient for home-based requirements, and all components examined are within this specification. On their figures, a 2.5 kW solar system yields about 2.35 kWh per kW per day, i.e. roughly (2.35 kWh × 2.5 kW) = 5.9 kWh per day averaged through the year, which translates into about 2,140 kWh per year. The average cost of the photovoltaic solar air-conditioning is £3,998.62, including all system components and installation. Studies reveal that solar systems are becoming cheaper: twelve months ago a 2.5 kW system cost about three times the current price (Oye et al., 2020). The financial profitability of the solar air-conditioning is assessed by calculating the accounting rate of return, net present value, payback period and internal rate of return. The accounting rate of return is specified in equation (12):

ARR = average annual operating profit / initial investment    (12)

The net present value is expressed in equation (14):

NPV = Σ_{t=1..n} CF_t / (1 + r)^t − initial investment    (14)

where CF_t is the cash flow generated in year t, r is the rate of inflation and n is the number of years. Table 3 presents the system's annual income, operating profit and cash flows.
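The yield arithmetic quoted above (2.35 kWh per kW per day for a 2.5 kW array) can be reproduced directly; the function name is illustrative:

```python
def annual_yield_kwh(system_kw, specific_yield_kwh_per_kw_day=2.35, days=365):
    """Daily yield = specific yield x array size; annual = daily x 365.
    2.35 x 2.5 = 5.875 ~ 5.9 kWh/day, i.e. ~2,140 kWh/yr as quoted."""
    return system_kw * specific_yield_kwh_per_kw_day * days
```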
Higher net present values are advantageous, and the decision rule is: accept the project if NPV > 0 and reject it if NPV < 0. With the 5% rate used here, the NPV in equation (14) can be expressed with a discount factor of (1.05)^t per year. The payback period, expressed in equation (17), is the time needed to recover the original investment:

Payback period = years before full recovery + (unrecovered cost at the start of the recovery year / cash flow during the recovery year)    (17)

With an uneven cash flow, the unrecovered cost at the start of the recovery year equals the initial investment minus the cumulative cash flows by the end of year 9; the payback period is then calculated accordingly. The internal rate of return, calculated through the IRRCalculator.net software programme, is 10.19%. The analysis thus shows that the project is worth undertaking using a sustainable and renewable technology. Opinions on economic sustainability divide into strong and weak sustainability, where the "strong" view focuses more on the environment while the "weak" view focuses more on the financial assessment of the system. It is beneficial to emphasise the strong sustainability of the renewable smart air-conditioning for the overall welfare of society and constant economic development.

Remarks

Air-conditioning has been applied widely as a result of life-threatening summer conditions. Air-conditioning is found indoors almost everywhere, from offices to every room in the household, which adds to energy consumption and carbon emissions and thereby contributes to climate change. With so much exposure to air-conditioning, there has been a continuing debate as to whether air-conditioning has an adverse effect on the human body in addition to its energy and emissions burden; the answer to that question is yes. Studies also reveal that human health is considered the focal point when considering sustainability.
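The profitability measures named above (equations (12), (14) and (17)) have the standard forms sketched below; the cash-flow figures in the example are illustrative, not the paper's Table 3 values:

```python
def accounting_rate_of_return(avg_annual_profit, initial_investment):
    """ARR = average annual operating profit / initial investment."""
    return avg_annual_profit / initial_investment

def npv(rate, initial_investment, cash_flows):
    """Net present value: sum of CF_t / (1 + rate)^t minus the outlay."""
    return sum(cf / (1.0 + rate) ** t
               for t, cf in enumerate(cash_flows, start=1)) - initial_investment

def payback_period(initial_investment, cash_flows):
    """Years until cumulative (undiscounted) cash flow recovers the outlay,
    interpolating within the recovery year for uneven cash flows."""
    cumulative = 0.0
    for year, cf in enumerate(cash_flows, start=1):
        if cumulative + cf >= initial_investment:
            return (year - 1) + (initial_investment - cumulative) / cf
        cumulative += cf
    return None  # never recovered
```

For instance, an outlay of 100 recovered by cash flows of 40 per year gives a payback of 2.5 years, and at a 5% rate two cash flows of 60 give a positive NPV, i.e. an acceptable project under the rule above.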
According to several studies, inhabitants of buildings with constant use of air-conditioning have higher illness rates, along with higher energy consumption, than individuals in naturally ventilated buildings. Studies show that individuals who work in over-air-conditioned environments can experience constant fatigue and chronic headaches; the high energy consumption also creates financial disadvantages for those who pay for the power. Individuals who work or live in structures regularly pumped full of cool air can similarly experience persistent breathing difficulties and mucous-membrane irritation, which leaves them more vulnerable to contracting flu, colds and other illnesses associated with the use of air-conditioning. Air-conditioning may come in handy on a hot day; nonetheless, it is likewise a major offender in circulating micro-organisms and germs that cause breathing difficulties, while increasing energy consumption and carbon emissions. A recent study conducted at the Louisiana State Medical Centre found eight categories of mould residing in 22 of the 25 indoor spaces tested. Besides, residing in air-conditioned zones for a lengthy period can potentially cause respiratory problems of the eyes, nose and throat. Air-conditioners are well recognised for circulating airborne diseases such as Legionnaires' disease, a potentially deadly infection that gives rise to pneumonia and high fever. Air-conditioning likewise assists the spread of rhinitis, a disorder causing inflammation of the nasal mucous membrane. Indoor occupants are more likely to become dehydrated, owing to poor indoor air quality, in an area with unregulated air-conditioning than in areas where the air-conditioning is regulated.
Regulated air-conditioning can further be smart-controlled through an appropriate PID control technique to optimise the indoor air quality within the indoor environment. Unregulated air-conditioning draws humidity from the apartment, leaving inhabitants dehydrated and needing to drink water. Dehydration arising from air-conditioning use can cause migraines and headaches, as can sudden exposure to the sun or heat after lengthy exposure to air-conditioning. Likewise, where air-conditioning housings are not appropriately maintained, inhabitants are more susceptible to migraines and headaches. Unregulated exposure to air-conditioning can cause dry and itchy skin and eyes. Individuals with dry-eye symptoms are advised not to stay too long in air-conditioned zones, since this worsens the effects, while extreme exposure to air-conditioning alongside sun exposure can make the skin itchy and dry. Central air-conditioning systems are also well recognised for aggravating diseases that inhabitants may already be suffering from. Air-conditioning is infamous for increasing energy consumption and carbon emissions and thereby driving climate change; it also intensifies the indications of low blood pressure and makes pain management more problematic for individuals insistent on using their central air-conditioning. This paper has presented a sustainable theoretical assessment grounded in the principles of sustainability to challenge these problems.

Conclusion

Sustainability is regarded as a technique for industry to protect the environment. The promotion of sustainable practices typically pursues a balance between environmental, economic and social performance in project applications.
If we accept this, the connection between renewable energy and smart control for the sustainability of air-conditioning becomes clear: air-conditioning is of strong environmental significance and has high economic and social influence. Owing to increased awareness of environmental protection, the issue of energy consumption and carbon emissions from air-conditioning systems and the resultant health effects has gained worldwide attention. Unfolding a sustainable theoretical framework for renewable smart air-conditioning practices has been identified as a way forward in promoting economic and social improvement in the air-conditioning industry while minimising the effects on the environment and human health. In pursuit of diminishing these unfavourable effects on the environment and attaining sustainability in the industrial sector, the three principles of sustainability and the sustainable indicators (propositions and prospects) were introduced; they form the theoretical framework for unifying the principles of sustainability into the study of air-conditioning from the theoretical phase onward. The components of the photovoltaic solar air-conditioning were simulated in the Polysun software, and the results revealed that the system can provide the required yearly energy yield without compromise. The simulation results also underline the importance of using renewables as a source of clean energy in air-conditioning systems and the particular contribution they make to reducing energy consumption and carbon emissions. The system demonstrates the importance of using solar energy as an alternative to fossil fuel for powering air-conditioning systems, vis-à-vis combatting global warming and climate change, thereby promoting sustainable development.
Consequently, the indoor environment quality fundamentally influences inhabitants' wellbeing, productivity and comfort; significant factors of the indoor environment must therefore be properly controlled to satisfy the growing demand for everyday comfort. The proposed smart control of temperature and indoor CO2 performs well, since the indoor environment can be effectively regulated through the PID controller; the proposed smart control system can thus optimise thermal comfort and eliminate the health effects of the indoor environment. The economic assessment of the solar-based system demonstrates that significant savings can be achieved by utilising a sustainable and renewable technology, the photovoltaic solar air-conditioning; every financial assessment indicator showed that the project is worth undertaking. The photovoltaic solar air-conditioning was demonstrated to reduce energy consumption and carbon emissions and to save investment costs; it thereby meets the agenda of the three pillars of sustainability. As a result, the system offers development, stability and productivity towards greener solutions to the world's energy requirements.
Designating the preference of tram shelter as a part of transit-oriented development's concept within Kota Tua Jakarta using fuzzy logic

This research is part of a multi-year project, started last year, comprising literature and simulation studies of photovoltaic technology used in public transportation. In support of the primary research on implementing the Transit-Oriented Development concept within Kota Tua Jakarta, this study also analyses community preferences to designate the appropriate tram shelter within the historical area. Using fuzzy logic, the relevant location for a tram shelter within the historical area of Kota Tua Jakarta was analysed and proposed, together with other study results, as a design input. The research thus addresses several of the basic Transit-Oriented Development principles, such as transiting, connecting and shifting, to support the needs of the Kota Tua Jakarta area.

Introduction

One of the basic concepts of Transit-Oriented Development is connecting people easily from one place to another, and enabling activities within one area, so as to reduce the use of private vehicles [1]. Reference [2] also mentions that a historical area is the most significant area within a city: it has a unique character and usually becomes the identity of the city. One of the main problems within a historical area is the lack of utilities such as infrastructure. Bahri and Purwantiasning [1], in previous research, discussed the possibility of enhancing the quality of a historical area, particularly Kota Tua Jakarta, by providing a unique mode of public transportation within the area to connect people from one place to another. Their study also proposed an alternative solution to serve the local community, particularly visitors within Kota Tua Jakarta, by providing a distinctive tram within the area.
The previous study also proposed a possible tram route with ten tram stops (shelters) within the area of Kota Tua Jakarta. These ten tram shelters were proposed as main stops representing all the points of interest within Kota Tua Jakarta. As the next step of that research, the present study aims to propose the main halt (main tram stop or shelter) within Kota Tua Jakarta, using fuzzy logic as a decision-making tool; in this way, the proposed main halt can be decided on a sound basis. Many previous studies have applied fuzzy logic as a decision-making tool. For example, Santos et al. [3] used fuzzy logic in a system for selecting optimal and sustainable life-cycle maintenance and rehabilitation strategies for road pavements. Similarly, Stetter [4] showed that fuzzy logic is very helpful for decision making in a virtual actuator, allowing the accommodation of several possible faults, such as a slippery surface under one of the drive modules of an Automated Guided Vehicle. Liu et al. [5] described how to model the exit-selection behaviour of pedestrians, since this decision making plays an important part in the evacuation process. These three examples of research employing fuzzy logic for decision making show that the method can be applied to many fields. Daradkeh and Tvoroshenko [6] note that fuzzy logic has been regarded as a technology for making reliable decisions across a variety of needs.
This research aims to propose an appropriate tram shelter within Kota Tua Jakarta as part of implementing the Transit-Oriented Development concept. In line with the previous studies, it uses a fuzzy logic method to make a reliable decision on the preferred tram shelter. As stated above, a comfortable public transportation system is the central issue in implementing the Transit-Oriented Development concept. Bozzo et al. [7] support this view: today's human mobility demands require rapid responses from public transportation, which must provide dependable, comfortable, economically and environmentally sustainable services with high transport capacity. By delivering a tram concept as public transportation within Kota Tua Jakarta, this research supports the idea of comfortable public transportation that connects people from one place to another and from one activity to another. Research methods To obtain a practical location selection for the main tram halt (tram stop), this research was completed using a fuzzy-logic-based decision-making simulation implemented in MATLAB. Several Points of Interest (POIs) were designated in this research; the data for each POI are described in Table 1. From these data, a decision-making system was formulated with the variables and fuzzy sets used in this research (Table 2). The system was then fed with the input data from Table 1 to determine which POI received the highest recommendation. The POI with the highest recommendation was chosen as the main tram-shelter (tram-halt) location within Kota Tua Jakarta, and this main tram shelter should be integrated with the existing transportation system within Kota Tua Jakarta.
The main tram-shelter (tram-halt) location was determined through a decision-making simulation based on fuzzy logic, implemented in MATLAB. The variables and fuzzy sets used are given in Table 2. Results and discussion One of the Transit-Oriented Development concepts is connecting people easily from one place to another and from one activity to another. "Easily" here relates to distance and connectivity, as noted in previous studies [8-11]. Table 1 describes the data of each Point of Interest (POI) used as input variables of the fuzzy logic system: the distance, the number of visitors, the connectivity, and the availability of open space. The distance was calculated from the main train station, Stasiun Kota Jakarta (Stasiun Beos). The number of visitors of each POI was adopted from the annual statistical report of Kemendikbud Indonesia. Connectivity was calculated from the number of public transportation services available around the POI. The last input variable is the availability of open space around the POI, either inner or outer open space. The fuzzy logic system designed to determine the location of the main tram shelter is shown in Figure 1. A Mamdani-type Fuzzy Inference System (FIS) was used, with four input variables (distance, number of visitors, connectivity, and open space) and one output variable (recommendation), as given in Table 1. Each of the four input fuzzy variables has three fuzzy sets, so there are 3^4 = 81 fuzzy rules in total in this fuzzy system, as shown in Figure 4.
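The paper's FIS was built in MATLAB's Fuzzy Logic Toolbox and its actual membership functions are not reproduced in the text. As an illustration of how a single Mamdani rule fires (antecedents combined with min), here is a minimal Python sketch; all membership parameters below are hypothetical, not the study's fuzzy sets.

```python
def trap(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear ramps."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Hypothetical fuzzy sets: distance (km from Stasiun Kota) and yearly visitors.
DIST = {"near": (-1, 0, 0.3, 0.8),
        "medium": (0.3, 0.8, 1.2, 1.7),
        "far": (1.2, 1.7, 3.0, 4.0)}
VIS = {"low": (-1, 0, 50_000, 150_000),
       "high": (100_000, 300_000, 10**7, 10**7 + 1)}

def fuzzify(x, sets):
    """Membership degree of x in each labelled fuzzy set."""
    return {label: trap(x, *p) for label, p in sets.items()}

# One Mamdani rule: IF distance is near AND visitors is high
# THEN recommendation is strong. AND = min of the antecedent degrees.
def rule_strength(distance_km, visitors):
    d = fuzzify(distance_km, DIST)
    v = fuzzify(visitors, VIS)
    return min(d["near"], v["high"])

# Example: a POI 0.2 km from the station with 400,000 yearly visitors.
print(rule_strength(0.2, 400_000))  # -> 1.0 (rule fires at full strength)
```

In a full Mamdani system, each of the 81 rules would clip its output set by its firing strength, the clipped sets would be aggregated with max, and a centroid defuzzification would yield the crisp recommendation score.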
The results of the analysis using the fuzzy variables and fuzzy sets are summarized in Table 3. Conclusion To conclude, the authors find a strong possibility of implementing a unique tram service within Kota Tua Jakarta as one main element of the Transit-Oriented Development concept. To apply this service, the appropriate tram shelter (tram stop), particularly the main point, must be analysed. A fuzzy logic system was used to analyse the preferred main tram shelter using fuzzy variables and fuzzy sets, and the recommendation was obtained as the result of this fuzzy analysis. As a result, the Fatahillah Museum point is recommended as the main tram-shelter point and the starting point for the tram route within Kota Tua Jakarta. With this recommendation, the Transit-Oriented Development concept can hopefully be applied appropriately.
Sex and Ethnic Disparities during COVID-19 Pandemic among Acute Coronary Syndrome Patients : Introduction: Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) caused a global pandemic that emerged in 2019. During this period, significant disparities in hospitalization and mortality rates were observed, particularly in terms of ethnicity and sex. This study aims to examine the influence of sex and ethnicity on acute coronary syndrome outcomes during the global SARS-CoV-2 pandemic. Methods: This retrospective observational study analyzed adult patients hospitalized with a primary diagnosis of acute coronary syndrome in the United States in 2020. Primary outcomes included inpatient mortality and the time from admission to percutaneous coronary intervention (PCI). Secondary outcomes encompassed the length of stay and hospital costs. The National Inpatient Sample (NIS) database was used to identify the study population. Results: A total of 779,895 patients hospitalized with a primary diagnosis of acute coronary syndrome in 2020 and 935,975 patients in 2019 were included in this study. Baseline findings revealed that inpatient mortality was significantly higher in 2020 compared to 2019, regardless of sex and ethnicity (adjusted odds ratio (aOR) 1.20, 95% confidence interval (CI) 1.12–1.23, p-value < 0.001). Concerning primary outcomes, there was no difference in inpatient mortality between sexes in either 2019 or 2020 (STEMI: aOR 1.05, 95% CI 0.96–1.14, p-value 0.22; NSTEMI/UA: aOR 1.08, 95% CI 0.98–1.19, p-value 0.13). Regarding time from admission to PCI, the delay in NSTEMI/UA cases was found to be statistically significantly longer in female patients compared to males (mean difference 0.06 days, 95% CI 0.02–0.10, p-value < 0.01) and in African Americans compared to Caucasians (mean difference 0.13 days, 95% CI 0.06–0.19, p < 0.001).
In terms of the length of stay, female patients had a shorter length of stay compared to males (mean difference −0.22, 95% CI −0.27 to −0.16, p-value < 0.01). Conclusions: As acute coronary syndrome is an urgent diagnosis, a global pandemic has the potential to exacerbate existing healthcare disparities related to sex and ethnicity. This study did not reveal any difference in inpatient mortality, aligning with studies conducted prior to the pandemic. However, it highlighted significantly longer treatment times (admission to PCI) for NSTEMI/UA management in female and African American populations. These findings suggest that some disparities may have diminished during the pandemic year, warranting further research to confirm these trends in the years to come. Introduction The COVID-19 pandemic has presented unprecedented challenges to healthcare systems worldwide, illuminating underlying disparities in patient outcomes across diverse demographic groups. Amidst the pandemic, individuals with acute coronary syndrome (ACS) have encountered distinctive challenges, with emerging evidence suggesting differential impacts based on sex and ethnicity. In 2019, Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) was discovered, causing widespread COVID-19 infections that would result in a global pandemic. In the United States alone, the COVID-19 pandemic has been estimated to be responsible for 101 million cases and 1.1 million deaths [1]. Moreover, the pandemic has brought to the forefront known healthcare disparities specifically related to ethnicity and sex. The intersection of COVID-19 and ACS has raised critical concerns. Studies have highlighted not only the direct cardiovascular implications of COVID-19 infection but also the secondary effects of the pandemic on the management and outcomes of ACS patients. Factors such as delayed presentation to healthcare facilities,
altered treatment pathways, and variations in access to care have significantly influenced the landscape of ACS management during this crisis [2]. Studies have revealed that males and females experience varied susceptibility and severity levels, attributed to biological differences such as immune response variations. For instance, females often exhibit stronger immune activity, potentially contributing to their lower vulnerability to viral infections. Regarding ethnic groups, variations in COVID-19 susceptibility, access to healthcare resources, and cultural determinants have contributed to differential outcomes among various ethnic groups with ACS, warranting focused attention to address these inequalities [3]. The influence of ethnicity on COVID-19 outcomes has been analyzed in numerous studies, and the pandemic has shed light on ethnic disparities within the ACS patient population. Studies have found that African Americans and Hispanics are about twice as likely to be hospitalized due to COVID-19 compared to Caucasians [4]. This discrepancy also extends to mortality rates, with African Americans being one and a half times more likely to die from a COVID-19 infection compared to Caucasians [4]. With respect to sex, a study conducted during the initial stages of the pandemic found that men with COVID-19 infection had a higher likelihood of death compared to women, regardless of age [5].
The COVID-19 pandemic has exacerbated adverse outcomes for acute coronary syndrome (ACS) patients. Studies indicate increased morbidity and mortality, often due to delays in seeking medical help driven by fears of COVID-19 exposure [6,7]. This hesitation in seeking timely care affected various demographic groups, potentially worsening outcome disparities. Understanding how ethnicity and sex intersect with these delays is crucial to address health disparities and enhance care strategies for ACS patients during pandemics. There was a noticeable increase in the time before medical intervention during the early stages of the pandemic, likely aimed at reducing unwarranted COVID-19 exposure [5,8]. In this study, we aim to analyze the impact of sex and ethnicity on acute coronary syndrome outcomes during the COVID-19 pandemic. Specifically, we aim to evaluate two key metrics: inpatient mortality rates and the duration from admission to percutaneous coronary intervention (PCI) among ACS patients during this critical period. Study Design The methodology employed in this study was a retrospective observational analysis encompassing adult patients admitted with a primary diagnosis of acute coronary syndrome (ACS) within the United States throughout the year 2020. The study's primary objective was to compare the outcomes of ACS patients based on their sex and ethnicity during 2020 in contrast to 2019. The key endpoints under scrutiny were the rates of inpatient mortality and the duration from admission to PCI. Additionally, secondary endpoints included the duration of hospital stays and the overall cost of hospitalization. The study examined the independent variables sex and ethnicity while accounting for potential confounders such as age, hospital bed capacity, primary payer, hospital location, patient comorbidities assessed by the Charlson Comorbidity Index (CCI), hospital region, and teaching status.
Data Source and Sample Analysis was conducted using the National Inpatient Sample (NIS) database of the Healthcare Cost and Utilization Project (HCUP), a database created by the Agency for Healthcare Research and Quality (AHRQ). The NIS consists of discharge data from a 20% stratified sample of US hospitalizations, designed to be representative of all nonfederal acute care inpatient hospitalizations nationwide. Patient identification was carried out using the International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) coding system. Specific ICD-10-CM codes were sought for primary diagnoses including STEMI (I21, I211, I212), NSTEMI (I214, I222), and UA (I25, I200, I222). Patients with diagnoses of shock (R571, R578, R579, R6521) and those requiring mechanical support (T884XXD, T88DXXS, T884XXA, T884,5A1522G) were excluded from the analysis (Supplemental File). Statistical Analysis Statistical analysis was performed using STATA version 17.0 software [9]. Categorical variables were presented as percentages, while continuous variables were expressed as mean ± SD. Student's t-test was employed to compare continuous variables, while the chi-square test was used for categorical variables. Univariate regression analysis calculated unadjusted odds ratios for the study's outcomes. For further analysis, multivariate linear regression (for continuous outcomes) and logistic regression (for binary outcomes) were used to determine adjusted odds ratios (aORs). Model construction involved the inclusion of significant variables associated with the outcomes of interest, as detailed in Table 1. All p-values were two-sided, with 0.05 considered the threshold for statistical significance.
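To make the univariate step concrete, an unadjusted odds ratio with a Wald 95% confidence interval can be computed from a 2×2 table of outcome counts. The counts below are hypothetical illustrations, not values from the NIS data, and the study's actual adjusted estimates came from multivariate regression in STATA.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted OR and Wald 95% CI for a 2x2 table:
    a/b = events/non-events in group 1, c/d = events/non-events in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)       # lower CI bound
    hi = math.exp(math.log(or_) + z * se)       # upper CI bound
    return or_, lo, hi

# Hypothetical counts: 120/880 deaths/survivals vs 100/900 deaths/survivals.
or_, lo, hi = odds_ratio_ci(120, 880, 100, 900)
print(f"OR {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

If the CI spans 1.0, as here, the unadjusted association is not statistically significant at the 0.05 level.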
Baseline Demographics and Characteristics of Patient Population In this study, the comparison between patients hospitalized with acute coronary syndrome (ACS) in 2019 and 2020 revealed several notable characteristics. In 2019, the total number of patients amounted to 935,975, while in 2020 it was 779,895. The mean age of admitted patients was similar in both years, averaging 66.7 years in 2019 and 66.3 years in 2020. Gender distribution showed that 35% of the patients were female in both years. When considering ethnicity, Caucasians comprised the majority in both 2019 (73%) and 2020 (74%), followed by African Americans, Hispanics, Asians, Native Americans, and other ethnic groups, though there were slight variations in these proportions between the two years. Regarding comorbidities, the Charlson Comorbidity Index demonstrated a shift in scores. In 2019, 52% of patients had a score of 3 or higher, while in 2020 this percentage decreased to 47%. Common comorbidities such as hypertension, diabetes, and obesity showed marginal changes between the years, with hypertension being the most prevalent at around 42–43%, diabetes at 27–28%, and obesity at 22–24%. However, chronic kidney disease exhibited a decline from 15% in 2019 to 12% in 2020. Other conditions like alcohol abuse, peripheral vascular disease, and chronic pulmonary disease maintained relatively stable prevalence rates between the two years. In terms of hospital characteristics, the distribution among bed sizes (small, medium, large) and urban locations (rural, urban non-teaching) remained quite consistent between 2019 and 2020 (Table 1).
Inpatient Mortality among Different Sexes and Ethnicities Inpatient mortality for patients hospitalized with ACS did not differ between sexes during 2020, and likewise there was no difference in mortality based on sex in 2019 (2020: STEMI aOR 1.05, CI 0.96–1.14, p-value 0.22; NSTEMI/UA: aOR 1.08, CI 0.98–1.19, p-value 0.13) (Table 2). There was no difference in inpatient mortality based on ethnicity for patients admitted with ACS during 2020. In 2019, inpatient mortality was higher for patients of Asian or Pacific Islander descent admitted with ACS when compared to Caucasians (Table 2). For patients admitted with ACS, inpatient mortality was significantly higher in 2020 compared to 2019 regardless of sex and ethnicity (aOR 1.20; 95% CI 1.12–1.23; p-value < 0.001). Time from Admission to PCI Outcome among Different Sexes and Ethnicities Although the time from admission to PCI for NSTEMI and UA was longer for females when compared to males in both 2020 and 2019, the small effect size makes the result clinically insignificant (2020 NSTEMI/UA: mean difference 0.06 days, 95% CI 0.02–0.10, p-value < 0.01) (Table 3 and Figure S1). Similarly, African Americans with NSTEMI and UA were found to have a statistically significantly longer time from admission to PCI than Caucasians in both 2020 and 2019; again, the small effect size makes the results clinically insignificant (2020 NSTEMI/UA: mean difference 0.13 days, 95% CI 0.06–0.19, p < 0.001) (Table 3). Resource Utilization among Different Sexes and Ethnicities In 2020, female patients had a shorter average length of stay (LOS) for UA/NSTEMI admissions (NSTEMI/UA: mean difference −0.22, CI −0.27 to −0.16, p-value < 0.01). Length of stay did not differ by sex for STEMI admissions in either 2020 (p-value = 0.29) or 2019 (p-value = 0.55). In both years, patients who were Hispanic or Asian or Pacific Islander and admitted with STEMI had longer LOS compared to Caucasians (Supplementary Tables S1–S3).
Discussion Given the time-sensitive nature of ACS management, a global pandemic like the COVID-19 pandemic has the potential to worsen existing healthcare disparities in sex and ethnicity. The rationale behind this investigation is rooted in the recognition of the multifaceted impact of demographic variables, notably sex and ethnicity, on the trajectory and prognosis of ACS patients during the COVID-19 pandemic. Acknowledging the emerging data that indicate potential disparities in outcomes among different demographic groups, our study seeks to provide a comprehensive assessment of inpatient mortality rates and the timing of PCI initiation, shedding light on potential variations influenced by sex and ethnicity. This study showed that, first, during the COVID-19 pandemic in 2020 there was no difference in inpatient mortality with respect to sex and ethnicity in ACS patients. Prior to the pandemic, in 2019, inpatient mortality for ACS admissions among patients of Hispanic, Asian, or Pacific Islander ethnicity was higher compared to those of Caucasian ethnicity. Second, among patients admitted with NSTEMI and UA, the time from admission to PCI was significantly longer for females and African Americans when compared to males and Caucasians, respectively, in both 2019 and 2020, although the effect size was small. Third, female patients admitted with UA and NSTEMI in 2020 had a shorter length of hospital stay compared to male patients.
The inconsistency in the evaluation and management of suspected acute coronary events in females has been assessed considerably in the literature. Studies have revealed a higher mortality for women with acute coronary syndrome compared to men, particularly in the case of ST-elevation myocardial infarction [8,11,12]. The higher mortality in women may be explained in part by the higher prevalence of comorbid conditions, longer system delays to appropriate care, decreased use of guideline-directed medical therapy, and older age at presentation [13-16]. Numerous studies have compared inpatient mortality and survival rates among different ethnicities. In the Blue Cross Blue Shield of Michigan Cardiovascular Consortium (BCBS-MICC) study, patient ethnicity was not found to affect inpatient mortality in coronary artery disease [17]. Similarly, in an analysis of 12,555 acute myocardial infarction patients in New York City, investigators found no significant difference in survival rates among African Americans compared to Whites [18]. However, a retrospective analysis by Yong et al. of 689,238 hospitalizations for ACS from 2008 to 2011 in the United States found that patients of Asian descent had the highest inpatient mortality rates when compared to other ethnicities, while African American patients had the lowest inpatient mortality rates when admitted with STEMI and NSTEMI [19].
Our study attempted to determine whether the pre-existing sex and ethnicity disparities in the management of acute coronary syndrome were influenced by the COVID-19 pandemic. Inpatient mortality for patients admitted with ACS was significantly higher in 2020 compared to 2019 regardless of sex and ethnicity, and it is reasonable to infer that inpatient mortality in ACS during the pandemic was determined primarily by the clinical characteristics of patients. This contrasts with the pre-pandemic data from 2019, which showed increased inpatient mortality for ACS admissions among Asians and Hispanics compared to Caucasians. Notably, we found that African American and Hispanic ACS patients experienced slight delays in appropriate and timely interventions during the COVID-19 pandemic, although the effect size was small and likely not clinically significant. Multiple studies have attempted to analyze the impact of sex and ethnicity on timely curative intervention in the setting of acute coronary syndrome [18,20,21]. The ACTION Registry-GWTG study, which studied 46,245 STEMI patients in the United States, showed remarkable delays in reperfusion interventions specifically among Hispanic STEMI patients compared to Caucasians [20]. Another study, by Bradley et al., revealed significantly longer door-to-drug and door-to-balloon times for Hispanic and African American myocardial infarction patients compared to Caucasians [22]. A study based in China found that, compared to men, women with ACS were less likely to receive adequate initial treatment, including PCI. The current study focused on ACS admissions in the United States and included 779,895 patients during the first year of the COVID-19 pandemic, yielding higher power and a more precise estimate of the magnitude of the effect.
Limitations Our study leveraged one of the largest available databases in the United States, the NIS database, to conduct our research. However, inherent limitations accompany its use. Coding and documentation errors within the database raise concerns about how representative the study subjects are of the broader study population. Furthermore, the database's structure provides admission length in days but lacks granularity by not offering time in hours. It also lacks patient-specific data such as individual lab results, medications, and imaging outcomes, which could significantly impact our findings. To address potential biases from the observational nature of the database, we conducted a comprehensive multivariable analysis. Nevertheless, our study faced certain limitations. First, crucial patient-level data essential for analysis, including laboratory results, oncologic status, angiograms, coronary lesion descriptions, imaging studies, medications, and procedural details, were unattainable. Second, the database's administrative design, reliant on coding for diagnosing conditions and procedures, introduces a susceptibility to documentation errors. Third, the retrospective observational approach introduces the potential for selection and unmeasured biases; however, our efforts included a robust multivariable analysis to mitigate allocation bias. Finally, important information regarding out-of-hospital mortality, short-term non-hospitalized outcomes, and long-term results was unavailable within the NIS database.
Conclusions Our study during the COVID-19 pandemic found no differences in inpatient mortality among ACS patients based on ethnicity and sex. However, there was an overall increase in inpatient mortality in 2020 compared to 2019. Females with UA and NSTEMI had shorter hospital stays than males. No clinically significant delays in time to PCI were observed based on ethnicity or sex. Importantly, disparities seen before the pandemic in 2019, particularly increased inpatient mortality among Hispanic, Asian, or Pacific Islander patients with ACS, were not observed in 2020. Future research should explore the underlying factors contributing to these disparities and ways of further minimizing them. Table 1. Baseline demographics and characteristics of patients who had a primary diagnosis of ACS. ACS: acute coronary syndrome. Table 2. Inpatient mortality outcomes among ACS patients during the first year of the pandemic (2020) and the pre-pandemic year (2019). Ethnicity is presented in comparison to White/Caucasian patients, while female is presented in comparison to male patients. p-value ≤ 0.05 indicates significance. STEMI: ST-segment elevation myocardial infarction. NSTEMI: non-ST-segment elevation myocardial infarction. UA: unstable angina. Table 3. Admission-to-PCI time among NSTEMI/UA patients during the first year of the pandemic (2020) and the pre-pandemic year (2019). Ethnicity is presented in comparison to White/Caucasian patients, while female is presented in comparison to male patients. p-value ≤ 0.05 indicates significance.
Gravity influences bevacizumab distribution in an undisturbed balanced salt solution in vitro Purpose The effects of gravity on bevacizumab, and the recommended head position after intraocular bevacizumab injection, have not been reported. To evaluate the effect of gravity on bevacizumab in vitro, we added bevacizumab to the upper part of a test tube filled with balanced salt solution (BSS) and examined its distribution over time. Materials and methods Sixty-four test tubes were divided equally into two groups: group 1 (32, collected from the upper part of the tube) and group 2 (32, collected from the lower part of the tube). Each test tube was filled with 5 mL BSS before bevacizumab (1.25 mg/0.05 mL) was added, and then stored at 36°C. Bevacizumab concentration in 8 test tubes from each group was measured at 12, 24, 48, and 168 h using an enzyme-linked immunosorbent assay (ELISA) kit. Mann–Whitney and Jonckheere–Terpstra tests were used for statistical analysis. Results Bevacizumab concentration was significantly higher in Group 2 than in Group 1 at 12, 24, 48, and 168 h (P < 0.01 at each time point; Mann–Whitney test). The mean bevacizumab concentration tended to increase over time in Group 1 (P < 0.01; Jonckheere–Terpstra test) but tended to decrease in Group 2 (P < 0.01; Jonckheere–Terpstra test). Conclusions The significant differences in concentration between the upper and lower parts, even after a considerable amount of storage time, showed that bevacizumab did not immediately dissolve and diffuse evenly throughout the solution. More bevacizumab appeared to settle in the lower part of the tube than in the upper part because of gravitational force. However, the concentration difference between the upper and lower parts decreased as bevacizumab gradually diffused over time, indicating that the difference in concentration due to gravity was greatest at the beginning of bevacizumab injection.
Introduction Proliferative diabetic retinopathy (PDR) can cause vitreous hemorrhage, retinal detachment, and neovascular glaucoma, and can even lead to blindness [1]. Vitrectomy is a surgical treatment for restoring vision in PDR patients [2]. At the end of vitrectomy in PDR patients, bevacizumab (Avastin®; Genentech, San Francisco, CA, USA) is injected intraocularly as a relatively safe method to effectively reduce recurrent intraocular hemorrhage [3]. Moreover, intraocular bevacizumab injection is known to be a safe and effective method to treat vitreous hemorrhage occurring after vitrectomy in PDR patients [4]. For this reason, bevacizumab injection is often administered after vitrectomy in patients with PDR. We also typically administer an intraocular bevacizumab injection after vitrectomy in patients with PDR, and then observe under a surgical microscope whether the injected drug collects at the bottom of the eye. This led us to question whether bevacizumab dissolves and disperses rapidly and evenly throughout the vitreous chamber when injected into balanced salt solution (BSS)-filled eyes after vitrectomy. To date, there have been numerous studies on the half-life, clearance rate, and overall pharmacokinetics of intraocular bevacizumab injections after vitrectomy, but there have not been studies on the intraocular distribution by weight [5-7].
Hence, using an indirect method initially, we examined the distribution of bevacizumab over time following its addition to the upper part of a test tube filled with BSS. We aimed to provide insights into the intraocular distribution of bevacizumab by weight following post-vitrectomy injection, and to present our opinions about postoperative head position. Materials and methods Sixty-four test tubes were divided into two groups of 32 test tubes each. Each borosilicate glass test tube (diameter: 10 mm, length: 100 mm) was filled with 5 mL of BSS (BSS Plus, Alcon Laboratories, Fort Worth, TX). A 1-mL syringe with a 30-gauge needle was used to inject 1.25 mg/0.05 mL bevacizumab (Avastin; Genentech, San Francisco, CA, USA). With the syringe positioned vertically, needle downwards, 0.5 cm above the surface of the solution in each test tube, bevacizumab was added at a rate of two drops per second while carefully ensuring that the needle did not touch the tube's internal wall. Following the drug injection, the tubes were immediately covered and stored undisturbed in a CO2 incubator (MCO 175, Sanyo, Japan) at a temperature of 36.0°C and CO2 of 0.0%. The concentration was measured in eight tubes from each group after 12, 24, 48, and 168 h of storage. The solution was collected only once from each tube: the upper part was collected from one tube and the lower part from another. For the upper layer, the solution was collected 0.5 cm below the surface; for the lower layer, 0.5 cm above the floor. From the upper (Group 1) and lower (Group 2) parts of the tubes, 0.2 mL of solution was collected using a micropipette to avoid dispersion of the drug. Microcapillary tips (Denville Scientific, Metuchen, NJ) were used to collect the lower layer of the solution, while carefully minimizing disturbance of the upper layer or its mixture with the lower layer.
Bevacizumab's concentration was analyzed using an enzyme-linked immunosorbent assay (ELISA) kit (Protein Detector ELISA Kit; KPL, Inc., Gaithersburg, MD, USA), which was used immediately after calibration of the concentration according to the manufacturer's calibration guidelines. Statistical analysis was performed using IBM SPSS 21.0 software (SPSS Inc., Chicago, IL, USA). P-values < 0.05 were considered statistically significant. Results Bevacizumab concentrations in the samples collected from the 64 test tubes [Group 1 (32, from the upper part) and Group 2 (32, from the lower part)] were analyzed. No samples were lost or contaminated. Table 1 and Fig 1 show the concentration of bevacizumab over time in the upper and lower parts of the test tube following bevacizumab injection. The details of individual concentration measurements for each tube at each time point are shown in S1 Table. We compared the concentrations in Groups 1 and 2 at 12, 24, 48, and 168 h. A significant difference was observed between the two groups at all time points, with a higher bevacizumab concentration in Group 2 than in Group 1 (12 h, P < 0.01; 24 h, P < 0.01; 48 h, P < 0.01; and 168 h, P < 0.01 by Mann-Whitney test). The mean changes in bevacizumab concentration in Group 1 and Group 2 were calculated. The bevacizumab concentration in Group 1 showed a significant increasing trend over time (P < 0.01 by Jonckheere-Terpstra test), while Group 2 showed a significant decreasing trend (P < 0.01 by Jonckheere-Terpstra test). The difference in bevacizumab concentration between the two groups showed a gradually decreasing pattern over time, but this decrease was not statistically significant (P = 0.275 by Jonckheere-Terpstra test). Discussion This study showed that the bevacizumab concentration in the lower part of the tube was significantly higher than that in the upper part of the tube from 12 to 168 h post injection, even when the drug was injected from above the surface.
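The pairwise comparisons reported above (Mann-Whitney tests between the upper and lower layers at each time point) can be sketched as follows; the concentration values below are hypothetical placeholders for illustration, not the study's measurements.

```python
# Hypothetical upper-part (Group 1) vs lower-part (Group 2) bevacizumab
# concentrations at a single time point; units are arbitrary. These numbers
# are illustrative only -- they are not the measurements reported here.
from scipy.stats import mannwhitneyu

group1_upper = [2.1, 2.4, 1.9, 2.2, 2.0, 2.3, 1.8, 2.5]
group2_lower = [9.8, 10.4, 9.1, 11.0, 10.2, 9.6, 10.8, 9.9]

u_stat, p_value = mannwhitneyu(group1_upper, group2_lower,
                               alternative="two-sided")
print(f"U = {u_stat}, P = {p_value:.5f}")
```

With n = 8 per group and no ties, SciPy computes an exact P value; P < 0.05 would be read, as in the text, as a significant concentration difference between the layers.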
The significant differences in concentration between the upper and lower parts, even after a considerable amount of time had passed, showed that bevacizumab does not dissolve immediately and diffuse evenly throughout the solution. It appears that more bevacizumab settles in the lower part of the tube relative to the upper part because of gravitational force. Furthermore, the mean concentration of bevacizumab tended to significantly increase in Group 1 over time, but tended to significantly decrease in Group 2, indicating that the concentration difference between the upper and lower parts decreased as bevacizumab gradually diffused over time. Thus, this indicates that the difference in concentration due to the effect of gravity is most pronounced at the beginning of bevacizumab injection. Several factors affect the change in bevacizumab concentration when it is injected directly into the vitreous chamber, including convection currents, diffusion, temperature, and volume. Jooybar et al. reported that during intravitreal injection, the position, needle size, and injection speed could cause differences in the distribution of the injected drug within the vitreous chamber [8]. In the present study, the samples were stored close to body temperature. The test tubes were filled with 5 mL of BSS, which is equivalent to the vitreous volume. A 0.05-mL volume of bevacizumab, consistent with the amount used for intraocular injection, was injected using a needle of constant size and at a constant injection speed. Under these conditions, we observed a significant difference between the upper and lower parts of the mixture over a long period, which suggests that similar results may occur in clinical practice. Although no research on the effect of gravity on bevacizumab has been conducted to date, previous studies have reported the effects of head position and gravity on the intraocular injection of other drugs. Lim et al.
administered intravitreal injections of gentamicin, which is heavier than BSS, in rabbits that had undergone vitrectomy. The eyeballs were collected after keeping the rabbits in a fixed position for 30 min, and then evaluated. The study reported significant injuries to the inferiorly located retinal tissue and indicated that gravity affects intravitreal injection of gentamicin. Based on these results, they recommended that patients be placed in an appropriate position during intravitreal injection to minimize foveal damage [9]. Jaissle et al. reported that gravity could lead to the accumulation of crystals in the inferior part of the vitreous humor following intraocular triamcinolone injection. The study showed that deposition could occur at the posterior pole of the retina depending on the patient's head position, and this form of deposition could be more likely after vitrectomy [10]. Although confirmation requires additional animal experiments or in vivo clinical trials, our study demonstrated that bevacizumab reached a higher concentration on the inferior side because of gravity. We believe similar considerations apply to intraocular injection of bevacizumab after vitrectomy. For example, if the patient is asked to maintain a supine position after injection, a higher concentration of the drug could be delivered to the fovea. In case of a problem in the anterior eye, such as neovascular glaucoma, the patient could be asked to maintain a prone position after injection. A limitation of this study is that the experimental conditions were not identical to real-world clinical situations. The surface area, length, and shape of the tubes used in this study differ from those of an actual vitreous chamber. Moreover, patients are not in a completely fixed, stationary position in real-world conditions, and their posture or movement could cause the solution to mix.
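A rough order-of-magnitude calculation helps explain why purely diffusive mixing over centimeter scales is slow. The hydrodynamic radius (≈5 nm, typical for an IgG antibody such as bevacizumab) and water-like viscosity used below are generic literature-scale assumptions, not values from this study.

```python
import math

# Stokes-Einstein estimate of the diffusion coefficient, D = k_B*T / (6*pi*eta*r).
# r and eta are generic assumptions, not measurements from this study.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 309.15           # about 36 degrees C, in kelvin
eta = 0.7e-3         # viscosity of water near body temperature, Pa*s (assumed)
r = 5e-9             # hydrodynamic radius of an IgG antibody, m (assumed)

D = k_B * T / (6 * math.pi * eta * r)   # diffusion coefficient, m^2/s
L = 0.05                                # length scale of the solution column, 5 cm
t_diff = L ** 2 / (2 * D)               # characteristic 1-D diffusion time, s

print(f"D ~ {D:.1e} m^2/s, diffusion time over 5 cm ~ {t_diff / 86400:.0f} days")
```

Under these assumptions, the characteristic diffusion time over a few centimeters comes out on the order of months, consistent with the persistent concentration gradient observed between the layers in the absence of stirring or convection.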
In addition, the surrounding tissues could affect the absorption of the drug, resulting in a relatively smaller gravity-induced concentration gradient. Furthermore, errors in the measurement of bevacizumab concentration due to inevitable mixing of samples while collecting a relatively small amount of sample (0.2 mL) from each layer should be taken into consideration. As the sample was collected by passing through the upper layer of the solution, we cannot completely rule out the possibility that this method may have affected the lower layer's bevacizumab concentration, even though microcapillary tips were used carefully. However, because the upper layer was found to have the lower bevacizumab concentration, we believe it is very unlikely that this sampling method affected our finding that the lower layer's bevacizumab concentration was statistically significantly higher. Nevertheless, given that the patient must rest in a fixed position after vitrectomy, we believe that bevacizumab will be present at a higher concentration in the inferior part of the vitreous chamber, at least in the early stage. Further studies on the effect of gravity on bevacizumab, including model-eye studies, animal experiments, and clinical trials, are required in the future. The results of the present in vitro distribution study can lay the foundation for future studies. In addition, further research should be conducted to verify whether this gravity-dependent distribution is evident for intraocular bevacizumab injection in patients who have not undergone vitrectomy. This would help determine the optimal postoperative head position for the vast number of patients who undergo intraocular bevacizumab injection worldwide. Supporting information S1 Table. The
Antidepressant-like Effect of Tetrahydroisoquinoline Amines in the Animal Model of Depressive Disorder Induced by Repeated Administration of a Low Dose of Reserpine: Behavioral and Neurochemical Studies in the Rat Animal models are widely used to study antidepressant-like effect in rodents. However, it should be mentioned that pharmacological models do not always take into account the complexity of the disease process. In the present paper, we demonstrated that repeated but not acute treatment with a low dose of reserpine (0.2 mg/kg i.p.) led to a pharmacological model of depression which was based on its inhibitory effect on the vesicular monoamine transporter 2, and monoamines depleting action in the brain. In fact, we observed that chronic treatment with a low dose of reserpine induced a distinct depressive-like behavior in the forced swim test (FST), and additionally, it produced a significant decrease in the level of dopamine, noradrenaline, and serotonin in the brain structures. 1,2,3,4-Tetrahydroisoquinoline (TIQ) and its close methyl derivative, 1-methyl-1,2,3,4-tetrahydroisoquinoline (1MeTIQ) are exo/endogenous amines present naturally in the mammalian brain which demonstrated a significant antidepressant-like effect in the FST and the reserpine model of depression in the rat. Both compounds, TIQ and 1MeTIQ, administered chronically in a dose of 25 mg/kg (i.p.) together with reserpine completely antagonized reserpine-produced depression as assessed by the immobility time and swimming time. Biochemical data were in agreement with behavioral experiments and demonstrated that chronic treatment with a low dose of reserpine in contrast to acute administration produced a significant depression of monoamines in the brain structures and impaired their metabolism. These neurochemical effects obtained after repeated reserpine (0.2 mg/kg i.p.) in the brain structures were completely antagonized by joint TIQ or 1MeTIQ (25 mg/kg i.p.) 
administration with chronic reserpine. A possible molecular mechanism of action of TIQ and 1MeTIQ responsible for their antidepressant action is discussed. On the basis of the presented behavioral and biochemical studies, we suggest that both compounds may be effective for the therapy of depression in the clinic as new antidepressants which, when administered peripherally, easily penetrate the blood–brain barrier, and which, as endogenous compounds, may not have adverse side effects. Introduction In recent years, depression has been recognized as a major public health problem. Understanding how to prevent and treat depression is, therefore, an urgent subject. It is well known that monoamine neurotransmitters, such as dopamine (DA), noradrenaline (NA), and serotonin (5-HT), in the central nervous system play a key role in the pathophysiology of depression (Cantello et al. 1989; Chan-Palay and Asan 1989; Colpaert 1987; Mayeux et al. 1984; Elhwuegi 2004). However, abnormalities in monoaminergic neurotransmission are associated with a number of neurological disorders, including Parkinson's disease (PD) and schizophrenia. Although the mechanism provoking depression has not been clearly elucidated, oxidative stress associated with the generation of reactive oxygen species (ROS) may be one of the main causes in the molecular processes underlying this disease. The endogenous generation of ROS results from the metabolism of monoamines in the cytosol and the auto-oxidation of monoamines. Physiologically, neurons have many endogenous mechanisms to maintain health and protect against degeneration. The vesicular monoamine transporter 2 (VMAT2) is one such custodian that functions to regulate the cytosolic environment of neurons, protecting them from endogenous and exogenous toxins (Uhl 1998; Miller et al. 1999).
Localized on vesicular membranes in neurons, VMAT2 acts to accumulate cytosolic monoamines in synaptic vesicles after they have been synthesized from their precursors for regulated exocytotic release, as well as after their reuptake from the synaptic cleft into the neuron (Surratt et al. 1993). The monoamines, particularly DA and NA, have the ability to undergo spontaneous oxidation in the cytosol, which is potentially damaging to cellular structures (Graham 1978; Antkiewicz-Michaluk et al. 2006; Wąsik et al. 2009). Thus, the level of VMAT2 expression plays an important role in nerve cell safety and strongly influences cellular susceptibility to oxidation (Liu et al. 1992). In fact, in VMAT2-deficient mice the striatal DA level was reduced by 85 %, with a concomitant reduction in its metabolites, DOPAC and HVA. In addition, several markers of oxidative stress and damage were observed in the VMAT2-deficient mice. Moreover, it was found that disruption of VMAT2 led to depressive-like phenotypes (Ziemssen and Reichmann 2007; Taylor et al. 2009). Reserpine is a vesicular monoamine re-uptake blocker, which depletes monoamines in the brain and produces a depression-like syndrome in animals (Kandel 2000; Nagakura et al. 2009; Rojas-Corrales et al. 2004). Thus, it seems to be an ideal model for screening potential antidepressants. In the present study, we analyzed the antidepressant potential of endogenous substances from the tetrahydroisoquinoline group: 1,2,3,4-tetrahydroisoquinoline (TIQ) and its close methyl derivative, 1-methyl-1,2,3,4-tetrahydroisoquinoline (1MeTIQ; Fig. 1). Both these compounds are members of the TIQ family, widespread in plant, animal, and human brains (McNaught et al. 1998; Rommelspacher and Susilo 1985).
Among several endogenous TIQs, 1MeTIQ has a special position as a neuroprotective compound with antiparkinsonian potential, since it was demonstrated to reverse bradykinesia induced by 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) or 1-benzyl-1,2,3,4-tetrahydroisoquinoline (1BnTIQ) (Makino et al. 1990; Kotake et al. 1995; Tasaki et al. 1991). Both TIQ and 1MeTIQ, in contrast to other TIQs (e.g., 1BnTIQ and salsolinol), inhibited MAO A and B activities at micromolar concentrations (Patsenka and Antkiewicz-Michaluk 2004). The compounds had antioxidant properties, as indicated by the ability of TIQ and 1MeTIQ to inhibit free radical formation and to abolish H2O2 generation from DA via the Fenton reaction (for review see Singer and Ramsay 1995; Antkiewicz-Michaluk et al. 2006). These results demonstrate that TIQ and 1MeTIQ are MAO inhibitors and possess intrinsic antioxidant properties. In light of these observations, the question arises whether TIQ and 1MeTIQ may have an antidepressant effect. Both compounds easily penetrate into the brain through the blood-brain barrier, and their neuroprotective properties might be relevant from the clinical point of view. Additionally, these compounds have not yet been investigated in the context of their antidepressant properties in the reserpine model of depression. In the present paper, we examined the behavioral and neurochemical effects of acute and repeated treatment with a low dose of reserpine, and then we evaluated the antidepressant-like effect of the investigated compounds, TIQ and 1MeTIQ, in reserpinized rats as a model of depression. We used the behavioral forced swim test (FST) to examine the antidepressant properties of TIQ and 1MeTIQ. The FST is a test with high predictive validity for antidepressant efficacy in human depression. Recently, a behavior sampling technique was developed that scores individual response categories, including immobility, swimming, and climbing (Detke et al. 1995).
Although all antidepressant drugs reduce immobility time in the FST, at least two distinct active behavioral patterns are produced by pharmacologically selective antidepressant drugs (Borsini and Meli 1988). Serotonin-selective re-uptake inhibitors increase swimming behavior, while drugs acting primarily to elevate extracellular levels of NA or DA increase climbing behavior (Borsini 1995; Detke et al. 1995; Detke and Lucki 1996). Additionally, the locomotor activity test was used to check the motor function of reserpinized rats. In the second part of the study, in addition to the behavioral tests, we also carried out ex vivo neurochemical studies in the rat brain structures [ventral tegmental area (VTA), nucleus accumbens, and hypothalamus] to determine the levels of monoamines and their metabolites, the rate of monoamine metabolism, and the indices of neuronal activity. Animals Behavioral tests were carried out on male Wistar rats (Charles River) with an initial body weight of 230-240 g (about 7 weeks old). The animals were kept under standard laboratory conditions with free access to laboratory food and tap water, at a room temperature of 22 °C with an artificial day-night cycle (12/12 h, light on at 7 a.m.). All the procedures were carried out in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were granted an approval from the Bioethics Commission as compliant with Polish Law. The experimental protocols were approved by the Local Bioethics Commission of the Institute of Pharmacology, Polish Academy of Sciences in Kraków. Drugs 1,2,3,4-Tetrahydroisoquinoline hydrochloride (TIQ) was obtained from Sigma-Aldrich, USA; 1MeTIQ was synthesized in the Department of Drug Chemistry, Institute of Pharmacology, Polish Academy of Sciences, Kraków, Poland; the purity of the compound was verified by measurement of the melting point, and homogeneity was assessed on a chromatographic column. TIQ and 1MeTIQ were dissolved in sterile 0.9 % NaCl solution.
The chemical structures of TIQ and 1MeTIQ are shown in Fig. 1. Reserpine (Sigma-Aldrich, USA) was suspended in 1 % Tween 80. The drugs were injected in a volume of 4 ml/kg. Treatments In order to check the pro-depressive effect of reserpine, the experimental protocol was divided into two main parts. In the first part of the study (reserpine model of depression), we analyzed reserpine-induced depressive disorder after a single and repeated administration (once daily for 14 days) of reserpine (0.2 mg/kg i.p.) in a variety of behavioral tests (forced swim test; locomotor activity: travelled distance and rearing times) and biochemical tests (the concentration of the monoamines DA, NA, and serotonin in the brain structures). In order to evaluate the duration of the obtained reserpine effect, the tests were carried out 120 min after the last injection. In the second part of the study, we evaluated the antidepressant-like effect of the investigated TIQs. For this purpose, TIQ and 1MeTIQ in a dose of 25 mg/kg i.p. were administered chronically (14 days), 30 min before each reserpine injection (0.2 mg/kg i.p.), and their effects on the reserpine-induced depressive-like disorder were investigated in the behavioral and biochemical tests 120 min after the last dose of reserpine. The control group received 1 % Tween 80 chronically. Immediately after the end of the behavioral tests, the rats were killed by decapitation, and different brain structures (VTA, nucleus accumbens, and hypothalamus) were dissected for later neurochemical studies that assessed the metabolism of monoamines by high-performance liquid chromatography (HPLC) with electrochemical detection (ED). The experiments were carried out between 10 a.m. and 4 p.m. Each experimental group consisted of 6-8 rats. Behavioral Studies The FST Procedure The studies were carried out on rats and were based on the method of Porsolt et al. (1978).
All the animals were individually tested in the FST on two consecutive days with one session per day. On the first day, the rats were individually placed in non-transparent plastic cylinders (diameter: 23 cm, height: 50 cm) containing 30 cm of water maintained at 25-26 °C. They were allowed to swim for 15 min before being removed (pre-test session). After that, the animals were dried and returned to their home cages. The procedure was repeated 24 h later, and the time of the escape-oriented behavior of the rats was recorded (5-min test session). The observed behavioral parameters (in order of priority) were: time spent floating in water (immobility), swimming, and struggling (climbing). According to Detke et al. (1995), immobility is described as the behavior of the rat when it makes only the movements necessary to keep its head above the water. In this case, animals can make certain slight swimming movements in order to remain afloat. Climbing is defined as vigorous movements of all four limbs, with the front paws breaking against the wall of the cylinder. During swimming, rats make coordinated and sustained movements (more than necessary) with all four limbs, usually traveling around the interior of the cylinder, but without breaking the surface of the water with the forelimbs. Water was changed between subjects. The FST was performed 120 min after acute and chronic (14 days) administration of reserpine (0.2 mg/kg i.p.). In the combined treatment groups, TIQ and 1MeTIQ (25 mg/kg i.p.) were administered chronically, 30 min before each dose of reserpine. Locomotor Activity The locomotor activity was measured in actometers (Opto-Varimex activity monitors, Columbus Instruments, USA) linked on-line to an IBM-PC compatible computer. Each cage (43 × 44 × 25 cm) was surrounded with a 15 × 15 array of photocell beams located 3 cm above the floor surface.
Interruptions of these photocell beams were counted as a measure of horizontal and vertical locomotor activity [Neurotox Res (2014) 26:85-98]. Horizontal locomotor activity was defined as the travelled distance (in cm), and the vertical activity as rearing times (in seconds). Locomotor activity was analyzed using the Auto-Track Software Program (Columbus Instruments, USA) and recorded in 15-min intervals for 60 min. Locomotor activity was measured 120 min after acute and chronic administration (14 days) of reserpine (0.2 mg/kg i.p.). Neurochemical Studies Ex Vivo: Monoamine Metabolism in Rat Brain Structures The animals were killed by decapitation after the end of the behavioral experiments. The brains were rapidly removed and dissected on an ice-cold glass plate. After decapitation, the VTA, nucleus accumbens, and hypothalamus were taken and immediately frozen on solid CO2 (−70 °C) until used for biochemical assays. DA and its metabolites (the intraneuronal 3,4-dihydroxyphenylacetic acid, DOPAC; the extraneuronal 3-methoxytyramine, 3-MT; and the final metabolite, homovanillic acid, HVA), NA and its main extraneuronal brain metabolite, normetanephrine (NM), and serotonin (5-HT) and its intraneuronal metabolite, 5-hydroxyindolacetic acid (5-HIAA), were assayed by means of high-performance liquid chromatography (HPLC) with electrochemical detection (ED). The tissue samples were weighed and homogenized in ice-cold 0.1 M trichloroacetic acid containing 0.05 mM ascorbic acid. After centrifugation (10,000×g, 5 min), the supernatants were filtered through RC58 0.2-µm cellulose membranes (Bioanalytical Systems, West Lafayette, IN, USA). The HP 1050 chromatograph (Hewlett-Packard, Golden, CO, USA) was equipped with Hypersil BDS-C18 columns (4 × 100 mm, 3 µm). The mobile phase consisted of 0.05 M citrate-phosphate buffer, pH 3.5; 0.1 mM EDTA; 1 mM sodium octyl sulfonate; and 3.5 % methanol. The flow rate was maintained at 1 ml/min.
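Quantification in an HPLC-ED setup of this kind typically compares sample peak areas with standards run on the same day. A minimal one-point external-standard sketch follows; the function name, peak areas, and standard concentration are all hypothetical.

```python
# One-point external-standard calibration: a standard of known concentration
# run on the day of analysis gives a response factor that converts sample
# peak areas to concentrations. All names and numbers here are hypothetical.

def concentration_from_area(sample_area: float,
                            standard_area: float,
                            standard_conc: float) -> float:
    """Assumes a linear detector response through the origin."""
    response_factor = standard_conc / standard_area
    return sample_area * response_factor

# Hypothetical example: a 500 ng/g DA standard gives a peak area of 1.25e6,
# and a tissue sample gives a peak area of 9.0e5.
da_conc = concentration_from_area(sample_area=9.0e5,
                                  standard_area=1.25e6,
                                  standard_conc=500.0)
print(f"DA = {da_conc:.0f} ng/g wet tissue")
```

In practice, a multi-point calibration curve is preferable; the one-point form above simply illustrates the same-day standard comparison described in the methods.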
DA, serotonin, NA, and their metabolites were quantified by peak-area comparisons with standards run on the day of analysis (ChemStation, Hewlett-Packard software computer program). Calculations and Statistics The data from the behavioral and neurochemical studies were analyzed by means of a one-way or two-way analysis of variance (ANOVA), followed, when appropriate, by Duncan's post hoc test. The data were considered statistically significant when P < 0.05. The total catabolism rate for DA was assessed from the ratio of the concentration of the final DA metabolite, HVA, to the DA concentration, and expressed as the catabolic rate index. Behavioral Studies The Effect of Acute and Chronic Administration of Reserpine on the FST Carried out 120 min After the Last Injection Chronic but not acute administration of reserpine (0.2 mg/kg i.p.) produced pro-depressive activity and significantly increased the immobility time in the FST in rats, F2,18 = 2.66; P < 0.05 (Fig. 2a). The one-way ANOVA showed a significant decrease (≈25 %) in swimming activity after chronic reserpine, F2,18 = 2.69; P < 0.05, and no change in climbing (Fig. 2b, c). The Locomotor Activity Test Both acute and chronic administration of reserpine in a low dose (0.2 mg/kg i.p.) produced a significant decrease in the horizontal (travelled distance in cm) and vertical (rearing time in seconds) exploratory locomotor activity of rats (P < 0.001) during the first 30 min after the start of the measurement of motor activity. At the later intervals of 45 and 60 min, no differences in motor activity were detected between the reserpine groups and the control (Fig. 3a, b). The Effect of Chronic Administration of TIQ and 1MeTIQ on Reserpine-Evoked Depressive-like Behavior in the FST in the Rat Chronic administration (14 days) of TIQ and 1MeTIQ in a dose of 25 mg/kg i.p. together with reserpine (0.2 mg/kg i.p.) produced antidepressant-like activity and completely antagonized the pro-depressive effect of chronic reserpine (Fig. 4a, b).
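The catabolic rate index defined in the Calculations and Statistics section is simply the ratio of the HVA concentration to the DA concentration. A small helper can make this explicit; the tissue values below are hypothetical illustrations, not the study's data.

```python
# Catabolic rate index for dopamine: [HVA] / [DA]. The helper and the tissue
# concentrations below are hypothetical illustrations, not the study's data.

def catabolic_rate_index(hva: float, da: float) -> float:
    """Total DA catabolism rate index = HVA concentration / DA concentration."""
    if da <= 0:
        raise ValueError("DA concentration must be positive")
    return hva / da

# Hypothetical nucleus accumbens values, in ng/g wet tissue:
control_index = catabolic_rate_index(hva=450.0, da=9000.0)
reserpine_index = catabolic_rate_index(hva=400.0, da=5800.0)
print(f"control: {control_index:.3f}, chronic reserpine: {reserpine_index:.3f}")
```

A higher index alongside a lower DA level would be read as accelerated DA turnover, the pattern the discussion attributes to reserpine's block of vesicular storage.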
The one-way ANOVA showed a significant effect of treatments, F3,21 = 9.75; P < 0.0003, and Duncan's post hoc test revealed an increase in the immobility time for reserpine alone versus the control group (P < 0.05), and its significant decrease in both combined groups: TIQ + reserpine (P < 0.01) and 1MeTIQ + reserpine (P < 0.05) (Fig. 4a). Similarly, a significant decrease in the swimming time in the reserpine group was completely antagonized by TIQ and 1MeTIQ in the combined treatment groups, F3,21 = 4.47; P < 0.01 (Fig. 4b). The climbing time was significantly increased only in the combined treatment group 1MeTIQ + reserpine, F3,21 = 3.28; P < 0.05 (Fig. 4c). Neurochemical Studies The Comparison of a Single and Repeated Administration of a Low Dose of Reserpine on the Concentration of Monoamines: DA, NA, and Serotonin in the Nucleus Accumbens and Hypothalamus The one-way ANOVA showed a significant effect of chronic reserpine administration on the DA level, F2/15 = 6.81, P < 0.002, and the NA level, F2/15 = 3.39, P < 0.03, in the nucleus accumbens. Duncan's post hoc test indicated that chronic treatment with reserpine, in contrast to a single injection, significantly decreased the level of DA (by ≈35 % vs. control; P < 0.01) and NA (about 50 % of control; P < 0.05). Additionally, the one-way ANOVA also demonstrated a significant effect of chronic reserpine on DA, F2/15 = 3.83, P < 0.02; NA, F2/15 = 26.94, P < 0.0001; and serotonin, F2/15 = 16.85, P < 0.0007, concentrations in the hypothalamus. Duncan's post hoc test revealed a significant decrease in all monoamines after chronic reserpine, with no effect of a single injection: DA (about 40 % of control, P < 0.05), NA (30 % of control, P < 0.02), and serotonin (about 35 % of control, P < 0.01) (Table 1). Fig. 2 caption: the effect of acute and repeated administration of a low dose of reserpine on the FST in the rat; reserpine (0.2 mg/kg i.p.) was administered acutely or chronically, once daily for 14 days.
Fig. 2 caption, continued: the control group received 1 % Tween 80 chronically; the FST was carried out 120 min after the last dose of reserpine; data are the mean ± SEM, analyzed by one-way ANOVA followed, when appropriate, by Duncan's post hoc test; *P < 0.05 versus control group. (Table 2). 3-Methoxytyramine The one-way ANOVA revealed a significant effect of treatment on the 3-MT concentration in the VTA (F3/20 = 7.83; P < 0.001) and the hypothalamus (F3/20 = 4.41; P < 0.01), but not in the nucleus accumbens (F3/20 = 1.06; NS). Duncan's post hoc test demonstrated that TIQ and 1MeTIQ administered together with reserpine produced a significant increase in the 3-MT concentration versus the control group in the VTA and hypothalamus (from 50 % up to 100 % of control, respectively) (Table 2). Homovanillic Acid The statistical analysis revealed a significant effect of treatment on the level of HVA in the VTA (F3/20 = 13.39; P < 0.00005), nucleus accumbens (F3/20 = 13.00; P < 0.00006), and hypothalamus (F3/20 = 7.83; P < 0.001). Duncan's post hoc test demonstrated that the HVA level was significantly decreased (P < 0.01) by reserpine (only in the nucleus accumbens) and by TIQ and 1MeTIQ in the combined groups in the investigated structures (Table 2). Table 3 caption: reserpine (0.2 mg/kg i.p.) was administered acutely or chronically, once daily for 14 days; the control group received 1 % Tween 80 chronically; horizontal locomotor activity was defined as the travelled distance (in cm) and vertical activity as rearing times (in seconds); locomotor activity was analyzed for 60 min, starting 120 min after acute or chronic reserpine treatment, using the Auto-Track Software Program; data are the mean ± SEM; N = 6-8 animals per group; results were analyzed by one-way ANOVA followed, when appropriate, by Duncan's post hoc test.
Statistical significance: **P < 0.001 versus control group. TIQ and 1MeTIQ administered together with chronic reserpine antagonized the reserpine-induced depression in the concentration of NA (Table 4). Normetanephrine The one-way ANOVA demonstrated a significant effect of treatment on the NM level in the VTA (F3/20 = 3.65; P < 0.03) and the hypothalamus (F3/20 = 33.32; P < 0.00001). Duncan's post hoc test indicated that reserpine led to a decrease in the NM level in the brain structures, and that TIQ and 1MeTIQ administered together with reserpine significantly antagonized this effect. Particularly in the hypothalamus, TIQ led to a strong increase (about 400 % of the control group, P < 0.001), substantially exceeding the NM value of the control group (Table 4). The Effect of Chronic Administration of TIQ and 1MeTIQ on Reserpine-Induced Changes in the Serotonin System Serotonin The one-way ANOVA indicated a significant effect of treatment on the level of serotonin only in the hypothalamus (F3/20 = 6.95; P < 0.002). Duncan's post hoc test indicated that chronic administration of reserpine produced a significant decrease in serotonin concentration in the VTA (about 35 % of control, P < 0.05) and in the hypothalamus (about 30 % of control, P < 0.01). These effects were antagonized by TIQ and 1MeTIQ in the combined treatment groups (Table 5). Table 1 caption: reserpine (0.2 mg/kg i.p.) was administered once (acute treatment) or once daily for 14 days (chronic treatment); the control group received 1 % Tween 80 once daily for 14 days; animals were decapitated 120 min after the last injection of reserpine; the concentration (ng/g wet tissue) of the monoamines DA, NA, and serotonin was measured in the rat nucleus accumbens and hypothalamus; data are the mean ± SEM, analyzed by one-way ANOVA followed, when appropriate, by Duncan's post hoc test. Statistical significance: *P < 0.05, **P < 0.01 versus control group; +P < 0.05, ++
P < 0.01 versus acute reserpine. 5-Hydroxyindolacetic Acid The one-way ANOVA demonstrated a significant effect of treatment on the level of 5-HIAA in the tested structures: VTA (F3/20 = 11.29; P < 0.00012), nucleus accumbens (F3/20 = 10.15; P < 0.0002), and hypothalamus (F3/20 = 9.54; P < 0.0004). Duncan's post hoc test indicated that repeated treatment with reserpine had no effect, but chronic administration of TIQ and 1MeTIQ together with reserpine produced a significant (P < 0.01) decrease in the 5-HIAA concentration in all tested structures (Table 5). Chronic reserpine significantly changed the rate of serotonin metabolism in the VTA (about 70 % of control, P < 0.01) and in the hypothalamus (about 60 % of control, P < 0.01), but did not change it in the nucleus accumbens. TIQ and 1MeTIQ significantly antagonized the effect evoked by reserpine in these structures and clearly decreased the rate of serotonin metabolism in the nucleus accumbens (from 25 to 30 % of control, respectively; P < 0.01) (Table 5). Discussion In this study, we investigated the effects of repeated administration of a low dose of reserpine (0.2 mg/kg i.p.) on behavioral (FST, motor function) and neurochemical parameters, and then we studied the effect of TIQ and 1MeTIQ on reserpine-induced depression in the rat. In fact, we observed that chronic but not acute treatment with a low dose of reserpine induced a distinct depressive-like behavior in the FST and motor impairment, and additionally a significant decrease in the levels of DA, NA, and serotonin in the brain (Figs. 2, 3; Table 1). As is already well known, reserpine is an inhibitor of VMAT2 and interferes with the storage of monoamines by blocking the ATP-dependent uptake mechanism of the storage organelles (Nagakura et al. 2009; Rojas-Corrales et al. 2004).
In addition, the oxidative catabolism of cytosolic DA and serotonin by monoamine oxidase A and B (MAO) is accelerated, which is followed by the disappearance of these neurotransmitters and the formation of the cellular oxidant hydrogen peroxide (especially in MAO-dependent oxidation of DA). This action mimics the increased turnover of DA in the surviving dopaminergic terminals in the course of PD (Gerlach and Riederer 1996). Interestingly, VMAT2-deficient animals showed increased oxidative stress, progressive loss of DA terminals, and accumulation of α-synuclein (Caudle et al. 2007, 2008). Many studies have shown that depression is characterized by a significantly decreased antioxidant status, as evidenced by lowered concentrations of tryptophan, tyrosine, vitamin E, zinc, and reduced glutathione, which are all antioxidants (Maes et al. 2011; Kodydkova et al. 2009). Recently, a new hypothesis was formulated postulating that the activation of inflammatory and oxidative stress pathways is a key pathophysiological factor in depression (Vetulani and Nalepa 2000; Maes 2008). One of the main purposes of this study was to establish a more realistic model of depression that could be correlated with neurochemical changes in monoaminergic systems for estimating the antidepressant efficacy of the investigated new compounds, TIQ and 1MeTIQ. We applied such a small dose of reserpine (0.2 mg/kg) that, because of its low concentration, only partially affected the vesicular monoamine transporter 2 (VMAT2) and, after acute administration, did not evoke any changes in the forced swim behavioral test or in the biochemical parameters (the concentrations of monoamines) in the brain. (Table 4: The effect of chronic administration of TIQ and 1MeTIQ on reserpine-induced changes in the noradrenergic system after chronic administration in the different structures of the rat brain.)
However, repeated (once daily for 14 days) administration led to a significant "depression-like" syndrome in the FST with a simultaneous distinct drop in monoamine concentrations in the brain structures. In the light of these observations, we suggest that repeated treatment with a low dose of reserpine could be a good progressive model of depression and the concomitant abnormalities in motivational function and, additionally, a useful tool for testing potential new antidepressants. Our results are also in agreement with other findings showing that the reserpine model is characterized not only by behavioral depression but also by impairment of monoamine neurotransmission in the brain (Colpaert 1987; Gerlach and Riederer 1996; Fernandes et al. 2008, 2012). Previous in vitro studies have shown that the investigated compounds, TIQ and 1MeTIQ, possess free radical scavenging capacity and intrinsic antioxidant properties (Antkiewicz-Michaluk et al. 2006). Several TIQs and their congeners, including TIQ and 1MeTIQ, interfere with MAO activity, inducing putative neuroprotection related to the pathogenesis of PD (Naoi and Maruyama 1993). Both compounds investigated in the present study inhibited MAO A and MAO B activities, with preferential effects on the MAO A form (Patsenka and Antkiewicz-Michaluk 2004). These results justify the question about the physiological significance of endogenous TIQs in the control of neurotransmitter function and the prevention of neurotoxicity related to MAO activity in the brain. The FST, measuring immobility, has been shown to be an appropriate test to evaluate antidepressant activity, as antidepressants generally delay and decrease immobility (Cryan et al. 2005; Murray et al. 2008; Zhao et al. 2008).
The modified FST measures the frequency of different types of active behaviors: swimming, which is sensitive to serotoninergic compounds such as SSRIs, and climbing, which is sensitive to tricyclic antidepressants and drugs with selective effects on catecholamine transmission (Cryan and Lucki 2000; Cryan et al. 2005; Detke et al. 1995). (Table 5: The effect of chronic administration of TIQ and 1MeTIQ on reserpine-induced changes in the serotonin system after chronic administration in the different structures of the rat brain. Reserpine (0.2 mg/kg i.p.) was administered chronically, once daily for 14 days. TIQ and 1MeTIQ (25 mg/kg i.p.) were administered 30 min before each dose of reserpine (combined groups). The control group received 1% Tween 80 chronically. Animals were decapitated 120 min after chronic drug administration. The concentrations of serotonin (5-HT) and its metabolite, 5-hydroxyindoleacetic acid (5-HIAA), are expressed as ng/g wet tissue. The rate of serotonin metabolism is expressed as the ratio of the metabolite 5-HIAA to serotonin: [5-HIAA]/[5-HT] × 100. The indices were calculated using concentrations from individual tissue samples. The data are the mean ± SEM. The results were analyzed by one-way ANOVA, followed when appropriate by post hoc Duncan's test. Statistical significance: * P < 0.05, ** P < 0.01 versus control group; + P < 0.05, ++ P < 0.01 versus the reserpine-treated group.) As shown by Detke et al. (1995), the increase in climbing activity is connected with enhanced NA system activation. In the present study, we observed for the first time an antidepressant-like effect in the FST of the tetrahydroisoquinoline amines TIQ and 1MeTIQ in an animal model of depressive disorder induced by repeated administration of reserpine. Chronic reserpine significantly increased the immobility time in the FST and concomitantly produced a significant decrease in the swimming time.
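The serotonin metabolism index defined in the Table 5 legend ([5-HIAA]/[5-HT] × 100) is a per-sample ratio. A minimal sketch follows; the helper name and the numeric values are illustrative, not data from the study:

```python
def serotonin_metabolism_index(hiaa_ng_g: float, serotonin_ng_g: float) -> float:
    """Rate of serotonin metabolism as [5-HIAA]/[5-HT] x 100,
    computed per individual tissue sample (concentrations in ng/g wet tissue)."""
    return hiaa_ng_g / serotonin_ng_g * 100

# Illustrative values only, not study data:
print(serotonin_metabolism_index(250.0, 500.0))  # 50.0
```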
However, in the locomotor activity test, both acute and repeated administration of reserpine produced a significant depression of the horizontal (travelled distance in cm) and vertical (rearing time in s) exploratory locomotor activity of rats (P < 0.001) during the first 30 min after the start of the measurement of motor activity (Fig. 3a, b). In contrast, in the FST there was a clear distinction between the acute and chronic effects of reserpine administration, where only chronic treatment led to depressive-like behavior in that test (Fig. 2). The results of these experiments clearly indicate a significant dissociation between the "pro-depressive" effects observed in the FST (inhibition of motivation) only after repeated reserpine treatment and, in contrast, the depression of locomotor activity after both acute and repeated administration. Taken together, the results also indicate that the locomotor activity test is not suitable for estimating "pro-depressive" effects in the rat. The investigated compounds, TIQ and 1MeTIQ, administered chronically together with a low dose of reserpine, completely antagonized the reserpine-produced depression as measured by both investigated parameters: immobility time and swimming time (Fig. 4). The behavioral data obtained in the FST clearly indicate that both compounds express antidepressant-like activity in the reserpinized rat. The data are in agreement with our recent papers concerning the antidepressant-like effect of TIQ and 1MeTIQ in the FST and in the chronic mild stress model, which was comparable to the classic antidepressants imipramine and desipramine (Wąsik et al. 2013; Możdżeń et al., submitted 2013). Neurochemical data showed that these effects may be connected with the activation of the monoaminergic systems (dopaminergic, serotoninergic, and noradrenergic) in the brain. As is well known, multiple mechanisms are responsible for the development of depression.
Monoamine neurotransmitters are involved in the pathogenesis of depression and play an important role in mediating the effects of antidepressants (Javaid et al. 1979; Borsini and Meli 1988). It is well documented by clinical data that the classical antidepressants (imipramine, desipramine) as well as SSRIs activate monoaminergic neurotransmission in the brain as reuptake inhibitors of NA and serotonin (Grunewald et al. 1979; Javaid et al. 1979; Borsini and Meli 1988; Borsini 1995; Cryan and Lucki 2000; Cryan et al. 2005; Detke et al. 1995). Biochemical data demonstrated that chronic treatment with a low dose of reserpine, in contrast to acute administration, produced a depression of monoamines in the brain structures. The concentrations of DA (in all investigated structures), NA (in the nucleus accumbens and hypothalamus), and serotonin (in the VTA and hypothalamus) were significantly lowered (Tables 2, 4, 5). In contrast, the rate of MAO-dependent oxidation of DA and serotonin was significantly increased (Tables 3, 5). These biochemical effects obtained after repeated reserpine administration were completely antagonized by chronic joint injections of TIQ or 1MeTIQ with reserpine. Regarding the mechanism by which TIQs reverse the effects of reserpine, our previous ex vivo experiments showed that both compounds, TIQ and 1MeTIQ, shifted DA catabolism from MAO-dependent oxidation to COMT-dependent methylation and abolished the generation of hydroxyl radicals via the Fenton reaction (Antkiewicz-Michaluk et al. 2006). The inhibition of MAO-dependent oxidation produced by TIQ and 1MeTIQ in reserpine-treated rats is clearly visible both in the decline of the DA and serotonin metabolites (DOPAC and 5-HIAA, respectively) and in the decrease in the oxidation indices in the combined treatment groups (Tables 2, 3, 5).
As with DA and serotonin, both TIQ compounds normalized the level of NA and of its extraneuronal metabolite NM, which were decreased by chronic administration of reserpine. NA released into the synaptic cleft is catabolized by COMT to NM by COMT-dependent methylation, so this extraneuronal metabolite is a good marker of NA release. As demonstrated in the present paper, chronic reserpine inhibited noradrenergic transmission. TIQ and 1MeTIQ administered together with reserpine prevented the noradrenergic depression evoked by reserpine (Table 4). Our present data demonstrate that the investigated compounds, TIQ and 1MeTIQ, used in the reserpine model of depression in the rat, elicited antidepressant-like activity in the FST. Both compounds are characterized by a wide spectrum of actions on all monoaminergic systems in the rat brain. Thanks to their ability to inhibit both MAO A and MAO B activity (Patsenka and Antkiewicz-Michaluk 2004) and to scavenge free radicals (Antkiewicz-Michaluk et al. 2006), TIQ and its close methyl derivative 1MeTIQ may be useful not only for the therapy of neurodegenerative diseases (e.g., PD) but also in the treatment of depression as new antidepressants. In particular, 1MeTIQ raises hope for its application in depression as a safe drug with a clinically useful mechanism of action, described recently as a neuroprotectant with anti-addictive potency (Antkiewicz-Michaluk et al. 2014).
Cycling in primary progressive multiple sclerosis (CYPRO): study protocol for a randomized controlled superiority trial evaluating the effects of high-intensity interval training in persons with primary progressive multiple sclerosis Background Primary progressive multiple sclerosis (PPMS) is the least prevalent multiple sclerosis (MS) phenotype. For persons with PPMS (pwPPMS), pharmacological treatment options are limited. As a complementary non-pharmacological treatment, endurance training improves the health-related quality of life (HRQoL), numerous MS symptoms, and MS-related performance impediments. High-intensity interval training (HIIT) has been shown to induce superior effects compared to moderate-intensity continuous training (MCT). As current evidence is based on MS samples with mixed phenotypes, generalizability to pwPPMS remains unclear. Methods CYPRO is a parallel-group, single-center, and single-blind randomized controlled superiority trial evaluating the effects of HIIT compared to MCT in pwPPMS. Sixty-one pwPPMS are randomized (1:1) to perform volume-matched HIIT or MCT sessions on bicycle ergometers two to three times per week in addition to standard rehabilitative care during their three-week inpatient stay at Valens rehabilitation clinic, Switzerland. Standard rehabilitative care comprises endurance and strength training, physiotherapy, and occupational therapy. HIIT sessions include six 90-second intervals at 95% peak heart rate (HRpeak), interspersed by 90-second active breaks with unloaded pedaling, aimed to reach 60%HRpeak. MCT represents the standard treatment at Valens rehabilitation clinic and is performed as continuous cycling at 60%HRpeak for the duration of 26 minutes. The primary outcome is cardiorespiratory fitness, assessed as peak oxygen consumption (V̇O2peak) during cardiopulmonary exercise testing (CPET). 
Secondary outcomes include peak power output during CPET, walking capacity, cognitive performance, HRQoL, fatigue, anxiety and depressive symptoms, and blood-derived biomarkers (e.g., serum neurofilament light chain, glial fibrillary acidic protein, kynurenine pathway metabolites) related to MS pathophysiology. All outcomes are assessed at baseline and discharge after three weeks. Venous blood sampling is additionally performed immediately and two hours after the first HIIT or MCT session. Discussion CYPRO will expand current knowledge on symptom management and rehabilitation in MS to the subpopulation of pwPPMS, and will contribute to the exploration of potential disease-modifying effects of endurance training in MS. The superiority design of CYPRO will allow deriving explicit recommendations on endurance training design in pwPPMS that can be readily translated into clinical practice. Trial registration CYPRO has been prospectively registered at ClinicalTrials.gov on 8 February 2022 (NCT05229861). Supplementary Information The online version contains supplementary material available at 10.1186/s12883-023-03187-6. Background Multiple sclerosis (MS) is a chronic inflammatory, demyelinating, and neurodegenerative disease of the central nervous system (CNS) that globally affects approximately 2.8 million people [1,2]. In most cases, MS initially manifests as relapsing-remitting phenotype (RRMS) and frequently transitions into secondary progressive MS (SPMS) after several years. Primary progressive MS (PPMS) is the least prevalent MS phenotype, affecting 10-15% of persons with MS (pwMS) [3]. PPMS is characterized by gradual disability worsening from disease onset [4]. Disability worsening is considered to primarily evolve from neurodegenerative aspects of the MS pathophysiology, such as neuroaxonal damage, astrocytic gliosis, and mitochondrial failure due to increased oxidative stress, all of which culminate in pronounced brain and spinal cord atrophy [3,5]. 
In contrast to the neuroinflammation predominating in RRMS, neurodegeneration is largely unresponsive to current disease-modifying treatment [3]. In PPMS, pharmacological treatment is limited to the humanized anti-CD20 antibody ocrelizumab. However, the benefits of ocrelizumab on disease progression and brain atrophy are diminished in older persons with PPMS (pwPPMS) or those presenting with low residual inflammation [6]. Alongside ongoing efforts to improve disease-modifying treatment in pwPPMS, optimization of complementary non-pharmacological treatment options is recognized as essential to maintain and improve the health-related quality of life (HRQoL) in pwPPMS [7]. Accordingly, the International Progressive MS Alliance called for action, proposing symptom management and rehabilitation as one key priority area for research in progressive MS [8]. In line with the proposed goals for MS therapies, endurance training qualifies as an effective means to improve HRQoL, numerous MS symptoms, such as walking impairment or fatigue, and MS-related performance impediments, such as reduced cardiorespiratory fitness, in pwMS [8][9][10]. Beyond that, high-intensity interval training (HIIT) has been described to beneficially modulate systemic concentrations of blood-derived biomarkers relevant to PPMS pathophysiology. For example, an acute HIIT session reduced concentrations of serum neurofilament light chain (sNfL), serving as a surrogate marker of disease progression, neuroaxonal damage, and CNS atrophy [11,12]. Additionally, HIIT has been shown to shift systemic concentrations of neurotoxic and pro-oxidant metabolites towards neuroprotective metabolites of the immunomodulatory and neuroactive kynurenine pathway (KP) [11,13]. However, these results have been obtained from mixed samples dominated by the RRMS phenotype.
The generalizability of training designs and of the desired beneficial effects of endurance training to the subpopulation of pwPPMS remains unclear, given that PPMS presents with distinct clinical features, such as progressive spastic paraparesis as a hallmark symptom [14,15]. Owing to the high prevalence of progressive spastic paraparesis, pwPPMS in particular may be prone to physical deconditioning, which is indicated by low cardiorespiratory fitness. As cardiorespiratory fitness is associated with HRQoL, walking impairment, cognitive performance, fatigue, and potential CNS tissue sparing in pwMS, improvement of cardiorespiratory fitness represents a key research outcome in MS trials and constitutes a central target in MS rehabilitation [10,16]. With the randomized controlled trial CYPRO (CYcling in Primary PROgressive Multiple Sclerosis), we present a novel approach to validate the established mixed-sample effects of HIIT and moderate-intensity continuous training (MCT) on cardiorespiratory fitness and a comprehensive set of further MS-relevant outcomes in a sample that is exclusively composed of pwPPMS. Aim, study design, and setting CYPRO is a parallel-group, single-center, and single-blind randomized controlled superiority trial that is performed at the Valens rehabilitation clinic, Switzerland. The aims of CYPRO are as follows: 1. Primary aim: To investigate the effects of HIIT compared to MCT on cardiorespiratory fitness, as indicated by peak oxygen consumption (V̇O2peak). 2. Secondary aims: To investigate the effects of HIIT compared to MCT on peak power output (PPO), walking capacity, cognitive performance, HRQoL, fatigue, anxiety and depressive symptoms, and blood-derived biomarkers related to MS pathophysiology.
Additionally, CYPRO aims to investigate the acute effects of a single HIIT session compared to a single MCT session on changes in blood-derived biomarkers. HIIT represents the experimental condition that is hypothesized to be superior to the standard treatment MCT as an active comparator. The study design and flow of participants are illustrated in Fig. 1. Sample size calculation Sample size calculation for the estimated effect of HIIT or MCT on the primary outcome V̇O2peak was performed using G*Power software (Version 3.1.9.7, Heinrich Heine Universität, Düsseldorf, Germany) [20]. Estimating a drop-out of 15%, a total of 61 participants (HIIT: n = 30, MCT: n = 31) was found to be sufficient to identify a small to moderate effect (ES ≥ 0.15) in an ANOVA analysis of the 2 (group: HIIT vs. MCT) × 2 (time: T0 vs. T4) interaction. Power was set at 80% and alpha at 0.05. The correlation among repeated measures was set at 0.6. Participants Inclusion criteria for CYPRO comprise adult age (≥ 18 years), definite PPMS diagnosis, Expanded Disability Status Scale (EDSS) score ≤ 6.0, and signed consent [21][22][23]. Exclusion criteria concern severe lower extremity spasticity or concomitant disease states (i.e., orthopaedic, cardiovascular, metabolic, psychiatric, other neurological, or serious medical conditions) that would impair the ability to participate. Pregnant or breastfeeding women, or those intending to become pregnant, are excluded. Further, pwPPMS are not permitted to participate if they regularly perform HIIT (i.e., ≥ 2 times per week), if non-compliance is suspected, if they are not able to follow study procedures (e.g., due to insufficient German literacy), or in case of a recent treatment change (e.g., change in disease-modifying treatment (≤ 6 weeks), stem cell treatment (≤ 6 months)).
PwPPMS are excluded from further participation in case of severe adverse events (e.g., cardiovascular decompensation) during cardiopulmonary exercise testing (CPET) at baseline, sudden severe disease progression, or falling ill (e.g., due to bacterial or viral infections) during participation in CYPRO. Recruitment and randomization procedure All pwMS entering the Valens rehabilitation clinic for an inpatient stay are screened for study eligibility. Eligibility screening is performed as a two-step process. In the first step, PPMS diagnosis and EDSS score are checked in advance of clinic admission. In the second step, all other eligibility criteria are checked by in-person consultation on the day of clinic admission. Sixty-one pwPPMS will be recruited. As the sex ratio in pwPPMS is 1:1, we pursue recruiting an equal number of female and male pwPPMS [14]. The number of screened pwMS in total and per screening step, as well as the reasons for ineligibility, are documented. Stratified randomization to either HIIT or MCT (1:1) is performed by a group of blinded investigators not involved in CYPRO. Allocation sequence generation is performed using randomly sized (block size 4-6) permuted block randomization with Randomization-In-Treatment-Arms (RITA) software (Version 1.51, Evident, Lübeck, Germany). To ensure allocation concealment, the blinded investigators are called for assignment each time a participant presents for inclusion [24]. CYPRO study personnel who perform enrolment, obtain consent, and assign participants have no access to details on the allocation sequence and blocking. Strata include relative V̇O2peak (mL · min⁻¹ · kg⁻¹) at baseline, sex, age, and EDSS score. Exercise protocols Within their three-week inpatient stay at the Valens rehabilitation clinic, participants perform two to three weekly HIIT or MCT sessions on bicycle ergometers (Cybex 750 C, Cybex International Inc., Massachusetts, USA) in addition to standard rehabilitative care.
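Permuted block randomization with randomly sized blocks, as used here via the RITA software, can be illustrated with a short sketch. The function below is a hypothetical stand-in (it ignores stratification) and is not the trial's actual allocation software; within each block both arms appear equally often, so the running imbalance never exceeds half a block.

```python
import random

def permuted_block_sequence(n, block_sizes=(4, 6), arms=("HIIT", "MCT"), seed=42):
    """Generate a 1:1 allocation sequence from randomly sized permuted blocks.

    Illustrative sketch only. Each block holds equal numbers of both arms
    in shuffled order; the sequence is truncated to n participants.
    """
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n:
        size = rng.choice(block_sizes)      # random block size (4 or 6)
        block = list(arms) * (size // 2)    # equal allocation within the block
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n]

seq = permuted_block_sequence(61)
print(seq[:8], seq.count("HIIT"), seq.count("MCT"))
```

Because the sequence is truncated at 61 participants, the final group sizes can differ by at most half of the largest block (here, three).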
Standard rehabilitative care comprises endurance and strength training (30-45 minutes, three to five times per week), physiotherapy to improve balance and/or walking ability (30 minutes, daily), as well as occupational therapy focused on fatigue management and activities of daily living (30 minutes, two to three times per week). The exercise volume of the HIIT and MCT protocols has been matched in Units of Exercise. Units of Exercise are calculated as [intensity (percentage of peak heart rate (%HRpeak)) × session duration (minutes) × frequency (number of sessions per week) × number of weeks] [25]. %HRpeak is derived from the HRpeak achieved during CPET at baseline. During sessions, HR is continuously recorded by HR sensors (H10 HR sensor, POLAR, Kempele, Finland) attached to chest belts and connected to wristwatches (M430 sports watch, POLAR, Kempele, Finland). Both HIIT and MCT are conducted under the supervision of trained exercise scientists and physiotherapists, either individually or in small groups of up to three participants. Therapists are instructed to monitor the HR and to adjust the pedaling resistance in case of deviations from the target intensity. If participants are unable to follow the prescribed protocols (e.g., due to pronounced ankle flexor spasticity), dose modifications (i.e., decreasing revolutions per minute (rpm) or interval duration, increasing break duration) are permissible. Any dose modification is documented in a case report form. Blinding of therapists and participants towards group allocation is not feasible due to the study design. High-intensity interval training (HIIT) HIIT sessions commence with a two-minute low-intensity warm-up (60%HRpeak) at 60-70 rpm. Subsequently, six 90-second high-intensity intervals (95%HRpeak) are performed at high pedaling rates of 80-100 rpm. Intervals are interspersed by 90-second active breaks with unloaded pedaling at 60-70 rpm, aimed to return to 60%HRpeak.
Sessions close with a two-minute low-intensity (60%HRpeak) cool-down of unloaded pedaling at 60-70 rpm. In total, one HIIT session lasts 21 minutes. Moderate-intensity continuous training (MCT) MCT represents the standard treatment at the Valens rehabilitation clinic and is used as an active comparator. Participants perform continuous bicycle ergometry at moderate intensity (60%HRpeak) and pedaling rates of 60-70 rpm for the duration of 26 minutes. Safety considerations Exercise is safe in pwMS and is not associated with a higher risk of adverse events compared to exercise in healthy individuals [26]. Similarly, HIIT on bicycle ergometers is well tolerated and holds a low risk of adverse events in pwMS [27]. As in healthy individuals, transient knee and/or leg pain or muscle soreness may occur in response to HIIT and MCT sessions. Between HIIT and MCT sessions, participants receive at least 48 hours of rest to ensure adequate recovery. Supervising therapists are instructed to prevent, monitor, and document the occurrence of adverse events, if any. To prevent injuries or falls, participants may be assisted in getting on and off the bicycle ergometer, if necessary. HR monitoring, rating of perceived exertion (RPE, Borg Category Ratio 10-point (Borg CR-10) scale), and observation of vegetative signs serve to minimize any risk of overexertion [28]. Study personnel may need to withdraw participants in case of repeatedly reported adverse events (e.g., leg pain), the occurrence of severe adverse events (e.g., cardiovascular decompensation), or other medical reasons (e.g., infections). In those cases, participants are forwarded to a physician for medical clearance. With the signature of the consent form prior to participation, participants confirm that they are informed about potential exercise-related risks and the right to refrain from participation in CYPRO at any time.
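As a numerical check of the volume matching described above, the sketch below computes Units of Exercise for both protocols. The three-sessions-per-week, three-week schedule and the split of each HIIT session into its 95%HRpeak and 60%HRpeak minutes are my simplifying assumptions, not protocol specifications.

```python
def units_of_exercise(intensity_pct: float, minutes: float,
                      sessions_per_week: int, weeks: int) -> float:
    """Units of Exercise = intensity (%HRpeak) x duration (min) x frequency x weeks."""
    return intensity_pct * minutes * sessions_per_week * weeks

# HIIT: 21-minute session with 6 x 1.5 min at 95%HRpeak; warm-up, active
# breaks, and cool-down approximated as 60%HRpeak throughout (an assumption).
hiit_high_min = 6 * 1.5
hiit_low_min = 21 - hiit_high_min
hiit = (units_of_exercise(95, hiit_high_min, 3, 3)
        + units_of_exercise(60, hiit_low_min, 3, 3))

# MCT: 26 minutes of continuous cycling at 60%HRpeak.
mct = units_of_exercise(60, 26, 3, 3)

print(hiit, mct)  # 14175.0 14040
```

Under these assumptions the two protocols come out within about 1% of each other, consistent with the stated volume matching despite the shorter HIIT session.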
Outcomes V̇O2peak, PPO, walking capacity, cognitive performance, HRQoL, fatigue, and anxiety and depressive symptoms are assessed by allocation-blinded outcome assessors at study entry (baseline, T0) and at discharge after three weeks (T4), at least 48 hours after the last HIIT or MCT session. Venous blood samples are taken before (T1), immediately after (T2), and two hours after (T3) the first HIIT or MCT session, and at T4. Baseline variables Demographic data (sex, age) and MS-related data (EDSS score, time since diagnosis (months), pharmacological treatment, if any) are obtained from medical records. Body weight is determined by digital scales (Soehnle Style Sense Comfort 100, Soehnle, Nassau, Germany) in a fasted state without footwear. Body weight (kg) and self-reported body height (cm) are used to calculate the body mass index (BMI). Cognitive status at baseline is evaluated using the Montreal Cognitive Assessment (MoCA). The MoCA is a 30-point test used to assess global cognitive status, including items addressing short-term memory, visuospatial abilities, executive functions, attention, concentration, working memory, language, and orientation. A MoCA score < 26 indicates cognitive impairment [29]. Sensitivity and specificity to discriminate between cognitively impaired and non-impaired pwMS have been proven [30]. MoCA performance does not influence eligibility to participate in CYPRO. Cardiorespiratory fitness V̇O2peak is assessed by CPET on a bicycle ergometer (ergometrics er800s, ergoline GmbH, Bitz, Germany). CPET is performed in a fasted state between 8:00 and 9:00 AM and follows a ramp-type protocol.
The ramp-type protocol consists of (a) a resting-state measurement without pedaling while participants are sitting on the bicycle ergometer (3 minutes); (b) subsequent pedaling at 20 watts (3 minutes); (c) the testing phase with a progressive increment of 5 to 10 watts per minute until subjective exhaustion (8-12 minutes); and (d) a cool-down of unloaded pedaling (3 minutes). HR is continuously monitored. Blood pressure and RPE (Borg CR-10 scale) are assessed every two minutes and within the last ten seconds of the test. V̇O2 is monitored directly and continuously (breath by breath) by ergospirometry (Vyaire Medical, Vyntus CPX, Illinois, USA). V̇O2peak is defined as the highest 15-second averaged V̇O2 value when the following criteria are attained: respiratory exchange ratio > 1.10; HRpeak within 10 min⁻¹ of the age-predicted maximum; and Borg CR-10 rating > 8.5 [31]. The absolute V̇O2peak value (mL · min⁻¹) is divided by body weight (kg) to obtain the relative V̇O2peak value (mL · min⁻¹ · kg⁻¹) as the primary outcome. Peak power output (PPO) PPO is assessed as the peak wattage achieved during CPET and represents the maximum mechanical power produced by the lower extremity musculature. PPO is considered a measure of the physical functional reserve. In pwMS, lower PPO is correlated with higher energetic costs of walking [32]. Walking capacity Walking capacity is tested using the six-minute walk test (6-MWT). Participants are asked to walk back and forth along a 30-meter hallway for the duration of six minutes, performing 180° turns around cones at each end [33]. According to the modified 6-MWT script, participants are instructed to walk at maximum speed. Breaks or any kind of encouragement are not permitted. Participants are allowed to use an assistive walking device, if necessary, to ensure safe ambulation. 6-MWT performance is defined as the total distance in meters covered within six minutes.
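The primary-outcome computation and the attainment check can be sketched as follows. The helper names are illustrative, and the age-predicted maximum formula (220 − age) is my assumption, as the protocol does not state which prediction equation is used:

```python
def relative_vo2peak(vo2_abs_ml_min: float, body_mass_kg: float) -> float:
    """Relative V̇O2peak (mL · min⁻¹ · kg⁻¹) from the absolute value and body weight."""
    return vo2_abs_ml_min / body_mass_kg

def vo2peak_attained(rer: float, hr_peak: int, age: int, borg_cr10: float) -> bool:
    """Attainment criteria from the protocol: RER > 1.10, HRpeak within
    10 min⁻¹ of the age-predicted maximum, and Borg CR-10 rating > 8.5."""
    hr_pred_max = 220 - age  # assumed prediction equation, not specified in the text
    return rer > 1.10 and abs(hr_pred_max - hr_peak) <= 10 and borg_cr10 > 8.5

print(relative_vo2peak(2100, 75))          # 28.0
print(vo2peak_attained(1.15, 165, 50, 9))  # True
```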
The modified 6-MWT has excellent inter-rater and intra-rater reliability, and correlates with fatigue, self-reported physical functionality, and perceived ambulation impairment in pwMS [34]. Cognitive performance Cognitive performance is tested using the validated German modification of the Brief International Cognitive Assessment for Multiple Sclerosis (BICAMS-M). Similar to the original BICAMS, this modified version comprises three subtests evaluating information processing speed, verbal memory, and visuospatial memory [35]. The Symbol Digit Modalities Test (SDMT) evaluates information processing speed. The participant is shown a code of nine abstract symbols, paired with nine digits. Underneath the code, incomplete rows are presented that contain only the abstract symbols in a pseudo-random order. The participant is asked to verbally match as many digits as possible to the corresponding abstract symbols within 90 seconds. SDMT performance is indicated by the number of correct matches, written down by the outcome assessor. Instead of the California Verbal Learning Test, the German-language Verbal Learning and Memory Test (VLMT) was adopted, as its norm data are based on a larger validation cohort. The VLMT is used to assess verbal memory. Participants are read aloud a 15-word list five times. After each repetition, participants are asked to immediately recall as many words as possible. VLMT performance is indicated by the summed number of correct words across the five trials. The Brief Visuospatial Memory Test-Revised (BVMT-R) is performed to assess visuospatial memory. The participant is asked to memorize six abstract geometric figures, presented on a 2 × 3 array for ten seconds. After these ten seconds, the array is removed. The participant is asked to copy the six figures onto a blank form. The procedure is repeated three times. For each trial, the outcome assessor rates the shape, size, and location of the figures on a 0-2 scale.
Higher ratings indicate better performance. Overall BVMT-R performance is indicated by the summed ratings across the three trials [36,37]. Parallel versions of the VLMT and the BVMT-R are used at T4. For the SDMT, no parallel version is used, as learning effects are considered to be minor [37,38]. The validated BICAMS-M allows detection of cognitive impairment in MS and is a reliable instrument to monitor cognitive performance over time [37].

Health-related quality of life
HRQoL is assessed using the German version of the Multiple Sclerosis Impact Scale (MSIS-29). The MSIS-29 comprises two subscales, addressing the physical impact (20 items) and psychological impact (9 items) of MS. Items address upper and lower limb functionality, ambulation, incontinence, sleep, emotional well-being, as well as disease-related limitations in daily living and societal participation. Each item is ranked on a 5-point Likert scale. Higher scores indicate a greater impact of MS and lower HRQoL. To calculate subscale scores, individual item scores are summed, averaged, and transformed to a 0-100 scale [39,40]. The German version additionally allows calculation of a total MSIS-29 score, defined as the arithmetic mean of both subscale scores [40,41]. The MSIS-29 is a responsive measure in MS rehabilitation and has been used in pwPPMS [42]. The German version of the MSIS-29 has been proven to be valid and reliable [40,41].

Fatigue
MS-related fatigue is assessed with the German version of the Fatigue Scale for Motor and Cognitive Functions (FSMC). The FSMC is a multidimensional 20-item composite scale that comprises a 10-item motor and a 10-item cognitive subscale. Items are rated on a 5-point Likert scale. Higher values indicate greater total, motor, or cognitive fatigue, respectively. Cut-off values allow identification of substantial fatigue (i.e., FSMC composite score ≥ 43) and classification of fatigue severity as mild, moderate, or severe.
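The MSIS-29 scoring described above (sum, then linear transformation to 0-100) can be sketched as follows. This assumes the common 1-5 item scoring; the function names are illustrative:

```python
def msis29_subscale_score(item_scores, min_item=1, max_item=5):
    """Sum the item scores and linearly rescale to 0-100,
    where 100 corresponds to the maximum possible impact."""
    n = len(item_scores)
    total = sum(item_scores)
    return 100.0 * (total - n * min_item) / (n * (max_item - min_item))

def msis29_total_score(physical_0_100, psychological_0_100):
    """German version: arithmetic mean of the two subscale scores."""
    return (physical_0_100 + psychological_0_100) / 2.0
```

For example, a physical subscale with all 20 items at the minimum maps to 0, all at the maximum to 100, and the total score is simply the mean of the two 0-100 subscale values.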
The FSMC composite scale as well as both subscales have been proven to be reliable, and are sensitive and specific for detecting MS-related fatigue [43].

Anxiety and depressive symptoms
Anxiety and depressive symptoms are assessed with the validated German version of the Hospital Anxiety and Depression Scale (HADS). The HADS is used in non-psychiatric populations with medical ailments, and includes 14 items that allow evaluation of anxiety (7 items) and depressive symptoms (7 items). Items are scored on a 4-point Likert scale. According to the original HADS manuscript, separate sum scoring is performed for the items on anxiety and depressive symptoms. Higher values indicate greater severity of anxiety or depressive symptoms. Subscale score ranges are used to distinguish between non-cases (0-7 points), doubtful cases (possible anxiety/depression, 8-10 points), and cases (probable anxiety/depression, 11-21 points) [44,45]. A HADS total score of ≥ 13 points is considered to indicate overall psychological distress [46,47]. For the German population, normative values are available [46]. In pwMS, the HADS is a sensitive and specific self-report measure that supports the detection of major depression and/or generalized anxiety disorder [48].

Blood-derived biomarkers
Blood sampling is performed in a fasted state. Samples are obtained from the antecubital vein in supine position. To evaluate the chronic effects of HIIT and MCT on blood-derived biomarkers, resting blood samples are taken between 8:00 and 9:00 AM after ten minutes of supine rest (T1, T4). To evaluate acute effects on blood-derived biomarkers, blood samples are taken immediately after (T2) and two hours after (T3) the first HIIT or MCT session. Per sampling time point, two whole blood collection tubes (BD Vacutainer®, BD CPT™, 4 ml), prefilled with lymphocyte separation medium, and one serum tube (tube vacutainer SSTII serum yel, 6 ml) are taken.
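The HADS scoring rules quoted above translate directly into a small classifier; a sketch with illustrative function names:

```python
def hads_subscale_category(score):
    """Classify a HADS anxiety or depression subscale sum score (0-21)."""
    if not 0 <= score <= 21:
        raise ValueError("HADS subscale scores range from 0 to 21")
    if score <= 7:
        return "non-case"
    if score <= 10:
        return "doubtful case"  # possible anxiety/depression
    return "case"               # probable anxiety/depression

def hads_overall_distress(anxiety_score, depression_score):
    """A total score of >= 13 is taken to indicate psychological distress."""
    return anxiety_score + depression_score >= 13
```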
Blood samples are analysed with regard to the acute and chronic effects of HIIT and MCT on KP modulation and on surrogate markers of neurodegeneration (i.e., sNfL and glial fibrillary acidic protein (GFAP)). Whole blood samples are centrifuged at 3500 g for 20 minutes, separating peripheral blood mononuclear cells (PBMCs) and plasma. PBMCs are resuspended in plasma, transferred into centrifugation tubes, diluted with an equal amount of phosphate-buffered saline, and centrifuged at 2400 g for 10 minutes. The supernatant is discarded. PBMCs are resuspended in cell culture freezing medium and aliquoted. Aliquots are frozen at -80 °C until analysis. mRNA is isolated from PBMCs using a commercial column-based isolation kit, and cDNA synthesis is performed. Based on the mRNA and cDNA, expression of KP-relevant genes is determined. As such, expression levels of indoleamine 2,3-dioxygenase-1, kynurenine aminotransferase 1-4, kynurenine-3-monooxygenase, aryl hydrocarbon receptor, CYP1A1, interleukin-4-induced-1, and SLC7A5 are determined using real-time quantitative polymerase chain reaction (qPCR), run on a qTower³ G touch (Analytik Jena GmbH, Jena, Germany). Blood serum is centrifuged at 2500 g for 20 minutes, aliquoted, and frozen at -80 °C until analysis. Targeted metabolomics (liquid chromatography tandem mass spectrometry, LC-MS/MS) is performed at BEVITAL AS, Bergen, Norway, to determine serum concentrations of tryptophan, KP downstream metabolites (e.g., kynurenine, kynurenic acid, quinolinic acid), and B vitamers. Systemic concentrations of sNfL and GFAP are assessed using a single molecule array (SiMoA HD-1 device, Quanterix, USA) at the University Medical Center of the Johannes Gutenberg University, Mainz, Germany, according to the manufacturer's instructions. Sequences and time frames of assessment procedures are depicted in the SPIRIT 2013 diagram (Fig. 2).
Compliance
Drop-out and session attendance, as well as reasons for study withdrawal and incomplete session attendance, are captured in total and separately for HIIT and MCT. Participants who drop out continue standard rehabilitative care at the Valens rehabilitation clinic, as far as medical conditions allow. Participants are not replaced. Data already collected are stored until the termination of data analysis. None of the assessments planned at later stages are conducted. The attendance rate is calculated as the number of completed sessions divided by the number of prescribed sessions. Protocol adherence to the prescribed duration and intensity is derived from HR recordings of HIIT and MCT sessions upon completion of data collection. Reasons for session abortion or protocol deviations, including but not limited to dose modifications and adverse events, are queried and documented in a case report form. Severe adverse events (e.g., cardiovascular decompensation) are directly reported to the local Ethics committee. Overall compliance is assessed by comparing prescribed Units of Exercise to performed Units of Exercise per group, combining measures of adherence (intensity (%HRpeak), session duration (minutes)) and attendance (total number of sessions, i.e., number of sessions per week × number of weeks) [25]. Compliance will be given as the percentage of prescribed Units of Exercise.

Data management and confidentiality
Data generation, transmission, storage, and analysis follow Swiss legal requirements for data protection. Personal data are considered confidential and disclosure to third parties is prohibited. The anonymity of participants is guaranteed by utilizing unique subject identification code numbers that are consecutively generated by a computerized list. Personal data and the allocation list are locked separately from anonymized baseline variables and outcome data. Unblinding of outcome assessors towards allocation is permissible upon completion of T4.
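A minimal sketch of the compliance arithmetic described above. The protocol cites [25] for the exact definition of Units of Exercise; multiplying intensity, session duration, and session count is only one plausible operationalization and is an assumption here:

```python
def attendance_rate(completed_sessions, prescribed_sessions):
    """Fraction of prescribed sessions actually attended."""
    return completed_sessions / prescribed_sessions

def units_of_exercise(intensity_pct_hrpeak, duration_min, n_sessions):
    """Assumed operationalization: intensity x duration x number of sessions."""
    return intensity_pct_hrpeak * duration_min * n_sessions

def compliance_pct(performed_units, prescribed_units):
    """Compliance as the percentage of prescribed Units of Exercise."""
    return 100.0 * performed_units / prescribed_units
```

Under this reading, a participant prescribed 18 sessions at 90 %HRpeak for 30 minutes who completes 16 sessions at 88 %HRpeak would reach roughly 87% compliance.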
Data of all outcomes as well as case report forms will be archived (1) in folders that are stored in locked study closets, and (2) electronically on personalized password-secured desktop computers. Backups are automatically performed every hour. Access is provided to authorized study personnel, the Clinical Trial Board of the Kliniken Valens hospital group, and the local Ethics committee at all times for purposes of trial-related monitoring, audits, review, and regulatory inspections.

Statistical analysis
Statistical analyses are performed according to the intention-to-treat principle. Descriptive statistics will be reported as arithmetic mean (M) and standard deviation (SD) for continuous data, and as absolute number and percentage (%) for categorical data, for the total sample and separated by group. Outcome data will be checked for normality (Shapiro-Wilk test) in advance. If necessary, analyses will be adjusted accordingly. If missing values are < 5%, ANCOVA will be performed to detect group × time interaction effects and main effects of time. In that case, effect sizes will be calculated as partial eta squared (pη²). Otherwise, a baseline-adjusted Mixed Model for Repeated Measures (MMRM) approach, with group, time, and the group × time interaction as fixed effects (type III sums of squares, compound-symmetry (CS) covariance structure over time), is used to assess between-group differences over time. Bonferroni-corrected pairwise comparisons of estimated marginal means of the group × time interaction are computed. Effect sizes will be reported as Cohen's d with 95% confidence interval (CI). For all analyses, the level of significance is set at p = .05. Within-group differences and between-group differences are depicted as point estimates and measures of variability for all outcomes.
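The MMRM approach can be illustrated with synthetic long-format data in statsmodels. A random intercept per participant stands in here for the compound-symmetry covariance structure, and all numbers below are invented for demonstration (the trial will use SPSS, not Python):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 30  # participants, 15 per group
ids = np.arange(n)
group = np.where(ids < n // 2, "HIIT", "MCT")

# Long format: one row per participant and time point (T1, T4)
df = pd.DataFrame({
    "id": np.repeat(ids, 2),
    "group": np.repeat(group, 2),
    "time": np.tile(["T1", "T4"], n),
})
# Synthetic outcome with an invented HIIT-specific improvement at T4
boost = 3.0 * ((df["time"] == "T4") & (df["group"] == "HIIT"))
df["vo2peak"] = 25.0 + boost + rng.normal(0.0, 2.0, size=len(df))

# Fixed effects: group, time, group x time; random intercept per participant
model = smf.mixedlm("vo2peak ~ group * time", df, groups=df["id"]).fit()
print(model.params["group[T.MCT]:time[T.T4]"])
```

The coefficient of the `group × time` interaction is the quantity of interest: it estimates how much the change from T1 to T4 differs between the two groups.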
Bivariate correlation analyses will be conducted to determine potential associations between changes in V̇O2peak, PPO, walking capacity, cognitive performance, HRQoL, fatigue, anxiety and depressive symptoms, and blood-based biomarkers using Pearson's r. All statistical analyses will be conducted with IBM SPSS Statistics for Windows (Version 29.0, IBM Corp., Armonk, New York, USA). Besides the final analysis, one interim analysis will be performed upon collection of 50% of participant data (HIIT: n = 15, MCT: n = 16).

Discussion
CYPRO primarily aims to evaluate the effects of two different endurance training modalities, HIIT and MCT, on cardiorespiratory fitness in pwPPMS, who represent the rarest and least investigated MS phenotype [3]. Using a comprehensive set of outcome measures that combines performance indices (i.e., V̇O2peak, PPO, walking capacity, cognitive performance), patient-reported outcome measures on HRQoL, fatigue, anxiety and depressive symptoms, and blood-derived biomarkers, CYPRO will expand current knowledge on the effects of endurance training in MS to the subpopulation of pwPPMS. Thus, CYPRO is a direct response to the International Progressive MS Alliance's call for action to "expedite the development of disease-modifying and symptom-relief treatments for progressive MS", by focusing on the key priority research area of symptom management and rehabilitation [8]. Among candidate non-pharmacological treatment options, endurance training is a powerful means to improve HRQoL, numerous MS symptoms, and MS-related performance impediments [9,10]. Meanwhile, not only MCT but also HIIT has found incorporation into current treatment recommendations [49,50]. HIIT is safe and feasible, and may be a more enjoyable endurance training option than MCT for pwMS [27].
Moreover, HIIT has been shown to be superior to MCT in improving V̇O2peak and cognitive performance in pwMS, and to beneficially modulate concentrations of blood-derived biomarkers relevant to MS pathophysiology, such as sNfL [11,51,52]. Those results were obtained from previous studies that were performed in the same setting and involved HIIT and MCT protocols similar to those designed for CYPRO. Under the premise that pwPPMS respond to HIIT and MCT similarly to mixed samples, these findings may be expected to be reproduced. As a prospective study that is well-powered for pwPPMS, CYPRO is a novel approach accounting for the distinct features of pwPPMS that, besides PPMS-relevant performance indices and patient-reported outcome measures, includes blood-derived biomarkers such as sNfL and GFAP that are closely related to neurodegenerative aspects of PPMS pathophysiology. Herewith, CYPRO will not only expand knowledge on symptom management and rehabilitation, but will also contribute to the exploration of potential disease-modifying effects of endurance training in MS. The superiority design of CYPRO will allow deriving explicit recommendations on endurance training design in pwPPMS that can be readily translated into clinical practice.

Study status
The first participant was enrolled on 13 March 2022. Participants are currently being recruited. Until March 2023, 260 pwMS entering the Valens rehabilitation clinic have been screened for study eligibility. Fifty-five pwMS held a PPMS diagnosis. Among those, 23 pwPPMS have been recruited for CYPRO.

Publication and dissemination
Findings of CYPRO are condensed in manuscripts according to the Consolidated Standards of Reporting Trials (CONSORT) 2010 Statement and will be published in pertinent peer-reviewed journals [53]. No professional writers will be involved. Further, findings will be presented at relevant congresses and will be disseminated to the participants, relevant expert groups, and the public.
The Swiss Multiple Sclerosis Society receives a final report.

Acknowledgements
Not applicable.

Author contributions
M.K. wrote the original draft and created visualizations. N.J., J.B. and P.Z. provided supervision; N.J., A.R., J.B. and P.Z. acquired funding; R.G., J.B. and P.Z. conceptualized the study; N.J., A.R., R.G., J.B. and P.Z. reviewed and edited the manuscript. All authors read and approved the final manuscript and agreed both to be personally accountable for the author's own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature.

Funding and responsibilities
Open Access funding enabled and organized by Projekt DEAL. CYPRO is funded by the Swiss Multiple Sclerosis Society (financial sponsor, Schweizerische Multiple Sklerose Gesellschaft (SMSG), Josefstrasse 129, 8031 Zurich, Switzerland; grant number: SMSG-2021-01). The funding body has no role in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript. R.G. acts as a legal representative of Kliniken Valens in relation to the trial site and is the sponsor investigator of CYPRO. As secondary sponsor investigators, J.B. and P.Z. take responsibility for sponsorship and are available for public and scientific queries. J.B. is the local principal investigator at the study site.

Data availability
The full protocol submitted to the Ethics committee, the charter of the Clinical Trial Board, model consent forms, data collection forms, case report forms, details of data management procedures, and datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request. Current license rights for testing manuals, questionnaires, and scales apply.
Sharing of deidentified individual clinical trial participant-level data (IPD) is not intended.

Declarations

Ethics approval and consent to participate
This manuscript complies with the study protocol (Version 1.0, date: 20 January 2020) submitted to and approved by the local Ethics committee (Ethikkommission Ostschweiz (EKOS), Scheibenackerstr. 4, 9000 Sankt Gallen, Switzerland; reference number: BASEC2022-00122; date of approval: 8 February 2022). CYPRO is performed in accordance with the Declaration of Helsinki. Any intended protocol modifications are forwarded to the EKOS, and ethical approval is awaited prior to implementation. All participants provide written informed consent for all forms of personally identifiable data, including biomedical, clinical, and biometric data. Participants provide separate informed consent for the shipping of anonymized blood samples for analyses performed in Germany and Norway.

Consent for publication
Not applicable.
Influence of individual roughnesses of the nozzle edge on the length of the supersonic section of an underexpanded microjet

The paper presents the results of an experimental investigation of the influence of a small group of roughnesses, uniformly located on the edge of a convergent nozzle, on the length of the supersonic core of an underexpanded air microjet. The tests are carried out on a low-pressure test bench with the aid of a Pitot tube. A set of nozzles with a diameter of 2 mm is used, one with a smooth output edge and the others with two to four roughnesses of similar shape on the edge. Underexpanded microjets flowing out from nozzles with diameters of 10.6, 16.1, 21.4, and 34.8 μm are simulated by matching the outflow Reynolds number. It is shown that the presence of roughnesses can substantially reduce the supersonic section length and eliminate the previously found effect of microjet "relaminarization"; however, the magnitude of the reduction depends non-monotonically on the number of roughnesses on the nozzle edge.

Introduction
The influence of nozzle edge roughness on measurement results is a serious challenge in research on the gas dynamics and stability of gas microjets. In particular, analysis of the roughness effect on the length of the supersonic core of axisymmetric underexpanded microjets is of interest. Since the length of the supersonic section depends on the intensity of mixing between the jet flow and the ambient gas, the supersonic section length should be influenced by the disturbances created by the roughness elements on the nozzle edge. Among them, the lengthwise stationary vortex structures are the best studied; they can be amplified in underexpanded jets because of the positive curvature of the jet flow boundary [1,2]. The vortex structures generated by the roughness can form initial running disturbances that amplify in the jet flow shear layer, resulting in flow turbulization and a reduction of the length of the jet supersonic section.
Note that most investigations in this field have been made for completely turbulent macrojets. An important peculiarity of microjets is the moderate outflow Reynolds numbers, which rarely exceed 5,000. Only the laminar and transitional modes of jet flow lie in this range. Moreover, it is impossible to produce a perfectly smooth output edge during the manufacture of a real convergent micronozzle. There are always relatively large humps and cavities of uncontrolled size and number (see, for instance, [3,4]). Because of the small sizes of the micronozzles, it is difficult to connect the measured disturbances of the jet flow field with specific roughnesses on the micronozzle edge. This paper presents measurements of the length of the supersonic core of axisymmetric underexpanded air jets flowing out from a smooth convergent nozzle with roughnesses of the same size and shape uniformly imposed on the output edge, with the number of roughnesses varying from two to four. Such an approach permits understanding the influence of a single roughness on the jet stability and the intensity of the mixing process, and finding the possible mutual influence of the disturbances in the jet flow from each individual roughness.

Experimental equipment and measurement technique
The experiments are carried out on the jet low-pressure test bench described in [5]. The test bench permits independently maintaining the jet outflow Reynolds number Red and the jet underexpansion degree n. The experiments involve a convergent axisymmetric nozzle with a diameter of 2 mm. It was previously shown in [5] that micronozzles can be simulated by millimeter-diameter nozzles matched by Red. In particular, it was demonstrated that the dependencies of the supersonic core length on the underexpansion degree n, obtained for jets flowing out from micron and millimeter nozzles, coincide quantitatively when the outflow Reynolds numbers Red are equal.
Within the framework of this approach, the outflow of underexpanded air jets from nozzles with diameters of 10.6, 16.1, 21.4 and 34.8 μm was simulated in the underexpansion degree range from 1.1 to 6 and at outflow Reynolds numbers from 800 to 5000. Figure 1 presents the scheme of the nozzle itself and the nozzle plenum chamber. There was a possibility to rotate the nozzle about its lengthwise axis. The nozzle conicity of 60° provides a minimal effect of acoustic wave reflection from the nozzle end face and discrete tone appearance. The output edge of one nozzle was absolutely smooth (nozzle No. 1), whereas the output edges of the other nozzles were covered with lengthwise N-shaped scratch marks with a hump height and cavity depth of h = 170 μm, a width of 300 μm and a length of 1 mm. There were two scratch marks on the edge (nozzle No. 2), three (nozzle No. 3), or four (nozzle No. 4), and they were located uniformly over the circle of the nozzle output cross-section. The measurements in the jet flow were carried out with a Pitot tube with an inner diameter of 0.17 mm and an outer diameter of 0.4 mm. During the experiments, the Pitot pressure P0' was measured. The tube was connected to a miniature differential pressure gauge TDM4-IV1, forming the Pitot probe. The characteristic pressure relaxation time in the probe was 0.3 s. The probe was installed on a 3-component pointing device with a position alignment accuracy of ±0.1 mm. The main measurements were performed as the probe moved along the jet axis. Additional measurements were carried out for the Pitot pressure distributions in a number of cross sections of the jet flowing from nozzle No. 2. The length of the supersonic section of the model microjets was found as the distance from the nozzle exit to the point on the axis where the pressure P0' corresponds to the value of this pressure for a Mach number of 1. The technique of determining the supersonic section length with the Pitot tube is described in detail in [6].
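The Mach-1 criterion can be sketched numerically. For air (γ = 1.4) the isentropic stagnation-to-static pressure ratio at M = 1 is (1 + (γ − 1)/2)^(γ/(γ−1)) ≈ 1.893, so the supersonic core ends where the axial Pitot trace, normalized by the local static pressure (here assumed close to ambient), drops below this value. This is an illustrative sketch, not the authors' processing code from [6]:

```python
import numpy as np

GAMMA = 1.4
# Isentropic stagnation-to-static pressure ratio at M = 1 (~1.8929 for air)
SONIC_RATIO = (1.0 + (GAMMA - 1.0) / 2.0) ** (GAMMA / (GAMMA - 1.0))

def supersonic_core_length(x_over_d, pitot_over_static):
    """Largest x/d at which the Pitot pressure still exceeds its sonic
    value, taken as the supersonic core length Lc/d. Assumes the static
    pressure is close to ambient where the jet becomes sonic."""
    x = np.asarray(x_over_d, dtype=float)
    p = np.asarray(pitot_over_static, dtype=float)
    supersonic = np.nonzero(p >= SONIC_RATIO)[0]
    return float(x[supersonic[-1]]) if supersonic.size else 0.0
```

In practice the measured trace oscillates across the shock-cell structure, so the last crossing of the sonic ratio (rather than the first) is used to delimit the core.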
Figure 2 shows the results of measurement of the length of the supersonic section Lc of the air jets, normalized by the nozzle diameter d. Graph (a) corresponds to the model jet flowing out from the nozzle with diameter d = 10.6 μm; (b) 16.1 μm; (c) 21.4 μm; (d) 34.8 μm. The solid, dashed and dash-dot curves on the graphs show the supersonic core length Lc/d versus the jet underexpansion degree n, obtained for turbulent macrojets in [7], [8] and [9], respectively. The graphs show that, as a certain underexpansion degree is reached, the length of the supersonic core of the model jets falls rapidly to the level typical of turbulent macrojets. The short-lived jump of the supersonic core length after the fall (see figure 2b) results from the jet "relaminarization" effect [4,10]. The presence of roughness on the output nozzle edge reduces the value of Lc/d and eliminates the "relaminarization" effect. The effect of supersonic core length reduction is more pronounced for the model nozzles of bigger diameter. The degree of Lc/d reduction depends non-monotonically on the number of roughnesses. The strongest reduction effect is observed for nozzle No. 2 with two roughnesses. For nozzle No. 4 with four roughnesses this effect is weak or absent.

Results and discussion
It is evident that the roughness influence on the jet flow should depend on the ratio of the roughness size to the boundary layer thickness on the nozzle edge. To gather data about the boundary layer thickness on the nozzle edge, numerical simulation of the flow in the nozzle settling chamber and in the conic contraction of the nozzle was made with the ANSYS Fluent package (see the scheme in figure 1). Figure 3 presents the calculated dependencies of the ratio of the characteristic roughness hump/cavity size h to the mixing layer thickness δ on the nozzle edge versus the underexpansion degree n for the four diameters of the model micronozzles.
It is seen that the roughness size is comparable to or exceeds the boundary layer thickness on the nozzle edge within the whole measurement range of n, and that the roughnesses should influence the mean flow in the jet more intensively for the model nozzles of bigger diameter. To determine the roughness influence on the mean flow in the jet, we tried to measure the deformation of the transversal pressure distribution P0' created by the two roughnesses of nozzle No. 2 during the simulation of the microjet flowing out from the nozzle with the diameter 34.8 μm. The Pitot tube was used to perform measurements across the jet axis along the center line passing through the nozzle roughnesses, as well as along the center line in the perpendicular direction where there were no roughnesses. Additional measurements were carried out across the axis of the jet flowing out from the nozzle without roughness (nozzle No. 1). The measurements were performed in the same jet cross section along the lengthwise axis x/d. Figure 4 shows examples of these measurements. It is evident that the graphs show no significant distortions of the mean flow field, except for a visible jet expansion along the direction connecting the two roughnesses on the edge of nozzle No. 2. One reason for the non-monotonic behavior of the supersonic core length versus the number of roughnesses on the output nozzle edge may be the interaction between the flow disturbances created by neighboring roughnesses. An N-shaped roughness aligned with the flow can create one lengthwise vortex instead of two counter-rotating vortices. The closely located vortices can then inhibit each other as the number of roughnesses on the nozzle edge rises. This probably explains why the supersonic core lengths of the jets flowing out from nozzles No. 1 and No. 4 differ so little.
The other possible reason for the non-monotonic influence of the number of roughnesses on the supersonic section length can be the inhibition of the convective instability of the jet shear layer by the lengthwise vortices created by the larger number of roughnesses. Inhibition of the convective instability prevents the appearance of global jet flow instability, generation of the jet discrete tone, jet transition into the turbulent flow mode, and reduction of the supersonic core length. A definite answer to the question about the possible reasons for the non-monotonic influence of the number of roughnesses on the supersonic core length may be found from numerical simulation results.

Conclusion
The paper presents the results of an experimental investigation of the effect of a small group of roughnesses, uniformly located on the edge of a convergent nozzle, on the length of the supersonic core of model underexpanded air microjets. A non-monotonic influence of the number of roughnesses on the length of the supersonic core of the model microjets is revealed. As the number of roughnesses rises from two to four, the supersonic core length rises and reaches the values obtained for the smooth nozzle. Moreover, the "relaminarization" effect recovers in the model microjet. Two possible reasons for such an effect of the number of roughnesses are proposed:
- mutual inhibition of the closely located lengthwise vortices created by neighboring roughnesses;
- inhibition of the convective instability of the jet mixing layer by the lengthwise vortices.
Are infants exposed to antimicrobials during the first 3 months of life at increased risk of recurrent use? An explorative data-linkage study

Abstract
Objectives: To investigate whether infants exposed to antimicrobials in hospital during the first 3 months of life had an increased risk of ambulatory antimicrobial use during the following year compared with infants not exposed to antimicrobials during the first 3 months of life.
Methods: Norwegian cohort study of infants less than 3 months of age consisting of one group exposed to antimicrobials recruited during hospitalization and one group not exposed to antimicrobials. Ten unexposed infants were matched with one exposed infant according to county of residence, birth year and month, and sex. The Norwegian Prescription Database was used to register antimicrobial use from the month after discharge and 1 year onward. We defined comorbidity based on antimicrobials prescribed as reimbursable prescriptions due to underlying diseases.
Results: Of 95 infants exposed to antimicrobials during the first 3 months of life, 23% had recurrent use compared with 14% in 950 unexposed infants [relative risk (RR) = 1.7 (95% CI = 1.1–2.5) and comorbidity-adjusted RR = 1.4 (95% CI = 0.9–2.2)]. The recurrent use rate in exposed term infants (≥37 weeks, n = 70) was 27% compared with 12% in their unexposed matches [RR = 2.3 (95% CI = 1.4–3.7) and comorbidity-adjusted RR = 1.9 (95% CI = 1.2–3.2)]. Of 25 exposed preterm infants, 3 (12%) had recurrent use. The total antimicrobial prescription rate was 674/1000 in the exposed group and 244/1000 in the unexposed group [incidence rate ratio = 2.8 (95% CI = 1.6–4.9)].
Conclusions: Infants exposed to antimicrobials during the first 3 months of life had an increased risk of recurrent use during the following year. This increased risk also appeared in term infants without infection-related comorbidity.
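The headline relative risk and its confidence interval can be reproduced with the standard log-based (Katz) method. The counts 22/95 and 133/950 below are back-calculated from the reported percentages, so they are an assumption:

```python
from math import exp, log, sqrt

def relative_risk(a, n1, b, n2, z=1.96):
    """Relative risk with a log-based (Katz) 95% CI.
    a of n1 exposed had the event; b of n2 unexposed had the event."""
    rr = (a / n1) / (b / n2)
    se = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# 23% of 95 exposed ~ 22 infants; 14% of 950 unexposed ~ 133 infants
rr, lo, hi = relative_risk(22, 95, 133, 950)
print(f"RR = {rr:.1f} (95% CI = {lo:.1f}-{hi:.1f})")  # RR = 1.7 (95% CI = 1.1-2.5)
```

The wide interval reflects the small exposed group (n = 95); the comorbidity-adjusted estimates in the abstract come from the study's regression models and cannot be reproduced from the marginal counts alone.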
Introduction
Understanding patterns of antimicrobial use is essential to combat increasing antimicrobial resistance. 1,2 Microbiome studies have also reported negative consequences of antimicrobial exposure in early childhood. [3][4][5] Antimicrobial exposure of the immature microbiome has been linked to an increased risk of developing obesity, asthma, allergy, inflammatory bowel disease, behavioural difficulties and impaired growth. 4,6-12 Recurrent antimicrobial exposures have been shown to be an even stronger risk factor for developing chronic conditions. [6][7][8] For infants less than 3 months of age there is a low threshold for antimicrobial therapy when symptoms of possible infection are present or if the C-reactive protein value is raised. However, only a small proportion of those treated with antimicrobials have a confirmed infection. [13][14][15] Thus, risk algorithms and auto-stop antimicrobial functions have been implemented to reduce antimicrobial use. 16,17 After the first few months of life, the risk of severe bacterial infections decreases. 18,19 However, late-infancy studies also indicate that infants receive an excess of antimicrobial prescriptions, mainly for respiratory symptoms. 20,21 There is a lack of follow-up studies examining subsequent antimicrobial prescriptions in infants. One might suspect that these infants are at risk of recurrent antimicrobial use because of infection-related comorbidities. Also, early-life antimicrobial exposure could lead to antimicrobial resistance or disruption of the microbiome affecting an immature immune system, thereby altering antimicrobial consumption patterns. Finally, behavioural factors such as a lower threshold for seeking medical help, parental expectations and the prescription habits of the doctor could be of importance. [21][22][23] Thus, we hypothesized that antimicrobial exposure during the first 3 months of life increases the risk of subsequent antimicrobial use.
To explore the hypothesis, we investigated whether infants exposed to antimicrobials in hospital during the first 3 months of life had an increased risk of antimicrobial use in ambulatory care during the following year compared with infants who had not been exposed to antimicrobials during the first 3 months of life. In addition, we aimed to adjust for infection-related comorbidities, to explore whether the observed associations differed in selected subgroups, and to discuss the potential for reduced antimicrobial use. Study design We conducted a cohort study of infants less than 3 months of age consisting of one group exposed to antimicrobials in hospital (AB+) and one group not exposed to antimicrobials either in hospital or in ambulatory care (AB−). All infants were followed for 1 year with regard to antimicrobial prescriptions using the Norwegian Prescription Database (NorPD) (Figure 1). We defined the follow-up period as early childhood (varying from 1-12 months to 3-14 months of age). An antimicrobial prescription was defined as one course of antibiotics dispensed from the pharmacy. Infants exposed to antimicrobials during the first 3 months of life (AB+) In Norway, postnatal antimicrobial treatment is given in a public hospital setting. Also, preterm infants or severely sick term infants often remain in hospital care for several weeks. The infants in this study were recruited from the paediatric department of a district hospital in Ålesund. Infants less than 3 months of age, born in the county (catchment area) in 2017 and receiving systemic antimicrobials, were enrolled in the AB+ group. In the county there were 2681 live births in 2017. The paediatric department consisted of a general paediatric ward with 18 beds and a level III neonatal intensive care unit with 13 beds. Data were registered by study nurses every day at 8 a.m. 
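The 1:10 matching scheme described above (10 unexposed infants per exposed infant, matched on county of residence, birth month/year and sex) can be sketched as follows; the record fields and data here are hypothetical illustrations, not records from the study registry.

```python
import random

def match_controls(case, pool, k=10, keys=("county", "birth_month", "sex")):
    """Draw k controls from `pool` that share all matching keys with `case`."""
    candidates = [p for p in pool
                  if all(p[key] == case[key] for key in keys)]
    return random.sample(candidates, k)

# Hypothetical registry records: one exposed infant, many unexposed candidates.
case = {"id": "AB+01", "county": "X", "birth_month": "2017-03", "sex": "M"}
pool = [{"id": f"AB-{i:03d}", "county": "X", "birth_month": "2017-03", "sex": "M"}
        for i in range(40)]

controls = match_controls(case, pool)
print(len(controls))  # 10
```

Sampling without replacement from the key-matched candidates mirrors the design's exact matching on categorical variables.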
throughout 2017 and included gestational age, sex, age in months at the start of antimicrobial therapy, indication for use, type of antimicrobial, respiratory support, complications/other conditions and positive blood cultures. Data were double-checked by the project leader through the electronic medical record. Indication for treatment was based on symptoms and laboratory or radiological findings. Prophylaxis was defined as antimicrobials given to prevent infections. Respiratory support was defined as invasive ventilation, continuous positive airway pressure (CPAP) or high flow (HF). Complications/other conditions were defined as invasive ventilation, therapeutic hypothermia, thoracic drainage, exchange transfusion, need of immunoglobulin or vasoactive drugs, congenital heart disease, suspected genetic syndrome or severe neurological disease, and any other severe congenital condition requiring surgery or invasive interventions. We defined preterm birth and complications/other conditions as risk factors for recurrent antimicrobial use. Thus, we defined low-risk infants as term infants without complications/other conditions. Preterm birth was defined as gestational age <37 weeks. Infants not exposed to antimicrobials during the first 3 months of life (AB−) Infants in the AB− group were randomly identified from the National Population Register. This register contains information on everyone who resides in Norway. Each infant in the AB+ group was matched with 10 infants in the AB− group according to county of residence, month and year of birth, and sex. Through the NorPD, we confirmed that none of the infants in the AB− group received any antimicrobial prescription during the first 3 months of life. Follow-up period in the NorPD Six infants in the AB+ group were excluded: one died during infancy and five were not registered with a home address in the county covered by the hospital. 
The final cohort consisted of 95 infants in the AB+ group and 950 matched infants in the AB− group. These were linked to the NorPD using the national identity number and were followed from 1 January 2017 throughout December 2018. The NorPD contains information on all prescriptions dispensed to individual patients in ambulatory care in Norway. 24 We included prescriptions of all systemic antibacterials (ATC group J01). Indications for the prescriptions were not available. To assess and adjust for infection-related comorbidity equally between the groups, we identified all infants receiving reimbursable antimicrobial prescriptions due to underlying diseases during the follow-up period. In Norway, the reimbursable antimicrobial prescription system is targeted towards patients with persistently increased infection risk according to certain criteria and is actively used by prescribers. Chronic lung conditions, immunodeficiencies and relapsing pyelonephritis are examples of this. The ICD-10 or ICPC-2 classification systems are used to specify the reason for reimbursement on the prescription. Also, if one expects that the patient will need antimicrobials for at least 3 out of the next 12 months, one can in most cases issue a reimbursable prescription. In Norway, most infants start in day-care centres around the age of 1 year, a relevant aspect when analysing ambulatory prescriptions in infants. Analyses and outcome variables Patient demographics were quantified using descriptive statistics and are presented as numbers and percentages. Numbers of treatment days are presented as medians and IQRs. For infants in the AB+ group, we analysed antimicrobial prescriptions individually from the month after discharge from hospital and 1 year onward. Data for infants in the matched AB− group were analysed for the same period. 
The main outcome variable was the number of infants prescribed antimicrobials in ambulatory care, presented as number and percentage; the secondary outcome variable was the total number of prescriptions in ambulatory care, presented as number and prescriptions per 1000 inhabitants. Furthermore, we also present prescriptions of oral broad-spectrum antimicrobials not recommended as first-line agents: macrolides, clindamycin, cefalexin, ciprofloxacin and co-trimoxazole. 25 To compare the 1 year antimicrobial use rate between the AB+ group and the AB− group, we estimated the relative risk (RR) with 95% CI using a log-binomial regression model and the log-link function. To compare 1 year total antimicrobial prescriptions, we estimated the incidence rate ratio (IRR) with 95% CI using a negative binomial regression model. In both models, we estimated robust standard errors to account for possible correlation due to matching. We also adjusted for infection-related comorbidities. These analyses were performed for all infants and for selected subgroups. Distributions of different antimicrobials are presented as percentages, and only one prescription per type of antimicrobial was included per infant for this purpose. Stata SE 17.0 (StataCorp LLC, TX, USA) was used for all analyses. Table 1. Characteristics of infants less than 3 months exposed to antimicrobials (AB+) compared with infants less than 3 months not exposed to antimicrobials (AB−) (Thaulow et al.). Ethics The study was approved by the Regional Committee for Medical and Health Research Ethics (2017/30/REK Midt) and by the Local Data Protection Official at the study hospital. Results Of 2681 live births in 2017 evaluated for inclusion in this study, 101 (3.8%) children were exposed to antimicrobials in hospital during the first 3 months of life. Ninety-five infants were included in the AB+ group and 950 matched unexposed infants in the AB− group. Table 1 shows baseline data for both groups. 
Within the AB+ group, the median number of days of initial antimicrobial exposure was 3 (IQR = 2-5) for low-risk term infants, 4 (IQR = 3-4) for term infants with complications/other conditions and 3 (IQR = 2-4) for preterm infants. Of 26 infants with initial antimicrobial exposure of 5 days or more, 20 were term infants and 6 were preterm infants, and 6 had complications/other conditions. Table 2 shows that 23% in the AB+ group were prescribed antimicrobials during the follow-up period, while 14% in the AB− group were prescribed antimicrobials during the same period [RR = 1.7 (95% CI = 1.1-2.7) and comorbidity-adjusted RR = 1.4 (95% CI = 0.9-2.2)]. For selected subgroups in the AB+ group, we found the following rates of infants with antimicrobial prescriptions in the follow-up period: infants with complications/other conditions, 3/15 (20%); extremely preterm infants, 1/3 (33%); infants treated for pyelonephritis, 5/5 (100%); and infants needing invasive ventilation, 1/11 (9%). Table 3 shows that the total number of antimicrobial prescriptions was 674/1000 inhabitants in the AB+ group and 244/1000 inhabitants in the AB− group [IRR = 2.8 (95% CI = 1.6-4.9)]. When including only one prescription per type of antimicrobial per infant, nearly half of all prescriptions were penicillin V (Figure 2). The exposure rate for penicillin V was 15/95 (15.8%) in the AB+ group and 81/950 (8.5%) in the AB− group. Of the 64 prescriptions in total in the AB+ group, 31 (48%) were trimethoprim, 19 (30%) were penicillin V and 14 (22%) were other antimicrobials. Of 232 prescriptions in the AB− group, 101 (44%) were penicillin V, 36 (16%) were amoxicillin, 27 (12%) were macrolides and 68 (29%) were other antimicrobials. All trimethoprim prescriptions in the AB+ group were reimbursable prescriptions and were distributed between six infants, five of whom were treated for pyelonephritis during the first 3 months of life. All prescriptions dispensed were oral formulations. 
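The crude risk-ratio arithmetic behind the headline figures can be reproduced from the reported counts (22/95 and 133/950 are the counts implied by 23% and 14%); note this simple Wald CI ignores the matching, whereas the paper's log-binomial model used robust standard errors, so the intervals differ slightly.

```python
import math

def relative_risk(a, n1, b, n2):
    """Crude relative risk with a log-scale Wald 95% CI.

    a/n1 = events/total in the exposed group,
    b/n2 = events/total in the unexposed group.
    """
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo, hi = (math.exp(math.log(rr) + z * se) for z in (-1.96, 1.96))
    return rr, lo, hi

rr, lo, hi = relative_risk(22, 95, 133, 950)
print(f"RR = {rr:.1f} (95% CI {lo:.1f}-{hi:.1f})")   # RR = 1.7 (95% CI 1.1-2.5)

# Crude incidence rate ratio for total prescriptions (674 vs 244 per 1000).
irr = (64 / 95) / (232 / 950)
print(f"IRR = {irr:.1f}")                            # IRR = 2.8
```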
Discussion To the best of our knowledge, this is the first follow-up study monitoring recurrent antimicrobial use in infants exposed to antimicrobials in hospital shortly after birth. Interestingly, we found that low-risk term infants had an increased risk of recurrent antimicrobial use (RR = 2.5) compared with infants that had not received antimicrobials (Table 2). (Table 2 compares antimicrobial use in ambulatory care during 1 year in early childhood between the AB+ and AB− groups; RRs were estimated with a log-binomial regression model including robust standard errors, the two groups were matched according to county of residence, birth month and year, and gender, and prescriptions were registered from the month after initial exposure and 1 year onward.) A previous study from the same hospital found that 27% of hospitalized neonates were exposed to antimicrobials, and that only 14% of treatments for suspected early-onset sepsis were confirmed by blood culture or laboratory criteria (C-reactive protein of 30 mg/L or more). 15 This finding is also in line with other reports. 13,14 Thus, many of the infants in the AB+ group were probably unnecessarily exposed to antimicrobials in the first place. We carefully searched the literature for other studies targeting the risk of recurrent use of antimicrobials in infants, but could not find any comparable studies. Some studies have argued that single antimicrobial courses in neonates may not be very harmful. 26,27 However, there is increasing evidence of alterations in the developing microbiome, 3-5 increasing the risk of adverse long-term effects. 4,6-12 The results of this study confirmed our hypothesis that children exposed to antimicrobials shortly after birth (AB+) had an increased risk of recurrent use. 
This is important since recurrent antimicrobial use is reported to be a particular risk factor for adverse long-term effects. [6][7][8] We introduced different potential reasons for our hypothesis of increased antimicrobial use in the AB+ group: comorbidities, behavioural factors, disruption of the microbiome and antimicrobial resistance. Adjustments for infection-related comorbidity slightly decreased the risk of recurrent antimicrobial use in the AB+ group compared with the AB− group in all comparisons. More specifically, infantile pyelonephritis was the single most identifiable risk factor for recurrent antimicrobial use in the AB+ group. This is not surprising as urinary tract infections often relapse and many receive antimicrobial prophylaxis after the first event of pyelonephritis. 28,29 However, the indication for prophylaxis in this condition has been debated, as the benefit is reported to be small compared with the risk of developing resistance. 28,29 We found no association between respiratory support in the AB+ group and the risk of antimicrobial prescriptions during follow-up. Furthermore, few preterm infants and infants with neonatal complications/other conditions were prescribed antimicrobials in the follow-up period. Reasons for this could include increased protection from the environment, thereby decreasing the risk of infections. Also, they might have had closer follow-up from specialist care. Given the immature microbiome of premature infants, our results do not support that disruption of the microbiome shortly after birth contributes to more antimicrobial prescriptions in early childhood. 30 For low-risk term infants in the AB+ group, the risk of recurrent antimicrobial use remained more than doubled, even after comorbidity adjustment. We revealed similar findings when comparing the total number of prescriptions. 
However, our methods of comorbidity assessment did not necessarily capture all infants with increased infection risk, but previous literature has reported that the majority of infants receiving an antibiotic in early life do not have confirmed infections. [13][14][15] (Table 3 compares the total number of antimicrobial prescriptions in ambulatory care during 1 year in early childhood between the AB+ and AB− groups, presented per 1000 inhabitants with IRRs and comorbidity-adjusted IRRs.) As many infections in early childhood are self-limiting, 21 we speculate whether behavioural factors in parents and prescribers could be of importance. One study from Finland concluded that psychological factors should be considered in infants receiving recurrent antimicrobial prescriptions. 23 Treatments for suspected infection in early life could concern the parents and lead to a lower threshold for seeking a doctor with the expectation of antimicrobial treatment. 21,22 A doctor's prescription attitude may also be influenced by a history of postnatal antimicrobial treatment. 31 More information to outpatient clinics and the public regarding the harmful effects of antimicrobial use in early childhood could be helpful. Balanced information regarding a future threshold for antimicrobial use could be implemented as part of neonatal antimicrobial stewardship programmes. The results of our study can encourage future interventions and antimicrobial stewardship programmes to increase focus on the transition between hospitalization and ambulatory care to reduce unnecessary prescriptions. The high proportion of infants being prescribed penicillin V during the follow-up period reflects that respiratory tract symptoms were a common reason for antimicrobial prescribing. 
25 This correlates with a European study reporting that respiratory tract infection was the most common indication for ambulatory antimicrobial therapy in infants. 21 The prescription rate for broad-spectrum antimicrobials, such as macrolides and clindamycin, was low in our study, particularly in the AB+ group. Hence, it is not likely that the AB+ group experienced more episodes of infection with resistant bacteria. This also corresponds with the low rates of antimicrobial resistance reported in Norwegian children. 18 Two out of three infants exposed to antimicrobials during the first 3 months of life were males, and the proportion of males being prescribed antimicrobials in the follow-up period was also slightly higher than that for females in both groups. For comparison, a global survey found that 59% of infants receiving antimicrobials in neonatal units were males. 14 A study from Italy reported a 3.5% higher antimicrobial exposure rate for males compared with females in children less than 2 years of age. 32 Also, studies from Norway confirm this gender gap. 15,24,33 Compared with other countries, the antimicrobial prescription rate during early childhood was in the lower range. 21,32,34 A strength of our study is that we linked prospectively collected clinical data with the NorPD and the National Population Register, creating a robust cohort of infants for follow-up in the NorPD. It is also a strength that our two groups were matched according to age, gender and residency, to control for these possible confounders, and that we were able to follow prescription activity for exactly the same period in the two groups. One limitation of the study is the lack of variables and potential confounders in the AB− group, namely gestational age, hospitalization and respiratory pressure support. However, by accessing reimbursable prescriptions, we were able to adjust our analysis for infection-related comorbidities. 
Despite this, our adjusted results may have been subject to confounding by indication due to underlying causes leading to antimicrobial exposure that could not be captured by the comorbidity assessment used in this study. (Figure 2 shows the distribution of the ambulatory antimicrobial prescribing pattern for 1 year in early childhood (within the range of 1-14 months of age) in infants exposed to antimicrobials during the first 3 months of life (AB+) and in a control group of infants not exposed to antimicrobials during the first 3 months of life (AB−); only one prescription per type of antimicrobial was included per infant.) However, our aim was not to conclude on the exact reason for the increased risk in the AB+ group, but rather to discuss potential reasons based on our results. For some subgroups, such as preterm infants, we realize that the sample size is small, as indicated by the wide CIs. Thus, these subgroup analyses should be interpreted with caution and the findings should be validated in future studies using a larger group of preterm infants. Changing residency during the study period could have occurred, affecting the geographical distribution of our patients, but all ambulatory prescriptions would still be captured through the NorPD. The NorPD captures ambulatory prescriptions only. Thus, infants may have received antibiotics in hospital in the follow-up period. However, antibiotic exposures in hospital would in most cases be followed by an ambulatory prescription at discharge. Also, in the AB+ group we surveyed antibiotic use in hospital during 2017 and registered no readmissions for antimicrobial use. Finally, we included patients from only 1 out of 11 counties in Norway, possibly limiting the external validity of the study. 
However, by analysing public statistics from the NorPD, we found that our county had an antimicrobial exposure rate of 20% in 2017 for children 0-4 years of age, identical to the national rate. 24 This increases the generalizability of our findings, but similar studies from countries with high rates of antimicrobial use are warranted. In conclusion, we found that infants exposed to antimicrobials during the first 3 months of life had an increased risk of recurrent use during early childhood. Low-risk term infants had a doubled risk of recurrent antimicrobial use, even after adjusting for infection-related comorbidities. Given the increased vulnerability of infants to antimicrobial exposure, measures should be taken to avoid unnecessary antimicrobial use in infants, both during and after the neonatal period.
$q$-Virasoro/W Algebra at Root of Unity and Parafermions We demonstrate that the parafermions appear in the $r$-th root of unity limit of $q$-Virasoro/$W_n$ algebra. The proper value of the central charge of the coset model $ \frac{\widehat{\mathfrak{sl}}(n)_r \oplus \widehat{\mathfrak{sl}}(n)_{m-n}}{\widehat{\mathfrak{sl}}(n)_{m-n+r}}$ is given from the parafermion construction of the block in the limit. Introduction Ever since the AGT relation [1,2,3] (the correspondence between the correlators of 2d QFT and the 4d instanton sum) was introduced, both sides of the correspondence have been intensively studied by a number of people. For example, on the 2d side, the β-deformed matrix model is used in order to control the integral representation of the conformal block [4,5,6,7,8,9,10]. There are also some proposals for proving the 2d-4d connection [11,12,13,14,15]. Moreover, similar correspondences have been found and examined [16,17,18,19,20,21,22,23,24,25,26]. Among these, in this paper we pay attention to the correspondence between the coset model $\frac{\widehat{\mathfrak{sl}}(n)_r \oplus \widehat{\mathfrak{sl}}(n)_p}{\widehat{\mathfrak{sl}}(n)_{r+p}}$, (1.1) and the N = 2 SU(n) gauge theory on $\mathbb{R}^4/\mathbb{Z}_r$ [20,23]. Here $\widehat{\mathfrak{sl}}(n)_k$ stands for the affine Lie algebra in the representation of level $k$, and $r$ and $p$ will be specified in this paper. On the 2d CFT side, a quantum deformation (q-deformation) of the Virasoro algebra [27] and the $W_n$ algebra [28,29] is known, while the 4d gauge theories can be lifted to five-dimensional theories with the fifth direction compactified on a circle. There exists a natural generalization to the connection between the 2d theory based on the q-deformed Virasoro/W algebra and the five-dimensional N = 2 gauge theory [30]. For recent developments, see, for example, [31,32,33,34,35,36,37]. In the previous paper [32], we proposed a limiting procedure to obtain the Virasoro/W block on the 2d side from that in the q-deformed version. 
On the other hand, we saw that the instanton partition function on $\mathbb{R}^4/\mathbb{Z}_r$ is generated from that on $\mathbb{R}^5$ in the same limit. This result means that if we assume the 2d-5d connection, it is automatically assured that the Virasoro/W blocks generated by the limiting procedure agree with the instanton partition function on $\mathbb{R}^4/\mathbb{Z}_r$. Our limiting procedure corresponds to a root of unity limit in q. A root of unity limit of the q-Virasoro algebra was also considered in [38]. Our limit is slightly different from this and is similar to the one used in order to construct the eigenfunctions of the spin Calogero-Sutherland model from Macdonald polynomials in [39,40]. In the present paper we will elaborate our limiting procedure and show that the $\mathbb{Z}_r$-parafermionic CFT which has the symmetry described by (1.1) appears on the 2d side. We also clarify the relation between the free parameter p and the omega-background parameters on the 4d side. The paper is organized as follows: In the next section, we review the limiting procedure for the q-Virasoro algebra [32]. In section 3, we consider the q-deformed screening current and charge and show that the $\mathbb{Z}_r$-parafermion currents are derived in a natural way. In section 4, we consider the generalization to the q-$W_n$ algebra. Root of Unity Limit of q-Virasoro Algebra In this section, we review the root of unity limit [32] of the q-deformed Virasoro algebra [27], which has two parameters q and $t = q^{\beta}$. The defining relation is $f\!\left(\tfrac{w}{z}\right) T(z)\,T(w) - f\!\left(\tfrac{z}{w}\right) T(w)\,T(z) = -\frac{(1-q)(1-t^{-1})}{1-p}\left[\delta\!\left(p\,\tfrac{w}{z}\right) - \delta\!\left(p^{-1}\tfrac{w}{z}\right)\right]$, where $p = q/t$ and $f(x) = \exp\!\left(\sum_{n=1}^{\infty}\frac{(1-q^{n})(1-t^{-n})}{1+p^{n}}\frac{x^{n}}{n}\right)$. The multiplicative delta function is defined by $\delta(z) = \sum_{n\in\mathbb{Z}} z^{n}$. Using the q-deformed Heisenberg algebra $\mathcal{H}_{q,t}$, the q-Virasoro operator $T(z)$ can be realized in free-field form. The q-deformed chiral bosons are defined in terms of the q-deformed Heisenberg algebra in (2.7), where $\xi_+ = q$, $\xi_- = t$. Let us consider the simultaneous r-th root of unity limit in q and t. Since $t = q^{\beta}$, this limit is possible only if the parameter β takes a rational value, parametrized by non-negative integers $m_{\pm}$. 
In the limit, we have two types of bosons, $\phi(w)$ and $\varphi(w)$ [32], where $w = z^{r}$. The commutation relations are $[a_{m}, a_{n}] = m\,\delta_{m+n,0}$ and $[\tilde a_{n+\ell/r},\, \tilde a_{-m-\ell'/r}] = \left(n+\tfrac{\ell}{r}\right)\delta_{n,m}\,\delta_{\ell,\ell'}$. The boson $\phi(w)$ and the twisted boson $\varphi(w)$ play an important role in the appearance of the $\mathbb{Z}_r$-parafermions. $\mathbb{Z}_r$-parafermionic CFT The q-deformed screening current $S(z)$ and the screening charge are defined in terms of the Jackson integral. Multiplying by a regularization factor, we obtain the screening charge in the root of unity limit, up to normalization, where $A_r$ is the normalization factor [41]. For example, we consider the r = 2 case. In the limit, we obtain $\lim_{q\to -1} S(z) = \,{:}\,e^{\sqrt{\beta}\,\phi(w)}\, e^{\varphi(w)}\,{:}$, (3.8) and, after the appropriate normalization, we obtain the screening charge for the superconformal block [42,43], in which ${:}\,e^{\varphi(w)}\,{:}$ is the NS fermion. From now on we will show that the $\mathbb{Z}_r$-parafermions appear in the general r-th root of unity limit. In particular, $\psi_1(w)$ will be shown to work as the first parafermion current. The $\mathbb{Z}_r$-parafermion algebra consists of $(r-1)$ currents $\psi_\ell(w)$ ($\ell = 1, \cdots, r-1$) satisfying the defining relations of [44], where $\psi^{\dagger}_{\ell}(w) = \psi_{r-\ell}(w)$ and $\Delta_\ell = \frac{\ell(r-\ell)}{r}$ is the conformal dimension of $\psi_\ell(w)$; the central charge belongs to the parafermionic stress tensor $T_{\mathrm{PF}}$. The explicit form of $T_{\mathrm{PF}}(w)$ is given in [45]. The OPE of (3.4) defines the second parafermion (3.16), and similarly the $(\ell+1)$-th parafermion is obtained from the $\ell$-th parafermion; here $B_r$ is a constant which can be determined by the OPE relations. In the end, we have the chiral boson $\phi(w)$ coupled to $Q_E$ and the $\mathbb{Z}_r$-parafermion $\psi_\ell(w)$. Therefore, the stress tensor of the whole system is the sum of $T_{\mathrm{PF}}(w)$ and $T_B(w)$, where $T_B(w)$ stands for the usual stress tensor of the chiral boson field. The central charge of the whole system is then the sum of the contribution of the boson $\phi(w)$ (with background charge $Q_E$) and $c_{\mathrm{PF}}$, the central charge of the unitary series of the $\mathbb{Z}_r$-parafermionic CFT [46]. 
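For orientation, the unitary $\mathbb{Z}_r$-parafermion central charge (a standard result of parafermionic CFT, stated here as a consistency check rather than recovered from the extracted equations) interpolates between familiar theories:

```latex
c_{\mathrm{PF}}(r) = \frac{2(r-1)}{r+2}, \qquad
c_{\mathrm{PF}}(2) = \tfrac{1}{2} \ \text{(free fermion, matching the $r=2$ NS-fermion case above)}, \qquad
c_{\mathrm{PF}}(3) = \tfrac{4}{5} \ \text{(3-state Potts model)}.
```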
The form of the screening charge in the case of general r is the same as that of eq. (3.9). Root of Unity Limit of q-W_n Algebra In this section, we consider the generalization to the q-$W_n$ algebra [29]. We denote by $\mathfrak{h}$ the Cartan subalgebra of the $\mathfrak{sl}(n)$ Lie algebra. The q-$W_n$ algebra is expressed in terms of an $\mathfrak{h}$-valued q-deformed boson, where $e_a$ ($a = 1, \cdots, n-1$) are the simple roots and $\langle\,,\,\rangle : \mathfrak{h}^* \otimes \mathfrak{h} \to \mathbb{C}$ is the canonical pairing. In the commutation relations, $C_{ab}$ is the Cartan matrix of type A. Similar to the q-Virasoro case, we consider the limit with $\omega = e^{2\pi i/r}$ and $k$ a natural number coprime to r. The condition for this limit to exist is that β is a rational number, parametrized by non-negative integers $m_\pm$. Taking the limit $h \to 0$ of $\varphi^a_0(z)$, we obtain the mode expansions (4.11), normalized as $\alpha^a_{nr} = -(-1)^{nk}\sqrt{r}\,h\,a^a_n$ (4.14) and $\alpha^a_{nr+\ell} = \frac{e^{i\pi k(nr+\ell)/2} - e^{-i\pi k(nr+\ell)/2}}{\sqrt{r}\,(n+\ell/r)}\,\tilde a^a_{n+\ell/r}$. The commutation relations are $[a^a_n, a^b_m] = n\,C^{ab}\,\delta_{n+m,0}$ (4.17) and $[\tilde a^a_{n+\ell/r},\, \tilde a^b_{-m-\ell'/r}] = \left(n+\tfrac{\ell}{r}\right) C^{ab}\,\delta_{n,m}\,\delta_{\ell,\ell'}$. (4.18) For each $e_a$, we define the corresponding current, where $A_r$ is a normalization factor and $\varphi^{(\ell)}_a(w) \equiv \varphi_a(e^{2\pi i \ell} w)$. Let $\alpha = \sum_{a=1}^{n-1} n_a e_a \in Q$, where $n_a$ are non-negative integers and $Q$ denotes the root lattice. We obtain the corresponding parafermion $\psi_\alpha$, up to its normalization. The independent parafermions can be given only for $\alpha \in Q/rQ$. Not all of the $\psi_\alpha$ are independent; $1 \sim \underbrace{\psi_{e_a} \cdots \psi_{e_a}}_{r}$. (4.25) For example, in the case of the $\mathfrak{sl}(3)$ algebra and r = 4, the corresponding parafermions are drawn in Fig. 1. We define the parafermion associated with the negative of a simple root by $\psi_{-e_a} \sim \underbrace{\psi_{e_a} \psi_{e_a} \cdots \psi_{e_a}}_{r-1}$. (4.26) The normalization can be determined by the correlation functions [47], where $\alpha^2 = (\alpha, \alpha)$. 
In the case of the $\mathfrak{sl}(2)$ algebra, we obtain the first $\mathbb{Z}_r$-parafermion. Similar to the case of n = 2 (3.22), the central charge is given by $c^{(r)}_n = \frac{n(n-1)(r-1)}{r+n} + (n-1)\left(1 - n(n+1)\,(\cdots)\right)$. In the case of s = 0, corresponding to $Q_E = 0$, we have the central charge of the usual Sugawara stress tensor for $\widehat{\mathfrak{sl}}(n)_r$, $c^{(r,m,0)}_n = \frac{r(n^2-1)}{r+n} = c_{\widehat{\mathfrak{sl}}(n)_r}$. (4.34) It is well known that the affine Lie algebra $\widehat{\mathfrak{sl}}(n)_r$ is represented by parafermions and an auxiliary boson [47]. In the case of s = 1, because (4.31) becomes $c^{(r,m,1)}_n = \frac{(n^2-1)\,r\,(m-n)(m+n+r)}{(r+n)\,m\,(m+r)}$, (4.35) the model gives us the unitary series of the coset $\frac{\widehat{\mathfrak{sl}}(n)_r \oplus \widehat{\mathfrak{sl}}(n)_{m-n}}{\widehat{\mathfrak{sl}}(n)_{m-n+r}}$. (4.36) We can now see how the level p is related to the omega-background parameters $\epsilon_1$ and $\epsilon_2$ on the 4d side. Since $\beta = -\epsilon_1/\epsilon_2$, (4.8) yields a condition on the ratio of these parameters. Therefore, when we introduce the free parameter $\epsilon$, $\epsilon_{1,2}$ can be written respectively as $\epsilon_1 = \epsilon(p+n+r)$, $\epsilon_2 = -\epsilon(p+n)$. (4.37) This result suggests that the Nekrasov-Shatashvili limit $\epsilon_1 \to 0$ (resp. $\epsilon_2 \to 0$) of the N = 2 gauge theory on $\mathbb{R}^4/\mathbb{Z}_r$ corresponds to the critical level limit $p + r \to -n$ (resp. $p \to -n$) of the coset model.
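Equation (4.35) can be checked to be exactly the additive coset central charge $c_{\widehat{\mathfrak{sl}}(n)_r} + c_{\widehat{\mathfrak{sl}}(n)_{m-n}} - c_{\widehat{\mathfrak{sl}}(n)_{m-n+r}}$ built from the Sugawara value in (4.34); a small exact-arithmetic sketch (not the authors' code) verifies the identity over a grid of integer parameters:

```python
from fractions import Fraction as F

def c_affine(k, n):
    """Sugawara central charge of affine sl(n) at level k: k(n^2-1)/(k+n)."""
    return F(k * (n * n - 1), k + n)

def c_coset(n, m, r):
    """Central charge of the coset sl(n)_r + sl(n)_{m-n} over sl(n)_{m-n+r}."""
    return c_affine(r, n) + c_affine(m - n, n) - c_affine(m - n + r, n)

def c_435(n, m, r):
    """Eq. (4.35) as printed in the text."""
    return F((n * n - 1) * r * (m - n) * (m + n + r), (r + n) * m * (m + r))

# The two expressions agree for all tested integer parameters.
for n in range(2, 6):
    for r in range(1, 6):
        for m in range(n + 1, n + 7):
            assert c_coset(n, m, r) == c_435(n, m, r)

# n = 2, r = 1 reproduces the Virasoro minimal series, e.g. the Ising value:
print(c_coset(2, 3, 1))  # 1/2
```

Using exact rationals avoids floating-point noise, so the equality test is a genuine algebraic check on each sampled point.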
Bacterial Analysis of Roof-harvested Rainwater and their Implications to Public Health in the Caribbean Island of Anguilla Anguilla is a dry to semi-arid island in the Caribbean where the majority of residents rely on roof-harvested rainwater. The objective of this study was to assess the presence of indicator coliform bacteria and the associated health risk to consumers in Anguilla. Roof-harvested rainwater samples were collected from 86 homes. Bacterial counts were done by membrane filtration and culture on Difco Modified mTEC Agar and Hach m-ColiBlue24 agar. Bacteria grown on agar plates were counted using a darkfield colony counter. All owners or residents were interviewed about water-use-related gastroenteritis. Coliform bacteria grew in 88.4% (76/86) of samples, ranging from very few colonies to confluent growth of fecal E. coli, non-fecal E. coli and coliforms other than E. coli. The results indicated that the majority of the samples contained a mixture of different coliform bacteria. The household survey revealed that the majority of households drink non-potable roof-harvested rainwater without any treatment and reported at least one episode of gastroenteritis-like illness during the past year. Our results suggest that the coliform bacteria detected in roof-harvested rainwater throughout Anguilla pose a potential health risk to consumers and that the water requires proper treatment before consumption. 1Department of Microbiology and Immunology, American University of Barbados School of Medicine, Wildey, Saint Michael, BB 14007 Barbados and 2Department of Microbiology, Saint James School of Medicine, Anguilla, BWI Corresponding author Dr. 
Birendra Raj Tiwari, Department of Microbiology and Immunology, American University of Barbados School of Medicine, Wildey, Saint Michael, BB 14007 Barbados; Email: tiwari.birendra58@gmail.com Orcid ID: https://orcid.org/0000-0003-4654-8197 DOI: https://www.doi.org/10.3126/nmcj.v23i1.36235 Introduction Roof-harvested rainwater is the main source of water supply in arid and semi-arid regions of the world; this poses a risk of gastroenteritis and other opportunistic infections. [1][2][3][4][5] Thermotolerant Escherichia coli (fecal E. coli), non-fecal E. coli and coliforms other than E. coli are considered indicators of pathogens in drinking water. [6][7][8][9][10] Anguilla is one of the sparsely populated dry-to-semi-arid Leeward islands in the northerly Lesser Antilles of the Caribbean region and has no natural sources of fresh water. 11 The majority of residents of Anguilla rely on roof-harvested rainwater for drinking, which is probably contaminated with bacterial pathogens. 12 Therefore, the objective of this study was to assess the contamination of roof-harvested rainwater in Anguilla with indicator bacteria. The study also aimed to find out whether there are any associated health risks from consuming this water. Materials and Methods Sample collection and ethical approval: The sampling was done during the months of September and October 2018 by selecting 86 households using roof-harvested rainwater in 10 different locations of the island. Ethical approval was obtained from the Saint James School of Medicine Anguilla Campus Research Committee. Written informed consent was obtained from each house owner before collection of samples. A brief questionnaire was filled out regarding consumption habits and safety practices such as cleaning and disinfection of the cistern, and filtration of water before pouring into the cistern. Additionally, information on any waterborne illness experienced in the past year was obtained. 
Two water samples were collected from the kitchen tap in 100-ml sterile bottles after opening the tap for a minimum of 2-3 minutes to flush the water and clear the water lines. Immediately after collection the bottles were sealed, labeled, placed into chillers with ice packs to maintain a constant temperature of 4°C and to protect them from sunlight, and were carried to the Anguilla Water Laboratory. All the samples were cultured within 6 hours, following the United States Environmental Protection Agency protocols for sample collection, transport, storage, sterilization and processing. 13 Bacterial culture: Agar plates were prepared by dissolving the hygroscopic powder of modified mTEC Agar for thermotolerant E. coli according to the manufacturer's instructions (Becton, Dickinson and Co, Sparks, MD). Briefly, the agar was dissolved by boiling on a hot plate, autoclaved at 121°C for 15 minutes and cooled to 50°C in a water bath. About 25 ml of the medium was dispensed into each sterile 90 mm Petri dish and allowed to solidify at room temperature. The plates were then stored at 4°C until use. Premade Hach m-ColiBlue24 (Hach Company, Loveland, Co) broth in ampules was used to detect non-fecal E. coli and coliform other than E. coli. The ampules were opened aseptically and the contents carefully poured uniformly into Petri dishes with absorbent pads. Six membrane filter funnels (Thermo Scientific, Waltham, MA) were attached to a six-chamber Gast vacuum setup. Flamed forceps dipped in alcohol were used to remove the 0.45-µm Millipore membrane filters (Thermo Scientific, Waltham, MA) from their packets and insert them into the filter funnels; 100 ml of each collected sample was then poured into its funnel. Agar plates were labeled with sample code numbers and placed adjacent to the funnels. The Gast vacuum was then turned on, allowing the water sample to filter through the membrane. Membrane filters were removed and placed on the agar surface without trapping bubbles.
A separate membrane was used for each of the samples from the same house. The modified mTEC agar plates were incubated at 45.7°C to detect thermotolerant E. coli; the m-ColiBlue24 broth plates for non-fecal E. coli and coliform other than E. coli were incubated at 37°C for 24 hours. Control plates were incubated along with the test plates for each day of sampling to rule out contamination. Bacterial identification and quantification: The isolated organisms were identified following the United States Environmental Protection Agency protocols and Hach 20 Water Analysis Handbook 5th edition guidelines. 14 In order to estimate the bacterial quantity, colonies were counted based on corresponding color codes by using a QUEBEC Darkfield Colony Counter (AMETEK, Inc., 1100 Cassatt Road, Berwyn, PA). Thermotolerant E. coli was identified as bluish-purple colonies on Difco Modified mTEC Agar plates, whereas non-fecal E. coli and total coliform bacteria were identified as blue and red colonies, respectively, on the m-ColiBlue24 broth absorbent pad. An oxidase test was performed following the m-ColiBlue24 broth protocol to estimate the actual number of total coliforms, because a few non-coliform bacteria (such as Pseudomonas spp. and Aeromonas spp.) can grow on m-ColiBlue24 broth and produce red colonies. Pseudomonas spp. and Aeromonas spp. are oxidase-positive, whereas all coliform bacteria are oxidase-negative. Therefore, the oxidase test is helpful to confirm which red colonies are total coliforms. All the findings were then recorded. A brief questionnaire survey was conducted in all 86 households. Residents living for at least one year in the particular house under survey were interviewed regarding their filtration, cleaning and disinfection practices and any health problems associated with consuming roof-harvested rainwater. Results A total of 88.4% (76/86) of water samples were found to be contaminated with coliform bacteria. Thermotolerant E.
coli was detected in 43 (56.6%) of the samples. Non-fecal E. coli and other coliform were detected in 49 (64.5%) and 67 (88.1%) samples, respectively (Table 1). We also analyzed the bacterial composition of the different samples. Discussion Roof-harvested rainwater is the main source of water supply in arid and semi-arid regions of the world and, as this poses a risk of gastroenteritis and other opportunistic infections, 1-5 we assessed roof-harvested rainwater for the presence of fecal indicator bacteria in households of the island country of Anguilla using the membrane filtration technique. We used the membrane filtration technique as it is considered to be one of the best quantitative methods to enumerate bacterial numbers in drinking water. 15 Similarly, we employed Difco Modified mTEC agar and m-ColiBlue24 broth as they are the best media to detect thermotolerant E. coli, non-fecal E. coli and coliform other than E. coli in drinking water. 16 To the best of our knowledge, this is the first study concerning the extent of bacterial contamination in roof-harvested rainwater from the country of Anguilla. The majority of water samples showed contamination with coliform bacteria. Detection of fecal E. coli in over half of the samples (56.6%) is evidence of contamination of roof-harvested rainwater with human waste. Thermotolerant E. coli (fecal E. coli), non-fecal E. coli and coliform other than E. coli are considered indicators of pathogens in drinking water. [6][7][8][9][10] Our results indicated that roof-harvested rainwater did not meet acceptable quality standards and is therefore not recommended for drinking without proper treatment. These results are supported by several previous studies carried out in different countries. 17,18 In this study, 76.7% of the household residents mentioned that they utilize roof-harvested rainwater for both daily sanitary needs and personal consumption.
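The reported detection rates can be reproduced from the raw counts; a minimal sketch (counts taken from the text; the denominators are an inference, since the per-organism percentages match the 76 contaminated samples rather than all 86):

```python
# Prevalence of indicator bacteria in roof-harvested rainwater samples.
# Counts are from the text; denominators are inferred assumptions.

def prevalence(positive, total):
    """Detection rate as a percentage, rounded to one decimal place."""
    return round(100 * positive / total, 1)

total_samples = 86
contaminated = 76  # samples with any coliform growth

print(prevalence(contaminated, total_samples))  # overall: 88.4

# Per-organism rates reproduce the reported figures when computed over
# the 76 contaminated samples:
print(prevalence(43, contaminated))  # thermotolerant E. coli: 56.6
print(prevalence(49, contaminated))  # non-fecal E. coli: 64.5
```

This reading explains why 43 positive samples correspond to 56.6% rather than 50% of the 86 homes sampled.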
However, none of them were aware of the necessity of filtration and chlorine disinfection of drinking water. Our study revealed that only 28.0% of the households remove the debris and clean the cistern regularly. With regard to the comparison between different locations throughout the island, we did not find any definite difference in the microbial contamination status of the cistern water. The eastern and western parts of the island showed quantitatively similar numbers of coliform colonies (in this study, a few samples contained bacteria that were too numerous to count, introducing potential error in the exact CFU quantities). The results indicated potential health risks due to the consumption of this non-potable water; 45.0% (30/66) of respondents drinking cistern water without treatment had a history of gastroenteritis-like illness in the past year. On the other hand, 10.0% (2/20) of respondents who drink only bottled water also had a history of gastroenteritis-like illness in the past year. Our results are consistent with other studies, which suggested a potential health risk from the consumption of roof-harvested rainwater without proper treatment. 19,20 Due to constraints on resources and laboratory facilities, we were only able to assess the indicator bacteria within cistern water at a quantitative level. A more thorough study that included analysis of potential microbial pathogens using metagenomics and other DNA-based testing would likely reveal several other bacterial pathogens in the roof-harvested rainwater. In a review, Lye 2 identified the diseases attributed to the consumption of untreated rainwater, which include bacterial diarrheas due to Salmonella and Campylobacter, bacterial pneumonia due to Legionella, botulism due to Clostridium, tissue helminths and protozoal diarrheas from Giardia and Cryptosporidium. However, in this study, we did not examine the specific diarrheagenic pathogens as reviewed by Lye.
Our results indicated that thermotolerant E. coli, non-fecal E. coli, and other coliform bacteria are widely dispersed in roof-harvested rainwater throughout the island of Anguilla. The water samples did not meet acceptable standards and are therefore not recommended for drinking without proper treatment. Our results also revealed that fecal bacterial contamination poses a potential health risk to consumers. As the lack of cleaning, filtration and disinfection of the harvested water is the cause of fecal bacterial contamination, we recommend that residents carry out proper treatment before consuming harvested rainwater.
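The crude comparison reported in the Discussion (30/66 untreated-cistern drinkers vs. 2/20 bottled-water drinkers with gastroenteritis-like illness) can be summarized as a risk ratio; a minimal sketch (illustrative only, since the paper itself reports raw proportions without a formal statistical comparison):

```python
# Crude risk ratio of gastroenteritis-like illness: untreated cistern
# water drinkers vs. bottled-water-only drinkers (counts from the text).

cases_cistern, n_cistern = 30, 66  # untreated cistern water
cases_bottled, n_bottled = 2, 20   # bottled water only

risk_cistern = cases_cistern / n_cistern  # ~0.45
risk_bottled = cases_bottled / n_bottled  # 0.10

risk_ratio = risk_cistern / risk_bottled
print(round(risk_ratio, 1))  # ~4.5
```

On these counts, cistern-water drinkers reported illness roughly 4.5 times as often, though the small bottled-water group means this ratio carries wide uncertainty.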
Psychological Argumentation in Confucian Ethics as a Methodological Issue in Cross-Cultural Philosophy Graham Priest claims that Asian philosophy is going to constitute one of the most important aspects in 21st-century philosophical research (Priest 2003). Assuming that this statement is true, it leads to a methodological question of whether the dominant comparative and contrastive approaches will be supplanted by a more unifying methodology that works across different philosophical traditions. In this article, I concentrate on the use of empirical evidence from nonphilosophical disciplines, which enjoys popularity among many Western philosophers, and examine the application of this approach to early Chinese philosophy. I specifically focus on Confucian ethics and the study of altruism in experimental psychology. It is legitimate to suppose that Priest's subdivision into the three trends allows their natural integration with one another, thanks to which an amalgamation of cognitive science and Asian philosophy can be achieved. Thus, from a heuristic perspective, the issue of naturalized philosophy can also reach the Asian tradition, or at least such a plausibility should be given a preliminary consideration. Whereas the methodological linkage with science constitutes an important feature of Western philosophy, this cannot be said of the Chinese philosophical tradition. The production of philosophical knowledge in both contexts took place in significantly different and multifaceted circumstances that, according to Richard Nisbett, even include environmental conditioning. 4 Therefore, some deep differences on philosophical and metaphilosophical levels may constitute an obstacle to the plausibility of successful collaboration between Chinese thought and scientifically-oriented approaches. The above concern should not be taken too far, however. Confucian ethics constitutes a most representative argument for it. One of the earliest examples is Francisco J.
Varela's discussion on the cognition-consciousness relation with references to Mencius (Mengzi 孟子), Laozi 老子, as well as vajrayāna Buddhism (Varela 1999: 26-32). Varela smoothly moves across different philosophical traditions and refers them to scientific data, successfully dismissing the incompatibility threat for the benefit of his research objectives. More importantly, referring to scientific evidence can also be found in recent publications by scholars specializing in Chinese philosophy. This can be exemplified by The Philosophical Challenge from China, a book devoted to new approaches in Chinese philosophy, edited by Brian Bruya, a philosopher working in both Chinese philosophy and cognitive science. Among contributors to the book, one can find David Wong, SEOK Bongrae, and Hagop Sarkissian, whose research combines Confucian ethics with cognitive science, neuroscience, or experimental psychology (Bruya 2015). Edward Slingerland deserves to be mentioned as a scholar who not only applies scientific methodology to Chinese philosophy but is an advocate for combining humanities and science on a larger scale. 5 Despite not belonging to mainstream Chinese philosophical research, this new trend shows that at least some issues in Chinese thought can be investigated with the aid of science. In this article, I would like to discuss the question of how referring to the results of empirical sciences influences philosophical methodology. This question will be addressed on a specific level. I shall examine Confucian-Mencian ethics in terms of altruistic motivation by supporting it with experimental psychology argumentation, as conducted and presented by C. Daniel Batson (Batson 1991). Confucian Ethics: Departing from Behavior Before I proceed to discuss the issue of altruism in the Mencius, it is necessary to highlight some features of Confucian ethics that justify its compatibility with experimental psychology argumentation. 
From a typological perspective, Confucian ethics classified as virtue ethics reveals the potential for discussion from a psychological angle. Rosalind Hursthouse provides a description of virtue ethics, which in contrast with deontology and consequentialism is mostly concerned with "the virtues or moral character" (Hursthouse 2013). Therefore, it is primarily oriented at developing one's character traits, which are regarded as a generic feature of a moral person or an evaluating measure in this respect. Confucian moral philosophy identifies the moral character with ren 仁. The term has many translations due to its broad semantic field supervening on different contexts. 6 Among others, it can be perceived as a special form of sensitivity, tantamount to being good. More properly, ren should be conceived of as "one's being ren," which corresponds to one's actual manifestation of sensitivity rather than its encapsulation within some ethical concept. Another semantic layer of ren is understood as humanity. 7 In this sense, one's ren is tantamount to one's advancement in becoming a human being. Ren as a psychological trait is necessarily forged through human practice, on multiple levels of social interaction, stretching from family to state. It is realized by means of rituals/propriety (li 禮), which prescribe a specific behavioral repertoire for a given individual, depending on her specific place in the network of social relations. Concurrently, ren emerges in practice and as practice, instead of being represented or applied. In this way, practice constitutes the self-cultivation of an individual's ren as long as the proper form of behavior is abided by. As can be seen, ren does not constitute a psychological faculty that is independent from human practice but emerges in enactment. In other words, the moral agent's character traits extend the subject beyond the mind into action. Ren as the objective in Confucian moral projects is also understood in relation to wisdom. 
Therefore, an exemplary person (junzi 君子) or a sage (sheng 聖) who is advanced in her ren-cultivation concurrently treads the path of cognition. The fact that achieving the moral ideal of becoming ren is identified with the pursuit of wisdom shows that Confucianism is concerned with practical knowledge. This can be testified to by, for example, Analects (Lunyu 論語) 1.7, where proper practice as knowledge is valued even higher than the intellectual advancement of a person (see Waley 1997: 2). Another example can be taken from the famous Mencius 2A6. One of the Confucian virtues constituting good nature-knowledge (zhi 智)-is conspicuously viewed in terms of practical knowledge. Contrary to a rationalistic approach to morality, here ethical wisdom consists in the practical discrimination between right and wrong-shifei 是非 (Mencius 2A6). This also confirms the fact that in early Confucianism, knowledge is enactive-emergent rather than attainable in a purely speculative character. The identity of the moral and epistemic order as well as the enacted character of ren is an important feature of Confucian ethical discourse. Since practice consists in 6 D. C. Lau and James Legge in their translations of the Mencius, for instance, translate ren as "benevolence" (Lau 1984;Legge 1985). Translations of the Analects by Arthur Waley and Edward Slingerland use "goodness" (see Van Norden 2008: 202). As regards Bryan W. Van Norden's translation of the Mencius, he uses "benevolence" but in the appended glossary he makes a remark that ren in 7B16 has a wider meaning, with a shade of "humaneness" (Van Norden 2008: 202). A completely different understanding is proposed by Roger T. Ames and Henry Rosemont in their translation of the Analects. In the Introduction, they propose the following translations of ren: "authoritative conduct," "to act authoritatively," "authoritative person," and compare them to other existing renderings of ren into English (Ames and Rosemont 1998: 48-51). 
7 It is also suggested by the popular interpretation of the character ren 仁 as the compound of ren 人 and er 二. moulding the human mind, both in a moral and cognitive respect, Confucian ethics discourse allows one's character traits to be described based on behavior. This can be illustrated by Analects 1.11, where Confucius provides evidence for stating one's filial piety-xiao 孝: "If for the whole three years of mourning he manages to carry on the household exactly as in his father's day, then he is a good son indeed" (see Waley 1997: 6-7). The judgment concerning one's virtue is based on whether a son's behavior accords with a form of conduct appropriate to the situation of mourning. A more direct reference to the behavior-personality trait is made in Analects 12.1: "He who can himself submit to ritual is good" (see Waley 1997: 144-145). Here the passing moral evaluation is also based on the underpinning enactment and assumed entraining by li. The above assumption pervades the narration of both the Analects and the Mencius, where the people under discussion are presented in terms of, for instance, obeying the rules of how to properly perform ceremonies instead of concentrating on their "internal psychological faculties." This approach confirms the Confucian notion of the individual, who is primarily constituted of the moral roles appropriate to her position within a social network of relations with others. One can state that this conception is a reversal of Sartre's existence preceding essence (Sartre 2007: 20), for which relations are secondary. Confucians treat essences as primary. Therefore, it is quite understandable that Confucian discourse not only focuses on behavior but also endows it with the considerable potential of revealing character traits. 
Concurrently, it has to be emphasized that the interpretation of Confucians as behavioristic reductionists entertaining the idea that the moral project culminates only in developing certain patterns of behavior has to be excluded. In Analects 3.3, Confucius asks rhetorically: "A man who is not Good, what can he have to do with ritual?" (see Waley 1997: 24-25). This presupposes that proper behavior is by necessity coupled with adequate moral qualities which affectively engage the agent, and that indifferent performance is insufficient for moral qualification. Thus, Confucianism does not assume a model in which a moral agent fulfills herself in merely internalizing proper forms of social interaction. Behavior is crucial in constituting the emotional dimension of ren. The above discourse feature allows a sound premise to emerge that Confucian ethics can be viewed from the perspective of experimental psychology. Despite being an empirical science, psychology overlaps considerably with Confucianism in terms of methodology; they both consider human reactions and interactions in specific situations as indicatory of, inter alia, personality traits. Therefore, experiment results can be integrated as argumentation into Confucian moral philosophy. The Altruism Issue: The Case of Mencius 2A6 In my discussion, I assume Batson's definition of altruism as "a motivational state with the ultimate goal of increasing another's welfare" (Batson 1991: 6). The differentiating feature is ascribed to what motivation lay behind an action. In order to be labelled altruistic, another's benefit must be the ultimate, as opposed to instrumental, goal. In my presentation of altruism in Confucian ethics, I refer to Mencius 2A6, which appears to be most appropriate for the discussion for several reasons. First, it departs from the conception of the four sprouts (siduan 四端), which is an explanation of good human nature, as postulated by Confucian-Mencian ethics. 
Second, it thoroughly presents a situation which evidently argues for an altruistic approach in terms of the agent's motivation. Finally, Mencius shows and enumerates the possible alternative motivations that are rejected in favour of the altruistic one. The situation described by Mencius serves as an explanation of his claim that all people possess a sense of compassion (buren ren zhi xin 不忍人之心; Mencius 2A6). In this way, he somewhat presets the interpretation, even considering the fact that competing options are mentioned. Mencius says that if the moral agent suddenly (zha 乍) notices a child who is about to fall into the well, she would be "moved to compassion" (D. C. Lau's translation of ceyin 惻隱; see Lau 1984). The fact that the affective state emerges naturally is presupposed and accentuated by the reaction being sudden. Mencius attributes this to good human nature and eliminates other explanations by saying that the agent is not motivated by: (a) "get [ting] in the good graces' of the child's parents," (b) "win[ning] the praise of his fellow villagers or friends," (c) "dislik[ing] the cry of the child." (Mencius 2A6;see Lau 1984: 82) These possible alternative motivations seem to be excluded on the grounds of being impossible in comparison with the "evident" altruistic motivation manifested by compassion. The firm foundation for Mencius' altruistic claim is provided by referring to the innate four sprouts (siduan), whose practical development guarantees being good. Compassion here fulfills an important role since it is the sprout of ren (ceyin zhi xin, ren zhi duan ye 惻隱之心, 仁之端也; Mencius 2A6). The affective foundation of ren seems to sufficiently confirm Mencius in the evident altruistic motivation of the agent. However convincing the example and explanation may seem, the interpretation varies among philosophers. 
Bernard Mandeville (1670-1733), for instance, describes a very similar situation but represents a completely different standpoint than Mencius, namely that human actions are always motivated by self-interest. He provides a situational example intertwined with ethical evaluation to illustrate apparent altruist actions, and views it in terms of agent's motivation: There is no Merit in saving an innocent Babe ready to drop into the Fire: The Action is neither good [n]or bad, and what Benefit soever the infant received, we only obliged our selves; for to have seen it fall, and not strove to hinder it, would have caused a Pain, which self-preservation compell'd us to prevent. (Mandeville 2010: 82-83) The juxtaposition of Mandeville's and Mencius' completely different intuitions concerning human nature, apart from showcasing the commonplace coexistence of incongruent philosophical conceptions, also indicates an important methodological issue. In constructing their argumentation, philosophers often refer to everyday experience. In the case of ethical issues, the material most often consists of human behavior, which is subjected to interpretation that supports a philosophical statement. The material is oftentimes collected, selected, and interpreted in an unsystematic way, which casts doubt on its contribution to a strong argument. Using material from experience can be conducted in a more systematic way, which ensures higher accuracy manifested by a more comprehensive and thorough processing of the raw data. A contemporary epistemologist, for instance, is more likely to examine perception by referring to physiology rather than her personal experience. In the same way, ethical statements can gain more validity when the interpreted data come from systematically conducted experiments that follow standards connected with, for example, bias elimination, impairment caused by external factors or interference with a different aspect of the study.
For this particular reason, resorting to experimental psychology as a systematic empirical study of human behavior allows an examination of the extent to which Mencius' conception of altruism is sound, as well as whether debunking the competing egoistic motivations finds any experimental confirmation. Mencius 2A6 Arguments For and Against Altruism versus Experiments The linkage between experiments discussed below and Mencius' conception of altruism is established on account of the fact that both Mencian compassion and Batson's empathy, being the same or a very similar emotion, underpin altruistic motivation. Concurrently, it has to be mentioned that the discussed fragment from the Mencius uses the vocabulary item ceyin 惻隱, which is in semantic proximity to compassion and sympathy. Batson's empathy, despite being semantically discrete, can be viewed in terms of compassion. In order to support the adequacy of referring Batson's empathy to Mencian compassion, I refer briefly to a thorough analysis of compassion and empathy in Stephen Darwall's "Empathy, Sympathy, Care" (Darwall 1998). Darwall, among others, focuses on Mencian ceyin and empathy from the experiments that I discuss below. He understands D. C. Lau's translation of "compassion" 8 from Mencius 2A6 as "sympathetic concern or sympathy." It "responds to some threat or obstacle to an individual's good or well-being," has "that individual himself as object," and "involves concern for him, and thus for his well-being, for his sake." It is thus specified concern that allows Darwall to draw the distinction between sympathy and empathy. He generally understands empathy as primarily consisting in feeling what one imagines the individual taken as an object of one's own empathy feels (Darwall 1998: 261). It does not have to include concern for another's well-being, which is an indispensable composite of sympathy. Darwall further distinguishes between projective and proto-sympathetic empathy.
The former consists in projecting into another's perspective. As regards the latter, it enriches the projective empathy perspective with concurrent focus on another and her feelings (which she may not necessarily be conscious of) (Darwall 1998: 270-271). What should be noticed is that proto-sympathetic empathy shares with sympathy the concern for another's well-being. This fact is important in light of another remark of Darwall that proto-sympathetic empathy is close to Batson's motivating empathy. Although Batson uses the word "empathy," he describes motivation as being oriented toward increasing the well-being of another (Darwall 1998: 273). Thus, we can assume that the subjects in the experiments are motivated by either sympathy or at least proto-sympathetic empathy. Darwall himself remarks that Batson actually manipulates by means of the two forms of empathy and there is a "psychological connection between empathy and sympathy" that prompts sympathy-led motivation (Darwall 1998: 273). Accordingly, the experiments can be regarded as ones that test the relation between sympathetic concern and altruism. This creates a reference point between Batson's empathy and Mencian ceyin. In my discussion of argumentation in 2A6, I assume that the situation presumes the action/motivation to help the child. I propose a clarification of Mencius' argument for altruistic motivation in the following way: The motivation for helping the child has the child's need as the ultimate end. It is guaranteed by compassion, the basis of ren (Mencius 2A6). The agent's action of helping would not be ren if it were self-interested. As regards other possible motivations rejected by Mencius, they can be described in the following way: (1) "Get[ting] in the good graces of the child's parents" (Mencius 2A6; see Lau 1984: 82). The motivation for helping the child is social reward, which is represented here by getting in the parents' good graces.
Mencius claims that egoistic motivation cannot underlie the described feeling. (2) "Win[ning] the praise of his fellow villagers or friends" (Mencius 2A6;see Lau 1984: 82). The moral agent is mostly concerned with satisfaction "caused" by positive appraisal in terms of socially established morality, and rescuing the child is a means to an end. As in (1), Mencius claims the feeling does not result from such egoistic motivation. (3) "Dislik[ing] the cry of the child" (Mencius 2A6;see Lau 1984: 82). The motivation for helping is to put an end to the child's crying, which the agent experiences as repugnant. The ultimate aim is alleviating one's own discomfort, and as in (2), benefitting the child is instrumental. Similarly to (1) and (2), Mencius excludes this egoistic motivation. I will analyze the negative cases (1)-(3) with reference to relevant experiments. Since the experiments I refer to compare selected egoistic and altruistic motivations, the hypothesis propounded by Mencius is discussed throughout the two cases. Before I proceed to the experiments, for the sake of clarity I shall briefly outline the general idea of Batson's experimental study and the way in which it verifies his empathy altruism hypothesis. Batson's study consists in proving his "empathy-altruism" (EA) hypothesis, according to which empathy evokes altruistic motivation to reduce the other's need (Batson 1991: 90). As can be seen, the hypothesis is conceived in terms of motivation. This is also the case in the three competing egoistic hypotheses discussed below. Batson refers to experiments which show evidence that subjects in a state of empathy are more likely to help others in need (e.g., Coke et al. 1978;Dovidio, Allen, and Schroeder 1990;see Batson 1991: 95). Following this, Batson claims that in a state of high empathy, the motivation is exclusively altruistic (Batson 1991: 97). The aim of his study is to prove the altruistic basis of the motivation as opposed to the egoistic motivation. 
Batson claims that in experiments comparing the EA hypothesis with one of the competing egoistic hypotheses, the two different motivations would produce two different arrays of responses to the same staged situation. Thanks to this, the helping rate for each motivation can be measured and compared. The comparison result shows which of the tested hypotheses is more probable. Apart from the last one, all experiments involve the manipulation of the situation parameters that are expected to influence the helping rate. Empathy manipulation examines the relation between the magnitude of empathy and the helping rate. Apart from manipulating the empathy, depending on the particular egoistic motivation being tested against the EA hypothesis, the manipulation of a specific parameter is introduced. The parameters for the three egoistic hypotheses discussed in this article are: escape ease, negative social evaluation potential, and feedback availability. Manipulating these conditions in staged situations is designed to show how behavior accords with or deviates from the tested hypotheses, in the high as well as the low empathy condition. I begin my analysis with the third motivation, according to which the agent feels uncomfortable and, in order to eliminate the source of this state, instrumentally helps the child, whereas the ultimate end is the elimination of the unpleasant stimulus. The hypothesis based on this type of motivation is referred to by Batson as the "aversive-arousal reduction" (AAR) hypothesis. Accordingly, "becoming empathically aroused by witnessing someone in need is aversive and evokes motivation to reduce this aversive arousal" (Batson 1991: 109). One can interpret Mencius' dismissal of this motivation as meaning that empathy can only co-occur with altruistic motivation and there cannot be any correlation between aversion and empathy. In his discussion of AAR, Batson mostly concentrates on experiments concerning the empathy-affective arousal connection.
His two experiments that I will discuss here test the helping rate in both low and high empathy states. In order to differentiate between the AAR egoistic motivation and the EA hypothesis, escape ease manipulation is introduced. Batson regards this factor as an adequate way of testing the two hypotheses in parallel on the grounds that by choosing escape one might reach the egoistic goal (reducing one's arousal) while concurrently not removing the stimulus (another's suffering), that is, not reaching the altruistic goal (Batson 1991: 110). The contrastive AAR and EA predictions concerning the helping rate are as follows:
AAR hypothesis:
Easy escape and low empathy: low helping rate.
Easy escape and high empathy: low helping rate.
Difficult escape and low empathy: high helping rate.
Difficult escape and high empathy: high/very high helping rate. (Batson 1991: 111)
EA hypothesis:
Easy escape and low empathy: low helping rate.
Easy escape and high empathy: high helping rate.
Difficult escape and low empathy: high helping rate.
Difficult escape and high empathy: high helping rate. (Batson 1991: 111)
The clash occurs in the high empathy condition with the easy escape. Egoistic persons, in order to terminate their aversion, are likely to resort to escape, whereas altruistically motivated ones should remain insensitive to this factor. Faced with a difficult escape, the high helping rate should be similar for AAR and EA, as egoistically motivated persons incline toward the high-cost termination of their aversion by helping the person in need (Batson 1991: 110-111). The first experiment is named "A Shocking Situation: Observing Elaine Perform under Aversive Condition," and was conducted by Batson et al. in 1981. The participants of the experiment are female undergraduates observing Elaine (a confederate) undergoing ten unpleasant electric shocks, which were supposed to evoke their aversion (Batson 1991: 113).
In order to manipulate the ease of escape, the participants were informed that if after two shocks they did not help Elaine, they would not observe the remaining eight of the ten shocks (easy escape condition) or they would have to see all of them (difficult escape condition). As regards empathy manipulation, the participants were presented with a 14-value questionnaire allegedly filled out by Elaine. To elicit a state of high empathy, Elaine's values and interests were very similar to those of the particular participant, whereas they were extremely dissimilar when a low empathy state was elicited (Batson 1991: 115). The result of the experiment was as follows:
Easy escape and low empathy: 0.18
Easy escape and high empathy: 0.91
Difficult escape and low empathy: 0.64
Difficult escape and high empathy: 0.82. (Batson 1991: 116)
The data were interpreted as showing that easy escape did not affect the helping rate in the high empathy condition, which disconfirms the AAR and confirms the EA hypothesis. However, in the low empathy condition, the impact of the difficulty of escape suggests egoistic motivation (Batson 1991: 116-117). The second experiment is named "Bad News: Consequences of Carol's Car Crash" and was conducted by Toi and Batson in 1982. The participants listen to a recording in which Carol Marcy says that she broke both her legs, as a result of which she missed many classes and was threatened with having to repeat the first year of her education programme. The participants were given the chance to aid her by sparing time to help her catch up with her studies (Batson 1991: 119-120). Empathy manipulation consisted in focussing on the information of the recording (low empathy condition) or imagining how Carol feels (high empathy condition). Escape manipulation consisted in the information that if a participant refused to help, she would not hear Carol again (easy escape condition) or would have to meet her as a classmate (difficult escape condition) (Batson 1991: 120).
The experiment result was as follows:
Easy escape and low empathy: 0.33
Easy escape and high empathy: 0.71
Difficult escape and low empathy: 0.76
Difficult escape and high empathy: 0.81. (Batson 1991: 121)
The helping result in high empathy disconfirms the AAR hypothesis predictions, because the easy escape condition does not significantly affect the helping rate. On the contrary, the EA hypothesis is confirmed in the high empathy condition (Batson 1991: 121). Thus, the experiment result confirms Mencius' claim that rescuing the child is not motivated by the desire to avoid an unpleasant experience. In the above and following cases, when referring to the Mencian discussion of motivation, we should focus on the high empathy state. As regards the example provided by Mencius, it would be incorrect to refer it to the low empathy condition, especially if we consider the manipulation techniques applied in both experiments. In other words, only high empathy cases can represent ren. Considering this, the experiments conspicuously indicate that the AAR hypothesis has been disproven. In (1), we have to consider the case in which helping the child is egoistically motivated by public opinion. In the negative version, the agent feels the pressure of morality endorsed by society and helping is motivated by avoiding "negative social evaluation" (NSE) (Batson 1991: 128). The experiment testing this motivation resorts to manipulating the level of negative evaluation in the parallel testing of the EA and NSE hypotheses, on the grounds that egoistically motivated agents help only because they want to avoid the anticipated negative social evaluation, which is the ultimate goal (Batson 1991: 128). When negative social evaluation is not given, there is no egoistic goal to be reached and helping is the ultimate and altruistic goal. This hypothesis is contrastively examined by the experiment "Loneliness of the Long Distance Student [study 2]" by Fultz et al. in 1986.
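Before turning to the NSE experiment, the escape-manipulation logic of the two experiments above can be summarized numerically (a sketch; the rates are those reported above, and using a simple rate difference as the "escape effect" is my simplification of Batson's statistical reasoning):

```python
# Helping rates from the Carol experiment (Toi & Batson 1982), as above.
rates = {
    ("easy", "low"): 0.33, ("easy", "high"): 0.71,
    ("difficult", "low"): 0.76, ("difficult", "high"): 0.81,
}

def escape_effect(empathy):
    # How much a difficult escape raises helping at this empathy level.
    return round(rates[("difficult", empathy)] - rates[("easy", empathy)], 2)

# AAR (egoism) predicts a large effect at both empathy levels;
# EA predicts the effect shrinks once empathy is high.
print(escape_effect("low"))   # 0.43
print(escape_effect("high"))  # 0.1
```

The same comparison applied to the Elaine experiment (0.64 - 0.18 under low empathy versus 0.82 - 0.91 under high) shows the effect even reversing once empathy is high.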
The prediction for social evaluation is that helping will increase in socially evaluative circumstances. As regards the NSE hypothesis prediction, if the potential for negative evaluation is high, there will be more helping in the high empathy condition than in the low one. The EA hypothesis predicts that helping will be higher in high empathy contexts and lower in low empathy contexts on both levels of social evaluation (Batson 1991: 132). The procedure involves participants (students) reading two confidential letters. The first one is from Janet, a first-year student who feels lonely and is looking for a reliable friend. The second is from a professor who unexpectedly offers the chance to befriend Janet by meeting her and staying in touch with her. On reading the first letter, the participants complete an emotional response questionnaire for the experimenter (Batson 1991: 132). The experiment involves empathy manipulation. In the low empathy condition, the participants were asked to read Janet's letter and stay objective while focussing on her communication techniques. In the high empathy condition, the participants were asked to assume Janet's perspective while reading. As for the low potential of negative social evaluation, the participant is the only person who reads Janet's notes; the questionnaires are anonymous and Janet will be contacted only if help is offered. In the case of high potential of negative social evaluation, both the experimenter and participant read Janet's notes; the examiner personally collects and compares the participants' questionnaires, and Janet will know about the participant's negative decision (Batson 1991: 132-133).
The result of the experiment is as follows:
Low potential for negative social evaluation and low empathy: 0.67
Low potential for negative social evaluation and high empathy: 1.71
High potential for negative social evaluation and low empathy: 1.29
High potential for negative social evaluation and high empathy: 2.44. (Batson 1991: 134)
The result disconfirms the influence of NSE (helping should be high under high evaluation). The results pattern accords with the EA hypothesis, where helping increases with empathy (Batson 1991: 134). Therefore, as Mencius expected, "get[ting] in the good graces of the child's parents" (Mencius 2A6; see Lau 1984: 82) does not make the moral agent rescue the child. Help is proportional to compassion, irrespective of the possible respect earned among others. The second type of egoistic motivation has been investigated by Smith et al., who claimed that empathically stimulated individuals are motivated by "the good feeling of sharing vicariously in the needy person's joy at improvement," and call it "empathic joy" (EJ) (Batson 1991: 153). In order to test the EJ hypothesis, in the experiment "Effect of Feedback on Helping Katie Banks" by Batson et al., the participants listened to a broadcast about Katie Banks, whose parents died leaving her with a younger brother. Helping consisted in a fund drive. Apart from the manipulation consisting in perspective taking, feedback manipulation was introduced. It was considered a good way of testing the EA and EJ hypotheses in parallel on the grounds that egoistically motivated agents would only help once provided feedback, through which they felt empathic joy, the ultimate and egoistic goal, as opposed to helping another, which is instrumental (Batson 1991: 154). When no feedback is provided, there is no egoistic goal to be reached and helping another is the ultimate and altruistic goal.
The feedback manipulation consisted in receiving either no feedback or follow-up information from Katie on her improvement. The difference in the predictions of the EJ and EA hypotheses consisted in that the EJ hypothesis predicted equally low help in the no-feedback condition regardless of the empathy level. The EA hypothesis predicted higher help in high empathy no-feedback situations (Batson 1991: 158-159). The result was as follows:
No feedback and low empathy: 0.33
No feedback and high empathy: 0.83
Feedback and low empathy: 0.67
Feedback and high empathy: 0.58. (Batson 1991: 160)
For high empathy, the EJ hypothesis was disconfirmed, and EA confirmed 10 (Batson 1991: 160). This confirms Mencius' dismissal of motivation directed by the desire for internal praise. Another experiment by Batson et al., "Likelihood of Improvement and Desire for Further Exposure to a Person in Need," modified the design by introducing the likelihood of improvement. The story is similar to the Katie Banks story above. However, the chance to help is supplanted by the chance to be given updated information about Katie. The levels of improvement likelihood were 20/50/80%. The EJ prediction was that the egoistic motivation underlying choosing the second interview would grow with the likelihood. The EA prediction was that there would be more interest in the high empathy condition. As for the likelihood, most interest should be at 50% because it represents the highest uncertainty concerning the improvement (Batson 1991: 161-162). The result was as follows:
Improvement likelihood 20% and low empathy: 0.22
Improvement likelihood 20% and high empathy: 0.50
Improvement likelihood 50% and low empathy: 0.33
Improvement likelihood 50% and high empathy: 0.67
Improvement likelihood 80% and low empathy: 0.44
Improvement likelihood 80% and high empathy: 0.44. (Batson 1991: 162)
The experiment confirms the EA hypothesis for high empathy (Batson 1991: 162-163).
The Mencian claim that the moral agent does not calculate the possibility of emotional self-satisfaction is confirmed. Thanks to compassion, she is concerned with the child's well-being, which manifests her altruistic ren. From the perspective of the Mencian conception of motivation, the value of the experiments lies in showing that Mencius was right in that the three egoistic motivations are not underpinned by compassion. The argumentative strength provided by the experiments from Batson's study can be primarily attributed to how they were engineered. The situational design as well as the specific parametric manipulation of the staged situations minimized any possible interference from motivations other than the one being examined. This precise and systematic approach, although not absolutely perfect, carries more weight than intuitions formed from a subjective perspective. Apart from benefitting the Mencian conception alone, experimental psychology argumentation also enables opposing theories to be confronted through empirical criteria. It seems that experimental methodology can contribute to developing a new language for comparing at least some conceptions from different philosophical traditions.

Methodological Conclusions

The above employment of psychological experiments as argumentation in Confucian ethics invites some methodological conclusions. Contemporary philosophical research in the West, thanks to its perennial interspersion with science, has smoothly integrated with other disciplines, which is to be found not only in cognitive science, but also in the discussion of originally philosophical issues belonging to, for example, epistemology (e.g., David Chalmers), aesthetics (e.g., Gabrielle Starr) or ethics (e.g., Jesse Prinz). The meeting of Chinese philosophy with scientific evidence is only recent and still beyond mainstream research.
However, its integration with cognitive science or psychology, for instance, testifies that it should not be excluded from naturalized philosophy research. (A good recent example of such integration is the first chapter, devoted to moral psychology, of Bruya 2015.) Undoubtedly, experimental psychology evidence contributes to moral philosophy. When philosophers address issues connected with human behavior, thoroughly conducted experiments guarantee a higher accuracy of the data to work on than an individual philosopher's perspective, from which human reactions are predicted. Representative groups, adequate manipulation, and other factors, including cultural ones, do not guarantee perfect accuracy, but they are far more reliable than many unsystematic examinations underlying philosophical statements. Batson's study of altruism is a good example of how intuitions connected with morality can be empirically investigated with a high degree of precision. As I mentioned at the beginning of my article, psychology evolved from philosophical investigations and the questions it addresses still often overlap with what interests contemporary philosophers. This is also the case with experimental psychology, whose combination with philosophical enquiries has given rise to experimental philosophy. As Joshua Knobe et al. concluded in their article "Experimental Philosophy," the collaboration between the two disciplines can be viewed as "a return to a more traditional conception of how psychology and philosophy should relate and develop" (Knobe et al. 2012: 96). Comparative philosophers are not excluded from this methodological merger. Their discipline is situated even closer to experimental psychology evidence than that of philosophers working in one tradition, which is often raised to the universal level.
The particular sensitivity of comparative philosophy to a cultural or even civilizational context requires more concern with experimental evidence, which emphasizes the importance of cultural psychology in particular. Cultural psychology is particularly aware that the reliability of experiment results depends on considering the participants' cultural background, which has been proven in numerous experiments by, for instance, Richard Nisbett and MASUDA Takahiko. 12 This is reflected in experiment methodology, which is sensitive to the cultural (or even subcultural) variability among the subjects. 13 Whereas evidence provided by cultural psychology benefits comparative philosophy in an almost evident way, one may pose the question of whether cultural psychology can benefit from the contribution of comparative philosophy. The example of experimental philosophy research seems to answer this question positively. Cultural psychology benefits here mostly in a theoretical-methodological respect. Experimental methods do not always suffice to critically reflect upon or transcend certain established narrations of reality. This faculty belongs to philosophical reflection, which is able to build alternative theoretical models, some of which can be tested by the experimental sciences. Justin Garson's discussion of Batson's study concerning egoistic motivation showcases how philosophy can question experimentally confirmed hypotheses. He points out that the experiments discussed by Batson disprove one hypothesis at a time, and regardless of how many other hypotheses can be tested, they only eliminate single egoistic motivations (Garson 2015: 15-16). In his opinion, one can entertain a different conception of egoistic motivation, namely that people are simultaneously motivated by multiple desires. One of them can be a disjunctive desire, according to which human motivation, instead of being reduced to one single desire, is driven by an alternative among them (Garson 2015: 16).
Since the experiments do not consider this conception of human motivation, philosophical insight in this respect can significantly contribute to their improvement. In other words, philosophical models can influence experimental models. Extending the analogy to cultural psychology and philosophy is by all means legitimate. Whereas the comparative value of experiments is undeniable on account of underscoring possible cultural differences in behavior, the theoretical construction raised on them may prove to be fallible. A lack of philosophical engagement providing alternative conceptions may lead to conclusions drawn too hastily regarding what guides human behavior in specific cultures. 14 The lack of traditional ties between Chinese philosophy and science has not precluded the contemporary methodological development of the latter. Chinese philosophy is starting to turn toward scientific evidence. This fact not only testifies that Chinese philosophy can be included in naturalized philosophy research. It seems that it can also pose a more general methodological question. The fact that comparative philosophy deals with thought systems of different cultural provenance necessitates a different methodological approach than in the case of focusing on a single philosophical tradition. Comparative methodology is deeply concerned with revealing possible conceptual and heuristic (in)commensurabilities in order to produce a unifying discourse supervening on them. In other words, we have a unifying methodology built on culturally discrete materials. Considering this, it seems that integrating Chinese philosophy with naturalized philosophy research cannot be accommodated by the comparative approach, for two reasons. First, the methodological objective is not comparative. It does not involve showing how given philosophical systems vary with regard to a particular question but is more concerned with integrating them in approaching a philosophical issue.
Regarding moral nature, for instance, comparative philosophy would be more concerned with how it is viewed from different philosophical standpoints, whereas in the "integrated" approach more consideration would be given to how moral nature can be understood from a perspective combining the two (or more) traditions. Second, although comparative philosophy creates a space where different philosophical traditions can be encapsulated in one philosophical language, it treats the material it works on as culturally discrete. Integrating Chinese philosophy into naturalized philosophy discourse inclines more toward seeing the different traditions as one, varied yet indiscrete. This can be compared to cultural globalization processes, where local cultures are no longer perceived separately but rather as merging with the global cultural landscape. Considering the above reasons, one can expect that the ongoing integration of Chinese philosophy with experimental psychology as well as other empirical sciences is likely to make a methodological contribution to philosophical research, and not only in the comparative respect.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
A Semantic Without Syntax

Here, by introducing a version of the "unexpected hanging paradox", we try to open a new way and a new explanation for paradoxes similar to the liar paradox. We will also show that there is a semantic situation which no syntactical logical system could support. In the end, we propose a claim in the form of a question. Based on this claim, having an axiomatic system for computability theory is not possible. In fact, we will show that the method applied here could yield, as a generalized result, that some theories, such as physics, are not axiomatizable.

Introduction: In [4], by applying a version of the "unexpected hanging paradox", we showed that there is a proof which does not show the truth. Here, we try to formalize this problem. First, we explain the scenario of the paradox again.

Scenario: In [4] we presented the following version of the "unexpected hanging paradox": Once upon a time a logician was arrested, and he was judged in a court. As everyone knew, our logician's great interest was arguing, so he was sentenced by the judges as follows: "You will be executed next week if and only if you do not conclude logically, in written form, at the time of the execution or on the days before it, that you will be executed." The logician started arguing and wrote down the result of his argument as follows (later on he sent this message to his lawyer):

I prove that I will not be executed next week. Suppose that it is so:
1) I will not be executed on Friday: if I were, then, since I will not have been executed before Friday, I could conclude that I will be executed on Friday.
2) Analogously, I will not be executed on Thursday: if I were, then, since by (1) I will not be executed on Friday, and I will not have been executed by Wednesday, I could conclude that I will be executed on Thursday. But, as the judge says, once I conclude that (logically and in written form), I will not be executed at that time.
So I will not be executed.
3) Analogously, I will not be executed on Wednesday: if so, by (1) and (2) I will not be executed on Friday or on Thursday ….
7) I will not be executed on Saturday: by (1), (2), …, (6), I could conclude that I will be executed on Saturday, and by what the judge says I will not be executed.
I stop concluding here and will think about this subject more.
He sent this message to his lawyer, and each day he sent a short message confirming his conclusion; more exactly, he said: "I will not be executed tomorrow." On Wednesday the lawyer had no more messages. As he learned, the logician had been executed on Tuesday, with wide, surprised eyes at the time of execution. The lawyer claimed injustice, and he exhibited the poor logician's message to the journals and the court. The court said: The logician proved that he would not be executed, and it is a true proof. We executed him on Tuesday, and, as he himself stated, he did not conclude in written form that he would be executed on Tuesday; on the contrary, he claimed that "he will not be executed on that day". In fact, and in other words, he proved in his message that, by accepting what the judge said as a true claim, he would not be executed that week. More formally:
p: What the judge said is a true claim.
q: He would not be executed that week.
His proof (p ⊢ q) is true as a proof, but it does not show the truth. In this paper we defend the above claim and try to show how this idea could be developed. We should note that we do not claim the above result to be a solution for the unexpected hanging paradox in all its versions; it is simply considered the only way to explain this version of the "unexpected hanging paradox". Later on, we will regard the above result as a possible explanation for the other versions of this paradox and for some other paradoxes. To formalize the above proof, it is sufficient to consider A (the prisoner) as a Turing machine which could utter the code of a phrase, like [ ].
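The court's literal reading of the sentence can be rendered as a tiny consistency check (a minimal sketch of my own construction, not the paper's formalism; the dictionary of daily written conclusions is invented to match the story):

```python
# Minimal sketch (not the paper's formalism): the sentence permits an
# execution on a given day iff no written conclusion "I will be executed"
# has been produced by that day. The logician only ever wrote the opposite.
days_order = ["Saturday", "Sunday", "Monday", "Tuesday",
              "Wednesday", "Thursday", "Friday"]

# What the logician concluded in writing each day, per the story.
written_conclusions = {
    "Saturday": "not executed", "Sunday": "not executed",
    "Monday": "not executed", "Tuesday": "not executed",
}

def execution_permitted(day):
    # Scan every day up to and including `day` for a written
    # conclusion "executed"; none exists, so execution is allowed.
    prior = days_order[:days_order.index(day) + 1]
    return all(written_conclusions.get(d) != "executed" for d in prior)

print(execution_permitted("Tuesday"))  # True: the court acted consistently
```

On this literal reading, the Tuesday execution satisfies the sentence even though the logician's proof concludes he will not be executed, which is exactly the tension the paper builds on.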
We have given a formalization for the above paradox, and we note that any formalization should include this one, and all suffer from being contradictory.

An Explanation and Conclusion: In the above system (the formalization of the paradox) there is a contradiction: in brief, the judges claim that the prisoner will be executed next week, but the prisoner proves that he will not. So this system is a contradictory system, but at the same time we have a semantics for it. So we have a contradictory system which has a semantics. In other words, we have semantics which no consistent syntactical system supports. As a result, our proof above does not show any truth. So there are some intrinsic deficiencies in modeling and formalizing proofs. In other words, formal systems are not able to support such semantic situations. Clearly, this opens the way to a new explanation for some paradoxes similar to the liar paradox, as follows: in such paradoxes the proofs do not show the truth, since there is no consistent syntactical system to support the related semantics. This would be considered the central result and idea of this paper. As a last word, we propose the following question in the subject of computability theory. Question: Is the following a true claim? In the above formalism, we could consider A as a Turing machine. By a slight modification of the formalism we are able to replace A by [A] (the code of A), and in a similar way by … So the above problem has a formalization within the scope of computability theory. If one of the real goals of computability theory is to explain real situations, it should explain the above semantic situation. But by the above explanation we know that no axiomatization is able to afford this, since any axiomatization falls into contradiction. So computability theory is not axiomatizable. Going forward, three points are notable here: 1.
As the formalization of the problem shows, we are able to rearticulate the paradox such that the concept of time is eliminated. 2. In addition to computability theory, the above claim is true for some other theories, such as physics and mathematics as a whole. 3. One of the most important facts here is that in this paradox there are no debatable and fully controversial objects and concepts, like infinity. There is a weaker but less controversial result here: any axiomatization of computability theory implies the inability of computability theory to explain and describe the above paradox. But our mind, by grasping the semantics and syntax around this problem, is able to understand the gist of the situation around this paradox. So either computability theory is not axiomatizable or our mind is not equivalent to a Turing machine. Finally, it is notable that in all of the above we could replace the Turing machine by any machine stronger than a Turing machine. It is certain that the above issues cast the axiomatization approach as a weak approach to the study of many subjects.
Scoping review of food safety at transport stations in Africa

Objective: The WHO has declared food safety a public health concern. Transport hubs such as taxi ranks, bus stations and other transport exchange sites are major food trading/purchasing sites, particularly in Africa. Research evidence is needed to improve food safety policies and ensure consumption of safe food, owing to the increasing burden of foodborne diseases, particularly in the WHO Africa Region. We systematically mapped and described research evidence on food safety at transport stations in Africa.
Design: A scoping review guided by the Arksey and O'Malley framework.
Data sources: We searched for original research articles in PubMed, Web of Science, EBSCOhost (Academic Search Complete, CINAHL with Full-text and Health Source), SCOPUS, and Google Scholar from their inception to 25 October 2020.
Eligibility criteria for selecting studies: We included studies that focused on food safety, involved transport stations, involved African countries and were published in English.
Data extraction and synthesis: Data extraction was performed by two reviewers using a pilot-tested form. Thematic analysis was used to organise the data into themes and subthemes, and a narrative summary of the findings is presented.
Results: Of the total 23 852 articles obtained from the database searches, 16 studies published in 6 countries met the inclusion criteria. These 16 studies were published between 1997 and 2019, with the most (5) in 2014. Of the 16 studies, 43.8% (7) were conducted in South Africa, 3 studies in Ghana, 2 in Ethiopia and 1 study each in Nigeria, Kenya, Lesotho and Zambia. Most (44.4%) of the included studies focused on microbial safety of food; few studies (22.2%) focused on hygienic practices, and one study investigated the perspective of consumers or buyers.
Microbes detected in the food samples were Salmonella spp, Escherichia coli, Shigella spp, Bacillus sp, and Staphylococcus aureus, which resulted mainly from poor hygiene practices.
Conclusions: There is limited research that focused on food safety at transport stations in Africa, especially on aspects such as hygiene practices, food storage and occupational health and food safety. Therefore, we recommend more research in these areas, using various primary study designs, to inform and improve food safety policies and practices for transport stations in African countries, alongside improving access to clean water/handwashing facilities, and undertaking structural changes to facilitate behaviours and monitoring for unintended consequences such as livelihoods of vulnerable populations.

BACKGROUND
The WHO estimates that more than 600 million people fall sick (almost 1 in 10 people) with foodborne diseases annually, of which nearly 420 000 people die, and about 33 million years of healthy lives are lost every year worldwide. 1 2 The burden of foodborne diseases is estimated to be highest in the WHO African and South-East Asia Regions, mainly occurring among vulnerable populations such as infants, young children, pregnant women, older people, poor people and individuals with underlying illnesses.
3 Food contamination mostly occurs throughout the food supply chain (from the procedures used in processing the foods, inadequate storage temperatures, unhygienic practices by food handlers, poor sanitation at cooking places/vending areas, poor waste management and inadequate treatment of leftovers). 4 Unsafe food has negative implications for health systems, and affects the development and national economies of countries, as well as trade. 3 Therefore, eating unsafe food poses a significant public health threat. To avert the consequences of unsafe food on health systems, and to sustain national economies, development, trade and tourism, 5 food safety, that is, 'routines in the preparation, handling and storage of food meant to prevent foodborne illness and injury', 5 must be observed. To reduce the incidence of food-related diseases, particularly in high-burden regions, the observation of food safety measures/precautions at all levels of the food processing chain, including the places where food is prepared and sold, is critical. 9 10 As in other WHO Regions, especially in low-income and middle-income countries, food trading in the Africa Region takes place at several formal and informal places, such as markets, restaurants, streets, open spaces in academic institutions, transport stations (taxi ranks, bus stations, lorry parks) and other transport exchange sites. Food vending at public spaces serves as a source of livelihood, 6 10 11 and more than two billion people eat food sold at various vending locations, including transport stations, on a daily basis globally. 12 13 To this end, evidence is essential to inform in-country policies/guidelines, and further research, to ensure that food prepared and sold at transport stations promotes livelihoods, nutrition, food safety and environmentally sustainable practices. This scoping review systematically mapped literature focused on food safety at transport stations in Africa, to summarise evidence and identify gaps.
Scope of the review The Arksey & O'Malley framework (research question identification; identifying relevant studies; selection of studies; data charting; and collating, summarising and reporting the findings) 14 15 was employed to scope and synthesise literature to answer the question: what evidence exists on food safety at transport stations in Africa? This review's study protocol was developed a priori. 16 This study included published peer-reviewed articles that reported findings from any African country/countries, focused on food safety, and involved transport stations. However, this study was limited to English publications (due to lack of expertise in other international languages) and to primary study designs. A detailed description of this scoping review's eligibility criteria is captured in the published protocol. 16 We followed the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) extension for Scoping Reviews checklist to report this study. 17 Identify relevant studies We searched for primary research articles relating to food safety at transport stations in PubMed, Web of Science, EBSCOhost (Academic Search Complete, Cumulated Index to Nursing and Allied Health Literature (CINAHL) with Full Text, and Health Source), SCOPUS, and Google Scholar from their inception to 25 October 2020. To enable the capturing of all relevant articles, a comprehensive search strategy (developed in consultation with an expert librarian) consisting of keywords, Boolean operators (AND/OR) and Medical Subject Heading terms was used for the electronic database search (online supplemental file 1). Syntax was modified appropriately where needed. Filters such as date and study design were not applied during the literature search in the databases. DK and PG independently conducted the database search and title screening, and imported all potentially eligible articles into an EndNote library.
The reference lists of all included articles were also screened for potentially relevant articles using the same approach. Selection of articles Prior to the abstract screening, the 'find duplicates' function in EndNote was used to find all duplicate articles, which were removed from the library. A screening form was developed in Google Forms, using this study's eligibility criteria, for the abstract and full-text screening phases. Two reviewers (coauthors) independently screened the abstracts as well as the full-text articles. Discrepancies that arose at the abstract stage were resolved by discussion among the review team until a consensus was reached. At the full-text screening phase, discrepancies were resolved by a third reviewer. All the additional articles identified from the reference lists of the included articles equally underwent full-text assessment. The PRISMA flow diagram was employed to account for all the articles involved. 18 Charting the data A data extraction form was designed consisting of the following: author(s) and publication details, country of study, study design, study setting, study population, sample size, sex, study findings and recommendations. To ensure consistency and reliability, two reviewers piloted the data extraction form using a random sample of three included studies. The pilot testing of the form also enabled the review team to discuss discrepancies and to revise the data extraction form prior to its final usage. Subsequently, two reviewers conducted the data extraction for the remaining 15 included studies using both inductive and deductive approaches. The review team resolved all discrepancies at this stage through discussion. Collating, summarising and reporting the results This study subsequently employed thematic analysis, and collated all the emerging themes and subthemes relating to food safety. A summary of the findings from the included studies is presented narratively.
Patient and public involvement No patients were involved. RESULTS Of the 23 852 articles obtained from the database searches (see figure 1 flow diagram), 146 articles met the eligibility criteria at the title screening stage. Using the EndNote 'Find Duplicates' function, 30 duplicates were found and removed before abstract screening was conducted. Subsequently, 83 articles were removed at the abstract screening stage, and 20 at full text (17 of these did not include transport stations/taxi ranks/bus stations, but involved sale from market centres, public places, chop bars, mini restaurants, major streets and sidewalks, and were excluded). Finally, 13 studies were included, and, from a manual search of their reference lists, a further 3 articles were added, giving a total of 16 articles for further analysis. Of the 16 included studies, 7 (43.8%) were conducted in South Africa, 3 (18.8%) in Ghana, 2 (12.5%) in Ethiopia 28 29 and 1 (6.2%) each in Nigeria, 30 Kenya, 31 Lesotho 32 and Zambia. 33 Most of the studies were published in the last 6 years; however, no published study was found for 2015 or 2020 (figure 2). Fifteen (93.8%) of the included studies were cross-sectional studies, and one (6.2%) was a mixed-methods study. Of the 16 included studies, 50.0% reported on microbial safety of food 4 19 23 27-29 33 and 25.0% reported hygiene practices of food handlers/vendors. 6 21 30 31 One included study each reported on the following: occupational health and food safety risk 24 ; knowledge of hygiene practice 26 ; hygiene practices of food handlers/vendors and microbial safety 25 ; and knowledge of food safety measures and hygiene practice by food handlers/vendors. 32 The microbial safety studies were conducted mainly in South Africa, Ghana 4 27 and Ethiopia, 28 29 and the last 11.1% in Zambia. 33 Seven of the eight studies reported unacceptable levels of microbes in the food.
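The screening arithmetic reported in the PRISMA flow above can be checked with a short script (a sketch; the variable names are ours, while the counts are those stated in this section):

```python
# PRISMA-style screening flow for this review (counts from the Results text).
title_hits = 146           # articles meeting criteria at title screening
duplicates = 30            # removed via EndNote "Find Duplicates"
abstract_excluded = 83     # removed at abstract screening
fulltext_excluded = 20     # removed at full-text screening
from_reference_lists = 3   # added from manual reference-list search

included_from_databases = (title_hits - duplicates
                           - abstract_excluded - fulltext_excluded)
total_included = included_from_databases + from_reference_lists

print(included_from_databases, total_included)  # → 13 16
```

The arithmetic confirms the 13 database-derived studies and 16 total articles reported in the text.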
4 32 The studies in South Africa focused on the following: hygiene practices and implications for consumers 21 ; food and nutrition knowledge, as well as practices related to food preparation 7 ; the effect of hygiene practices and attitudes of meat vendors 25 ; and sources of food contamination. 23 The study from Ghana investigated how fast food operators washed their hands, 26 while the studies from Nigeria, Kenya and Lesotho evaluated food safety and sanitary practices 30 ; food vendors and hygiene practices 31 ; and food safety knowledge, attitudes and practices of food vendors and consumers' perceptions. 32 A summary of the key findings from these studies is presented below (table 3). Letuka et al's study 32 in Lesotho indicated that 95% of food vendors did not know that washing utensils with detergents helps reduce contamination. 32 The mean knowledge score (49%±11) of the food vendors included in the study was considered poor. 32 About 6% of the consumers who participated in the study chose not to buy food sold at taxi ranks due to food safety and hygiene concerns. 32 Occupational health and food safety risk In South Africa, Qekwana et al 24 evaluated the occupational health and food safety risks associated with the traditional slaughter of goats, and the consumption of such meat. 24 Approximately 63% of the practitioners did not wear protective clothing during slaughter, and about 78% of practitioners did not know their health status. 24 Almost 83% of the practitioners hung up their carcass to facilitate bleeding, flaying and evisceration. 24 The study further observed that none of the practitioners practised meat inspection. In Nigeria, Aluko et al's study 30 revealed that approximately 62% of the vendors had no formal training, and their medical status was also unknown.
30 DISCUSSION This scoping review mapped evidence on food safety at transport stations in Africa, and revealed a very low number of papers published in this area, given that many African employees in both the formal and informal sectors commute through these transport hubs. 12 13 An average of one paper per year relating to food safety at transport hubs in Africa, as revealed by this review, is simply not enough. Nonetheless, the few papers depict an imbalance of research, with most focused on microbial safety, 4 19 20 23 27-29 33 and few on socioeconomic aspects such as hygiene practices, 6 21 30 31 and occupational health and food safety risk. 24 Moreover, this review revealed that no study evaluated the storage of food or how the food is transported to the vending site. Selected findings from table 3 include the following.
Aluko et al 30 : ► approximately 17% of food vendors always washed their hands after using the toilet; ► 63% of them rarely kept their fingernails short; and ► nearly 4% of them always kept their leftover cooked food in a refrigerator, despite having an unstable power supply.
Odundo et al 31 : ► food vendors had poor hygiene practices; however, men were observed to have better hygienic practices than women (p<0.05); ► the hygiene practice of the vendors was found to be significantly associated with training (those trained observed hygiene); and ► the wearing of jewellery, long and unclean nails, and a lack of protective clothing were observed.
Letuka et al 32 : ► observed that the food handlers operated in an unhygienic environment.
As evidenced by this review, most of the food sold at transport hubs does not meet the minimum standards and is not safe for consumption due to the presence of several microbes. 4 19 23 25 27 29 33 34 There are several reasons for this, such as poor practices relating to hygiene, storage, preparation, cooking, cleaning and serving. 4 19 20 23 27-29 33 However, these findings are similar to previous review findings involving markets, 35 and homes and restaurants.
36 A recent publication by Gizaw 35 indicated that several studies reported microbial contamination of foods sold in markets, with bacteria and fungi similar to those identified in our review. 35 Also, a review by the WHO reported that the main factors contributing to foodborne disease outbreaks in homes or restaurants were poor temperature control in preparing, cooking and storing food. 36 Although very few papers were found by this review, the evidence is compelling that there should be policy interventions to address issues relating to poor hygiene practices, including food storage, preparation, cooking, cleaning and serving by food handlers at transport hubs, not only in South Africa but across Africa. Similar to a previous scoping review, 10 most of the included papers were published within the last 6 years, but no published study was found for 2015 or 2020. While the reason for the lack of published papers in 2015 might be difficult to determine, the COVID-19 pandemic, which resulted in a 'covidisation' of research, might be the reason for the lack of publications in this field in 2020. Although we cannot conclude that no primary research has been conducted in these countries focusing on the safety of food sold at transport stations, this suggests a research/publication gap. Food safety research is, perhaps, more relevant now than ever in Africa, since the burden of foodborne diseases is rising annually, resulting in the declaration of food safety as a public health concern by the WHO. 7 8 Aside from this, most commuters tend to buy ready-to-eat (RTE) food from street food vendors, including those at transport hubs 37 38 ; hence, the sale of food at transport stations is rising, 38 39 particularly in Africa, 6 partly due to an increase in demand for RTE food, and the employment opportunities it offers to many individuals who otherwise would not have had any source of income.
10 40 Even more worrying is the fact that most of the included articles that focused on microbial safety reported high levels of food contamination with several microorganisms, especially Salmonella spp and E. coli. 4 19 23 25 27 33 34 Therefore, more research is needed across African countries to prevent potential negative consequences. Our study findings have implications for practice and research. For instance, the likelihood of food poisoning with microbes such as Salmonella spp, E. coli, Shigella spp, Bacillus spp, S. aureus and several others, revealed by most of the included studies that focused on microbial contamination of food, is alarming. This, if not checked, could further worsen the already high burden of foodborne diseases in a continent where several countries are already experiencing many health system and economic challenges. Aside from this, the majority of individuals who commute through transport hubs will possibly purchase a meal from a transport hub/exchange site, which may be their only meal of the day, 12 13 and yet food safety standards are poor. 4 19 20 23 27-29 33 Thus, if not checked, the excess cases of foodborne diseases from any outbreak will further impact negatively on the already challenged public health systems in Africa. Also, poor people who are exposed to these unsafe foods and get an infection may have to pay more for healthcare, which can further exacerbate their poverty. Moreover, people who are already living in extreme poverty and get exposed to foodborne disease may not even make it to hospital for care, and can end up dying at home.
41 Good hygiene and sanitation practices, such as adequate hand washing, adequate washing and storage of pots and dishes, good waste management, and observation of food preparation standards and serving etiquette, among others, have the potential to reduce the risk of food contamination from both biological and non-biological hazards, yet this study reveals few studies that focused on hygienic practices. We, therefore, recommend more research to further inform contextualised policy decisions aimed at improving hygiene and sanitation practices by food vendors at transport stations. Also very relevant to ensuring food safety are the occupational health practices of the vendors. Regular food handler tests and food inspections, conducted by the appropriate local authorities, should be mandatory in all African countries. Food handler tests should seek to ensure that food vendors are fit, health-wise, to prepare and serve food meant for public consumption. However, our review found limited studies that evaluated occupational health and food safety. Considering that evidence from South Africa and Nigeria suggests that about 78% and 62% of food vendors, respectively, do not know their health status, 30 42 and the increasing number of informal food sellers at various transport exchange sites, future studies are recommended to focus on occupational health and food safety in Africa. The means and manner of storing food, especially leftover RTE food, can either increase or reduce the risk of food contamination, but, again, this scoping review found no study that focused on the food storage practices of the vendors at transport stations. Also essential, and yet we did not find any study focusing on it, is the quality (nutritional aspects) of the meals sold at transport stations. Eating a nourishing diet or balanced meals is critical to ensure good health [43][44][45] ; hence, we encourage future primary studies to include the nutritional aspects.
Such studies may help streamline guidelines or inform policies to improve the quality of the food sold at transport exchange sites or taxi ranks. Moreover, this review found that the majority (17 out of 18) of the respondents in the included studies were the vendors (mostly women), or food samples taken from the vendors. The perspectives of consumers (buyers) or commuters regarding food safety at transport stations are also very relevant, and we recommend future research to involve them. A comparative study to investigate food safety practices among male and female food vendors at transport stations might be relevant, since many males are now getting involved in the business. 6 46 47 To the best of our knowledge, this study is the first scoping review that systematically mapped literature relating to food safety at transport stations in Africa. A major strength of our study method is that it permits the inclusion of multiple study designs. Also, the choice of this study method permitted us to highlight literature gaps and make recommendations for future research. Aside from this, we conducted a thorough search in six databases using a comprehensive search strategy, which enabled us to capture the most relevant articles to answer the review question. Moreover, two independent reviewers were used to select the studies and perform the data extraction processes, which helped to prevent selection bias and ensured the reliability and trustworthiness of this study's results. Despite this, our scoping review has several limitations. This study included only original peer-reviewed papers, which resulted in the exclusion of one review paper 10 and one Master's dissertation. 48 We also did not consult the websites of the WHO and the Food and Agriculture Organisation for possible relevant studies. Furthermore, this study's findings cannot be generalised beyond Africa, since the search was limited to African countries only.
Although no date limitation was applied, we limited the publication language to English only, which perhaps eliminated relevant articles published in other languages. Despite these limitations, this study has provided essential evidence relating to food safety at transport stations and has shown literature gaps to guide future research. CONCLUSION Based on this scoping review's eligibility criteria, our study results suggest there is limited research focusing on food safety at transport stations in Africa. Most of the existing published studies focus on the microbial safety of food, and very few or none on other aspects such as hygiene practices, food storage, occupational health and food safety, and nutrition. Hence, we recommend more primary research involving community members and policymakers in these areas going forward, alongside improving access to clean water/handwashing facilities, and undertaking structural changes to facilitate behaviours and monitoring for unintended consequences such as the livelihoods of vulnerable populations. Contributors BPN, DK, SED, SM and RS conceptualised and designed the study. DK developed and designed the database search strategy and conducted the search. PG contributed to the screening of the studies and data extraction. DK wrote the draft manuscript, and BPN, SED, GM and RS critically reviewed it and made revisions. All the authors approved the final version of the manuscript. BPN is the author responsible for the overall content of this study. Competing interests None declared. Patient consent for publication Not applicable. Provenance and peer review Not commissioned; externally peer reviewed. Data availability statement Data are available upon reasonable request. Data sharing is not applicable as no datasets were generated and/or analysed for this study. All data relevant to the study are included in the article or uploaded as supplementary information. Supplemental material This content has been supplied by the author(s).
It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise. Open access This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https:// creativecommons. org/ licenses/ by/ 4. 0/.
Preparation and Characterization of a Magnetic Paper by Coprecipitation Loading Process Much research on making magnetic paper focuses on the substrate as part of the green promotion strategy. For example, wood, kenaf, sugarcane bagasse and other materials have been successfully used to produce green magnetic paper. However, in order to achieve a more environmentally friendly magnetic paper, the production and synthesis method of the magnetic paper also needs to be considered. Previously, there were two established methods of producing magnetic paper, namely the lumen loading and in-situ coprecipitation methods. As a continuation of the previous green magnetic wave absorber technology, a new enhanced technique named the coprecipitation loading method is introduced, which is better at conserving energy. In this research, the new method showed average performance in terms of the magnetic properties and loading degree of the magnetic particles. However, compared to the previous two methods, it saves a large amount of energy in the production process, thus leading to better conservation of energy and promoting a greener environment. Introduction Magnetic sheets can be used as electromagnetic wave absorbers to counter electromagnetic interference or to enhance communication signals. Previously, magnetic sheets were produced using pure magnetite or as composites by mixing polymers and magnetic particles. By a modern approach, conventional paper can be altered to have additional characteristics such as magnetic properties [1]. The importance of magnetic paper is that it counters the disadvantages of pure magnetic sheets: magnetic paper has better flexibility, cost efficiency and disposability. Research on magnetic paper dates back to the 1980s, when a group of researchers successfully produced this paper in the laboratory. Green et al. [1] produced magnetic paper using the lumen loading technique.
A paper fiber usually has a lumen, a hollow cylindrical empty space within the fiber. This method loads the lumen with nano-sized magnetic particles while leaving the exterior surface of the fiber free from magnetic particles. It is important to clear the outer layer of the fiber of magnetic particles to retain the mechanical properties of the paper. The other method of making magnetic paper is the in-situ coprecipitation method, first developed by Ricard et al. [2]. The advantage of this process is that it allows better control of the size and distribution of the particles inside the lumen. This technique uses ferric (Fe3+) and ferrous (Fe2+) salts and transforms them into magnetite by coprecipitation upon the addition of NaOH. The involvement of this nanotechnology gives an advantage over the lumen loading technique in terms of particle size and distribution, but smaller particles lead to a lower magnetic saturation value. This is because nano-sized magnetite shows superparamagnetic behavior [3]. Many previous studies focus only on the substrate or the loading technique. Research focusing on the substrate compares different substrates to promote a greener environment. For example, the substrate was changed from wood chip paper pulp to kenaf [3], pine softwood [4] and bamboo pulp [5]. On the other hand, research on the loading technique varies the method of loading the magnetic particles into the lumen. To date, there are two major techniques to load magnetic particles into the lumen, as explained above [6]. In this research, the magnetic particle loading technique was modified by combining both established methods to reduce energy consumption in the production process, in order to promote a greener environment. Methodology In this study, the basic material used for making paper was paddy straw pulp. This material was selected as a continuation of the previous work.
The magnetic papers were produced using three different methods: the lumen loading technique, the in-situ coprecipitation method and, lastly, the proposed coprecipitation loading method. Lumen loading process Paper pulp weighing 15 g was mixed with 1250 ml of distilled water and 0.1 g L-1 alum and mechanically stirred at 1000 rpm for 30 minutes. 30 g of nano magnetite particles of size 50 nm, produced by Sigma-Aldrich, which had undergone ultrasonic separation, were then mixed with the colloid and rotated at 1000 rpm for 60 min. Next, the rotor speed was slowed to 400 rpm, and the colloid was mixed with polyethylenimine (PEI, 2%, w/w polymer on pulp) and left for 4 h. Afterwards, the suspension was washed in a self-designed sieve (45 µm) and then underwent the papermaking process by placing the washed pulp in a self-designed mold and drying it for 24 h. In-situ coprecipitation process 15 g of pulp was stirred at 1000 rpm in 1500 ml of distilled water for 30 min. Next, the colloid was heated to approximately 80 ºC. Ferrous chloride (FeCl2.4H2O) and ferric chloride (FeCl3) were mixed into the solution at a ratio of 1:2, respectively, and left for 5 min. Then, sodium hydroxide (NaOH) was added to the suspension and left for 15 min. During this process, the suspension turned black. Then, PEI (2%, w/w polymer on pulp) was mixed in and the suspension was further stirred for 1 h. Lastly, the suspension was washed, molded and dried as the final stage of the magnetic paper making process. Coprecipitation loading process 15 g of dried pulp was vigorously stirred at 1000 rpm for 30 minutes in 1250 ml of distilled water. Simultaneously, magnetic particles were produced by mixing ferrous chloride (FeCl2.4H2O) and ferric chloride (FeCl3) at a ratio of 1:2, respectively, in 250 ml of distilled water at 80 ºC.
After 5 minutes, the solution was mixed with NaOH; a black precipitate formed instantaneously and was left for another 15 min before the heat supply was stopped. The 15 min of compensation time was needed to ensure that all the ferrous and ferric chloride had fully reacted with the added NaOH. Immediately afterwards, the black precipitate was mixed with the pulp and stirred for 10 minutes. Then, PEI (2%, w/w polymer on pulp) was mixed in and the suspension was further stirred for 1 h. Lastly, the suspension was washed, molded and dried as the final stage of the magnetic paper making process. Morphology Analysis The lumen structures of the lumen loaded and in-situ coprecipitation pulps were observed using a Zeiss EVO-50 ESEM scanning electron microscope. Observation of the lumen was made by cross-cutting the paper. Magnetic Measurement The magnetic properties of the loaded paper were analyzed using a Lakeshore 7407 vibrating sample magnetometer. The magnetization and coercive force of each sample were measured at room temperature only. X-ray Diffraction Measurement The magnetic particles were analyzed using a PANalytical X'PERT PRO MPD X-ray diffraction machine, model number PW3040/60. This machine was used to identify the particles used for the loading process and to confirm the purity of the particles being loaded into the lumen of the pulp. Thermogravimetric Analysis The percentage of magnetic particles loaded was determined using thermogravimetric methods. The samples were heated to 900 ºC for at least 4 hours at a heating rate of 10 ºC min-1. After the heating process, the weight of the samples at 900 ºC was measured. The loading degree was then calculated using the following equation: Loading degree of paper = % weight of treated paper − % weight of empty paper (1) Power Consumption The power needed for each method was calculated. The calculation was based on the mechanical stirring and heating processes.
The mechanical stirring calculation was based on the assumption that the stirrer acts as a shaft, so the shaft work calculation applies. The work transmitted through a shaft is as follows: W = 2πnTt (2), where n is the rotational speed, T is the stirrer torque and t is the time required for the whole stirring process. Next, the power consumption calculation also involves the heating process, and the formula was based on maintaining the water temperature at 80 ºC for a 1 to 4 hour time interval, depending on the type of process. The power required for the water temperature difference is as follows: P = mcΔT/t (3), where m is the mass of water, c is the specific heat of water, ΔT is the temperature difference and t is the time required for the temperature difference. XRD measurement The mean crystallite size of the particles was estimated from the Scherrer equation: Dhkl = 0.9λ/(β cos θ) (4), where Dhkl is the mean crystallite size, β is the broadening at full width at half maximum intensity (FWHM) of the (311) diffraction peak in radians, θ is the Bragg angle and λ is the X-ray wavelength (Munawar et al. 2010). In Figure 1, every pattern shows clear peaks at several important points, and these points indicate the presence of magnetite and its purity in the sample being examined. Since no unwanted or foreign peaks appear, it can be interpreted that the commercial and synthesized particles used in all techniques had high purity and homogeneity of magnetite (Fe3O4). Figure 2 shows the morphology images of the samples. Figure 2 (a) is the image of a sample without magnetic particles, and it shows the lumen, a hollow part that exists inside the fiber. The lumen is important because it acts as a chamber to be filled with magnetic particles. Here, most of the lumen is fully filled with magnetic particles, while the exterior surfaces of the samples are clean of any foreign particles. The outer surface needs to be cleaned so that it will not interfere with the interfiber bonding.
This helps to maintain the mechanical properties of the magnetic paper while still giving the paper significant magnetic properties. Morphology Analysis: In the lumen loading method, during the impregnation stage, the fluid flow was very fast and turbulent. This allows the magnetic particles to enter the lumens through the pits. While most of the particles remained inside, residual particles on the outer surface of the fiber were washed out, leaving the outer surface free from fillers. However, the investigation by Zakaria et al. [6] showed that it was difficult to clean the magnetic particles from the exterior fiber surface completely, thus reducing the fiber mechanical strength. The in-situ technique gives a new perspective on producing magnetic paper. In this method, the particles were produced chemically in situ: during the vigorous stirring of the pulp, chemical synthesis of magnetite particles took place. After a certain time interval, the so-called aging time, particles grew in size and deposited on the surface of the fibres [7]. Particles in the lumen remained, while most of the excess particles were removed during the washing process. This method produces particles of around 60-80 nm [3], observed as the slurry image in Figure 2 (c). On the other hand, the purchased magnetite used in lumen loading was around 50 nm. However, over time the particles tend to agglomerate, which makes the particles more pronounced in the lumen loading image in Figure 2 (b) compared to the in-situ technique in Figure 2 (c). In this project, one of the novelties was the introduction of the coprecipitation loading method. This method is a combination of lumen loading and the in-situ coprecipitation method. Magnetite particles were first produced chemically and, after a moment, mixed with the pulp suspension. The dispersion and behavior of the particles were almost identical for both the in-situ and the proposed method.
This can be seen in Figure 2 (d), where coprecipitation loading shows a slurry image indicating the presence of magnetic particles, almost identical to the image obtained from the in-situ method in Figure 2 (c). Degree of Loading: The degree of loading gives an idea of the percentage of particles loaded into the fiber. Figure 3 shows a trend where lumen loading has the highest degree of loading at 34.4%, followed by the in-situ method with an average value of 24.4%, and the lowest degree of loading from the coprecipitation loading technique at 20.4%. One reason for this trend is that in the lumen loading technique, particles of different sizes were forced to settle inside the lumen and fill most of the empty spaces. This can be seen in Figure 2 (b), where the lumens are almost fully filled by the magnetic particles. However, both in-situ and coprecipitation loading had a lower degree of loading because the chemically produced magnetic particles only laminate the surface of the lumens and the exterior surface of the fibers, as shown in Figure 2 (c) and (d). Since much of the void space in the lumens was not filled, the degree of loading readings are lower. Magnetic Properties: In Figure 4, the hysteresis loops show that the highest magnetisation comes from the lumen loading samples, with a value of 34.40 emu/g. The magnetisation of recycled paper from the in-situ and coprecipitation loading techniques was 11.18 emu/g and 8.96 emu/g respectively, somewhat lower than that of the lumen loading technique. Although lumen loading had the highest value, all magnetisation values are significantly smaller than the standard value for magnetite crystals, which is 92 emu/g. Below this value, the samples exhibit good superparamagnetic behaviour [8].
(LL is for lumen loading, IS is for in-situ coprecipitation and CL is for coprecipitation loading methods.) The low value of magnetic saturation was basically due to the size of the magnetite particles. According to Garcia et al. [9], nano-sized magnetic particles show very low magnetic saturation (Ms) and coercivity (Hc). The smaller the nano-sized particles, the smaller the Ms and Hc, leading the sample to show superparamagnetic behaviour. From the trend, it can be interpreted that as the loading degree increases, the Ms and Hc also increase. Despite this trend, lower loading will increase paper strength, because fibre-fibre bonding sites are lost when magnetic particles deposit on the fibre surface. Power Consumption: The power consumption calculation was based only on the energy used in the mechanical stirring process and in heating water during chemical synthesis. Although the calculation only covers the energy needed to produce 3 g of magnetic paper, it gives a general idea of how much energy is needed to produce the magnetic paper. For mechanical stirring, the work done was calculated as the work done by a rotating shaft. This gives the work needed in the impregnation stage of all methods. The heating calculation, on the other hand, was based on the energy dissipated to the surroundings during the heating process. From the calculation, the in-situ method used the most energy at 185.22 kW, followed by the lumen loading technique at 94.48 kW and lastly coprecipitation loading at 56.81 kW, as portrayed in Figure 5. In the in-situ and coprecipitation loading methods, water was needed as the chemical synthesis medium and heat was required to initiate the chemical synthesis. Coprecipitation loading used six times less water than the in-situ method, and only for the purpose of producing the magnetic particles.
However, the in-situ method needs more water for the pulp to be fully immersed while the chemical synthesis takes place. More water means more heat was needed to start the chemical synthesis and to maintain the elevated temperature until the synthesis was complete, making the in-situ method consume more energy than coprecipitation loading. Lumen loading, on the other hand, consumes less energy than the in-situ method because it only involves the mechanical stirring process. Even so, it needs more work than coprecipitation loading because its total stirring time was 4 hr, while coprecipitation loading only needed 1 hr of stirring. During the first hour, vigorous stirring made the total work consumption spike. The remaining stirring was done at a slower rate of 400 rpm, so the later hours used less energy than the first hour. Figure 5: Power consumption of different methods to produce magnetic papers. (LL is for lumen loading, IS is for in-situ coprecipitation and CL is for coprecipitation loading methods.) Conclusion: The experimental data show that lumen loading had better magnetic properties than the in-situ coprecipitation method and coprecipitation loading. Lumen loading also gives better filling of the lumen compared to the other two methods. Despite these advantages, the proposed method gave better power savings during production with only a slight difference in magnetic performance and loading degree. In conclusion, the objective of the study was achieved by introducing this new simple, cost-efficient and energy-saving method. Future work should enhance this newly developed method in terms of the magnetism and mechanical properties of the magnetic paper produced, taking this new method to another level.
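The power comparison above rests on two standard relations: shaft work W = 2πnTt for the stirring stages and heating power P = mcΔT/t for the water bath. The sketch below is a minimal numeric illustration of those two formulas; the stirring torque, water mass and times are assumed values for illustration only, not the parameters measured in this work:

```python
import math

def shaft_work_J(n_rps, torque_Nm, t_s):
    """Work done by a rotating shaft: W = 2*pi*n*T*t (power P = 2*pi*n*T)."""
    return 2 * math.pi * n_rps * torque_Nm * t_s

def heating_power_W(mass_kg, c_J_per_kgK, dT_K, t_s):
    """Power needed to supply a temperature difference dT in time t: P = m*c*dT/t."""
    return mass_kg * c_J_per_kgK * dT_K / t_s

# Illustrative only: 400 rpm stirring for 1 h at an assumed 0.05 N*m torque,
# plus heating 2 kg of water from 25 C to 80 C over 10 min.
w_stir = shaft_work_J(400 / 60, 0.05, 3600)        # joules of stirring work
p_heat = heating_power_W(2.0, 4186, 80 - 25, 600)  # watts while heating up
```

With these assumed numbers the stirring stage costs roughly 7.5 kJ of shaft work, while bringing the water to 80 °C demands on the order of 0.8 kW, consistent with the paper's qualitative finding that heating dominates the energy budget of the chemical-synthesis routes.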
Acknowledgement: A special tribute to Universiti Teknikal Malaysia Melaka (UTeM) for funding this research and making the impossible possible under the Short Term Research Funding Scheme (PJP/2012/FKP (38A) S0142). Special credit also goes to Universiti Putra Malaysia for the twin pulp digester used in the pulping process and for the magnetic properties testing using the Vibrating Sample Magnetometer (VSM). The authors would also like to acknowledge Dr Taufik from UTeM for the power efficiency consultation. Lastly, special thanks to all who were involved in making this paper, either directly or indirectly. Thank you.
Do Matrix Metalloproteases and Tissue Inhibitors of Metalloproteases in Tenocytes of the Rotator Cuff Differ with Varying Donor Characteristics? An imbalance between matrix metalloproteases (MMPs) and the tissue inhibitors of metalloproteases (TIMPs) may have a negative impact on the healing of rotator cuff tears. The aim of the project was to assess a possible relationship between clinical and radiographic characteristics of patients such as the age, sex, as well as the degenerative status of the tendon and the MMPs and TIMPs in their tenocyte-like cells (TLCs). TLCs were isolated from ruptured supraspinatus tendons and quantitative Real-Time PCR and ELISA were performed to analyze the expression and secretion of MMPs and TIMPs. In the present study, MMPs, mostly gelatinases and collagenases such as MMP-2, -9 and -13, showed an increased expression and protein secretion in TLCs of donors with higher age or degenerative status of the tendon. Furthermore, the expression and secretion of TIMP-1, -2 and -3 was enhanced with age, muscle fatty infiltration and tear size. The interaction between MMPs and TIMPs is a complex process, since TIMPs are not only inhibitors, but also activators of MMPs. This study shows that MMPs and TIMPs might play an important role in degenerative tendon pathologies. Introduction Matrix metalloproteases (MMPs) are a large family of proteolytic enzymes, which can degrade all components of the extracellular tendon matrix [1,2]. Their activities are antagonized by the interaction with the tissue inhibitors of metalloproteases (TIMPs). The balance between MMPs and TIMPs plays a critical role in tendon degeneration and healing [3,4]. The healing after rotator cuff (RC) reconstruction is associated with high failure rates [5][6][7], mainly linked to the formation of inferior, disorganized scar tissue at the tendon-bone insertion site [8,9]. The mechanisms underlying the poor tendon healing are widely unknown.
Since the MMPs and TIMPs regulate tendon modeling and remodeling, it is hypothesized that the development of tendon pathologies is dependent on the MMP/TIMP balance [2,10]. The activity of MMPs is highly regulated by four endogenous antagonists (TIMP-1 to TIMP-4), which can inhibit all MMPs by the formation of stoichiometric complexes through a non-covalent interaction with a zinc-binding site in the MMPs. TIMPs are not specific to any single MMP group and their inhibitory effects are overlapping [15]. Notably, TIMPs do not only inhibit the active forms of MMPs, but also interfere with the latent form and regulate their activation process. Thus, TIMPs can have promoting as well as inhibitory effects on the regulation of cell growth, invasion, differentiation, apoptosis and angiogenesis [16,17]. The aim of the project was to examine the expression and secretion of MMPs and TIMPs in tenocyte-like cells (TLCs) of Supraspinatus (SSP) tendon tears from donors differing in their degenerative status, age and sex. We hypothesize that the donor characteristics influence the MMP and TIMP expression and secretion in the TLCs, which might lead to an imbalanced MMP/TIMP ratio in the donor risk groups. Correlation of Donor Characteristics First, correlations between the individual parameters of the radiographic examination were investigated. Strong correlations were observed between muscle fatty infiltration (MFI) and tendon retraction (rs = 0.769, p < 0.001) and tear size (rs = 0.607, p < 0.001) as well as between tendon retraction and tear size (rs = 0.690, p < 0.001). Subsequently, correlations between radiographic parameters and age were evaluated. The MFI showed strong correlation with the age (rs = 0.673, p < 0.001), whereas mild age-dependent associations were observed for tear size (rs = 0.463, p = 0.011) and tendon retraction (rs = 0.411, p = 0.024). 
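The rank correlations reported above (e.g. rs = 0.769 between MFI and tendon retraction) are Spearman's rho values: Pearson correlations computed on ranks, with ties sharing the mean rank. A minimal sketch of that computation, using made-up ordinal scores rather than the patient data:

```python
def rank_with_ties(values):
    """1-based ranks; tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1  # mean of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation coefficient (rs)."""
    rx, ry = rank_with_ties(x), rank_with_ties(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical MFI and retraction grades for ten donors (illustrative only)
mfi = [0, 1, 1, 2, 2, 3, 3, 4, 4, 2]
retraction = [0, 0, 1, 1, 2, 2, 3, 3, 3, 1]
rs = spearman_rho(mfi, retraction)  # strong monotonic association
```

In practice this is what `scipy.stats.spearmanr` (or SPSS, as used in the study) computes; the hand-rolled version just makes the tie handling explicit.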
Average MMP and TIMP Expression and Protein Secretion over All Samples (n = 30) Gene expression analysis revealed MMP-2 expression to be the strongest in all TLCs, followed by MMP-3 and MMP-1. MMP-9, -10, and -13 were expressed in very low amounts, whereas MMP-10 and MMP-13 were only expressed in 26 or 28 samples, respectively. High expression levels of TIMP-1, -2 and -3 were found in all cells, while the TIMP-4 mRNA expression was much weaker (Figure 1). Due to the weak expression of MMP-9, -10 and -13, these MMPs were not analyzed on protein level. For the rest of the MMPs and TIMPs, the protein analysis of cell culture supernatants revealed a comparable pattern, where MMP-2 was the most secreted MMP and TIMP-1 and -2 the most secreted TIMPs in the cells ( Figure 2). The FCS containing medium, which served as negative control, did not show detectable levels of MMPs or TIMPs in any of the ELISA analysis. MMP and TIMP protein secretion of all samples. MMP-2 and TIMP-1 data were derived from sandwich ELISA. All other proteins were analyzed using Multiplex ELISA technique. All values were normalized to the total protein content (Coomassie Plus assay), given as mean ± SD (n = 30) represented in a logarithmic graph. MMP-2, TIMP-1, and -2 protein secretion was strongest in the cells, while MMP-1, -3, and TIMP-4 protein secretion was lower. Relationship between Donor Characteristics and MMP/TIMP Expression and Secretion Results of MMP/TIMP expression at mRNA and protein level did not differ significantly between TLCs of male and female donors. Therefore, all 30 TLC cultures were analyzed without separation regarding the donor sex. To determine the influence of donor age, donors were segregated into two groups: under 65 years (n = 16) and over 65 years (n = 14). The analysis revealed an age-dependent increase in the mRNA-expression levels of MMP-2, -9, -13 and TIMP-2, -3 ( Figure 3A). This could only be confirmed at the protein level for MMP-2. 
In addition, protein levels of TIMP-1 were significantly elevated in TLCs from older donors, while mRNA expression was unaltered (Figure 3B). Spearman's rho correlation revealed mild correlations between the age and the mRNA levels of MMP-2 (Spearman's rank correlation coefficient (rs) = 0.504; p = 0.005), TIMP-2 (rs = 0.485; p = 0.007), and TIMP-3 (rs = 0.455; p = 0.012) (Table 1). (A) qRT-PCR was performed to analyze gene expression. The box plot data represent the relative gene expression with 18S as reference gene using the ΔCt method with efficiency correction. The mRNA levels of MMP-2, -9, -13 and TIMP-2, -3 are significantly increased with higher age; (B) Protein levels were analyzed using ELISA and normalized to total protein content (Coomassie Plus assay). Protein levels of MMP-2 and TIMP-1 were significantly elevated with higher age. To allow visualization of all MMPs/TIMPs in one figure, MMP-9 and -13 values were multiplied by 10⁵. To analyze the association between MMP/TIMP expression and MFI, patients were grouped as follows: score 0-1 (n = 10) and score 2-4 (n = 20). The mRNA levels of MMP-2, -9 and TIMP-3 were significantly increased in TLCs from donors with enhanced MFI (Figure 4). At the protein level, none of the investigated proteins showed a significant alteration. MFI correlated mildly with mRNA values of MMP-9 (rs = 0.432; p = 0.017) and protein values of TIMP-1 (rs = 0.413; p = 0.023) (Table 1). Tendon retraction was analyzed using two distinct groups: patients with the score of 0-1 (n = 9) versus patients with the score of 2-3 (n = 20). At mRNA and protein levels, no significant differences could be found between the two groups (data not shown). Nevertheless, tendon retraction showed mild correlation with protein secretion of TIMP-2 (rs = 0.407; p = 0.029) (Table 1). Figure 5. MMPs and TIMPs grouped according to the tear size (small tear size: Bateman score 1-2 (n = 12); big tear size: Bateman score 3-4 (n = 18)).
(A) qRT-PCR was performed to analyze gene expression. The box plot data represent the relative gene expression with 18S as reference gene using the ΔCt method with efficiency correction. MMP-9 and -13 expression of donors with bigger tear size was significantly increased, while MMP-10 expression was decreased at the mRNA level; and (B) Protein levels were analyzed using ELISA and normalized to total protein content (Coomassie Plus assay). Protein levels of MMP-1 and TIMP-1 showed an elevated secretion with bigger tear size. To allow a visualization of all MMPs/TIMPs in one figure, MMP-1 values were multiplied by 10² and MMP-9, -10 and -13 by 10⁵. Discussion The healing of RC tears depends on the patient characteristics as well as the tendon tissue quality. However, the cellular or molecular mechanisms behind this relationship are still elusive. We previously showed that TLCs isolated from donors differing in age, sex and MFI exhibit different cellular characteristics and stimulation potential with BMP-2 and BMP-7 [18][19][20]. Since MMPs and TIMPs represent the main regulators of modeling and remodeling in the tendon, it was the aim to assess their expression and secretion in TLCs and their supernatant, related to the age, sex and degenerative status of the tendon. We hypothesized a change of the expression and secretion of MMPs and TIMPs in TLCs of risk groups, which might lead to an imbalance and therefore might explain the weaker healing potential in these patient groups. In fact, we observed a predominantly increased expression and secretion of MMPs and TIMPs in TLCs of older donors and of donors with higher degenerative status of the tendon. The most prominent changes within the MMPs were found for the gelatinases MMP-2 and MMP-9. Gelatinases such as MMP-2 and -9 mainly degrade smaller collagen fragments or already degraded collagen (gelatin) [21].
Since tears of the RC are mainly due to degeneration, this may indicate an important role of the gelatinases in later steps of the degeneration process after the full collagen structure was already degraded by other MMPs such as the collagenases. Within the TIMPs, mostly TIMP-1, -2 and -3 were regulated with donor characteristics. In the present study, no differences in expression or secretion of MMPs and TIMPs were found between TLCs of male and female donors. Estrogen was reported to increase MMP-13 expression in rat Achilles tendon cells when directly applied to the cells [22]. Hormones such as Estrogen may not play a major role when they are not directly added to the cells. However, it has to be kept in mind that the FCS in the medium contains an undefined amount of hormones, and the phenol red, which is present in the cell culture medium, can act as a weak Estrogen mimic [23]. Since tenocytes of male and female donors were shown to express Estrogen receptors [24], both components might influence the TLCs of male and female donors in the same manner. The age of the donors is an important factor influencing the healing outcome after RC surgery and a cut off is mainly described at the age of 60 to 65 years [7,25]. In the present study, with an age cut off at 65 years, a significantly enhanced expression and secretion of MMP-2, -9 and -13 as well as TIMP-1, -2 and -3 was found and confirmed by correlation analysis. Results for MMP-2 and -9 are in accordance with Yu et al. who found the same changes in rat Achilles tenocytes [26]. In contrast to our findings, a decreased expression of TIMP-1 and TIMP-2 was reported. This is probably due to differences in experimental parameters such as donor species and age [26]. Similarly, another group reported increased MMP-1 activity in SSP tendons of elderly donors [10]. However, this was not observed in the present study. The SSP tendon has a high collagen turnover, which is linked to high MMP activity levels in the tendon [10].
MMP-1, MMP-2, MMP-3, MMP-9, MMP-10 and MMP-13 are mainly described in the context of tendon healing. However, direct comparisons are difficult, since most studies analyzed differences in MMP/TIMP expression between healthy and ruptured tendons using tissue sections and not cultured TLCs from ruptured RC tendons with different degeneration. MFI is a predictor for high degenerative status of tendons. Here, we found that enhanced MFI is correlated with significantly increased expression of MMP-2 and -9 in TLCs. A high MFI is the result of a barely moved shoulder due to the tendon rupture and insufficient mechanical stimulation of the tendon [27]. As mechanical stimulation was reported to influence MMP expression in rat tenocytes [28], this might explain possible alterations of MMPs with increasing MFI. A bigger tear size was found to be positively correlated with the increase in RNA levels of MMP-9 and -13 and protein level of MMP-1. These findings are in line with other studies, showing increased expression of these MMPs in ruptured Achilles or RC tendons compared to intact controls [10,29,30]. In contrast, Castagna et al. described no changes in protein levels of MMPs and TIMPs between SSP tendons of a torn area versus a macroscopically healthy area and the intact subscapularis tendon in 13 patients [31]. A recent study reports that the MMP-9 expression is associated with tendon retraction [30]. In contrast, others found a decreased gelatinolytic activity (MMP-2, MMP-9 and MMP-13) in ruptured SSP tendons compared to normal tendons [10]. MMP-3 was presently regulated at neither the RNA nor the protein level regarding the degenerative status of the tendon or other donor characteristics. Contradictory findings were observed by other authors, who reported that MMP-3 expression was mostly downregulated in ruptured or tendinopathic tendons [10,29,30]. It was hypothesized that the role of MMP-3 as an activator of other MMPs is important for a correct remodeling process [10].
In general, an increased MMP activity at the in vivo level could cause an imbalance between MMPs and TIMPs, which might lead to an excessive degradation of the tendon extracellular matrix and therefore to an inferior healing. It is clinically observed that the healing after RC reconstructions is often inferior in patients with higher age or degenerative status [25,27,32]. These clinical findings may be a result of increased activity of MMPs, as found presently. However, beyond the MMPs, TIMP-1, -2 and -3 were also increased with higher age, MFI, and tear size. An increased TIMP-1 expression in the healing SSP tendon of rats was also described by Choi et al. [33]. Additionally, an elevated TIMP-1 level in plasma samples was found in patients with full thickness tears of the RC [34]. Other studies described that TIMP-2, TIMP-3 and TIMP-4 expression was decreased in ruptured Achilles or RC tendons compared to intact controls [29,35]. The presently observed increased TIMP expression and secretion might be a direct response of the cells to prevent an elevated MMP activity in their microenvironment. However, it is not clear whether tendon rupture or degenerative changes are cause or consequence of altered MMP/TIMP expression and activity levels. It was described that a failed healing investigated more than six months after RC reconstructions correlated with an increased MMP-1 and MMP-9 expression analyzed at the time of surgery, which indicates that the biological environment at the time of surgery may directly influence the healing outcome [36]. Additionally, TIMPs are not only inhibitors of MMPs, but are also able to regulate the activation of MMPs [14]. This might also explain the increased amounts of TIMPs and underlines the complex process of interaction between MMPs and TIMPs.
With the knowledge that an imbalance between MMPs/TIMPs in a tissue may challenge its healing capacity, it can be hypothesized that the use of MMP inhibitors could improve tendon healing. This hypothesis was examined previously by injecting substance P, an inhibitor of endopeptidases, in operative Achilles tendon repair in rats, which resulted in improved biomechanical properties [37]. Other studies used the MMP inhibitors doxycycline and α-2-macroglobulin for the repair of SSP tendons in rats and found decreased collagen degradation and improved collagen organization and biomechanical competence of regenerated tissues [38,39]. However, broad-spectrum MMP inhibitors as therapeutic agents have to be used with caution. Since MMPs are involved in various other functions in the body, they might cause undesired side effects. The goal would therefore be to develop more selective MMP inhibitors [40]. Limitations of the Study TLCs were cultured in medium containing 10% heat-inactivated FCS. FCS contains growth factors as well as some MMPs, which might have influenced the MMP and TIMP expression and secretion in the TLCs. However, the FCS was heat-inactivated, which reduces the growth factor concentration as well as MMPs, as described for human plasma samples [41]. Furthermore, all cells were cultured identically, which should have affected all cells in the same manner. Culturing primary cells in 2D culture is always a critical point, due to the risk of differentiation. Analyzing RNA directly isolated from the tendon tissue would avoid this problem, but is very challenging because of the limited biopsy size and RNA available. Therefore, only cells at very low passages (passages 1 and 2) were used to minimize differentiation problems. Mazzocca et al. described that tenocytes cultured in 2D culture can be used within the first three passages until a phenotypic drift occurs [42].
In the present study, we were not able to fully ensure that pure tenocyte cultures were used for the experiments. It might be possible that the presence of immune cells such as macrophages or lymphocytes could have affected the results of the study, because they are also able to express MMPs and TIMPs. However, as we could show previously, the tenocyte cultures isolated with the same method were characterized by flow cytometry analysis and expressed hematopoietic markers (CD11b, CD14, CD19, CD34, CD45) in less than 2% of cells [18][19][20]. Therefore, the presence of immune cells in the tenocyte cultures can be nearly ruled out. Data regarding the baseline of MMP and TIMP expression in cells of intact tendons would be helpful additional information to validate the findings of the present study. However, SSP samples from age-matched patients (43-76 years) will never be totally intact, as RC tendons often undergo degenerative changes with age, which might have an effect on the MMPs and TIMPs as well. As it is understood that mechanical stimulation plays an important role in the MMP/TIMP expression, the static 2D cell culture applied in this study might have had an effect on the MMP and TIMP expression and secretion. Furthermore, knowing the movement history of the RC patients prior to surgery could improve our study. However, this is not possible, but it can be speculated from the MRI data of the SSP muscle that patients with a high MFI have a longer or more severe pain history with decreased movement. In addition, it was not possible to examine the period of time when the RC tear initially occurred. This is understandable, because RC tears are mostly of a degenerative nature, which is a slow process. Even if the patients above 50 years of age had an injury, a degenerative pathology mostly preceded it [43]. However, there might be a correlation between the time period and the MFI as well.
Within the present study, we concentrated on analyzing the most described MMPs and the TIMPs. However, other proteases also play a role in extracellular matrix remodeling and therefore tendon healing, such as cathepsins, a disintegrin and metalloproteases (ADAMs) or their subgroup, a disintegrin and metalloprotease with thrombospondin motifs (ADAMTS). Therefore, the present study can only give a limited insight into the mechanisms of tendon degeneration. Tendon Material TLCs were isolated from SSP tendon biopsies from 16 male and 14 female donors undergoing arthroscopic or open shoulder surgery. Biopsies were obtained (3-5 mm) from the proximal edge of the torn tendon. All patients gave their written informed consent and the local ethics committee of the Charité-Universitaetsmedizin Berlin authorized the anonymous use of tendon samples, which would otherwise be discarded (Ethic number: EA1/060/09). Cell Isolation and Culture TLCs were isolated by collagenase digestion as described previously. The previous characterization study revealed a distinct tenocyte phenotype for cells isolated using this method [48]. Cells were cultured with growth medium (DMEM/Ham's F12 with 10% heat-inactivated FCS and 1% Penicillin/Streptomycin, all Biochrom AG, Berlin, Germany) at 37 °C, with 95% humidity and 5% carbon dioxide, with a medium change three times per week. When TLCs reached a minimum of 5 × 10⁵ vital cells, they were cryopreserved until use. Gene Expression Analysis Cells from each donor (passage 1 or 2) were seeded in three wells of a 6-well plate and cultured under standard conditions for 7 days until they were 80%-90% confluent. RNA was isolated from the cells using the NucleoSpin RNA II Kit (Macherey Nagel, Düren, Germany) according to the manufacturer's instructions. RNA quantity and purity were analyzed with a Nanodrop ND-1000 Spectrophotometer (PeqLab Biotechnologie, Erlangen, Germany).
A total of 100 ng RNA were transcribed into complementary DNA (cDNA) using the qScript cDNA Supermix (Quanta BioSciences, Gaithersburg, MD, USA) with the Epgradient Mastercycler (Eppendorf, Hamburg, Germany). As PCR template, 1.25 ng cDNA were used for the qRT-PCR analysis. The cDNA was diluted 1:3 with Sybr Green mastermix (Quanta BioSciences) containing 10 µM of forward and reverse primer mix to a total volume of 15.5 µL. All primer sequences were designed using Primer 3 software (Freeware; Available online: http://frodo.wi.mit.edu/primer3), and were produced by Tib Molbiol, Berlin, Germany (Primer sequences see Table 3). A qRT-PCR program with an initial denaturation step for 3 min at 94 °C was used, followed by an amplification program with 40 repeated cycles (95 °C for 15 s, 64.2 °C for 45 s, 72 °C for 30 s), and a melting curve program. For all primers, amplification efficiencies were analyzed and relative gene expression calculated using the ΔCt method with efficiency correction. In previous studies 18S was found to be the most stable gene compared to other housekeeping genes in TLCs and was therefore used as reference gene. If no gene expression was measurable within amplification, the Ct value was set at 37, where only unspecific amplification or primer dimers can be expected. Multiplex ELISA The protein concentration of the four TIMPs, MMP-1 and MMP-3 in the cell culture supernatant was analyzed using Magnetic Luminex Performance Assays (TIMP Multiplex Kit, MMP base kit plus MMP-1 and MMP-3 kit, R&D Systems, Abingdon, UK) according to the manufacturer. Briefly, cell culture supernatants from three 6-wells were pooled. Standards and pooled supernatants were measured in duplicates. Medium with 10% FCS served as negative control. Supernatants were diluted 1:4 for TIMP assay and used undiluted for MMP-1/MMP-3 assay. An automated magnetic washing device (Bio-Plex ® Pro II, BioRad Laboratories, Munich, Germany) was used. 
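The relative gene expression values in this study come from the ΔCt method with efficiency correction against the 18S reference gene. One common formulation of that calculation (Pfaffl-style) is sketched below; the Ct and efficiency values are purely illustrative, not the study's data:

```python
def relative_expression(ct_target, ct_ref, e_target=2.0, e_ref=2.0):
    """Delta-Ct relative expression with efficiency correction:
    the target quantity E_t**(-Ct_t) normalized to the reference
    quantity E_r**(-Ct_r). E = 2.0 corresponds to 100% efficiency."""
    return (e_target ** -ct_target) / (e_ref ** -ct_ref)

# Illustrative: a target gene at Ct 22 versus 18S at Ct 10,
# both with perfect amplification efficiency, gives 2**-12.
rel = relative_expression(22.0, 10.0)
```

Measured amplification efficiencies (as determined per primer in this study) would replace the default 2.0 values, which is exactly what the efficiency correction contributes over the plain ΔCt method.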
The assays were analyzed using the Bio-Plex ® 200 System (BioRad) with the Bio-Plex Manager software. Protein concentrations of the TIMPs and MMPs were normalized to the total protein content in the supernatant analyzed by Coomassie Plus assay (Thermo Fisher Scientific, Dreieich, Germany). Sandwich ELISA TIMP-1 concentration was further analyzed by conventional sandwich ELISA (TIMP-1 DuoSet ELISA, R&D Systems). Protein concentration of MMP-2 was analyzed using Total MMP-2 Quantikine ELISA (R&D Systems) according to the manufacturer's instructions. Standards and pooled cell culture supernatants were measured in duplicates. Medium with 10% FCS served as negative control. Supernatants were diluted 1:4 for MMP-2 ELISA and 1:200 for TIMP-1 ELISA. Protein concentrations of TIMP-1 and MMP-2 were normalized to total protein content (Coomassie assay). Statistics Statistical analysis was performed using SPSS 20 (IBM, Armonk, NY, USA). For the analysis of significant differences between two cohorts, the radiographic scores were grouped as follows: the cut off for MFI and tendon retraction was defined between grade 1 and 2, and the cut off for tear size between grade 2 and 3. An age cut off was defined at 65 years in accordance with the literature [6,7]. The Mann-Whitney U test was performed to analyze significant differences between the defined groups. The box plots represent the median with 25th to 75th percentile and the whiskers are placed at 1.5 times the interquartile range below/above the first/third quartile of the box. Spearman's rho test was used for correlation analysis and is given as Spearman's rank correlation coefficient (rs). Statistical dependence between two variables was only considered for correlation values above 0.4. A rs of 0.4-0.6 was considered mild, whereas a rs above 0.6 was considered as strong correlation. The level of significance was set at p < 0.05.
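The group comparisons described above use the Mann-Whitney U test. The U statistic itself is just a count of pairwise "wins" between the two groups, with ties counted as half; a minimal sketch with invented secretion values (not the study's measurements):

```python
def mann_whitney_u(a, b):
    """U statistic for group a versus group b: the number of pairs
    (x in a, y in b) with x > y, counting ties as 0.5."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Invented protein-secretion values for two donor groups (illustrative only)
under_65 = [1.1, 0.9, 1.3, 1.0]
over_65 = [1.8, 2.1, 1.6, 2.4]
u = mann_whitney_u(over_65, under_65)  # 16.0: every 'over 65' value wins
```

A U near n1·n2 (here 4·4 = 16) or near 0 indicates strong separation between the groups; the p-value is then read from the U distribution, which is what SPSS (or `scipy.stats.mannwhitneyu`) does internally.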
Conclusions

In the present in vitro study, TLCs from donors with higher age (>65 years) or a degenerative status of the tendon showed increased mRNA and protein levels mainly of the gelatinases MMP-2 and MMP-9, but the TIMPs were also upregulated in these donor groups. The interaction between MMPs and TIMPs is a complex process, since TIMPs are not only inhibitors of MMPs but are also able to regulate the activation of MMPs. The results of the present study show that MMPs and TIMPs might play an important role in degenerative tendon pathologies, but also highlight the need for more knowledge in this research area.
Temporal Relationship between Diet-Induced Steatosis and Onset of Insulin/Leptin Resistance in Male Wistar Rats

Rats fed a high-fat-high-sucrose (HFHS) diet are known to manifest metabolic syndrome, including hyperinsulinemia, hyperleptinemia, hyperglycemia, diabetic dyslipidemia, and hepatic steatosis. The aim of the current study is to determine the temporal relationships between the development of hepatic steatosis and the onset of insulin and leptin resistance in the hypothalamus and liver of male Wistar rats (six weeks of age) fed chow or HFHS diet for up to 8 weeks. Fasting plasma glucose, lipids/lipoproteins, insulin and leptin levels were quantified, histopathologic scores of hepatic steatosis and inflammation were assessed, and the responses of common checkpoints of insulin and leptin signalling responsible for lipogenesis and gluconeogenesis were analyzed. In addition, acute insulin or leptin administration was performed at different stages of HFHS dieting to determine the responsiveness of the respective signalling pathways. Hyperinsulinemia, hyperglycemia, dyslipidemia, and an increased homeostasis model assessment of basal insulin resistance occurred 1 week after HFHS dieting, coinciding with upregulation of suppressor of cytokine signalling 3 in both hypothalamus and liver. However, hepatosteatosis, accompanied by increased expression of sterol regulatory element binding protein 1c and phosphoenolpyruvate carboxykinase, did not manifest until 4 to 8 weeks after HFHS dieting. Lowered insulin sensitivity (shown by decreased insulin receptor substrate 1 and protein kinase B phosphorylation) occurred approximately 2 weeks prior to leptin resistance (shown by impaired signal transducer and activator of transcription 3 activation) in both the liver and hypothalamus. Acute insulin/leptin administration also demonstrated the impaired insulin or leptin signalling transduction.
These data suggest that lowered insulin sensitivity and leptin resistance occurred at least 2-3 weeks earlier than the manifestation of hepatosteatosis in rats fed the HFHS diet.

Introduction

Nonalcoholic fatty liver disease (NAFLD) has become a common form of chronic liver disease worldwide, affecting one third of populations [1]. Accumulation of lipid, mainly triglyceride (TG), is considered the main feature of NAFLD, with steatosis being the pathological status in clinics. The pathology of NAFLD encompasses a spectrum of abnormalities, ranging from simple steatosis to nonalcoholic steatohepatitis, fibrosis, and eventual cirrhosis [2,3]. In addition to these hepatic abnormalities, NAFLD is also a contributing factor of metabolic syndrome, a cluster of lipid and lipoprotein disorders closely associated with type 2 diabetes and premature cardiovascular disease [4]. Although the mechanisms underlying the pathogenesis of NAFLD are not fully understood, available data from human and animal studies have indicated a link between insulin and/or leptin resistance and NAFLD [5][6][7][8][9][10]. Insulin signalling sensitizers (e.g., pioglitazone and metformin) that are commonly used in treating diabetes have proven clinically beneficial in improving biochemical indices in NAFLD patients [11]. Similarly, improving insulin sensitivity can also lead to decreased hepatic steatosis in animals [12]. Moreover, strategies that improve both insulin and leptin sensitivity have demonstrated promising outcomes in attenuating NAFLD [13]. The adipose-derived hormone leptin is known to act on the hypothalamus to reduce food intake and increase energy expenditure. Studies with animals fed a high-fat diet invariably show hyperleptinemia and hyperinsulinemia under diet-induced-obesity (DIO) conditions [14]. Hyperleptinemia exerts no anorexic effect in DIO animals, suggesting a state of leptin resistance in the hypothalamus [15].
Leptin resistance in the hypothalamus is associated with up-regulation of suppressor of cytokine signalling 3 (SOCS3) and subsequent down-regulation of signal transducer and activator of transcription 3 (STAT3) activation [16,17]. Activation of STAT3 is achieved by phosphorylation catalyzed by Janus kinase 2 (JAK2), a tyrosine kinase associated with OBR [18], whereas SOCS3 participates as a suppressor in the negative feedback loop attenuating STAT3-mediated signaling [19]. SOCS3 attenuates leptin signaling through several mechanisms, including direct binding to OBR and/or tyrosine-phosphorylated JAK2 and thus inactivating JAK2 [20]. Recent experimental data have suggested that leptin can act directly on the liver, influencing the insulin signalling pathway in addition to the leptin signalling pathway, and therefore hepatic lipid and lipoprotein metabolism [21,22]. Functional OBR, both the long and the short forms (designated OBR L and OBR S, respectively), are expressed in the liver [23]. Gain or loss of hepatic leptin action in mice has a profound impact on phosphatidylinositol 3 kinase (PI3K) activity and hepatic steatosis (i.e., TG content) [24,25]. Thus, leptin may directly exert an effect on lipid and glucose metabolism in the liver through the PI3K pathway. The common PI3K pathway shared by insulin and leptin signalling features a close cross-talk between the two hormonal regulations of lipid and glucose metabolism in both central and peripheral tissues. Centrally, insulin has been shown to act on the same key areas in the brain as leptin does [26]. Intracerebroventricular infusion of insulin in mice affects both food intake and lipid metabolism in peripheral tissues, whereas brain-specific disruption of the insulin receptor gene in mice resulted in disorders similar to those observed in leptin-deficient ob/ob mice [27].
In the liver, inactivation of hepatic SOCS3 resulted in increased insulin sensitivity and lipogenesis [28], whereas expression of recombinant STAT3 in the liver markedly attenuated hyperglycemia and hyperinsulinemia in diabetic mice [29]. Liver-specific inactivation of the insulin receptor in mice resulted in dyslipidemia and an increased risk of atherosclerosis [30]. The present study aimed to determine changes in the insulin/leptin signalling molecules during the development of hepatosteatosis. We employed the HFHS diet-induced hyperinsulinemia, hyperleptinemia, and diabetes rat model to delineate a temporal relationship between the development of NAFLD and the onset of insulin/leptin resistance. The data indicate that lowered insulin sensitivity occurred earlier than leptin resistance in both liver and hypothalamus. Uncontrolled hepatic glucose production and upregulation of lipogenesis were detected at late stages of HFHS dieting, which was associated with pathological manifestation of NAFLD.

Animals

Male Wistar rats (6 weeks old) were obtained from SLAC Animal Laboratories (Shanghai, China) and housed under a standard 12-h light-dark cycle (lights on at 7:00 AM) with access to food and water ad libitum. After approximately 1 week of acclimation, rats were placed on chow or HFHS diet (SLAC Animal Laboratories) for up to 8 weeks. The energy content of the chow diet is 4.15 kcal/g, and 100 g chow contains (in grams): casein, 20; starch, 66.07; soybean oil, 4; cellulose, 5; mineral mix, 3.5; vitamin mix, 1; L-cystine, 0.18; and choline bitartrate, 0.25. The energy content of the HFHS diet is 5.13 kcal/g, and 100 g HFHS food contains (in grams): casein, 20; starch, 34.07; sucrose, 15; lard, 15; soybean oil, 4; cellulose, 5; mineral mix, 3.5; vitamin mix, 1; L-cystine, 0.18; choline bitartrate, 0.25; and cholesterol, 2. Rats were individually housed in a pathogen-free environment, and body weight and food intake were measured weekly.
For the acute insulin/leptin treatment experiment, rats fed chow or HFHS diets were intraperitoneally injected with recombinant insulin (0.75 U/kg body weight) or recombinant rat leptin (0.6 mg/kg body weight), and liver samples were collected 30 min after the injection. The animal protocols were performed in accordance with the guidelines and approval of the Animal Experiment Ethics Committee at Shanghai University of Traditional Chinese Medicine.

Oral Glucose Tolerance Test

Rats were fasted for 6 h after the start of the light cycle and then orally administered glucose (1.5 g/kg body weight). Tail-vein blood samples were collected at baseline and at indicated time intervals (15, 30, 60, 90, and 120 min) after glucose treatment. Blood glucose levels were determined with a diabetes monitoring strip (Lifescan One Touch, IN).

Hepatic Histology Assessment

Liver sections were stained with hematoxylin and eosin (HE) and Oil-Red O (neutral lipid); the procedures were performed according to previously described methods [31]. Briefly, the liver tissues were fixed in 10% neutral buffered formalin for 24 h, dehydrated, and embedded in paraffin; the sections were cut, deparaffinized, and stained with HE. Snap-frozen tissues were placed in optimal cutting temperature compound and then sectioned and stained with Oil-Red O buffer. Images were taken under an Olympus IX71 inverted microscope (Tokyo, Japan). Hepatic steatosis was graded based on the extent of lipid accumulation: <5% (score 0), 5-33% (score 1), >33-66% (score 2), and >66% (score 3) according to the histopathologic criteria specified previously [32].

Measurement of Hepatic Lipid Content

Liver TG and cholesterol were quantified as described previously [33]. Briefly, liver tissue (200 mg) was homogenized in 3 ml of an ethanol-acetone (1:1) mixture. The homogenate was extracted overnight at 4°C and centrifuged for 15 min at 3,000 rpm at 4°C.
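The OGTT sampling scheme above yields a glucose-time curve per animal; the OGTT-AUC values compared later in the Results are conventionally computed with the trapezoidal rule (the paper does not state its exact AUC method, so that is assumed here). A minimal sketch with hypothetical readings:

```python
# Sampling times used in the study: baseline plus 15, 30, 60, 90,
# and 120 min after the glucose load. Glucose values are illustrative.
times = [0, 15, 30, 60, 90, 120]            # minutes
glucose = [5.0, 8.5, 10.2, 9.0, 7.5, 6.0]   # mmol/L

def trapezoid_auc(t, y):
    """Total area under the glucose-time curve (trapezoidal rule)."""
    return sum((t[i + 1] - t[i]) * (y[i] + y[i + 1]) / 2
               for i in range(len(t) - 1))

auc = trapezoid_auc(times, glucose)
print(f"OGTT AUC = {auc:.1f} mmol/L x min")  # 979.5 for these values
```

Group-level AUC values computed this way are then compared between the chow and HFHS cohorts at each dieting time point.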
The organic layer was removed, and TG and cholesterol were measured using commercial kits (Kangtai Bioengineering Institute, Beijing, China).

Tissue Sampling and Western Blot Analysis

After euthanasia and blood collection, the hypothalamus and liver were removed, immediately frozen in liquid nitrogen, and stored at -80°C. Frozen tissues were homogenized in Tissue Protein Extraction Reagent (Pierce Biotechnology, Inc., Rockford, USA), with the addition of protease inhibitor (Roche, Nutley, USA) and phosphatase inhibitor cocktail (Roche, Nutley, USA). Protein concentrations were determined using the bicinchoninic assay reagents and the microbicinchoninic assay method (Pierce Biotechnology, Inc., Rockford, USA). For Western blot analysis, 100 μg of protein were fractionated by SDS-PAGE (8-12% gradient gel) and transferred onto a PVDF membrane (Bio-Rad, Hercules, CA). Membranes were blocked with 5% skim milk in Tris-buffered saline and probed with target primary and secondary antibodies (see S1 Table for detailed information on the antibodies used in the present studies). The targeted proteins were detected with an ECL Detection Kit (Millipore, Billerica, USA); images were taken and quantified with the Gel-Pro system (Tanon Technologies). For Western blot analysis, the amount of protein loaded was confirmed by the Bradford method, and equal loading was verified by staining with Ponceau S reagent (Sigma Chemical Co.) and by determining the signal of beta actin.

Statistical Analysis

For each outcome measure, a one-way analysis of variance was performed (SPSS 18.0) for each animal group studied (n = 6-8). A significant main effect (P < 0.05 or P < 0.01) was followed up with Student-Newman-Keuls post hoc comparisons. Values are presented as means ± standard error of the mean (SE), and P < 0.05 denotes a statistically significant difference.
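The one-way ANOVA step above can be sketched in Python. SciPy provides the omnibus F test; the Student-Newman-Keuls post hoc used by the authors is not available in SciPy, so only the main effect is shown here, on made-up group values:

```python
from scipy import stats

# Hypothetical outcome values for three dieting time points (n = 6 each);
# the real per-animal data are not given in the paper.
week0 = [1.1, 0.9, 1.0, 1.2, 0.8, 1.0]
week4 = [1.6, 1.8, 1.5, 1.7, 1.9, 1.6]
week8 = [2.4, 2.2, 2.6, 2.5, 2.3, 2.1]

# One-way ANOVA across the groups; a significant main effect (p < 0.05)
# would then be followed up with post hoc pairwise comparisons.
f, p = stats.f_oneway(week0, week4, week8)
print(f"F = {f:.1f}, p = {p:.2e}")
```

With clearly separated group means like these, the omnibus test is significant and post hoc comparisons would be warranted.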
Changes in serum metabolic parameters upon HFHS dieting

As expected, HFHS dieting markedly induced a diverse range of metabolic abnormalities, including hypertriglyceridemia, hypercholesterolemia, hyperglycemia, hyperinsulinemia, as well as hyperleptinemia, and most of these abnormalities occurred as early as 1 week and remained throughout the entire HFHS dieting (Table 1). The hypercholesterolemia in HFHS diet-fed rats was associated with increased LDL-c and decreased HDL-c (Table 1), typical phenotypes of diabetic dyslipidemia. The hallmarks of lowered insulin sensitivity, including hyperinsulinemia, hyperglycemia, and increased HOMA-IR, were also manifested 1 week after HFHS dieting, indicative of a rapid response of the animals in attenuating insulin signaling. The oral glucose tolerance test (OGTT) did not show a significant difference between HFHS diet- and chow diet-fed animals at 1 week (Table 1). However, prolonged HFHS dieting (4 and 8 weeks) resulted in a trend of increased OGTT-AUC values as compared to that of chow diet-fed rats (0 week) (Table 1). These results suggest that HFHS dieting contributes to the compromised insulin sensitivity in these rats. Hyperleptinemia occurred at 2 weeks of HFHS dieting (Table 1), one week after the onset of hyperinsulinemia. Food intake increased in both dietary groups as the rats grew, and the HFHS diet-fed rats were profoundly hyperphagic at the 8th week despite hyperleptinemia (Table 2). Serum FFA levels were comparable between HFHS and chow dieting during the first 4 weeks and increased by 30% at the end of the 8th week of HFHS dieting (Table 1), suggesting that the lipolytic function of adipose tissue was not compromised until the late stage of insulin/leptin resistance. The increased fasting FFA concentration at the late stage of HFHS dieting was not associated with an increase in plasma TG. Rather, fasting plasma TG gradually decreased from the 2nd to the 8th week of HFHS dieting (Table 1).
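HOMA-IR, cited above as a hallmark of lowered insulin sensitivity, is derived from fasting glucose and insulin. The paper does not spell out its computation, so the standard Matthews formula is assumed in this sketch, with illustrative fasting values:

```python
def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """Homeostasis model assessment of insulin resistance.

    Standard Matthews formula (fasting glucose in mmol/L times fasting
    insulin in uU/mL, divided by 22.5); assumed here, since the paper
    does not state its exact computation.
    """
    return glucose_mmol_l * insulin_uU_ml / 22.5

# Illustrative values: a chow-fed baseline versus a hyperglycemic,
# hyperinsulinemic animal after HFHS dieting.
print(homa_ir(5.0, 10.0))   # chow baseline
print(homa_ir(7.5, 25.0))   # HFHS: markedly higher HOMA-IR
```

A simultaneous rise in both fasting glucose and fasting insulin, as reported at 1 week of HFHS dieting, multiplies through to a sharply increased index.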
In a separate experiment where the rats were fed the HFHS diet for up to 12 weeks, fasting plasma TG concentration was further decreased to a level that was lower than that in control animals (chow: 1.62 ± 0.80 mM, HFHS: 0.98 ± 0.16 mM; P < 0.01). Assuming that fasting plasma TG concentration reflects secretion of hepatic very low density lipoproteins (VLDL), the decrease in plasma TG is probably indicative of compromised hepatic VLDL production upon prolonged HFHS dieting.

Changes in hepatic metabolic parameters upon HFHS dieting

The HFHS dieting also markedly induced hepatic steatosis; elevated liver-associated TG and cholesterol were observed throughout the entire 8-week feeding period (Table 2). Notably, a significant increase in hepatic TG and cholesterol was detected as early as ½ week of HFHS feeding (TG: 39.1 ± 1.25 versus 15.5 ± 0.25 mg/g; TC: 15.71 ± 0.92 versus 4.97 ± 0.86 mg/g, P < 0.01). Hepatosteatosis in HFHS-fed rats was associated with hepatomegaly; thus, the liver-to-body weight ratio was increased as early as 1 week after HFHS dieting, even though there was no significant change in body weight between the two dietary groups until the end of the 8th week (Table 2). Although biochemical analysis of hepatic TG and cholesterol showed a significant increase upon HFHS dieting, histological analysis did not suggest hepatosteatosis at the 1st or 2nd week of dieting (Fig. 1A and 1B). The steatosis score, a histological scoring system for NAFLD, of HFHS-fed liver was less than 1, and the extent of hepatocytes with visible lipid accumulation was less than 5% (Table 2). Massive macrovesicles were observed in liver sections of rats fed the HFHS diet for 4 to 8 weeks (Fig. 1A and 1B), resulting in pathological steatosis scoring (Table 2). Significant inflammatory cells were not found in liver tissues even in rats fed the HFHS diet for 8 weeks (Fig. 1A). Determination of liver enzymes (e.g., ALT and AST) also showed no changes between HFHS and chow diet-fed rats (Table 2).
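The steatosis scoring used above maps the fraction of hepatocytes with visible lipid accumulation onto grades 0-3; the thresholds below are taken directly from the Methods, while the helper function and example percentages are illustrative:

```python
def steatosis_score(lipid_percent):
    """Histological steatosis grade from the percentage of hepatocytes
    with visible lipid accumulation, per the cited criteria:
    <5% -> 0, 5-33% -> 1, >33-66% -> 2, >66% -> 3."""
    if lipid_percent < 5:
        return 0
    if lipid_percent <= 33:
        return 1
    if lipid_percent <= 66:
        return 2
    return 3

# Early HFHS livers (< 5% of hepatocytes affected) still score 0 despite
# biochemically elevated TG; massive macrovesicular change scores 2-3.
print([steatosis_score(p) for p in (3, 20, 50, 80)])  # [0, 1, 2, 3]
```

This makes explicit why hepatosteatosis was biochemically detectable weeks before it became "pathological" by the histological score.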
Together, these results suggested that (i) HFHS dieting-induced NAFLD was not detected histologically until the 4th to 8th week of HFHS dieting, and (ii) there was an absence of overt liver damage or inflammation during the 8-week HFHS dieting. However, accumulation of hepatic TG and cholesterol could be detected biochemically as early as ½ week of HFHS dieting, indicating that alterations in lipid metabolism occurred in the liver at least 2 weeks before pathological NAFLD diagnosis.

Up-regulation of SOCS3 in both hypothalamus and liver upon HFHS dieting

The abnormalities in lipid/lipoprotein and glucose metabolism upon HFHS dieting were associated with activation of counter-regulatory signaling pathways, including activation of SOCS3. HFHS dieting rapidly induced expression of SOCS3, a negative feedback regulator of leptin and insulin signaling in both central and peripheral target tissues [34]. Western blot analysis revealed that the level of SOCS3 in the hypothalamus (Fig. 2A) and the liver (Fig. 2B) was markedly increased as early as 1 week of HFHS dieting. Elevated hypothalamic and hepatic SOCS3 expression was associated with down-regulation of OBR (Fig. 2A and 2B), suggesting an induction of leptin resistance in the respective tissues. However, down-regulation of STAT3 phosphorylation (the JAK2 effector protein) occurred much later during HFHS dieting as compared to SOCS3 up-regulation. In the hypothalamus, decreased STAT3 phosphorylation was observed between the 6th (not shown) and 8th week (Fig. 2A) after HFHS dieting. The late onset of STAT3 inactivation shown in the present study was in agreement with previous observations, where attenuated hypothalamic JAK2/STAT3 signaling did not occur until the 5th to 6th week of high-fat diet [35]. The normal STAT3 activity during the first 4 weeks of HFHS dieting has thus been dubbed the "early stage" of leptin resistance [36].
SOCS3 has also been shown to attenuate hepatic insulin signaling by binding to the insulin receptor (INSR), interfering with IRS phosphorylation [37], or promoting ubiquitin-mediated IRS degradation [38]. Indeed, rapid up-regulation of SOCS3 in the liver upon HFHS dieting (as early as 1 week) was associated with decreased IRS1 phosphorylation (Fig. 2B). STAT3 phosphorylation in the hypothalamus decreased at 8 weeks, and decreased hepatic STAT3 phosphorylation also occurred at the 4th week of HFHS dieting (Fig. 2B). These results suggest that hypothalamic and hepatic leptin resistance was not manifested immediately upon hyperleptinemia in HFHS-fed rats, and alteration in STAT3 signaling probably exerts an effect on lipid or glucose metabolism predominately at the late stage of leptin resistance. It has been shown previously that down-regulation of OBR could also be achieved by PTP1B [20]. Throughout the entire HFHS dieting, there was no change in PTP1B levels in the liver as compared to that in chow diet controls (Fig. 2B). Thus, PTP1B may not play a role in the observed down-regulation of hepatic OBR under HFHS conditions. However, in the hypothalamus, up-regulation of PTP1B was observed at the 4th and 8th week of HFHS dieting (Fig. 2A), which might contribute to down-regulation of hypothalamic OBR.

Changes in signalling pathways involved in gluconeogenesis and lipogenesis upon HFHS dieting

The combined impairment in insulin and leptin signaling converged to produce metabolic changes in PI3K, Akt, and FoxO1 [20]. In the liver, down-regulation of PI3K, Akt phosphorylation, and FoxO1 was detected as early as 1 week of HFHS dieting (Fig. 3A), coinciding with up-regulation of SOCS3 (Fig. 2B). In the hypothalamus, down-regulation of PI3K and Akt did not occur until the 4th and 8th week of HFHS dieting (Fig. 3B), coinciding with down-regulation of hypothalamic STAT3 (Fig. 2A).
These results, in agreement with what was reported previously [39], suggest that impairment in the IRS1/PI3K/Akt pathway occurred earlier than that in the STAT3/SOCS3 pathway during insulin/leptin resistance. The present results also suggest that the impairment in the IRS1/PI3K/Akt pathway occurred earlier in the liver than in the hypothalamus. The observation of a marked decrease in hepatic FoxO1 upon HFHS dieting (Fig. 3A) was unexpected. Normally, upon PI3K/Akt activation, FoxO1 is phosphorylated and excluded from nuclei for degradation [40]. The reason for the observed down-regulation of FoxO1 in the face of decreased PI3K/Akt activation remains to be explained, although leptin has been shown to decrease FoxO1 expression in the hypothalamus through PI3K [41]. RT-PCR analysis of gene expression of INSR, IRS1, Akt, and FoxO1 at the 1st week of HFHS dieting showed that there was no difference in their mRNA levels in the livers as compared with chow diet controls (data not shown). These results suggest that the impaired activation of insulin signaling was unlikely to be the cause of decreased FoxO1 protein. Rather, the decrease in FoxO1 was likely attributable to accelerated posttranslational degradation. Up-regulation of PEPCK and SREBP1c was observed in the liver of HFHS-fed rats (Fig. 3C), consistent with the occurrence of lowered hepatic insulin sensitivity and leptin resistance. The precise mechanisms or transcription factors involved in PEPCK and SREBP1c expression under HFHS conditions are not entirely clear. However, pronounced up-regulation of PEPCK and SREBP1c in the liver occurred at the 4th and 8th week of HFHS dieting (Fig. 3C), suggesting that dysregulation of gluconeogenesis and lipogenesis was associated with inactivation of both the PI3K/Akt and STAT3 pathways. There was no change in the expression of LXRα between HFHS and chow diet conditions (Fig. 3C), suggesting that LXRα did not contribute to up-regulation of SREBP1c under the current experimental conditions.
Consideration was also given to the possibility that hyperglycemia under HFHS conditions was due to diminished hepatic glycogen synthesis. To test this possibility, we measured the GSK3 phosphorylation status. As shown in Fig. 3C, phospho-GSK3α (Ser21) and phospho-GSK3β (Ser9) were both markedly elevated in the liver of HFHS diet-fed rats, indicating suppressed activities in glycogen synthesis. Thus, the unchanged OGTT (Table 1), together with elevated PEPCK and GSK3β phosphorylation (Fig. 3C), suggests that HFHS dieting-induced hyperglycemia in the rats is most likely attributable to hepatic glucose production and not glucose utilization.

Hepatic response to acute insulin/leptin

The impaired hepatic insulin/leptin sensitivity was further determined using rats acutely treated with insulin and leptin, respectively. Hepatic Akt2 phosphorylation under chow diet conditions was markedly increased upon insulin treatment, and such an insulin response was diminished under HFHS diet conditions (Fig. 4A). Likewise, the marked stimulation of PI3K p85/p55 expression by insulin under chow diet conditions was attenuated after 2 weeks of HFHS feeding (Fig. 4A). These data suggest that HFHS diet feeding strongly impaired hepatic insulin signaling, which may contribute to the overall lowered insulin sensitivity in these animals. Acute leptin administration resulted in increased hepatic STAT3 phosphorylation under chow diet conditions. Attenuation of leptin-induced STAT3 phosphorylation under HFHS diet conditions was not obvious at the 1st and 2nd week but became significant at the 4th and 8th week of dieting (Fig. 4B). Additionally, HFHS diet feeding markedly diminished leptin-induced PI3K expression as compared with that under chow diet conditions (Fig. 4B).
Together, these data confirm that both lowered insulin and leptin sensitivity occurred during the development of hepatic steatosis upon HFHS dieting, and suggest that the impairment in the hepatic PI3K/Akt pathway may occur earlier than that in the hepatic STAT3/SOCS3 pathway.

Discussion

The present study utilized a well-studied male Wistar rat model to delineate a temporal relationship between the development of clinical NAFLD, the existence of lowered insulin sensitivity, and the onset of leptin resistance in the hypothalamus and the liver, through continuous feeding with the HFHS diet for up to 8 weeks. The acute challenge with insulin or leptin also confirms that lowered insulin and leptin sensitivity occurred during the development of hepatic steatosis upon HFHS dieting (Fig. 4), further verifying the existence of impaired signaling transduction. In comparison with those fed the chow diet, the HFHS diet-fed rats manifested metabolic syndrome as early as 1 week of dieting. However, the clinical manifestation of NAFLD in these rats, using a histological scoring system developed for humans, did not occur until the end of the 4th to 8th week of dieting. At the late stage of HFHS dieting, massive hepatomegaly was apparent, accompanied by significantly elevated fasting plasma FFA concentrations (e.g., 8-week data in Tables 1 & 2), suggesting compromised FFA storage in peripheral tissues and increased flux of FFA into the liver. The ability to secrete TG from the liver, presumably in the form of VLDL (as assessed by the fasting plasma TG concentrations), was starting to deteriorate at the late stage of HFHS dieting (Table 1), probably also contributing to the progression of hepatosteatosis as reported previously [42].
Moreover, uncontrolled expression of lipogenesis and gluconeogenesis programs (as shown by up-regulation of SREBP1c, GSK3β phosphorylation, and PEPCK, respectively) in the face of hyperinsulinemia and hyperleptinemia further exacerbates diabetic dyslipidemia upon prolonged HFHS dieting. Hepatic gluconeogenesis and lipogenesis are regulated by both the insulin and leptin signaling pathways. Indeed, up-regulation of PEPCK and GSK3β phosphorylation was readily observable in the present study throughout the 8-week HFHS feeding. Hepatic glycogen synthase is regulated by phosphorylation of GSK3β, and GSK3 inhibitors can stimulate hepatic glycogen synthase [43]. The increased GSK3β phosphorylation in our study indicated that reduced hepatic glycogen synthase activity also contributes to the hyperglycemia. In addition, the increased fasting glucose levels may also be caused by diminished glucose utilization in peripheral tissues. Impaired insulin signalling could induce a diminution of glucose transporters (GLUTs), limiting glucose uptake and contributing to the hyperglycemia [44]. Detailed analysis of the main checkpoints of the respective insulin and leptin signaling pathways showed that the occurrence of lowered insulin sensitivity was at least 2 weeks earlier than leptin resistance in both hypothalamus and liver. Specifically, attenuation of STAT3 activation did not occur until prolonged HFHS dieting (the 4th week in the liver and the 8th week in the hypothalamus), even in the presence of hyperleptinemia (which occurred at 2 weeks of HFHS dieting) and up-regulation of SOCS3 (which occurred as early as the 1st week of HFHS dieting). Although it is unclear how the liver or hypothalamus maintains relatively normal STAT3 activation during the early stage of insulin/leptin resistance, an anti-steatogenic effect of hepatic STAT3 has been suggested previously [29,45]. The activation of STAT3 has been shown to play a role in the suppression of PEPCK [46] and SREBP1 expression [47].
The observed elevation in PEPCK and SREBP1c expression at the late stage of HFHS dieting (Fig. 3C), coinciding with down-regulation of hepatic STAT3 (Fig. 2B), is consistent with an anti-diabetic, anti-steatogenic role of STAT3. Late suppression of STAT3 phosphorylation in mice fed a high-fat diet has been reported previously [28,39,48]. Based on these results, it is tempting to speculate that the cellular maintenance of STAT3 activation may represent a hepatic protective mechanism that dampens gluconeogenesis and lipogenesis during the early stage of insulin/leptin resistance. It is well established that STAT3 activation in leptin signaling is attenuated by SOCS3, and a decrease in STAT3 activation has been considered an indicator of leptin resistance. SOCS3 also attenuates insulin signaling [37,38], representing a cross-talk between the two signaling pathways. The present data showed that the earliest alteration upon HFHS dieting (at 1 week) was up-regulation of SOCS3 in hypothalamus and liver, which coincided with the onset of hyperinsulinemia, hyperglycemia, elevated hepatic TG and cholesterol, down-regulation of OBR, and attenuated insulin receptor activation. However, as discussed above, it was noted in the present study that the rapidly up-regulated SOCS3 expression in hypothalamus and liver (at the early stage of HFHS dieting) was not immediately linked to down-regulation of STAT3 activation. Available experimental data indicated that both hyperinsulinemia and SOCS3 contribute to enhanced lipogenesis through SREBP1c up-regulation [47]. Expression of SOCS3 has been shown to play a central role in hepatic steatosis and insulin resistance in mice [37]. Over-expressing SOCS3 (through adenovirus-mediated gene transfer) resulted in up-regulation of PEPCK in mice [37]. On the other hand, STAT3 has the ability to suppress both SREBP1c and PEPCK, and thus plays a critical role in attenuating lipogenesis and gluconeogenesis [49].
These data suggest that the SOCS3 and STAT3 regulatory loop was initially uncoupled, leaving a window of opportunity for selective regulation of lipogenesis and gluconeogenesis by the two factors during the early and late stages of leptin/insulin resistance. We hypothesize that the early stage of leptin resistance, manifested by the maintenance of normal STAT3 activity in the face of up-regulated SOCS3, is probably a compensatory response to the rapidly deteriorating insulin sensitivity under HFHS dieting. In summary, using the HFHS diet-induced NAFLD rat model, we have obtained experimental evidence suggesting that the development of the early stage of NAFLD (without apparent complication of inflammation) is a consequence of uncontrolled hepatic lipogenesis and gluconeogenesis. These metabolic alterations were closely associated with altered insulin/leptin signaling in both hypothalamus and the liver, and the existence of lowered insulin sensitivity and leptin resistance occurred at least 2-3 weeks prior to the manifestation of hepatosteatosis.

Supporting Information S1
Lost but Not Least—Novel Insights into Progesterone Receptor Loss in Estrogen Receptor-Positive Breast Cancer

Simple Summary

Most breast cancers co-express estrogen receptor α (ERα) and progesterone receptor (PgR). These cancers are sensitive to endocrine therapy and, in general, have superior outcomes. However, a subset of tumors expresses ERα but loses expression of PgR through various mechanisms. The processes driving the loss of PgR may cause resistance to hormonal treatment and a more aggressive clinical course. The current review summarizes current knowledge on the biology of ERα(+)/PgR(−) breast cancer and discusses the associations between molecular mechanisms and clinical characteristics.

Abstract

Estrogen receptor α (ERα) and progesterone receptor (PgR) are crucial prognostic and predictive biomarkers that are usually co-expressed in breast cancer (BC). However, 12–24% of BCs present the ERα(+)/PgR(−) phenotype at immunohistochemical evaluation. In fact, BC may either show primary PgR(−) status (in a chemonaïve tumor sample), lose PgR expression during neoadjuvant treatment, or acquire the PgR(−) phenotype in local relapse or metastasis. The loss of PgR expression in ERα(+) breast cancer may signify resistance to endocrine therapy and poorer outcomes. On the other hand, ERα(+)/PgR(−) BCs may have a better response to neoadjuvant chemotherapy than double-positive tumors. Loss of PgR expression may be a result of pre-transcriptional alterations (copy number loss, mutation, epigenetic modifications), decreased transcription of the PGR gene (e.g., by microRNAs), and post-translational modifications (e.g., phosphorylation, sumoylation). Various processes involved in the down-regulation of PgR have distinct consequences on the biology of cancer cells. Occasionally, negative PgR status detected by immunohistochemical analysis is paradoxically associated with enhanced transcriptional activity of PgR that might be inhibited by antiprogestin treatment.
Identification of the mechanism of PgR loss in each patient seems challenging, yet it may provide important information on the biology of the tumor and predict its responsiveness to the therapy. Introduction Estrogen receptor α (ERα) and progesterone receptor (PgR) are crucial prognostic and predictive biomarkers in breast cancer (BC). Expression of steroid hormone receptors (HRs) in cancer cells justifies the introduction of endocrine therapies (ET), e.g., selective estrogen receptor modulators (SERMs), aromatase inhibitors (AIs), or selective estrogen receptor degraders (SERDs) [1]. These therapies primarily target ER, but BCs co-expressing PgR tend to show an even better response to hormonal treatment. Since the progesterone receptor gene (PGR) is dependent on ERα, a negative PgR status may indicate altered ERα signaling and impaired response to ET [2]. In the last two decades, numerous studies have addressed the prognostic significance and mechanisms of PgR loss, the genetic landscape and biology of ERα(+)/PgR(−) tumors, and the role of microRNA (miRNA) in the down-regulation of PgR. Mechanisms of PgR Negativity BC may either show a primary PgR-negative phenotype (i.e., negative PgR expression in a tumor sample assessed before systemic therapy), lose PgR expression during neoadjuvant treatment (assessed in the postsurgical specimen), or acquire a PgR-negative phenotype in local relapse or metastasis. Loss of PgR at the Genetic Level Among the HER2(−) group of tumors, the ERα(+)/PgR(−) cases show significantly lower PGR mRNA expression when compared to ER(+)/PgR(+) cancers, suggesting that in most cases the loss of PgR occurs before or during transcription [31]. At the genetic level, PgR loss might be explained by a copy number loss of the PGR gene, which was reported to occur in 27-52% of cases of BC [31]. Importantly, exogenous expression of PgR in breast cancer cells resulted in growth inhibition in an MCF-7 cell line with a heterozygous loss of the PGR gene [32].
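The receptor phenotypes discussed above are defined operationally from IHC percent-positive staining. As a hedged illustration (not code from the article), the sketch below classifies tumors using the 1% positivity cutoff common in guideline-based practice; the function name and toy cohort are hypothetical.

```python
# Illustrative sketch: deriving hormone-receptor phenotype labels from IHC
# percent-positive readings. The 1% cutoff mirrors common guideline practice;
# the cohort values below are synthetic, not data from the review.

def receptor_phenotype(er_percent: float, pgr_percent: float, cutoff: float = 1.0) -> str:
    """Return a phenotype label such as 'ER(+)/PgR(-)' from IHC percentages."""
    er = "+" if er_percent >= cutoff else "-"
    pgr = "+" if pgr_percent >= cutoff else "-"
    return f"ER({er})/PgR({pgr})"

# A toy cohort of (ER%, PgR%) IHC readings.
cohort = [(90, 80), (75, 0), (0, 0), (60, 5), (40, 0)]
labels = [receptor_phenotype(er, pgr) for er, pgr in cohort]

single_positive = labels.count("ER(+)/PgR(-)")
print(labels)
print(f"ER(+)/PgR(-) fraction: {single_positive / len(labels):.0%}")  # → 40%
```

With a lower or higher cutoff the same readings can shift between phenotype groups, which is one reason low-positive cases (e.g., <10% positive cells, as mentioned later for BRCA1 carriers) are reported separately in some studies.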
On the other hand, PGR mutations are exceedingly rare, since in the analysis of 959 ER(+)/PgR(−) cases, all the tumors were classified as PGR-wild-type [33]. In another large dataset, only 9 missense mutations in the PGR gene were identified (estimated frequency 0.36%) [34]. A recent study on PGR variants in metastatic ER(+) BC demonstrated that 3 out of 4 samples of the functionally deleterious Y890C variant were PgR(−) by IHC, so this specific variant may contribute to PgR loss by clonal selection [35]. Additional proof of the role of growth factors in the development of ERα(+)/PgR(−) BC comes from a neu-related lipocalin-transforming growth factor α (NRL-TGFα) transgenic mouse model [37]. During tumorigenesis, ERα expression was noted in all types of precursor lesions and persisted in cancer, whereas PgR expression was lost very early. In bi-transgenic mice overexpressing prolactin (PRL) and TGFα (NRL-PRL/TGFα), these hormones cooperatively enhance Akt activity, resulting in decreased PgR and increased ERα expression [38]. Despite enhanced ERα expression, the developed tumors were insensitive to estrogens, again supporting the hypothesis on diminished hormone responsiveness in ERα(+)/PgR(−) BC. Thus, targeting growth factor pathways may increase sensitivity to ET in single hormone receptor-positive BC.
Molecular Mechanisms Underlying False-Negative PgR Staining in IHC Progesterone receptor undergoes multiple post-translational modifications, including phosphorylation, acetylation, sumoylation, methylation, and ubiquitination [39]. Even in the absence of ligands, PgR is constitutively phosphorylated at some sites, and exposure to progestogen results in a net increase in phosphorylation [40]. The effect of this modification depends on the specific phosphorylation site, which modulates PgR stability, nuclear transport, DNA binding, and transcriptional activity. Hormone binding results in poly-ubiquitination of PgR leading to ligand-induced PgR down-regulation; this process is paradoxically the hallmark of cells actively expressing PgR-dependent genes [40]. In human BC cells, ERK1/2 activation triggers PgR-B phosphorylation at Ser294, which thereby inhibits PgR sumoylation at Lys388. Undersumoylated PgR-B is derepressed and transcriptionally overactive, thus highly sensitive to low progestin concentration [41] (Figure 1). However, Ser294 phosphorylation targets the receptors for rapid proteasomal degradation [42]. Moreover, PgR Ser294 and Ser400 phosphorylation reduce PgR nuclear export, probably enhancing the genomic action of progesterone [43], and phosphorylation-induced PgR desumoylation enhances the transcription of proliferative genes via recruitment of a CREB-binding protein (CBP) and mixed lineage leukemia gene 2 (MLL2) [44]. Thus, in the net effect, PgR might express enhanced transcriptional activity but, simultaneously, undergo instant degradation and be undetectable by IHC [42]. An animal study by Zhang et al.
demonstrated that the loss of the tumor suppressor Tat-Interacting Protein (Tip30) accelerates carcinogenesis in the MMTV-Neu mouse model of BC, and leads to the development of exclusively ER(+)/PgR(−) tumors [45]. Loss of Tip30 results in impaired degradation of EGFR and enhanced Akt signaling, which correlated with both increased expression and phosphorylation of ERα and loss of PgR in IHC staining [45]. In vitro, the PgR protein was detectable following proteasome inhibition, and the progesterone antagonist RU486 suppressed the growth of Neu+/Tip30−/− tumors [45]. Finally, various clones of anti-ER and anti-PgR antibodies may show discordant results, and multiple additional pre-analytic or analytic factors influence the final quantification of steroid hormone expression. Failure to detect PgR expression by IHC occurs in various laboratories with a frequency of 5 to 15% of cases [46]. While PgR-negativity assessed by IHC may be a technical issue, the other possibility is that alternative splicing of PgR produces cancer-specific variants of PgR that are undetectable with N-terminally targeting antibodies. These truncated variants are generated by the deletion of some of the eight exons of PGR or by the preservation of introns and are capable of binding to progesterone, interacting with co-factors, and binding to DNA, thus they may remain functional [47]. Nevertheless, the clinical significance of alternative splicing of PgR needs to be elucidated. Identification of false-negative PgR status may help to select patients who are more likely to benefit from ET. Influence of Tumor Suppressor Loss on PgR Expression The phosphatase and tensin homolog (PTEN) is a tumor suppressor frequently lost in BC [48]. The role of PTEN is to dephosphorylate phosphatidylinositol 3,4,5-triphosphate (PIP3), thus the loss of PTEN correlates with higher levels of PIP3, which, in turn, activates the Akt signaling pathway [48].
Loss of heterozygosity at the PTEN locus coexisting with HER2 overexpression results in substantial Akt activation, leading to loss of PgR [49] ( Figure 1). Additionally, PTEN-knockout mice (K8PTEN-KO) demonstrate increased proliferation of mammary epithelial cells mainly restricted to the preferential expansion of PgR(−) cells [50]. In contrast to PTEN, the association between Breast cancer type 1 susceptibility protein (BRCA1) and PgR expression is ambiguous. On the one hand, BRCA1 was reported to stimulate the ubiquitination of PgR protein by E2 enzyme UbcH5c and its subsequent degradation [51]. On the other hand, Sanford et al. found no difference in the proportion of low-positive (<10% positive cells) and negative PgR staining between patients with and without deleterious germline BRCA1 mutations [52]. Epigenetic Mechanisms of PgR Suppression DNA methylation is the most important epigenetic mechanism orchestrating transcription. The first report on the inverse association between PGR promoter methylation and PgR expression in BC was published in 1996 and since then this observation has been confirmed by several studies [53]. Recent data demonstrate that IHC PgR(−) tumors show higher PGR methylation [54][55][56][57]. Nonetheless, in PgR(−) breast tumors, PGR methylation is usually either low or absent, so hypermethylation of PGR promoter is unlikely the major mechanism of PgR silencing, albeit some data are contradictory [56][57][58]. Interestingly, one study reported a higher incidence of DNA methylation in PGR promoter in HER2-amplified/overexpressing cases, pointing to the role of methylation in the pathogenesis of ER(+)/PgR(−)/HER2(+) breast tumors [59]. Several studies point to an association between PGR methylation and patients' outcome, e.g., tamoxifen resistance [57,60]. Additionally, long-term tamoxifen treatment leads to epigenetic silencing of ER-responsive genes, including PGR [61]. 
Owing to a high prevalence of the ER(+)/PgR(−) phenotype among breast tumors recurring after tamoxifen treatment, PGR methylation status was proposed as a predictive marker for tamoxifen insensitivity [61]. Consequently, loss of PgR was also demonstrated in BC cell lines with decreased tamoxifen sensitivity following long-term treatment [62]. Moreover, in the MCF-7 BC cell line, signaling from membrane-associated ER contributes to epigenetic modulation of the PGR gene via the action of the histone methyltransferase enhancer of Zeste homolog 2 (EZH2) [63]. Numerous groups have reported on the restoration of PGR gene expression in PgR(−) cell lines following treatment with agents blocking DNA epigenetic modifications, namely the inhibitors of histone deacetylases and DNA methyltransferases [64,65]. Exposure to epigenetic modulators also resulted in increased PGR mRNA expression in the hormone-receptor-positive MCF-7 cell line [64]. In the future, it may be possible to convert PgR(−) BC into PgR(+) with the use of epigenetic modulators in order to enhance its sensitivity to ET [66]. The Interplay between Isoforms and Splice Variants of Steroid Hormone Receptors and PgR Expression Whereas most estrogenic actions in BC cells seem to be driven by ligand binding to ERα homodimers, the latter may also form heterodimers with ERβ1, which can promote transcription of a distinct pool of genes and down-regulate several ERα-dependent genes, including PGR (Figure 2) [67,68]. An inverse correlation between ERβcx, a splice variant of ERβ, and PgR was noted; interestingly, PgR-low BCs expressing ERβcx showed a poorer response to tamoxifen [69]. Expression of PgR is also modulated by splice variants of ERα, e.g., ERα36, which positively correlates with PgR expression [70,71]. An in vitro study utilizing ERα36 knock-out cell lines demonstrated reduced levels of PgR and its altered phosphorylation at Ser294 and Ser345 [71].
Additionally, there is a dominant-negative splice variant of ERα (ERα∆7), which is non-functional, but is detected by IHC. This may explain why a subset of ERα(+) tumors shows the molecular characteristics of the basal subtype [72]. Interestingly, the frequency of PgR expression in ERα(+)/ERα∆7-high basal carcinomas was 29.7% compared to 85.2% for ERα(+)/ERα∆7-low luminal B carcinomas [73]. Identification of such hormone receptor variants may in the future support treatment decision-making, but current routine procedures have not incorporated their assessment yet. Figure 2. Pre-translational mechanisms of PgR loss and down-regulation. Green arrows indicate stimulatory effects, red T-shaped lines depict inhibitory effects, dotted lines show potential effects. At the pre-transcriptional stage, PgR loss is a consequence of methylation of the PGR promoter, copy number loss (often), or mutations (very rarely). Splice variants of ERα and ERβ may either suppress or activate the transcription of PGR. Low levels of estradiol after menopause are frequently insufficient to induce expression of PgR. PGR mRNA is a direct target of multiple miRNAs, but some miRNAs may downregulate PgR indirectly, e.g., via activation of mTORC1. For details, see text.
Abbreviations: AGO2-protein argonaute-2; ERα-estrogen receptor α; HER2-human epidermal growth factor receptor 2; miRNAs-microRNAs; MISS-membrane-initiated steroid signaling; mTORC1-mammalian target of rapamycin complex 1; PGR-progesterone receptor gene; PgR-progesterone receptor. Created with BioRender.com (accessed 22 September 2021). The interplay between miRNAs and ERα expression is well described, but still not completely understood. Estrogens bound to ERα regulate miRNA processing and the formation of miRISC interacting with Drosha, DICER, and protein argonaute-2 (AGO2), and in this way influence gene repression by miRNAs [76]. On the contrary, multiple miRNAs modulate the expression and action of ERα via direct interactions with ESR1 mRNA and alterations of ERα coregulators. Additionally, some oncogenic miRNAs interfere with ERα-dependent signaling pathways, which, in consequence, may result in partial loss of ERα functionality reflected by loss of PgR expression in BC (i.e., acquisition of ER(+)/PgR(−) phenotype). Recent studies have also shed some light on miRNA regulation of PgR expression. Interestingly, the 3′UTR of PGR is the longest amongst mRNAs encoding steroid receptors (9434 nucleotides) but surprisingly contains only six conserved miRNA binding sites. It was demonstrated that exogenous miR-423-5p is capable of inhibiting PGR gene transcription in vitro [77], miR-126-3p suppresses PgR expression in the mouse mammary gland [78], and miR-181a, miR-23a, and miR-26b down-regulate PgR in ERα(+) BC [79,80].
miR-181a and miR-26 are repressed by estrogen and they belong to the feed-forward loop involving ERα. Their down-regulation following estrogenic stimulation leads to PGR up-regulation and their up-regulation in ERα(+) tumors may contribute to ERα(+)/PgR(−) BC development [79]. The main interactions between microRNAs and PgR expression are shown in Figure 2. Estrogen-dependent PgR up-regulation may be abrogated by progestin-controlled miRNAs, most notably miR-129-2 and miR-513a-5p. Progesterone treatment of BC cell lines leads to the up-regulation of miR-129-2, resulting in down-regulation of PgR, and tumors with elevated miR-129-2 have significantly decreased levels of PgR [81]. Similar effects were observed for miR-513a-5p, which represses PgR expression and reduces the amounts of PgR induced by estrogenic stimulation [82]. In vitro studies demonstrate that inhibitors of miR-129-2 increase expression of PgR providing a potential tool for stabilization of PgR levels in PgR-low/negative patients considered for hormonal therapy [81]. An interesting mechanism of PgR regulation in BC, partially driven by miRNA, involves a model, in which early lesions recapitulate the developmental program of normal mammary gland orchestrated by progesterone signaling via PgR and moderate HER2 expression [85]. This program facilitates the early dissemination of cancer cells, enhancing migration and stemness. Growing lesions gradually increase their tumor cell density and overexpress HER2, which up-regulates the expression of miR-9-5p and miR-30a-5p, leading to the down-regulation of PGR in the mouse BC model. This mechanism increases the proliferation of cells contributing to primary tumor growth but impairs its ability to spread. 
Plausibly, ERα(+)/PgR(−)/HER2(+) BCs show an inferior prognosis because they represent an end-point in the pathway beginning with early, occult dissemination initially driven by PgR(+) cells, while clinically overt PgR(−) cancers may comprise only residual scattered phospho-PgR(+) spots with stem cell potential and an ability to spread [85]. An additional mechanism of PgR regulation by miRNA involves miR-155 and the mTOR pathway. In BC, IGF-mediated mTORC1 activation down-regulates PgR expression [30]. Increased expression of miR-155 in ERα(+) BC cells enhances mTORC1 signaling via inhibition of the mTORC2 signaling component Rictor [86]. TCGA data on BC show that levels of Rictor and PgR positively correlate with each other, whereas Raptor (complexed with mTORC1) shows an inverse correlation with PgR [86]. The mTOR inhibitor everolimus demonstrated efficacy in combination with ET in advanced BC and is generally believed to reverse endocrine resistance by inhibition of mTORC1-dependent phosphorylation of ERα, but de-repression of PgR expression may represent another possible mechanism of action [87][88][89]. Nevertheless, limited data suggest that PgR status is not a predictive factor in advanced/metastatic BC treated with everolimus [90]. Curiously, antigene RNAs (agRNAs), a group of small duplex RNAs, are also able to regulate gene expression by targeting gene promoters (noncoding transcripts). Several studies demonstrated that PgR expression is regulated by synthetic agRNAs mediated by argonaute (AGO) proteins, but it was unknown whether similar effects may be mediated by endogenous RNAs [91]. A very recent study shows that sequestosome 1 (p62) accumulation in BC cells triggers PgR suppression in an AGO2-mediated mechanism, most likely comprising agRNAs, not miRNAs [92]. On the contrary, in another study, high AGO2 levels were correlated with PgR loss due to altered ERα signaling probably driven by miRNA [93].
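The reported relationships between PgR and the mTOR complex components (positive with Rictor, inverse with Raptor) are ordinary correlations over expression values. A minimal sketch of how such a check could be run, using synthetic numbers rather than TCGA data:

```python
# Hedged illustration (synthetic values, not TCGA data): Pearson correlation
# between PgR expression and the mTORC2 component Rictor (expected positive)
# or the mTORC1 component Raptor (expected negative).

def pearson(xs, ys):
    """Pearson correlation coefficient; assumes non-constant inputs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic log-expression values for five tumors (illustrative only).
pgr    = [1.0, 2.0, 3.0, 4.0, 5.0]
rictor = [1.1, 2.2, 2.9, 4.1, 5.2]   # tracks PgR -> positive correlation
raptor = [5.0, 4.1, 3.2, 2.0, 1.1]   # opposes PgR -> negative correlation

print(round(pearson(pgr, rictor), 3))  # close to +1
print(round(pearson(pgr, raptor), 3))  # close to -1
```

In a real analysis, a rank-based coefficient (Spearman) would typically be preferred for expression data, since it is robust to the skew of log-expression values.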
Whether small RNAs can precisely up-regulate PgR expression in BC to increase its sensitivity to ET remains to be elucidated. Loss of PgR during Therapy and in Breast Cancer Relapse A large meta-analysis of steroid HR discordance in primary and recurrent BCs estimated the frequency of secondary PgR loss at 46% of patients, being more common in distant metastases than in local relapses [23]. The prognostic significance of this conversion is not well established; however, some studies report on the association between worse outcomes and the negative conversion of steroid HRs [12]. The loss of ERα and/or PgR in relapsing tumors or after primary systemic treatment probably indicates the selection of HR-negative cells in a heterogeneous pool of tumor cells. Moreover, circulating tumor cells (CTCs) frequently show discordant profiles with primary tumors. PgR(−) CTCs are present in 68-87% of patients with a PgR(+) primary tumor, and this pool may be responsible for ERα(+)/PgR(−) metastases [94]. On the other hand, in metastatic BC, the loss of PgR expression on CTCs may occur, even if still present in both primary tumors and metastases [95]. The switch from PgR(+) to PgR(−) after neoadjuvant chemotherapy occurs in 12-15% of cases and is associated with worse clinical outcomes [96,97]. Similarly, neoadjuvant ET with SERMs or AIs may lead to the down-regulation of ERα and PgR, respectively [12]. A letrozole-induced decrease in PgR expression is most likely due to decreased estrogen levels and diminished estrogenic signaling [98,99]. Accordingly, studies on patient-derived xenografts and cell lines demonstrate that estrogen withdrawal can lead to PgR expression loss [100]. The decline in PgR expression is also promoted in a time-dependent manner by treatment with fulvestrant, as demonstrated in sequential biopsies of advanced BC [94].
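Discordance figures such as the 46% secondary PgR loss above come from comparing receptor status in paired primary and recurrent samples from the same patient. A small illustrative sketch of that computation, on hypothetical pairs (not data from the cited meta-analysis):

```python
# Illustrative sketch: rate of secondary PgR loss among paired samples.
# Each pair is (primary_pgr_positive, relapse_pgr_positive) as booleans.
# The pairs below are hypothetical, not data from the cited studies.

def pgr_loss_rate(pairs):
    """Fraction of PgR(+) primaries that became PgR(-) at relapse."""
    eligible = [p for p in pairs if p[0]]        # PgR(+) primaries only
    lost = [p for p in eligible if not p[1]]     # negative at relapse
    return len(lost) / len(eligible) if eligible else 0.0

pairs = [(True, False), (True, True), (True, False), (True, True), (False, False)]
print(f"secondary PgR loss: {pgr_loss_rate(pairs):.0%}")  # → 50%
```

Note that the denominator matters: the 46% estimate refers to loss among initially PgR(+) tumors, so PgR(−) primaries are excluded, as in the sketch above.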
Fulvestrant and other SERDs have no agonistic activity and inhibit ligand binding to ERα, promote its degradation, and diminish transcription of ERα-dependent genes, including PGR [101]. The fulvestrant response rate seems independent of the baseline HER2 and PgR status because it antagonizes nuclear, cytoplasmic, and membrane-bound ERs, completely inhibiting the cross-talk between the growth factor receptor and estrogen signaling [102]. Intriguingly, patients with a retained high PgR expression have a longer duration of response than patients with PgR loss at 6 weeks of treatment [101]. Moreover, overexpression of Tissue Inhibitor of Metalloproteinases-1 (TIMP1) results in the down-regulation of PgR and drives resistance to fulvestrant in the MCF-7 cell line, but the mechanism of TIMP1-associated PgR depletion is unknown [103]. Resistance to fulvestrant may also be driven by mitogen-activated protein kinase (MAPK) pathway activation with increased levels of ERK, MEK, and RSK, kinases known to phosphorylate and inactivate PgR, hence, potentially, providing space for treatment with antiprogestins [104]. A phase 2 clinical trial investigating the combination of fulvestrant and onapristone for advanced or metastatic BC after progression on aromatase and CDK4/6 inhibitors (NCT04738292) is planned [105]. Analysis of mRNA expression profiles from several datasets demonstrated that ERα(+)/PgR(−) BCs share gene expression patterns both with double-positive and double-negative tumors [107]. This was also confirmed in our analysis of the TCGA dataset, where we identified 2 and 32 differentially expressed genes between ER(+)/PgR(−) and double-positive or double-negative tumors, respectively. Importantly, we found only 10 genes uniquely differentiating between the two subtypes of single hormone receptor-positive tumors [83]. The Biology of ERα(+)/PgR(−) BC The biology of ERα(+)/PgR(−) BC cells is probably highly variable and depends on many cofactors (Figure 3).
Isolated effects of ER (stimulated by estrogens) and PgR (stimulated by progestins) on gene expression are similar because they regulate the expression of shared target genes in the same direction (genomic agonism) [108]. In BC cells positive for both types of steroid hormone receptors, PgR competes with ERα for access to RNA polymerase III, and, hence, reduces its availability and ERα-dependent translation [84]. In consequence, when PgR expression is lost, ERα gains access to a broader range of translational machinery, which may promote tumor aggressiveness and growth. Moreover, chromatin binding of ERα is more consistent in double-positive tumors, whereas ERα binding patterns in the PgR(−) subset are highly variable [108,109]. In PgR-deficient cells, ERα predominantly binds in proximity to transcription start sites, whereas in PgR(+) cells PgR redirects ERα to bind distally to promoters. In consequence, in ERα(+)/PgR(−) BC, ERα seems to act as a proximal promoter rather than a distal enhancer of gene transcription, which stimulates pro-growth estrogenic signaling and reduces the responsiveness to ET [108]. Thus, PgR acts as a molecular rheostat regulating ER activity. Additionally, PgR mediates ERα chromatin binding to genes involved in cell death, apoptosis, and differentiation pathways and blocks ERα-dependent tumor growth [32]. Moreover, unliganded PgR regulates ESR1 transcription via epigenetic modifications of the ESR1 promoter. PgR depletion results in ESR1 promoter hypermethylation, down-regulating expression of ER, which cannot be reversed after PgR re-expression [109]. The combined effect of estrogens and progestins on BC cells co-expressing ERα and PgR demonstrates that there is phenotypic antagonism between ERα and PgR. This has clinical consequences: in premenopausal patients, PgR has a more pronounced positive prognostic significance because of the availability of progesterone, which stimulates PgR signaling [110].
On the contrary, in post-menopausal females, progesterone levels are low, and thus are unable to produce a prominent phenotypic antagonism to ERα, which makes PgR expression a less important predictive factor in older patients. Once PgR expression is lost, other receptors such as ERβ or androgen receptor (AR) may more significantly modulate ER-dependent actions. In the absence of PgR, AR most likely enhances ER-mediated transcription. In the nuclei of ER(+)/PgR(+) BC cells, AR competes with ER and PgR to bind to DNA, thus interfering with the estrogen-mediated transcription. Conversely, when PgR is lost, another receptor, ERβ, down-regulates ERα target genes, whereas AR enhances ERα target gene transcription and potentially contributes to tumor growth [111]. However, high AR expression is associated with prolonged relapse-free survival, lower grade, and a lower number of affected lymph nodes in ERα(+)/PgR(−) BC, thus the mechanistic role of AR and its influence on ERα(+)/PgR(−) tumor aggressiveness requires further studies [112,113]. Figure 3. Green arrows indicate stimulatory effects, red T-shaped lines depict inhibitory effects. In tumor cells co-expressing ERα and nuclear PgR the latter may exert both nongenomic and genomic effects. It regulates the expression of genes in a similar way to ERα (genomic agonism) but guides ERα binding to chromatin to induce expression of genes associated with good outcomes (phenotypic antagonism). PgR interacts with translational machinery (mainly RNA polymerase III) reducing its availability for ERα-dependent translation. Loss of nuclear PgR results in a shift of the ERα role from distant enhancer to proximal promoter, activating a subset of genes associated with cancer progression. Depletion of PgR increases ESR1 gene promoter methylation and down-regulates ESR1. Other steroid receptors, i.e., ERβ and AR, may exert different effects on ERα-dependent gene expression in ERα(+)/PgR(+) and ERα(+)/PgR(−) breast cancers. For details, see text. Abbreviations: AR-androgen receptor; ESR1-estrogen receptor 1 gene; ER-estrogen receptor; HRE-hormone receptor element; (m)PgR-(membranous) progesterone receptor; RNA pol III-RNA polymerase III. Created with BioRender.com (accessed 22 September 2021). The loss of nuclear PgR expression does not imply loss of progestin responsiveness in BC cells [114]. Similarly to estrogens, progestins may act via membrane receptors (mPgRs), which have three subtypes: mPgRα, mPgRβ, and mPgRγ, the first being the most prevalent in breast tissue [115].
In PgR(−) BC cell lines, progesterone produces an antiapoptotic response and activates MAPK and PI3K/Akt through mPgRs [114,116]. Expression of mPgR was correlated with HER2 overexpression, the number of lymph node metastases, and a worse prognosis in BC [117]. Thus, mPgRs might be important players in the biology of ERα(+)/PgR(−) BCs, providing pro-growth signals. Nevertheless, some in vitro studies utilizing BC cell lines demonstrated that mPgRα mediates antiproliferative and antimetastatic signaling of progesterone [118,119], although the effects of mPgRs are potentially dependent on the model (in vitro vs. in vivo or clinical studies), progesterone levels, and competition with nuclear receptors. Of note, there is an inverse relationship between nuclear PgR and mPgR [117]. A recent study in PgR-low/null tumors defined phospho-PgR target gene sets (ERBB2, PAX2, AHR, AR, and RUNX2) which regulate cancer stem cell biology and increase tumor heterogeneity [85]. Paradoxically, antiprogestin treatment may possibly be effective in these clinically PgR(−) tumors, preventing the development of endocrine resistance [85]. However, not all antiprogestins are equally suited to this approach, since it was shown that in the presence of progesterone onapristone blocks Ser294 phosphorylation, whereas mifepristone and aglepristone induce Ser294 phosphorylation, behaving similarly to partial agonists of PgR [85]. A phase I study of onapristone in heavily pre-treated, metastatic endometrial, ovarian, and BC showed promising results and proposed activated progesterone receptor as a potential predictive factor [120]. The understanding of PgR significance in BC is further complicated by the coexistence of its isoforms, as phosphorylated PgR-A is a more potent driver of cancer stem cell expansion, whereas PgR-B is involved in BC cell proliferation [121].
In normal mammary gland tissue, the levels of PgR-A and PgR-B are similar, while the ratio is disturbed during cancer transformation, usually resulting in PgR-A prevalence [122]. In vitro studies demonstrated that the PgR-A/PgR-B ratio determines the functional outcome of PgR action, including both the target genes and the response to hormones and growth factors [123]. This observation was further confirmed clinically, as a high PgR-A/PgR-B ratio was indicative of a shorter time to relapse in patients treated with tamoxifen within the ATAC trial [124]. Interestingly, it is speculated that tamoxifen resistance and the worse prognosis are associated solely with methylation of the PgR-A promoter, resulting in the functional predominance of PgR-B [57]. A high frequency of ERα:PgR-B interactions was predictive of relapse on an adjuvant AI, and in some cases, a substantial amount of ERα:PgR-B interactions coexisted with a lack of IHC-detectable PgR expression [125]. It was recently shown that among HER2-negative tumors ERα(+)/PgR(−) BCs display distinctive tyrosine kinase profiles [126], characterized by higher overall kinase activity than double-positive tumors, with RAS, PI3K, and ErbB signaling being mostly responsible for these differences. Four kinases showed significant expression differences between PgR(−) and PgR(+) tumors: fibroblast growth factor receptor 4 (FGFR4) and LCK were up-regulated, whereas Fyn-related kinase (FRK) and macrophage-stimulating 1 receptor (MST1R) were down-regulated in PgR(−) cases. Interestingly, all these kinases are directly regulated by progesterone. Moreover, Tahiri et al. identified 24 kinase-encoding genes differentially expressed between double-positive and PgR(−) tumors, dividing ER(+)/HER2(−) BCs into two prognostically distinct clusters: cluster 1 comprising mostly PgR(+) patients with a better prognosis, and cluster 2 characterized by a worse prognosis and the predominance of PgR(−) patients [126].
Additionally, PgR(−) patients in cluster 2 had inferior survival compared with PgR(−) patients in cluster 1. Unfortunately, the association between the clusters and the luminal A vs. B phenotype was not studied. Importantly, these associations are not seen in HER2(+) samples, suggesting that the effects of HER2 are dominant. This is further supported by our study on single hormone receptor-positive BC, in which miRNA profiles were mainly dependent on the status of HER2, rather than on ERα/PgR status [83].

Conclusions

Lack of PgR expression in ERα(+) BC has multiple potential explanations, but the molecular, pathological, and clinical heterogeneity of this group remains underappreciated. The biology of ERα(+)/PgR(−) BC is context-dependent, being highly modulated by the cross-talk between growth factor receptors and nuclear or membranous steroid hormone receptors. Novel therapeutic targets such as microRNAs, epigenetic modifications, tyrosine kinases, and transcriptionally overactive PgR should be further investigated in the future. Identification of the mechanism of PgR loss in each patient seems challenging, yet it may provide important information on the biology of the tumor and predict its responsiveness to therapy. Finally, future studies should focus on the investigation of novel biomarkers predicting the disease course, as well as its response to endocrine and chemotherapy, in this distinctive group of patients.
The Double-stranded RNA–dependent Protein Kinase Differentially Regulates Insulin Receptor Substrates 1 and 2 in HepG2 Cells

The RNA-dependent protein kinase (PKR), initially known as a virus infection response protein, is found to differentially regulate two major players in the insulin signaling pathway, IRS1 and IRS2. PKR up-regulates the inhibitory phosphorylation of IRS1 and the expression of IRS2 at the transcriptional level.

INTRODUCTION

Insulin signaling, a central signaling pathway that regulates many cellular activities, such as glucose and lipid metabolism, protein synthesis and degradation, and cell growth and differentiation (Saltiel and Kahn, 2001), has been extensively studied over the past decades. Insulin signaling is initiated upon binding of insulin to the insulin receptor (IR), a receptor tyrosine kinase (Patti and Kahn, 1998), and transmitted intracellularly by the insulin receptor substrates (IRS; White, 1998). At least four of the IR substrates belong to the IRS group, with IRS1 and IRS2 being predominant and expressed in most tissues, including the liver (White, 1998; Thirone et al., 2006). On phosphorylation of the tyrosine residues catalyzed by IR, the IRS proteins initiate, through different binding mechanisms (White, 1998), various downstream signal transduction cascades, including the mitogen-activated protein kinase (MAPK) pathways (c-Jun N-terminal kinase (JNK), extracellular signal-regulated kinase (ERK), and p38 MAPK; Lowenstein et al., 1992; Skolnik et al., 1993) and phosphoinositide 3-kinase (PI3K; Backer et al., 1992), which in turn activates Akt/protein kinase B (Akt/PKB; Alessi et al., 1997) and atypical protein kinase C (aPKC; Standaert et al., 1997). As one of the central signaling pathways regulating multiple fundamental cellular activities, insulin signaling is finely tuned by a large number of regulators.
Dysregulation of insulin signaling is closely related to the development of insulin resistance (Sone et al., 2001) and contributes to multiple diseases and disorders, such as type 2 diabetes as well as other metabolic, endocrine, and cardiovascular disorders (Reaven, 1988; Kahn, 1998; Sone et al., 2001). At the molecular level, dysregulation of insulin signaling could occur at several possible stages, e.g., degradation or mutation of IR (McElduff et al., 1984; Imamura et al., 1994), inhibitory phosphorylation or degradation of IRS (Zick, 2005; Herschkovitz et al., 2007), or suppression of downstream signaling molecules, such as PI3-kinase or Akt/PKB (reviewed in Taylor and Arioglu, 1998). However, most of the regulation of insulin signaling occurs at the level of the IRS proteins, the hub proteins that transmit signals from IR to downstream targets of insulin signaling (Zick, 2005; Herschkovitz et al., 2007). Phosphorylation of the IRS proteins plays a critical role in determining their activity. Phosphorylation of IRS tyrosine residues facilitates recruitment of downstream effectors and therefore promotes insulin signaling (White, 1998), whereas phosphorylation at serine residues generally suppresses the activities of IRS by blocking the interaction between IRS and IR (Paz et al., 1997), inhibiting the tyrosine phosphorylation of IRS (Hotamisligil et al., 1996), or inducing the degradation of IRS (Pederson et al., 2001). A number of serine residues have been identified that negatively regulate the activity of IRS1, in particular Ser307 (equivalent to Ser312 in human IRS1). Ser307 has been extensively investigated, characterized as a key indicator of inhibitory serine phosphorylation of IRS1 and insulin resistance, and confirmed in insulin-resistant rodent models (Hirosumi et al., 2002).
Interestingly, most of the downstream targets of insulin signaling, such as mammalian target of rapamycin (mTOR; Ozes et al., 2001), PKC (Ravichandran et al., 2001), S6 Kinase 1 (S6K1; Tremblay and Marette, 2001), and JNK (Aguirre et al., 2000; Lee et al., 2003), have been shown to function as IRS serine kinases, which are involved in the negative feedback pathways of insulin signaling by promoting the inhibitory serine phosphorylation of IRS1. Initially identified as an antiviral protein, the double-stranded RNA-dependent protein kinase (PKR) is best known for triggering cell defense responses and initiating innate immune responses by arresting general protein synthesis and inducing apoptosis during virus infection (Proud, 1995). Until recently, PKR had not been reported to be involved in the insulin signaling pathway. However, studies have shown that insulin or insulin-like growth factor-I (IGF-I) suppressed the phosphorylation of PKR in muscle cells (Russell et al., 2007; Eley et al., 2008) by activating protein phosphatase 1 (PP1) through the IRS-PI3K-Akt pathway. We have found a similar inhibitory effect of insulin on PKR in HepG2 cells (Wu et al., 2009). Given the feedback loops involved in insulin signaling, we wanted to explore whether PKR, as a downstream target of insulin signaling, is also able to initiate a feedback pathway that acts on IRS1 phosphorylation. In fact, as a signal integrator of many intracellular signaling events (Williams, 2001), PKR has been shown to activate certain IRS kinases, e.g., IκB kinase (IKK; Bonnet et al., 2000; Gao et al., 2002) and JNK (Zhou et al., 2003; Yang and Chan, 2009). Therefore, we investigated whether PKR is involved in the induction of inhibitory Ser312 phosphorylation of IRS1 and, if so, whether this effect is mediated by other IRS kinases. Because IRS1 and IRS2 are both expressed in liver cells, we also investigated whether PKR affected IRS2.
Our results indicated that PKR regulates the gene transcription, rather than the posttranslational modification, of IRS2. The transcription of the IRS2 gene has been shown to be up-regulated by several transcription factors, such as forkhead box O1 (FoxO1; Zhang et al., 2001; Ide et al., 2004) and cAMP response element-binding protein (CREB; Jhala et al., 2003). We found that PKR enhanced the expression of IRS2 through FoxO1, which regulates the transcription of IRS2 but not IRS1. In summary, PKR, initially known as a virus infection response gene, differentially regulates the major IRS proteins, IRS1 and IRS2, which are central hubs in the insulin signaling system.

Cell Culture and Reagents

Human hepatoblastoma cells (HepG2/C3A) were cultured in Dulbecco's modified Eagle medium (DMEM; Invitrogen, Carlsbad, CA) with 10% fetal bovine serum (FBS; Biomeda, Foster City, CA) and penicillin-streptomycin (penicillin: 10,000 U/ml, streptomycin: 10,000 µg/ml; Invitrogen). Freshly trypsinized HepG2 cells were suspended at 5 × 10^5 cells/ml in standard HepG2 culture medium and seeded at a density of 10^6 cells per well in standard six-well tissue culture plates. After seeding, the cells were incubated at 37°C in a 90% air/10% CO2 atmosphere, and 2 ml of fresh medium was supplied every other day to the cultures after removal of the supernatant. The HepG2 cells were cultured in standard medium for 5-6 d to achieve 90% confluence before any treatment. Human insulin and 2-aminopurine (2-AP) were purchased from Sigma-Aldrich (St. Louis, MO); C2 ceramide, tautomycetin, SC-514, okadaic acid (OA), the PKR inhibitor, the JNK inhibitor (SP600125), and their analogues, used as negative controls, were from EMD Biosciences (San Diego, CA).

Insulin Treatment

Human insulin was stocked in HEPES buffer, which was therefore used in controls for all the experiments with the insulin treatment.
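As a quick check on the culture arithmetic above, seeding 10^6 cells per well from a 5 × 10^5 cells/ml suspension implies dispensing 2 ml of suspension per well, matching the 2 ml of medium exchanged every other day. A minimal sketch (the helper name is ours, not part of the original protocol):

```python
def seeding_volume_ml(cells_per_well: float, suspension_density_per_ml: float) -> float:
    """Volume of cell suspension to dispense per well to reach a target count."""
    return cells_per_well / suspension_density_per_ml

# 10^6 cells/well from a 5 x 10^5 cells/ml suspension -> 2.0 ml per well
print(seeding_volume_ml(1e6, 5e5))
```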
We treated the cells with insulin at concentrations lower than 1 nM to mimic physiological concentrations (Gual et al., 2003). Cells were deprived of serum for 16 h before insulin treatment.

Chemical Inhibitors

In the present study, we used the commercially available 2-AP and PKR inhibitor as one of the tools to elucidate the role of PKR (data not shown). However, even though these chemicals have been widely used in studies of PKR in a variety of systems, their specificity has not been extensively tested in the literature. Keeping in mind the potential nonspecific targets of these PKR inhibitors, we performed additional PKR gene-silencing and overexpression studies to test our hypothesis and draw the conclusions. The JNK inhibitor SP600125, the IKK inhibitor SC-514, the PP1c inhibitor tautomycetin (TMC), and the PP2A inhibitor OA are widely used and have been proven to be specific to their targets at the concentrations we used (Muranyi et al., 1997; Bennett et al., 2001; Shim et al., 2002; Kishore et al., 2003; Mitsuhashi et al., 2003). All of the inhibitors, except 2-AP, were dissolved in DMSO. 2-AP was dissolved in glacial acetic acid solution in PBS (1:200; GA).

Western Blot Analysis and Immunoprecipitation

HepG2 cells were lysed as described previously. Total protein levels were quantified by the bicinchoninic acid (BCA) assay kit from Pierce Biotechnology (Rockford, IL). Twenty to 40 µg of total protein was resolved on SDS-PAGE gels from Bio-Rad (Hercules, CA), transferred to nitrocellulose membranes, and probed with primary and secondary antibodies. Biotinylated protein ladders (Cell Signaling, Beverly, MA) were loaded into one well of each SDS-PAGE gel, and anti-biotin antibody was used to detect the protein ladders on Western blots. Antibody detection was performed using the enhanced chemiluminescence kit from Pierce Biotechnology and imaged on the Molecular Imager ChemiDoc XRS System from Bio-Rad. Immunoprecipitation was performed as described previously.
The Western blots were quantified using the Quantity One software (Bio-Rad). Phospho site-specific anti-IRS1 (Tyr941), PPP1A (Thr320), and anti-PPP1A antibodies were purchased from Abcam (Cambridge, MA); phospho site-specific anti-IKKα/β (Ser176/180), FoxO1 (Ser256), anti-biotin, anti-IKKβ, and anti-FoxO1 antibodies from Cell Signaling; and phospho site-specific anti-IRS1 (Ser312), IRS2 (Ser731), PKR (Thr451), JNK (Thr183/Tyr185), anti-IRS1, anti-IRS2, anti-PKR, anti-JNK, and anti-β-actin antibodies from Sigma-Aldrich. Secondary anti-rabbit and anti-mouse antibodies were purchased from Pierce Biotechnology.

PP2A Phosphatase Activity Assay

The PP2A immunoprecipitation phosphatase assay kit purchased from Millipore (Temecula, CA) was used to measure dephosphorylation of a phosphopeptide as an index of phosphatase activity. Briefly, the cells were lysed using the phosphatase extraction buffer specified by the assay kit, and the catalytic subunit of PP2A (PP2A/C) was immunoprecipitated with anti-PP2A-C supplied in the assay kit. Agarose-bound immune complexes were collected and resuspended in 80 µl of Ser/Thr buffer with 750 µM of phosphopeptide (KRpTIRR; obtained from the kit). The reaction was conducted for 10 min at 30°C in a shaking incubator. Supernatants (25 µl) were transferred to a 96-well plate, and released phosphate was measured by adding 100 µl of malachite green phosphate detection solution. Color was developed for 10 min before reading the plate at 650 nm. The absorbance of the reactions was corrected by subtracting the absorbance in samples treated without antibody. Results were expressed as fold change of PP2A activity compared with control cells.

Statistical Analysis

All experiments were performed at least three times, and representative results are shown. All data, unless specified, are shown as the mean ± SD for the indicated number of experiments. One-way ANOVA with Student's t test was used to evaluate statistical significance between different treatment groups.
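The quantification steps described above (normalizing phospho bands to total protein, blank-correcting the malachite green A650 readings, and expressing activity as fold of control) can be sketched as follows. The function names and the triplicate readings are illustrative placeholders, not values from this study:

```python
from statistics import mean, stdev

def normalized_phospho(phospho_band: float, total_band: float) -> float:
    """Western blot quantification: phospho band density divided by the
    corresponding total-protein band density (Quantity One densities)."""
    return phospho_band / total_band

def pp2a_fold_change(a650_sample: float, a650_sample_blank: float,
                     a650_control: float, a650_control_blank: float) -> float:
    """Malachite green PP2A assay: subtract the no-antibody blank from each
    A650 reading, then express activity as fold of the control reaction."""
    return (a650_sample - a650_sample_blank) / (a650_control - a650_control_blank)

# Hypothetical triplicate readings for a treated group vs. a control of 0.40
folds = [pp2a_fold_change(a, 0.05, 0.40, 0.05) for a in (0.22, 0.25, 0.24)]
print(f"PP2A activity: {mean(folds):.2f} +/- {stdev(folds):.2f} fold of control")
```

For the group comparisons, `scipy.stats.ttest_ind` could replace a hand-rolled Student's t test on such triplicates, although with n = 3 per group the statistical power is limited.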
PKR Induces the Phosphorylation of IRS1 at Ser312 and Mediates the Effects of Ceramide on IRS1 Phosphorylation

We as well as others showed that the phosphorylation of PKR is down-regulated by insulin through the IRS proteins (Russell et al., 2007; Eley et al., 2008; Wu et al., 2009). We then further studied whether feedback exists between PKR and the IRS proteins. To evaluate the potential function of PKR on the phosphorylation of IRS1, we silenced PKR with a previously validated siRNA, which significantly reduced the PKR mRNA, protein, and phosphorylation levels (Figure 1A). PKR, known as an eIF-2α kinase, directly binds to eIF-2α and induces the phosphorylation of eIF-2α at Ser51 (Taylor et al., 2005). Therefore, the phosphorylation of eIF-2α at Ser51 is considered to directly indicate the kinase activity of PKR (Woldehawariat et al., 1999; Pataer et al., 2002; Zhang et al., 2008). In fact, the phosphorylation of eIF-2α at Ser51 was shown to be highly correlated with the phosphorylation of PKR at Thr451 or its activity (Woldehawariat et al., 1999; Pataer et al., 2002; Morimoto et al., 2004; Zhang et al., 2008). We confirmed that gene silencing of PKR significantly reduced the phosphorylation of eIF-2α at Ser51 (Supplementary Figure 1), indicating that gene silencing indeed reduced the overall kinase activity of PKR. Silencing PKR in HepG2 cells significantly blocked the Ser312 and amplified the Tyr941 phosphorylation of IRS1 induced by insulin (Figure 1A). To further confirm a catalytic role of PKR in regulating IRS1 phosphorylation, we inhibited the activity of PKR with pharmaceutical inhibitors of PKR, PKR inhibitor (PI; Jammi et al., 2003) and 2-AP (Ben-Asouli et al., 2002). They suppressed the Ser312 and amplified the Tyr941 phosphorylation of IRS1 (Figure 1B), similar to the siRNA of PKR. These results suggest that PKR induces the inhibitory phosphorylation of IRS1 at Ser312 in HepG2 cells, thereby suppressing the phosphorylation at Tyr941. Indeed, a PKR activator (Ruvolo et al., 2001), ceramide, has been shown to inhibit IRS1 activity by up-regulating the Ser phosphorylation and blocking the Tyr phosphorylation of IRS1 (Kanety et al., 1996; Miura et al., 2003). Given the effects of PKR on the phosphorylation of IRS1, we further asked whether PKR mediates the up-regulation of the serine and down-regulation of the tyrosine phosphorylation of IRS1 by ceramide through gene silencing and inhibition studies. We first confirmed that ceramide induces the phosphorylation of PKR at Thr451, promotes phosphorylation of IRS1 at Ser312, and suppresses phosphorylation at Tyr941 in HepG2 cells (Figure 2, A and B).

Figure 1. Involvement of PKR in regulating the phosphorylation of IRS1. Reverse transfection of suspended HepG2 cells was performed with scrambled siRNA (negative control) or siRNA of PKR for 24 h, and the transfected cells were cultured in regular media for another 24 h (A). Cells were then treated with different concentrations of insulin for 15 min and harvested after the treatment (A). HepG2 cells were exposed to 5 µM PKR inhibitor (PI) or its analogue as a negative control (NC) or 10 mM 2-AP dissolved in PBS:glacial acetic acid (200:1; GA, control) for 12 h followed by the treatment of 0.2 nM insulin for 15 min (B). Western blot analysis was performed to detect the levels of β-actin and PKR and the total and phosphorylated levels of PKR and IRS1. The phosphorylation levels of IRS1 at Ser312 and Tyr941 were quantified by normalizing to total IRS1 levels and are expressed as the average of three samples ± SD from three independent experiments (A, middle and bottom). Student's t test was performed for analyzing the differences between samples transfected with siPKR and scrambled siRNA (negative control; A). Significantly higher (Tyr941) or lower (Ser312) than negative control, i.e., scrambled siRNA; *p < 0.01.

PKR Regulates IRS1 and IRS2, Vol. 21, October 1, 2010
Furthermore, in ceramide-treated cells, silencing the gene expression of PKR reduced the phosphorylation of IRS1 at Ser312 to the control level (Figure 2B), which is consistent with the recovery of IRS1 phosphorylation at Tyr941 in response to insulin (Figure 2B). Inhibiting PKR with PI or 2-AP had a similar effect as PKR silencing in blocking the effects of ceramide on the phosphorylation of IRS1 (Figure 2, C and D). Taken together, both the gene silencing and inhibition studies suggest that PKR may be involved in regulating insulin signaling by inducing phosphorylation of IRS1 at Ser312 and suppressing phosphorylation at Tyr941. This function of PKR provides a potential mechanism by which ceramide regulates IRS1 phosphorylation. However, it is unclear how PKR regulates the phosphorylation of IRS1. Coimmunoprecipitation showed that PKR did not directly interact with IRS1 (Supplementary Figure 2), even upon activation of PKR by ceramide (data not shown), suggesting that other intermediate signaling molecules must mediate the effect of PKR on the phosphorylation of IRS1. IRS1 Ser kinases, which directly phosphorylate the serine residues of IRS1, include IKK (Gao et al., 2002), mTOR (Ozes et al., 2001), PKC (Ravichandran et al., 2001), S6K1 (Tremblay and Marette, 2001), and JNK (Aguirre et al., 2000; Lee et al., 2003). PKR has been reported to positively activate IKK (Bonnet et al., 2000) and the MAPKs, in particular JNK (Zhou et al., 2003; Yang and Chan, 2009). Therefore, we hypothesized that PKR induces phosphorylation of IRS1 at Ser312 through the IRS serine kinases JNK, IKK, or both.

PKR Positively Regulates JNK and IKK, Both of Which Mediate the Effect of PKR on the Ser Phosphorylation of IRS1

We previously showed that PKR coimmunoprecipitates with JNK and activates JNK in HepG2 cells. Here, we show that IKK also is activated by PKR. Silencing PKR significantly reduced the phosphorylation of IKKα/β at Ser176/180 (Figure 3A), which indicates IKK activity.
To further confirm the involvement of JNK and IKK in mediating the effect of PKR on the phosphorylation of IRS1 at Ser312 and Tyr941, we overexpressed PKR in HepG2 cells and silenced or inhibited JNK or IKK. We confirmed that overexpression of PKR increased the phosphorylation of eIF-2α (Supplementary Figure 1), indicating that overexpression indeed up-regulates the overall kinase activity of PKR. Overexpressing PKR by transfecting the plasmid pCMV6-hPKR into HepG2 cells enhanced phosphorylation of IRS1 at Ser312 and suppressed phosphorylation at Tyr941 (Figure 3B), supporting our previous results that PKR induces serine phosphorylation of IRS1 at Ser312 (Figures 1 and 2). Furthermore, in PKR-overexpressing cells, silencing the gene expression of JNK1/2 or IKK significantly reduced Ser312 phosphorylation and restored Tyr941 phosphorylation of IRS1 in response to insulin (Figure 3B), suggesting that both kinases, JNK and IKK, mediate the effect of PKR on the phosphorylation of IRS1 at Ser312 and in turn the suppression of the tyrosine phosphorylation of IRS1.

Figure 2. PKR mediates the effects of ceramide on the phosphorylation of IRS1. HepG2 cells were exposed to different levels of ceramide for 12 h (A). Reverse transfection of suspended HepG2 cells was performed with scrambled siRNA (negative control) or siRNA of PKR for 24 h, and the transfected cells were cultured in regular media for another 12 h (B). Cells were then treated with ceramide (10 µM) for 12 h followed by insulin (0.5 nM) treatment for 15 min (B). Pretreated with 10 µM ceramide for 12 h, HepG2 cells were exposed to different levels of PKR inhibitor dissolved in DMSO (control; C) or 10 mM 2-AP dissolved in PBS:glacial acetic acid (200:1; GA, control; D) for another 12 h. After treatment, the cells were harvested, and Western blot analysis was performed to detect the level of β-actin and the total and phosphorylated levels of PKR and IRS1.
This was also supported by inhibiting JNK or IKK using their respective specific chemical inhibitors, SP600125 or SC-514 (quantified Western blot results are shown in Supplementary Figure 3; Wu et al., 2009). In addition to IRS1, we also investigated the potential effect of PKR on another major IRS family protein, IRS2, which also mediates insulin signaling in the liver (Thirone et al., 2006).

PKR Up-Regulates the Protein Expression Level of IRS2

Silencing the gene expression of PKR (Figure 1) down-regulated the protein level of IRS2 (Figure 4A), but not IRS1 (Figure 1), suggesting that PKR is required for cells to maintain the proper protein expression level of IRS2. Similarly, both PKR inhibitors (Figure 1), PKR inhibitor (PI) and 2-AP, also down-regulated the protein level of IRS2 (Figure 4B), further supporting the effect of PKR on the protein expression level of IRS2. Therefore, PKR exerts a regulatory role on the protein level of IRS2, but not of IRS1. As discussed above, PKR affects IRS1 serine phosphorylation (Figures 1-3). Notably, upon inhibiting or silencing PKR, the phosphorylation of IRS2 at Ser731 varied proportionally to its total protein level (Figure 4, A and B), suggesting that PKR does not affect the phosphorylation of IRS2 at Ser731. As expected, the general tyrosine phosphorylation of IRS2, normalized to the total protein level of IRS2, was not affected by PKR silencing or inhibition, as measured by Western blotting of the IRS2 immunoprecipitates with anti-phosphotyrosine antibody (data not shown). Next, to determine whether PKR transcriptionally regulates the protein level of IRS2, we measured the mRNA expression levels of IRS1 and IRS2 upon PKR inhibition and gene silencing. Both the PKR inhibitors and the siRNA of PKR down-regulated the mRNA expression of IRS2, but not of IRS1 (Figure 4, C and D), suggesting that PKR regulates IRS2 at the transcriptional level.
PKR Up-Regulates the Protein Level of IRS2 through the Transcription Factor FoxO1

PKR, as a protein kinase, activates several transcription factors, such as IRF-1, p53, and nuclear factor kappa B (NF-κB; Kumar et al., 1997; Cuddihy et al., 1999), but these transcription factors have not been shown to regulate the transcription of IRS2. However, the transcription of IRS2 has been shown to be dependent on the transcription factor FoxO1 in the liver (Zhang et al., 2001; Ide et al., 2004) or CREB in pancreatic β-cells (Jhala et al., 2003). It is not known whether PKR interacts with either of these two transcription factors. We found that PKR had no effect on the activity or translocation of CREB in HepG2 cells (not shown). However, silencing the PKR gene significantly increased the phosphorylation of FoxO1 at Ser256 (Figure 5A). Phosphorylation at Ser256 inhibits the DNA-binding activity of FoxO1 and its nuclear import by suppressing the nuclear targeting signal on its DNA-binding domain (Rena et al., 2001, 2002). We also performed nuclear extraction and measured the nuclear level of p-FoxO1 Ser256 in response to PKR silencing (Supplementary Figure 4A). The nuclear level of p-FoxO1 Ser256 was significantly increased upon silencing of PKR, further confirming that PKR regulates FoxO1 activity by regulating its phosphorylation and translocation. Our results suggest, for the first time, that PKR reduces the phosphorylation of FoxO1 and thereby activates it. As a protein kinase, PKR does not have the phosphatase activity that is required to dephosphorylate FoxO1. However, PKR is known to phosphorylate B56α, the regulatory subunit of PP2A, which then activates the catalytic subunit of PP2A (Xu and Williams, 2000). PP2A, in turn, is known to dephosphorylate FoxO1 at Ser256 (Yan et al., 2008). Indeed, silencing PKR significantly suppressed the activity of PP2A (Figure 5B), thereby confirming that PKR activates PP2A in HepG2 cells. To further investigate the potential involvement of PP2A in mediating the dephosphorylation of Ser256 on FoxO1 by PKR, we overexpressed PKR in HepG2 cells and inhibited the activity of PP2A with OA. Overexpressing PKR by transfecting the plasmid pCMV6-hPKR into HepG2 cells reduced the phosphorylation of FoxO1 at Ser256 (Figure 5C), supporting our results that PKR induces dephosphorylation of FoxO1 at Ser256 (Figure 5A). More importantly, OA, a specific PP2A inhibitor, restored the serine phosphorylation level of FoxO1 in PKR-overexpressing cells (Figure 5C). Similarly, overexpressing PKR also decreased the Ser256 phosphorylation of FoxO1 in the nucleus, and OA restored the nuclear level of p-FoxO1 Ser256 in PKR-overexpressing cells (Supplementary Figure 4B). Taken together, PKR dephosphorylates and activates FoxO1, mediated by PP2A. To confirm the positive effect of FoxO1 on the expression of IRS2 in HepG2 cells, we performed gene silencing of FoxO1. Silencing the gene expression of FoxO1 significantly reduced the protein level of IRS2, but not IRS1 (Figure 5D), suggesting that FoxO1 controls the expression of IRS2 in HepG2 cells.

Figure 3 legend: Cells were harvested, and Western blot analysis was performed to detect the total and phosphorylated levels of JNK and IKK (A). Reverse transfection of suspended HepG2 cells was performed with scrambled siRNA (control) or siRNAs of JNK1 and JNK2 together or siRNA of IKK for 24 h, and the transfected cells were cultured in regular media for another 24 h. Next, the forward transfection of empty vector pCMV6-XL5 (pCMV6) or plasmid containing the PKR cDNA sequence (pCMV6-hPKR) was performed, followed by insulin treatment (0.5 nM) for 15 min (B). After treatment, cells were then harvested, and Western blot analysis was performed to detect the protein levels of IKK and JNK and the total and phosphorylated levels of PKR and IRS1 (B).
To further confirm that FoxO1 mediates the effect of PKR on the IRS2 protein level, we overexpressed PKR in HepG2 cells as well as silenced FoxO1. Overexpressing PKR in HepG2 cells increased the protein level of IRS2 (lanes 1 vs. 2 in Figure 5E), whereas silencing FoxO1 in control and PKR-overexpressing cells significantly reduced the protein level of IRS2 (Figure 5E), confirming that PKR up-regulates the protein level of IRS2 through the transcription factor FoxO1. In summary, our results suggest that PKR differentially regulates the IRS proteins (Figure 6). First, PKR induces phosphorylation of IRS1 at Ser312 and suppresses tyrosine phosphorylation of IRS1, mediated by the IRS kinases JNK and IKK. Second, PKR activates a transcription factor, FoxO1, which up-regulates the gene expression of IRS2. In addition, we as well as others have identified PKR as a downstream substrate of insulin signaling. Taken together, our results suggest that PKR is involved in insulin signaling through a feedback mechanism and regulates the central transmitters of intracellular insulin signaling in the liver, IRS1 and IRS2, through different pathways (Figure 6).

DISCUSSION

In the present study, we identified the effects of PKR on two major IRS proteins, IRS1 and IRS2, in HepG2 cells. First, PKR up-regulates the phosphorylation of IRS1 at Ser312, which in turn suppresses the tyrosine phosphorylation of IRS1. This effect of PKR is mediated by JNK and IKK (Figure 3). It is well known that PKR stimulates the transcription factor NF-κB by activating IKK (Bonnet et al., 2000), and this process does not require the catalytic activity of PKR. Instead, the N-terminus of PKR is responsible for the activation of IKK (Bonnet et al., 2006). As discussed previously, PKR has also been reported to play a role in the phosphorylation of the three MAPKs, JNK, ERK, and p38 MAPK, in the rank order of JNK > p38 MAPK > ERK (Zhou et al., 2003).
Among the three MAPKs, JNK has been suggested to play a central role in inducing the inhibitory serine phosphorylation of IRS1 (Aguirre et al., 2000; Hirosumi et al., 2002). We did not test the effects of PKR on the other two less responsive MAPK proteins, ERK and p38 MAPK, which were also suggested to induce phosphorylation of IRS1 at Ser residues (Rui et al., 2001; Fujishiro et al., 2003).

Figure 4 legend: Reverse transfection of suspended HepG2 cells was performed with scrambled siRNA (negative control) or siRNA of PKR for 24 h, and the transfected cells were cultured in regular media for another 24 h. Cells were then harvested (D) or treated with different concentrations of insulin for 15 min and harvested after the treatment (A). Confluent HepG2 cells were treated with 5 µM PKR inhibitor (PI) or its analogue as a negative control (NC) or 10 mM 2-AP dissolved in PBS:glacial acetic acid (200:1) (GA) for 12 h (B and C). Western blot analysis was performed to detect the total and phosphorylated levels of IRS2 at Ser731 (A and B). RT-PCR was performed to detect the gene expression levels of IRS1 and IRS2 in response to the PKR inhibitors (C) or siRNA of PKR (D). Gene expression data were expressed as the average of nine samples ± SD from three independent experiments. The protein levels of IRS2 were quantified by normalizing to β-actin, and the phosphorylation levels of IRS2 at Ser731 were quantified by normalizing to total IRS2 levels. Both the protein and phosphorylation levels of IRS2 are expressed as the average of three samples ± SD from three independent experiments. Student's t test was performed for analyzing the differences between siPKR and scrambled siRNA (negative control). Significantly lower than negative control, i.e., scrambled siRNA (A and D) or chemical analogue of the PKR inhibitor (B and C); *p < 0.05. Significantly lower than control (GA, solvent of 2-AP); **p < 0.05.
Thus, although JNK may be an important intermediate, it is not likely the only one involved in mediating the signaling pathway from PKR to IRS1 phosphorylation. The phosphorylation of IRS1 at Ser312 by PKR provides a potential mechanism through which ceramide, an activator of PKR, promotes the inhibitory serine phosphorylation of IRS1. Indeed, other activators of PKR have also been shown to act on IRS1 in a similar manner. For example, HCV core protein, which directly binds and activates PKR (Yan et al., 2007), has been shown to induce the phosphorylation of IRS1 at Ser312 (Banerjee et al., 2008). The ability of PKR to promote serine phosphorylation of IRS1 provides a possible mechanism by which PKR mediates HCV infection and the inhibition of IRS1 activity (Aytug et al., 2003; Banerjee et al., 2008). A novel function of PKR that we uncovered is its regulation of the protein level of IRS2 through the transcription factor FoxO1. The effect of PKR on FoxO1 has not been studied previously. We show that PKR dephosphorylates and activates FoxO1, mediated by the protein phosphatase PP2A. FoxO1 directs the expression of genes involved in a wide variety of cellular responses, one of which is the regulation of glucose homeostasis and energy metabolism (Gross et al., 2008). Insulin induces the phosphorylation of FoxO1, thereby inhibiting its activity (Barthel et al., 2005). In healthy states, the low insulin level during fasting sustains the activity of FoxO1, which facilitates the transcription of key enzymes involved in gluconeogenesis (Puigserver et al., 2003). FoxO1 also can up-regulate IRS2 gene expression through a feedback loop (Zhang et al., 2001; Ide et al., 2004), which occurs in HepG2 cells (Figure 5). However, in insulin-resistance models, the persistent activation of FoxO1, due to disruption of the IRS-PI3K-Akt pathway, contributes to the development of hyperglycemia and glucose intolerance (Samuel et al., 2006; Dong et al., 2008).
Therefore, FoxO1, which becomes activated during insulin resistance, is believed to serve as a dominant regulator of hepatic gene expression. The roles of FoxO1 in regulating fasting glucose homeostasis and in enhancing hyperglycemia and glucose intolerance in insulin-resistance models raise the question of whether PKR activation is involved in the regulation of glucose metabolism mediated by FoxO1. Given the inhibitory effect of insulin on PKR and the regulation of FoxO1 phosphorylation by PKR, we hypothesize that insulin-stimulated FoxO1 phosphorylation is mediated by PKR (Figure 6). However, it has been previously shown that insulin stimulates FoxO1 phosphorylation through Akt, which directly induces the phosphorylation of the FoxO family of transcription factors, including FoxO1 (Brunet et al., 1999; Guo et al., 1999; Rena et al., 1999; Barthel et al., 2005). As shown in Figure 6, the effect of PKR on insulin-stimulated FoxO1 phosphorylation is a pathway complementary to the direct phosphorylation of FoxO1 by Akt. Indeed, we found that insulin is also able to stimulate the phosphorylation of FoxO1 even in PKR-overexpressing cells (data not shown). Therefore, the PKR pathway identified in this study does not preclude the strong direct phosphorylation of FoxO1 by Akt induced by insulin. In the present study, we identified that PKR regulates IRS1 and IRS2 through different mechanisms. Although the protein structures of the IRS proteins are highly conserved, both animal and cell studies indicate that IRS1 and IRS2 serve complementary, rather than redundant, roles in insulin signaling (Araki et al., 1994; Tamemoto et al., 1994; Withers et al., 1998; Kido et al., 2000). It has been suggested that hepatic IRS1 and IRS2 control different aspects of hepatic metabolism, with IRS1 more closely related to glucose homeostasis and IRS2 more closely related to lipid metabolism (Taniguchi et al., 2005). More recently, however, using specific knockouts of liver IRS1 or IRS2, researchers demonstrated that the two proteins may overlap in their insulin action (Dong et al., 2008). Nevertheless, the different functions of IRS1 and IRS2 in hepatic insulin signaling have not been completely elucidated. Investigators have claimed that IRS1 and IRS2 are expressed differently and exert distinct functions under fasting and refeeding conditions (Kubota et al., 2008).

[Figure 5 legend: Involvement of FoxO1 in mediating the effect of PKR on IRS2. Reverse transfection of suspended HepG2 cells was performed with scrambled siRNA (control) or siRNA of PKR (A and B) or siRNA of FoxO1 (D) for 24 h, and the transfected cells were cultured in regular media for another 24 h. Forward transfection of the empty vector pCMV6-XL5 (pCMV6) or the plasmid containing the PKR cDNA sequence (pCMV6-hPKR) was performed, and the cells were then treated with OA (2 nM) or its vehicle, ethanol, as a control, for 1 h (C). Reverse transfection of scrambled siRNA (negative control, lanes 1 and 2) or siRNA of FoxO1 (siFoxO1, lanes 3 and 4) was performed, followed by forward transfection of the empty vector pCMV6-XL5 (pCMV, lanes 1 and 3) or the plasmid containing the PKR cDNA sequence (hPKR, lanes 2 and 4) (E). After treatment, the cells were harvested, and Western blot analysis was performed to detect the total and phosphorylated levels of FoxO1 (A and C), the total levels of IRS1, IRS2, FoxO1 and β-actin (D and E), and the total and phosphorylated levels of PKR (E). PP2A activity assay was performed to detect the phosphatase activity of PP2A (B). The phosphorylation of FoxO1 at Ser256 normalized to total FoxO1 protein levels (C) and the protein levels of IRS1 and IRS2 normalized to β-actin (E) are expressed as the average of three samples ± SD from three independent experiments. Student's t test was performed, and p values were calculated for analyzing the differences between the indicated samples.]
IRS1 is stably expressed in the postprandial state, while IRS2 is highly expressed in the fasted state and downregulated in the fed state (Dong et al., 2008; Kubota et al., 2008). This has been proposed to be mainly due to the different circulating insulin levels during the fasted and fed states (Dong et al., 2008; Kubota et al., 2008). High levels of insulin, in the fed state, suppress IRS2 mRNA and protein expression at the transcriptional level, through the PI3K-Akt pathway (Hirashima et al., 2003). Because Akt serves as a kinase that directly induces the phosphorylation of FoxO1 (Barthel et al., 2005), the downregulation of Akt due to the low insulin levels during fasting results in dephosphorylation and therefore the translocation and activation of FoxO1 (Rena et al., 2001, 2002). This then contributes to the up-regulation of IRS2 gene transcription during fasting. Along with the differential expression patterns of IRS1 and IRS2, a functional relay exists between IRS1 and IRS2 in mediating hepatic insulin signaling during fasting and after refeeding. In the fasted state, especially the late stage of fasting, the major downstream target of insulin signaling, PI3K, is more associated with IRS2 than with IRS1 (Kubota et al., 2008). Therefore, the elevated expression of IRS2 in the fasted state may serve as a complementary mechanism to compensate for the decreased level of circulating insulin (Dong et al., 2008; Kubota et al., 2008). On the other hand, after refeeding, the IRS1-associated PI3K starts to rise, whereas the IRS2-associated PI3K decreases (Kubota et al., 2008). This suggests a switch of the hepatic insulin signaling hub protein, from IRS2 in the fasting state to IRS1 in the fed state. Thus, insulin signaling is mediated primarily through IRS1 after refeeding, whereas IRS2 plays a more predominant role during fasting, compensating for the loss in insulin signaling activity due to the low insulin level (Kubota et al., 2008).
Considering the distinct expression and functions of IRS1 and IRS2 during the fasted versus fed states, we propose that PKR may regulate insulin signaling activity differentially in the fasted versus fed states. Taking into account the inhibitory effect of insulin on the phosphorylation of PKR, we propose in the present manuscript that insulin suppresses the activity of PKR in the fed state, relieving the negative feedback that acts through IRS1 Ser phosphorylation and thereby promoting insulin signaling activity through IRS1. On the other hand, when circulating insulin is at a low level, PKR remains at a relatively higher activity level during the fasted (vs. fed) state, enhancing IRS2 gene expression through FoxO1 and promoting hepatic insulin signaling to compensate for the low insulin levels during the fasted state. Taken together, the disparate effects of PKR on the IRS1 and IRS2 proteins may serve as a potential mechanism by which the two IRS proteins are differentially regulated. In summary, we identified that PKR functions as a key regulator of the central transmitters of insulin signaling in liver cells, IRS1 and IRS2 (Figure 6). PKR induces the inhibitory phosphorylation of IRS1 at Ser312 (Figures 1-3) and activates the transcription factor FoxO1, which up-regulates the protein expression level of IRS2 (Figures 4 and 5). Taken together, PKR appears to be an important player in the regulation of insulin signaling through the major transmitters of insulin signaling, IRS1 and IRS2.

[Figure 6 legend: Proposed signaling pathways through which PKR is involved in the insulin signaling network in HepG2 cells. Insulin activates insulin signaling through IR and IRS, leading to the suppression of PKR phosphorylation at Thr451. PKR induces the phosphorylation of IRS1 at Ser312 through two other kinases, JNK and IKK. In addition, by activating PP2A, PKR dephosphorylates the transcription factor FoxO1, which up-regulates the gene expression of IRS2.]
Structure of the Kasbah fold zone (Agadir bay, Morocco). Implications on the chronology of the recent tectonics of the western High Atlas and on the seismic hazard of the Agadir area

Detailed re-interpretation of the north-eastern segment of a profile acquired across the Agadir bay along a NE-SW trend and crosscutting the main structures, together with analysis of available isochron maps, allowed us to retrace the geological history of the offshore western High Atlas. Two tectono-stratigraphic sequences were distinguished: Unit II, which displays a simple structure, lying unconformably on Unit I, which has a more complex structure dominated by a reverse fault (F1) striking E-W and dipping to the north. Correlation with boreholes Souss-1 and AGM-1 allowed us to assign Unit I to the Triassic - Palaeogene and Unit II to the Miocene - Present. The NE fault block shows a ramp-flat fault plane (F2) with an overlying SW-vergent fold that can be interpreted as a fault-bend fold. Three main stages were distinguished: (1) during the Cretaceous, F1 could have been a syndepositional normal fault with the NE block moving downwards; (2) towards the beginning of the Tertiary, displacement on plane F2 induced the development of a fault-bend fold and the erosion of the forelimb and hinge of the fold; displacement along F2 was transferred to fault F1; (3) afterwards, during the Miocene, reverse motion on F1 deformed and tilted plane F2 and accentuated the folded structure. This evolution is typical of a frontal basin above a fault-related fold. Evaluation of the thickness and bed-depth differences shows that the largest growth rate was recorded in Late Miocene times. Seismic activity recorded in the Agadir bay appears to be clearly related to this fault zone, as inferred from focal mechanisms. Seismic moment evaluation suggests that earthquakes of magnitude Mw ≥ 6 are likely to occur, but could not be much larger because of the fault segmentation geometry of the High Atlas Front.
Introduction In the Moroccan High Atlas, the southernmost Alpine chain of the western Mediterranean, the major uplift phase was recorded in Late Miocene - Pliocene times, as can be inferred from the thick conglomerate formations deposited at the chain borders, which contain clasts derived from the uplifted relief (Duffaud, 1981; Petit et al., 1985; Medina, 1986, 1994; Fraissinet et al., 1988; Zouine, 1993; Chellaï & Perriaux, 1996; Aït Hssaine, 2000). In the western High Atlas (fig. 1), which represents the Atlantic termination of the Atlas chain, the Palaeogene and pre-Moghrebian (Pliocene) Neogene cover near Agadir was in great part eroded, except in a few locations, where the Maastrichtian beds are overlain by the Early Miocene "White Conglomerate" (Conglomérat blanc), the probable Messinian, and the Pliocene / Moghrebian formations (Allard et al., 1958; Duffaud, 1962; Ambroggi, 1963; Weisrock, 1980; Alonso-Gavilán et al., 2001). In the marine part, boreholes and seismic-reflection profiles acquired since the late 1970s (Flament-Lieffrig, 1979), and in particular during the last decade (Mridekh et al., 2000; Samaka, 2001; Mridekh, 2002; Hafid et al., 2006), have provided valuable information on the Cenozoic strata, because of their better preservation, frequently with a large thickness. The numerous structures mapped (fig. 1) are related to halokinesis and syndepositional tectonics, mainly in a compressional setting. One of the most conspicuous structures in the area is the Kasba Fold Zone (zone des plis de la Kasba; Mridekh et al., 2000; Mridekh, 2002; hereafter abbreviated to KFZ), also called the "Elkasba monocline" (flexure d'Elkasba) by Hafid et al. (2006), which is the northernmost offshore structure of the Agadir bay. Interpretation of the structures related to this fold zone suggests that, contrary to the emerged part of the western High Atlas, deformation seems to have been continuous in Tertiary times.
In this paper, we expose new details on the geometry and chronology of the KFZ frontal fault, based on the re-interpretation of a selected seismic profile segment, and referring to fold kinematic constraints (e.g. Suppe, 1983; Suppe & Medwedeff, 1990; Wickham, 1995). These new observations are also useful for updating the chronology of the Atlasic compressional events in the area, and for assessing the seismic hazard of the Agadir city, which was destroyed on February 29th, 1960, by a large earthquake (I MSK = X; M = 6), with a death toll of about 12,000 (see review in Cherkaoui et al., 1991). Structural setting The studied profile (hereafter referred to as "Profile A"), trending NE-SW, is located in the Agadir bay, west of the city (fig. 1). The formations in the bay are crosscut by a major, NE-SW striking vertical fault, called the "Marine" Tildi fault-corridor (couloir de failles du Tildi; Mridekh, 2002), reaching 30 km in length (fig. 1, TD). This fault separates two structurally contrasting blocks (Mather, 1980; Samaka, 2001; Mridekh, 2002): -The south-eastern block shows a simple structure; isochron curves of the main units suggest a steady dip to the NW, in continuity with the southern limb of the Souss asymmetric syncline, the axis of which, trending ENE-WSW, is located south of Agadir. -The more complex north-western block is folded, faulted and crosscut by diapir structures. Isochrons of the top of the Albian, striking NW-SE near the coastal area, rapidly deepen to the southwest, reaching 2.5 s TWT (Mather, 1980; Samaka, 2001). The same can be observed for the top of the Turonian (Mridekh, 2002). On the map and on the regional profile (figs 1 and 2), the most important structures are, according to the nomenclature of Mridekh et al. (2000) and Mridekh (2002): -The Kasba fold zone (KFZ), a structural high oriented E-W to WNW-ESE, located in continuity with the southwestern closure of the Aït Lamine anticline (Duffaud, 1962). -The Agadir and Massa basins, which are synclinal depressions filled with Neogene to Quaternary, and minor Palaeocene, deposits. These basins are separated from each other by the E-W trending Massa Front. -The Massa Front, trending mainly E-W, becoming WNW-ESE to the west. This structure corresponds to an anticline developed upon a blind thrust (Hafid et al., 2006), which can be interpreted as a fault-propagation fold, with a probable interference of salt (Mridekh et al., 2000). Genetically, the development of this fold in a corner may also be related to left-lateral strike-slip motion of the Tildi Marine fault. All these structures appear west of the trace of the Tildi marine fault, which is injected with a Triassic salt wall, partly within a releasing stepover.
The much larger amount of shortening in the northwestern block indicates a sinistral strike-slip motion, clearly transferring the front of the High Atlas to the southwest. Borehole data Among several boreholes performed in the Agadir bay, the nearest to the studied profile are AGM-1, Souss-1 (fig. 1) and MARCAN-1. The Souss-1 borehole (fig. 3) was used for tying the seismic units, with the exception of the shallowest units of the Agadir offshore basin, which wedge out towards the borehole. -Unconsolidated shales and siltstones (600-650 m) of Lutetian age. -Siltstones, grey to brown shales and shales intercalated with siltstones (650-1,950 m); this unit is dated as Albian at the base, and mainly covers the Cenomanian; the top is of Turonian to Maastrichtian age. Seismo- and tectono-stratigraphy For the purpose of our structural study, we distinguished two main units (fig. 4), designated in the following as Unit I and Unit II, separated by a major unconformity (D on fig. 4), which corresponds to the "Tertiary unconformity" of Samaka (2001). Unit I consists of a set of high-amplitude reflectors, parallel to each other in most of the section except at the top, where lenticular-shaped structures may reflect large channels filled with deposits. Correlation to borehole data shows that Unit I corresponds to the Triassic, Jurassic, Cretaceous and Paleogene strata, particularly to seismic units Tr to Mo1-1 in the terminology of Mridekh et al. (2000) and Tr to Pg of Hafid et al. (2006). In this unit, it is possible to observe regional-scale seismic horizons (Mridekh et al., 2000; Samaka, 2001; Mridekh, 2002), emphasized by strong reflectors, such as those of the tops of the Triassic, the Jurassic, the Albian and the Cenomanian units (fig. 2). The uppermost levels of Unit I are eroded and truncated by the major angular unconformity D (fig. 2), against which they terminate in a toplap pattern (fig. 4A).
Tying to boreholes allowed us to correlate Unit II with units Mo1-2 to Mo1-6 of Mridekh et al. (2000) and Ng of Hafid et al. (2006). In detail, and according to the nomenclature of Mridekh (2002), four sub-units can be distinguished; they correspond to Mo1-2 (Early Miocene), Mo1-3 (Middle Miocene), Mo1-4 (Late Miocene) and Mo1-5 to Mo1-6 (Pliocene and Quaternary). Mo1-4 and Mo1-5 are separated by the minor unconformity d (fig. 4). Structure In the north-easternmost segment of the studied profile (fig. 2), previous studies have interpreted the steeply-dipping frontal fold or monocline as a flower structure (Mridekh et al., 2000; Mridekh, 2002; Hafid et al., 2006). However, detailed interpretation (fig. 4B), supported by a depth section (fig. 5), reveals that the major fault plane (F1) has an apparent dip to the NE, which, measured on the depth section, is 60°, with a short step between 6,000 and 7,000 m. We also interpreted a branch dipping about 50° NE, merging upwards with the main plane at ca. 4,000 m. Within Unit I, this fault separates two blocks with a different structural pattern: -In the footwall, the structure is rather simple, with sub-horizontal reflectors, except in the proximity of fault plane F1, where they are tilted to the SW. -The hanging wall shows a box-fold-like anticline above a strong reflector (R) with a sinuous trace that can be interpreted as a flat-and-ramp fault surface (F2 in fig. 4B). In the northeastern limb, reflectors above R are sub-parallel, whereas in the southwestern limb they terminate against R with downlaps, suggesting thickening of strata towards fault F1 in the SW (fig. 4A, B). The fold also shows two couples of axial surfaces that delimit kink bands (fig. 4B, dashed lines). Unit II, which is nearly horizontal in the southwest of the profile in fig. 4B, has a SW dip in its northeastern part. This change in dip occurs after crossing a synclinal axial surface dipping to the NE (fig. 4B).
The strata are arranged in a fan-like geometry above the south-western limb of the anticline, with an apex oriented to the NE. The higher part of the limb suggests a discrete axial surface located upon the eroded hinge of the underlying fold in Unit I. In detail, the fan geometry in Unit II is different within the kink band of the limb between both axial surfaces: in sub-unit Mo1-3, the reflectors remain parallel to each other and terminate in toplaps against d; above the latter, the reflectors are rather uniformly convergent towards the anticline axial surface. Kinematics of structures From the geometrical and kinematic point of view, several models can be investigated to find a kinematic pathway that could account for the observed structure: fault-propagation and fault-bend folding (Suppe et al., 1997), trishear folding (Erslev, 1991), displacement-gradient folding (Wickham, 1995), etc. Detailed observation of the growth fan leads to the following remarks: -In sub-units Mo1-2 and Mo1-4, the dips of the reflectors and the thickness of the sets decrease toward the hinge; the converging fan along the fold limb suggests deformation by limb rotation, probably through displacement-gradient folding. -In sub-unit Mo1-3, the reflectors are parallel, with a constant dip, end in toplaps against unconformity d, and thin out above the anticline hinge, suggesting deformation by kink-band migration related to fault-propagation or fault-bend folding. From these two observations, it appears that the kinematic history of the KFZ is variable, resembling in many details other reported cases, such as those of the Sant Llorenç de Morunys anticline in Spain (Suppe et al., 1997), the Lost Hills structure (Wickham, 1995; his fig. 7) and especially the Santa Fe Springs segment of the Puente Hills blind thrust near Los Angeles (fig. 3 in Shaw et al., 2002).
However, retracing the detailed kinematic history is beyond the scope of the present paper, since it would require a profile of much better quality and additional data. 1. The increase in thickness of the Cretaceous beds in the southwestern limb of the anticline in the hanging wall does not seem to be related to halokinesis, which should have led to the development of a symmetric fan in the other limb. This suggests that either F1 was a syndepositional normal fault during the Cretaceous, or that the onset of anticline growth occurred during the Late Cretaceous, with a fan above the frontal part of the fold. Although there are indications for the onset of compression at that time in the distal part of the offshore basin (Hafid, 1999; Mridekh, 2002), it appears more logical to relate the fan to syndepositional normal motion on fault F1, because of the absence of a clear fold. 2. By the latest Cretaceous to Palaeogene, displacement along plane F2 led to the development of a fault-bend fold and subsequent erosion of its hinge (fig. 4). The lack of any comparable feature in the footwall of F1 suggests that displacement on fault F2 was transferred to plane F1. Consequently, within the limits of the datings provided by ONAREP / ONHYM, the first phase of compressional deformation in the area is of latest Cretaceous - Palaeocene age. However, some ambiguity remains on its exact age, as the onshore data indicate that, along the southern side of the western High Atlas, the Cretaceous formations are conformably overlain by the marine Conglomérat blanc, dated as Late Oligocene by Allard et al. (1958), and reassigned to the Early Miocene (Aquitanian / Burdigalian) by Cahuzac (1987) on the basis of paleontological and eustatic arguments. On a regional scale, the offshore area is located in the continuation of the southwestern closure of the Aït Lamine anticline (fig. 1), which is kinematically a fault-bend fold (Mustaphi, 1997).
According to analysis of offshore data, the development of this fold should be of latest Cretaceous age, which is in accordance with data from the northern border of the Atlas (Froitzheim, 1984). 3. Afterwards, during the Early Miocene, reverse displacement on fault plane F1 deformed and tilted plane F2 and accentuated the fold structure, as attested by the continuation of fault plane F1 by the synclinal axial surface. The relatively large thickness of the Neogene strata implies that the rate of deposition was higher than that of anticlinal growth. This compressional regime was constant since the Early Miocene, and is emphasized by the sedimentary fan within units Mo1-3 and Mo1-4. Inland, two major events are recorded by the undifferentiated Mio-Pliocene and Plio-Villafranchian foreland deposits; these two formations are separated by an unconformity reflecting a period of quiescence. Growth rate The mean anticline growth can be inferred from the depth difference of selected reflectors of the growth fan (Table 1A). The mean vertical rate is 0.128 mm/y for the whole of Unit II, considering an Early Miocene age (23 Ma) for its base. This rate remained steady in the Middle Miocene (0.124 mm/y), but seems to have slowed down since the Late Miocene, falling to 0.04 mm/y (Table 1A). Considering individual units (Table 1A), by levelling the top surfaces upon the two fault blocks, the rates are 0.133 mm/y for Mo1-2 + Mo1-3 (under d), 0.2 mm/y for Mo1-4, and only 0.04 mm/y above the latter. The mean rate falls within the boundary values of the 0.1-0.2 mm/y uplift rate over the last 2-2.5 Myr calculated for the Kasbah anticline by Meghraoui et al. (1998), but is much lower than the slip rates calculated by Sébrier et al. (2006) for the Ameskroud fault (0.4 mm/y during the last 35 kyr) and the Tagragra anticline (0.3 mm/y for the last 2-2.5 Myr).
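The rates quoted above are simple depth-difference over elapsed-time quotients. A minimal sketch of the calculation follows; note that the 2,944 m cumulative depth difference is back-computed from the quoted 0.128 mm/y and the 23 Ma age of the base of Unit II, purely for illustration, and is not a value taken from Table 1A:

```python
def vertical_rate_mm_per_yr(depth_diff_m: float, age_ma: float) -> float:
    """Mean vertical growth rate: reflector depth difference (m) over elapsed time (Ma)."""
    return depth_diff_m * 1000.0 / (age_ma * 1e6)  # m -> mm, Ma -> yr

# Illustrative, back-computed input: ~2,944 m of cumulative depth difference
# accrued since the Early Miocene (23 Ma) reproduces the 0.128 mm/y mean rate.
print(round(vertical_rate_mm_per_yr(2944.0, 23.0), 3))  # -> 0.128
```

The same quotient applied over each sub-unit's own time span yields the interval rates (0.133, 0.2 and 0.04 mm/y) quoted in the text.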
These differences are not surprising, because of the uncertainties in ages and tie points between wells and seismic sections, and the use of different parameters (uplift, slip rate) by different authors. In addition, we have used a much larger time span, probably with several episodes of fault quiescence. Whatever the type of calculation, it clearly appears that the main growth stage, with the higher slip rates, occurred during the Late Miocene, which is in accordance with the episode of deposition of Mio-Pliocene conglomerates known inland. Implications on the seismic hazard in the Agadir area The study of the KFZ structure also allowed us to reassess the seismic hazard in the Agadir area, on the basis of a comparison with similar seismogenic structures described from the Los Angeles area (Shaw & Suppe, 1996; Shaw et al., 2002). The review of available data on seismicity (including the online catalogue of earthquakes available at the CNRST website http://sismolag.cnrst.ma/) and on focal mechanisms (El Alami et al., 1992; Medina, 1994, 2008) shows that the events are few (Medina, 1994). For the period 1960-2007, only 20 events were inventoried in the Agadir area between 9.5°-10.05°W and 30.25°-30.7°N (Table 2). The list does not take into account the numerous aftershocks of the 1960 earthquake, since the epicentres of the events before 1992 are generally poorly constrained because of the low number of seismological stations in Morocco at that time. The largest shocks are those of February 29th, 1960 (I MSK = X; M = 6), which destroyed the former city, and the events of April 5th, 1992 (M = 4.7) and November 16th, 2003 (M = 4.2), which were felt by the local population. Seven epicentres, among which those of the April 5th, 1992 event and its aftershocks, are located immediately north of the KFZ trace (fig. 1), strongly suggesting a possible relationship with the fault zone. This is supported by the shallow depth of the foci, in the range of 2 km only.
For the other shocks, the depths of foci are not available (Table 2). Other epicentres are located close to fault systems and diapir walls, as shown in figure 1. Three focal mechanism solutions are available for these events, one of which is composite. The solutions for the February 29th, 1960 event are very different from one author to another, ranging from reverse motion (Wickens & Hodgson, 1967, in Udias et al., 1989) to strike-slip faulting with a N-S trending P axis (Girardin et al., 1977) or an E-W trending P axis. Because of the low quality of recordings and the poor azimuthal distribution of the seismological stations at that date (Stevens & Hodgson, 1968), it is difficult to reach a reliable solution for this event without digitizing and modelling the available records. The focal mechanism solution for the main shock of April 5th, 1992 is thrust faulting along planes oriented N250; 53° (with a sinistral component) and N141; 70° (with a dextral component). The composite solution constructed for the weak aftershocks is also an almost pure thrust fault with both planes oriented E-W (N83; 53° and N296; 42°). The magnitude of any earthquake that should occur along the KFZ fault can be evaluated with the help of the seismic moment equation (e.g. Frohlich & Apperson, 1992): M0 = µ A s, where A is the area of the fault (in cm²), s is the amount of displacement along the fault (in cm) and µ is Young's modulus (taken to be 5 × 10^11 dyn·cm⁻²). The moment magnitude Mw can be expressed by the equation of Hanks & Kanamori (1979): Mw = (2/3) log10(M0) − 10.7, with M0 in dyn·cm. There are also several empirical equations and graphs established by Wells and Coppersmith (1994; their tables 2A-C), which express the relationships between the fault parameters. However, as we have two unknowns, the magnitude and the amount of displacement, we simply use Hanks and Kanamori's equation to assess the maximum magnitude for a given mean displacement along the fault, knowing the theoretical rupture area.
The size of the fault (at least 20 km length × 6 km width = 120 km²) suggests that, for a mean displacement of about 1 m, earthquakes of magnitude Mw = 6.33 are likely to occur, as in the case of the 1960 Agadir earthquake. These data suggest that the KFZ is a potential seismogenic structure that should be taken into account in studies on seismic hazard, as should the Tildi (onshore and offshore segments), Lahouar, El Klea and south-Tagragra faults (Sébrier et al., 2006). However, it appears that larger earthquakes are not likely to occur, because of the highly segmented character of the faults in the western part of the High Atlas Front, contrary to other well known structures, such as the San Andreas, Zagros or Taurus faults. Regional extent of the South Atlas Front The structural pattern in the Agadir bay implies that the front of the High Atlas is offset to the southwest by the Tildi marine fault. This is a common feature along the southern border of the western High Atlas, where several structures are offset by transfer strike-slip faults (see above). Concerning the continuation of the Atlas toward the Atlantic, a review of published papers indicates that the idea of a possible relationship between the South Atlas faults (or Tizi n'Test fault zone) and the Canary Islands was first put forward by Flament-Lieffrig (1979). Nairn et al. (1980) also suggested a connection through the "Tarfaya fault", because of the E-W trend of the Tarfaya margin. In contrast, other authors, such as Schmincke (1982), did not find from petrological data any clear genetic or kinematic relationship between the two domains. Later, Weijermars (1987) suggested continuity between the "South Atlas Fault" and the eastern segment of the Hayes transform fault north of the Canary Islands; he considered this fault zone as the southern boundary of the Eurasian plate. However, the more recent work of Mustaphi et al.
(1997; their figure 4) showed that the inverted El Kléa Triassic fault, the southernmost segment of the Atlas Front inland, rotates from NE-SW to NNE-SSW and runs parallel to the coast without reaching the Canary Islands. At a larger scale, Sahabi et al. (2004; their figure 6) also suggested continuity between the Atlas Front and the eastern boundary of the Tarfaya basin, at least during the Triassic. In the offshore Tarfaya basin, the recent studies of Abou Ali et al. (2004, 2005) show only NE-SW Mesozoic normal faults, without any signs of inversion. Finally, the offshore/onshore studies of Hafid et al. (2000, 2006) suggest that, in the oceanic area, the High Atlas folds rotate and terminate in a mainly thin-skinned, salt-driven, westward-thrusting pattern, which we interpret as an escape pattern. At depth, it is noteworthy that there is a lithospheric thermal dome beneath the High and Middle Atlas and the Rif, which reaches the Agadir area to the west (Frizon de Lamotte et al., 2008). It is quite possible that this anomaly extends to the west and then to the Canary Islands; however, the problem cannot be assessed without a detailed geophysical study, and especially without collecting geothermal data along the Tarfaya margin (A. Rimi, pers. comm.). We conclude from the exposed data that the Atlas front may terminate in a Rif/Betics-like pattern, i.e. as an arc within the Atlantic, with no shallow connection with the Canary Islands; however, at a lithospheric scale, both domains may be thermally connected, but this needs to be tested by collecting thermal and gravimetric data along the Tarfaya segment. Conclusions 1. Re-interpretation of the north-eastern segment of a profile acquired across the Agadir bay along a NE-SW trend shows two tectono-stratigraphic sequences: Unit II, which displays a simple structure, lying unconformably on Unit I, which has a complex structure dominated by a NE-dipping reverse fault (F1).
Correlation with boreholes Souss-1 and AGM-1 allowed us to assign Unit I to the Triassic to Palaeogene and Unit II to the Miocene-Present.

2. The NE block shows a fault plane of the ramp-flat type (F2); the overlying fold can be interpreted as a fault-bend fold with a SW vergence.

3. Three stages were distinguished: (1) during the Cretaceous, F1 could have been a syndepositional normal fault with the NE block moving downwards; (2) towards the end of the Cretaceous or by the Palaeogene, displacement on plane F2 induced the development of a fault-bend fold and led to erosion of the forelimb and hinge of the fold; displacement along F2 was transferred to fault F1; (3) afterwards, during the Miocene, the reverse motion of F1 deformed and tilted the plane F2 and accentuated the folded structure. This evolution is typical of a frontal basin above a fault-propagation fold; growth of the anticline occurred mainly during Late Miocene times.

4. Seismic activity recorded in the Agadir bay appears to be clearly related to this fault zone, as also inferred from focal mechanisms. Seismic-moment evaluation suggests that earthquakes of magnitude Mw = 6 or more are likely to occur. (In the seismic-moment calculation, μ corresponds to the rigidity modulus, defined by μ = E/2(1+ν), with E the Young's modulus and ν the Poisson's ratio; the calculations remain correct, as the values used correspond to those of the rigidity.)
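The magnitude estimate quoted above follows from the standard seismic-moment relation M0 = μ·A·D together with the Hanks-Kanamori moment-magnitude scale. The short sketch below reproduces it; the crustal rigidity μ = 3×10^10 Pa is a typical value assumed here for illustration, not a value quoted in the paper:

```python
import math

def moment_magnitude(area_km2, slip_m, mu_pa=3.0e10):
    """Mw from fault area and mean slip: M0 = mu * A * D (in N*m),
    then the Hanks-Kanamori scale Mw = (2/3) * (log10(M0) - 9.1)."""
    m0 = mu_pa * (area_km2 * 1.0e6) * slip_m  # convert km^2 to m^2
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# 20 km x 6 km fault plane with ~1 m mean displacement, as in the text
print(round(moment_magnitude(area_km2=120.0, slip_m=1.0), 2))  # → 6.3
```

Rigidities between 3.0 and 3.3×10^10 Pa give Mw ≈ 6.30-6.33, consistent with the quoted 6.33; the estimate is only logarithmically sensitive to μ.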
Surveillance of testicular microlithiasis? Results of a UK-based national questionnaire survey Background The association of testicular microlithiasis with testicular tumour and the need for follow-up remain largely unclear. Methods We conducted a national questionnaire survey of consultant BAUS members (BAUS is the official national organisation, like the AUA in the USA, of practising urologists in the UK and Ireland) to provide a snapshot of current attitudes towards investigation and surveillance of patients with testicular microlithiasis. Results Of the 464 questionnaires sent to the BAUS membership, 263 (57%) were returned. 251 returns (12 were incomplete) were analysed; of these respondents, 173 (69%) do and 78 (31%) do not follow up testicular microlithiasis. Of the 173 who do follow up, 119 (69%) follow up all patients while 54 (31%) follow up only a selected group of patients. 172 of 173 use ultrasound scanning, while 27 (16%) check tumour markers. 10 (6%) arrange an ultrasound scan every six months, 151 (88%) annually, and 10 (6%) at longer intervals. 66 (38%) intend to follow up these patients for life, 80 (47%) until 55 years of age, and 26 (15%) for up to 5 years. 173 (68.9%) believe testicular microlithiasis is associated with CIS in < 1% of cases, 53 (21%) think the figure is between 1% and 10%, while 7 (3%) believe it is > 10%. 109 (43%) believe that patients who develop a tumour will have a survival benefit from follow-up, while 142 (57%) do not. Interestingly, 66 (38%) of those who follow up these patients do not think there is a survival benefit. Conclusion There is significant variability in how patients with testicular microlithiasis are followed up. However, a majority of consultant urologists nationally believe surveillance of this patient group confers no survival benefit. There is a clear need to clarify this issue in order to recommend a coherent surveillance policy.
Background Ever since Doherty et al [1] described testicular microlithiasis on ultrasound scan in the late 1980s, interest in this subject has been increasing. However, despite an association of testicular microlithiasis with testicular tumour [2-8], its aetiological role and the need for follow-up have remained largely unclear. The incidence of microlithiasis detection has increased with the development of more sensitive ultrasound transducers in the recent past, and this has in turn increased patient anxiety in addition to the NHS workload. The cost of ongoing radiological surveillance of this patient group could also be considerable. A recent prospective follow-up study of patients with incidentally diagnosed testicular microlithiasis by Raymond A. Costabile [9,10] shows no link between incidentally diagnosed testicular microlithiasis on ultrasound and testicular tumour. However, the importance of testicular microlithiasis in patients at high risk of developing testicular tumour, such as those with cryptorchidism, a small atrophic testis, sub-fertility or testicular tumour in the contralateral testis, is not clear and needs further evaluation [12,13]. With conflicting evidence on the rationale for routinely following patients with incidentally diagnosed testicular microlithiasis [9-11], we conducted a national survey to provide a snapshot of current attitudes towards investigation and surveillance of this patient group in the United Kingdom. Methods A standardised questionnaire was sent to the 464 consultants on the British Association of Urological Surgeons (BAUS) register. BAUS is the official national organisation (like the AUA in the USA) of practising urologists in the UK and Ireland. The questionnaire aimed to record individual consultants' preferred practice for managing patients in whom a finding of testicular microlithiasis is made.
Participants were asked initially whether they chose to routinely follow up these patients and, if so, how this was achieved:
• All patients or a selected group only
• Intended duration of follow-up (life-long / up to 55 yrs of age / < 5 yrs)
• Surveillance modality used (clinical examination / ultrasound / tumour markers / biopsy)
• Frequency of follow-up ultrasonography, if utilised
Participants were also asked to record their opinion on whether they felt surveillance of this microlithiasis group conferred a survival advantage for those who go on to develop a testicular malignancy. In addition, the participants' perceptions of the degree of association between microlithiasis and testicular carcinoma-in-situ were also requested. Results Of the 464 questionnaires sent out, 263 (57%) were returned, of which 12 were inadequately completed. A total of 251 were therefore analysed. 173 (69%) of the responding participants routinely choose to follow up patients with testicular microlithiasis while 78 (31%) do not. Of those who do, 119 (69%) follow up all patients while 54 (31%) do so only for a selected patient group. Of the 173 participants who do follow up microlithiasis, all but one consultant used ultrasonography in addition to clinical examination as part of their surveillance. This lone consultant uses annual clinical examination as his preferred method of follow-up. While 10 (6%) of the participating consultants arrange an ultrasound scan on a 6-monthly basis, the vast majority, 151 (88%), do so annually. A further 10 (6%) scan patients at more extended intervals. 27 (16%) consultants request tumour markers in addition to the ultrasound scan, while 10 (6%) would consider biopsy in a selected group of patients. With regard to the duration of follow-up, 66 (38%) of the positive responders would follow up their patients for life, while 80 (47%) would follow them up until they were 55 yrs of age.
26 (15%) would discharge their patients after 5 yrs of surveillance. Based on their understanding of the current literature, 173 (69%) participants believe testicular microlithiasis is associated with carcinoma-in-situ in less than 1% of patients. 53 (21%) felt this figure to be between 1% and 10%, while 7 (3%) believe it to be greater than 10%. 109 (43%) participants believe that surveillance confers a survival benefit for microlithiasis patients who go on to develop testicular malignancy, while 142 (57%) do not. Interestingly, 66 (38%) of the responders who do choose to follow up this patient group do not think there is a survival benefit. Discussion Microlithiasis in the testis can be a histological or a radiological microcalcification; the two are not necessarily the same entity. Of interest to the urologist, however, is radiologically detected microlithiasis. Oiye was the first to describe intratesticular calcifications, in 6 of 192 testicles in autopsy specimens, as early as 1928 [14]. This report was followed a year later by Blumensaat, who reported similar intratubular bodies in postmortem specimens [15]. He felt they were degenerated spermatogonia displaced into the lumen of the seminiferous tubules. Later, Bigger and McAdams, using various histochemical techniques, found that the laminated eosinophilic material was a glycoprotein derived from intratubular secretions, which later calcified [16]. But it was not until 1961 that Azzopardi & Mostofi [17], from the Armed Forces Institute of Pathology in Washington, described the two different types of intratesticular calcification and their associated pathology. They reported the more commonly found rounded, laminated intratubular calcifications associated with cryptorchid testis and adenomatous or inflammatory pathology. They then reported the amorphous haematoxylin-staining calcific bodies in dilated seminiferous tubules found in 13 of 17 patients with widespread choriocarcinoma.
Histochemical methods showed them to consist of phospholipid, protein debris, DNA and calcium phosphate. These calcifications were seen in close association with malignant neoplastic cells. Diffuse microcalcification in the testis on a plain X-ray film was first reported by Priebe & Garret in a 4-year-old boy with an otherwise normal testicle in 1970 [18]. But it was not until the mid-1980s that Doherty et al, using a 10-MHz transducer, first described ultrasonically detected testicular microlithiasis [1]. Ever since, interest in this entity has increased, with several case reports and retrospective studies reporting an association with testicular cancer [2-8]. However, these studies were either isolated case reports or retrospective studies in selected groups of patients. In one series of 263 sub-fertile men, 20% were found to have microlithiasis [12]. In the same study, 20% of the men with bilateral microlithiasis were found to have CIS. Interestingly, there was no association of CIS with unilateral microlithiasis in this study group. In another series, of patients with testicular germ cell tumour, Skakkabaek found a significant association between contralateral testicular microlithiasis and CIS [13]. Clearly, in the high-risk group (described earlier) there seems to be a significant association between testicular microlithiasis and CIS, which needs to be clarified with further longitudinal studies. In an ultrasound screening study involving 1504 men aged 18 to 35 years from the US Army officer corps, Peterson & Costabile [9] found the prevalence of testicular microlithiasis to be 5.6%. In this study, African Americans were found to have a higher prevalence, of 14%, as opposed to whites, who had a prevalence of 4%. However, the incidence of testicular tumour is higher in whites than in African Americans. Analysis of the geographical distribution of these cases showed a negative correlation with the incidence of testicular tumour in the United States.
Interestingly, there was an association with STDs in the regions where testicular microlithiasis had a higher prevalence in this study. In their follow-up report after more than 4 years, presented at the AUA meeting in 2004 in San Francisco, USA, they had not had a single case of testicular tumour among their study subjects with testicular microlithiasis. Our survey confirms that many urologists tend to follow this patient group for a considerable period of time. However, there seems to be considerable variation in surveillance policy. This is likely to have an enormous bearing on cost-conscious NHS practice in future. The estimated cost in the United States of following up all patients with microlithiasis between 18 and 35 years old is about 18 billion dollars per year [10]. It is also known from many studies that there is an average delay of 3-6 months between noticing a testicular lump and seeking medical advice, without significantly affecting cure. With the cure rate for testicular cancer exceeding 90%, it is debatable whether an earlier ultrasound diagnosis will have any more effect on the outcome than self-examination. With the emerging evidence, it seems safe not to routinely follow up patients who are incidentally diagnosed with testicular microlithiasis [9-11]. They should, however, be advised to continue testicular self-examination. The importance of testicular microlithiasis in the high-risk groups is not clear and needs further evidence. Until such time, it would be logical to follow up all patients in the high-risk group. We do acknowledge that the response rate to our questionnaire survey was moderate, with only 57% of questionnaires returned. This carries a small risk of bias, as urologists who actively follow microlithiasis may have been more likely to return the questionnaire than those who are not keen on following these patients. However, the response rate is more likely to be due to the fact that the questionnaire was sent during school term holidays, when many urologists tend to be on annual leave.
The returns were also fairly evenly distributed throughout the UK. Conclusion Our survey highlights significant variation in how patients with testicular microlithiasis are followed up in the UK. The majority of consultants nationally believe surveillance of this patient group confers no survival benefit; however, a significant proportion of them continue to follow up these patients. There is an urgent need to clarify this issue in order to recommend a coherent surveillance policy.
Vibrational chiral spectroscopy with femtosecond laser pulses Vibrational circular dichroism and optical rotary dispersion spectra can provide detailed information about molecular structure and the conformation of biomolecules. Their artefact-free recording with high time resolution is a current experimental challenge. We outline recent progress. Introduction Vibrational transitions can provide direct access to chemical structure, because they are often localized in different parts of a molecule with well-defined orientations of the transition dipole moments. This is exploited in anisotropy measurements in non-linear spectroscopy. A vibrational band shifts in frequency when another one is excited by a pump-laser pulse, either due to direct coupling of the two vibrations or through heating of the molecule. The dependence of the signal on the orientation of the probe-laser polarization is directly related to the angle between the transition dipole moments [1]. For chiral molecules, vibrational circular dichroism (VCD) and vibrational optical rotary dispersion (VORD) offer an additional, powerful probe. They can be understood as a measure of the deviation of the charge motion during a vibration from that of a linear dipole. Even spectrally unresolved or delocalized modes can thus serve as sensitive indicators of molecular structure, in particular in combination with reliable quantum chemistry calculations [2]. The goal of our current research is to record changes in vibrational chiral spectra in the course of a photo-induced reaction with ultrafast time resolution. Measurement Principle Since vibrational circular dichroism and circular birefringence are very small (10⁻³-10⁻⁵) compared to the absorption or linear dichroism of an ensemble of molecules in solution, a conventional polarimetric characterization with mid-infrared light cannot yield chiral vibrational spectra with sufficient precision.
Rather than probing the full Mueller matrix of the sample using a set of different polarizations, time-resolved VCD measurements therefore require mid-IR light pulses with a pure, well-defined state of polarization. As samples are usually non-depolarizing, this implies that Jones matrices are sufficient for the description of the experiment.

[Fig. 1: Experimental configurations. The probe beam, and thus the CFID, can be much more intense without saturating the detector. Y-polarized reference fields (with opposite sign for consecutive probe pulses) are created either before the sample (elliptical light, B), behind the sample (C), or guided around the sample (D). They interfere with the CFID to produce VCD or VORD signals and their pump-induced changes.]

(EPJ Web of Conferences)

The Jones matrix for a sample of isotropically oriented molecules with weak (vibrational) circular dichroism describes the fact that an initially x-polarized electric field acquires a weak y-polarized component with a π/2 phase shift (and vice versa for y-polarized incident light):

    ( E'_x )     (  1   iδ ) ( E_x )
    (      )  =  (         ) (     )        (1)
    ( E'_y )     ( iδ    1 ) ( E_y )

with δ << 1 proportional to the circular dichroism. This new field component is the chiral free induction decay (CFID), which is too weak to be detected unless it is made to interfere with a much stronger reference field. In the case of elliptical polarization, the measured VCD signal can be strongly enhanced when highly elliptical (ellipticity ε << 1) instead of circularly polarized light (ε = 1) is used. However, this comes at the price of an additional polarizer behind the sample, which strongly enhances the sensitivity to linear dichroism and birefringence background signals [3]. It can be seen from equation (1) that, to lowest order, the y-polarized reference field is not affected by the optical activity of the sample. As a result, it may be generated only after the beam has passed the sample (Fig. 1C). The reference field can even be a replica of the incident field that is guided around the sample cell [4] (Fig. 1D).
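The Jones-matrix picture described in the text can be illustrated numerically. The sketch below is illustrative, not part of the authors' analysis: the matrix form and the value of the small chiral parameter δ are assumptions. It shows that an x-polarized field acquires a weak y-component with a π/2 phase shift, and that the phase of a y-polarized reference field selects which part of the CFID the heterodyne signal picks out:

```python
import numpy as np

delta = 1e-4                       # assumed small chiral response (illustrative)
# Weak-CD Jones matrix: an x-polarized field acquires a weak y-component
# with a pi/2 phase shift (and vice versa).
J = np.array([[1.0, 1j * delta],
              [1j * delta, 1.0]])

E_out = J @ np.array([1.0, 0.0])   # x-polarized probe field
cfid = E_out[1]                    # the chiral free induction decay
print(abs(cfid), np.angle(cfid))   # amplitude = delta, phase = pi/2

# Heterodyne with a stronger y-polarized reference field of phase phi:
# phi = pi/2 picks out the quadrature (CD-like) part of the CFID,
# phi = 0 the in-phase (ORD-like) part, which vanishes for this pure-CD matrix.
E_ref = 0.1
for phi in (np.pi / 2, 0.0):
    signal = abs(cfid + E_ref * np.exp(1j * phi))**2 - E_ref**2
    print(round(phi, 2), signal)
```

The quadrature reference yields a signal linear in δ (≈ 2·E_ref·δ), whereas the in-phase reference leaves only the negligible δ² homodyne term; adding a real off-diagonal part to the matrix would model optical rotation and reverse the roles of the two phases.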
When the phase difference between the x-polarized incident field and the y-polarized reference is turned from π/2 to 0 or π, the reference interferes with the in-phase component of the CFID, and optical rotation (VORD) instead of circular dichroism is detected [5]. Experiment A train of alternating left- and right-handed mid-infrared laser pulses is created by synchronizing a 1 kHz amplified femtosecond laser system to the intrinsic 50 kHz frequency of a photoelastic modulator that serves as a variable retarder. The triggers to the laser system are delayed in such a way that the modulator acts as a waveplate with opposite sign for consecutive pulses. Careful control of the exact modulator retardance at the moment a pulse is passing also allows us to compensate for linear birefringence of the sample cell and of the modulator itself. Almost perfectly symmetric left- and right-handed pulses can thus be generated, i.e. pulses with identical x-components and exactly opposite y-components [6]. The first successful transient VCD experiment was carried out using the experimental configuration in Fig. 1A. A transition metal complex was excited by visible light pulses, and the subsequent change in circular dichroism in the C-H stretch region was detected by alternating left- and right-handed circularly polarized mid-IR probe pulses with 5-picosecond time resolution [7] (see Fig. 2). In principle, the time resolution of a pump-probe experiment is limited only by the duration of the laser pulses, which is typically 100 fs in our case. However, in the experimental configuration of Fig. 1 the mid-IR probe pulses need to be spectrally narrowed in a monochromator before reaching the sample, in order to eliminate all polarization-sensitive optics between sample and detector. Frequency dispersion behind the additional polarizer is possible for the experimental configurations shown in Fig. 1B-D.
Not only does this afford femtosecond time resolution, but it can also significantly reduce the measurement time when an array detector is used to simultaneously record data over a 100-200 cm⁻¹ frequency range, the typical bandwidth of mid-IR femtosecond laser pulses. Configurations 1B and 1C have been used with success to record transient electronic CD and ORD data [8,9] (although UV fs-pulses do not have sufficient bandwidth to record spectra), while approach 1D is feasible only in the mid-IR, as phase stability between the two beam paths is required. Benefitting from the enhancement factor in equation (4), static vibrational CD and ORD spectra, which are 1-2 orders of magnitude smaller than electronic signals, could recently be recorded using configurations 1B-1D [4,5]. Fig. 3 shows VORD spectra of limonene, recently recorded in our laboratory using broad-band detection on a 32-pixel MCT detector array [10].

[Fig. 3: (A) Setup for the "single-shot" recording of vibrational ORD spectra. The broad-band mid-IR laser beam is split into probe and reference, which are simultaneously recorded on a 2x32-pixel double-array MCT detector. The PEM changes the phase of the y-polarized CFID alternately by 0 or π with respect to the incident x-polarized laser field. Depending on the detuning φ of the second polarizer from horizontal, different portions of the incident laser field leak to the detector and act as a reference field. (B) Absorption and VORD spectra of limonene; recording time 60 s per enantiomer for three different analyser orientations φ.]

The high signal levels and short recording times achieved in these steady-state measurements are very promising for the future recording of transient chiral vibrational spectra. Conclusion The vibrational circular dichroism and optical rotation of molecules in solution are very small, and detecting their changes during a photoreaction is a challenge in current infrared laser spectroscopy.
Concepts borrowed from ellipsometry, combined with careful control of femtosecond mid-IR pulse polarization, have already produced significant signal-to-noise enhancement in static laser-based chiral measurements. The decisive step toward transient experiments is now the suppression of artifact signals due to pump-induced linear birefringence and phase changes.
Influence of Maximal Running Shoes on Biomechanics Before and After a 5K Run Background: Lower extremity injuries are common among runners. Recent trends in footwear have included minimal and maximal running shoe types. Maximal running shoes are unique because they provide the runner with a highly cushioned midsole in both the rearfoot and forefoot. However, little is known about how maximal shoes influence running biomechanics. Purpose: To examine the influence of maximal running shoes on biomechanics before and after a 5-km (5K) run as compared with neutral running shoes. Study Design: Controlled laboratory study. Methods: Fifteen female runners participated in 2 testing sessions (neutral shoe session and maximal shoe session), with 7 to 10 days between sessions. Three-dimensional kinematic and kinetic data were collected while participants ran along a 10-m runway. After 5 running trials, participants completed a 5K treadmill run, followed by 5 additional running trials. Variables of interest included impact peak of the vertical ground-reaction force, loading rate, and peak eversion. Differences were determined by use of a series of 2-way repeated-measures analysis of variance models (shoe × time). Results: A significant main effect was found for shoe type for impact peak and loading rate. When the maximal shoe was compared with the neutral shoe before and after the 5K run, participants exhibited an increased loading rate (mean ± SE: pre–maximal shoe, 81.15 body weights/second [BW/s] and pre–neutral shoe, 60.83 BW/s [P < .001]; post–maximal shoe, 79.10 BW/s and post–neutral shoe, 61.22 BW/s [P = .008]) and increased impact peak (pre–maximal shoe, 1.76 BW and pre–neutral shoe, 1.58 BW [P = .004]; post–maximal shoe, 1.79 BW and post–neutral shoe, 1.55 BW [P = .003]). There were no shoe × time interactions and no significant findings for peak eversion. 
Conclusion: Runners exhibited increased impact forces and loading rate when running in a maximal versus neutral shoe. Because increases in these variables have been associated with an increased risk of running-related injuries, runners who are new to running in a maximal shoe may be at an increased risk of injury. Clinical Relevance: Understanding the influence of running footwear as an intervention that affects running biomechanics is important for clinicians so as to reduce patient injury. Lower extremity injuries have consistently been problematic for runners regardless of footwear. Taunton and colleagues 17,18 reported that over a 13-week training period, 30% of runners incurred a running-related injury, most commonly patellofemoral pain, iliotibial band friction syndrome, and plantar fasciitis. Since the inception of the cushioned running shoe, its fundamental purpose has been to protect the foot in an effort to reduce running-related injuries. Despite significant advances in shoe technology over the past 50 years, the rate of sustaining a running-related injury has remained relatively stable. 11 Numerous variations of running shoes have been developed to accommodate different types of runners, running styles, and running conditions. Footwear manufacturers have modified the basic components of their running shoe models to accommodate these differences, including midsole cushioning and heel-toe drop. Historically, running shoes fell into 1 of 3 cushioning classifications: (1) neutral, (2) stability, and (3) motion control. In general, individuals with a high amount of pronation were directed to a motion control shoe, those with a moderate amount of pronation were directed to a stability shoe, and those with a minimal amount of pronation, or individuals who supinated, were directed to a neutral shoe.
Up until the past 7 years, traditional running shoes tended to have a heel-toe drop, which refers to the difference between the heel elevation and forefoot elevation of the midsole, of greater than 10 mm. In 2009, the minimalist shoe, defined by very little cushioning and heel drop, became popular among runners. 4 Popularity of these shoes spiked largely because their benefits were espoused by shoe manufacturers and authors of popular-press books, 12 who claimed that a lack of cushioning would reduce injuries by promoting a more natural forefoot-strike pattern. 10 However, popularity of minimal shoes has declined, largely due to research suggesting that adopting a forefoot-strike pattern does not decrease injury risk, improve running economy, or reduce the impact peak or loading rate of the vertical ground-reaction force. 6 Research continues to examine how transitioning from a traditional shoe to a minimal shoe influences running style, lower extremity biomechanics, and risk for injury. 7,15,19 At about the same time that minimal shoe popularity was rising, a company called Hoka One One introduced a highly cushioned "maximal" running shoe, a stark contrast to the minimal shoe. Currently, there is no academic definition of a maximal shoe, but in industry, the defining feature is increased cushioning of the midsole. Since 2010, maximal shoes have slowly gained popularity, with more than 20 variations of maximal running shoes now on the market. Conceptually, this increase in cushioning is thought to improve shock attenuation and reduce the risk of injury. Anecdotally, runners have expressed in the popular press that maximal running shoes reduce or eliminate running-related pains that often appear several miles into their run.
However, despite the increased popularity of maximal shoes in the marketplace, no research to date has investigated the effect of a maximal shoe on biomechanical variables associated with injury, including the loading rate and impact peak of the vertical ground-reaction force 2,13,14 and peak eversion of the rearfoot. 14 Therefore, the primary purpose of this study was to examine the effect of a maximal running shoe versus a neutral running shoe on lower extremity running biomechanics before and after a 5-km (5K) run. We hypothesized that the maximal shoe would result in lower vertical impact peak and loading rates, but that there would be no change in peak ankle eversion, compared with a neutral shoe. We also hypothesized that the impact peak and loading rate would increase after the 5K run in the traditional neutral shoe but not in the maximal shoe.

Participants

Participants were 15 female recreational runners (age range, 23-51 years; mean age, 34 years) who ran a minimum of 15 miles per week and had not run in a minimal shoe for the 6 months prior to the study. Before participating in the study, all participants were running in some form of traditional running shoe, including rearfoot-control and neutral shoes. All runners considered themselves heel-strikers, described as runners who strike the heel to the ground first when running (vs midfoot-strikers or forefoot-strikers). We focused on heel-strikers, as it is estimated that 90% of recreational runners have this foot-strike pattern. 3,8 In addition, our study required that runners had not had an injury within the past month that limited their running for more than 1 week, 21 were not pregnant, and did not have any neurological or vascular disorders. All participants signed an informed consent document approved by the institutional review board at Oregon State University prior to participation on the first day of testing.
Instrumentation

Kinematic data were collected by use of a Vicon 8-camera 3-dimensional motion analysis system (Oxford Metrics Ltd) at a sampling frequency of 250 Hz. The cameras were interfaced to a microcomputer and placed around a floor-embedded force platform (Advanced Mechanical Technologies Inc). The force platform (1000 Hz) was interfaced, via an analog-to-digital converter, to the same microcomputer that was used for kinematic data collection. This interface allowed for synchronization of the kinematic and kinetic data.

Procedures

Participants attended the biomechanics laboratory for 2 separate testing sessions, with 7 to 10 days between sessions. For one of the testing sessions, the participants wore a neutral running shoe (New Balance 880: drop, 10.1 mm; heel height, 33.3 mm; forefoot height, 23.2 mm), and for the other testing session, they wore a maximal shoe (Hoka One One Bondi 4: drop, 6.9 mm; heel height, 41.6 mm; forefoot height, 34.7 mm) (Figure 1). The order of shoes worn was randomized across participants. The procedures were the same for each testing session. Prior to biomechanical data collection, participants' height and mass were recorded. Reflective markers (14-mm spheres) were placed bilaterally over the following anatomic landmarks: the first and fifth metatarsal heads, distal interphalangeal joint of the second toe, medial and lateral malleoli, medial and lateral femoral epicondyles, greater trochanters, and iliac crests. A single marker was placed on the joint space between the fifth lumbar and the first sacral spinous processes. Quadrads of rigid reflective tracking markers were attached bilaterally to the participant's thigh and leg with a custom adhesive taping system. In addition, triads of rigid reflective tracking markers were placed bilaterally on the heel counter of the shoe. Markers were always placed by the same researcher (J.A.T.), who had several years of biomechanics laboratory experience placing markers.
After marker placement, the participant was asked to stand in the center of the calibration area so we could collect a static calibration trial. Once the calibration trial was captured, all markers were removed except those on the quadrads and triads as well as the iliac crest, anterior superior iliac spine, and fifth lumbar/first sacral markers. The participants completed 5 successful running trials for their dominant leg (defined as the leg they prefer to use when kicking a soccer ball). The participants were allowed 3 to 5 practice trials in order to become familiar with the procedures and instrumentation. For each trial, participants ran toward the force plate from a distance of about 7 m and continued to run for about 3 m beyond the force plate. They were asked to run at a pace that was considered a "natural running pace," and this pace was used for all running trials (before and after the 5K run and during each data collection session). We measured and controlled for their pace by using timing gates placed along the runway. Running trials were considered successful if the participant was able to contact the specified foot entirely on the force plate. Following completion of the 5 running trials, each participant was taken to a treadmill located in the same laboratory. The participant was asked what her average pace was for a 5K run (in minutes per mile) and was given 2 minutes to warm up on the treadmill at her pace of choice. After the 2-minute warm-up, the treadmill pace was gradually ramped up to the testing pace over a 30-second duration. Once the treadmill pace was set, the participant ran at that pace for the 5K, and all reflective markers remained on the participant during the run. After the participant completed the treadmill run, she was immediately walked back to the capture area in the biomechanics laboratory. At that time, the participant performed 5 successful running trials for the dominant leg as she had done prior to the 5K treadmill run. 
Data Analysis

Coordinate data were digitized in Vicon Workstation software (Oxford Metrics Ltd). Kinematic data were filtered by use of a fourth-order, zero-lag, Butterworth 12-Hz, low-pass filter, while kinetic data were filtered with a fourth-order, zero-lag, Butterworth 50-Hz, low-pass filter. 16 Visual3D software (C-Motion Inc) was used to quantify 3-dimensional ankle joint kinematics. Joint kinematic properties were calculated by use of a joint coordinate system approach. Peak eversion angle was defined as the maximum ankle joint angle in the frontal plane during stance phase. The method for calculating average vertical loading rate was consistent with that described by Willy and Davis, 20 which entailed the middle 60% of the vertical ground-reaction force curve from heel-strike to the vertical impact peak. These calculations were made with custom Excel software (Microsoft Corp).

Statistical Analysis

Variables of interest included the impact peak of the vertical ground-reaction force, loading rate normalized by body weight (BW), and peak ankle eversion. Differences were determined via a series of 2-way repeated-measures analyses of variance (ANOVA) (shoe × time) (P < .05). When significant differences were found, post hoc comparisons were made with paired t tests (P < .05).

DISCUSSION

The aim of this study was to examine the effect of maximal running shoes on lower extremity running biomechanics before and after a 5K run compared with neutral running shoes. Despite the popularity of maximal running shoes, we believe this is the first scientific investigation reported in the literature to make such a comparison.

Figure 2. Loading rate comparison between the traditional neutral running shoe condition and the maximal running shoe condition following a 5K run. Error bars represent standard error. A significant difference was found between shoe types, P < .05. BW, body weight.
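As a rough illustration of the data-analysis steps described above, the sketch below applies a zero-lag Butterworth low-pass filter with scipy.signal.filtfilt and computes a Willy-and-Davis-style average loading rate over the middle 60% of the vertical GRF rise from heel-strike to the impact peak. The synthetic force trace, sampling rates, and function names are illustrative assumptions, not the authors' Excel implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_zero_lag(signal, cutoff_hz, fs_hz, order=4):
    """Zero-lag Butterworth low-pass filter.

    filtfilt runs the filter forward and backward, cancelling phase lag
    and doubling the effective order, so we design at order // 2.
    """
    b, a = butter(order // 2, cutoff_hz / (fs_hz / 2.0), btype="low")
    return filtfilt(b, a, signal)

def average_loading_rate(vgrf_bw, fs_hz, impact_peak_idx, onset_idx=0):
    """Average vertical loading rate (BW/s) over the middle 60% of the
    vertical GRF rise from heel-strike (onset) to the impact peak."""
    rise = impact_peak_idx - onset_idx
    i0 = onset_idx + int(round(0.20 * rise))
    i1 = onset_idx + int(round(0.80 * rise))
    return (vgrf_bw[i1] - vgrf_bw[i0]) * fs_hz / (i1 - i0)

# Illustrative use: a linear 30-ms rise to a 1.6-BW impact peak at 1000 Hz
# gives a loading rate of exactly 1.6 BW / 0.030 s ~ 53.3 BW/s.
vgrf = np.linspace(0.0, 1.6, 31)  # samples 0..30 ms at 1000 Hz
rate = average_loading_rate(vgrf, 1000, impact_peak_idx=30)
```

On a purely linear rise, the middle-60% slope equals the overall slope, which makes the toy example easy to check by hand; real GRF curves are nonlinear, so the windowing matters.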
Contrary to our hypothesis, the impact peak and loading rate were greater in the maximal shoe compared with the traditional neutral shoe. No differences were seen in peak rearfoot eversion. The majority of recreational runners are classified as heel-strikers, 3,8 who generally exhibit two distinct vertical ground-reaction force peaks: an impact peak and an overall peak (Figure 4). 9 The impact peak is of clinical interest, as high impact peaks have been associated with common running-related injuries such as plantar fasciitis and tibial stress fractures. 13,14 Baltich and colleagues 1 examined the influence of midsole cushioning on the vertical impact peak in 93 recreational runners and found that runners exhibited increased vertical impact forces when wearing softer midsole shoes. The investigators suggested that participants either were "bottoming out" in the soft midsole condition or were modifying their lower extremity stiffness. In the post-data collection discussions for the current study, participants reported they could "really feel" the extra cushioning of the maximal shoe, and many reported that the shoes felt "springy." As such, we doubt that participants were "bottoming out" but rather were relying more heavily on the shoe to attenuate impact forces, which in turn resulted in a higher impact peak. As previously discussed, a higher impact peak could place runners at a greater risk of developing an injury. 13,14 However, it is important to note that the high impact peak occurs with heel-strike and likely causes increased loads to the tibia, calcaneus, and plantar fascia. The increased midsole cushioning likely does not increase loading of the metatarsals; however, we were not able to confirm this under the current study design. In this study, we found that runners displayed a greater loading rate when wearing a maximal shoe compared with the neutral shoe. 
A higher loading rate, which represents the slope of the vertical ground-reaction force prior to the impact peak (Figure 4), has been associated with a higher risk of developing a running-related injury. 13,14 Thus, similar to impact peak, higher loading rates in the maximal shoe may place a runner at an increased risk of developing an injury. We also hypothesized that the impact peak and loading rate would increase after the 5K run in the traditional neutral shoe but not in the maximal shoe. This hypothesis was based on anecdotal reports from recreational runners in the community, who reported "feeling the extra cushion" after 2 to 3 miles into their run. However, we found that the 5K had no influence on the impact peak or loading rate in either shoe condition, indicating that neither a brief accommodation period nor muscular fatigue likely influenced these kinetic variables. In addition to examining kinetics, we were also interested in whether the maximal shoe influenced peak eversion, since this is another biomechanical variable that has been associated with running-related injuries. 14 The maximal shoe is unique in that it offers a highly cushioned midsole, but manufacturers claim that it also provides a considerable amount of motion control and stability because of its wide rearfoot base of support. The maximal shoe midsole/outsole used in this study was wider than the neutral shoe, particularly in the rearfoot portion of the shoe (maximal shoe: forefoot width, 109 mm and rearfoot width, 96 mm; neutral shoe: forefoot width, 103 mm and rearfoot width, 80 mm). However, our findings revealed no difference in peak eversion between the neutral running shoe and the maximal running shoe condition. Therefore, it appears that there is no difference in the influence of a maximal shoe versus a neutral shoe on peak eversion when participants are running over a solid surface in a laboratory setting.
Finally, because recent studies have found that runners, over time and training, may modify their heel-strike pattern to a midfoot- or forefoot-strike pattern when transitioning from a neutral shoe to a minimal shoe, 5 we conducted a post hoc analysis of all running trials to determine whether our participants modified their foot-strike pattern in the maximal shoe condition. This post hoc analysis consisted of viewing all maximal shoe running trials in Vicon software and identifying which portion of the foot hit the force plate first. We found that all participants continued to exhibit a heel-strike pattern across all conditions.

Comparison of impact peaks between the traditional neutral running shoe condition and the maximal running shoe condition following a 5K run. Error bars represent standard error. A significant difference was found between shoe types, P < .05.

A limitation of this study is that the maximal shoe condition was novel to the participants. The observed differences were not changed by the 5K run; however, we did not assess whether these differences persisted over a greater duration of exposure to the shoe. Allowing runners to gradually transition or adapt to the shoe over a period of several weeks may yield different results. Placing markers directly on the shoes limited our ability to quantify true ankle eversion. In addition, the exclusion of male runners limits our findings to only healthy female runners within the given age range. A final limitation is related to test-retest reliability. Our kinematic model is commonly used in the running biomechanics research reported in the literature; however, this model is most reliable for measuring sagittal plane kinematics. Future studies should examine how runners adapt to running in a maximal shoe over a period of time such as 6 weeks.
CONCLUSION

Runners who were classified as heel-strikers exhibited increased impact forces and loading rate when running in a maximal shoe compared with a traditional neutral shoe. Because increases in these variables have been associated with an increased risk of running-related injuries, runners who are new to running in a maximal shoe may be at an increased risk of injury. Therefore, runners should consider this potential increased risk for injury when switching from a neutral shoe to a maximal shoe; however, further work is necessary to better understand the longer-term impact of this type of footwear.
Core modular blood and brain biomarkers in social defeat mouse model for post traumatic stress disorder

Background: Post-traumatic stress disorder (PTSD) is a severe anxiety disorder that affects a substantial portion of combat veterans and poses serious consequences to long-term health. Consequently, the identification of diagnostic and prognostic blood biomarkers for PTSD is of great interest. Previously, we assessed genome-wide gene expression of seven brain regions and whole blood in a social defeat mouse model subjected to various stress conditions.

Results: To extract biological insights from these data, we have applied a new computational framework for identifying gene modules that are activated in common across blood and various brain regions. Our results, in the form of modular gene networks that highlight spatial and temporal biological functions, provide a systems-level molecular description of response to social stress. Specifically, the common modules discovered between the brain and blood emphasize molecular transporters in the blood-brain barrier, and the associated genes have significant overlaps with known blood signatures for PTSD, major depression, and bipolar disease. Similarly, the common modules specific to the brain highlight the components of the social defeat stress response (e.g., fear conditioning pathways) in each brain sub-region.

Conclusions: Many of the brain-specific genes discovered are consistent with previous independent studies of PTSD or other mental illnesses. The results from this study further our understanding of the mechanism of stress response and contribute to a growing list of diagnostic biomarkers for PTSD.

Background

Post-traumatic stress disorder (PTSD) is an anxiety disorder that is triggered after exposure to traumatic events. Individuals with PTSD have persistent fear memory and often feel emotionally numb.
If left untreated, PTSD can be life-threatening, as it is often linked with substance abuse and severe depression. A study of 289,328 Iraq and Afghanistan veterans who were first-time users of Veterans Affairs (VA) health care between 2002 and 2008 showed that 22% of veterans were diagnosed with PTSD and 17% were diagnosed with depression [1]. Given the predominance of PTSD and its negative consequences to long term health, it is very important to identify measurable and quantifiable biological parameters, i.e., biomarkers, which can serve as prognostic and diagnostic indicators for PTSD. Recent studies have proposed several candidate brain gene biomarkers that are associated with PTSD [2,3]. Even though PTSD is an illness of the brain, taking brain biopsy or spinal fluid is not a viable option for diagnosis. Instead, blood can be used as a surrogate for brain tissue for the purpose of identifying biomarkers [4][5][6][7][8]. Specifically, Rollins et al. recently found over 4,100 brain transcripts co-expressed in the blood of healthy human subjects [9]. Furthermore, it was shown that the mRNA levels of certain transcripts in PTSD patients remain changed with respect to controls even 16 years after the traumatic event [8,10]. Thus, blood gene expression assays are of particular interest for both short-term and long-term diagnosis, prognosis, and treatment of PTSD. However, the identification of predictive blood markers requires the accurate separation of biologically relevant core markers from unrelated downstream signals. This task is particularly challenging when using surrogate tissues, since biological noise from the surrogate is confounded with noise from the primary tissue. Fortunately, studies performed with model organisms allow the direct assay of both surrogate and primary tissues. By characterizing the molecular changes present in both tissues simultaneously, we can more effectively filter out spurious signals in the surrogate. 
We recently used repeated exposures of mice to a trained aggressor mouse as a "social defeat" model for evaluating PTSD symptoms [11]. This social defeat model has often been used to induce anxiety, depression-like, and avoidance symptoms, which are the most prominent psychiatric features of PTSD and common co-morbidities. Using a "cage-within-cage resident-intruder" protocol (designed to model unpredictable threats of daily trauma), we exposed individual subject male C57BL/6J mice to single aggressors for six hours daily for 5 or 10 days, and we placed individual control subject mice in the same cages but in the absence of any other mice. After allowing the subject animals to recover for either 1 or 10 days (5-day exposure) or 1 or 42 days (10-day exposure), we then collected tissue samples of blood and seven brain regions of mice under the different stress conditions and measured gene expression levels of these tissues using DNA microarrays. As described in [11], the durations of aggressor exposure were chosen to simulate shorter term (5-day) and longer term (10-day) stress. The shortest recovery phase duration (1 day) was chosen to study the immediate effects of stress. The longer of the two recovery phase durations for each exposure time was selected based on behavioral tests conducted throughout the study. These tests demonstrated that 5-day exposure defeated mice showed signs of recovery around 10 days post-exposure, while 10-day exposure defeated mice showed signs of recovery at much longer times (up to 42 days post-exposure). Because PTSD represents a persistent stress response, it is important to identify differentially expressed genes (DEGs) active both immediately after the exposure and after a long recovery period. Thus, in the current work we focus on genes consistently over-/under-expressed across all experimental conditions, rather than on DEGs from individual conditions (we will address the latter in future work).
The seven brain regions analyzed in this study were chosen due to their known roles in fear memory formation, emotion regulation, and decision-making, all processes important to the development and pathology of PTSD [3]. In particular, the amygdala regulates fear memory and emotional aspects; the hippocampus is the center for short-term memory, and the prefrontal cortex controls decision-making. In addition, the ventral striatum is strongly associated with emotional and motivational aspects of behavior, the stria terminalis serves as a major output pathway of the amygdala, and the septal area plays a role in reward and reinforcement along with the ventral striatum. We note that a similar protocol has also recently been used to profile social defeat-induced gene expression changes in the nucleus accumbens, ventral tegmental area, and blood plasma [12][13][14]. The field of systems biology has demonstrated that complex diseases such as PTSD are not caused by changes in a single gene or pathway. Rather, changes occur in a hierarchy of gene modules which collectively contribute to disruption of essential cellular functions [15][16][17]. To characterize this module hierarchy, many researchers have adopted an unsupervised approach [17][18][19][20][21][22][23][24][25] that constructs a network based on gene expression data and identifies functional modules based on network topology or "guilt-by-association". However, these methods usually face the problem of underdetermination, where the number of interactions to be inferred far exceeds the number of independent measurements [22]. Other studies have adopted a supervised network identification approach that begins with a list of "seed" genes and gradually expands the list by adding interacting genes, ultimately resulting in a compact gene module network [26][27][28]. These supervised approaches have shown good performance for classification tasks, and we expand upon one of them in this work.
Previous computational and experimental work suggests that functional gene modules are highly conserved across conditions, tissues, and species [17,29,30]. Direct comparisons have been made between multiple mouse tissues [21], between human and mouse brains, and between human blood and brain tissue [4]. However, modules inferred separately from different conditions yield partial overlaps at best, which makes drawing comprehensive biological conclusions difficult. Recently, we developed a new module identification tool entitled COMBINER (COre Module Biomarker Identification with Network Exploration) that identifies distinct conserved expression modules across various conditions. The fundamental idea behind COMBINER is to infer candidate modules from data of one condition and validate the inferred modules in other conditions using supervised classification. Those candidate modules that perform well in classifying samples from multiple conditions are then defined as "core modules". There are three advantages to this approach: (1) the resulting modules are compact and thus exclude unrelated downstream signals; (2) the modules are distinct and well-defined with respect to which conditions/tissues/species invoke them; (3) this method provides multiple robust discriminative biomarkers co-validated in at least two experimental conditions. Given these advantages, we have applied a customized version of COMBINER to mouse social defeat gene expression data deriving from seven brain regions along with blood to identify common expression modules. In this work, we have attempted to answer two biological questions:

1. Which expression modules act in common between blood and brain tissue of the social defeat mouse model?
2. Which modules act in common between different brain regions?

To do so, we first performed a pairwise comparison of differential gene expression, biological pathways, and GO terms between tissues.
We then applied a new version of COMBINER which we modified in two ways (discussed below). First, we used linear models to deconvolve time-dependent effects on gene expression from effects due to social defeat. Second, we developed an improved consensus feature elimination method to identify robust modules from data with a relatively small sample size. Our results, in the form of blood-brain and brain-brain social defeat core module networks, provide a concise biological description of social defeat and generate many candidate PTSD biomarkers for future study.

Overlaps of DEGs/DEGOs/DEPATHs

First, we identified differentially expressed genes (DEGs) in each individual tissue across all time points using a limma moderated t-test [31]. The numbers of significant DEGs (p ≤ 0.05) for each tissue are listed in blue on the diagonal in Figure 1a. We then established the significance of DEG overlaps by computing a hypergeometric p-value for each pairwise tissue combination (listed in the off-diagonal cells). Hypergeometric p-values ≤ 0.05 are considered significant (cells highlighted in red in Figure 1a). Next, we identified differentially expressed Biological Process GO terms (DEGOs) in each tissue by first ranking all genes in descending order of limma significance and then performing Iterative Group Analysis (iGA) [32] for each GO term with ≤ 100 constituent genes. We computed p-values for each term's iGA score using a null distribution obtained via 1000 random permutations of the original gene order. The numbers of significant DEGOs (p ≤ 0.05) for each tissue are listed in blue on the diagonal in Figure 1b. We established the significance of DEGO overlaps in the same manner as in Figure 1a.
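The hypergeometric overlap test described above can be reproduced in a few lines with SciPy; the gene counts in the example are invented for illustration, not the study's actual values:

```python
from scipy.stats import hypergeom

def overlap_pvalue(n_universe, n_list_a, n_list_b, n_overlap):
    """P(overlap >= n_overlap) when a list of n_list_b genes is drawn at
    random from a universe of n_universe genes that contains n_list_a DEGs
    (upper tail of the hypergeometric distribution)."""
    return hypergeom.sf(n_overlap - 1, n_universe, n_list_a, n_list_b)

# Hypothetical example: 200- and 150-gene DEG lists drawn from a
# 15,000-gene universe share 20 genes, far more than the ~2 expected
# by chance, so the overlap p-value is vanishingly small.
p = overlap_pvalue(15000, 200, 150, 20)
```

Subtracting 1 from the observed overlap before calling `sf` makes the tail inclusive, i.e., the p-value counts overlaps greater than *or equal to* the one observed.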
Finally, we identified differentially expressed MSigDB [33] (www.broadinstitute.org/gsea/msigdb/) canonical sub-pathways (DEPATHs) in the same manner as DEGOs with the following modification. For each pathway, we performed iGA separately for all ordered sub-pathways ranging in size from three to 10 genes (when genes are ordered in terms of limma significance). We selected the highest scoring sub-pathway and established significance as before by repeating the procedure on 1000 random gene order permutations. The numbers of significant DEPATHs and significant DEPATH overlaps are denoted in the same manner as above. (We consider the DEPATH overlap between hippocampus and stria terminalis to be marginally significant (red font), as it has a p-value ≤ 0.1 and is supported by a highly significant DEG overlap between the same tissues.) The overlaps of particular interest include amygdala-hippocampus (AY-HC) and hippocampus-stria terminalis (HC-ST), as these two scored significantly in the DEG comparison and significantly or nearly significantly, respectively, in the DEPATH comparison. These DEPATHs describe processes such as inflammation, diabetes, apoptosis, and immune response. Tables 1 and 2 show the significantly overlapping DEPATHs of AY-HC and HC-ST, respectively. We list the original name of each sub-pathway along with the following information from the iGA sub-pathway analysis conducted on the hippocampus data: number of genes in the highest scoring sub-pathway (Sig. Genes), sub-pathway permutation p-value, and Benjamini-Hochberg corrected sub-pathway false discovery rate (FDR). We note that none of these pathways would have been identified as significant from the hippocampus data alone when using a FDR ≤ 0.05 cut-off. We also note significant overlaps in the blood-septal region and blood-hemibrain comparisons, where DEGOs related to apoptosis and DEPATHs related to insulin/diabetes, respectively, were identified.
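A simplified version of this permutation scheme is sketched below. Here `iga_like_score` is a stand-in (a minimum hypergeometric tail over the ranked pathway members) rather than the exact iGA statistic, and all gene counts are illustrative:

```python
import numpy as np
from scipy.stats import hypergeom

def iga_like_score(member_ranks, n_genes):
    """Stand-in enrichment score: for the j-th best-ranked member sitting
    at 0-based rank r, compute P(>= j members in the top r + 1 by chance);
    the score is the smallest (most extreme) such tail probability."""
    ranks = np.sort(np.asarray(member_ranks))
    g = len(ranks)
    return min(hypergeom.sf(j - 1, n_genes, g, int(r) + 1)
               for j, r in enumerate(ranks, start=1))

def permutation_pvalue(member_ranks, n_genes, n_perm=1000, seed=0):
    """Empirical p-value: the fraction of random same-size gene sets that
    score at least as extremely as the observed set (with +1 smoothing)."""
    rng = np.random.default_rng(seed)
    observed = iga_like_score(member_ranks, n_genes)
    g = len(member_ranks)
    hits = sum(
        iga_like_score(rng.choice(n_genes, size=g, replace=False), n_genes)
        <= observed
        for _ in range(n_perm)
    )
    return (1 + hits) / (n_perm + 1)

# Hypothetical pathway whose 4 members occupy the very top of the ranking:
# essentially no random set can match it, so the p-value bottoms out near
# 1 / (n_perm + 1).
p_top = permutation_pvalue([0, 1, 2, 3], n_genes=2000, n_perm=200)
```

The `(1 + hits) / (n_perm + 1)` form keeps empirical p-values strictly positive, which matters when the observed score beats every permutation.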
Additional file 1: Table S1 and Additional file 2: Table S2 contain detailed lists of these DEGOs and DEPATHs, respectively.

Core module network

Although the differential expression overlap analysis provided some biological insight into the pairwise molecular similarities between mouse tissues during social defeat, overlap results between DEGs, DEGOs, and DEPATHs were not always consistent. Overlap analysis across more than two tissues would be more informative, but such overlaps are very limited due to the high noise-to-signal ratio of microarray data. In addition, it was not obvious how best to combine the results into an overall biological description of mouse social defeat. Thus, we turned to a network-level analysis to provide deeper insight. Because the desired diagnostic biomarkers should be generally over-expressed in both the stress treatment and recovery periods, we extended the COMBINER method [28] to accommodate all four conditions, which resulted in multiple-time-segment data. However, we would expect an age effect in the control mice. For example, the gene expression patterns of control mice in the 10-day treatment 1-day recovery and 10-day treatment 42-day recovery groups were significantly different due to mouse age. Thus, we used the limma software [31] to model the undesired effects of differing mouse ages as explanatory variables in a linear model, and we subtracted these effects from the original gene expression values. We then applied COMBINER to the "time standardized" data to construct a blood-brain network (common modules co-expressed in blood and seven brain regions, Figure 2) and a brain-brain network (common modules co-expressed in six brain regions, Figure 3).

Blood-brain network

We first investigated the expression modules active in both blood and multiple brain regions.
Starting with the top 100 candidate modules (ranked by absolute pathway activity t-score; see Methods) inferred from blood sample data, we identified modules that were also active in each brain region. To do so, we removed features using Consensus Feature Elimination until the average classification Area Under the ROC Curve (AUC) evaluated on each brain region exceeded 0.75 (see [28] for additional details). After repeating this procedure separately for all brain regions, a total of nine core modules remained. Figure 2a presents each module's brain region-specific expression patterns. We used average time curves (see Methods) to show the time-specific expression pattern of the modules as heat maps in Figure 2a. Figure 2b further shows the expression of the core modules and the protein-protein interactions (PPIs) between their gene products. The color of each gene denotes its expression level in the blood. Blue lines denote known PPIs within modules, while gray lines denote known PPIs between modules. Figure 2c lists the putative biological functions of the core modules; detailed module information is summarized in Additional file 3: Table S3. We note that use of COMBINER resulted in seven discriminative blood biomarker sets (average 0.81 mean AUC and 0.26 mean error rate), each of which has been validated using data from one of the brain regions. Table 3 lists the final number of modules identified from each blood-brain region pair with the associated mean AUC and mean error rate. The resulting nine core modules represent biological functions related to molecular transport, integrin and tight junction function, retinol metabolism, cell cycle, and mRNA transcription. Although initially inferred from blood tissue, most of these processes have been previously implicated in normal and pathological brain function.
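A toy version of the stopping rule just described can be sketched as follows; the rank-based AUC and the greedy drop-the-weakest-module loop are simplifying assumptions, not COMBINER's exact consensus procedure:

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC (equivalent to a normalized Mann-Whitney U)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def eliminate_modules(activity, labels, target_auc=0.75):
    """Greedily drop the least-discriminative module (AUC folded to >= 0.5
    so direction does not matter) until the survivors' mean AUC exceeds
    the target threshold."""
    def folded(i):
        a = auc(activity[i], labels)
        return max(a, 1.0 - a)
    keep = list(range(activity.shape[0]))
    while keep and np.mean([folded(i) for i in keep]) <= target_auc:
        keep.remove(min(keep, key=folded))
    return keep

# Hypothetical module-activity matrix: two discriminative modules and
# three noise modules, over 3 control and 3 stressed samples.
labels = np.array([0, 0, 0, 1, 1, 1])
activity = np.array([
    [0.0, 0.1, 0.2, 0.8, 0.9, 1.0],   # cleanly separates the classes
    [1.0, 0.9, 0.8, 0.2, 0.1, 0.0],   # separates in the other direction
    [0.9, 0.1, 0.5, 0.4, 0.6, 0.2],   # noise
    [0.9, 0.1, 0.5, 0.4, 0.6, 0.2],   # noise
    [0.9, 0.1, 0.5, 0.4, 0.6, 0.2],   # noise
])
kept = eliminate_modules(activity, labels)
```

In this toy case a single elimination is enough to push the mean AUC of the surviving modules above 0.75, and both discriminative modules survive.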
For example, tight junctions and ABC efflux transporters are present in the blood-brain barrier (BBB) and the blood-cerebrospinal fluid barrier (BCSFB) [34,35], and SLC genes encode facilitated transporters and ion-coupled secondary active transporters, including neurotransmitter transporters. The latter also represent the major class of transporters used in the delivery of drugs to the brain [36]. In addition, overexpressed integrin genes lead to vascular remodeling, which is believed to be highly correlated with mild Traumatic Brain Injury (mTBI) [37], a disease related to PTSD. Finally, retinoids are important for the maintenance of the nervous system and may play a role in Alzheimer's disease [38]. The resulting 43 core genes also exhibit ample evidence for association with brain function and/or PTSD. In particular, the genes Abca4, Fech, Magoh, Ppp1r12b, and Uros were previously shown to be differentially expressed in a human PTSD signature discovered by Segman et al. [8]. Seven of the 43 genes closely resemble genes from a blood signature for depression (Ahsp, Dhrs9, Map2k2, Slc13a2, Slc16a1, Slc39a3, U2af1) [39,40], while Hmbs, Pafah1b1, Sfrs2, and Yes1 were previously identified as bipolar disorder blood markers [41]. In addition, Ugt2b5 and Slc6a9 are also present in a blood signature for brain injury [42], while Dbh, Itgb1, Ltc4s, and Rhoa were reported to be relevant to mTBI [43]. Many of the other genes have been associated with various mental illnesses and neurodegenerative diseases, including schizophrenia, Alzheimer's disease, and sleep disorders. Detailed associations and references are listed in Additional file 4: Table S5.

Brain-brain network

In a similar manner as before, we first used COMBINER to infer the top 100 candidate modules for each brain region. We then identified common modules for each remaining brain region separately, removing features using Consensus Feature Elimination until the average AUC of the second region exceeded 0.75.
Table 4 lists the final number of modules identified from each brain region pair, as well as the number of "core" modules and "core" genes for each brain tissue (i.e., those present in the majority of pairwise comparisons). In total, 37 core modules with 177 genes were identified in the brain-brain network. Figure 3a displays the tissue- and time-specific expression patterns of each brain-brain core module. Figure 3b shows the expression levels of the genes in each module, as well as the known PPIs occurring between genes. Unlike the blood-brain network, the shape of a gene represents the brain region in which it was inferred. Table 5 provides the putative biological functions of the core modules as inferred, while detailed module information is summarized in Additional file 5: Table S4. In the brain-brain core module network, Modules 6, 8, 33, and 15 are of particular interest. An active Module 6 (Creb3l2, Prkx, Avp) in the hippocampus indicates a down-regulated PKA-CREB long term potentiation pathway, which has been shown to impair memory [44]. In addition, the activity of Module 8 (Prka1b, Hspa1a, Nfkbia, Jun, Cpt1b) in the septal region shows down-regulation of a heat shock protein (HSPA1A). Such activity has previously been found in other PTSD studies [45]. Module 33 depicts an up-regulated dopamine pathway in the ventral striatum. This activity could potentially send excessive dopamine to the amygdala and other brain regions, which has been shown to lead to increased anxiety [46,47]. Finally, Module 15 implies an active proinflammatory response in the medial prefrontal cortex (MPFC) that agrees with the study in [48].
Other validated findings include olfactory impairment in the stria terminalis (ST) (Module 32) [49], alteration of complement pathways in the MPFC (Module 20) [50], and activated coagulation function in the ST (Module 31) [51]. The above findings highlight that while the putative biological functions of the brain-brain core modules largely encompass the DEPATHs identified in the statistical overlap analysis (Tables 1 and 2), the COMBINER network-based analysis provides a much richer molecular description of mouse responses to social defeat. With additional validation in human studies, we expect these findings to yield robust prognostic and diagnostic biomarkers for PTSD.

Figure 3. Brain-brain network. (a) Application of COMBINER to brain data yields thirty-seven core modules. The tissue- and time-specific expression patterns of each module are presented in the same manner as before. (b) The expression levels and known PPIs of the core module genes are displayed. The shape of a gene represents its inference region, and the color denotes its expression level in that region. Blue lines denote known within-module protein-protein interactions (PPIs), while gray lines denote between-module PPIs. (HB: hemibrain (hemisphere), AY: amygdala, HS: hippocampus, MPFC: medial prefrontal cortex, VS: ventral striatum, SE: septal region, and ST: stria terminalis; 5D-1D/10D: 5-day treatment, 1-day/10-day recovery; 10D-1D/6W: 10-day treatment, 1-day/6-week recovery.)

Conclusions

The identification of diagnostic and prognostic blood biomarkers for PTSD currently is of great interest. In this work, we have improved the COMBINER method, a computational framework for identifying gene expression modules that are activated in common across experimental conditions, and applied it to blood and brain data from a mouse social defeat model.
The resulting gene networks highlight stress-related biological processes active in both brain and blood and provide a comprehensive molecular description of social defeat. In total, our approach identified seven blood biomarker sets that have each been validated for classification performance in one brain sub-region. Some of the genes and processes discovered are consistent with previous independent studies of PTSD or other mental illnesses, while others represent novel candidate PTSD biomarkers. We note that the same approach can be readily applied to other disease models to construct gene networks that are activated in common across tissues; future work will focus on this task.

Blood, organ and tissue collection

Terminal organs, brain regions, and blood samples from subject and control C57BL/6 mice were collected 24 hours or 6 weeks (42 days) after the 10-day social stress, and 24 hours or 1.5 weeks after the 5-day social stress. Brains of C57BL/6 mice were carefully removed from the skulls, and the left or right hemi-brain from each defeated or control mouse was dissected into different anatomical and functional regions: hemibrain (hemisphere) (HB), amygdala (AY), hippocampus (HS), medial prefrontal cortex (MPFC), ventral striatum (VS), septal region (SE) and stria terminalis (ST). The numbers of defeated and control mice in the different regions and conditions are summarized in Table 6.

Table 5: The putative biological functions of the core modules in the brain-brain network.

RNA isolation and quality assessment

Total RNA was isolated according to the Trizol® method (Invitrogen Inc., Grand Island, NY) from homogenized whole blood and brain regions. RNA from blood was isolated using the PreAnalytiX PAXgene® blood RNA kit (Qiagen Inc., Valencia, CA). We collected the seven tissues from 5-6 control and defeated mice, respectively.
We evaluated RNA integrity using the Agilent Bioanalyzer and excluded low-quality samples, i.e., those with low total RNA, a low mass ratio of 28S to 18S ribosomal RNA (rRNA), or a high amount of non-ribosomal RNA in the electropherograms.

Microarray hybridization

Microarray assays were performed using Agilent's genome-wide mouse expression array (GE 4x44K v2 two-color microarray) slides and kits (Agilent Technologies Inc., Santa Clara, CA) following the manufacturer's protocol. To minimize batch effects, each sample was hybridized against a universal common reference that was used for all experiments. Hybridized microarray slides were scanned using an Agilent Technologies Scanner G2505C US09493743.

Microarray data processing

GeneSpring with Feature Extraction 10.x (Agilent, CA) was used to process all two-color chips. Log2 transformation, lowess normalization, and quantile normalization were applied to normalize within and between microarrays. For the latter, we applied quantile normalization separately to the data from each tissue. Outlier spots were converted to missing values. If more than half of the expression values of a probe were missing, we removed the probe from consideration. We then imputed the remaining missing values using the k-nearest neighbor imputation method. To avoid incurring a bias in favor of genes represented by a greater number of probes, we aggregated multiple probes from the same Entrez Gene by computing the mean of the "sibling" probes. We have deposited all microarray data for this study at the Gene Expression Omnibus (GEO): http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE45035.

Linear model

We used a linear model-based approach to deconvolve the experimental time effects from the social defeat expression data. Assuming log-additive effects, our method estimates the contributions of each of the four experimental time points and subtracts them out, leaving the effect of social defeat.
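The probe-level preprocessing just described (dropping probes with more than half of their values missing, k-nearest-neighbor imputation, and averaging sibling probes per gene) can be sketched as follows. This is an illustrative reconstruction on a hypothetical toy matrix, with scikit-learn's `KNNImputer` standing in for the paper's k-nearest-neighbor imputation; the probe and gene names are made up.

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

# Hypothetical expression matrix: rows = probes, columns = samples (log2 values).
rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.normal(8.0, 1.0, size=(6, 4)),
                    index=["p1", "p2", "p3", "p4", "p5", "p6"],
                    columns=["s1", "s2", "s3", "s4"])
expr.iloc[0, :3] = np.nan   # probe p1: more than half missing -> dropped below
expr.iloc[1, 0] = np.nan    # probe p2: a single missing value -> imputed below
gene_of = {"p1": "GeneA", "p2": "GeneA", "p3": "GeneB",
           "p4": "GeneB", "p5": "GeneC", "p6": "GeneC"}

# 1) Remove probes with more than half of their expression values missing.
keep = expr.isna().sum(axis=1) <= expr.shape[1] / 2
expr = expr.loc[keep]

# 2) Impute the remaining missing values with k-nearest neighbors.
imputed = pd.DataFrame(KNNImputer(n_neighbors=2).fit_transform(expr),
                       index=expr.index, columns=expr.columns)

# 3) Aggregate "sibling" probes of the same gene by their mean, so genes
#    represented by many probes are not over-weighted.
by_gene = imputed.groupby(imputed.index.map(gene_of)).mean()
```

After step 1 the toy matrix retains five probes, and the aggregation yields one row per gene with no missing values.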
The linear model we used to deconvolve the experimental time effect is defined, for each gene, as

D_i = α_defeat + β_i and C_i = β_i, for i = 1, …, 4,

where D_i and C_i denote the log2 gene expression values of defeated and control mice in condition i, α_defeat denotes the overall effect of social defeat, and β_1, …, β_4 are the undesired time effects. In practice, we solve this overdetermined system for each gene separately using least squares (implemented in the limma package), carrying forward only the gene-specific defeat effect for subsequent analyses.

Differential expression analysis

As described above in Section 2, we used the R/Bioconductor limma package and the iterative Group Analysis (iGA) method for differentially expressed gene and GO term/pathway identification, respectively.

COMBINER

As shown in Figure 4, the COMBINER method first infers the statistically discriminative modules from an inference dataset, then validates them in various validation sets using consensus feature elimination. If a validated final module is co-expressed in at least half of the validation sets, it is defined as a core module. Finally, we project these core modules onto known PPI networks.

Table 6: Defeated and control mice (in the form (number of defeated) / (number of control)) in the different regions and conditions.

To remove features, we generated 250 groups of 500 classifiers in parallel and applied Linear Discriminant Analysis (LDA) with recursive feature elimination [52] to each, computing AUCs as well as weight vectors. Each feature was then ranked by its average normalized weight. The most consistently low-ranking feature was removed recursively until the average AUC threshold of 0.75 was achieved. At this point, the remaining features were considered to comprise the final modules. In our previous work [28], we applied both the Condition Responsive Genes (CORG) [53] and Core Module Inference (CMI) [28] methods to infer candidate modules and express them as pathway activities (PAs).
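The per-gene least-squares deconvolution described above can be sketched as follows. The exact design of the system is an assumption reconstructed from the text (D_i = α_defeat + β_i for defeated samples, C_i = β_i for controls, eight equations in five unknowns), and `numpy.linalg.lstsq` stands in for limma's least-squares fit; the synthetic values are invented for illustration.

```python
import numpy as np

# Design matrix: columns = [alpha_defeat, beta_1, ..., beta_4].
# Rows 0-3 are the defeated samples D_1..D_4, rows 4-7 the controls C_1..C_4.
X = np.zeros((8, 5))
X[:4, 0] = 1              # the defeat effect enters only the defeated samples
X[:4, 1:] = np.eye(4)     # time effect beta_i in D_i
X[4:, 1:] = np.eye(4)     # time effect beta_i in C_i

# Synthetic log2 values for one gene: true defeat effect = 1.5,
# time effects = [0.2, -0.1, 0.4, 0.0], plus a little noise.
rng = np.random.default_rng(1)
beta_true = np.array([0.2, -0.1, 0.4, 0.0])
y = np.concatenate([1.5 + beta_true, beta_true]) + rng.normal(0, 0.01, 8)

# Solve the overdetermined system by least squares; only the gene-specific
# defeat effect (coef[0]) is carried forward for downstream analyses.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
alpha_defeat = coef[0]
```

With this design, the least-squares estimate of α_defeat reduces to the mean of the per-time-point differences D_i − C_i, so the recovered value lands very close to the true 1.5.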
In the greedy search process, CORG picks either up- or down-regulated genes, while CMI identifies genes of both directions together. However, because of the multiple-time-point nature of the social defeat data, the application of the CMI method is not straightforward. Thus, in this work we used only the CORG method, with the following procedure. For a given pathway, we first rank the standardized gene expression values by their limma moderated t-score. If up-regulated genes are dominant, we rank the t-scores in descending order; otherwise, ascending order is chosen. Next, we aggregate the first two genes using the formula y = (x1 + x2)/√2; if the expression of this aggregate yields a larger absolute t-score than the first gene alone, this combination is retained as a module, with the combined expression becoming the PA. Otherwise, the procedure further adds the third gene using y = (x1 + x2 + x3)/√3, and so on, until the module-size limit of 25 genes is reached. Finally, we ranked all modules using the absolute value of the pathway activity t-score. We faced two major challenges when modifying our COMBINER method. First, the multiple-time-point nature of the data initially decreased the binary classification performance of the static LDA classifier [52]. Second, the small sample size leads to large variability in feature ranks after recursive feature elimination. To cope with the first challenge, we used a linear model to deconvolve the time effects from the original expression values. We solved the second problem by improving our method for consensus feature elimination: we generated 250 groups of 500 classifiers in parallel, then removed the bottom-ranked feature by a voting principle. In general, using additional groups of classifiers will further improve the reproducibility of the final modules; in our experience, 250 groups were sufficient to yield a reproducible result (results not shown).
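The CORG-style greedy aggregation can be sketched as below. This is one common reading of the expansion rule (grow the module while the aggregate's absolute t-score keeps improving), with a plain two-sample t-test standing in for limma's moderated t-score and the directional ranking simplified to absolute values; the synthetic data are invented for illustration.

```python
import numpy as np
from scipy import stats

def corg_module(expr, labels, max_size=25):
    """Greedy CORG-style module search: standardize genes, rank by |t|,
    then add genes one at a time, aggregating as (x1+...+xk)/sqrt(k),
    while the aggregate's |t-score| keeps improving."""
    z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
    tscores = np.array([stats.ttest_ind(g[labels == 1], g[labels == 0])[0]
                        for g in z])
    order = np.argsort(-np.abs(tscores))       # most discriminative gene first
    best_genes = [int(order[0])]
    best_t = abs(tscores[order[0]])
    for k in range(2, min(max_size, len(order)) + 1):
        cand = order[:k]
        pa = z[cand].sum(axis=0) / np.sqrt(k)  # pathway activity of the aggregate
        t = abs(stats.ttest_ind(pa[labels == 1], pa[labels == 0])[0])
        if t <= best_t:
            break                              # no improvement: stop growing
        best_genes, best_t = [int(i) for i in cand], t
    return best_genes, best_t

# Toy pathway: 5 genes x 20 samples, genes 0 and 1 shifted in class 1.
rng = np.random.default_rng(2)
labels = np.array([0] * 10 + [1] * 10)
expr = rng.normal(0.0, 1.0, size=(5, 20))
expr[0, labels == 1] += 3.0
expr[1, labels == 1] += 3.0
genes, t = corg_module(expr, labels)
```

Averaging two informative genes divided by √2 reduces the within-class noise faster than the between-class signal shrinks, so the aggregate's |t| rises until an uninformative gene is reached.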
Finally, we used a fixed average AUC threshold to determine the final modules instead of the max average AUC threshold described in [28]. This was required since the inference and validation sets can be very dissimilar, which leads to low values of the max average AUC. We obtained pathway information from the MSigDB v3.0 Canonical Pathways subset [54]. To decrease redundancy, we applied pathway filtering to remove bulky pathways. This resulted in a pathway dataset containing 791 pathways with 5,633 genes assayed in all regions. The protein-protein interaction information was obtained from String v9.0 [55].

Figure 4 Schematic overview of COMBINER. COMBINER first infers candidate modules as activity vectors from each pathway in an inference dataset. It then validates these modules in validation datasets by regenerating activity vectors and performing supervised classification. Finally, the modules present in at least half of the validation sets are considered to be core modules. The resulting core module markers are then projected onto a known protein-protein interaction network. We generated 250 groups of 500 classifiers in parallel using LDA with recursive feature elimination. Both the classifier AUCs and weight vectors were computed, and each feature was then ranked by its average normalized weight. The most consistently low-ranking feature was then removed recursively until the average AUC threshold was achieved. At this point, the remaining markers were considered to comprise the final modules.
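The consensus recursive feature elimination loop described above can be sketched as follows. The ensemble is shrunk from the paper's 250 groups of 500 classifiers to a handful of bootstrap LDA fits, and the AUC is evaluated on the full sample for brevity, so this is an illustrative sketch of the voting scheme rather than the authors' implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

def consensus_rfe(X, y, auc_threshold=0.75, n_classifiers=50, seed=0):
    """Consensus feature elimination: fit many LDA classifiers on bootstrap
    samples, rank features by their average normalized |weight|, and drop
    the most consistently low-ranking feature until the average AUC
    reaches the threshold."""
    rng = np.random.default_rng(seed)
    features = list(range(X.shape[1]))
    while len(features) > 1:
        aucs, weights = [], []
        for _ in range(n_classifiers):
            idx = rng.choice(len(y), size=len(y), replace=True)
            if len(np.unique(y[idx])) < 2:
                continue                      # need both classes to fit LDA
            clf = LinearDiscriminantAnalysis().fit(X[np.ix_(idx, features)], y[idx])
            scores = clf.decision_function(X[:, features])
            aucs.append(roc_auc_score(y, scores))
            w = np.abs(clf.coef_).ravel()
            weights.append(w / w.sum())       # normalized weight vector
        if np.mean(aucs) >= auc_threshold:
            break                             # remaining features = final module
        worst = int(np.argmin(np.mean(weights, axis=0)))  # voting across fits
        features.pop(worst)
    return features

# Toy data: 40 samples x 6 features, only feature 0 is informative.
rng = np.random.default_rng(3)
X = rng.normal(0.0, 1.0, size=(40, 6))
y = np.array([0] * 20 + [1] * 20)
X[y == 1, 0] += 2.5
feats = consensus_rfe(X, y)
```

Averaging the normalized weight vectors across bootstrap fits is what makes the elimination a "vote": a feature must rank low consistently, not just in one unlucky fit, before it is removed.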
Identification of the hASCT2-binding domain of the Env ERVWE1/syncytin-1 fusogenic glycoprotein

The cellular HERV-W envelope/syncytin-1 protein, encoded by the envelope gene of the ERVWE1 proviral locus, is a fusogenic glycoprotein probably involved in the formation of the placental syncytiotrophoblast layer. Syncytin-1-induced in vitro cell-cell fusion is dependent on the interaction with hASCT2. As no receptor-binding domain (RBD) has been clearly defined in the SU of either the HERV-W Env or the retroviruses of the same interference group, we designed an in vitro binding assay to evaluate the interaction of the HERV-W envelope with the hASCT2 receptor. Using truncated HERV-W SU subunits, a region consisting of the N-terminal 124 amino acids of the mature SU glycoprotein was determined to be the minimal receptor-binding domain. This domain contains several sub-domains that are poorly conserved among retroviruses of this interference group, but a region of 18 residues containing the conserved SDGGGX2DX2R motif was shown to be essential for the syncytin-1-hASCT2 interaction.

Findings

The cellular HERV-W envelope protein, also named syncytin-1, is a fusogenic glycoprotein probably involved in the formation of the placental syncytiotrophoblast layer [1-3]. This protein, encoded by the envelope gene of the ERVWE1 proviral locus [4], is synthesized as a gPr73 precursor and is specifically cleaved into two mature proteins, a gp50 surface subunit (SU) and a gp24 transmembrane subunit (TM) [5]. The HERV-W Env glycoprotein is phylogenetically related to an interference group of retroviruses [6] that includes the feline endogenous virus RD114, baboon endogenous virus (BaEV), the simian retroviruses (SRV-1, SRV-2), avian reticuloendotheliosis virus (REV-A) and spleen necrosis virus (SNV) [7]. All retroviruses of this group share a common cell surface receptor, the human sodium-dependent neutral amino acid transporter type 2 (hASCT2) [8,9].
Accordingly, syncytin-1-induced in vitro cell-cell fusion is dependent upon interaction with hASCT2 [1] and also with the amino acid transporter hASCT1 [10]. Moreover, HERV-W Env confers host-cell resistance to infection by SNV [11]. To date, no RBD has been clearly defined in the SU of the HERV-W Env interference group. Within the type C mammalian retroviruses, the SU subunit harbors three contiguous regions: the SU amino-terminal receptor-binding globular domain [12-14], a proline-rich region (PRR), and the TM-interacting SU carboxy-terminal domain. A similar RBD has also been delineated in the SU N-terminal region of HTLV-1 and HTLV-2 [15,16] and predicted in the Mammary Tumor Virus [17] and Bovine Leukemia Virus [18] envelopes. In this study, we designed an in vitro binding assay to evaluate the interaction of HERV-W envelope-derived soluble SU domains with the hASCT2 receptor. We identified a 124-residue region of the mature SU as the minimal domain that interacts with the hASCT2 receptor. The HERV-W soluble SU subunit was constructed, expressed and tested as follows. First, we used the phCMV-EnvW (clone PH74) expression vector containing the complete env gene as a template to generate the soluble gp50 protein. To produce a soluble tagged gp50 protein, a full-length SU domain construct (phCMV-EnvSU) was generated as a fusion protein containing a C-terminal VHRGS(H6) sequence located just downstream from the RNKR cleavage site, which was replaced with an AAAR sequence (Figure 1A). Second, the Env protein was recovered from serum-free cell culture medium after transient expression in HEK293T cells. Finally, the binding assay using the soluble SU subunit was performed as previously described by Lavillette [19]. Briefly, target cells expressing the relevant receptor(s) were incubated with the supernatant containing the soluble envelope.
The cells were stained with an anti-histidine antibody (anti-RGS(H4) Mab; Qiagen) to detect soluble SU and analyzed by fluorescence-activated cell sorting (FACS Calibur, Becton Dickinson). In order to verify that the soluble SU protein behaves like the SU subunit within the context of native syncytin-1, we first examined the interaction of the SU protein with the hASCT2 receptor using parental and receptor-blocked TE671 human cells as target cells (Figure 1B). Indeed, TE671 cells were originally used to demonstrate the fusogenic properties of the native syncytin-1 protein. The target cells were TE671 subclones stably expressing the envelope glycoproteins derived from the GALV and RD114 retroviruses, which respectively recognize the PiT-1 [20] and hASCT2 (also named RDR, for type D mammalian retrovirus receptor) [8,9] receptors, thus blocking the accessibility of exogenously presented retroviral envelopes to these receptors, as previously described [1]. As expected, the soluble SU protein bound to TE671 parental cells, which express the receptors for several retrovirus groups, and to PiT-1-blocked cells, but not to hASCT2-blocked cells. These data confirmed that the soluble SU protein, like syncytin-1, recognizes the type D mammalian retrovirus receptor expressed on human cells and thus behaves like the retroviral glycoproteins of the interference group. To assess the sensitivity and specificity of the cell surface binding assay, the soluble SU subunit was bound to a panel of target cell types expressing different combinations of the hASCT1 and hASCT2 receptors: TE671 cells, which express hASCT1 and hASCT2 [10]; HeLa cells, which express hASCT2 but not hASCT1 [21]; HEK293T cells, which express small amounts of both hASCT1 and hASCT2 [22]; and XC rat cell clones stably expressing either the hASCT2 or the hASCT1 human receptor, derived from XC cells by transfection of the pCD3.1VHR16/14 [9] and pCDNA3.1-hASCT1 [10] expression plasmids, respectively (Figure 1C).
In this assay, binding to HeLa cells was slightly higher than to TE671 cells, reflecting the different expression and/or distribution of the hASCT2 receptor on these target cells. In contrast, the HEK293T cells bound the SU protein only weakly. Using XC-hASCT1 and XC-hASCT2 cells (Figure 1C), we showed that the SU subunit could efficiently bind both the hASCT1 and hASCT2 receptors, as observed in both cell-cell fusion and infection assays [10]. Finally, to exclude any potential binding of soluble SU to a non-human ASCT-related receptor expressed by parental XC cells, we compared the binding efficiency of SU between parental XC cells and XC-hASCT2 or XC-hASCT1 cells (Figure 1D). As expected, the SU protein bound to XC-hASCT2 and XC-hASCT1 cells but not to XC parental cells [1]. As the features of the recombinant SU protein were shown to be similar to those of the native syncytin-1 protein, we designed a series of N- and C-terminal truncation mutants in order to identify the RBD of syncytin-1 (Figure 2A). SU and truncated SU-derived proteins were tested for their ability to bind hASCT2 receptor-presenting cells (XC-hASCT2) by flow cytometry, with XC parental cells used as a control. Domains of the SU subunit were generated by PCR, subcloned into the phCMV-EnvSU expression vector and sequenced. The EnvSU, Env69-313, Env233, Env197 and Env168 proteins were expressed in transfected HEK293T cells and their expression levels evaluated by Western blot analysis (Figure 2B). EnvSU, Env233 and Env197 were expressed, secreted into the cell culture medium and exhibited similar binding efficiencies (Figure 2C), demonstrating that the N-terminal 176 residues of the mature SU are sufficient for hASCT2 recognition. Interestingly, deletion of the 22-68 region led to the loss of receptor recognition, suggesting its involvement in the RBD. The Env168 mutant was poorly detected in the supernatant, suggesting problems of expression, secretion and/or stability for this truncated protein.
Nevertheless, it still showed a weak binding capacity. Thus, in order to obtain similar quantities of soluble recombinant proteins derived from smaller N-terminal regions, we decided to fuse these N-terminal fragments with a carboxy-terminal domain (residues 169 to 313) of the SU protein (Figure 2A), which is unable to bind the receptor (see below). A homogeneous expression level of the Env169-313, Env71, Env95, Env117 and Env144 truncated envelopes in cell culture supernatants was confirmed by Western blot analysis (Figure 2B). Only the Env144 protein bound to its receptor (Figure 2C). To formally identify the Env144 protein as a functional RBD product, we evaluated its ability to compete with the wild-type envelope during receptor recognition, using a heterologous cell-cell fusion assay [1,5]. Briefly, TELCeB6 producer cells expressing the wild-type envelope, characterized by a β-galactosidase-expressing nucleus, were co-cultured with HEK293T target cells, characterized by uncolored nuclei. The co-culture of producer and target cells leads to the formation of syncytia generally containing one or several blue nuclei and tens of white nuclei. The expression in HEK293T cells of any SU subdomain exhibiting RBD properties would reduce syncytia formation by a receptor interference mechanism. The fusion positive control was characterized by about 730 syncytia per well and an average of 77 ± 42 nuclei per syncytium, containing one to ten blue nuclei, as previously observed [1] (Figure 3A).

Figure 1 Cell surface binding assays of soluble SU. A) Schematic representation of the HERV-W envelope protein (Env-W) and SU protein (EnvSU). Surface (SU, 1-313) and transmembrane (TM, 318-538) domains and the consensus furin cleavage site (RNKR, 314-317) are indicated (|: N-glycosylation sites) [5]. Gray boxes indicate the signal peptide (SP) and the 15-amino-acid AAARVHRGS-H6 sequence (Tag). Residues are numbered starting from the initiation methionine. The ISKP sequence corresponds to the carboxy-terminal amino acid residues of the native SU included within the construct, and the underlined P corresponds to the numbered residue located just upstream from the tag. B) Interaction of the soluble SU with the type D mammalian receptor. Soluble SU protein was secreted from HEK293T cells transfected with the SU domain expression vector and cultured for 24 h in serum-free medium. Parental TE671, TE671 RD (hASCT2-blocked) and TE671 GALV (PiT-1-blocked) cells were incubated at 37°C for 1 h in supernatants with (shaded) or without (white) soluble SU protein, collected from transfected or native HEK293T cells, respectively. Binding of the tagged soluble SU onto the cells was detected by incubating the cell-protein mixture for 1 h at 4°C with an anti-histidine antibody (anti-RGS(H4) Mab; Qiagen) in PBA (PBS with 2% fetal calf serum and 0.1% sodium azide). Cells were washed once and incubated with a fluorescein isothiocyanate-conjugated antibody (DAKO) for 1 h at 4°C in PBA. Viable cells were analyzed by flow cytometry. C) Binding assay of soluble SU protein with target cells expressing various levels of the hASCT1 and hASCT2 receptors. TE671, HeLa, HEK293T, XC-hASCT2 and XC-hASCT1 cells were incubated at 37°C for 1 h in HEK293T cell culture supernatants with (shaded) or without (white) soluble SU protein, as indicated above. Binding assays were performed as described in 1B. D) Binding of soluble SU protein is restricted to human ASCT receptors. Soluble SU protein was incubated at 37°C for 1 h with parental XC rat cells (white) and XC-hASCT2 or XC-hASCT1 stable cells (shaded). Binding assays were performed as described in 1B.

Figure 2 Cell surface binding assays of SU and truncated proteins. A) Schematic representation of SU and truncated proteins. Gray boxes indicate the signal peptide (SP) and the 15-amino-acid AAARVHRGS-H6 sequence (Tag). Residues are numbered starting from the initiation methionine. The ISKP, THTS, NFRP and VSLF sequences correspond to the carboxy-terminal amino acid residues of the native SU included within each construct (SU, Env233, Env197 and Env168, respectively). The underlined amino acids correspond to the numbered residue located just upstream from the tag. B) Detection of SU and truncated proteins in culture medium. Culture supernatants were collected from HEK293T cells transfected with either the SU domain or truncated envelope expression vectors, as described in Figure 1B. 20 µl of supernatant was denatured (0.5% sodium dodecyl sulfate [SDS], 1% β-mercaptoethanol) at 100°C for 10 min and analyzed by SDS-10% polyacrylamide gel electrophoresis. Blots were probed with an anti-histidine antibody (anti-RGS(H4) Mab; Qiagen) and developed using horseradish peroxidase-conjugated antibodies (Jackson) together with an enhanced chemiluminescence kit (Amersham Pharmacia). The asterisk (*) indicates the position of the Env168 SU protein. C) Identification of the receptor-binding domain. SU and truncated soluble proteins were incubated at 37°C for 1 h with XC-hASCT2 cells (shaded) or with parental XC cells (white). Binding assays were performed as described above. The binding capacity of each recombinant protein for the hASCT2 receptor is depicted by a green (efficient) or red (inefficient) highlighted name.
Conversely, expression of EnvSU in HEK293T cells led to a decrease in syncytia formation, characterized by about 270 syncytia per well and an average of 25 ± 12 nuclei per syncytium, containing two to ten blue nuclei, indicating that the wild-type envelope could not efficiently interact with the hASCT2 receptors on HEK293T cells. Env144 protein expression induced a decrease in fusogenic activity similar to that of the total SU domain (240 syncytia per well, 26 ± 9 nuclei per syncytium). Conversely, a non-receptor-binding protein such as Env71 did not alter fusion efficiency (680 syncytia per well, 66 ± 35 nuclei per syncytium). This demonstrates that the Env144 protein contains all the determinants required for hASCT2 receptor binding. In addition, the binding of Env144 and EnvSU to the hASCT1 receptor was found to be similar (data not shown). The absence of binding in cell surface assays using the Env71, Env95 and notably Env117 proteins (Figure 2C) suggested the loss of at least one receptor-binding determinant within the 117-144 region. This region contains the epitope of an anti-SU polyclonal antibody (anti-SU, raised against residues 112-129 of EnvW) [5]. This antibody was used to detect native syncytin-1 in primary cytotrophoblasts [3] and in b30BeWo carcinoma cells [5]. Hence, the Env144 protein was pre-incubated with the anti-SU antibody.

Figure 3C: Variable amino acids (aa) among hominoids (upper case) and humans (lower case) (green dots) and neutral insertions preserving the envelope functions (white dots) are indicated above the sequence [23]. Strictly conserved aa (gray boxes) and similar aa (straight line) within the D interference group are underlined. Mutations altering spleen necrosis virus infectivity are indicated as black dots [24]. Blue arrows covering aa 21-69 and 117-144 indicate deletions detrimental to hASCT2 binding. The red arrow corresponds to the SU-EnvW peptide used for rabbit immunization and affinity purification [5].
The binding of the protein-antibody complex to the receptor was significantly reduced in the presence of the specific anti-SU antibody, but not in the presence of a rabbit anti-TM antibody (Figure 3B). This result indicates that this domain is essential for hASCT2 receptor binding.

Validation of RBD functionality

In conclusion, we have used HERV-W soluble SU subunits to determine that the 124 N-terminal amino acids of the HERV-W mature SU protein are sufficient to interact with the hASCT2 and hASCT1 amino acid transporters (Figure 3C). Although still not elucidated, the envelope-receptor interaction seems to some extent to be tolerant of the RBD amino acid content. Thus, 17 positions out of the 124 residues of the RBD were shown to be variable in the ERVWE1 envelopes of hominoids and humans without altering the hASCT2-dependent fusogenic properties of each envelope variant [4]. In addition, only weak sequence conservation exists between the retroviral members of the interference group, with the notable exception of three cysteine-containing motifs (PCXC, CYX5C and CX8-9CW) and a SDGGGX2DX2R motif. Interestingly, it was shown previously that the introduction of 5 residues downstream from serine 51 (see Figure 3C) preserves the fusogenicity of the syncytin-1 protein [23]. This suggests that the first 30 aa of the RBD, which include the conserved PCXC motif, constitute an actual sub-domain. It should be noted that regions of variable size are observed between the CYX5C and CX8-9CW motifs of the retroviruses of the interference group, which suggests a certain flexibility of this interdomain. In addition, the conserved SDGGGX2DX2R motif seems to be directly involved in receptor interaction, as supported by receptor-binding inhibition with a regioselective antibody and by the altered receptor recognition of aspartic acid mutants in the SNV-related retroviruses [24].
Lastly, the carboxy-terminal end of the RBD presents a conserved predicted alpha-helix located immediately upstream of the SDGGGX2D sequence. Taken together, these results suggest, for all the retroviruses of the interference group, the existence of five structurally or sequence-conserved sub-domains. Crystallography studies will be required to confirm that these sub-domains interact. Finally, as syncytin-1 belongs to the HERV-W family, many natural HERV-W truncated envelope ORFs probably contain the RBD. When reactivated, these partial envelopes could interfere with the hASCT receptors, which suggests putative physiopathological functions for such truncated envelopes.
Sialic acid-modified Der p 2 allergen exerts immunomodulatory effects on human PBMCs

Background: House dust mite extract-based allergen immunotherapy (AIT) to treat house dust mite allergy is substantially effective but still presents some safety and efficacy concerns that warrant improvement. Several major-allergen-based approaches to increase the safety and efficacy of AIT have been proposed. One of them is the use of the group 2 allergen, Der p 2.

Objective: We sought to investigate the immunomodulatory effects of sialic acid-modified major allergen recombinant Der p 2 (sia-rDer p 2) on PBMCs from healthy volunteers.

Methods: We activated PBMCs with anti-CD3/CD28 antibodies and incubated them at 37°C for 6 days in the presence or absence of either native rDer p 2 or α2-3 sialic acid-modified rDer p 2 (sia-rDer p 2). We assessed changes in CD4+ T-cell activation and proliferation by flow cytometry and changes in T-lymphocyte cytokine production in the cell culture supernatant by ELISA.

Results: We observed that PBMCs treated with sia-rDer p 2 presented with markedly decreased expression of CD69 and an increased abundance of LAG-3+ lymphocytes compared with cells treated with rDer p 2. Moreover, PBMCs treated with sia-rDer p 2 showed reduced production of IL-4, IL-13, and IL-5 and displayed a higher IL-10/IL-5 ratio compared with rDer p 2-treated PBMCs.

Conclusions: We demonstrate that sia-rDer p 2 might be a safer option than native rDer p 2 for Der p 2-specific AIT. This is most relevant in the early phase of AIT, which is often characterized by heightened TH2 responses, because sia-rDer p 2 does not enhance the production of TH2 cytokines.

The standard therapeutic strategies for managing immune dysregulation, such as the excessive immune response in allergic reactions, mostly involve the use of immunosuppressive drugs, such as corticosteroids, Janus kinase inhibitors, and calcineurin inhibitors, which lead to generalized immunosuppression [1]. Because of the multiple side effects associated with immunosuppressive drugs, preventing or attenuating immune dysregulation in a specific manner is highly desirable [3,4]. These strategies generally aim to alter antigen-presenting cells (APCs) to suppress inflammatory T cells and induce regulatory T (Treg) cells [5]. Directly targeting APCs, including dendritic cells (DCs), monocytes, and B cells, is an attractive strategy to modulate T-cell function and to induce antigen-specific tolerance [5]. In particular, inhibitory receptors expressed by APCs, such as the Siglec (sialic acid-binding immunoglobulin-like lectin) family of carbohydrate-binding proteins, are a good target. Siglecs recognize sialic acid-containing glycoproteins and glycolipids and can be targeted to induce immunosuppressive responses [5,6]. Most of the Siglecs discussed in this study, including Siglec-9 and Siglec-10, possess immunoreceptor tyrosine-based inhibitory motifs, which transmit immune-deactivating signals on recognition of their ligands (sialic acids) and can thus be therapeutically exploited for the management of inflammatory diseases [6]. We previously demonstrated that ex vivo and in vivo DC-targeting of α2-3 sialic acid-modified antigen to the mouse homolog of human Siglec-9 (i.e., Siglec-E) drove naive CD4+ T-cell differentiation into antigen-specific Treg cells [7]. Moreover, DCs treated with sialic acid-modified antigen dampened T-cell differentiation into effector T cells even in the presence of native antigen-loaded DCs [7]. Also, exposing LPS-stimulated human monocyte-derived DCs to Siglec-9 ligands (α2-3 sialic acids) or anti-Siglec-9 antibodies suppressed the production of IL-6 and IL-12 [8,9]. Together, these reports show that targeting Siglec-9 inhibits TH1 responses. However, very little is known about whether Siglec-9-mediated interference can also alter allergen-associated TH2 responses, characterized by the secretion of IL-4, IL-5, and IL-13.
We therefore evaluated the immunomodulatory effects of α2-3 sialic acid-modified antigen on the TH2 arm of helper T cells in human PBMCs, using Der p 2 as a model antigen. Der p 2 is one of the established major allergens of house dust mite (HDM) and is an important risk factor for the development of allergic rhinitis and asthma [10,11]. Consequently, research into the use of Der p 2 for allergen immunotherapy (AIT) against Der p 2-HDM allergy, instead of HDM extracts, has grown in the past decades, because AIT with HDM extracts still presents some safety and efficacy concerns that warrant improvement [4]. Exploiting the sialic acid-Siglec axis by modifying Der p 2 via its 12 free lysine residues [12] with sialic acids could provide Siglec-mediated crosslinking and signaling to attenuate allergen-specific TH2-mediated immune responses [13].

Methods

α2-3 Sialic acid-rDer p 2 binding to Siglec-human Fc chimeras

NUNC MaxiSorp plates (Greiner Bio-One, Fredensborg, Denmark) were coated with 10 μg/mL of recombinant Der p 2 (rDer p 2) or α2-3 sialic acid-rDer p 2 (sia-rDer p 2) overnight at room temperature. After washing with HBSS (Gibco, New York, NY), wells were blocked with carbo-free blocking buffer (a glycoprotein-free blocking agent; Vector Laboratories, Newark, Calif) diluted 1:10 in HBSS for 1 hour at room temperature and subsequently incubated with Siglec-Fc chimeras and a goat anti-human Fc-peroxidase antibody (Jackson Laboratory, Bar Harbor, Me). The following Siglec-Fc chimeras (R&D Systems, Minneapolis, Minn) were used: Siglec-2-hFc (1968-SL-050), Siglec-3-hFc (1137-SL-050), Siglec-7-hFc (1138-SL-050), Siglec-9-hFc (1139-SL-050), and Siglec-10-hFc (2130-SL-050). For all colorimetric assays, plates were developed using 3,3′,5,5′-tetramethylbenzidine (Merck, Germany) as the substrate and H2SO4 as the stop solution. The iMark microplate absorbance reader (Bio-Rad) was used to measure absorbance at 450 nm.
Cell isolation and culture

Blood from healthy volunteers for the isolation of PBMCs was supplied by the Sanquin Blood Bank (Sanquin, Amsterdam, The Netherlands). Healthy (nonallergic) donors gave their written consent for the use of blood donations for research purposes. We performed an ImmunoCAP assay to confirm the allergic status of donors. PBMCs were isolated from whole blood by density gradient centrifugation (at 2000 rpm for 30 minutes) using Lymphoprep (Serumwerk, Germany). After isolation, PBMCs were frozen and stored at −80°C (or in liquid nitrogen) until use. PBMCs were thawed and plated in 96-well round-bottomed plates (Greiner Bio-One, Alphen aan den Rijn, The Netherlands) at a density of 10 × 10^6 cells/mL in RPMI 1640 (Thermo Fisher, Waltham, Mass) supplemented with 10% inactivated FBS (Biowhittaker, Switzerland), 1% glutamine (Thermo Fisher), and 1% penicillin-streptomycin (Lonza, Basel, Switzerland). The PBMCs were left inactivated or activated with plate-bound anti-CD3 (10 µg/mL) (clone SPV-T3b, Monoclonal Antibody Facility, Department of Molecular Cell Biology and Immunology, Amsterdam, The Netherlands) and soluble anti-CD28 (2 µg/mL) (Sanquin). The activated PBMCs were subsequently either left untreated (control) or treated with rDer p 2 (10 µg/mL) or sia-rDer p 2 (10 µg/mL) for 6 days at 37°C and 5% CO2 in humidified air. Supernatants were harvested and stored at −20°C for cytokine measurements, and cells were harvested for flow cytometry.
Prevention or reversion of PBMC responses

To investigate whether sia-rDer p 2 can revert an existing inflammation involving rDer p 2, anti-CD3-/CD28-activated PBMCs (10 × 10^6 cells/mL) were first treated with rDer p 2 (10 µg/mL) for 2 days, after which cells were washed with RPMI medium; sia-rDer p 2 (10 µg/mL) was then added to the cells and cultured for another 4 days. To investigate whether sia-rDer p 2 can prevent inflammation involving rDer p 2, anti-CD3-/CD28-activated PBMCs were first treated with sia-rDer p 2 (10 µg/mL) for 2 days, cells were washed with RPMI medium, and rDer p 2 (10 µg/mL) was added to the cells and cultured for another 4 days. This setup was adapted from that previously described.[14]

Flow cytometry

For the characterization of the different cell populations within PBMCs and the evaluation of Siglec expression, cells (1 × 10^6/well) were stained with mAbs (Table I) for 30 minutes on ice in a 96-well V-bottomed plate (Greiner Bio-One). Antibodies were diluted in PBS containing 0.1% BSA (Roche, Rotkreuz, Switzerland), 0.02% sodium azide, Fc block (in-house), and True-Stain monocyte blocker (BioLegend, Amsterdam, The Netherlands). Fixable viability stain, Zombie NIR (BioLegend), or LIVE/DEAD Blue (Thermo Fisher) was used to stain dead cells for 15 minutes before staining surface markers. For intracellular Ki67, IL-10, and FoxP3 detection, cells were incubated with 0.5 µg/mL of phorbol 12-myristate 13-acetate (Sigma-Aldrich, Sofia, Bulgaria) and 1 µg/mL of ionomycin (Sigma-Aldrich) in the presence of GolgiStop and GolgiPlug (BD Biosciences, Erembodegem, Belgium) for 4 hours. After staining cell surface markers, cells were fixed and permeabilized using the FoxP3/Transcription Factor Staining Buffer Set (Thermo Fisher, Waltham, Mass) as directed by the manufacturer and stained with intracellular antibodies (Table I). Labeled cells were fixed with 1% paraformaldehyde (Electron Microscopy Sciences, Hatfield, Pa) and stored at 4°C until acquisition. Fluorescence
minus-1 controls were prepared for each cell marker and used for gating. Stained samples were acquired with the 4- or 5-laser Aurora flow cytometer (Cytek Biosciences, Amsterdam, The Netherlands). FCS files were analyzed using FlowJo software v10.8 (BD Biosciences).

Binding of sia-rDer p 2 to PBMCs

PBMCs (1 × 10^6/well) were washed with ice-cold 0.5% BSA in HBSS in a 96-well V-bottomed plate. rDer p 2 or sia-rDer p 2 (10 µg/mL) or 10 µg/mL of polyacrylamide-α2-3 biotin (positive control) was added to the cells and incubated for 1 hour at 37°C. Subsequently, 10 µg/mL of biotinylated anti-Der p 2 antibody (Absolute Antibody, Amsterdam, The Netherlands) was added and incubated for 45 minutes at 4°C. Cells were then stained with mAbs to characterize PBMC subsets (Table I) and streptavidin-phycoerythrin (Jackson Laboratories) to detect binding for 30 minutes at 4°C. Stained cells were fixed in 1% paraformaldehyde and acquired with the 5-laser Aurora flow cytometer (Cytek Biosciences).

Statistical analysis

Data are presented as mean ± SD (as indicated in figure legends). P values were determined by the Wilcoxon paired test using GraphPad Prism version 9 (GraphPad, San Diego, Calif) (as indicated in figure legends). Differences in values were considered significant at a P value of less than .05.
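For readers reproducing the analysis outside Prism, the same paired comparison can be sketched in Python with SciPy's `wilcoxon`. This is an illustrative sketch only; the cytokine values below are hypothetical and are not data from this study:

```python
# Wilcoxon matched-pairs signed-rank test, as used for the paired
# cytokine comparisons.  Values are hypothetical (pg/mL), one pair
# per donor across the two treatment conditions.
from scipy.stats import wilcoxon

il5_rderp2     = [120.0, 95.0, 210.0, 150.0, 180.0, 140.0, 160.0]
il5_sia_rderp2 = [ 80.0, 70.0, 150.0, 100.0, 120.0, 110.0, 115.0]

stat, p = wilcoxon(il5_rderp2, il5_sia_rderp2)
print(f"W = {stat}, P = {p:.4f}")
if p < 0.05:
    print("difference considered significant (P < .05)")
```

For small samples without ties at zero, SciPy computes the exact distribution, matching what Prism reports for this test.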
Sia-rDer p 2 binds to APCs

α2-3 sialic acids were chemically conjugated to rDer p 2 through a maleimide-thiol reaction to produce glyco-allergen conjugates (sia-rDer p 2) (see the Methods section in this article's Online Repository at www.jaci-global.org) (Fig 1, A). We performed a lectin-binding ELISA to confirm the presence of α2-3 sialic acids on sia-rDer p 2 (see the Methods section and Fig E1 in this article's Online Repository at www.jaci-global.org). We then assessed whether sia-rDer p 2 would bind to selected Siglec-Fc chimeras and found that sia-rDer p 2 but not rDer p 2 interacted primarily with Siglec-9-Fc (Fig E1).

Sia-rDer p 2 modulates TH2 cytokine production but not TH1 or TH17 cytokines

Given that sia-rDer p 2 equally modulated both TH1- and TH2-cell activation, we investigated whether a similar effect would occur in their cytokine profiles. When compared with baseline, rDer p 2 upregulated the production of IL-4 (Fig 3, E). However, sia-rDer p 2 downregulated the production of IL-5 (Fig 3, C) but did not affect IL-13, IL-4, IL-7, and IFN-γ (Fig 3, A, B, D, and E). Moreover, compared with rDer p 2, sia-rDer p 2 induced lower production of IL-13 and IL-4 (Fig 3, A and B). We then investigated whether the effects of sia-rDer p 2 on IL-5 were a result of direct binding to CD4+ T cells or were mediated by APCs present in PBMCs, by coincubating rDer p 2 or sia-rDer p 2 with pure CD4+ T cells activated with anti-CD3/CD28. The production of IL-5 was not affected (Fig 3, F). We can therefore conclude that sia-rDer p 2 downregulates IL-5 secretion and that this may occur via APC-T-cell interactions.

The Treg/TH2 cytokine ratio is higher with sia-rDer p 2 than with native rDer p 2

Successful AIT can be characterized by a high IL-10/IL-5 ratio.
[12] We therefore analyzed culture supernatants of anti-CD3-/CD28-activated PBMCs to determine the impact of rDer p 2 or sia-rDer p 2 on IL-10 production. IL-10 secretion was lower in PBMCs treated with rDer p 2 compared with the control. Sia-rDer p 2 also attenuated IL-10 secretion, but to a lesser extent than rDer p 2 (Fig 4, A). We then determined the Treg/TH2 cytokine balance by calculating the ratios of IL-10 to IL-4, IL-13, and IL-5, respectively. We observed that the IL-10/IL-4, IL-10/IL-13, and IL-10/IL-5 ratios were higher in sia-rDer p 2-treated PBMCs than in rDer p 2-treated PBMCs (Fig 4, B-D). To further determine the role of IL-10 in the reduced expression of TH2-cell cytokines, we stimulated PBMCs in the presence of an IL-10-blocking antibody and analyzed IL-5 and IL-13 production. In the presence of an IL-10-blocking antibody, sia-rDer p 2-treated PBMCs produced more IL-5 and IL-13 (Fig 4, E and F). These data suggest that sia-rDer p 2, unlike rDer p 2, prevents the disruption of the Treg/TH2 cytokine balance and that IL-10 plays a role in the downregulation of IL-5.
[16,17] We therefore assessed whether sia-rDer p 2 could expand Treg cells. Given the vast phenotypic heterogeneity of human Treg cells, we analyzed only the following subsets:

To do this, anti-CD3-/CD28-activated PBMCs were exposed to sia-rDer p 2 either 48 hours before (prevention of response) or after (reversal of response) coincubation with rDer p 2 (Fig 6, A). We observed that treating PBMCs with sia-rDer p 2 before treating with rDer p 2 resulted in the downregulation of both IL-5 and IL-13 (Fig 6, B). However, the percentages of both CD69+CD4+ and Ki67+CD4+ T cells (Fig 6, C) were unaltered. In contrast, treating PBMCs with sia-rDer p 2 after pretreating them with rDer p 2 resulted in an increase in IL-5 and IL-13 (Fig 6, D). Nonetheless, the proportions of CD69+CD4+ and Ki67+CD4+ T cells appeared to be reduced (Fig 6, E). Together, these data demonstrate that sia-rDer p 2 can prevent the activation of TH2 cytokine responses on challenge with rDer p 2.

DISCUSSION

This study aimed to investigate the immunomodulatory capacity of sialic acid-modified recombinant Der p 2 on the TH2 arm of CD4+ T cells in human PBMCs from nonallergic individuals ex vivo. We also evaluated whether sia-rDer p 2 could induce the expansion of Treg cells. We noted that sia-rDer p 2 moderately suppressed the activation of CD4+ T cells, evidenced by a reduction in the expression of CD69 in both TH1 and TH2 cells, and downregulated the production of the TH2 cytokines IL-5 and IL-13 but not the TH1 cytokine IFN-γ or the TH17 cytokine IL-17. We also showed that sia-rDer p 2 binds to Siglec-9 present on monocytes, DCs, and B cells, and that these cells may be involved in the suppression of CD4+ T-cell activation and in the diminished production of TH2 cytokines.
[19,20] This is because there is still an ongoing debate about how to properly standardize the components of the crude HDM extracts that are currently used in clinics,[21] and the risk of unwanted side effects is still quite high.[22] Moreover, 79.2% of people with asthma have high IgE titers specific for Der p 2,[23] and Der p 2 itself is a strong risk factor for the development of asthma.[10] In addition, Der p 2 is a good model antigen for investigating the effects of sialic acid-protein modification on immune responses because it contains 12 free lysine residues onto which sialic acid molecules can be conjugated. In sialic acid-Siglec interactions, the use of multivalent ligands is necessary to achieve sufficient avidity to stably bind Siglecs and subsequently transduce strong immunomodulatory signals.[24] We hypothesized that using Der p 2 decorated with sialic acids instead of native Der p 2 would prevent the initial spike in TH2 responses[25] observed during HDM AIT and potentially decrease the time needed to develop full tolerance, because of the induction of immunomodulatory responses mediated by the sialic acid-Siglec axis.

We observed that when PBMCs from nonallergic individuals were treated with rDer p 2, they produced more IL-4 and IL-13, features that are typical of an allergic phenotype. Indeed, nonatopic individuals have been reported to possess allergen-specific cells,[26,27] although the response to the allergen is of a lower intensity compared with those of atopic individuals.
[28] This could explain the observed increase in IL-4 and IL-13. Notably, however, the production of IL-4 and IL-13 from sia-rDer p 2-treated PBMCs from the same nonallergic individuals was unaltered, whereas the production of IL-5 was downregulated. This indicates that sia-rDer p 2 may be safer than native Der p 2 because it does not enhance TH2 cytokine production. In addition, sia-rDer p 2 did not affect the production of either IL-17 or IFN-γ, which can enhance the severity of allergic diseases and asthma.[29,30] In an in vitro prophylactic setting, where we treated PBMCs with sia-rDer p 2 before treating them with rDer p 2, sia-rDer p 2 downregulated the production of IL-5 and IL-13. Collectively, these data show that sia-rDer p 2 inhibits TH2 cytokine production, especially IL-5, and may therefore be a good candidate for Der p 2-specific AIT or for prophylactic vaccination against Der p 2-HDM allergy.

When tested in an ex vivo therapeutic setting (PBMCs were first treated with rDer p 2 and then with sia-rDer p 2), sia-rDer p 2 did not produce the suppressive effects described earlier. We hypothesize that the activation signal produced by a combination of anti-CD3/CD28 and rDer p 2 was too strong to be overcome by sia-rDer p 2. The fact that lowering the amounts of anti-CD3/CD28 did not result in a different outcome (data not shown) might indicate that the changes provoked in the PBMCs after activation with anti-CD3/CD28 cannot be reversed by the immunosuppressive signals generated by sialic acids. Indeed, anti-CD3-mediated T-cell activation can be overridden only when the suppressive compound is administered simultaneously with anti-CD3.
[31] Notably, the fact that sia-rDer p 2 modulated only TH2 cytokines, and not IFN-γ or IL-17, as opposed to previous reports with ovalbumin as antigen,[7] indicates that the activity of sia-rDer p 2 may be driven by APCs. Moreover, the involvement of APCs is also substantiated by our finding that sia-rDer p 2 does not affect TH2 cytokine secretion when incubated with pure CD4+ T cells.

Beyond the TH2 cytokine response, we measured the effect of sia-rDer p 2 on the activation of CD4+ T cells by the expression of CD25 and CD69. CD25 is widely recognized as the primary marker for cellular activation,[32] whereas a prolonged increase in the expression of CD69 is associated with allergic inflammation.[33] Our findings show that sia-rDer p 2 markedly suppressed the expression of CD69. The effects of sia-rDer p 2 on CD69 were similar in both TH1 and TH2 cells. Shinoda et al[34] reported that CD69-deficient CD4+ T cells cannot help B cells produce high-affinity antibodies and fail to mediate the generation of long-lived plasma cells. Therefore, immunotherapy with sia-rDer p 2 would indirectly inhibit the generation of long-lived allergic plasma cells.

Another important mechanism that has been proposed to be involved in successful AIT is the induction of Treg cells.[12,22] The two most commonly described subsets of Treg cells that play a key role in allergen tolerance are FoxP3+ Treg cells[35,36] and IL-10+ Tr1 cells.[16,37] We showed that, compared with rDer p 2, treatment of PBMCs with sia-rDer p 2 did not affect the frequency of FoxP3+ Treg cells but expanded the frequency of Tr1 cells. Of note, Tr1 cells, which have strong suppressive activity, are expanded after AIT, and they correlate with a decrease in clinical scores.
[16] Because LAG-3 is increasingly implicated in the downregulation of T-cell responses,[38,39] we measured the frequency of LAG-3+ Treg-cell subsets. We observed that the frequencies of CD25+LAG-3+ Treg cells, notably the FoxP3− subset and PD-1+LAG-3+ cells, were higher in the sia-rDer p 2 than in the rDer p 2 condition. Dawicki et al[40] have reported that CD25+LAG-3+FoxP3− Treg cells can contribute to tolerance induction. These data suggest that AIT with sia-rDer p 2 instead of Der p 2 might maintain the abundance of necessary Treg-cell populations, which may accelerate the development of tolerance to Der p 2.

Overall, we have demonstrated that α2-3 sialic acid-modified rDer p 2 is able to suppress the activation of both TH1 and TH2 cells and downregulate the production of the TH2 cytokines IL-5 and IL-13 in PBMCs from nonallergic individuals. Moreover, sia-rDer p 2, unlike rDer p 2, does not enhance the activation and proliferation of CD4+ T cells, does not alter the Treg/TH2 cytokine balance, and does not alter the frequency of Treg cells. All these findings illustrate that the use of sia-rDer p 2 instead of rDer p 2 for Der p 2-specific AIT would potentially be more beneficial, especially in the early phase of AIT, which is often characterized by a heightened TH2 response and corresponding allergic side effects, although this needs to be confirmed using cells from subjects with established HDM allergy. Further studies, especially in cells from HDM-allergic donors, are needed to broaden the knowledge of the present findings and to confirm their possible use in routine clinical practice.

DISCLOSURE STATEMENT

This study was funded by HEALTH HOLLAND (HH LSHM19073) and DC4U Technologies.

Disclosure of potential conflict of interest: E. R. J. Li and Y. van Kooyk are involved in DC4U Technologies, which develops glycan-based technologies that enable steering the human immune response. R.
van Ree receives consultancy fees from HAL Allergy BV, Citeq BV, Angany, Inc, Reacta Healthcare Ltd, AB Enzymes, Mission MightyMe, and The Protein Brewery; receives speaker fees from HAL Allergy BV, ALK, and Thermo Fisher Scientific; and possesses stock options at Angany, Inc. The rest of the authors declare that they have no relevant conflicts of interest.

FIG 1. Sia-rDer p 2 binds to Siglec-9 and/or Siglec-10 expressed on APCs. A, Schematic representation of the thiol-maleimide reaction for the conjugation of α2-3 sialic acids to rDer p 2. B, Representative histograms showing the binding of rDer p 2 and sia-rDer p 2 on classical monocytes (CD14+CD16+), CD11c+ DCs, and B cells. C, The percentage of cells among monocytes, CD11c+ DCs, and B cells that bound polyacrylamide-α2-3 (positive control [PC]), rDer p 2, and sia-rDer p 2 (n = 3). Data are shown as mean ± SD. D, Representative histograms showing the expression of Siglec-9 on classical monocytes after incubation of cells with either rDer p 2 or sia-rDer p 2.

To investigate whether sia-rDer p 2 would bind to Siglecs expressed on immune cells, we first measured the expression of Siglec-9 and Siglec-10 on the different immune cells within PBMCs (gating strategy; Fig E1). Monocytes and DCs but not B cells expressed high levels of Siglec-9, whereas CD14−CD16+

TABLE I. mAbs used for flow cytometry, including clone, manufacturer, and catalog number. FITC, fluorescein isothiocyanate; MCBI, Molecular Cell Biology and Immunology; NA, not applicable; PE, phycoerythrin.
Adaptive response and enlargement of dynamic range

Many membrane channels and receptors exhibit an adaptive, or desensitized, response to a strong sustained input stimulus, often supported by protein activity-dependent inactivation. Adaptive response is thought to be related to various cellular functions such as homeostasis and enlargement of dynamic range by background compensation. Here we study the quantitative relation between adaptive response and background compensation within a modeling framework. We show that any particular type of adaptive response is neither sufficient nor necessary for adaptive enlargement of dynamic range. In particular, a precise adaptive response, where system activity is maintained at a constant level at steady state, does not ensure a large dynamic range in either input signal or system output. A general mechanism for input dynamic range enlargement can come about from the activity-dependent modulation of protein responsiveness by multiple biochemical modifications, regardless of the type of adaptive response it induces. Therefore, hierarchical biochemical processes such as methylation and phosphorylation are natural candidates to induce this property in signaling systems.

1 Introduction

Organisms in general and cells in particular face the continuous challenge of sensing their environment and responding accordingly [1,2]. Cell membranes are populated by a variety of proteins, such as ligand-binding receptors and various signal-responding channels, that carry out this sensing and transmit information to downstream processes in the cell. In many cases these sensing molecules can be described as having two distinct functional states, such as active/inactive for a receptor or conducting/non-conducting for a channel. The transitions between functional protein states often depend directly on incoming signals, but are also regulated on longer timescales by various cellular processes.
Many of these sensing and signaling proteins exhibit an adaptive response, or desensitization, following a strong and persistent stimulation. The qualitative hallmark of such responses is observed when exposing the system to a step input signal: an abrupt change of response is followed by a slow relaxation on a longer timescale. These responses have been described and studied for many years in different areas of biology. Recently they have been the topic of much theoretical work, mainly along two lines of research. On one hand, elaborate models were developed to describe specific systems (most notably bacterial chemotaxis [3,4,5,6], but also other systems). These models are highly successful in reconstructing experimental results, but due to their complexity it often remains unclear which model ingredient is responsible for which system property. This situation impedes our ability to generalize models from one system to another and to distinguish universal from system-specific properties. The other line of research has taken a more abstract approach, characterizing and classifying small simple sub-networks with respect to their adaptive response properties [7,8,9,10]. An adaptive response in a signaling system indicates underlying processes on more than one timescale, but does not necessarily imply any particular functionality. If the adaptive response is precise, namely it returns at steady state to a fixed value insensitive to the input, it can serve as a homeostasis mechanism that keeps some variables constant in the face of changing environments. However, from the more general perspective of the cell sensing its environment and transmitting information, it is not a priori clear how adaptive response is related to signal-processing properties, for example the determination of effective input dynamic range. Cellular signaling systems are known for the remarkably broad dynamic range of inputs to which they can respond.
In particular, they can respond to relatively small changes in signal on top of large constant backgrounds, effectively compensating for the background and remaining sensitive to fluctuations around it. For example, the signaling system responsible for bacterial chemotaxis maintains sensitivity to changes in nutrient gradients over 5 orders of magnitude of ligand concentrations [11]. Photoreceptors, as another example, respond to light intensities spanning 11 orders of magnitude [12]. The response to a broad dynamic range of inputs achieved by cellular signaling systems has often been associated with their adaptive response, guided by the intuition that the return of the system to its previous state allows renewed sensitivity to signal [13,5,10]. However, the quantitative relation between the two properties has not been examined in detail. In this work we focus on this relation by utilizing a previously developed general model for adaptive response [14], which has proven useful in making the distinction between universal and system-specific features. We find that the relation between the two properties, adaptive response and dynamic range, is more subtle than may be expected; in particular, any form of adaptive response is neither sufficient nor necessary for the implementation of efficient dynamic range enlargement by background compensation. In Section 2 we review for completeness a general class of 3-state models [14] and its properties. In Section 3 we quantify and compute the degree of dynamic range enlargement by these models, and conclude that state-dependent inactivation, or any particular form of adaptive response, is insufficient to ensure such an enlargement. In Section 4 we generalize the concept of protein unavailability (or inactivation) from a binary to a graded scale of protein responsiveness, and show that such a generalization robustly results in adaptive enlargement of the dynamic range, regardless of kinetic details or of the temporal form of the adaptive response.
We conclude by relating the results to experiments and reviewing some open questions.

2 A three-state model for adaptive response

In our previous work we introduced a general simplified model for adaptive response unifying many biological systems and enabling a mapping to control circuits [14]. The model is based on an ensemble of protein molecules that can be active or inactive, and in addition display slow transitions between available and unavailable pools. Available proteins can rapidly respond to the input signal, switching between the two distinct functional states, active/inactive. Unavailable proteins, by contrast, cannot respond on a short timescale but only after recovering back to the available pool. The physical mechanisms by which proteins become unavailable or recover back to the available pool are diverse; they share the properties of being slow and activity-dependent. The general structure of the model is described by the following scheme:

    Inactive  <-- β(u) / α(u) -->  Active  -- γ -->  Unavailable  -- ∆ -->  (available)    (1)

in which u(t) is the input signal; α(u(t)), β(u(t)) are the input-dependent transition rates between the active and inactive states of the protein; γ is the rate constant for inactivation, assumed to be first-order in the concentration of active molecules x(t); and ∆ is a general term for recovery to availability that can be constant, first-order, or history-dependent (see also the illustration in Fig. 1). In this model the available pool of proteins, represented by the dynamical variable A(t), registers the system's past activity and feeds it back to the observable system output x(t) via multiplicative feedback. The equations describing the system dynamics are written in terms of the activity x(t) and the availability A(t):

    dx/dt = α(u)[A(t) − x] − β(u)x − γx,
    dA/dt = −γx + ∆.    (2)

The direct influence of the input signal on the system enters through the transition rates α(u), β(u).
Assuming these transitions are rapid, each value of input u instantaneously defines a balance between the two functional states, such that the activity reflects the input through the input/output function p(u):

    x = p(u) A,    p(u) = α(u) / (α(u) + β(u)),    (3)

where A denotes the total concentration of available proteins. The details of this function depend on the physical properties of the sensing proteins. On longer timescales, the slow dynamics of A(t) comes into play. These dynamics reflect various cellular processes that modulate the number of available proteins, and their details depend on the particular mechanism of inactivation and recovery. We have shown in our previous work that the dynamical variable A(t) effectively encodes an integral over past system activity, with a kernel that depends on the details of its kinetics. In particular, for different recovery kinetics ∆ the induced adaptive response to step changes can take various temporal forms: exponential or power-law, precise or input-dependent. However, regardless of these forms, under conditions of timescale separation the response is still x(t) ≈ A(t)p(u(t)). Thus the response to abrupt changes will be characterized by a rapid change in p(u) followed by slow changes in A, which multiplies p(u).

Figure 1: General 3-state kinetic scheme providing an abstract model for state-dependent (i.e. activity-dependent) inactivation. Proteins can be active or inactive, with a balance between them depending on the input u(t). Population of the active state x(t), which equals the total activity, is the system's output. In addition, proteins can transit, slowly and only through the active state, to an "unavailable" state in which they cannot respond to the input. Kinetics of recovery from unavailability, denoted by the abstract term ∆, may vary according to the physical implementation found in various biological systems. For a detailed analysis see [14].
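The two-timescale behavior described above can be seen by integrating the model directly. The following sketch uses hypothetical rate functions and parameter values (chosen only to separate the fast and slow timescales, not taken from any figure) with zero-order recovery ∆ = δ, and reproduces the hallmark response to a step input: a rapid jump in x followed by a slow relaxation.

```python
# Forward-Euler integration of the 3-state model:
#   dx/dt = alpha(u)(A - x) - beta(u)x - gamma*x
#   dA/dt = -gamma*x + delta          (zero-order recovery)
# Rates alpha, beta are made large so active/inactive equilibrate fast.

def alpha(u):        # fast activation rate, assumed proportional to input
    return 100.0 * u

def beta(u):         # fast deactivation rate, assumed constant
    return 100.0

gamma, delta = 1.0, 0.1          # slow inactivation and recovery constants
dt, T, t_step = 1e-3, 100.0, 20.0

x, A = 0.1, 0.3                  # near the pre-step steady state for u = 0.5
trace = []
for k in range(int(T / dt)):
    t = k * dt
    u = 0.5 if t < t_step else 1.5         # step input applied at t_step
    dx = alpha(u) * (A - x) - beta(u) * x - gamma * x
    dA = -gamma * x + delta
    x += dt * dx
    A += dt * dA
    trace.append((t, x))

peak = max(xv for t, xv in trace if t > t_step)
final = trace[-1][1]
print(f"pre-step x ~ {delta/gamma:.3f}, peak after step = {peak:.3f}, "
      f"final x = {final:.3f}")
```

With these parameters the activity jumps toward A·p(u) at the step and then relaxes back to δ/γ, illustrating the slow multiplicative feedback through A(t).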
The three-state general model presented above is equivalent to previously studied models in several special cases. For a first-order recovery with conservation of the number of molecules, ∆ = δ(1 − A), it is equivalent to a model proposed and studied to describe the kinetics of voltage-gated ion channels [15]. This system exhibits an exponential adaptive response which relaxes to a steady-state level dependent on the constant background signal. For a zero-order recovery, ∆ = δ, it is equivalent to a model of bacterial chemotactic receptors [13]. This system also exhibits an exponential adaptive response, but its steady-state level is independent of the background input, a property known as precise adaptation. It was suggested that this model is related to integral feedback control, which implements the maintenance of output at a constant value [16]. Recent theoretical work has devoted much attention to the special case of precise adaptive response, in particular in the context of 3-state models, which provide the simplest networks to exhibit such a response [17,9,10]. While a circuit that implements a precise adaptive response is well suited for homeostatic regulation of its own output, the relation of such a response to the system's sensitivity and dynamic range is often mentioned but rarely quantified. In the following sections we use the model presented here and its extensions to shed light on this relation.

3 Adaptive enlargement of dynamic range

We have seen that two functional protein states can define an input/output function p(u) by the balance between them as a function of the input signal. Different proteins define different input/output functions, depending on the mechanism of interaction with the input. Whatever the exact form of p(u), it has a limited dynamic range that allows response resolution only within a fraction of the range of possible input signals.
A possible engineering solution to this problem, implemented in many man-made control systems, is a feedback loop which integrates past output, feeds it back, and subtracts it from the input. Such feedback shifts the input/output curve along the axis of the input signal and allows its high-gain region to move [18]; it effectively subtracts constant or slowly varying backgrounds and retains sensitivity to fluctuations around them. However, in the biochemical systems described by the above models, the feedback loop integrates past activity and feeds it multiplicatively onto the input [14]. This form of feedback does not shift the response function but only rescales its magnitude. Therefore it cannot compensate for constant backgrounds or enlarge the dynamic range for response. To make these statements more quantitative, we use the model to calculate the response to step inputs ∆u that appear on top of constant backgrounds u0. Consider the three-state model introduced in the previous section, for the particular case of zero-order recovery, resulting in a precise adaptive response. This model is equivalent to the toy model for an adaptive module proposed by Barkai and Leibler [13] and later studied and extended by many others in the context of bacterial chemotaxis. Assuming a separation of time scales between the rapid input-dependent transitions and the slow inactivation (Eq. (3)), one can solve the response dynamics of Eq. (2) with ∆ = δ for an arbitrary input signal u(t). For a step input, the leading term of this approximation yields:

    x(t) = (δ/γ) [ 1 + ( p(u0 + ∆u)/p(u0) − 1 ) e^(−t/τ) ],

where τ = 1/(γ p(u0 + ∆u)). The infinite-time response here, x∞ := lim t→∞ x(t) = δ/γ, is independent of the input, the defining feature of a precise adaptive response, and the relaxation is exponential. The magnitude of the transient in this expression shows manifestly how the nonlinear response function p(u) decreases the response magnitude to a step ∆u as the background u0 increases.
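The shrinking of the transient with background can be checked directly from this leading-order expression. The sketch below (with a hypothetical saturating p(u) and hypothetical parameter values) tabulates the transient amplitude (δ/γ)(p(u0 + ∆u)/p(u0) − 1) for several backgrounds:

```python
# Transient amplitude of the quasi-steady-state step response for the
# zero-order-recovery model, evaluated on a simple saturating p(u).
def p(u):
    return u / (1.0 + u)           # example input/output function

gamma, delta = 1.0, 0.1            # hypothetical slow-kinetics constants

def transient_amplitude(u0, du):
    return (delta / gamma) * (p(u0 + du) / p(u0) - 1.0)

for u0 in (0.1, 1.0, 10.0):
    amps = [transient_amplitude(u0, du) for du in (0.1, 1.0, 10.0)]
    print(f"u0={u0:5.1f}  transient for du=0.1,1,10: " +
          ", ".join(f"{a:.4f}" for a in amps))
```

As u0 grows and p(u) saturates, the ratio p(u0 + ∆u)/p(u0) approaches 1 for any fixed ∆u, so the response amplitude collapses instead of the sensitive range shifting to higher inputs.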
A graphical illustration of this result for several step functions is shown in Fig. 2. To quantify the ability of a system to enlarge its dynamic range by background compensation, one can use such step-function inputs as follows: for each ambient background signal u0, the system is probed by increasing steps of input ∆u on top of this background. For each background a response curve is then plotted as a function of the step magnitude. An effective enlargement of dynamic range occurs when the different curves shift horizontally on the input axis, such that their sensitive, unsaturated portion is centered around the constant background. Fig. 2(c) shows this plot for the three-state model with precise adaptive response considered here. The response curve is not shifted horizontally but remains in the same vicinity of input while decreasing significantly in amplitude. We conclude that an adaptive response in system activity, even if precise, is insufficient for an adaptive enlargement of dynamic range by background compensation; moreover, it is insufficient to maintain the entire dynamic range of system output, although the output returns to the same value at steady state regardless of the constant background (this is reflected by all curves in the figure starting at the same output value at ∆u = 0). In fact, none of the three-state models represented by the general class Eq. (1) shows the property of dynamic range enlargement. In the following section we show a generalization of the available/unavailable states to a graded ladder of responsiveness, which allows a population of receptors to implement a weighted sum of several input/output functions and thus to shift and rescale the range of input to which it effectively responds.

4 Graded responsiveness as a mechanism for adaptive dynamic range enlargement

In the simplified class of models discussed above (Eq.
(1)), we considered a division of the ensemble of molecules into two classes: available or unavailable to respond to the input signal. This is a coarse-grained picture of the molecular system which simplifies the model and is often invoked for ease of analysis. The resulting three-state models exhibit adaptive response which, if precise, can serve as an effective homeostasis mechanism. However, we have shown that this property does not reflect an enlargement of the dynamic range for response to signals. Here we show that a more gradual modulation of the molecules into classes of varying sensitivity can account for an effective background compensation over a broad range of inputs, such as observed in experiments on several signaling systems [19]. In bacterial chemotaxis, it is well established that multiple methylations play a central role in adaptation to background. Receptors have several methylation sites and adaptation involves a change in the average methylation level per receptor [20]. Asakura and Honda [21] constructed a multi-state model relying on the properties of methylation in bacterial receptors. In order to account for experimental results they constrained the model parameters so as to obtain a precise adaptive response. We present below a generalized version of the mechanism they proposed, and analyze its properties in terms of adaptive response and dynamic range. Our contribution to previous results lies in quantifying the relations between the kinetic details of the model and the emerging system properties, namely adaptive response and dynamic range. The main conclusion will be that this model enlarges the dynamic range regardless of the type of adaptive response it induces. Relieving this constraint makes the mechanism more generally applicable; we discuss such possible applications later. Imagine a set of protein states, physically modified one with respect to another; this is displayed by the scheme in Fig.
3 with n + 1 different classes, occupied by relative concentrations A 0 to A n ( n i=0 A i = 1). Within each class there are both an active (occupied by a concentration x i ) and an inactive (with A i − x i ) state. Each class is characterized by a different degree of responsiveness to the input signal, reflected by a shifted balance between the active and inactive states at a given value of input. Thus, as in the simpler model, we distinguish two types of labels on the molecular states: active and inactive are functionally distinct, whereas responsiveness class i includes both these states but exhibits a different affinity towards the input signal. This is a generalization of the availability concept presented in the basic 3-state model (Eq. (1) and Fig. 1) incorporating only two classes, one of which was completely non-responsive to the signal. To reflect this analogy we keep the notation similar, with class occupancies denoted by A i and class decrease and increase transitions by Γ i and ∆ i , correspondingly. The responsiveness to input is characterized, as before, by an instantaneous input/output function for each class, p i (u). Indeed measurements on chemotactic receptors with fixed methylation levels [22], have shown that the input/output functions are shifted one with respect to the other, covering different regions of input signal by their sensitive parts. In particular the least methylated receptors are first to saturate when the signal increases, and the most methylated saturate last [19,22]. Therefore this type of state space provides a degree of freedom for the ensemble of receptors to modulate its response by redistribution among the modified states. The weighted sum x = x i = p i (u)A i represents the system total activ-ity and is assumed to be directly related to its output to downstream processes. Adaptive responses are usually referred to and measured in terms of this output. 
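The role of the shifted per-class input/output functions can be made concrete with a minimal sketch. The Hill curves and the geometric spacing of their half-saturation points below are assumptions chosen for illustration; the text only requires that the sensitive regions of the p_i cover different ranges of input. Moving the weight of the distribution {A_i} to higher classes moves the sensitive part of the summed response to higher inputs:

```python
import numpy as np

def p_class(u, i, K0=1.0, s=4.0):
    """Assumed per-class input/output function: a Hill curve whose
    half-saturation point K0 * s**i shifts to higher input with class i."""
    return u / (u + K0 * s ** i)

def total_activity(u, A):
    """Weighted sum x = sum_i p_i(u) * A_i over the responsiveness classes."""
    return sum(p_class(u, i) * A_i for i, A_i in enumerate(A))

# All weight in class 0 vs. all weight in class 3, probed by doubling u around u = 10:
gain_low  = total_activity(20.0, [1, 0, 0, 0]) - total_activity(10.0, [1, 0, 0, 0])
gain_high = total_activity(20.0, [0, 0, 0, 1]) - total_activity(10.0, [0, 0, 0, 1])
```

With all receptors in class 0 the response is nearly saturated at u = 10 and doubling the input changes the output by only about 0.04; with the weight in class 3 (half-point at 64) the same doubling yields about 0.10, i.e. the ensemble has regained sensitivity at the higher input.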
In addition to this measurable quantity, the state of the system is also characterized by the underlying distribution of receptors among responsiveness classes, {A_i}_{i=0}^{n} (see the numerical example in Fig. 4(b)). The mean modification level per receptor is the average of this distribution, m = Σ_{i=0}^{n} i A_i. Transitions between responsiveness classes, here represented by the arbitrary terms Γ_i and ∆_i, are slow and activity-dependent; this activity dependence provides the necessary feedback from the system output to its effective response function. Fig. 5(a) shows an example of the total activity x in response to steps, in this case exhibiting a precise adaptive response. Fig. 5(b) shows that the redistribution among classes is altered, although the system activity returns to the same value it had before the additional step input. The adaptive response in system activity is a result of the existence of two separated timescales in the system, causing two stages of response to a change in input. Equilibration between active and inactive states happens rapidly with the change in signal, whereas the redistribution among responsiveness classes is a slower process that results in the later relaxation in activity. This is a property of the general class of models depicted in Fig. 3, irrespective of the detailed kinetics of transitions. However, in order for the system activity to display precise adaptive response, special assumptions need to be made on the kinetics of transitions between responsiveness classes. Different assumptions were made by different authors in order to constrain the model to display this precise adaptive response (see Appendix). However, the structure defined by Fig. 3 exhibits an adaptive dynamic range enlargement well beyond these constraints, for various types of kinetics defined by the abstract symbols Γ and ∆. Therefore adaptive dynamic range enlargement can be found in such a model independent of the precision of adaptive response.
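A minimal numerical sketch of such a ladder is given below. The specific kinetics are assumptions chosen for concreteness: class increase (modification) runs at a fixed total rate δ split over the inactive receptors (the saturated-enzyme assumption of the Appendix), class decrease is first order in the active receptors of each class, and the per-class responsiveness grows linearly with i (class 0 is unresponsive). The total activity then adapts to δ/γ after a step in input while the distribution shifts among classes:

```python
import numpy as np

def simulate_ladder(u_of_t, n=4, T=400.0, dt=0.05, delta=0.05, gamma=1.0, K=1.0):
    """Graded-responsiveness ladder with assumed kinetics (illustration only):
    - adiabatic per-class activity: x_i = p_i(u) * A_i,  p_i(u) = (i/n) * u/(u+K)
    - class increase i -> i+1: total flux delta, split over inactive receptors
    - class decrease i -> i-1: first order in the active receptors, rate gamma.
    Returns the time grid, total activity x(t) and mean modification m(t)."""
    A = np.array([0.84, 0.04, 0.04, 0.04, 0.04])   # initial distribution (for n = 4)
    i_arr = np.arange(n + 1)
    t = np.arange(0.0, T, dt)
    x_tot = np.empty_like(t)
    m = np.empty_like(t)
    for k, tk in enumerate(t):
        u = u_of_t(tk)
        x = (i_arr / n) * (u / (u + K)) * A          # active concentration per class
        inact = A - x
        up = delta * inact[:-1] / inact[:-1].sum()   # modification fluxes i -> i+1
        down = gamma * x[1:]                         # de-modification fluxes i -> i-1
        dA = np.zeros(n + 1)
        dA[1:] += up
        dA[:-1] -= up
        dA[:-1] += down
        dA[1:] -= down
        A = A + dt * dA
        x_tot[k] = x.sum()
        m[k] = (i_arr * A).sum()
    return t, x_tot, m

# Step in input at t = 150: activity jumps, then readapts to delta/gamma = 0.05
t, x, m = simulate_ladder(lambda tt: 1.0 if tt < 150 else 4.0)
```

With these particular assumptions the mean modification obeys ṁ = δ − γx exactly, so the adaptation is precise; other choices of Γ and ∆ preserve the slow redistribution, and with it the range shift, even when the adaptation is no longer precise.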
For example, if the kinetics of modification and de-modification is first order with arbitrary rate constants, the adaptive response is imprecise (the steady state is input-dependent), but nevertheless the system maintains its sensitivity to input changes on top of large backgrounds. Fig. 6 shows the quantitative summary of these statements for the graded-responsiveness model with precise (Fig. 6(a)) and imprecise (Fig. 6(b)) adaptive response. As expected, the dynamic range of response is slightly diminished by the imprecision of the steady state, since the range of activity (output) is itself limited (range of curve in the y-axis). However the sensitivity towards input fluctuations on top of a background is broadened in a similar manner in both cases, demonstrating a logarithmic horizontal shift of the response curves with the signal (x-axis). We recall that such behavior is not exhibited by the 3-state model (compare to Fig. 2(c)).

Discussion

Adaptive response and high sensitivity over an extended dynamic range are properties of many biological signaling systems. In addition to developing realistic models describing the specific aspects of each system, it is of interest to study on a fundamental level which model ingredient gives rise to what system property. In this work we have used abstract simplified models to disentangle adaptive response properties from signal-processing properties, namely the ability to compensate for a constant background and maintain sensitivity to transients over a broad range of inputs. The characteristic feature of background compensation is a shift of the transient-response function along the input axis as the background changes. This shift allows the sensitive portion of the input/output function to be centered where fluctuations in signal are expected to occur; it is generally implemented by internal degrees of freedom that are not directly reflected by the system output.
On the other hand, adaptive response is a dynamic property of system output. It is not surprising therefore that the relation between these two properties is not one-to-one. We demonstrated that adaptive response alone is insufficient to provide an enlarged dynamic range, even if it is precise. By analysing the simplest biochemical models displaying adaptive response we have shown in our previous work that the circuit implemented by these systems contains a multiplicative, rather than additive, feedback branch [14]. Such feedback can induce a constant (input-independent) steady-state output, namely a precise adaptive response. Here we have shown explicitly that this property does not ensure availability of the entire range of output for further stimulation. More importantly, it does not induce an adaptive shift of the input/output response curve on the input axis, and thus does not allow its sensitive portion to move to different regions of input. Our conclusion is that these simplified models display only a phenomenological dynamic effect of adaptive response, which does not necessarily fulfill any functional role in signal processing. Additional internal degrees of freedom are required to induce the flexibility of adaptive background compensation. One mechanism that has been proposed [21] is the modulation of the protein input/output response by multiple modifications, with each modification shifting the sensitive region around different input values. Changes in the distribution of proteins among the different modification classes can then effectively change the total response of the system. Quantitatively, the extent and nature of the response shift will be dictated by the input/output response curves at the different modification classes. On a coarse-grained level of description, the change in distribution can be approximated by the change in average modification level of the receptor population. 
This can be described by a continuous modulating parameter of the average input/output response curve of the entire population [23]. Building on this previous model for bacterial chemotactic receptors [21], we have analyzed a general structure of protein states which induces the property of dynamic-range enlargement. We have shown that this mechanism maintains sensitivity to changes on top of background within a broad range of input signals, independent of several kinetic details and regardless of the type of adaptive response it induces. Precise adaptive response can expand the range of transient output to cover the entire available range of system activity, namely affect the amplitude of response. However, the horizontal shift representing the background compensation is a property of a much broader class of models that displays arbitrary adaptive response. In bacterial chemotactic receptors, multiple methylation of the receptor provides the ladder of modification, and recent studies are starting to unravel the molecular mechanisms underlying graded responsiveness in these receptors [24]. Other processes such as multiple phosphorylation can in principle induce similar properties in other signaling proteins. Indeed it has been suggested that in photoreceptors multiple phosphorylation plays a role analogous to methylation [20,19]. The results presented in Fig. 6 are remarkably similar to experimental results on bacterial chemotaxis (Fig. 3A of [25]) and on photoreceptors (Figs. 7 and 9 of [26]), supporting the generality of this mechanism in widely diverse sensory systems (a bacterium vs. a turtle's photoreceptor cell). Further experiments are needed to test this idea and to explore the degree of universality between the mechanisms.

Appendix: Precise adaptive response in the graded-responsiveness model

In this appendix we consider constraints on the kinetics of the general model displayed in Fig. 3 such that precise adaptive response ensues.
Asakura and Honda [21] in their original model proposed the following conditions: (1) both class-increase (methylation) and class-decrease (de-methylation) reactions are state-dependent, i.e. de-methylation works only on active receptors and methylation works only on inactive ones; (2) the ratio between methylation and de-methylation transition rates is independent of methylation level, i.e. ∆_i/Γ_i is independent of i; (3) the end states A_0 and A_n are characterized by such an extreme equilibrium that each of them effectively includes only one activity state. Under these assumptions a precise adaptive response is observed over a range of parameters. Barkai and Leibler [13], on the other hand, attributed the appearance of precise adaptive response to the action of the methylation enzyme CheR at saturation [27,28], such that the total rate of this reaction is fixed. An application of this assumption to the current model is presented below. In either case the requirement for precise adaptive response, such as is observed over a range of parameters in bacterial chemotaxis experiments, places constraints on the kinetics of the model. We have seen that within the simplified binary model of state-dependent inactivation, a zero-order kinetics of return from unavailability results in an exponential, precise adaptive response. Now, in the graded-responsiveness generalization, assuming the modifying enzyme acts independently of the current modification level, the saturation condition implies Σ_{i=1}^{n} ∆_i = δ. Activity-dependent kinetics enters here through the de-modification reaction acting only on active receptors, hence depending on the concentrations of active molecules x_i.
Assuming that this reaction is first-order and also insensitive to modification level, it depends on the total concentration of active molecules, x = Σ_i x_i. Under these assumptions all transitions in this direction depend on the total activity; relaxing this assumption implies that they depend on a more complex weighted average of the activities at different classes. In the adiabatic approximation x_i equilibrates rapidly in response to a change in the signal such that x_i ≈ p_i(u) A_i. The slower variables A_i then obey rate equations of the form Ȧ_i = (∆_i − ∆_{i+1}) + (Γ_{i+1} − Γ_i), with the boundary case Ȧ_n = +∆_n − Γ_n, whereas the mean modification level obeys ṁ = Σ_{i=1}^{n} (∆_i − Γ_i). These assumptions imply that the mean modification retains the same relation with the total system activity as the availability A in the three-state model [14], ṁ = δ − γx, showing manifestly that the system's steady-state response is independent of the input stimulus; compare this result to [16].
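The closing argument can be written out compactly; the telescoping sum below is a sketch consistent with the text, using only the definitions of the class fluxes and the two kinetic assumptions (saturated modification, de-modification first order in the active molecules):

```latex
% Mean modification level and its dynamics:
\dot{m} \;=\; \sum_{i=0}^{n} i\,\dot{A}_i
        \;=\; \sum_{i=1}^{n} \Delta_i \;-\; \sum_{i=1}^{n} \Gamma_i
        \;=\; \delta \;-\; \gamma \sum_{i} x_i
        \;=\; \delta \;-\; \gamma x .
% At steady state \dot{m} = 0, hence
x^{\ast} \;=\; \frac{\delta}{\gamma},
% independent of the input u: a precise adaptive response.
```

The telescoping works because each class-increase transition raises m by exactly one unit and each class-decrease transition lowers it by one, so only the total fluxes survive the sum.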
Tunable second harmonic generation by an all-dielectric diffractive metasurface embedded in liquid crystals

We experimentally demonstrate the possibility to modulate the second harmonic (SH) power emitted by nonlinear AlGaAs metasurfaces embedded in a liquid crystal (LC) matrix. This result is obtained by changing the relative in-plane orientation between the LC director and the linear polarization of the light at the excitation wavelength. According to numerical simulations, the second harmonic is efficiently radiated by the metasurfaces thanks to the sizeable second-order susceptibility of the material and the resonant excitation of either electric or magnetic dipole field distributions inside each meta-atom at the illuminating fundamental wavelength. This resonant behavior strongly depends on the geometric parameters, the crystallographic orientation, and the anisotropy of the metasurface, which can be optimized to modulate the emitted SH power by about one order of magnitude. The devised hybrid platforms are therefore appealing in view of enabling the electrical control of flat nonlinear optical devices.

Introduction

All-dielectric metasurfaces are attracting increasing interest from the scientific community for optical wavefront manipulation [1][2][3]. The possibility of controlling the phase of the light scattered by each unit of the metasurface and the lossless nature of the optical resonances in high-refractive-index dielectric nanostructures are key assets [4,5]. A wide range of applications, including lenses, beam deflectors and holograms, has been demonstrated [6][7][8][9]. The considerable potential of these devices has stimulated the demonstration of tunable metasurfaces [10], and several mechanisms have been proposed to reach this goal. A straightforward realization of tunable metasurfaces relies on mechanical deformations to reconfigure the entire structure [11,12].
For instance, the authors of reference [12] have proposed an array of dielectric resonators embedded in an elastomeric matrix, where remarkable optical resonance shifts have been experimentally demonstrated to stem from the application of uniaxial strain. Another appealing strategy to attain reconfigurability is to fabricate metasurfaces out of phase-change materials (e.g. chalcogenides, correlated oxides), whose optical properties strongly depend on the application of an external electric field stimulus [13,14]. Faster control mechanisms have also been reported, such as the ultrafast photo-injection of a dense electron-hole plasma into a dielectric Mie-resonant nanoparticle [15,16], leading to femtosecond transient dielectric permittivity. The possibility to thermally tune the metasurface behavior has also been proposed, owing to optical heating of single all-dielectric nanoantennas [17][18][19].

Figure 1. (a) Schematic of the sample and the illumination geometry. GaAs: gallium arsenide; AlOx: aluminum oxide; AlGaAs: aluminum gallium arsenide; LC: E7 Merck liquid crystal; ITO: indium tin oxide. Both the excitation (at frequency ω, gray beam) and the collected light (at frequency 2ω, red beams) travel through the semi-transparent ITO side, thus collection occurs in the backward direction. The blue double arrow indicates the LC director. On the right, scanning electron micrographs of the square (b) and diamond (c) lattice metasurfaces for a nanodisc radius of 280 nm. The metasurface period is p = 910 nm. The inset in (c) shows the orientation of the AlGaAs crystalline axes, which is common to all metasurfaces.

In this context, LCs represent valid candidates for the implementation of tunable metasurfaces thanks to their high birefringence (n_e − n_o ∼ 0.2), which can be controlled by temperature or by an externally applied electric field [20][21][22][23].
Moreover, LC-based metasurfaces might profit from well-established fabrication techniques developed for the display industry [24][25][26]. The re-shaping of the light scattered from a dielectric metasurface as a function of the orientation of the liquid crystal (LC) director has been thoroughly investigated in the linear regime and already exploited to realize ultracompact gas sensors [27]. Conversely, the control of the nonlinear harmonic signals generated by metasurfaces embedded in LC matrices is still at an early stage. In this scenario, we recently discussed the possibility to obtain a SHG modulator by using a commercial LC as the immersion medium of a high-refractive index metasurface made of AlGaAs nanodiscs over an AlOx substrate [28]. Our numerical simulations show that the SHG can be switched on or off by changing the LC director from the planar to the homeotropic (namely, out of the metasurface plane) alignment. Here, we discuss and experimentally demonstrate the modulation of the SHG from an optimized dielectric metasurface embedded in a LC matrix, as a function of the relative orientation between the incident pump beam (λ = 1551 nm) polarization and the LC director kept in the plane of the metasurface (see figure 1(a)). As a result, for a well-defined orientation of the array with respect to the crystallographic axes and optimized geometrical parameters of the metasurface, we obtained a second harmonic (SH) emitted power that is about one order of magnitude higher when the pump polarization is switched from collinear to orthogonal to the LC director. These results pave the way to ultrathin nonlinear modulators where the LC anisotropy is electrically, thermally, or optically controlled. 
Numerical simulations

To assess the combined effect of the relative orientation of the AlGaAs crystalline axes and the metasurface lattice with respect to the LC director, we have investigated two geometries with AlGaAs nanodiscs of 200 nm thickness as meta-atoms, arranged in a square lattice with axes parallel to either (i) the [110] and [-110] crystallographic directions (square geometry, figure 1(b)) or (ii) the [100] and [010] directions (diamond geometry, figure 1(c)), with the director always parallel to [110]. To find indications on the geometry enabling a large SH modulation, we have simulated the nonlinear optical response of such metasurfaces as a function of the radius r of each nanodisc, the lattice period p, and the orientation of the pump linear polarization. We have performed finite-element method simulations using the commercial software COMSOL Multiphysics. The unitary cell consists of a dielectric AlGaAs disc lying on an AlOx substrate, with refractive indices n ≈ 3.4 and n ≈ 1.6, respectively, in the wavelength range of interest. To model the E7 LC matrix that surrounds each nanopillar, we assumed a homogeneous anisotropic dielectric medium with extraordinary and ordinary refractive indices n_e = 1.6 and n_o = 1.5, respectively, at both the pump and SH wavelengths. In fact, although at this wavelength the E7 LC has n_e = 1.69 and n_o = 1.5 [29], we found that a reduced anisotropy such as the one above allows us to attain a better match between simulations and experimental data (see section 4). Such a reduced LC anisotropy, indeed, better depicts a realistic situation in which the effects of disorder at room temperature and a specific anchoring of the LC molecules to the nanodiscs cannot be neglected. The pump beam was described as a plane wave incident normal to the substrate with a wavelength λ = 1551 nm.
To model the SH emission, we simulated in the frequency domain the scattered field distribution at the pump frequency ω to determine the SH sources in terms of current densities. Given the zincblende crystalline structure (F-43m crystallographic space group) of AlGaAs, the ith Cartesian component of the external current density is calculated as J_i ∝ Σ_{jk} χ^(2)_ijk E_j(ω) E_k(ω), with E_j(ω) the jth Cartesian component of the electric field at ω, and χ^(2)_ijk the second-order susceptibility, which in the case of AlGaAs was set to 200 pm V^-1 [30]. In our simulations the dielectric nanodiscs were the sole sources of SH, since both the LC and the AlOx substrate have negligible χ^(2) [31]. Figure 2 displays the calculated overall SH power emitted in the backward direction for a metasurface period p varying from 895 nm to 960 nm. In this range, SHG is dominated by the metasurface resonances associated with either an electric dipole (ED) or a magnetic dipole (MD) field distribution at ω inside each nanodisc (see figures 2(e) and (f)) [28]. It is worth noting that, in the investigated region of the parameter space, the ED is mainly responsible for the SH enhancement in the square geometry (see figures 2(a) and (b)), while the MD constitutes the major contribution to SHG in the diamond geometry (see figures 2(c) and (d)). One can readily notice that these resonances significantly depend on the lattice period p, a behavior that indicates some degree of near-field coupling between meta-atoms. In addition, while the MD resonance is much sharper than the ED due to its lower radiative losses, it shows a weak dependence on the relative orientation between the LC director and the pump polarization. Conversely, although weaker and broader, the ED resonance is strongly affected by how the director is oriented with respect to the light polarization and is, therefore, very promising in view of SH modulation.
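The symmetry statement above can be made concrete with a small sketch. For the zincblende class the only nonzero χ^(2) components are those with three distinct Cartesian indices, so the induced nonlinear polarization (to which the current density J is proportional) mixes orthogonal field components; the prefactor convention below is an assumption for illustration:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def shg_polarization(E, chi2=200e-12):
    """Nonlinear polarization P(2w) for a zincblende (F-43m) crystal.
    Only chi2_ijk with i, j, k all different are nonzero, hence
    P_x ~ E_y*E_z, P_y ~ E_x*E_z, P_z ~ E_x*E_y (factor 2 from jk symmetry)."""
    Ex, Ey, Ez = E
    return 2.0 * EPS0 * chi2 * np.array([Ey * Ez, Ex * Ez, Ex * Ey])

# Pump polarized along [100]: no SH source at all.
P_100 = shg_polarization([1.0, 0.0, 0.0])
# Pump polarized along [110]: an SH polarization appears along z.
P_110 = shg_polarization(np.array([1.0, 1.0, 0.0]) / np.sqrt(2))
```

This is why the orientation of the meta-atom lattice with respect to the crystallographic axes (square vs. diamond geometry) changes which field components of a given resonance can drive the SH.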
To verify this concept, we compare the two geometries for p = 910 nm (see black dashed lines in figure 2), where the ED contribution to the SH emission is expected to be stronger.

Sample fabrication

Metasurfaces were fabricated, similarly to reference [32], starting from a 1 μm-thick Al 0.98 Ga 0.02 As film and a 200 nm-thick layer of Al 0.18 Ga 0.82 As successively grown on a GaAs substrate by molecular beam epitaxy. The nanostructures are lithographically patterned out of the Al 0.18 Ga 0.82 As layer. A final step of selective oxidation of the Al-rich film creates a low-index AlOx layer, which is essential to attain effective field confinement in the meta-atoms. To investigate the effect of the meta-atom geometry on the SHG yield and rule out possible fabrication inaccuracies that might result in systematic variations of the sizes with respect to nominal values, a set of metasurfaces with both square and diamond lattice configurations was fabricated with the nanodisc radius varying from 260 to 315 nm and p = 910 nm. Representative scanning electron micrographs of the two configurations are shown in figures 1(b) and (c), respectively. The crystallographic axes of AlGaAs in the images are oriented as indicated in the inset of panel (c). The metasurfaces were subsequently embedded in a 12 μm-thick E7 LC matrix using the following procedure. To exploit the full birefringence of the LC, the first step is to adopt an alignment layer that enables a preferential direction of the LC molecules in the rest condition. A solution of poly(vinyl alcohol), 0.5% wt in distilled water, is thus spin-coated at 3000 rpm for 30 s on both the metasurface substrate and the ITO superstrate. We would like to point out that, although in this paper we perform static measurements without an external applied voltage, we adopted a cell design with an ITO superstrate in view of future voltage-dependent studies.
Then the sample is baked at 120 °C for 1 h, resulting in an alignment layer a few tens of nanometers thick. Successively, a rubbing machine is used to rub the polymer layer along a specific direction, using a rotating drum covered by a cloth with short fibers. The device is eventually assembled by sandwiching the treated metasurface with an ITO cover slip as superstrate, kept at a controlled distance by means of a 12 μm-thick mylar spacer. The obtained cell is sealed with UV glue, taking care to leave two opposite entrances for the following LC filling step (during the cell assembly, attention is devoted to laying it down on the metasurface with parallel alignment direction). To prevent the formation of air bubbles, the filling process is performed in a vacuum chamber at a temperature of 80 °C, well above the clearing point of the LC. Finally, the device is slowly cooled down and the open entrances are sealed with UV glue. This last choice has two advantages: (i) it prevents the LC from pouring out of the cell and (ii) it keeps the alignment fixed.

Results and discussion

To investigate the nonlinear emission of the metasurfaces, we employed a home-made confocal microscope coupled with an ultrafast pump laser with emission centered at λ = 1551 nm and delivering 160 fs pulses at 80 MHz repetition rate (OneFive Origami, NKT Photonics). The experimental setup was already described elsewhere (see reference [33] for details). Briefly, to excite the whole metasurface, the fundamental pump beam is focused on the back-focal-plane (BFP) of a 60× objective (Nikon, CFI Plan Fluor 60XC, numerical aperture NA = 0.85) mounted on-axis to obtain a collimated beam that impinges at normal incidence on the sample. A zero-order half-wave retarder working at the pump wavelength is inserted in the excitation path to rotate the polarization of the pump beam.
The emitted SH is collected through the same objective used for the excitation, and chromatically filtered using a narrow band-pass filter centered at 775 nm (bandwidth 25 nm). A Bertrand lens inserted in the detection path images the SH diffraction orders in the BFP of the objective onto a cooled CCD camera (iKon M-934, Andor Technology). The average excitation power employed is 10 mW, which corresponds to an average power density on the sample of about 2 kW cm^-2, based on a beam diameter of about 25 μm. The employed excitation wavelength, which lies in the transparency window of the investigated materials, along with the low power density level, prevents any temperature-induced nematic-isotropic transition in the LC. The periodic arrangement of the nanodiscs in the metasurface, combined with the interference of the coherent SHG by each individual meta-atom, results in a directional beaming of the SH into the first diffraction orders of the metasurface, which acts as a diffraction grating for the SHG. According to Bragg's diffraction law with a period p = 910 nm, the first SH diffraction orders are expected at an angle α = sin^-1(775 nm / 910 nm) ≈ 58°, which falls just within the numerical aperture of our collection objective (NA ≈ 0.85). The nonlinear BFP maps featuring the highest SHG yield are shown in figure 3 as a function of the relative orientation of the linear pump polarization with the LC director, for both the square and diamond metasurface geometries. The first diffraction orders fall at the edge of the objective NA and are therefore partially cut off. In figure 3, one can observe SH emission around the normal direction (especially visible in panel (a)). As previously observed, selection rules forbid normal emission for a single AlGaAs nanodisc [34], but the metasurface periodicity makes it possible to attain sizeable emission in the paraxial direction also for normal-incidence excitation [35].
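The quoted diffraction angle follows from a one-line computation; the helper below (a sketch, with a hypothetical function name) applies the grating equation at normal incidence:

```python
import math

def first_order_angle_deg(wavelength_nm, period_nm):
    """Grating equation at normal incidence, first order (m = 1):
    sin(alpha) = m * wavelength / period.
    Returns the diffraction angle in degrees, or None if the order is evanescent."""
    s = wavelength_nm / period_nm
    if s > 1.0:
        return None  # |sin(alpha)| > 1: no propagating first order
    return math.degrees(math.asin(s))

alpha = first_order_angle_deg(775.0, 910.0)  # SH wavelength over metasurface period
```

The result, α ≈ 58.4° (sin α ≈ 0.852), sits right at the edge of the NA = 0.85 collection cone, consistent with the partially cut-off diffraction orders seen in the BFP maps.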
Moreover, spurious effects might also appear in the paraxial direction due to possible fabrication tolerances [36]. However, in the following we will solely focus on the effects on the first diffraction orders. It can be readily noticed that, in the square geometry, the SH power is significantly higher in panel (a), namely when the pump polarization is parallel to the LC director. In this case the SHG power is enhanced by a factor of 7 for all the first diffraction orders with respect to the value obtained for an orthogonal alignment between the pump polarization and the director (panel (b)). In the diamond geometry (panels (c) and (d)) the SH power modulation is less pronounced by more than a factor of 2, with the higher yield corresponding to a pump polarization orthogonal to the LC director. We stress that in both geometries the relative orientation between the crystal axes of AlGaAs and the pump polarization is kept fixed to rule out possible effects associated with the χ^(2) tensor of the material. We have then recorded a series of BFP maps as a function of the nanodisc radius and pump polarization to compare the experimental evidence to the simulations in figure 2 and confirm the physical mechanism behind the observed modulation. Figure 4(a) shows the radius-dependent SH power, integrated over all the first diffraction orders, emitted by the metasurfaces featuring a square geometry. Each data point represents a different metasurface, where r is varied from 260 to 315 nm, while p is set at 910 nm. From these plots we conclude that, by changing the polarization of the pump beam from parallel (blue curve) to perpendicular (red curve) with respect to the LC director, the condition for optimal SH emission significantly changes, shifting to larger radii.
As previously mentioned, we ascribe this behavior to the non-negligible near-field coupling between the meta-atoms (see figure 2), which causes a sizeable shift of the ED as a function of the LC/surrounding refractive index at the excitation wavelength. Thereby, for a fixed meta-atom radius of 285 nm, we can attain a SH power modulation of up to almost one order of magnitude. Both these features are well reproduced by the simulations, reported for comparison in figure 4(b). In particular, the blue and the red plots correspond to the white dashed lines drawn in figures 2(a) and (b), respectively, which indicate the simulated SH power for p = 910 nm. Figure 4(c) shows measurements performed on the diamond geometry, where, at variance with the square case, changing the orientation of the pump polarization does not affect the MD resonance (i.e. the radius of the resonating nanodisc) but rather the total SH power. Also in this case the numerical simulations are in good qualitative agreement with the experiment. Yet, the experimental data in the co-polarized configuration do not show a peak as pronounced as in the simulations. This could be attributed to small deviations of the nanodisc radii from the nominal values as well as to a possible nanodisc ellipticity introduced by the fabrication process. We would also like to point out that the LC orientation, which has a key role in the SH emission, is extremely sensitive to the local anchoring to the metasurface and, therefore, might be affected by the sample morphology itself. This phenomenon, which is challenging to control and to investigate in detail, might introduce uncertainties in the experiments, such as the discrepancy reported above. Nevertheless, the combination of experiments and simulations indicates that, while the diamond geometry is more efficient in directing SH power into the first diffraction orders, it is less sensitive to the LC anisotropy.
Finally, we would like to point out that the SH in the experiments is several orders of magnitude lower than the numerical prediction, most markedly so for the diamond lattice. This quantitative mismatch can be ascribed to the significant power drop caused by the unavoidable spatial selection imparted by the finite NA of our objective, but also to the fact that numerical simulations were conducted in the CW regime using a single-wavelength coherent pump excitation, while ultrafast broadband pulse excitation was employed in the experiment.

Conclusions

We have experimentally demonstrated that the SHG power from a nonlinear metasurface embedded in a LC matrix can be modulated by about one order of magnitude by changing the relative orientation between the LC director and the pump polarization. In our experimental realization, the fundamental-wavelength beam impinges normal to the metasurface, and the modulation is performed by keeping both the LC director and the illumination field in the plane of the metasurface. This represents a convenient geometry, which could enable the electrical control of the SH emission with bias electrodes fabricated directly on top of the metasurface. Our numerical simulations indicate that the SHG is ruled by the electric and MD field distributions inside each meta-atom at the pump wavelength, which depend on the geometrical parameters and the crystallographic orientation of the metasurface as well as on the anisotropy of the environment. Numerical simulations also suggest that one can leverage many degrees of freedom to tune the nonlinear properties of the metasurface. 
Indeed, the size and shape of the meta-atoms, their near-field coupling, the orientation of the meta-atom lattice with respect to the AlGaAs crystallographic axes, the exciting polarization, and the orientation of the LC director are all parameters that strongly affect the SH emission by the metasurface, offering large flexibility and the possibility to find optimal solutions for a wide range of applications of nonlinear optics with flat devices.
Zebrafish as a Smart Model to Understand Regeneration After Heart Injury: How Fish Could Help Humans Myocardial infarction (MI) in humans is a common cause of cardiac injury and results in irreversible loss of myocardial cells and formation of fibrotic scar tissue. This fibrotic tissue preserves the integrity of the ventricular wall but undermines pump function, leading to congestive heart failure. Unfortunately, the mammalian heart is unable to replace cardiomyocytes, so the life expectancy for patients after an episode of MI is lower than for most common types of cancers. Whereas humans cannot efficiently regenerate their heart after injury, the teleost zebrafish has the capability to repair a “broken” heart. The zebrafish is probably one of the most important models for developmental and regenerative biology of the heart. In the last decades, the zebrafish has become increasingly important for scientific research: it has many characteristics that make it a smart model for studying human disease. Moreover, adult zebrafish efficiently regenerate their hearts following different forms of injury. Due to these characteristics, and to the availability of genetic approaches and biosensor zebrafish lines, it has been established as a useful model for studying the molecular mechanisms of heart regeneration. Regeneration of cardiomyocytes in zebrafish is not based on stem cells or transdifferentiation of other cells but on the proliferation of preexisting cardiomyocytes. For this reason, future studies into the zebrafish cardiac regenerative mechanisms could identify specific molecules able to regulate the proliferation of preexisting cardiomyocytes; these factors may be studied in order to understand the regulation of myocardial plasticity in cardiac repair processes after injury and, in particular, after MI in humans. and the irreversible formation of non-contractile fibrotic scar tissue (1). 
Fibrotic scarring preserves ventricular wall integrity but, at the same time, undermines pump function, leading to congestive heart failure (2). The adult human heart has a very low regenerative capacity on a macroscopic scale after injury; the annual cardiomyocyte renewal rate is estimated at between 1% at the age of 25 and 0.45% at the age of 75 (3). For these reasons, the human heart cannot replace cardiomyocytes lost after MI and, as a result, patients have a lower quality of life and often die prematurely (4). Considering that the one-year mortality rate for MI patients is more than 26%, the life expectancy following an episode of MI is lower than for most common types of cancers, with the exception of lung cancer (5,6). Medicine has improved prevention and early intervention of MI, but it is currently incurable without heart transplantation. Nevertheless, the limited number of donors for heart transplantation does not make this a viable option for most patients. Modern medicine has achieved significant advances in the management of patients after MI with different methodologies: (i) the use of cardiac stem cells, which are known to be capable of generating new cardiomyocytes and could be used to repair the injured heart (7-10); (ii) the reprogramming of fibroblasts into new cardiac cells (11); and (iii) the production of new biomaterials (12). Although remarkable progress has been made in generating cells of the cardiovascular lineage, a major challenge now is creating engineered tissue architecture that integrates a microvascular circulation. In order to reduce morbidity and mortality after MI in humans, it is very important to understand the heart's regenerative properties and the molecular mechanisms involved in heart regeneration. Hopefully, in the near future, new knowledge of myocardial biology applied to tissue engineering will make it possible to bridge the gap from bench to bedside for a clinically tractable engineered cardiac tissue (12). 
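To put these renewal rates in perspective, a back-of-envelope calculation (our illustration only, assuming a linear decline between the two quoted rates) gives the cumulative fraction of cardiomyocytes renewed between ages 25 and 75:

```python
def cumulative_turnover(rate_at_25=0.010, rate_at_75=0.0045, years=50):
    """Approximate cumulative cardiomyocyte turnover over `years`,
    assuming the annual renewal rate declines linearly between the
    two endpoints (a simplifying assumption, not a claim of ref. 3)."""
    mean_rate = (rate_at_25 + rate_at_75) / 2.0  # average of a linear ramp
    return mean_rate * years                      # non-compounding sum

print(f"{cumulative_turnover():.1%}")  # about 36% of cells renewed over 50 years
```

Even this generous non-compounding estimate, roughly one third of the myocardium over five decades, is far below what would be needed to replace the massive cell loss of a single MI, which is the point made above.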
The human heart was not always considered a non-regenerative organ. In fact, in the past, it was mostly accepted that the myocardium had some regenerative abilities. It was believed that cardiac hypertrophy was due to the production of new cardiomyocytes, and this idea changed only when detailed studies demonstrated that pathological cardiac growth was due to increased cardiomyocyte size, and not to cell division (13). Multiple studies have analyzed the mammalian heart after different injuries (14), and these experiments have demonstrated that, in general, the adult mammalian heart does not exhibit the ability to regenerate. However, this classical view has been changed again by fundamental discoveries in the last decade. Porrello et al. demonstrated a transient ability of the neonatal mouse heart to regenerate within the first week of postnatal life (15)(16)(17). Moreover, other studies performed using stable isotope incorporation during DNA replication have demonstrated that a small number of cardiac cells are renewed during adult life in mammals (18)(19)(20)(21). The rate of cardiomyocyte renewal in mammalian adult hearts is clearly not sufficient to compensate for the loss of myocardium after an MI, but the implications resulting from these experiments are promising. Since the advent of stem-cell biology, the heart has been investigated in order to find stem and/or progenitor cells. The discovery that the mammalian adult heart preserves a pool of cardiac stem cells (CSCs) able to participate in cardiac homeostasis and repair opened the field of CSC-based therapy (22). Cell replacement therapy represents a fascinating strategy for myocardial degenerative diseases but, despite the encouraging results obtained in small animals, the outcomes from the majority of clinical trials have been poor (10). 
In the past, CSCs were identified by the expression of c-kit, a typical stemness marker; however, the identification of a c-kit positive cell population is necessary but not sufficient to define CSCs, because a heterogeneous population of c-kit positive cells has been identified in the heart. This heterogeneity of c-kit positive cells has probably generated the controversy regarding the existence and role of CSCs in the adult heart (23). Recently, different authors have demonstrated that negative sorting for CD45 and CD31 is necessary to eliminate the lineage-committed cells from the c-kit positive population. In the end, only 1% of the total myocardial c-kit positive cells are truly multipotent CSCs (10,24). The identification of this genuinely multipotent group of cells among c-kit positive cardiac cells could be the key to developing new, effective CSC therapies for the treatment of patients after MI. If the human heart has some endogenous regenerative abilities, these can be supported to promote myocardial regeneration (2). For these reasons, in recent years, scientists have focused their attention on studying natural models of cardiac regeneration. The identification of new therapeutic strategies for regenerating human hearts would reduce morbidity and mortality for millions of people every year. In this review, we will focus on describing current knowledge about heart regeneration in zebrafish and we will discuss how the information obtained from the study of this interesting model may be used to induce heart regeneration in the adult mammalian heart. Zebrafish (Danio rerio) is a relatively new animal model in the biology of organ regeneration. Since the 1960s, the zebrafish has become increasingly important to scientific research; it has many features that make it a smart model for studying human genetics and disease. 
Benefits of using zebrafish are: (i) it is small and robust; (ii) it is cheaper to maintain than mice; (iii) it produces hundreds of offspring; (iv) it grows externally, and at an extremely fast rate; (v) it has a short reproductive cycle; (vi) zebrafish embryos are nearly transparent, allowing researchers to easily examine the development of internal structures; (vii) its genome is fully sequenced to a very high quality; (viii) over 70% of human genes have a true ortholog in the zebrafish genome; (ix) as a vertebrate, the zebrafish has the same major organs and tissues as humans; (x) zebrafish have the unique ability to repair heart muscle. Moreover, there are many advantages of using zebrafish to study human disease (25). In particular, due essentially to its amenability to genetic approaches (26) and to the availability of different pathway reporter lines (27)(28)(29), zebrafish has quickly been established as a very useful system for studying the molecular mechanisms of physiological organ regeneration, including in the heart (30) (Figure 1). The adult zebrafish heart has one atrium and one ventricle; it is smaller and simpler than the mammalian heart, but its histological and structural composition is very similar to that of other vertebrates. The extraordinary capability of zebrafish silent heart (sih b109 ) mutant embryos to survive up to 5 days post fertilization on diffused oxygen, in the absence of active circulation, has led to zebrafish being considered a gold standard in the field of developmental cardiovascular research. The identification of the multilevel controls able to regulate the expression of contractile proteins is fundamental to understanding cardiomyocyte function, dysfunction and regeneration (31). Moreover, more recently, new genes required for cardiovascular development have been identified in zebrafish models through genetic screening strategies (26). 
For these reasons, zebrafish can be considered an excellent model for studying vertebrate development and disease, but there are some intrinsic disadvantages to the system. Because zebrafish are evolutionarily more distant from humans than murine models, results obtained from fish experiments will sometimes have to be confirmed in mammals before being translated to human therapy. Adult zebrafish are able to regenerate different organs, including all fins (32), the spinal cord (33), the retina (34), the heart (35), the telencephalon (36), and the kidney (37). Interestingly, the mechanisms that control regeneration seem to be organ-specific. For instance, fin regeneration depends on the formation of a structure composed of highly proliferative de-differentiated cells, named the blastema, able to give rise to all components of the regenerated fin (38). In contrast, regeneration of the telencephalon does not involve the formation of a blastema but requires the activation of a population of cells characterized by high expression of the Notch target gene her4.1 (36). Poss et al., in 2002, described for the first time in a zebrafish model the most robust cardiac regenerative response in a vertebrate (35). They demonstrated that zebrafish are able to regenerate the heart after amputation of up to ∼20% of the ventricle. The injury leads to the formation of an initial fibrin clot that remains for 7-9 days post-injury (dpi); this fibrin clot is replaced by new cardiomyocytes in the following weeks. After 60 dpi the size and shape of the ventricle, as well as the contractile capability of the beating heart, return to normal (35). This seminal paper opened a new and challenging field of study regarding cardiac regeneration. Interestingly, as underlined by González-Rosa et al., this study raised many questions that were not completely answered in the original paper: (i) why does the zebrafish heart not develop a fibrotic scar? 
(ii) What are the cellular sources of the regenerated tissue? (iii) What signals are involved in regeneration (2)? These questions have been partially clarified in the last 15 years thanks to the contributions of different laboratories working in the cardiac regeneration field. In humans, the heart is unable to regenerate the cardiomyocytes lost after MI; instead, the injury triggers the activation of fibroblasts that secrete collagen, preventing heart rupture. This collagen-based non-contractile scar persists in the heart and contributes to abnormal systolic function due to its inflexibility and non-contractility. Scar formation can eventually lead to heart failure and is consequently more a damage than a help to the heart after MI. The zebrafish has a remarkable capability to regenerate the heart after ventricular injury or amputation, mainly through the ability of the remaining cardiomyocytes to de-differentiate and proliferate to replace the lost cardiac tissue (35,39). In 2011 a cryoinjury zebrafish model was generated, which more closely mimics the pathophysiological process experienced by the human heart after MI (40). Different authors demonstrated that the zebrafish heart regenerates after cryoinjury-induced myocardial infarction (41). This model seems to be very interesting because, after cryoinjury, a collagen scar forms at 14-21 dpi, but the zebrafish heart is able to resolve the scar concomitantly with myocardial regeneration, which mammals cannot do. In zebrafish, after ventricular cryoinjury, cell death, inflammatory infiltration, and increased mechanical forces lead to fibroblast trans-differentiation into myofibroblasts and to secretion of collagen and extracellular matrix (ECM) components in the wound area. This deposition of ECM components is important in order to maintain the integrity of the cardiac wall following cardiomyocyte death. Progressively, the ECM components are degraded by matrix metallopeptidases (MMPs) secreted by cardiac cells and neutrophils. 
Given the important role of MMPs during cardiac remodeling and end-stage heart failure, a better understanding of the biological function of MMPs in tissue remodeling and repair after injury in humans remains essential. Among the MMPs identified to date, MMP-2 and MMP-9 seem to be the main ones involved in post-MI remodeling. However, understanding the roles of MMPs is complicated by interactions between different MMPs: sometimes different MMPs compete with each other for the same substrate; moreover, due to compensatory effects, inhibition of a specific MMP can result in an increase of others (42). For these reasons, zebrafish could help in understanding pathophysiological MMP processes post-MI in order to develop novel therapeutic targets able to inhibit specific MMP actions and, as a consequence, to limit the appearance of heart failure post-MI. In particular, Gamba et al. demonstrated that, following cryoinjury, transcripts of the matrix metalloproteinase genes mmp2 and mmp14, as well as Mmp2 enzymatic activity, are increased, suggesting the involvement of these proteases in collagen degradation (43). It is well known that, in mammals, myocardial infarction-induced fibrosis and cardiac remodeling are regulated by Smad3-dependent TGFβ signaling (44). More recently, Chablais et al. demonstrated that zebrafish and mammals share similar mechanisms of scar formation. They showed that in zebrafish Smad3-dependent TGFβ signaling is important in the balance between the reparative and regenerative processes, and that this signaling is also important for the formation of the transient scar (45). To explain how the scar is resolved in fish hearts but not in neonatal mouse hearts, Gamba et al. proposed a model mechanism of potential scar resolution in the zebrafish heart after injury. In the neonatal mouse heart, after cryoinjury, damage leads simultaneously to the synthesis of collagen and to collagenolytic activity in the wound. 
The inability to regenerate cardiomyocytes leads to a balance of intra-ventricular mechanical forces and of collagen synthesis and degradation, resulting both in a persisting non-contractile collagen scar and in ECM remodeling. In the zebrafish heart, the injury leads to a comparable fibrotic response: the intra-ventricular mechanical forces decrease during myocardial regeneration, leading to a down-regulation of collagen synthesis and, eventually, to the removal of the scar (43). Contraction and the generation of mechanical forces are very important for cardiac development and for general cardiac function. Using animal models, researchers have demonstrated that intra-ventricular mechanical forces change as the animal ages, suggesting that tissue composition, such as ECM crosslinking density and ECM interactions, is modified as well (46)(47)(48). Moreover, improper mechanical signaling from surrounding tissue, such as the reduction of cardiomyocytes after injury, can lead to defects in the balancing of intra-ventricular mechanical forces and, as a consequence, of collagen synthesis and degradation (43,48). In 2009 Ausoni and Sartore proposed the lack of fibroblasts as an explanation for the regenerative capacity of the zebrafish heart (49). More recently, other authors confirmed the presence of cardiac fibroblasts in the zebrafish heart after injury and revealed that they not only contribute to the fibrotic response but are also necessary for cardiomyocyte proliferation during heart regeneration (50). In the mouse, cardiac repair upon MI has been demonstrated to occur mainly by ECM deposition from intracardiac fibroblasts and the epicardium. González-Rosa et al. showed that, in the context of heart regeneration, preexisting cells such as endocardial cells can contribute to collagen production, but the main contributor to heart fibrosis is not the endocardium itself. 
Endocardial cells at the injury edge fail to show a complete fibroblast-like phenotype, probably because they do not undergo full epithelial-to-mesenchymal transition (EMT). Cells from the epicardial border are able to produce both periostin and collagen, whereas endocardial cells produce only collagen. This difference in the ECM environments surrounding the injury area may play an essential role in heart regrowth (2). In contrast to mammals, in which the ECM persists after MI, in the zebrafish heart the ECM is degraded. González-Rosa et al. demonstrated that decreased ECM production by fibroblasts has an essential role in fibrosis regression and that this mechanism does not involve the complete elimination of ECM-producing cells: fibroblasts are not eliminated but inactivated (2,50). Other authors described that, in zebrafish, limiting the fibrotic response by genetic suppression of col1a2-expressing cells compromised cardiomyocyte proliferation. They therefore concluded that cells able to produce ECM could be key players in the regenerative process (50). This new information about how fibrosis influences myocardial regeneration in a species with endogenous regenerative potential, such as zebrafish, could have important implications for future regenerative strategies after MI in humans as well; indeed, therapies targeting fibroblast inactivation could be more efficient than anti-fibrotic ones (2,50). Different authors have demonstrated that zebrafish regenerate cardiac tissue through the proliferation of pre-existing cardiomyocytes and neovascularization but, to date, it is not completely clear what signals are involved in zebrafish heart regeneration. After ventricular injury, a blood clot is formed to seal the wound, which is subsequently replaced by fibrin and collagen. 
A few hours after the injury, the epicardium is activated, and epicardial cells undergoing EMT are able to proliferate and migrate to the injury area. Moreover, Choi et al. showed that FGFs stimulate epicardial cell activation and EMT, together with neovascularization, during the zebrafish heart regeneration process (51)(52)(53). In the last decade, different authors have demonstrated a role of the epicardium, the external epithelial layer of the heart, in myocardial growth through the secretion of soluble growth factors (54). When activated, the epicardium secretes different molecules with the ability to regulate heart regeneration. The epicardium has been proposed to stimulate cardiac cell proliferation both in embryonic heart development and in adult heart regeneration. The epicardium also stimulates ECM components able to maintain the integrity of cardiac tissue and its electrophysiological properties (55). Huang et al. reported that in zebrafish epicardial IGF signaling is essential for cardiomyocyte proliferation in heart regeneration, suggesting that the epicardium could regulate the developmental gene expression profile of adult cardiomyocytes during post-injury remodeling of the heart (56) (Figure 2). These data open a new scenario for patients after MI, in which the identification of pro-regenerative epicardial cells by specific biomarkers will support genetic methods able to precisely manipulate only the most appropriate cells after cardiac injury (55). The majority of these factors activate the Ras/MAPK pathway, which is controlled by an ERK phosphatase named feedback attenuator dual specificity phosphatase 6 (Dusp6). Missinato et al. showed that suppressing Dusp6 function, either with small molecules such as BCI and BCI215 or by gene inactivation, enhances zebrafish heart regeneration within 4-7 dpi but not beyond 12 dpi (60). 
Importantly, this effect was observed after cardiac amputation but not in uninjured hearts, implying that the effects of these compounds are injury-dependent. It will be interesting to determine whether BCI and BCI215 could have the same proliferation-enhancing effect on cardiomyocytes in other injury models, such as cryoinjury or cardiomyocyte ablation (45,(61)(62)(63), or in the neonatal mouse (64). The ability of mammals to repair the heart is very limited, but the identification of signaling pathways able to enhance cardiac proliferation could be used to promote mammalian repair ability.

FIGURE 2 | Visualization of heart regeneration in adult zebrafish. After injury the heart activates factors, such as epicardial IGF, able to activate the Ras/MAPK pathway. These activated epicardial cells drive cardiomyocyte regeneration after injury and are essential for cardiomyocyte proliferation in heart regeneration.

One molecule that is fundamental in regulating cardiomyocyte proliferation in both zebrafish and mice is Neuregulin 1 (Nrg1), a cell adhesion molecule essential for the normal development of the nervous system and heart. Recent findings show that Nrg1 can stimulate heart repair. Blocking Erbb2, the Nrg1 co-receptor, using the chemical inhibitor AG1478 restricts cardiomyocyte proliferation in zebrafish heart regeneration after injury. BCI could thus be used in mammals to enhance Nrg1 signaling. Unfortunately, different trials showed that Nrg1 therapies induce tumor formation (65). However, a recent study performed using mice as a model organism showed the absence of neoplastic growth after Nrg1 administration (66). Nevertheless, lower concentrations of Nrg1 together with chemical inhibition of Dusp6 could be used to stimulate cardiomyocyte proliferation for cardiac repair (60). 
Adult cardiomyocytes of mammals and lower vertebrates differ significantly in proliferative capacity, probably due to ontogenetic and/or phylogenetic factors. Understanding these factors could be useful for the development of novel therapeutic strategies that encourage cardiomyocyte proliferation. In recent years, different molecular pathways have been under investigation for their potential ability to influence cardiomyocyte proliferation both in mammals and in fish: Hippo/YAP/TAZ, Meis1, Wnt/β-catenin, IGF, ROS, TGFβ/activin, hypoxia, monocytes/macrophages, CDK9/P-TEFb, and miRNAs (67). The Hippo/Yap/Taz pathway seems to be important in enhancing cardiac regeneration; this pathway plays an important role both in heart development and in postnatal cardiomyocyte proliferation. IGF2 has been demonstrated to activate cardiomyocyte proliferation and is required for heart regeneration in zebrafish, whereas TGFβ/activin signaling seems to be a key regulator of cardiomyocyte proliferation and scar formation. Puente et al. demonstrated that in adult cardiomyocytes cell cycle arrest can be triggered by mitochondrial reactive oxygen species-mediated oxidative DNA damage, so hypoxia and redox signaling could also be regulators of cardiac renewal (68). This suggests that, to proliferate efficiently, the cells responsible for cardiomyocyte renewal, such as immature myocytes or a progenitor population, may need an environment with a lower concentration of oxygen. Finally, miRNAs also seem to be important for cardiomyocyte proliferation. Different authors have demonstrated that miRNAs are able to affect cardiomyocyte proliferation by inhibiting or activating the cell cycle. Interestingly, the role of some of these pathways in mammalian cardiomyocyte proliferation has been identified using zebrafish as a model for cardiac repair studies (67). 
Future studies into the zebrafish cardiac regenerative mechanisms could identify specific molecules able to regulate heart regeneration; these molecules may be used to understand how myocardial plasticity can be maintained during regeneration in order to promote cardiac repair after MI in humans as well. Moreover, the identification of factors that trigger heart regeneration may not be enough to completely understand how myocardial plasticity is regulated after heart injury or MI. Signaling pathway networks and epigenetic regulation represent intriguing factors to be analyzed in future studies to understand how myocardial plasticity is blocked and reactivated. In this perspective, zebrafish can also be used as a preclinical model for these analyses of cardiac plasticity, useful for identifying new therapeutic strategies to reduce the damage associated with MI.

AUTHOR CONTRIBUTIONS

The author confirms being the sole contributor of this work and has approved it for publication.

ACKNOWLEDGMENTS

The author thanks Dr. A. Vettori (Department of Biotechnology, University of Verona, Verona, Italy) and Prof. N. Tiso (Department of Biology, University of Padua, Padua, Italy) for critical comments and suggestions on the manuscript. GB was supported by the CARIPARO Foundation project SHoCD "Searching for disease modifiers in arrhythmogenic cardiomyopathy: focus on exercise and sexual hormones to chase novel targets to prevent sudden death" and by PRIN project (from the Italian Ministry of Education, University and Research) 20173ZWACS "Molecular and cellular dissection of inflammation and tissue repair in Arrhythmogenic Cardiomyopathy".
Concentration of antibodies against Porphyromonas gingivalis is increased before the onset of symptoms of rheumatoid arthritis

Background
The periodontal pathogen Porphyromonas gingivalis is hypothesized to be important in rheumatoid arthritis (RA) aetiology by inducing production of anti-citrullinated protein antibodies (ACPA). We have shown that ACPA precede RA onset by years, and that anti-P. gingivalis antibody levels are elevated in RA patients. The aim of this study was to investigate whether anti-P. gingivalis antibodies pre-date symptom onset and ACPA production.

Methods
A case–control study (251 cases, 198 controls) was performed within the Biobank of Northern Sweden. Cases had donated blood samples (n = 422) before the onset of RA symptoms by 5.2 (6.2) years (median (interquartile range)). Blood was also collected from 192 RA patients following diagnosis. Antibodies against P. gingivalis virulence factor arginine gingipainB (RgpB), and a citrullinated peptide (CPP3) derived from the P. gingivalis peptidylarginine deiminase enzyme, were analysed by ELISA.

Results
Anti-RgpB IgG levels were significantly increased in pre-symptomatic individuals (mean ± SEM; 152.7 ± 14.8 AU/ml) and in RA patients (114.4 ± 16.9 AU/ml), compared with controls (p < 0.001). Anti-CPP3 antibodies were detected in 5 % of pre-symptomatic individuals and in 8 % of RA patients, with elevated levels in both subsets (4.33 ± 0.59 and 9.29 ± 1.81 AU/ml, respectively) compared with controls (p < 0.001). Anti-CPP3 antibodies followed the ACPA response, with increasing concentrations over time, whilst anti-RgpB antibodies were elevated and stable in the pre-symptomatic individuals, with a trend towards lower levels after RA diagnosis.

Conclusions
Anti-P. gingivalis antibody concentrations were significantly increased in RA patients compared with controls, and were detectable years before onset of symptoms of RA, supporting an aetiological role for P. gingivalis in the development of RA. 
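The reported "trend towards lower levels after RA diagnosis" can be sanity-checked from the summary statistics above with a two-sample z statistic built from the group means and standard errors (an illustrative calculation of ours, not an analysis reported in the study):

```python
import math

def z_from_summary(mean1, sem1, mean2, sem2):
    """Two-sample z statistic from group means and standard errors of the mean."""
    return (mean1 - mean2) / math.sqrt(sem1**2 + sem2**2)

# Anti-RgpB IgG (AU/ml): pre-symptomatic 152.7 +/- 14.8 vs RA patients 114.4 +/- 16.9
z = z_from_summary(152.7, 14.8, 114.4, 16.9)
print(round(z, 2))  # ~1.7: consistent with a trend rather than a significant drop
```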
Electronic supplementary material The online version of this article (doi:10.1186/s13075-016-1100-4) contains supplementary material, which is available to authorized users.

Background
Rheumatoid arthritis (RA), a complex chronic inflammatory disease with a worldwide prevalence of 0.5-1.0 % [1], is characterized by production of anti-citrullinated protein/peptide antibodies (ACPA) in the majority of patients and by persistent inflammation in the synovial tissue of the joints, leading to destruction of cartilage and bone [2][3][4]. The aetiology of RA remains unknown, although a complex interplay exists between genetic and environmental factors [5,6]. Chronic periodontitis (PD), the world's commonest inflammatory disease, often resulting in destruction of alveolar bone and tooth loss, has been suggested as an environmental determinant for the occurrence and severity of RA [7][8][9]. Elucidation of the potential aetiological link between PD and RA has progressed [8,10,11], and a number of studies have identified similarities between these two diseases that may explain the epidemiological association. Both PD and RA display systemic markers of inflammation (e.g. C-reactive protein and pro-inflammatory cytokines) [6,12,13], and an association with HLA-DRB1 alleles [14] and smoking [15,16] has been described for both diseases. 
Furthermore, citrullinated proteins have been detected in both rheumatoid joints and inflamed gingival tissue, as well as in other tissues in relation to inflammatory conditions [17][18][19]. However, an association between PD and established RA could not be confirmed in one of our recent publications [20]. Moreover, data from one of our other studies suggest that the oral pathogen Porphyromonas gingivalis, rather than PD, may be linked to the development of RA [21]. P. gingivalis is a common periodontal pathogen associated with PD [22,23], and is the only prokaryote known to express an endogenous peptidylarginine deiminase enzyme (P.PAD), a virulence factor capable of citrullinating human and bacterial proteins, including autocitrullination [24][25][26]. P.PAD interacts with another major virulence factor, arginine gingipainB (RgpB), an arginine-specific extra-cellular protease expressed on the surface of the bacterial outer membrane [24,25]. RgpB is essential for P. gingivalis to citrullinate peptides; that is, only after degradation by RgpB can P.PAD convert peptidylarginine into peptidylcitrulline [24,26]. It is hypothesized that citrullination by P. gingivalis causes a chronic exposure of citrullinated peptides in the inflamed periodontium, possibly leading to a break of immune tolerance in genetically susceptible individuals and subsequent production of ACPA. Epitope spreading, induced by molecular mimicry and cross-reactivity with citrullinated epitopes exposed in the joint, could lead to progression of chronic RA [27][28][29]. ACPA appear many years before the onset of RA [30,31], suggesting that the initial immune dysregulation occurs long before symptoms of RA develop, outside the joints, potentially at mucosal sites such as the gingival tissue [17,27,32]. 
Citrullinated P.PAD has been demonstrated to be a target of the ACPA response [25] and we recently demonstrated that elevated anti-RgpB antibody levels have a stronger association with RA than smoking [21], identifying P. gingivalis as a potential mechanistic link between PD and RA. Importantly, in the same study we could also show that anti-RgpB IgG levels were significantly increased in sera from PD patients compared with periodontally healthy individuals, supporting anti-RgpB IgG as a surrogate marker for oral infection by P. gingivalis. This study investigated whether raised anti-P. gingivalis antibody levels precede the onset of symptoms of RA and the ACPA response, in order to elucidate the role of P. gingivalis as a potential trigger of autoimmunity and the development of RA. Importantly, because we have no data on periodontal status in our cohorts, we have focused only on P. gingivalis, not on PD, in the present study. Consequently, we analysed the antibody response to RgpB and to CPP3, a citrullinated peptide derived from P.PAD, in blood samples collected prior to the onset of symptoms and at the diagnosis of RA. Identifying a potential mechanism capable of breaking immunological tolerance at the earliest stage of disease would provide an insight into the aetiopathology of RA, and could indicate new clinical therapies and interventions.

Methods
Study population
A case-control study was performed within the Medical Biobank of Northern Sweden and the Maternity cohort. The cohorts within the Medical Biobank are population-based health surveys, and all inhabitants of Västerbotten county are continuously invited to participate. Information concerning recruitment, blood sampling and storage conditions has been described previously [30]. The Maternity cohort is based on blood samples collected from pregnant women being screened for immunity to rubella [30].
To identify individuals having donated blood samples prior to onset of symptoms of RA, the register of patients fulfilling the 1987 classification criteria for RA [33] was linked to the registers of the Medical Biobank and the Maternity cohort. The study included 251 pre-symptomatic individuals (58 men/193 women, mean age ± SD 50.5 ± 11.9 years) who had donated a total of 422 plasma/serum samples (375 from the Biobank cohorts and 47 from the Maternity cohort) at different time points pre-dating onset of RA symptoms. From the same cohorts, 198 population-based controls (31 men/167 women, mean age ± SD 49.3 ± 14.8 years; 173 from the Biobank cohorts and 25 from the Maternity cohort), matched for sex and age and with sufficient plasma/serum volumes, were identified. The median time pre-dating the onset of RA symptoms was 5.2 years (interquartile range 6.2). At least one sample was identified for each of the 251 pre-symptomatic individuals; two samples were identified for 92 individuals (36.6 %), three samples for 46 individuals (18.3 %), four samples for 22 individuals (8.8 %), five samples for nine individuals (3.6 %) and six samples for two individuals (0.8 %). At the time of RA diagnosis (≤12 months of symptoms), 192 patients (144 females/48 males) donated blood samples at the early arthritis clinic. One hundred and fifty-three of these early RA patients were identified within the group of pre-symptomatic individuals because they had donated blood samples before onset of symptoms (median 5.6 years, IQR 6.3). Data detailing periodontal status or treatment were not available for these cases. The participants gave their written informed consent and the Regional Ethics Committee at Umeå University approved the study. Smoking status was defined as ever smoker (including former and current smokers), current smoker or never smoker. Because information on being either a former or current smoker was lacking for a number of cases, we present data for both groups.
Of the pre-symptomatic individuals, 64 % (160/250) were ever smokers and 25.6 % (55/215) were current smokers, while 49.2 % (89/181) of the controls were ever smokers and 13.1 % (18/137) were current smokers. HLA-DRB1 shared epitope (SE) alleles (0101/0401/0404/0405/0408) and the PTPN22 1858C/T polymorphism were genotyped as described previously [34,35]. Demographic data for the three groups are presented in Table 1.

Antibody analysis
Using in-house ELISAs as described previously [21,25], all plasma/serum samples were analysed blinded for antibodies against the RgpB protein purified from P. gingivalis [36], a synthetic cyclic citrullinated peptide (CPP3) derived from P.PAD, and the corresponding arginine-containing control peptide (RPP3) (Innovagen AB, Lund, Sweden). Serial dilutions of antibody-positive serum pools (anti-RgpB and anti-CPP3 IgG, respectively) were included on all ELISA plates to generate standard curves, in order to compare antibody concentrations between cases analysed on different ELISA plates. All antibody levels are presented as arbitrary units/ml (AU/ml). Treating anti-CPP3 IgG as a "classical" ACPA, a cut-off value for antibody positivity was defined using receiver operating characteristic (ROC) curves, based on the anti-CPP3 IgG responses in RA patients and controls. The cut-off value for positivity was set at >29.19 AU/ml, giving a specificity of 96 %. No cut-off value could be assigned for the anti-RgpB IgG response, due to a lack of data regarding the PD status of the study subjects.

Statistical analysis
Continuous data were compared using the non-parametric Mann-Whitney U-test/Wilcoxon signed rank test for two groups, and the Kruskal-Wallis test for several groups. The chi-square test or Fisher's exact test was used when analysing categorical data. Correlation analysis was performed using Spearman's rank correlation.
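A minimal sketch of the ROC-style cut-off selection described above: pick the smallest threshold at which the fraction of controls classified negative (the specificity) reaches the target. The input values below are synthetic placeholders, not study data; the study's actual cut-off (>29.19 AU/ml at 96 % specificity) came from the real patient and control readings.

```python
def cutoff_for_specificity(controls, patients, target_spec=0.96):
    """Smallest threshold t (positivity defined as value > t) at which
    the fraction of controls at or below t reaches target_spec.
    Choosing the smallest such t keeps sensitivity as high as possible."""
    candidates = sorted(set(controls) | set(patients))
    for t in candidates:
        specificity = sum(c <= t for c in controls) / len(controls)
        if specificity >= target_spec:
            return t
    return None  # target specificity unreachable with these data

# Placeholder AU/ml readings: 100 controls, 10 patients
controls = list(range(1, 101))
patients = [150] * 10
print(cutoff_for_specificity(controls, patients))  # -> 96
```

With real data one would typically also inspect the full ROC curve (e.g. with scikit-learn's `sklearn.metrics.roc_curve`) before fixing a cut-off.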
Because of the lack of a cut-off value for the anti-RgpB antibody response, we stratified the anti-RgpB antibody concentrations into above or below the 75th percentile in the analyses. Logistic regression analysis was used to identify associations between antibodies and risk factors in the development of RA, adjusted for age and sex. Associations were presented as odds ratios (ORs) with 95 % confidence intervals (CIs). Standard methods were used for analysing interactions [38]. All adjustments were based on previously performed studies and hypotheses. The statistical analyses were performed using SPSS 23.0 software (Chicago, IL, USA).

Results
The anti-RgpB antibody response and anti-CPP3 antibody response in pre-symptomatic individuals, RA patients and controls
The concentration of anti-RgpB IgG was significantly increased in RA patients (mean ± SEM 114.4 ± 16.9 AU/ml) and in particular in pre-symptomatic individuals, calculated using all 422 samples (152.7 ± 14.8 AU/ml) or one sample per individual (when more than one sample was available, the sample closest to symptom onset was chosen) (133.4 ± 16.2 AU/ml; data not shown), compared with control subjects (82.2 ± 12.1 AU/ml, p < 0.001 for all three groups) (Fig. 1a). The anti-CPP3 IgG levels were significantly increased in RA patients (mean ± SEM 9.29 ± 1.81 AU/ml) compared with pre-symptomatic individuals, both when all samples were analysed (4.33 ± 0.59 AU/ml) (Fig. 1b) and when only the sample closest to disease onset was analysed (5.56 ± 0.89 AU/ml; data not shown); both comparisons were analysed at group level (p < 0.001). Antibody concentrations in both RA patients and pre-symptomatic individuals were also significantly increased compared with controls (2.36 ± 0.58 AU/ml, p < 0.001) (Fig. 1b).
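The odds ratios with 95 % confidence intervals used throughout were obtained from logistic regression in SPSS, adjusted for age and sex. Conceptually, an unadjusted OR with a Wald-type CI reduces to the 2×2-table computation sketched below; the counts here are hypothetical, chosen only to illustrate the arithmetic, not taken from the study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with Wald 95 % CI for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts, for illustration only
or_, lo, hi = odds_ratio_ci(20, 10, 10, 20)
print(f"OR = {or_:.2f} (95 % CI {lo:.2f}-{hi:.2f})")  # -> OR = 4.00 (95 % CI 1.37-11.70)
```

A regression-based OR with covariate adjustment (as in the paper) would instead exponentiate the fitted coefficient and its confidence bounds.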
The frequency of anti-CPP3 antibodies was 4.5 % in pre-symptomatic individuals when calculated for all 422 samples, or 6.8 % when calculated for the 251 individuals who were ever positive, and 7.8 % in RA patients (data not shown). Less than 2 % of all individuals showed reactivity towards the arginine-containing control peptide RPP3 (data not shown). Anti-RgpB IgG levels increased over time until symptom onset of RA, with a significant increase observed when analysing individuals with four consecutive pre-dating samples (p < 0.05; data not shown). There was a trend towards lower levels following diagnosis of RA (p = 0.088) (Fig. 2a). Similar to the anti-RgpB IgG response, the levels of anti-CPP3 IgG were found to increase constantly over the pre-dating time (Fig. 2a). However, no relationship was found between anti-CPP3 antibody positivity and the levels of anti-RgpB IgG (data not shown). The mean concentration of anti-RgpB antibodies in pre-symptomatic individuals (n = 64) exceeded that of the controls already 12 years before symptom onset, whilst the corresponding time for anti-CPP3 (n = 126) was 8 years before symptom onset. However, anti-CPP3 antibody concentrations above the mean value of the controls were detected in 11 out of 64 (17.2 %) pre-symptomatic individuals more than 12 years before symptom onset.

The anti-CPP3 and anti-RgpB antibody response in relation to the ACPA response
The accumulated frequency of anti-CPP3 antibody positivity increased constantly over time until symptom onset (Fig. 2b). This pattern mimics that of the "classical" ACPA response (defined as antibodies against CCP2, CEP-1, cFibβ36-52 and cfilaggrin) from the same time points, although at a lower frequency (Fig. 2b). The majority of anti-CPP3 IgG-positive RA patients (11 positive/15 analysed) and also pre-symptomatic individuals (11 positive/17 analysed) were confined to the anti-CCP2-positive subset (Fig. 3a, b, and Additional file 1: Table S1).
In pre-symptomatic individuals, anti-CPP3 positivity was associated with positivity for anti-cFibβ36-52 antibodies (OR = 3.22; 95 % CI 1.24-8.36, p < 0.05), and anti-CPP3 antibody levels correlated with the concentrations of both anti-CCP2 (r s = 0.14, p < 0.01) and anti-CEP-1 antibodies (r s = 0.11, p < 0.05). The median pre-dating time for anti-CPP3 antibody positivity was closer to onset (−3.42 years) compared with anti-CCP2 (−4.56 years), anti-cFibβ36-52 (−5.17 years) and anti-CEP-1 (−3.49 years) antibody positivity. There was also a significant correlation between anti-RgpB IgG levels and anti-CEP-1 antibodies (r s = 0.10, p < 0.05) in pre-symptomatic individuals (data not shown). No significant relationships were found between anti-RgpB or anti-CPP3 antibodies, respectively, and RF in the pre-symptomatic individuals or in RA patients (data not shown).

Anti-RgpB and anti-CPP3 antibody responses in relation to cigarette smoking and RA risk genes
No associations were detected between anti-RgpB antibody levels and ever smoking in pre-symptomatic individuals, whether calculated in cases for whom several measurements were available or for the highest values of anti-RgpB antibodies (data not shown). In RA patients, both ever smoking and current smoking were associated with significantly lower levels of anti-RgpB antibodies (p = 0.012 and p = 0.019, respectively). No associations were identified between smoking and anti-CPP3 IgG positivity in pre-symptomatic individuals or in RA patients (data not shown). Moreover, no associations were observed between carriage of HLA-DRB1 SE or the PTPN22 T-variant and levels of anti-RgpB or anti-CPP3 IgG positivity in pre-symptomatic individuals. In RA, HLA-DRB1 SE was unrelated to the antibodies, while the PTPN22 T-variant was associated with lower levels of anti-RgpB antibodies (p < 0.05; data not shown). However, caution should be taken when interpreting these data due to the low statistical power.
Anti-RgpB and anti-CPP3 antibodies in relation to the development of RA
An association with the development of RA was identified for anti-RgpB antibody levels stratified as above vs below the 75th percentile in pre-symptomatic individuals (OR = 2.31; 95 % CI 1.41-3.78, p < 0.001) (Table 2). Analyses including ever smoking or carriage of HLA-DRB1 SE or the PTPN22 T-variant did not affect the OR (Table 2). Adjustments for age and sex in each of these analyses did not change the ORs (data not shown). Levels of anti-RgpB antibodies were not associated with having RA (OR = 1.20; 95 % CI 0.75-1.92, p = 0.44). Adjustments for smoking, HLA-DRB1 SE or the PTPN22 T-variant did not affect the association between anti-RgpB antibodies and RA (Table 2), and neither did further adjustments for sex and age (data not shown). Anti-CPP3 antibodies were not significantly associated with the development of RA in pre-symptomatic individuals, irrespective of analyses including smoking, HLA-DRB1 SE or the PTPN22 T-variant (data not shown) or further adjustments for sex and age (data not shown). However, anti-CPP3 IgG was associated with RA, but only when adjusting for age, sex and HLA-DRB1 SE (OR = 3.12; 95 % CI 1.06-9.19, p = 0.039) or the PTPN22 T-variant (OR = 2.96; 95 % CI 1.02-8.57, p = 0.045) (data not shown). Adjustment for smoking, in addition to sex and age, was non-significant (OR = 2.66; 95 % CI 0.97-7.26, p = 0.056).

Anti-CPP3 antibodies in combination with smoking or risk genes in the development of RA
When combining the major genetic risk factor for RA (HLA-DRB1 SE) with anti-CPP3 IgG positivity, an increased risk was observed for being pre-symptomatic (OR = 6.74; 95 % CI 1.43-31.81) compared with HLA-DRB1 SE-positive/anti-CPP3-negative individuals (OR = 3.55; 95 % CI 2.32-5.42), although no significant interaction between the two factors was found (Table 3).
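The interaction analyses referenced here (RERI, AP and SI, defined in the Table 3 footnote) follow the standard additive-interaction formulas applied to the ORs for each exposure combination, relative to the doubly unexposed group. A sketch using the ORs quoted above for SE-positive/anti-CPP3-positive (6.74) and SE-positive/anti-CPP3-negative (3.55), plus a hypothetical OR of 1.5 for SE-negative/anti-CPP3-positive (that cell's OR is not quoted in the text):

```python
def additive_interaction(or11, or10, or01):
    """Rothman's additive-interaction measures from combined-exposure ORs
    (ORs stand in for relative risks, reasonable when the outcome is rare):
    or11 = both exposures, or10 and or01 = one exposure each,
    all relative to the doubly unexposed reference group."""
    reri = or11 - or10 - or01 + 1                 # relative excess risk due to interaction
    ap = reri / or11                              # attributable proportion due to interaction
    si = (or11 - 1) / ((or10 - 1) + (or01 - 1))   # synergy index
    return reri, ap, si

# or01 = 1.5 is hypothetical; 6.74 and 3.55 are the ORs quoted in the text
reri, ap, si = additive_interaction(6.74, 3.55, 1.5)
print(f"RERI={reri:.2f}, AP={ap:.2f}, SI={si:.2f}")  # -> RERI=2.69, AP=0.40, SI=1.88
```

A RERI of 0 (AP = 0, SI = 1) would indicate no interaction on the additive scale; significance testing of these measures requires confidence intervals, which this sketch omits.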
Smoking in combination with anti-CPP3 antibody positivity showed no association with being pre-symptomatic (OR = 2.83; 95 % CI 0.86-9.4) (Table 3). In RA patients, smoking combined with anti-CPP3 IgG increased the OR significantly, from 1.73 (95 % CI 1.10-2.72) in anti-CPP3 IgG-negative ever smokers to 3.61 (95 % CI 1.05-12.44) in anti-CPP3 IgG-positive ever smokers (Table 4). HLA-DRB1 SE also yielded a significantly higher OR in combination with anti-CPP3 IgG positivity than in combination with anti-CPP3 negativity (OR = 8.80; 95 % CI 1.80-43.03 vs OR = 3.33; 95 % CI 2.11-5.23) (Table 4). However, no significant interactions were observed between smoking and anti-CPP3 IgG, or between SE and anti-CPP3 IgG. Carriage of the PTPN22 T-variant in combination with anti-CPP3 antibody positivity revealed no significant association with the development of RA in patients or pre-symptomatic individuals (data not shown). Current smoking yielded similar, although slightly weaker, associations compared with ever smoking, which could be due to there being fewer current smokers than ever smokers (a group that also includes former smokers).

Discussion
In the present study we investigated the role of the oral pathogen P. gingivalis in the development of RA, by focusing on the anti-P. gingivalis antibody response in RA patients prior to the onset of symptoms. Our recent data show elevated antibody levels against the potent P. gingivalis virulence factor arginine gingipainB in patients with RA, especially in ACPA-positive RA [21].
Table 3. Association of smoking or HLA-DRB1 SE and anti-CPP3 IgG in pre-symptomatic individuals compared with controls, adjusted for age and sex. HLA-DRB1 shared epitope (SE) defined as 0101/0401/0404/0405/0408. OR, odds ratio; CI, confidence interval; RERI, relative excess risk due to interaction; AP, attributable proportion due to interaction; SI, synergy index; MI, multiplicative interaction.

We have also shown that these antibodies are clearly elevated in patients with PD, compared with periodontally healthy individuals, demonstrating that anti-RgpB IgG probably represents a good surrogate marker for P. gingivalis infection, which has been associated with PD [22,23]. With the present study, we report increased concentrations of these antibodies in a subset of individuals years before the onset of symptoms of RA. Consistent with Quirke et al.'s data in RA [25], concentrations of anti-CPP3 antibodies, directed against a synthetic cyclic citrullinated peptide derived from another P. gingivalis-specific virulence factor, P.PAD, were also increased in a subset of both pre-symptomatic individuals and RA patients, compared with controls. In line with our previous findings in RA patients, the association between the development of RA in pre-symptomatic individuals and the anti-RgpB antibody response was not dependent on smoking habits or on presence of HLA-DRB1 SE or the PTPN22 T-variant [21]. Our data on lower anti-RgpB IgG levels in RA patients who were ever smokers or current smokers compared with non-smokers were also in line with a number of previous reports showing lower anti-P. gingivalis antibody levels in smokers compared with non-smokers [21,39,40]. One explanation for the observed trend of lower anti-RgpB antibody levels in RA patients compared with pre-symptomatic individuals (Figs. 1a and 2a) could therefore potentially be the higher frequency of smokers among RA patients (67.2 %) compared with pre-symptomatic individuals (64 %).
Furthermore, as we recently showed for the anti-RgpB IgG response in RA, the HLA-DRB1 SE in combination with anti-CPP3 IgG reveals a stronger association with RA and with being pre-symptomatic than HLA-DRB1 SE alone. The same effect occurred when combining smoking with anti-CPP3 IgG positivity, although only in RA patients, not in pre-symptomatic individuals. In this study, stratification of data into ACPA-positive and ACPA-negative sub-groups was not possible due to the limited number of individuals. However, a weak correlation between the concentration of anti-RgpB antibodies and that of the "classical" ACPA, measured as anti-CEP-1 antibodies, could be observed. Notably, ACPA (anti-CEP-1, anti-cFibβ36-52 and anti-cfilaggrin antibodies) were analysed by the ISAC multiplex assay, which is only a semiquantitative method [37], and thus are not completely comparable with results from the ELISA used for measuring anti-RgpB IgG. No relationship was detected between anti-RgpB and anti-CPP3 antibodies. This was unexpected considering the origin of both RgpB and CPP3 as P. gingivalis-specific antigens, and our interpretation of these two antibodies as surrogate markers for a P. gingivalis infection. Although the concentrations of anti-CPP3 antibodies, like anti-RgpB antibodies, were significantly increased in both RA patients and pre-symptomatic individuals compared with controls, the frequency of anti-CPP3 antibodies was only significantly increased in RA patients. Moreover, the anti-RgpB antibody response was elevated (compared with controls) several years earlier (12 years) than the anti-CPP3 antibody response (8 years). 
The anti-CPP3 antibody response was similar to the "classical" ACPA response; that is, there was no reactivity to the arginine-containing control peptide RPP3, the majority of anti-CPP3 antibody-positive cases were also anti-CCP2 antibody positive, and the anti-CPP3 antibody response (both concentrations and the accumulated frequency of positive samples) increased gradually during the pre-dating time until symptom onset [37]. Taken together, these data may suggest that the anti-CPP3 antibody response, rather than being P. gingivalis specific, simply belongs to the generic ACPA response, or rather represents cross-reactivity with another citrullinated antigen. The low frequency of these antibodies (8 % in RA) could point to this. Moreover, in-vivo auto-citrullination by P.PAD has been debated [39], and the CPP3/RPP3 peptide, with its internal rather than C-terminal citrulline residue, may not represent an in-vivo-generated antigen. Still, the CPP3/RPP3 peptide sequence is bacterially derived and does not correspond to any human protein sequence. Results recently published by Fisher et al. [41] conflict with ours in that they were unable to identify associations between anti-RgpB or anti-CPP3 IgG and pre-RA. The two studies differ in several aspects that may explain the discrepant results: their study was based on a smaller study population (n = 103) than ours (n = 251, with 422 samples); their participants came from a number of different southern European countries, whilst our study subjects were recruited from a geographically defined area in northern Sweden; and not all of the pre-symptomatic individuals in Fisher et al.'s study were confirmed to develop RA, as they were in our study [41]. Additionally, the periodontal microbiota has been shown to vary between countries [42] and bacterial strain diversity has been described previously for P. gingivalis [43]. Altogether, these differences could contribute to the variability in the results of these studies.
Supporting our results, Mikuls et al. [44] reported increased concentrations of anti-P. gingivalis antibodies in high-risk individuals compared with controls. Also in accordance with our data is a study by de Smit et al. [45], in which elevated anti-P. gingivalis antibody levels were observed in arthralgia patients with RF or ACPA positivity compared with controls. However, anti-P. gingivalis antibody levels in de Smit et al.'s study were not higher in arthralgia individuals who developed RA compared with those who did not; this was not possible to evaluate in our study [45]. We believe this to be the largest population-based study to date analysing the anti-P. gingivalis antibody response in individuals before the onset of symptoms of RA. Our study has limitations: the samples analysed were from different population surveys and were not collected on a regular basis; no information regarding periodontal status or treatment was available, hence a cut-off value for the anti-RgpB antibody response could not be set; and it was not possible to investigate the relationship between PD and the development of RA. Also, data on the presence of P. gingivalis DNA were not available, so analysis of the anti-RgpB IgG and anti-CPP3 IgG responses in relation to the presence of the bacteria was not possible. However, as in our previous study [21], the anti-RgpB IgG response was interpreted as a surrogate marker for a P. gingivalis infection, past or present, whilst our data suggest that the anti-CPP3 antibody response, which follows the "classical" ACPA response, should be considered ACPA specific rather than a P. gingivalis-specific antibody.

Conclusions
Our data demonstrate that antibodies against P. gingivalis are significantly increased in patients with RA compared with controls, and that these antibodies are detectable years before the onset of symptoms, supporting an aetiological role for P. gingivalis in the development of RA.
Studies on larger cohorts with samples collected on a regular basis are needed for a deeper understanding of the relationship between P. gingivalis, anti-P. gingivalis antibodies, ACPA and the development of RA.

Additional file
Additional file 1: Table S1. Frequency of ever positivity for anti-CPP3 antibodies in relation to positive/negative anti-CCP2 antibodies or ACPA. (DOCX 15 kb)
Diabetic Kinome Inhibitors—A New Opportunity for β-Cells Restoration

Diabetes, and several diseases related to diabetes, including cancer, cardiovascular diseases and neurological disorders, represent one of the major ongoing threats to human life, becoming a true pandemic of the 21st century. Current treatment strategies for diabetes mainly involve promoting β-cell differentiation, and one of the most widely studied targets for β-cell regeneration is DYRK1A kinase, a member of the DYRK family. DYRK1A has been characterized as a key regulator of cell growth, differentiation, and signal transduction in various organisms, while further roles and substrates are the subjects of extensive investigation. The targets of interest in this review are implicated in the regulation of β-cells through DYRK1A inhibition, which drives their transition from highly inefficient and death-prone populations into efficient and sufficient precursors of islet regeneration. Increasing evidence for the role of DYRK1A in diabetes progression and β-cell proliferation expands the potential for pharmaceutical applications of DYRK1A inhibitors. The variety of new compounds and binding modes, determined by crystal structure and in vitro studies, may lead to new strategies for diabetes treatment. This review provides recent insights into the initial self-activation of DYRK1A by tyrosine autophosphorylation. Moreover, the importance of developing novel DYRK1A inhibitors and their implications for the treatment of diabetes are thoroughly discussed. The evolving understanding of DYRK kinase structure and function and emerging high-throughput screening technologies are also described. As a final point of this work, we intend to promote the term "diabetic kinome" as part of scientific terminology, to emphasize the role of the synergistic action of multiple kinases in governing the molecular processes that underlie this particular group of diseases.
Introduction
The diabetic kinome consists of protein kinases that control and regulate protein functions involved in diabetes. A range of experimental evidence indicates that pharmacological modulation of the diabetic kinome is inextricably linked to changes in metabolic homeostasis. Numerous cell types and signaling pathways in diabetes have been identified (i.e., PI3K-AKT/PKB, ERK/MAPK, growth factor, and hormone signaling pathways) [1][2][3]. Among others, the activity of selected kinases, i.e., hexokinase, pyruvate kinase M2, ketohexokinase isoform A, phosphoglycerate kinase 1, and nucleoside diphosphate kinases 1 and 2 (NME1/2), contributes to altered metabolic homeostasis [4,5]. Insulin regulates glucose homeostasis by modulating protein kinase activity in target tissues. The impairment of the kinome response to insulin leads to insulin resistance. Thus, many kinases, including (i) Jun N-terminal kinase (JNK), (ii) I kappa beta kinase (IKK), (iii) protein kinase C (PKC) theta, (iv) glycogen synthase kinase 3 (GSK3), (v) S6 kinase-1 (S6K1), and (vi) 5'AMP-activated protein kinase (AMPK), are critical factors that regulate insulin-dependent processes. Moreover, many of them are also related to the pathogenesis of diabetes [6]. More recently, a role for dual-specificity tyrosine phosphorylation-regulated kinase 1A (DYRK1A) was identified in β-cell function. Due to the large amount of data relating mutations, overexpression, and dysregulation of protein kinases to the pathogenesis of many diseases, this family of enzymes has become one of the most important drug targets of the past 20 years [7,8]. A milestone was the FDA approval (in 2001) of the first kinase inhibitor, imatinib, also known under the trade name Gleevec® (Novartis, Basel, Switzerland).
It is an oral chemotherapy drug used to treat leukemia and gastrointestinal stromal tumors, whose mechanism of action involves potent inhibition of the constitutively active BCR-ABL fusion protein [7][8][9]. Imatinib is also being studied in phase II clinical trials to treat type 1 diabetes (T1D) (NCT01781975) [10]. This study aimed to investigate the possibility of short-term therapy with imatinib to induce tolerance and long-term remission of T1D [10]. The development of small-molecule kinase inhibitors has emerged as one of the most extensively pursued areas of drug discovery. In recent years, significant progress has been made in the battle against diabetes mellitus (DM) in understanding its biological mechanisms. However, despite a large amount of data, the search for inhibitors of diabetes-relevant kinases, together with their binding modes and structural features, is still required. In particular, there are still unexplored gaps in the knowledge of how protein kinases of the DYRK family affect apoptosis, cell cycle regulation, cellular proliferation, and insulin resistance in diabetes. DYRK1A has been confirmed as a regulator of regenerative pathways essential for proper pancreatic β-cell function in humans. Inhibitors of this kinase have been extensively studied to treat various types of diabetes [11,12]. Harmine and its derivatives are among the most frequently studied, and still the most potent, therapeutics of this group of compounds [13][14][15][16]. Recently published review papers on diabetes and CNS disorders highlight the importance of DYRK inhibitors in the therapy of cancer and neurological disorders [17][18][19] and suggest directions in the design and development of small-molecule inhibitors (Figure 1) [12,[20][21][22][23]. This review describes recent reports on the initial self-activation of DYRK1A by tyrosine autophosphorylation, the development of DYRK1A inhibitors, and their importance in the treatment of diabetes mellitus (DM).
In addition, advances in understanding the structure and functions of DYRK kinases and emerging HTS technologies are described. Modulating the activity of DYRK1A kinase, which is significantly involved in diabetes, with small-molecule inhibitors could be an attractive therapeutic strategy to tackle diabetes. The global diabetes prevalence was estimated at 9.3% (463 million people) in 2019, and is projected to rise to 10.2% (578 million) by 2030 and 10.9% (700 million) by 2045 [24]. Thus, diabetes has become a challenging health problem affecting the global population, and the prevalence is higher in developing countries [24]. Diabetes mellitus (DM) is caused by chronic hyperglycemia, resulting either from the failure of impaired β-cells in the islets of Langerhans, distributed throughout the endocrine pancreas, to produce appropriate insulin levels, or from ineffective insulin usage [25]. It is also associated with vascular complications, mainly diabetic neuropathy (DN), with an incidence of about 50% [26]. DN progresses with decreasing nerve functionality and a high risk of pain, trophic changes, and autonomic dysfunction. Diabetes may also lead to ketoacidosis, retinopathy, nephropathy, and skin complications. Moreover, diabetes dramatically increases the risk of various cardiovascular problems, including coronary artery disease with chest pain (angina), heart attack, stroke, and narrowing of arteries (atherosclerosis) [27,28]. According to the latest classification, there are several types of diabetes: type 1 diabetes (T1D), type 2 diabetes (T2D), gestational diabetes, and other variants listed in Table 1. It should be emphasized that the pathogenesis of each form differs significantly. T1D is an autoimmune disorder caused by the T-cell-mediated destruction of the insulin-producing pancreatic β-cells. T2D is a consequence of impaired glucose tolerance and insulin resistance, with obesity and physical inactivity as the prominent risk factors.
In addition to these most common forms, there are also monogenic diabetes (for example, MODY and neonatal diabetes), diabetes secondary to diseases of the exocrine pancreas (for example, cystic fibrosis-related diabetes and pancreatic [or type 3c] diabetes), and drug-induced diabetes [29]. In type 1 diabetes, a significant reduction in pancreatic β-cells, resulting in insulin insufficiency and hyperglycemia, is observed. Type 2 diabetes is associated with insulin resistance, which causes the compensatory expansion of pancreatic β-cells and increases plasma insulin levels [30,31]. Finally, insufficient β-cell mass and insulin secretion also cause maturity-onset diabetes of the young and gestational diabetes [32]. Therefore, modern antidiabetic therapies are based on increasing functional pancreatic β-cell mass. This review briefly discusses only the most common forms of diabetes. The American Diabetes Association (ADA) position statement "Diagnosis and Classification of Diabetes Mellitus" provides a detailed classification of diabetes by etiology (Table 1) [33,34]. Table 1. Classification of diabetes mellitus by etiology [33].
• Lipoatrophic diabetes and others
Diseases of the exocrine pancreas:
• pancreatitis, trauma/pancreatectomy, neoplasia, cystic fibrosis, hemochromatosis, fibrocalculous pancreatopathy, and others
Endocrinopathies:
• acromegaly, Cushing's syndrome, glucagonoma, pheochromocytoma, hyperthyroidism, somatostatinoma, aldosteronoma
Drug- or chemical-induced:
• e.g., vacor, pentamidine, nicotinic acid, glucocorticoids, thyroid hormone, diazoxide, β-adrenergic agonists, thiazides, Dilantin, γ-IFN
Infections:
• congenital rubella, cytomegalovirus, and others
Uncommon forms of immune-mediated diabetes:
• stiff-man syndrome, anti-insulin receptor antibodies, and others
Other genetic syndromes sometimes associated with diabetes:
• Down syndrome, Klinefelter syndrome, Wolfram syndrome, Friedreich ataxia, Huntington chorea, Laurence-Moon-Biedl syndrome, myotonic dystrophy, porphyria, Prader-Willi syndrome, and others
Insulin Homeostasis and Diabetes Insulin plays a crucial role in many metabolic processes, including (i) facilitation of cellular glucose uptake, (ii) prevention of glucose release by the liver, (iii) activation of amino acid uptake by muscle cells, and (iv) reduction of the breakdown, conversion, and release of fats (Figure 2) [35]. In several tissues, such as the liver, muscle, and adipose tissue, insulin participates in glucose metabolism by stimulating glucose uptake and influencing both glycolysis and gluconeogenesis. Localized in the islets of Langerhans, pancreatic β-cells respond to blood glucose levels by releasing the proper amounts of insulin. Insulin affects the liver, muscles, brain, erythrocytes, and adipocytes. The loss of β-cells leads to insufficient insulin production, which increases blood glucose levels and eventually causes diabetes or insulin resistance. Adopted, modified, and re-drawn from [36].
Insulin stimulates glycogen synthesis by inhibiting glycogen synthase kinase, enhances protein production through mTOR activation, promotes fatty acid synthesis by activating acetyl-CoA carboxylase, and inhibits lipolysis. It also inhibits hormone-sensitive lipase and modulates gene transcription through the MAPK pathway or Akt-mediated phosphorylation of FOXO transcription factors [37]. The secretion of insulin from β-cells can be triggered either by somatotropin or by glucagon. The most important stimulant for insulin release is glucose: when blood glucose levels rise, insulin is released to restore balance. A deficiency in insulin's regulatory function may be caused by inadequate insulin secretion and/or reduced tissue response. It can also result from a complete inability of islet cells to produce insulin (T1D) or from a failure to produce enough insulin (Figure 3) [38][39][40]. Diabetes is a heterogeneous disease, but most cases correspond to type 1 or type 2 diabetes. Nevertheless, a considerable proportion of patients do not fit into this classification and are known to have hyperglycemia caused by a mutation in a single gene. Despite the rapid evolution of molecular diagnostic methods, many MODY cases may be misdiagnosed as type 1 or type 2 diabetes. Thus, in the following sections, we briefly characterize only the most common types of diabetes. Type I Diabetes (T1D) T1D, referred to as insulin-dependent diabetes or juvenile-onset diabetes, accounts for ca. 5-10% of cases. As already mentioned, it results from autoimmune destruction of the β-cells, accompanied by cellular invasion by both CD4+ and CD8+ T cells, leading to a decrease in β-cell mass [42,43]. Markers of β-cell immune destruction include autoantibodies to islet cells, insulin, glutamic acid decarboxylase (GAD65), and the tyrosine phosphatases IA-2 and IA-2β, which are present in ca. 90% of patients [34].
T1D has genetic predispositions; the human leukocyte antigen (HLA) complex, linked to the DQA and DQB genes, constitutes the most relevant susceptibility region [44,45]. It is also related to poorly defined environmental factors. A small percentage of T1D patients (<10%) display no evidence of an autoimmune response and are categorized as type 1B (idiopathic) diabetes [43]. Type 2 Diabetes (T2D) The most common form of diabetes is T2D (ca. 90-95% of cases), also named non-insulin-dependent diabetes or adult-onset diabetes. Although the etiologies of T2D are not fully explored, in this case (in contrast to T1D) autoimmune destruction of β-cells does not occur [25]. Lifestyle factors, including physical inactivity, a sedentary lifestyle, smoking, and frequent alcohol consumption, play an important role in developing T2D [3,31]. T2D is characterized by hyperinsulinemia, insulin resistance, and β-cell dysfunction, with up to 50% β-cell loss at the time of diagnosis. It leads to a decrease in glucose transport into liver, muscle, and fat cells. Recently, the involvement of impaired α-cell function has been recognized in the pathophysiology of T2D [46]. Consequently, glucagon and hepatic glucose levels that rise during fasting are not suppressed after a meal, due to inadequate insulin concentration, increased insulin resistance, and increased fat breakdown with hyperglycemia [32]. T2D contributes to a substantial increase in the risk of cardiovascular disease. Other mechanisms for developing hyperglycemia-induced micro- and macrovascular complications include endothelial dysfunction, advanced glycation end-product formation, hypercoagulability, increased platelet reactivity, and overexpression of sodium-glucose cotransporter-2 (SGLT-2) [47][48][49]. Fibrinolysis and platelet aggregation can be remarkably improved by metformin therapy.
Glucagon-like peptide-1 (GLP-1) receptor agonists have been confirmed to have protective effects on the endothelium, which may help to reduce inflammation (Figure 4) [50][51][52]. Schematic illustration of the main types of diabetes, in which the pancreas does not produce enough insulin or the body's cells do not respond appropriately to the insulin produced. Adopted and changed from a stock image. Maturity-Onset Diabetes of the Young (MODY) Other forms of diabetes are associated with monogenic defects in β-cell function. In maturity-onset diabetes of the young (MODY), the onset of hyperglycemia occurs early (generally before age 25). This type of DM is characterized by impaired insulin secretion and minimal or no defects in insulin action [53]. The most common form of MODY is associated with mutations in the hepatic nuclear transcription factor HNF-1α [54][55][56][57]. It can also be related to mutations in the glucokinase gene, which serves as a "glucose sensor" for the β-cell [58,59]. Owing to impairments in the glucokinase gene, higher plasma glucose levels are required to elicit normal insulin secretion [60]. The critical factor distinguishing MODY from type 1 diabetes is autoantibody negativity. Although GADA has been reported in 1% of individuals with MODY, measurements of C-peptide concentration and HbA1c (lower in GCK-MODY) may also be informative [61,62]. Besides the genetic background, the features distinguishing MODY from T2D are disease onset usually in the second or third decade of life, most often the absence of obesity, lower BMI, and the predominance of an insulin secretion defect in the absence of insulin resistance, or even high insulin sensitivity [63]. Alzheimer's Disease-Diabetes Mellitus (AD-DM) Relation and Type 3 Diabetes (T3D) In recent years, a significant increase in the incidence of Alzheimer's Disease related to T2D has been observed.
Patients with T2D are almost twice as likely to develop AD as patients who only have insulin resistance. T2D patients have β-amyloid deposits in the pancreas similar to those found in the brains of Alzheimer's Disease patients [64][65][66]. Several researchers have suggested referring to this pathology as type 3 diabetes (T3D) (Figure 5) [67]. Some of the target receptors in T2D, e.g., IGF-1R [68][69][70], PPAR [71], and IDE [72,73], are also crucial regulators of tau protein expression and phosphorylation. For instance, it was reported that both hyperinsulinemia and IDE might be risk factors for Alzheimer's disease [72,74]. The function of glucose transporter (GLUT) proteins is controlled by the insulin-like growth factor (IGF) family, consisting of three ligands (insulin, IGF-1, and IGF-2), six IRs, and up to seven IGF-binding proteins (IGFBP1-7). IGF-1 and insulin can regulate neuronal excitability, metabolism, and survival through the insulin/IGF-1 signaling pathway. Some evidence from the brains of Alzheimer's Disease patients has shown insulin deficiency and IGF-1 resistance, suggesting that AD might be type 3 diabetes [74]. Nevertheless, several studies also suggest a protective role of insulin against apoptosis through various signaling pathways that suppress intracellular oxidative stress. For instance, the insulin/IGF/Akt pathway is considered to promote β-cell survival [74]. The DYRK1A kinase is involved in molecular pathways relevant to human pancreatic β-cell proliferation, thereby providing a potential therapeutic target for β-cell regeneration in T1D and T2D [75,76]. A further target of DYRK1A has been identified as insulin receptor substrate-2 (IRS2) [77,78]. Moreover, hyperphosphorylation caused by DYRK1A overexpression has been implicated in many pathogenetic changes attributed to brain diseases, particularly in Down Syndrome and Alzheimer's Disease [79,80].
Diabetic Kinome Protein kinases are key regulators of signal transduction pathways in many physiological processes. Protein phosphorylation catalyzed by kinases is one of the major intracellular mechanisms of structural and enzymatic protein regulation. Reversible phosphorylation/dephosphorylation is involved in virtually all physiological events, and its disruption can lead to many pathological conditions [81,82]. A kinome is the complete set of protein kinases encoded in an organism's genome. Serine and threonine kinases contribute to insulin resistance and the development of diabetes (T2D) [83][84][85]. Kinases such as AMP-activated protein kinase (AMPK), IκB kinase (IKK), protein kinase C (PKC), and mitogen-activated protein kinases (MAPKs) play important roles in the development of insulin sensitivity and insulin resistance [6,[86][87][88]. Rho-associated coiled-coil-containing protein kinase (ROCK) and RNA-activated protein kinase (PKR) are also involved in the pathogenesis of insulin resistance [89,90]. AMPK regulates lipid and glucose metabolism; therefore, this enzyme appears to be one of the main factors responsible for maintaining energy homeostasis in the body. Its activation leads to inhibition of anabolic pathways, and its dysregulation is one of the mechanisms responsible for insulin resistance-induced diabetes [91,92]. Thus, understanding the interplay between diabetes and the protein kinome may help develop targeted drug therapies to minimize insulin resistance. Moreover, it may be critical for the prevention of diabetes. Therefore, a crucial approach in discovering new drugs against diabetes is the search for pharmacological inhibitors of specific kinases. Initially, the focus was mainly on tyrosine kinase inhibitors and cancer indications, but the field is rapidly expanding towards serine/threonine kinases.
Inflammation, Diabetes, and Kinase Inhibition Inflammation of pancreatic islets has emerged as a key contributor to the loss of functional β-cell mass in both T1D and T2D. In T1D, β-cells are the target of an autoimmune assault. Chronic low-grade inflammation and activation of the immune system are major factors in obesity-induced insulin resistance and T2D [93,94]. Obesity is a strong antecedent of T2D, and both diseases are associated with adverse cardiovascular risk profiles. Inflammatory pathways have been suggested as the underlying unifying pathogenic mediators for excess weight, diabetes mellitus, and cardiovascular diseases. Chronic inflammation is a common feature in the natural course of diabetes, and levels of inflammatory biomarkers (secreted mainly by adipocytes) correlate with prevalent and incident diabetes, its major complications, and cardiovascular diseases in particular [93,94]. The development of insulin resistance is also associated with low-grade tissue-specific inflammatory responses induced by various pro-inflammatory and/or oxidative stress mediators, notably pro-inflammatory cytokines such as IL-1β, IL-6, and TNF-α, several chemokines, and adipocytokines. Chronic exposure to pro-inflammatory mediators stimulates cytokine-signaling proteins, which ultimately block the activation of insulin signaling receptors in the β-cells of pancreatic islets [95]. Some protein kinases are directly involved in these inflammatory processes that underlie and accompany the progression of DM and its complications [95]. For instance, IκB kinase β (IKKβ), a central coordinator of inflammatory responses through activation of NF-κB, has been implicated as a critical molecular link between inflammation and metabolic disorders [96]. Phosphorylation by IKKβ targets IκBα for proteasomal degradation, which liberates NF-κB for translocation from the cytoplasm into the nucleus, where it promotes the expression of numerous target genes and consequently induces insulin resistance [74]. Xu et al.
identified inhibitors of the noncanonical IκB kinases (IKKs), TANK-binding kinase 1 (TBK1) and IκB kinase ε (IKKε), as enhancers of β-cell regeneration [97]. A common feature in the progression of both T1D and T2D is decreasing β-cell mass caused by cytokine- and/or glucolipotoxicity-induced apoptosis. Thus, prevention of β-cell loss by diabetic kinome inhibition can be an alternative approach for increasing β-cell mass in diabetes [97]. DYRK Family of Protein Kinases Among the 518 human kinases, dual-specificity tyrosine phosphorylation-regulated kinase 1A (DYRK1A) is a conserved eukaryotic serine/threonine protein kinase. The other kinases belonging to the DYRK family are DYRK1B, DYRK2, DYRK3, and DYRK4. DYRKs belong to the CMGC group, which also includes CDKs (cyclin-dependent kinases), CDKLs (CDK-like kinases), CK2 (casein kinase 2), CLKs (CDC-like kinases), GSKs (glycogen synthase kinases), and MAPKs (mitogen-activated protein kinases). Among them, the functions of CDKs, CKs, and MAPKs in transcription, DNA damage repair, protein degradation, and neurogenesis have been well investigated [98,99]. However, the roles of DYRKs and CLKs in signaling pathways remain incompletely understood. DYRK isoforms are subdivided into two classes based on their subcellular localization: DYRK1A/B, belonging to class 1, are found in nuclei, whereas class 2 members prefer cytoplasmic localization. They all possess a kinase domain [100]. DYRKs Activity and Regulation Many protein kinases can adopt an active and an inactive conformation. The transition between these conformations is regulated by the reversible phosphorylation of discrete serine, threonine, or tyrosine residues in the 'activation loop' [101]. DYRK activation depends on the phosphorylation of a conserved tyrosine residue in the activation loop. The phosphorylated tyrosine forms salt bridges with two arginines in the P + 1 loop [102,103].
In DYRK1A, pY321 participates in the same interactions with two arginines (R325, R328) (Figure 6) [98,99,[102][103][104]. DYRK possesses dual specificity, as it can autophosphorylate tyrosine Y321 in the activation loop and phosphorylate its substrates on either serine or threonine residues [103,105]. While dual MAPK phosphorylation is the primary process of upstream kinase regulation, tyrosine phosphorylation of DYRK and GSK3 occurs via autophosphorylation [106]. Activated DYRKs phosphorylate their substrates only on serine or threonine residues and cannot rephosphorylate on tyrosine. Therefore, a translational intermediate folding of DYRKs, with different biochemical properties and tyrosine phosphorylation ability, has been proposed [107]. It has also been suggested that the dual specificity of DYRKs is associated with dual sensitivity to kinase inhibitors [108]. Tyrosine phosphorylation during DYRK activation is required to switch the conformation, but it does not maintain this state. For this purpose, the stabilizing effect of the salt bridges formed between the phosphotyrosine and the two arginines in the P + 1 loop may play a crucial role [102]. DYRK1A Expression and Its Role in Neurological Diseases DYRK1A is a dosage-sensitive gene, and imbalances in its expression affect brain structure and function [109]. It was reported that DYRK1A deficiency might lead to autosomal dominant mental retardation [109]. Therefore, both low and high DYRK1A expression can participate in the development of several disorders [110]. DYRK1A expression is regulated by transcription factors, tumor suppressors, neurogenic factors, and protein-protein interactions. Reduced expression of the repressor complex AP4 results in premature overexpression of DYRK1A in the fetal brain [111]. It was also shown that the β-amyloid peptide increases DYRK1A mRNA levels in SH-SY5Y cells [112]. Overexpression of another transcription factor, E2F1, enhanced DYRK1A activity by increasing its mRNA level in Phoenix cells.
Thus, DYRK1A may also be involved in cell-cycle regulation [113,114]. The DYRKs are key proteins regulating NFAT1 phosphorylation [115,116]. Overdosage of DYRK1A, associated with the DSCR1 gene (a resident of the "Down syndrome candidate region" and a shock or stress gene), was reported to diminish NFATc activity in the immune response [117]. Protein p53, a well-known tumor suppressor, has been identified as reducing DYRK1A expression. This process is mediated through the induction of miR-1246, resulting in the nuclear retention of NFATc1 and the induction of apoptosis (overexpression of miR-1246 reduces DYRK1A levels) [118]. There is also evidence that upregulation of DYRK1A leads to changes in neuronal proliferation in Down Syndrome [119]. The WDR68 protein (also called HAN11 or DCAF7) may act as a regulatory subunit of DYRK1A and DYRK1B [120,121]. Its overexpression inhibited the DYRK1A-mediated stimulation of GLI1-dependent reporter gene activity [122]. Circadian changes in DYRK1A levels have been reported, and DYRK1A was identified as a molecular clock component leading to CRY2 degradation [123]. Recently, SPRED1 and SPRED2 (sprouty-related proteins with an EVH1 domain) were found to interact with the catalytic domain of DYRK1A, leading to the inhibition of phosphorylation of tau and STAT3 [124]. Thus, the DYRK1A-STAT pathway is involved in DS development. Phosphorylation by DYRK1A at the tau Thr212 residue primes tau phosphorylation by GSK3 at the Ser208 residue, resulting in the increased accumulation of neurofibrillary tangles found in the brains of Alzheimer's Disease patients [125,126]. DYRK1A also plays an important role in cytoplasmic homeostasis while localizing to the nucleus, as evidenced by increased immunoreactivity in this area [125]. The importance of DYRK1A in several biological processes is summarized in Figure 7.
DYRK1A Expression Affects Mechanisms of Diabetes DYRK1A has been found to affect multiple signaling processes in the DM context by activating/inactivating transcription and translation factors (RNAPII CTD [128], Sprouty2 [129], the DREAM complex [130], CREB [131], FKHR [132]), splicing factors (regulating Cyclin D1 turnover), and miscellaneous proteins including caspase-9 [109,133,134], Notch [135], and glycogen synthase. It was shown that DYRK1A is involved in GSK3 phosphorylation at the Ser640 residue [136]. This interaction subsequently causes the activation of glycogen synthase, a key enzyme in glycogen synthesis regulated by insulin (Figure 8) [136,137]. DYRK1A is an important kinase for β-cell growth [138]. Studies using DYRK1A-haploinsufficient mice have confirmed that they suffer from severe glucose intolerance and reduced β-cell mass and proliferation, leading to diabetes. Conversely, upregulation of DYRK1A in β-cells significantly enhanced β-cell proliferation [11,30]. DYRK1A has thus emerged in the drug discovery field as one of the most attractive therapeutic targets for developing selective inhibitors as new drugs, which may have high therapeutic potential for diabetes. The involvement of DYRK1A in the molecular pathways of different diseases is well described (see above). Therefore, we have focused on the new DYRK1A inhibitors discovered or specifically developed to provide the basis for the future development of these promising drugs. Current Treatments of Diabetes Pharmacological treatment of DM is based on the following strategies: (i) insulin infusion; (ii) administration of drugs that increase insulin secretion (sulfonylureas, meglitinides); (iii) enhancement of insulin sensitivity (metformin, thiazolidinediones); (iv) suppression of glucagon secretion (DPP-4 inhibitors and GLP-1 receptor agonists); and (v) application of substances that increase glucose excretion (SGLT-2 inhibitors) [140].
These strategies offer reasonable control of disease symptoms. However, no current therapy returns DM patients to euglycemia. Thus, the focus of DM treatment development has shifted toward restoring a population of functional, insulin-producing β-cells, which would allow the patient to achieve insulin homeostasis and relieve hyperglycemia. One of the most promising advances in diabetes therapy is enabling β-cells to replicate. Several classes of drugs, hormones, and growth factors, such as PPARγ agonists, GLP-1 agonists, DPP-4 inhibitors, GSK3β inhibitors, prolactin, IGF-1, HGF, and PTHrP, have been tested for their ability to stimulate β-cell proliferation [141,142]. However, all the proposed approaches failed to induce β-cell proliferation under clinical conditions. Nevertheless, in the majority of treated diabetic patients, some β-cells were able to survive the treatments listed above. In view of these considerations, it seems reasonable that modifications of current drugs, or new appropriately designed small molecules or molecular targets, could lead to β-cell proliferation/restoration. Under physiological conditions, human β-cells replicate at a low rate, about 2% per day, and only in the first few years of life. Until recently, pharmacological attempts to stimulate adult human β-cells to replicate had failed. This changed in 2015 with the discovery of harmine and other DYRK1A inhibitors, discussed below [16]. Harmine, a well-known DYRK1A kinase inhibitor, was identified by Wang et al. through a high-throughput screening (HTS) campaign. It induces a mild level of c-Myc protein expression in rodent islets. The mechanism of action involves inhibition of DYRK1A (likely the primary target of harmine), which allows the NFAT pathway to induce c-Myc expression [16]. Conversely, DYRK1A itself terminates NFAT signaling by rephosphorylating NFAT and thus acts as a brake on the cell cycle [142].
Several DYRK1A inhibitors were identified among other known protein kinase inhibitors, and a few are used as tool compounds for β-cell regeneration [138]. Harmine and Its Analogues-SAR Approach Screening of a panel of 69 kinases identified harmine as a potent DYRK1A inhibitor [143]. Comparative in vitro assays revealed that harmine is moderately specific towards DYRK1A [144]. The IC50 for DYRK1A is 33 nM; DYRK1B showed an IC50 of 166 nM, and the more distant members DYRK2, DYRK3, and DYRK4 showed IC50 values of 1.9 µM, 0.8 µM, and 80 µM, respectively [144]. Additionally, cell culture assays confirmed the potency of harmine for DYRK1A and its lack of toxicity. The crystal structure of the DYRK1A/harmine complex showed harmine blocking the ATP-binding pocket, forming two hydrogen bonds: one with the backbone NH of the methionine at position 240 in the hinge region, and one with the conserved lysine at position 188 (Figure 9) [145]. Furthermore, the DYRK1A/harmine complex structure suggests that the accessible volume of the ATP-binding pocket can accommodate substituents on the β-carboline scaffold [102]. Consequently, harmine can likely be modified into an even more potent and selective DYRK1A inhibitor. Although several new DYRK1A inhibitors have been identified and described so far, none meet the selectivity standards required for kinase-targeted probe molecules. Harmine is also a potent monoamine oxidase (MAO) inhibitor, which is associated with a number of side effects. Due to this limited selectivity, the harmine derivative AnnH75 (Figure 10) was developed; unlike harmine, it does not interact with MAO while maintaining DYRK1A inhibition. Epigallocatechin gallate (EGCG) from green tea has also been shown to be a DYRK1A inhibitor.
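The isoform selectivity of harmine implied by these IC50 values is easy to express as a fold ratio relative to DYRK1A. The short calculation below is purely illustrative, using the in vitro IC50 values quoted in the text:

```python
# Fold-selectivity of harmine across DYRK isoforms, computed from the
# in vitro IC50 values quoted in the text. A higher fold means weaker
# inhibition of that isoform relative to DYRK1A.
ic50_nM = {
    "DYRK1A": 33.0,
    "DYRK1B": 166.0,
    "DYRK2": 1900.0,   # 1.9 uM
    "DYRK3": 800.0,    # 0.8 uM
    "DYRK4": 80000.0,  # 80 uM
}

reference = ic50_nM["DYRK1A"]
for kinase, ic50 in sorted(ic50_nM.items(), key=lambda kv: kv[1]):
    fold = ic50 / reference
    print(f"{kinase}: IC50 = {ic50:g} nM ({fold:.0f}x vs DYRK1A)")
```

The roughly 5-fold margin over DYRK1B is the key limitation; the other isoforms are inhibited 24- to 2400-fold more weakly, which is consistent with the "moderately specific" characterization above.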
In 2018, an integrated approach was developed to investigate the structure-activity relationship of harmine derivatives for diabetes management (DYRK1A activity and β-cell proliferation) [15]. Structure-based drug design was used to identify kinome and CNS off-targets and harmine-like molecules for more specific therapy. Using the crystal structure of DYRK1A with the ATP-binding inhibitor DJM2005, 1-, 7-, and 9-amino harmine analogs were synthesized and examined in terms of their effect on DYRK1A binding and β-cell proliferation. Harmine analogs with polar substituents, e.g., hydroxymethyl or acetyl, at position 1-C (Figure 12) showed good DYRK1A inhibition (IC50 49-67 nM). However, the 1-hydroxy moiety negatively impacted DYRK1A inhibition. The presence of a halogen atom at position 1-C significantly increased the inhibitory potency, making the 1-chloro-substituted analog the most potent DYRK1A inhibitor, with an IC50 of 8.8 nM. Among the eight synthesized compounds with IC50 < 250 nM against DYRK1A, only four affected human β-cell proliferation. In contrast, the 1-amino analogs showed no effect on β-cell proliferation. Notably, the 1- and 3-hydroxymethyl compounds were most effective in vitro, indicating that these modifications improve β-cell proliferation and potentially increase selectivity toward DYRK1A [15]. In a subsequent paper, the same authors reported a set of harmine derivatives modified at the 7-position as DYRK1A inhibitors with activity on human β-cell proliferation and potential for targeted drug delivery [147]. The harmine backbone was substituted with terminal methyl ester, carboxamide, carboxylic acid, and amino/substituted amino groups with various carbon chain lengths (1-5). Biochemical assays allowed the selection of two 7-O analogs with activity toward DYRK1A (>100 nM). These compounds increased β-cell proliferation, although about 3-fold less than harmine.
The reduced efficacy (compared with harmine) of the described 7-O derivatives may be caused by several structural and biological factors, including lower potency for DYRK1A inhibition, limited cell permeability, reduced DYRK1A targeting, and/or off-target kinase activity [147]. 9-N-substituted analogues of harmine were also studied in order to eliminate off-target kinase and CNS liabilities [148]. A library of 62 compounds was tested, and among them, 4-(7-methoxy-1-methyl-β-carbolin-9-yl)butanamide proved to be the most promising DYRK1A inhibitor. The compound was tested in vivo and was significantly more effective than harmine itself (Figure 13). After treatment with 4-(7-methoxy-1-methyl-β-carbolin-9-yl)butanamide, Ki67 expression was increased in C57 mouse and human β-cells (Figure 13). In the PPX model, faster regeneration of β-cells was observed at a 10-fold lower drug dose than with harmine. Similar results were also obtained in the NOD-SCID mouse model with transplanted human islets. Furthermore, no CNS side effects were observed at a dose of 30 mg/kg. Thus, this compound was selected as the lead candidate with high in vivo efficacy. Notably, the identified inhibitor is also characterized by improved selectivity, fewer CNS off-target effects, and superior activity in β-cell restoration, which is crucial for the treatment of diabetes [148]. The studies described above, whose success has been confirmed in several animal models, demonstrate the validity of the modifications made within the harmine structure. This research direction is definitely worth continuing, but only large-scale clinical trials will show whether the most effective compound of this family is equally effective in clinical treatment. Figure 13. Chemical structure of 4-(7-methoxy-1-methyl-β-carbolin-9-yl)butanamide.
Perha Pharmaceutics Inhibitors Despite the therapeutic potential of DYRK1A inhibitors, only a few of them have been well characterized to date in terms of selectivity and biological effects [149]. These include (i) pyrazolidine-3,5-dione derivatives, (ii) 6-arylquinazolin-5-amines, (iii) the β-carboline alkaloid harmine, (iv) the green tea polyphenol epigallocatechin-3-gallate, (v) the benzothiazole INDY, (vi) bauerine C derivatives, and (vii) leucettines [150], a group of aminoimidazolinones derived from the marine sponge natural product leucettamine B; among them, leucettine L41 was shown to be the most promising kinase inhibitor (Figure 14). The molecular interactions of leucettine L41 with its targets and its neuroprotective properties have been extensively studied [150]. Leucettine L41 (an ATP-competitive inhibitor of DYRKs and CLKs) may also interact with GSK3β and CK2. Moreover, it exerts cellular effects, including modulation of pre-mRNA splicing and protection of HT22 hippocampal cells against cell death. Furthermore, it may induce autophagy and inhibit tau phosphorylation [151]. It was recently reported that leucettine L41 could prevent DYRK1A proteolysis, inhibit STAT3α phosphorylation, and reduce pro-inflammatory cytokine secretion (IL-1β, TNF-α, and IL-12) in the APP/PS1 mouse model. These results confirm the role of DYRK1A proteolysis in Alzheimer's disease (AD) and suggest a possible mechanism as a novel target to counteract the disease [152]. Another approach for discovering DYRK1A inhibitors was screening a library of plant and fungal extracts [153]. Several compounds were identified, including harmine, the anthraquinone emodin, and several flavonoids. These molecules were isolated and characterized as the active constituents of four plant extracts. However, due to the moderate activity of the selected anthraquinone and flavonoids, the potential for further development is limited. In particular, flavonoids are known to be very promiscuous kinase inhibitors [153].
Lamellarins are natural marine products isolated from mollusks, ascidians, and sponges. Lamellarin D displays broad-spectrum kinase inhibition (i.e., CDK1, CDK5, GSK3, PIM1, and DYRK1A) in the sub-nanomolar range. It is also toxic to cancer cells due to strong topoisomerase I inhibition. Lamellarins B and D differ only in the number and position of OH and OMe groups on a common pyrrolo[2,1-a]isoquinoline scaffold (Figure 15). A synthetic model for modulating the activity of lamellarins has been developed [154]. By fine-tuning the natural structure, it was possible to eliminate the topoisomerase affinity and cytotoxicity while retaining the kinase inhibition (Figure 16). The pyrrole moiety was replaced with an indole skeleton, yielding new chromeno[3,4-b]indoles. The other parts of lamellarin D (rings A, B, and C) were left unchanged, as were the substituents interacting most strongly with the A ring, i.e., the OH and -OCH3 groups. The presence of a hydroxyl group at position C-2 abolishes topoisomerase inhibition; interestingly, selective inhibition of DYRK1A was observed at the same time. Without any other substituent, or with the addition of a hydroxyl group at C-10, two derivatives (4-hydroxychromeno[3,4-b]indol-6(7H)-one and 3-hydroxychromeno[3,4-b]indol-6(7H)-one) were selected, with IC50 = 74 and 76 nM, respectively [154]. Similarly, DYRK1A inhibitors comprising (i) meridianines, (ii) indirubin-5'-carboxylates, (iii) thiazolo[5,4-f]quinazolines, (iv) pyrido[2,3-d]pyrimidines, (v) 3,5-diaryl-7-azaindoles (DANDYs), (vi) KH-CB19, (vii) 2,4-disubstituted thiophenes, and (viii) hydroxybenzothiophenes have been tested not only for DYRK1A selectivity but also against structurally closely related kinase isoforms [155]. The Meijer group has been intensively investigating C,N,S- or C,N,O-containing heterocycles representing precursors of biologically important molecules able to alter kinase activity.
One of the most promising compound classes from this group is the 8H-thiazolo[5,4-f]quinazolin-9-ones, with micromolar DYRK1A inhibitory potency (Figure 17). Benzo-, pyrido- and pyrazinothieno[3,2-d]pyrimidine derivatives were also investigated as DYRK1A inhibitors. Thiazolo[5,4-f]quinazoline scaffolds likewise show potential for DYRK1A inhibition. Among the compounds of this library, methyl 9-(4-methoxyphenylamino)thiazolo[5,4-f]quinazoline-2-carbimidate (EHT 1610) emerged as a potent DYRK1A inhibitor. It was also reported that another derivative, methyl 9-anilinothiazolo[5,4-f]quinazoline-2-carbimidate (EHT 5372), inhibits DYRK1A and DYRK1B at subnanomolar concentrations (IC50 = 0.22 nM for DYRK1A and 0.28 nM for DYRK1B, respectively). EHT 5372 and its derivatives are among the most potent DYRK1A inhibitors reported so far, with high selectivity toward DYRK1A compared to other kinases of the CMGC group (Figure 19). EHT 5372 also inhibits cellular DYRK1A-mediated tau phosphorylation and Aβ production, although with significantly lower potency in cells (IC50 = 1.06-1.17 µM) [156,157]. Another compound belonging to the DYRK1A inhibitor class and characterized by nanomolar IC50 values is 8-cyclopropyl-2-(pyridin-3-yl)thiazolo[5,4-f]quinazolin-9(8H)-one (also called FC162, Figure 20) [158]. In in vitro cell studies, FC162 emerged as a more promising candidate than well-characterized DYRK1A inhibitors (e.g., leucettine L41 and EHT1610). It was reported that FC162 can cross the BBB and effectively inhibits tau phosphorylation at Thr212 [158]. In further studies, the activity of FC162 on tau-4R cells (SH-SY5Y cells overexpressing the four-repeat human tau isoform) was examined. The results indicated a dose-dependent inhibition of tau phosphorylation at Thr212. Moreover, a decrease in cyclin D3 phosphorylation at Thr283 was observed in murine pre-B cells. After long-term FC162 treatment, a decreased G0 cell population was observed. Thus, these data reveal that FC162 phenocopies the effect of Dyrk1a genetic deletion [158].
Moreover, another compound from this group, 10-iodo-11H-indolo[3,2-c]quinoline-6-carboxylic acid (KuFal194) [155], also showed in vitro activity against DYRK1A, with IC50 = 6 nM and considerable selectivity over DYRK1B and CLK1. Nevertheless, due to the low water solubility of KuFal194, further in vitro and in vivo studies should be performed with caution. It seems reasonable to use appropriate formulations that not only solve the problem of lipophilicity but, perhaps by increasing stability, also improve other important parameters. A SAR evaluation of KuFal194 derivatives and a kinome selectivity analysis covering DYRK1A and the CMGC protein kinases CDK1/cyclin B, CDK2/cyclin A, CDK5/p25, CK1, GSK-3, and ERK2 were performed. Substituents in the 8-position eliminated the DYRK1A inhibitory activity, suggesting steric exclusion from the ATP-binding pocket (Figure 21). Moderate, selective inhibitors of GSK3 were also obtained by adding polar H-bond acceptor substituents in the 8-position; these showed no activity against DYRK1A. Strikingly, the 10-chloro derivative (10-chloro-11H-indolo[3,2-c]quinoline-6-carboxylic acid) showed two-fold higher DYRK1A inhibitory potency (IC50 = 31 nM) than 11H-indolo[3,2-c]quinoline-6-carboxylic acid without inhibiting other kinases [155]. In order to enhance the physicochemical properties of KuFal194, a set of [b]-annulated chloro-substituted indoles was designed and developed. The main rationale was that chlorine, compared to the iodine atom, decreases the molar mass and lipophilicity and diminishes the overall toxicity [159]. Kinase inhibition studies performed with appropriate bioassays revealed that most of the tested compounds, except a Mannich base, act as DYRK1A inhibitors at micromolar or even sub-micromolar concentrations. Compared to KuFal194, however, these novel compounds were less active and not selective over CLK1 [158].
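The potency comparisons quoted above (KuFal194 at IC50 = 6 nM versus its 10-chloro analogue at 31 nM) reduce to simple ratios of IC50 values. As an illustrative aside, not part of the cited studies, a minimal Python helper makes the arithmetic explicit; the function name and unit choice are assumptions for this sketch:

```python
def fold_potency(ic50_ref_nm: float, ic50_test_nm: float) -> float:
    """Fold-difference in potency between two inhibitors.

    Both IC50 values must be in the same unit (here nM). A result > 1
    means the test compound is that many-fold LESS potent than the
    reference, because a lower IC50 indicates stronger inhibition.
    """
    if ic50_ref_nm <= 0 or ic50_test_nm <= 0:
        raise ValueError("IC50 values must be positive")
    return ic50_test_nm / ic50_ref_nm

# Values quoted in the text (nM): KuFal194 vs. its 10-chloro analogue.
print(f"{fold_potency(6, 31):.1f}-fold less potent than KuFal194")  # 5.2-fold
```

The same ratio underlies every "x-fold more/less potent" statement in this section, so reported fold-changes can be cross-checked directly against the quoted IC50 pairs.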
4-Chlorocyclohepta[b]indol-10(5H)-one was identified as a novel dual DYRK1A/CLK1 inhibitor with slightly better solubility. X-ray structure analysis confirmed the binding mode of this compound to DYRK1A, which exploits mainly shape complementarity for tight binding (Figure 22) [158]. In summary, inhibitors such as harmine, INDY, and leucettine L41 have shown some promise in cellular assays due to their significant DYRK1A inhibitory activity; the results obtained against related kinases were less encouraging, however, indicating low selectivity. On the other hand, KuFal194 and EHT 5372 are characterized by proper selectivity for DYRK1A, but their use in in vivo studies is still limited by high lipophilicity (Figure 23) [154]. Therefore, further design of improved water-soluble derivatives, or the use of appropriate formulations, is required. A halogen-substituted indole group was chosen to develop more hydrophilic DYRK1A inhibitors with reduced molecular weight. The resulting fragment served as a template to design and develop a series of substituted indole-3-carbonitriles with inhibitory properties against CMGC kinases [160]. Computational studies indicated that halogen substituents at the 7-position of the indole ring are most likely to interact with the hinge region via a water-mediated halogen bond [160]. At the 2-position of the indole core, only aromatic or lipophilic residues were tolerated. The 2-phenyl-substituted derivative (7-iodo-2-phenyl-1H-indole-3-carbonitrile) was the most potent inhibitor of the series, both against DYRK1A (IC50 = 10 nM) and against DYRK1A-mediated phosphorylation of SF3B1 in HeLa cells (IC50 = 320 nM) (Figure 24) [160]. However, it showed only low selectivity over related kinases of the CMGC group and poor aqueous solubility. To increase the solubility of the compounds, hydrophilic or aliphatic residues were introduced at the 2-position.
By replacing the 2-phenyl substituent with pyridin-3-yl or cyclopentyl residues, a reduction of the logP value and increased solubility were obtained, while the DYRK1A activity was only slightly affected. Further modifications of the 7-halogenindole-3-carbonitrile parent structure are underway to develop potent, highly selective, and water-soluble DYRK1A inhibitors [160]. The tetracyclic V-shaped pyridine-, pyrazine- or indole-containing compounds represent the next set of molecules that target DYRK1A in the nanomolar range [12,161]. Within the pyridazino[4,5-b]indol-4-one series (Figure 27) [165], the furan-2-yl-substituted derivative was selected as a compound with a submicromolar IC50 (0.22 µM) against DYRK1A; owing to its structural analogy with harmine, further optimization was performed. It was only about four-fold less active than harmine (IC50 = 0.06 µM) and showed no activity towards the other kinases tested. The mechanism of its activity was explained theoretically: based on the presented docking model, the authors suggested that its binding differs from that of harmine. Harmine interacts with the Leu241 residue via hydrogen bonding, whereas the presented inhibitor probably binds through its pyridazinone ring (to the backbone atoms of Glu239 and Leu241). Furthermore, the selectivity over the CDK5 and GSK3 kinases may be due to an additional hydrogen-bonding interaction between a methoxyl group and an asparagine residue (Asn244) located in the kinase pocket (Asp86 in CDK5; Thr138 in GSK3) [165].

Azaindoles

Azaindoles are structurally related to indoles and widely present in natural products and pharmaceuticals (Figure 29). Azaindole molecules appear to inhibit kinases preferentially over other targets. Moreover, their biological/pharmacological features are beneficial for treating many diseases, including DM [167]. With some azaindoles having been successfully developed as antidiabetic drugs, 6-azaindole and 7-azaindole derivatives have also been tested as DYRK1A inhibitors.
For instance, the 3,5-diaryl-7-azaindole derivative, also called DANDY, represents one of the most potent inhibitors of DYRK1A (IC50 = 3 nM). Besides DANDY, numerous DYRK1A inhibitor scaffolds have been reported (Figure 30) [168]. A molecular docking study at the ATP-binding site demonstrated multiple H-bond interactions of the 7-azaindole core with the peptide backbone (Glu239, Leu241), while the hydroxyl substituents interacted with Lys188 and Ile165. The hydroxyl derivatives showed greater activity than their methoxy counterparts [168]. Moreover, 6-azaindole derivatives were considerably less active than the 7-azaindole ones. Interestingly, when these derivatives were tested against a representative kinase panel, a relative selectivity appeared, with the compounds acting mainly on the DYRK1A family [169]. The SAR study of the azaindoles shows that the nitrogen at the 7-position is indispensable, as replacing the azaindole ring with an indole ring led to an inactive compound [170]. Methylation of the nitrogen at the N1-position of 7-azaindole had a similar effect, suggesting that the azaindole's NH belongs to the pharmacophore. Furthermore, the addition of another nitrogen to the azaindole ring led to a less active molecule, and additional substitution at the 2-position resulted in a significant decrease in activity [170]. The 7-azaindole core proved critical for the strong protective effect on INS-1E cells in the CK assay. Thus, 5-(3,4-difluorophenyl)-3-(pyrazol-4-yl)-7-azaindole (GNF3809) was selected for both ex vivo and in vivo proof-of-concept efficacy studies (Figure 31), in which it demonstrated protective effects on β-cells. Future efforts directed at further optimization of GNF3809 and the elucidation of its molecular mechanism of action hold substantial potential to address the unmet medical needs of T1D patients [170].
Aminopyrazines

The aminopyrazine scaffold was identified in a phenotypic high-throughput screening campaign measuring β-cell proliferation in mouse R7T1 β-cells. Lead optimization resulted in the identification of a promising dual DYRK1A and GSK3β inhibitor, the aminopyrazine GNF4877 (Figure 32) [171]. The priming of GSK3β substrates by DYRK1A links these two kinases to diabetes, and their implication in β-cell proliferation has been demonstrated in several screening tests and biological activity experiments. Inhibition of GSK3β leads to NFAT nuclear localization and is required for β-cell proliferation; likewise, inhibition of DYRK1A may stimulate NFAT signaling, which influences β-cell proliferation. SAR studies on the aminopyrazine scaffold targeted the enzymatic inhibition of DYRK1A using a structure-directed approach. GNF4877 inhibits not only DYRK1A but also GSK3β, and it promotes the proliferation of β-cells under both in vitro and in vivo conditions. Nevertheless, inhibition of GSK3β may also lead to side effects, and for this reason GNF4877 was not selected for further clinical trials. Preclinical studies of this series of compounds have nonetheless established solid ground for the discovery of the next generation of selective DYRK1A inhibitors. Aminopyrazine compounds (the GNF series) were designed and developed to increase β-cell proliferation in adult primary islets. Oral administration of these compounds to diabetic mice induced β-cell proliferation, increased insulin content, and consequently improved glycemic control. Biochemical, genetic, and in vitro studies confirmed that DYRK1A mediates the β-cell proliferation induced by GNF7156. Furthermore, dual inhibition of DYRK1A and GSK3β increased β-cell proliferation (Figure 33) [171]. However, GSK3β regulates various cellular processes, including behavior, immunity, and circadian rhythm.
Its inhibition may therefore also activate other pathways and lead to undesired side effects. Less than a year ago, research into optimizing the structure and function of the aminopyrazines led to the discovery and development of GNF4877 as a dual DYRK1A and GSK3β inhibitor promoting β-cell proliferation. Notably, compared to previously reported derivatives, this dual-mode agent was already active at nanomolar concentrations [171]. Another 6-azaindole derivative, named GNF2133, has been developed as a DYRK1A inhibitor and has been shown to promote β-cell proliferation and restore its function (Figure 34). It was reported that the 6-azaindole was the most promising in terms of DYRK1A inhibition and selectivity over GSK3β. It demonstrated significant dose-dependent improvement in glucose disposal and in insulin secretion in the glucose-potentiated arginine-induced insulin secretion (GPAIS) test in rat insulin promoter-diphtheria toxin A (RIP-DTA) mice. It should therefore be regarded as an up-and-coming candidate for the treatment of type 1 diabetes [172]. Three novel compounds, GNF-9228, GNF-4088, and GNF-1346 (Figure 35), which effectively stimulated β-cell proliferation but not the expression of the homeobox gene NKX6.1 or of VGF, were also described [173]. Subsequent studies demonstrated several salutary effects of the VGF prohormone and its encoded peptides, such as TLQP-21, on β-cell survival and function [173][174][175]. The most promising, GNF-9228, selectively stimulates human β-cell proliferation relative to α-cell proliferation and does not affect δ-cell replication [173]. GNF-9228 stimulates proliferation by a mechanism distinct from that of DYRK1A inhibitors, as it is not influenced by DYRK1A overexpression and does not activate NFAT translocation [173]. In conclusion, a small molecule with pleiotropic positive effects on islet biology was characterized, including stimulation of human β-cell proliferation and insulin secretion and protection against multiple agents of cytotoxic stress [173].
AC Inhibitors

A set of DYRK1A inhibitors was identified by employing KINOMEscan [176] screening. These compounds, designated AC, represent six different chemical scaffolds [177]. The selected compounds span a broad spectrum of biological activity towards DYRK1A kinase, from weak to strong inhibition, as measured by remaining kinase activity (from 70-100% down to <5%). The measured Ki values for inhibition of the phosphorylation of DYRKtide (peptide RRRFRPASPLRGPPK) vary among the compounds, mirroring the chemical differences between the scaffolds [177]. Compounds 23 and 27 showed the highest activity in cellular assays, at concentrations significantly lower than harmine: compared with harmine, a 5-fold and 50-fold increase in activity was observed for 23 and 27 at concentrations of 1 µM and 0.1 µM, respectively. Excessive increases in the inhibitor dose in cells lead to a decrease in activity, which may indicate a toxic effect, whereas harmine shows dose-dependent inhibition at these concentrations [177]. Moreover, this diverse set of scaffolds revealed the ability to prevent tau phosphorylation. Some of the inhibitors (12, 15, 22, 24, 25, 27, 28) were co-crystallized with DYRK1A. The obtained crystal structures show that, with one exception, the inhibitors are typical hinge binders [177]. The most promising of the reported AC-series compounds, 27, has no hydrogen bond to the hinge, a unique feature. Hydrogen bonds to K188 and E203 are formed by its diazole group, and to N244 via its carbonyl. Additionally, the N292 side chain forms a hydrogen bond with the fluorinated arene. A bridging water between the hinge and the compound was found in only one of the chains of the tetramer; it resides at hydrogen-bonding distances of 2.7 Å from the 1,6-phenanthroline nitrogen, 2.8 Å from the main-chain nitrogen of L241, and 2.6 Å from the carbonyl of E239.
The trifluoromethyl-substituted fluorobenzyl ring is in perpendicular π-stacking with the 1,6-phenanthroline and diazole rings. The trifluoromethyl group penetrates a hydrophobic pocket formed by G166 of the glycine-rich loop and the I165 and V173 side chains. Compounds EHT1610 and EHT5372 (the most selective DYRK inhibitors identified so far) share remarkable similarities with compound 27, suggesting that canonical hinge binding may be less critical for high-affinity binding, as seen for this inhibitor. The benzyl rings of those scaffolds are roughly perpendicular to each other. While the overall orientation of the inhibitors differs, all three compounds interact with the P-loop: the trifluoromethyl moiety of 27 and the 2-fluoro- and 2-chlorobenzyl groups of EHT1610 and EHT5372 fill the same subpocket. The overall shape of these molecules can be described as "U"-shaped; the opening of the "U" in AC27 is directed toward the P-loop (F160), and this arrangement is reversed for the EHT inhibitors (Figure 37) [177]. Newly discovered binding features, such as the CH-O interaction with Asn292 or bound water molecules that serve as catalytic-lysine anchors, may provide valuable information for the optimization of these inhibitors of DYRK1A and related kinases, which could be used in the future to treat not only diabetes but also neurodegenerative diseases, particularly Alzheimer's disease. These findings, once again, confirm the importance of a multidirectional approach in the search for and development of new DYRK1A inhibitors [177]. There is emerging evidence demonstrating a role for DYRK1A in diabetes and β-cell proliferation, which expands the potential for pharmaceutical applications of DYRK1A inhibitors. The diversity of the novel scaffolds and the binding modes determined by crystal structures and in vitro assays may lead to novel strategies for diabetes treatment.
Small-molecule inhibitors of DYRK1A developed in our group show specific, strong binding affinity and promising therapeutic applications. One of our inhibitors is a potential regulatory agent for restoring pancreatic β-cell mass and the secretory and regulatory functions of the organ. Hence, one of our aims is to further optimize such inhibitors while elucidating the mechanisms involved in the progression of diabetes. We have shown that the AC inhibitors developed by us are able to potentiate glucose-stimulated insulin secretion (GSIS) in cultured β-cells and isolated mouse islets of Langerhans. These results correlate with the compounds' inhibitory efficacy against DYRK1A, their kinase selectivity, and their effect on human β-cell proliferation. We assessed the AC27 inhibitor for its ex vivo activity in freshly isolated pancreatic islets from mice. The results show that in both hiPSC-islet and isolated mouse islet models, AC27 significantly increased insulin secretion relative to untreated groups. Furthermore, this effect may be improved by co-addition of RepSox, a selective inhibitor of the TGF-β type 1 receptor, or LY364947, a selective ATP-competitive TGF-β receptor kinase I inhibitor. Among other effects, these molecules can alleviate the impaired GSIS observed in T2D. Controlled, stable regulation of cell function at the molecular level is gaining ground in regenerative medicine. This set of studies provides proof-of-concept that small-molecule-induced human β-cell proliferation is achievable and, should it prove reachable in clinical practice, lends considerable promise to the goals of regenerative medicine for diabetes treatment.

Miscellaneous Scaffolds and Drug Combinations

In 2020, the novel DYRK1A inhibitor named KVN93 was identified.
This tau kinase inhibitor interacts with DYRK1A by targeting the ATP-binding site of its active conformation, in which the activation loop is phosphorylated. It was investigated for Alzheimer's disease treatment as a compound able to regulate cognitive function, β-amyloid pathology, and neuroinflammation. In vivo studies revealed that KVN93 improves long-term memory and reduces amyloid plaque levels in 5XFAD mice by increasing Aβ-degrading enzyme levels. KVN93 can also modulate neuroinflammation in microglial cells by regulating TLR4/AKT/STAT3 signaling, and experiments in wild-type mice injected with LPS confirmed that KVN93 treatment reduced microglial and astrocyte activation. These data suggest that KVN93 is a potential therapeutic DYRK1A inhibitor able to regulate (i) cognitive/synaptic function, (ii) Aβ plaque load, and (iii) neuroinflammatory reactions [178]. Studies described by Allegretti and co-authors revealed that the anticancer kinase inhibitor OTS167 may act as a structurally novel, remarkably potent DYRK1A inhibitor that induces human β-cell replication [179]. Despite OTS167's target promiscuity and cytotoxicity, multidimensional compound optimization was performed to tailor kinase selectivity towards DYRK1A and reduce cytotoxicity. Indeed, characterization of a series of 1,5-naphthyridine derivatives yielded several leads with exceptional DYRK1A inhibition and human β-cell replication-promoting potencies but substantially reduced cytotoxicity. The results suggest that these compounds are the most potent human β-cell replication-promoting molecules described to date and exemplify the potential to purposefully leverage off-target activities of advanced-stage compounds for a desired application [179]. In order to elucidate the molecular pathways that control β-cell growth, Abdolazimi et al. screened about 2400 bioactive compounds for rat β-cell replication-modulating activity [180].
In this library, CC-401 was identified as a small molecule that promotes human β-cell replication (Figure 38). CC-401 is an advanced clinical candidate previously characterized as a c-Jun N-terminal kinase inhibitor; however, these studies revealed that CC-401 also acts via DYRK1A/B inhibition [180]. Moreover, it was reported that the DYRK1A/1B inhibition-dependent induction of β-cell replication is multifactorial. CC-401 treatment led to rodent (in vitro and in vivo) and human (in vitro) β-cell replication via DYRK1A/1B inhibition. In contrast to rat β-cells, which were broadly growth-responsive to compound treatment (replication-inducing compounds such as GSK3β or ALK5/TGFβ inhibitors), human β-cell replication was consistently induced only by DYRK1A/B inhibitors. In many reports, researchers identified DYRK1A/B inhibition-dependent activation of NFAT as the primary mechanism of induction of β-cell replication. Nevertheless, inhibition of NFAT activity had a limited effect on CC-401-induced β-cell replication; thus, additional effects of CC-401-dependent DYRK1A/B inhibition were investigated. It was found that CC-401 inhibited the DYRK1A-dependent phosphorylation/stabilization of the β-cell replication inhibitor p27Kip1. Additionally, CC-401 increased the expression of numerous replication-promoting genes, including MYBL2 and FOXM1, which are generally suppressed by the DREAM (dimerization partner, RB-like, E2F, and multi-vulval class B) complex, whose integrity depends upon DYRK1A/B activity. These data establish CC-401 derivatives (abbreviated as STF compounds), alongside commonly used DYRK1A inhibitors such as harmine, as a valuable resource for manipulating the signaling pathways that control β-cell replication, and leverage DYRK1A/B inhibitors to expand understanding of the molecular pathways that control β-cell growth [180].
Additionally, the potential of combining small-molecule inhibitors to augment the limited replication response of human β-cells was demonstrated: this effect was enhanced by simultaneous glycogen synthase kinase-3β (GSK3β) or activin A receptor type II-like kinase/transforming growth factor-β (ALK5/TGFβ) inhibition [30]. It was lately reported that combining DYRK1A inhibition with inhibition of transforming growth factor-beta superfamily (TGFβSF)/SMAD signaling leads to a synergistic increase in human β-cell proliferation and in the number of β-cells in both mouse and human islets. This effect is related to the activation of cyclins and CDKs together with decreased levels of key cell-cycle inhibitors (including CDKN1C and CDKN1A) through altered Trithorax- and SMAD-mediated transactivation. Additionally, this dual DYRK1A and TGFβ inhibition allows the preservation of β-cell functions. These effects were demonstrated in healthy human and stem cell-derived β-cells, as well as in patients with T2D, in both in vitro and in vivo investigations [181]. Furthermore, the relationship between DYRK1A and insulin receptor substrate-2 (IRS2) has been thoroughly discussed. The loss of IRS2 expression in β-cells contributes to T2D, and IRS2 may be one of the DYRK1A targets. DYRK1A interacts directly with IRS2 through the N-terminal domain of DYRK1A. Moreover, DYRK1A promotes tyrosine (Y) phosphorylation and K48-linked poly-ubiquitination of IRS2, leading to proteasomal degradation of IRS2. In vitro evaluation revealed the expression of DYRK1A in MIN6 cells and islet β-cells and pointed to its role in inducing apoptosis. Furthermore, IRS2 expression was slightly reduced in the hippocampus and islets of young APP/PS1 mice (3 months old), while it was significantly suppressed in older animals (6-month-old mice). It was postulated that this might be related to other mechanisms, e.g., activation of GSK3β and neuroinflammation in the early stage of the disease [182].
These findings also complement the current understanding of the relationship between DM and AD [182]. Evidence was also provided that the combination of any GLP1R agonist class member with any DYRK1A inhibitor class member induces a synergistic increase in human β-cell replication accompanied by an increase in human β-cell mass [50]. Combining a small-molecule DYRK1A inhibitor (such as harmine, INDY, leucettine, 5-IT, or GNF4877) with any of the antidiabetic drugs that directly (GLP-1 analogs) or indirectly (DPP4 inhibitors) activate the GLP1R converts the mitogenically inactive GLP1R agonists into potent β-cell proliferative agents [50]. Combining these two agents boosted human pancreatic β-cell proliferation and expanded β-cell mass in human cadaveric islets ex vivo. For instance, cadaveric human islets were transplanted into immunodeficient mice with streptozocin-induced diabetes, and the mice were then treated with the drug combination; the animals showed increased insulin production and improved glycemic control compared with mice treated with either compound alone or left untreated. The synergistic effect of the two molecules on pancreatic β-cell expansion was mediated by both NFAT and cAMP-PKA signaling, with consequent activation of cell-cycle genes such as cyclin-dependent kinases and of β-cell-specific genes (e.g., GLUT2, PDX1, and NKX6.1) [50,183]. The resulting proliferation rates exceeded those achieved with DYRK1A inhibitors alone and may be in a range that could restore β-cell mass in people with T2D and T1D [50,149,183]. It is well known that the nature of diabetes is unlikely to be fully addressed by modulation of any single target; the "paradigm shift" here is that when such complexity prevails, radical target specificity is no longer the ultimate goal. Simultaneous targeting of DYRK1A together with complementary pathways enhances the restoration of the β-cell population.
Alternatively, combinatory treatment with DYRK1A inhibitors, hypoglycaemic agents (glucagon-like peptide-1 (GLP-1) receptor agonists), and modulators of signaling factors such as TGF-β improves the proliferation rate of human cadaveric β-cells. However, current inhibitors lack target specificity, with risks of adverse effects; thus, the need to identify drugs that accelerate human β-cell proliferation with improved specificity remains the priority. The development of inhibitors is also frequently compromised by suboptimal pharmacokinetics. Evidence has recently emerged that simultaneous targeting of both DYRK1A and GSK3β may further benefit the restoration of the insulin-producing β-cell population. Moreover, recent studies on the DYRK family show a compensatory mechanism whereby DYRK1A and DYRK1B exert a synergistic effect on the proliferation of β-cells in mammalian cell culture models.

Diabetes

As mentioned above, several DYRK1A inhibitors are able to enhance β-cell proliferation and improve insulin secretion and glucose homeostasis [16]. The gold standard in this research, harmine, may increase human β-cell proliferation in culture by ca. 2%. Nevertheless, DYRK1A inhibitors including leucettine L41 and INDY show comparable proliferative potential, while 5-IT and GNF4877 were found to be 10-fold more potent. Therefore, inhibition of DYRK1A is an important mechanism underlying β-cell proliferation, emphasizing that the diabetic kinome is a key target for increasing the mitogenic activity of β-cells. Following DYRK1A inhibitor treatment, proliferation is enhanced by the induction and nuclear translocation of NFAT transcription factors that affect the cell cycle. Furthermore, it is suggested that DYRK1A inhibitors engage other targets involved in the stimulation of β-cell proliferation. Thus, the role of the diabetic kinome seems crucial for the future development of anti-diabetic strategies.
Several studies reveal that each of these DYRK1A inhibitors also inhibits other kinases, particularly members of the CMGC family, including (i) cyclin-dependent kinases (CDKs), (ii) mitogen-activated protein kinases (MAPKs), (iii) glycogen synthase kinase-3 (GSK3), and (iv) CDC-like kinases (CLKs), notably DYRKs, CLKs, and GSKs [149]. Notably, it can be speculated that each of them may be involved in human β-cell proliferation. Importantly, GSK3 (involved in insulin signaling and the replication of β-cells) may be recognized as the most prominent target of DYRK1A inhibitors, because DYRK1A functions as a priming kinase for GSK3 signaling, phosphorylating substrates in preparation for subsequent GSK3 phosphorylation. The interaction of DYRK1A inhibitors with GSK3β has been shown to lead to β-cell proliferation in rodents [3]. Furthermore, it has been suggested that dual-mode inhibition of DYRK1A and GSK3β may contribute to the efficacy of the aminopyrazine derivative GNF4877 [171]. GSK inhibitors (LiCl, 1-Akp) have also been shown to increase human β-cell proliferation from 0.17% to 0.71% [5]. However, DYRK1A inhibitors act in a dose-dependent manner, with proliferation peaking at the optimal inhibitor concentration and decreasing at higher doses; these results suggest interactions with other kinases/targets at higher doses [2,6,7]. Furthermore, off-target effects are not necessarily limited to protein kinases. 5-IT has also been found to be an adenosine kinase inhibitor, and its β-cell mitogenic capacity may be attributed to adenosine kinase inhibition [8]. It is also possible that DYRK1A inhibitors affect targets other than kinases. Harmine not only inhibits DYRK1A in human β-cells but also reduces the abundance of SMAD proteins [6]. In addition, it acts as an MAO inhibitor [16,149].
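The biphasic dose-response described above (proliferation rising to a peak at an optimal inhibitor concentration, then falling as off-target or toxic effects take over at higher doses) can be sketched with a toy two-term model. Everything here is an illustrative assumption, not fitted to any published data:

```python
import math

def proliferation_response(dose: float, ec50: float, tox50: float,
                           baseline: float = 0.2, max_gain: float = 1.8) -> float:
    """Toy biphasic dose-response (illustrative only, arbitrary % units).

    A Hill-like occupancy term (dose / (dose + ec50)) captures on-target
    DYRK1A engagement, while a tolerability term (tox50 / (dose + tox50))
    captures the loss of effect at high doses from off-target/toxic
    interactions. Their product yields a bell-shaped curve.
    """
    engagement = dose / (dose + ec50)
    tolerability = tox50 / (dose + tox50)
    return baseline + max_gain * engagement * tolerability

# For this functional form the response peaks at the geometric mean of
# the two half-maximal constants: dose* = sqrt(ec50 * tox50).
ec50, tox50 = 0.1, 10.0              # illustrative values
optimum = math.sqrt(ec50 * tox50)    # = 1.0 here
assert proliferation_response(optimum, ec50, tox50) > proliferation_response(optimum / 2, ec50, tox50)
assert proliferation_response(optimum, ec50, tox50) > proliferation_response(optimum * 2, ec50, tox50)
```

Maximizing the product term with respect to dose gives dose* = sqrt(ec50 * tox50), which is one way to see why widening the window between on-target potency and off-target toxicity both raises the peak response and broadens the usable dose range.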
Nevertheless, it can be stated that the mitogenic effects and enhanced proliferation mediated by DYRK1A inhibitors proceed via translocation of NFAT transcription factors to the nucleus, with the consequent transactivation of cyclins and CDKs (e.g., cyclin A) and repression of CDK inhibitors such as p15INK4, p21CIP1, and p57KIP2 [149]. Other possible DYRK1A-dependent mechanisms relevant to β-cell restoration are (i) phosphorylation and stabilization of p27KIP1; (ii) phosphorylation of D-cyclins and acceleration of their degradation; (iii) phosphorylation of the DREAM complex member LIN52, enforcing cell-cycle arrest; and (iv) phosphorylation of the tau protein crucial for AD [149]. All these data indicate that regulation of DYRK1A kinase activity is an important mechanism underlying human β-cell proliferation. Other potential kinases and therapeutic targets capable of enhancing β-cell mitogenic activity have also been indicated. Therefore, a better understanding of the diabetic kinome is crucial for the design and development of new, innovative, more potent, and selective small molecules.

Other Diseases

This review focuses on DYRK1A inhibitors developed for β-cell restoration and the treatment of diabetes. However, it is worth noting that the development of DYRK1A inhibitors may also be beneficial in the treatment of other diseases, including neurological disorders such as Alzheimer's disease (AD), Parkinson's and Huntington's diseases, and Down Syndrome (DS) [79], as well as cancer [20].

Neurological Disorders

DYRK1A has been implicated in neuronal development and many related signaling pathways. In DS, the triplication of chromosome 21 results in ca. 1.5-fold higher DYRK1A levels than in the general euploid population. This DYRK1A overexpression has been linked to the cognitive deficits associated with Down Syndrome [184].
Moreover, through hyperphosphorylation of tau protein (a hallmark protein of Alzheimer's Disease) and the formation of insoluble tau aggregates, DYRK1A is also involved in the neurodegeneration and neuronal loss appearing in AD [185,186]. Therefore, a therapeutic strategy for the cognitive deficits associated with DS, and ultimately AD, would involve controlled inhibition of brain DYRK1A activity [187]. Over the past few years, several DYRK1A inhibitors have been developed, most of which bind to the enzyme's ATP-binding site. However, there are selected exceptions, such as epigallocatechin gallate (EGCG), an allosteric inhibitor of DYRK1A that improves cognition in Ts65Dn mice (a well-established in vivo model for DS) [185,188]. It was also reported that Ts65Dn mice with a normalized DYRK1A gene copy number (two copies) were characterized by a decrease in (i) the senescent cell population in the hippocampus and cortex, (ii) cholinergic neurodegeneration, as well as (iii) levels of tau and of APP, which promotes the production of pathogenic Aβ, in comparison to Down Syndrome mice with three copies of DYRK1A [189]. These data indicate that DYRK1A inhibition and normalization of its level could reduce or delay AD neuropathology [189]. Cancer Both overexpression and downregulation of DYRK1A are associated with neurological defects, reflecting the extreme gene-dosage sensitivity of this protein. It was reported that DYRK1A can act as both an oncogene and a tumor suppressor [190]. DYRK1A works as a negative regulator of the cell cycle, and its dosage can direct cells toward proliferation or exit from the cell cycle. It may also promote the survival of malignant cells by inhibiting pro-apoptotic pathways, since the loss of DYRK1A can activate p53 (the increased degradation of DYRK1A caused by p53 activation is mediated by MDM2, which was found to interact with and ubiquitinate DYRK1A, ultimately leading to its proteasomal degradation) [191,192].
DYRK1A likely plays a tumor type-specific role, so whether DYRK1A inhibition would promote or inhibit tumor cell growth depends on the tissue type and tumor microenvironment. Although DYRK1A is most widely characterized for its role in brain development, it is overexpressed in various diseases, including many types of cancer, such as leukemia [193,194], pancreatic adenocarcinoma [195][196][197], and gliomas [198,199]. It was also reported that DYRK1A can positively regulate the STAT3/EGFR/Met signaling pathway in human EGFR wild-type NSCLC cells. In addition, DYRK1A inhibition (by siRNA or an inhibitor) increased the anticancer activity of AZD9291 (osimertinib, an EGFR inhibitor) in NSCLC cells [200]. Furthermore, it was reported that inhibition of DYRK1A destabilizes EGFR and reduces EGFR-dependent glioblastoma growth [201]. It was indicated that DYRK1A reduces the level of Cyclin D1 by phosphorylating it on Thr286, inducing the proteasomal degradation of Cyclin D1 and cell cycle arrest in the G1 phase. Furthermore, DYRK1A suppression can promote the degradation of EGFR and reduce the self-renewal capacity of glioblastoma cells [200,201]. Pozo et al. investigated the ability of harmine and INDY to inhibit GBM tumor growth and survival [201]. They suggested that DYRK1A functions upstream of SPRY2 to modulate EGFR lysosomal targeting. Phosphorylation of SPRY2 by DYRK1A decreases its inhibitory influence on FGF-induced MAPK activation. In glioblastomas, several members of the SPRY family are included in a transcriptome module associated with EGFR amplification status, suggesting that they could act as oncogenes. Thus, destabilization of EGFR by DYRK1A inhibition may be a potential therapeutic strategy for a subset of EGFR-dependent GBMs [201]. Another example is CX-4945 (silmitasertib), a casein kinase 2 inhibitor currently in clinical testing for various cancers [202].
It was subsequently found to also potently inhibit several members of the CLK and DYRK families, including DYRK1A, and was able to block DYRK1A-related tau phosphorylation in a mouse model of Down Syndrome. Summary Diseases related to diabetes and obesity are among the major threats to human life. According to the WHO, approximately 300 million people will be obese in 2035 [203]. This ever-increasing trend is difficult to prevent due to changing lifestyles around the world and the availability of energy-rich diets. In 2015 alone, more than 1.6 million human deaths were caused by hyperglycemia and diabetes. Type 2 diabetes is now treated with various pharmaceuticals, but in fact there is still no cure. Treatment of other types of diabetes relies solely on supplementing the body with external insulin. Despite significant advances in insulin-based and other therapies, patients with diabetes will continue to receive medication throughout their entire lives. This causes an enormous healthcare burden and limits patients' quality of life. The two main types of diabetes, T1D and T2D, share a similar mechanism of β-cell function failure via an insufficient mass of the endocrine pancreatic cell fraction. In T1D, this phenomenon is driven by an autoimmune assault against the body's own cells, while T2D is characterized by insulin resistance and a subsequent decrease in β-cell mass. In general, T1D and T2D are, by definition, conditions of blood hyperglycemia caused by total or relative deficits in β-cell mass. Existing therapies improve glycemic control but provide only temporary relief, with lifetime dependency. Several early prevention measures and strategies offered to patients with T2D can bring the disease under reasonable control and delay its clinical onset. Such interventions do not exist for T1D. As stated by the Global Report of the World Health Organization (WHO), T1D cannot be prevented with current knowledge.
Although effective approaches are available to prevent T2D, no cure for advanced disease is available. The optimal approach should reverse the pathologic changes to provide a cure, rather than lifetime pharmaceutical supplementation. Thus, finding an accurate cure for diabetes is of critical importance. Restoring metabolic homeostasis would free the patient from constant reliance on pharmaceuticals and glucose monitoring. Nevertheless, so far, all the possible therapies are only in very early preclinical stages. The treatment strategies rely mainly on promoting β-cell differentiation. This promising strategy requires selective alteration of cellular differentiation to obtain a new, regenerated population of β-cells. Unfortunately, direct alteration of transcription factors is complicated, and there is no efficient strategy to affect the pancreas selectively. Therefore, more upstream biomolecular targets are sought. Targeting protein kinases with small molecules is an irrefutably powerful tool for establishing therapeutic pathways and understanding disease mechanisms. In particular, the identification of DYRK1A, a crucial protein kinase implicated as a potential regulator of β-cells, highlights its potential application in diabetes. DYRK1A is involved in cellular processes related to the proliferation and differentiation of β-cells. Thus, DYRK1A is one of the most extensively studied targets for β-cell regeneration. The β-cell differentiation observed when DYRK1A kinase activity is modulated points to the possibility of using the "diabetic kinome" as a target for future DM therapies. Scientific investigations and the pharmaceutical industry have confirmed the role of DYRK1A kinase in various molecular processes. This review aims to highlight the knowledge and approaches pursued within the past few years.
The last five years have brought progress, and even more questions, about the standing of this approach across many scientific fields. We present recent developments in diabetic kinome inhibitors, with a particular focus on DYRK1A. We have paid particular attention in this review to the fact that no DYRK1A inhibitors, to date, have met the selectivity standards needed for use as probe molecules. Harmine, one of the most commonly used inhibitors in DYRK1A-related research, strongly cross-inhibits monoamine oxidase (MAO), which can cause adverse effects. This low selectivity also makes harmine unsuitable as a probe to test DYRK1A inhibition in cell lines. Efforts to eliminate the MAO inhibition while retaining DYRK1A inhibition led to the harmine derivative AnnH75. Another DYRK1A inhibitor, the green tea catechin epigallocatechin gallate (EGCG), was shown to correct cognitive deficits in Down Syndrome mouse models and humans. However, it also potentially has multiple targets (and correspondingly is under consideration for use in a broad range of disorders) and cannot be considered a DYRK-specific inhibitor. Thus, structural modifications may be introduced to achieve high selectivity. The identity of the so-called "gatekeeper" residue was recognized early as a principal determinant of inhibitor selectivity. This residue initiates the "hinge" segment that links the two folding lobes of protein kinases, and its side chain lies adjacent to ATP-competitive inhibitors that bind via hydrogen bonding to the hinge. In DYRK protein kinases, the gatekeeper is a phenylalanine, which simultaneously offers good opportunities for inhibitor design and for polypharmacology. Another opportunity for selectivity and favorable binding kinetics is covalent binding to sulfhydryl groups. The cysteine in the HCD (histidine-cysteine-aspartate) motif is the most prominent such target in DYRK1A. A third opportunity involves linking ATP-site inhibitors to peptides corresponding to substrate recognition sequences.
This allows for high potency and selectivity in research compounds. To briefly summarize all the DYRK1A inhibitors discussed in this review, their IC50 values, targets, and biological activity, together with future directions of development, are listed in Table 2. Key issues noted there include the development of strategies to target regenerative compounds selectively to the β-cell, and further optimization and elucidation of molecular mechanisms of action. Compared to the best-known inhibitor to date for increasing human pancreatic β-cell replication, the advantages of the newly identified fragments give us a privileged position in the race toward new therapeutics. Future studies should provide proof-of-concept that small-molecule-induced human β-cell proliferation is achievable with the use of regenerative medicine for diabetes therapy. The generation of iPSC-derived β-cells has been one of the most desired strategies, with several protocols already devised. Functional iPSC-derived β-cells bring real hope for diabetic patients who do not qualify for transplantation, with severe glycemic lability, recurrent hypoglycemia, and a reduced ability to sense symptoms of hypoglycemia (reduced hypoglycemia awareness). Providing an unlimited source of autologous, engineered cells from the somatic pool could significantly shift the availability of transplants from very limited to plentiful. Therefore, every finding and improvement in the long-term management of diabetes is of the highest value. Current knowledge on the transplantation of pancreatic islets tackles the severe problem of engraftment and stable implantation of the delivered cell mass into the organ. The importance of resolving this issue has been demonstrated broadly, and multiple methods have been proposed to alleviate the problem.
We would also like to propagate the term "diabetic kinome" within scientific terminology, to emphasize the synergistic action of multiple kinases in directing the molecular processes that underlie this particular set of diseases. The human kinome comprises over 500 kinases, responsible for virtually every biological function and regulatory process in the cell. Therefore, finding the optimal selectivity profile for kinase inhibitors is of essential importance. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations: sodium-glucose co-transporter-2; SMAD, a family of structurally similar proteins that are the primary signal transducers for receptors of the transforming growth factor-beta; SPRED2.
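One simple way to make "selectivity profile" concrete is a panel-based selectivity score: the fraction of profiled kinases inhibited beyond a threshold at a fixed compound concentration. The sketch below is illustrative only; the function name and panel values are our assumptions, not data from this review.

```python
def s_score(percent_inhibition, threshold=90.0):
    """Fraction of panel kinases whose inhibition at the test concentration
    exceeds `threshold` percent; lower values indicate higher selectivity."""
    hits = sum(1 for v in percent_inhibition.values() if v > threshold)
    return hits / len(percent_inhibition)

# Hypothetical single-concentration profiling data for a DYRK1A-targeted compound:
panel = {"DYRK1A": 98, "DYRK1B": 95, "CLK1": 92, "GSK3B": 40, "CDK2": 15, "MAPK1": 12}
print(s_score(panel))  # fraction of the panel inhibited beyond 90%
```

A perfectly selective probe would score 1/len(panel) (only the intended target is a hit); harmine-like compounds with strong off-target activity score higher.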
2021-08-28T06:17:22.323Z
2021-08-01T00:00:00.000
{ "year": 2021, "sha1": "5d9f82c4da95f8d22b31e3fb5c6d8fd09fa916ff", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/22/16/9083/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2bb6381b847bcfa0cb597a6ffc83a5dbd9fac5a1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253276091
pes2o/s2orc
v3-fos-license
Characterizations of Hyperideals and Interior Hyperideals in Ordered Γ-Semihypergroups We give some conditions on ordered Γ-semihypergroups under which their interior hyperideals are equal to the hyperideals. In this paper, it is shown that in regular (resp., intraregular, semisimple) ordered Γ-semihypergroups, the hyperideals and the interior hyperideals coincide. To show the importance of these results, some examples and conclusions are provided. Introduction and Preliminaries Heidari and Davvaz [1] gave the idea of an ordered semihypergroup in 2011. Connections between ordered semihypergroups were studied by Tang et al. [2]. For some works on ordered Γ-semihypergroups, we may refer to Ref. [3]. The general structure of factorizable ordered hypergroupoids is studied in Ref. [4]. Tang et al. [5] and Tipachot and Pibaljommee [6] combined fuzzy sets with ordered hyperstructures, proposed the concept of a fuzzy interior hyperideal, and proved some results. The notion of hypergroups was initially founded by F. Marty [7] in 1934. The notion of uni-soft interior Γ-hyperideals is investigated in Ref. [10]. Motivated by these studies, this note investigates the ordered Γ-semihypergroups whose interior hyperideals are equal to the hyperideals. We prove that in regular (resp., intraregular, semisimple) ordered Γ-semihypergroups, the concepts of interior Γ-hyperideals and Γ-hyperideals coincide. Let A and B be two nonempty subsets of H. We define, together with a partial order relation ≤, such that for any h, h′, x ∈ H and α ∈ Γ, we have Here, C ⪯ D means that for any c ∈ C, there exists d ∈ D such that c ≤ d, where ∅ ≠ C, D ⊆ H. Now, let Then, (H, Γ, ≤) can be called as follows: (1) regular (resp., intraregular) if K ⊆ (KΓHΓK] (resp., K ⊆ (HΓKΓKΓH]) for every K ⊆ H. Note that each hyperideal of an ordered hyperstructure H is an I-Γ-hyperideal, but an I-Γ-hyperideal need not be a hyperideal.
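For readability, the two defining containments in (1) can be written in display form; here (X] denotes, as is standard for ordered hyperstructures, the down-closure {h ∈ H : h ≤ x for some x ∈ X}, and the intraregular containment follows the usual convention:

```latex
\[
\text{regular:}\quad K \subseteq (\,K\Gamma H\Gamma K\,],
\qquad
\text{intraregular:}\quad K \subseteq (\,H\Gamma K\Gamma K\Gamma H\,],
\qquad \text{for every nonempty } K \subseteq H .
\]
```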
Define the hyperoperation c (as shown in Table 1) and a (partial) order relation ≤ on H as follows: In this note, we investigate the ordered Γ-semihypergroups whose interior hyperideals are equal to the hyperideals. Main Results This section aims to outline sufficient conditions for an I-Γ-hyperideal to be a Γ-hyperideal. We continue our study with the characterization of regular (resp., intraregular, semisimple) ordered Γ-semihypergroups in terms of I-Γ-hyperideals (Tables 2 and 3). Now, we set Now, let a ∈ (KΓK]. Then, a ⪯ kck′ for some k, k′ ∈ K and c ∈ Γ. By hypothesis, there exist h, h′ ∈ H and μ, λ, δ ∈ Γ such that a ⪯ hμaλaδh′. We have Conclusions This paper gives some conditions under which the I-Γ-hyperideals are Γ-hyperideals. By Theorems 1-3, we prove that in a regular (resp., intraregular, semisimple) ordered hyperstructure H, every interior hyperideal of H is a hyperideal. By Theorems 3 and 4, H is a semisimple ordered hyperstructure if and only if every interior hyperideal of H is idempotent. Our future work will concentrate on results related to the fuzzy interior hyperideals of ordered hyperstructures. Data Availability No data were used to support this study. Conflicts of Interest The authors declare that they have no conflicts of interest.
2022-11-04T18:32:21.441Z
2022-11-01T00:00:00.000
{ "year": 2022, "sha1": "c162e450053acb2d2d15e0e2d580d5318f84e77a", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/mpe/2022/2292712.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7e6815c63dd31348763a961aed2e3c635c0b50c2", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [] }
78417198
pes2o/s2orc
v3-fos-license
Real time B-Scan evaluation of Posterior chamber & extraocular pathologies of Eye & Orbit Objective: The aim of this study is to assess the usefulness and accuracy of high-frequency real-time ultrasound, using a non-dedicated all-purpose scanner, in detecting and characterising posterior chamber and extraocular pathologies in traumatic and non-traumatic eyes, and to establish the etiology of proptosis. Material & Method: A total of 138 cases (145 eyes) were included in the study. Ultrasound evaluation was performed in eyes with diagnosed or suspected posterior segment pathology, cases of trauma, suspected or diagnosed intraocular tumors / extraocular pathologies, and cases presenting with proptosis. Patients lost to follow-up and eyes with a normal scan were excluded. Supplementary investigations (CT, MRI) were done wherever needed. The B-scan findings were then correlated with the ophthalmoscopic/surgical findings, histology, or follow-up after treatment. Results: Distinction between intraocular and extraocular pathologies was achieved in 100% of cases. The commonest posterior segment pathology in non-traumatic eyes was retinal detachment, and in traumatic eyes vitreous haemorrhage, with diagnostic accuracies of 98.9% and 97.9% respectively. Retinoblastoma and pseudotumor (15%) were the commonest intraocular and extraocular mass lesions, respectively. Accuracy in localizing foreign bodies and dislocated lenses, in diagnosing retinoblastoma, and in diagnosing and characterizing thyroid orbitopathy, cavernous hemangioma, tumors, cellulitis, dermoids, cysticercosis and intraorbital cysts was excellent. The "triple wall" sign is typical of intraorbital hydatid cysts. Conclusion: B-mode real-time ultrasound using a non-dedicated all-purpose high-frequency transducer provides a non-ionizing, cost-effective, non-invasive technique with excellent image quality and high accuracy for the diagnosis and assessment of posterior segment and extraocular pathologies.
Introduction Starting with abdominal and obstetric applications, ultrasound has made an impact on virtually every area of the human body. The first application of ultrasound in ophthalmology, using one-dimensional A-scan, was by Mundt and Hughes in 1956 [1]. The B-scan (brightness modulation scan) was first introduced by Baum and Greenwood in 1958 [2]. When the ocular media are opaque, there is no optical means of assessing the posterior segment of the eyeball [3]. As technology improved, it became apparent that high-frequency ultrasound is an excellent tool for peering into the dark world behind opaque media, which precludes all optical techniques. As this was a pilot study in our region, we included all cases of posterior segment and extraocular pathologies. Here we present a few uncommon pathologies, along with common ones, and their B-scan appearances in our study. Material & Method This was a prospective, observational and descriptive study conducted at Pt. J.N.M. Medical College and Hospital, Raipur (C.G.). A total of 138 cases (145 eyes) were included in the study. An Aloka Prosound (SSD4000) colour Doppler machine with a 7.5 MHz linear transducer was used. The contact technique was used for scanning. All measurements were taken with electronic callipers. Supplementary investigations (CT, MRI) were done wherever needed. Eyes with diagnosed or suspected posterior segment pathology with opaque or clear media, cases of trauma, suspected or diagnosed intraocular tumors, suspected or diagnosed extraocular pathologies, and cases presenting with proptosis were included in the study, while patients lost to follow-up and eyes with a normal scan were excluded. B-scan findings were correlated with the ophthalmoscopic/surgical findings, histology, or follow-up after medical or surgical treatment.
The incidence (as percentages) of various posterior chamber and extraocular pathologies, and the sensitivity, specificity and accuracy of real-time ultrasound in their diagnosis, were calculated using standard formulae. Observation The 145 eyes (138 patients) included in the study were divided into 4 groups (Table No I). We found 62 posterior segment pathologies in blunt and penetrating trauma cases (Table No-III). There was considerable overlap between the pathologies seen with blunt and penetrating trauma. However, lens dislocation and edematous retinochoroid were seen only in blunt trauma, whereas intraocular foreign bodies and endophthalmitis were seen only in penetrating trauma. The commonest pathology in trauma was vitreous haemorrhage (23 eyes), followed by retinal detachment (12 eyes). International Journal of Medical Research and Review, available online at: www.ijmrr.in, 927 | Page. We found 14 cases of retinoblastoma, of which 11 were of the endophytic type; the others were exophytic, recurrent, or retinoblastoma with retinal detachment. Calcification was seen in all except the recurrent type. The most common extraocular pathology (Table IV) was pseudotumor, in 5 eyes (4 patients), followed by dermoid cyst in 4 eyes. Diagnosis on B-scan was confirmatory in cases of thyroid orbitopathy, cavernous haemangioma, optic nerve sheath meningioma, cellulitis (abscess), dermoid, conjunctival cyst and cysticercosis. In the other pathologies mentioned in Table VI, the B-scan diagnosis was doubtful and was correlated with and confirmed by other investigations (histology, CT, MRI) and follow-up (cases of pseudotumor responded to steroids). Metastases (3 eyes) were diagnosed in cases of AML, bilateral in one patient and unilateral in another. Two cases of cysticercosis were found in the medial rectus muscle and one in the lacrimal gland region. All three showed a characteristic pattern on B-scan and resolved completely after treatment with albendazole.
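The "standard formulae" referred to above are the usual 2×2 confusion-matrix definitions, comparing the B-scan finding for each pathology against the reference standard. A minimal sketch (the function name and counts are illustrative assumptions, not values from the study's tables):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and overall accuracy from a 2x2 table of
    B-scan findings vs. the reference standard (surgery/histology/follow-up)."""
    return {
        "sensitivity": tp / (tp + fn),                 # true-positive rate
        "specificity": tn / (tn + fp),                 # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),   # overall agreement
    }

# Hypothetical counts for one pathology across 145 eyes:
m = diagnostic_metrics(tp=47, fp=1, tn=96, fn=1)
print({k: round(v, 3) for k, v in m.items()})
```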
The most common cause of proptosis (Table V) was macrophthalmos (increased axial length of the eyeball), also known as pseudoproptosis (25%), followed by pseudotumors (15%) and metastasis. Other causes of proptosis were thyroid orbitopathy and parasitic infection, metastasis from leukaemia, pseudotumor in the form of bilateral dacryoadenitis with lid masses, plexiform neurofibroma, carotico-cavernous fistula, dermoid, hydatid cyst, fibrous dysplasia, intraorbital abscess, optic nerve neurofibroma, cavernous haemangioma and rhabdomyosarcoma. and Sen et al [11]. One falsely diagnosed case of tractional retinal detachment was found in the non-trauma, non-tumor group, which turned out to be a thick vitreous band due to chronic vitreous haemorrhage. A similar difficulty was also reported by Mc Nicholas et al [12], Azzolini et al [13] and Mc Quown [14]. Vitreous haemorrhage, vitreous membranes and posterior vitreous detachments: Fresh vitreous haemorrhage appeared as low to medium level echoes in the vitreous, and old vitreous haemorrhage was seen in the form of free-floating membranes or membranes attached to the retinal surface. Posterior vitreous detachment was seen as thin, low-reflective membranes which showed considerable after-movements on kinetic imaging. These findings were similar to those described by Coleman et al [5,6], Vashisht and Berry [8], OP Sharma [10], Bedi et al [15] and Aironi et al [16]. Our study shows a sensitivity, specificity and accuracy of 97.9% in diagnosing vitreous haemorrhage. The result was similar to that of the study by OP Sharma [10]. One case of vitreous haemorrhage was falsely diagnosed as tractional retinal detachment, and one case diagnosed as vitreous haemorrhage turned out to be endophthalmitis on follow-up.
Choroidal detachment: On B-scan, choroidal detachment presented as a uniformly thick biconvex membrane located peripherally, not attached to the optic disc, and showing no after-movement on kinetic imaging. The subchoroidal space was clear in serous detachment, while extensive bullous (hemorrhagic) choroidal detachment showed thick echoes in the subchoroidal space. The bullous (hemorrhagic) choroidal detachment was associated with a history of malignant hypertension and could be confused with a large choroidal melanoma. Similar findings were described by Kwong et al [17], Mc Nicholas et al [13], Chugh et al [18], OP Sharma [10], and Puodziuviene et al [19]. It was correctly diagnosed by B-scan ultrasound in all cases in the present study. Similar results were obtained by Kwong et al [17] and Chugh et al [18]. Asteroid hyalosis: Asteroid hyalosis was seen as multiple high-amplitude dot-like echoes in the vitreous, which showed considerable after-movements on kinetic scanning. There was a clear space between the echoes and the retina. We found 4 eyes with asteroid hyalosis, all in the older age group (>60 yrs). Similar findings were described by Bronson NR [20], OP Sharma [10], and Bedi et al [15]. Optic nerve drusen, papilloedema and optic neuritis: On B-scan ultrasound, drusen were seen as elevation at the optic nerve head with calcification. Papilloedema was seen as elevation of the optic disc, whereas optic neuritis was seen as thickening of the optic nerve in addition to the elevated disc. These findings are similar to those described by Bronson NR [20]. Endophthalmitis: Endophthalmitis appeared as intermediate-level, freely mobile internal echoes and membranes associated with thickening of the ocular coat (>3 mm). Similar findings were also described by Berrocal et al [21], Dacey MP et al [22] and Puodziuviene et al [19].
Trauma: Vitreous haemorrhage was the commonest finding in trauma cases (43.3%), followed by retinal detachment (26.4%). Other findings included choroidal detachment (7.5%), dislocated lens (7.5%), intraocular foreign bodies (7.5%) and endophthalmitis. Similar findings were described by Kwong et al [17], OP Sharma [10] and Aironi et al [16]. Foreign bodies were seen as small bright areas with marked reverberation artefacts posteriorly. Similar findings were described by Vashisht and Berry [8], McNicholas et al [13] and Chugh et al [18]. Carotid-cavernous fistula (fig. 2) presented with lid swelling and proptosis, with a history of blunt trauma 1 month earlier. B-scan showed an engorged, dilated superior ophthalmic vein with reversed pulsatile flow on colour Doppler, and multiple small dilated channels in the retrobulbar space. Contrast CT scan showed similar findings, along with a dilated ipsilateral cavernous sinus. Similar findings were described by Duan et al [23], Belden et al [24] and Lieb et al [25]. Of the three cases of cysticercosis, two extraocular ones were found in the medial rectus muscle and one in the region of the lacrimal gland. In the studies by Kaliaperumal et al [26] and Madigubba et al [27], intraocular cysticercosis was shown to be more common than orbital involvement. Subretinal cysticercosis showed less response to albendazole than extraocular cysticercosis, which responded well and resolved completely. Similar findings were mentioned by Prasad et al [28], Lombardo J [29] and Das et al [30]. Pseudotumors (idiopathic orbital inflammation) were seen as diffuse, ill-defined, homogeneously hypoechoic masses. These three cases (one bilateral and one unilateral) were diagnosed as dacryoadenitis. One of them showed extensive extraconal muscle involvement (fig. 3a,b).
CECT orbit showed enhancing soft-tissue-density masses involving the lid, lacrimal gland and retrobulbar space (fig. 3d). There was a prompt response to corticosteroids (fig. 3e), which confirmed the diagnosis. Similar findings were described by Mc Quown [14], OP Sharma [10] and Chaudhary et al [31]. Capillary hemangioma was found in 2 cases, presenting with a lid mass. Both patients were <5 yrs of age. They showed ill-defined, irregular masses of high internal echo reflectivity with foci of calcification and vascularity on Doppler. Similar findings were described by Berrocal et al [18] and OP Sharma [7]. One 35-year-old female presented with proptosis; on B-scan she showed a well-defined, homogeneously hypoechoic intraconal mass with no detectable colour flow on Doppler, diagnosed as cavernous hemangioma. Lieb WE [25] stated that in cavernous hemangioma, stagnant/slow blood flow is below the detection level and no flow is seen on colour Doppler, as in our case. Thyroid orbitopathy was found in 3 patients, with bilateral eye involvement. There was asymmetrical involvement, with the medial rectus being the most commonly involved muscle. All patients had raised thyroid status. Similar findings were described by McQuown [14], Dubey et al [32] and OP Sharma [10]. Dermoid cyst was found in 4 patients. Two patients presented with a lid mass at the superolateral margin, one in the superior orbital space and one in the lacrimal gland region. On ultrasound they showed cystic lesions with homogeneous low-level internal echoes. CT scan showed fat attenuation in the cystic lesion, confirming the diagnosis. Conjunctival cyst appeared as a well-defined anechoic cyst anterior to the eyeball, moving with the globe on dynamic scanning. We found one case of optic nerve meningioma, which showed a hypoechoic mass encasing the optic nerve. It was confirmed by CT scan and histopathological correlation. A similar finding was described by OP Sharma et al [10].
We found two cases of orbital metastasis in our study; both patients were below 10 yrs of age and were known cases of myeloid leukaemia. Involvement was bilateral in one case and unilateral in the other. In 2 eyes it was extraconal, while in one it was diffuse. On ultrasound, all 3 eyes showed homogeneously low internal reflectivity. Previous reports have shown neuroblastoma to be the most common source of orbital metastasis in children. Dubey et al [32] stated that in children the orbit is more frequently involved than the globe. Bianciotto et al [33] showed melanoma to be the most common primary for orbital metastasis. One case of plexiform neurofibroma was seen as a soft tissue mass over the eyelid with intraorbital extraconal extension through the superolateral margin. One case of solitary neurofibroma presented as a lid mass with no intraorbital extension. One case of rhabdomyosarcoma was seen as an extraconal hypoechoic mass with high vascularity on Doppler. One case of intraorbital abscess showed a rounded hypoechoic area with peripheral vascularity. Similar findings for the lesions described above are also mentioned by McQuown [14] and OP Sharma [10]. Two cases of intraorbital hydatid cyst in our study showed unilocular cysts with a well-defined "triple layer" wall. CT scan and MRI showed cysts with well-defined walls (fig. 4). Betharia et al [34] described a diagnostic "double wall" sign for intraorbital hydatid on ultrasound. Summary and conclusion: Real-time B-scan ultrasound proved to be essential for screening all eyes with opaque media and suspected posterior segment abnormalities. Ultrasound is an easy, quick, non-invasive, non-hazardous, well-tolerated, bedside procedure performed with no prior patient preparation.
Satisfactory evaluation of both intraocular and extraocular pathologies was achieved using a non-dedicated all-purpose high-frequency transducer, which is widely available in most radiology departments. Distinction between intraocular and extraocular pathologies was made in 100% of the cases. B-scan is highly accurate in diagnosing posterior chamber pathologies. Diagnostic difficulty arises only in eyes with multiple pathologies, dense membranes and focal retinal detachments, especially in differentiating focal retinal detachments from dense echogenic preretinal vitreous membranes. However, thorough scanning should be performed in all planes, and attachment sites should be looked for carefully, to reduce diagnostic errors. Ultrasound helps in localizing foreign bodies and dislocated lenses accurately. Ultrasound was 100% accurate in diagnosing retinoblastoma; however, all eyes with retinoblastoma should undergo CT scanning to determine the exact extent and any intracranial extension. Although CT scanning is essential for studying extraocular pathologies, tissue characteristics are better resolved by ultrasound, and vascularity can be assessed by Doppler. Ultrasound has a complementary role to CT in the evaluation of eyes with proptosis. B-scan ultrasound showed 100% accuracy in the diagnosis and characterization of cases of thyroid orbitopathy, cavernous hemangioma, optic nerve sheath tumor, cellulitis (abscess), dermoid, conjunctival cyst, cysticercosis and intraorbital cysts.
Identification of CDPKs involved in TaNOX7-mediated ROS production in wheat As critical sensors and decoders of the calcium signal, calcium-dependent protein kinases (CDPKs) have become a focus of current research, especially in plants. However, few resources are available on the properties and functions of the CDPK gene family in Triticum aestivum (TaCDPK). Here, a total of 79 CDPK genes were identified in the wheat genome. These TaCDPKs could be classified into four subgroups based on phylogeny, while they may be classified into two subgroups based on their tissue- and organ-specific spatio-temporal expression profiles, or into three subgroups according to their induced expression patterns. Analysis of the signal network relationships and interactions between TaCDPKs and NADPH oxidases (reduced nicotinamide adenine dinucleotide phosphate oxidases, NOXs), the key producers of reactive oxygen species (ROS), showed that there is complicated cross-talk between these two protein families. Further experiments demonstrated that two TaCDPK members, TaCDPK2/4, can interact with TaNOX7, an important member of the wheat NOXs, and enhance TaNOX7-mediated ROS production. All the results suggest that TaCDPKs are highly expressed in wheat with distinct tissue or organ specificity and stress-inducible diversity, and play vital roles in plant development and in the response to biotic and abiotic stresses by directly interacting with TaNOXs for ROS production. Introduction In multicellular organisms, including plants, calcium ion (Ca2+) is recognized as a vital and conserved secondary messenger that is necessary for signal transduction. As Ca2+ sensors and responders, calcium-dependent protein kinases (CDPKs/CPKs) are universally present in green algae, oomycetes, protists, and especially in higher plants, but absent in animals and fungi (Valmonte et al., 2014), while CDPK-related receptor-like kinases (CRKs), which share some conserved homology with the parent CDPKs, are observed only in plants (Hrabak et al., 2003).
For example, 34 AtCDPKs and 8 CRKs have been identified in Arabidopsis (Arabidopsis thaliana) (Yip Delormel and Boudsocq, 2019), 29 OsCDPKs in rice (Oryza sativa L.) (Asano et al., 2005), 42 ZmCPKs in maize (Zea mays), 44 BaCDPKs in banana (Musa paradisiaca), 30 PtCDPKs in black cottonwood (Populus trichocarpa) (Zuo et al., 2013), and 128 WmCDPKs and WmCRKs in watermelon (Citrullus lanatus) (Wei et al., 2019). CDPKs play important roles in many biological processes, such as growth and development, physiological regulation, and the response to biotic and abiotic stresses in plants. For example, in Arabidopsis, AtCPK1 was found to be involved in the regulation of cell death by phosphorylating the senescence master regulator ORE1, a NAC transcription factor also called AtNAC2/ANAC092 (Durian et al., 2020); AtCPK12 functions as a negative regulator of ABA signaling in seed germination and post-germination growth; and AtCPK33 plays an important role in strigolactone (SL)-induced stomatal closure (Wang et al., 2019a). In rice, OsCDPK5/13 participate, as negative regulators, in aerenchyma formation in roots (Yamauchi et al., 2017), while OsCPK12 has a positive effect in delaying leaf senescence and increasing potential productivity (Wang et al., 2019b). In okra (Abelmoschus esculentus L.), AeCDPK6 can prolong the full-blooming period by indirectly regulating hyperoside biosynthesis (Yang et al., 2020a). Moreover, AtCPK28 is not only involved in the regulation of stem elongation and secondary growth (Matschi et al., 2013), but also acts as a negative regulator playing a crucial role in immune signaling (Monaghan et al., 2014). Similarly, GmCDPK38 plays a dual role in coordinating flowering-time regulation and insect resistance in soybean (Glycine max) (Li et al., 2022a).
In addition, AtCPK5 directly phosphorylates AtLYK5, a lysin-motif receptor-like kinase, and regulates chitin-induced defense responses in Arabidopsis (Huang et al., 2020). In terms of abiotic stress, AtCPK12 is involved in plant adaptation to salt stress by regulating Na+ and H2O2 homeostasis; StCDPK32 positively modulates physiological properties and photosynthesis in response to salinity stress in potato (Solanum tuberosum) (Zhu et al., 2021); and overexpression of GmCDPK3 improved soybean tolerance to drought and salt stresses (Wang et al., 2019c). In contrast, PheCDPK22 functions as a negative regulator of drought stress in moso bamboo (Phyllostachys edulis) (Wu et al., 2020). CDPKs also participate in many biological signaling networks in plants, especially by interacting with the NOX (also called respiratory burst oxidase homolog, RBOH/Rboh) family proteins, the key producers of reactive oxygen species (ROS) in plants. For example, StCDPK5 directly activates and phosphorylates StRbohB in a calcium-dependent manner to regulate the oxidative burst for defense responses to pathogens (Kobayashi et al., 2007), and AtCPK5 phosphorylates AtRbohD and thereby enhances ROS production for defense responses and bacterial resistance (Dubiella et al., 2013). In addition, BnaCPK6L was reported to play an important role in ROS accumulation and hypersensitive response (HR)-like cell death by interacting with and phosphorylating BnaRbohD (Pan et al., 2019), and StCDPK23 may participate in the wound healing of potato tubers by regulating StRbohs for H2O2 production (Ma et al., 2022). Intriguingly, OsCPK12 promotes the tolerance of rice to salt stress by repressing the expression of OsRbohI and reducing the accumulation of ROS (Asano et al., 2012; Boudsocq and Sheen, 2013).
More importantly, MtCDPK5 can directly phosphorylate three Rbohs, MtRbohB, MtRbohC, and MtRbohD, which triggers immune responses that regulate rhizobial colonization in symbiotic cells of barrel medic (Medicago truncatula) (Yu et al., 2018a). Conversely, OsRbohH can be stimulated by two CDPKs, CDPK5 and CDPK13, for ROS production, which is essential for aerenchyma formation in rice roots (Yamauchi et al., 2017). In wheat (Triticum aestivum), our previous studies showed that TaCDPK13 directly interacts with and activates TaNOX7 for ROS production, contributing to plant fertility regulation and drought tolerance (Hu et al., 2020a). Beyond this, the NADPH oxidases TaNOXs, as key producers of ROS, play crucial roles in various biological processes in plants (Hu et al., 2018). All the results mentioned above prompted us to speculate that there may also be complicated interactions between TaCDPK and TaNOX family members in wheat. However, to date, only a few wheat CDPKs have been characterized with respect to their evolutionary (Geng et al., 2011) and expression characteristics (Martínez-Noël et al., 2007), and the functions and signal network relationships of wheat CDPK family genes involved in plant growth regulation and environmental stress response are still largely unknown. In the present study, comprehensive analyses based on bioinformatics approaches and experimental methods were performed to identify the wheat CDPK family genes and to characterize their functions and signal network relationships during plant development and stress response. Based on the results, the interactions between TaCDPK2/4/14/16/20/21 and TaNOX7 were further studied, and it was verified that TaCDPK2/4 can interact with TaNOX7 and that coexpression of TaCDPK2/4 with TaNOX7 enhances ROS production in plants. The results obtained here will largely broaden our understanding of the roles of TaCDPKs and the signal network relationships between TaNOXs and TaCDPKs in wheat.
Sequence retrieval and identification of the CDPK gene family in wheat We retrieved the potential sequences of CDPK members in wheat from the IWGSC (http://www.wheatgenome.org/, last accessed May 25, 2021), NCBI (https://www.ncbi.nlm.nih.gov/, last accessed May 20, 2021), and Ensembl Plants (http://plants.ensembl.org/Triticum_aestivum/Info/Index, last accessed May 20, 2021) websites, using well-known CDPK sequences as queries. We identified each CDPK member by predicting its conserved domains. For further information, we analyzed several physicochemical parameters, predicted the subcellular localization and the number of transmembrane helices, and performed amino acid sequence alignment (detailed information in Table S1 and Figure 1). Exon/intron structure analysis and chromosomal location The exon/intron diagrams of individual CDPK genes were obtained from the Gene Structure Display Server (http://gsds.cbi.pku.edu.cn) by aligning the coding or cDNA sequences with their corresponding genomic DNA sequences (detailed information in Figure 1C). The chromosomal distributions of the 79 candidate TaCDPK genes were displayed using TBtools software (https://www.yuque.com/cjchen/hirv8i/ra35nv). MCScanX and BLASTP were used to analyze gene duplication events of TaCDPK genes in the Triticum aestivum genome (Figure 2; see the detailed information about the synteny between homologous genes in Table S2). Prediction and functional analysis of cis-regulatory elements We selected the 2,000-bp genomic DNA sequences upstream of the transcriptional start sites of the TaCDPKs as promoter sequences to analyze the cis-acting elements using the PlantCARE database (http://bioinformatics.psb.ugent.be/webtools/plantcare/html/), according to the method we previously used (Hu et al., 2018).
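The promoter selection described above (the 2,000 bp upstream of each transcriptional start site) can be sketched programmatically. The snippet below is a minimal, strand-aware illustration only; the toy chromosome sequence and gene coordinates are hypothetical and do not correspond to real TaCDPK loci.

```python
# Minimal sketch: extract up to `length` bp immediately upstream of a
# gene's transcriptional start site (TSS), strand-aware. Coordinates are
# 1-based inclusive, as in common genome annotation (GFF3) conventions.
COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def upstream_region(chrom_seq: str, start: int, end: int, strand: str,
                    length: int = 2000) -> str:
    """Return up to `length` bp immediately upstream of the TSS, 5'->3'."""
    if strand == "+":
        # TSS is `start`; upstream lies to the left of it.
        lo = max(0, start - 1 - length)
        return chrom_seq[lo:start - 1]
    else:
        # TSS is `end`; upstream lies to the right, reported 5'->3'.
        return reverse_complement(chrom_seq[end:end + length])

# Toy example (hypothetical coordinates, not a real TaCDPK gene):
toy_chrom = "A" * 10 + "CCGT" + "G" * 6
print(upstream_region(toy_chrom, start=11, end=14, strand="+", length=5))  # prints AAAAA
```

The extracted sequences would then be submitted to PlantCARE (or a similar tool) for cis-element scanning, as in the Methods above.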
Signal network relationship analysis between the members of the CDPK and NOX families The signal network relationships between the members of the CDPK and NOX families were drawn using Cytoscape software and Adobe Photoshop, based on information from STRING (http://string-db.org/cgi/input.pl?sessionId=bdYxf9Fv5NiI&input_page_show_search=on) (see the detailed information in Figure 3 and Table S3). Firefly luciferase complementation imaging (LCI) assay To verify the interaction between TaNOX7 and TaCDPK2/4/14/16/20/21, the firefly luciferase complementation imaging (LCI) assay was performed according to a previously described method (Hu et al., 2020a), with TaCDPK13 as the positive control. First, we constructed the expression vectors TaCDPK2/4/14/16/20/21-cLUC (the C-terminal fragment of luciferase) and TaNOX7-nLUC (the N-terminal fragment of luciferase), and then transformed them into wild tobacco (Nicotiana benthamiana) leaves by the Agrobacterium-mediated transient transformation method mentioned above. After inoculation for 2 or 3 days, the chemiluminescence images and fluorescence intensity profiles were taken with a plant living imaging system (Lumazone Pylon2048B, Princeton). The primers used for vector construction are listed in the Supporting Information (Table S4). Bimolecular fluorescence complementation (BiFC) assay The bimolecular fluorescence complementation (BiFC) assay was performed with the TaCDPK13-TaNOX7 interaction as the positive control (Hu et al., 2020a), according to the method described by Walter and others (Walter et al., 2004). Figure 1 legend: Phylogenetic relationship, domain organization, and exon/intron structure analysis of CDPK family members in wheat. (A) The unrooted maximum-likelihood phylogenetic tree of TaCDPK family members was made with MEGA 6.06. Numbers above the nodes represent bootstrap values from 1,000 replications. (B) Domain organization of the TaCDPKs.
The logos of the domain organization were obtained from the EMBL-EBI and SMART websites and were amended with Adobe Photoshop CS6. The domains: V represents the variable domain; K the catalytic domain; I the auto-inhibitory domain; C the region of calcium-binding motifs (EF_hands). (C) The exon/intron structures of CDPK family genes in wheat. The numbers 0, 1, and 2 represent the phase of each intron in the sequence. The coding regions of the TaCDPK2/4/16 genes were cloned into the pSPYNE vector with the N-terminal gene fragment of the yellow fluorescent protein (nYFP), and TaNOX7 was cloned into the pSPYCE vector with the C-terminal gene fragment of the yellow fluorescent protein (cYFP). Then, the Agrobacterium-mediated transient transformation method was used to transiently coexpress TaNOX7-cYFP and TaCDPK2/4/16-nYFP in N. benthamiana leaves. The fluorescence in leaves was ultimately observed with a confocal microscope (A1R, Nikon, Tokyo, Japan). The primers used for vector construction are listed in the Supporting Information (Table S4). Co-immunoprecipitation (Co-IP) assays Total proteins were extracted from N. benthamiana leaves using a membrane protein extraction method with some modifications (Liu et al., 2016). The protein extracts were denatured and separated by SDS-PAGE (sodium dodecyl sulphate-polyacrylamide gel electrophoresis), and the gel was then stained with Coomassie Brilliant Blue. For the co-immunoprecipitation (Co-IP) assay, TaNOX7 (1047 bp)-GFP-tagged and TaCDPK2/4/13-6*Myc-tagged proteins were detected with a monoclonal anti-GFP antibody and an anti-Myc antibody (SA003; ABclonal, Wuhan, China), respectively.
HRP (horseradish peroxidase)-conjugated goat anti-mouse IgG antibody (SA003; ABclonal) and the antigen-antibody complexes were detected using an ECL protein gel blot detection kit (GE Healthcare Life Sciences, Beijing, China) and a Light-Capture system equipped with a CCD camera (ATTO, Shanghai, China), as described by Kobayashi and others (Kobayashi et al., 2007). The primers used for vector construction are listed in Supplementary Table S4. TaNOX7 (1047 bp) represents the truncated gene sequence from the start codon ATG to the 1047th base of TaNOX7; TaNOX7 (1047 bp) includes the conserved functional domain NADPH_Ox and the CDPK binding sites (Hu et al., 2018). Detection of ROS production Histochemical analyses of H2O2 accumulation in plant tissues were conducted with 3,3′-diaminobenzidine (DAB) using an Agrobacterium-mediated transient transformation system according to a previously described method (Kumar et al., 2014). The leaves of N. benthamiana, 2-3 days after agroinfiltration, were separated from the plants and placed in DAB staining solution in darkness at room temperature for several hours. After exposure to light for 2-3 h, the samples were immersed in bleaching solution (ethanol:acetic acid:glycerol = 3:1:1) and boiled in a water bath for 10-15 min. The bleaching process was repeated 2-3 times to obtain clearer photographs. Subcellular localization analysis The subcellular localization of TaCDPK2/4 and TaNOX7 was examined in N. benthamiana using an Agrobacterium-mediated transient transformation system according to the method with some modifications (Chen et al., 2009). The full-length open reading frames of the TaNOX7 and TaCDPK2/4 gene sequences were used to construct fusion expression vectors containing the gene sequences of GFP (green fluorescent protein) or mCherry (red fluorescent protein): pCAMBIA1301-2*35S-TaNOX7-eGFP and pCAMBIA131-2*35S-TaCDPK2/4-mCherry.
At the same time, the membrane protein AtCBL1n-eGFP was used as a positive control, and the constructed expression vectors were transformed into tobacco mesophyll cells by transient transformation mediated by Agrobacterium tumefaciens. After 60-84 h of co-culture, the leaves were isolated, and the subcellular localization of the proteins was observed by laser confocal microscopy (A1R, Nikon, Tokyo, Japan) at 488 nm (eGFP), 561 nm (mCherry) and 637 nm (chlorophyll) emission wavelengths. The primers used for vector construction are listed in the Supporting Information (Table S4). Figure 2 legend: Chromosomal locations of TaCDPK genes and their synteny in wheat. Chromosomal locations of TaCDPK genes and their synteny are illustrated by the circos diagram. Colored lines indicate similarity. The blue lines show the synteny between homologous genes on homologous chromosomes from different subgenomes, such as Chr1A, 1B and 1D; the green lines represent the synteny between homologous genes on non-homologous chromosomes from the same subgenome, such as Chr1A-3A and 1A-5A; the red lines show the synteny of homologous genes between non-homologous chromosomes from different subgenomes, such as Chr1A-3B and Chr1A-4D. Plant materials, treatments, and expression profile analysis Wheat (T. aestivum cv. Chinese Spring) seedlings grown in the field were harvested at different developmental stages and used for gene cloning and expression profile analysis.
For analysis of the inducible expression profiles of the CDPK genes, spikelets at the early stage of wheat flowering infected with Fusarium graminearum spores by single-floret inoculation, and 10-day-old hydroponic seedlings treated with 4°C, 200 mM NaCl, 20% polyethylene glycol 6000 (PEG6000), 100 mM methyl jasmonate (MeJA), 100 mM abscisic acid (ABA), 500 mM salicylic acid (SA), or 50 mM brassinosteroids (BR) for 0 h, 24 h and/or 48 h, and with 40°C for 0 h, 12 h and/or 24 h, were used as the materials for RNA extraction with RNAiso TM Plus (Takara, Dalian, China). In addition, tissue-specific and inducible expression profiles of TaCDPK genes in wheat were generated using bioinformatics methods based on the online database Genevestigator (https://genevestigator.com/gv/) and/or by quantitative real-time PCR (qRT-PCR) with TaActin (AB181991.1) and TaGAPDH (ABS59297.1) as internal transcript-level controls. All the results mentioned above are presented as heat maps or histograms. All expression levels represent the mean ± SD of data collected from three independent experiments, each with three or four replicates. The primers used for qRT-PCR are listed in the Supplementary data (Table S4). Identification of CDPK family genes in the wheat genome A Hidden Markov Model (HMM) search was performed to investigate and characterize the CDPK gene family in the wheat genome, and a total of 79 candidates were identified (Table S1). Homologous genes from different subgenomes (A, B, and D) were assigned the same number in the gene nomenclature owing to their similarity in gene structure and protein size (Figure 1). Intriguingly, the CDPK family genes in the wheat genome are regularly distributed across the chromosomes (Figure 2). Most of the predicted CDPK candidates are located on Chr 5, followed by Chr 2, 4, 1, 3, 6, and 7 in turn.
Moreover, the chromosomal location of CDPK25 is unclear and is referred to as Chr Un. Considering that the distributions of homologous genes (such as TaCDPK2A/B/D on Chr 2) are symmetrical across the subchromosomes, we speculate that the CDPK25 on Chr Un is actually located on Chr 6D. This prediction was further supported by cluster analysis using the protein sequences of the CDPK family as a reference (Figure 1). In addition, the asymmetrical distribution of CDPK4/16/18 between Chr 4B/D and 4A in Figure 2 implies that there are orientation errors in the chromosome localization. All these anomalies may provide references for the precise localization of the CDPK4/16/18-colinked genes. Gene structure and domain composition As can be seen in Figure 1C, the gene structures are quite diverse among the TaCDPKs, with different intron numbers and lengths, except for certain homologous genes from different subgenomes (A, B, and D). Except for the members TaCDPK6/20 (64 kb and 36 kb, respectively), the lengths of the TaCDPK genes vary from 2 kb to 8 kb. As shown in Figure 1B, almost all members of the CDPK family have four conserved domains, namely the N-terminal variable domain (V), the catalytic domain (S_TKc, also known as the kinase active region; K), the auto-inhibitory domain (I), and the calcium-binding domain (C) containing three or four EF_hand motifs. Based on their sequence homology, all 79 TaCDPKs could be divided into four subgroups, I, II, III, and IV. The domain composition of the proteins also differs between the subgroups. For example, the members TaCDPK22/1/10/13 in subgroups I and II have no variable domain (V); TaCDPK3 in subgroup III has only one degenerated EF_hand motif; and the variable domain (V) of TaCDPK11-5D is abnormally located on the C-terminal side.
As speculated above, the member TaCDPK25 mapped on Chr Un clustered together with its homologues TaCDPK25 from Chr 6A/6B, indicating that the gene TaCDPK25-Un may actually be present on Chr 6D. Surprisingly, members from different chromosomes with different serial numbers also clustered together, such as TaCDPK26-4D and TaCDPK27-5A. In addition, differently numbered members with different structures also grouped together first, such as TaCDPK28-3A/B and TaCDPK3-3D (Figure 1A). Taken together, the complexity of the gene and protein structures and the confusing clustering relationships imply complex evolutionary relationships and functional diversity among the CDPK family members. Tissue- and spatio-temporal-specific expression of CDPK family genes in wheat To clarify the tissue and spatio-temporal expression profiles of the CDPK family genes during wheat development, a set of microarray data for gene expression was obtained from Genevestigator v3 (Figures S1, S2). To simplify the phraseology in the following sections and optimize the graphs in this paper, the homologous genes located on the different subchromosomes (Chr A, B, and D) are referred to as TaCDPKx; for example, TaCDPK5-2A, -2B, and -2D were all named TaCDPK5. The expression levels of the TaCDPKs across 10 developmental stages and 43 tissues presented different expression patterns, with some genes dominantly expressed at a certain stage or in a certain tissue (Figures S1, S2). Comparison of Figures S1 and S2 showed that the tissue and spatio-temporal expression profiles of the 79 genes confirm and complement each other. For example, almost all the members in Figures S1, S2 were divided into the same two groups, I and II, except for TaCDPK17.
The members in group I are widely expressed across all developmental stages and most tissues, with the highest levels of TaCDPK26/27 in the endosperm at the dough development stage and of TaCDPK16 in the ovary at the inflorescence emergence (heading) and anthesis stages. Furthermore, TaCDPK9 in group I is expressed at its peak level in the awn, anther, and glume at the inflorescence emergence stage. TaCDPK4/10/12 are, in turn, expressed at the highest levels, with TaCDPK4 in the pistil, spikelet, and ovary, TaCDPK10 in the pericarp, and TaCDPK12 in the coleoptile and floret, while all of them are expressed at the highest level at the anthesis stage. The members in group II are expressed at low levels or restricted to a certain tissue or stage. TaCDPK24 is expressed exclusively in the anther at the inflorescence emergence (heading) stage, as is TaCDPK30 in the embryo at the inflorescence emergence stage. Besides these, there are also some contradictions between Figures S1 and S2. For example, in Figure S1, the expression peak of TaCDPK5 is in germinating seeds, that of TaCDPK11 at the anthesis stage, TaCDPK18 at the seedling stage, TaCDPK17 at the tillering stage, and TaCDPK3/14/28 at the stem elongation stage; in Figure S2, their expression peaks are, in turn, in the seedling, radicle, and shoot apex/ovary. Unsurprisingly, members with high protein sequence homology, such as TaCDPK26 and TaCDPK27, also have similar expression patterns, as do TaCDPK3 and TaCDPK28 in Figures S1, S2. The different results in Figures S1 and S2 may be attributed to the different experimental materials, indicating that the two datasets are complementary as well as mutually confirming. In order to further study the expression specificity of the TaCDPKs, tissue and spatio-temporal expression profiles were also generated for 21 tissues from 8 different developmental stages by qRT-PCR (Figure 4).
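Relative expression from qRT-PCR data of this kind, normalized to internal controls such as TaActin and TaGAPDH (see Methods), is commonly computed with the 2^-ΔΔCt (Livak) method. The sketch below is a generic illustration with hypothetical Ct values, not the authors' actual analysis pipeline; it averages the Ct of the two reference genes before normalization.

```python
# Minimal 2^-ΔΔCt sketch (Livak method) with two reference genes.
# All Ct values below are hypothetical, for illustration only.

def delta_ct(target_ct: float, ref_cts: list[float]) -> float:
    # Normalize the target Ct against the mean Ct of the reference genes.
    return target_ct - sum(ref_cts) / len(ref_cts)

def relative_expression(sample_target_ct, sample_ref_cts,
                        control_target_ct, control_ref_cts):
    # delta-delta-Ct = dCt(sample) - dCt(control); fold change = 2^-ddCt.
    ddct = (delta_ct(sample_target_ct, sample_ref_cts)
            - delta_ct(control_target_ct, control_ref_cts))
    return 2.0 ** (-ddct)

# Hypothetical example: a TaCDPK gene in treated vs. control tissue,
# each normalized against two reference genes (actin- and GAPDH-like).
fold = relative_expression(24.0, [20.0, 21.0],   # treated sample
                           26.0, [20.5, 20.5])   # control sample
print(round(fold, 2))  # → 4.0
```

A fold change above 1 indicates up-regulation in the sample relative to the control; replicate Ct values would normally be averaged before this calculation and the mean ± SD reported, as the authors do.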
Due to the low expression levels of TaCDPK6/11/18/23/24, as shown in Figures S1, S2, or the non-specific amplification of TaCDPK9/10/14/17/28/29/30, their expression profiles could not be obtained here. From Figure 4, we can see that every member of the TaCDPKs has its own expression pattern. For example, TaCDPK2, 4, and 7 are expressed at peak levels in the leaf at the seedling stage, in the sheath at the seedling stage, and in flag leaf 1 at the heading stage, respectively, although they are all highly expressed throughout the whole plant. Significantly, the expression peaks of nine TaCDPK members, TaCDPK3/5/12/16/19/20/21/25/26, are all in the flag leaves at the flowering stage. TaCDPK13 is mainly expressed in spikes at the milk and heading stages, and TaCDPK8 is expressed in all tissues without specificity. In addition, compared with Figures S1 and S2, the expression patterns of the TaCDPKs in Figure 4 are not always consistent. These differences are likely due to the different experimental methods, sampling periods, or growth conditions. Therefore, based on previous studies and a comprehensive analysis of Figures S1/S2 and Figure 4, we systematically illustrate the unique tissue and developmental expression profiles of the TaCDPK family members (shown in Table 1). Inducible expression profiles of TaCDPK family genes To further study the expression characteristics of the wheat CDPK family genes under suboptimal conditions, we carried out a comprehensive analysis using both the wheat microarray data in Genevestigator v3 (Figure 5) and qRT-PCR experiments (Figure 6). As can be seen in Figure 5, the CDPK genes show different inducible expression patterns in response to different biotic and/or abiotic stresses. According to the expression patterns in Figure 5, all the members can be simply classified into three groups: Group I, including TaCDPK1/2/4/7/12/14/15/18/21/25/26, most of which were upregulated under the biotic stresses except for downregulation under P.
graminis (Puccinia graminis) stress; Group II, including TaCDPK3/5/6/8/9/13/16/17/22/28, which were downregulated under F. graminearum (Fusarium graminearum); and Group III, including TaCDPK11/19/20/23/24/29/30, which showed no obvious changes under any of the biotic or abiotic stresses but were highly expressed during anther development. Interestingly, these Group III members also had lower tissue and spatio-temporal expression levels compared with the other members in Figures S1, S2. Furthermore, the high expression levels of TaCDPK2/7/12/15/25/26 in Group I were further verified under Fusarium head blight (FHB) stress in Figure 6B. In addition, the expression of most TaCDPK genes was significantly up-regulated by the hormone treatments MeJA and BR (Figure 6A), consistent with the finding that the hormone-responsive elements JARE and ABRE are distributed in almost all the TaCDPK gene promoters (Figure S3). Only a few members, such as TaCDPK27, responded clearly to SA and heat (Figure 6A). In order to dissect the possible functions of the TaCDPKs more intuitively, we list the specific expression of each member during plant development or under stress treatment in Table 1. Interaction and co-localization relationships between TaCDPK members and TaNOX7 Many studies have shown that the roles of CDPKs in plant growth regulation and various stress responses are closely associated with NOX-/RBOH-mediated ROS production in a Ca2+-dependent manner (Potocký et al., 2007; Potocký et al., 2012; Boisson-Dernier et al., 2013; Yamauchi et al., 2017). In addition, numerous studies have confirmed that CDPKs can directly interact with NOXs/RBOHs, and that both are synergistically involved in plant development and the response to environmental stress (Kobayashi et al., 2007; Dubiella et al., 2013; Majumdar and Kar, 2018).
Therefore, in order to obtain more insight into the function of the TaCDPKs, the network signal relationships between 26 members of the CDPK family and 9 members of the NOX family were obtained from STRING and drawn with Cytoscape and Adobe Photoshop (Figure 3). As expected, there are indeed complicated signal relationships between the CDPK and NOX family members (Figure 3). In addition, our previous research showed that TaCDPK13 could directly interact with and activate TaNOX7 for ROS production, playing a crucial role in plant development and stress tolerance (Hu et al., 2020a). Therefore, based on the results mentioned above, together with the subcellular localization information for the TaCDPKs (Table S1), we selected TaCDPK2/4/14/16/20/21 as representatives of the plasma membrane-, cytoplasm-, whole cell-, chloroplast-, and mitochondrion-localized members shown in Table S1, and analyzed the relationships between them and TaNOX7, with the physical interaction between TaNOX7 and TaCDPK13 as a positive control. The LCI assay showed fluorescence signals of different intensities, indicating that TaNOX7 interacts differently with TaCDPK2/4/14/16. The signals from TaCDPK4-TaNOX7 were the strongest, followed by TaCDPK2-TaNOX7 and TaCDPK16-TaNOX7, and there were no obvious signals between TaCDPK14/20/21 and TaNOX7 (Figure 7A). This indicates that TaNOX7 may interact with TaCDPK2/4/16, respectively, but not with TaCDPK14/20/21. Furthermore, as shown in Figure 7B, the BiFC experiment further verified that TaNOX7 can interact with TaCDPK2/4, but not with TaCDPK16. In addition, Co-IP assays confirmed this conclusion once again (Figure 7C). Moreover, the results of subcellular localization indicated that TaCDPK2/4 both co-localized on the cell membrane with TaNOX7 (Figure S4), which further supports the TaCDPK2/4-TaNOX7 interaction.
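The network diagram described above scales edge (line) widths with the STRING combined score, from a minimum of 1 to a maximum of 3. A simple linear rescaling, sketched below with made-up gene pairs and scores purely for illustration, is one way such widths could be produced before drawing the graph.

```python
# Map STRING-style combined scores (typically in [0, 1]) to line widths
# in [1, 3] for a network diagram. Pairs and scores are hypothetical.

def edge_width(score: float, smin: float, smax: float,
               wmin: float = 1.0, wmax: float = 3.0) -> float:
    if smax == smin:                  # degenerate case: avoid divide-by-zero
        return (wmin + wmax) / 2
    return wmin + (wmax - wmin) * (score - smin) / (smax - smin)

edges = {                             # hypothetical interaction scores
    ("TaCDPK4", "TaNOX7"): 0.95,
    ("TaCDPK2", "TaNOX7"): 0.80,
    ("TaCDPK16", "TaNOX7"): 0.50,
}
scores = list(edges.values())
smin, smax = min(scores), max(scores)
widths = {pair: round(edge_width(s, smin, smax), 2)
          for pair, s in edges.items()}
print(widths)
```

With this min-max rescaling the strongest edge always gets width 3 and the weakest width 1, so widths are comparable only within one diagram, which matches the per-figure "Min 1-Max 3" convention of the legend.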
Coexpression of TaNOX7 and TaCDPK2/4 promoted ROS production An increasing number of reports have shown that CDPK-mediated NOX activation promotes the production of ROS, which plays important roles in plants. Consistent with this, the results in Figure 8 show that the red-brown precipitates in the regions co-expressing cLUC-TaCDPK2/4 and nLUC-TaNOX7 were significantly more abundant than in the control group, implying that coexpression of TaNOX7 and TaCDPK2/4 promotes ROS accumulation in plant leaves. What, then, is the biological significance of the interaction between TaCDPK2/4 and TaNOX7? Figure 3 legend: The signal network relationships between the members of the CDPK and NOX families. The network signal relationships between the members of the CDPK and NOX families were preliminarily predicted on STRING (http://string-db.org/cgi/input.pl?sessionId=bdYxf9Fv5NiI&input_page_show_search=on), and the network signal diagram was drawn with TBtools, Cytoscape, and Adobe Photoshop. The edge (line) width is positively correlated with the combined score in STRING (Min 1-Max 3). To address this question, we constructed detailed tissue expression profiles of TaNOX7 and TaCDPK2/4 in wheat spikes from 12 different developmental stages (Figure 9A). As shown in Figure 9A, compared with flag leaves, TaNOX7 was expressed at a markedly higher level in the young panicles at all the examined stages. Unexpectedly, the expression levels of TaCDPK2/4 were much lower at each stage of young panicle development than in flag leaves, suggesting that there is no coexpression relationship between TaNOX7 and TaCDPK2/4 during wheat panicle development. Therefore, we further constructed tissue expression profiles of TaNOX7 and TaCDPK2/4 in six floral organs at the heading stage (Figure 9B). As shown in Figure 9B, the expression level of TaCDPK2 was still lower than that in flag leaves. Intriguingly, TaCDPK4 is expressed at its peak level in the pistils.
These results, together with the previous tissue expression and protein interaction data, led us to conclude that TaCDPK2 is mainly expressed in young leaves and flag leaves, and that its interaction with TaNOX7 might be involved in the vegetative and reproductive growth of plants, whereas TaCDPK4 is mainly expressed in the pistils, and its interaction with TaNOX7 might contribute to seed development. Wheat CDPKs are diverse in members and structures with a complicated evolutionary history In this study, a total of 79 CDPK family genes, which encode 30 TaCDPKs, were identified according to sequence analysis and domain composition (Table S1). Interestingly, not every protein has three homologous genes distributed on the subchromosomes A, B, and D. For example, TaCDPK6 includes two homologous genes (TaCDPK6-6A and TaCDPK6-6D), but TaCDPK1 has only one (TaCDPK1-4B) (Figure 1A). This means that their homologs on a certain chromosome might have been lost during long-term evolution and natural selection. In addition, homologous genes from different subgenomes, such as TaCDPK27-4D and TaCDPK28-5A in Figure 1A, clustered together first, which may be attributed to gene duplication and/or exon shuffling. Moreover, the loss of EF_hand motifs is also common in TaCDPK protein sequences. In fact, these anomalies probably reflect genovariation, including gene structural variation, rearrangement, DNA sequence loss, and transposon activation, which occurred frequently during genome polyploidization in plants (Jackson and Chen, 2010). Furthermore, the degeneration and/or loss of EF_hand motifs in TaCDPKs (such as TaCDPK3) supports the view of an evolutionary process from TaCDPKs to the CDPK-related receptor-like kinases, TaCRKs (TaCRKs possess the conserved domains of typical TaCDPKs but lack the EF_hand motifs).
In summary, the information above, together with the non-random distribution of TaCDPKs on the 21 chromosomes (Figure 2), suggests that wheat CDPKs underwent a complicated evolutionary history, which might have endowed the TaCDPK family with gene expansion, gene variation, and functional divergence, though further research is needed to confirm this. In addition, the analyses of gene structure ( Figure 1C), protein clustering ( Figure 1A), and gene karyotype ( Figure 2) indicate that the current mapping of TaCDPK25 may be mistaken and that a location on chromosome 6D may be more reasonable, which will provide a reference for the precise mapping of this gene and its linked genes.

[Figure legend abbreviations: Down-regulated; Up-regulated. F. gra., Fusarium graminearum; Flg22, flagellin 22; P. gra., Puccinia graminis; P. str., Puccinia striiformis; NaHS, sodium hydrosulfide; MeJA, methyl jasmonate; BR, brassinosteroid; SA, salicylic acid; ABA, abscisic acid.]

[Figure legend: The tissue-specific and spatio-temporal expression profiles of TaCDPKs in wheat by qRT-PCR. Expression analysis of TaCDPKs in 21 tissues from 8 different developmental stages of wheat by qRT-PCR. All expression levels represent the mean ± SD of data collected from experiments with three or four replicates each.]

Hu et al. 10.3389/fpls.2022.1108622

4.2 Wheat CDPKs exhibit great specificity in expression and play vital roles in both plant growth regulation and stress response

Specific expression is a common characteristic of the genes of a given protein family in plants, and it often reflects cross-talk and/or differences in the functions of the family members (Hu et al., 2018). In this study, we found that the expression patterns of the CDPK family members in wheat showed overall regularity together with individual specificity, indicating their functional synergy and specificity.
Firstly, in terms of tissue and spatio-temporal expression, 67.7% of the family members (TaCDPK2-5/7-13/15/16/19/20/22-27/29/30) were expressed at peak level in different reproductive organs, suggesting that they are probably all involved in the regulation of plant reproductive growth, although in different reproductive organs ( Figures S1-2, 4 and Table 1). For instance, the expression profile showing that TaCDPK13 is expressed at peak level in the anther at the anthesis stage is consistent with our previous result that TaCDPK13 functions in plant fertility (Hu et al., 2020a). Moreover, increasing reports have found that ZmCPK32, AtCPK32, and GmCDPK38, homologues of TaCDPK23, play important roles in modulating flowering time and pollen tube growth (Zhou et al., 2014; Li et al., 2018; Li et al., 2022a; Li et al., 2022b), implying that TaCDPK23 probably plays a crucial role in regulating the development of the spike/anther at the anthesis stage. OsCDPK1 was shown to play a functional role in rice seed development (Jiang et al., 2018).

[Figure legend: Inducible expression profiles of TaCDPKs in wheat by qRT-PCR. (A) Inducible expression patterns determined by qRT-PCR under cold (4°C), heat (40°C), 20% PEG6000, salt (200 mM NaCl), ABA (100 mM), SA (500 mM), and MeJA (100 mM) hormone treatments. The heat (40°C) treatment lasted for 12 h and 24 h, respectively, instead of the 24 h and 48 h indicated in (A). Ten-day-old hydroponic seedlings were used for the analysis. (B) Inducible expression profiles of TaCDPKs in wheat spikes treated with Fusarium graminearum spore suspension for 24 h. The expression level of every gene is the mean of results from three independent experiments, each with three or four replicates. CK: the control group; FHB: the experimental group treated with Fusarium graminearum spore suspension, associated with Fusarium head blight (FHB).]
As shown in Table S5, TaCDPK8 and OsCDPK1 are homologues with the highest identity (94.2%), which tempts us to speculate that TaCDPK8 performs functions similar to those of OsCDPK1 by regulating the development of the ovary/anther at the inflorescence stage in wheat. Secondly, under abiotic stresses, the expression of TaCDPK6-12 was sensitive to heat and clearly upregulated under heat stress, as was that of TaCDPK2/5 under cold stress, TaCDPK15 under drought stress, and TaCDPK15/25 under sodium hydrosulfide (NaHS) treatment ( Figure 5 and Table 1). Previous studies have shown that CDPK homologues perform versatile functions in plant responses to different abiotic stresses. For instance, StCDPK32 (Zhu et al., 2021), ZmCPK11 (Borkiewicz et al., 2020), AtCPK12, OsCDPK21 (Asano et al., 2011), and AtCPK3 (Mehlmer et al., 2010) were all required for plant adaptation or response to salinity stress. In rice, the expression level of OsCDPK13 also increased in leaf sheath segments subjected to cold stress (Yang et al., 2003). On the contrary, ZmCPK1 was identified as a negative regulator in cold stress signaling (Weckwerth et al., 2015). CsCDPK20 and CsCDPK26 might act as positive regulators in the response of tea plant (Camellia sinensis) to heat stress (Wang et al., 2018). In foxtail millet (Setaria italica), overexpression of SiCDPK24 enhanced drought resistance and improved the survival rate under drought stress (Yu et al., 2018b). Moreover, overexpression of GmCDPK3 also improved plant tolerance to drought as well as salt stress (Wang et al., 2019b). Besides these, OsCDPK1 also conferred drought tolerance in rice seedlings in addition to its function in seed development (Ho et al., 2013; Jiang et al., 2018). Based on the inducible expression profile of TaCDPK2 and its high identity with OsCDPK13 (95.9%), we speculate that TaCDPK2 perhaps plays a role in the plant response to cold stress. In addition, although the induced expression profiles (Figures 3, 7) did not give a clear picture of the response of TaCDPK27 to cold stress, the abundant cold-responsive elements in the promoter of TaCDPK27 ( Figure S3) and its high identity with ZmCPK1 (83.0% in Table S5) also suggest a potential role in the plant response to cold stress.

[Figure legend: Protein interactions between TaCDPKs and TaNOX7. (A) Verification of protein interactions between TaCDPK2/4/14/16/20/21 and TaNOX7 by the firefly luciferase complementation imaging (LCI) assay; (B) bimolecular fluorescence complementation (BiFC) assay showing the interactions between TaCDPK2/4/16 and TaNOX7; (C) interactions between TaCDPK2/4 and TaNOX7 confirmed by co-immunoprecipitation (Co-IP), in which input and immunoprecipitates were analyzed by immunoblotting with anti-GFP and anti-MYC antibodies.]

[Figure legend: TaCDPK2/4-TaNOX7 interactions enhanced ROS production in plants. Transient coexpression of TaNOX7 with TaCDPK2 or TaCDPK4 enhanced ROS production in the leaves of N. benthamiana. ROS accumulation was detected by DAB (3,3'-diaminobenzidine) staining. The in situ DAB staining intensity of the agroinfiltrated tobacco leaves was calculated relative to the staining intensity of the "cLUC + nLUC" control. Data are means ± SD (n = 10~15 leaves) from more than three independent experiments. Bars annotated with different letters represent values that are significantly different at P ≤ 0.05 according to one-way analysis of variance (ANOVA).]

FIGURE 9 Co-expression of TaNOX7 and TaCDPK2/4 in wheat. (A) Co-expression of TaCDPK2/4 and TaNOX7 in panicles from 12 different developmental stages; (B) co-expression of TaCDPK2/4 and TaNOX7 in six flower organs at the heading stage.
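The statistical comparison described in the figure legends above (relative DAB staining intensities compared by one-way ANOVA at P ≤ 0.05) can be sketched as follows. The F-statistic computation is the standard one, but the sample intensities below are hypothetical illustrations, not the measured data, and the function name is ours:

```python
from statistics import mean

def one_way_anova_f(*groups):
    """F statistic of a one-way ANOVA: between-group mean square / within-group mean square."""
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = sum(len(g) for g in groups) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# hypothetical DAB intensities relative to the "cLUC + nLUC" control (= 1.0)
control = [1.0, 1.1, 0.9, 1.0]
cdpk2_nox7 = [1.8, 2.0, 1.9, 2.1]
cdpk4_nox7 = [1.7, 1.9, 1.8, 2.0]
print(one_way_anova_f(control, cdpk2_nox7, cdpk4_nox7))
```

The resulting F value would then be compared against the F distribution with (df_between, df_within) degrees of freedom, followed by a post-hoc test to assign the significance letters shown on the bars.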
Thirdly, under biotic stresses, the expression levels of different groups showed different responses ( Figures 5, 6B and Table 1). For example, the members of group I (TaCDPK2/7/12/15/25/26) were upregulated, whereas those of group II (TaCDPK5/6/8/9/13/17/22/28) were downregulated, and those of group III (TaCDPK11/19/20/23/24/29/30) barely responded, under treatment with F. graminearum, the main pathogen of FHB. Meanwhile, TaCDPK4/10/12/14/15/25 were all downregulated under P. graminis infection, the main pathogen of wheat stem rust. TaCDPK5/10/15/25 were susceptible to P. striiformis (Puccinia striiformis, the wheat stripe rust pathogen), as were TaCDPK4/10/12 to flg22 (a pattern pathogenic elicitor from bacteria). These results suggest that members with different gene and protein structures may nevertheless have similar biological functions, such as TaCDPK15 and TaCDPK25 in response to F. graminearum as well as to P. graminis and P. striiformis. On the other hand, the same TaCDPK member may show different biological functions in response to different biotic stresses. For example, the expression level of TaCDPK12 was upregulated under F. graminearum or flg22 stress, but downregulated under P. graminis stress. To date, many CDPK homologs have been reported to be involved in plant immune processes. In Arabidopsis, AtCPK28, the homologue of TaCDPK6 (81.2% identity), acts as a negative regulator that continually buffers immune signaling by controlling the turnover of the plasma-membrane-associated cytoplasmic kinase BIK1, which is rate-limiting in pathogen-associated molecular pattern (PAMP)-triggered immunity (PTI) signaling (Monaghan et al., 2014). In addition, AtCPK5/CPK6 signaling pathways contribute to defense against Botrytis cinerea by promoting the biosynthesis of 4-methoxyindole-3-ylmethylglucosinolate and camalexin in plants (Yang et al., 2020b).
Finally, the induced expression profiles in Figure 6A indicate that TaCDPK1-5/7/12/16/19/22 were all sensitive to the hormone MeJA, consistent with the finding that most TaCDPK members harbor a MeJA-responsive element (JARE) in their promoters ( Figure S3). These results suggest that these TaCDPK family members are perhaps widely involved in MeJA-mediated signaling pathways, which play important roles in plant growth, development, senescence, and responses to biotic and abiotic stresses (Shu et al., 2020; Ma et al., 2021; Raza et al., 2021; Wei et al., 2021). Meanwhile, TaCDPK2/3/7/8/12/15/16/19-21/25/26 were all sensitive to the hormone BR, as were TaCDPK12/27 to SA and TaCDPK13 to ABA. Moreover, increasingly compelling evidence indicates that CDPKs are involved in many hormone-mediated signaling pathways in plants. In Arabidopsis, AtCPK6, the homologue of TaCDPK1 (70.3% identity), was demonstrated to function as a positive regulator in MeJA signaling in guard cells as well as in ABA-induced stomatal closure (Munemasa et al., 2011; Brandt et al., 2012). Another report showed that AtCPK12 negatively regulates abscisic acid signaling in seed germination and post-germination growth. In addition, AtCPK29 is involved in auxin efflux transport, polarity, and auxin responses by specifically phosphorylating target residues on the auxin efflux transporter (PIN) (Lee et al., 2021). In this paper, the expression level of TaCDPK13 in the anther at the flowering stage and its sensitivity to ABA (Table 1) were consistent with our previous study, in which TaCDPK13 played crucial roles in plant fertility and drought tolerance (Hu et al., 2020a). These results strongly support the idea that TaCDPK13 may be involved in the drought response via ABA-dependent signaling pathways.
The complicated interactions between TaCDPKs and TaNOXs perhaps play vital roles in plant development and the response to stresses by regulating ROS production

It is well known that ROS and Ca2+ are universal and important intracellular signaling molecules, and their homeostasis plays important roles in plant growth and development as well as in responses to biotic and abiotic stresses. More importantly, ROS and Ca2+, both as signaling messengers, engage in complex crosstalk during signaling. For instance, Ca2+ binding can activate CDPKs to phosphorylate the NADPH oxidase OsRbohB for ROS production (Kobayashi et al., 2007), which is necessary for Ca2+ influx, and the induced ROS in turn may trigger Ca2+ efflux from intracellular Ca2+ stores in vivo (McAinsh et al., 1996; Pei et al., 2000). It should be added here that NADPH oxidases (NOXs), mostly known as respiratory burst oxidase homologs (RBOHs), are the key producers of ROS in plants (Hu et al., 2020b). Moreover, both TaCDPKs and TaNOXs contain EF_hand motifs, which are calcium-binding domains and, in cooperation with other factors, activate the enzymatic activity of CDPKs and NOXs (Hu et al., 2020b). Therefore, there must be complicated interactions between the members of the CDPK and NOX families in plants. Previously, we found that TaCDPK13 can interact with TaNOX7 for plant fertility and drought tolerance (Hu et al., 2020a). Here, we found that two other members, TaCDPK2/4, can also directly interact with TaNOX7, and that coexpression of these CDPKs with TaNOX7 enhanced ROS production (Figures 7, 8). Intriguingly, a growing number of studies have reported that CDPK-NOX/RBOH interactions play important roles in plants by regulating ROS homeostasis. For example, OsRboh-mediated ROS production, which is induced by OsCDPK5/OsCDPK13, is essential for aerenchyma formation in rice roots (Yamauchi et al., 2017).
In addition, StCDPK23 may participate in the wound healing of potato tubers by regulating StRBOHs for H2O2 production (Ma et al., 2022). AtCPK5 phosphorylates AtRbohD and enhances ROS production for defense responses and bacterial resistance (Dubiella et al., 2013); BnaCPK6L phosphorylates BnaRBOHD and increases the accumulation of ROS and HR-like cell death (Pan et al., 2019). More intriguingly, OsCPK12 promotes the tolerance of rice to salt stress by repressing the expression of OsRbohI and reducing the accumulation of ROS (Asano et al., 2012; Boudsocq and Sheen, 2013). Therefore, CDPKs can regulate the activity of NOXs/RBOHs and thereby ROS homeostasis in plants, which serves diverse and vital functions. What, then, is the biological significance of the TaCDPK2/4-TaNOX7 interactions? The expression patterns and analyses in Figures S1/S2, 4-6, 9 and Table 1 show that TaCDPK2 is mainly expressed in young leaves and flag leaves and responds significantly to F. graminearum and cold stress, while TaCDPK4 is mainly expressed in the pistils and responds clearly to flagellin 22 (flg22). In addition, our previous results showed that TaNOX7 is expressed in almost all tissues of wheat and is highly sensitive to many stresses (Hu et al., 2018). TaCDPK13 can also interact with and activate TaNOX7 for ROS production, which enhances plant fertility and drought tolerance (Hu et al., 2020a). Based on all these results, we conclude that, as shown in the model in Figure 10, TaCDPK2/4-TaNOX7 interaction-mediated ROS homeostasis perhaps also plays crucial roles in vegetative and reproductive growth, seed development, and fertility in plants, respectively. In addition, these interactions perhaps also play essential roles in plant responses to biotic and abiotic stresses, such as cold and F. graminearum infection.
In summary, wheat has multiple CDPK members with diverse but vital functions in plant growth, developmental regulation, and stress responses. Every member of the TaCDPKs has its specific expression pattern and function. Moreover, the synergistic or antagonistic interactions between TaCDPKs and TaNOXs are complicated and play important roles by regulating ROS levels in plants, though their regulatory mechanisms and biological significance are still under investigation. Therefore, the results obtained here provide a valuable foundation for further exploring the functions and signaling pathways of CDPK superfamily members, and especially the interactions between TaCDPKs and TaNOXs, in wheat.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.

Author contributions

K-M C, L-L L, and K-S M proposed the concept and content. C-H H and B-B L wrote the manuscript. P C, H-Y S, and W-G X revised the manuscript. Y Z, Z-H Y, and H-X W helped with sample collection and experiments. All authors contributed to the article and approved the submitted version.

[Figure 10 legend: TaCDPK2/4/13-TaNOX7 interactions play crucial roles in plants by regulating ROS production. TaCDPK2/4 and TaCDPK13 (Hu et al., 2020a) can interact with and activate TaNOX7 for ROS production, which plays crucial roles in plant development.]
2023-01-23T14:40:24.458Z
2023-01-23T00:00:00.000
{ "year": 2023, "sha1": "4a8f6a4db9f929d0a0e9d075ea0837528db1a06d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "4a8f6a4db9f929d0a0e9d075ea0837528db1a06d", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
210488
pes2o/s2orc
v3-fos-license
Electronic correlations at the alpha-gamma structural phase transition in paramagnetic iron

We compute the equilibrium crystal structure and phase stability of iron at the alpha(bcc)-gamma(fcc) phase transition as a function of temperature, by employing a combination of ab initio methods for calculating electronic band structures and dynamical mean-field theory. The magnetic correlation energy is found to be an essential driving force behind the alpha-gamma structural phase transition in paramagnetic iron.

The properties of iron have fascinated mankind for several thousand years already. Indeed, iron has been an exceptionally important material for the development of modern civilization and its technologies. Nevertheless, even today many properties of iron, e.g., at high pressures and temperatures, are still not sufficiently understood. Therefore iron remains at the focus of active research. At low pressures and temperatures iron crystallizes in a body-centered cubic (bcc) structure, referred to as α-iron or ferrite; see Fig. 1. In particular, at ambient pressure iron is ferromagnetic, with an anomalously high Curie temperature of T_C ∼ 1043 K. Upon heating, iron exhibits several structural phase transformations [1,2]: at ∼ 1185 K to the face-centered cubic (fcc) phase (γ-iron or austenite), and at ∼ 1670 K again to a bcc structure (δ-iron). At high pressure iron becomes paramagnetic with a hexagonal close-packed structure (ε-iron). Density functional theory (DFT) in the local spin density approximation gives a quantitatively accurate description of the ordered magnetic moment and the spin stiffness of bcc-Fe [3], but predicts the nonmagnetic fcc structure to be more stable than the observed ferromagnetic bcc phase [4]. Only if the spin-polarized generalized-gradient approximation (GGA) [5] is applied does one obtain the correct ground state properties of iron [6].
Stoner theory of ferromagnetism [7] can give a qualitatively correct description of several magnetic and structural properties of iron, but predicts a simultaneous magnetic and structural change at the bcc-fcc phase transition with a local moment collapse while, in fact, the bcc-fcc phase transition occurs ∼ 200 K above T_C; see Fig. 1. Clearly, to account for finite temperature effects of itinerant magnets one requires a formalism which takes into account the existence of local moments above T_C. While the spin-fluctuation theory, which describes the paramagnetic state above T_C as a collection of disordered moments, gives an overall good qualitative explanation of the pressure-temperature phase diagram of iron [8], it fails to provide a reasonably quantitative description and, in particular, predicts the bcc-fcc phase transition to occur below T_C. The LDA+DMFT computational scheme [9], a combination of DFT in the local density approximation (LDA) with dynamical mean-field theory (DMFT) [10], goes beyond the approaches discussed above since it explicitly includes many-body effects in a non-perturbative and thermodynamically consistent way. LDA+DMFT was already used to calculate the magnetization and the susceptibility of α-iron as a function of the reduced temperature T/T_C [11]. The calculations gave overall good agreement with experimental data. The problem has been recently revisited by Katanin et al. [12], who found that the formation of local moments in paramagnetic α-Fe is governed by the e_g electrons and is accompanied by non-Fermi-liquid behavior. This supports the results obtained with the s-d model for the α-phase of iron [13]. A recent implementation of the LDA/GGA+DMFT scheme in plane-wave pseudopotentials [14,15] now allows one to investigate correlation-induced lattice transformations such as the cooperative Jahn-Teller distortion in KCuF_3 and LaMnO_3.
The method was not yet used to study structural phase transitions in a paramagnetic correlated electron system with temperature (or pressure) involving a change of symmetry. This will be the goal of the present investigation. In this Letter we employ the above-mentioned implementation of the LDA/GGA+DMFT scheme [14,15] to explore the structural and magnetic properties of paramagnetic iron at finite temperatures. In particular, we will study the origin of the α-γ structural phase transformation, and the importance of electronic correlations for this transition. We first compute the nonmagnetic GGA electronic structure of iron [16]. To model the bcc-fcc phase transition we employ the Bain transformation path, which is described by a single structural parameter c/a, the uniaxial deformation along the [001] axis, with c/a = 1 for the bcc and c/a = √2 for the fcc structure. Here the lattice volume is kept at the experimental volume of α-iron (a = 2.91 Å) [2] in the vicinity of the bcc-fcc phase transition, while the c/a ratio is changed from 0.8 to 1.6. Overall, the GGA results qualitatively agree with previous band-structure calculations [6]. In particular, the nonmagnetic GGA yields the fcc structure to be more energetically favorable than the bcc one (see Fig. 2). Next we apply the GGA+DMFT approach [14,15] to determine the structural phase stability of iron. For the partially filled Fe sd orbitals we construct a basis of atomic-centered symmetry-constrained Wannier functions [15]. The corresponding first-principles multiband Hubbard Hamiltonian has the form

  H = H_GGA + (1/2) Σ'_{i,mm',σσ'} U^{σσ'}_{mm'} n̂_{imσ} n̂_{im'σ'} − H_DC,   (1)

where the primed sum excludes the term with m = m' and σ = σ', n̂_{imσ} = ĉ†_{imσ} ĉ_{imσ}, and ĉ†_{imσ} (ĉ_{imσ}) creates (destroys) an electron with spin σ in the Wannier orbital m at site i. Here H_GGA is the effective low-energy Hamiltonian in the basis of Fe sd Wannier orbitals. The second term on the right-hand side of Eq. (1) describes the Coulomb interaction between Fe 3d electrons in the density-density approximation.
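As an illustration of the geometry, the Bain path described above can be sketched as a body-centered tetragonal cell whose c/a ratio is varied at fixed conventional-cell volume; the numbers below use the experimental α-iron lattice constant a = 2.91 Å quoted in the text, and the helper function name is ours, purely illustrative:

```python
import math

A_BCC = 2.91             # experimental alpha-iron lattice constant (Angstrom)
V_CELL = A_BCC ** 3      # conventional-cell volume, held fixed along the path

def bain_cell(ca_ratio):
    """Return (a, c) of the body-centered tetragonal cell with c/a = ca_ratio,
    keeping the conventional-cell volume V = a^2 * c fixed."""
    a = (V_CELL / ca_ratio) ** (1.0 / 3.0)
    return a, ca_ratio * a

# c/a = 1 recovers bcc; c/a = sqrt(2) is the bct representation of fcc
for ca in (0.8, 1.0, math.sqrt(2.0), 1.6):
    a, c = bain_cell(ca)
    print(f"c/a = {ca:5.3f}: a = {a:5.3f} A, c = {c:5.3f} A, a^2*c = {a*a*c:6.3f} A^3")
```

At c/a = √2 the tetragonal cell is equivalent to an fcc cell of lattice constant a√2, which is why a single parameter suffices to interpolate between the two structures.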
It is expressed in terms of the average Coulomb repulsion U and Hund's rule exchange J. In this calculation we use U = 1.8 eV, which is within the theoretical and experimental estimates of ∼ 1-2 eV, and J = 0.9 eV [17]. Further, H_DC is a double-counting correction which accounts for the electronic interactions already described by the GGA (see below). In order to identify correlation-induced structural transformations, we calculate [14] the total energy as

  E = E_GGA[ρ] + ⟨H_GGA⟩ − Σ_{m,k} ε^GGA_{m,k} + (1/2) Σ'_{i,mm',σσ'} U^{σσ'}_{mm'} ⟨n̂_{imσ} n̂_{im'σ'}⟩ − E_DC,   (2)

where E_GGA[ρ] denotes the total energy obtained by GGA. Here ⟨H_GGA⟩ is evaluated as the thermal average of the GGA Wannier Hamiltonian. The third term on the right-hand side of Eq. (2) is the sum of the Fe sd valence-state eigenvalues. The interaction energy, the fourth term on the right-hand side of Eq. (2), contains the correlation functions ⟨n̂_{imσ} n̂_{im'σ'}⟩, which are calculated in DMFT. The double-counting correction E_DC = (1/2) Σ_{i,mm',σσ'} U^{σσ'}_{mm'} n_{imσ} n_{im'σ'} corresponds to the average Coulomb repulsion between electrons in the Fe 3d Wannier orbitals calculated from the self-consistently determined local occupancies [18]. To solve the realistic many-body Hamiltonian (1) within DMFT we employ quantum Monte Carlo (QMC) simulations with the Hirsch-Fye algorithm [19]. The calculations for iron are performed along the Bain transformation path as a function of the reduced temperature T/T_C. Here T_C corresponds to the temperature at which the spin polarization in the self-consistent GGA+DMFT solution vanishes. We obtain T_C ∼ 1600 K which, given the local nature of the DMFT approach, is in reasonable agreement with the experimental value of 1043 K and also with earlier LDA+DMFT calculations [11]. We find that T_C depends sensitively on the lattice distortion c/a. It has a maximum value for the bcc (c/a = 1) structure and decreases rapidly for other values. In particular, for all temperatures considered here the fcc phase remains paramagnetic. In Fig.
2 we show the variation of the total energy of paramagnetic iron with temperature along the bcc-fcc Bain transformation path. The result exhibits two well-defined energy minima at c/a = 1 (at low temperature) and c/a = √2 (at high temperature), corresponding to the bcc and fcc structures, respectively. We find that for decreasing temperature the inclusion of the electronic correlations among the partially filled Fe 3d states considerably reduces the total energy difference between the α and γ phases. In particular, the bcc-to-fcc structural phase transition is found to take place at T_struct ∼ 1.3 T_C, i.e., well above T_C [20]. Our result for ΔT ≡ T_struct − T_C, the difference between the temperatures at which the magnetic transition and the structural phase transition occur, is in remarkable agreement with the experimental result of ΔT ∼ 200 K. This finding differs from conventional band-structure calculations, which predict the magnetic and structural phase transitions to occur simultaneously. Both T_struct and T_C vary sensitively with the value of the Coulomb repulsion U employed in the GGA+DMFT calculation. We find that T_struct increases for increasing U values, whereas T_C decreases, in agreement with the Kugel-Khomskii theory [21]. In addition, we performed LDA+DMFT calculations to determine the phase stability of iron at the bcc-fcc phase transition as a function of temperature. In contrast to the standard band-structure approach, where it is essential that the spin-polarized GGA is used to obtain the correct ground-state properties of iron, we find that both the LDA+DMFT and GGA+DMFT schemes give qualitatively similar results. In particular, both schemes find the bcc-to-fcc structural phase transition at ∼ 1.3 T_C, i.e., well above the magnetic transition. Explanations of the bcc-fcc structural phase transition and the fact that T_struct ≠ T_C obviously need to go beyond conventional band-structure theories.
This clearly demonstrates the crucial importance of the electronic correlations among the partially filled Fe 3d states. Next we perform a structural optimization and compute the equilibrium volume and the corresponding bulk modulus of paramagnetic iron (see Table I). The bulk modulus is calculated from the curvature (second derivative) of the total energy as a function of volume. We find that at the bcc-fcc phase transition the equilibrium lattice volume simultaneously shrinks by ∼ 2 %, a result which is in good agreement with the experimental value of ∼ 1 % [1]. The volume reduction is accompanied by an increase of the calculated bulk modulus. Overall, the equilibrium volume and bulk modulus computed by GGA+DMFT agree well with the experimental data [1,2,22]. Finally we compute the square of the instantaneous local moment, ⟨m_z²⟩ = ⟨(Σ_m [n̂_{m↑} − n̂_{m↓}])²⟩, of paramagnetic iron for the distortions c/a considered here. In Fig. 3 we show the result plotted for various temperatures. At low temperatures, the squared local moment depends quite strongly on the value of c/a, and is maximal in the bcc and minimal in the fcc phase, respectively. As expected, above T_C the squared local moment gradually increases with temperature and becomes essentially independent of c/a, as indicated by the curve for T = 3.6 T_C in Fig. 3 (we note that this is only a hypothetical curve since at such elevated temperatures iron is already in its liquid state). This finding has important implications for our understanding of the actual driving force behind the bcc-to-fcc paramagnetic phase transition. For this we note that the squared local moment ⟨m_z²⟩ determines the magnetic correlation energy −(1/4) I ⟨m_z²⟩, which is an essential part [23] of the total correlation energy of the Hamiltonian (1).
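The bulk modulus extraction mentioned above, B = V d²E/dV² evaluated at the equilibrium volume, can be sketched with a central finite difference. The harmonic E(V) below is a toy stand-in for the computed total-energy curve, not the GGA+DMFT data:

```python
def bulk_modulus(energy, V0, dV=1e-4):
    """Estimate B = V * d^2E/dV^2 at volume V0 by a central finite difference.
    `energy` is any callable returning the total energy as a function of volume."""
    d2E = (energy(V0 + dV) - 2.0 * energy(V0) + energy(V0 - dV)) / dV ** 2
    return V0 * d2E

# toy harmonic energy curve E(V) = k/2 (V - Veq)^2, for which B = Veq * k exactly
k, Veq = 2.0, 11.0
E = lambda V: 0.5 * k * (V - Veq) ** 2
print(bulk_modulus(E, Veq))   # approximately Veq * k = 22
```

In practice one would fit an equation of state (e.g. Birch-Murnaghan) to the E(V) points rather than differentiate them numerically, but the definition being evaluated is the same.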
At high temperatures, when the local moment is almost independent of c/a and the GGA+DMFT approach finds the fcc phase to be stable, the contribution of the magnetic correlation energy to the bcc-fcc total energy difference is seen to be negligible. This changes markedly when the temperature is lowered. Namely, upon cooling the contribution of the magnetic correlation energy gradually increases and becomes strong enough to overcome the DMFT kinetic energy loss E_kin = E_GGA[ρ] + ⟨H_GGA⟩ − Σ_{m,k} ε^GGA_{m,k} for the bcc phase as compared with the fcc phase. Thereby the bcc phase, with its larger value of the local moment, is stabilized at T < 1.3 T_C. We therefore conclude that the bcc-to-fcc paramagnetic phase transition is driven by the magnetic correlation energy. In conclusion, we employed the GGA+DMFT many-body approach to compute the equilibrium crystal structure and phase stability of iron at the bcc-fcc transition. In particular, we found that the bcc-to-fcc structural phase transition occurs well above the magnetic transition, and that the magnetic correlation energy is essential to explain this structural transition in paramagnetic iron. The above result, and those for the equilibrium lattice constant and the variation of the unit-cell volume at the bcc-fcc phase transition, agree well with experiment. We thank J. Deisenhofer, Yu
2011-03-21T12:50:40.000Z
2010-08-25T00:00:00.000
{ "year": 2011, "sha1": "f13dae39c5b754b5a643e0aee3a7898f27e53d62", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1008.4342", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d6e3bd54d964d4f730b6331db132ce25d06e2c69", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Materials Science", "Physics" ] }
211165619
pes2o/s2orc
v3-fos-license
Social Skills for Students in Helping Profession Working with Groups Under Risk

The lack of comprehensive information concerning the social skills of students in helping professions (psychologists, social workers, pedagogues, and special educators) imposes an important task on educational trainers. Students in training should learn appropriate communication skills for working with diverse vulnerable clients and communities in order to respond adequately to those in need. The data presented in this paper were obtained with qualitative and quantitative methods measuring empathy, altruism, and assertiveness in 450 psychology, pedagogy, social work, and special education and rehabilitation students (IRI Interpersonal Reactivity Index, Davis, 1996; Scale of Altruism, Raboteg-Šarić, 1993; and Scale of Assertiveness, Zdravković, 2004). The results showed that there is a positive relation between the level of empathy and altruism, and a negative relation between the level of empathy and assertiveness in students. In addition, there are significant differences with respect to birth order, gender, year of study, and the quality and quantity of the education in the field (practical work) in which students participated during their studies. The obtained results cannot be generalized to all helping professions because of the sample limitation, but they are significant for assessing the current state of the examined characteristics and for building a strategy for their improvement. At the same time, the results present a significant indicator supporting the idea of redesigning the current study programs so as to provide opportunities for present students to acquire the competencies needed for professional success.

Introduction

We live in times of change that cannot be stopped and that cause people to function differently, ranging from very effective to extremely ineffective.
In conditions of economic collapse, lower living standards, and limited employment opportunities, the opportunities for adequate youth development are diminished. The round-the-clock competition for subsistence and overwork also create the need to communicate with competent individuals who can provide assistance and support. Members of the helping professions stand out in this role; in addition to theoretical readiness and expertise, they are required to possess certain personal characteristics and specific skills for working with people. The helping professions include social workers, psychologists, pedagogues, special educators, educators, experts working with people with disabilities, sociologists, and health professionals. In order to be successful in providing professional assistance, these individuals need to possess characteristics such as good communication skills, conflict resolution and mediation skills, emotional stability and balance, mental and emotional maturity, good self-control, the ability to manage their own impulses, and knowledge and acceptance of their own needs, desires, and attitudes. They also need the skills to recognize the situation of others, to care for others, and to be prepared to provide assistance. Among these skills and characteristics, empathy, altruism, and assertiveness are the most essential for success in the helping professions. Many times in life people, especially members of the helping professions, whether in the private or professional sphere, can be exposed to discomfort. In such situations, adequate social behavior and communication with others are necessary. In those moments a person's rights may be compromised, or the person may be manipulated, labeled, or exploited psychologically and materially. In order to maintain an equitable relationship, a person must find a way to stand up for it; however, they need to know how to handle this properly, how to govern themselves, and how to express themselves.
Social skills

The most common understanding is that social skills are those skills we use in interaction with other people on an interpersonal level (Hargie, Saunders, & Dickson, 1994). According to Phillips (1978), a person is socially skilled when communicating with others in a way that fulfills their own rights, demands, and obligations together with those of others, and when they are prepared to share rights and requirements openly and without constraint. Another definition describes social skills as specific components of the processes that enable an individual to behave in a way that will be judged as competent: the abilities necessary to produce behavior that leads to the achievement of a goal that is part of a given task (Schlundt & McFall, 1985). These definitions highlight the macro elements of social behavior in terms of reciprocity, and they treat social skills as abilities that can be developed to a greater or lesser degree.

A review of the world literature on emotional intelligence, together with the research results on these constructs noted above, shows that the purpose of and need for research of this kind in our country is manifold. Although it is a relatively new construct, the literature on emotional intelligence is extensive; it is one of the most explored variables, attracting the attention of researchers interested in its contribution to greater life satisfaction, psychological adjustment, stress management, and improved work performance. Interest in the contribution of emotional intelligence to daily functioning is currently growing in these areas, but research that empirically examines the contribution of the perception, understanding, and regulation of emotions to life satisfaction, psychological well-being, and self-confidence is lacking. We therefore hope that our research will serve as a starting point for further research in this area.
Problem being investigated

The problem addressed in this research is to determine whether students in the helping professions possess the social skills necessary to work with at-risk children.

Subject of research

The main subject of this research is the social skills that students of the helping professions need in order to work with children at risk. The research should also give a picture of possible differences between students with respect to gender, year of study, and study group.

Respondents

The research was conducted on a sample of 134 respondents, students of the helping professions. The final sample on which the data were processed consisted of 52 male and 82 female respondents. The condition that had to be met for a questionnaire to be processed was the number of omitted responses: if the total number of omitted responses across all scales was less than 6, the questionnaire was eligible for data processing, and each omitted item was assigned the mean value of the corresponding scale. If one of the tests was omitted completely, the questionnaire was not included in the processing. The age of the respondents ranged from 19 to 25 years.

TMMS-30

The Trait Meta-Mood Scale (TMMS; Salovey, Mayer, Goldman, Turvey, & Palfai, 1995), based on the Salovey and Mayer model, is a self-report measure of what researchers call Perceived Emotional Intelligence (PEI), that is, the knowledge individuals have of their own emotional abilities, as opposed to their actual mental capacity. It measures three aspects of the thought processes that accompany mood states, called meta-mood experiences: Perception (the perceived ability to pay attention to one's own emotional states), Clarity (the perceived ability to clearly distinguish feelings), and Emotional Regulation (the perceived ability to regulate one's own emotional states and to "repair" negative moods).
These meta-mood dimensions are assumed to reflect a three-stage functional sequence: specifically, (1) some degree of attention to emotions is required (2) for a clear understanding of emotions and, consequently, (3) the capacity to regulate negative moods and emotions will not be possible without some degree of emotional clarity. Evidence for this proposed functional sequence has been found using analytic methodology (Martinez-Pons, 1997; Palmer, Gignac, Bates, & Stough, 2003).

Our research uses a shortened, revised three-factor version of 30 items which, unlike the original five-factor 48-item scale, is more practical and easier to use and interpret, being free of the items that showed low loadings in previous research. As the internal consistency of the subscales remained as high in the revised version as in the initial one (perception: α = 0.86; clarity: α = 0.88; regulation: α = 0.82), the TMMS-30 appears optimal for use in exploring perceived emotional intelligence.

The first factor in this scale is the perception of emotions (the degree of attention paid to emotions) and consists of 13 items, some positively and some negatively worded. The most positively loaded item is "I pay a lot of attention to how I feel", and the most negatively loaded item is "I do not pay much attention to my feelings". The theoretical range of this subscale is 13 to 65. The second factor is labeled clarity of emotions, since its most positively loaded item was "I am usually very clear about my feelings" and its most negatively loaded item was "I can't find any sense in my feelings". This dimension consists of 11 items, with a theoretical range of 11 to 55. The last factor is labeled mood regulation, because the items that load on it primarily relate to trying to "repair" a negative mood in order to maintain pleasant feelings.
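Subscale totals like the 13-65 and 11-55 ranges above are simple Likert sums with negatively worded items reverse-keyed. A minimal scoring sketch (not from the paper; the 1-5 response format and the 6 - x reversal rule are assumptions inferred from the stated theoretical ranges):

```python
def score_subscale(responses, reverse_keyed=()):
    """Sum a Likert subscale, reverse-keying negatively worded items.

    responses: integer answers on an assumed 1-5 scale (a 13-item
    subscale then spans 13-65, matching the ranges in the text).
    reverse_keyed: indices of negatively worded items, recoded as 6 - x.
    """
    rev = set(reverse_keyed)
    total = 0
    for i, x in enumerate(responses):
        if not 1 <= x <= 5:
            raise ValueError(f"response {x!r} is outside the 1-5 Likert range")
        total += (6 - x) if i in rev else x
    return total

# Hypothetical 13-item Perception subscale: all-midpoint answers give the
# scale midpoint, and the endpoints reproduce the 13-65 span.
print(score_subscale([3] * 13))                           # 39
print(score_subscale([1] * 13))                           # 13 (minimum)
print(score_subscale([5] * 13, reverse_keyed=range(13)))  # 13 after reversing
```

With this convention, a fully reverse-keyed maximal response collapses to the scale minimum, which is why negatively worded items must be recoded before summing.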
The highest positive loading is on the item "Although I am sometimes sad, I usually have an optimistic view", and the most negatively loaded item is "Although I sometimes feel happy, I usually have a pessimistic view". The additional items describe active mood-enhancing strategies. This dimension is made up of 6 items and theoretically spans 6 to 30.

The basic statistical indicators for the overall results of the scales and subscales used in this study are presented in Table 1. As can be seen, the measurement of emotional intelligence shows that this is a sample with relatively high EI (average and high scores). It also shows high regulation of emotions and differentiation between them, confirmed by the sample's scores on the emotional regulation and emotional clarity subscales. What is surprising is the high score on the dimension of emotion perception, since we expected this population to follow the Western functional trend of neglecting emotions when making critical life decisions and focusing only on cognitive reasoning. The population in this sample is probably still at an age when they are in contact with their emotions and, given the high scores on the other two subscales (emotional regulation and clarity), they involve emotions in decision-making and thinking. This is particularly important in situations that require us to make choices, as it helps us choose what we want rather than what is imposed or socially desirable.

Gender differences in the tested variables

To check whether the results of male and female respondents differ significantly, a t-test was conducted on all variables; the results are shown in Table 2. The t-test, also known as Student's test, is used to compare the means of two sets of quantitative data.
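A minimal sketch of such a two-sample t statistic (illustrative only; pure Python with made-up scores, not the study's data or code):

```python
import math

def students_t(a, b):
    """Independent-samples Student's t statistic with pooled variance.

    Returns t for the difference between the means of groups a and b,
    assuming equal variances (the classic two-sample t-test).
    """
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Unbiased sample variances.
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    # Pooled variance and standard error of the mean difference.
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(sp2 * (1 / na + 1 / nb))
    return (ma - mb) / se

# Made-up scores for two groups; |t| is compared with the critical value
# for na + nb - 2 degrees of freedom to decide significance.
print(round(students_t([1, 2, 3, 4, 5], [2, 3, 4, 5, 6]), 3))  # -1.0
```

The p-value quoted in the paper (p = 0.004) would come from comparing such a statistic against the t distribution with the appropriate degrees of freedom.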
In our study, the data in both samples were collected in the same way, under the same conditions, and with the same measuring instruments, so this test is well suited for comparing the data obtained from the male and female populations. What can be noted from the results in Table 2 is that a statistically significant gender difference is observed only in the variable perception of emotions (p = 0.004; p < 0.05). These results indicate that the female population is emotionally more intelligent than the male population, not only because of the significantly higher score on the dimension of emotion perception, but also because female respondents keep pace with male respondents on the dimensions of emotional regulation and clarity. This may have been a hypothesis for a long time, but with the growing interest in examining emotional intelligence and its contribution to everyday functioning, it has now been repeatedly confirmed, and the EI literature is full of proposed explanations for the higher scores of female respondents on EI measures. Such a conclusion is supported by a large body of research on gender differences in emotional aspects, which shows, for example, that women are more capable of decoding nonverbal emotional information (Brody & Hall, 2000), have greater emotional understanding (Ciarrochi et al., 2005), are more sensitive to the emotions of others (Hall & Mast, 2008), and are more expressive and exhibit greater interpersonal competencies. In addition, it is traditionally accepted that women are more familiar with the emotional world than men and may be biologically better prepared to perceive emotions. Baron-Cohen suggests that these differences between men and women may be explained by the "extreme male brain" theory of autism: men tend to systemize, while women tend to empathize and use emotions more frequently than men.
All of these findings and theoretical explanations can help explain why women achieve higher EI scores, including on the TMMS. Although numerous studies confirm that women are more emotionally intelligent than men, most of them analyze the relationship between gender and EI only superficially. While some studies have explicit hypotheses about this association, many treat sex as a secondary concern rather than as a primary variable that needs to be fully explored. Nevertheless, these studies indicate that women possess greater emotional abilities, which confirms the need to consider sex as an explanatory variable in the mechanisms of emotional functioning. Such a theoretical approach is problematic, however, as psychologists who deal with gender issues emphasize that sex by itself has no explanatory power in the absence of socio-demographic variables such as age or socioeconomic status; in fact, sex always operates in interaction with other variables.

Correlations between the examined variables

To find out whether there is a relationship between emotional intelligence and its components, Pearson correlation coefficients were calculated. As can be seen from the correlations given in Table 3, there is a strong correlation between most of the variables tested. What primarily interests us is the relationship between EI and its constructs, which is particularly important in determining the validity and reliability of the TMMS. The total score of emotional intelligence in our study correlated strongly with all three components of emotional intelligence (emotion perception r = .61; emotion clarity r = .70; emotion regulation r = .65; all with p < 0.001). This supports our assumption that adolescents and young adults with high expectations of their ability to understand and manage emotional experiences maintain more positive emotional states and are more satisfied with their lives.
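A Pearson coefficient like those reported above can be computed directly from its definition (an illustrative pure-Python sketch with made-up data, not the study's analysis):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Covariance term and the two standard-deviation terms (unnormalized;
    # the shared 1/n factors cancel in the ratio).
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfectly linear relationship gives r = 1; reversing it gives r = -1.
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 6))   # 1.0
print(round(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]), 6))   # -1.0
```

Values such as r = .61 to r = .70 sit well inside (-1, 1) and indicate strong, but not perfect, linear association between the subscales and the total score.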
Significant results and scientific contribution

Our research was designed to examine the associations of perceived emotional intelligence. Another aim was to determine whether there are gender differences in the correlations between the variables studied. Appropriate self-rating scales were used to collect data for each of the tested variables. From the statistical processing and analysis we obtained some results that replicate and confirm previous research from around the world, as well as new and unexpected results. Examining gender differences confirmed our expectation that female respondents have a more pessimistic view of the world and that they pay more attention to their feelings. We failed to statistically confirm a difference in emotional regulation and in the ability to differentiate between feelings, although we expected male respondents to stand out in that category. By examining the correlations between the individual variables, we found that the TMMS has strong intercorrelations between its internal scales; that is, the total score of emotional intelligence in our study correlated strongly with all three components of emotional intelligence (emotion perception r = .61; clarity of emotions r = .70; emotion regulation r = .65; all with p < 0.001).

Conclusion

When interpreting the results, it is necessary to pay attention to the limitations of the research. First, the data were collected from a convenience sample of students that is not representative of the population on any of the investigated variables, which limits the generalization of the findings. In addition, the measuring instruments used have some deficiencies, one of which is self-assessment: it is especially difficult to make statements about one's own feelings, since one is often unaware of how one feels or is reluctant to disclose it. The data collected are based on the respondents' own statements, so the answers may also be influenced by social desirability.
In the future, the use of a measuring instrument which, in addition to intrapersonal dimensions, includes interpersonal ones, or the use of an EI ability test such as the MSCEIT, could be considered. Another disadvantage is the sample size and the unequal distribution of male and female respondents; research of this kind should in the future be conducted on a larger sample with equal numbers of male and female respondents for a more reliable comparison of sex differences. One also has to take into account that this is correlational research, which cannot establish causal relationships among the variables. For a more detailed explanation of the variance in life satisfaction, we propose incorporating the variables of material and marital status in the future, and possibly the
Optimization of industrial production of rifamycin B by Amycolatopsis mediterranei. II. The role of gene amplification and physiological factors in productivity in shake flasks

Amplification of gene expression in the most productive colony type of Amycolatopsis mediterranei strain N1 under chloramphenicol stress resulted in the isolation of a variant, NCH, with a productivity of 2.56 g/l compared to 1.15 g/l for the parent strain N1 (a 2.2-fold increase). This amplified variant has the further advantage of reduced variation in colony morphology, with predominance of the most productive colony type. Using variant NCH, modification of the fermentation medium F1 by the addition of 0.1% yeast extract or the use of 1.8% KNO3 resulted in 3.8- and 5.8-fold increases in productivity, respectively, compared to strain N1. When the F1 medium was replaced by a new medium, F2, containing soytone instead of the particulate constituents (peanut meal and soybean meal), the yield of variant NCH reached 7.85 g/l (a 6.8-fold increase). Modification of the F2 medium by the addition of glycerol or the replacement of glucose by glucose syrup decreased rifamycin B production. Changing the concentration of soytone increased the yield only slightly, while replacing it with peptone or tryptone, or adding 1% corn steep liquor, failed to increase the yield. On the other hand, the addition of 0.1% yeast extract, or the replacement of 0.6% (NH4)2SO4 by 1.2% KNO3 or 0.4% NH4NO3 in the F2 medium, led to 8.2-, 10.2- and 10.4-fold increases in productivity, respectively, compared to the productivity of strain N1 in F1 medium. Changes in the concentrations of MgSO4 or CaCO3, the use of different types of antifoam and the use of higher concentrations of sodium diethyl barbiturate did not significantly influence the yield. These collective optimization attempts thus resulted in a 10.4-fold increase in productivity, from 1.15 to 11.99 g/l.
INTRODUCTION

Attempts have been made to improve fermentation and downstream processing parameters for better yields of rifamycins. Continuous efforts since 1960 have led to the development of several industrial strains of Amycolatopsis mediterranei, either with the ability to produce higher amounts of rifamycin B or as mutant strains that could directly produce active rifamycins and their derivatives (Chiao et al., 1988; Lal et al., 1995; Lancini and Hengeller, 1971; Lysko and Gorskaia, 1986; Ghisalba et al., 1982; Schupp and Divers, 1986). Gene amplification is of widespread occurrence in prokaryotic and eukaryotic organisms (Anderson and Roth, 1977; Schimke et al., 1982), where selection for increased gene dosage can be applied to generate strains that carry multiple copies of a gene and consequently high levels of gene expression products (Young, 1984).

*Corresponding author. Tel: (202) 336 3222. Fax: (202) 362 0122. E-mail: omtayeb@link.net.

Biotechnology production processes are the result of time-consuming, expensive research. For each producing strain, the medium and the other process parameters must be adjusted to allow maximal expression of the producing capacity. Such information is obviously industrial property, and for rifamycin the composition of the media actually used for industrial production is not published. However, some information can be extrapolated from published laboratory data and the patent literature, which give a fairly good idea of the most suitable ingredients and their concentrations (Lancini and Cavalleri, 1997; Pape and Rehm, 1985). Our previous attempts to improve the rifamycin B productivity of the industrial strain N1 by selection of the best-producing colony type and by modifying the fermentation medium F1 resulted in an increase in the yield from 0.5 to 2.92 g/l (El-Tayeb et al., 2004). This is much lower than an economically viable yield.
Therefore, in this study we tried two approaches: improvement of the producer strain by gene amplification, and the study of different physiological parameters of the process, including media constituents.

Bacterial strains

A. mediterranei RCP 1001 mutant strain N1 was obtained from El-Nasr Company for Pharmaceutical Chemicals, Egypt.

Chemicals

Chemicals used throughout this work were of laboratory reagent grade unless otherwise indicated. Glucose, KNO3, NH4NO3, NaNO2 and propylene glycol were products of ADWIC, Egypt. Sodium diethyl barbiturate (SDB) was a product of Grindstedvaerket A/S, Denmark.

Media

Tryptone, peptone, yeast extract, malt extract, beef extract, skim milk, soytone, and bacto agar were products of Difco Laboratories, Detroit, U.S.A. Corn steep liquor and glucose syrup were obtained from the Egyptian Co. for Manufacture of Starch and Glucose, Egypt. Oat flakes, soybean meal and peanut meal were obtained from local commercial suppliers.

Methods

The methods used for maintenance, propagation, selection, preparation of inoculum and production of rifamycin B in shake flasks, as well as for determination of the remaining glucose concentration and the assay of rifamycin B, were those previously reported by El-Tayeb et al. (2004). Yields of rifamycin B indicated are those obtained on day 8, unless otherwise stated. Biomass was determined by the dry cell weight method as described by Virgilio et al. (1964).

Gene amplification

Gene amplification was carried out as described by Kallio et al. (1987). A 5% v/v inoculum of strain N1 was added to a flask containing 100 ml of vegetative medium (V1) containing 15 µg/ml chloramphenicol. The shake flask was incubated for 3 days at 28°C. One ml of this culture was used to inoculate another flask containing 30 µg/ml of the antibiotic, which was incubated for 3 days at 28°C, and the procedure was repeated using 60 µg/ml chloramphenicol.
Several 1-ml aliquots of the last culture were transferred onto the surface of Bennett's agar plates containing 120 µg/ml chloramphenicol. These plates were incubated for 12-15 days at 28°C. Typical colonies (El-Tayeb et al., 2004) were selected and cultured onto the surface of Q/2 agar slants, incubated for 8 days at 28°C and used for propagation and selection. The variant obtained, NCH, was maintained as lyophilized material in skim milk.

RESULTS

The variant NCH was compared to strain N1 for productivity in F1 medium (Figure 1), producing 2.56 g/l compared to 1.15 g/l for strain N1. The pattern of productivity of both cultures was somewhat similar, with variant NCH showing a steeper increase between days 6 and 8. Both sugar consumption and pH were similar for the two strains. The variant NCH showed the same colonial morphology and microscopical characteristics as the parent strain N1 on Bennett's agar and in F1 medium; however, it showed less variation in colony morphology, with predominance of the most productive colony type.

Modification of the F1 medium was carried out by the addition of yeast extract and by changing the concentrations of glucose and KH2PO4, as well as by replacing (NH4)2SO4 with KNO3 (Figure 2). Addition of 0.1% yeast extract after 2 days of incubation increased rifamycin B production from 2.56 to 4.32 g/l (68%), while the use of 0.9 and 1.8% KNO3 markedly increased rifamycin B production from 2.56 to 6.33 g/l (2.5-fold) and 6.72 g/l (2.6-fold), respectively. Upon microscopical examination, it was observed that KNO3 decreased branching and fragmentation of the mycelia in the fermentation medium. On the other hand, the use of glucose in concentrations above or below 14% (the control) reduced the yield by 15-17% (Figure 2). In addition, the use of KH2PO4 in concentrations above 0.1% (the control) caused a marked decrease (37-45%) in the yield (Figure 2).
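The fold-change figures quoted throughout the results are simple yield ratios; as a quick sanity check of the values above (numbers taken from the text, the helper name is ours):

```python
def fold_increase(new_yield, base_yield):
    """Fold increase of a rifamycin B yield over a baseline (both in g/l)."""
    return new_yield / base_yield

# Reported values: variant NCH baseline 2.56 g/l; 0.9% and 1.8% KNO3
# gave 6.33 and 6.72 g/l respectively.
print(round(fold_increase(6.33, 2.56), 1))   # 2.5
print(round(fold_increase(6.72, 2.56), 1))   # 2.6
# Overall improvement over parent strain N1 (1.15 g/l) at the best yield:
print(round(fold_increase(11.99, 1.15), 1))  # 10.4
```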
Thus, these attempts at optimizing the fermentation process using variant NCH resulted in a significant increase in the yield from 2.56 g/l to a maximum of 6.72 g/l (2.6-fold). To further increase the yield of rifamycin B using variant NCH, we resorted to a radical modification of both the vegetative and the fermentation media. In addition to failing to reach the desired yield, F1 medium was a particulate medium in which biomass production could not be followed by either dry cell weight or turbidity. We thus shifted to a soluble medium, F2, and it was thought appropriate to also shift the vegetative medium to a particle-free one, V2. This approach resulted in an increase in rifamycin B production from 2.56 to 7.85 g/l (3.1-fold) compared to production by variant NCH on unmodified F1 medium (Figure 3).

Figure 3. Comparison between F1 and F2 media with respect to rifamycin B production (P), remaining glucose concentration (S), biomass (X) and pH by variant NCH.

Using F2 medium we noted that while biomass formation started immediately and reached its peak after 4 days, production started only on day 2 and reached its maximum on day 8. Further optimization of rifamycin B production by variant NCH in F2 medium included changes to most of its ingredients, their concentrations and the time of their addition; the results are presented in Table 1. In all cases, rifamycin B production, remaining glucose concentration, biomass and pH were followed over a time course. The carbon sources glucose syrup and glycerol; the nitrogen sources peptone, tryptone, corn steep liquor, soytone at concentrations below 3%, and various concentrations of (NH4)2SO4; as well as different concentrations of MgSO4 and CaCO3, all failed to increase the yield (Table 1 and Figures 4 and 5). Different concentrations of SDB and different types of antifoam gave comparable yields (Table 1).
On the other hand, the addition of 0.1% yeast extract after 2 days of incubation (Table 1) increased the yield from 7.85 to 9.49 g/l (21%), and the replacement of (NH4)2SO4 with 0.6-1.2% KNO3 or with 0.05-0.8% NH4NO3 increased rifamycin B production, with the highest yields achieved with 1.2% KNO3 and 0.4% NH4NO3.

(1) Glycerol added after 1 day of incubation.

DISCUSSION

Although our previous optimization attempts to improve the fermentation process using strain N1 in F1 medium increased rifamycin B production, the yield was too low for an economically viable technology, and some variation in colony types continued to be observed (El-Tayeb et al., 2004). To obtain higher yields and to overcome the problem of variation in colony types, another approach to strain improvement had to be tried. Gene amplification as a method of strain improvement had been used successfully with Bacillus subtilis producing α-amylase (El-Tayeb et al., 2000; Kallio et al., 1987). By applying gene amplification to strain N1 (exposing the parent strain to increasing concentrations of chloramphenicol), we achieved a 2.2-fold higher yield (Figure 1) with less colony variation and predominance of the most productive colony type. The increase in the production of rifamycin B after treatment of the parent strain with chloramphenicol could be explained by the finding of Lenski et al. (1994) that repeated subculture of a plasmid-containing strain under selective conditions eventually gave rise to a variant of the plasmid that was much more stable in the absence of selection than the original form. Salyers and Amabile-Cuevas (1997) suggested that exposure of a bacterium carrying a newly acquired plasmid or conjugative transposon to antibiotic concentrations high enough to be slightly selective but low enough to allow the bacteria to replicate could foster adaptive mutations that have the effect of fixing the element in its new host.
They further suggested that any gene carried on the plasmid would have the chance, during this period of selective pressure, to increase its expression level or to adapt to a better fit with its new host. On the basis of these explanations, one may suspect the possible presence of plasmids in colony type 1 that are involved in the biosynthesis of rifamycin. Although Ghisalba et al. (1984) reported that no plasmids had been isolated from Nocardia mediterranei ATCC 13685 and its mutants, and indicated that plasmids do not play a significant role in rifamycin biosynthesis, a genetically manipulated industrial strain such as strain N1 may have a different genomic constitution. However, it should also be noted that Kallio et al. (1987) studied α-amylase production by Bacillus subtilis in two different gene expression systems, in which the α-amylase gene was incorporated either into a plasmid or into the chromosome. They found that gene amplification by chloramphenicol resulted in higher amplification of the chromosomally encoded gene copies for α-amylase than in the plasmid-containing or parental strains. Further genetic studies on variant NCH are needed in order to determine whether the amplification of the genes for rifamycin B biosynthesis is plasmid and/or chromosomally encoded.

Figure 5. Effect of the addition of 6% glycerol after 1 or 3 days and of 1% corn steep liquor after 1 day to F2 medium on rifamycin production (P) and pH by variant NCH.

Optimization of the process using variant NCH in F1 medium by the addition of yeast extract, changing the concentrations of either glucose or KH2PO4, or replacing (NH4)2SO4 with KNO3 gave results similar to those previously obtained with strain N1 (El-Tayeb et al., 2004). When glucose concentrations above or below the control were used, the yield decreased by 15-17% (Figure 2).
In conclusion, optimization of rifamycin B production by variant NCH using a modified F1 medium increased the yield from 2.56 g/l to a maximum of 6.72 g/l (2.6-fold). To further increase the yield of rifamycin B we shifted to the soluble media V2 and F2, recommended by Lee and Rho (1994). This approach resulted in a 3.1-fold increase in rifamycin B yield by variant NCH, from 2.56 to 7.85 g/l, compared to production in the unmodified F1 medium. This increase in yield was associated with a marked increase in the apparent rate of glucose consumption until day 4, along with a decrease in pH from day 2 to day 4 (Figure 3). After day 4, the apparent rate of glucose consumption decreased with F2 medium, along with a rise in pH from 6.5 to 7.7, while a higher remaining glucose concentration was still available with F1 medium. The rise in pH after day 4 is possibly due to the disappearance of carbohydrates and the metabolism of the accumulated intermediate organic acids, as well as the slow release and metabolism of nitrogenous materials from proteins. Lee et al. (1983) concluded that the utilization of glucose in the idiophase is an influencing factor for rifamycin B production, since almost all the carbon units in the rifamycin B molecule are derived from glucose. They added that the optimal pH in the idiophase was found to be in the range of 7.0 to 7.5.

Further attempts to optimize rifamycin B production by variant NCH in F2 medium were carried out by changing most of its ingredients, their concentrations and the times of their addition. The fermentation process lasts 8 days, and a consistent lag time of about 48 h was observed before the beginning of antibiotic production. Ghisalba et al. (1984) mentioned that the long lag phase observed during the fermentation of A. mediterranei is one of the problems encountered in rifamycin B production; however, they did not offer any suggestion as to how this problem might be overcome.
Replacing glucose with the less expensive glucose syrup (5-30%) decreased the yield. The use of 5, 10 and 15% glucose syrup decreased biomass until day 5 and consequently antibiotic production by 19 to 22%, along with high pH values (above 7.5) in the trophophase (Figure 4). Taking into consideration that the glucose syrup used was prepared by acid hydrolysis of starch and hence contained mainly dextrins, maltose and a low content of glucose, this rise in pH may point to a possible establishment of a critical balance in the utilization of glucose, dextrins, and amino acids released from soluble proteins of the medium as energy sources, with dextrins being least favored. With such a possibility, a slight rise of pH could be expected as a result of the release of ammonia from amino acids along with continuous consumption of intermediate organic acids produced during metabolic activity. However, the higher concentrations of glucose syrup (20-30%) markedly decreased the yield, by 55 to 82%. It is possible that these high concentrations of glucose syrup might contain some undesirable ingredients, such as hydroxymethylfurfural, which led to inhibition of rifamycin B biosynthesis even though the results do not suggest inhibition of biomass production (Figure 4). It is interesting that with all the tested concentrations of glucose syrup, the pattern of remaining reducing sugars was somewhat similar and differed from the control, showing a slower rate of carbohydrate utilization. It seems that the organism continuously replenishes the reducing sugars by hydrolysing higher saccharides. Since A. mediterranei can utilize either glucose or glycerate as a precursor for the biosynthesis of 3-amino-5-hydroxybenzoic acid, which is the chain initiator molecule in the biosynthesis of rifamycins (White et al., 1974), we used a supplement of 6% glycerol to the F2 medium containing 12% glucose after 1 or 3 days of incubation (Figure 5).
The addition of glycerol after 1 day was associated with reduced biomass during the idiophase and somewhat lower rifamycin B production, while its addition after 3 days led to a similar reduction in biomass but a yield comparable to the control. As for the content of organic nitrogen sources in F2 medium, soytone concentrations of 3% and higher gave comparable rifamycin B production, while lower concentrations resulted in some reduction of the yield (Table 1). Replacement of soytone by other enzymatically hydrolyzed proteins such as tryptone and peptone (Table 1) slightly decreased the yield. The addition of 0.1% yeast extract after 2 days increased the antibiotic yield by 21% (Table 1). In contrast, the use of 1% corn steep liquor after 1 day decreased the yield along with a marked decrease in pH during the idiophase, while the biomass was comparable to the control (Figure 5). Yeast extract here is regarded more as a source of cofactors than of nitrogen nutrients. In one sense corn steep liquor is a comparable substrate. However, since it did not produce the same effect, one may assume that corn steep liquor acts as a nitrogen source but not as a source of stimulatory cofactors such as the B-factor present in yeast extract (Kawaguchi et al., 1984, 1988). This conclusion should be taken in conjunction with the observed lower pH which corn steep liquor produced after day 4, which is not favorable for antibiotic production (Lee et al., 1983). In this respect, Krishna et al. (2000) reported that some organic nitrogen compounds do not stimulate rifamycin SV production because of feedback effects which are strain specific. As for inorganic nitrogen sources, 0.4% (NH4)2SO4 led to a rifamycin B yield comparable to the control (0.6%), while higher concentrations (0.8-1%) resulted in a slight reduction in yield (Table 1). Lee and Rho (1994) suggested that high concentrations of ammonium ion repressed rifamycin B production.
Since nitrate stimulates rifamycin production by its regulatory effect on lipid and rifamycin biosynthetic pathways (Rui-Shen et al., 1979), we replaced (NH4)2SO4 in F2 medium with different concentrations of KNO3 and NH4NO3. The use of 0.6, 0.9 and 1.2% KNO3 in F2 medium led to an increase in rifamycin B production by 12, 13 and 50%, respectively (Table 1 and Figure 6). It is to be noted that 1.2% KNO3 contains nitrogen equivalent to the 0.6% (NH4)2SO4 present in F2 medium (control). A similar increase had previously been observed when using KNO3 in F1 medium with variant NCH (Figure 2) and strain N1 (El-Tayeb et al., 2004). In contrast, the use of 1.8% KNO3 led to a 42% decrease in rifamycin B production, along with an increase in pH observed over the entire time course of the fermentation, which also affected growth. When (NH4)2SO4 was replaced by 0.05-0.8% NH4NO3 (Table 1), rifamycin B production increased from 7.85 g/l to a maximum yield of 11.99 g/l (53%). All the tested concentrations also showed almost the same pattern of glucose consumption and of biomass production, but slight differences in pH patterns. This is in contrast with the findings of Lysko and Gorskaia (1986), who tested different inorganic nitrogen sources and found that (NH4)2SO4, but not NH4NO3, NH4Cl or NaNO3, provided optimum pH levels for antibiotic production. This disagreement may be due to differences in strains, since Rui-Shen et al. (1979) reported that some strains did not utilize nitrate as a nitrogen source and that the effect of nitrate is strain specific. Although SDB was reported to act as an activator or an inhibitor of certain enzymes associated with rifamycin B production, causing a shift toward the production of rifamycin B (Lal et al., 1995; Mejia et al., 1998), concentrations of SDB above that present in F2 medium (0.1%) only slightly increased the yield of rifamycin B (Table 1).
Similarly, changes in concentrations of MgSO4 and CaCO3 did not cause a major shift in antibiotic yield (Table 1). Since antifoams, which are necessary when the process is conducted in a stirred tank fermentor, may affect growth and consequently antibiotic production, different types of antifoams, namely low- and high-density silicone oils and sunflower oil, were tested. They all resulted in comparable antibiotic yields (Table 1). In conclusion, the application of the gene amplification technique increased rifamycin B production from 1.15 to 2.56 g/l (2.2-fold) and reduced morphological colony variation. Modification of the fermentation medium F1 increased the yield from 2.56 to 6.72 g/l (2.6-fold). When the V1 and F1 media were radically replaced by the V2 and F2 media, the yield increased from 2.56 to 7.85 g/l (3.1-fold). Modification of the F2 medium by the addition of 0.1% yeast extract after 2 days of incubation, or the use of either 1.2% KNO3 or 0.4% NH4NO3 instead of (NH4)2SO4, increased the yield from 7.85 to 9.49, 11.76 and 11.99 g/l, respectively. These yields are promising for further optimization for industrial production.
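The fold-change arithmetic quoted in this summary can be checked directly against the yields reported in the text; the snippet below is only such a check (the stage labels are ours):

```python
# Rifamycin B yields (g/l) reported at each optimization stage in the text
yields = {
    "parent strain": 1.15,
    "variant NCH (gene amplification)": 2.56,
    "variant NCH, modified F1 medium": 6.72,
    "variant NCH, V2/F2 media": 7.85,
    "variant NCH, F2 + NH4NO3": 11.99,
}

def fold(before, after):
    """Fold increase, rounded to one decimal place as quoted in the text."""
    return round(after / before, 1)

print(fold(1.15, 2.56), fold(2.56, 6.72), fold(2.56, 7.85))
```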
MR Imaging of Adenomas of the Nonpigmented Ciliary Epithelium of the Eye BACKGROUND AND PURPOSE: ANPCEs are rare benign tumors of the eye arising from the NPCE in adults, which may be clinically mistaken for melanoma. This study was undertaken to delineate clinical and MR imaging features of these tumors. MATERIALS AND METHODS: Clinical presentation and MR imaging findings of 8 patients (6 women and 2 men; median age, 51 years) with pathologically confirmed ANPCEs were retrospectively reviewed. Location, size, shape, margin, signal intensity, and gadolinium-enhancement characteristics of all tumors were evaluated. Signal intensity and degree of enhancement were graded in comparison with the ipsilateral lacrimal gland. RESULTS: MR imaging revealed a circumscribed enhancing mass within the ciliary body of the eye in all 8 patients. The mass was ovoid in 6 patients and spheric in 2. Gadolinium enhancement was marked in 4 lesions and moderate in the other 4. Both T1 and T2 relaxation times were qualitatively identical to those in the lacrimal gland in 2 tumors. In the remaining 6 tumors, the T1 was identical to and the T2 longer than that in the lacrimal gland. CONCLUSIONS: ANPCE should be included in the differential diagnosis of a spheric or ovoid enhancing ciliary body mass with T1 similar to that in the lacrimal gland and T2 equal to or longer than that in the lacrimal gland. A NPCEs are rare benign tumors of unknown pathogenesis arising with equal frequency in adults of both sexes within the posterior chamber of the anterior cavity of the globe. 1 Because they arise from the epithelial covering of the posterior aspect of the ciliary body, ANPCEs more commonly protrude posteriorly through the posterior chamber and into the vitreous chamber of the globe but may alternatively extend anteriorly through the peripheral iris into the anterior chamber. 
[2][3][4][5][6][7] Although ANPCE is often asymptomatic, it may produce local inflammation, mass effect, or infiltration of the adjacent lens, resulting in development of a secondary cataract or lens subluxation, either of which can cause the patient to present with vision loss. [1][2][3][4]6 Because these tumors are located behind the iris in the ciliary body, they are difficult to see on slit-lamp examination. Although the preferred treatment for ANPCEs is local resection sparing the globe, they are often clinically indistinguishable from the more common ciliary body malignant melanoma that arises in the same location and thus are sometimes treated with unnecessary enucleation. [2][3][5][6][7][8][9][10] Local resection or incisional biopsy followed by a second surgery for more extensive definitive resection, if needed, after pathologic diagnosis may be a desirable alternative if a benign lesion such as an ANPCE is suspected. Unfortunately, this is undesirable for patients with suspected melanoma because the 2 surgeries that are needed to achieve definitive resection result in increased expense and increased risk of local and distant metastatic spread. Because melanoma is much more common, it is not practical or desirable to adopt this approach in all cases. This problem provides strong motivation for researchers seeking to use preoperative MR imaging to define a limited subset of patients with a high likelihood of ANPCE. If ANPCE can confidently be included high in the preoperative differential diagnosis, such patients could be managed more conservatively. This change would decrease the number of unnecessary enucleations. Because ANPCE lacks the melanin that confers characteristically short T1 and T2 relaxation times on most ocular melanomas, it may be possible to suggest this diagnosis in many cases on the basis of preoperative MR imaging. [11][12][13][14] The most recent and detailed review of the MR imaging of ANPCE literature revealed 4 published cases. 
15 Of these 4 cases, 3 tumors had either a T1 or T2 similar to that in the vitreous and thus atypical for melanoma, supporting the hypothesis that the T1 and T2 of ANPCE might allow differentiation from melanoma in some cases. However, 2 of the 4 lesions had T1 shorter than that of the vitreous (like melanoma), 2 had T2 shorter than that of the vitreous (like melanoma), and 1 of the lesions had T1 and T2 shorter than those of the vitreous and was thus indistinguishable from melanoma. 6,15 The variability of reported findings of T1 and T2 relative to the vitreous and the overlap of these findings with the typical imaging features of melanoma motivated us to design a retrospective series to better delineate the clinical and MR imaging features of ANPCE in a larger series and, if possible, to identify MR imaging markers worthy of investigation in a future differential diagnostic study. Materials and Methods An electronic medical record review, approved by our institutional review board, revealed 8 patients who presented with pathologically confirmed ANPCE between June 2004 and January 2009, including 6 women and 2 men whose ages at the time of diagnosis ranged from 19 to 67 years (median, 51 years) (Table 1). Patient data extracted from the medical records included age, sex, involved eye, history of ocular trauma or inflammation, visual acuity, and intraocular pressures. Tumor location, size, and color were noted. The presence or absence of prominently dilated episcleral sentinel blood vessels, cataracts, subluxation of the lens, secondary inflammatory signs, and extraocular extension was noted. Type of treatment was recorded. Available MR imaging data were retrieved for all 8 patients. All patients had undergone orbit MR imaging with dual 3-inch (7.62 cm) surface coils, performed on a 1.5T clinical MR imaging scanner (Signa TwinSpeed; GE Healthcare, Milwaukee, Wisconsin).
Motion artifacts were reduced by instructing the patients to keep their eyes shut and to open and close them several times during the measurement breaks. The standard institutional orbit imaging protocol used in all cases included high-resolution SE T1WI and FSE T2WI with 3- to 4-mm spacing, 0.3- to 0.5-mm sections, 100 × 100 mm FOV, and 288 × 224 matrix. SE T1WI parameters were the following: TR/TE, 600/11.1 ms; 2 excitations; 1:39 acquisition time. FSE T2WI acquisition parameters were the following: TR/TE, 3000/120 ms; 3 excitations; 1:36 acquisition time. Pregadolinium T1WIs and T2WIs in 2 planes (axial plus coronal or sagittal) and frequency-selective fat-suppressed axial postgadolinium T1WIs were acquired in all cases. Gadopentetate dimeglumine (0.1 mmol/kg, Magnevist; Bayer Schering Pharma, Berlin, Germany) was injected at a rate of 2.0 mL/s through a 21-gauge intravenous line with a power injector. MR imaging findings of the tumor were evaluated with emphasis on location, size, shape, signal intensity, and enhancement. The signal intensity of the tumor was evaluated in comparison with the lacrimal gland and the vitreous. Signal intensity on postgadolinium T1WI higher than that of the lacrimal gland was defined as marked enhancement, and signal intensity equal to that of the lacrimal gland was defined as moderate enhancement. Table 1 provides clinical detail on the 8 patients. No history of trauma was reported for any patient. No secondary glaucoma was identified. Slit-lamp examination revealed secondary cataract ipsilateral to the tumor in 6 patients, including 3 focal and 3 total cataracts. Prominent dilated episcleral blood vessels (sentinel vessels) overlying the tumor were detected in 6 patients by visual examination. In 6 cases, the tumor appeared nonpigmented with slit-lamp observation but was revealed to be white to light-tan on gross pathology.
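The enhancement-grading rule just described can be written as a small helper; the function, the numeric inputs, and the equality tolerance are our own illustrative assumptions, not part of the study protocol:

```python
def grade_enhancement(tumor_si, lacrimal_si, tol=0.05):
    """Grade postgadolinium T1WI enhancement against the ipsilateral
    lacrimal gland, following the rule in the text: signal higher than
    the lacrimal gland -> 'marked'; roughly equal -> 'moderate'.
    `tol` (relative tolerance for 'equal') is our assumption."""
    if tumor_si > lacrimal_si * (1 + tol):
        return "marked"
    if abs(tumor_si - lacrimal_si) <= lacrimal_si * tol:
        return "moderate"
    return "hypointense to lacrimal gland"

# Hypothetical signal-intensity readings (arbitrary units)
print(grade_enhancement(1.3, 1.0))  # marked
print(grade_enhancement(1.0, 1.0))  # moderate
```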
In the remaining 2 patients, the tumor appeared melanotic by slit-lamp observation but again proved to be amelanotic under gross pathologic examination. Intraocular inflammation was identified in 3 patients during surgery. A corresponding inflammatory cell infiltrate with lymphocytes and plasma cells and evidence of chronic inflammation were observed pathologically. Table 2 details the MR imaging features of all 8 patients. MR imaging revealed a slightly irregular margin in 2 patients (Fig 1) and a smooth margin in 6. In 2 patients, MR imaging revealed tumor involvement of the iris and extension into the anterior chamber of the globe through the peripheral iris (Fig 2). This extension into the anterior chamber was only seen in patients with a displaced lens. The tumor ranged from 3 to 8 mm (median, 5 mm) in diameter. No extrascleral extension of tumor was detected in any case. Results The 6 patients with tumors that appeared nonmelanotic under slit lamp were managed by local tumor-only resection. The 2 patients with tumors that appeared melanotic under slit lamp had been referred for enucleation of a suspected melanoma and declined the surgeon's suggestion of local tumor excision. Although the pathology literature has also reported that ANPCEs have an irregular surface, 2,3,6 only slight irregularity was seen in our series, and this in only 2 patients. In the remaining 6 patients in our study, the tumor surface was smooth on pathology, corresponding better to the MR imaging findings than to the previous reports. Discussion ANPCEs are rare slow-growing benign tumors arising from the inner surface of the ciliary body of the eye. Tumors of the NPCE can be divided into congenital and acquired tumors. The best known congenital tumor is medulloepithelioma. The pathogenesis of ANPCE remains unknown.
Although some authors speculate that these tumors might represent a nodular reactive proliferation initiated by trauma or inflammation, 6 most published cases report no history of trauma or severe prior intraocular inflammation, 2-10 and none was elicited in our patients. Although ANPCEs are benign, they can exhibit locally aggressive behavior, resulting in cataract ipsilateral to the tumor. 1 For this reason, patients with ANPCE may present with painless vision loss due to secondary focal cataracts developing at an early stage or total cataracts or subluxed lens developing later. [2][3][4]6 Similarly, although generally taken to be signs of intraocular malignancy found in malignant melanoma or metastases, both intraocular neovascularization and development of episcleral sentinel vessels overlying the tumor may arise in patients with ANPCE, sometimes leading to cystoid macular edema. 2,6,10 The 2 tumors in our study that appeared melanotic under slit-lamp observation proved to be amelanotic by gross pathologic examination, consistent with the prior pathology literature. 2,3,6 In retrospect, it seems likely that in these 2 cases, pigmented epithelium adjacent to the tumor was misinterpreted as a melanotic surface of the tumor by slit-lamp observation. These 2 patients were referred for and underwent enucleation of the eyes that might have been spared had ANPCE been suspected preoperatively. This experience emphasizes both the present need to better define the appearance of and increase physician awareness of ANPCE within the differential diagnosis of intraocular masses. It also suggests a clear need for continuing investigation of complementary noninvasive imaging techniques, including MR imaging methods that could differentiate ANPCE from the more common malignant intraocular melanoma and metastases. 
Because the literature to date neither describes ANPCE MR imaging findings in detail nor presents any indication of how MR imaging could contribute to the differential diagnosis of ANPCE from melanoma or other malignant lesions, 6,15 we set out primarily to define in detail the conventional MR imaging of ANPCE with a view to identifying features distinct enough to merit investigation in future diagnostic controlled trials. Because the prior literature reporting the T1 of ANPCE, uveal melanoma, and metastasis in comparison with the vitreous has demonstrated a broad overlap between these lesions, 17 we evaluated the T1, T2, and degree of enhancement of the ANPCE with reference to the lacrimal gland as well as the vitreous. On pregadolinium imaging, all 8 ANPCEs were found to have a T1 similar to that of the lacrimal gland, including the 2 cases clinically misdiagnosed as melanotic on slit-lamp examination. Although we did not design the current study to include formal assessment of uveal melanoma and we have not found any literature directly reporting the T1 of melanoma with respect to lacrimal gland, in our experience, the T1 of malignant melanoma is generally shorter than that in the lacrimal gland, suggesting that this feature may deserve further study as a potential differentiator of ANPCE in a future controlled trial. As expected from the fact that the T1 of the vitreous is longer than that of the normal lacrimal gland, in 7 of the 8 cases, the ANPCE T1 was also shorter (higher signal intensity on T1WI) than that in the vitreous. In the remaining case, the ANPCE T1 was essentially identical to that of the vitreous. Because nearly all intraocular melanoma and metastases have a T1 shorter than that in the vitreous, comparison of tumor with vitreous on T1WI is unlikely to contribute to the differential diagnosis of these lesions. 
On T2WI, 6 of our ANPCEs had T2s longer than those in the lacrimal gland (producing higher signal intensity on T2WI), and 2 had T2 similar to that of the lacrimal gland. Again, although the design of our current study does not include formal assessment of uveal melanoma, in our experience, the T2 of malignant melanoma is generally shorter than that of the lacrimal gland, suggesting that a T2 longer than or equal to that of the lacrimal gland could contribute to the differentiation of ANPCE from melanoma and that this feature deserves to be included in further differential diagnostic MR imaging studies. The T2 of ANPCE in our study was also shorter than that of the vitreous in most cases. Unfortunately, the literature documents that melanoma characteristically also has a T2 shorter than that of the vitreous, suggesting that lacrimal gland tissue may offer a superior reference to the vitreous or orbital fat for assessing the signal intensity of ciliary body tumors on T2WI and T1WI. Our results indicate that ANPCE most commonly presents as an oval or round mass. While roughly one-third of melanomas have a characteristic mushroom shape not observed in our series, up to two-thirds of melanomas and nearly half of metastases may appear as oval or round masses. 11,12,14,17,18 Thus, while a mushroom shape may be used to favor melanoma over ANPCE, an oval or round shape cannot reliably differentiate ANPCE from melanoma or metastasis. In addition, MR imaging evidence of extraocular growth was not seen in our series and would be very unusual for a benign lesion such as ANPCE but has frequently been reported in malignant melanoma. 11,12,14 Thus, although the round or oval shape and the absence of extraocular extension characteristically observed in ANPCE cannot be used to favor ANPCE over melanoma to any significant degree, the observation of a mushroom shape, or more conclusively extraocular extension, may help to exclude ANPCE from the differential diagnosis.
The most important consideration in the differential diagnosis of tumors of the ANPCE is ciliary body melanoma because melanoma is more common and, unlike ANPCE, requires enucleation. Other main considerations in the differential diagnosis are medulloepithelioma, adenoma, or adenocarcinoma of the ciliary pigment epithelium, leiomyoma, schwannoma, metastatic carcinoma, and granuloma. [2][3][4][5][6][12][13][14]16,19,20 Unlike ANPCE, many of these lesions require enucleation. Medulloepithelioma, a congenital tumor with its onset in the first decade of life and often associated with lens coloboma, iris neovascularization, and signs of persistent primary vitreous, 16 can be differentiated readily by clinical findings. Metastatic carcinoma is more likely to occur in patients with a history of carcinoma and concurrent metastases elsewhere. 12,13 Ciliary body granuloma is always associated with more severe uveal inflammation, and the patient tends to have systemic manifestations of a granulomatous condition. 12,13 Two relatively less common lesions in the differential diagnosis that may present difficulty clinically include adenomas or adenocarcinomas of the ciliary pigment epithelium. The distinction between these 2 entities is based on the degree of histologic invasion, but because these may be distinguished from unpigmented lesions such as ANPCE on slit-lamp examination due to their pigmented appearance, they could be confused with melanoma. Our limited experience suggests that the 2 adenomas of the ciliary pigment epithelium we have encountered were hyperintense on T1WIs and hypointense on T2WIs relative to lacrimal gland tissue and presented a potential clinical and MR imaging mimic of melanoma. Similarly, leiomyoma and schwannoma of ciliary body may be indistinguishable from ANPCE by clinical or imaging findings but may also be treated with local resection, like ANPCE. 
12,20 The most appropriate management for ANPCE is generally local tumor resection only, rather than enucleation, if ANPCE is detected clinically at a relatively early stage. Although ANPCE can behave aggressively locally, it appears less likely to recur after removal of the tumor locally. The life expectancy for patients with ANPCE is excellent. Although ANPCE may evolve into adenocarcinomas of the NPCE, adenocarcinomas of the NPCE have no tendency to metastasize, so differentiation of these 2 pathologically distinct but clinically similar entities is of little practical importance. 6 Thus while a method to suspect ANPCE rather than melanoma preoperatively in appropriate cases would not completely resolve the problem of diagnosing ocular masses, this differentiation could prevent the unnecessary enucleation of a number of benign lesions and seems an important goal for orbital MR imaging. Conclusions Adenoma of the NPCE is an important locally resectable benign entity that should be included in the differential diagnosis of ocular masses when an oval or round mass with a T1 similar to that in the lacrimal gland and a T2 longer or equal to that in the lacrimal gland is detected on MR imaging in the ciliary body. Because these tumors may be mistaken clinically for the more common uveal melanoma, which is treated with enucleation, a diagnostic controlled study seems indicated to assess whether these MR imaging characteristics can be used to distinguish ANPCE from melanoma and other ocular masses.
Speech frequency-following response in human auditory cortex is more than a simple tracking The human auditory cortex was recently found to contribute to the frequency following response (FFR) and the cortical component has been shown to be more relevant to speech perception. However, it is not clear how cortical FFR may contribute to the processing of speech fundamental frequency (F0) and the dynamic pitch. Using intracranial EEG recordings, we observed a significant FFR at the fundamental frequency (F0) for both speech and speech-like harmonic complex stimuli in the human auditory cortex, even in the missing fundamental condition. Both the spectral amplitude and phase coherence of the cortical FFR showed a significant harmonic preference, and attenuated from the primary auditory cortex to the surrounding associative auditory cortex. The phase coherence of the speech FFR was found significantly higher than that of the harmonic complex stimuli, especially in the left hemisphere, showing a high timing fidelity of the cortical FFR in tracking dynamic F0 in speech. Spectrally, the frequency band of the cortical FFR was largely overlapped with the range of the human vocal pitch. Taken together, our study parsed the intrinsic properties of the cortical FFR and revealed a preference for speech-like sounds, supporting its potential role in processing speech intonation and lexical tones. Introduction The phase-locking response is an important way that the auditory system preserves temporal information in sounds ( Langner 1992 ;Schnupp 2011 ;Wang 2018 ) and can be noninvasively recorded as the frequency-following response (FFR) in humans ( Bidelman 2018 ;Chandrasekaran and Kraus 2010 ;Coffey et al., 2019 ). In recent years, studies have shown that the auditory cortex also contributes to the FFR recorded via EEG and MEG, which has thus been referred to as the "cortical FFR " ( Bidelman 2018 ;Coffey et al., 2016 ;Coffey et al., 2017b ). 
Compared with the subcortical responses, the cortical FFR shows more relevance to speech perception and attentional modulations (Coffey et al., 2017a; Hartmann and Weisz 2019; Holmes et al., 2018; Puschmann et al., 2018). In speech sounds, the fundamental frequency (F0) and its dynamic change constitute the intonation that facilitates speech perception (Binns and Culling 2007; Fairbanks 1940; Steinhauer et al., 1999). Speech FFR, as a measurable indicator of neural synchrony to the speech F0 and its harmonics, is closely related to an individual's perception of speech pitch; in hierarchical models of pitch processing, pitch is first extracted and patterns of pitch contour are then analyzed and recognized (Griffiths 2003; Patterson et al., 2002). In the human cortex, this hierarchy seems to start from posterior-medial Heschl's gyrus (HG) and proceed to lateral HG, and then to more anterior regions (De Angelis et al. 2018; Patterson et al., 2002; Penagos et al., 2004). Specifically, left HG is proposed to play an important role in the processing of linguistic pitch (Xu et al., 2006). However, how the dynamic pitch is represented in these regions remains unclear. We hypothesize that the human auditory cortex may encode the pitch contour in speech signals by temporal tracking of dynamic F0, and that this coding may have a specific spatial distribution, thus providing precise F0 information for subsequent processing of intonation and lexical tones. Another widely debated question concerns the frequency limitation of the cortical FFR (Bidelman 2018; Coffey et al., 2019; Tichko and Skoe 2017). There is a general trend that the upper limits of frequency-following responses decrease gradually along the ascending auditory pathway in humans (Bidelman 2018; Zhang and Gong 2019). Using an EEG-FFR sourcing technique, the contribution of the auditory cortex was observed in the low-frequency range (< 100 Hz) (Tichko and Skoe 2017).
Studies have also shown that the FFR at low F0s is more sensitive to attentional modulations than that at high F0s (Galbraith and Doan 1995; Hartmann and Weisz 2019; Holmes et al., 2018), indicating that the cortical contributions to the speech FFR likely occur in the low F0 range. However, direct evidence is still lacking, and the full range of speech F0 has not yet been examined. Under the hypothesis that the cortical FFR encodes F0 contours for speech signals, we infer that the bandwidth of the cortical FFR is optimized to fit the F0 distribution of human speech. Scalp EEG and MEG, though with sufficient temporal resolution, are still hindered by their limited spatial resolution in pinpointing the source of the cortical FFR (Bidelman 2018; Coffey et al., 2017b; Hartmann and Weisz 2019; Penagos et al., 2004). Intracranial EEG (iEEG), directly recorded from the human cortex, has high resolution in both time and space, allowing fast fluctuating responses to be recorded in milliseconds and accurate localization of neural sources in the primary auditory cortex, which is embedded in the Sylvian fissure (Nourski and Howard 2015; O'Sullivan et al., 2019; Zhang et al., 2018). Taking advantage of intracranial EEG, we designed a group of speech-like F0 modulated stimuli, with the F0 covering the range of human vocal pitch, to quantify the tracking accuracy and frequency limits of the cortical FFR directly. To further investigate the role of the cortical FFR in speech processing, natural speech with lexical tones was also tested. An F0 tracking response (FFR at F0) to both speech and speech-like harmonic complex stimuli was observed bilaterally in the human auditory cortex, including Heschl's gyrus (HG) and the surrounding superior temporal gyrus (STG) regions. We further demonstrated the preference of the cortical FFR for encoding speech F0s with harmonics and showed an overlap of the cortical FFR frequency band with the human vocal range.
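The phase-coherence measure referred to in the Abstract is commonly computed as inter-trial phase coherence at the frequency of interest; the sketch below is a generic formulation under that assumption (function names and parameters are ours, not the authors' pipeline):

```python
import numpy as np

def phase_coherence(trials, fs, freq):
    """Inter-trial phase coherence at a single frequency.

    trials : (n_trials, n_samples) array of single-trial responses.
    Returns a value in [0, 1]: 1 means the response phase at `freq` is
    identical on every trial (strong phase locking); values near 0 mean
    the phase is random across trials.
    """
    n = trials.shape[1]
    spectra = np.fft.rfft(trials, axis=1)
    k = int(round(freq * n / fs))          # FFT bin nearest `freq`
    phases = np.angle(spectra[:, k])
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Demo: phase-locked trials vs. trials with random phase jitter
fs, f = 2000, 100.0
t = np.arange(fs) / fs                     # 1 s of samples
rng = np.random.default_rng(0)
locked = np.tile(np.sin(2 * np.pi * f * t), (50, 1))
jittered = np.stack([np.sin(2 * np.pi * f * t + p)
                     for p in rng.uniform(0, 2 * np.pi, 50)])
```

With identical phase on every trial the metric is at its ceiling of 1, while random phase jitter drives it toward zero, which is why it serves as a timing-fidelity index for F0 tracking.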
Subjects

The experiments were carried out in 8 epilepsy patients (1 female; median 22.5 years, range 12-35 years; Table 1 ) who were implanted with intracranial depth electrodes (8-16 macro contacts on each depth electrode, 0.8-mm diameter, 3.5-mm spacing centre to centre) as part of their clinical evaluation for epilepsy surgery. The paths of electrode implantation were determined by the patients' clinical needs. No seizures were observed one hour before or after the tests in any patient. All patients were right-handed native Mandarin Chinese speakers with normal hearing and normal speech perception evaluated on the Chinese version of the WAB scale (Western Aphasia Battery) ( Kertesz 1982 ). None of the patients had music training experience. All patients signed informed consent forms. The experimental protocol was approved by the institutional review board at Tsinghua University and the affiliated Yuquan Hospital, Tsinghua University.

Non-speech stimuli

Non-speech stimuli were generated in MATLAB (MathWorks, Natick, MA, USA). Since the F0 contour in speech is continuously varying ( Zatorre and Baum 2012 ), and mainly lies in the F0 range of approximately 85-255 Hz ( Baken 2010 ;Keating and Kuo 2012 ), a group of F0 modulated stimuli (sweeps) with the F0 ranging from 20 Hz to 335 Hz were designed to cover the speech fundamental frequency ( Fig. 2 A, top panels ). The sweeping rate of the F0 contour was 157.5 Hz per second upwards (or − 157.5 Hz per second downwards), and each sweep lasted 2 s. Furthermore, since speech is also characterized by rich harmonic structures, stimuli with different harmonic structures but the same F0 contour were designed: a harmonic complex sweep (HCS; 6 equal-amplitude harmonics including F0, 1st harmonic/F0 to 6th harmonic/H6, added in sine phase), a missing fundamental sweep (MFS; without F0, 6 equal-amplitude harmonics, 2nd harmonic/H2 to 7th harmonic/H7, added in sine phase), and a pure tone sweep (PTS; 1 component, F0 only).
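The sweep synthesis just described can be sketched in Python (a sketch of the MATLAB procedure: the linear-chirp phase and harmonic sets follow the text, while the peak normalization is our assumption; 44,100 Hz matches the playback rate reported later):

```python
import numpy as np

def f0_sweep(f_start=20.0, rate=157.5, dur=2.0, fs=44100,
             harmonics=range(1, 7)):
    """Harmonic complex with a linearly sweeping F0.

    Each harmonic n follows n*F0(t); components are added in sine
    phase, as for the HCS/MFS/PTS stimuli described in the text.
    """
    t = np.arange(int(dur * fs)) / fs
    # Phase of a linear chirp: 2*pi*(f_start*t + rate*t^2/2),
    # so the instantaneous F0 runs from 20 Hz to 335 Hz over 2 s.
    phase = 2 * np.pi * (f_start * t + 0.5 * rate * t ** 2)
    x = sum(np.sin(n * phase) for n in harmonics)
    return x / np.max(np.abs(x))  # peak normalization (assumption)

fs = 44100
hcs = f0_sweep(harmonics=range(1, 7))  # F0 (H1) through H6
mfs = f0_sweep(harmonics=range(2, 8))  # H2 through H7, missing fundamental
pts = f0_sweep(harmonics=[1])          # pure tone sweep, F0 only
```

A downward sweep would use `f_start=335.0` and `rate=-157.5`; all three stimuli share the same F0 contour but differ in harmonic content, as required by the design.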
The spectrograms of all stimuli were generated directly in MATLAB. In the spectral domain, the three stimuli in the same direction had the same F0 contour but distinct spectrograms ( Fig. 2 A, top panels ). In the temporal domain, their waveforms had the same periodic structure (blue dots, Fig. 2 A, bottom panels ) but different fine structures in each period. If the neural response is a simple tracking of the F0 contour, then similar responses would be expected for these F0 modulated stimuli. However, if the cortical response shows a particular preference for speech-like sounds, unbalanced responses should be observed. Furthermore, to confirm that the F0 tracking responses were due to the F0 contour rather than acoustic energy at the fundamental frequency, we tested whether the response to the missing fundamental stimuli was the same as that to the F0-present stimuli ( Coffey et al., 2017b ;Tang et al., 2017 ).

Speech stimuli with and without F0

In Mandarin Chinese, the F0 contours at the syllable level are categorically perceived as lexical tones ( Duanmu 2000 ), which are essential for determining the meaning of a word. For example, the syllable /yi/ can be accented in four lexical tones (i.e., level tone T1, rising tone T2, dipping tone T3, and falling tone T4) to represent four distinct word meanings: "medicine", "aunt", "chair", or "difference", respectively ( Si et al., 2017 ). In this study, 7 Chinese syllables (/da1/, /da4/, /da2/, /yi1/, /yi4/, /yi2/, and /yi3/, Fig. 5 A) were retrieved from the Mandarin monosyllabic speech corpora of the Chinese Academy of Social Sciences Institute of Linguistics and concatenated into a sound stream that had the same length as the synthesized sweeps (~2 s). The speech stream was meaningless at the sentence level but was still recognizable syllable by syllable. All syllables were spoken at normal speed (~270 ms) by a male speaker and were of good quality ( Boersma 1993 ).
By setting the spectral energy around the F0 component to zero, a group of syllables without F0 were obtained. We estimated the spectrograms of all stimuli by short-time Fourier transform with a 50 ms Gaussian window ( Fig. 5 A). The speech stimuli were used to examine the cortical FFR to more perceptually meaningful sounds. It should be noted, however, that in addition to the linguistic differences, there are also acoustic discrepancies between the speech stream and the harmonic sweeps. For example, the synthesized non-speech signals are continuous, while the speech signals consist of seven discrete tones; the speed and consistency of direction of the F0 contour changes also differ between the speech and non-speech stimuli.

Passive listening task

Speech streams and synthesized sweeps were normalized to the same RMS amplitude in MATLAB (MathWorks, Natick, MA, USA) and then played to participants binaurally at a sampling rate of 44,100 Hz via a pair of insert earphones (ER2, Etymotic Research, USA) under control of the Psychophysics Toolbox ( Brainard 1997 ). All stimuli were played in random order and were repeated 30 times. To avoid the effect of expectation, the stimulus onset interval was set to 1100 ms, with 5% jitter. During sound listening under all conditions (both speech and non-speech), silent films were played for subjects on a tablet to keep them awake. For each subject, the sound was played at the subject's comfortable level based on self-report, but for the same subject, the sound intensity of the different stimulus materials was consistent. To ensure that harmonic distortions created by the headphones did not reintroduce energy at F0 ( Norman-Haignere and McDermott 2016 ), we measured the sound output from both sets of earphones post hoc with a precision microphone (PCB378C10, PCB Piezotronics Inc; MA3 stereo microphone amplifier) and a TDT RZ6 (Tucker-Davis Technologies, Gainesville, FL, USA) and found no evidence that any power had been reintroduced at F0 (Fig. S1).
Data acquisition and preprocessing

Intracranial EEG (iEEG) signals were recorded using a g.USBamp amplifier/digitizer system (G.TEC, Graz, Austria). The amplifier sampled data at 1200 Hz with a high-pass filter with a cut-off frequency of 0.1 Hz and notch filters centred at 50 Hz harmonics to remove power-line noise. The data were subsequently re-referenced using a local scheme whereby the signal of each electrode was adjusted with respect to the signals of its nearest neighbours ( O'Sullivan et al., 2019 ;Stolk et al., 2018 ). Electrodes showing epileptiform activity during clinical monitoring were labelled by the clinician and removed from the following analysis.

Anatomical location of electrodes

The locations of the electrodes relative to the cortical surface were determined according to the recommended procedure provided by FreeSurfer ( Fischl 2012 ;Stolk et al., 2018 ). (1) A presurgical high-resolution T1-weighted structural MRI scan and a post-implantation CT scan were acquired for each subject. The CT images were registered to the presurgical MRI images with statistical parametric mapping (SPM) implementations of the mutual information-based transform algorithm ( Wells et al., 1996 ;Zhang et al., 2013 ) in FreeSurfer ( Fig. 1 A). (2) For each bipolar iEEG channel, the best channel position was located between the two corresponding electrode positions, and its location on the surface of the cortex was identified as the vertex nearest to the contact site in the 3D-volume space. Recording sites with a contact-vertex distance larger than 5 mm were not used ( Zhang et al., 2018 ). (3) The auditory cortex of interest was divided into three parts within each subject's individual space ( Fig. 1 B) ( Desikan et al., 2006 ): HG, pSTG, and aSTG. The STG regions anterior to HG were considered "anterior STG" (aSTG), and the STG regions posterior to HG were considered "posterior STG" (pSTG) ( Da Costa et al. 2011 ;Sammler et al., 2015 ).
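The local re-referencing step can be sketched as follows (a minimal Python sketch; the paper only states that each electrode was adjusted with respect to its nearest neighbours, so the mean-of-neighbours subtraction and the neighbour map here are assumptions):

```python
import numpy as np

def local_rereference(data, neighbors):
    """Subtract the mean of each channel's nearest neighbours.

    data      : (n_channels, n_samples) iEEG array
    neighbors : dict mapping channel index -> list of neighbour indices
                (e.g. adjacent contacts on the same depth electrode)
    """
    out = np.empty_like(data, dtype=float)
    for ch in range(data.shape[0]):
        ref = data[neighbors[ch]].mean(axis=0)  # local reference signal
        out[ch] = data[ch] - ref
    return out

# Toy example: 3 contacts on one shaft; the middle contact is
# referenced to both of its neighbours, the end contacts to one.
data = np.array([[1.0, 2.0], [4.0, 6.0], [3.0, 4.0]])
neighbors = {0: [1], 1: [0, 2], 2: [1]}
reref = local_rereference(data, neighbors)
```

Because the reference is local, signals shared by adjacent contacts (e.g. volume-conducted noise) are suppressed while focal activity is preserved.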
The electrodes were classified according to these anatomical parcellations. (4) For electrode visualization, the individual brains were coregistered to the Fsaverage standard brain in FreeSurfer, and the electrodes' locations were then non-linearly projected onto the standard brain ( Fig. 1 C) ( Greve and Fischl 2009 ;Thomas Yeo et al. 2011 ).

Auditory response analysis

iEEG data processing was performed in MATLAB. With the Hilbert transform, the analytic amplitude of eight Gaussian filters (centre frequencies: 60-200 Hz, the high gamma band) was computed. The high gamma power was taken as the average analytic amplitude across these eight bands ( Khalighinejad et al., 2017 ) and was then downsampled to 200 Hz and z-scored to a silent baseline. The baseline period was defined as 50-300 ms before stimulus onset, and the task-related analysis window was defined as 0-2300 ms after stimulus onset. An electrode was regarded as auditory responsive if it had a significantly larger response power than baseline for a period lasting at least 50 ms (paired Wilcoxon signed-rank test, Bonferroni correction, p < 0.05) ( Si et al., 2017 ). Electrodes without auditory responses to any of the stimuli were excluded from subsequent analyses. Out of the 344 electrodes on the brain surface, 63 (LH: 34 of 183; RH: 29 of 161, Fig. 1 C) were responsive to the auditory stimuli. Follow-up analyses were restricted to these 63 electrodes.

The spectral amplitude of the F0 tracking response (FFR-F0)

The spectral amplitude of the FFR provides information on the robustness of auditory processing ( Krizman and Kraus 2019 ). The FFR amplitude was calculated in the time-frequency plane.
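The high-gamma extraction (a Gaussian filter bank with Hilbert envelopes, averaged over eight bands and z-scored to a baseline window) can be sketched in Python; the 10 Hz filter bandwidth is an assumption not given in the text:

```python
import numpy as np

def band_envelope(x, fs, fc, sigma=10.0):
    """Analytic amplitude of one Gaussian frequency band.

    The FFT of x is weighted by a Gaussian centred at fc (sigma in Hz,
    an assumed bandwidth); negative frequencies are zeroed so the
    inverse FFT yields the analytic signal, whose magnitude is the
    Hilbert envelope of that band.
    """
    n = len(x)
    freqs = np.fft.fftfreq(n, 1 / fs)
    win = np.exp(-0.5 * ((freqs - fc) / sigma) ** 2)
    win[freqs < 0] = 0.0  # keep positive frequencies only
    return np.abs(np.fft.ifft(np.fft.fft(x) * 2 * win))

def high_gamma_power(x, fs, centers=np.linspace(60, 200, 8)):
    """Average analytic amplitude across eight high-gamma bands."""
    return np.mean([band_envelope(x, fs, fc) for fc in centers], axis=0)

def zscore_to_baseline(power, baseline_idx):
    """z-score a power trace against a pre-stimulus baseline window."""
    base = power[baseline_idx]
    return (power - base.mean()) / base.std()

# Toy check: a 130 Hz tone that starts at 1 s, sampled at 1200 Hz
fs = 1200
t = np.arange(2 * fs) / fs
tone = np.where(t >= 1.0, np.sin(2 * np.pi * 130 * t), 0.0)
p = high_gamma_power(tone, fs)
z = zscore_to_baseline(p, slice(int(0.3 * fs), int(0.9 * fs)))
```

In the paper the resulting power trace is additionally downsampled to 200 Hz before z-scoring; that step is omitted here for brevity.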
Corticograms were generated from the pre-processed data by the filter-bank method ( Edwards et al., 2009 ;Stolk et al., 2018 ): (1) Time-frequency analysis was performed using a Gaussian filter bank and the Hilbert transformation to identify the envelope of each frequency channel separately; the centre frequencies ranged from 10 Hz to 350 Hz in 1 Hz increments; (2) Spectrograms with data averaged across trials were generated to create corticograms; (3) The corticogram was downsampled to 200 Hz and z-scored to the baseline (50-300 ms before stimulus onset) at each frequency channel. To demonstrate how the iEEG responses tracked the F0 contours of the stimuli, F0(t) (blue line, Fig. 2 B), the dominant frequencies (DF(t), the frequency with peak amplitude at each time point) were first extracted. Then, the latency of the F0 tracking response was estimated as the peak lag of the cross-correlation between the DF(t) and the F0(t). After the response latency was calibrated, the F0 tracking range was defined by the criterion |DF(t) − F0(t)| < 5 Hz ( Bidelman and Powers 2018 ;Chandrasekaran and Kraus 2010 ). The calibrated DF of the exemplar response that tracked F0 is shown as a white line in Fig. 2 B. To prevent false alarms, the F0 tracking segment had to cover at least 100 ms (~16 Hz of sweep). With this criterion, 39 electrodes (out of 63 electrodes) were identified as significant "F0 tracking electrodes" (gray dots in Fig. 3 B, 3 C). The FFR amplitude was calculated as the mean of the spectral amplitude within ± 5 Hz around the stimulus F0 ( Bidelman and Powers 2018 ;Krizman and Kraus 2019 ). For visualization, the spectrograms of the iEEG responses (corticograms) were temporally shifted according to the response latencies so that they aligned with the F0 contours, as displayed in Fig. 2 B.

Phase coherence of the F0 tracking response

Phase coherence is considered a frequency-specific measure of timing fidelity ( Omote et al., 2017 ).
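The dominant-frequency extraction and the |DF(t) − F0(t)| < 5 Hz tracking criterion can be sketched as follows (a Python sketch; the Gaussian toy corticogram below stands in for a real trial-averaged spectrogram):

```python
import numpy as np

def dominant_frequency(corticogram, freqs):
    """DF(t): the frequency channel with peak amplitude at each time."""
    return freqs[np.argmax(corticogram, axis=0)]

def tracking_segments(df, f0, times, tol=5.0, min_dur=0.1):
    """Boolean mask of times where |DF - F0| < tol (Hz), keeping only
    runs lasting at least min_dur seconds (the 100 ms criterion)."""
    mask = np.abs(df - f0) < tol
    dt = times[1] - times[0]
    out = np.zeros_like(mask)
    start = None
    for i, m in enumerate(np.append(mask, False)):
        if m and start is None:
            start = i
        elif not m and start is not None:
            if (i - start) * dt >= min_dur:
                out[start:i] = True
            start = None
    return out

# Toy corticogram whose peak channel follows the 20 -> 335 Hz sweep
freqs = np.arange(10, 351)        # 1-Hz channels, 10-350 Hz as in the text
times = np.arange(0, 2, 1 / 200)  # 200 Hz frame rate after downsampling
f0 = 20 + 157.5 * times           # the stimulus F0 contour
cort = np.exp(-0.5 * ((freqs[:, None] - f0[None, :]) / 3.0) ** 2)
df = dominant_frequency(cort, freqs)
mask = tracking_segments(df, f0, times)
```

The response latency described in the text would be estimated beforehand (e.g. via the peak lag of the cross-correlation between DF(t) and F0(t)) and used to shift `df` before applying the mask.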
The calculation of phase coherence across trials was also performed in the time-frequency plane, based on the phase spectrogram θ(t, f). The phase of each pixel in θ(t, f) was averaged across trials to obtain the intertrial phase-locking value ITPL(t, f) ( Krizman and Kraus 2019 ;Zhang and Gong 2019 ). The original ITPL varied between 0 (not synchronized in phase across trials at all) and 1 (perfectly synchronized in phase across trials). To facilitate the comparison of the phase coherence and the spectral amplitude, the ITPL(t, f) was downsampled to 200 Hz and z-scored to the baseline (50-300 ms before stimulus onset). The ITPL, originally bounded between 0 and 1, is no longer limited to this range after normalization to the baseline. The phase coherence of the FFR at F0 was calculated as the mean of the ITPL within ± 5 Hz around the stimulus F0 (blue line, Fig. 2 C).

FFR band limit characterization

Since the F0 changed linearly from 20 Hz to 335 Hz for the sweep stimuli, the spectral amplitude or phase coherence was calculated for each frequency channel separately (projected onto the F0 axis). The original spectral profile where the DF did not track F0 was set to zero and then averaged across the four harmonic sweeps (HCSup, HCSdown, MFSup, MFSdown) to obtain the spectral profile of the FFR for each site ( Fig. 6 A). For the sweep stimuli, the upper limit and lower limit were defined as the upper frequency and lower frequency of the F0 tracking response, respectively. The bandwidth of the FFR was defined as the difference between the upper limit and the lower limit.
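The inter-trial phase-locking value can be sketched as the length of the mean unit phase vector across trials (the standard ITPL formula; the paper's exact implementation may differ in detail):

```python
import numpy as np

def itpl(phase):
    """Inter-trial phase-locking value.

    phase : (n_trials, n_freqs, n_times) phase spectrogram theta(t, f)
    Returns the length of the mean unit phase vector across trials:
    0 for random phase, 1 for perfect phase alignment.
    """
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

rng = np.random.default_rng(0)
# 30 identical trials -> perfect phase locking
aligned = np.tile(rng.uniform(0, 2 * np.pi, (1, 4, 10)), (30, 1, 1))
# 30 trials with independent random phases -> near-zero locking
random_ph = rng.uniform(0, 2 * np.pi, (30, 4, 10))
```

Because ITPL discards amplitude entirely, it isolates timing fidelity across trials, which is why the text treats it as complementary to the spectral amplitude.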
To estimate the F0 distribution of the human vocal pitch ( Fig. 6 D), long narrative stories spoken by a male speaker and a female speaker (3 stories per speaker, ~4 min per story) were retrieved from the Mandarin monosyllabic speech corpora of the Chinese Academy of Social Sciences Institute of Linguistics.

Frequency tuning

Short pure tones (270 ms) at 20, 40, 80, …, 10,240 Hz (logarithmically spaced) were used to measure the frequency tuning curve (FTC, Fig. 6 E) of the recording sites. The frequency tuning curve shows the high-gamma response as a function of the frequency of the pure tones, which reflects the frequency selectivity of the local neural population ( Jenison et al., 2015 ;Palmer 1987 ). The high gamma power (60-200 Hz) was averaged to quantify the response to each pure tone and was considered significant if it was larger than the baseline for at least 50 ms ( Si et al., 2017 ). The FTC shapes were characterized, and the following tuning properties were extracted: (1) the best frequency (BF), which is the frequency at which the iEEG response power is the largest; and (2) the bandwidth of the frequency tuning, for which the FTC was first z-scored across frequencies, and the width (in octaves) of the portion of the FTC above 0 was defined as the bandwidth ( Bitterman et al., 2008 ). Fifty-three sites responded to pure tones and were included in the analysis of the relationship between the tuning properties and the FFR properties ( Fig. 6 F, 6 G).

Statistical analysis

In this study, repeated-measures analyses of variance (ANOVAs) and post hoc comparisons (with Bonferroni corrections) were used to quantify the effects of the type of stimulus and the electrode locations on the FFR measures ( Figs. 3 -5 ). The number of sites (each site was treated as an entry) was 63 for all tests. For repeated-measures ANOVAs, the number of within-subject factors is stated in the Results section; Greenhouse-Geisser corrections were applied if sphericity could not be assumed.
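The extraction of the best frequency and octave bandwidth from an FTC can be sketched as follows (a Python sketch; reading the bandwidth as the octave width of the contiguous above-zero region of the z-scored FTC around the BF is our interpretation of the description above):

```python
import numpy as np

def tuning_properties(resp, freqs):
    """Best frequency and octave bandwidth from a frequency tuning curve.

    resp  : high-gamma response power per pure-tone frequency
    freqs : octave-spaced tone frequencies (e.g. 20, 40, ..., 10240 Hz)
    The FTC is z-scored across frequencies; the bandwidth is the width,
    in octaves, of the contiguous above-zero region around the BF
    (our reading of the paper's definition).
    """
    z = (resp - resp.mean()) / resp.std()
    bf_idx = int(np.argmax(z))
    lo = hi = bf_idx
    while lo > 0 and z[lo - 1] > 0:
        lo -= 1
    while hi < len(z) - 1 and z[hi + 1] > 0:
        hi += 1
    bandwidth_oct = np.log2(freqs[hi] / freqs[lo])
    return freqs[bf_idx], bandwidth_oct

freqs = 20.0 * 2 ** np.arange(10)  # 20 ... 10,240 Hz, one octave apart
resp = np.array([0.1, 0.2, 0.9, 2.0, 0.8, 0.2, 0.1, 0.1, 0.1, 0.1])
bf, bw = tuning_properties(resp, freqs)
```

With this toy FTC, the peak sits at the fourth tone (160 Hz) and the above-zero region spans 80-320 Hz, i.e. two octaves.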
For pairwise comparisons, the t-value and effect size (Cohen's d) ( Cohen 1988 ) were also calculated.

Data and code availability

The data and codes that support the findings of this study are available upon request.

Cortical FFRs to F0 modulated sweeps

In this study, the subjects listened to the F0 modulated sweeps passively, and the cortical responses were recorded with intracranial electrodes. An auditory responsive electrode (E-L3 from subject S5) adjacent to the left posterior Heschl's sulcus was chosen for demonstration (green dot in Fig. 1 A). The iEEG responses (corticograms) of the example site to the F0 modulated sweeps are displayed in Fig. 2 B, which shows that the F0 contours of the stimuli (blue line, Fig. 2 B) were encoded by the dominant frequency (DF, white line, Fig. 2 B) of the iEEG responses. At this site, the dominant frequency of the responses to the harmonic complex stimuli (HCS and MFS in both directions) tracked the F0 contours consistently, with a mean difference between the DF and F0 contours of less than 0.5 Hz. Conversely, the F0 tracking responses to the pure tone sweeps (PTS) were much weaker than those to the harmonic sweeps. Corresponding to the sound waveform displayed in Fig. 2 A (duration: 100 ms, bottom panels), a short segment of the averaged iEEG response (duration: 100 ms) was plotted beneath each corticogram in Fig.
2 B, which clearly shows the temporal synchrony between the responses (black traces) and the stimulus peaks (blue dots). In addition to the spectral amplitude, we also examined the intertrial phase-locking (ITPL) of the iEEG response ( Fig. 2 C), which is more sensitive to the timing fidelity across trials ( Coffey et al., 2017b ;Krizman and Kraus 2019 ;Zhang and Gong 2019 ). It was found that the ITPL tracking was restricted to a narrower range compared to the spectral energy, especially for the upward harmonic sweeps ( Fig. 2 C). The enhanced ITPL did not appear at other times or frequencies, indicating a frequency-band-specific phase coherence along the F0 contour. Since the cortical FFRs to the sweeps at the first harmonic (H1 = F0) were dominant, and the responses at higher harmonics were weak, we focused on the FFR-F0 in the subsequent analyses.

Stimuli specificity and anatomical distribution of the cortical FFR

By pooling the responses of all responsive sites ( N = 63), we obtained an overall picture of the cortical responses, which demonstrated that the F0 tracking responses (white line, Fig. 3 A) precisely encode the F0 contour (blue line, Fig. 3 A). The averaged discrepancy between the DF and the F0 for the F0 tracking electrodes was 0.02 Hz ± 0.29 Hz (mean ± SD). At the group level, the cortical FFR was also more sensitive to harmonic complex sounds than to pure tone sweeps. To examine the selectivity of the cortical FFRs to the stimuli, repeated-measures ANOVAs were performed for two complementary FFR measures (FFR amplitude and ITPL, Materials and Methods ) separately. Two within-subject factors (spectral structures: HCS, MFS, PTS; directions: upward, downward) and two between-subject factors (hemispheres: LH, RH; regions of interest: HG, pSTG, aSTG) were considered to build the full model.
For the FFR amplitude, the repeated-measures ANOVA revealed a significant main effect of spectral structures (F 1.2, 68 = 43.841, p < 0.001, η² = 0.435), while the effect of direction was not significant (F 1, 57 = 0.576, p = 0.451, η² = 0.01), and the interaction effect was weak (F 1.98, 112.8 = 3.740, p = 0.033, η² = 0.062). Post hoc pairwise comparisons (with Bonferroni corrections) showed that the FFR amplitudes for HCS (with F0) and MFS (without F0) were significantly stronger than that for PTS (F0 only) (HCS > PTS, t 62 = 6.896, p < 0.001, d = 1.143, Fig. 3 B; MFS > PTS, t 62 = 7.126, p < 0.001, d = 1.193). Moreover, the difference between the amplitudes for HCS (with F0) and MFS (without F0) was not significant (n.s., t 62 = − 2.155, p = 0.74, d = − 0.085, Fig. 3 C). Although our analysis was based on the 63 auditory responsive electrodes, we visualized the FFR measures of the significant tracking electrodes (gray, N = 39) and the non-tracking electrodes (blue) differently in Fig. 3 B and 3 C. We further investigated whether the F0 tracking response differed between the primary auditory cortex and the associative auditory cortex. The FFR amplitudes were averaged across the harmonic sweeps for each responsive site and then projected onto the averaged brain ( Fig. 3 D). The distribution of FFR amplitudes on the brain surface shows a tendency toward a stronger FFR in HG than in the other regions. Then, we calibrated the repeated-measures ANOVA model by removing the measures for pure tones. Results of the between-subject analysis show that the effect of the ROIs on the FFR amplitude was significant (F 2, 57 = 9.77, p < 0.001, η² = 0.255), but the effect of the hemisphere and the interaction effect were not significant (hemisphere: F 1, 57 = 0.795, p = 0.376, η² = 0.014; interaction: F 2, 57 = 0.308, p = 0.736, η² = 0.011).
Post hoc comparisons of the FFR amplitudes across the 3 cortical regions showed a stronger FFR in HG (HG > aSTG, t 32 = 5.310, p < 0.001, d = 1.878; HG > pSTG, t 44 = 2.379, p = 0.026, d = 0.743; pSTG > aSTG, t 44 = 3.388, p = 0.016, d = 1.058; with Bonferroni corrections, Fig. 3 E). Besides, we analyzed the relationship between the response latencies and the FFR amplitudes. Fig. 3 F shows a negative correlation between the FFR and the processing latencies, which also indicates the degradation of the FFR along the processing hierarchy. Anatomically, the ITPL showed a gradient distribution across the three ROIs similar to that of the FFR amplitude (F 2, 57 = 6.595, p = 0.003, η² = 0.188). Post hoc comparisons revealed the highest ITPL in HG (HG > aSTG, t 32 = 4.183, p < 0.001, d = 1.479; HG > pSTG, t 44 = 2.387, p = 0.004, d = 0.746; pSTG > aSTG, t 44 = 2.471, p = 0.047, d = 0.772; with Bonferroni corrections, Fig. 4 E). But the effect of the hemisphere was also significant (LH > RH, F 1, 57 = 21.236, p < 0.001, η² = 0.271). We mapped the phase coherence to harmonic sounds on the brain surface and found that the electrodes with high timing fidelity concentrated in the left HG ( Fig. 4 D). A negative correlation between the phase coherence and the processing latencies was also observed for the ITPL ( Fig. 4 F), which again reveals the degradation of phase coherence along the processing hierarchy. Despite a few discrepancies between the two measures, both consistently showed that the cortical FFR to harmonic complex sweeps was significantly stronger than that to pure tones, suggesting a harmonics preference of the cortical FFR. The results also revealed that the cortical FFR attenuates from the primary auditory cortex to the associative auditory cortex.

Cortical FFR to speech stimuli

We used the F0-modulated sweeps as a simplified version of speech signals. An additional question is whether the cortical FFR to harmonic complex sweeps can be used to predict the responses to speech stimuli. To determine the role of the F0 tracking response in speech processing, speech stimuli composed of 7 simple Chinese syllables (SY) with different lexical tones (F0 contours) were also presented to the participants ( Fig. 5 A). After all stimulus presentations were finished, the subjects successfully repeated the syllables they had heard, which suggested that they had a clear perception of the lexical tones and syllable identities.
At the exemplary site ( Fig. 1 A), in contrast to the responses to the harmonic complex sweeps detailed above in Fig. 2 B, a less obvious FFR but stronger induced high gamma responses that may represent the phonetic features of speech were observed ( Fig. 5 B, top panel ) ( Cheung et al., 2016 ;Mesgarani et al., 2014 ). However, the phase coherence (ITPL, Fig. 5 B, bottom panel ) still tracked the F0 contours of the syllables. Parallel repeated-measures ANOVAs (1 within-subjects factor: speech vs. missing fundamental speech; 2 between-subjects factors: ROIs and hemispheres) were performed for the measurements of the speech evoked FFRs. The results on the FFR amplitude were highly consistent with those of the harmonic sweeps ( Fig. 5 C). First, the effect of the missing fundamental was not significant (F 1, 57 = 0.590, p = 0.445, η² ≤ 0.001). Then, the main effect of the ROI was significant (F 2, 57 = 9.464, p < 0.001, η² = 0.249), but the effects of the hemisphere (F 1, 57 = 0.114, p = 0.736, η² = 0.002) and the interaction between the ROI and hemisphere (F 2, 57 = 3.145, p = 0.051, η² = 0.099) were not significant. Post hoc comparisons showed stronger FFRs in HG than in the STG (HG > aSTG, t 32 = 5.281, p < 0.001, d = 1.867; HG > pSTG, t 44 = 2.636, p = 0.02, d = 0.823; pSTG > aSTG, t 44 = 2.651, p = 0.022, d = 0.828; with Bonferroni corrections, Fig. 5 C). However, the phase coherence of the speech evoked FFRs was observed to be significantly higher than that of the harmonic sweeps (t 62 = 4.760, p < 0.001, d = 0.622, Fig. 5 D). Another discrepancy was that the effect of the hemisphere on the phase coherence of the speech evoked FFRs was significant. We further compared the FFR measures between the speech and harmonic complex stimuli. For the spectral amplitude, it did not differ considerably between the speech evoked FFRs and the harmonic complex stimuli evoked FFRs (t 62 = 1.074, p = 0.6, d = 0.099, Fig. 5 E scatter plot ).
The amplitudes of the speech evoked FFRs also showed no significant differences between the two hemispheres (t 61 = 1.512, p = 0.136, d = 0.388, two-sample t -test, Fig. 5 E bar plot inset ). In contrast, the phase coherence of the speech evoked FFRs was higher than that of the harmonic complex stimuli evoked FFRs ( Fig. 5 D, 5 F scatter plot ) and showed left lateralization (t 61 = 7.360, p < 0.001, d = 1.891, two-sample t -test, Fig. 5 F bar plot inset ). In Fig. 5 G, we mapped the ITPL for the speech evoked FFRs on the brain surface, showing a prominently high timing fidelity of the left auditory cortex.

Frequency limits of the cortical FFR

By design, the F0 in the sweep stimuli changed as a linear function of time, which provided a spectral profile of the phase-locked activity with increasing frequency ( Fig. 6 A, top panel ). Since the cortical FFR to the harmonic sweeps was stronger than that to the pure tone sweeps and was similar for both directions, we combined the spectral profiles of the FFRs to the harmonic sweeps (HCSup, HCSdown, MFSup, and MFSdown) for all sites. The spectral profiles of the ITPL (blue) and the FFR amplitude (orange) were overlaid ( Fig. 6 A, bottom panel ). Electrodes with significant F0 tracking responses ( N = 39) were included in this analysis and were sorted according to the bandwidth of the FFR amplitude. The FFR bandwidth measured by the amplitude was correlated with that measured by the ITPL ( r = 0.64, p < 0.001, Pearson's correlation). According to the FFR bandwidth, these electrodes were split into two groups of equal size, that is, the "wide" group and the "narrow" group ( Fig. 6 A, bottom panel ). Electrodes with F0 tracking responses were primarily from HG ( N = 16) and the pSTG ( N = 19), among which 12 electrodes from HG and 7 electrodes from the STG were included in the "wide FFR" group, and 4 electrodes from HG and 12 electrodes from the STG were included in the "narrow FFR" group.
The F0 tracking response was restricted to a specific band rather than the full frequency range of the pitch contours, which also indicated that the F0-modulated sweeps used in this study were sufficiently wide to investigate the frequency limits of the cortical FFR. We averaged the spectral profile of the FFR amplitudes and phase coherence values across electrodes within the wide group separately ( Fig. 6 B, mean ± SEM), and the results clearly show the bandpass property of the cortical FFR. The passband of the FFR amplitudes was relatively wider than that of phase coherence. However, both measures provided direct evidence supporting the notion that the cortical FFR is limited to the low F0 range. On average, the cortical FFR from the wide group covered the F0 from ~50 to ~160 Hz ( Fig. 6 C, mean ± SEM, top bar), with the widest FFR band ranging from ~40 to ~240 Hz ( Fig. 6 C, bottom bar). This FFR band covered the F0 used in previous studies that have reported cortical contributions to the FFR ( Bidelman 2018 ;Brugge et al., 2009 ;Coffey et al., 2016 ;Coffey et al., 2017b ;Griffiths et al., 2010 ;Nourski et al., 2013 ). Fig. 6 D shows the F0 distribution of the male Chinese speaker (90%: 85-220 Hz) and female Chinese speaker (90%: 125-335 Hz), which also matches the previously reported F0 range of English speakers (male: 85-180 Hz; female: 165-255 Hz) ( Baken 2010 ;Keating and Kuo 2012 ). We can see that the frequency limits of the cortical FFR largely overlap with the human vocal pitch, especially those of male speakers. To gain insight into the underlying mechanisms of the cortical FFR, we tested the relationship between the FFR and frequency tuning properties of the local neuronal population. We measured the frequency tuning curve for each site ( Fig. 6 E) and extracted the key tuning properties (i.e., the best frequency and the tuning bandwidth). 
Although the frequency tuning ranged from ~100 Hz to 5 kHz, the specific band of the F0 tracking response showed only a small amount of variation ( Fig. 6 A). We first compared the best frequency (BF) for the wide group and the narrow group and found that the electrodes with the wide FFR tended to have a lower preferred frequency (t 33 = − 2.062, p = 0.047, d = − 0.7182, two-sample t -test, Fig. 6 F). The tuning bandwidth for the low-frequency channels tends to be wider in animal studies ( Fishman et al., 2013 ;Sayles and Winter 2008 ). We also compared the tuning bandwidth (in octaves relative to the BF) for the wide group and the narrow group and found that the difference in the tuning bandwidth was not significant (t 33 = 1.354, p = 0.185, d = 0.471, two-sample t -test, Fig. 6 G). By comparing the band limit of the FFR with that of the frequency tuning, we found that the cortical FFR at sites with preferred frequencies between 100 Hz and 1 kHz has a wider F0-FFR band, which more closely matches the human vocal range.

Discussion

The F0 contour gives rise to the intonation and the lexical tone in human speech ( Binns and Culling 2007 ;Brown et al., 2011 ;Laures and Bunton 2003 ;Plack et al., 2005 ;Steinhauer et al., 1999 ;Tang et al., 2017 ). Using iEEG recordings of the human auditory cortex, we investigated the potential role of the cortical FFR in speech processing and tested our hypothesis that the cortical FFR is optimized to benefit the processing of the F0 contour in speech. On the one hand, the cortical FFR showed a significant harmonic preference, and pure tones in the same frequency range did not evoke the cortical FFR effectively. Speech syllables that contain rich harmonic structures induced higher phase coherence of F0 tracking compared to the synthesized harmonic complex stimuli.
On the other hand, the prominent F0 tracking responses to the harmonic complex stimuli were found in the frequency range of 50 to 160 Hz, which largely overlapped with the F0 distribution of the human vocal pitch. Electrodes with a wider frequency following band tended to be located in the 100 Hz ~1 kHz frequency tuning region of the auditory cortex. Anatomically, the cortical FFR for both speech and harmonic complex stimuli was the strongest in the primary auditory cortex and decreased significantly in the associative auditory cortex. Although the cortical FFR is not an exclusive encoding mechanism for the F0 contour in speech, the convergence of the encoding space of the cortical FFR into the speech statistics may suggest the beginning of speech specialized processing ( Tang et al., 2017 ;Zatorre and Baum 2012 ). Moreover, both the stronger encoding of harmonics and the frequency limits of the cortical FFR may inform future non-invasive FFR studies using EEG and MEG. F0 encoding and harmonic preference in cortical FFR The present study provided direct evidence that the human auditory cortex has the capacity to encode the dynamic F0 contours of speech and harmonic complex stimuli in a phase-locking manner. Previously, the temporal coding of periodic fluctuations has been shown to gradually transform into rate coding as the signals travel from the midbrain to the auditory cortex ( Gao and Wehr 2015 ;Langner 1992 ;Wang et al., 2008 ). However, electrophysiological studies in humans and macaque monkeys have shown that the temporal coding of F0 in the cortex can reach a frequency of 200 Hz or higher ( Fishman et al., 2013 ;Griffiths et al., 2010 ;Johnson et al., 2012 ). These findings, together with our data, suggest that there are still specialized neural populations in the human auditory cortex with fast temporal response characteristics. Based on this study, the cortical FFR shows stronger encoding of harmonics.
The cortical FFR was not effectively driven by the pure tone sweeps, while the FFRs to the missing fundamental stimuli were similar to those for the F0-present harmonic complex stimuli ( Figs. 3 B, 4 B). The harmonic preference of cortical responses has previously been reported both in humans and animals ( Hall et al., 2002 ;Hullett et al., 2016 ;Liang et al., 2002 ). Our results support the harmonic preference and extend it to the phase-locking response. In this study, we also measured the FFR induced by natural speech signals through a group of Chinese syllables. Different from harmonic stimulation, natural speech signals not only have linguistic meanings (i.e. determining lexical tone), but also have more abundant acoustic features, such as the syllable rate, complex formant structure, and more variable speed and direction of F0 contour change. It should be noted that the differences between speech and non-speech stimuli affect the phase coherence and the spectral amplitude of the FFR differently. Phase coherence tracked the speech F0 better than spectral energy did, which suggests that the timing fidelity of cortical responses may have ensured the precise encoding of dynamic F0 in speech sounds ( Fig. 5 D, 5 F). In EEG studies, a similar FFR may be evoked by pure tone stimuli and missing fundamental stimuli ( Galbraith 1994 ;Galbraith and Doan 1995 ;Greenberg et al., 1987 ), and pure tone sweeps have proved effective for studying aging and hearing impairment ( Clinard and Cotter 2015 ;Fu et al., 2019 ). Although pure tone stimuli failed to evoke the FFR in the auditory cortex, the finding of harmonic preference in the auditory cortex cannot yet be taken as a biomarker to differentiate the cortical FFR from other components. Since the auditory system is highly nonlinear, FFRs at all levels may be influenced by factors such as stimulus polarity and stimulus sets ( Lerud et al., 2014 ;Skoe and Kraus, 2010 ).
The origins of the human FFR, particularly when elicited by speech sounds, are still under debate. Band limit of cortical FFR The subcortical origins of the FFR have been widely explored with scalp EEG ( Bidelman 2018 ;Tichko and Skoe 2017 ); however, the frequency limits of the cortical FFR are still under debate. In this study, using the synthetic harmonic complex stimuli with F0s covering the human vocal range, the F0 tracking response was found to be concentrated in the frequency range of 50-160 Hz, with the maximum tracking range being 40-240 Hz ( Fig. 6 A, 6 D). Although the upper limit of the cortical FFR is much lower than that of the subcortical FFR ( Bidelman 2018 ;Tichko and Skoe 2017 ), the F0 of human speech mostly ranges between 85 and 255 Hz ( Baken 2010 ;Keating and Kuo 2012 ); thus, the phase-locking response within this range in the cortex may be important in supplying precise temporal information on speech intonation and lexical tones. The F0 tracking response observed in this study had a lower boundary of approximately 50 Hz, which mainly reflected the tracking of periodicity ( Rosen et al., 1992 ). Additionally, the temporal tracking of lower temporal fluctuations ( Griffiths et al., 2010 ;Nourski et al., 2013 ;Nourski et al., 2009 ) and other temporal structures of linguistic units (i.e., syllable rates, phrases) also occurs in large-scale cortical networks ( Ding et al., 2016 ;Giraud and Poeppel 2012 ). Together with our data, these findings suggest that cortical frequency-following responses at different time scales concurrently track the time course of speech structures and constitute an important mechanism for speech encoding at the cortical level. There is also a large number of EEG studies on the auditory steady-state response (ASSR), which peaks at 40 Hz and also originates from the auditory cortex ( Krishnan et al., 2009b ;Ross et al., 2002 ).
It is believed that the ASSR is more than a superposition of individual evoked potential responses to individual clicks; rather, it involves the perceptual binding process of a large neural network ( Krishnan et al., 2009b ;Ross et al., 2005 ). In contrast, the cortical FFR we recorded with intracranial EEG is a response of local neural populations to detailed spectro-temporal features of sound, which may not be able to reflect the 40 Hz signature of network integration ( Waldert et al., 2009 ). Frequency tuning is another basic property of neurons in the auditory cortex and is informative for pitch-related processing ( Bendor and Wang 2005 ;Feng and Wang 2017 ;Fishman et al., 2013 ;Kikuchi et al., 2019 ). Electrodes with a wider FFR band were found to have tuning frequencies within 100 Hz ~1 kHz ( Fig. 6 E, 6 F), which was highly consistent with the findings in macaque studies ( Fishman et al., 2013 ). A specialized pitch processing region was also identified in the low-frequency tuning region in the primary auditory cortex of marmosets ( Bendor and Wang 2005 ), where pitch extraction relies on temporal cues for low F0s and spectral cues for high F0s ( Bendor et al., 2012 ). The F0 tracking response located in the low tuning frequency region may be a candidate temporal mechanism for pitch extraction. Nevertheless, it should be noted that the findings in the present study are observed at the population level, which reflect both the neural firing pattern and the neural synchrony of the local population ( Denker et al., 2011 ;Ray et al., 2008 ). Our findings also support the notion that neural populations may be better suited for the temporal coding of pitch contours than single neurons ( Bendor et al., 2012 ;Fishman et al., 2013 ;Johnson et al., 2012 ;Yin et al., 2011 ). Anatomical distribution of cortical FFR Imaging studies have suggested that the processing of pitch involves the HG, the PT, and the STG region ( Allen et al., 2017 ;De Angelis et al., 2018 ;Hall and Plack 2009 ;Warren et al., 2003 ). Some other studies have further ranked these brain areas into a pitch processing hierarchy, in which central processing moves anterolaterally from the primary auditory cortex as the patterns in pitch variations are processed ( Griffiths 2003 ;Patterson et al., 2002 ). Consistent with previous studies, the cortical FFR was observed bilaterally in HG and its surrounding areas in the STG. Moreover, the FFR strength decreased from the primary auditory cortex to the associative auditory cortex ( Figs. 3 E, 4 E) ( Nourski et al., 2013 ). Previous studies on auditory pitch processing have also suggested different functional roles of anterior STG (aSTG) and posterior STG (pSTG) ( Liégeois-Chauvel et al., 1998 ). Different connectivity has also been shown for aSTG and pSTG ( Sammler et al., 2015 ). Generally, it has been found that the pSTG is tuned for temporally fast varying speech sounds and has a short temporal integration window, while the aSTG is tuned for temporally slow varying speech sounds and has a longer temporal integration window ( Hullett et al., 2016 ). Specifically, imaging studies and intracranial EEG studies have found that pSTG and planum temporale (PT), which lies posterior to the HG, are highly related to linguistic pitch processing ( Liang and Du, 2018 ;Warren et al., 2003 ;Xu et al., 2006 ), including lexical tone ( Si et al., 2017 ) and intonation ( Tang et al., 2017 ). In contrast, the function of aSTG is relatively less known. Some studies have suggested that activation of aSTG may be affected by melodic context and expectations ( Seger et al., 2013 ;Warrier and Zatorre, 2004 ). In this study, we observed a stronger cortical FFR in pSTG than in aSTG, which was in line with the notion that aSTG and pSTG have different functional focus and neural connections.
The cross-linguistic differences of the FFR have long been a focus of auditory research, and many studies have shown that native speakers of tonal languages may have a stronger FFR ( Krishnan et al., 2009 ;Song et al., 2008 ;Zhao and Kuhl 2018 ). Although the current data did not provide a direct comparison, we propose that non-tonal language speakers also have a cortical FFR, though it may differ in response properties. In the present study, we observed higher phase coherence in the left HG. The tendency of higher timing fidelity in the left HG may indicate an influence of language experience on primary auditory processing ( Krishnan et al., 2005 ;Warrier and Zatorre 2002 ;Xu et al., 2006 ). It would be valuable for future work to directly compare the cortical FFR of tonal- and non-tonal-language speakers. Limitations and future research questions The present work studied the F0 tracking response in the human auditory cortex directly, taking advantage of intracranial EEG recordings. Based on our findings, we propose that the cortical FFR would optimize the processing of the F0 contour in speech signals in two main aspects: the preference of the cortical FFR for harmonic structures, and the selective coding of F0 in the range of human vocal pitch. However, it should be noted that the F0-tracking response observed in our study may not be a direct representation of pitch perception ( Gockel et al., 2011 ), but may facilitate subsequent intonation and lexical tone processing in higher stages by encoding the pitch-bearing information, especially in the vocal pitch range. Although intracranial EEG benefits the investigation of the cortical FFR, it should be noted that the number of electrodes and the coverage areas were defined according to the patient's clinical needs, so it is difficult to keep them consistent across subjects.
The limited accessibility and clinical setting further make it difficult to tightly control subject recruitment, and do not permit complex experimental designs ( Parvizi and Kastner, 2018 ). Although the subjects included in this study were beyond the maturation period of the FFR ( Skoe et al., 2013 ), future studies with patients of a similar age and consistent electrode coverage across hemispheres and subjects can support further investigation of FFR lateralization and its developmental effects. Data and code availability statement The data and codes that support the findings of this study are available upon request. Declaration of Competing Interest None.
Antimicrobial Resistance Surveillance of Tigecycline-Resistant Strains Isolated from Herbivores in Northwest China There is no doubt that antimicrobial resistance (AMR) is a global threat to public health and safety, regardless of whether it’s caused by people or natural transmission. This study aimed to investigate the genetic characteristics and variations of tigecycline-resistant Gram-negative isolates from herbivores in northwest China. In this study, a total of 300 samples were collected from various provinces in northwest China, and 11 strains (3.67%) of tigecycline-resistant bacteria were obtained. In addition, bacterial identification and antibiotic susceptibility testing against 14 antibiotics were performed. All isolates were multiple drug-resistant (MDR) and resistant to more than three kinds of antibiotics. Using an Illumina MiSeq platform, 11 tigecycline-resistant isolates were sequenced using whole genome sequencing (WGS). The assembled draft genomes were annotated, and then sequences were blasted against the AMR gene database and virulence factor database. Several resistance genes mediating drug resistance were detected by WGS, including fluoroquinolone resistance genes (gyrA_S83L, gyrA_D87N, S83L, parC_S80I, and gyrB_S463A), fosfomycin resistance genes (GlpT_E448K and UhpT_E350Q), beta-lactam resistance genes (FtsI_D350N and S357N), and the tigecycline resistance gene (tetR N/A). Furthermore, there were five kinds of chromosomally encoded genetic systems that confer MDR (MarR_Y137H, G103S, MarR_N/A, SoxR_N/A, SoxS_N/A, AcrR N/A, and MexZ_K127E). A comprehensive analysis of MDR strains derived from WGS was used to detect variable antimicrobial resistance genes and their precise mechanisms of resistance. In addition, we found a novel ST type of Escherichia coli (ST13667) and a newly discovered point mutation (K127E) in the MexZ gene of Pseudomonas aeruginosa. 
WGS plays a crucial role in AMR control, prevention strategies, as well as multifaceted intervention strategies. Introduction One of the biggest threats to global health is antimicrobial resistance, affecting the environment, animals, and humans [1]. Gram-negative bacteria such as Escherichia coli (E. coli), Klebsiella pneumoniae (K. pneumoniae), Pseudomonas aeruginosa (P. aeruginosa), and Salmonella typhimurium (S. Typhimurium) are important zoonosis pathogens [2]. With the extensive application of antibiotics in livestock breeding and treatment, the rapid increase in the prevalence of extensively drug-resistant (XDR) Gram-negative bacteria, particularly carbapenem-resistant Enterobacteriaceae and Acinetobacter spp., have affected the efficacy of carbapenems. For example, Enterobacter, as an indicator of the prevalence of Gram-negative bacteria, is a rich antibiotic resistance gene pool and a mobile center for drug resistance gene exchange [3,4]. Therefore, in many investigations of Gram-negative drug-resistant bacteria, many strains were found together with E. coli. Additionally, K. pneumoniae, P. aeruginosa, and S. Typhimurium have been mentioned and found to contain multiple drugresistant (MDR) strains in previous reports [5,6]. In brief, the cross-infection of multiple Gram-negative drug-resistant bacteria carrying different drug-resistant genes have brought significant challenges to clinical prevention and the treatment process [7]. Therefore, it is extremely critical to distinguish and characterize the drug-resistance characteristics of different drug-resistant strains. At present, MCR-type colistin-resistant strains are widely reported in Enterobacteriaceae, and tigecycline is one of the last resort antibiotics for treating these superbugs [8,9]. Originally derived from tetracycline, tigecycline was designed to overcome tetracycline resistance's common mechanism [10]. 
Tigecycline inhibits bacterial growth by binding to the 30S ribosome and blocking the entry of tRNA, thus preventing protein synthesis. Furthermore, tigecycline escapes tetracycline resistance mechanisms due to its different binding orientation [11]. Tigecycline is regarded as a last-line antibiotic against infections caused by MDR or XDR bacterial pathogens, so long-term use of tigecycline is not recommended. However, several cases of tigecycline resistance have been reported in the scientific community since tigecycline was first used clinically [12][13][14]. Most cases of tigecycline resistance were attributed to one or more of the following mechanisms: mutations within the ribosomal binding site, acquisition of mobile genetic elements carrying tetracycline-specific resistance genes, and/or chromosomal mutations leading to the increased expression of intrinsic resistance mechanisms [15,16]. Therefore, when tetracycline-resistant strains become prevalent, we will face the dilemma that there is no effective antibiotic available. Furthermore, as the natural pasture of animal husbandry in China, the northwest region has a unique advantage in herbivore breeding. However, the prevalence of drugresistant strains has brought serious economic losses to the aquaculture industry in this area [17][18][19]. So far, there has been no investigation of tigecycline-resistant strains in northwest China, and no whole-genome sequencing (WGS) analysis of tigecycline isolates. Therefore, it is particularly crucial to analyze the drug resistance mechanism and molecular characteristics of these tigecycline-resistant strains to provide a theoretical basis and new programs for clinical treatment and prevention. 
In this study, fresh stool samples were collected from herbivores of different varieties under different farming environments in northwest China from June 2021 to May 2022, and tigecycline-resistant strains were analyzed in the samples to address the inadequacy of previous studies on the drug-resistant bacteria of animal origin in this area. We performed WGS on the tigecycline-resistant isolates (including E. coli, K. pneumoniae, P. aeruginosa, and S. Typhimurium) to uncover the prevalence and genetic diversity of tigecycline-resistant strains derived from animals. Sample Collection and Bacterial Isolates In the present study, 300 stools were sampled in eight study plots located on 12 largescale farms in northwest China from June 2021 to May 2022. We took stool samples from 150 cattle and 150 sheep in various breeding modes, including males, females, and young animals ( Figure 1 and Table 1). We incubated 0.5 g of feces in 5 mL of Luria-Bertani (LB) to enrich bacteria for 6 h. We screened the tigecycline-resistant strains on an LB plate with 2 µg/mL tigecycline [20]. Species and genera of the screened single strains were identified by 16S sequencing and preserved in 60% glycerol. Whole-Genome Sequencing Using a commercially available bacterial genomic DNA isolation kit (Generay, China), DNA was obtained from isolates that displayed tigecycline resistance according to the manufacturer's instructions. A NanoDrop 2000 spectrophotometer (Thermo Fisher, Waltham, MA, USA) was used to measure the DNA concentration in the extracted samples. The genome samples were interrupted, and the sticky ends were repaired into flat ends by T4 DNA Polymerase, Klenow DNA Polymerase, and T4 PNK. By adding a base 'A' at the 3' terminal, the DNA fragment could be connected to a special junction with an 'A' base at the 3' terminal. DNA fragments of the target size were selected by the magnetic bead method and the high-fidelity PCR enzyme-enriched DNA-seq library. 
Finally, qualified libraries were fully sequenced by whole genome paired-end sequencing (Illumina, San Diego, CA, USA), thus producing 150-bp paired-end reads (PE 150). Identification of Antimicrobial Resistant Genes, Multilocus Sequence Typing Analysis, and Virulence Factors Each draft genome was screened for genes associated with AMR. As a reference for determining drug-resistance genes in isolates, the most updated AMR gene database was downloaded from the NCBI National Database of Antibiotic-Resistant Organisms (accessed on 16 August 2022) [23]. A database of virulence factors and the Virulence Finder 2.0 software were used to predict virulence factors [24,25]. PubMLST (https://pubmlst.org/; accessed on 20 September 2022) was used to perform multi-locus sequence typing (MLST) of assembled bacterial genomes [26]. Statistical Analysis Sangerbox software (v1.1.3) (http://vip.sangerbox.com; accessed on 3 August 2022) was used to make heat maps of drug resistance characteristics, drug resistance genes, and the virulence factors of isolates. In the cluster analysis of drug-resistant characteristics, the presence of the above resistance phenotype received a score of 1, the intermediate received a score of 0, and the susceptibility received a score of −1. In the cluster analysis of drug-resistant genes and virulence factors, the existence received a score of 1, and the nonexistence received a score of 0. Tigecycline Resistant Isolates The specific sampling time, sampling volume, prevalence, and geographical location are shown in Table 1 and Figure 1. A total of 11 tigecycline-resistant strains were isolated and identified from 300 stool samples collected in northwest China, with an isolation rate of 3.67%. Four E. coli strains resistant to tigecycline were identified from 60 samples collected from Shaanxi (6.67%). Two strains of E. coli and one strain of K. pneumoniae resistant to tigecycline were identified from 60 samples collected from Xinjiang (5.0%). One E. 
coli strain resistant to tigecycline was identified from 30 samples collected in Sichuan (3.33%). Two strains resistant to tigecycline, S. Typhimurium and P. aeruginosa, were detected from 90 samples collected in Gansu (2.22%). One K. pneumoniae strain resistant to tigecycline was identified from 30 samples collected from Qinghai (3.33%), and no tigecycline-resistant isolate was identified from 30 samples collected from Tibet (0%). It is clear from this data that Shaanxi province had the highest isolation rate, while Tibet had no tigecyclineresistant strains. Meanwhile, tigecycline-resistant isolates from Xinjiang, Qinghai, and Gansu provinces showed a diversity of strains rather than a predominance of E. coli. Antibiotic Sensitivity Test The sensitivity test results of 11 tigecycline-resistant strains to 14 antibiotics are shown in Figure 2 and Table 2. E. coli and K. pneumoniae strains have the most serious drug resistance and were the most prevalent tigecycline-resistant strains in the region. In terms of the MIC distribution (Table 2), the MIC values of the antibiotics tetracycline, ampicillin, and sulfamethoxazole were significantly higher. In addition, all isolates were sensitive to ceftiofur, meropenem, gentamicin, and colistin; only SX-3E-11 was extremely sensitive to florfenicol. This may suggest potential drug delivery strategies in these areas. In addition, SX-2E-03 and SX-1E-06 had the same drug resistance spectrum, and they were isolated from the same farm. Other isolates showed different drug resistance profiles, even those isolated from different farms in the same area (SX-3E-11 and SX-1E-06). The outcome of the antibiotic resistance pattern is depicted in Figure 2. All tigecycline-resistant isolates could be typed into four different antibiotypes, and among these resistance patterns, profiles number 6 (with 5 isolates) and 7 (with 3 isolates) had the highest frequencies. 
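The susceptibility scoring used for the cluster heat maps (resistant = 1, intermediate = 0, susceptible = -1, as described in the Statistical Analysis section) can be sketched as follows; the isolate names and calls here are placeholders, not the study's data:

```python
import numpy as np

# Score mapping from the text: resistant 1, intermediate 0, susceptible -1.
SCORE = {"R": 1, "I": 0, "S": -1}

# Hypothetical susceptibility calls (rows: isolates, columns: antibiotics).
calls = {
    "isolate_A": ["R", "S", "I", "R"],
    "isolate_B": ["R", "R", "S", "S"],
}

# Numeric matrix a clustering/heat-map tool would consume.
matrix = np.array([[SCORE[c] for c in row] for row in calls.values()])
```

The same 1/0 encoding (presence/absence) applies to the resistance-gene and virulence-factor matrices.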
Among the 11 isolates, only one strain showed resistance to five of the tested antibiotic categories (XJ-1E-02: AMP-CIP-SXT-TIG-TET). Two of these strains showed resistance to eight of the tested antibiotic categories, with two types of resistance, AMP-CTX-AMK-SXT-TIG-TET-FFC-FOS-resistant (LZ-1S-01) and AMP-AMC-AMK-CIP-SXT-TIG-TET-FFC-resistant (LZ-1P-09). The multidrug resistance assay indicated that multidrug resistance is common among the tigecycline-resistant strains, which have diverse and wide AMR spectra. This suggests that the differences in the antimicrobial spectra of strains may be due to the frequency of the types of antibiotics used by different farms in clinical breeding and treatment. Whole Genome Sequencing Analysis The final assembly of the isolates, based on WGS, ranged from 104 to 187 contigs of >500 bps/sample in E. coli isolates, with N50 values between 49,186 and 88,174 (Table 3). Distribution of Antimicrobial Resistance Genes and Virulence Factors of Isolates The genomes of all 11 tigecycline-resistant isolates were sequenced, with 12 AMR genes predicted from them ( Figure 3A); these include three fluoroquinolone resistance genes (gyrA_S83L, gyrA_D87N, S83L, parC_S80I, and gyrB_S463A), two fosfomycin resistance genes (GlpT_E448K and UhpT_E350Q), one beta-lactam resistance gene (FtsI_D350N, S357N), one tigecycline resistance gene (tetR_N/A), and five chromosomally encoded genetic systems that confer MDR (MarR_Y137H, G103S, MarR_N/A, SoxR/SoxS_N/A, AcrR_N/A, and MexZ_K127E). In conjunction with the drug-resistance profiles of the isolates, it can be seen that the distribution of drug-resistance genes is highly consistent with the drug-resistance profiles of the isolates as a whole. It also confirms the correlation between the presence of these resistance genes and resistance phenotypes.
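The N50 values quoted for the assemblies follow the standard definition — the length of the shortest contig such that contigs at least that long cover half the total assembly — which can be computed as follows (a generic implementation, not the assembler's own code):

```python
def n50(contig_lengths):
    """N50 of an assembly: smallest contig length L such that
    contigs of length >= L together cover at least half the assembly."""
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2.0
    running = 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length
    return 0  # empty input
```

For example, an assembly with contigs of 8, 8, 4, 3, 3, 2, 2 and 2 kb has an N50 of 8 kb, since the two largest contigs already cover half of the 32 kb total.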
Discussion Antimicrobial resistance is one of the greatest threats to human health in the 21st century, especially with regard to zoonotic pathogens. E. coli, K. pneumoniae, S. Typhimurium, and P. aeruginosa are significant zoonotic pathogens that cause a wide range of clinical diseases [2]. Tigecycline is an important drug for the treatment of drug-resistant strains in the clinic, and it is the last line of defense for the treatment of bacterial infection [27]. We present a study in which we first identified the presence of tigecycline-resistant strains in northwest China and then analyzed the drug resistance as well as the WGS of the isolates. Furthermore, this study found that herbivores in northwest China were relatively low in carrying tigecycline-resistant bacteria, which is an interesting finding. Antibiotic resistance genes were widely distributed in isolates, including fluoroquinolone resistance, fosfomycin resistance, and other genes endowed with resistance to β-lactamase, fosfomycin, aminoglycosides, sulfonamides, quinolones, tetracycline and chloramphenicol, and several chromosomally encoded genetic systems that confer MDR. All isolates except LZ-1P-09 were MDR phenotypes that carried at least one β-lactamase gene and the MICs of carbapenem antibiotics supported the presence of resistance genes of these antibiotics. Only one drug resistance gene, MexZ, was detected in the P. aeruginosa isolate LZ-1P-09, and MexZ is the main reason for P. aeruginosa's natural resistance to tigecycline [28]. By comparing the protein sequence of the gene, we found that there was a mutation form K127E in MexZ which had not been previously reported. In addition, it is interesting that the fosfomycin resistance gene in E. coli and S. Typhimurium is GlpT, and that in K. pneumoniae is UhpT. Although we screened the isolates by adding tigecycline to the culture medium, only the tetR gene was detected. 
TetR is the repressor of the tetracycline resistance element, wherein its N-terminal region forms a helix-turn-helix structure and binds DNA. The binding of tetracycline to tetR reduces the repressor affinity for the tetracycline resistance gene (tetA) promoter operator sites [29,30]. Therefore, we believe that the reason for tigecycline-resistant isolates may be due to the existence of MarR, SoxR/SoxS, AcrR, and MexZ. In short, the MICs of the isolates we tested for 14 antibiotics supported the presence of drug-resistance genes in these isolates as well as the existence of MICs for these antibiotics. Among the drug resistance genes mentioned earlier in this article, gyrA and ParC are genes encoding DNA helicase and topoisomerase IV in cells, and the latter two are the target sites of quinolone drugs [31]. Mutations in gyrA and ParC can change the target sites, making the drugs unrecognizable, thus leading to the formation of drug resistance [32]. GlpT and UhpT are the transport proteins of fosfomycin, and they are also symporters of glycerol-3-phosphate and glucose-6-phosphate. When GlpT and UhpT are mutated, fosfomycin cannot be transported into the cell, resulting in a significant decrease in cell sensitivity to fosfomycin [33,34]. FtsI encodes penicillin-binding protein 3 (PBP3), which is the active site of beta-lactam. Mutations in FtsI make drugs unrecognizable [35]. MarR represses the transcription of MarRAB by binding to MarO and negatively controlling the MarA-dependent expression of other genes in the regulon [36]. By mutation of MarR or MarO, the repressor is rendered inactive. The resulting overexpression of MarA produces antibiotic resistance by increasing the expression of the major multidrug efflux pump AcrAB-TolC and down-regulating the outer membrane protein OmpF via the small RNA (sRNA) MicF [37]. SoxR/S is a chromosomally encoded genetic system that confronts lowlevel MDR in E. coli and S. Typhimurium [38]. 
Single point mutations or other unknown changes of SoxR lead to the high expression of SoxS, which can increase efflux pump activity and decrease cell permeability, creating resistance to a variety of antibiotics [39]. AcrR is an HTH-type transcriptional regulator, a local transcriptional inhibitor, which can inhibit the transcription of the acrB gene, which encodes the multidrug efflux pump AcrB. When a point mutation occurs in acrR, it loses its inhibitory effect on acrB, resulting in the high expression of acrB and an increase in the number of efflux pumps [40]. MexZ plays a negative role in the expression of the mexXY efflux pump in P. aeruginosa. MexXY plays an important role in the efflux of a variety of antibiotics. MexZ mutants can lose the inhibition of mexXY, which increases the number of pumps [41]. To the best of our knowledge, MexZ_K127E is a new point mutation of the MexZ gene in P. aeruginosa found in this study. In addition, through the MLST analysis of 11 isolates, we found that only SX-2E-03 and SX-1E-06 belonged to the same ST (ST101). It was evident from the STs of isolates from different regions, as well as from isolates within the same region, that there is wide genetic diversity among them. It is imperative to adopt more flexible strategies for clinical treatment and prevention because there are not only many kinds of drug-resistant bacteria in northwest China, but also many STs with different drug-resistance profiles. A number of mechanisms are thought to contribute to Gram-negative bacteria's intrinsic and acquired drug resistance. In our data, WGS accurately identified the exact mechanisms of antibiotic resistance for Gram-negative isolates. Conclusions According to our findings, tigecycline-resistant bacteria were found on farms in Gansu, Qinghai, Xinjiang, Sichuan, and Shaanxi in northwest China. There are many different STs of multidrug-resistant bacteria.
As a result of sequencing and analyzing the WGS of the isolates, we identified drug-resistance genes and virulence factors. A joint analysis of the drug-resistance genes and drug-resistance spectrum of the isolates also confirmed the presence of drug-resistance genes. In addition, based on epidemiological investigation and WGS analysis, despite the low resistance rate of tigecycline, we believe that the multidrug resistance of tigecycline-resistant isolates in northwest China is a serious problem; additionally, the mechanism of drug resistance is complex, which makes prevention and control more difficult. In light of this, we should carry out more research on MDR bacteria and increase surveillance of these bacteria. Author Contributions: Q.C. and X.G. designed this project and conceptualization; Q.C. and Y.C. supervised this project; Q.C., Y.Y. and C.S. performed the experiments; Q.C., Y.Y., H.Q., X.G., D.L. and Y.C. analyzed the data; Q.C., Y.Y. and X.G. prepared figures; Q.C., X.G., Y.Y. and C.S. drafted this manuscript and visualization. All authors have read and agreed to the published version of the manuscript.
Airway pressure release ventilation Airway pressure release ventilation was introduced to clinical practice about two decades ago as an alternative mode for mechanical ventilation; however, it had not gained popularity until recently as an effective, safe alternative for difficult-to-oxygenate patients with acute lung injury/acute respiratory distress syndrome. This review will cover the definition and mechanism of airway pressure release ventilation, its advantages, indications, and guidance. Introduction Airway pressure release ventilation (APRV) was introduced to clinical practice about two decades ago as an alternative mode for mechanical ventilation; however, it had not gained popularity until recently as an effective, safe alternative for difficult-to-oxygenate patients with acute lung injury/acute respiratory distress syndrome (ALI/ARDS). APRV has many appealing features applicable to our current understanding of ALI/ARDS treatment, such as minimizing ventilator-induced lung injury (VILI) using lung-protective strategies. There have been only a few studies on APRV, mostly on animals and even fewer on humans, some showing superiority to the conventional ventilatory methods but none showing any mortality differences. In this review, we will answer four important questions about APRV: What is it (definition and mechanism of action)? Why use it (advantages)? When to use it (indications and contraindications)? How to use it (guidelines and troubleshooting)? What is airway pressure release ventilation (APRV) APRV was described initially by Stock and Downs in 1987 [1,2] as a continuous positive airway pressure (CPAP) with an intermittent release phase. APRV applies CPAP ('P high') for a prolonged time ('T high') to maintain adequate lung volume and alveolar recruitment, with a time-cycled release phase to a lower set pressure ('P low') for a short period of time ('T low', or release time) where most of ventilation and CO2 removal occurs [Figure 1].
Using a high-flow (demand valve) CPAP circuit, unrestricted spontaneous breathing can be integrated and can happen at any time regardless of the ventilator cycle. If the patient has no spontaneous respiratory effort, APRV becomes identical to inverse-ratio, pressure-limited, time-cycled assisted mechanical ventilation (pressure-controlled ventilation). [3] In acute respiratory distress syndrome (ARDS), the functional residual capacity (FRC) and lung compliance are reduced, and thus the elastic work of breathing (WOB) is elevated. By applying CPAP, the FRC is restored and inspiration starts from a more favorable pressure-volume relationship, facilitating spontaneous ventilation and improving oxygenation. [4] Applying 'P high' for a 'T high' (80-95% of the cycle time), the mean airway pressure is increased, ensuring almost constant lung recruitment (open-lung approach), in contrast to the repetitive inflation and deflation of the lung using conventional ventilatory methods, which could result in ventilator-induced lung injury (VILI); [5,6] or the recruitment maneuvers, which have to be done frequently to avoid derecruitment. Mean airway pressure on APRV is calculated by this formula: [(P high × T high) + (P low × T low)] / (T high + T low). Minute ventilation and CO2 removal in APRV depend on lung compliance, airway resistance, the magnitude and duration of pressure release and the magnitude of the patient's spontaneous breathing efforts. Spontaneous breathing plays a very important role in APRV, allowing the patient to control his/her respiratory frequency without being confined to an arbitrary preset inspiratory:expiratory ratio (I:E), thus improving patient comfort and patient-ventilator synchrony and reducing the amount of sedation necessary. [10] The addition of pressure support ventilation (PSV) above 'P high' to aid spontaneous breaths is feasible, but this addition contradicts limiting the airway pressure and may cause significant lung distention; furthermore, the
imposition of PSV to APRV reduces the benefits of spontaneous breathing by altering the normal sinusoidal flow of spontaneous breath to a decelerating assisted mechanical breath, as flow and pressure development are uncoupled from patient effort. [13] On the other hand, the use of automatic tube compensation (ATC) during APRV may help overcome the artificial airway resistance during spontaneous breathing using computerized ventilator algorithms, without causing overt lung distention while preserving the sinusoidal flow pattern of spontaneous breath. [14] Why use APRV (advantages) I. Effects on oxygenation The improved oxygenation parameters (PaO2/FiO2, lung compliance) during APRV are attributed to the beneficial effects of spontaneous breathing through better gas distribution and better V/Q matching to the poorly aerated dorsal region of the lungs, along with the higher mean airway pressure obtained compared to conventional ventilation: 'open lung approach.' [17,18] II. Effects on hemodynamics During spontaneous breathing, the pleural pressure decreases, leading to a decrease in the intra-thoracic and right atrial pressure, thus increasing venous return, improving the pre-load and consequently increasing the cardiac output. [3] Kaplan et al. [19] compared the hemodynamic effects in patients with ALI/ARDS on APRV versus inverse-ratio PCV; they found significantly higher cardiac index (CI, L/min/m2), oxygen delivery (DO2, mL/min), mixed venous oxygen saturation (SvO2, %) and urine output (mL/kg/h), and significantly lower vasopressor and inotrope usage, lactate concentration (mmol/L) and CVP (mmHg) while on APRV. Putensen et al. [7] compared APRV and PCV in 30 trauma patients, and they found significantly less vasopressor and positive inotrope usage, with significantly increased CI and DO2. III.
Effects on regional blood flow and organ perfusion In a study by Hering et al., [20] APRV improved respiratory muscle blood flow in 12 pigs with ALI. In a similar study by the same authors, [21] APRV showed improved blood flow to the stomach, duodenum, ileum and colon in 12 pigs with ALI. Kaplan et al. [19] found significantly improved urine output and glomerular filtration rate in patients on APRV as compared to PCV. IV. Effects on sedation and neuromuscular blockade usage The level of analgesia and sedation required during CMV is usually equivalent to a Ramsay score of between 4 and 5 (i.e., a deeply sedated patient); during APRV, a Ramsay score of between 2 and 3 can be targeted (i.e., an awake patient who is responsive and cooperative). APRV has been shown to decrease the need for neuromuscular blockade use by 70% and the use of sedation by about 40% compared to conventional mechanical ventilation in other studies. [7,10,16,18,19] The decreased usage of sedatives and neuromuscular blockers may translate into decreased length of mechanical ventilation and ICU length of stay. [7,22] When to use APRV Indications Based on clinical and experimental data, APRV is indicated in patients with ALI, ARDS and atelectasis after major surgery. [8,9,11,17,19,20] Contraindications Because of the lower levels of sedation used to allow spontaneous breathing, APRV should not be used in patients who require deep sedation for management of their underlying disease (e.g., cerebral edema with increased intracranial pressure or status epilepticus). To date, no data are available on the use of APRV in patients with obstructive lung disease (bronchial asthma exacerbations or chronic obstructive pulmonary diseases). Theoretically, using a short release time is not beneficial in those patients, who require a prolonged expiratory time. Likewise, use of APRV has not been investigated in patients with neuromuscular disease and is not supported by any evidence.
How to adjust pressures and tidal volumes during APRV Mechanical ventilation with positive end-expiratory pressure (PEEP) titrated above the lower inflection point of the static pressure-volume curve and a low tidal volume (TV) at 6 mL/kg are thought to prevent alveolar collapse at end-expiration and overdistension of lung units at end-inspiration in patients with ARDS. This lung-protective strategy has shown improvement in mortality in patients with ARDS. [23,24] The setup at the bedside is simple and the goals are the same: to maintain adequate oxygenation and ventilation without overt lung distention during 'P high' and without lung derecruitment and/or intrinsic PEEP during 'P low.' Setting pressures: 'P high' should be below the high inflection point (HIP) on the static volume-pressure curve, while 'P low' should be above the low inflection point (LIP) on the same curve [Figure 2]. Daoud: Airway pressure release ventilation Conclusions APRV is a simple, safe and effective ventilatory method for patients with ALI/ARDS; currently there is some but no strong evidence to suggest its superiority over other ventilatory methods in regard to oxygenation, hemodynamics, regional blood flow, patient comfort and length of mechanical ventilation. There is no evidence of improved mortality outcome with APRV as compared to other modes of mechanical ventilation. There is a need for large human trials comparing APRV to conventional mechanical ventilation using lung-protective strategies before drawing final conclusions about this interesting mode of ventilation. Currently we do not recommend APRV for every patient with ALI/ARDS; for carefully selected patients, consultation with a specialist and respiratory therapist with expertise in using APRV may be necessary.
Setting times: 'T high' should allow complete inflation of the lungs, as indicated by an end-inspiratory phase of no flow when spontaneous breathing is absent, and 'T low' should allow complete exhalation, with no gas flow at its end, to assure absence of intrinsic or auto-PEEP [Figure 3]. It is also recommended to set ATC to 100% and to avoid over-sedation. Troubleshooting • Maneuvers to correct poor oxygenation include 1) increasing either 'P high,' 'T high' or both to increase mean airway pressure; 2) changing the patient to the prone position along with APRV. [3] • Maneuvers to correct poor ventilation include 1) increasing 'P high' and decreasing 'T high' simultaneously to increase minute ventilation while keeping a stable mean airway pressure (preferred method); 2) increasing 'T low' in 0.05-0.1 s increments; 3) decreasing sedation to increase the patient's contribution to minute ventilation. [3] Figure 1: Pressure-time curve for APRV. 'P high' is the high CPAP, 'P low' is the low CPAP, 'T high' is the duration of 'P high,' and 'T low' is the release period or the duration of 'P low.' Spontaneous breathing appears on top of 'P high.' Figure 2: Static pressure-volume curve during volume-controlled mechanical ventilation. High pressure ('P high') is set below the high inflection point (HIP) and low pressure ('P low') is set above the low inflection point (LIP).
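The mean-airway-pressure formula given earlier is a simple time-weighted average over the cycle. A minimal sketch, with hypothetical ventilator settings chosen only for illustration:

```python
def mean_airway_pressure(p_high, t_high, p_low, t_low):
    """Time-weighted mean airway pressure on APRV.

    Pressures in cmH2O, times in seconds, following the formula
    [(P high x T high) + (P low x T low)] / (T high + T low).
    """
    return (p_high * t_high + p_low * t_low) / (t_high + t_low)

# Hypothetical settings: 'P high' 28 cmH2O for 5.0 s, 'P low' 0 cmH2O for 0.5 s.
print(round(mean_airway_pressure(28.0, 5.0, 0.0, 0.5), 1))  # -> 25.5
```

Because 'T high' dominates the cycle (80-95% of cycle time), the mean airway pressure stays close to 'P high', which is the mechanism behind the near-constant lung recruitment described above.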
Kaon weak interactions in powers of 1/Nc I review a recent analytic method for computing matrix elements of electroweak operators based on the large-Nc expansion. In particular, I give a rather detailed description of the matching of the quark Lagrangian onto the meson chiral Lagrangian governing K0-K0bar mixing. In the DeltaS=1 sector, I explain how to obtain an estimate for both epsilon'/epsilon and the DeltaI=1/2 rule. Finally, I give an example of how the method can also be useful for lattice calculations. Introduction The study of kaon weak interactions is difficult due to the fact that the strong interactions become nonperturbative at a scale Λ_QCD ∼ 1 GeV, which is higher than the kaon mass M_K. However, because there is the large hierarchy Λ_QCD/M_W ≪ 1 with respect to the scale of weak interactions, M_W, the problem can be simplified with the use of Effective Field Theory techniques. In the Effective Field Theory of the Standard Model valid at the kaon scale, one deletes from the Lagrangian any field whose associated particle mass is higher than Λ_QCD. At the same time one introduces higher-dimension operators with the u, d, s quarks as degrees of freedom in order to reproduce the processes that in the Standard Model were taking place due to virtual exchange of the W and other heavy fields. The introduction of these higher-dimension operators modifies drastically the ultraviolet properties of the Lagrangian, making it look "nonrenormalizable". However, if the basis of higher-dimension operators is complete, it is possible to absorb all the divergences with the counterterms supplied by these operators. Of course, this takes care of the divergences, but it leaves a finite piece behind. Furthermore, it is a finite piece which is dependent on the conventions chosen to do the calculation (such as, e.g., the regularization scheme chosen, MS or MS-bar; the precise definition of γ5 employed, NDR or HV; evanescent operators, etc.).
Obviously physical results cannot be scheme/convention dependent, and the ambiguity is resolved in the so-called matching conditions. These conditions equate a Green's function computed in the Effective Field Theory and in the full Standard Model, imposing that the physics is the same even though the heavy particles are missing in the former. The advantage of the Effective Field Theory technique is that it is simpler. This simplicity allows, e.g., the systematic resummation of all powers of the strong coupling α_s accompanied by large logarithms, in combinations such as α_s^n log^n(M_W/Λ_QCD), using renormalization group techniques [1]. However, below Λ_QCD there is no point in resumming powers of α_s. Fully nonperturbative effects take place and, in fact, kaon transitions are described in terms of a Chiral Lagrangian with the kaon field as an explicit degree of freedom, and no longer in terms of fields for the u, d, s quarks. The matching condition between the quark Lagrangian and the Chiral Lagrangian requires a nonperturbative treatment. This is where the large-N_c expansion comes in. This expansion is very well suited for this because it can be implemented both at the level of quarks and at the level of mesons [2]. However, even at the leading order in the large-N_c expansion, Green's functions in QCD receive a contribution from an infinity of resonances whose masses and couplings are unknown. This is why one actually has to resort to an approximation to large-N_c QCD. This approximation, which has been termed the "Hadronic Approximation" (HA) [3], consists of the ratio of two polynomials whose coefficients are fixed by matching onto the first few terms in the chiral and operator product expansions of the Green's function one is interested in. The rational approximant so constructed constitutes an interpolator between the low- and high-momentum regimes, which one can use to perform the necessary calculations. Leading contribution: dimension-six quark operators.
In the Standard Model, a double exchange of W bosons generates, through the famous box diagram, a ∆S = 2 transition amplitude between the K0 and the K̄0. Below the charm mass, there arises the effective operator of Eq. (1), where the m_q are the corresponding quark masses, η_{1,2,3} are some numerical coefficients of O(1) [23], and an overall factor expresses the running under the renormalization group of the operator in Eq. (1). The parameter κ contains the scheme dependence and equals 0 (−4) in the naive dimensional regularization (resp. 't Hooft-Veltman) scheme. In the case of the top one defines the effective mass [23], to take into account that the top mass is heavier than the W. Physical observables such as the K_L − K_S mass difference and ε_K get a contribution from the real and imaginary part, respectively, of the matrix element. Below Λ_QCD ∼ 1 GeV, it no longer makes any sense to think of an operator written in terms of quarks and, instead, one writes a Chiral Lagrangian. The ∆S = 2 chiral operator at order O(p²) is given in Eq. (4) [5], where [λ_32]_ij = δ_i3 δ_2j is a (spurion) matrix in flavor space, F_0 ≃ 0.087 GeV is the pion decay constant (in the chiral limit) and U is a 3 × 3 unitary matrix collecting the Goldstone boson degrees of freedom, which transforms as U → RUL† under a flavor rotation (R, L) of the group SU(3)_L × SU(3)_R. The scale Λ²_{∆S=2} inherits some dependence on short-distance physics, but there is also a coupling constant, g_{∆S=2}, to be determined via a matching condition. What is this matching condition? Since covariant derivatives contain external fields l and r, i.e. D_μU†U = ∂_μU†U + iU†r_μU − il_μ, one sees that Eq. (4) contains a "mass term", r_μ^{ds} r^{ds,μ}, for the external field r_μ^{ds}. This r_μ^{ds} is precisely the external field that couples to the right-handed current d̄_R γ^μ s_R in the kinetic term for the quark field in the QCD Lagrangian.
Furthermore, a mass term for the r_μ^{ds} field changes strangeness by two units, so that it can only come about from the quark Lagrangian because of the presence of the operator (s̄_L γ^μ d_L)² in Eq. (1). Consistency demands that the two mass terms be the same. Equating the term r_μ^{ds} r^{ds,μ} obtained from Eq. (4) to that obtained from Eq. (1) (plus gluon interactions), one obtains the matching condition of Eq. (6) [8], where μ²_had is an arbitrary scale used to define the integral in terms of a dimensionless variable z ≡ Q²/μ²_had, where Q is a loop momentum. The coupling g_{∆S=2} is renormalization group invariant. The function W(z) is defined in Eq. (7), where, in turn, W_{LRLR} is defined through an integral over the solid angle of the momentum q, with Q² ≡ −q². In Eq. (8) the function W^{μανβ}_{LRLR}(q, l) stands for the expression in Eq. (9), with the unity stemming from the factorized part of the function W^{μανβ}_{LRLR}. Notice that the function W^{μανβ}_{LRLR}(q, l) is essentially a 2-point "left × left" Green's function, with incoming momentum Q, with a double insertion of a right-handed current at zero momentum (i.e. l → 0). Even though W(z) is an order parameter of spontaneous chiral symmetry breaking and, therefore, receives no contribution from perturbation theory to all orders in α_s, the integral in Eq. (6) is divergent (i.e. ill-defined) and must be regularized. Consistency demands that the regularization used be the same as in the calculation of the Wilson coefficient (2), and this is why the integral in Eq. (6) must be defined in the same scheme. In principle, knowledge of W(z) for the full range of z is required to calculate the coupling g_{∆S=2}(μ). However, lacking the solution of QCD at large N_c, this information is not known. What is known, nevertheless, is the low- and high-z expansions of W(z), because they are given by chiral perturbation theory and the operator product expansion, respectively. If we build a good interpolator for the region in between, the answer is ready.
In the large-N_c limit the function W(z) is a meromorphic function, i.e. it has an infinity of isolated poles on the negative z axis, but no cut. The Hadronic Approximation (HA) I shall use is an approximation to this function which consists in keeping only a finite number of poles, fixing their residues so that the coefficients of the chiral and operator product expansions are reproduced. In mathematics, this is called a rational approximant. In general the convergence of this type of approximant is difficult to establish for an arbitrary function [6]. However, in our case we expect it to work reasonably well. For one thing, we expect W(z) to be a smooth function since it is a Green's function in the euclidean. Furthermore, if the operator product expansion sets in at scales ≳ 1 GeV and the chiral expansion sets in at scales ≲ M_ρ, then the gap to interpolate by the approximant is not very large and, consequently, one may expect the error to be reasonably small. In particular, we have verified within a model that this approximation works nicely [7] and, in fact, the traditional success of vector meson dominance is an indication that things may work similarly for QCD. At any rate, whenever possible, we shall check on the convergence of the approximation for the case at hand. Computing the chiral expansion for W(z) at low z, one finds Eq. (11), where the first (resp. second) term comes from the contribution of O(p²) (resp. O(p⁴)) in the strong chiral Lagrangian. At high z, the operator product expansion yields Eq. (12). The δ_K term in Eq. (12) is the contribution from dimension-eight operators in the operator product expansion. We may use this term to monitor the convergence of our expansion in the following way. First, we may start by considering the δ_K term in the operator product expansion as "subleading" and neglect its contribution altogether. After reinstating it later on and redoing the calculation, we may compare the two results thus obtained.
Clearly, if our approximation is to make sense, both results should be close to one another. So let us neglect δ_K in Eq. (12) for the time being. Since we wish to calculate the integral in Eq. (6), we need to interpolate between the low- and high-z regimes in Eqs. (11,12). To this end we observe that the pole structure of the diagrams depicted in Fig. 1 shows single, double and triple poles (those which can be obtained by cutting through the solid line between crosses). In the strict large-N_c limit, the Mittag-Leffler theorem [9] for meromorphic functions allows us to write W(z) in the form of Eq. (13), where the infinite sum extends over all possible resonances allowed by quantum numbers. The analytic structure of (13) suggests that, as a first step, we restrict ourselves to just one resonance in this sum. The HA interpolator so constructed then reads as in Eq. (14), where ρ = M²_ρ/μ²_had, and we determine the constants a_V, b_V, c_V by imposing that it reproduce the behavior in Eqs. (11,12). The result of this interpolation is shown as the dash-dotted curve in Fig. 2. This figure also shows the OPE and chiral expansions as dashed curves at large and small values of z, respectively. Notice that the shape of the interpolating function W(z) shown in Fig. 2 is very smooth. This kind of shape for the Green's function is very representative, and we have found similar shapes in all the different cases we have studied. One can now use the example of W(z) to develop some intuition about the role played by the different energy scales in the problem. Although in a one-scale theory like QCD all scales are ultimately related to each other, it is a fact of life that there is a clear numerical hierarchy. First, there are chiral parameters such as F_π or ⟨ψ̄ψ⟩^{1/3} whose scale is ∼ 100 − 200 MeV. Second, there is a typical resonance mass, or mass gap, whose scale is much larger, ∼ Λ_QCD ∼ 1 GeV. Since F_π grows with N_c, a naive use of the strict large-N_c limit would lead one to the erroneous conclusion that Λ_QCD/F_π is negligible.
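The residue-fixing step of the Hadronic Approximation can be illustrated numerically. The sketch below is NOT the paper's actual Eq. (14): it uses a generic one-resonance ansatz with up to a triple pole (echoing the single/double/triple pole structure mentioned above) and purely hypothetical matching coefficients, and fixes the three constants from two low-z conditions plus the leading large-z behavior:

```python
# Hypothetical matching inputs (NOT the paper's actual values):
w0   = 1.0    # W(0), from the O(p^2) chiral term
w1   = -0.5   # W'(0), from the O(p^4) chiral term
cope = 0.3    # leading 1/z OPE coefficient at large z
rho  = 0.6    # pole position, rho = M_rho^2 / mu_had^2

# One-resonance ansatz with up to a triple pole:
#     W(z) = (a + b z + c z^2) / (z + rho)^3
# Matching conditions:
#     large z:  W(z) -> c / z            =>  c = cope
#     W(0)   = a / rho^3                 =>  a = w0 * rho^3
#     W'(0)  = b / rho^3 - 3 a / rho^4   =>  b = rho^3 * w1 + 3 * a / rho
c = cope
a = w0 * rho**3
b = rho**3 * w1 + 3 * a / rho

def W(z):
    """Rational interpolator between the chiral and OPE regimes (schematic)."""
    return (a + b * z + c * z**2) / (z + rho)**3

# Sanity checks on the imposed behavior.
assert abs(W(0.0) - w0) < 1e-12
eps = 1e-6
assert abs((W(eps) - W(-eps)) / (2 * eps) - w1) < 1e-5   # numerical W'(0)
assert abs(1e6 * W(1e6) - cope) < 1e-3                   # large-z tail
```

With more matching conditions (e.g. the δ_K term of Eq. (12)) one adds another pole and solves the correspondingly larger linear system, which is exactly the enlargement to a vector-plus-scalar interpolator discussed below.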
In matching conditions like (6) this observation is crucial. The reason is that the factorized contribution is governed by F_π (which in Eq. (6) has been scaled out, which is why the first term is unity), whereas the unfactorized contribution is governed by Λ_QCD. Since W(z) ∼ 1 in the region z ≲ Λ²_QCD/μ²_had, before the OPE sets in, and neglecting the logarithmic divergence from the OPE tail, this means that the correction to unity in Eq. (6) will be of order Λ²_QCD/(4πF_π)², which is not a negligible contribution at all. In this case keeping only the factorized contribution is not safe, because the N_c → ∞ limit happens to select a scale which is "abnormally small", F_π, as compared to the larger scale Λ_QCD, which can only show up in the next-to-leading 1/N_c terms. In these cases it is not unnatural to expect large unfactorized contributions. As a matter of fact, later on we shall see that the unfactorized contribution is ∼ 50% of the factorized contribution in the case of B_K, but it is several times larger in the case of the strong penguin contribution to ε′/ε. However, once one has included the unfactorized contribution with its Λ_QCD scale, there is no further scale in the game. Therefore, there is no reason to expect that subleading effects will still yield larger contributions, with the consequent breakdown of the large-N_c expansion. These subleading effects are typically either an OZI-violating amplitude or related to resonance widths, and all evidence so far is compatible with these two being reasonably small. Notice that something similar happens in the relationship between the η′ mass and the topological susceptibility for different numbers of colors [10]: even though the η′ mass is not at all small as compared to Λ_QCD (whereas it vanishes when N_c → ∞), the 1/N_c expansion gives a good description of the lattice data for different values of N_c, i.e. the expansion does not break down.
Let us come back to our matching condition (6). Having disregarded the δ_K contribution, one sees that the interpolator W(z) in Eq. (14) crosses the OPE curve at z ∼ 0.4 (which is equivalent to Q ∼ 900 MeV). Merging into the OPE only takes place at values of z which are larger than those shown in the plot. Assuming that the OPE is a fair description from Q ∼ 900 MeV onwards, one can now get an estimate of the integral in Eq. (6) by first integrating with the interpolator W(z) in the region 0 ≤ z ≲ 0.4 and then with the OPE curve in the region 0.4 ≲ z < ∞. The result is usually presented in the form of Eq. (16), and one finds B^χ_K = 0.38 ± 0.11, where the error is an estimate of higher-order 1/N_c corrections, ∼ 30%. This error [8] covers all reasonable variations in the input parameters (e.g. the ρ mass). The combination appearing in Eq. (16) parametrizes the K0 − K̄0 matrix element of the ∆S = 2 operator in Eq. (1) after including chiral corrections (this is why B_K ≠ B^χ_K), and governs quantities such as ε_K and the K_L − K_S mass difference. The inclusion of these chiral corrections is a necessary ingredient in order to make contact with the physical world since, moreover, there are indications that they may be sizeable [12]. We plan to be able to report on this in the near future but, for now, the result after Eq. (16) is still in the chiral limit. One may now include the contribution from the δ_K term in the operator product expansion of Eq. (12). This coefficient δ_K parameterizes the matrix element in Eq. (18), and a sum rule analysis [13] gives δ²_K = 0.12 ± 0.07 GeV². Since we now have one more condition at large z, we have to include one more resonance in the interpolator in order to achieve matching. First, we observe that the low-z behavior of W(z) depends on L_3, and this low-energy constant, unlike the other L_i's appearing in (11), receives a contribution from scalar resonances [14].
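The piecewise estimate described above (interpolator below the crossing point, OPE curve above it) can likewise be sketched. The numbers below are hypothetical stand-ins, not the paper's actual W(z) or OPE coefficients, and an explicit UV cutoff stands in for the scheme-dependent regularization of the logarithmically divergent OPE tail:

```python
import math

rho = 0.6
a, b, c = 0.216, 0.972, 0.3  # hypothetical residues, not the paper's values

def W_interp(z):
    # one-resonance rational approximant (schematic)
    return (a + b * z + c * z**2) / (z + rho)**3

def W_ope(z):
    # leading large-z (OPE) behaviour, ~ c/z
    return c / z

def find_crossing(f, g, lo, hi, tol=1e-10):
    """Bisection for the point where the two curves meet."""
    d = lambda z: f(z) - g(z)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if d(lo) * d(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def trapezoid(f, lo, hi, n=10000):
    """Simple trapezoid-rule quadrature."""
    h = (hi - lo) / n
    return h * (0.5 * f(lo) + 0.5 * f(hi) + sum(f(lo + i * h) for i in range(1, n)))

z_cross = find_crossing(W_interp, W_ope, 0.05, 5.0)

# Interpolator up to the crossing, then the OPE tail analytically up to a cutoff.
z_uv = 100.0
low = trapezoid(W_interp, 0.0, z_cross)
high = c * math.log(z_uv / z_cross)  # integral of c/z from z_cross to z_uv
estimate = low + high
```

For these illustrative constants the crossing lands near z ≈ 0.5; with the paper's actual interpolator the crossing is at z ∼ 0.4, and the cutoff-dependent tail is replaced by the properly MS-bar-defined integral of Eq. (6).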
Second, a simple pole contribution from a scalar resonance is compatible with the analytic structure and quantum numbers shown in Fig. 1. Therefore, we enlarge the interpolator in Eq. (14) with a scalar pole, as in Eq. (19), and allow for a generous variation of the scalar resonance mass, M_S = 900 ± 400 MeV. Imposing that W(z)_{V,S} match both expansions in Eqs. (11,12), one can obtain the unknown residues and determine W(z), Eq. (19). This function is the solid line plotted in Fig. 2. One sees that it matches the OPE in a smoother way than the interpolator obtained earlier (dash-dotted line in Fig. 2), but there is no dramatic difference in the area underneath. In summary, the neglect of higher-order terms in the expansions (11,12), as a first approximation, is a self-consistent procedure. Using now this improved function W(z) and Eq. (6), we can calculate B^χ_K once again and obtain our final value. We emphasize that, because the coupling g_{∆S=2} in Eq. (5) is of O(p²) in the weak chiral Lagrangian (4), our value is in the chiral limit. This result is in nice agreement with recent determinations done on the lattice, where it now begins to be possible to consider dynamical fermions [15]. Corrections from dimension-eight quark operators. The passage from the Lagrangian in terms of quarks and gluons (1) to the Lagrangian (4) in terms of Goldstone mesons requires the integration of all the resonances. But in this integration the mass scale involved is the mass gap, i.e. Λ_QCD ∼ 1 GeV, which is not negligible compared to the charm mass. Therefore, in the matching condition for Λ²_{∆S=2} in Eq. (5) there may be extra contributions of O(Λ²_QCD). To analyze the presence of these extra contributions one obviously must consider effects which are of O(Λ²_QCD/m²_c) relative to the contributions in Eq. (5). Therefore, after integrating out the charm quark, one must go to a Lagrangian of quark operators which are dimension eight rather than dimension six as in Eq. (1).
Since in this case there is not the competition between the scales F_π and Λ_QCD we discussed earlier, to simplify matters we shall take the large-N_c limit. Dimensional analysis shows that ∆S = 2 dimension-eight operators, unlike the dimension-six operator in Eq. (1), cannot come with a quark mass out front. It is not surprising, then, that the GIM mechanism becomes fully operational for scales m_c < μ < m_t, M_W while the charm quark is active, arranging combinations like (λ_c + λ_u)² = λ²_t, which is numerically negligible since, unlike in the dimension-six case of Eq. (1), it cannot be compensated by a large m_t factor. At μ = m_c, however, the charm quark is integrated out and the above mechanism no longer applies. At this scale, therefore, ∆S = 2 dimension-eight operators do get generated. Another simplification occurs, however. Notice that we eventually want to match onto the O(p²) chiral Lagrangian in Eq. (4). Therefore any ∆S = 2 operator whose matrix element between a K0 and a K̄0 is of higher chiral order is of no interest. Furthermore, in the large-N_c limit four-quark operators factorize, i.e. they become a product of two color singlets. A dimension-eight operator can be arranged in only two ways: either as a product of two dimension-four operators, or as a product of a dimension-three operator times a dimension-five one. However, the first case yields contributions of higher chiral order, as can be seen by contracting with g_μν and using the equations of motion (in the chiral limit). So, only the combination O(dim-3) × O(dim-5) is possible. However, s̄ G_μν γ^ν d(x) is the only dimension-five current connecting a kaon to the vacuum [17]. Consequently, this means that there is only one ∆S = 2 dimension-eight operator to consider. This operator is given in Eq. (22), where g_s is the strong coupling constant.
Integrating out the charm quark yields for the coefficient c_8 the matching condition of Eq. (23) [16]. Below the charm mass, the Effective Theory is given simply by Eq. (24). Notice that the second operator is the only one producing ∆S = 1 transitions in the large-N_c limit. Away from this limit its Wilson coefficient is not unity as in Eq. (24), but gets corrected by 30% − 40%, as a naive estimate of typical 1/N_c corrections would say. At scales μ ≲ m_c, while one can still consider QCD in the perturbative regime, the Wilson coefficient c_8(μ) runs due to the fact that the square of the ∆S = 1 operator in Eq. (24), with the two up quarks propagating in a loop, mixes into the direct ∆S = 2 operator. At lowest order in the strong coupling constant, which is enough for our purposes, one finds Eq. (25) [16] for scales μ ∼ Λ_QCD ∼ 1 GeV, where c_8(m_c) is given by Eq. (23). At scales μ ∼ M_K ≪ Λ_QCD perturbation theory is no longer a valid approximation, and the matching of the Lagrangian (24) to the chiral Lagrangian (4) again requires the machinery of the Hadronic Approximation to large N_c that we used in the previous section. Skipping the details of this calculation [16], we find that the coupling constant Λ²_{∆S=2} gets the contribution of Eq. (26), where δ²_K is the parameter defined in Eq. (18), and the error is an estimate of the size of a typical 1/N_c correction. Notice that all renormalization scale and scheme dependence has canceled out, as it should. This is one of the advantages of using the HA framework. The first contribution in (26) bears the mark of charm in λ_c and stems from the matching condition for the Wilson coefficient c_8(m_c) when charm gets integrated out. Although it has an imaginary part that contributes to ε_K, its size is governed by the dynamical scale δ_K ∼ 350 MeV, which is small enough relative to m_c to yield a very small correction to ε_K.
The second contribution is the result of the running below the charm mass and subsequent matching onto the chiral Lagrangian (4); this is where resonances get integrated out. This is why only λ_u appears and, also, why the energy scale, which is a combination of resonance masses and couplings, is essentially given by the mass gap Λ_QCD ∼ 1 GeV. Since λ_u is purely real it cannot contribute to ε_K. The energy scale in (26) is comparable to m_c, which explains why the K_L − K_S mass difference gets a sizeable ∼ 10-20% correction 16.

At scales µ ≲ m_c the Standard Model gives rise to 10 four-quark operators capable of producing ∆S = 1 transitions 1. All these operators mix as the renormalization scale evolves. On the other hand, at the scale of the kaon, the Effective Field Theory for ∆S = 1 transitions is described by the chiral Lagrangian 5 in Eq. (27). In this section we shall concentrate on the contribution to ε′/ε from Q_6 which, together with the one from Q_8 27,28, is the dominant one 29. As we shall see, the analysis will tell us interesting things about the ∆I = 1/2 rule as well.

We may now run the Effective Field Theory from M_W down to the charm mass as usual but, at this point, make some simplifications in the analysis so as not to have to deal with the full 10 × 10 operator mixing matrix. Our first simplification will be to stay within the leading-log approximation. Furthermore, we shall go to first subleading order in the 1/N_c expansion but keep only those terms which are enhanced by an extra factor of n_F, the number of flavors. In this case, the operator Q_6 only mixes with Q_4, given in Eq. (28), where the sum over color indices within brackets is understood. The matching condition then reads as in Eq. (29) 18. In that expression the subscript MS is a reminder that these integrals are UV divergent and have to be regularized and renormalized using the same scheme as for the Wilson coefficients c_{4,6}. This is also true for ⟨ψ̄ψ⟩ and L_5.
As far as the N_c counting goes, all the unfactorized contributions, which are of O(n_F/N_c), are contained in the terms proportional to the functions W_DGRR and W_LLRR. The terms proportional to L_5 and unity correspond to the factorized contributions from Q_6 and Q_4, respectively, and, formally, are of O(N_c⁰). The functions W_DGRR and W_LLRR are defined through the connected four-point Green's functions after integration over the solid angle in q-momentum space, as in Eq. (8). It is not a coincidence that the pair of fermion bilinears which make up the operators Q_{6,4} in Eq. (28) also appear in these Green's functions, although they are located at different space-time points. It is the integral over Q in Eq. (29) which puts these two points back on top of each other.

The matching condition (29) imposes that the same "mass term" is obtained when computed from the covariant derivatives in the Chiral Lagrangian (27), which yields directly g_8, as when computed from the four-quark Effective Lagrangian, which requires the insertion of the combination c_6 Q_6 + c_4 Q_4 in the form shown in Eq. (29). In order to calculate the unfactorized contribution one should now construct the Hadronic Approximation.

As in the previous case of B_K, an explicit cancellation of the renormalization scale dependence is achieved. This is good news since the scale dependence of the factorized contribution is very large: it changes by a factor ∼ 2 if the renormalization scale is varied in the range M_ρ ≤ µ ≤ 1 GeV. Since this dependence has to be canceled by the unfactorized contributions, it is not unthinkable that these contributions be large. As discussed in the previous section on more general grounds, this could even be expected. And, indeed, it was found 18 that the unfactorized contribution from the operator Q_6 to the coupling g_8 in the matching condition (29) was a factor ∼ 3 larger than the factorized contribution at a scale µ = 0.8 GeV.
Although the effect was somewhat smaller for Q_4, an enhancement was found there as well. Both large contributions come from what in the jargon is called an "eye" diagram (i.e. the contraction of the "dummy" quark q in the sums in Eq. (28)). On the basis of a Nambu-Jona-Lasinio model, this enhancement has also been found by Bijnens and Prades 20. Decomposing the Wilson coefficients as in Ref. 1, one obtains the imaginary part of the coupling constant g_8 in Eq. (29), where the error is the result of varying the quark condensate (which is the source of the biggest uncertainty) in the range ⟨ψ̄ψ⟩^{1/3}(2 GeV) = (0.240-0.260) GeV. As to Imτ, its current value 23 is −6.0(5) × 10⁻⁴.

We can also estimate Re g_8 and g_27. This is a very tough test for any calculational framework, as it is in these parameters that the ∆I = 1/2 rule is rooted; a rule which still to this date defies detailed understanding. However, at the scale µ = m_c the situation is particularly simple, since only the operators Q_{1,2} have nonvanishing real Wilson coefficients. Although at scales µ ≪ m_c penguin operators will again come into play, it is not crazy to neglect this effect and stay "as if µ = m_c" in a first approximation. In this case the contribution to Re g_8 from Q_{1,2} can be estimated because the non-eye diagrams are related to those appearing in the matching condition for g_{S=2} in the case of B_K 25, while the eye diagram of Q_2 can be estimated from the eye diagram of Q_4 in the matching condition (29), just setting n_F = 1. In numbers this leads to 18 Re g_8 = 2.1 ± 0.8, to be compared to the experimental result 21,24 Re g_8 = 3.6 ± 0.1, after subtraction of chiral corrections. In spite of the large errors involved I find this result quite encouraging, mainly because the strict large-N_c result (factorization) would lead to Re g_8 = 0.6, which is way too small.
We begin to see the large unfactorized contributions which are indispensable for understanding the ∆I = 1/2 rule although, clearly, a more detailed analysis is needed before victory can be claimed. SU(3) allows one to relate g_{S=2} in B_K to g_27 because their corresponding matching conditions to the four-quark Lagrangian are SU(3) rotations of each other 26. This leads to g_27 = g_{S=2} = 0.27 ± 0.12, to be compared to g_27 = 0.30 ± 0.01 as extracted from experiment 21,24.

Current calculations of ε′/ε done on the lattice require getting rid of the fermion determinant in the path integral in order to be numerically efficient. This is accomplished by introducing some ghost quarks which, although spin-1/2 particles, commute. This lattice technique, known as "quenching", has some dramatic consequences. In particular the flavor symmetry group is changed from the usual SU(3)_L × SU(3)_R to a graded SU(3|3)_L × SU(3|3)_R 31. Furthermore, and even more importantly, there is no reason for the quenched theory to have the same weak low-energy constants as the true theory. The fact is that the current result for ε′/ε on the lattice is 3 times smaller than the experimental result, and with the opposite sign 32.

A particularly clear example of this is the transformation undergone by the strong penguin operator Q_6 in the quenched theory 33. After quenching, the operator Q_6 in Eq. (28) is no longer a singlet under SU(3)_R, or rather under SU(3|3)_R. Instead, it can be decomposed as in Eq. (36), with q̄Λq = s̄d, M the quark-mass matrix, and STr the so-called supertrace. N exhibits the non-singlet structure of Q_6^{QNS}. Comparing Eqs. (27) and (36), one sees that the couplings α^{(8,1)}_{q1,2} have a counterpart in the true theory (the weak mass term of α is not written in Eq. (27)), but α^{NS}_q is a total quenching artifact. As we did in previous sections, it is now straightforward to apply large N_c to determine these coupling constants α^{(8,1)}_{q1}, α^{NS}_q.
One conclusion follows immediately: because of the presence of the degenerate ghost quarks, the sum over flavor in the internal propagator of the eye diagrams exactly vanishes. But this contribution was actually the dominant one in the QCD case! Thus one obtains 34:

• the contribution from Q_6 to α^{(8,1)}_{q1} is much smaller than its counterpart in the true theory, g_8; and,

• = 57(13), in nice agreement with our prediction (not postdiction!) above.

Conclusions. Nature has an approximate chiral symmetry because the u, d, s quark masses are small. On a lattice, Nature is approached from the side of heavy quark masses, using chiral symmetry to guide the extrapolation. However, chiral symmetry is a property which is very difficult to achieve on the lattice, taking long hours of calculations with sophisticated codes and expensive computers and, regrettably, this becomes an important source of error. Large-N_c QCD offers the possibility to approach the problem from the other end. In the continuum chiral symmetry is much easier to achieve: it only takes the time needed to write a chiral Lagrangian. Furthermore, analytic calculations yield an understanding of the problem which allows building physical intuition. However, the flip side is that chiral symmetry is only exact when the quark masses are zero. To get to the real world, one must work one's way up to realistic quark masses by computing chiral corrections. I hope that the work presented here shows, among other things, how the continuum large-N_c expansion may complement the lattice approach towards understanding kaon weak interactions.
Achieving Conservation of Energy in Neural Network Emulators for Climate Modeling

Artificial neural networks have the potential to emulate cloud processes with higher accuracy than the semi-empirical emulators currently used in climate models. However, neural-network models do not intrinsically conserve energy and mass, which is an obstacle to using them for long-term climate predictions. Here, we propose two methods to enforce linear conservation laws in neural-network emulators of physical models: constraining (1) the loss function or (2) the architecture of the network itself. Applied to the emulation of explicitly-resolved cloud processes in a prototype multi-scale climate model, we show that architecture constraints can enforce conservation laws to satisfactory numerical precision, while all constraints help the neural network better generalize to conditions outside of its training set, such as global warming.

Motivation

The largest source of uncertainty in climate projections is the response of clouds to warming (Schneider et al., 2017). The turbulent eddies generating clouds are typically only O(100 m - 10 km) wide, meaning that climate models need to be run at spatial resolutions as fine as O(1 km) to prevent large biases. Unfortunately, computational resources currently limit climate models to spatial resolutions of O(25 km) when run for time periods relevant to societal decisions, e.g. 100 years (IPCC, 2014). Therefore, climate models rely on semi-empirical models of cloud processes, referred to as convective parametrizations (Stevens and Bony, 2013; Sherwood et al., 2014). If designed by hand, convective parametrizations are unable to capture the complexity of cloud processes and cause well-known biases, including a lack of extreme precipitation events and unrealistic cloud structures (Daleu et al., 2015; 2016).
Recent advances in statistical learning offer the possibility of designing data-driven convective parametrizations by training algorithms on short-period but high-resolution climate simulations. The first attempts have successfully modeled the interaction between small-scale clouds and the large-scale climate, offering a pathway to improve the accuracy of climate predictions (Brenowitz and Bretherton, 2018; Krasnopolsky et al., 2013). However, machine-learning-based climate models do not intrinsically conserve energy and mass, which is a major obstacle to their adoption by the physical science community for several reasons, e.g.:

1) Realistic simulations of climate change respond to a relatively small O(1 W m⁻²) radiative forcing from carbon dioxide. Inconsistencies of this magnitude can prevent this small forcing from being communicated down to the surface and the ocean, where most of the biomass lives.

2) Artificial sources and sinks of mass and energy distort weather and cloud formation on short timescales, resulting in large temperature and humidity drifts or biases for the long-term climate.

Current machine-learning convective parametrizations that conserve energy are based on decision trees (e.g. random forests), but these are too slow for practical use in climate models (O'Gorman and Dwyer, 2018). Since neural-network convective parametrizations can significantly reduce cloud biases in climate models while decreasing their overall computational cost, we ask: how can we enforce conservation laws in neural-network emulators of physical models? After proposing two methods to enforce physical constraints in neural-network models of physical systems in Section 2, we apply them to emulate cloud processes in a climate model in Section 3, before comparing their performances and how they improve climate predictions in Section 4.
Theory

Consider a physical system represented by a function f : R^m → R^p that maps an input x ∈ R^m to an output y = f(x) ∈ R^p (Eq. (1)). Many physical systems satisfy exact physical constraints, such as the conservation of energy or momentum. In this paper, we assume that these physical constraints (C) can be written as an under-determined linear system of rank n (Eq. (2)), where C ∈ R^{n×(m+p)} is a constraints matrix acting on the input and output of the system. The physical system has n constraints and, by construction, n < p + m. Our goal is to build a computationally-efficient emulator of the physical system f and its physical constraints (C). For the sake of simplicity, we build this emulator using a feed-forward neural network (NN) trained on preexisting measurements of x and y, as shown in Figure 1. We measure the quality of (NN) using the mean-squared error (MSE), defined in Eq. (3), where y_NN is the neural network's output and y the "truth". Our reference case, referred to as the "unconstrained neural network" (NNU), optimizes (NN) using MSE as its loss function. To enforce the physical constraints (C) in our neural network, we consider two options:

1. Constraining the loss function (NNL): In this setting, we penalize our neural network for violating physical constraints using a penalty P, defined as the residual from the physical constraints. We apply this penalty by giving it a weight α ∈ [0, 1] in the loss function L, which is similar to a Lagrange multiplier.

2. Constraining the architecture (NNA): In this setting, we augment the simple network (NN) with n conservation layers to enforce the conservation laws (C) to numerical precision (Figure 2), while still calculating the MSE loss over the entire output vector y. The feed-forward network outputs an "unconstrained" vector u ∈ R^{p−n} whose size is only (p − n), where n is the number of constraints. We then calculate the remaining component v ∈ R^n of the output vector y_NN using the n constraints.
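As an illustration, the penalized loss can be sketched in a few lines of numpy. The constraint matrix C, the convex combination L = αP + (1 − α)MSE, and the toy dimensions below are illustrative assumptions, not the authors' actual configuration:

```python
import numpy as np

def constrained_loss(x, y_pred, y_true, C, alpha=0.5):
    """Illustrative loss: alpha * P + (1 - alpha) * MSE, where the
    penalty P is the mean squared residual of the linear constraints
    C @ [x; y] = 0 applied to the stacked input/output vector."""
    mse = np.mean((y_pred - y_true) ** 2)
    z = np.concatenate([x, y_pred])     # stacked (m + p) vector
    P = np.mean((C @ z) ** 2)           # residual of the n constraints
    return alpha * P + (1.0 - alpha) * mse

# Toy example: one constraint demanding x0 + y0 + y1 = 0 ("conservation").
C = np.array([[1.0, 1.0, 1.0]])         # n = 1, m = 1, p = 2
x = np.array([1.0])
y_true = np.array([-0.5, -0.5])         # satisfies the constraint exactly
print(constrained_loss(x, y_true, y_true, C, alpha=0.5))   # -> 0.0
```

In this notation NNU corresponds to α = 0 and NNL to 0 < α < 1; the loss vanishes only when the prediction is both accurate and conservative.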
This defines n constraint layers (CL_1..n) that ensure that the final output y_NN exactly respects the physical constraints (C). A possible construction of (CL_1..n) solves the system of equations (C) from the bottom to the top row after writing it in row-echelon form. Note that the loss is propagated through the physical constraints.

Application to Convective Parametrization for Climate Modeling

We now implement the three neural networks (NNU, NNL, NNA) and compare their performances in the particular case of convective parametrization via emulation of the 8,192 cloud-resolving sub-domains embedded in the Super-Parametrized Community Atmosphere Model 3.0 (Collins et al., 2006; Khairoutdinov et al., 2005). We simulate an "ocean world" where the surface temperatures are fixed with a realistic equator-to-pole gradient (Andersen and Kuang, 2012). To facilitate the comparison, all networks have 5 hidden layers with 512 nodes each, and use leaky rectified linear unit activation functions, x → max(0.3x, x), to help capture the system's non-linearity. We use the RMSprop optimizer (Tieleman et al., 2012) to train each network for 20 epochs, using 3 months of climate simulation with 30-minute outputs as training data. We evaluate the performances of (NNU, NNL, NNA) on two different validation datasets:

(+0K) An "ocean world" similar to the training dataset.

(+4K) An "ocean world" where the surface temperature has been uniformly warmed by 4 K, a proxy for the effects of climate change. We do not expect (NN) to perform well in the Tropics, where this perturbation leads to temperatures outside of the training set.

Table 1 compares the performance and the degree to which each neural network violates conservation laws, as measured by the mean-squared error and the penalty P, respectively.

Results

All neural networks perform better than the multiple-linear regression model (MLR), derived by replacing the leaky rectified linear units with the identity function and optimized independently.
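A minimal sketch of such a conservation layer for linear constraints, assuming (for illustration) that the n columns of C acting on v form an invertible block; this is the linear-algebra idea only, not the authors' code:

```python
import numpy as np

def conservation_layer(x, u, C, n):
    """Complete the unconstrained network output u (size p - n) with v
    (size n) such that C @ [x; u; v] = 0 holds to numerical precision.
    Assumes the last n columns of C (acting on v) form an invertible block."""
    C_xu = C[:, : C.shape[1] - n]    # columns acting on x and u
    C_v = C[:, -n:]                  # n x n block acting on v
    v = np.linalg.solve(C_v, -C_xu @ np.concatenate([x, u]))
    return np.concatenate([u, v])    # constrained output y_NN = [u; v]

# Toy example: m = 1, p = 2, one constraint x0 + y0 + y1 = 0.
C = np.array([[1.0, 1.0, 1.0]])
y = conservation_layer(np.array([2.0]), np.array([3.0]), C, n=1)
print(y)    # -> [ 3. -5.]  (and indeed 2 + 3 - 5 = 0)
```

In a real network this solve is a fixed linear layer, so gradients of the MSE loss propagate through the constraints, as the text notes.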
While the reference "unconstrained" network NNU performs well as measured by MSE, it does so by breaking conservation laws, resulting in a large penalty P. Enforcing conservation laws via architecture constraints (NNA) works to satisfactory numerical precision on both validation datasets, resulting in a very small penalty P. Giving equal weight to MSE and P in the loss function (NNL, α = 0.5) leads to mediocre performance in all areas. In contrast, surprisingly, introducing the penalty P in the loss function with a very small weight (α = 0.01) leads to the best performance on the reference validation dataset (+0K). Both constrained networks NNL (α = 0.01) and NNA generalize better to unforeseen conditions (+4K) than the "unconstrained" network, suggesting that physically constraining neural networks improves their representation abilities. This ability to generalize is confirmed by the high R²-score when predicting the outgoing longwave radiation (Figure 3), which can be used as a direct measure of radiative forcing in climate-change scenarios. Overall, our results suggest that (1) constraining the network's architecture is a powerful way to ensure energy conservation over a wide range of climates, and (2) introducing even a small amount of information about physical constraints in the loss function and/or the network's architecture can significantly improve the generalization abilities of our neural-network emulators.
Modelling the brightness increase signature due to asteroid collisions

We have developed a model to predict the post-collision brightness increase of sub-catastrophic collisions between asteroids and to evaluate the likelihood of a survey detecting these events. It is based on the cratering scaling laws of Holsapple and Housen (2007) and models the ejecta expansion following an impact as occurring in discrete shells, each with its own velocity. We estimate the magnitude change for a series of target/impactor pairs, assuming it is given by the increase in reflecting surface area within a photometric aperture due to the resulting ejecta. As expected, the photometric signal increases with impactor size, but we find also that the photometric signature decreases rapidly as the target asteroid diameter increases, due to gravitational fallback. We have used the model results to make an estimate of the impactor diameter for the (596) Scheila collision of D = 49-65 m, depending on the impactor taxonomy, which is broadly consistent with previous estimates. We varied both the strength regime (highly porous and sand/cohesive soil) and the taxonomic type (S-, C- and D-type) to examine the effect on the magnitude change, finding that it is significant at early stages but has only a small effect on the overall lifetime of the photometric signal. Combining the results of this model with the collision frequency estimates of Bottke et al. (2005), we find that low-cadence surveys of approximately one visit per lunation will be insensitive to impacts on asteroids with D < 20 km if relying on photometric detections.

Introduction

The main asteroid belt is collisionally dominated, with large asteroids' shapes, sizes and surface geology controlled by impacts. Studies of collisions help us to understand the evolution of the shape of the asteroid population and, in turn, the formation of our Solar system.
These studies may involve laboratory experiments, computer modelling or observational programmes. The evidence for collisions can be seen indirectly in main-belt asteroid families (Cellino et al., 2002) and asteroid satellites and binaries (Merline et al., 2002). It can also be seen directly in recently observed collisions (Snodgrass et al., 2010; Jewitt et al., 2011; Stevenson et al., 2012). There are three possible collisions observed to date. In 2009 the 120 m diameter asteroid P/2010 A2 suffered a collision with an impactor of estimated diameter 6-9 m (Snodgrass et al., 2010) (but see Section 4.4). In 2010 another asteroid, (596) Scheila (113 km diameter), was hit by a ∼35 m diameter impactor (Jewitt et al., 2011). The most recent potential collision involved the object P/2012 F5 (Gibbs), which like the others was originally identified as a potential main-belt comet (Stevenson et al., 2012). Events like the (596) Scheila collision should occur approximately every 5 years, and collisions with asteroids <10 m even more often (Bodewits et al., 2011). Several recent surveys are capable of detecting collisions and cratering events. For example, the Canada-France-Hawaii Telescope Legacy Survey was used to search for main-belt comets among 25240 objects in 2003 (Gilbert and Wiegert, 2010), the Thousand Asteroid Lightcurve Survey (924 objects) was conducted with the Canada-France-Hawaii Telescope in September 2006 (Masiero et al., 2009), and the Hawaii Trails project was conducted in 2009 (599 objects) (Hsieh, 2009). While none of the surveys mentioned above were specifically looking for main-belt collisions, the methods used in the search for main-belt comets would have also revealed any collisional events.
There are also current surveys fully or partly dedicated to discovering Near-Earth Asteroids, such as Pan-STARRS 1 (Kaiser et al., 2002), the Lincoln Near-Earth Asteroid Research survey (LINEAR, responsible for the discovery of P/2010 A2) (Stokes et al., 2000), the Catalina Sky Survey (Spahr et al., 1996) and the VST ATLAS survey (Shanks et al., 2013), all of which are capable of detecting main-belt collisions. Much work has been done in modelling the parameters (i.e. shape of debris, brightness, total ejected mass, impactor mass) of known observed collisions (Ishiguro et al., 2011a; Holsapple and Housen, 2007; Housen and Holsapple, 2011), and in hydrodynamic modelling of generalised collisions (Benz and Asphaug, 1999). This work focuses solely on the magnitude change following an impact, as it is the signature most likely to be observable by optical telescopes. Rather than looking at a specific object in the main belt, the described model looks at what would be expected for generic asteroids.

Cratering physics

Our model is based on the work of Holsapple and Housen (2007), who provide a summary of scaling laws that allows calculation of crater size using properties of the target and impactor, based on the results of impact experiments. These laws can also be used to calculate the evolution of the ejecta dispersal and consequently to estimate the amount of material ejected and the increase in brightness following a collision. The decrease in magnitude of the target asteroid is going to depend on the amount of material that was ejected and on whether it is optically thin or not. At high impact speeds, transfer of the energy and momentum of the impactor into the target occurs over an area on the order of the impactor size, while the resulting crater usually exceeds this size by many times. It is therefore a reasonable approximation to assume that the impact occurs as a point source.
Using theoretical analyses of the mechanics of crater formation, Holsapple and Housen showed that the crater and ejected material characteristics depend on the quantity aU^µ δ^ν, where a is the impactor radius, U is the normal velocity component of the impactor and δ is the density of the impactor; µ and ν are scaling exponents that depend on the material properties. Theoretical values of µ range from 1/3 to 2/3 (Holsapple and Schmidt, 1987) and are a measure of the energy dissipation by the material; a more porous material can dissipate energy more effectively and will have a lower value of this exponent. Experimentally determined values of µ are ∼0.55 for nonporous materials (e.g. rocks and wet soils), 0.41 for moderately porous materials (e.g. sand and cohesive soils) and 0.33 to 0.40 for highly porous materials (Holsapple and Schmidt, 1987). Experimental values for ν were found to be the same for all materials, at around 0.4 (Holsapple and Schmidt, 1987). By selecting appropriate material scaling parameters for a given impact and inserting them into a general expression for the relationship between the radii of the involved objects and the crater size, a reasonably accurate estimate of the crater size (as well as the crater formation time and transient crater growth) can be made. We now summarise how we use the previous studies in our calculations. Consider a spherical, non-rotating asteroid of radius r following an impact from an object of radius a at the sub-Earth point.
The general form of the equation for the crater size R consists of a strength term and a gravity term. Here K_1 is a scaling parameter (1.03, 1.17 and 0.725 for sand/cohesive soil, wet soils/rock and highly porous material, respectively; Holsapple and Housen, 2007), Y is the average strength of the target material, ρ is the grain density of the target, U = 5 km s⁻¹ (Bottke et al., 1994) is the normal velocity component, δ_grain is the grain density of the impactor, and S_g = GM/r² is the surface gravity of the target asteroid with mass M and radius r. Depending on the asteroid type, different values of bulk density (for calculation of the target asteroid mass) and grain density (for calculation of the ejected mass) are used. The values and their sources are summarised in Table 1. Bulk densities of C- and S-type asteroids were taken from weighted averages of the corresponding subclasses as summarised in Table 3 of Carry (2012). Grain densities of C- and S-types are assumed to be the same as those of their most likely meteorite analogues (Britt et al., 2002). The density of D-type asteroids is approximated by the bulk and grain densities of the Tagish Lake meteorite (Zolensky et al., 2002; Izawa et al., 2010). The range of material strengths used is presented in Table 2. The strength value selected in this study was varied for each taxonomic type to explore the relationship between the type, the strength and the corresponding magnitude change. The crater radius R calculated in this way has a corresponding mass M_crater. As we are interested in the ejected mass, since it is only that which contributes to the observed magnitude change of the asteroid, the full crater mass would give an overestimate of the brightness. The total crater volume is made up of a volume of ejected mass, a volume of mass that is uplifted near the crater rim and a volume due to compaction. The fraction k_ejecta of the total crater mass that corresponds to ejected mass is of order 0.2-0.5 (Housen and Holsapple, 2011).
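As a small illustration of two of the quantities just defined, a numpy sketch of the surface gravity and the ejected fraction of the crater mass; the density, radius and default k_ejecta below are illustrative, not values taken from the paper's tables:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def surface_gravity(r, rho_bulk):
    """S_g = G M / r^2 for a spherical target of radius r (m) and bulk
    density rho_bulk (kg m^-3), with mass M = (4/3) pi r^3 rho_bulk."""
    M = 4.0 / 3.0 * np.pi * r**3 * rho_bulk
    return G * M / r**2

def ejected_mass(M_crater, k_ejecta=0.3):
    """Fraction of the crater mass that is actually ejected; Housen and
    Holsapple (2011) give k_ejecta ~ 0.2-0.5, and 0.3 is used here."""
    return k_ejecta * M_crater

# Illustrative numbers: a 1 km radius target with rho ~ 1400 kg/m^3.
print(surface_gravity(1.0e3, 1400.0))   # ~4e-4 m/s^2
```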
Throughout this work we assume a k_ejecta of 0.3 as being most appropriate to asteroids.

Velocity shell model

We consider the ejecta leaving the asteroid surface after the collision event. For simplicity, we assume that the debris expands spherically outwards from the asteroid, the debris with each velocity v_n forming a shell of radius r_s (see Figure 1). Effects from the rotation of the target or impactor are beyond the scope of the current model. Impact experiments show that there is no significant correlation between the velocity and mass of the particles (Holsapple et al., 2002). Therefore, each velocity shell is taken to have the same particle size distribution, described below in Section 2.3. As our aim is to model the observable brightening from Earth, we assume that the ejecta cloud is centred on the asteroid, as at early epochs the asteroid itself and the ejecta will be unresolved. We also assume that the brightness of the asteroid plus ejecta is measured through an aperture of fixed radius r_ap centred on the asteroid. To obtain the amount of material that is visible in the aperture, and consequently the visible brightening of the asteroid, we need to look at the fraction of each debris shell that fits completely or partially in the aperture (the ejecta visibility fraction f_vis). There are three possibilities for an individual shell at a given time t:

1. the entire shell fits within the aperture and hence f_vis = 1;
2. none of the shell is visible, i.e. the material is not above the surface of the asteroid as it has fallen back, f_vis = 0;
3. part of the shell fits into the aperture, 0 < f_vis < 1 (see Figure 1).

In the latter case, the amount of material visible can be estimated by calculating the ratio of the surface area corresponding to the arc that fits into the aperture (a spherical cap) to the total surface area of the sphere.
Here h = r_s − √(r_s² − r_ap²) is the height of the cap, the extra factor of 2 accounts for near- and far-side material, and r_ap is the aperture radius, which equals the radius of the cap. In the situation of no gravity acting on the debris, the radius of each velocity shell can be described straightforwardly by r_s = vt; however, we need to consider the gravitational case. For simplicity, we consider a single debris shell of radius r_s and velocity v over the asteroid of radius r. The acceleration due to the asteroid's gravity that the debris experience is inversely proportional to the square of the shell radius (1/r_s²) and is also the second derivative of the radius with respect to time. C is a constant of proportionality, which we calculate by considering the boundary conditions. At the asteroid surface r_s = r and the acceleration due to gravity is the surface gravity S_g. At the surface the original equation reduces to Cr⁻² = S_g, which in turn gives C = S_g r². Therefore, the radius of the spherical shell of debris at velocity v is governed by a second-order differential equation of the form d²r_s/dt² = −S_g r²/r_s², with the boundary conditions that at t = 0, r_s = r and dr_s/dt = v_initial. Solving this equation gives the shell radii at all times t. The total mass within the aperture, M_vis, is the integral over all velocity shells of the shell mass M_shell multiplied by the visible fraction; in the code it is calculated by taking a sum of the individual contributions from each velocity shell, thus M_vis = Σ_{v=v_min}^{v_max} M_shell f_vis. As the asteroid fragments will be ejected at a continuous range of velocities, increasing the number of shells considered improves the accuracy of the calculations.

Magnitude change calculation

At this stage we implicitly assume that the scattering function for the asteroid and the ejecta particles is the same and that the ejecta is optically thin throughout.
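The shell dynamics and visibility bookkeeping described above can be sketched as follows; this is a simple fixed-step integrator shown only to illustrate the idea, not the paper's code:

```python
import numpy as np

def shell_radius(v0, r, S_g, t_end, dt=1.0):
    """Integrate d^2 r_s/dt^2 = -S_g r^2 / r_s^2 with r_s(0) = r and
    dr_s/dt(0) = v0, by simple fixed steps.  Returns the shell radius at
    t_end, or r if the shell has fallen back to the surface."""
    r_s, v = r, v0
    for _ in range(int(t_end / dt)):
        a = -S_g * r**2 / r_s**2
        v += a * dt
        r_s += v * dt
        if r_s <= r:        # material has fallen back to the surface
            return r
    return r_s

def visibility_fraction(r_s, r, r_ap):
    """Fraction f_vis of a debris shell inside an aperture of radius
    r_ap; the factor 2 counts near- and far-side spherical caps."""
    if r_s <= r:            # fallen back: nothing above the surface
        return 0.0
    if r_s <= r_ap:         # whole shell inside the aperture
        return 1.0
    h = r_s - np.sqrt(r_s**2 - r_ap**2)
    return 2.0 * (2.0 * np.pi * r_s * h) / (4.0 * np.pi * r_s**2)
```

M_vis then follows by summing M_shell * f_vis over the grid of shell velocities.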
Assuming all the ejected particles contribute to reflected light with the same albedo as the target asteroid, the total visible surface area after the collision is A_c = A_a + A_p, where A_a is the cross-sectional area of the target asteroid and A_p is the area of the ejected particles. The latter can be calculated assuming that the particle size distribution follows a power law of the following form (Ishiguro et al., 2011a):

n(a) da = N_0 (a / a_0)^(−q) da.

Here a_min = 0.1 µm is chosen as the minimum particle radius, because scattering is inefficient for particle sizes much smaller than the wavelength of the scattered light; a_max is the maximum particle radius; N_0 is the reference dust abundance at reference size a_0; and q = 3.5 is the assumed power-law index (Jewitt, 2009). The maximum particle radius a_max is calculated using the expression derived from the power-law function of the ejection velocity of dust particles (Ishiguro et al., 2011a):

a_max = a_0 [ V_0 / √(2GM/r) ]^(1/k).

Here V_0 = 80 m s⁻¹ is the reference ejection velocity of 1 µm particles, k = 1/4 is the power index of the size dependence of the ejection velocity, and M and r are the mass and radius of the target respectively. After integration we obtain the cross-sectional area per unit mass σ_a (m² kg⁻¹), and the total cross-sectional area of particles around the asteroid is A_p = σ_a M_vis. The change in magnitude is related to the ratio of the total area after the collision, A_c, to the initial target asteroid area, A_a:

Δm = −2.5 log₁₀(A_c / A_a).

Any ejecta in front of the asteroid will obscure a cross-sectional area equal to its own area. Given that we assume the same scattering function for asteroid and ejecta, this means ejecta directly in front of the asteroid does not contribute to the net brightness. Any ejecta behind the asteroid is clearly not visible. Therefore all ejected material along the line of sight of the asteroid should be ignored for brightness calculations. This is implemented by the subtraction of a pair of spherical caps with radius equal to that of the asteroid.
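For q = 3.5 the size-distribution integrals have a simple closed form, so the cross-section per unit mass and resulting magnitude change can be evaluated directly. The sketch below is illustrative (the function signature and the choice of setting a_max from the escape-velocity condition are our reading of the text, not the paper's code):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def delta_mag(M_ej, r_ast, M_ast, rho_grain,
              a_min=0.1e-6, V0=80.0, k=0.25, a0=1e-6):
    """Approximate post-collision magnitude change from ejected mass M_ej (kg).

    Assumes a q = 3.5 power-law grain size distribution between a_min and
    a_max, with a_max set by the grain size whose ejection velocity equals
    the target's escape velocity.
    """
    v_esc = np.sqrt(2 * G * M_ast / r_ast)
    a_max = a0 * (V0 / v_esc) ** (1.0 / k)
    # cross-section per unit mass: analytic integrals of pi*a^2*n(a) and
    # (4/3)*pi*rho*a^3*n(a) for n(a) propto a^-3.5
    sigma = (3.0 / (4.0 * rho_grain)) * \
            (a_min**-0.5 - a_max**-0.5) / (a_max**0.5 - a_min**0.5)
    A_p = sigma * M_ej                  # total grain cross-section (m^2)
    A_a = np.pi * r_ast**2              # asteroid cross-section (m^2)
    return -2.5 * np.log10((A_a + A_p) / A_a)
```

Because the area integral is dominated by the smallest grains while the mass is dominated by the largest, even a modest ejected mass can produce a measurable brightening.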
Model setup

We have looked at three taxonomic types that are common in the main belt: C, S and D types (DeMeo and Carry, 2014). The differences between the types are accounted for by using their corresponding grain and bulk densities in the calculation of the ejected mass and magnitude change. In the simulation we varied the taxonomic type (C, S, D) and strength (sand/cohesive soil and highly porous) value to see if any patterns emerge. We used a 50 by 50 point grid of target and impactor radii, ranging from 1-100 km and 1-100 m respectively and logarithmically spaced, and 500 velocity shells linearly spaced over the interval 10⁻² to 10² m s⁻¹. The lowest initial velocity has to be above zero for the purposes of the calculations, and the amount of material ejected at and beyond 10² m s⁻¹ is a very small fraction of the total ejected mass and therefore considered negligible, so we take 10² m s⁻¹ as an upper limit of the initial velocity range. Given that there is a range of target and impactor pairs that would result in catastrophic disruption (i.e. a dispersive shattering event leaving no remnant larger than 1/2 of the original mass of the target asteroid (Greenberg et al., 1978)) rather than in a sub-catastrophic collision, the results of our model for that region will not be physical. We have estimated the location of the disruption region by calculating the approximate impactor radius necessary for catastrophic disruption of a basalt target, assuming an average density of 2500 kg m⁻³ and an impact velocity of 5 km s⁻¹ (Benz and Asphaug, 1999). This region is marked in black on the figures. The code for this model was written in Python 2.7 (using the NumPy and SciPy packages); plotting of the output was performed in MATLAB. The code requires a separate run for each pair of values of the taxonomic type and strength regime (i.e. a C-type in the sand/cohesive soil regime and a C-type in the highly porous regime would be two different code runs).
Each run from impact until 7 days after impact with the above parameters takes approximately 2 hours on a single core of an Intel Core i5 2410M processor running at 2.3 GHz. Figure 2 shows the resulting predicted change in magnitude post-collision for a C-type asteroid with both a highly porous and a sand/cohesive soil strength at two epochs after the impact (1 and 7 days). Figure 3 shows the calculated time it takes for the brightness increase to reach −1.0 magnitudes for C-types in the highly porous and the sand/cohesive soil strength regimes. This value has been chosen because the one sub-catastrophic asteroid collision confirmed so far, between the known main-belt asteroid (596) Scheila and an impactor, was observed to have a brightness increase of −1 magnitude (Jewitt et al., 2011). Conclusive attribution of a collisional nature to a smaller magnitude decrease may be difficult, due to natural variations in brightness caused by rotational modulation of the light curves and uncertainty in asteroidal absolute magnitudes (Pravec et al., 2012).

Figure 2: C-type impactor collision with C-type target. 2(a)-2(b) highly porous strength regime: magnitude change 1 and 7 days post-collision for a range of target and impactor radii. 2(c)-2(d) sand/cohesive soil strength regime: magnitude change 1 and 7 days post-collision for a range of target and impactor radii. Impactor radii range from 1 to 100 m, target radii: 1-100 km. Colour bar shows the magnitude change. Catastrophic disruption region is marked in black. Region where optical thickness of the debris is potentially significant is above the dashed line. Solid black lines indicate contours where magnitude change is -1, -3, -5, -7 and -9.

There are vertical structures in these plots that are not of physical significance, but are rather features of the model limitations.
Three processes affect the total magnitude decrease post-collision: the constant expansion of debris moving out of the aperture; material falling back onto the surface of the target asteroid and therefore disappearing from observation; and material re-entering the aperture due to the gravitational attraction of the target asteroid. In an ideal simulation of the process, the change in magnitude as a function of time would be a smooth function, being the result of the interplay of all three processes. The function would be smooth because we expect the size and velocity distributions of the debris to be continuous. As the velocity shells in this model are quantised, this leads to the creation of artefacts at certain values of target radius. The effect of this can be seen clearly in Figure 4, which shows the predicted brightness increase after an impact of a 20 m radius impactor onto 1325, 1600 and 1930 m radius targets (all S-type, sand/cohesive soil strength regime). For lower-velocity shells that fall back onto the asteroid at early times these steps merge to give a continuous slope. Higher-velocity shells never return, as they have reached escape velocity, and thus never produce a sharp decline. For a small number of shells with intermediate velocity, the return of the shell to the surface will be sufficiently distinct temporally from the other shells to produce a clearly identifiable step. The velocity shells leaving the aperture do not produce a sharp decline, because the fraction of the shell in the aperture decreases as a continuous function of time after the impact. The location of these steps is independent of the impactor radius and depends solely on the target radius, due to the dependence on the surface gravity. These steps are clearly visible in Figure 2 and subsequent figures as vertical structures that should not be interpreted as being significant.
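The distinction between shells that fall back and shells that never return, which sets the location of these steps, depends only on the target's escape velocity. A minimal check (the bulk density value here is an assumption for illustration, not a quantity given in the text):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(r_ast, rho_bulk):
    """Escape velocity (m/s) of a spherical asteroid of radius r_ast (m)
    and bulk density rho_bulk (kg/m^3)."""
    M = (4.0 / 3.0) * np.pi * r_ast**3 * rho_bulk
    return np.sqrt(2.0 * G * M / r_ast)

# Shells launched below v_esc eventually fall back, each producing a discrete
# step in the light curve; shells at or above v_esc never return.
v_1325 = escape_velocity(1325.0, 2000.0)   # roughly 1.4 m/s for these inputs
```

Since v_esc scales linearly with target radius (at fixed density), the step locations shift with target size but not with impactor size, consistent with the behaviour described above.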
One of the ways to minimise this and obtain good resolution in the images is to use a large number of shells; however, this increases the computation time. The 500 shells used in this study were selected as a compromise between resolution and run time. Figure 5 shows a comparison between a plot using 500 shells and one using 5000 shells (20 hours run-time). The latter shows a reduction in vertical structures without substantial difference in output. Difference images show that the discrepancy between the 500-shell and 5000-shell results is confined to the vertical structures only. We have chosen to only run the model up to 7 days post-collision, because a large proportion of the material (by surface area) would leave the aperture due to radiation pressure effects, which are not currently included in the model. The time t it takes for radiation pressure to move material a distance r_ap is

t = √(2 r_ap / a),

where a is the acceleration, which can be defined in terms of β, the ratio between the radiation pressure acceleration and the local solar gravity, and g⊙, the gravitational acceleration towards the Sun at 1 AU:

a = β g⊙ / R_h²,

where R_h is the heliocentric distance in AU. This gives the approximate time for radiation pressure to remove material from the aperture as

t = √(2 r_ap R_h² / (β g⊙)),

where R_h is the distance from the Sun to the asteroid in question. Assuming β is on the order of ≈ 0.1 (Fulle, 2004) for the small grains in the ejecta expected to dominate the scattering, and R_h = 3 AU (mid-main belt), the time t is approximately 2 days. Therefore, we do not present any model results that go beyond 7 days post-collision, as without the inclusion of radiation pressure they do not reflect physical reality.

Optical depth effects

Throughout the modelling, all ejecta within the aperture and not behind the asteroid is treated as contributing to the brightness increase via the projected surface area. Effectively, it is assumed that the optical depth τ is sufficiently low that the approximation τ ≈ 1 − e^(−τ) is valid. (For τ < 0.2 this leads to a brightening overestimate of less than 10%.)
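The radiation-pressure timescale quoted above can be verified numerically. The value g⊙ = GM⊙/(1 AU)² ≈ 5.93 × 10⁻³ m s⁻² is a standard quantity, and the 1000 km aperture radius matches the effective survey aperture quoted later in the paper; the function itself is our own sketch:

```python
import numpy as np

def rp_removal_time(r_ap, R_h, beta=0.1, g_sun_1au=5.93e-3):
    """Time (s) for radiation pressure to push grains a distance r_ap (m)
    out of the aperture, for an asteroid at heliocentric distance R_h (AU).

    a = beta * g_sun_1au / R_h**2  and  r_ap = 0.5 * a * t**2
    =>  t = sqrt(2 * r_ap / a).
    """
    a = beta * g_sun_1au / R_h**2
    return np.sqrt(2.0 * r_ap / a)

# Example from the text: r_ap = 1000 km, R_h = 3 AU, beta ~ 0.1
t_seconds = rp_removal_time(1.0e6, 3.0)
```

For these inputs the result is close to 2 days, in agreement with the estimate in the text.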
To estimate the potential optical depth effects, the ejecta cloud was divided into a series of concentric rings and the column density of each was calculated, then converted, using the particle size distribution, into the optical depth τ. Any rings with τ < 0.02 were ignored (this corresponds to an error of less than 1%), and any cases where more than 10% of the ejecta mass was contained in rings with τ > 0.02 were considered to have their brightness potentially overestimated. We found that at one day after impact, ignoring opacity may lead to overestimating the brightness increase by up to twenty percent (∼0.2 mag). The region where this may be relevant is outlined in Figures 2(a), 2(c), 7(a) and 8(a), which show the brightness 1 day post-collision. Clearly, for all impacts apart from those on the largest asteroids, the correction to the calculated magnitude change will have little effect on detection and can be ignored. By one week after impact, we calculate that optical depth effects are negligible for all target-impactor pairs.

Discussion

A collision between asteroids is a multi-parameter problem requiring us to make some initial assumptions. This section is therefore divided into four parts. First we describe an application of the model to the only known collision in the parameter space explored: (596) Scheila. In the second part we examine the effect of the different strength regimes in a collision between C-type asteroids (the most numerous in the outer belt, where (596) Scheila is situated). In the third part we keep the strength regime the same (sand/cohesive soil) and vary the taxonomic types to examine the effect this has on the magnitude change. We conclude with an application of the model to predicting the type of collisions observable by currently active surveys such as Pan-STARRS 1 and the Catalina Sky Survey.
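The error bounds quoted above follow directly from the optically thin approximation τ ≈ 1 − e^(−τ); a quick check of the fractional overestimate (our own sketch):

```python
import numpy as np

def brightening_overestimate(tau):
    """Fractional overestimate of the brightening when the projected area
    (proportional to tau) is used in place of the attenuated value 1 - exp(-tau)."""
    return (tau - (1.0 - np.exp(-tau))) / (1.0 - np.exp(-tau))
```

At τ = 0.2 the overestimate is on the order of ten percent, and at the ring-rejection threshold τ = 0.02 it is about one percent, consistent with the thresholds used in the analysis.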
Modelling the magnitude decrease in the (596) Scheila collision

In December 2010, asteroid (596) Scheila was observed with the 0.68-meter Catalina Schmidt telescope to have increased in brightness by approximately 1.3 magnitudes in comparison to the previous month (Larson, 2010). There have been multiple photometric observations of (596) Scheila reported, as summarised in Table 3. As mentioned in the previous section, we believe that in its current state the model gives the best description of reality if we limit the results to 7 days post-collision. However, in the case of the Scheila collision, which is an important real observed collisional event, we lack a sufficient number of data points of good quality in the first 7 days to make a meaningful comparison. However, the large aperture used in this study includes a large portion of the ejecta even after it has been significantly affected by radiation pressure, allowing comparison with our model. The model results are in approximate agreement with the impactor diameters estimated by other authors. The differences most likely come from the variety of initial assumptions made by the different researchers, i.e. density, type of ejecta particle distribution, range of particle sizes and the estimated ejected mass. It is also clear from these results that, to first order, the taxonomic type of the impactor asteroid is not an important factor in diameter estimation.

Effect of strength regime on post-collision magnitude change for C-type asteroids

We consider a collision where both the impactor and the target are generic C-type asteroids. Following the collision, a shock wave travels through the target asteroid, and the outcome of the collision depends on how the stress wave propagates and is attenuated through the target. The cratering and amount of ejecta following an impact are determined by the target's internal structure, porosity and strength.
A porous asteroid would react to an impact differently than a more solid body, and it is particularly important to consider impacts on these, as most observable main-belt asteroids, apart from the very largest, should have significant macroporosities. It has been found that highly porous objects such as (253) Mathilde are particularly good at attenuating stress waves, due to energy going into the compaction of the pores (Britt et al., 2002). An object with large macroporosity is most likely to be highly fractured or even a rubble pile. Dark and primitive asteroids are more likely to have significant porosity, although it is worth noting that this is not exclusive to C-types.

Figure: Magnitude change following a collision of (596) Scheila with impactors of varying diameters: (a) C-type impactors, 49 m (fit to Jewitt data); (b) S-type impactors, 65 m (fit to Jewitt data); (c) D-type impactors. Data from Gibbs in Larson et al. (2010), Larson (2010), Hsieh et al. (2012), Bodewits et al. (2011) and Jewitt et al. (2011).

A brightness increase occurs in both strength regimes considered; however, the sand/cohesive soil regime shows a larger magnitude decrease due to more material being ejected, while in the highly porous regime some of the impact energy goes into compaction. This tendency is also reflected in the plots of the amount of time taken for the magnitude change to decay back to 1.0 magnitude above the pre-impact brightness (Figure 3). The brightness increase caused by the ejecta reaches the assumed observable limit of approximately 1 magnitude faster for the highly porous case than for the corresponding impactor-target pair undergoing a collision in the sand/cohesive soil regime. This is due to less ejecta being produced. Recent research by Carry (2012) indicates that the density may be dependent on target size, rather than being a fixed quantity for each taxonomic type. However, the data are sparse in the size range used in this model and unavailable for the D-types.
Therefore, our model makes the simplifying assumption that the bulk density stays fixed for all sizes considered and depends only on the taxonomic type. For an S-type target, the largest magnitude change is produced by an S-type impactor, with C-type next and D-type producing the least magnitude change. The same pattern is followed by a C-type target being impacted by S-, C- and D-type asteroids. However, a D-type target deviates from this pattern, with an S-type impactor giving the largest change in magnitude, followed by D-types, with C-types giving the least magnitude change. Overall, S-type impactors produce the brightest collision signature in all types of targets considered, due to their inherently high grain density.

Detection from current sky surveys

The catastrophic collision rate in the main belt has been calculated from observationally constrained dynamical models as ∼1 per year at a diameter D ∼ 100 m (Durda et al., 1998; O'Brien and Greenberg, 2003; Bottke et al., 2005). The general hope within the scientific community has been that such collisions would be detected via the on-going wide-field NEO surveys. However, several objects initially identified as potential collisional disruption events are now suspected of being caused by rotational disruption due to YORP spin-up, e.g. P/2010 A2, P/2013 P5. Additionally, a recent study by Denneau et al. (2014) using 1.2 years of Pan-STARRS 1 data found only one plausible candidate collision event, and concluded that collisional disruptions of 100-m scale asteroids may be extremely rare. Hence observing the much more frequent sub-catastrophic collisions may be a viable method for constraining the overall collision rate. To look at the likelihood of detecting these events, we first look at collisions with 100 km diameter targets similar to Scheila. According to Bottke et al.
(2005), the size of impactor that would disrupt a D = 100 km asteroid is D ≥ 25 km, and such a disruption would happen every ∼10⁷ years in the main belt. Using their CoDDEM model size distribution, we estimate that the probability of such an event occurring during the first 1.2 years of operation was < 0.05. Therefore we conclude that observational losses, together with the rapid dispersal of the ejecta, can significantly decrease the possibility of photometric detection of sub-catastrophic impacts smaller than the (596) Scheila event, and this implies a current low probability of detection using automated photometry software such as MOPS (Denneau et al., 2013). Finally, an additional factor may be inaccurate absolute magnitudes in current reference catalogues, as investigated by Pravec et al. (2012). They found that smaller asteroids were systematically fainter than expected from catalogues. Hence predicted magnitudes would be brighter than in reality, and any small brightness increase due to a collision could be masked by an incorrectly calculated residual magnitude. Of course, during the first few days to weeks the ejecta may be visible as an extended ejecta cloud around the target asteroid, and could be detected via direct manual observation or software algorithms designed for comet comae detection, as in the case of (596) Scheila. On the other hand, if the cadence of a survey is a week or less, i.e. significantly less than the decay timescale within the aperture, the chance of photometrically detecting a small impact should be substantially higher. In Figure 10 we plot the predicted total V-band magnitudes for C-on-C and S-on-S collisions 1 day and 7 days after impact, for asteroids at opposition at R_h = 3.5 AU. (Although S-types are predominantly found in the inner main belt, we calculate the predicted apparent magnitude at the likely maximum distance.)
First, it is clear that for the larger (but less frequent) impacts, the total magnitude will be relatively bright and would produce saturated images in large-aperture surveys such as Pan-STARRS. However, it may be possible to deal with this situation either by fitting to the wings of the asteroid image point-spread function, or by recognising that an expected asteroid in the field was rejected in software processing due to its increased brightness. For fainter impact events, it is clear that Pan-STARRS, with a limiting magnitude of V ∼ 22, is able to detect such impacts throughout the asteroid belt, as long as the asteroid is observed soon after the collision. Importantly, the forthcoming ATLAS programme will have a nightly cadence over the visible sky and a limiting magnitude of V ∼ 20 (Tonry, 2011). Comparing Figure 10 with the brightness increases presented earlier, we find that ATLAS would be able to detect almost all of our studied collisions as well, and here the survey cadence should not be an issue. Therefore we conclude that current and future high-cadence all-sky surveys should be able to detect many more asteroid collisions at early epochs.

Figure 10: Visual magnitude following a collision. 10(a)-10(b) show the magnitude 1 and 7 days post-collision for a C-type impactor and target in the sand/cohesive soil strength regime for a range of target and impactor radii. 10(c)-10(d) show the magnitude 1 and 7 days post-collision for an S-type impactor and target in the sand/cohesive soil strength regime for a range of target and impactor radii. Impactor radii range from 1 to 100 m, target radii: 1-100 km. Colour bar shows the visual magnitude. Catastrophic disruption region is marked in black. Region where optical thickness of the debris is potentially significant is above the dashed line.
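Apparent magnitudes of this kind follow from the standard asteroid magnitude relation. A minimal sketch at opposition, neglecting the phase function (phase angle α ≈ 0) and taking the geocentric distance as Δ = R_h − 1 AU, is given below; this is our own simplification, not necessarily the exact formula used in the paper:

```python
import numpy as np

def apparent_mag(H, R_h):
    """Apparent V magnitude at opposition for absolute magnitude H and
    heliocentric distance R_h (AU): m = H + 5*log10(R_h * Delta),
    with Delta = R_h - 1 AU and the phase function neglected."""
    delta = R_h - 1.0
    return H + 5.0 * np.log10(R_h * delta)
```

For example, an object with H = 10 at R_h = 3.5 AU comes out near V ≈ 14.7, comfortably above survey limits of V ∼ 20-22.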
Summary

The model presented in this paper predicts the brightness increases caused by impacts of small asteroids on larger asteroids with radii ≥ 1 km. We use the scaling laws of Holsapple and Housen (2007) and Housen and Holsapple (2011) to estimate the amount of ejected material. Our model separates the ejecta into discrete velocity shells and, assuming an ejecta size distribution, calculates the magnitude change post-collision from the cross-sectional area of the asteroid plus ejected material in a small photometric aperture. The scope of the model is limited to non-rotating asteroids whose ejecta particles do not interact with each other; however, by extending the model results to the large apertures used for the reported ejecta from the (596) Scheila collision, we find an estimate for the impactor size of 40-65 m (depending on taxonomic type), similar to previous studies in the literature. The model could be further improved by introducing particle light-scattering laws, radiation pressure on the ejecta and the effect of rotation of the target asteroid. However, this would lead to a significant increase in the complexity of the model and will be developed in future. We believe our results are generic enough to be used to estimate the possibility of detection of such events by current automated surveys such as Pan-STARRS 1 and the Catalina Sky Survey. The model estimates that within the parameter space examined (impactor radius 1-100 m, target radius 1-100 km, sub-catastrophic collisions), a magnitude change of less than −1 is observable by an automated survey like Pan-STARRS 1, with effective aperture radii of 1000 km, for only 10-20 days, which implies a low probability of detection given the current low cadence for individual asteroids. However, detection may still be possible by direct manual observation or software algorithms designed for comet coma detection.
Acknowledgements EMcL acknowledges support from the Astrophysics Research Centre, QUB. AF acknowledges support by STFC grant ST/L000709/1. EMcL and AF also thank Robert Jedicke for helpful comments during the preparation of this paper. Figure 11: D-type impactor collision with S-type target, sand/cohesive soil regime: magnitude change 1 and 7 days post-collision for a range of target and impactor radii. Impactor radii range from 1 to 100 m, target radii: 1-100 km. Colour bar shows the magnitude change. Catastrophic disruption region is marked in black. Region where optical thickness of the debris is potentially significant is above the dashed line. Solid black lines indicate contours where magnitude change is -1, -3, -5, -7 and -9. Figure 12: C-type impactor collision with S-type target, sand/cohesive soil regime: magnitude change 1 and 7 days post-collision for a range of target and impactor radii. Impactor radii range from 1 to 100 m, target radii: 1-100 km. Colour bar shows the magnitude change. Catastrophic disruption region is marked in black. Region where optical thickness of the debris is potentially significant is above the dashed line. Solid black lines indicate contours where magnitude change is -1, -3, -5, -7 and -9. Figure 13: S-type impactor collision with C-type target, sand/cohesive soil regime: magnitude change 1 and 7 days post-collision for a range of target and impactor radii. Impactor radii range from 1 to 100 m, target radii: 1-100 km. Colour bar shows the magnitude change. Catastrophic disruption region is marked in black. Region where optical thickness of the debris is potentially significant is above the dashed line. Solid black lines indicate contours where magnitude change is -1, -3, -5, -7 and -9. Figure 14: D-type impactor collision with C-type target, sand/cohesive soil regime: magnitude change 1 and 7 days post-collision for a range of target and impactor radii. Impactor radii range from 1 to 100 m, target radii: 1-100 km. 
Colour bar shows the magnitude change. Catastrophic disruption region is marked in black. Region where optical thickness of the debris is potentially significant is above the dashed line. Solid black lines indicate contours where magnitude change is -1, -3, -5, -7 and -9. Figure 15: S-type impactor collision with D-type target, sand/cohesive soil regime: magnitude change 1 and 7 days post-collision for a range of target and impactor radii. Impactor radii range from 1 to 100 m, target radii: 1-100 km. Colour bar shows the magnitude change. Catastrophic disruption region is marked in black. Region where optical thickness of the debris is potentially significant is above the dashed line. Solid black lines indicate contours where magnitude change is -1, -3, -5, -7 and -9. Figure 16: C-type impactor collision with D-type target, sand/cohesive soil regime: magnitude change 1 and 7 days post-collision for a range of target and impactor radii. Impactor radii range from 1 to 100 m, target radii: 1-100 km. Colour bar shows the magnitude change. Catastrophic disruption region is marked in black. Region where optical thickness of the debris is potentially significant is above the dashed line. Solid black lines indicate contours where magnitude change is -1, -3, -5, -7 and -9.
Sawdust ash as an inhibitor for reinforcement corrosion in concrete

The deterioration of reinforced concrete structures remains a major problem, with the cost of repairing or replacing deteriorated structures becoming a major liability in many countries; Nigeria is no exception. Corrosion protection systems used for reinforced concrete structures include the use of corrosion-inhibiting admixtures, epoxy-coated reinforcing steel, waterproofing membranes, penetrants and sealers, galvanized reinforcing steel, electrochemical removal of chlorides, and cathodic protection.1 Corrosion-inhibiting admixtures may influence concrete properties such as the compressive strength, elastic modulus, etc., and therefore the need to determine their effects as inhibitors becomes very important.

Introduction

The deterioration of reinforced concrete structures remains a major problem, with the cost of repairing or replacing deteriorated structures becoming a major liability in many countries; Nigeria is no exception. Corrosion protection systems used for reinforced concrete structures include the use of corrosion-inhibiting admixtures, epoxy-coated reinforcing steel, waterproofing membranes, penetrants and sealers, galvanized reinforcing steel, electrochemical removal of chlorides, and cathodic protection.1 Corrosion-inhibiting admixtures may influence concrete properties such as the compressive strength, elastic modulus, etc., and therefore the need to determine their effects as inhibitors becomes very important. Sawdust ash (SDA) is one of the admixtures tested for this purpose, and it is characterized here for possible use as an inhibitor. Elinwa and Sani2 have published preliminary results on SDA-OPC concrete, some of whose findings are highlighted in this work. The microstructure and durability characteristics of this material (SDA), as they affect concrete and its hydration process, are presented in Figures 1 to 6 and discussed below.2

I.
SDA was effective in controlling the porosity of the concrete and as such imparts strength to the concrete.

II. The evaluation showed that by using SDA the amount of un-hydrated cement was drastically reduced as the curing progressed. The implication of this is that the quality of the concrete material is enhanced. The levels of Ca(OH)2 produced with age were drastically reduced, because the SDA went into a secondary reaction with the excess Ca(OH)2 produced during the hydration process to form C-S-H, which is responsible for strength.

When the Ca(OH)2 produced during the hydration process comes in contact with CO2 or chloride ions, an acidic environment with a pH < 10 is created. This results in the initiation of steel corrosion. The corrosion reaction mechanism of reinforcement is an electrochemical reaction and its effect is the degradation of concrete structures. In the preliminary work on using SDA for durability assessment, SDA-OPC concrete was assessed in sulphate and acid media, using Na2SO4 and HNO3. The monitoring was for eight weeks at seven-day intervals. The results showed that there was minimal decay under sulphate attack and aggressive decay under acid attack (Figures 5 & 6). All specimens treated with 10% SDA showed better resistance at the end of the 8th week. The effect of this action by SDA is that corrosion of the embedded reinforcement will be minimized, if not totally eliminated. Other researchers have also studied the use of admixtures as corrosion-inhibiting materials. Valente et al.3 studied the effects over time of the addition of fly ash and corrosion inhibitors on concrete resistance against chloride penetration. They concluded that permeability to both water and chlorides was greatly reduced. Robertson and Newton4 researched the performance of corrosion inhibitors in concrete exposed to a marine environment.
One of the conclusions reached was that the use of silica fume and fly ash showed a significant reduction in the half-cell readings compared with the control specimen. Sounthararajan and Sivakumar5 took measurements of corrosion in reinforced fly ash concrete containing steel fibres using strain-gauge techniques. They discovered that corrosion started in steel bars embedded in plain concrete immediately after the 7th day, but at increased fibre dosage the intensity of corrosion was minimized. Also, there was a reduction in the corrosion process, as the fineness of the fly ash improved the pore refinement and thus minimized access for deteriorating agents. The present research work is a furtherance of the development of using SDA as an inhibitor in controlling the menace of corrosion in our construction works. The same mix ratios that were used in the first part of this work2 were also used in the present work, that is: a total cement content of 375 kg/m3, fine aggregate of 600 kg/m3, coarse aggregate of 1115 kg/m3 and a w/c ratio of 0.56. A total of five mixes containing SDA as a replacement material for cement in the proportions of 0%, 5%, 10%, 20% and 30% by weight of cement were prepared; the mix containing 0% is the control mix. The physical/chemical properties of the SDA and other characteristics are contained in the earlier publication. The importance of this work points to the fact that there is no universally accepted practice with regard to testing corrosion inhibitors when admixed into concrete.

Half-cell potentials

Corrosion is an electrochemical process, and electrons flow from the site of corrosion (anode) to the non-corroding site (cathode). In the experimental set-up for this case (Figure 7), use was made of SDA in proportions of 5%, 10%, 20% and 30% by weight of cement, as discussed earlier. The test specimens are concrete cylinders 100 mm high and 80 mm in diameter containing two 10 mm diameter embedded steel bars, marked STB-1 and STB-2, respectively.
Copper wire cables were connected to each steel bar for the electrochemical measurements. The surfaces of the steel bars were thoroughly cleaned of any rust before weighing. The steel bars and a copper rod used as a reference electrode (RE) were embedded in each mold to a depth of 20 mm with a concrete cover of 20 mm, as shown in Figure 7. The concrete specimens were stored under laboratory conditions for 24 hours before de-molding and cured in water for 7 days. In order to accelerate corrosion activity under laboratory conditions, the concrete specimens were subjected to cycles of drying for 5 days and wetting for 2 days in 3.5% NaCl solution. The specimens were covered with epoxy resin (Araldite) to protect the connection of the steel with the copper cable against corrosion. The corrosion potentials of the steel embedded in the concrete cylinders were determined according to ASTM C876, beginning on the 7th day after casting, using an Armfield Corrosion Study Kit in the Chemical Engineering Programme, Abubakar Tafawa Balewa University, Bauchi, Nigeria, with a digital voltmeter range of 200 mV-1000 mV. A direct current at a constant voltage of 30.6 V was impressed, with the positive terminals of the digital voltmeter connected to the two embedded steel bars and the negative terminal connected to the copper reference electrode (RE). The potentials of the embedded steel bars were recorded at the end of each drying and wetting regime, and the average of two readings was recorded at 7-day intervals for a period of 90 days. Results are shown in Table 1.

Carbonation depth

Carbonation is the reaction of carbon dioxide from water or air with alkaline hydroxides in concrete. This reaction reduces the pH of the pore water of the concrete. At pH values of about 8.3, the protective passive layer of the steel is removed and corrosion takes place in the presence of oxygen and moisture.
The objective of the test is to evaluate the corrosion activity at the steel-concrete interface resulting from the penetration of oxygen and moisture through the concrete cylinder specimens. Since the corrosion potential method has the disadvantage of producing qualitative results, without establishing the actual rebar corrosion rate, the carbonation depth technique can be used for a better evaluation of corrosion activity. The carbonation depth of each concrete cylinder specimen was determined at the end of the corrosion test (90 days). The specimens were carefully split open manually using a hammer and the steel bars carefully removed from the concrete. The freshly broken surfaces were then treated with phenolphthalein indicator: regions containing free Ca(OH)2 are colored pink, while colorless regions indicate the depth of carbonation. An average of three measurements was taken along each steel bar and the results are presented in Table 2.

Weight loss

The measurement of weight loss can serve as a means of quantifying the corrosion activity in concrete structures and can be used to determine the rate of corrosion in concrete. The data generated can thus be modeled to predict the service life of the reinforcing bars in a corrosive environment. This test was carried out at the end of the 90-day corrosion monitoring test. The weight losses were determined after the rust on the steel bars was removed by proper cleaning using hydrochloric acid: at the end of the 90 days of corrosion testing, the concrete specimens were split open and the steel bars immersed in hydrochloric acid for 15 minutes, then washed with water, alcohol and acetone. The weight loss of each steel bar (STB1 and STB2) was determined by subtracting the final weight of the steel from the initial weight recorded before embedding in concrete. Results are shown in Table 2.
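The gravimetric weight loss described above can be converted into a corrosion rate with the familiar ASTM G1-style formula. The bar geometry, the use of lateral surface area only, the steel density, and the sample masses below are all hypothetical illustrations, not values from Table 2:

```python
import math

# ASTM G1-style corrosion-rate estimate from gravimetric weight loss:
# rate (mm/year) = K * W / (A * T * D), with K = 8.76e4, W = mass loss (g),
# A = exposed area (cm^2), T = exposure time (hours), D = density (g/cm^3).
K = 8.76e4
STEEL_DENSITY = 7.85  # g/cm^3, typical mild steel (assumed)

def corrosion_rate_mm_per_year(initial_g, final_g, diameter_cm, length_cm, days):
    weight_loss = initial_g - final_g                 # g
    area = math.pi * diameter_cm * length_cm          # lateral surface only (ends ignored)
    hours = days * 24.0
    return K * weight_loss / (area * hours * STEEL_DENSITY)

# Hypothetical 10 mm bar with 10 cm embedded length over the 90-day test --
# illustrative numbers only, not data from Table 2.
rate = corrosion_rate_mm_per_year(initial_g=61.70, final_g=61.20,
                                  diameter_cm=1.0, length_cm=10.0, days=90)
print(f"corrosion rate = {rate:.4f} mm/year")
```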
Results and discussions

Based on the classification table of ASTM C876, 6 the data in Table 1 are further classified in Table 3 to ascertain the effectiveness of SDA in arresting corrosion. Deductions from Table 3 show that:

i. For a probability of corrosion of less than 10%, Table 3 shows that corrosion in specimens treated with SDA only is likely to start from the 7th day, while in specimens with SDA and NS it starts at the 35th day for all replacement levels except 10%, for which it starts at the 42nd day.

ii. For conditions under which corrosion may or may not occur, the control specimen (SDA only) ends at the 49th day, while those for 5%, 10%, 20% and 30% end between the 56th and 63rd day. The specimens containing NS in addition to SDA end at the 56th day (control), while the rest of the replacement levels end at the 70th day.

iii. For the condition that the probability of corrosion is 90%, this starts at the 56th day for the control (SDA only) and at the 63rd, 77th, 70th and 70th day for 5%, 10%, 20% and 30%, respectively.

From Figure 8, the 10% SDA improved the carbonation depth by approximately 24% and the weight loss by approximately 43%. This shows the efficacy of SDA in controlling corrosion.

Conclusion

From the evaluations carried out on the characteristics of SDA-OPC concrete, we can conclude that SDA can be used to enhance the life of concrete structures.

A. The porosity of concrete was greatly improved when SDA was added to the mix.

B. SDA is pozzolanic in nature and went into a secondary reaction with the hydration product of cement (Ca(OH)2) to produce C-S-H, which imparts strength to the concrete.

C. The possible start of corrosion indicated by the half-cell potentials showed delayed action when SDA was added, and the best results were at the optimum replacement level, which is 10%.

D. SDA addition improved the carbonation depth and weight loss.
The best results were obtained at the 10% replacement, and the values are 24% and 43% for carbonation and weight loss, respectively.
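For reproducibility, the probability bands used to recast the Table 1 readings into Table 3 can be encoded directly. The numeric thresholds below are the standard Cu/CuSO4 bands quoted from ASTM C876, and the example readings are hypothetical, not values taken from Table 1:

```python
def classify_half_cell(potential_mv):
    """Classify a half-cell potential (mV vs. Cu/CuSO4) per the ASTM C876 bands.

    More positive than -200 mV : probability of corrosion < 10%
    -200 mV to -350 mV         : corrosion activity uncertain
    More negative than -350 mV : probability of corrosion > 90%
    """
    if potential_mv > -200:
        return "corrosion probability < 10%"
    elif potential_mv >= -350:
        return "corrosion activity uncertain"
    else:
        return "corrosion probability > 90%"

# Hypothetical readings (mV) illustrating each band.
for v in (-150, -300, -420):
    print(v, "->", classify_half_cell(v))
```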
2019-04-15T13:09:50.925Z
2016-11-30T00:00:00.000
{ "year": 2016, "sha1": "037e6000390feb581591e75cb6d2b8ed8070aef8", "oa_license": "CCBYNC", "oa_url": "https://medcraveonline.com/MOJCE/MOJCE-01-00015.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "a5c432954e8bab488fe3d3738550d5b945e6dcbb", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
16285425
pes2o/s2orc
v3-fos-license
Application of Traditional Chinese Medicine in Treatment of Atrial Fibrillation

Atrial fibrillation (AF) is the most common cardiac arrhythmia and is related to many cardiac and cerebrovascular diseases, especially stroke. It can therefore increase cardiovascular and all-cause mortality. The current treatments of AF remain Western drugs and radiofrequency ablation, which are limited by the tolerance of patients, adverse side effects, and high recurrence rates, especially for the elderly. By contrast, traditional Chinese medicine (TCM), with its long history of use, involves various treatment methods, including Chinese herbal medicines (CHMs) or bioactive ingredients, Chinese patent medicines, acupuncture, Qigong, and Tai Chi Chuan. With more and more research reported, the active roles of TCM in AF management have been discovered. It is therefore likely that TCM can serve as an effective preventive means and a valuable additional remedy for AF. The potential mechanisms identified by numerous experimental studies reveal the distinct characteristics of TCM. Some CHMs or bioactive ingredients are atrial-selective, while others are multichannel and multifunctional. Therefore, in this review we summarize the treatment strategies reported in TCM, with the purpose of providing novel ideas and directions for AF management.

Introduction

Atrial fibrillation (AF) is the most common cardiac arrhythmia, and its prevalence will continue to grow rapidly. The estimated lifetime risk of developing AF is 25% in people over 40 years [1]. Another investigation [2] showed that the elderly have a higher prevalence than the young. The current four principles of AF treatment are rate control, rhythm control, antithrombotic therapy, and dealing with the primary disease (or risk factors). So far, Western medicines remain the major treatment strategy for rate and rhythm control.
Their positive actions are definite in emergency situations but still limited by recurrence of AF, visceral injuries, and inevitable adverse effects, such as severe ventricular arrhythmia [3], especially with long-term use. Another effective option is catheter ablation, which however has a high recurrence rate, so patients often have to undergo repeat operations. This is one bottleneck in the treatment of AF. The other bottleneck occurs in antithrombotic therapy, which in general comprises anticoagulation, antiplatelet aggregation, and fibrinolysis. All of them carry a high risk of bleeding, especially for elderly patients. Even though antithrombotic therapy is the key to AF treatment, many patients have not received sufficient treatment or have been in poor INR control [4,5], owing to factors relating to both doctors and patients [6]. Thus, in spite of some progress made in AF management, many critical problems still exist. By contrast, traditional Chinese medicine (TCM), as a complementary and alternative treatment, becomes a viable option for these AF patients. It is reported that some single Chinese herbal medicines (CHMs) or bioactive ingredients, traditional Chinese patent medicines, and nondrug methods including acupuncture, Qigong, and Tai Chi Chuan can play active roles in the four principles of AF treatment, respectively, with low rates of adverse reactions. What is more, they can lessen discomforts such as palpitation, chest discomfort, and dyspnea and increase the tolerance of patients to the disease, thus markedly improving their quality of life. Further, multiple channels are reportedly involved in the mechanisms of action, including regulation of ion channels [7,8], inhibition of inflammatory factors [9], antioxidant activity [10,11], and antiplatelet aggregation [12].
Evidence-Based Complementary and Alternative Medicine

Additionally, some CHMs or bioactive ingredients work selectively on atrial myocytes, thus avoiding proarrhythmic properties. In this review, we summarize these distinct types of TCM with the purpose of demonstrating alternative treatments of AF from the perspective of TCM.

Single CHMs or Bioactive Ingredients

CHMs have a long history of use based on the theory of TCM in China. A former study estimated that around 13,000 herbs were in common use and that more than 100,000 medicinal recipes had been recorded in China [13]. These figures are bound to have grown, given the wider acceptance and usage now than before. The effectiveness and safety of these herbs are confirmed by more and more research, and potential mechanisms are gradually being discovered. AF, as a common refractory disease, has attracted researchers worldwide to study therapeutic Chinese herbs targeting it, which in turn highlights the advantages of CHMs. Berberis vulgaris L. (1753) is the most well-known species of Berberis, which belongs to the family Berberidaceae and comprises evergreen shrubs found throughout the temperate and subtropical regions of the world (apart from Australia). Its berries were used for culinary purposes in ancient Europe and Iran, and in TCM theory its flowers were used to treat musculoskeletal pain for many centuries, while current research has focused on berberine, an isoquinoline alkaloid extracted from barberry root. Berberine is considered a potential agent for AF because of its effects on primary disease, rate control, and rhythm control. It is reported to have vasodilatory, positive inotropic, and negative chronotropic actions [7], by which some primary diseases underlying AF, such as coronary heart disease and heart failure, can be relieved.
Further, it can suppress acetylcholine-induced AF in the rabbit by increasing the effective refractory period (ERP) and prolonging the action potential duration (APD) of atrial myocytes [14]. This function was demonstrated to be dose-dependent [7], with the biochemical mechanism of inhibiting the transient outward potassium current (Ito) [7]. Moreover, berberine is a typical multichannel ion blocker: previous studies have shown that it can inhibit KATP [15], IKV [16], IKCa [16], IK1, and IK [17]. Regrettably, though berberine shows promising effects on AF (Table 1), its advantages have not been systematically studied in human clinical trials. It should also be used with caution, as it is a potent inhibitor of the CYP3A4 enzyme, which is involved in the metabolism of many medications [18]. Recent studies further illustrated that Saussurea and the sesquiterpene lactone fraction of Saussurea lappa roots have anti-inflammatory and analgesic effects [19-21]. In addition, acacetin, a flavone compound of Saussurea, was shown to prolong the APD and ERP without prolonging the corrected QT interval in atrial myocytes [8], thus suppressing AF in dogs. The potential mechanism is atrial-selective inhibition of the acetylcholine-activated potassium current (Kach), the ultrarapid delayed rectifier potassium current (IKur), and the transient outward potassium current (Ito) [8]. Compared with TCM theory, acacetin is a new discovery and an effective and promising choice for AF (Table 1), though evidence is limited to animal experiments. Moreover, researchers have not yet established the relationship between the anti-inflammatory effect and the restraint of myocardial remodeling, which may provide a new direction for preventing the occurrence and development of AF. Therefore, further studies are needed to uncover the underlying mechanism, and relevant clinical investigations are urgently required.

Crataegus rhipidophylla Gand.

Crataegus rhipidophylla Gand.
(1872) is also called hawthorn; it belongs to a large genus of shrubs and trees in the family Rosaceae. In TCM theory, it is believed to promote blood circulation throughout the whole body. Based on this theory, research discovered that two proanthocyanidins from its flower heads, catechin and epicatechin, inhibit the biosynthesis of thromboxane A2, producing an antiplatelet effect but possibly increasing the risk of bleeding [12]. Currently, hawthorn is widely used in cardiovascular diseases, especially arrhythmia [22], congestive heart failure [23-26], and hypertension [27]. Studies demonstrated that the biochemical mechanism of its antiventricular-arrhythmia effect is prolonging the APD through blocking the delayed (IK) and inward (IK1) rectifier potassium currents [22]. However, Long et al. [28] held that hawthorn was consistent with the effect profile of phosphodiesterase-3 (PDE3) inhibitors, which differed from Muller's opinion [22]. Hawthorn extract LI 132 (Faros 300, CRA) was further found to increase cardiac contractility while prolonging the ERP, thus avoiding arrhythmogenic potential [26]. In addition, another extract, WS 1442, improved exercise capacity and decreased mortality in heart failure patients [24,25]. Furthermore, the epicatechin and hyperoside in hawthorn fruit tincture [29] and the fluid extract of hawthorn were shown to have antioxidant activity [30]. From the above results, we find that hawthorn extracts have the characteristics of antiplatelet aggregation, rhythm control, and antioxidant activity and help manage primary diseases such as heart failure, which makes hawthorn a possible agent for AF treatment (Table 1). Nevertheless, the identity of the cardioactive constituent is still uncertain [31]. Therefore, corresponding experimental and clinical research targeting hawthorn in AF is needed and deserved.

Corydalis turtschaninovii Besser.
Corydalis turtschaninovii Besser (1834) is a medicinally important species of Corydalis in the family Papaveraceae. Its dried tubers, also known as Yanhusuo in China, can invigorate the circulation of blood and relieve pain, and have therefore been widely used in the treatment of cardiovascular diseases including arrhythmia [32]. Previous pharmacological studies reported that rotundium, an alkaloid of Corydalis turtschaninovii, was antiarrhythmic in some animal experiments by blocking the calcium channel. A later clinical study demonstrated that rotundium was an effective and safe medicine to treat AF, especially paroxysmal AF, with the corresponding mechanism possibly being prolongation of the ERP of the atrium and atrioventricular node [33]. Besides rotundium, four other alkaloids, d-corydaline, d-glaucine, protopine, and l-tetrahydrocolumbamine, isolated from the methanol extract of Chinese Corydalis tuber (CMe), showed inhibitory action on blood platelet aggregation [34]. The CMe was also found to inhibit the decrease of blood platelets in disseminated intravascular coagulation (DIC) and to inhibit pulmonary thromboembolism [34]. In addition, pseudocoptisine, a quaternary alkaloid with a benzylisoquinoline skeleton, was also extracted from Corydalis turtschaninovii tubers and shown to have anti-inflammatory properties. The potential molecular mechanism was reduction of the levels of proinflammatory mediators, such as iNOS, COX-2, tumor necrosis factor-alpha (TNF-alpha), and IL-6, through inhibition of nuclear factor kappa B (NF-kappaB) activation via suppression of ERK and p38 phosphorylation in RAW 264.7 cells [9]. Taken together, Corydalis turtschaninovii or its methanol extract has rhythm control and antiplatelet aggregation functions. What is more, it can inhibit the inflammatory response involved in myocardial remodeling and thrombosis [35,36].
Therefore, Corydalis turtschaninovii is a multifunctional agent (Table 1), worthy of in-depth research targeting AF.

2.5. Leonurus cardiaca L.

Leonurus cardiaca L. (1753), also known as motherwort, is a herbaceous perennial plant in the mint family, Lamiaceae. It can be found worldwide, having spread largely due to its use as a herbal remedy. Motherwort, with a long history of use as a traditional herb in Asia, is often used for uterine infection and other gynecological diseases in TCM, while aqueous extracts from the aerial parts of Leonurus cardiaca have been used as a remedy against tachyarrhythmia and other cardiac disorders in Europe [37]. Research demonstrated that the primary and refined extracts of Leonurus cardiaca L. can inhibit the inward calcium (ICaL) and potassium (IKr) channels and lengthen the Q-T and P-Q intervals and the activation time constant of I(f) in pacemaker cells, leading to prolongation of the cardiac cycle, the activation recovery interval, and the APD in sinoatrial node cells and ventricular myocytes [37,38]. Thus they decrease blood pressure and heart rate, increase coronary blood flow, and treat ventricular or sinus tachyarrhythmias [37]. In addition, motherwort decreased blood viscosity and fibrinogen volume and increased the deformability of red blood cells and the antiplatelet aggregation effect [39]. Moreover, extracts from motherwort, including polyphenolic compounds, mainly flavonoids (rutin) and derivatives of hydroxycinnamic acid, demonstrated antioxidant activity in several in vitro studies [10,11,29,30,40]. With its functions of rate control, fibrinolysis, antiplatelet aggregation, and antioxidant activity, which may be useful for addressing the primary disease (Table 1), motherwort and its extracts are promising and deserve further research in AF patients.

Traditional Chinese Patent Medicines

Traditional Chinese patent medicines are mainly composed of Chinese herbal medicines or their extracts.
With large numbers of clinical and experimental studies, many classical Chinese formulas are currently made into patent medicines for their efficacy, safety, and convenience. Therefore, more and more traditional Chinese patent medicines for specific diseases, including AF, are being produced and used clinically.

Wenxin Granule (WXG). Wenxin granule (WXG) is a Chinese medication which contains Codonopsis pilosula Nannf., Polygonatum sibiricum Red., Panax notoginseng, Nardostachys chinensis Batal., and amber. WXG reportedly benefits patients with atrial arrhythmias and heart failure [41]. When combined with amiodarone, it is indicated to shorten conversion time, decrease the required dosage of amiodarone, and avoid the adverse reactions of long-term amiodarone use [42]. A recent meta-analysis concluded that WXG, alone or in combination with other antiarrhythmics, decreased the recurrence of paroxysmal AF with very few adverse effects [43]. The potential mechanism was demonstrated to be inhibition of sodium channels (INa) selectively in the atrium, owing to more negative steady-state inactivation, a less negative resting membrane potential, and shorter diastolic intervals in atrial versus ventricular cells at rapid activation rates [41,44]. In addition, WXG produced prolongation of the ERP selectively in atrial cardiomyocytes without APD prolongation, thus lengthening the P-wave duration and preventing persistent AF [41]. This is a novel property, especially the APD shortening and postrepolarization refractoriness in an atrial-selective manner, which runs contrary to the well-known paradigm that efficacious atrium-specific antiarrhythmic drugs have to significantly prolong the APD and/or wavelength [45]. What is more, another thought-provoking aspect was put forward by Kalifa and Avula [46].
They pointed out that WXG significantly shortened APD90 in a manner not merely compatible with late INa blockade, which usually produces only moderate APD shortening; other ion currents besides INa are therefore assumed to be involved. The ultrarapid delayed rectifier potassium current (IKur) is also present in the atria but not the ventricles of the human heart [47,48]. Whether IKur also participates in the action of WXG on AF therefore deserves further discussion and research. Recently, attention has been paid to developing selective inhibitors of the human atrial IKur or hKv1.5 channels [49], which may be good news for AF patients. Additionally, WXG was found to lower AF inducibility after ganglionic plexi (GP) ablation, without increasing the levels of atrial natriuretic peptide (ANP), tumour necrosis factor-alpha (TNF-α), interleukin-6 (IL-6), or the expression of connexin 43 in atrial tissues, thus suppressing the atrial substrate remodeling induced by GP ablation [50]. Although the quality of the current randomized controlled trials of WXG in treating AF (Table 2) is evaluated as low [43,51], WXG surely shows certain treatment effects in different investigations and attracts attention to TCM.

ShenSongYangXin Capsule (SSYXC). ShenSongYangXin capsule (SSYXC) is another Chinese medication, made up of Panax ginseng, Ophiopogon japonicus, Cornus officinalis Sieb., Salvia miltiorrhiza Bge., Ziziphus jujuba Mill., Taxillus chinensis Danser, Paeonia lactiflora Pall., ground beetle, Nardostachys jatamansi, Coptis chinensis Franch., Schisandra sphenanthera Rehd., and fossilia ossia mastodi. SSYXC has been used for the treatment of tachyarrhythmias [52]. A randomized, double-blind, controlled multicenter study found that SSYXC has efficacy in the treatment of paroxysmal AF similar to that of propafenone [53].
It prolonged the APD significantly, with the potential mechanism of depressing the L-type calcium channel current (ICa-L), the transient outward potassium current (Ito), and the inward rectifier potassium current (IK1) in a concentration-dependent manner [54]. The adverse reaction rate of this medication is indicated to be low [53]. Additionally, SSYXC is also used for premature ventricular contractions and ventricular arrhythmias [55,56], possibly by blocking multiple ion channels, namely INa, ICaL, Ito, and IK1 [57,58]. That is to say, it is a broad-spectrum antiarrhythmic drug rather than an atrium-selective one. A meta-analysis [59] involving 22 trials and 2,347 paroxysmal AF (PAF) patients showed that although SSYXC appeared to improve P-wave dispersion (Pwd) and the frequency of PAF, the included results were inconsistent. Therefore, in order to further confirm the role of SSYXC (Table 2), more rigorous studies need to be done.

Shenmai Injection. Shenmai injection is derived from a classical Chinese formula, an important component of TCM, and effectively improves palpitation symptoms. The injection is composed of ginseng saponins, radix ophiopogonis saponins, radix ophiopogonis flavonoids, traces of ginseng polysaccharides, and radix ophiopogonis polysaccharide. A recent study showed that it could prolong the ERP and convert AF into sinus rhythm, thus having synergistic effects with amiodarone [60] (Table 2). However, few studies have been reported that evaluate the function of Shenmai injection in AF, and the potential mechanism is still unknown. Therefore, large numbers of experimental and clinical studies are urgently needed.

Nondrug Methods

TCM involves other therapeutic methods besides drugs, including acupuncture and Qigong, which are reported to be effective in AF.
Elderly people are at high risk of developing AF and often have compromised liver and kidney function, so they easily suffer drug intoxication. Therefore, nondrug methods are a viable alternative for elderly patients and those with low tolerance for drugs.

Acupuncture. Acupuncture is an important and indispensable part of TCM. According to traditional theory, many meridians and acupoints have antiarrhythmic effects, especially the Pericardium Meridian (the "Minister of Heart"), which is associated with heart rate and blood flow. The Neiguan point lies on this meridian, located in the middle of the forearm between the tendons [61]. It is an acupoint used with high frequency and is usually taken as the main point for treating AF. Based on needling of the Neiguan, Shenmen, and Danzhong points, one study probed the feasibility of acupuncture in the conversion of paroxysmal AF and atrial flutter. The results showed that the rate of cardioversion was higher [62] and the time to cardioversion shorter [62,63] in patients treated with acupuncture than with amiodarone. As for persistent AF, other studies have suggested that needling Neiguan, Shenmen, Xinshu, and other points helped to decrease recurrence of persistent AF after cardioversion [63,64], with an action similar to amiodarone. In addition, the combination of acupuncture and Chinese patent medicine is perhaps a better choice for AF: one study indicated that acupuncture combined with Wenxin granule had a better effect in the treatment of paroxysmal AF than Wenxin granule therapy alone [65]. Therefore, acupuncture could be an effective nondrug, rhythm-control tool in the management of these patients (Table 3). However, recent studies evaluating the effect of acupuncture are limited by small sample sizes and need to be validated in larger populations, and research evaluating the mechanism of action has not yielded consistent results [66]; thus more experiments ought to be conducted.
Qigong. Qigong is a special and unique part of TCM. Through physical exercises it regulates the Qi of the human body, a kind of vital substance with important functions, thus achieving health protection and treatment of disease. Pippa et al. [67] confirmed that Qigong training was well tolerated and that, compared with baseline, trained AF patients had better functional capacity and physical rehabilitation. In addition, Baduanjin, a mind-body exercise of Qigong, can promote the function of multiple systems and organs (e.g., the digestive and circulatory systems), increase immunity, relax the body, and improve the mood and confidence of elderly populations [68]. Investigations further showed that Baduanjin is an effective therapy for hypertension [69] and dyslipidemia [70] and can reverse adverse left ventricular remodeling in post-myocardial-infarction patients [71]. Additionally, it had beneficial effects on increasing antioxidant enzymes and reducing oxidative stress in middle-aged women, by increasing superoxide dismutase and reducing malondialdehyde levels [72]. Although Baduanjin was shown to provide primary prevention of stroke in community-dwelling older populations with high risk factors [73], more rigorously designed RCTs on Baduanjin or Qigong and AF are still warranted. Therefore, Qigong is a promising and noninvasive method for the management of primary diseases and risk factors in AF patients (Table 3), though further mechanistic research is needed.

Tai Chi Chuan (TCC). Tai Chi Chuan (TCC), also known as shadow boxing, is another form of physical exercise that combines martial art with the regulation of Qi in TCM theory. It integrates breath, mind, and physical activity, thus enabling practitioners to achieve greater awareness and a sense of inner peace. TCC has been popularly accepted and practised as a health care approach and an alternative method for the reduction of symptoms and risk factors in cardiovascular diseases (Table 3).
A recent study further indicated its efficacy in improving cardiorespiratory function in older adults [74], and a meta-analysis demonstrated that TCC lowered blood triglyceride levels with a trend toward decreasing total blood cholesterol [76]. What is more, as an aerobic exercise of moderate intensity, TCC can maintain better health and improve quality of life, thus reducing CVD risk factors [75]. In addition, TCC intervention can produce positive effects in patients with mental problems [77]. Therefore, TCC would be a good option for cardiovascular patients and can serve as an adjunctive exercise modality in rehabilitation programs for patients with CHD or CHF [78]. However, we failed to find direct research concerning TCC and AF, which should be a scientific priority for future investigation.

Discussion

The TCMs presented in the above studies indicate that they can address more than one principle of AF management simultaneously through multiple channels. By means of regulating ion channels [7,8,22,33,37,38,41,54], they play positive roles in rhythm or rate control. They also address risk factors through vasodilation [7], anti-inflammatory effects [9,19], antioxidant activity [10,30,72], and even resistance to myocardial remodeling [50,71], and they benefit some primary diseases such as heart failure [7,26,41], hypertension [27,69], and dyslipidemia [70,76]. In addition, antiplatelet aggregation [12,34,39] and fibrinolysis effects [39] are shown by some CHMs or bioactive ingredients. Therefore, four significant characteristics of TCM can be put forward here. To begin with, some medicines, including Saussurea involucrata, Corydalis turtschaninovii Besser, and Wenxin granule, can work selectively on atrial myocytes with no effect on ventricular electrical parameters. This is precisely what the current strategy for suppressing AF is pursuing [48,79] and what existing antiarrhythmic drugs lack.
Next, their multichannel and multifunctional nature broadens the therapeutic management of AF while reducing the side effects and the number of drugs that each target only one principle. Moreover, since exercise capacity [23], physical rehabilitation [67], and some mental problems [77] can be improved by TCM, especially the nondrug methods, patients' quality of life can be markedly enhanced. Finally, as AF is age-related and elderly patients have low tolerance for drugs because of declining liver and kidney function, TCM, which has low rates of adverse reactions [43,53], will be a better choice for them. Nevertheless, two problems still exist in the current results, namely, the relatively insufficient and poor-quality clinical studies and the limitations in anticoagulation. In conclusion, despite the lack of adequate mechanistic and clinical studies, TCM research in this area is exciting and may lead to the development of new drugs, just as amiodarone was discovered from a compound extracted from the plant Ammi visnaga.
Research on extended hypoplastic model and its verification for deposits soil In order to illustrate the mechanical properties of coarse-grained deposits soil, a series of large-scale triaxial tests on deposits soil from the southwest region of China were carried out. Based on the test data and the basic ideas of hypoplasticity, the Wu-Bauer hypoplastic model was extended to describe the mechanical behavior of the deposits soil, and the corresponding constitutive equations were derived for further analysis. A parameter identification method was proposed for the extended hypoplastic constitutive model using an improved differential evolution algorithm, which seeks the minimum of the error function between test and theoretical results, following the principle of inverse analysis. An optimization program was then written to identify the constitutive model parameters, showing that the proposed identification method outperforms the conventional method of parameter determination. Moreover, the test data and numerical calculations for the one-dimensional compression test and the triaxial test were compared, indicating that the extended Wu-Bauer hypoplastic model can reflect the different mechanical properties of the deposits soil under different initial conditions. Introduction Coarse-grained loose deposits are widely developed and distributed in the mountainous areas of western China. These special geological bodies are composed of various loose slope deposits, avalanche slope deposits and alluvial deposits. Their structure is disordered and poorly sorted, containing soil-rock mixtures [1]. Many engineering geological problems concerning these deposits have caused a series of geological disasters which threaten human survival, life and engineering safety.
The mechanical properties of loose deposits are affected by many factors, such as particle size, porosity, and the adhesion between particles. The stress-strain behavior of the material changes with confining pressure and void ratio. The stress-strain curves differ markedly: for example, dilatancy occurs in the dense state and strain softening occurs at low confining pressure, whereas neither phenomenon occurs in the loose state [2][3]. The variety of influencing factors and the complexity of the mechanical phenomena make a constitutive description of cohesive soil-rock geomaterials very difficult. Existing numerical simulations of actual engineering use various models to describe the mechanical properties of coarse-grained deposits and other soil-rock mixtures, such as the Duncan-Chang nonlinear model [4], Shen's double yield surface model [5] and the Tsinghua K-G nonlinear model [6], etc. However, these constitutive models still cannot describe the main mechanical characteristics and parameters of coarse-grained soil reasonably and comprehensively. Various new constitutive theories have therefore been proposed, which not only improve the quality of simulation for various geotechnical materials, but also promote the rapid development of non-classical plasticity theory. Representative non-classical plastic constitutive theories include multiple plastic potential models [7], boundary surface models with nested yield surfaces [8] and hypoplastic constitutive models [9]. Hypoplasticity theory is a new type of geotechnical constitutive theory based on thermodynamics, developed in the late 1980s. Its basic idea was promoted and developed by Kolymbas [10].
Hypoplasticity theory discards the concepts of traditional elastoplastic mechanics, such as the decomposition of total strain into elastic and plastic parts, the yield criterion, the hardening law, the plastic potential and the flow rule, together with some artificial assumptions. It establishes the functional relationship between stress rate and strain rate directly, using the tensor formalism of continuum mechanics. Building on the understanding of the mechanical properties of coarse-grained soil, the Wu-Bauer hypoplastic model [11] was introduced as a suitable description of sand and other non-cohesive soils. The four parameters included in this model are related only to the physical properties of the material, not to its density. Considering the limitations of this model for describing the characteristics of deposits material, this article proposes an extended W-B model and a comprehensive identification method for its model parameters. The extended W-B hypoplastic model gives a good quantitative description of the mechanical response of the deposits over a wide range of densities and pressures, in good agreement with the results of triaxial tests. General form of hypoplastic model Hypoplasticity is a generalization of hypoelasticity. Its stress-strain relationship does not need to be connected by a potential function, but is established directly by an isotropic nonlinear tensor-valued function [12][13]. In general, the basic equation of the hypoplastic model is expressed in tensor rate form as: $\mathring{T} = H(T, D)$ (2.1) where $\mathring{T}$ is the Jaumann-Zaremba stress rate tensor, defined as $\mathring{T} = \dot{T} + TW - WT$; $W$ is the rotation (spin) tensor; $\dot{T}$ is the Cauchy stress rate tensor; and $D$ is the strain rate tensor.
According to the basic assumptions and requirements of continuum mechanics [14], the general expression of the tensor-valued function of the hypoplastic constitutive model can be decomposed into a linear part and a nonlinear part: $\mathring{T} = L(T, \kappa):D + N(T, \kappa)\|D\|$ (2.2) where $L$ is a fourth-order tensor operator; $N$ is a second-order tensor operator; $\|\cdot\|$ denotes the Euclidean norm; and $\kappa$ is the material state variable. According to the definition of the yield surface of the hypoplastic model, if for a stress $T$ there exists a strain rate $D \neq 0$ such that $\mathring{T} = 0$, the stress is said to have entered the hypoplastic yield state. The stress states satisfying $T \in S = \{T \mid \mathring{T} = 0\}$ constitute a continuous surface in stress space called the hypoplastic yield surface. On the yield surface at least one strain rate $D$ corresponds to a zero stress rate; meanwhile, the volumetric strain rate vanishes in the critical state, as in the Cam-Clay model. From this, the yield surface expression can be derived. Wu-Bauer's hypoplastic model Wu and Bauer made some improvements to the specific forms of the linear and nonlinear terms while retaining the basic structure of the early model of Kolymbas (1985), making up for some of the deficiencies of the original model [15]. A practical Wu-Bauer hypoplastic model [11][16] was proposed with the specific form: $\mathring{T} = C_1(\operatorname{tr} T)D + C_2\frac{\operatorname{tr}(TD)}{\operatorname{tr} T}T + C_3\frac{T^2}{\operatorname{tr} T}\|D\| + C_4\frac{T^{*2}}{\operatorname{tr} T}\|D\|$ (2.4) where $C_i\ (i = 1, 2, 3, 4)$ are dimensionless constants and $T^*$ is the deviatoric (partial) stress tensor. The four parameters can be calibrated using the initial tangent modulus $E_i$, the initial Poisson's ratio $\mu_i$, the internal friction angle $\varphi$ and the dilatancy angle $\psi$ from conventional triaxial tests. The linear and nonlinear tensor operators can be derived in the principal stress space. The Wu-Bauer hypoplastic model can describe not only the stress-strain relationship and yield characteristics during loading and unloading, but also the shear dilation and contraction behaviour.
Extended W-B hypoplastic model The Wu-Bauer constitutive model has both advantages and disadvantages. It solves the problem of non-decreasing tangent stiffness exhibited by Kolymbas's model in the triaxial compression test, and it can describe the stress-strain characteristics of non-cohesive sand well. However, the critical state requirement $C_3 = -C_4$, derived under different stress paths, causes the nonlinear terms of the constitutive equation (2.4) to merge, so that the model degenerates into the three-parameter form: $\mathring{T} = C_1(\operatorname{tr} T)D + C_2\frac{\operatorname{tr}(TD)}{\operatorname{tr} T}T + C_3\frac{T^2 - T^{*2}}{\operatorname{tr} T}\|D\|$ (2.5) This defect severely limits the generality of the model. For example, the initial Poisson's ratio does not change with the material, and geotechnical media with large differences in density cannot be simulated [17]. To this end, the terms $\operatorname{tr}(D)T$ and $\operatorname{tr}(T)\operatorname{tr}(D)I$, which vanish in the critical state ($\operatorname{tr} D = 0$), are added; their addition allows the critical state to be described under different stress paths, and a new four-parameter hypoplastic constitutive model is obtained: $\mathring{T} = C_1(\operatorname{tr} T)D + \big(C_2\operatorname{tr}(D)T + C_3\operatorname{tr}(T)\operatorname{tr}(D)I\big) + C_4(T + T^{*})\|D\|$ (2.6) According to the theory of critical state soil mechanics, both the stress and the volumetric strain must remain unchanged when the soil reaches the critical state: $\mathring{T} = 0, \quad \operatorname{tr} D = 0$ (2.7) Examining the spherical and deviatoric stress tensors in equation (2.6), the condition (2.7) is satisfied in the critical state, so expressions (2.8) and (2.9) are both equal to zero and can be reduced (meanwhile using $\operatorname{tr} T^* = 0$). In order to describe the contraction and dilation caused by the bulk characteristics of the deposits, the density factor multiplier $f_d$ is introduced into the nonlinear term of the extended hypoplastic model (2.6), where $D_c \triangleq (e_{crt} - e)/(e_{crt} - e_{min})$ is the modified relative density and $\omega \in (0, 1)$ is a material parameter. According to the law of conservation of mass, the rate of change of the void ratio is directly related to the volumetric strain rate $\operatorname{tr} D$: $\dot{e} = (1 + e)\operatorname{tr} D$ (2.15)
The relationship between the void ratio and the volumetric strain is then obtained by integrating (2.15): $e = (1 + e_0)\exp(\varepsilon_v - \varepsilon_{v0}) - 1$ (2.16) where $e_0$ is the initial void ratio, $\varepsilon_v$ is the volumetric strain and $\varepsilon_{v0}$ is the initial volumetric strain. Equation (2.16) reflects the nonlinear relationship between void ratio and volumetric strain: the void ratio decreases as the soil is compressed and increases as it expands. The Wu-Bauer hypoplastic model was originally used to simulate non-cohesive soil, without considering the effect of cohesion. In this paper, the stress tensor $T$ in the extended constitutive equation is replaced by a new tensor $\Omega := T - cI$ ($c$ is the cohesion parameter). A new expression of the constitutive equation is obtained by introducing the density factor multiplier $f_d$ and replacing the original stress tensor: $\mathring{T} = C_1(\operatorname{tr} \Omega)D + \big(C_2\operatorname{tr}(D)\Omega + C_3\operatorname{tr}(\Omega)\operatorname{tr}(D)I\big) + f_d C_4(\Omega + \Omega^{*})\|D\|$ (2.17) The yield surface of this model depends on the void ratio, which means that the void ratio of the soil along different stress paths to failure is different, and the stress state develops into a multi-yield-surface region. If a given time step does not satisfy the convergence requirement, the time step for the next iteration is reduced and the step is repeated. In order to further control the computational effort and avoid excessive iterations, the number of iteration steps is also limited by Nitermax ≤ Niterlimit. As shown in Figure 3.1, the tangent modulus E and Poisson's ratio μ can be expressed in terms of the deviatoric stress, axial strain and radial strain at two points A and B of the experimental data, and the parameters $C_i$ can then be determined by solving the resulting equations. In this article, several groups of parameters were determined from the results of triaxial compression tests on the deposits soil with relative density 0.80 (shown in Table 3.1).
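The void ratio update implied by equation (2.16) is straightforward to sketch numerically. The snippet below is a minimal illustration (in Python rather than the paper's MATLAB), assuming the sign convention that volumetric strain is positive in dilation; the function name and the sample values are illustrative only.

```python
import math

def void_ratio(e0, eps_v, eps_v0=0.0):
    """Void ratio from volumetric strain via the rate law de = (1 + e) d(eps_v).

    Integrating gives e = (1 + e0) * exp(eps_v - eps_v0) - 1, so e falls in
    compression (eps_v < 0) and rises in dilation (eps_v > 0). The sign
    convention (dilation positive) is an assumption of this sketch.
    """
    return (1.0 + e0) * math.exp(eps_v - eps_v0) - 1.0

# Compression reduces the void ratio; dilation increases it.
e_loose = void_ratio(0.253, -0.02)   # compressed loose sample: e < 0.253
e_dense = void_ratio(0.112, +0.01)   # dilating dense sample:   e > 0.112
```

With the two initial void ratios used later in the paper (0.253 and 0.112), the same relation reproduces the qualitative loose-contracting versus dense-dilating behaviour.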
The model parameters deviate considerably depending on the position of the selected calculation points. Comparing with the results of the triaxial consolidated drained test on the deposits soil under a confining pressure of 600 kPa, numerical simulations were calculated by substituting each of the determined parameter groups into the extended hypoplastic model (shown in Figure 3.2). Figure 3.2 Comparison of experiment and numerical simulation for several parameter groups It can be seen from Figure 3.2 that there is a large deviation between the simulation results of each group of parameters and the test data: the conventional method for determining the parameters of a hypoplastic model carries significant error. Parameter identification of extended Wu-Bauer hypoplastic model based on differential evolution algorithm The differential evolution (DE) algorithm is inspired by Darwin's theory of biological evolution. R. Storn and K. Price [18] proposed the differential evolution algorithm for solving Chebyshev polynomial fitting problems. The algorithm is widely used because of its few control parameters, high optimization efficiency, and good robustness. Its structural basis consists of three operations, selection, crossover and mutation, organized as: generating the initial population (Initialization), mutation (Mutation), crossover (Crossover) and selection (Selection). The nine parameters of the model were determined by means of the improved differential evolution algorithm and comprehensive test-fitting identification, based on the triaxial experimental data of the soil-rock mixed deposits. First, a multivariate error function is constructed between the results calculated with the extended hypoplastic model and the experimental results from the triaxial tests.
Then, the optimal model parameters and the minimum value of the objective function are found by differential evolution; the optimization algorithm minimizes equation (3.10) to obtain the optimal parameter values: $f(x) = \|v'(x) - v(x)\|_2$ (3.10) In (3.10), $\|\cdot\|_2$ is the 2-norm; $x$ is the vector of model parameters to be determined; $v(x)$ is the experimental observation and $v'(x)$ is the calculated value, such as the principal stress difference (σ1-σ3) and volumetric strain εv of the triaxial test. The variable $x$ may only take values within a given range during the optimization iterations. The parameters of the extended hypoplastic model were calibrated by the differential evolution global optimization algorithm using conventional triaxial test data of the deposits soil under a confining pressure of 400 kPa. The program is written in MATLAB. The specific steps are as follows: (1) The five parameters to be optimized in the model are written in variable form according to equation (2.7), i.e. x = [C1, C2, C3, C4, ω], and substituted into the differential-algebraic equation system for the triaxial test. (2) Define an m-file, 'fval = Hypoplasticity(x)'; enter the measured principal stress difference (σ1-σ3), axial strain ε1 and volumetric strain εv in Hypoplasticity.m, and write the equation relationships of step (1) in terms of the input vector x. (3) Write an adaptive-time-step Runge-Kutta implicit numerical integration solver for Hypoplasticity.m to call. (4) Compare the test data (σ1-σ3), εv with the results (T1-T3), D2 obtained by solving the hypoplastic model in the Hypoplasticity.m program file; establish the multivariate error objective function after conversion, and return it as fval. (5) Write an improved differential evolution algorithm that identifies the parameters by iteratively calling the Hypoplasticity.m multivariate error objective function.
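The mutation-crossover-selection loop of steps (1)-(5) can be sketched as follows. This is a minimal rand/1/bin differential evolution written in Python (the paper's implementation is in MATLAB); the hyperbolic surrogate model standing in for the full hypoplastic ODE solve, the parameter bounds, and the control parameters F and CR are all illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate response in place of the full hypoplastic ODE
# solve; the real objective compares (sigma1 - sigma3) and eps_v curves.
def model(x, strain):
    c1, c2 = x
    return c1 * strain / (c2 + strain)   # hyperbolic stress-strain sketch

strain = np.linspace(0.001, 0.1, 50)
x_true = np.array([1200.0, 0.02])
observed = model(x_true, strain)

def error(x):
    # 2-norm error between calculated and "observed" curves, as in eq. (3.10).
    return np.linalg.norm(model(x, strain) - observed)

def differential_evolution(error, bounds, np_=20, F=0.6, CR=0.9, gens=200):
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = lo + rng.random((np_, len(lo))) * (hi - lo)        # initialization
    fit = np.array([error(p) for p in pop])
    for _ in range(gens):
        for i in range(np_):
            a, b, c = pop[rng.choice(np_, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)        # mutation
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True              # keep >= 1 gene
            trial = np.where(cross, mutant, pop[i])          # crossover
            f = error(trial)
            if f <= fit[i]:                                  # selection
                pop[i], fit[i] = trial, f
    best = int(np.argmin(fit))
    return pop[best], fit[best]

bounds = np.array([[100.0, 5000.0], [0.001, 0.5]])
x_best, f_best = differential_evolution(error, bounds)
```

Minimizing the 2-norm error of equation (3.10) over the boxed parameter range recovers the generating parameters in this toy setting; the paper's version replaces the surrogate with the Runge-Kutta solution of the constitutive equations.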
The specific pseudocode of the program structure is shown in Table 3.2, and the identified parameters in Table 3.3. Validation of the one-dimensional lateral compression test model The initial stress state is taken as the self-weight stress field, and the initial void ratio e0 is 0.253 and 0.112, respectively. The calculation includes loading and unloading: axial pressure is first applied up to P2 = 300 kPa and then removed. Figure 4.1(a) shows the relationship between axial stress and radial stress. The two stresses are nearly linearly related in the loading stage, but nonlinearly related in the unloading stage. In the unloading stage, the incremental ratio of axial to radial stress defines the static earth pressure coefficient; the axial stress decreases faster than the horizontal stress during unloading. In the loading stage, the static earth pressure coefficient K changes little and remains close to the initial static earth pressure coefficient (0.5) calculated by Jaky's empirical formula. Validation of the triaxial test model An axisymmetric element of 30 cm × 60 cm is used in the calculation. An initial isotropic pressure T11 = T22 = T33 is applied; the confining pressure is then kept constant, and the axial load is applied at a constant rate. The test model takes two different initial void ratio states, e0 = 0.253 and e0 = 0.112. It can be seen from the figure that the initially dense deposits soil (e0 = 0.112) exhibits strain softening and dilatancy under a given pressure, while the initially loose deposits soil (e0 = 0.253) exhibits strain hardening and volumetric contraction. In addition, the failure strength of the deposits in the initially dense state is greater than that in the initially loose state. The extended hypoplastic model therefore reflects these basic mechanical properties of deposits soil objectively, in line with the preceding theoretical analysis.
Figure 4.4 illustrates the relationship between the axial strain and the void ratio. It can be seen that, as the triaxial shearing proceeds, the void ratio of the deposits soil in the initially loose state decreases continuously and the material becomes denser and denser. On the other hand, the void ratio of the deposits soil in the initially dense state first decreases slightly and then increases, the so-called dilatancy phenomenon. Both curves eventually approach the critical void ratio under the corresponding stress state. In summary, with a single set of model parameters, the model reflects the different mechanical properties of the deposits soil material in the two different initial states (e0 = 0.253 and e0 = 0.112). This shows that the extended model is applicable over a wide range of initial states, which is difficult to achieve with the four-parameter Wu-Bauer hypoplastic model without state variables. Conclusions (1) Based on an in-depth study of the Wu-Bauer hypoplastic model, its explicit representation is derived and its advantages and disadvantages are discussed; the method for determining the model parameters is also explained. (2) Hypoplastic modelling provides an approach to describing deposits soil. An extended hypoplastic equation was derived from the Wu-Bauer hypoplastic model to describe the stress-strain relationship of deposits soil. (3) A comprehensive parameter identification scheme for the extended constitutive model was proposed by means of the differential evolution algorithm, and an iterative solution program for the differential equations of the hypoplastic constitutive model was written. This method determines the constitutive parameters from the available experimental data better than the conventional method.
(4) Numerical simulation of the conventional triaxial compression test based on the extended hypoplastic model was adopted to analyze some mechanical characteristics of the deposits, in comparison with laboratory results.
The effect of outer ring elastodynamics on vibration and power loss of radial ball bearings Ball bearings are an integral part of many machines and mechanisms and often determine their performance limits. Vibration, friction and power loss are some of the key measures of bearing performance. Therefore, there have been many predictive analyses of bearing performance with emphasis on various aspects. The current study presents a mathematical model incorporating bearing dynamics and the mechanics of rolling element-to-race contacts, with the elastodynamics of the bearing outer ring as its focus. It is shown that the bearing power loss over cage cycles increases by as much as 4% when the flexibility of the outer ring is taken into account as a thick elastic ring, based on Timoshenko beam theory, as opposed to the usual assumption of a rigid ring in other studies. Geometric optimisation has shown that the lifetime power consumption can be reduced by 1.25%, which is a significant source of energy saving when considering the abundance of machines using rolling element bearings. The elastodynamics of the bearing rings significantly affects the radial bearing clearance through increased roller loads and generated contact pressures. The flexible ring dynamics is shown to generate surface waviness through global elastic wave propagation, not hitherto taken into account in the contact dynamics of rollers-to-raceways, which are generally considered to be subject only to localised Hertzian deflection. This elastodynamic behaviour reduces the elastohydrodynamic film thickness, affecting contact friction, wear, fatigue, vibration, noise and inefficiency. Introduction Key issues affecting the performance of rolling element bearings are wear and fatigue of the rolling mating surfaces, 1-3 as well as vibration and noise. [4][5][6][7] Thermal stability also plays an important role in a bearing's limiting performance.
8,9 All these parameters act in an integrated manner, determining the eventual bearing performance. Furthermore, during operation the rolling elements within a bearing undergo complex motions, such as rolling and sliding relative to the raceway grooves, as well as convergence and separation of the bearing rings. 10 Therefore, use of a bearing dynamic model is a prerequisite to any investigation of its performance, such as reliability, structural integrity, fatigue, wear and operational efficiency. Sunnersjö 11 was one of the first to investigate the effect of applied inertial forces using a two degree of freedom lateral transverse analytical bearing dynamics model. Meyer et al. 12 studied the effect of distributed defects, such as waviness of the rolling mating surfaces, upon the bearing vibration response using an analytical two degree of freedom bearing model. Rahnejat and Gohar 5 presented a 2-degree of freedom radial deep groove ball bearing model with lubricated ball-to-race contacts under various regimes of lubrication; they also studied the effect of inner race waviness. Their work was extended by Aini et al. 13 to a 5-degree of freedom bearing dynamics model, including the effect of bearing moment loading as well as applied axial thrust. With rolling element bearings, the effects of roller tilting, yawing and squeeze film motions, as well as lubricated contacts, were taken into account by various researchers. 10,14,15 Misaligned rollers cause high edge contact pressures [14][15][16][17] which can lead to fatigue spalling and pitting. The presence of faults such as cracks and pits on rolling surfaces causes secondary bearing vibrations, which have been extensively studied through numerical analysis as well as signal processing of the vibration response of bearings. [18][19][20] Other faults include the presence of off-sized bearing elements 21 as well as unstable cage dynamics.
22 In most of the analyses described thus far, the bearing housing has been considered rigid. However, in many applications housing compliance can lead to rotor/spindle misalignment, so its effect should normally be considered. 23,24 Most analyses also consider the rolling elements to be massless (negligible mass and inertia compared with the mass of the supported rotor). When this assumption cannot be upheld, the inertial dynamics of individual rolling elements should be taken into account, leading to much more complex n-degree of freedom bearing dynamics. [25][26][27] The effects of generated contact friction and heat generation are also taken into account in some studies. 8,9,28 Therefore, there are many interacting multi-physics issues which affect bearing dynamics, hence the plethora of research work in this area. Non-concentricity of the bearing rings or bearing housing plays an important role in bearing dynamics, as it affects the radial clearance. Such issues can be caused by assembly faults in the case of the bearing housing and by thermo-elastic deformation of the bearing rings under generated distributed contact loads and temperature; these often limit the operational performance of bearings. Cavallaro et al. 29 presented a thin elastic ring model to account for the centrifugal expansion of the bearing raceways, using the 2-dimensional disk equation detailed by Aramaki et al. 30 for high speed machine tool spindles. Cavallaro's model was further developed by Leblanc et al. 31 for thin elastic rings; they noted that the elastic model was a reasonable approximation for thin rings subjected to a static load. The FEA approach has been employed by Daidié et al. 32 as well as by Olave et al., 33 requiring high mesh density around the rollers, which leads to long computation times and limits the use of this approach under transient conditions.
Other FEA-based models include the work of Lacroix et al., 34 linking FEA of the rings to an analytical contact model, and Wagner et al., 35 who used an FEA model of the flexible outer ring to demonstrate its influence upon the performance of a high speed ball bearing. The current paper presents bearing dynamics including the transient flexible response of the bearing's outer ring. In this approach both the localised Hertzian deflection and the global deformation of the bearing's outer ring are taken into account. For this purpose, the bearing's outer ring is considered as an elastic thick complete circular ring. It is important to note that in real applications the bearing rings should be considered as thick rings when the radius-to-width ratio is less than 10, according to Chidamparam and Leissa. 36 The numerical solution of full bearing dynamics with a flexible ring is quite time consuming. Therefore, the investigation into the effect of ring elastodynamics on the vibrations of radial ball bearings requires the development of an initial thick ring dynamic model for amalgamation into an appropriate analysis. Thus, the outer ring elastodynamic model is based on the thick ring theory established originally by Timoshenko. 37 Problem formulation Model description The lateral excursions of the bearing centre determine the share of dynamic load carried by the ball complement in their orbital motion (Figure 1). The following assumptions are made in arriving at a 2-DOF bearing dynamics model: 1. The outer and inner bearing races are perfectly circular. 2. The balls are perfectly spherical with identical diameters. 3. The balls are considered massless and equipitched around the bearing rings. 4. The bearing is loaded only in the radial transverse directions. 5. Thermal effects are neglected. 6. The inner bearing ring and the supported rotor are rigid. 7. The effect of structural and contact damping is neglected.
The above assumptions lead to a 2-DOF bearing model for a radial deep groove ball bearing, accounting for the oscillations of the rigid supported shaft in the x and y radial transverse directions (Figure 1: two degree of freedom deep groove ball bearing model). This is the basic 2-DOF bearing dynamics model described in Refs. 5,38 The equations of motion are: $M\ddot{x} + \sum_{i=1}^{Z} W_i \cos\theta_i = F_x$ (1) $M\ddot{y} + \sum_{i=1}^{Z} W_i \sin\theta_i = F_y$ (2) where M is the mass of the supported shaft, $F_x$ and $F_y$ are the external applied forces in the x and y-directions, $\theta_i$ is the instantaneous circumferential angular position of the ith ball and $W_i$ is its instantaneous contact load, acting radially towards the centre of the bearing. The ball-to-race contact reactions are given by classical Hertzian contact theory as: $W_i = K\,\delta_i^{\,n}$ (3) where $W_i$ is the contact reaction for the ith ball, $\delta_i$ is the localised Hertzian contact deflection and K is the contact stiffness non-linearity. The exponent n is 1.5 for a ball bearing and 1.11 for a roller bearing. 1,38,39 The localised contact deflection, $\delta_i$, is obtained as 38 : $\delta_i = x\cos\theta_i + y\sin\theta_i - C - h_i$ (4) where C is the local radial clearance, $h_i$ is the lubricant film thickness, and x and y are the movements of the bearing centre in the lateral directions; $\theta_i$ denotes the instantaneous circumferential orbital position of the ith ball. The deflection in equation (4) is altered to include the effect of the flexible outer ring's local radial deflection, $u_i$, at any contact location i, as: $\delta_i = x\cos\theta_i + y\sin\theta_i - C - h_i + u_i$ (5) The combined contact stiffness non-linearity K of any ball-to-inner and outer race contact is determined from the contact geometry and material properties, 40,41 where the curvature sum for the ball to the inner and outer raceways involves $R_1$ and $R_2$, the radii of curvature of the inner and outer raceways respectively, and $R'_1$ and $R'_2$, the radii of curvature of the ball in the zx and zy planes of contact (clearly $R'_1 = R'_2$).
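The contact check and load-deflection law of equations (3)-(5) can be sketched as a small routine. The projection form of the deflection and the numerical stiffness value below are illustrative assumptions (the paper's K follows from the curvature sum and tabulated contact coefficients); the point is only the δ > 0 contact test and the nonlinear Hertzian law.

```python
import math

def ball_contact_load(x, y, theta, C, h, u=0.0, K=8.0e9, n=1.5):
    # Elastic compression of one ball at orbital angle theta, per the
    # assumed deflection form of eqs. (4)-(5): positive delta means the
    # ball is in compression; otherwise it carries no load.
    delta = x * math.cos(theta) + y * math.sin(theta) - C - h + u
    # Hertzian point-contact law, eq. (3): W = K * delta**n, n = 1.5
    # for a ball bearing. K here is a placeholder value.
    return K * delta ** n if delta > 0.0 else 0.0

# A ball at theta = 0 loaded by a 10 micron radial shift of the shaft
# centre (clearance and film set to zero purely for illustration):
W = ball_contact_load(10e-6, 0.0, 0.0, C=0.0, h=0.0)
```

Summing such loads, resolved through cos θᵢ and sin θᵢ over the ball complement, yields the restoring terms of equations (1) and (2).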
In addition, $j_C$ represents the elastic contact proportionality constant. 40 The coefficient k in equation (6) is a function of the elliptical contact footprint dimensions a and b and the contact parameter w (Tables 1 and 2), 41 where φ is the contact angle. The semi-major and semi-minor half-widths of the elliptical contact footprint for the ith ball-raceway contact follow from Hertzian theory. 40,41 The effective contact stiffness combines the ball-to-inner and ball-to-outer raceway contacts. The maximum Hertzian contact pressure is 39 : $p_{max} = \frac{3W}{2\pi a b}$ and the Hertzian pressure distribution for each ball-to-race contact is obtained as: $p(x_0, y_0) = p_{max}\sqrt{1 - \left(\frac{x_0}{a}\right)^2 - \left(\frac{y_0}{b}\right)^2}$ where $x_0$ and $y_0$ are the coordinate positions within the contact footprint of any ball-race contact. Hertzian theory adequately describes the mechanics of contact of the balls with the raceway grooves when the contact conditions in practice follow the elastohydrodynamic regime of lubrication, which is the case for bearings with no emerging clearances and with adequate preloading and interference fitting. However, Hertzian theory assumes frictionless contact; this assumption may be relaxed by stating a coefficient of friction, but the correct approach is to calculate the viscous shear of the lubricant. Non-Newtonian shear of the lubricant leads to thin films and flattening of asperities on the opposing mating surfaces. Thus, friction can be obtained through determination of the contact film thickness and shear stress. The instantaneous central contact lubricant film thickness, assuming isothermal conditions, is obtained for any ball-to-race contact using the extrapolated film thickness equation of Hamrock and Dowson 42 , where $H^*_{ci}$ is the dimensionless central contact film thickness, $h_i$ is the film thickness and $R_x$ is the effective radius of curvature of the contacting bodies in the direction of lubricant entrainment into the contact.
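The effective stiffness of a ball sandwiched between the inner and outer race contacts follows from the two contact deflections adding in series. A short sketch, using the standard series-combination result (assumed here, since the paper's own expression involves its tabulated contact coefficients):

```python
def effective_stiffness(K_inner, K_outer, n=1.5):
    """Series combination of ball-to-inner and ball-to-outer race contacts.

    Deflections add: delta = (W/K_i)**(1/n) + (W/K_o)**(1/n), which gives
    K_eff = ((1/K_i)**(1/n) + (1/K_o)**(1/n))**(-n). This series form is a
    standard result assumed for illustration; n = 1.5 for ball bearings.
    """
    return ((1.0 / K_inner) ** (1.0 / n) + (1.0 / K_outer) ** (1.0 / n)) ** (-n)
```

For equal contact stiffnesses the effective value drops to K/2^1.5, i.e. the combined contact is noticeably softer than either single contact.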
The dimensionless operating parameters are U* (the speed or rolling viscosity parameter), W* (the load parameter) and G* (the materials parameter), where $\eta_0$ is the lubricant atmospheric dynamic viscosity, l is the lubricant entrainment velocity (the average speed of the contacting surfaces), $p_{a,i}$ is the asymptotic iso-viscous pressure and E* is the equivalent Young's modulus of elasticity of the contacting bodies. In addition, $e^*_p$ is the contact footprint ellipticity parameter. The non-Newtonian shear stress is determined as 43 a function of the viscous shear stress τ, the characteristic shear stress $\tau_0$, 44 the slope ε of the lubricant limiting shear stress with variations in pressure, and the mean contact pressure $p_m$. The instantaneous generated viscous friction for the ith contact and the corresponding instantaneous power loss then follow, where $f_s$ is the shaft rotational frequency and $R_o$ is the inside radius of the outer race. The specifications of the bearing considered in this study are listed in Table 2. To guard against a number of undesired phenomena, such as ball skidding, skewing, rattling and cage collisions, a 5 μm radial interference fit is applied. Therefore, all the ball-to-race contacts remain in compression throughout their orbital motions (and classical Hertzian contact theory can be upheld). The radii of curvature of the raceway grooves ensure a contact conformity of merely 7% and a contact angle of φ = 45°. The number of balls for a tightly packed arrangement is Z = 14, whilst for a moderately packed arrangement it is Z = 12. Elastic outer ring vibrations: An overview In order to include the global modal deflection of the flexible outer ring in the 2-DOF bearing dynamics ($u_i$ in equation (5)), the outer ring is considered as a thick circular elastic ring. Kuhl 45 measured the in-plane and out-of-plane vibration frequencies of a number of thick rings.
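The extrapolated Hamrock-Dowson central film thickness referred to above is widely quoted in the form sketched below, which is what this snippet assumes (the paper's exact regression constants may differ slightly):

```python
import math

def central_film_thickness(U, G, W, k):
    """Dimensionless central EHL film thickness in the widely quoted
    Hamrock-Dowson form (assumed here):
        H_c = 2.69 * U**0.67 * G**0.53 * W**-0.067 * (1 - 0.61*exp(-0.73*k))
    U, G, W are the speed, materials and load parameters; k is the
    ellipticity ratio. The dimensional film is h_c = H_c * R_x.
    """
    return (2.69 * U ** 0.67 * G ** 0.53 * W ** (-0.067)
            * (1.0 - 0.61 * math.exp(-0.73 * k)))
```

The exponents show why speed dominates the film (exponent 0.67) while heavier loads thin it only weakly (exponent −0.067), which is why elastodynamic load redistribution around the flexible ring matters mainly through the local contact geometry and entrainment conditions.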
Seidel and Erdelyi [46] developed equations for inextensible deformation of thick rings. Comparisons of thin and thick ring predictions were made for response frequencies with the measurements reported by Kuhl [45]. Rao and Sundararajan [47] derived an equation for free in-plane vibrations of a circular ring, including the effects of shearing deformation and rotary inertia. Good agreement was obtained with the measurements of Kuhl [45]. Rao [48] demonstrated the effect of transverse shear and rotary inertia on the out-of-plane motions of thick rings and incomplete rings with free, simply supported and fixed ends. Experimental measurements for the transverse vibrations of free rings were reported by Peterson [49] when investigating gear noise. Rao [48] used Peterson's measurements to validate his model. An extensive review of the literature on flexural vibrations of thick and thin rings is provided by Chidamparam and Leissa [50]. More recently, Tufekci and Dogruer [51] showed that for doubly symmetric cross-sections, the in-plane and out-of-plane vibration responses of rings can be decoupled. The treatment of flexural vibrations of thick rings has been progressively improved, including the recent work by Yu and Fadaee [52], validating the predictions of their in-plane FEA-based vibrations against the measurements of Kuhl [45]. Research into the dynamic behaviour of thin rings has received more attention, mainly because of the importance of incomplete circular compression rings as seals in internal combustion engines [53-55]. An in-depth study of bearing races as thick rings and its integration with bearing dynamics is long overdue. This is the approach undertaken in this paper. For the 2-DOF bearing dynamic model, the in-plane flexural dynamics of the thick outer bearing ring is developed. The out-of-plane outer race dynamics occurs in the axial direction of the rotor, which is not taken into account in the current 2-DOF bearing dynamics.
The out-of-plane motion is resisted by the dry friction between the outer race and the bearing housing. The in-plane vibrations are included in the 2-DOF bearing dynamics model.

In-plane elastodynamics of thick rings. Figure 2 shows a segment of a thick circular ring, representing the outer race of a ball bearing. Definitions of all terms used in the methodology are shown in the figure, as well as the employed coordinate system and the in-plane forces and moments acting on an element of the ring. The in-plane equations of motion for a thick ring segment, including shearing deformation and rotary inertia, are obtained as [47,56]: where A is the ring's cross-sectional area, R is its radius, I_1 the second area moment of inertia of the ring's cross-section, E the Young's modulus of elasticity, G the shear modulus, ρ the density of the ring's material, κ the shear correction factor, f the radial applied force, and p the circumferentially applied force. The parameter u is the radial deflection due to the global ring elastodynamics and w is the circumferential deflection of the ring, while u_i in equation (5) is the radial deflection of the flexible outer ring at the instantaneous position of the ith contact. The parameter ψ is the shear deformation of the ring cross-section, and t denotes time. The following assumptions are made in the derivation of the in-plane motion of a thick ring segment:
• The ring cross-section remains unaltered.
• The undeformed ring segment centreline follows a circular arc.
• There are no boundary conditions applied to the ring segment.
• Structural damping effect is neglected.

Method of solution
The coupled equations of motion (1) and (2) are solved together with equations (24) to (26) for the radial in-plane motion of the flexible outer ring using a combination of the central finite difference method (FDM) and the step-by-step Newmark linear acceleration technique.
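The solution machinery described above can be sketched under simplifying assumptions: a scalar Newmark linear acceleration integrator, plus a closed-form two-DOF stand-in for the generalized eigenproblem K_g x = ω_n² M_g x (the paper assembles its matrices from the finite-difference mesh; the functions below are purely illustrative, not the paper's coupled formulation).

```python
import math

def newmark_linear_accel(m, k, u0, v0, dt, nsteps, force=lambda t: 0.0):
    """Newmark step-by-step time integration with the linear acceleration
    parameters (gamma = 1/2, beta = 1/6) for a single-DOF system
    m*u'' + k*u = f(t); a scalar illustration of the matrix scheme."""
    beta, gamma = 1.0 / 6.0, 0.5
    u, v = u0, v0
    a = (force(0.0) - k * u) / m                 # initial acceleration
    keff = m / (beta * dt * dt) + k              # effective stiffness
    for n in range(nsteps):
        rhs = force((n + 1) * dt) + m * (u / (beta * dt * dt)
                                         + v / (beta * dt)
                                         + (0.5 / beta - 1.0) * a)
        u_new = rhs / keff
        a_new = ((u_new - u) / (beta * dt * dt)
                 - v / (beta * dt) - (0.5 / beta - 1.0) * a)
        v += dt * ((1.0 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
    return u, v

def natural_frequencies_2dof(m1, m2, k11, k12, k22):
    """Natural frequencies (rad/s) from det(K - w^2 M) = 0 for a 2-DOF
    system with diagonal masses; a minimal stand-in for the generalized
    eigenproblem solved on the assembled ring mesh."""
    a = m1 * m2
    b = -(m1 * k22 + m2 * k11)
    c = k11 * k22 - k12 ** 2
    disc = math.sqrt(b * b - 4.0 * a * c)
    lam = sorted([(-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)])
    return math.sqrt(lam[0]), math.sqrt(lam[1])
```

For a free oscillator with ω = 2π rad/s, a 1 ms step conserves amplitude over a full period (γ = 1/2 introduces no numerical damping), which is why this family of schemes suits long vibration time histories.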
A mesh dependency study is carried out to ensure independence of the results from the chosen mesh density. The in-plane equations of motion are rearranged in order to obtain the equivalent mass (M) and stiffness (K) matrices (see Online Appendix A). The formulation can then be used to obtain the natural frequencies of the flexible thick outer bearing ring system, where M_g and K_g represent the corresponding characteristic mass and stiffness matrices of the structure, thus: where: The resultant mode shapes of the structure can then be found using: where the term ω_n represents the nth natural frequency of vibration of the system, and x_n is its associated modal displacement vector. In this manner the frequency response of the bearing includes the response of its outer flexible ring as well as the speed-dependent bearing frequencies due to cage rotation and its multiples.

Validation of the thick ring methodology
It is essential to validate the expounded method against available experimental measurements. Like other similar studies, the validation is carried out against the reported measurements of Kuhl [45]. Table 3 lists the specification of the thick circular ring in Kuhl [45] for which the in-plane ring response was reported. Table 4 lists the predicted results alongside the reported measurements of Kuhl [45]. A good agreement is observed between the predictions and the measurements of Kuhl [45] (a maximum difference of 1.34%), as well as with the analytical solutions of Kirkhope [57] and those of Yu and Fadaee [52], all using the same validation example. The complete ring analysed in Table 4 has a thickness-to-radius ratio of approximately 0.479, demonstrating the applicability of the theory to very thick rings for in-plane motions. Figure 3 shows the first three in-plane flexible mode shapes. A closer look at the results in Table 4 indicates that the frequencies reported in this study slightly underpredict those reported by Kuhl [45].
The same trend can also be observed for the results reported by Kirkhope [57]. The results reported by Yu and Fadaee, however, seem to slightly overestimate the experimental results. Such variations between numerical and experimental results can also be observed elsewhere. For example, the results obtained by Gardner and Bert [58] also underpredict the same experimentally obtained frequencies, while the results reported by Lin and Soedel [59] underestimate some of the experimental frequencies and overpredict others, such as those associated with mode numbers 2 and 3, by a very small amount of 0.09%. In the case of the current study, the predicted results are expected to improve further by, for instance, increasing the mesh density in the employed finite difference method. The experimental results would also vary with sampling time. Given these issues, one can conclude that good agreement has been found.

Effect of flexible ring elastodynamics on radial ball bearing performance
Figure 4 shows a typical cyclic ball-to-races contact load in the radial transverse direction [5] during a cage cycle with the assumption of rigid bearing rings. This is for the case of a radial interference fit of C = −5 μm (in equation (4)), ensuring no emerging clearances in the bearing (i.e. all ball-to-races contacts are subject to compression under an applied load). An insufficient interference fit or preload can result in the emergence of unloaded zones and lead to excess noise and vibration. The variation of contact load in a typical ball orbital motion results in bearing vibrations as a function of cage rotational speed, as shown in Figure 5. The spectrum of vibration can be obtained through fast Fourier transformation of several cage cycles, as shown in Figure 5.
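Extracting spectral content from a vibration time history, as described above, can be sketched with a naive discrete Fourier transform peak search (in practice a library FFT over several cage cycles would be used; the signal in the example is synthetic, not bearing data).

```python
import math

def dominant_frequency(signal, fs):
    """Naive DFT scan that returns the frequency [Hz] of the strongest
    spectral line in a sampled signal; illustrates locating cage-frequency
    content in a bearing displacement time history."""
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):           # skip DC, stop below Nyquist
        re = sum(signal[i] * math.cos(2 * math.pi * k * i / n)
                 for i in range(n))
        im = -sum(signal[i] * math.sin(2 * math.pi * k * i / n)
                  for i in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n               # bin index -> frequency in Hz
```

For a signal containing a fundamental and a weaker second harmonic (e.g. a cage line and its multiple), the scan returns the fundamental; harmonics appear as secondary peaks in the full magnitude spectrum.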
The fundamental frequencies of the system occur, including f_b, the base natural frequency of the system, which is dependent on the bearing load and its dynamic stiffness, itself a function of the number of balls and the amount of interference fit or preload [5,38]. The cage frequency is the other main frequency of the system, due to the repetitive cyclic cage rotations. Other contributions include multiples of the cage frequency and its modulations with the system natural frequency. A greater number of harmonics of the cage frequency occur with more defined (narrowed) loaded regions, which is not the case here. With very defined loaded regions (zero interference fit and emerging clearances), cage harmonics up to Nf_c (the ball-pass frequency), where N is the number of balls, can appear in the bearing vibration spectrum, a phenomenon which is referred to as the variable compliance effect [5,6,38]. The ball-pass frequency is the speed of the balls relative to a stationary outer race. Applying the ball-to-races contact forces to the elastic outer ring model causes it to deflect around its circumference, as demonstrated in Figure 6(a). The localised ring deflection contributes further to the localised contact deflection as indicated in equation (5), which in turn increases the load per ball in each cage cycle in equation (3). In effect, the global modal shape of a flexible outer ring increases the extent of interference fitting and the contact loads. Figure 6(b) shows the resulting percentage increase in a typical ball-to-races contact deflection. In effect, the ring elastodynamics represents a form of elastic wave propagation, manifested as an elastic wavy surface of the outer race. Figure 7(a) shows an increase in a ball contact load throughout a typical cage cycle. This is due to the in-plane radial deflection of the elastic outer ring.
Owing to the contact stiffness non-linearity, the percentage increase in load per ball contact, as the result of ring deformation, can be as much as 6.25% (Figure 7(b)). This increase in contact load can compromise bearing performance through increased contact pressure, friction and wear, as well as a rise in sub-surface stresses, with the implication of reaching the onset of fatigue. If the shaft frequency coincides with another structural response frequency, then resonance would occur. Premature resonance can occur well before any of the system component frequencies reach the rotor speed, as one would usually surmise. This occurs as any structural component would resonate with the ball-pass frequency, which is at least an order of magnitude higher than the cage frequency, particularly with any emerging narrow loaded zone created by the deflection of an outer elastic ring, which is often ignored in any analysis. Therefore, the elastic ring behaviour demonstrated in Figure 8 can cause resonance at orders of magnitude higher than the cage frequency or a given rotor speed (shaft out-of-balance frequency). For the case shown here the ball-pass frequency is at 159.66 Hz and its harmonics occur at 319.41 Hz, 479.61 Hz, 638.82 Hz and 798.30 Hz respectively. These are clearly visible in the frequency analysis of the bearing centre displacement time history in Figure 8(b). The dominant frequency in Figure 8(b) is the second order of the ball-pass frequency, acting at 2f_r = 319.2 Hz. The global ring elastodynamic deflection demonstrated in Figure 6(a) can affect the cage frequency calculation of Lynagh et al. [7] by altering the pitch diameter (D_P). This can in turn alter the ball-pass frequency through f_r = Nf_c [7]. However, this is deemed to be negligible, as the deflection of the ring (δ_max = 0.14 μm) is significantly smaller than the pitch circle diameter (D_P = 56.3 mm) of the bearing.
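The cage and ball-pass frequencies discussed above follow the standard bearing kinematic relations for a stationary outer race, sketched below. Only D_P = 56.3 mm and the 45° contact angle come from the text; the shaft speed and ball diameter in the example are illustrative assumptions.

```python
import math

def bearing_frequencies(f_shaft, n_balls, d_ball, d_pitch, phi_deg):
    """Kinematic bearing frequencies for a stationary outer race:
    the cage (fundamental train) frequency
        f_c = (f_shaft / 2) * (1 - (d / D_P) * cos(phi))
    and the outer-race ball-pass frequency f_r = N * f_c."""
    phi = math.radians(phi_deg)
    f_c = 0.5 * f_shaft * (1.0 - (d_ball / d_pitch) * math.cos(phi))
    return f_c, n_balls * f_c
```

Because the cage runs at slightly under half the shaft speed, the ball-pass frequency for a 12- or 14-ball complement sits an order of magnitude above the cage line, which is why an elastically narrowed loaded zone can excite resonances far above the rotor speed.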
Figure 9(a) shows the time history of the predicted elastohydrodynamic lubricant film thickness for a ball-race contact in a steady state cage cycle. Any reduction in the film thickness results in an increase in friction, shown in Figure 9(b), with the associated power loss per cage cycle per ball contact shown in Figure 9(c). The total bearing power loss, comprising all the ball-to-races contacts, is shown in Figure 9(d). This shows that an elastic outer ring increases the bearing power loss by approximately 4% (in the studied case). The fluctuation in bearing power loss observed in Figure 9(d) is an additional source of vibration in any rotor-bearing system. Additionally, bearing reliability is of paramount importance. Any increased contact pressures and shear can lead to increased sub-surface stresses with an increased chance of inelastic deformation. There have been numerous studies into the fundamental mechanisms affecting the life of bearings. These include the distortion energy hypothesis, known as the von Mises criterion, as well as the maximum shear stress hypothesis, known as the Tresca criterion. Broszeit and Zwirlein [60] and others [2,39,61] have shown that the alternating shear stress hypothesis should be used to predict the ultimate useful life of a bearing. The equivalent stress (σ_e) due to the alternating shear hypothesis is [62,63]: Figure 10 shows the sub-surface orthogonal reversing shear stresses ±τ_zx. The cyclic compressive-tensile nature of these stresses is often responsible for the failure of contacting surfaces. The lifetime power loss for the bearing can be predicted based on the usual assumption of 10% bearing failures due to fatigue after 2 million cycles and approximately 50% failure after 10 million cycles [39]. These are shown for the cases of an assumed rigid as well as the more realistic flexible outer race in Figure 11.
The results are obtained by evaluating the lubricant viscosity at an assumed operational temperature of 60 °C. The results indicate that the cumulative effect of ring elasticity over a large number of cycles is marginal, but not insignificant (a 4% increase in power loss per bearing cycle), considering the ubiquitous nature of bearings in all forms of rotating machinery. However, such a relatively small difference in the predictions for one cycle can result in considerable differences in the prediction of the consumed lifetime power of the bearing. For instance, based on the results in Figure 11, the difference for a bearing during a nominal lifespan of 10 million cycles, which is expected for 50% of all bearings, can accumulate to approximately 0.16 kWh when rotating at a cage speed of 13.3 Hz. The associated power loss would increase significantly in high speed applications. Optimisation of the elastic ring can improve system efficiency in applications where the bearing weight is negligible, such as in fixed rotating machinery. The outer race or housing can be optimised to reduce the losses associated with their flexibility. In applications where the machine is not fixed, such as in the automotive industry, both the bearing weight and the power loss due to the flexible ring should be optimised in concert. Important geometrical design parameters for the bearing outer race are its outer diameter d_0, the bearing width a_0 and the race radial thickness b_0. These geometric properties can play an important role in the bearing total power loss per cycle, the outer ring maximum deflection and the frequency response, as shown in Figure 12. The pitch circle diameter of the bearing remains unaltered for the bearing outer race throughout the geometric parametric study. The original ring parameters are listed in Table 2 as d_0, a_0 and b_0. The non-dimensional parameter d* represents the ratio of an altered ring parameter to its original value.
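The quoted lifetime figure can be sanity-checked with simple arithmetic: energy equals the sustained power-loss difference multiplied by running time, where time is the cycle count divided by the cage frequency. A difference of roughly 0.77 W (a value inferred here for illustration, not stated in the text) over 10 million cycles at 13.3 Hz accumulates to about 0.16 kWh.

```python
def lifetime_energy_kwh(delta_power_w, n_cycles, cage_freq_hz):
    """Energy accumulated by a sustained power-loss difference over a
    number of cage cycles: t = n / f_c [s], E = P * t [J], returned
    in kWh (1 kWh = 3.6e6 J)."""
    seconds = n_cycles / cage_freq_hz
    return delta_power_w * seconds / 3.6e6
```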
Figure 12 shows the effect of the geometric parameters of the outer race on the power loss per cycle (a), the deformation (b) and the in-plane frequency response (c) of the radial deep groove ball bearing. Optimisation of the outer ring allows the total power loss per cycle to be reduced by 1.25%, which subsequently reduces the running energy costs and the corresponding carbon emissions. It is important to note that in certain applications, the additional weight may compromise the predicted energy savings. The results also show that the maximum deflection, hence the generated wavy surface of the elastic ring, can be altered to change the bearing response and increase the useful working life of the bearing. It is important to note that although the ring's deflection can be minimised through an increase in its diameter, width and thickness, both its deflection and the wavy structural response are still evident. It is also important to note that for high performance applications, reductions in the diameter, width and thickness may cause significant increases in the ring deflection. To avoid a compromise in performance it is important to ensure that the bearing outer ring resonant frequencies do not coincide with the bearing operational frequencies. For the case of the bearing in Table 2, a moderately packed ball complement arrangement comprises 12 balls and a tightly packed arrangement comprises 14 balls. Therefore, the tightly packed arrangement can excite the resonance frequency of the bearing at lower operational frequencies than the moderately packed one. The bearing outer race needs to be designed so that its operational structural modal frequencies are outside any system excitation frequency. The excitation frequency is primarily provided by the shaft rotation frequency, but other sources of vibration can excite the structure as well, such as multiples of the cage frequency and surface waviness of the bearing outer race.
Figure 12(c) demonstrates the effect of the different geometric properties on the outer ring resonant frequencies. It can be seen that the radius has the greatest effect, compared with the thickness. The ring width does not affect the in-plane frequency response of the structure. Unfortunately, the shift in frequency response is associated with a shift in the bearing outer race ring deflection, as demonstrated in Figure 12(b). Therefore, it is important that these two design factors are considered together in an integrated approach. The ring width is an important geometric parameter in that it has no effect on the structural resonance frequency, but it can be used to reduce the maximum deflection and the power loss due to the outer ring flexibility. It is noteworthy that in certain applications a radial bearing can also accommodate an axial load. Furthermore, thrust bearings mainly support axial loads but can also accommodate radial loads. In applications where the radial load is accompanied by axial loading it is important that the out-of-plane ring frequency is also considered.

Concluding remarks
The total power consumption per revolution is shown to increase when the flexibility of the bearing rings or housing is taken into account (approximately 4% for the case studied under isothermal conditions). Therefore, the predicted power losses in motors, rotors and gearboxes, where bearings are used in abundance, increase with realistic bearing models such as the one presented in this study. The study also shows that the flexibility of the ring enables optimisation of bearing performance through the selection of appropriate geometrical parameters. A geometric investigation has shown that the losses induced by an elastic outer ring can be reduced by up to 1.25%. This can provide significant power savings through the lifetime of systems, especially given the abundance of bearings in almost 95% of all machines and mechanisms.
Flexible ring dynamics causes surface waviness of the bearing raceway through global elastic wave propagation. This has not hitherto been taken into account in roller-to-raceway contact dynamics, which has generally considered the contacts to be only localised, subject to Hertzian deflection. Furthermore, the elastodynamic behaviour of the ring reduces the elastohydrodynamic film thickness, which in turn affects a number of fundamental ball bearing performance attributes such as contact friction, wear, fatigue behaviour, vibration, noise and inefficiency. The ball bearing model could be expanded to include thermal effects of the lubricant using, for example, an analytical thermal network model. Inclusion of contact damping as well as structural damping of the flexible bearing rings would also improve the practicality of the methodology. However, it should be noted that under an elastohydrodynamic regime of lubrication with an adequately interference fitted/preloaded bearing, damping due to lubricant action has been shown to be insignificant [64,65]. The main source of contact damping would be due to material hysteresis in localised deformation [66].

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The authors gratefully acknowledge the financial support of the Engineering and Physical Sciences Research Council (EPSRC) under the Centre for Doctoral Training in Embedded Intelligence; grant reference EP/L014998/1, as well as AVL List GmbH.

Supplemental Material
Supplemental material for this article is available online.
GERD after Peroral Endoscopic Myotomy: Assessment of Incidence and Predisposing Factors

BACKGROUND: Peroral endoscopic myotomy (POEM) is an effective intervention for achalasia, but GERD is a major postoperative adverse event. This study aimed to characterize post-POEM GERD and identify preoperative or technical factors impacting the development or severity of GERD.
STUDY DESIGN: This is a retrospective review of patients who underwent POEM at our institution. Favorable outcome was defined as a postoperative Eckardt score of 3 or less. Subjective GERD was defined as symptoms consistent with reflux. Objective GERD was based on a DeMeester score greater than 14.7 or Los Angeles grade C or D esophagitis. Severe GERD was defined as a DeMeester score greater than 50.0 or Los Angeles grade D esophagitis. Preoperative clinical and objective data and technical surgical elements were compared between those with and without GERD. Multivariate logistic analysis was performed to identify factors associated with each GERD definition.
RESULTS: A total of 183 patients underwent POEM. At a mean ± SD follow-up of 21.7 ± 20.7 months, 93.4% achieved favorable outcome. Subjective, objective, and severe objective GERD were found in 38.8%, 50.5%, and 19.2% of patients, respectively. Of those with objective GERD, 24.0% had no reflux symptoms. Women were more likely to report GERD symptoms (p = 0.007), but objective GERD rates were similar between sexes (p = 0.606). The independent predictors for objective GERD were normal preoperative diameter of the esophagus (odds ratio [OR] 3.4; p = 0.008) and lower esophageal sphincter (LES) pressure less than 45 mmHg (OR 1.86; p = 0.027). The independent predictors for severe objective GERD were LES pressure less than 45 mmHg (OR 6.57; p = 0.007) and obesity (OR 5.03; p = 0.005). The length of esophageal or gastric myotomy and the indication for the procedure had no impact on the incidence or severity of GERD.
CONCLUSION: The rate of pathologic GERD after POEM is higher than that of symptomatic GERD. A nonhypertensive preoperative LES is a predictor for post-POEM GERD. No modifiable factors impact GERD after POEM.

stasis. Therefore, the management of achalasia is targeted at relieving this obstruction. Pneumatic dilation, laparoscopic Heller myotomy, and, more recently, peroral endoscopic myotomy (POEM) are the durable interventions performed to achieve this goal. POEM is a safe and effective endoscopic treatment, utilized to alleviate obstructive physiology at the EGJ or distal esophagus. Patients with achalasia and a few other esophageal motility disorders (e.g., esophagogastric junction outlet obstruction [EGJOO], jackhammer esophagus, and diffuse esophageal spasm [DES]) benefit from this procedure. POEM was developed to mimic the Heller myotomy through an endoscopic platform, thereby avoiding body wall trauma and preserving extraesophageal anatomy. This less invasive procedure is rapidly being adopted by clinicians in the US. A recent study shows a 19-fold increase in the use of POEM during an 8-year period [1]. Several studies have compared the outcomes of POEM to those of Heller myotomy with Dor fundoplication (HMD) and found equivalent efficacy with comparable safety [2,3]; however, investigators have expressed concern about the high rate of GERD after POEM. The clinical challenge in achalasia management is the relief of dysphagia without inducing debilitating gastroesophageal reflux. Surgical or endoscopic disruption of the LES compromises the competency of the EGJ against acidic gastric refluxate. Therefore, the development of GERD after myotomy is a frequent problem. The reported prevalence of reflux symptoms or objectively proven GERD after treatment in patients with achalasia ranges from 5% to 60% [4-6]. This wide variability is related to the definition, the method of reflux measurement, and, most importantly, the type of treatment.
Since the POEM procedure does not include the creation of a concurrent antireflux mechanism, it is associated with the highest rate of iatrogenic GERD among definitive procedures. A prospective cohort study of POEM outcomes reported rates of subjective GERD, endoscopic esophagitis, and abnormal distal esophageal acid exposure of 43%, 60%, and 56%, respectively [4]. Although several studies have reported the rate of GERD after POEM, there are limited data on the factors predicting the occurrence or severity of reflux after POEM. Therefore, we designed the current study to characterize GERD after POEM using both subjective and objective parameters and to determine potential preoperative criteria or technical elements that may predict the development of GERD or its severity.

Study population
This was a retrospective review of prospectively collected data of patients who underwent POEM at Allegheny Health Network hospitals (Pittsburgh, PA) between January 2013 and June 2021. This study was evaluated and approved by the IRB of the Allegheny Health Network (IRB No. 2021-239). Patients with a diagnosis of achalasia subtypes, EGJOO, DES, or jackhammer esophagus; who were 18 years or older; and who had at least 6 months of follow-up after surgery were included in this study. Demographic, clinical, quality of life questionnaire, intraoperative, and objective testing data were assessed for impact on the development and severity of GERD after POEM.

Disease-related quality of life measures
All patients were asked to complete validated questionnaires preoperatively and then again at 6 and 12 months postoperatively. The validated questionnaires included the GERD Health-Related Quality of Life (GERD-HRQL) and Eckardt symptom score. The GERD-HRQL consists of 16 questions with scores from 0 to 5, specifically addressing GERD symptoms [7].
The Eckardt score stages the severity of achalasia and consists of 4 questions, each with scores from 0 to 3, for an aggregate score of 0 to 12, assessing weight loss, dysphagia, retrosternal pain, and regurgitation. A total Eckardt score greater than 3 was considered abnormal [8].

Preoperative clinical and objective evaluation
All patients underwent a comprehensive clinical evaluation with a focus on their foregut symptoms and their use of antisecretory medications. They also completed the GERD-HRQL and Eckardt questionnaires. The routine preoperative objective assessment included several tests. A videoesophagram was used to evaluate gross pharyngeal and esophageal motility, delineate the anatomy, and assess for masses, mucosal lesions, hiatal hernia, stricture, esophageal dilation, distal esophageal tapering, or stasis. An esophagogastroduodenoscopy (EGD) assessed esophageal dilation, tortuosity, esophagitis, stasis of liquid or residual food, resistance at the EGJ, and other anatomic considerations such as Hill classification and the presence and size of a hiatal hernia. High-resolution impedance manometry utilized a 4.2-mm ManoScan ESO catheter (Medtronic, Minneapolis, MN) with 36 pressure sensors spaced 1 cm apart to record baseline resting measurements, followed by ten standard swallows of saline separated by at least 20 seconds. Tracings were analyzed using ManoView software (Medtronic, Minneapolis, MN) to assess manometric characteristics of the upper and lower esophageal sphincters (LES), the esophageal body, and bolus clearance. An integrated relaxation pressure greater than 15 mmHg defined impaired LES relaxation, and a resting pressure greater than 45 mmHg defined a hypertensive LES. Diagnoses of achalasia subtypes, EGJOO, DES, and jackhammer esophagus were made per Chicago Classification version 3.0 criteria [9]. Esophageal pH monitoring was done using a Bravo pH capsule (Medtronic, Minneapolis, MN) placed 6 cm above the EGJ during EGD.
Patients taking proton pump inhibitors held their medications for 10 days before pH testing. Abnormal distal esophageal acid exposure was defined as a DeMeester score greater than 14.7 [10,11].

Surgical technique
Patients were placed on a clear liquid diet for at least 24 hours before surgery. Preoperative prophylactic antimicrobial therapy included a single dose of ampicillin-sulbactam and fluconazole within 30 minutes of mucosotomy. The patients were placed in the supine position and general anesthesia was administered. An EGD was performed. The desired length of esophageal myotomy was determined based on the diagnosis, manometric findings, and endoscopic evaluation. The site for the anterior esophageal mucosotomy was identified 2 cm above the proximal extent of the intended myotomy. Orise solution (Boston Scientific, Natick, MA) was injected at the 12-o'clock position to create a submucosal cushion, and a 1.5 to 2 cm mucosotomy was performed using a triangle tip electrosurgical knife. The endoscope was inserted, and a submucosal tunnel was created with a combination of blunt dissection, carbon dioxide insufflation, hydrodissection, and careful use of the triangle tip electrosurgical knife. The tunnel was extended past the EGJ, 2 to 3 cm onto the gastric cardia. A proximal-to-distal circular myotomy was performed, taking care to preserve the longitudinal muscle layers of the esophagus and stomach. Easy passage of the endoscope through the EGJ and retroflexed evaluation of the valve confirmed an adequate myotomy. The submucosal tunnel was then irrigated with gentamycin solution and the mucosal incision was closed using endoscopic Resolution 360 Clips (Boston Scientific, Natick, MA). All patients were evaluated with a water-soluble contrast esophagogram on the first postoperative day. They were then discharged on a clear liquid diet and placed on a 2-week regimen of triple antacid therapy consisting of an H2 receptor antagonist, a proton pump inhibitor, and sucralfate.
Follow-up protocol
Subjective outcomes were evaluated at 2 weeks, 6 weeks, 6 months, 1 year, and then annually after surgery. Patients were maintained on triple acid-reducing therapy for 2 weeks after surgery and then on proton pump inhibitors only until 6 months after surgery. The GERD-HRQL and Eckardt questionnaires were completed while patients were off antisecretory medications at 6 months, 12 months, and then annually after surgery. Objective testing was repeated at 12 months after surgery and annually thereafter in the form of EGD and Bravo pH monitoring while off antisecretory medications.

Outcome and definitions
Favorable outcome after POEM was defined as an Eckardt score of 3 or less after surgery. Subjective GERD after POEM was defined as patient-reported perceived symptoms consistent with GERD. Objective GERD after POEM was defined as either a DeMeester score greater than 14.7 or Los Angeles grade C or D esophagitis. Severe objective GERD was defined as a DeMeester score greater than 50 or Los Angeles grade D esophagitis.

Statistical analysis
Values were expressed as mean ± SD for continuous variables and as frequency and percentage for categorical variables. Univariate logistic analysis was performed for predicting the binary outcomes of subjective, objective, and severe objective GERD with respect to potential preoperative predictors. A multivariable logistic model for predicting each of the 3 outcomes was fitted using stepwise selection that mandated inclusion of any variable that was statistically significant or borderline significant in the univariate analysis. Variables were required to meet significance thresholds of 0.30 and 0.10 to enter and to be retained in the model, respectively. Due to the sample size, Firth's penalized likelihood approach was applied to the univariate and multivariable logistic analyses.
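The outcome definitions above reduce to small decision rules. Both functions below are hypothetical helpers that encode the study's stated thresholds for illustration; they are not part of the study's analysis software.

```python
def eckardt_total(weight_loss, dysphagia, retrosternal_pain, regurgitation):
    """Aggregate Eckardt score: four items each scored 0-3 (total 0-12);
    a post-POEM total of 3 or less defines a favorable outcome."""
    items = (weight_loss, dysphagia, retrosternal_pain, regurgitation)
    assert all(0 <= s <= 3 for s in items), "each item is scored 0-3"
    total = sum(items)
    return total, total <= 3

def classify_gerd(demeester, la_grade):
    """Objective GERD: DeMeester > 14.7 or Los Angeles grade C/D
    esophagitis; severe objective GERD: DeMeester > 50 or grade D.
    la_grade is None or one of 'A'-'D'; demeester may be None if
    pH monitoring was not performed."""
    objective = ((demeester is not None and demeester > 14.7)
                 or la_grade in ("C", "D"))
    severe = ((demeester is not None and demeester > 50.0)
              or la_grade == "D")
    return objective, severe
```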
A statistically significant association between a predictor and an outcome was established if the p-value was less than 0.05 in a Wald chi-square test or the 95% CI of the OR did not cross 1.0. A bar graph was used to visualize the relationship between the predicted probability of a binary outcome and mean LES resting pressure using a logistic model with Firth's penalized likelihood approach. A Kruskal-Wallis test was performed to examine differences in the predicted probability of the outcome among groups defined by mean LES resting pressure. A p-value less than 0.05 was considered statistically significant. All statistical analyses were performed using SAS software (v 9.4; SAS Institute, Cary, NC).

Study population and overall outcomes
A total of 183 patients underwent POEM during the study period. Baseline demographic and clinical characteristics of the study population are shown in Table 1. At a mean ± SD follow-up of 21.7 ± 20.7 months, Eckardt scores improved from 7.2 ± 1.9 to 1.4 ± 1.6 (p < 0.0001), with 171 (93.4%) patients achieving favorable outcome, defined by an Eckardt score of 3 or less. Of the 12 patients with unfavorable outcome, 10 required additional procedures (Heller myotomy and Dor fundoplication in 6 and esophagectomy in 4 patients). Major intraoperative complications were seen in 5 (2.7%) patients and consisted of full-thickness perforation requiring endoscopic clipping in 2 (1.1%) and development of pleural effusions requiring drainage in 3 (1.6%). A total of 40 (21.9%) patients required Veress needle decompression for capnoperitoneum. None of these patients had ventilatory or hemodynamic instability. These intraoperative complications were not associated with postoperative sequelae. A total of 71 (38.8%) patients reported symptoms of GERD after POEM. Of the 183 patients who underwent POEM, a group of 99 patients had routine postoperative objective testing in the form of EGD (n=99) and Bravo pH monitoring (n=60).
Objective GERD was found in 50 (50.5%) of these patients. There were 19 (19.2%) patients who had severe GERD, defined by Los Angeles grade D esophagitis or a DeMeester score greater than 50. Postoperative Eckardt scores for each of the 3 GERD definitions are shown in Table 2. Patients with objective GERD had lower postoperative Eckardt regurgitation and total scores, as well as a higher rate of favorable outcome.

Subjective GERD after POEM
The results of the univariate analysis comparing the preoperative demographic, clinical, and physiologic parameters of patients with symptomatic GERD to those without are shown in Table 3. Patients with symptomatic GERD were more likely to be female and have a higher regurgitation score on their preoperative GERD-HRQL questionnaires. They were also less likely to have a dilated esophagus on the preoperative endoscopy. Multivariable logistic analysis showed that independent predictors of subjective GERD after POEM were female sex and a preoperative GERD-HRQL regurgitation score less than 3 (Table 4).

Objective GERD after POEM
The results of the univariate analysis comparing the preoperative demographic, clinical, and physiologic parameters of patients with objectively proven GERD to those without are shown in Table 5. Of the 50 patients with objectively proven GERD, there were 12 (24.0%) who denied reflux symptoms. Patients with objective GERD had lower preoperative mean LES resting pressures. LES overall length, intraabdominal length, and relaxation pressures had no impact on GERD. Patients with objective GERD were also less likely to have a dilated esophagus on preoperative EGD. The prevalence of objective GERD was similar between men and women. Furthermore, among patients with objectively proven GERD, females were not more likely to report subjective GERD (22 [78.6%] vs 16 [72.7%]; p = 0.7432). The other demographic and clinical parameters and indications for the procedure were similar between groups.
The indication for the procedure had no impact on the degree of esophageal acid exposure after POEM (Fig. 1). Multivariable logistic analysis showed that independent predictors of objective GERD after POEM were a nonhypertensive LES resting pressure on high-resolution impedance manometry and lack of esophageal dilation on endoscopy during preoperative work-up (Table 6). Multivariable analysis also showed that patients with a lower postoperative Eckardt score were more likely to have objective GERD (OR 0.713 [95% CI 0.534 to 0.953]; p = 0.0222).

Severe objective GERD
A subanalysis was performed to assess factors contributing to severe GERD after POEM. The univariate comparison of patients with severe GERD to those with less severe GERD is shown in Table 7. Patients with severe GERD had greater BMIs and were more likely to be obese (BMI >30 kg/m²). They were also more likely to have a nonhypertensive preoperative LES resting pressure and a higher percentage of incomplete bolus clearance. Multivariable logistic analysis showed that the independent predictors of severe objective GERD after POEM were a nonhypertensive LES resting pressure on preoperative high-resolution impedance manometry and obesity (BMI >30 kg/m²) (Table 8).

Impact of the length of myotomy on GERD
The mean length of the overall myotomy in the entire population was 14.0 ± 3.8 cm. The length of esophageal myotomy was 11.7 ± 3.8 cm and the length of extension onto the gastric cardia was 2.3 ± 0.6 cm. The overall length of myotomy or the length of esophageal or gastric myotomy had no impact on the rate of subjective, objective, or severe objective GERD (Fig. 2).

Probability of GERD based on preoperative LES resting pressure
The predicted probability of subjective, objective, and severe objective GERD based on preoperative LES resting pressure is shown in Figure 3.
There was a stepwise decrease in the probability of objective and severe objective GERD for each 10 mmHg increase in resting pressure (p < 0.001 for both analyses). This trend was not observed for subjective GERD.

DISCUSSION
Iatrogenic gastroesophageal reflux has been a significant tradeoff in the surgical management of achalasia since Ernst Heller first described his famous surgery in 1914. 12 Reflux rates as high as 55% to 100% after surgical myotomy prompted the addition of a partial fundoplication to the procedure 50 years later. 13 This addition became standard practice and substantially mitigated the problem of GERD after myotomy. 14 In fact, a prospective randomized trial found that 48% of patients had abnormal distal esophageal acid exposure after laparoscopic myotomy alone, compared with only 9% when a Dor fundoplication was added to the myotomy. 15 However, the advent of the endoscopic approach to myotomy in 2010 brought a resurgence of postoperative GERD, and it remains a problem today. 1,16 We found that 38.8% of patients reported symptoms of GERD after POEM. The rate of objectively proven GERD was even higher at 50.5%. These findings highlight the necessity for thorough preoperative counseling and comprehensive postoperative objective testing and reflux management. Our high rate of GERD after POEM is consistent with reported rates in the literature. The POEM white paper by Stavropoulos et al. 17,18 and a publication by Inoue et al. 19 reported post-POEM GERD prevalence at long-term follow-up to be 20% to 46% and 20%, respectively. Similarly, a prospective cohort study of POEM outcomes reported that 43% of patients had subjective GERD, 60% had endoscopic esophagitis, and 56% had a positive DeMeester score.
4 Furthermore, a meta-analysis of 1,542 POEM patients from 17 studies found that the pooled rates of subjective GERD, endoscopic esophagitis, and abnormal esophageal acid exposure were 19.0%, 29.4%, and 39.0%, respectively. 5 These studies highlight the fact that GERD after POEM affects a substantial proportion of patients, and that rates of objective GERD are often higher than subjective GERD. In our study, objective GERD was more prevalent than subjective GERD, and among those with objective GERD, one in 4 were asymptomatic. A similar discordance between symptoms and objective GERD was observed by Karyampudi et al. 20 They compared patients with objective GERD after POEM to patients with nonachalasia GERD, and found that those with achalasia were less likely to report reflux symptoms. 20 A potential explanation for this contrast is that these achalasia patients have a degree of visceral desensitization that prevents them from perceiving reflux. It is well documented that degeneration of efferent neurons plays a role in the pathogenesis of achalasia; however, circumstantial evidence suggests that esophageal afferent pathways may similarly degenerate in patients with achalasia. 21,22 Rate et al. 23 used esophageal electromyography to measure responses to esophageal balloon distension, electric stimulation, and transcranial magnetic stimulation. They found that patients with achalasia had diminished or absent responses to all 3 types of stimuli, suggesting degeneration of the long-tract afferent neurons. 23 Other studies have attributed the decreased ability to detect reflux to chronic esophageal irritation due to food stasis and fermentation. 24 Some authors have hypothesized that mucosal denervation during submucosal tunneling and myotomy may result in esophageal hyposensitivity after POEM. 
20 Further research is necessary to fully understand this pathophysiological difference between those with and without reflux symptoms despite objective GERD; however, our findings and the results of these studies suggest that symptoms are not a reliable index of pathologic reflux after POEM. Therefore, postoperative objective testing should be obtained regardless of symptoms, and these patients should be closely followed. We found that female patients were more than 3 times as likely to report reflux symptoms after POEM; however, the rate of objective GERD was similar between men and women. Furthermore, among those with objectively proven GERD, the rate of subjective GERD was similar between sexes. These findings suggest that female patients are more likely to perceive esophageal symptoms with a subclinical stimulus. These results are consistent with studies showing that among healthy volunteers undergoing esophageal balloon distention tests, women have a significantly lower distention detection and pain perception threshold. 25 Variable expression of signaling receptors in the esophageal mucosa, such as the transient receptor potential vanilloid subfamily member-1 (TRPV1) receptor, has been linked to differences in visceral sensitivity. This mucosal receptor is more frequently expressed in female patients with nonerosive reflux disease, but less frequently expressed in those with esophagitis. 26,27 The different distributions of TRPV1 receptors in these populations are a likely explanation for the higher rate of subjective GERD among women in our cohort, despite similar rates of objective GERD. The findings of these studies suggest that increased vigilance and objective testing are necessary when following male patients, regardless of symptoms. The LES resting pressure is a key component of the reflux barrier. Dodds et al.
28 studied 12-hour manometry and pH recordings and found that, on average, patients with GERD have less than half the LES resting pressure of healthy volunteers. We found that a mean LES resting pressure less than 45 mmHg on preoperative manometry is an independent predictor for both objective and severe objective GERD after POEM. Additionally, the probability of objective and severe objective GERD increased in a stepwise fashion with each additional 10 mmHg decrease in preoperative resting pressure (Fig. 3). This is a novel finding in the literature on POEM outcomes. 29,30 However, our results are consistent with studies of Heller myotomy without fundoplication. Rice et al. 31 compared outcomes from 61 Heller myotomies without fundoplication to 88 Heller myotomies with Dor fundoplication (HMD) and found that lower preoperative LES resting pressures were a predictor for postoperative GERD only in the group without fundoplication. Based on these findings, achalasia patients with lower preoperative resting pressures should be counseled that they are at high risk for GERD and six and a half times more likely to develop severe GERD. The decision to pursue POEM in these patients should be made with the understanding that they are likely trading dysphagia for GERD. Patients with no GERD symptoms after POEM in our study were more likely to have a dilated esophagus on preoperative endoscopic evaluation. Esophageal dilation in achalasia is an indication of advanced disease, which is more likely to be associated with a profound decrease in sensation. This desensitization may explain the less frequent symptomatic GERD in patients with a dilated esophagus. In contrast, we found a dilated esophagus to be a predictor for less objective and severe objective GERD. This unexpected finding has not been reported in the literature previously. Modeling the esophagus as a cylinder, the height above the LES that a given volume of refluxate will reach falls with the square of the luminal diameter.
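A quick sketch of this geometric argument: for a fixed refluxate volume V in a cylinder of diameter d, the column height is h = V/(π(d/2)²), so doubling the luminal diameter quarters the height the refluxate reaches. The 10 mL volume and the diameters below are purely illustrative, not measurements from this study.

```python
import math

def refluxate_column_height(volume_ml: float, diameter_cm: float) -> float:
    """Height (cm) reached by a refluxate volume in a cylindrical esophagus.

    Since 1 mL = 1 cm^3, h = V / (pi * r^2).
    """
    radius_cm = diameter_cm / 2.0
    return volume_ml / (math.pi * radius_cm ** 2)

# Illustrative values only: 10 mL of refluxate in a normal-caliber (2 cm)
# vs a dilated (4 cm) esophagus.
h_normal = refluxate_column_height(10.0, 2.0)   # ~3.18 cm
h_dilated = refluxate_column_height(10.0, 4.0)  # ~0.80 cm

# Doubling the diameter quarters the column height, so the same volume is
# far less likely to reach a pH sensor positioned well above the EGJ.
print(f"{h_normal:.2f} cm vs {h_dilated:.2f} cm")
```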
Therefore, in patients with a dilated esophagus, a larger volume of refluxate will remain undetected below the pH sensor that is conventionally placed 6 cm above the EGJ. Additionally, less mucosal surface area is exposed to acid, reducing the likelihood of grade C or D esophagitis; however, the determination of endoscopic dilation is subjective and may be operator dependent. Further investigation into the relationship between esophageal caliber and iatrogenic GERD using more objective measurements of esophageal dilation, such as esophagram, is warranted. Patients with achalasia are unlikely to be obese 32 ; however, obesity was found to be an independent predictor for severe objective GERD after POEM in our study. The relationship between BMI and the severity of GERD is well documented. 33 Studies have demonstrated that obesity, defined by a BMI greater than 30 kg/m², is an independent risk factor for developing GERD. 34 Additionally, obesity is also a major risk factor for hiatal hernia, which promotes GERD. 35 Moreover, obese patients have greater intra-abdominal pressures and increased frequency of transient lower esophageal sphincter relaxation (TLESR) in the postprandial period, further exacerbating GERD. 36,37 Failure of the LES to relax in achalasia constitutes an unwavering reflux barrier. After myotomy this barrier is disrupted and the effects of obesity become unmitigated, which explains our finding that obese patients are six and a half times more likely to develop severe GERD. Achalasia patients with a high BMI should be counseled that they are at increased risk of postoperative GERD, and may be better candidates for a procedure that includes an antireflux mechanism and the opportunity for hiatal hernia repair, like HMD. The association between hiatal hernia and GERD is well established, and hiatal hernia repair is a fundamental step in all antireflux surgeries.
The rate of hiatal hernia in this study population was very low, limiting our ability to evaluate the impact of preoperative hernia on the development of GERD after POEM. This low rate reflects our practice's approach: patients with achalasia who are found to have a hiatal hernia are less likely to be considered for the POEM procedure and mainly undergo laparoscopic Heller myotomy with repair of the hiatal hernia. A major advantage of POEM is the ability to tailor the technique (for example, calibrating myotomy length) to the patient's diagnosis and manometric features. Modifications in POEM technique have also been attempted to reduce rates of GERD. However, studies on the effect of the length, depth, and orientation of the myotomy on postoperative GERD report inconsistent results. Previous studies have suggested that the length of myotomy may influence GERD. 38 A meta-analysis of 36 studies comprising 2,373 patients found that studies with the highest rates of esophagitis had significantly longer myotomy lengths. 6 By contrast, we did not find that variations in esophageal, gastric, or total myotomy length had any impact on the development of GERD (Fig. 2). Our results are consistent with the findings of the 10-year follow-up study of the original POEM cohort, which did not find that length of myotomy had any impact on GERD. 39 The length of myotomy onto the gastric body and the division of the sling fibers have a significant impact on dismantling the LES complex. Grimes et al. 38 found that a gastric myotomy length greater than 2.5 cm increased the severity of GERD but not the clinical efficacy of the procedure. However, 96% of the patients in this study underwent a posterior POEM, so their results may not be generalizable to our population of anterior POEMs. 38 Prospective randomized trials are necessary given the contradictory results in the literature.
In a randomized pilot study, anterior and posterior approaches demonstrated similar efficacy, but the posterior approach had a higher incidence of esophagitis. 40 This esophagitis was theorized to be due to disruption of the clasp and sling fibers in the LES complex. Due to the anatomic configuration of these fibers, the posterior approach is more likely to completely cut them both, which may promote GERD after POEM. 41 However, a subsequent anterior-vs-posterior approach, multicenter, blinded, randomized controlled trial was unable to identify any difference in safety, efficacy, or iatrogenic GERD between approaches. 42 We did not evaluate this technical aspect, as our practice is to perform anterior myotomies for all of our POEMs. Another technical variation, proposed by Tanaka and associates, 43 is the identification of 2 penetrating vessels between the circular and oblique muscles of the gastric cardia as a marker for the furthest extent of the myotomy. This modification led to preservation of the oblique muscle and lower rates of endoscopic GERD. 43 Despite previous studies demonstrating some agency over iatrogenic GERD through adjusting surgical technique, in the current study we were unable to determine any impactful technical variations in relation to GERD after endoscopic myotomy. We found no modifiable preoperative or perioperative factors that can reduce the rate of GERD after POEM. However, despite high rates of GERD, we found that POEM is a highly effective and safe procedure. The rate of favorable outcome based on postoperative Eckardt score in this cohort was 93% with a major complication rate of just 0.3%. These results are consistent with a meta-analysis of 2,373 patients by Akintoyle and associates 6 that reported a pooled efficacy of 98%. These findings highlight one of the limitations of relying on Eckardt score alone as a metric in evaluating patients after POEM because it does not take into account postoperative GERD.
In fact, our study demonstrated that patients with lower Eckardt scores were more likely to develop objective GERD, suggesting that the better the sphincter disruption, the higher the risk of GERD. Therefore, careful risk stratification and patient selection are warranted to decide between HMD and POEM. Patients deemed to be better candidates for POEM should be counseled that GERD is a very common and often inevitable consequence of POEM. They may develop asymptomatic GERD, and should be empirically tested and aggressively followed postoperatively for medical GERD management. We acknowledge the limitations of this study, including its retrospective nature and the lack of postoperative objective testing in all patients. It is possible that patients who underwent testing had more severe symptoms, introducing an element of bias, which may have affected the different rates of subjective and objective GERD. However, when we compared postoperative GERD-HRQL total scores from those with subjective GERD who had objective testing with those who did not, no significant difference was found (21.3 ± 19.3 vs 18.0 ± 8.7; p = 0.7087). This finding suggests that even if the potential bias exists, it had little impact on these results. Furthermore, our findings are consistent with publications from other large-volume centers and meta-analyses, which have demonstrated higher rates of objectively proven GERD compared with reported reflux symptoms after POEM. 5,19

CONCLUSIONS
We found that POEM is an effective and safe procedure, but half of patients demonstrate evidence of pathologic GERD on postoperative testing. Furthermore, 1 in 4 patients with objective GERD denied any GERD symptoms, likely due to esophageal desensitization, a common phenomenon in patients with achalasia. We also found that lower preoperative LES resting pressures increase the probability of developing GERD after POEM in a stepwise fashion.
However, we were not able to identify any modifiable preoperative factors that reduce the risk of GERD. In particular, variations in surgical technique had no impact on iatrogenic GERD. Obesity was found to be an independent risk factor for the development of severe objective GERD after POEM. As GERD symptoms are an unreliable marker of abnormal esophageal acid exposure in achalasia patients after POEM, we recommend objective testing in all patients after endoscopic myotomy to identify those who require more aggressive reflux treatment and monitoring.
Altered Right Ventricular Mechanical Properties Are Afterload Dependent in a Rodent Model of Bronchopulmonary Dysplasia

Infants born premature are at increased risk for development of bronchopulmonary dysplasia (BPD), pulmonary hypertension (PH), and ultimately right ventricular (RV) dysfunction, which together carry a high risk of neonatal mortality. However, how the alveolar simplification and abnormal pulmonary microvascular development of BPD affect RV contractile properties is unknown. We used a rat model of BPD to examine the effect of hyperoxia-induced PH on RV contractile properties. We measured in vivo RV pressure as well as passive force, maximum Ca2+-activated force, calcium sensitivity of force (pCa50) and rate of force redevelopment (ktr) in RV skinned trabeculae isolated from hearts of 21- and 35-day-old rats pre-exposed to 21% oxygen (normoxia) or 85% oxygen (hyperoxia) for 14 days after birth. Systolic and diastolic RV pressures were significantly higher at day 21 in hyperoxia-exposed rats compared to normoxia control rats, but normalized by 35 days of age. Passive force, maximum Ca2+-activated force, and calcium sensitivity of force were elevated and cross-bridge cycling kinetics depressed in 21-day-old hyperoxic trabeculae, whereas no differences between normoxic and hyperoxic trabeculae were seen at 35 days. Myofibrillar protein analysis revealed that 21-day-old hyperoxic trabeculae had increased levels of beta-myosin heavy chain (β-MHC), atrial myosin light chain 1 (aMLC1; often referred to as essential light chain), and slow skeletal troponin I (ssTnI) compared to age-matched normoxic trabeculae. On the other hand, 35-day-old normoxic and hyperoxic trabeculae expressed similar levels of α- and β-MHC, ventricular MLC1 and predominantly cTnI.
These results suggest that neonatal exposure to hyperoxia increases RV afterload and affects both the steady-state and dynamic contractile properties of the RV, likely as a result of hyperoxia-induced expression of β-MHC, delayed transition of slow skeletal TnI to cardiac TnI, and expression of atrial MLC1. These hyperoxia-induced changes in contractile properties are reversible and accompany the resolution of PH with further developmental age, underscoring the importance of reducing RV afterload to allow for normalization of RV function in both animal models and humans with BPD.
INTRODUCTION
Infants born prematurely are at increased risk for a number of comorbidities, including the development of chronic lung disease of prematurity, or bronchopulmonary dysplasia (BPD). After preterm birth, these infants generally require resuscitation, and are often supplemented with life-sustaining oxygen therapy for prolonged periods. Exposure to a relatively hyperoxic environment, at a time when infants should still be in utero in a hypoxic environment, has been associated with perturbed development and has long-term consequences (Jobe and Bancalari, 2001). BPD is characterized by fewer and enlarged alveoli, increased lung collagen, blunted proliferation of arterioles, increased vascular tone, decreased vascular surface area, and thickening of arterial walls (Thibeault et al., 2003; Kaarteenaho-Wiik et al., 2004; Berkelhamer et al., 2013). The development of overt pulmonary hypertension (PH) results in increased afterload to the right ventricle (RV), leading to RV hypertrophy (RVH) and ultimately failure, with high infant morbidity and mortality (Bhat et al., 2012).
Rodent models of BPD, characterized by postnatal hyperoxia exposure, recapitulate many of the findings of human disease, including arrested alveolar and vascular development, PH and RV dysfunction (Goss et al., 2015, 2017; Dumas de la Roque et al., 2017; Liang et al., 2017). Recently, a study determined the effects of neonatal hyperoxia exposure on RV function in mice, demonstrating that 14-day-old mice develop RVH and elevated phosphodiesterase 5 (PDE5) expression and activity, and these hyperoxia-induced changes were reversible by day 56 (Heilman et al., 2015). Indeed, a previous report (Joshi et al., 2014) suggests that in later stages of childhood, RV function and pulmonary arterial pressure are not different in children born preterm with existing chronic lung disease compared to term-born children who do not have chronic lung disease. However, there is limited, if any, information regarding the effects of neonatal exposure to hyperoxia on contractile properties and myofibrillar protein expression in the rodent RV. Thus, our aim was to investigate the effects of postnatal hyperoxia exposure on in vivo RV pressure, cellular contractile properties, and myofibrillar protein expression in the RV, using rats pre-exposed to 14 days of postnatal hyperoxia as in Heilman et al. However, we chose 21- and 35-day-old rats based on our preliminary experiments, in which we were unable to isolate usable trabeculae for mechanical measurements from 14-day-old rats and found the mechanical properties of 35-day hyperoxic rats to be similar to those of age-matched normoxic rats. We hypothesized that postnatal hyperoxia exposure in rats would result in altered RV function corresponding with PH.
METHODS AND MATERIALS
Animals
Timed pregnant Sprague-Dawley dams (Envigo, Indianapolis, IN) were allowed to deliver naturally at term in house. Irrespective of sex, the newborn pups were divided into two groups within 12 h of birth: (1) room air (normoxic), and (2) 14-day hyperoxia (hyperoxic). Both groups were housed in standard cages within a 30″ × 20″ × 20″ polypropylene chamber with a clear acrylic door. Oxygen concentration within the hyperoxia chamber was maintained at a fraction of inspired oxygen of 0.85 ± 0.03 using a continuous oxygen sensor, while the normoxia chamber was maintained at 0.21. Dams were rotated between room air and hyperoxia every 24 h to prevent oxygen-induced maternal toxicity. After 14 days, hyperoxic pups were returned to room air. Pups were weaned at 24 days. The UW School of Medicine and Public Health Animal Care and Use Committee approved all procedures involving animal care and handling.

Invasive RV Pressure
RV pressure measurements were performed at the University of Wisconsin Cardiovascular Physiology Core, as previously described (Hacker et al., 2006; Tabima et al., 2010). Briefly, 21- and 35-day-old rats were anesthetized with urethane (1.2 g/kg via intraperitoneal injection), orally intubated, and mechanically ventilated (Harvard Apparatus). The chest cavity was entered through the sternum and the chest wall and lungs were gently retracted to expose the RV. A 1.9F variable segment length admittance pressure catheter (Scisense, London, Ontario, Canada) was introduced into the RV using a 24-gauge needle. The magnitude and phase of the electrical E3 admittance and RV pressure were continuously recorded and analyzed using commercial software (Notocord Systems, Croissy Sur Seine, France).

On the day of an experiment, skinned trabeculae were incubated in relaxing solution for 30 min before cutting them free from the sticks and trimming their ends.
The trimmed trabeculae were then transferred to a stainless steel experimental chamber containing pCa 9.0 solution (Moss et al., 1983). The ends of each trabecula were tied to the arms glued to a motor (model 312B, Aurora Scientific) and a force transducer (model 403, Aurora Scientific), as previously described (Moss et al., 1983). The chamber assembly was then placed on the stage of an inverted microscope (Olympus) fitted with a 40× objective and a CCTV camera (model WV-BL600, Panasonic). Light from a halogen lamp was used to illuminate the skinned preparations. Bitmap images of the preparations were acquired using an AGP 4X/2X graphics card and associated software (ATI Technologies) and were used to assess mean sarcomere length (SL) during the course of each experiment. Changes in force and motor position were sampled (16-bit resolution, DAP5216a, Microstar Laboratories) at 2.0 kHz using SLControl software developed in this laboratory (http://www.slcontrol.com). Data were saved to computer files for later analysis. Passive force, active force-pCa, and ktr-pCa/force relationships were established at a mean SL of ∼2.2 µm as described previously (Olsson et al., 2004; Patel et al., 2012). Briefly, the skinned trabeculae were stretched to a mean SL of ∼2.2 µm and, after measuring length and width, the preparations were transferred first to pre-activating solution, then to Ca2+-activating solution, and finally back to pCa 9.0 solution. Once in Ca2+-activating solution, steady-state force and the apparent rate constant of force redevelopment (ktr) were measured simultaneously using the modified multi-step protocol developed by Brenner and Eisenberg (1986), as described in detail previously (Patel et al., 2001) and illustrated in Figure 1. Briefly, after force reached a steady level in activating solution (pCa 6.2-4.5), the length of the preparation was rapidly reduced by ∼20%, held for ∼20 ms, and then re-stretched back to its original length.
As a result, there was an initial transient increase, followed by a decrease in force (seen as a spike in the force trace) and subsequent slower recovery of force to near the initial steady-state level. k tr reported in the present study is the rate constant of force redevelopment after the spike. The drop in force recorded in solution of pCa 9.0 was considered to be passive force and was therefore subtracted from the drop in total force at each pCa to yield Ca 2+ activated force (P). The protocol was repeated to establish active force-pCa and k tr -pCa/relative active force relationships. After completing mechanical measurements, the trabeculae were detached from the points of attachment, placed in sodium dodecyl sulfate (SDS) sample buffer (8 M urea, 2 M thiourea, 0.05 M Tris pH 6.8, 75 mM DTT, 3% SDS, and 0.01% bromophenol blue) and stored at −80 °C until subsequent protein analysis.

SDS-PAGE Silver Stain Protein Content Analysis
To examine the expression profile of MHC isoforms, samples were prepared using RV free wall isolated from normoxic and hyperoxic treated rats and gels were prepared as described previously (Warren and Greaser, 2003). Briefly, RV free wall was homogenized in relaxing solution and the homogenate washed with fresh relaxing solution. Next, the homogenate was incubated for 30 min in relaxing solution containing 1% Triton-100. After the incubation period, the homogenate was washed three times with relaxing solution and 2.4 mg (wet weight) of homogenate was suspended in 100 µL SDS sample buffer and stored at −80 °C until subsequent protein analysis. A 17 mL solution of resolving gel (6%T and 2%C) was prepared by mixing 8.27 mL water, 1.7 mL 50% glycerol (v/v), 2.55 mL 40% acrylamide (37.5:1 crosslinked with DATD), 4.24 mL 1.5 M Tris, pH 8.8, 170 µL 10% SDS (w/v), 50 µL 10% ammonium persulfate (w/v), and 20 µL TEMED.
The resolving gel was poured into an empty BioRad Criterion cassette and water was added to the top of the resolving gel to form a flat surface. After an hour of polymerization, the gel was stored in the cold room overnight. The next day, the water was drained out and stacking gel (3%T and 1.5%C; 1.15 mL water, 1 mL 50% glycerol (v/v), 1.5 mL 10% acrylamide (5.6:1 cross-linked with DATD), 1.3 mL 0.5 M Tris, pH 6.8, 50 µL 10% SDS (w/v), 30 µL 10% ammonium persulfate (w/v), and 25 µL TEMED) was poured over the resolving gel. A 12-well comb was inserted and the stacking gel was allowed to polymerize for an hour. The gel cassette was then inserted into a BioRad Criterion gel box pre-filled with ice-cold lower running buffer (25.09 mM Tris-base, 19.98 mM glycine, 3.47 mM SDS, and 2 mM 2-mercaptoethanol). The comb was removed, wells were washed with water and the chamber was filled with ice-cold upper running buffer (50.18 mM Tris-base, 39.96 mM glycine, 6.94 mM SDS, and 10 mM 2-mercaptoethanol). The samples were defrosted and 4 µL of sample was added to 36 µL of sample buffer. The diluted sample was heated (95 °C) for 3 min and allowed to cool down before loading 10 µL on to the gel.

FIGURE 1 | Experimental protocol for determining passive force, Ca 2+ -activated force and the rate constant of force redevelopment (k tr ) in a rat skinned right ventricular trabecula. The bottom panel shows the changes in force recorded before, during and after a step change in length (top panel) of a rat skinned right ventricular trabecula. Once active force reached a steady state in pCa 4.5 and 9.0 solutions, muscle length was rapidly slackened by 20%, held at this length for 20 ms and finally re-stretched back to its original length. The Ca 2+ -activated force was determined by subtracting the resting force measured at pCa 9.0 from the total force measured at pCa 4.5. k tr is the apparent rate constant of force redevelopment following re-stretch of the trabecula to its original length.
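As a quick cross-check on the gel recipes above, the following sketch (not part of the original protocol) recomputes the stated %T figures from the listed component volumes, approximating %T as the monomer contributed by the acrylamide stock per total gel volume:

```python
# Sketch (illustrative, not from the paper): verify the %T of the gel recipes
# from their component volumes. %T is approximated here as the acrylamide
# stock concentration (w/v) scaled by its volume fraction in the gel.

def percent_T(stock_pct, stock_ml, total_ml):
    """Final monomer percentage contributed by an acrylamide stock."""
    return stock_pct * stock_ml / total_ml

# Resolving gel: 2.55 mL of 40% acrylamide stock in 17 mL total.
resolving_T = percent_T(40.0, 2.55, 17.0)

# Stacking gel: 1.5 mL of 10% acrylamide stock; the total volume is the sum
# of the listed components (1.15 + 1 + 1.5 + 1.3 + 0.05 + 0.03 + 0.025 mL).
stacking_total = 1.15 + 1.0 + 1.5 + 1.3 + 0.05 + 0.03 + 0.025
stacking_T = percent_T(10.0, 1.5, stacking_total)

print(round(resolving_T, 2), round(stacking_T, 2))  # -> 6.0 2.97
```

Both numbers agree with the stated 6%T resolving and ~3%T stacking gels.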
Electrophoresis was done at 16 mA constant current for 4 h in the cold room. At the end of electrophoresis, the gel was removed from the cassette and silver stained using method described previously (Shevchenko et al., 1996) with following modification (Stelzer et al., 2006). The gels were (a) incubated overnight in fixing solution containing 50% methanol and 10% acetic acid, (b) washed for 20 min (4x ddH 2 O changes) with ddH 2 O, (c) incubated for 1.5 min in 0.01% sodium thiosulfate solution and then rinsed 4x with ddH 2 O, (d) incubated for 20 min in 0.09% silver nitrate solution and then rinsed 4x with ddH 2 O, (e) incubated in developing solution containing 0.0004% sodium thiosulfate, 2% potassium carbonate, and 0.0068% formaldehyde until protein bands were visible and then rinsed 4x with ddH 2 O, (f) incubated for 20 min in destaining solution containing 10% methanol and 10% acetic acid, and (g) finally washed for 30 min (6x ddH 2 O changes) with ddH 2 O. To examine expression profile of myofibrillar proteins, the normoxic and hyperoxic trabeculae stored in SDS sample buffer were electrophoresed using 12% Tris-HCl Precast Criterion gels (BioRad, Hercules, CA). The gel cassette was inserted into BioRad Criterion gel box pre-filled with running buffer (25.09 mM Trisbase, 19.98 mM glycine, 3.47 mM SDS). The comb was removed, wells were washed with water and the chamber was filled with running buffer. The samples were defrosted, sonicated for 15 min and 8 µL of sample was loaded on to the gel. Electrophoresis was done at constant volts (150 V) for 1.5 h at room temperature. At the end of the run, the gels were removed from the cassette and silver stained as described above. Both gels were imaged and protein bands quantified using BioRad Chemi Doc MP Imaging System (BioRad, Hercules, CA). 
Data Analysis and Statistics
Cross-sectional areas of skinned trabeculae were calculated by assuming that the trabeculae were cylindrical and by equating the width, measured from video images of the mounted preparations, to diameter. Each Ca 2+ activated force (P) at pCa between 6.2 and 5.4 was expressed as a fraction of the maximum Ca 2+ activated force (P o ) developed by the same preparation at pCa 4.5, i.e., P/P o . To determine the Ca 2+ sensitivity of isometric force (pCa 50 ), force-pCa data were fitted with the Hill equation: P/P o = [Ca 2+ ] n /(k n + [Ca 2+ ] n ), where n is the slope (Hill coefficient) and k is the Ca 2+ concentration for half-maximal activation (pCa 50 ). k tr was determined by linear transformation of the half-time of force recovery, k tr = −ln(0.5) × (t 1/2 ) −1 , as described previously (Chase et al., 1994; Patel et al., 2001). All data are presented as means ± standard error (SE). Statistical analysis of the RV pressure data was performed using two-way ANOVA (age and group main effects; GraphPad Prism 6; GraphPad Software Inc., San Diego, CA); all other analyses used unpaired t-tests (SigmaPlot 11; Systat Software Inc., San Jose, CA). p-values < 0.05 were taken as indicating significant differences.

RESULTS

Recovery of RV Pressure from Day 21 to Day 35 in Hyperoxia Exposed Rats
A total of 49 rats were exposed to postnatal normoxia, and 40 rats were exposed to postnatal hyperoxia. Sixteen normoxia and hyperoxia exposed rats were used for the determination of RV pressures at 21 and 35 days of age (n = 8 in each group), respectively. There was a significant difference (p < 0.05) in body weight at day 21 (51.4 ± 1.4 vs. 42.4 ± 1.6 g) but no difference in body weight at day 35 (127.0 ± 4.7 vs. 117.1 ± 3.2 g) in normoxia treated vs. hyperoxia treated rats, respectively. The systolic and diastolic RV pressure (Figure 2) was significantly higher in the hyperoxic group at day 21 compared to the normoxic group at the same time point.
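The two transforms described in the Data Analysis section above can be sketched as follows (illustrative values, not the study's fitted data):

```python
import math

# Sketch of the two analysis transforms from the Data Analysis section.

def ktr_from_halftime(t_half_s):
    """k_tr = -ln(0.5) / t_1/2: linear transform of the half-time of
    force redevelopment."""
    return -math.log(0.5) / t_half_s

def hill_relative_force(pca, pca50, n):
    """Hill equation P/Po = [Ca]^n / (k^n + [Ca]^n), written in pCa units
    (pCa = -log10 of the free Ca2+ concentration)."""
    ca = 10.0 ** (-pca)
    k = 10.0 ** (-pca50)
    return ca ** n / (k ** n + ca ** n)

# A half-time of force recovery of 100 ms gives k_tr of about 6.9 s^-1.
print(ktr_from_halftime(0.100))

# By construction, the Hill curve is at half-maximal force when pCa = pCa50.
print(hill_relative_force(5.7, pca50=5.7, n=3.0))  # -> 0.5
```

The `pca50` and `n` values here are placeholders; in the paper they are free parameters of the nonlinear fit to each preparation's force-pCa data.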
This difference between the normoxic and hyperoxic group resolved by day 35, with RV pressure in the hyperoxic group being significantly lower compared to day 21. This suggests a recovery of the hyperoxia-induced PH seen at day 21.

Effects of Hyperoxia Exposure on Passive Force, Maximum Ca 2+ Activated Force, Apparent Cooperativity in Activation of Force (n H ), and Ca 2+ Sensitivity of Force (pCa 50 )
Steady-state mechanical properties were assessed using right ventricular trabeculae isolated from a second cohort of 21-day normoxic (n = 18 trabeculae/15 rats) and hyperoxic (n = 16 trabeculae/12 rats) and 35-day normoxic (n = 15 trabeculae/8 rats) and hyperoxic (n = 9 trabeculae/8 rats) rats (Table 1). At pCa 9.0, passive force generated by 21-day hyperoxic trabeculae was almost twice (p < 0.001) that of age matched normoxic trabeculae, whereas by 35 days hyperoxic trabeculae generated a similar amount of passive force as age matched normoxic trabeculae (Table 1). At pCa 4.5, maximum Ca 2+ activated force generated by 21-day hyperoxic trabeculae was ∼70% greater (p = 0.002) than that of age matched normoxic trabeculae, whereas by 35 days hyperoxic trabeculae were similar to age matched normoxic trabeculae (Table 1). At sub-maximal Ca 2+ (pCa 6.2-5.4), 21-day hyperoxic trabeculae also generated more force than age matched normoxic trabeculae, resulting in a left shift of the sigmoidal force-pCa relationships of hyperoxic trabeculae compared to age matched normoxic trabeculae (Figure 3A). Fitting the force-pCa relationships with the Hill equation yielded significantly higher pCa 50 values (ΔpCa 50 = 0.23; p < 0.001), implying elevated Ca 2+ sensitivity of force, and lower n H values (Δn H = 0.3; p = 0.005), which implies depressed apparent cooperativity in activation of force, for hyperoxic compared to normoxic trabeculae.
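To put the ΔpCa 50 shift in concrete terms, a short back-of-the-envelope computation (not from the paper) converts the 0.23-unit shift into a fold-change in the free Ca 2+ needed for half-maximal activation, using pCa = −log10[Ca 2+ ]:

```python
# Sketch: a leftward force-pCa shift of delta_pCa50 = 0.23 units means the
# normoxic muscle needs 10**0.23-fold more free Ca2+ than the hyperoxic
# muscle to reach half-maximal force.
delta_pca50 = 0.23
fold_change_in_ca = 10.0 ** delta_pca50
print(round(fold_change_in_ca, 2))  # -> 1.7
```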
On the other hand, sub-maximal forces generated by 35-day hyperoxic trabeculae were similar to those measured in age matched normoxic trabeculae, and as a result there was no discernible difference between the sigmoidal force-pCa relationships established in hyperoxic and age matched normoxic trabeculae (Figure 3B). Fitting the force-pCa relationships with the Hill equation yielded similar pCa 50 values and n H values for hyperoxic and age matched normoxic trabeculae. These results suggest that neonatal exposure to hyperoxia has profound early effects on steady-state force production and that these effects wane as the PH resolves.

Effects of Hyperoxia on Rate of Force Redevelopment (k tr )
Irrespective of age, both normoxic and hyperoxic trabeculae exhibited [Ca 2+ ] free (and force)-dependent changes in the rate of force redevelopment (k tr ), confirming earlier results from rat (Wolff et al., 1995; Palmer and Kentish, 1998; Patel et al., 2012) myocardium. That is, increasing the [Ca 2+ ] free from pCa 6.2 to pCa 4.5 elevated the values of k tr from 2.62 ± 0.14 to 10.48 ± 1.08 s −1 and 1.93 ± 0.21 to 8.28 ± 1.02 s −1 in 21-day normoxic and hyperoxic trabeculae, and from 2.61 ± 0.12 to 11.74 ± 0.58 s −1 and 2.79 ± 0.17 to 10.43 ± 0.78 s −1 in 35-day normoxic and hyperoxic trabeculae. To illustrate this, records of force redevelopment at various levels of [Ca 2+ ] free are shown in Figure 4 for 21-day normoxic (Figure 4A) and hyperoxic (Figure 4B) and 35-day normoxic (Figure 4C) and hyperoxic (Figure 4D) trabeculae, where steady-state forces at each pCa were normalized to 1.0 to provide better visualization of variations in the kinetics of force redevelopment. Figure 5 shows the curvilinear k tr -relative force relationships observed in 21-day (Figure 5A) and 35-day (Figure 5B) normoxic and hyperoxic trabeculae. At pCa 4.5, both 21- and 35-day hyperoxic trabeculae redeveloped maximum Ca 2+ activated force at rates similar to age matched normoxic trabeculae.
At sub-maximal free [Ca 2+ ] (pCa 6.2-5.4), 21-day hyperoxic trabeculae redeveloped sub-maximal forces at significantly slower rates than age matched normoxic trabeculae, and as a result the curvilinear k tr -relative force relationships established in hyperoxic trabeculae were to the right of those established in normoxic trabeculae (Figure 5A). On the other hand, 35-day hyperoxic trabeculae redeveloped sub-maximal forces at rates similar to age matched normoxic trabeculae, and as a result there was no discernible difference between the curvilinear k tr -relative force relationships established in hyperoxic and age matched normoxic trabeculae (Figure 5B). These results indicate that neonatal exposure to hyperoxia has profound effects on cross-bridge cycling kinetics and that the effects are reversible in rat myocardium. These reversible effects are coincident with the normalization of RV pressure at 35 days.

Figure 6 shows a typical SDS-PAGE analysis of MHC isoforms (Figures 6A,B) and myofibrillar protein expression (Figures 6C,D) in 21- and 35-day normoxic and hyperoxic myocardium. Figures 6A,B show that 21-day hyperoxic RV expressed 14% less α-MHC and more β-MHC than age matched normoxic RV, whereas 35-day hyperoxic RV expressed similar levels of both α-MHC and β-MHC as age matched normoxic RV. It is also apparent from Figure 6C that both 21- and 35-day hyperoxic trabeculae expressed similar isoforms of key myofibrillar proteins as normoxic trabeculae (cardiac myosin binding protein C (cMyBP-C), actin, troponin T (TnT), tropomyosin (Tm), and ventricular myosin light chains 1 and 2 (vMLC1 and vMLC2)). While both 21- and 35-day normoxic trabeculae were found to express 100% cTnI, an observation consistent with a previous study which reported complete conversion of ssTnI to cTnI by 15 days after birth in rat (Warren et al., 2004), 21-day hyperoxic trabeculae expressed both cTnI (33 ± 2%) and ssTnI (67 ± 2%) (Figure 6D).
With the exception of one hyperoxic trabecula, which expressed both cTnI (31%) and ssTnI (69%), 35-day normoxic and hyperoxic rat trabeculae expressed predominantly cTnI. In addition, 21-day, but not 35-day, hyperoxic trabeculae expressed aMLC1 (15 ± 3%) and vMLC1 (85 ± 3%) (Figure 6D), an observation consistent with PH-induced expression of aMLC1 in neonatal porcine right ventricle. These results suggest that neonatal exposure to hyperoxia elevates expression of β-MHC, disrupts the transition of ssTnI to cTnI, and stimulates expression of aMLC1, and indicate that most of these effects are reversible in rat myocardium.

FIGURE 4 (caption, continued) | The force transients were expressed relative to the peak steady-state force attained after the step change in muscle length. Relative force and k tr values for a given [Ca 2+ ] free are shown in parentheses.

DISCUSSION

The goal of the present study was to use skinned right ventricular trabeculae isolated from hearts of 21- and 35-day old rats to examine the impact of neonatal hyperoxia exposure on contractile properties and protein expression within the RV in a model of BPD. We demonstrated for the first time that neonatal exposure to hyperoxia has profound effects on passive force (increase), maximum Ca 2+ activated force (increase), Ca 2+ sensitivity of force (increase), apparent cooperativity in activation of force (decrease), cross-bridge cycling kinetics (slower), expression of β-MHC (higher) and aMLC1, and the developmental transition of ssTnI to cTnI (delayed). Furthermore, these effects of neonatal exposure to hyperoxia on steady-state force production, cross-bridge cycling kinetics and protein expression normalize as RV afterload and PH improve by 35 days of age. Together these changes result in a hypercontractile RV as a neonatal adaptive response to hyperoxia-induced PH. Chronic lung disease of prematurity, or BPD, is frequently complicated by PH, which results in a significant increase in neonatal morbidity and mortality (Baker et al., 2014).
We chose to use rats at the ages of 21 and 35 days of postnatal life, which correspond with weaning (∼6 months of age) and prepubertal human stages of life, respectively (Sengupta, 2013). A previous study reported that preterm infants with PH and BPD demonstrated a mortality of 38% during a median follow-up of 10.9 months (Khemani et al., 2007). However, the majority of individuals born preterm with chronic lung disease have a normalization of RV function and PA pressure in later childhood despite persistently abnormal lung function (Joshi et al., 2014). Our data demonstrate a similar finding to humans, in which altered RV function and RV pressure in early stages of development normalize with further developmental age. Furthermore, these findings of mechanical disruption of the RV coincident with PH at 21 days of postnatal rat life correspond with a time in life when humans born preterm and diagnosed with BPD and PH are most susceptible to higher mortality rates.

Impact of Hyperoxia on Steady-State Contractile Properties of Right Ventricle
The 21-day hyperoxic trabeculae generated twice as much force as normoxic trabeculae (Table 1), an effect similar to the PH-induced increase in maximum Ca 2+ activated force observed in the adult human (Rain et al., 2013), rat (Kogler et al., 2003), and mouse (unpublished observation). While these results are consistent with the idea that a higher density of thick and thin filaments allows 21-day hyperoxic trabeculae to generate more force, the finding of similar amounts of maximum Ca 2+ activated force in 35-day hyperoxic and normoxic trabeculae is inconsistent with this mechanism. Previous studies reported that myocardium expressing ssTnI generates a maximum Ca 2+ activated force similar to myocardium expressing cTnI (Fentzke et al., 1999; Arteaga et al., 2000; Konhilas et al., 2003; Ford and Chandra, 2012), whereas expression of aMLC1 results in a higher maximum Ca 2+ activated force than expression of vMLC1.
Interestingly, our findings of increased maximum Ca 2+ activated force and aMLC1 in 21-day old hyperoxia exposed rats was associated with PH, which is similar to a report in a porcine model of PH . Taken together, the earlier observation of increased maximum Ca 2+ activated force in 21-day hyperoxic trabeculae and the similar amount of maximum Ca 2+ activated force generated by 35-day hyperoxic trabeculae compared to normoxic trabeculae suggests that expression of aMLC1 is likely to play a prominent role in increasing maximum Ca 2+ activated force in 21-day hyperoxic trabeculae. At sub-maximal Ca 2+ , 21-day hyperoxic trabeculae generated more force than age matched normoxic trabeculae and as a result, the force-pCa relationships established in hyperoxic trabeculae were left-shifted by ∼0.23 pCa units compared to those in normoxic trabeculae (Figure 3A), which corresponds to similar changes found in monocrotaline-induced PH (Kogler et al., 2003). In cardiac muscle, expression of either ssTnI (Fentzke et al., 1999;Arteaga et al., 2000;Konhilas et al., 2003;Ford and Chandra, 2012) or aMLC1 (Morano et al., 1997;Diffee and Nagle, 2003;Diffee, 2004) are known to shift force-pCa relationships to the left. Since 21-day hyperoxic trabeculae expressed both ssTnI and aMLC1, it is difficult to say with certainty that the increased Ca 2+ sensitivity of force in 21-day hyperoxic trabeculae was due to the presence of ssTnI or aMLC1, or both. However, the finding of similar Ca 2+ sensitivities of force in 35-day hyperoxic trabeculae and age-matched normoxic trabeculae (Figure 3B), in which there is no aMLC1 present and predominant expression of cTnI (Figure 6C), suggest that the hyperoxia-induced changes in expression of MLC1/TnI isoforms at 21 days is reversible and may be responsible for the increase in Ca 2+ sensitivity in hyperoxic myocardium at this age. 
Taken together, our findings of increased maximum Ca 2+ activated force, Ca 2+ sensitivity of force and changes in expression of MLC1/TnI isoforms at 21 days of age in our hyperoxia exposed rats are likely an adaptive response to PH. This is highlighted by the findings at 35 days of age, where there were no differences in RV pressure, RV myofibrillar isoform expression or contractile properties.

Impact of Hyperoxia on Dynamic Contractile Properties of Right Ventricle
In normoxic trabeculae, the rate of force redevelopment (k tr ) varied with the level of activating [Ca 2+ ] free (or force), increasing as [Ca 2+ ] free (or force) was elevated from sub-maximal to maximal levels (Figures 4, 5). These Ca 2+ - and force-dependent changes in k tr in normoxic trabeculae are consistent with previous results from rat (Wolff et al., 1995; Palmer and Kentish, 1998; Olsson et al., 2004; Patel et al., 2012), mouse (Edes et al., 2007; Colson et al., 2012; Ford and Chandra, 2012), porcine (Edes et al., 2007) and human (Edes et al., 2007) myocardium. Both 35-day normoxic and hyperoxic trabeculae also exhibited similar Ca 2+ - and force-dependent changes in k tr , suggesting no remaining significant effects of hyperoxia on these relationships. However, 21-day hyperoxic trabeculae redeveloped sub-maximal forces at a slower rate than age-matched normoxic trabeculae (Figure 4). Thus, when k tr values were plotted against force normalized to maximum force, the k tr -force relationships in hyperoxic trabeculae were right-shifted compared to those in normoxic trabeculae (Figure 5A), i.e., at equivalent forces, k tr values were lower in hyperoxic trabeculae. In adult rat myocardium, a decrease in expression of α-MHC, and a concomitant increase in expression of β-MHC, is known to slow the rate of force redevelopment (Fitzsimons et al., 1999; Rundell et al., 2005; Locher et al., 2011) and the rate of relaxation (Fitzsimons et al., 1998).
Thus, the depressed cross-bridge cycling kinetics in 21-day hyperoxic trabeculae may be exclusively or in part due to a decrease in expression of α-MHC and a concomitant increase in expression of β-MHC (Figure 6B). Previous studies have reported that expression of ssTnI has no significant effect on the rate of force redevelopment in skinned preparations (Ford and Chandra, 2012), whereas expression of aMLC1 increases contraction and relaxation time in the whole heart (Morano et al., 1996; Fewell et al., 1998; Abdelaziz et al., 2004). Since aMLC1 and ssTnI are expressed in 21-day hyperoxic myocardium but not in normoxic myocardium, both aMLC1 and ssTnI appear to be responsible for the slower contraction kinetics in hyperoxic myocardium at this age. Alternatively, the ability of hyperoxic trabeculae to generate more force than normoxic trabeculae can be explained in the context of a two-state kinetic model of cross-bridge interaction proposed by Huxley (1957) and modified by Brady (1991). In this model, multiple states of the cross-bridge kinetic scheme are reduced to just two states, i.e., the transition from non-force-generating to force-generating states is described by f app , whereas g app describes the transition from the force-generating state back to the non-force-generating state. Steady isometric force (P) is then equal to N × F × [f app /(f app + g app )], where N is the number of cycling cross-bridges, F is the average force per cross-bridge, and k tr = f app + g app . Thus, the hyperoxia-induced increase in Ca 2+ sensitivity of force in the present study may be due to an increase in N, or F, or the proportion of cross-bridges in the force-generating state as a result of an increase in f app , a decrease in g app , or both.
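The two-state bookkeeping above can be sketched numerically. Under the simplifying assumption that relative steady force approximates f app /(f app + g app ) and k tr = f app + g app , a single (k tr , relative force) pair separates the two apparent rate constants; note this partition is for illustration only, since the paper derives g app from the log-transformed k tr -relative force relationship:

```python
# Sketch of the two-state cross-bridge model (Huxley 1957 / Brady 1991 form
# described in the text). Assumes relative force ~= f_app / (f_app + g_app)
# and k_tr = f_app + g_app, which lets a measured (k_tr, relative force)
# pair be split into the two apparent rate constants.

def split_rate_constants(ktr, relative_force):
    f_app = ktr * relative_force          # attachment (force-generating) rate
    g_app = ktr - f_app                   # detachment rate
    return f_app, g_app

# Illustrative maximal-activation numbers (not the paper's fitted values):
f_app, g_app = split_rate_constants(ktr=10.0, relative_force=0.8)
print(f_app, g_app)  # -> 8.0 2.0
```

In this framing, the lower g app reported for 21-day hyperoxic trabeculae (1.44 vs. 2.17 s −1 ) corresponds to a larger fraction of cross-bridges held in the force-generating state at a given k tr .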
An increase in either the probability of myosin cross-bridge binding to actin (in the case of aMLC1; Schaub et al., 1998) or the binding affinity of TnC for Ca 2+ (in the case of ssTnI) would increase N and facilitate cooperative binding of myosin cross-bridges to actin. The latter would be manifested as a decrease in n H (an index of apparent cooperativity in activation of force; Table 1) of the force-pCa relationship, and a decreased cross-bridge detachment rate (g app ) would be manifested as a decrease in k tr . Interestingly, g app derived from the natural logarithm of the k tr -relative force relationship (data not shown) was lower in 21-day hyperoxic (1.44 s −1 ) than normoxic (2.17 s −1 ) trabeculae. Thus, it appears that the combined effects of β-MHC, ssTnI (increased binding affinity of TnC for Ca 2+ ) and aMLC1 (increased probability of myosin cross-bridge binding to actin) are important for the reduced kinetics in the RV at 21 days of age. Although no previous studies have explored cross-bridge cycling kinetics in a model of BPD, we can infer that the reduced cross-bridge cycling kinetics in 21-day hyperoxia exposed rats are likely due to the adaptive response to pulmonary pressure overload and the subsequent myofibrillar isoform expression changes at 21 days of age in this model.

CONCLUSION

In summary, we found that hyperoxia-induced changes in expression of MHC, TnI, and MLC1 isoforms are reversible upon normalization of RV pressure and are responsible for altering both steady-state force production and cross-bridge cycling kinetics in rat myocardium. Increasing developmental age in this rodent model of BPD is associated with a reversal of PH-induced RV dysfunction, which has also been observed in humans with BPD associated with preterm birth. This work underscores the importance of reducing RV afterload to allow for recovery of RV function in both animal models and humans with BPD.
AUTHOR CONTRIBUTIONS

JP: designed and conducted experiments, analyzed data and wrote the manuscript; GB: analyzed data, wrote and edited the manuscript; RB: conducted experiments and edited the manuscript; KG: contributed to design, wrote and edited the manuscript; KH: conducted experiments; AH: conducted experiments; GD: edited the manuscript; TH: conducted experiments, analyzed data and edited the manuscript; RM: oversaw experiments, wrote and edited the manuscript; ME: contributed to design, oversaw experiments, wrote and edited the manuscript.

FUNDING

This work was supported by funding from the NIH-NHLBI R01 HL115061-03S1 (Eldridge).
Transnasal flow reduction in endovascular treatment for anterior cranial fossa dural arteriovenous fistula

ABSTRACT
Transarterial embolization (TAE) is a useful option for anterior cranial fossa-dural arteriovenous fistula (ACF-dAVF) as endovascular devices have progressed. Liquid agents are usually injected via a microcatheter positioned just proximal to the shunt pouch beyond the ophthalmic artery; however, high blood flow from the internal maxillary artery (IMA) often impedes penetration of embolic materials into the shunt pouch. Therefore, reducing blood flow from the IMA before embolization can increase the success rate. In the present case, to reduce blood flow from branches of the IMA, we inserted surgical gauze infiltrated with xylocaine and epinephrine into the bilateral nasal cavities. Using this method, we achieved curative TAE with minimal damage to the nasal mucosa. Transnasal flow reduction is an easy, effective and minimally invasive method. It should be considered in the endovascular treatment of ACF-dAVF, especially in patients with high blood flow from the IMA.

INTRODUCTION
For anterior cranial fossa-dural arteriovenous fistula (ACF-dAVF), transarterial embolization (TAE) is a useful option as endovascular devices have progressed [1-5]. Liquid embolic agents are usually injected from a microcatheter positioned just proximal to the shunt pouch. However, TAE can be performed only in selected patients with suitable angiographic anatomy, to reduce complications such as vision loss [1]. ACF-dAVFs have two main feeding arteries: the ophthalmic artery (OphA) and the distal internal maxillary artery (IMA) [5]. In TAE via the OphA, penetration of the embolic material into the shunt pouch can be impeded by high blood flow from the distal IMA. We achieved curative TAE by temporarily decreasing the blood flow from the IMA branches by inserting gauze infiltrated with xylocaine and epinephrine into the nasal cavities.
We named this easy and minimally invasive method 'transnasal flow reduction' (TFR).

CASE REPORT
A patient in their 70s with extracranial lymphoma was incidentally found to have an ACF-dAVF via head computed tomography and magnetic resonance angiography. Digital subtraction angiography (DSA) confirmed an ACF-dAVF with multiple feeding branches arising from the bilateral OphAs, distal IMAs and the left middle meningeal artery (MMA), with cortical venous reflux (Borden type III, Cognard type IV) (Fig. 1). At the patient's request, we chose endovascular, rather than surgical, treatment. We injected a 20% N-butyl-2-cyanoacrylate (NBCA)-lipiodol mixture into the fistula through the bilateral ethmoidal arteries and the left MMA after we placed coils at the terminal branch of the right OphA. However, we could not achieve full penetration into the fistulous connections because of pressure secondary to high flow from the IMA branches, which resulted in incomplete obliteration (Fig. 1). Four months later, we repeated TAE, temporarily reducing nasal blood flow by inserting gauze infiltrated with xylocaine and epinephrine into the nasal cavities. After introducing the guiding catheter, an endonasal surgeon inserted X-ray-detectable surgical gauze infiltrated with 1% xylocaine and epinephrine (1:10,000) into the bilateral nasal cavities using a nasal speculum, while paying full attention to avoid damage to the nasal mucosa. We then confirmed under fluoroscopic guidance that the gauzes were placed in appropriate locations in the upper nasal cavity. Immediately after insertion, we were able to confirm decreased blood flow from the IMA using DSA (Fig. 2). After this procedure, we navigated a DeFrictor Nano Catheter (Medico's Hirata, Osaka, Japan) into the terminal branch of the OphA, which was connected to the dorsal nasal artery.
Even though there was still some distance from the tip of the microcatheter to the shunt pouch, the NBCA reached the shunt point and penetrated the venous portion (Fig. 3). Follow-up DSA demonstrated complete obliteration of the ACF-dAVF, and blood flow in the nasal mucosa from the IMA branches recovered normally (Fig. 4).

DISCUSSION
When injecting embolic materials from branches of the OphA, reflux of liquid agents into the central retinal artery or internal carotid artery should be avoided. Although there is no consensus on the optimal embolic material for TAE for ACF-dAVF [1, 3], Onyx (ev3, Irvine, CA, USA) was reported to be associated with a higher risk of complications [1]. Therefore, we decided to use NBCA in our patient. ACF-dAVF has two main feeding arteries: the OphA, with bilateral supply in 93% of patients, and the distal IMA, in 62-66% of patients [5]. If curative embolization via the distal branch of the OphA is intended, the blood supply from the IMA branches is an important factor. In patients with high blood flow from IMA branches through the nasal mucosa, the related pressure might repulse the embolic agent and impede effective injection into the shunt pouch. Therefore, high blood flow in the nasal mucosa must be decreased before TAE. Regarding methods used to reduce blood flow, TAE of the IMA branches using liquid agents or coils is not ideal: liquid agents may damage the nasal mucosa, and coils may fail to decrease blood flow because of the rich vascular network in the nasal mucosa [6, 7]. Therefore, we devised TFR, which worked well, as we expected, without damaging the nasal mucosa. During the endonasal procedure, damage to the nasal mucosa should be avoided because it may induce marked bleeding owing to the high flow from the IMA and to heparinization. Furthermore, an endoscope may be required in patients with deviated nasal septa.
In this report, we described a novel and effective technique to achieve curative TAE for ACF-dAVF with minimal damage to the nasal mucosa. To our knowledge, this is the first report confirming the usefulness of TFR for TAE in patients with ACF-dAVF. This method should be considered in the endovascular treatment of ACF-dAVF, especially in patients with high blood flow from the IMA.
Investigation on risk fields assessment in the longwall working face with single side roof cutting along the gob The number of mines using roof cutting and pressure relief technology to extract deep coal resources is increasing daily. Most of these mines face the risk of high gas emission and spontaneous combustion of residual coal, and the composite disasters caused by these two risks also threaten the safety of mine production. On the basis of a model for the evolution of porosity and permeability under single side roof cutting along the gob, this study examined the locations of gas explosion risk areas, oxidation temperature-rise risk areas, and composite disaster areas under different air supply and gas emission conditions, and summarized the evolution laws of composite disaster risk areas. The results show that the roof rock collapse caused by roof cutting and pressure relief technology reduces the permeability of the porous medium, which significantly reduces the sensitivity of the width of the oxidation temperature-rise zone and the composite disaster area to the air supply. The increase in air supply shifts the composite disaster risk zone toward the deep part of the goaf, while its width remains basically unchanged. The increase in gas emissions suppresses the occurrence of coal spontaneous combustion in the goaf, while also keeping the gas concentration in a large area above the upper limit of gas explosion. The research content enriches the research system of gas and oxygen flow fields in the goaf of a roof cutting face, and has positive significance for the promotion and application of roof cutting technology and the understanding of the secondary disasters it causes.
| INTRODUCTION In central and eastern China, deep coal mining faces confront an increasing threat of gas disasters, rising ground stress, rising ground temperature, and a growing threat of coal spontaneous combustion disasters.[2-5] The composite disaster risk area of the goaf is the overlapping area of the gas explosion risk area and the oxidation temperature-rise zone inside the goaf. Gas explosion risk and coal spontaneous combustion risk coexist in this area, which makes the risk more complex and serious. With this background, Juha et al. 6 used the event tree method to evaluate the influencing factors of coal spontaneous combustion fires in underground spaces. Chu et al. 7 conducted a risk analysis on the air flow distribution, gas distribution effect, and oxidation zone width of the longwall working face. Tang et al. 8 analyzed the relationship between the extraction volume of a high extraction roadway and the air flow leakage of the working face, and studied the distribution law of oxygen and gas in the gob. Yang et al. 9 studied the influence of air supply volume on the composite disaster zone of gas and coal spontaneous combustion in the gob under the "Y + HLDR" ventilation mode. Qin et al. 10 constructed a high-borehole extraction model, evaluated the response characteristics of the drainage negative pressure and the composite disasters of gas and coal spontaneous combustion in the gob of the working face, and proposed a collaborative relationship between the two. Song et al. 11 proposed a Hurst index to evaluate the time trend of oxygen concentration and coal spontaneous combustion, used to predict the occurrence regularity of composite disasters of gas and coal spontaneous combustion.
Xia et al. 12 studied the symbiotic model of gas and coal spontaneous combustion, and analyzed the sensitivity of parameters such as ventilation rate, mining speed of the working face, and inclined length of the working face. Karacan et al. 13 established an evaluation system for the pumping effect of boreholes. Xu et al. 14 proposed a new technology for collaborative drainage of gob gas in high- and low-level roadways, providing a new way for gas utilization in high-gas mines. A key research objective for the gob flow field is to build a model of void fraction and permeability in the gob. Hu et al. 15 studied the variation of void fraction and permeability of waste rock in the gob through particle-flow numerical simulation. Ma et al. 16 established the seepage coefficient and diffusion equation based on the variation law of porous materials. Chen et al. [17][18][19][20] studied the effects of various ventilation methods on the gas flow field and gas migration law in the gob; in addition, in-depth research has been conducted on the communication between ground fractures and gas inflow into the working face from the gob. Tutak et al. 21 compared and analyzed the statistical results of gas concentration indicators in U- and Y-type ventilation working faces, highlighting the advantages of Y-type ventilation. Roof cutting and pressure relief mining has also been studied extensively.[23-27] Gao et al. 28 constructed a gangue compression stability coefficient with the cutting height as the independent variable. The scholars above have studied air flow transport in the goaf and the composite disaster risk area of the goaf under the roof cutting and pressure relief mining mode, but have not combined the two; there is little research on the composite disaster risk area under the roof cutting and pressure relief mining mode.
Under the roof cutting and pressure relief mining mode, the compression characteristics of the locally collapsed gangue on the roof-cutting side change, leading to significant differences in the internal permeability characteristics of the goaf. This in turn makes the composite disaster risk area of the goaf differ from that of traditional mining methods. Specifically, blasting and roof cutting on the machine-roadway side cause the timely collapse of the overlying roof, which fills the space below. Compared with the filling characteristics of the goaf on the non-roof-cutting side, the particle size of the gangue is relatively small under the influence of blasting, so the porous-medium gap formed by the accumulation of gangue blocks is relatively small. The porosity on the air-inlet side of the machine roadway directly affects the migration of air flow in the goaf, changing the distribution characteristics of the "three zones" within the goaf and ultimately shifting the locations of risks such as gas accumulation and coal spontaneous combustion. Based on percolation theory in porous media, a permeability model of the porous medium in the goaf with roof cutting along the gob was compiled. On this basis, the flow patterns of oxygen and gas in the goaf under U-shaped ventilation were studied, and the sensitivity of the goaf flow field to the two important factors of air supply and gas emission was analyzed. Finally, the response law of the composite disaster area of gas explosion and coal spontaneous combustion in the goaf was obtained. The research content enriches the research system of gas and oxygen flow fields in the goaf of a roof cutting face, and has positive significance for the promotion and application of roof cutting technology and the understanding of the secondary disasters it causes.
| Theoretical analysis The movement law of the overlying strata is obtained through three-dimensional discrete element numerical simulation. The response of void fraction to strain rate is obtained by combining the compression behavior of the gangue observed in the laboratory. 29 At the same time, a void fraction and permeability model is built and embedded in a UDF file for the flow field calculation, yielding the distribution laws of oxygen and gas concentration. Gob risk areas are then predicted using the corresponding indicators. | General condition of the working face The Shoushan No. 1 mine in the Pingdingshan mining area is located in the midwestern part of Henan Province. It is one of the typical mining areas with deep mining and high ground temperature in China. As mining deepens, the threat of mine fire disasters is becoming increasingly serious. The geographical location of the mining area is shown in Figure 1. At present, the main mining seam of the Shoushan No. 1 coal mine is the No. 15-17 coal, with an average coal thickness of 5.2 m and an inclination of 4-5°. The residual gas content in the coal seam is 3.1-4.3 m³/t, the residual gas pressure is 0.10-0.22 MPa, the coal dust explosion index is 20.01%, and the spontaneous combustion period is 38 days. The seam is prone to spontaneous combustion, and the ground temperature belongs to the secondary high-temperature zone. Single side (machine roadway) roof cutting and pressure relief technology is applied in the 12110 working face of the Shoushan No. 1 mine to solve the difficult problem of mining-excavation succession. The schematic diagram of the roof cutting engineering of the 12110 working face is shown in Figure 2.
| Displacement characteristics of overlying rock [31-33] The method of arranging survey lines is used to express the subsidence law of the overlying rock, thereby supporting the construction of the void fraction model in the gob. In the simulation study in Figure 3, the roof-cutting design height reaches the sandstone-mudstone interbedding 26 m above the machine roadway along the gob in the 12110 working face. The block parameters and joint parameters of the model are shown in Tables 1 and 2. As shown in Figure 2, single side roof cutting along the gob results in a significant decrease in the void fraction of the gangue accumulation body within a certain distance from the end of the cutting side. From the numerical simulation results in Figure 3, the subsidence curve of the high-level roof strata above the coal seam is extracted, and the strain of the bending deformation of the high-level roof in the inclined direction of the working face is calculated. The calculation method of the strain rate is shown in Equation (1), where ΔH1 is the subsidence displacement of the roof rock stratum, m; Hs is the cutting depth, 26 m; and Hc is the mining height of the coal seam. To embed the strain-rate formula into the UDF, the strain rate is represented by piecewise fitting, as shown in Figure 4: the strain rate within 0-20 m from the non-roof-cutting side along the gob is fitted linearly, giving Equation (2), and the strain rate from 20 m to the roof cutting along the gob is fitted with a quadratic term, giving Equation (3). Figure 4 shows the fitting of the subsidence displacement curve and the strain-rate formula for the high-level roof. (2) Model and grid size: The gob model is 300 × 160 × 30 m, with a grid size of 1 m. The intake and return air tunnels are 4 × 5 m, with a grid size of 0.5 m. The working face is 160 × 5 × 5 m, with a grid size of 0.5 m.
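The piecewise strain-rate fit described above (linear within 0-20 m of the non-roof-cutting side, quadratic beyond) can be sketched as a function suitable for a UDF-style callback. The coefficients below are hypothetical placeholders chosen only so the two branches join continuously at 20 m; they are not the fitted values from the paper.

```python
def strain_rate(y, split=20.0, a1=0.002, a2=1e-4, b2=0.001, c2=-0.02):
    """Piecewise strain-rate profile along the inclined (Y) direction.

    y     : distance from the non-roof-cutting side along the gob (m)
    split : changeover point between the linear and quadratic fits (m)
    a1, a2, b2, c2 are hypothetical placeholders, chosen so the branches
    are continuous at y = split.
    """
    if y <= split:
        return a1 * y                      # linear fit for 0-20 m (Equation 2)
    return a2 * y * y + b2 * y + c2        # quadratic fit beyond 20 m (Equation 3)
```

In a real UDF the two branches would carry the coefficients read off the fitted subsidence curve in Figure 4.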
(3) Boundary conditions: The wind speed at the inlet is set to 1.5 m/s; based on the sectional area, the corresponding air supply volume is 1800 m³/min. The outlet is set as a pressure outlet with a pressure of −2 Pa. (4) Oxygen concentration at the air inlet: The oxygen concentration in the air intake tunnel is set to 23%. The oxygen consumption rate of coal under different oxygen concentrations is quoted here, where v is the oxygen consumption rate (kg). (5) Gas emission of residual coal: The emission rate of residual coal gas differs at different depths in the gob, and its calculation formula is defined by a, the initial intensity of residual-coal gas emission from the gob, m³/min; b, the attenuation coefficient of residual-coal gas emission from the gob, min⁻¹; x, the distance from the location of the residual coal to the working face, m; and v, the average advancing speed of the working face, m/d. The curve of residual-coal gas emission in the gob is shown in Figure 5. (6) Inertial resistance and viscous resistance: A momentum loss source term is added to the original momentum equation for gas seepage through the porous media of the gob; the momentum loss is divided into viscous loss and inertial loss. The Blake-Kozeny formula 34 is used to describe the permeability, viscous resistance coefficient, and inertial resistance coefficient of the gob.
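The emission-rate formula itself is not reproduced in the text above; given the symbols defined there (initial intensity a, attenuation coefficient b, distance x, advance rate v), a form commonly used for residual-coal gas emission is exponential decay, q(x) = a·exp(−b·x/v). The sketch below assumes that form, with purely illustrative parameter values.

```python
import math

def gas_emission_rate(x, a=1.0, b=0.05, v=4.0):
    """Residual-coal gas emission rate at strike distance x (m) from the face.

    a : initial emission intensity (m^3/min)
    b : attenuation coefficient
    v : average advance rate of the working face (m/d)
    Assumes the exponential-decay form q(x) = a * exp(-b * x / v);
    the functional form and the parameter values are assumptions,
    not taken from the paper.
    """
    return a * math.exp(-b * x / v)
```

Under this form the emission decays smoothly with depth into the gob, matching the qualitative shape of the curve in Figure 5.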
where C1 is the viscous resistance coefficient; C2 is the inertial resistance coefficient; DP is the average particle diameter; e is the permeability; and n is the porosity. According to the "O"-ring theory, the gob is divided into three areas — the natural accumulation area, the pressure-bearing crushing and swelling area, and the compacted stable area — which describe the permeability changes in the vertical direction. By cutting the roof along the gob, the gangue within the cutting height fully collapses and fills the space, reshaping the void fraction distribution on the cutting side. As shown in Figure 6, the A-A profile reflects the collapse rule of the gangue in the caving zone on the roof-cutting side: the pressure-bearing crushed expansion zone on the roof-cutting side expands in the Y direction to the air-inlet chute, and the natural accumulation area is almost eliminated. Section B-B in Figure 6 shows the collapse rule of the gangue in the caving zone on the side without roof cutting along the gob: this side clearly presents the classic distribution pattern of the natural accumulation area, the pressure-bearing crushed expansion area, and the compaction stable area.
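The Blake-Kozeny relations referenced above are described only in words here. A sketch using the standard Blake-Kozeny/Ergun forms (the expressions commonly used to fill porous-zone viscous and inertial resistance inputs in Fluent-style solvers — the paper's exact expressions may differ):

```python
def blake_kozeny_coeffs(n, dp):
    """Porous-medium resistance coefficients from porosity n and mean
    particle diameter dp (m), using the standard Blake-Kozeny/Ergun
    relations; this is a sketch, not the paper's exact equations.
    """
    alpha = dp**2 * n**3 / (150.0 * (1.0 - n)**2)  # permeability (m^2)
    c1 = 1.0 / alpha                               # viscous resistance coefficient (1/m^2)
    c2 = 3.5 * (1.0 - n) / (dp * n**3)             # inertial resistance coefficient (1/m)
    return c1, c2
```

Lower porosity and smaller particle diameter both give larger C1 and C2, i.e., more flow resistance — consistent with the denser, finer gangue packing on the roof-cutting side described above.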
Along the x-axis direction, the void fraction of the gob has an exponential relationship with the distance from the working face, 35 satisfying Equation (9), where n_x is the porosity at z = 0 on the floor of the gob, dimensionless; L_s is the depth of the gob, m; and X is the strike distance from a point in the gob to the working face, m, with a value range of [0, L_s]. The variation of porosity along the height (Z) direction is reflected by n′_z; the variation coefficient of porosity along the positive Z-axis is given by Equations (10) and (11). The variation coefficient n′_y of porosity along the Y-axis reflects the change of the gob along the Y direction. Meanwhile, the porosity evolution formula for sandstone is adopted 36 as Equation (12). By introducing Equations (2) and (3) into (12), the porosity variation coefficient deviating from the origin along the Y-axis of the working face conforms to Equation (13). | Setting of key parameters and boundary conditions To explore the impact of different air supply volumes and gas emission volumes on the composite disaster risk zone of coal spontaneous combustion under the roof cutting and pressure relief mining mode, the gas concentration, oxygen concentration, and distribution of the composite disaster risk zone in the gob are explored under air supply volumes of 600, 1200, 1800, 2400, and 3000 m³/min, with corresponding wind speeds of 0.5, 1.0, 1.5, 2.0, and 2.5 m/s. The air supply volume scheme is shown in Table 3. The distribution of gas concentration, oxygen concentration, and the composite disaster risk zone in the gob is likewise explored under 0.1, 0.5, 1.0, 10, and 20 times the actual gas emission amount. The gas emission scheme is shown in Table 4.
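The composed porosity field of Equations (9)-(13) — a strike-direction factor n_x, a height factor n′_z, and an inclined-direction factor n′_y — can be sketched as below. Since the equation bodies are not reproduced in the text, every functional form and constant here is a hypothetical placeholder illustrating only the structure n(x, y, z) = n_x · n′_y · n′_z.

```python
import math

def porosity(x, y, z, n0=0.35, n_inf=0.12, k=0.02, kz=0.05, ky=0.01):
    """Sketch of a 3-D porosity field for the gob, composed as
    n(x, y, z) = n_x(x) * n'_y(y) * n'_z(z), following the structure of
    Equations (9)-(13). All forms and constants are hypothetical.

    x : strike distance from the working face (m)
    """
    n_x = n_inf + (n0 - n_inf) * math.exp(-k * x)  # exponential decay along strike
    n_z = math.exp(-kz * z)                        # compaction with height
    n_y = 1.0 - ky * min(y, 20.0)                  # denser packing toward the roof-cutting side
    return n_x * n_z * n_y
```

A field of this shape, evaluated cell by cell in a UDF, is what feeds the resistance coefficients of the porous-zone momentum source.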
| Field test and simulation verification To ensure the reliability of the numerical simulation, field test data and numerical simulation data for oxygen concentration are compared. A gas concentration sampler is arranged inside the goaf on the side of the air-inlet gateway of working face 12110 and connected to a negative-pressure sampling pump through a sampling pipe. As the working face advances, oxygen concentrations at different depths in the goaf are collected, yielding the oxygen distribution pattern measured on site. The sampling locations on the 12110 working face are shown in Figure 7. From Figure 8, it can be seen that the width of the oxidation zone on the intake-channel side of the goaf is 37.5 m: when the working face has advanced 50 m, the oxygen concentration in the goaf decreases to 18%, and when it has advanced 87.5 m, the oxygen concentration decreases to 8%. The oxygen concentration tested on site is basically consistent with the numerical simulation results, verifying the validity of the simulation parameters. FIGURE 8 Simulation verification. | Analysis of air supply velocity on the risk zones The air supply volume affects the air leakage in the gob and thereby the distribution of the "three zones"; it plays a key role in the spontaneous combustion of coal in the gob. After the numerical simulation is completed, a data monitoring line crossing the gob is set on the XY plane of the model to extract the gas and oxygen concentrations along the line. The position of the monitoring line is shown in Figure 5. | Effect of air supply velocity on the gas concentration zones in the gob The influence of different air supply rates on the gas concentration distribution in the gob under the roof cutting and pressure relief mining mode is shown in Figure 9.
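The field verification above amounts to reading the oxidation-zone bounds off a monotonically decreasing oxygen profile at the 18% and 8% thresholds; a minimal sketch (the sample data in the test mirror the quoted field numbers: 18% at 50 m, 8% at 87.5 m, width 37.5 m):

```python
def oxidation_zone(depths, o2, hi=18.0, lo=8.0):
    """Locate the oxidation (temperature-rise) zone along the strike
    direction: the interval where lo% < O2 < hi% on a monotonically
    decreasing oxygen profile. Returns (start, end, width) in metres.
    """
    start = next(d for d, c in zip(depths, o2) if c < hi)  # first crossing below hi
    end = next(d for d, c in zip(depths, o2) if c < lo)    # first crossing below lo
    return start, end, end - start
```

With finer sampling of the monitoring line, a linear interpolation between the bracketing samples would sharpen the crossing points; the scan above is the simplest consistent reading.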
Figure 9A-E correspond to the gas concentration distributions in the gob at wind speeds of 0.5, 1.0, 1.5, 2.0, and 2.5 m/s, respectively. In the shallow part of the gob, the fresh air flow has a significant impact on the gas, and the gas concentration there is relatively low. As the strike depth of the gob increases, the impact of air leakage gradually decreases, and the gas concentration rises. With the increase of air supply, the low-gas area (gas concentration < 5%) in the gob gradually enlarges and extends toward the deep part of the gob, while the high-gas area (gas concentration > 16%) gradually moves away from the working face. Meanwhile, the gas explosion risk zone (5% < gas concentration < 16%) gradually moves away from the coal mining face. FIGURE 9 Gas explosion areas in the gob under different air supply conditions; A-E refer to wind speeds of 0.5, 1.0, 1.5, 2.0, and 2.5 m/s, respectively. The relationship between gas concentration and gob strike distance under different air supply conditions, extracted from the data monitoring line in the gob, is shown in Figure 10. Along the strike direction of the working face, the change in air supply volume only affects the gas concentration in the shallow part of the gob. Under different ventilation conditions, the gas concentration rises from the shallow to the deep part of the gob; after a certain depth, the impact of air leakage diminishes and the gas concentration increases as a power function. In the deep part of the gob, the gas concentration is only slightly affected by air leakage, reaching a maximum of 100% and no longer changing.
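Each gas-explosion zone boundary reported from the monitoring line can be obtained by thresholding the gas profile at the explosive limits (5% and 16%); a minimal sketch assuming a monotonically increasing profile (the sample data in the test mirror the 0.5 m/s case quoted below: 72-88.5 m, width 16.5 m):

```python
def gas_explosion_zone(depths, ch4, lo=5.0, hi=16.0):
    """Locate the gas-explosion risk zone (lo% < CH4 < hi%) on a
    monotonically increasing gas-concentration profile along the strike.
    Returns (start, end, width) in metres.
    """
    start = next(d for d, c in zip(depths, ch4) if c > lo)  # first crossing above lo
    end = next(d for d, c in zip(depths, ch4) if c > hi)    # first crossing above hi
    return start, end, end - start
```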
For the gas explosion risk zone: at a wind speed of 0.5 m/s, it is located at a depth of 72-88.5 m, with a width of 16.5 m; at 1.0 m/s, at 106.0-127.0 m, with a width of 21 m; at 1.5 m/s, at 124.0-146.0 m, with a width of 22 m; at 2.0 m/s, at 136.0-160.0 m, with a width of 24 m; and at 2.5 m/s, at 144-169 m, with a width of 25.0 m. With the increase of air supply volume, the gas explosion risk zone gradually moves back away from the working face; however, the increase in air supply volume also gradually widens the gas explosion risk zone, increasing the risk of gas disasters in the gob. | Effect of air supply velocity on the oxygen concentration zones in the gob The influence of different air supply volumes on the oxygen concentration distribution in the gob under the roof cutting and pressure relief mining mode is shown in Figure 11. Figure 11A-E correspond to the oxygen concentration distributions in the gob under the five air supply conditions, respectively.
Under the action of fresh air flow, the O2 concentration in the shallow part of the gob is relatively high; as the depth of the gob increases, the O2 concentration gradually decreases. According to the "three zones" division rule for oxygen concentration in the gob, as the air supply volume increases, the heat dissipation zone (O2 concentration > 18%) gradually enlarges and extends into the deep part of the gob, the suffocation zone (O2 concentration < 8%) gradually shrinks and moves away from the working face, and the oxidation temperature-rise zone (8% < O2 concentration < 18%) gradually moves away from the working face with a weak narrowing trend. The O2 concentration along the data monitoring line in the gob is extracted, and the relationship between O2 concentration and strike distance under different air supply conditions is shown in Figure 12. According to Figure 12, when the wind speed is 0.5 m/s, the risk zone is located between 55.0 and 89 m in the gob, with a width of 34 m; at 1.0 m/s, between 91.0 and 124 m, with a width of 33.0 m; at 1.5 m/s, between 109 and 142 m, with a width of 33 m; at 2.0 m/s, between 121.0 and 153 m, with a width of 32 m; and at 2.5 m/s, between 130.0 and 162 m, with a width of 32.0 m. With the increase of air supply volume, the risk zone of coal spontaneous combustion gradually moves toward the rear of the gob, while its width and extent change only slightly. Analyzing the reasons, the dense filling of the gangue on the roof-cutting side significantly reduces the gradient of the air-flow passage gap, so the oxidation temperature-rise zone changes little. The sensitivity of the width of the oxidation temperature-rise zone in the roof-cutting working face to the air flow is relatively low, which is essentially different from the sensitivity of the oxidation temperature-rise zone in a non-roof-cutting face to strong disturbances of the air flow. FIGURE 10 Gas concentration within the strike depth range of the gob. | Effect of air supply velocity on the composite risk zones in the gob The positional relationship between different air supply volumes and the composite disaster risk zone in the gob under the roof cutting and pressure relief mining mode is shown in Figure 13. At a wind speed of 0.5 m/s, the composite disaster risk zone is located between 72 and 88.5 m in the gob, with a width of 16.5 m; at 1.0 m/s, between 106.0 and 124.0 m, with a width of 18 m; at 1.5 m/s, between 124.0 and 142.0 m, with a width of 18.0 m; at 2.0 m/s, between 136.0 and 153.0 m, with a width of 17 m; and at 2.5 m/s, between 144.0 and 162.0 m, with a width of 18.0 m. FIGURE 11 Cloud chart of oxygen concentration zoning in the gob under different air supply volume conditions; A-E refer to wind speeds of 0.5, 1.0, 1.5, 2.0, and 2.5 m/s, respectively. FIGURE 12 Oxygen concentration within the strike depth range of the gob. The specific location and width of the composite disaster risk area are shown in Table 5.
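Identifying the composite disaster risk zone reduces to intersecting the gas-explosion interval with the oxidation temperature-rise interval; a minimal sketch:

```python
def composite_zone(gas_zone, oxid_zone):
    """Composite disaster risk zone = overlap of the gas-explosion
    interval and the oxidation temperature-rise interval (metres).
    Returns (start, end, width), or None if the intervals do not overlap.
    """
    start = max(gas_zone[0], oxid_zone[0])
    end = min(gas_zone[1], oxid_zone[1])
    if end <= start:
        return None  # no overlap: no composite disaster risk zone
    return start, end, end - start
```

For the 1.0 m/s case, intersecting the gas-explosion zone (106.0-127.0 m) with the oxidation zone (91.0-124.0 m) reproduces the reported composite zone of 106.0-124.0 m, width 18 m.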
The increase in air supply volume shifts the composite disaster risk zone toward the deep part of the gob, while the width of the zone remains basically unchanged. It is worth noting that the increase in air supply volume moves the composite disaster risk zone inside the gob away from the working face, avoiding the threat of composite disasters at the working face. To sum up, the broken waste rock produced by roof cutting fills smoothly into the gob, reducing the sensitivity of gas concentration changes in the gob to the air flow; in turn, more air flows toward the working face, avoiding the impact of disaster gases from the gob. | Analysis of gas emission volume on the risk zones The gas emission in the gob affects the phenomenon of coal spontaneous combustion. Research on the composite disaster risk under different gas emission amounts in the roof cutting and pressure relief mining mode provides theoretical guidance for determining the composite disaster risk zone under high mining intensity. FIGURE 13 Composite disaster risk zone of the gob under different air supply volume conditions; A-E refer to wind speeds of 0.5, 1.0, 1.5, 2.0, and 2.5 m/s, respectively. | Effect of gas emission volume on the gas concentration zones in the gob The influence of different gas emission amounts on the gas concentration distribution in the gob under the roof cutting and pressure relief mining mode is shown in Figure 14.
Figure 14A-E correspond to the gas concentration distributions in the gob at gas emissions of 0.2, 1.0, 2.0, 20, and 40 m³/min, respectively. As shown in Figure 14, with increasing strike depth of the gob, the gas concentration gradually increases. With the increase of gas emission, however, the high-gas area expands rapidly and the low-gas area correspondingly shrinks; the increase in gas emission also makes gas accumulation in the upper corner more serious. The gas concentration data are extracted from the data monitoring line in the gob; the relationship between gas concentration and gob strike distance under different gas emission conditions is shown in Figure 15. For the gas explosion risk zone, the increase in gas emission gradually brings the zone closer to the working face and gradually widens it. The multiple increase in gas emission makes the gas concentration increase exponentially after a certain strike depth along the gob. When the gas emission from the gob is 0.2 m³/min, the gas explosion risk zone is located between 116 and 132 m in the middle of the gob, with a width of 16 m. When the emission increases to 2.0 m³/min, the zone moves to 71-87 m in front of the working face, with a width of 16 m. When the emission increases to 20.0 m³/min, the zone moves to 0-40 m in front of the working face, and its width increases to 40 m. When the gas emission increases to 20-40 m³/min, the gas concentration at the return air corner is already within the gas explosion range (5% < gas concentration < 16%), and the working face is exposed to the risk of gas disasters. FIGURE 14 Cloud chart of gas concentration distribution in the gob under different gas emission conditions; A-E refer to gas emissions of 0.2, 1.0, 2.0, 20, and 40 m³/min, respectively.
| Effect of gas emission volume on the oxygen concentration zones in the gob The impact of different gas emissions on the oxygen concentration distribution in the gob under the roof cutting and pressure relief mining mode is shown in Figure 16. Figure 16A-E correspond to the oxygen concentration distributions in the gob at gas emissions of 0.2, 1.0, 2.0, 20, and 40 m³/min, respectively. From the distribution of oxygen concentration in Figure 16, it can be seen that, under the action of fresh air, the oxygen concentration in the shallow part of the goaf is significantly higher than in the deep part, and the oxygen concentration in the lower corner is significantly higher than in the upper corner. Under the five gas emission conditions, the oxygen concentration decreases along the strike direction of the goaf. With the increase of gas emission, there is only a slight change in the position and width of the heat dissipation zone, suffocation zone, and oxidation temperature-rise zone in the goaf. FIGURE 15 Gas concentration distribution law in the direction of gob strike. FIGURE 16 Cloud chart of oxygen concentration zoning in the gob under different gas emission conditions; A-E refer to gas emissions of 0.2, 1.0, 2.0, 20, and 40 m³/min, respectively.
The oxygen concentration data along the gas concentration monitoring line inside the goaf are extracted, as shown in Figure 17. After the depth of the goaf reaches about 50 m, the oxygen concentration begins to decrease rapidly, and it gradually flattens out after 150 m. The oxidation temperature-rise zone of the goaf under the five gas emission conditions is located between 50 and 100 m, with widths of 45.0, 45.0, 38.0, 40.0, and 40.0 m, respectively. As the gas emitted from the goaf increases, the oxidation heating zone gradually moves toward the working face, indicating that the emitted gas becomes the main component of the goaf atmosphere, compressing the storage space of oxygen and pushing the oxidation heating zone toward the working face. | Effect of gas emission volume on the composite disaster risk zones in the gob The positional relationship between different gas emissions and the composite disaster risk zone in the gob under the roof cutting and pressure relief mining mode is shown in Figure 18. Owing to the multiple increase in gas emission, the location of the gas explosion risk zone within the gob changes significantly, moving from the deep part of the gob to the shallow part, whereas the coal spontaneous combustion risk zone — the oxidation heating zone — narrows slightly with the increase while its location changes little. The two risk areas overlap only under the gas emission condition of 2.0 m³/min, where the composite disaster risk zone is located between 71 and 87 m of the gob, with a width of 16.0 m. No composite disaster risk zone forms in the gob under the other gas emission conditions. The specific location and width of the composite disaster risk zone are shown in Table 6.
To sum up, high intensity mining intensifies gas emission in the gob, which further affects the gas concentration and oxygen concentration distribution in the gob.Under the joint action of air leakage and residual coal gas emission from the working face, the gob is extremely prone to the occurrence of a gas explosion limit danger zone.In the case of high gas emission, most areas of the gob are in a good state of self inerting, with a low oxygen concentration, which is conducive to the prevention and control of coal spontaneous combustion in the gob.For composite disaster risk zones, a large amount of gas emission inhibits the occurrence of spontaneous combustion of coal in gob zones.Meanwhile, the gas concentration in a large area of the gob is outside the upper limit of gas explosion, and the dual disasters of coal spontaneous combustion risk and gas risk cannot exist simultaneously in the gob, inhibiting the occurrence of composite disasters.Taking 12110 working face with single side roof cutting and pressure relief in Shoushan No. 
1 Mine of the Pingdingshan mining area as an example, and based on the permeability model of the working face under the roof cutting mode, the situation of composite disaster risk zones within the gob under different air supply and gas emission conditions was studied. The conclusions are as follows:

(1) With the increase of the air supply volume, the gas explosion risk zone gradually shifts to the deep part of the gob. Moreover, the increase in air supply volume gradually increases the width of the gas explosion risk zone and raises the risk of gas disasters in the gob. Meanwhile, the risk zone of coal spontaneous combustion gradually moves toward the depth of the gob, but the width and area of this zone change only slightly.

(2) Due to the dense filling of the gangue on the roof cutting side, the gradient of the air flow passage gap decreases significantly, resulting in a decrease in the sensitivity of the oxidation temperature rise zone of the roof cutting working face to air flow. This feature is essentially different from the sensitivity of the distribution of oxidation temperature rise zones in noncutting working faces to strong disturbances by wind currents. This difference also causes the migration of composite disaster risk zones to be less sensitive to wind currents.

(3) The increase in gas emission makes the gas concentration in the central and deep parts of the gob increase exponentially. The increase in gas emission gradually brings the gas explosion risk zone closer to the working face, and its width gradually increases, affecting the production of the working face. When the gas emission increases to 20-40 m³/min, the gas concentration at the return air corner is already within the gas explosion risk zone, and the working face is exposed to the risk of gas disasters.

(4) With the increase of gas emission, there is no significant change in the distribution of oxygen in the gob, and the relative position of the risk zone of coal spontaneous combustion remains basically unchanged. The composite risk zones overlap only when the gas emission is 2.0 m³/min, located between 125.5 and 140.5 m in the gob, with a width of 15.0 m. The rapid increase in gas emission from the gob has left a large range of gas concentrations above the upper explosion limit, inhibiting the occurrence of composite disasters.

3 | Basic assumptions of the numerical model

Due to the irregular collapse and filling of the overlying layered roof in the gob after fracture, it is necessary to simplify the boundary conditions in the numerical simulation. The coordinates of the two ends of the gas concentration monitoring line are set to (0, 40, 1) and (300, 40, 1), respectively. The position of the detection line and the Fluent model are shown in Figure 5; the parameters of the Fluent model are as follows:

(1) Height of the collapse zone. The height of the collapse zone should be the combination of the mining height and the roof cutting height, because the roof cutting engineering induces roof caving in the gob and artificially modifies the height of the collapse zone; that is, H = 30 m. (… the volume fraction of oxygen at different oxidation times, %; c_b is the volume fraction of residual oxygen, %.)

(5) Gas emission. Here A is the base of the void fraction index variation in the z-direction of the natural accumulation area (dimensionless); B is the base of the void fraction index variation in the z-direction of the pressure-bearing crushing-expansion zone (dimensionless); a₂ is the long-axis length of the ellipse in the pressure-bearing crushing-expansion zone, m; b₂ is the short-axis length of that ellipse, m; and L_i is the inclined length of the working face, m. The three-dimensional spatial distribution of porosity n in the gob can be expressed as the product of n_x, n′_y, and n′_z. The spatial continuous distribution equation of porosity in the porous media of the collapse zone of the gob is:

n′_z = −0.67(0.06 + 0.004z − 0.00007z²) + 0.49, z ≤ 20,
n′_z = −0.67(0.05 + 0.003z) + 0.49, z > 20.

Figure 1 Geographic location of the mining area.
Figure 2 Schematic diagram of the single-side roof cutting working face.
Figure 3 Schematic diagram of the working face space.
Table 1 Physical and mechanical parameters of rock strata.
Figure 6 Schematic diagram of the void fraction zoning model for single-side roof cutting along the gob.
Table 3 Corresponding relationship between air supply volume (m³/min) and inlet wind speed (m/s).
Figure: Schematic diagram of sampling points on the working face.
Table 5 Location and width of risk zones in the gob.
Figure 17 Distribution of oxygen concentration in the direction of the gob strike.
Figure 18 Composite disaster risk zone under different gas emission conditions; A–E refer to gas emissions of 0.2, 1.0, 2.0, 20, and 40 m³/min, respectively.
Table 6 Location and width of gob risk areas.
Table 2 Physical and mechanical parameters of joints.
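The table relating air supply volume to inlet wind speed is just the volumetric flow divided by the inlet cross-section area. A minimal sketch of the conversion, using a hypothetical roadway cross-section (the paper's actual inlet geometry is not reproduced here, so the 12 m² figure is an assumption):

```python
def inlet_velocity(q_m3_per_min, area_m2):
    """Convert an air supply volume (m^3/min) to a mean inlet velocity (m/s)."""
    return q_m3_per_min / 60.0 / area_m2

# Hypothetical 4 m x 3 m roadway cross-section (12 m^2) -- an assumption,
# not the paper's model geometry.
for q in (720, 1080, 1440):
    print(q, "m^3/min ->", inlet_velocity(q, 12.0), "m/s")
```

With these assumed dimensions, 720 m³/min corresponds to a 1.0 m/s inlet velocity; the real mapping depends on the modeled roadway section.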
Unknown genes, Cebelin and Cebelin-like, predominantly expressed in mouse brain

We identified two genes, Cebelin and Cebelin-like, encoding unknown proteins in mice. Cebelin and Cebelin-like consist of 168 and 167 amino acids with putative secreted signal sequences. However, Cebelin and Cebelin-like are cellular proteins, not secreted proteins. Cebelin and Cebelin-like were predominantly expressed in the brain among the major tissues examined. The expression of Cebelin in the brain was predominantly detected in the internal granule layer of the cerebellum.

Introduction

Proteins with putative secreted signal sequences are mostly secreted or membrane proteins. Secreted proteins potentially play crucial roles as extracellular signaling molecules in cell proliferation, differentiation, and function. The identification and characterization of unknown genes encoding secreted proteins potentially provide new insights into morphogenesis, metabolism, and disease (Klee et al., 2004; Kassai et al., 2005; Wakahara et al., 2007; Koike et al., 2007; Miwa et al., 2009; Miyake et al., 2009; Ohta et al., 2015). Additionally, genes expressed by specific cells could become useful markers in developmental biology (Miwa and Era, 2015, 2016, 2018). We identified mouse cDNAs encoding unknown proteins with putative secreted signal sequences but no putative transmembrane domains from GenBank. We termed one of them Cebelin, which is also referred to as Fam163a, as the gene was predominantly expressed in the cerebellum.

Results and discussion

The full-length cDNA was cloned by polymerase chain reaction (PCR) with mouse brain cDNA as a template. Cebelin protein consists of 168 amino acids (AAs) with a putative secreted signal sequence (30 AAs) at its amino terminus but no putative transmembrane domains (GenBank accession code NM_177838) (Fig. 1A). Cebelin is a unique protein with no known functional motifs and no primary-structure similarity to known functional proteins.
Human CEBELIN cDNA was also identified by a homology-based search of GenBank. The AA sequence of human CEBELIN (167 AAs), with a putative secreted signal sequence (30 AAs), was highly similar (~85% AA identity) to that of mouse Cebelin (Fig. 1A). The coding region of Cebelin is divided by a single intron (data not shown). Mouse Cebelin is closely linked to Tor1aip1, Tor1aip2, Tdrd5, and Nphs2 on chromosome 1 at G3. Human CEBELIN is also closely linked to these genes on chromosome 1 at q25.2–25.3, supporting that human CEBELIN is the human ortholog of mouse Cebelin (Fig. 1B). To examine whether Cebelin is a secreted protein, Myc and His6 tag-fused Cebelin was overexpressed in mammalian cells, COS-7 cells. Both the medium and the lysate of the cultured cells were examined by Western blotting using anti-Myc tag antibody. We could detect no bands in the medium or lysate of the control. A band was detected in the lysate but not the medium of the Cebelin-overexpressing cells, indicating that Cebelin is a cellular protein and not a secreted protein (Fig. 2A). This result is discrepant with a previous study (Vasudevan et al., 2009). The observed molecular mass (~25 kDa) was larger than the calculated molecular mass of the recombinant Cebelin protein (~20.5 kDa), indicating that Cebelin protein might be subject to post-translational modification. We also examined the cellular localization of Cebelin in the cells by immunocytochemical analysis using anti-Myc tag antibody. No signals were detected in the control. In contrast, Cebelin was widely detected in the Cebelin-overexpressing cells. Cebelin co-localized most intensely with Mannosidase II, a marker protein for the Golgi apparatus (Moremen and Touster, 1986), indicating that Cebelin was most intensely detected in the Golgi apparatus (Fig. 2B). Thus, Cebelin is a cellular protein with a putative secreted signal sequence.
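The ~85% (and, later, ~43% and ~90%) identity values quoted in this paper are per-position comparisons of aligned amino acid sequences. A minimal sketch of that computation, using short made-up peptides rather than the real Cebelin sequences:

```python
def percent_identity(a, b):
    """Percent of identical residues between two pre-aligned, equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# Toy 10-residue peptides (hypothetical, for illustration only).
print(percent_identity("MKTLLVAGSA", "MKTLLVSGSA"))  # one mismatch -> 90.0
```

Real ortholog comparisons would first align the sequences (gaps included) before counting matches, which is why published identity figures are approximate.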
As hydrophobic segments at the amino termini were reported to potentially function as type II membrane protein signal anchors (Yokoyama-Kobayashi et al., 1999), the putative secreted signal sequence in Cebelin might function as a type II signal anchor. The expression of Cebelin was examined in adult mouse tissues (postnatal day 56, P56) by reverse transcription (RT)-PCR using the specific primers for Cebelin. Although all the tissues examined expressed β-Actin (Tokunaga et al., 1986), the expression of Cebelin was predominantly detected in the brain (Fig. 3A). We also examined the expression of Cebelin in the brain at the respective developmental stages (embryonic day 12.5, E12.5–P56). The expression of Cebelin was more abundantly detected in the postnatal brain than in the embryonic brain (Fig. 3B). The expression of Cebelin was also examined in the adult brain by in situ hybridization using the antisense Cebelin RNA probe. Essentially, we could detect no grains on any sections with the sense probe as a control. (Figure legend fragment: the sections of the brain were counterstained with cresyl violet (a′–r′); scale bars = 5 mm.) In contrast, the expression of Cebelin, shown by black grains, was predominantly detected in the internal granule layer of the cerebellum with the antisense probe (Fig. 3C). However, the expression of Cebelin was not significantly detected in any other region of the brain. Furthermore, we identified a mouse cDNA encoding another unknown protein of 167 AAs (GenBank accession code NM_175427) (Fig. 4). As the protein is significantly similar (~43% AA identity) to Cebelin, we named it Cebelin-like, which is also referred to as Fam163b. Human CEBELIN-LIKE cDNA was also identified. The AA sequence of human CEBELIN-LIKE (166 AAs) was highly similar (~90% identity) to that of mouse Cebelin-like (Fig. 4). Cebelin-like was overexpressed in CHO-S cells in the same way as Cebelin. Both the medium and the lysate of the cultured cells were examined by Western blotting.
The result indicates that Cebelin-like is also a cellular protein, whereas Brorin-like is a secreted protein, as described previously (Miwa et al., 2009) (Fig. 5A). To examine the cellular localization of Cebelin-like in the cells, a green fluorescent protein (GFP)-fused Cebelin-like was overexpressed in COS-7 cells. (Fig. 5B legend fragment: panels (b, f, j, n) and immunocytochemical staining using anti-EEA1 antibody (c, g) or anti-GRP78 antibody (k, o) were merged (d, h, l, p); scale bar = 50 µm.) As a result, the localization of Cebelin-like was similar to that of Cebelin and only partly overlapped with EEA1, a marker protein for the endosome (Mu et al., 1995), or GRP78, a marker for the endoplasmic reticulum (Kozutsumi et al., 1988) (Fig. 5B). The expression of Cebelin-like was examined in the embryonic brains and adult tissues by RT-PCR. The expression profiles of Cebelin-like are also similar to those of Cebelin (Fig. 6). In conclusion, we identified two genes, Cebelin and Cebelin-like, encoding unknown proteins in mice and humans. Both Cebelin and Cebelin-like are cellular proteins, not secreted proteins, and are predominantly expressed in the brain. The present findings indicate that Cebelin and Cebelin-like are unknown genes encoding cellular proteins that potentially play roles in the cerebellum.

Mice

The Animal Research Committee of Kyoto University Graduate School of Pharmaceutical Sciences approved all study protocols. All mice were purchased from Shimizu Laboratory Supplies.

Identification of Cebelin and Cebelin-like in mice and humans

AA sequences predicted from mouse cDNAs of unknown function in nucleotide sequence databases were randomly analyzed using PSORT. The cDNAs encoding putative secreted proteins were identified and cloned into the pGEM-T Easy vector (Promega). We named two of the cDNAs mouse Cebelin and Cebelin-like. (Fig. 6 legend fragment: the expression of Cebelin-like was examined in adult mouse tissues (P56) and in the brain at the respective developmental stages (E12.5–P56) by RT-PCR; β-Actin was a control.) Human
The expected sizes of the Cebelin-like and β-Actin cDNAs are 543 and 408 base pairs, respectively. Fig. S4A and B are full images of the gels. CEBELIN or CEBELIN-LIKE cDNA was also identified in a homology-based search of human cDNA sequences in nucleotide sequence databases with the AA sequence of mouse Cebelin or Cebelin-like.

Forced expression of Cebelin or Cebelin-like cDNA in COS-7 cells and CHO-S cells

The Cebelin or Cebelin-like cDNA, with a DNA fragment encoding a Myc tag and a His6 tag or a GFP at the 3′ terminus of the coding region, was constructed in the pcDNA3.1(+) vector (Thermo Fisher Scientific). COS-7 cells and CHO-S cells were transfected with the respective vectors using Lipofectamine 2000 (Thermo Fisher Scientific) and cultured at 37 °C in a humidified atmosphere of 5% CO2 in air.

Detection of recombinant Cebelin or Cebelin-like protein

For Western blotting, the samples were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) under reducing conditions and transferred onto Hybond-ECL (GE Healthcare). The recombinant proteins were detected using mouse monoclonal anti-Myc tag antibody (Cell Signaling Technology) (1:500) as the primary antibody and HRP-conjugated rabbit anti-mouse IgG antibody (Thermo Fisher Scientific) (1:1,000) as the secondary antibody. Immunoreactive bands were visualized using an enhanced chemiluminescence detection system (PerkinElmer) as described (Yamashita et al., 2002). To detect Cebelin by immunocytochemical analysis, mouse monoclonal anti-Myc tag antibody and FITC-conjugated goat anti-mouse IgG (Sigma-Aldrich) were used as the primary and secondary antibodies, respectively. To detect Mannosidase II, EEA1, and GRP78, rabbit anti-Mannosidase II, anti-EEA1, and anti-GRP78 antibodies (Abcam) and TRITC-conjugated goat anti-rabbit antibody (Sigma-Aldrich) were used as the primary and secondary antibodies, respectively.
RT-PCR

Total RNA was purified with the RNeasy Mini kit (Qiagen) and reverse transcribed using M-MLV Reverse Transcriptase (Thermo Fisher Scientific). The cDNAs were amplified with Gene Taq NT (Nippon Gene) and the specific primers, which are listed in Table 1. DNA fragments were detected by agarose gel electrophoresis.
Impact of the COVID-19 lockdown on physical fitness among college women living in China

Abstract

Purposes: The purpose of this study was to evaluate the effects of the COVID-19 lockdown on physical fitness among college women living in China and to explore how fitness changed with different physical conditions.

Methods: We performed repeated measures of BMI, 800 m running, and sit-up performance on college women from one university in China before and after the COVID-19 lockdown. A total of 3658 college women (age 19.15 ± 1.08 yr) who completed the same assessment before and after the lockdown were included in the analysis. We analyzed the data using one-way ANOVA and paired-samples t-tests.

Results: Following the COVID-19 lockdown, BMI increased significantly by 2.91% (95% CI = 0.33, 0.40), while 800 m running and sit-up performance declined significantly by 7.97% (95% CI = 0.69, 0.77) and 4.91% (95% CI = −0.27, −0.19), respectively. College women in the highest quartile of physical condition (Quartile 4) showed larger declines than those in the lowest quartile (Quartile 1). BMI increased by 3.69% in Quartile 4 and by 0.98% in Quartile 1; 800 m running performance decreased by 9.32% in Quartile 4 and by 7.37% in Quartile 1; and sit-up performance decreased by 13.88% in Quartile 4 while it increased by 10.91% in Quartile 1.

Conclusions: The COVID-19 lockdown might have increased BMI and decreased 800 m running and sit-up performance among college women living in China. The declines for college women in the highest quartile of physical condition (Quartile 4) were more serious, while those for college women in the lowest quartile (Quartile 1) were modest.
Introduction

The World Health Organization (WHO) declared the coronavirus disease 2019 (COVID-19) a global pandemic on 11 March 2020 [1]. To contain the spread of the pandemic, strict lockdown and social distancing measures were implemented across China. Schools and colleges in all provinces closed from mid-January 2020 until mid-June 2020 [2,3]. During the school closure and social isolation, students stayed at home and ceased their school-based exercises and outdoor physical education (PE) courses such as running, jumping, basketball, football, and tennis. Simultaneously, public sports venues, playgrounds, and parks were closed. Although the school closure and confinement measures were necessary to contain the pandemic, they probably limited students' engagement in sufficient physical activity (PA) and exercise, leading to an increase in sedentary behaviour and in health disorders [4]. It is questionable whether, in the lockdown period, college students were led to less participation in physical activity and exercise, which could contribute to significant changes in physical fitness. It is known that physical activity and exercise are gold standards of health and efficient non-pharmacological approaches in many chronic diseases [5]. However, previous studies have shown that nearly 80% of adolescents worldwide were insufficiently active before the pandemic [6]. Meanwhile, more than 77% of adolescents in school failed to meet the WHO physical activity recommendation guideline [7-9]. The COVID-19 lockdown further reduced students' PA levels and adversely affected their physical fitness [10-13]. One study investigated the physical fitness of 264 eighth-grade students living in the United States and found that students increased their body mass index and decreased their physical fitness performance during the COVID-19 pandemic [14].

Moreover, a previous study evaluated the fitness status of 114 high school students living in Croatia and found that their muscular fitness was negatively influenced by the COVID-19 lockdown [15]. Similarly, a decrease in physical fitness performance was also observed in Chinese high school and college students after the COVID-19 outbreak [16,17]. To make matters worse, the reduction in physical activity and fitness during the pandemic would probably increase the risk of anxiety, depression symptoms, and weight gain for students [18-20]. To reduce the negative impact of the COVID-19 lockdown on physical fitness among college students, many colleges offered web-based PE courses for students at home, as conventional outdoor PE courses were unavailable [21,22]. Learning at home gave college students more free time to participate in PA and exercise; however, there were probably enormous disparities in access to opportunities depending on partners, friends, neighbourhood characteristics, and socioeconomic status at home [23,24]. Many students, especially those with low incomes, do not have indoor space or adequate equipment for home-based PE courses or exercises [12]. Nevertheless, few studies [21,22] have evaluated the effectiveness of web-based PE courses in preventing negative effects on physical fitness among college students during the COVID-19 lockdown. There were two main gaps that previous work did not address. To begin with, although previous studies [11,14,15,25] showed that reduced PA levels might adversely affect physical fitness among students during the COVID-19 lockdown, the extent of the lockdown's impact on the physical fitness of college students was still unclear. Besides, it had not been established whether the impact differs for college students with different physical conditions.

In addition, in most previous studies, the effect of the pandemic on physical fitness was evaluated by self-reported fitness levels [26-28] or tested in small samples of a few hundred [14-16]. Few studies assessed the impact based on objectively repeated measures of the same individuals before and after the COVID-19 lockdown among large samples. Thus, this study aimed to assess the impact of the COVID-19 lockdown on physical fitness (BMI, 800 m running, and sit-up performance) among college women living in China. This study also explored the differences among college women in different fitness quartiles. The novelty of this study was the performance of repeated physical fitness tests on a large sample of 3658 college women in 800 m running and sit-up performance before and after the COVID-19 lockdown. It provides statistical evidence contrasting changes in 800 m running and sit-up performance among college women with different baseline quartiles of physical fitness.

Participants

A total of 5121 college women at Tsinghua University participated in the 2019 and 2020 physical fitness testing. Of these, 3658 college women (age 19.16 ± 1.08 yr) both participated in and finished the fitness testing in 2019 (before the COVID-19 lockdown) and 2020 (after the COVID-19 lockdown). The participants were divided into four groups based on their physical condition as reflected by their 800 m running and sit-up performance in 2019. College women who completed both the 800 m running and sit-up testing in 2019 and 2020 were included in this study. Figure 1 shows the analytic sample selection flowchart for the participants in this study.

Procedures

All college women must take part in physical fitness testing every year at Tsinghua University, China, except for special reasons such as disability and illness.
According to the National Student Physical Health Standard (NSPHS) in China [29], the physical fitness assessment for college students contains a BMI assessment for both college men and women, while 800 m running and sit-ups apply to college women only. For college women living in China, BMI, 800 m running, and sit-up performance assessments were the main measures for evaluating physical fitness [17,29,30]. Trained teams performed the 800 m running and sit-up testing before and after the COVID-19 lockdown. The lockdown period for the participants in this study lasted from mid-January 2020 to late August 2020. All of the participants stayed at home during the lockdown and had no access to university-based physical activities. After the lockdown, the participants went back to school and resumed their university-based physical activities. Accordingly, the pre-testing in this study was carried out from 9 September to 10 November 2019, and the post-testing from 19 October to 14 November 2020. The testing was performed from 8:00 am to 12:15 pm and from 1:30 pm to 6:40 pm. The measurements and procedures of the pre- and post-testing were the same. This study was approved by the Tsinghua University ethics review boards (IRB #2012534001). The study was not a retrospective review, and the data were obtained from college students who participated in physical fitness testing at Tsinghua University before and after the COVID-19 lockdown. Before the testing, trained teams told participants that the testing records, except for their personal information, would be used for scientific research. Participants who agreed signed the consent form; all of the participants included in this study did so. We clarify that informed consent was obtained for participation in the study. The information participants received concerned the use of their testing data in the research, including their age, height, weight, 800 m running performance, and sit-up performance.

Height & weight

Height and weight were measured in the fitness assessment room at Tsinghua University. All the college students were barefoot and wore very light clothes. Height was measured as the length from the highest point of the head to the heel without shoes, and weight was measured without shoes and in light clothing. The tester used was the Height & Weight Tester of the Tongfang Health Fitness Testing Products 5000 series (Tongfang Health Technology Co., Ltd., Beijing, China) [31]. This instrument measures height and weight at the same time (height range: 90–210 cm, resolution: 0.1 cm, precision: ±0.1%; weight range: 5–150 kg, resolution: 0.1 kg, precision: ±0.2%).

800 m running

The 800 m running testing was carried out on the Tsinghua University playground. Performance was measured as the time of an 800 m run, recorded in seconds. All of the participants warmed up under the guidance of the trained teams before the testing. During the testing, participants wore a vest containing a timing chip that automatically measured their running time. Running times in the range of 00:00–16:66 (min:sec) were recorded. Each participant performed the 800 m running test only once. The equipment used was the running tester of the Tongfang Health Fitness Testing Products 5000 series (Tongfang Health Technology Co., Ltd., Beijing, China) [32].

Sit-up

The sit-up testing was carried out in the fitness assessment room at Tsinghua University. All of the college women who participated in the testing were asked to warm up under the guidance of the trained teams before the sit-up testing. During the testing, sit-ups were measured as the number of completed repetitions. The number of repetitions performed in one minute was recorded. Participants were asked to lie flat with their knees bent, feet flat on a mat, hands behind their heads, and fingers crossed. Participants were required to elevate their upper trunk until the elbows touched the thighs and to lower their upper trunk until the shoulder blades touched the mat. Each participant was tested only once. Sit-up counts in the range of 0–99 repetitions were recorded and included in this study. The equipment used was the sit-up tester of the Tongfang Health Fitness Testing Products 5000 series (Tongfang Health Technology Co., Ltd., Beijing, China) [33].

Statistical analyses

Descriptive statistics, including means, standard deviations, and percentiles, were used to describe the basic characteristics of the sample. One-way ANOVA was used to compare the baseline characteristics of college women in the four groups (different physical conditions). Two-way ANOVA was used to compare college women's mean fitness levels before and after the COVID-19 lockdown by physical condition and testing time. Paired-samples t-tests were used to assess college women's body mass index (BMI), 800 m running, and sit-up performance before and after the COVID-19 lockdown. We performed the paired t-tests using bootstrapping and stratification, with age and BMI as covariates. Furthermore, we analyzed the effect size (ES) using Cohen's d (small ES, 0.2 ≤ d < 0.5; medium ES, 0.5 ≤ d < 0.8; large ES, d ≥ 0.8). In this study, we also analyzed the differences in fitness change in subgroups of college women with different physical conditions. We performed visual binning on the baseline BMI, 800 m running, and sit-up variables: the 2019 values were grouped into bins and divided into four quartiles (Quartile 1, Quartile 2, Quartile 3, and Quartile 4), each with a width of 25%.
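The paired t statistic and Cohen's d used in this study both reduce to the mean and standard deviation of the per-participant pre/post differences. A minimal stdlib-only sketch on synthetic 800 m times (made-up numbers, not the study data):

```python
import math
from statistics import mean, stdev

def paired_t_and_cohens_d(pre, post):
    """Paired t statistic and Cohen's d (mean difference over SD of differences)."""
    diffs = [b - a for a, b in zip(pre, post)]
    d_mean, d_sd = mean(diffs), stdev(diffs)
    t = d_mean / (d_sd / math.sqrt(len(diffs)))
    return t, d_mean / d_sd

# Synthetic 800 m times (s): each hypothetical participant slower after lockdown.
pre  = [230.0, 245.0, 250.0, 238.0, 260.0]
post = [248.0, 262.0, 270.0, 255.0, 281.0]
t, d = paired_t_and_cohens_d(pre, post)
print("t =", round(t, 2), " d =", round(d, 2))
```

SPSS's bootstrapped paired test adds resampling on top of the same statistic; this sketch shows only the point estimates.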
Quartile 1 comprised participants in the bottom 25% of physical condition (BMI, 800 m running, and sit-up performance at baseline), Quartile 2 the 25–50% group, Quartile 3 the 50–75% group, and Quartile 4 the top 25% group. The statistical software used in this study was IBM SPSS Statistics 28.0 (SPSS, Inc., Chicago, IL, USA), with p < 0.05 set as the statistical significance level. Table 2 shows the descriptive statistics of participants' physical fitness performance in 2019 (before the COVID-19 lockdown) and 2020 (after the COVID-19 lockdown). Based on the participants' different baseline physical conditions, we performed a two-way ANOVA with physical condition (Quartile 1 to Quartile 4) and testing time (pre-COVID-19 and post-COVID-19 testing) as factors to assess their interaction.

The physical fitness testing result

The results showed that the mean BMI of participants in 2019 and 2020 was 24.32 ± 2.29 (Figure 2). The college women's 800 m running performance decreased by 7.97% in total (p < 0.001), and the ES was medium (d = 0.73). In addition, 800 m running performance decreased for college women in Quartile 1, Quartile 2, Quartile 3, and Quartile 4 by 7.37% (p < 0.001), 7.65% (p < 0.001), 7.49% (p < 0.001), and 9.32% (p < 0.001), respectively. The ES for college women in Quartile 2 and Quartile 3 was large (d = 0.90 and d = 0.88, respectively), and the ES for Quartile 1 and Quartile 4 was medium (d = 0.55 and d = 0.77, respectively) (see Table 3, Figure 3). The college women's sit-up performance decreased by 4.91% in total (p < 0.001), and the ES was small (d = −0.23). Furthermore, sit-up performance decreased for college women in Quartile 2, Quartile 3, and Quartile 4 by 5.00%, 10.87%, and 13.88%, respectively.
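The quartile split can be reproduced with a rank-based cut at the 25/50/75% points of the baseline score, after which each quartile's change is the mean post/pre difference. A stdlib-only sketch on made-up sit-up counts (not the study data):

```python
from statistics import mean

def quartile_labels(baseline):
    """Assign Quartile 1..4 by rank in the baseline values (ties broken by order)."""
    order = sorted(range(len(baseline)), key=lambda i: baseline[i])
    n = len(baseline)
    labels = [0] * n
    for rank, i in enumerate(order):
        labels[i] = min(4, rank * 4 // n + 1)
    return labels

def pct_change(pre, post):
    """Percent change of the group mean from pre to post."""
    return 100.0 * (mean(post) - mean(pre)) / mean(pre)

# Made-up sit-up repetitions for 8 hypothetical participants, pre and post.
pre  = [20, 28, 33, 38, 42, 48, 52, 56]
post = [24, 30, 33, 36, 39, 43, 45, 47]
q = quartile_labels(pre)
for k in (1, 2, 3, 4):
    sel = [i for i, lab in enumerate(q) if lab == k]
    print("Quartile", k, round(pct_change([pre[i] for i in sel],
                                          [post[i] for i in sel]), 1))
```

In this toy data the bottom quartile improves while the top quartile drops the most, mirroring the pattern the study reports; the numbers themselves are illustrative only.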
interestingly, the table showed that sit-up performance of college women in Quartile 1 increased by 10.91%. the es for college women in Quartile 3 was medium (d = −0.66) and the es for college women in Quartile 1, Quartile 2 was small (d = 0.33, d=-0.24, respectively). the es for college women in Quartile 4 was large (d = −1.10) (see table 3, Figure 4). Discussion this study evaluated the impact of the cOViD-19 lockdown on the BMi, 800 m running and sit-up performance among college women living in china. it was found that college women's BMi significantly increased while 800 m running and sit-up performance significantly reduced due to the cOViD-19 lockdown. Moreover, this study also found that college women in different physical conditions at baseline were differently affected. the study results showed that college women in Quartile 4, i.e. participants in the highest level of physical condition in the classification, experienced a more significant decline in physical fitness performance. interestingly, we also found that the college women at Quartile 1, i.e. participants in the lowest level of physical condition in the classification, performed better in sit-up testing after the lockdown. this study provided statistical evidence to prove the negative impacts of the cOViD-19 lockdown on the 800 m running and sit-up performances among college women living in china. it was founded that college women's BMi increased by 2.91 kg/m 2 (p<0.001) in total. this finding was consistent with previous studies reflected that college students gained weight and increased their BMi during the cOViD-19 pandemic [34]. additionally, it was founded that college women's 800 m running and sit-up performance decreased by 7.97% (p<0.001) and 4.91% (p<0.001) in total, respectively. this finding was consistent with previous studies showing a significant decline in adolescents' fitness after the cOViD-19 lockdown [14,16,35]. 
A previous study reported that physical inactivity and sedentary behaviour negatively affected the physical fitness of the population during the COVID-19 pandemic [35]. A study investigated 264 adolescents (133 girls) living in the United States and found that the mean level of the participants' sit-up performance decreased by 19.4% (from 22.7 repetitions to 18.3 repetitions) during the pandemic [14]. A recent study reported that the mean level of 800 m running performance among Chinese students decreased by 9.12% (from 226.9 s to 247.6 s) after the COVID-19 lockdown; however, that study was based on a small sample of only 115 girls [16]. Similarly, previous research demonstrated the negative impact of COVID-19 on 1000 m running and pull-up performance among college men living in China [36], but that study did not take differences in college men's physical conditions into consideration. However, this finding was inconsistent with a study showing no significant change in the mean level of sit-up performance of college women pre- and post-COVID-19 lockdown [22]. One possible reason for this inconsistency could be that the testing dates of the two studies were different. The previous research carried out the pre- and post-testing in September 2019 and September 2020, respectively, whereas our study carried out the pre-testing from 9 September to 10 November 2019 and the post-testing from 19 October to 14 November 2020. In addition, university-based PE courses and college women's exercise behaviour might have played a role at the university in our study. We found that the baseline sit-up level of college women in our study was higher than in the previous study (42.41 repetitions vs. 33 repetitions). Even after the lockdown, the sit-up level of college women in our study (39.82 repetitions) remained higher than in the previous study (33 repetitions).
Interestingly, we found that the impact of the COVID-19 lockdown on college women's physical fitness varied by their physical condition at baseline. College women in good physical condition at baseline were more negatively affected by the COVID-19 lockdown. College women in Quartile 4 (the lowest BMI level at baseline) increased their BMI the most, by 3.69%, while those in Quartile 1 (the highest BMI level at baseline) increased their BMI the least, by 0.98%, after the lockdown. Although college women in Quartile 2, Quartile 3, and Quartile 4 showed significant declines in their 800 m running and sit-up performance, college women in Quartile 1 increased their sit-up performance by 10.91% after the lockdown. One possible explanation for this finding is that college women had different physical activity (PA) and exercise participation before the lockdown, because PA and exercise are positively associated with physical fitness [35]. This study showed that before the lockdown, the sit-up performance of college women in Quartile 1, Quartile 2, Quartile 3, and Quartile 4 was 30.05, 40.64, 48.41, and 54.53 repetitions, respectively. Therefore, we could infer that college women in higher physical condition at baseline were more physically active than others before the COVID-19 lockdown, and thus more likely to be exposed to negative influences from the COVID-19 lockdown. Several factors could contribute to the decline in physical fitness of college women living in China. First, school closure and home confinement measures were likely to have a negative impact on most Chinese college students' physical activity behaviour during the COVID-19 lockdown, as reported by previous research [37,38]. Physical activity behaviour is positively linked with physical fitness [35,39], but the COVID-19 lockdown might lead to sedentary behaviour.
Reduced levels of physical activity would also favour the development of several chronic diseases such as obesity, cardiovascular diseases, and immune system diseases [4,40]. Fewer opportunities for physical activity and exercise could contribute to the reduction of college women's physical fitness during the COVID-19 lockdown. Second, another possible reason for the decrease in college women's physical fitness was that university-based PE courses had been replaced with web-based PE courses [22]. Although web-based PE courses had some positive influence on the improvement of students' physical fitness, the limited space, limited equipment, and non-face-to-face classes usually reduced the effectiveness of the course compared to outdoor PE courses [22,41]. Thus, web-based PE courses during the lockdown might have decreased college women's exercise volume and intensity and had a negative effect on their physical fitness. Public health authorities should take the decrease in physical fitness caused by the COVID-19 lockdown seriously. Previous studies showed that 800 m running and sit-up performance are associated with cardiorespiratory and muscular fitness, respectively [14,30]. A decline in cardiorespiratory fitness is likely to increase the risk of cardiovascular disease, stroke, diabetes, cancer, and all-cause mortality [42][43][44][45][46]. Similarly, poor muscular fitness is usually associated with cardiovascular disease, cardiometabolic disease, obesity, poor bone health, and all-cause mortality [43,[47][48][49][50][51][52]. Usually, physically active students are more likely to have better physical fitness [53]. However, over the past decade, the physical fitness level of adolescents has decreased significantly worldwide [54], and the COVID-19 lockdown might lead to an even more difficult situation for the decline of fitness in adolescents around the world [35]. Thus, the COVID-19 lockdown measures have further amplified the value of PA and exercise, which could broadly benefit college women.
More attention should be paid to improving Chinese college women's PA levels and exercise after the pandemic.

Strengths and limitations

The strength of this study is that it performed repeated assessments of BMI, 800 m running, and sit-up performance among college women with a large sample before and after the COVID-19 lockdown. This study used a quasi-experimental study design and classified the subjects according to their physical condition at baseline. However, this study also had some limitations. Firstly, this study assessed college women's BMI, 800 m running, and sit-up performance at one university in China; thus, the results cannot be extended to the entire college population in Chinese universities. Further studies should consider replicating these findings at other universities. Secondly, although this study assessed the subjects by BMI, 800 m running, and sit-up performance, future studies could use more specific measures, such as percentage of body fat, cardiovascular parameters, blood pressure, heart rate, and flexibility, to assess physical performance for more detailed findings.

Conclusions

The COVID-19 lockdown decreased the 800 m running and sit-up performance among college women living in China. The impact of the COVID-19 lockdown on college women's physical fitness varied by their physical condition at baseline. The negative impact appeared serious for college women in the higher quartiles of physical condition at baseline, while being modest for those in the lower quartiles. After the lockdown, public policies are urgently needed to improve the fitness performance of college women living in China.

Data availability statement

The datasets generated and/or analyzed during the current study are not publicly available due to confidentiality reasons, but are available from the corresponding author on reasonable request.
Pandemic effects on social capital in residents and non-residents of Chinese immigrant enclaves in Philadelphia The COVID-19 pandemic’s effect on established Chinese ethnic enclaves, which faced socio-economic disruptions as well as anti-Asian sentiment, is unknown. We compared the pandemic’s effect on social capital among residents and non-residents of Chinese ethnic enclaves in Philadelphia. Despite declines in group participation and citizenship activity (joining with others or speaking with local officials to address a neighborhood problem), the pandemic increased support received from other individuals and cognitive social capital (e.g., neighborhood trust and sense of belonging), with more pronounced changes in enclaves. Our findings provide evidence of both greater vulnerability and resilience in terms of social capital among Chinese immigrants during the pandemic. Understanding the pandemic’s effects on social capital in different neighborhood contexts can underscore communities’ strengths, and ways to improve resilience to future challenges. 
Introduction

The COVID-19 pandemic has had substantial impacts on Chinese communities throughout the US, where social and economic disruptions were compounded by anti-Asian sentiment based on the racialization of COVID-19 as a 'Chinese Virus' (Rogers et al., 2020). In particular, the imposition of public health guidelines for social distancing changed the norms that govern social interactions, which in turn may have influenced access to social capital. Social capital is broadly defined as the resources, such as information, social support, and instrumental assistance, available from reciprocal network connections, which can be used to pursue individual or collective goals (Wang and Ganapati, 2018; Yu et al., 2021). Social capital is often conceptualized as having structural and cognitive components. The structural component refers to observable social interactions, such as the formal groups in which one participates or individual relationships that provide different forms of support. The cognitive component refers to perceptions of trust, reciprocity, norms, and values within the community (Wang and Ganapati, 2018; Ehsan et al., 2019). Social capital is available at different levels of interaction. Ties with those who share one's social identity give rise to bonding social capital; ties with people of a different social identity outside of one's close social networks give rise to bridging social capital; and ties to government or other structures of power or authority provide a basis for linking social capital (Ehsan et al., 2019; De Silva et al., 2006).
Communities with greater social capital have mounted a more effective response to the COVID-19 pandemic (Liu et al., 2022; Mathbor, 2007; Aldrich, 2012), with some evidence of better adherence to social distancing (Borgonovi and Andrieu, 2020; Durante et al., 2020), and lower infection (Bartscher et al., 2020; Wu, 2021) and death rates (Fraser et al., 2020). Whether and how social capital itself has been affected by the pandemic, however, is unclear. Guidelines about social distancing may have disrupted communities' ability to facilitate and organize social capital. On the other hand, the experience of the pandemic may have strengthened social bonds by building a sense of solidarity through shared hardship, as has been observed in studies of communities after natural disasters (Dussaillant and Guzmán Astete, 2015). This stronger sense of solidarity might be more pronounced for individuals of Chinese ethnicity in the US in the wake of anti-Asian violence, and more particularly among residents of ethnic enclaves, where fear of COVID-19 drastically reduced patronage of Chinatown businesses early in the pandemic (Fiorillo, 2020; Carman and Heil, February 14, 2020; Aratani, 2020). Residents of ethnic enclaves are generally also thought to have more social capital, through opportunities to connect with others who share similar social and cultural backgrounds (Becares and Nazroo, 2013). Worth noting, however, is that different types of enclaves may also have different capacities for resilience. In addition to high co-ethnic density, for example, established enclaves, often located in central urban areas, generally also include neighborhood institutions such as culture- and language-specific community organizations, businesses, and churches that contribute to a sense of connection and belonging (Walton, 2016; Wagner et al., 2021). Newly emerging enclaves, the result of movement to neighborhoods outside of urban centers, often have recent increases in co-ethnic density but not the social
and economic structures available in established enclaves (Wagner et al., 2021). This lack of social and cultural institutions in emerging enclaves may mean they are less supportive of some forms of structural social capital, such as group membership and bridging and linking capital.

How the pandemic has affected social capital in Chinese ethnic enclaves, and whether this relationship is different in established vs. emerging enclaves, are unknown. Understanding how the COVID-19 pandemic relates to social capital in different ethnic neighborhood contexts can underscore communities' strengths, as well as methods to improve resilience in future challenges. The current study investigates the impact of the pandemic on social capital among residents and non-residents of established and emerging Chinese ethnic enclaves in Philadelphia. We hypothesized the following (Fig. 1):

1. Enclave residents would have higher social capital at baseline, with the highest in established enclaves.
2. The pandemic would have different effects on different components of social capital; specifically:
   a. because of public health guidelines about social distancing, a decrease in group participation, and
   b. because of shared hardship and a sense of solidarity, increases in support from individual relationships and in cognitive social capital.
3. Because cultural institutions and closer social ties may improve resilience in enclave neighborhoods, the increases in individual support and cognitive social capital would be more pronounced among residents of enclaves, especially among residents of established enclaves.
Study sample

From September 2018 to December 2019, we recruited a convenience sample of n = 520 Chinese immigrant adult men and women into a longitudinal study on neighborhoods and cardiometabolic risk through community organizations, events, businesses, chain referrals, and contacts within the Chinese community in the Philadelphia region. Research staff screened interested participants for eligibility, obtained written consent to participate, oriented participants to study procedures, and scheduled appointments for interviews and data collection visits. Because the study was designed primarily to examine post-migration determinants of cardiometabolic risk trajectories, the sample was limited to healthy individuals who had immigrated as adults. We used a non-probability quota sampling approach to draw approximately equal proportions of the sample from each of 3 neighborhood types: established, emerging, and non-enclave. Established and emerging enclaves were identified through a systematic process using data from the American Community Survey 5-Year Estimates (2014-2018). First, we calculated Location Quotients (LQ) as the ratio of the proportion of Chinese residents in a given census tract to the proportion of Chinese residents in the total population of the Philadelphia metropolitan area (Smaje, 1995). A z-score cutpoint of >2.58 SD (significant at the 0.01 level) (Poulsen et al., 2010) above the mean for the Philadelphia metropolitan area identified four areas (Chipman et al., 2016). We then used local knowledge as well as academic (Sze, 2010; Li et al., 2013) and lay (Patusky and Ceffalio, 2004; Bahadur, 2005) sources.

Level of acculturation was assessed at baseline using an abridged, 11-item version of the General Ethnicity Questionnaire - American version (GEQA) (Tsai et al., 2000), which assesses the respondent's degree of engagement with and acculturation into American culture and activities (e.g., 'I celebrate American holidays', 'At home, I eat American
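The Location Quotient screen described above (a tract's Chinese population share divided by the metro-wide share, with tracts whose z-scored LQ exceeds 2.58 SD flagged) can be sketched as follows. The tract counts are fabricated for illustration and are not American Community Survey data:

```python
# Location Quotient (LQ) enclave screen, as a sketch of the method described
# above. Each tract's share of Chinese residents is divided by the metro-wide
# share; tracts whose z-scored LQ exceeds 2.58 SD above the metro mean are
# flagged as candidate enclaves. All counts below are made up.
from statistics import mean, stdev

tracts = {f"T{i}": (40, 4000) for i in range(11)}   # (Chinese residents, total)
tracts["Chinatown"] = (1200, 3000)                  # one high-density tract

metro_share = sum(c for c, _ in tracts.values()) / sum(t for _, t in tracts.values())
lq = {tid: (c / t) / metro_share for tid, (c, t) in tracts.items()}

mu, sd = mean(lq.values()), stdev(lq.values())
enclaves = [tid for tid, v in lq.items() if (v - mu) / sd > 2.58]
```

In the study, candidate tracts identified this way were then checked against local knowledge and published accounts before classification as established or emerging enclaves.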
food'), with a minimum of 1.0 (least acculturated) and a possible maximum score of 5.0 (most acculturated). The scale demonstrated high internal reliability in the present sample (Cronbach's α = 0.86) and in prior studies (Tseng and Fang, 2014). (De Silva et al., 2006) As above, affirmative responses were summed for a possible range of 0-4. Similar to prior studies (De Silva and Harpham, 2007; Dinesen et al., 2013; Flores et al., 2014), social capital variables were dichotomized depending on their distributions: group membership and citizenship activities were dichotomized as any vs. none; individual support was dichotomized as support from ≥3 vs. <3 individuals; and cognitive capital was dichotomized as a score of 4 vs. <4. Construct validity of SASCAT has been demonstrated among low-income samples internationally (De Silva et al., 2006; Dewitt et al., 2005), and other studies have shown associations of various aspects of social capital measured using SASCAT with life satisfaction (Takahashi et al., 2011), post-traumatic stress disorder following the 2007 earthquake in Peru (Flores et al., 2014), and child nutritional status (De Silva and Harpham, 2007). Internal reliability was not assessed for the group membership, individual support, and citizenship components of SASCAT since they are indexes whose individual items are not necessarily correlated (Streiner, 2003). However, coefficient alpha for the 4-item cognitive social capital scale was 0.76, demonstrating good internal reliability for this construct in our sample.
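As a rough, stdlib-only illustration of two steps just described, Cronbach's alpha for a multi-item scale and the dichotomization rules could be sketched as below; the example response matrix is invented, not SASCAT data:

```python
# Sketch (not the study's analysis code) of Cronbach's alpha and the
# dichotomization rules described above: group membership/citizenship as
# any vs. none, individual support as >=3 vs. <3 sources, cognitive
# capital as a score of 4 vs. <4. Example data are invented.
from statistics import variance

def cronbach_alpha(rows):
    """rows: list of per-respondent item-score lists (respondents x items)."""
    k = len(rows[0])
    item_vars = sum(variance([r[j] for r in rows]) for j in range(k))
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_vars / total_var)

def dichotomize(group_count, support_sources, cognitive_score):
    return {
        "any_group": group_count >= 1,
        "high_support": support_sources >= 3,
        "high_cognitive": cognitive_score == 4,
    }
```

Note that alpha is only meaningful for scales whose items are expected to correlate, which is why the paper reports it for the cognitive scale but not for the index-style components.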
We examined other Census tract-level variables commonly used as indicators of socioeconomic disadvantage (Krieger et al., 1997) as potential confounders. These included the proportion of adults age 25 and older with a college degree; the percent of occupied housing units that were owner-occupied; the percent of adults age 18-64 years living in poverty; and median household income. Additionally, ethnic density was operationalized as the proportion of Census tract residents who were Chinese. To reflect characteristics of the sample during the period of data collection, we used 2016-2020 data from the American Community Survey 5-Year Estimates.

Statistical analysis

Of the 520 participants recruited into the study, two were excluded for missing covariate data, leaving a sample of 518 for this analysis, 417 of whom also completed follow-up interviews. We used analysis-of-variance and Cochran-Mantel-Haenszel test statistics to evaluate unadjusted, bivariate associations of neighborhood type (established, emerging, and non-enclave) with social capital and other covariates. Measures of social capital were: group membership, support from individuals, citizenship, and cognitive social capital; additional analyses considered support from individuals from separate sources in terms of bonding, bridging, and linking capital.

To test Hypothesis 1, we used logistic regression analyses to model baseline associations between neighborhood type and higher vs.
lower social capital, with social capital variables dichotomized as described above. Variables expected a priori to be associated with neighborhood type and/or social capital were included as potential confounders in fully adjusted models. These were age at baseline (years), gender, marital status (married or not), education level (<8 years, 8-11 years, high school graduate, Bachelor's degree or higher), occupational category (blue collar, service, or white collar occupation), length of residence in the US (years), acculturation level (continuous GEQA score), percent of adults in the Census tract with a college degree, median household income of the Census tract, percent of adults in poverty in the Census tract, and percent of homes in the Census tract that were owner-occupied.

To test Hypothesis 2, we ran logistic regression models to quantify the likelihood of having higher social capital during the pandemic as compared to baseline, using Generalized Estimating Equations (GEE) with an exchangeable correlation matrix to account for repeated measures. These models included 935 observations (518 baseline + 417 follow-up observations) and adjusted for the covariates listed above.

Finally, to test Hypothesis 3, we examined effect modification of the association between time (pandemic vs. baseline) and social capital by neighborhood type in the same GEE logistic regression models, but including a time (baseline or pandemic) x neighborhood type interaction term. Interaction p-values < 0.10 were investigated further by modeling change in social capital separately for each neighborhood type.

Results were similar when we extended enclave boundaries to include a ¼-mile buffer, which resulted in an additional 33 participants in established enclaves and an additional 29 in the emerging enclave. The findings presented here represent the original boundaries without the ¼-mile buffer.
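The models above are covariate-adjusted GEE logistic regressions; as a much simpler, dependency-free illustration of what the reported odds ratios express, a crude (unadjusted) OR contrasting pandemic vs. baseline can be computed from a 2x2 table of counts. The counts below are fabricated and ignore the covariate adjustment and repeated-measures correlation that the study's GEE models handle:

```python
# Crude odds ratio from a 2x2 outcome-by-time table. This only illustrates
# the OR scale used in the tables above; the study's estimates come from
# covariate-adjusted GEE models, not this formula.
def odds_ratio(a, b, c, d):
    """OR = (a/b) / (c/d): odds of the outcome during the pandemic
    (a yes, b no) relative to the odds at baseline (c yes, d no)."""
    return (a / b) / (c / d)

# Fabricated counts: 30 of 420 report group membership during the pandemic
# vs. 90 of 430 at baseline -> OR < 1, i.e. a decline.
or_group = odds_ratio(30, 390, 90, 340)
```

An OR below 1 corresponds to the declines reported for group membership and citizenship, and an OR above 1 to the increases in individual support and cognitive social capital.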
Results

Of 518 participants, 128 lived in an established enclave, 171 in the emerging enclave, and 219 in a neighborhood categorized as non-enclave. Mean (SD) age was 52.7 (7.7) years, 34.2 % were male, and 84.4 % were married (Table 2). Most participants had not completed college and were in blue collar or service occupations. Residents across neighborhood types were not significantly different with respect to age, gender, marital status, and level of education. However, non-enclave residents had the highest mean length of US residence and acculturation scores, and emerging enclave residents were the least likely to be white-collar or self-employed. Established enclave residents lived in census tracts that were more ethnically dense, had higher proportions of college-educated adults, and had higher median household income. Emerging enclave residents lived in census tracts with lower proportions of college-educated adults, higher proportions of owner-occupied housing units, and lower median household income.
With respect to social capital at baseline, while 14 % of the sample participated in one group, most people (82 %) did not participate in any. The most commonly reported groups overall were religious groups (8.5 %), community associations or co-ops (7.1 %), and work-related groups or trade unions (5.0 %) (data not shown). Although overall group participation did not differ across neighborhoods, non-enclave residents were more likely to participate in work-related organizations (7.8 %) than either established (3.1 %) or emerging (2.9 %) enclave residents (p = 0.05) (data not shown). On the other hand, most participants (75 %) reported receiving some form of support from individuals, mostly family (72.6 %), friends (48.1 %), and neighbors (39.6 %). Accordingly, participants reported more bonding social capital but markedly less bridging or linking capital; while 75 % reported at least one form of bonding social capital, fewer than 10 % reported at least one form of either bridging or linking capital. Social capital in the form of citizenship was generally low, with 85 % reporting neither form of citizenship activity. In contrast, almost 80 % of the sample reported the maximum score of 4 for cognitive social capital, and no participants reported the minimum score of 0.
In multivariate analyses of pre-pandemic social capital, emerging enclave residents were significantly more likely to report support from other individuals (OR 1.79, 95 % CI 1.09, 2.96), namely in the form of bridging (OR 3.44, 95 % CI 1.39, 8.49) and linking (OR 3.06, 95 % CI 1.00, 9.37) capital, than residents of non-enclaves (Table 3). They were also significantly more likely to report citizenship activities than were non-enclave residents (OR 2.09, 95 % CI 1.02, 4.29). Contrary to expectation, established enclave residents did not report higher levels of any of the forms of social capital compared to emerging enclave residents, although they were marginally significantly more likely to report linking capital compared to residents of non-enclaves (OR 3.02, 95 % CI 0.95, 9.52). Other forms of social capital (group membership, bonding social capital, and cognitive social capital) did not differ significantly across neighborhood types at baseline.

Multivariate analyses including repeated measures indicated a significant decline in group membership; overall, study participants were about half as likely to participate in a group during the pandemic as compared to baseline (OR 0.55, 95 % CI 0.37, 0.81) (Table 4). While the decrease occurred across all neighborhoods, it was most pronounced, and only statistically significant, in established enclaves (Chinatown and South Philadelphia), where group membership declined from 22.3 % to 6.8 % (pandemic vs. baseline OR 0.23, 95 % CI 0.09, 0.58). The most marked declines were for participation in community associations and religious organizations, while membership in work-related groups remained stable (data not shown).
Citizenship activities also declined significantly, with participants only a third as likely to report any citizenship activities during the pandemic compared to baseline (OR 0.34, 95 % CI 0.21, 0.55). The overall decrease was driven by statistically significant decreases among established and non-enclave residents, while it was not significant for residents of emerging enclaves (interaction p = 0.016). Established enclave residents again reported the most pronounced decline, from 14.8 % at baseline to 1.0 % during the pandemic (pandemic vs. baseline OR 0.03, 95 % CI 0.003, 0.25).

In contrast, individual support increased overall (OR 4.33, 95 % CI 3.27, 5.74) and across all neighborhoods. The proportion who reported receiving assistance from 3+ sources increased from 35.5 % at baseline to 69.3 % during the pandemic, with the greatest increases concentrated among the three sources of bonding capital: family, friends, and neighbors (data not shown). The increases in both overall individual support and bonding capital in particular were largest among residents of established enclaves and least pronounced among emerging enclave residents, who had higher levels at baseline (interaction p = 0.006 for overall individual support, interaction p = 0.046 for bonding social capital). Linking social capital as a source of individual support, however, decreased across all neighborhoods (OR 0.10, 95 % CI 0.03, 0.34). This decrease was significant among established and emerging enclave residents but not among non-enclave residents, whose linking social capital at baseline was already very low.
Finally, cognitive social capital increased significantly overall (OR 17.97, 95 % CI 8.24, 39.17). During the pandemic, 98.3 % of participants reported the maximum score of four, compared with 77.6 % at baseline (data not shown). The increase was more pronounced among established and emerging enclave residents (interaction p = 0.011); during the pandemic, all participants residing in established enclaves reported the maximum possible score for this component of social capital.

Discussion

Primary findings of this study are that: (1) emerging enclave residents reported higher pre-pandemic levels of individual support, particularly in the form of bridging and linking capital, and citizenship activities; (2) despite declines in group participation and citizenship activities, the pandemic increased support received from individuals, especially in the form of bonding social capital, and cognitive social capital among Chinese immigrants in all neighborhood types; and (3) despite a more pronounced decrease in group membership, established enclave residents also had more pronounced increases in individual support (mainly in the form of bonding social capital) and cognitive social capital.
We expected a greater level of social capital at baseline and greater resilience in social capital with the pandemic in established enclaves, which provide both the social structures to connect with people who share cultural heritage, values, and norms, as well as physical structures such as churches and community centers to facilitate such interactions. Consistent with this expectation was the observation that non-enclave residents were the least likely to report bridging or linking social capital and citizenship activities at baseline. However, contrary to expectation, although established enclave residents had the most pronounced increase in individual support during the pandemic, they did not report significantly higher levels of social capital at baseline. Moreover, a reliance on neighborhood-based meeting spaces for community and religious groups might have made them more vulnerable to restrictions on social distancing, as they showed the greatest declines in group membership. Instead, emerging enclave residents reported higher levels of individual support, mainly because they were more likely to report support from sources of bridging and linking capital, compared to residents of both established enclaves and non-enclave neighborhoods. They also seemed to show greater resilience during the pandemic overall, demonstrating significant increases in individual support and cognitive social capital.
As expected from the pandemic's social disruptions and severely restricted opportunities for group activities, the pandemic related simultaneously to a decrease in group participation-based social capital and to increases in individual support and cognitive social capital. In studies of the effect of the pandemic among youths in China, social capital remained stable for most participants, but the changes that occurred were generally consistent with the current findings: a decrease in participation in community organizations, and an increase in living with, having a good relationship with, and receiving support from family (Yu et al., 2021). Similarly, during the pandemic, older adults in Japan participated less in groups while their social cohesion increased (Sato et al., 2022). In our study, although the increase in individual support was somewhat more pronounced in established enclaves, the fact that it increased across all neighborhood types suggests that enclaves did not uniquely facilitate this form of social capital.

The lack of difference in cognitive social capital at baseline, which was high overall, was surprising, since we expected better social cohesion and trust in enclaves, where shared identity should play a role. A mixed-methods study conducted among ethnic minority communities in England (Becares and Nazroo, 2013) also suggests that the association of enclave neighborhoods with social capital is complex. In that study, the association differed by ethnic group and held only when social capital was aggregated as an area-level measure. Whereas Indian participants referred to amenities such as temples, community centers, and social networks, Caribbean participants, who were also more likely than Indian participants to live in the most deprived areas, did not remark on these positive attributes, suggesting that neighborhood impacts on social capital depend on the context and position of the immigrant community in wider society.
On the other hand, our finding regarding the increase in cognitive social capital with the pandemic is consistent with studies conducted after earthquakes in Japan, Pakistan, and Chile, which have generally shown an increase in cognitive social capital following these disasters, particularly in places where pre-disaster levels of social capital were higher (Dussaillant and Guzmán Astete, 2015). Disasters may strengthen social bonds by building a sense of solidarity and common identity through shared hardship, creating opportunities for collective activity, and increasing trust of strangers (Dussaillant and Guzmán Astete, 2015; Lee and Fraser, 2019; Ntontis et al., 2018; Partelow, 2021; Toya and Skidmore, 2014; Yamamura, 2016). The COVID-19 pandemic similarly seems to have had positive impacts on social capital (Cappelen et al., 2020), especially in enclaves, where we observed more pronounced increases. The racialization of COVID-19 led to widespread avoidance of Chinese neighborhoods, with severe economic impacts (Fiorillo, 2020; Carman and Heil, February 14, 2020; Aratani, 2020), and may have contributed to the increase in anti-Asian violence, including in Philadelphia (Orso, 2021; Falk and Conant, 2021). Such severe economic and social disruptions likely increased the need for and reliance on some of the structures for social capital, in particular existing close ties (bonding social capital) and the community solidarity that is the basis for cognitive social capital. Also worth noting is that the significant decline in group membership during the pandemic, while an indicator of reduced structural social capital, may also indicate high cognitive social capital if individuals refrained from participating in groups in order to protect the health of community members.
While cognitive social capital in the established enclaves was high (during the pandemic, 100 % of participants in these neighborhoods reported the maximum score of four on this scale), these neighborhoods still remain vulnerable to changes that might disrupt sense of community. As early as 2013, sociodemographic shifts associated with gentrification were noted in Philadelphia's Chinatown, raising concerns over the enclave's survival (Li et al., 2013). These changes may be accelerating with several new apartment buildings and public parks in the northern section of Chinatown (Schmidt, 2022; Russ, 2019). Other work emphasizes that collective hardship does not always lead to an increase in social capital in other forms. Social trust appears to have decreased following the Spanish Flu pandemic of 1918-1920, possibly because of the failure of governments and public health institutions to contain the crisis (Aassve et al., 2021). Dussaillant and Guzmán (Dussaillant and Guzmán Astete, 2015) suggest that disasters might erode social trust in conditions of scarce recovery resources, unequal access to information and opportunities during recovery, or displacement.

In our study sample, we also observed that citizenship activities and linking capital decreased, suggesting severed connections with larger societal structures as individuals moved towards closer bonds. During the pandemic, involvement in citizenship activities decreased to less than half of pre-pandemic levels, and fewer than 1 % of participants reported any form of linking capital. Thus, while one consequence of the pandemic might have been to draw people more closely together, another might have been to further distance them from governmental and decision-making processes and the people involved in those processes. Taken together, these findings highlight the importance of considering impacts on multiple forms of social capital, given their different roles in community recovery.
A limitation of the study is that people who did not participate in a follow-up interview during the pandemic tended to have lower cognitive social capital at baseline, possibly overstating the increase in cognitive social capital during the pandemic. However, the proportion of participants who responded during the pandemic was high overall (~80 %) and similar across neighborhoods. Second, while SASCAT's assessment of individual support has been evaluated for validity, the measures of bonding, bridging, and linking social capital that together make up the measure of individual support have not been validated. The issue is of particular relevance if, as our results suggest, a community stressor such as a pandemic serves to increase some forms of social capital (such as bonding) while curtailing others (such as linking). Future work should more directly address the associations of these different forms of social capital with neighborhood and community stress.

Generalizability of the findings is unclear; our convenience sample included healthy individuals who had immigrated to the US as adults, and these findings warrant replication in other neighborhoods. An additional limitation is that we used census tract boundaries to delineate borders for the established and emerging enclave areas rather than residents' own perceptions of where the borders fall. However, to identify neighborhood types we used a robust method that was both based on objective criteria and supported by academic and lay understanding of Philadelphia neighborhoods. Further, our findings were unchanged in sensitivity analyses extending enclave boundaries to include a ¼-mile buffer.
Major strengths of the study include its longitudinal design, with repeated measures of both structural and cognitive components of social capital, to capture change in these factors from before to during the pandemic. In addition, our operationalization of neighborhood types allowed us to distinguish between established and emerging enclaves, and our recruitment strategy resulted in a unique sample of Chinese immigrants residing in a wide range of neighborhood types in the Philadelphia region.

Besides replication in other geographic areas and ethnic groups, our findings suggest three directions for future work. First, both the contributors to and the consequences of the significant increases in individual support and cognitive social capital with the pandemic warrant further investigation. Clarifying the extent to which neighborhood, as opposed to individual, characteristics enabled enclave residents in particular to access individual sources of support or to feel a greater sense of harmony with their community can inform strategies to improve resilience to community hardship. Whether increases in these forms of social capital protected against experiences of discrimination and social isolation will be informative of the potential benefits of strategies to build social capital.

Second, in the current study, residents of non-enclaves generally reported levels of social capital at baseline that were comparable to those of their enclave-residing counterparts. The roots of cognitive social capital in non-enclave neighborhoods warrant further exploration: in particular, whether features of the neighborhoods have facilitated development of neighborhood trust, and the extent to which it is due to individual psychosocial characteristics. Clarifying the factors that contribute to social capital for Chinese immigrants living in areas of low co-ethnic density would also be informative for efforts to support the development of social resources in other minoritized groups.
Finally, given that residents of the emerging enclave in the current study proved to be remarkably resilient in the face of the pandemic, understanding the processes by which neighborhoods undergoing sociodemographic changes develop the physical and social structures that nurture social capital can point to ways to equip communities to improve their resilience. Overall, community-level factors that contribute to the growth of different forms of social capital merit deeper exploration. Qualitative or mixed-method approaches may help illuminate the processes by which specific neighborhood characteristics facilitate or impede the development of social capital.

Our findings provide evidence that while the pandemic related to declines in group membership in this sample of Chinese immigrants, it was associated with increases in other forms of social capital. These changes were more pronounced for residents of established enclaves, suggesting both greater vulnerability and greater resilience in these communities that merit further exploration. An overall decrease in forms of social capital linking individuals to wider society, including its power structures, was also apparent. These findings suggest the importance of clarifying how social capital derived from interacting within an immigrant enclave might be leveraged to counter the effects of a community stressor such as the COVID-19 pandemic.

a Adjusted for age at baseline (years), gender, marital status (married or not), education level (<8 years, 8-11 years, high school graduate, Bachelors degree or higher), occupational category (blue collar, service, or white collar / self-employed), length of residence in the US (years), acculturation level (continuous GEQA score), percent of adults in the Census tract with a college degree, median household income of the Census tract, percent of adults in poverty in the Census tract, and percent of homes in the Census tract that were owner-occupied.
b Neighborhood-specific estimates were derived from models stratified on neighborhood type.
c P-value from Wald test statistic for interaction term in model including the covariates listed above.
d Not estimated due to zero cell values.

Wellbeing Space Soc. Author manuscript; available in PMC 2024 April 18.

Fig. 2. Census tracts included in established and emerging enclaves in Philadelphia. Map adapted from 2010 census tract reference maps for Philadelphia County, PA (US Census Bureau 2010).

[Exclusion criteria fragment:] …, myocardial infarction, stroke, heart failure, cardiovascular procedures, cancer (except non-melanoma skin cancer)); (2) pregnancy or lactation; (3) current or planned (within 2 years) nursing home residence; and (4) impaired cognitive ability or inability to provide informed consent. The study was approved by the Fox Chase Cancer Center Institutional Review Board, and all contact and informed consent documents were provided in English and Chinese.

Table 1. Measure of social capital using the short version of the Adapted Social Capital Assessment Tool (SASCAT).
Table 2. Descriptive characteristics of study sample at baseline (n = 518), overall and by neighborhood type.
Table 3. Fully adjusted a odds ratios (OR) and corresponding 95 % confidence intervals (CI) for greater social capital at baseline by neighborhood type. Boldface indicates statistically significant associations (n = 518).
a Adjusted for age at baseline (years), gender, marital status (married or not), education level (<8 years, 8-11 years, high school graduate, Bachelors degree or higher), occupational category (blue collar, service, or white collar / self-employed), length of residence in the US (years), acculturation level (continuous GEQA score), percent of adults in the Census tract with a college degree, median household income of the Census tract, percent of adults in poverty in the Census tract, and percent of homes in the Census tract that were owner-occupied.
b P-value from Wald test statistic.

Table 4. Fully adjusted a odds ratios (OR) and corresponding 95 % confidence intervals (CI) for greater social capital during pandemic vs. baseline, overall and by neighborhood type. Boldface indicates statistically significant associations.
2024-01-24T16:30:30.544Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "33dab212bdcaa744e74b7646afe8088df06d126f", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.wss.2024.100185", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "dbc68d6a20550fedc0ff462fb26bcb66e0858540", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [] }
235767927
pes2o/s2orc
v3-fos-license
Patient-reported outcome measures after hip fracture in patients with chronic cognitive impairment Aims Hip fracture patients have high morbidity and mortality. Patient-reported outcome measures (PROMs) assess the quality of care of patients with hip fracture, including those with chronic cognitive impairment (CCI). Our aim was to compare PROMs from hip fracture patients with and without CCI, using the Norwegian Hip Fracture Register (NHFR). Methods PROM questionnaires at four months (n = 34,675) and 12 months (n = 24,510) after a hip fracture reported from 2005 to 2018 were analyzed. Pre-injury score was reported in the four-month questionnaire. The questionnaires included the EuroQol five-dimension three-level (EQ-5D-3L) questionnaire, and information about who completed the questionnaire. Results Of the 34,675 included patients, 5,643 (16%) had CCI. Patients with CCI were older (85 years vs 81 years) (p < 0.001), and had a higher American Society of Anesthesiologists (ASA) classification compared to patients without CCI. CCI was unrelated to fracture type and treatment method. EQ-5D index scores were lower in patients with CCI after four months (0.37 vs 0.60; p < 0.001) and 12 months (0.39 vs 0.64; p < 0.001). Patients with CCI had lower scores for all dimensions of the EQ-5D-3L pre-fracture and at four and 12 months. Conclusion Patients with CCI reported lower health-related quality of life pre-fracture, at four and 12 months after the hip fracture. PROM data from hip fracture patients with CCI are valuable in the assessment of treatment. Patients with CCI should be included in future studies. Cite this article: Bone Jt Open 2021;2(7):454–465. Introduction Hip fracture patients with chronic cognitive impairment (CCI) represent up to 37% of the hip fracture population, 1 and are often vulnerable. 2 Patients with CCI are often excluded from studies because of the difficulty in obtaining informed consent from patients or proxies. 
Excluding these patients can lead to systematic bias in existing knowledge of hip fracture patients. 3 The traditional method of assessing outcome after hip fracture has been to measure physical functioning, reoperations, complications and mortality. 4,5 A hip fracture also has a considerable impact on patients' health-related quality of life (HRQoL). [6][7][8] Several studies have therefore advocated including patientreported outcome measures (PROMs) in the assessment of outcomes following a hip fracture. 5,9 There are few published studies on hip fracture patients using PROMs that include patients with CCI and there is thus a need for more studies to explore the relevant outcomes. 10,11 The Norwegian Hip Fracture Register (NHFR) is one of the few registries that routinely collect PROM data from patients, including cognitively impaired patients. Information on who filled in the form is also available. Methods Study design. Our aim was to compare PROM data after hip fracture in patients with and without CCI. This study was a prospective observational study based on data from the NHFR. The NHFR has collected data from all hospitals in Norway treating patients with hip fractures since 2005. 12 On a one-page form, the surgeon reports information such as fracture type, operation method and patient information, including assessment of CCI. The surgeon evaluates patients' chronic cognitive function by examining their medical chart, asking them or their relatives, or using the clock drawing test. 13 The information on chronic cognitive function is based on preoperative information. No other standardized diagnostic tools for assessment of cognitive function are normally used in this setting. The question on CCI on the form is, 'Does the patient have cognitive impairment?' with the options of 'Yes', 'No', or 'Uncertain'. 
The data on CCI in the NHFR have been previously validated against two hospital quality databases, and the positive predictive value of the data reported to the NHFR on CCI was 78%. 14 Fractures were classified as undisplaced femoral neck, displaced femoral neck, basocervical, trochanteric A1, A2, A3, or subtrochanteric. Primary operations were classified as screw osteosynthesis, hemiarthroplasty, sliding hip screw, and short/long intramedullary nail. PROM questionnaires were sent from the NHFR by mail directly to patients. Patients responded using a pre-paid envelope. No reminders were sent to patients not responding. PROMs reported in questionnaires at four and 12 months were analyzed. The questionnaires include the Norwegian translation of the EuroQol five-dimension three-level (EQ-5D-3L) questionnaire, which covers five dimensions of HRQoL: mobility, self-care, usual activities, pain/discomfort, and anxiety/depression. 15 There are three levels of response for each dimension: from level 1 (indicating no problems or best state) to level 3 (indicating severe problems or worst state). 15 Pre-fracture EQ-5D-3L data were collected retrospectively together with the EQ-5D-3L data in the four-month questionnaire. The preference scores (EQ-5D index scores) were generated from a large European population, 16 ranging from a score of 1 (indicating the best possible state of health) to a score of -0.217 (indicating a state of health worse than death), while 0 indicates a state of health equal to death. Each questionnaire also includes information on who completed the form: the patient, a relative, a clinician, or other. Patient selection. Between 1 January 2005 and 31 December 2018, 113,447 patients were reported to the NHFR. Patients with pathological fractures and patients below the age of 65 years were excluded (Figure 1). Patients treated with total hip arthroplasty (THA) were excluded because they were reported on forms that did not include information on cognitive status.
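To make the scoring logic concrete, the following is a minimal sketch of how five EQ-5D-3L dimension levels collapse into a single index score. The decrement values below are hypothetical placeholders chosen only for illustration; they are not the European value set actually used by the NHFR.

```python
# Hypothetical per-dimension decrements subtracted from 1.0 at levels 2 and 3.
# These numbers are ILLUSTRATIVE ONLY, not a published EQ-5D-3L value set.
DECREMENTS = {
    "mobility":           {1: 0.0, 2: 0.07, 3: 0.31},
    "self_care":          {1: 0.0, 2: 0.10, 3: 0.21},
    "usual_activities":   {1: 0.0, 2: 0.04, 3: 0.09},
    "pain_discomfort":    {1: 0.0, 2: 0.12, 3: 0.39},
    "anxiety_depression": {1: 0.0, 2: 0.07, 3: 0.24},
}

def eq5d_index(levels: dict) -> float:
    """Map the five EQ-5D-3L levels (1-3) to a single index score.

    Level 1 on every dimension yields 1.0 (best possible health); higher
    levels subtract dimension-specific decrements, so severe states can
    fall below 0 (a state "worse than death"), mirroring the -0.217 floor
    of the value set described in the text.
    """
    score = 1.0
    for dim, level in levels.items():
        score -= DECREMENTS[dim][level]
    return round(score, 3)

# Best possible state: no problems on any dimension.
best = eq5d_index({d: 1 for d in DECREMENTS})   # -> 1.0
# A severe state: level 3 on every dimension falls below zero.
worst = eq5d_index({d: 3 for d in DECREMENTS})
```

The additive structure (a fixed starting value minus dimension-specific decrements) is the general shape of EQ-5D index scoring; only the coefficients differ between published value sets.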
Patients recorded in the NHFR with missing information on chronic cognitive status and patients with 'uncertain' cognitive status were also excluded, as were patients who died within four months. Finally, 60,847 patients received and 34,675 patients (57%) completed the four-month questionnaire. We primarily analyzed the data from patients responding to the four-month questionnaire. Pre-fracture EQ-5D data were answered together with the four-month questionnaire. Of these patients, 32,484 (94%) received and 24,510 (75%) answered the 12-month questionnaire. Secondly, we examined the group answering both the four- and 12-month questionnaires in order to analyze changes in a long-term perspective. Thus, 24,510 patients could be included in the analysis comparing PROMs at four and 12 months (Figure 1). Statistical analysis. Pearson's chi-squared test was used to compare categorical variables, while an independent-samples t-test was used for continuous variables in independent groups. The number of patients reaching their pre-fracture EQ-5D status was calculated in percentages. The change in EQ-5D was calculated for each patient as the difference between the EQ-5D index score and the pre-fracture EQ-5D index score. Subanalyses with stratification on males/females and different age groups were performed. The statistical software package IBM SPSS Statistics (v. 26.0; IBM, USA) was used for statistical analysis. This study was performed in accordance with the REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) statement. 17 Ethics, funding, and potential conflict of interest. The NHFR has authorization from the Norwegian Data Protection Authority to collect and store data on hip fracture patients (authorization issued on 3 January 2005: reference number 2004/1658 to 2 SVE/-). The patients provided written, informed consent; if a patient was unable to understand or sign, a relative could sign the consent form on their behalf.
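The per-patient change score and the between-group comparison described above can be sketched as follows. The data are invented for illustration, and the Welch t statistic shown is one common unequal-variance variant of the independent-samples t-test (the study itself used IBM SPSS, which reports both variants).

```python
from math import sqrt
from statistics import mean, variance

# Per-patient change: four-month index minus pre-fracture index (hypothetical data).
pre  = [0.80, 0.70, 0.75, 0.60, 0.85, 0.65]
four = [0.60, 0.55, 0.50, 0.45, 0.70, 0.50]
change = [f - p for f, p in zip(four, pre)]

def welch_t(a, b):
    """Two-sample t statistic with unequal variances (Welch), used here to
    compare a continuous outcome between two independent groups."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Compare change scores of two hypothetical groups (with / without CCI).
group_cci = change[:3]
group_no_cci = change[3:]
t_stat = welch_t(group_cci, group_no_cci)
```

In the actual analysis, each patient's `change` value would come from the EQ-5D index scores reported in the four-month questionnaire and the retrospectively collected pre-fracture scores, and the resulting t statistic would be converted to a p-value against the appropriate degrees of freedom.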
The NHFR is financed by the Western Norway Regional Health Authority. No competing interests were declared by the authors. Results The four-month questionnaire was completed by 34,675 patients, and 24,510 patients completed both the four- and 12-month questionnaires. The majority of the questionnaires from patients with CCI were filled in by a proxy (four months: 84%; 12 months: 78.2%), whereas most questionnaires from patients without CCI were filled in by the patients themselves (four months: 67.2%; 12 months: 73.0%) (Table I). The baseline characteristics of responders and non-responders of the four-month questionnaire are presented in Table II. The non-responders of this questionnaire were older (mean age 83 years vs 82 years) (p < 0.001, independent Student's t-test), included more females (75% vs 73%) (p < 0.001, Pearson's chi-squared test) and more patients with CCI (38% vs 16%) (p < 0.001), and had higher ASA scores (ASA 3 + 4; 66% vs 54%) (p < 0.001, Pearson's chi-squared test) compared to the responders. There were no clinically important differences in fracture type or operation method of the different fracture types between responders and non-responders, but due to the high number of cases the differences reached statistical significance (Table II). Patients answering the four-month questionnaire (n = 34,675). Of the 34,675 patients answering the four-month questionnaire, 5,673 (16.3%) had CCI. Patients with CCI were older (85 years vs 81 years) (p < 0.001, independent Student's t-test), more of them were female (77% vs 73%) (p < 0.001, Pearson's chi-squared test), and they had higher comorbidity (ASA 3 + 4; 73% vs 50%) (p < 0.001, Pearson's chi-squared test) compared to patients without CCI. All five dimensions of the health profiles deteriorated from pre-fracture to four months regardless of cognitive function (Table III), but the patients with CCI reported greater problems in this respect. The hip fracture had a dramatic impact on patients' mobility.
The proportion of patients with CCI confined to bed increased five-fold from 3% to 16%, whereas patients without CCI showed an increase from 0.9% to 3.0% after four months (p < 0.001, Pearson's chi-squared test). The proportion of patients with CCI unable to wash or dress almost doubled from 25% to 48%. Further, the proportion of patients with CCI unable to perform usual activities increased from 45% to 63%. Hip fracture patients with CCI also reported an increase in both moderate and extreme pain/discomfort, from 44% to 64% and from 5.7% to 8.9%, respectively. Regarding anxiety and depression, hip fracture patients with CCI reported an increase in extreme symptoms from 7.4% to 9.7% after four months (Table III). The changes in responses in the EQ-5D-3L from preoperative to 12 months postoperative are shown in Figure 2 (walking ability), Figure 3 (self-care), and Figure 4 (usual activities). The patients with CCI had a lower EQ-5D index score after both four months (0.37 vs 0.60; p < 0.001, independent Student's t-test) and 12 months (0.39 vs 0.64; p < 0.001, independent Student's t-test) compared to patients without CCI (Table V). Stratifying into age groups, the youngest patient groups had higher EQ-5D index scores, both among patients with and without CCI (Table VI). There were statistically significant differences in EQ-5D index scores between patients with and without CCI for all age groups at both four and 12 months. The change in EQ-5D was larger among patients without CCI than among patients with CCI at four months (-0.19 vs -0.17; p < 0.001, independent Student's t-test), but not at 12 months (p = 0.35, independent Student's t-test) when investigating all patients.
There were, however, differences between the patients with and without CCI at age 65 years to 74 years at both four (-0.13 vs -0.19; p = 0.002, independent Student's t-test) and 12 months (-0.11 vs -0.14; p = 0.003, independent Student's t-test), and among patients over 90 years at four months (-0.16 vs -0.20; p < 0.001, independent Student's t-test). There was no difference between patients with and without CCI in the proportion who achieved their pre-fracture EQ-5D status after four months (p = 0.074, Pearson's chi-squared test). After 12 months, a lower proportion of patients with CCI had reached their preoperative EQ-5D than those without CCI (28% vs 33%; p < 0.001, Pearson's chi-squared test) (Table V). The proportion of patients who reached their preoperative EQ-5D at four and 12 months decreased with age (Table VI). Discussion Postoperatively, HRQoL decreased for all hip fracture patients. Patients with CCI showed an even greater decline than those without CCI following a hip fracture. This was particularly due to a reduction in walking function, self-care capacity, and the ability to perform usual activities. Our results concur with a previous review reporting that CCI has a negative impact on HRQoL after a hip fracture. 18 The seven-fold increase in the number of patients with CCI who were confined to bed one year after a hip fracture is dramatic. Mukka et al 19 reported that 28% were non-walkers one year after the hip fracture. Milte et al 10 also found a decrease in walking ability, but their study measured the EQ-5D only one month postoperatively. The tendency was the same for self-care capacity, where the proportion of hip fracture patients with CCI unable to wash or dress almost doubled after 12 months, which is in accordance with a previous study by Osnes et al. 20
Table III. EuroQol five-dimension results before the fracture and at four months by chronic cognitive function (CCI) (n = 34,675).
The decrease in EQ-5D index according to age found in our study concurs with earlier studies of all hip fractures. 5 The decrease in the proportion of hip fracture patients reaching their pre-fracture HRQoL could be a sign of a general decrease in physical and mental status. Peeters et al also found inferior results for female gender. 21 Few studies have included hip fracture patients with CCI. 3 One reason could be the challenges in including patients who might not understand the purpose of the study. It can be difficult to obtain informed consent. The researcher might also find it difficult to trust and interpret answers from patients with CCI. However, patients with CCI represent a significant proportion of the hip fracture population, and should not be excluded from studies. PROMs at four months were completed by a proxy in 86% of the cases with CCI and 41% of cases without CCI. At 12 months, the corresponding proportions were 80% and 33%. Some would argue that PROMs collected from patients with CCI are unreliable. However, several studies have found that persons with CCI are capable of expressing their HRQoL via the EQ-5D. [22][23][24] Further, studies have reported that the EQ-5D is a good tool for measuring outcome for patients recovering from hip fracture, including patients with CCI. [21][22][23]25 It has also been shown that responses given by a proxy can be trusted. However, a closer relationship to the patient led to more agreement in the proxies' answers. 24,26 We would argue that a proxy can normally judge the patient's walking ability and ability to perform self-care and usual activities using the simple three-level categorization in the EQ-5D-3L.
However, it is important to acknowledge that the results presented in this study, to a certain extent, represent a comparison between PROMs completed by patients without CCI and PROMs completed by a proxy for patients with CCI. The EuroQol also contains a visual analogue scale (EQ-VAS). We chose to exclude these data, acknowledging the uncertainty in interpreting visual analogue scales for persons with CCI. 22 There was no substantial change in quality of life between four and 12 months despite improvement in walking ability. This finding might be an argument for only measuring PROMs at four months, thereby reducing the burden of data collection for researchers and those responsible for monitoring PROMs. Strengths and limitations. One strength of our study is the high number of patients included, and the inclusion of a large number of patients with CCI. To our knowledge, this is the largest study on PROM data from hip fracture patients with CCI ever reported. Our data represent nationwide results, including all types of hip fractures and operation methods, except fractures treated with a THA. This makes the data more representative than a small sample of patients and accordingly increases the external validity. The NHFR has high completeness of data: 88% for cases of osteosynthesis and 94% for hemiarthroplasties. 27 The main limitation of the study is nevertheless the methods used to identify cognitive impairment. The surgeon assessed the patient's cognitive function by use of different sources of information, including the patient's medical journal and discussion with relatives or with the patient. However, no standardized tool/approach to diagnose cognitive impairment was normally used.
Fig. 2. Changes in the mobility dimension of EuroQol five-dimension three-level from pre-fracture to four and 12 months postoperatively.
Fig. 3. Changes in the self-care dimension of EuroQol five-dimension three-level from pre-fracture to four and 12 months postoperatively.
Cognitive function was assessed preoperatively, and in cases where this assessment was based solely on conversation with the patient, the presence of delirium could have complicated the assessment. The data on CCI and reporting have also been previously validated against two local hospital databases, with a sensitivity of 69% and a specificity of 90%. 14 Still, we acknowledge some uncertainty in our classification of cognitive function, and that the results, in particular where small differences were found, must be interpreted with some caution. The response rates for the PROM questionnaires were low, and they were lower for patients with CCI than for those without CCI. This is to be expected, as it is presumably difficult, and in severe cases impossible, for patients with CCI to respond adequately to the questionnaire themselves. Due to the combination of high mortality and low response rate among patients with CCI, only 16% and 10% of patients responding to the four- and 12-month questionnaires, respectively, had CCI. These proportions were lower than the equivalent proportion for the total population recorded in the NHFR. 27 Further, the responders were younger and healthier than the non-responders. Our data on quality of life after hip fracture therefore probably represent a best-case scenario, including patients expected to have better quality of life than non-responders. EQ-5D-3L is a validated and frequently used questionnaire measuring HRQoL. This makes our results comparable to other studies of hip fracture patients and other illnesses. 25 Finally, we present the descriptive health profiles of the EQ-5D-3L questionnaire to provide more complete information on the patients' quality of life, not only the EQ-5D index. Presenting both the four- and 12-month PROM data allows us to examine trajectories in long-term follow-up.
Fig. 4. Changes in the usual activities dimension of EuroQol five-dimension three-level from pre-fracture to four and 12 months postoperatively.
We cannot conclude that the changes in HRQoL occurred only because of the hip fracture. Patients with dementia are expected to deteriorate in daily functioning during a one-year follow-up. The response rate of our study was low, as could be expected due to high age and comorbidities. We did not send out reminders to the patients, which might have led to a greater response rate. The pre-fracture PROM data were collected retrospectively in the four-month questionnaire. This could have led to recall bias. However, studies have reported moderate to good correlation when comparing recalled data to prospective data following arthroplasty. 28 Only 2,116 (6%) of the patients responding to the four-month questionnaire died between the distributions of the four- and 12-month questionnaires. Previous studies have reported 90-day mortality of 13% and one-year mortality of 23%. 2 The low mortality rate between four and 12 months could be an expression of selection bias, meaning that only the healthiest patients responded to the four-month questionnaire. This is also supported by the differences found in the baseline data between responders and non-responders at four months. Our study did not assess the severity of the CCI. In the acute setting, cognitive function can be difficult to evaluate due to delirium and acute injury. Some patients were probably misclassified as having CCI because they were delirious. One previous study has confirmed that self-report is not sufficient to assess pain in elderly people with cognitive impairment. 17 Still, it has been shown that patients with mild to moderate dementia are able to complete 99% of the EQ-5D domains. 23 A ceiling/floor effect of patients' ratings has been found to be a limitation of the three response alternatives of the EQ-5D questionnaire. We have no information on rehabilitation in our study.
This could be a confounder, since there could be differences in the rehabilitation offered to patients with and without CCI after a hip fracture, which could affect outcomes such as walking ability and anxiety and depression. Our study did not include THA patients due to missing information on cognitive function. However, THA patients represent only 2.4% of patients in the NHFR and we assume that very few of these patients have CCI. In conclusion, this study found that patients with CCI reported lower HRQoL four and 12 months after a hip fracture compared with hip fracture patients without CCI. PROM data from hip fracture patients with CCI are valuable in the assessment of the treatment of this particularly vulnerable group. Patients with CCI should be included in future studies, and for an orthopaedic registry it is important to establish good and simple methods to facilitate collection of PROMs from frail and cognitively impaired patients.
Take home message:
- A hip fracture has a dramatic impact on patients' quality of life.
- Hip fracture patients with chronic cognitive impairment have lower quality of life than those without cognitive impairment both before and after the hip fracture.
- One in seven hip fracture patients with chronic cognitive impairment are confined to bed one year postoperatively.
- Four in ten hip fracture patients with chronic cognitive impairment are unable to wash or dress one year postoperatively.
Entanglement under the renormalization-group transformations on quantum states and in quantum phase transitions We consider quantum states under the renormalization-group (RG) transformations introduced by Verstraete et al. [Phys. Rev. Lett. 94, 140601 (2005)] and propose a quantification of entanglement under such RG (via the geometric measure of entanglement). We examine the resulting entanglement under RG for the ground states of "matrix-product-state" (MPS) Hamiltonians constructed by Wolf et al. [Phys. Rev. Lett. 97, 110403 (2006)] that possess quantum phase transitions. We find that near critical points, the ground-state entanglement exhibits singular behavior. The singular behavior within finite steps of RG obeys a scaling hypothesis and reveals the correlation length exponent. However, under infinite steps of the RG transformation, the singular behavior is rendered different and is universal only when there is an underlying conformal-field-theory description of the critical point.

I. INTRODUCTION

Since Wilson [1], the renormalization group has been an important tool for theoretical physics, ranging from high-energy physics to condensed matter [2]. It is related to the coarse-graining procedure of the physical system, from which the transformation of system parameters in the Hamiltonian is derived. Corresponding terms in the Hamiltonian can be determined to be relevant or irrelevant under the scale transformation. The renormalization-group transformation on quantum states was recently introduced by Verstraete et al. [3] using the representation of matrix product states (MPS) [4] (for a review of MPS, see e.g. Ref. [5]). Many important quantum states in quantum information theory emerge naturally as the fixed points of this coarse-graining transformation on states [3]. One important property associated with quantum states is their entanglement content.
There has been tremendous advancement in the understanding of entanglement (both bipartite and multipartite) over the past few decades [6]. The notion of entanglement has also been applied to many-body systems [7], and especially to systems that possess quantum phase transitions [8], mainly via bipartite measures (e.g., between two spins or between one subsystem and the remainder) such as concurrence [9] and entanglement entropy [10]. Important insight has thus been obtained, such as the connection to conformal field theory near criticality [10] and to bipartite entanglement [9,11]. Furthermore, in dimensions higher than one, the entanglement entropy has been shown to obey an area law for various systems [12-14] (up to logarithmic corrections for fermions [12]). Regarding the coarse-graining process, one can thus raise the interesting question of how the entanglement behaves under the RG transformation on states [15]. As we shall see below, one of the measures that appears suitable for discussing entanglement under RG is the so-called geometric measure of entanglement (or simply geometric entanglement) [16,17]. This measure of entanglement, defined with respect to partitions into blocks of consecutive sites, has recently been employed by Orús [18] and by Botero and Reznik [19]. In this paper, we provide an interpretation of the block geometric entanglement, namely that it is exactly the entanglement under the coarse graining of the renormalization-group transformation on quantum states [3]. We apply this block-L entanglement (with L being the number of sites in each block) to two spin models constructed by Wolf et al. [20] that possess quantum phase transitions. We find that near critical points, the ground-state entanglement under the RG transformation exhibits singular behavior. The singular behavior reveals the correlation length exponent.
However, under infinite steps of the RG transformation, the singular behavior is rendered different, and it has no universal form unless the critical point can be described by a conformal field theory.

Let us begin by discussing matrix product states. It was shown by Vidal [21] that any state can be written in the MPS form

$$|\psi\rangle = \sum_{p_1,\dots,p_m} \mathrm{Tr}\big(A^{[1]}_{p_1} A^{[2]}_{p_2} \cdots A^{[m]}_{p_m}\big)\,|p_1, p_2, \dots, p_m\rangle,$$

where the A's are D × D matrices with D ≤ d^{m/2}. Let us now define the geometric measure of entanglement, by considering a multipartite system comprising m parts, each of which can have a distinct Hilbert space. We compare a general m-partite entangled pure state |ψ⟩ to the set of general product pure states, and define the maximal overlap of |ψ⟩ with the closest product state as

$$\Lambda_1(\psi) \equiv \max_{\phi} |\langle \phi|\psi\rangle|.$$

The maximal overlap Λ₁(ψ) reveals the entanglement content of the state |ψ⟩: the larger Λ₁(ψ), the lower the entanglement of |ψ⟩. A quantitative way to define the entanglement content is via

$$E_1(\psi) \equiv -\log \Lambda_1^2(\psi),$$

where the subscript 1 indicates that the product state is composed of a product of states of single sites. Note that the norm square of a translation-invariant matrix product state (with m sites) can be expressed in terms of the operator Ê defined below in Eq. (14); for convenience the state may not be normalized to unity, and one then has to supply this normalization factor in Λ(ψ). We remark that by appropriate partitioning of |φ⟩ into various product forms, a hierarchy of entanglement measures can be obtained [17,22-24]. The most relevant kind of partitioning regarding RG is to divide the m sites into blocks of several consecutive neighboring sites, e.g., L consecutive sites in one dimension. This leads to what we shall refer to as the block-L entanglement,

$$E_L(\psi) \equiv -\log \Lambda_L^2(\psi), \qquad \Lambda_L(\psi) \equiv \max_{\Phi_L} |\langle \Phi_L|\psi\rangle|,$$

with |Φ_L⟩ being product states of the block form, where we have implicitly assumed that the total number of spins m (sometimes denoted by N) is a multiple of L.

II. RG ON QUANTUM STATES

Verstraete et al.
[3] considered a quantum coarse-graining procedure by merging two neighboring spins into one new block spin, described in terms of the matrices A. A more convenient representation is obtained by choosing the new matrix with the help of the right unitary matrix V in the singular-value decomposition of the merged pair of matrices; this keeps the dimension of the Hilbert space of the block bounded above by D². They further introduced a more convenient representation of the RG transformation by defining (what we shall hereafter refer to as the RG operator or the transfer-matrix operator)

$$\hat E \equiv \sum_{p} A_p \otimes A_p^*, \qquad (14)$$

which is invariant under any local unitary A_q → Σ_p U_{qp} A_p. Hence, the operator Ê² is invariant under any unitary within the block, and the RG transformation on the state can be described by the mapping Ê ↦ Ê². The above discussion assumes that the states under consideration are translation invariant. A straightforward extension of the renormalization-group transformation to generic quantum states can be made by including the site dependence of the matrices A. As above, the transformed operator Ê′ is invariant under any unitary acting on the original two spins. This means that under a one-step RG transformation, the state |ψ⟩ transforms to |ψ′⟩ up to such unitaries, with the (2k−1)-th and 2k-th sites merged into a single site; the unitaries are now generally site-dependent.

III. ENTANGLEMENT OF STATES UNDER RG

In a spirit similar, though not identical, to the work on the multiscale entanglement renormalization ansatz (MERA) by Vidal [25], we discard the short-range description, and hence the total entanglement should decrease under this coarse graining. As the RG defined by Verstraete et al. [3] transforms the original state to the state |ψ′⟩ up to local (treating sites 2k−1 and 2k as local) unitary transformations [see Eq. (19)], the natural definition of the entanglement after the RG is the minimum over the local equivalence class.
In terms of Vidal's MERA, the unitaries U[2k−1, 2k] act as disentanglers that aim to reduce the entanglement between sites 2k−1 and 2k. The merging of the two sites here under the RG defined by Verstraete et al. [3] is done with the same pairs of sites, in contrast to MERA, where the merging is done with, e.g., sites 2k and 2k+1 via isometries [25]. Conforming to the former picture of RG on states, the entanglement after one step of RG should be defined as the minimum over unitaries of the form U = ⊗_k U[2k−1, 2k], with the closest product states |Φ⟩ = |φ^{[12]}⟩ ⊗ |φ^{[34]}⟩ ⊗ ⋯ ⊗ |φ^{[2k−1,2k]}⟩ ⊗ ⋯, where |φ^{[2k−1,2k]}⟩ is an arbitrary state of the two sites 2k−1 and 2k. This is exactly the block-2 entanglement defined previously in Eq. (8) with L = 2. One can continue the merging procedure and arrive at the successive entanglement under RG being equal to E_L(ψ) with L = 4, 8, …, 2^l, etc. It is straightforward to see that for l′ ≥ l, E_{2^{l′}}(ψ) ≤ E_{2^l}(ψ), i.e., the total entanglement under RG cannot increase. As another important ingredient in the RG is the rescaling of the lattice spacing, we also introduce the entanglement per block [16,18,19] to reflect this rescaling of length scale (equivalently, system size) in RG,

$$\mathcal{E}_L \equiv \frac{E_L}{N/L},$$

where N is the total number of sites (usually considered in the limit of a large number of blocks, n ≡ N/L → ∞). Therefore, E_L is the total entanglement for the system with L sites merged into one, and correspondingly, 𝓔_L is the entanglement per block. A conclusion from the above discussion is that the entanglement per block (of size 2^l) is equal to the entanglement per site of the l-times RG-transformed state of |ψ⟩, as the RG-transformed state is determined up to 2^l local sites in view of the original sites. This gives a physical meaning to the block-L geometric measure of entanglement.
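As a minimal numerical illustration of the geometric measure defined above (a sketch written for this text, not taken from the paper; it anticipates the GHZ example of Sec. V), consider the N-qubit GHZ state. For a symmetric real product ansatz |φ⟩^⊗N with |φ⟩ = cos t |0⟩ + sin t |1⟩ (assumed sufficient here, which holds for GHZ), the overlap is (cosᴺt + sinᴺt)/√2, and a one-parameter scan recovers Λ₁ = 1/√2 and a total entanglement E₁ = log 2, independent of N:

```python
import numpy as np

# Geometric entanglement of the N-qubit GHZ state,
#   |GHZ> = (|0...0> + |1...1>)/sqrt(2).
# Symmetric product ansatz |phi>^{(x)N}, |phi> = cos(t)|0> + sin(t)|1>:
# overlap with GHZ is (cos^N t + sin^N t)/sqrt(2). Scan t for the maximum.
N = 8
t = np.linspace(0.0, np.pi / 2, 100001)
overlap = (np.cos(t) ** N + np.sin(t) ** N) / np.sqrt(2)
Lambda1 = overlap.max()          # maximal overlap, attained at t = 0 (or pi/2)

E_total = -np.log(Lambda1 ** 2)  # total geometric entanglement, -log(Lambda^2)
print(Lambda1, E_total)          # ~0.70710678, ~0.69314718 (= log 2)
```

That E₁ stays at log 2 for every N is exactly the fixed-point behavior discussed later: the GHZ state looks the same under blocking, so its entanglement per block vanishes in the limit of many blocks.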
Orús [18] has recently shown that the geometric measure of entanglement defined relative to blocks (of size L) of spins can be evaluated in closed form, under the important assumption that the closest product state can be taken to be a product of identical local states. For the ground states of the transverse-field XY spin chains, this ansatz has been verified numerically [16]. Let us briefly describe the proof. Consider the ansatz product state |Φ⟩ = |φ⟩^⊗m and let the translation-invariant state |ψ⟩ be expressed in the MPS form (2). Then the overlap between the two states is ⟨Φ|ψ⟩ = Tr(B^m), where the matrix B ≡ Σ_p ⟨φ|p⟩ A_p. Suppose the largest eigenvalue of B is not degenerate; then Tr(B^m) is dominated by λ_max(B)^m for large m, and the goal is to maximize |λ_max(B)| over |φ⟩. The maximal overlap is obtained (in the limit m → ∞, and taking into account the normalization ⟨ψ|ψ⟩) under the assumption that the two maximizations, over the eigenvector and over φ, can be interchanged. The maximization over φ is then achieved (using the Cauchy-Schwarz inequality, up to a normalization) in terms of the dominant eigenstructure of Ê, where Ê is defined in Eq. (14) as the basic operator for state renormalization. Therefore, the entanglement per site can be computed from the dominant eigenvalue of Ê. In the case of entanglement per block of size L, we can repeat the derivation replacing Ê by Ê^L and arrive at the corresponding expression. Therefore, the entanglement at large block size L → ∞ depends on the fixed-point properties of the operator Ê.

IV. FIXED POINTS

Verstraete et al. [3] have also defined the fixed point under the RG transformation on states via Ê_∞. We can use their classification of the fixed points and investigate the behavior of entanglement for generic states. They concluded that in the generic case the largest eigenvalue of Ê is nondegenerate and both its left and right eigenvectors have maximal Schmidt rank, and one can always choose a representation with Schmidt coefficients λ_i > 0. The maximum is then λ_1, the largest of the {λ_i}. Furthermore, Tr[(Ê_∞)^n] = Σ_i λ_i^n.
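The spectral structure of Ê is easy to check numerically. As a concrete sketch (written for this text, assuming the standard AKLT MPS matrices in one common normalization, which anticipates Example 1 below), one can build Ê = Σ_p A_p ⊗ A_p* for the AKLT state and confirm its well-known spectrum {1, −1/3, −1/3, −1/3}:

```python
import numpy as np

# Standard AKLT MPS matrices (one common normalization convention):
#   A_+ = sqrt(2/3) * sigma_plus,  A_0 = -sqrt(1/3) * sigma_z,
#   A_- = -sqrt(2/3) * sigma_minus
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma_plus
sm = sp.T                                  # sigma_minus
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
A = [np.sqrt(2 / 3) * sp, -np.sqrt(1 / 3) * sz, -np.sqrt(2 / 3) * sm]

# Transfer (RG) operator  E = sum_p A_p (x) conj(A_p)   [cf. Eq. (14)]
E = sum(np.kron(a, a.conj()) for a in A)

eigvals = np.sort(np.linalg.eigvals(E).real)
print(eigvals)   # approximately [-1/3, -1/3, -1/3, 1]
```

The nondegenerate largest eigenvalue 1 and the gap to −1/3 are what make the large-L (fixed-point) behavior of Ê^L, and hence of the block entanglement, well defined.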
Therefore, the entanglement per block at the fixed point is E_∞ = −log λ̃_1, where we have defined the normalized Schmidt coefficients λ̃_i ≡ λ_i / Σ_k λ_k. This means that if a many-body state can be represented by an MPS with dimension D, then the largest entanglement per block at the fixed point is bounded above by log D. We remark that, according to Verstraete et al. [3], the entropy of a block of spins is exactly twice the entropy of entanglement of |Φ_L⟩, i.e., S = −2 Σ_i λ̃_i log λ̃_i ≤ 2 log D. In short, we have the relation E_∞ ≤ S/2 ≤ log D.

V. EXAMPLES

In this section, we warm up with several example states.

Example 1. The AKLT state [26]. Orús has performed a detailed analysis for the AKLT state [27]. The operator Ê can be written down explicitly (with matrix elements conveniently expressed in the "ket" and "bra" notation) in terms of |r⟩ ≡ r_0|0⟩ + r_1|1⟩ and |r*⟩ ≡ r*_0|0⟩ + r*_1|1⟩, and the overlap for a total of n blocks follows accordingly.

Example 2. The GHZ state. The maximal overlap is attained when either |r_0| = 1 or |r_1| = 1. Note that in this case there is a degeneracy in the eigenvalues of Ê, yet there seems to be no problem in carrying out the procedure. Furthermore, no matter how large L is, the GHZ state looks identical (after rescaling the system size); it possesses a total entanglement of log 2, and hence vanishing entanglement per block in the limit of a large number of blocks.

Example 3. The cluster state. Consider the linear-cluster state. Calculating the operator Ê, one finds E_L = log 2 for even L; see Example 4 and below. In fact, for L = 1, it is known that the cluster state possesses a total entanglement E_1 = ⌊N/2⌋ log 2, where N is the total number of spins [28]. This means an entanglement density of (1/2) log 2 per site, which is half of E_{L=even}.

Example 4. The antiferromagnetic GHZ state. One caution is that the closest separable state for block-1 is either |010101…⟩ or |101010…⟩, neither of which is translation invariant. But for block-2 it is, e.g., |(01)(01)(01)(01)…⟩.
Therefore, there is an even-odd difference. Of course, the entanglement is the same as for the ferromagnetic GHZ state. To avoid this even-odd effect, we will mainly consider the block size L to be even. In the following, we will provide numerical evidence justifying the ansatz we use to calculate the entanglement. We note that for permutation-invariant pure states, the ansatz of the product state being a tensor product of identical single-site states is well justified; see Ref. [29].

VI. ENTANGLEMENT IN QUANTUM PHASE TRANSITIONS WITH MATRIX PRODUCT STATES

Wolf et al. have recently used matrix product states to engineer quantum phase transitions (QPTs) with properties differing from the standard paradigm [20] (e.g., analytic ground-state energy and finite entanglement entropy for an infinite half-chain), but still with a diverging correlation length and vanishing energy gap. Since the ground state depends continuously on the system parameter g, we shall investigate the ground-state entanglement properties of the models Wolf et al. considered and determine whether the ground-state entanglement can be a telltale of the corresponding critical points.

[Fig. 2 caption: (a) L = 1 (top panel); labels "a", "b", and "c" denote the results from the identical, alternating, and arbitrary ansatz states, respectively. (b) L = 2 (bottom panel); labels "a" and "b" denote the results from the identical and arbitrary ansatz states, respectively.]

A. A spin-1/2 model

The state represented by the g-dependent MPS is the ground state of a corresponding Hamiltonian [20]. At g = 0 it is a GHZ state, whereas at g = −1 it is a cluster state. We first check to what extent the ansatz states for deriving the formula (40) for E_L can be justified. In Fig.
2a, we compare the numerical values of the entanglement density E_1 for N = 10 spins using three different product-state ansätze (identical, alternating, arbitrary). We see that the results of using the ansatz |Φ_a⟩ (identical) to calculate E_1 are correct only for g ≥ −0.5. However, the results of using |Φ_b⟩ (alternating) are always as good as those of using |Φ_c⟩ (arbitrary). This suggests that for even L, a product of blocks of an even number of spins is a good ansatz. This is indeed the case for L = 2. In Fig. 2b, we compare numerical results for E_2 with the same number N = 10 of spins between two different ansatz states (identical and arbitrary). It is clear from the plot that they give identical results, thus supporting the use of a product of identical single-block states (with even block size L). Next, we calculate the analytic expression for the entanglement per block, focusing mainly on even L. We begin by noting that the operator Ê can be expressed in terms of |g⟩ ≡ (|0⟩ + g|1⟩)/√(1+g²); we can then evaluate the n-th power of Ê and calculate E_L(g). In this case, we shall see that even the behavior of the entanglement E_∞(g) exhibits a singularity across the critical point g = 0: E_∞(g) vanishes at g = 0 and equals log 2 for g < 0, while for g > 0 it depends continuously on g. The derivative of E_∞(g) is discontinuous across g = 0 and diverges as g → 0⁺. Therefore, the critical point is reflected by the property that the fixed-point entanglement E_∞ has divergent behavior in its derivative with respect to g. We can also study the entanglement E_L for finite L and learn how the entanglement varies under the state-RG transformation. The entanglement for finite L turns out to exhibit more features; the singularity at the critical point g = 0 is obvious. See Fig. 3.
In particular, we find that, as g decreases from a large value, the entanglement decreases to zero at g = 1, which corresponds to a paramagnetic state with all spins pointing in the x direction. It then rises to a local maximum as g decreases further, and afterwards decreases to zero at the critical point. This intermediate region becomes smaller as L increases and is washed out at the fixed point. The cusp for L = 2 at g = −1 reflects a highly entangled state, which turns out to be the one-dimensional cluster state. The entanglement slowly decreases as g becomes more negative. As a comparison, we show the behavior of the nearest-neighbor concurrence [30] in Fig. 4. There are singularities at g = 0 and g = 1. The use of concurrence to infer critical points may therefore incorrectly identify g = 1 as a critical point.

B. A spin-1 model

This example is a spin-1 scenario. The state represented by the g-dependent MPS is the ground state of a spin-1 Hamiltonian [20]. For g = ±2, the ground state is the AKLT state. For g → ±∞, the ground state is the Néel GHZ state. The critical point is at g = g_c = 0, where there is a diverging correlation length. Here, we also check to what extent the ansatz states for deriving the formula (40) for E_L can be justified. In Fig. 5a, we compare the numerical values of the entanglement density E_1 for N = 4 spins using three different product-state ansätze (identical, alternating, arbitrary). We see that the results of using the ansatz |Φ_a⟩ (identical) to calculate E_1 are correct only for small |g|. For larger |g| the ansatz |Φ_a⟩ gives incorrect results. However, the results of using |Φ_b⟩ (alternating) are always as good as those of using |Φ_c⟩ (arbitrary). As in the previous model, this suggests that for even L, a product of blocks of an even number of spins is a good ansatz. We note that for finite N there is a region near g = 0 in which the entanglement behaves quadratically with g. This region, however, shrinks as N becomes larger, as illustrated in Fig. 5b.
In the thermodynamic limit (N → ∞) this region is expected to disappear, and the behavior of the entanglement near g_c becomes linear (see below). Furthermore, even though the formula for E_L is expected to work for even L, the expression obtained by taking L = 1 appears to be close to the entanglement density obtained for finite N, except in the small quadratic region (due to the finite system size). Now we derive the analytic expression for the ground-state entanglement per block. First, we construct the renormalization operator Ê, which can be transformed into diagonal form. From this, we obtain

$$E_\infty(g) = \begin{cases} \log 2, & 0 < |g| < \infty, \\ 0, & g = 0 \ \text{or} \ |g| \to \infty. \end{cases}$$

The critical point in the fixed-point limit L → ∞ is thus reflected only by the discontinuity of the entanglement. The more interesting analysis concerns finite L. For simplicity, take L to be even. For |g| ≫ 1, E_L(g) = 0. We find that E_L(−g) = E_L(g), as can be seen from the symmetry of Ê under g → −g. For finite values of g, we have

$$E_L(g) = \begin{cases} \log 2 - \log\!\big[1 + (1+g)^{-L}(g-1)^{L}\big], & g > 2, \\ \log 2 - \log\!\big[1 + (1+g)^{-L}\big], & 0 \le g < 2. \end{cases}$$

We find that there is a discontinuity in the derivative across the critical point, and the critical point is revealed by this singularity (i.e., the discontinuity). In addition to the critical point, the AKLT state at g = ±2 is signalled by a cusp in the entanglement for L = 2 and 4. The weaker singular behavior at g = ±2 suggests that it is not a critical point.

VII. ENTANGLEMENT NEAR CRITICALITY

Near quantum critical points, the correlation length generally scales as ξ ∼ |g − g_c|^{−ν}. For any density function, such as the free-energy density or the entanglement density, when there is no logarithmic singularity we expect the scaling hypothesis E(g) ∼ |g − g_c|^{νd} to hold (see, e.g., Ref. [2]), where d is the dimension of the system.
We then expect (unless dE(g)/dg behaves as ∼ log|g − g_c|) that in general dE/dg ∼ |g − g_c|^{νd−1}. In the one-dimensional case d = 1 considered here, the discontinuity of the entanglement derivative that we have found for both models (with finite L) implies ν = 1, which is indeed the case. Furthermore, the discontinuity is proportional to L, which is expected, as L is now the smallest length scale. In the case of the transverse-field XX model, near the critical field g_c = 1 the entanglement density E_1 has been shown to behave as a square-root singularity, with a derivative diverging as |g − g_c|^{−1/2} [16]. This is consistent with d = 1 and ν = 1/2 for the XX model. These results are to be compared with the case of the one-dimensional transverse-field Ising spin chain, where the divergence of the entanglement derivative is logarithmic; here g is the ratio between the external field and the spin-spin coupling, and g = 1 is the critical point. In this case finite-size scaling needs to be performed to determine ν = 1 [16]. We remark that the divergent behavior of E_L can be rendered different as L → ∞ (i.e., L much larger than the correlation length ξ), so the scaling hypothesis is no longer expected to hold. The two MPS models discussed above already illustrate this: E_L at finite L and E_∞ have different singular behaviors near critical points. For the transverse-field Ising model, the results of Orús [18] showed that near g = 1 the entanglement E_L with block size L ≫ ξ behaves in a universal way governed by the central charge c of the conformal field theory. In brief, near criticality the singular behavior of entanglement under finite steps of RG reveals the critical point and the associated correlation-length critical exponent ν. However, under infinite steps of the RG transformation, the singular behavior is rendered different and appears non-universal for critical points not describable by a conformal field theory, such as the two MPS models discussed here [20].
But it is universal for critical points describable by a conformal field theory, such as the transverse-field Ising model. The results are shown in Fig. 7. The critical point (g_1 = g_2 = 0) lies at the intersection of four different regions. As seen from the figure, there are singularities across the lines (0, g_2) and (g_1, 0), and (0, 0) is the intersection of all four of these lines.

IX. CONCLUDING REMARKS

We have considered the entanglement of states under renormalization-group (RG) transformations and applied it to the ground states of the "matrix-product-state" Hamiltonians constructed by Wolf et al. For these models, the entanglement entropy of L consecutive spins does not scale logarithmically with L. Furthermore, the use of concurrence for one of the models can lead to a spurious critical point. Using the geometric entanglement under RG, we have found that near critical points the ground-state entanglement exhibits singular behavior. The singular behavior under finite steps of RG obeys a scaling hypothesis (similar to that of the free energy) and reveals the correlation length exponent. However, under infinite steps of the RG transformation, the singular behavior is rendered different. It is universal only when there is an underlying conformal-field-theory description of the critical point. Along the way, we have provided an upper bound for the entanglement per block, namely log D, where D is the dimension of the matrices in the MPS. This also shows that the more complex the ground state is, the larger the dimension of the representative MPS matrices we need to use. We conclude by posing the question of whether there is any significance to the function describing how E_L changes with the RG step. At the least, it is a quantity showing how the entanglement changes under the RG scale transformation; see Figs. 3 and 6. Under RG (i.e., as L increases), certain singular but non-critical features get washed out or smoothed away. But the singular behavior near criticality persists for large L.
The critical points of the two Hamiltonians (63) and (77) actually have different fates on approaching the fixed point. The former becomes an isolated point in the entanglement, whereas the latter becomes an algebraic singularity in the entanglement derivative, although both singularities are rendered different from those at finite L.
A unifying framework for fast randomization of ecological networks with fixed (node) degrees

Background

A good algorithm to generate random networks with prescribed degree distribution (which is identical to the issue of generating random binary matrices with fixed marginal totals) should have two properties: it should be able to generate any one among all possible networks having the given node degrees with the same probability, i.e. it should not tend towards the generation of networks having particular structural properties; and it should be able to generate truly random networks fast. Markov chains, where the randomization takes place in subsequent steps, each involving a small change in the network structure, represent a common solution to this problem. Several network randomization Markov chains have been shown to converge to the uniform distribution on their state space, that is, they have been shown to be able to generate truly random networks with prescribed degree distribution [1-5]. By contrast, most Markov chains exhibit an important limit: it is not clear how many randomization steps they require to ensure that the randomized network is truly random. The best known Markov chain approach for randomizing networks while preserving their degree sequence is the switching model (also known as rewiring, the switching chain, and swapping edges) [6,7,2,8]. It can be applied to different kinds of networks, being able to properly randomize bipartite networks, undirected networks or directed networks with given node degrees, by repeatedly switching the ends of non-adjacent edge pairs (with some additional rules required for the correct randomization of directed networks [2]). Yet, this method has a fundamental drawback: it requires a very large number of switches in order to ensure an unbiased randomization, a number which grows very rapidly with the size of the network (see, for example, [9]).
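The switching step can be sketched in a few lines (a minimal illustration written for this text, not the authors' reference implementation): pick two edges at random and swap their endpoints, rejecting any proposal that would create a self-loop or a multiple edge, so that every node's degree is preserved.

```python
import random
from collections import Counter

def switch_step(edges):
    """Attempt one degree-preserving switch on an undirected simple graph,
    stored as a set of frozenset({u, v}) edges. Returns True if performed."""
    e1, e2 = random.sample(list(edges), 2)
    a, b = tuple(e1)
    c, d = tuple(e2)
    new1, new2 = frozenset((a, d)), frozenset((c, b))
    # Reject switches that would create a self-loop or a multiple edge.
    if a == d or c == b or new1 in edges or new2 in edges:
        return False
    edges -= {e1, e2}
    edges |= {new1, new2}
    return True

# Example: randomize a 6-cycle; every node's degree stays fixed at 2.
edges = {frozenset((i, (i + 1) % 6)) for i in range(6)}
for _ in range(1000):
    switch_step(edges)
degrees = Counter(v for e in edges for v in e)
print(sorted(degrees.values()))   # [2, 2, 2, 2, 2, 2]
```

Each accepted switch changes only two edges, which is exactly why so many steps are needed before the chain forgets its starting network.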
A more recent Markov chain approach is the Curveball algorithm [10]. Experimentally, this chain has been shown to mix much faster than the corresponding switching chain [10]. Why the Curveball algorithm mixes faster than the switching model can be understood when thinking of both algorithms as games in which kids trade cards. That is, think of the Curveball algorithm as an algorithm that randomises the binary n × m bi-adjacency matrix of a bipartite network. Imagine that each row of the adjacency matrix corresponds to a kid, and the 1's in each row correspond to the cards owned by the kid. Then at each step in the Curveball algorithm, two kids are randomly selected and trade a number of their differing cards. Using this same analogy for the switching model, in each step two cards are randomly selected and traded if, firstly, they are different and, secondly, they are owned by different kids (note that various algorithms implementing similar approaches were discovered independently by Verhelst [4]). Intuitively, the Curveball algorithm is clearly a more efficient approach to randomise the card ownership among the kids. More formally, the Curveball algorithm is also based on switches, but instead of making one switch, several switches can be made in a single step, so that possibly exponentially many networks can be reached in a single step, in contrast with the switching model, where at most n^4 (the maximum number of possible edge pairs) networks can be reached in a single step.

Designed to randomize only bipartite networks, the Curveball algorithm permits the randomization of both species × locality matrices and bipartite ecological networks such as host-parasite and plant-pollinator ones. There is, however, an important reason why such a design requires an urgent upgrade. Notably, bipartite ecological networks have often been studied separately from food webs, even though all those networks belong to the same broader ecological class of 'resource-consumer' networks [11].
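The card-trading picture translates almost directly into code. The sketch below (an illustrative implementation written for this text, not the authors' original one) performs one Curveball trade on the row sets of a binary matrix: the tradeable cards are the elements in which the two rows differ, and they are reshuffled while each row keeps its size, so both row and column totals are preserved.

```python
import random

def curveball_trade(rows):
    """One Curveball trade on a bipartite network stored as a list of sets
    (rows[i] = set of column indices where row i has a 1)."""
    i, j = random.sample(range(len(rows)), 2)
    shared = rows[i] & rows[j]                 # cards both kids own (untradeable)
    pool = list((rows[i] | rows[j]) - shared)  # differing cards, up for trade
    k = len(rows[i]) - len(shared)             # how many cards kid i must end up with
    random.shuffle(pool)
    rows[i] = shared | set(pool[:k])
    rows[j] = shared | set(pool[k:])

# Example: row and column totals are invariants of the trade.
rows = [{0, 1, 2}, {2, 3}, {0, 4}, {1, 3, 4}]
col_sums_before = {c: sum(c in r for r in rows) for c in range(5)}
for _ in range(200):
    curveball_trade(rows)
col_sums_after = {c: sum(c in r for r in rows) for c in range(5)}
print(col_sums_before == col_sums_after)   # True
```

A single trade can move many cards at once, which is the intuitive reason the chain mixes faster than one switch at a time.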
Food webs belong to a different class of networks, that of directed networks. In such networks, nodes cannot be attributed unambiguously to two different classes, since the same node can be simultaneously a consumer and a resource (for example, a predator can be eaten by another predator of a higher trophic level) [12]. A third class of networks is that of undirected networks, which has importance in various fields, such as the social sciences and epidemiology, networks of this kind being well suited to represent contacts between persons, and which is also becoming increasingly relevant in the ecological context. In fact, there is a growing interest in the study of co-occurrence networks, that is, networks obtained by linking species found together more often than expected at random, and hence considered as potentially interacting (see, for example, [13-15]).

Although some attempts have recently been made to provide measures of network structure applicable to different kinds of ecological networks (see, for example, [16,17]), we are still very far from having a unifying analytical toolbox. Here we take a step further in this direction, by showing how the efficient Curveball algorithm can be extended to work also with one-mode directed networks and with undirected networks. Besides providing ecologists with a common procedure to analyze different ecological entities, this constitutes an important advance for network science in general, with the potential of bringing benefit to various disciplines.
Extending the Curveball algorithm

We now propose two extensions of the Curveball algorithm: the Directed Curveball algorithm, which samples directed networks, and the Undirected Curveball algorithm, which samples undirected networks. (Throughout this paper we use the convention that both directed and undirected networks contain no self-loops or multiple edges.) Both are Markov chain algorithms, which randomise networks by repeatedly applying small changes to them, in this case trades of elements.

The Directed Curveball algorithm

An efficient way to store a directed network G = (V, E) is by storing its adjacency list, which is also the most natural data structure to run the Directed Curveball algorithm on. The adjacency list of G is a list of sets A_v, one for each node v ∈ V, where the set A_v contains all out-neighbours of v. The Directed Curveball algorithm randomises a directed network by repeatedly applying trades to its adjacency list. A trade is defined as follows: (a) select two sets A_i and A_j at random; (b) let A_{i−j} be all nodes in A_i that are not in A_j and that are not equal to j, i.e. A_{i−j} = A_i \ (A_j ∪ {j}), and similarly let A_{j−i} = A_j \ (A_i ∪ {i}); (c) create a new set B_i by removing A_{i−j} from A_i and adding the same number of elements randomly chosen from A_{i−j} ∪ A_{j−i}; combine A_j \ A_{j−i} with the remaining elements of A_{i−j} ∪ A_{j−i} to form B_j.

Notice that the definition of A_{i−j} = A_i \ (A_j ∪ {j}) in step (b) ensures that no self-loop can be created at node j, since this would require j to be among the traded elements. Fig. 1 illustrates a trade of the Directed Curveball algorithm. We will refer to the number of elements exchanged as the size of a trade; for instance, the trade in Fig.
1 is of size two. It is possible for a trade to be of size zero; in this case the current network is repeated and we move on to the next trade. The Lemma below shows that all switches in the switching model for directed networks equal trades of size one in the Directed Curveball algorithm. The Directed Curveball algorithm, however, additionally allows trades of larger size. Intuitively, this could reduce the number of steps needed to obtain a random sample compared to switching chains, since a trade can introduce more randomness than a switch.

Proof. Let (x, y) and (u, v) be edges in a directed network G that are allowed to be switched. Then x ≠ v and u ≠ y, since otherwise this switch would introduce a self-loop. Furthermore, v ∉ A_x and y ∉ A_u, since otherwise the resulting directed network would have multiple edges. In particular, this implies y ∈ A_{x−u} and v ∈ A_{u−x}. Now if row x and row u are selected for a trade, then B_x = (A_x \ {y}) ∪ {v} and B_u = (A_u \ {v}) ∪ {y} are possible sets in step (c) that lead to exactly the two new edges (x, v) and (u, y).

In the Appendix we show that the Directed Curveball algorithm converges to the uniform distribution whenever the switching chain for directed networks does. That is, eventually, after a large number of trades, we obtain a network sampled from the uniform distribution. Note that, for certain node degrees, not all network realisations (i.e. networks with the given node degrees) can be obtained by applying switches to a given network. The simplest example is the oriented triangle (1, 2), (2, 3), (3, 1). No swap or trade can be applied, and hence we can never generate the triangle (1, 3), (3, 2), (2, 1), which is also a possible network. Adding a second procedure, i.e.
reorienting randomly chosen directed triangles ('hexagonal move'), solves this problem completely [2]. However, further work showed that not all of these triangles need to be re-oriented [18]. Theorem B3 in the Appendix can be used to create a fast algorithm that recognises the triangles which need to be re-oriented in a network, and that chooses one random orientation for each of them. Using the Curveball chain after this step delivers a uniformly sampled random network. Furthermore, depending on the purpose of the randomisation, this step might not even be necessary.

Berger et al. [18] proved that (in cases where only network topology matters, i.e. where the information about the identity of individual nodes can be discarded) the sole use of the switch chain permits sampling uniformly at random from the set of all possible networks (which is a subset of the full set of networks, including those that could be generated by triangle re-orientation). This would be enough to compare the frequency of network structural patterns such as motifs, nestedness, or C-score between empirical and randomised networks, since their proportion is not affected by the re-orientation step. Conversely, the re-orientation step discussed in the Appendix would be necessary to assess the significance of patterns affected by the identity of nodes (for example, the frequency of a specific directed edge (i, j) in all networks).
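For concreteness, a single trade of the Directed Curveball algorithm (steps (a)-(c) above) can be sketched in Python on a dictionary-of-sets adjacency list. This is a minimal illustrative sketch, not the implementation released with the paper; the function name and the dictionary representation are our own choices.

```python
import random

def directed_trade(adj, rng=random):
    """One trade of the Directed Curveball algorithm, applied in place.

    `adj` maps each node to the set of its out-neighbours (no self-loops,
    no multiple edges). A trade preserves all out- and in-degrees.
    """
    # (a) select two sets A_i and A_j at random
    i, j = rng.sample(sorted(adj), 2)
    # (b) tradeable elements; excluding j (resp. i) prevents self-loops
    a_ij = adj[i] - adj[j] - {j}   # A_{i-j} = A_i \ (A_j U {j})
    a_ji = adj[j] - adj[i] - {i}   # A_{j-i} = A_j \ (A_i U {i})
    pool = list(a_ij | a_ji)
    rng.shuffle(pool)
    # (c) redistribute the pool, keeping |A_i| and |A_j| fixed
    adj[i] = (adj[i] - a_ij) | set(pool[:len(a_ij)])
    adj[j] = (adj[j] - a_ji) | set(pool[len(a_ij):])
```

Repeating `directed_trade` many times realises the Markov chain; a trade of size zero simply leaves the network unchanged.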
The Undirected Curveball algorithm

The Undirected Curveball algorithm samples undirected networks with fixed node degrees. The adjacency list representation of an undirected network G = (V, E) is a list of sets A_i, where the set A_i contains the indices of the neighbours of vertex i. For undirected networks each edge {i, j} is represented twice in the adjacency list, since i ∈ A_j and j ∈ A_i. Furthermore, i ∉ A_i for all i, since networks do not contain self-loops. A trade in the Undirected Curveball algorithm is defined by the following steps: (a) randomly select two sets A_i and A_j; (b) let A_{i−j} be the set of elements in A_i that are not in A_j and not equal to j, i.e. A_{i−j} := A_i \ (A_j ∪ {j}), and analogously define A_{j−i} := A_j \ (A_i ∪ {i}); (c) create a new set B_i by removing A_{i−j} from A_i and adding the same number of elements randomly chosen from A_{i−j} ∪ A_{j−i}, and combine A_j \ A_{j−i} with the remaining elements of A_{i−j} ∪ A_{j−i} to form B_j; (d) for each node k ∈ B_i \ A_i, replace j by i in B_k, and similarly, for each l ∈ B_j \ A_j, replace i by j in B_l.

Step (b) ensures no self-loops are introduced and step (d) ensures that B represents an undirected network (i.e. that i ∈ B_j implies j ∈ B_i). Fig. 2 illustrates a trade in the Undirected Curveball algorithm. Note that the same trade can be made in the opposite direction, i.e. network B can be transformed back into A in a single trade. A proof of this (which is a necessary condition for an unbiased Markov chain) is provided in the Appendix.

The Undirected Curveball algorithm includes trades of size zero, which correspond to repeating the current network. Furthermore, Lemma 2 shows that any switch in the switching model for undirected networks corresponds to a trade of size one in the Undirected Curveball algorithm. In fact, Fig. 3 shows that for each switch in the switching model, there are two different trades of size one in the Undirected Curveball algorithm.

Lemma 2. Let G, G′ be undirected networks that differ by a switch. There are two trades of size one in the Undirected Curveball algorithm from G to G′.
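For concreteness, the undirected trade defined by steps (a)-(d) above can also be sketched in Python. As before, this is a minimal illustrative sketch under our own naming and data-layout assumptions (a dictionary of neighbour sets storing each edge twice), not the released implementation.

```python
import random

def undirected_trade(adj, rng=random):
    """One trade of the Undirected Curveball algorithm, applied in place.

    `adj` maps each node to its neighbour set; edge {i, j} is stored twice
    (j in adj[i] and i in adj[j]). Node degrees are preserved.
    """
    # (a) randomly select two sets A_i and A_j
    i, j = rng.sample(sorted(adj), 2)
    # (b) tradeable neighbours; excluding i and j prevents self-loops
    a_ij = adj[i] - adj[j] - {j}   # A_{i-j} := A_i \ (A_j U {j})
    a_ji = adj[j] - adj[i] - {i}   # A_{j-i} := A_j \ (A_i U {i})
    pool = list(a_ij | a_ji)
    rng.shuffle(pool)
    new_i, new_j = set(pool[:len(a_ij)]), set(pool[len(a_ij):])
    # (c) rebuild B_i and B_j with the same cardinalities
    adj[i] = (adj[i] - a_ij) | new_i
    adj[j] = (adj[j] - a_ji) | new_j
    # (d) restore symmetry in the sets of the traded nodes:
    # nodes that changed owner update their own neighbour sets
    for k in new_i & a_ji:         # k is in B_i \ A_i: replace j by i
        adj[k].discard(j)
        adj[k].add(i)
    for l in new_j & a_ij:         # l is in B_j \ A_j: replace i by j
        adj[l].discard(i)
        adj[l].add(j)
```

Note that step (d) touches only nodes drawn from the trade pool, so every edge remains stored in both directions after the update.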
Proof. Without loss of generality we may assume that G = (V, E) and G′ = (V, E′) differ by a switch from {x, y} and {u, v} to {x, v} and {u, y}. Let {A_1, . . ., A_n} be the adjacency set representation of G. Then y ∈ A_{x−u}, since the edge {x, y} is an edge of G, the edge {u, y} is not, and y cannot be equal to u, since {u, y} ∈ E′ and G′ has no self-loops. Similarly, we find that v ∈ A_{u−x}, and hence the trade that swaps y and v between rows x and u results in the network G′. Similarly, there is a second trade that generates G′, namely the trade that exchanges x and u between the sets A_y and A_v. ∎

Analogously to the other versions of the Curveball algorithm, the Undirected Curveball algorithm additionally allows trades of larger size, corresponding to making several switches at once. In the Appendix we prove that the Undirected Curveball algorithm samples uniformly at random after applying a large number of trades.

Mixing time and experimental stopping times

There are some obvious major differences between the typical implementation of switch-based algorithms and that of Curveball algorithms, stemming from the adjacency-list representation of a network used by the latter, compared to the edge-list representation used by the former. For example, the most common procedure used for fixed-degree network randomisation [19] consists, at each step, in sampling two pairs of linked nodes, a-b and c-d, and rewiring them in the form a-d and c-b if and only if neither a-d nor c-b already exists in the network. It is clear that this procedure has two important limitations in terms of performance, one deriving from the probability of performing a successful swap, and the other deriving from the need to check that performing a given swap will not result in the generation of multiple edges. While the first limitation is strongly dependent on network structure (and we may also imagine situations where the probability of performing a successful swap is identical to the
probability of performing a successful Curveball trade), the additional check for multiple edges is a serious bottleneck limiting the efficiency of swap-based algorithms (and one that worsens with network size). As a consequence, a performance comparison (for example in terms of CPU time needed to properly randomise a network) between typical swap implementations and their Curveball counterparts would be a trivial exercise. However, the most important question for practitioners as well as theoreticians is how many steps the Curveball algorithms have to run from an initial probability distribution (from which an initial state is taken) to sample from a probability distribution which is close to the uniform distribution. This number is defined as the total mixing time, i.e. the number N of iteration steps in the Curveball algorithms.

The Curveball algorithm has been shown experimentally to run much faster than the switching algorithm [10]. Intuitively, this property should extend also to the directed and undirected versions of the Curveball. We tested this experimentally, by comparing the total mixing time of the Curveball algorithm with that of the switching model in: two sets of random (i.e. Erdős-Rényi) networks including, respectively, 100 directed and 100 undirected networks having a number of nodes (V) extracted at random between 100 and 1000, and a number of edges equal to E × V, with E varying randomly between 5 and 50; two sets of power-law (i.e. Barabási-Albert) networks including, respectively, 100 directed and 100 undirected networks having a number of nodes (V) extracted at random between 100 and 1000; and two empirical networks, namely a directed food web (listing all trophic interactions recorded in Little Rock Lake, Wisconsin, in the United States of America [20], having 183 nodes and 2476 edges) and an undirected co-occurrence network (representing ecological interactions between bacteria [21], having 316 nodes and 1086 edges).
In order to compare the asymptotic mixing times of these algorithms, that is, to assess the increased performance stemming from trading multiple elements at once compared to performing individual swaps, while excluding the above-mentioned limitations emerging from the different algorithms' design, we implemented a swap algorithm in the same form as the Curveball but with the additional constraint of permitting only trades of size 1 between adjacency lists.

To track the two algorithms' convergence towards the uniform distribution, we used, as a proxy, the degree of network perturbation, which we measured as the fraction of edges in the target network differing from those in its randomised counterpart [10]. For both the Curveball and the (modified) swap procedure, we recorded network perturbation every 100 steps (i.e. trades and swaps), performing a total of 25,000 steps.

In Fig. 4 we show how the Curveball algorithms converge much faster than the switching chain in both the random (Erdős-Rényi) and real-world networks. The improvement of the Curveball algorithms over the switching chain was less pronounced for the power-law (Barabási-Albert) networks. This can be explained by the fact that all vertices in these networks have low out-degree (1, 2 or 3), and hence the number of elements traded by the Curveball at each step is often one (i.e. equivalent to a step in the swap procedure). Furthermore, due to the power-law degree distribution, many of the edges will have the same target, further limiting the size of trades.

All algorithms were implemented in the R and Python programming languages. All the code used in our analyses is publicly available [22]. We also provide user-friendly functions implementing the new algorithms in both Python and R as Supplementary material.
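The perturbation proxy described above is simple to compute. The following sketch (the function name and adjacency-dict input format are our own illustrative choices, not the released code [22]) returns the fraction of the original edges that are no longer present in the randomised network.

```python
def perturbation(original, randomized):
    """Fraction of edges of `original` that are absent from `randomized`.

    Both arguments are adjacency lists (dicts mapping nodes to neighbour
    sets) over the same node set. Works for directed networks, and for
    undirected networks stored with each edge in both neighbour sets.
    """
    n_edges = sum(len(s) for s in original.values())
    kept = sum(len(original[v] & randomized[v]) for v in original)
    return 1.0 - kept / n_edges
```

Recording this value every 100 trades, as in the experiment above, yields convergence curves of the kind shown in Fig. 4.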
Concluding remarks

The increasing understanding of natural systems' complexity is making clear how the current separation between food-web science and the study of mutualistic and antagonistic bipartite ecological networks may hide important patterns and processes possibly responsible for the emergence and maintenance of diversity [23,24]. New analytical tools are needed to overcome this issue, and there is still a long way to go towards a truly organic theory of ecological interactions. By providing a unifying framework for the randomisation of all kinds of networks relevant to the ecological field, we hope that this work may represent a further step in that direction.

Finally, we have focussed here mainly on ecological networks, as this was the main motivation behind the development of the original Curveball, and behind our interest in extending its application beyond bipartite networks. Nevertheless, the randomisation of large directed and undirected networks is a compelling problem in several fields other than ecology. For example, investigating the structure of social networks is becoming more and more important to improve our understanding of complex societal mechanisms [25] and disease-spread dynamics [26]. We are confident that our new methods will be useful in those fields too.

Fig. 1. Illustration of a trade in the Directed Curveball algorithm. Vertices i and j have several common out-neighbours, and i is an out-neighbour of j. Those are removed to obtain the set {1, 2, 5, 6, 7} of nodes that can be traded. The nodes 5 and 7 are selected as new neighbours for i; the trade results in the network on the right.

Fig. 2. Illustration of a trade in the Undirected Curveball algorithm. Nodes i and j are neighbours and have a single common neighbour. The nodes available for trading are nodes 1, 2, 4 and 5. A trade of size one is performed, exchanging nodes 1 and 4; hence in step (d) the sets A_1 and A_4 are updated.

Fig.
3. A switch corresponds to a trade of size one in the Undirected Curveball algorithm. Notice that each switch can be realised by two distinct trades: the switch from {x, y} and {u, v} to {x, v} and {u, y} can be realised by selecting A_x and A_u and trading y for v, or by selecting A_y and A_v and trading x for u.

Fig. 4. The degree of network perturbation (measured as the fraction of edges in a randomised network differing from those of the original network) for an increasing number of steps in the Markov chains of the modified swap procedures (red) and of the Curveball algorithms (blue) while randomising: a set of one hundred random (Erdős-Rényi) directed networks; a set of one hundred random (Erdős-Rényi) undirected networks; a food web; and a co-occurrence network. The randomisation of the food web and of the co-occurrence network was replicated one hundred times. Solid lines indicate the average values over the replicates, while shaded areas indicate standard deviations.
Parvin Overexpression Uncovers Tissue-Specific Genetic Pathways and Disrupts F-Actin to Induce Apoptosis in the Developing Epithelia in Drosophila

Parvin is a putative F-actin binding protein important for integrin-mediated cell adhesion. Here we used overexpression of Drosophila Parvin to uncover its functions in different tissues in vivo. Parvin overexpression caused major defects reminiscent of metastatic cancer cells in developing epithelia, including apoptosis, alterations in cell shape, basal extrusion and invasion. These defects were closely correlated with abnormalities in the organization of F-actin at the basal epithelial surface and of integrin-matrix adhesion sites. In the wing epithelium, overexpressed Parvin triggered increased Rho1 protein levels, predominantly at the basal side, whereas in the developing eye it caused a rough eye phenotype and severely disrupted F-actin filaments at the retina floor of pigment cells. We identified genes that suppressed these Parvin-induced dominant effects, depending on the cell type. Coexpression of both ILK and the apoptosis inhibitor DIAP1 blocked Parvin-induced lethality and apoptosis and partially ameliorated cell delamination in epithelia, but did not rescue the elevated Rho1 levels, the abnormal organization of F-actin in the wing, or the assembly of integrin-matrix adhesion sites. The rough eye phenotype was suppressed by coexpression of either PTEN or Wech, or by knock-down of Xrp1. Two main conclusions can be drawn from our studies: (1) high levels of cytoplasmic Parvin are toxic in epithelial cells; (2) Parvin affects, in a dose-dependent manner, the organization of the actin cytoskeleton in both wing and eye epithelia, independently of its role as a structural component of the ILK-PINCH-Parvin complex that mediates the integrin-actin link. Thus, distinct genetic interactions of Parvin occur in different cell types, and second-site modifier screens are required to uncover such genetic circuits.
Introduction

Epithelial tissue morphogenesis involves cell shape changes that are induced by tightly regulated interactions between adhesion proteins and the associated actin cytoskeleton. Thus, proteins that modify either the adhesive properties of cells or the dynamics of actin organization have profound effects on epithelial patterning. In pathological situations, including cancer, abnormal protein expression drives cells to acquire metastatic properties and break epithelial integrity. Integrins comprise a major cell surface protein family that mediates cell adhesion to the extracellular microenvironment, and their function is essential for several tissue morphogenetic events during development [1]. Inside the cell, integrins organize the assembly of a large protein network, the adhesome, which mediates linkage with the actin cytoskeleton [2]. Parvin is a core component of the integrin adhesome and binds directly to integrin-linked kinase (ILK). Members of the highly conserved Parvin protein family contain two tandem unconventional Calponin-Homology (CH) domains [3]. In contrast to mammalian α-, β- and γ-parvin, invertebrates have a single parvin homolog [4]. Genetic data in mice have demonstrated the important role of Parvin in integrin-mediated adhesion, and our previous genetic analysis in Drosophila revealed that Parvin is also essential for adhesion in muscle and wing epithelia [5,6]. In addition to these developmental functions, recent studies have linked β-Parvin expression to tumor suppressor effects during breast cancer formation in mice [7]. Misexpression studies and modifier screens aimed at identifying genetic circuits regulated by Parvin are of great importance to elucidate the tissue-specific molecular functions of Parvin in the context of a whole organism.
Here we took advantage of the Drosophila system to determine the effects of high levels of Parvin at the cellular level in several tissues and to investigate the tissue-specific suppression or enhancement of these defects by specific genes.

Leica SP5 software was used for quantitative analysis of the immunolabelled tissues. The compared images were acquired with identical settings of laser power, gain and iris while avoiding saturation of pixel intensity. Selected areas were outlined and the total intensity was measured and plotted using Excel. Images from adult eyes were obtained using either a Leica DFC500 cooled CCD camera or a Leica TCS LSI system. All images were assembled in Photoshop 7 and labelled in Corel Draw 12.

Figure 1. Overexpression of Parvin results in morphogenetic defects in various tissues of the adult fly. Images were collected with a cooled CCD camera for various adult structures. Thoracic bristles present in wild type (A) were missing upon expression of UAS::Parvin-GFP under ptcGal4 (A′). The leg of a wild-type adult fly (B) was malformed when UAS::Parvin-GFP was expressed under ptcGal4 (B′). Ocellar bristles and arista from the head of a wild-type adult fly (C) were missing upon UAS::Parvin-GFP expression under ptcGal4 (C′). A compound eye from a wild-type adult fly (D) took on a rough appearance when UAS::Parvin-GFP was expressed under longGMRGal4 (D′).

Parvin Overexpression during Development Causes Morphogenetic Defects

In mammalian cells, α-Parvin has an anti-apoptotic function whereas β-Parvin promotes apoptosis [10,11]. We followed a gain-of-function approach utilizing the UAS/Gal4 system [12] to overexpress Parvin in several tissues during development (Table 1). We focused mainly on the wing epithelium and the eye, using ptcGal4, enGal4 and longGMRGal4 drivers.
Overexpression of Parvin by ptcGal4 resulted in several abnormal developmental defects, including loss of thoracic bristles, dysplasia in legs, and loss of arista and ocellar bristles in the head, whereas a fraction of flies died during pupal development (Figure 1A′-C′). Parvin overexpression driven by longGMRGal4 caused a rough eye phenotype (Figure 1D′). Finally, induction of Parvin expression with enGal4 mostly caused lethality, while the surviving flies had wing defects (Figure 2L2, L3). Fly morphogenesis was not interrupted by similar levels of overexpression of several domain-deletion UAS::Parvin-GFP constructs (Table 2), suggesting that combinatorial interactions of Parvin domains are required to elicit a lethal effect and that only high levels of full-length Parvin are detrimental for the whole organism.

Wing discs expressed either Gal4 alone (A-C), or Parvin-GFP (green, D-F), or ILK, DIAP1 and Parvin-GFP together (green, G-I), and were probed for activated caspase-3 (magenta, A, D, G; white A′, D′, G′), active p-JNK (magenta, B, E, H; white B′, E′, H′), or lacZ expressed from the puc locus (magenta, C, F, I; white C′, F′, I′).

The morphogenetic defects caused by Parvin-GFP overexpression driven by enGal4 suggested a pro-apoptotic function for Parvin in Drosophila, similar to β-Parvin in mammalian cells [11]. To further verify whether Parvin-GFP overexpression caused apoptosis, we examined the levels of active Caspase-3. Active Caspase-3 was undetectable in control enGal4 wing discs (Figure 2A, A′) or in those expressing a CH2-domain deletion Parvin mutant fused to GFP (UAS::ParvinΔCH2-GFP) (Figure 3A). In contrast, Parvin-GFP overexpression induced a large increase in active Caspase-3, specifically in the posterior compartment of the wing disc, compared to only a few apoptotic cells in the anterior compartment, which serves as an internal control (Figure 2D, D1′, K).
We used a commercially available antibody against active Caspase-3 that was recently reported to recognize not only Caspase-3 but also additional substrates cleaved in a Drosophila Nedd2-like caspase (DRONC)-dependent manner [13]. Thus, we concluded that Parvin-GFP overexpression induced elevation of the Caspase-9-like initiator DRONC, which resulted in apoptosis. Apoptotic stimuli are known to activate JNK signaling in the imaginal discs [14]. We examined whether Parvin-induced apoptosis is mediated by the JNK pathway, by immunostaining for the phosphorylated active form of JNK. The Drosophila homolog of JNK, basket, was highly phosphorylated specifically in the posterior compartment of the wing disc (Figure 2E-E′), compared to low levels of active JNK in control discs (Figure 2B-B′). We used the downstream target, puckered, as another marker for activation of the JNK pathway [15]. Cells ectopically expressing Parvin-GFP strongly upregulated the puc-lacZ reporter in the posterior compartment (Figure 2F-F′), whereas in control discs, puc-lacZ was detected only in the stalk cells (Figure 2C-C′). Thus, JNK signaling was activated by increased levels of Parvin-GFP within the wing imaginal disc.

Increased Levels of Both ILK and DIAP1 Suppress Parvin-induced Apoptosis

Overexpression of Parvin-GFP driven by enGal4 resulted in lethality, mainly during pupal development. Only 20% of the late pupae developed into adult flies (Figure 2K), and these exhibited various developmental defects in the wings, including tissue loss and vein defects (Figure 2L2, L3). To dissect the molecular mechanism of UAS::Parvin-GFP-induced apoptosis, we coexpressed Parvin-GFP with either ILK, a binding partner of Parvin, or Drosophila Inhibitor of Apoptosis Protein (DIAP1) [16]. Coexpression of DIAP1 alone largely suppressed the Parvin-GFP-induced dominant lethality and apoptosis (73% rescue of adult viability, n = 116) (Figure 3B, D, E).
Coexpression of ILK [8] was less efficient at reducing the activation levels of DRONC, but significantly rescued lethality (75% rescue of adult viability, n = 120), similarly to DIAP1 expression alone (Figure 3).

Parvin Overexpression in the Wing Epithelium Leads to Cell Delamination, Cell Invasion and MMP1 Secretion

To further investigate the cellular consequences of Parvin-GFP overexpression, we used ptcGal4 to drive expression in a thin stripe of cells anterior to the anteroposterior (A/P) boundary of the wing disc. Overexpression of UAS::Parvin-GFP triggered cell invasion in areas proximal to the ptcGal4 expression domain (Figure 4A). In contrast, overexpression of UAS::ParvinΔCH2-GFP, which was expressed at even higher levels than full-length Parvin-GFP [6], did not cause epithelial morphogenetic defects, indicating that the invasive phenotype was not a consequence of protein overexpression in general (Figure 4B). High-magnification optical sections along the apical/basal axis of the wing pouch revealed a large reduction in Parvin-GFP-expressing cells, whereas cells expressing UAS::ParvinΔCH2-GFP were maintained within the ptcGal4 domain (Figure 4E1-F1). The UAS::Parvin-GFP-expressing cells were extruded toward the basal side of the epithelium, where they acquired invasive properties that rendered them capable of migrating along the basal side of the epithelium and spreading distant from the ptcGal4 expression domain (Figure 4E2, E3, E4, F2, F3, F4, G). Several cells displayed small pyknotic nuclei indicative of apoptosis (Figure 4F3″). Cell invasion was consistent with ectopic induction of matrix metalloproteinase-1 (MMP1) along the ptcGal4-expressing region (Figure 4C). MMP1 is a well-established effector of cell invasion that is upregulated upon JNK activation and is normally expressed only in the stalk cells of the wing disc (Figure 4C) [17].
To verify that a threshold level of UAS::Parvin-GFP is required to induce the invasive cell behavior, we coexpressed a UAS::RNAi construct known to knock down Parvin [6]. The resulting moderate levels of Parvin-GFP expression along the ptcGal4 domain did not cause migration of these cells away from their original position (Figure 4D).

Parvin Overexpression in the Wing Epithelium Results in Loss of Cell-matrix Adhesion and Extracellular Matrix Disassembly without Affecting Cadherin Levels

In the wing imaginal discs, integrin localizes largely in clusters containing adhesome proteins on the basal side of the epithelium, resembling the focal adhesions of mammalian cells [6,18]. The ectopic elevated levels of MMP1 upon UAS::Parvin-GFP overexpression prompted us to further investigate cell-matrix adhesion organization. LamininA is a major component of the extracellular matrix (ECM) and has been shown to localize basally in the wing disc, where it displays a fibrillar distribution [19]. We found that overexpression of Parvin caused disorganization of LamininA in the posterior compartment of the wing epithelium. LamininA was reduced in certain areas and accumulated in others, displaying a disordered pattern of distribution (Figure 5A, A1-A1″). Similarly, the typical punctate integrin localization at the focal contact-like structures on the basal side of the wing epithelium was severely affected, specifically in the posterior compartment, where large areas of the basal epithelium lacked integrin deposition (Figure 5B). Enabled (Ena) plays a role in the elongation of F-actin barbed-end filaments, and it was recently shown that it is expressed in the wing disc [20,21]. We found that within the anterior compartment, Ena accumulated basally at the focal contact-like structures, similarly to integrins and other integrin adhesome proteins [6,18], whereas in the posterior compartment expressing UAS::Parvin-GFP, Ena was largely diminished (Figure 5C, E-E′).
In contrast, in the middle and apical areas of the disc, Ena distribution was not affected, suggesting that its basal reduction was most likely a consequence of disorganized cell-matrix adhesion sites (Figure 5D-D′). Thus, we concluded that in the basal wing epithelium high levels of Parvin-GFP disrupt integrin-matrix adhesion sites. Cadherin downregulation and initiation of the epithelial-mesenchymal transition (EMT) are typical features of cells acquiring invasive properties [22]. Although the majority of the cells expressing UAS::Parvin-GFP were extruded on the basal side of the posterior wing epithelium, the amount and pattern of cadherin distribution were unaffected in the remaining cells that maintained their plasma membrane on the apical side of the disc (Figure 5F-G). We therefore concluded that Parvin overexpression did not trigger EMT in the wing epithelium.

Features of the Wing Epithelial Cells Expressing High Levels of Parvin-GFP

Upon Parvin-GFP overexpression in the posterior wing compartment, we noticed a mosaic expression of the transgene (Figures 2, 3, 5, 6). Certain areas within the enGal4 domain, notably in the hinge and notum, were not labelled for Parvin-GFP, although they properly expressed Engrailed and retained their posterior compartment identity (Figure 6A-C). In these cells enGal4 was able to direct expression of a UAS::ABDMoesin-RFP transgene (Figure 6D-F), suggesting that the lack of Parvin-GFP labeling was not due to defective enGal4 activity. In agreement with this, when we probed wing imaginal discs with an antibody against Parvin, we found that in certain areas of the epithelium where Parvin-GFP was undetectable, high levels of the protein were present, as expected from the overexpression (Fig. 7A-D). In some of these areas, we found apoptotic cells with basally located pyknotic nuclei (Fig. 7C). We concluded that in these cells GFP could be destabilized as a consequence of ongoing apoptosis.
In other areas of the disc, undetectable Parvin-GFP was correlated with a high density of nuclei (Fig. 7B). This could reflect newly proliferating cells contributing to the regeneration of the damaged epithelium [23]. To address whether the mosaic expression of Parvin-GFP within the wing epithelium and cell delamination along the apicobasal axis of the blade were also accompanied by changes in cell shape, we examined the organization of the F-actin cytoskeleton. In the most apical area of the wing blade the tissue was folded and the posterior compartment appeared shrunken, while the amount and cortical distribution of F-actin appeared normal. From the location of the nuclei in optical cross-sections, obtained in the region between the dorsal-ventral boundary in the middle of the wing pouch, it was evident that cells were shorter (Figure 8A, G, I). In the middle area of the wing disc, cells expressing Parvin-GFP occupied a larger region of the wing blade, whereas cell shape, as highlighted by F-actin, was similar to that of the flanking cells in the anterior compartment that did not express high levels of full-length Parvin-GFP (Figure 8C). On the basal side, Parvin-GFP-expressing cells occupied almost the entire posterior wing blade, but they were missing from the regions flanking the wing margin (Fig. 8E). The basal organization of F-actin was completely disrupted (Fig. 8E, E′). Actin filaments accumulated ectopically in some areas of the wing pouch cells and were missing from others. The observed gaps contained pyknotic nuclei, indicating areas of dead delaminated cells (Fig. 8E, E″), in accordance with previous studies describing the basal extrusion of dead cells in the wing epithelium [23,24]. As a consequence of the damaged epithelium, the basal cell periphery appeared enlarged and irregularly shaped (Figure 8E′).
Coexpression of either ILK or DIAP1 with Parvin-GFP noticeably ameliorated the cell delamination at the basal side (Figure 9), whereas simultaneous coexpression of both ILK and DIAP1 further reduced cell extrusion, as was evident from the reduced number of pyknotic nuclei accumulated basally (Figure 8B, D, F). However, in the wing blade F-actin organization was only modestly ameliorated by coexpression of both ILK and DIAP1. Actin filaments, instead of decorating the outline of the cell, extended to the periphery and remained tangled, resulting in a disordered meshwork pattern (Figure 8B-B′, D-D′, F-F′, H, I). To test whether the disorganized F-actin correlates with abnormal cell-matrix adhesion mediated by increased levels of Parvin-GFP, rather than being a consequence of apoptosis, we examined the distribution of integrins and LamininA in discs coexpressing ILK and DIAP1, where apoptosis was rescued (Fig. 2). No improvement in the abnormal basal organization of either integrin or LamininA in the wing epithelium was observed (Figure 10A-B). Thus, the defects in integrin-mediated adhesion on the basal side of the epithelium upon Parvin-GFP overexpression are not a consequence of Parvin-induced apoptosis, but rather a distinct effect that correlates with abnormalities in the organization of the actin cytoskeleton.

Parvin Overexpression Induces Up-regulation of Rho1 at the Basal Side of the Epithelium

The Parvin-GFP-induced alterations in the wing epithelium were highly reminiscent of those observed upon Rho1 overexpression [25,26]. We found that cells overexpressing Parvin-GFP triggered a substantial increase in Rho1 protein levels, mostly on the basal side (Figure 11A, E), whereas Rho1 accumulation increased only modestly in the middle and most apical areas of the epithelium (Figure 11B, E).
However, the increase in Rho1 levels represented a distinct effect, different from Parvin-induced apoptosis, because the elevated Rho1 levels were unaffected even when both ILK and DIAP1 were coexpressed (Figure 11C-E). Diaphanous (Dia) is one of the main Rho1 downstream effectors. However, as previously found in wing discs [25], Rho1 elevation did not coincide with increased Dia levels upon Parvin-GFP overexpression (Figure 12).

Parvin Overexpression Disrupts F-actin Stress Fibers in the Pigment Cells of the Pupal Retina

Parvin overexpression by longGMRGal4 caused a rough eye phenotype (Table 1, Figure 1). This Gal4 driver is expressed in all cell types of the eye (pigment, cone and photoreceptor cells) [27]. The elavGal4 and sevGal4 drivers, which limit expression of Parvin-GFP to only the photoreceptor cells [28], or to specific photoreceptor and cone cells [29], respectively, did not cause any eye roughening (Table 1). Thus, the rough-eye phenotype is most likely caused by overexpression of Parvin in the pigment cells. Several morphogenetic defects during eye development could result in final eye roughening [30]. We therefore examined the organization of F-actin in both 3rd instar larvae and at 75% of pupal development (p.d.). The latter developmental stage was selected because in the retinal floor F-actin displays a highly ordered structure of stress fibers within the pigment cells encircling the cone cells [31]. We did not find any defects in F-actin organization in the eye imaginal discs from 3rd instar larvae (data not shown). In contrast, when we examined retinas from late pupae, we found complete disorganization of the actin stress fiber arrays in the retina floor, whereas retinas expressing the truncated UAS::ParvinΔCH2-GFP form appeared normal (Figure 13A, B). Thus, in the pupal retina Parvin-GFP overexpression severely disrupted F-actin stress fiber organization on the basal side of the pigment cells, similar to the wing epithelium phenotype.
Genetic Interactors of Parvin in the Eye

The homozygous longGMRGal4 flies had wild type-like eye morphology when kept at 25°C (Figure 14A). In contrast, flies overexpressing Parvin-GFP under longGMRGal4 displayed distorted ommatidia and mildly rough eyes (Figure 14B). This phenotype was sensitive to the copy number of both the longGMRGal4 and UAS::Parvin-GFP transgenes (data not shown), and could therefore be exploited to identify genetic suppressors and enhancers [31]. We hypothesized that if Parvin-GFP overexpression compromised integrin-containing adhesion sites, as we found in the wing epithelium, then coexpression of an integrin heterodimer (αPS1βPS or αPS2βPS) would ameliorate the Parvin-induced phenotype [32]. In contrast, we found that elevated levels of either of the two coexpressed integrin heterodimers mildly enhanced the UAS::Parvin-GFP induced rough-eye phenotype (Table 3). However, because the levels of integrin expression are not accurately controlled in this experimental setting, it is plausible that high levels of integrin expression could not reverse the Parvin-induced rough eye phenotype. Thus, we concluded that a tight balance in the intracellular amount of integrins appears to be crucial for proper eye development. UAS::ILK weakly suppressed the Parvin-induced rough-eye phenotype (Table 3). Surprisingly, coexpression of UAS::Wech-GFP, an ILK binding protein [33], completely suppressed the phenotype (Figure 14C). The morphology and organization of the ommatidia remained intact when UAS::Wech-GFP was overexpressed alone under longGMRGal4 (Table 3). A strong suppressive effect was also achieved by coexpression of UAS::PTEN [34] (Figure 14E). In contrast, the catalytically inactive PTEN C124S mutant [35] was a poor suppressor, suggesting that enzymatically active PTEN is required to modulate the effects of Parvin.
Next we conducted a dominant-modifier screen using chromosomal deficiency lines of the third chromosome (Bloomington kit) that covered almost 40% of the fly genome. In the first round we tested 111 deficiencies covering almost the entire 3rd chromosome and found 4 suppressors and 12 enhancers. We further narrowed down three genomic regions that were dominant modifiers (present as just one copy) of the Parvin-induced rough eye. The cytogenetic regions encompassing 70B2;70C2 {Df(3L)Exel6119} and 91A5;91F1 {Df(3R)ED2} were identified as strong suppressors (Figure 14G, I), whereas the region 93C6;94A4 {Df(3R)e-GC3} was an enhancer (Figure 14D). To identify candidate genes, we performed individual knock-down of 391 specific genes in the eye, utilizing UAS::IR lines [36] for the majority of the genes located in the identified genomic regions. No candidate gene was identified for the dominant suppressive effect of 70B2;70C2. Knock-down of Xrp1 (CG17836, 91D3-D5) within Df(3R)ED2 suppressed the rough eye as efficiently as removal of one copy of the genomic region 91A5;91F1 (Figure 14K). In addition, knock-down of Eip93F (CG18389, 93F14), located in the genomic region 93C6;94A4, enhanced the rough eye, similarly to Df(3R)e-GC3 (Figure 14F). Lastly, knock-down of the genes encoding βPS integrin, Zasp52, and the transgelin homolog Chd64 (CG14996) all enhanced the Parvin-induced rough-eye phenotype (Figure 14H, J, L).

Discussion

Parvin proteins are highly conserved and participate in the assembly and function of the integrin adhesome [3,6]. Here we employed the UAS/Gal4 system to investigate additional functions of Parvin upon overexpression in a tissue-specific manner and to identify novel genetic interactions in the wing and the eye (Figure 15). We showed that Drosophila Parvin promoted apoptosis when overexpressed in vivo, similar to mammalian β-Parvin in HeLa cells [11].
Expression of β-Parvin in breast cancer cells was recently shown to inhibit tumor progression and cell proliferation [7], suggesting that our study of the cellular and molecular changes associated with Parvin overexpression in Drosophila may be relevant to cancer pathology. At the cellular level we demonstrated that overexpressed Parvin induced alterations in the organization of the actin cytoskeleton, disruption of cell-matrix adhesion, cell invasion and cell delamination. Mechanistically, we showed that overexpressed Parvin causes JNK activation and enhanced MMP1 levels. We also revealed a functional link between Parvin and the subcellular distribution of Rho1. Interestingly, we showed that these Parvin-induced signaling effects do not depend on its interaction with ILK. Among the three components of the ILK/PINCH/Parvin complex, only overexpression of full-length Parvin induced ectopic apoptosis and excessive lethality at the larval and pupal developmental stages [6]. Nevertheless, in the wing imaginal discs overexpression of other components of the integrin adhesome, such as tensin and paxillin, also results in apoptosis and lethality, including activation of the JNK pathway and modulation of Rho1 activity, respectively [37,38]. We showed that overexpression of Parvin increases Rho1 protein levels predominantly at the basal side of the wing epithelium, although loss of Parvin did not cause a reciprocal reduction of Rho1 levels [5,6]. Given previous reports that mammalian Parvins interact with two regulators of the small GTPase family, the GEF αPIX and the GAP CdGAP, respectively [39,40], one hypothesis would be that high levels of Parvin sequester these factors and interfere with their interaction with Rho1. As a consequence, Rho1 is released from the apicolateral side, where it is normally enriched [41].
The elevated Rho1 levels in the basal compartment of the epithelium could explain the formation of ectopic actin accumulations, in accordance with previous studies [42]. As already described, Rho1 is able to induce JNK-dependent apoptosis and F-actin organization defects in wing epithelial cells [25,26]. Therefore, it is plausible that the elevated JNK activity observed upon Parvin overexpression is caused by the aberrant basal elevation of Rho1. Taking our findings together, we propose that the Parvin-induced cellular defects in the wing epithelium are mediated by increased levels of Rho1; however, we cannot rule out a putative role of additional unidentified factors that are activated downstream of Parvin independently of Rho1. We recently showed that coexpression of ILK together with Parvin-GFP in the mesoderm is sufficient to completely rescue Parvin-induced lethality and to control Parvin subcellular localization [6], suggesting that coupling of Parvin to ILK could have a protective effect on epithelial viability. We performed rescue experiments to investigate whether Parvin function in the wing epithelium is mechanistically linked to its interaction with ILK, by coexpressing Parvin with ILK. Expression of ILK alone did not completely rescue the dominant effects of Parvin overexpression in the developing wing epithelium, had only a mild suppressive effect on the rough eye phenotype, and did not change the subcellular distribution of Parvin-GFP in the wing epithelial cells. Both the JNK activity and the increase in Rho1 protein levels were also not affected by ILK coexpression.

[Figure 11 legend: wing discs coexpressing UAS::ILK and UAS::DIAP1 (green, C, D; white C″, D″), driven by enGal4 in the posterior compartment, probed for Rho1 (red, A-D; white A′-D′) and stained with DAPI to visualize nuclei (blue, A-D; white A‴-D‴). (E) Box-and-whisker plot of Rho1 levels indicating the means (vertical lines in the middle of the boxes) of measurements taken in apical, middle and basal focal planes. All individual measurements are superimposed on the box-and-whisker plots and are indicated by the same symbol in all focal planes to allow direct comparison of the variation in pixel intensity. Arrows: closed areas in the posterior and anterior compartments of the wing pouch expressing (right) or not expressing (left) UAS::Parvin-GFP. The anterior part of the wing disc serves as an internal control. doi:10.1371/journal.pone.0047355.g011]

Even when high levels of ILK are present, the putative interaction of Parvin with GTPase regulators is not disturbed and the imbalance in Rho1 subcellular distribution is maintained. This is not unexpected, given that both αPIX and CdGAP interact with the N-terminal region of Parvin, whereas ILK binds to the C-terminus. These findings demonstrate that the functional interplay between Parvin and ILK depends on the cell context and that Parvin interacts with other proteins and performs additional roles. In addition to functioning as a structural element of the integrin-actin link, it also acts as a dosage-dependent modulator of actin cytoskeleton organization and cell homeostasis in the developing epithelium, by modulating the subcellular distribution of Rho1. Because overexpression of Parvin caused extensive apoptosis in the wing epithelium, to mechanistically uncouple the Parvin-induced cellular defects from Parvin-induced apoptosis we performed rescue experiments by coexpressing Parvin and DIAP1, which blocks apoptosis by inhibiting both the initiator caspase DRONC and the effector caspases DrICE and Dcp-1 [43]. DIAP1 alone did not efficiently suppress the cellular defects of Parvin in the wing. Both ILK and DIAP1 had to be coexpressed to completely rescue the lethality, presumably by coupling the reduction of excessive cytoplasmic Parvin by ILK with the inhibition of DRONC-mediated apoptosis by DIAP1.
Coexpression of ILK and DIAP1 rescued both cell apoptosis and cell extrusion in the wing pouch cells, but not in the hinge and notum. These findings were not entirely unexpected, given previous documentation of regional differences within the wing imaginal disc regarding the differential requirement of actin regulators for epithelial integrity [44]. However, consistent with our results from the ILK rescue experiments, coexpression of DIAP1, or of both ILK and DIAP1, did not ameliorate either the irregular organization of F-actin or the disorganized integrin-matrix adhesion sites, and did not change the elevated levels of Rho1 at the basal side of the wing epithelium. These results demonstrate that the Parvin-induced cellular defects are not a simple consequence of apoptosis, but rather a distinct feature of Parvin function. Overexpression of Parvin in the eye generated a rough eye phenotype. At the cellular level the basal actin cytoskeleton in the eye retina was severely disrupted, suggesting that the abnormal eye development could be initiated by abnormalities in the cell shape of pigment cells, as in the case of the wing epithelium. Because the Parvin-induced eye phenotype was sensitive to the copy number of the Parvin transgenes and to temperature, we performed a modifier screen to uncover novel genetic interactors. We found that elevated levels of Wech and PTEN antagonized the Parvin-induced dominant effects in the developing eye and completely suppressed the rough eye phenotype, whereas high levels of ILK had only minimal suppressive activity. Wech is an ILK binding protein [33], and it is not clear why, at elevated levels, it could suppress the Parvin-induced dominant defects better than ILK itself, which directly binds to Parvin and rescues lethality completely in the mesoderm [6] and significantly in enGal4 expressing cells. The lack of data regarding Wech function in the eye precludes further analysis at this point.
The second surprising result of our study was the ability of high levels of PTEN to suppress the rough eye phenotype induced by Parvin overexpression. UAS::PTEN overexpression under GMRGal4 has been reported to induce a rough-eye phenotype by inhibiting cell-cycle progression in proliferating cells and inducing apoptosis in a cell-context dependent manner [35,45]. In our experiments, expression of the same UAS::PTEN lines obtained from two different donors [35,45] did not result in eye roughening. One possible explanation could be the use of longGMRGal4 (Bloomington #8506) in our experiments, because previous studies drove expression of UAS::PTEN with GMRGal4 [46]. In addition, previous reports suggested that expression by the longGMRGal4 driver in the developing eye follows a stricter pattern in the photoreceptor cells [47]. Taking our data and previous reports together, we speculate that Parvin and PTEN have antagonistic functions within the eye epithelium and that coexpression of both proteins counterbalances the dominant effects each induces upon overexpression. Currently we do not have sufficient data to point to a specific pathway that could be modified by Parvin and PTEN and lead to the rough eye phenotype. However, the recent report that Parvin is associated with PKB [48], together with previous data suggesting that Parvin may facilitate the recruitment of PKB to the plasma membrane [10], suggests that Parvin could antagonize the negative effect that PTEN exerts on PKB activation by reducing PIP3 levels [49]. The third suppressor gene we found was Xrp1. Xrp1 contains an AT-hook motif, which is found in nuclear proteins with DNA binding activity. Currently, we lack sufficient knowledge to speculate on a putative functional interaction between Parvin-induced signaling and nuclear activity. However, previous studies on Xrp1 point to its role as a p53-dependent negative regulator of cell proliferation following genotoxic stress [50].
Among the genes that enhanced the Parvin-induced rough eye were all of the integrin subunits known to be expressed in the eye, including αPS1, αPS2 and βPS [31], the cytoskeletal regulator Zasp52 [51] and the transgelin homolog encoded by CG14996, of unknown function. In conclusion, our findings revealed novel cell context-dependent roles for Parvin in the whole organism. Besides its known function as a structural component of the IPP-complex that mediates the integrin-actin link, we demonstrated that Parvin can also affect cell-matrix adhesion, organization of the actin cytoskeleton and cell homeostasis, by regulating Rho1 and JNK levels in an ILK-independent manner. These findings are relevant to situations where cell homeostasis is altered, ranging from the physiological renewal of tissues to cancer pathology. In addition, our modifier genetic screen revealed novel interactors that affect Parvin function in a living organism. Our in vivo data provide the first insight into genetic circuits influenced by Parvin and offer a framework for additional detailed studies to elucidate how these genetic networks interact.

[Table 3. Summary of all identified modifiers of UAS::Parvin-GFP, longGMRGal4 in the developing eye, including information on the stock used, the cytogenetic map for the deficiencies and the effect of each modifier.]
A Sneaking Exposure to Premiership through Politics of Tactics and Compromises – Civil-Military Relationship during Benazir Bhutto's Government 1988-90

This study examines the civil-military relationship during Benazir Bhutto's first tenure as Prime Minister (1988-90). She adopted a pragmatic approach and used political tactics skilfully to reach the corridors of power. Further, the study explains how she was convinced to allow Gen. Aslam Beg to continue as Chief of Army Staff, to retain Sahibzada Yaqoob Khan as Foreign Minister, not to cut the defence budget and not to interfere in military affairs. This research presents a systematic and factual analysis of her political acumen, sagacity and dexterity in developing civil-military relations constructively, for the sake of democratic norms and traditions and to inculcate a democratic environment. It also throws light on civil-military relations and on the circumstances that led to her ouster from the premiership during her first tenure, 1988-90. Furthermore, theories such as trait theory, behavioral theory, situational theory and path-goal theory have been applied to gain a better understanding of Benazir Bhutto's leadership qualities and administrative abilities.

Introduction

The tenure of Benazir Bhutto as Prime Minister (1988-90) is an important era in Pakistan's political structure and democratic stability. Keeping in view the challenge of developing an understanding with the military that Benazir Bhutto faced during her first tenure as Prime Minister, her administrative responses and managerial skills as ruler/Prime Minister of Pakistan can be gauged through the prism of trait theory, behavioral theory, situational theory and path-goal theory. These theories help in better understanding Benazir Bhutto's managerial skills, political tactics and dexterity during her first tenure as Prime Minister of Pakistan.
The aforesaid theories explore the administrative skills, political acumen, attitude and motivational factors of a leader in an office of authority while devising strategy to resolve issues and to steer the country towards prosperity and democratic stability; here they are applied to revisit the challenge Benazir Bhutto faced in developing a better understanding with the military, and her responses as Prime Minister during 1988-90. This assessment unfolds that Benazir Bhutto possessed the exceptional leadership qualities she displayed in the face of different personal and political challenges. Benazir Bhutto's advent into the corridors of power was not a bed of roses; it was a hard journey full of challenges, to which she responded by demonstrating political acumen and administrative and managerial skill. She acted, sometimes decisively, to assert her authority, and sometimes compromised with the other political stakeholders in Pakistan. Thus, trait theory, behavioral theory, situational theory and path-goal theory can help readers examine Benazir Bhutto's political leadership in view of the challenges she faced and her administrative responses as ruler/Prime Minister of Pakistan, 1988-90.

Delving into the Challenges in the Civil-Military Relationship during Benazir Bhutto's Premiership (1988-90)

In 1988, the Pakistan Peoples' Party's majority raised eyebrows within the military leadership because the party was considered to have conspired against the military elites (Shafqat, 1996, p. 655-672). Barrister Ch. Aitzaz Ahsan, Minister of Interior, Law and Justice during Benazir Bhutto's first premiership (1988-90), revealed in an interview with the researcher that since the military regime of Gen. Zia-ul-Haq had ruled for more than a decade, the military's entrenchment in politics still prevailed after Gen. Zia's death.
They were not ready to accept the government of Benazir Bhutto; thus, mutual distrust and hostility existed between Benazir Bhutto and the military elites at that time. However, the departure of Gen. Zia from the corridors of power and the PPP's majority in the 1988 general elections required a situational transformation of attitude from both the political and the military leadership. Therefore, Benazir Bhutto not only bargained skilfully to become Prime Minister but also accepted the military's demands: to retain Sahibzada Yaqoob as Foreign Minister, not to cut the defence budget, and to allow Gen. Aslam Beg to continue as Army Chief (interview of Barrister Ch. Aitzaz Ahsan, Interior Minister during Benazir Bhutto's premiership (1988-90), conducted by the researcher on 16th August 2018). The primary administrative challenge for Benazir Bhutto was to develop understanding, trust and cooperation with the military (Dharamdasani, 1989, p. 198). On the other side of the coin, the military was willing to develop a better relationship with the civilian leadership (Shafqat, 1996, p. 655-672). At that juncture, the civilian leadership and the military elites treated each other with caution and suspicion, but, by and large, an era of civil-military understanding began (interview of Sardar Asif Ahmad Ali, Foreign Minister during Benazir Bhutto's premiership, conducted by the researcher on 23rd May 2018). Faced with this political challenge, Benazir Bhutto played her bargaining chips reasonably well in meeting the challenge of the civil-military relationship (Shafqat, 1996, p. 655-672).

Replacement of ISI Chief Gen. Hameed Gul

Even in the office of Prime Minister, in spite of the initial understanding, she kept creating challenges for herself and antagonized the military by taking various steps during her first tenure. On 24th May 1989, Benazir Bhutto replaced Gen. Hameed Gul, head of the ISI and chief strategist during the Afghan War, with Lt. Gen. Shams-ur-Rehman Kallu, a retired military officer.
Instead of making the appointment from among serving officers, she appointed a retired general as ISI chief without consulting the Army Chief (The Nation, May 25, 1989; Sheikh, 2000). This action was not appreciated by the military, which considered it a violation of her commitment as well as interference in the professional affairs of the army. In these circumstances, it was becoming difficult for Benazir Bhutto either to make administrative decisions related to military affairs or to keep herself aloof so as to avoid any kind of interference in matters related to the military (Burki, 2004, p. 80; Dawn, May 27, 1989). For her government, developing a better understanding with the military elites in Pakistan remained a grave challenge.

Pacca Qilla Incident

Moreover, Benazir Bhutto faced another challenge arising from misunderstanding with the military on 26th May 1990, when the Sindh government carried out an operation at Pacca Qilla in which Muhajir women and children were killed. It was reported to the Sindh government that a large cache of weapons of various categories was concealed by the MQM militant group; therefore, the provincial authorities carried out an operation, Clean-Up (Akhund, 2000, p. 141; Sheikh, 2000). As a result of this operation, which had a prominent ethnic dimension, almost 30 persons were killed at Pacca Qilla. The operation led to strong riots in Karachi by the Muhajir community, and the ensuing reaction killed almost 350 persons. When the operation was launched, Gen. Aslam Beg, the then Army Chief, was on a foreign tour; the Corps Commander of Sindh was also visiting the border areas at the time. Thus, the timing of the operation was chosen when the top brass of the military was absent. Consequently, the Pacca Qilla incident was perceived as being directed against the Muhajirs (Shafqat, 1996, p. 655-672).
At that juncture, army troops moved in, took control of the area and ordered the police authorities to clear out with immediate effect. The police complied with the instructions of the armed forces (Sheikh, 2000, p. 183). Later, when Gen. Aslam Beg visited the affected areas in Hyderabad, he received a cordial welcome, and people shouted slogans calling for the removal of Benazir Bhutto through the imposition of martial law. The Pacca Qilla incident drove a wedge between the Army Chief and the Prime Minister (Akhund, 2000, p. 141-143); indeed, the two did not meet between 27th May and 24th July 1990 (Talbot, 1998, p. 309). The incident also created problems for Benazir Bhutto: she could not handle the relationship with the military tactfully, and the misunderstanding with the top military brass weakened her grip on the administrative affairs of the country (Akhund, 2000, p. 141-143; Sheikh, 2000, p. 153-154; Shafqat, 1996).

Differences with the Army regarding Selection of Army Personnel

Benazir Bhutto's differences with the Army Selection Board over promotions, postings and retirements of senior-rank officers also escalated towards confrontation. Every year, from April to June, the military's Selection Board meets to make decisions regarding the postings and promotions of officers, and the Pakistan Army has always been keen to guard the recommendations, proceedings and decisions of the Selection Board. In the 1970s, Zulfikar Ali Bhutto's interference in the Board's recommendations had caused an uproar in the army (Shafqat, 1996, p. 655-672). In June 1990, as Prime Minister, she tried to interfere in the Military Selection Board in order to seek an extension in the term of appointment of Lt. Gen. Aslam Masood, who was serving as Corps Commander, Lahore (Talbot, 1998, p. 309). At that juncture, the military leadership conveyed serious concerns to the President and became doubtful about Benazir Bhutto's commitment to power sharing with the military.
In July 1990, the generals, in a Corps Commanders' meeting, conveyed a decisive message to the President of Pakistan. As a result, the President, Ghulam Ishaq Khan, added the military's message to his already long list of charges against the PPP government, such as corruption, misconduct, nepotism and inefficiency, and decided to remove the government of Benazir Bhutto (Shafqat, 1996, p. 655-672). In fact, Benazir Bhutto was not effective or skillful enough to manage her relations with the army. The challenge of developing a balanced civil-military relationship and fulfilling the commitments she had made before taking oath as Prime Minister remained mere words, without any practical effort to develop a harmonious, understanding-based relationship. Later on, this violation of commitment became a grave hurdle for her government, and the military developed strong misapprehensions about the PPP government (Shafqat, 1996, p. 655-672).

Response to Civil-Military Relations

The 1988 elections brought the PPP victory, which opened the way for Benazir Bhutto to hope to become Prime Minister of Pakistan. However, the civilian political leadership and the military bureaucracy did not enjoy a mutually deep, understanding-based relationship (Shafqat, 1996, p. 655-672; Bhatia, 2008, p. 92-93; Dharamdasani, 1989). By accepting the military's demands, Benazir Bhutto took a positive step towards a democratic transition as the leader of the country. In fact, before she could take the oath of office as Prime Minister, the military's affirmation and approval were important, and Benazir Bhutto responded with political dexterity and an accommodative attitude, as a sagacious politician. Had Benazir Bhutto rejected the military's demands, her arrival at the office of Prime Minister would have been jeopardized (Bhatia, 2008, p. 92-93; Dharamdasani, 1989).
According to trait theory, leaders demonstrate their intelligence, temperament and attitude at different times and for different durations. Benazir Bhutto's personality demonstrated a positive attitude and temperament while conducting political bargaining with the army to smooth her way towards the office of Prime Minister of Pakistan. Trait theory is therefore helpful in gauging Benazir Bhutto's leadership qualities, marked by an accommodative attitude and good temperament.

Harmonizing Response regarding Gen. Aslam Beg's Continuity as Army Chief

Benazir Bhutto not only accepted the demands to retain Sahibzada Yaqoob as Foreign Minister, not to interfere in the military's internal affairs and to give the military a direct role in foreign policy, but also responded harmoniously by agreeing to the military's wish that Aslam Beg continue as Army Chief during her first tenure of premiership (1988-90). As a political leader, accepting the military's demand regarding the continuity of Gen. Aslam Beg as Army Chief was the need of the hour (Zakria, 1989, p. 10). Benazir Bhutto's positive response regarding the Army Chief removed, to a large extent, the misunderstandings and suspicions between the top military brass and the PPP's political leadership. The presence of the Army Chief at Benazir Bhutto's oath-taking ceremony as Prime Minister on 2nd December 1988 removed misgivings and conveyed a symbolic message that the civil and military leadership had developed a reasonable understanding to resolve their mutual suspicions and doubts. Hence, Benazir Bhutto agreed that Gen. Aslam Beg would carry on his service to the Pakistan armed forces as Army Chief, and Gen. Aslam Beg was generous and courteous enough to demonstrate his pledge of loyalty to the premiership of Benazir Bhutto (Zakria, 1989, p. 10).
Behavioral theory describes how a leader's personality can be analyzed through managerial activities, effectiveness and the application of one's learning according to the demands of the profession. Benazir Bhutto showed good behavior towards the civil-military relationship and agreed to the continuity of Gen. Aslam Beg as Army Chief. Behavioral theory thus applies to the compromising and harmonizing manner in which Benazir Bhutto handled affairs with the state institutions.

Showing Willingness Not to Cut the Defence Budget

In the eyes of the military, the Pakistan Peoples' Party was anti-army, and the party's leadership was considered to have conspired against the military elites (Dharamdasani, 1989, p. 198). The top military leadership was therefore apprehensive that Benazir Bhutto, as Prime Minister, would cut the defence budget of Pakistan. Had Benazir Bhutto curtailed the defence budget, it would have created many hurdles and problems for the Pakistan military. India's aggressive designs, border skirmishes and the volatile situation in Kashmir always required Pakistan's defence system to be kept technologically updated and modernized. For this purpose, the military's insistence on not cutting the defence budget was a justified demand, and it received a positive response from Benazir Bhutto; consequently, the military's perception of Benazir Bhutto as anti-army was removed (Shafqat, 1996, p. 655-672). The military's demand and Benazir Bhutto's acceptance of it marked a significant attitudinal change, with the military accepting a woman as leader of the country. It was a sagacious response on Benazir Bhutto's part because, historically, the army had dominated the politics of Pakistan.
Benazir Bhutto's response to the initial challenges was a skillful demonstration by a young leader of how to hold the office of Prime Minister while tackling and managing the powerful forces in Pakistan's political structure during her first tenure (Shafqat, 1996, p. 655-672). According to situational theory, leaders perform well by adapting their discretion, performance and authority to the situation. At the very outset of her journey as the first female Prime Minister of Pakistan, Benazir Bhutto showed her willingness not to cut the defense budget, in accordance with the situation as well as the demand of the military. Thus, Benazir Bhutto acted according to the situation, agreed to the military's demand and ascended to the office of Prime Minister of Pakistan, showing the leadership skills and presence of mind the situation demanded. Formulating a Committee to Review the Intelligence Agencies' Role in a Democratic Polity Gen. Zia used the intelligence agencies to influence the politics of Pakistan within the country for his arbitrary and authoritarian purposes, in order to seek longevity for his dictatorial rule. After the Soviet attack on Afghanistan, the American CIA and the ISI remained very close and worked collaboratively. After its successful role in Afghanistan, the ISI gained a boost and was encouraged to run the Afghan policy as well. Such an effective political role was still creating hurdles and hindrances when Benazir became Prime Minister in 1988. Hence, she attentively tried to assess the ISI's role in politics and constituted a committee in February 1989 to assess and review the role of the intelligence agencies in Pakistan's politics. The committee presented its findings: although it appreciated the excellent performance of the ISI, it also showed deep concern over the agency's influence in Pakistan's politics (Dawn, Change of Guard at the ISI, May 29, 2000).
Path-goal theory explains a leader's responses in pursuing various goals across different tasks, obtaining satisfaction and assigning work according to different political environments. Benazir Bhutto assigned the task of determining the ISI's role in Pakistan's political environment to a review committee, in order to satisfy herself as a politically and democratically elected Prime Minister of Pakistan. Replacing the ISI Chief In the light of the committee's suggestions, Benazir Bhutto decided to take control of operations related to the Inter-Services Intelligence (ISI). She feared that on more than one occasion the ISI had undermined her government, and she said so on many occasions. Historically, Benazir Bhutto's distrust of and doubts about the ISI existed because it had hounded her during the Zia regime and had played a key role in creating the IJI just before the elections of 1988 (Muneer Ahmad, 1990, 108-123). For this purpose, Benazir Bhutto was willing to bring the ISI under effective and vigorous civilian control. Neglecting the advice of the Chief of Army Staff, Benazir Bhutto removed the then powerful ISI chief, Lt. Gen. Hameed Gul, in May 1989 (The Nation, May 25, 1989; Sheikh, 2000). The replacement of Lt. Gen. Hameed Gul was a courageous response and a brave move by Benazir Bhutto. Lt. Gen. Hameed Gul was not only an important policymaker during the Afghan War under the military regime of Gen. Zia but had also contributed significantly to the creation of the IJI. Meanwhile, US foreign policy experienced a strategic shift because of the peaceful prospects of a settlement of the Afghan issue, and the presence of Hameed Gul became an irritant to US policymakers. With little consultation with the military, she acted fearlessly by appointing a retired officer, Lt. Gen. Shams-ur-Rehman Kallu, as ISI chief instead of making the appointment from serving officers.
Hence, Benazir Bhutto exhibited political dexterity and relentless courage in trying to bring the Inter-Services Intelligence (ISI) under her control as Prime Minister through a courageous and fearless response. Although this was considered interference in the military's professional affairs, in a spirit of accommodation the army complied with Benazir Bhutto's orders (Shafqat, 1996, p. 655-672). Compassionate Response and Morale-Boosting Visit to the Siachen Glacier Benazir Bhutto gave an active response to the activities and contributions of the soldiers who were facing difficulties and tribulations in harsh climatic conditions at the Siachen glacier. For this purpose, on 21st August, Benazir Bhutto travelled to Skardu on an Air Force C-130 plane. It was a very supportive response on Benazir Bhutto's behalf, because no head of state had visited the Siachen glacier before her. She then visited Dansam from Skardu and also went to Ali Brangsa, travelling on a Puma helicopter. Benazir Bhutto's visit to such high places with little available oxygen proved morale-boosting and encouraged the soldiers through her confident and compassionate response at the Siachen glacier (Akhund, 2000, 113). During her first tenure, Benazir Bhutto sometimes gave a compromising response while, on other occasions, she used her office and authority to minimize the role of the ISI in the democratic decision-making process. She also remained proactive in dispelling the impression of any civil-military rift. Even in a conflicting situation, after her removal of Gen. Hameed Gul as ISI chief, she made efforts to bridge the gulf and visited the Siachen glacier to show solidarity and a compassionate response, as Prime Minister, to the soldiers facing severe climatic difficulties there (Akhund, 2000, 113). Conclusion Benazir Bhutto's first tenure remained a mixture of understanding and conflict in the civil-military relationship.
However, her decisions and actions in her first tenure demonstrated Benazir Bhutto's tilt towards attaining the dominance of a democratically elected government in a state where the military was entrenched in the country's politics. Her party's victory in the 1988 elections raised the eyebrows of the military leadership because the PPP was considered an anti-army political party. Thus, in spite of her initial bargaining chip with the military, she was unable to develop an understanding with it on several issues, such as the replacement of Gen. Hameed Gul as ISI chief without consulting the then Army Chief, the Pacca Qilla incident, and the use of her authority over the promotion, posting and retirement of senior army officers. The study has observed that, in order to come into power and to run the administrative machinery of the country, she adopted a pragmatic approach and tried to develop a harmonizing relationship with Gen. Aslam Beg, the Army Chief, accepting the military's demands, without which it would not have been possible to bring a democratic transition to the country's political arena. The goodwill gestures and pragmatic approach of Benazir Bhutto thus dispelled the military leadership's concerns that she and the PPP held an anti-army stance and posed security concerns. She also assured the army that she would not cut the defense budget. Besides this, her compassionate response and morale-boosting visit to the Siachen glacier contributed to her positive image as Prime Minister in the eyes of the army. Benazir inherited some outstanding issues on the internal and external fronts, some of which were created specifically for her, while other issues and challenges were the outcome of her own administrative mishandling or governance.
However, the plain fact is that she could not complete her first tenure as Prime Minister because of multiple factors; beyond nepotism and corruption, the most important cause of her short tenure was the poor state of civil-military relations.
Dataset of EEG power integral, spontaneous recurrent seizure and behavioral responses following combination drug therapy in soman-exposed rats This article investigated the efficacy of combination antiepileptic drug therapy in protecting against soman-induced seizure severity, epileptogenesis and performance deficits. Adult male rats with implanted telemetry transmitters for continuous recording of electroencephalographic (EEG) activity were exposed to soman and treated with atropine sulfate and the oxime HI-6 one minute after soman exposure and with midazolam, ketamine and/or valproic acid 40 min after seizure onset. Rats exposed to soman and treated with medical countermeasures were evaluated for survival, seizure severity, the development of spontaneous recurrent seizures and performance deficits; combination antiepileptic drug therapy was compared with midazolam monotherapy. Telemetry transmitters were used to record EEG activity, and a customized MATLAB algorithm was used to analyze the telemetry data. Survival data, EEG power integral data, spontaneous recurrent seizure data and behavioral data are illustrated in figures and included as raw data. In addition, edf files of one month of telemetry recordings from soman-exposed rats treated with delayed midazolam are provided as supplementary materials. Data presented in this article are related to the research articles "Rational Polytherapy in the Treatment of Cholinergic Seizures" [1] and "Early polytherapy for benzodiazepine-refractory status epilepticus" [4].
Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Data The first set of data corresponds to a dose-response experiment of delayed treatment with midazolam in soman-exposed rats. Fig. 1A is a survival plot of soman-exposed rats treated with a dose range of midazolam at 40 min after seizure onset. Fig. 1B is a bar graph illustrating the EEG power integral at 1 h and 6 h after soman exposure in rats treated with midazolam. Fig. 1C shows the development of spontaneous recurrent seizures in the surviving rats. Data format: raw and analyzed data, and representative EEG files (edf). Experimental factors: rats were pre-implanted with telemetry transmitters 1-2 weeks prior to soman exposure and administration of post-exposure medical countermeasures.
Experimental features Adult male rats with implanted telemetry transmitters for continuous recording of electroencephalographic (EEG) activity were exposed to soman and treated with atropine sulfate and the oxime HI-6 one min after soman exposure and with midazolam, ketamine and/or valproic acid 40 min after seizure onset. Rats exposed to soman and treated with medical countermeasures were evaluated for survival, seizure severity, the development of spontaneous recurrent seizures and performance deficits; combination antiepileptic drug therapy was compared with midazolam monotherapy. Value of the data These data are the first to demonstrate that combination antiepileptic drug therapy, such as valproic acid and ketamine, increases the efficacy of midazolam against soman-induced seizure severity and epileptogenesis. These data are of value to others who are using this rat model to evaluate drugs for improved efficacy against soman-induced status epilepticus and epileptogenesis. The raw data of the power analysis and the sample European Data Format (.edf) files are of value to others who may wish to further analyze changes in EEG patterns that may be useful in identifying types of seizure activity, power changes or sleep alterations. The raw EEG data might help the scientific community to test seizure detection and prediction algorithms, which can accelerate the screening of neuroprotective compounds against soman-induced status epilepticus. The behavioral data obtained from the HVS Image tracking system may be useful for comparison with findings obtained in other laboratories on organophosphorus chemical exposure-induced performance deficits [2,3]. Four representative telemetry recordings of over one month of continuous data from soman-exposed rats treated with one of three doses (1, 3, 9 mg/kg; ip) of midazolam are provided in Mendeley Data Direct (URL to data: https://doi.org/10.17632/zwcx948yjc.2) as supplementary materials in a data repository.
Each folder contains raw EEG data compressed into 7zip format split into 3-4 volumes. After uncompressing (we suggest using https://www.7-zip.org) the set of files for each animal, the recordings can be accessed using EDF (European Data Format)-compatible software and contain signals collected at 250 Hz from two EEG channels, one signal strength channel (used for assessing gross motor activity) and one body temperature channel, and include baseline and post-exposure-to-soman data. The animal identification and date/time of each recording were modified per IACUC suggestion, with dates set back to 2000. Fig. 1. Delayed midazolam increases survival but does not prevent status epilepticus or epileptogenesis. A) Midazolam administered 40 min after the onset of seizures induced by soman (GD) dose-dependently increased survival to GD: rats that received saline (GD/SAL; n = 10) or low midazolam (1 mg/kg; GD/MDZ1; n = 14) had poor survival (30% and 50% respectively), while those that received 3 mg/kg midazolam (GD/MDZ3; n = 13) or 9 mg/kg midazolam (GD/MDZ9; n = 13) had 85-90% survival. B) Soman exposure increased EEG power integral during status epilepticus. Treatment with midazolam (3 or 9 mg/kg) at 40 min after seizure onset reduced GD-induced seizure severity compared to saline treatment as shown by power integral during the 1 h period after treatment. Data shown are mean ± SEM. C) Following a latent period of 1-2 weeks, all of the surviving GD-exposed rats treated with saline or with 1 mg/kg midazolam developed SRS, while 57 and 70% of those treated with 3 mg/kg and 9 mg/kg midazolam, respectively, developed SRS. D) Number of SRS is shown as mean ± SEM. *p < 0.05; **p < 0.01; ***p < 0.001. A05_GD_MDZ3.edf is from a GD-exposed rat treated with 3 mg/kg midazolam 40 min after seizure onset that had over 25 SRS; the modified date and time stamp has exposure on 1/5/2000 at 21:55 (actual 10:04 a.m., which is ~2 h after onset of the dark cycle).
Since the exposure was performed in a hood in another room, there is an interruption of signal during this period (between 21:52 and 21:57), followed by seizure onset at 22:03 (with a seizure latency of 8 min). Treatment time (40 min after seizure onset), characterized by a short signal interruption, is 22:43. For A06_GD_MDZ1.edf, from a GD-exposed rat treated with 1 mg/kg midazolam that had over 30 SRS, the modified date and time stamp has exposure on 1/5/2000 at 22:47, with a data gap between 22:44 and 22:49 and seizure onset at 22:50 (seizure latency of 3 min). Treatment time (40 min after seizure onset), characterized by a short signal interruption, is 23:30. For A08_GD_MDZ9.edf, from a GD-exposed rat treated with 9 mg/kg midazolam that had fewer than 5 SRS, the modified date and time stamp has exposure on 1/5/2000 at 22:11, with a data gap between 22:08 and 22:13 and seizure onset at 22:19 (seizure latency of 8 min). Treatment time (40 min after seizure onset), characterized by a short signal interruption, was 22:58. A fourth animal, A09_GD_MDZ3.edf, from a soman-exposed rat treated with 3 mg/kg midazolam, is also provided. The second set of data corresponds to a dose-response study of ketamine monotherapy or as an adjunct to midazolam against soman-induced toxicity in rats. Survival plots are shown in Fig. 2A (ketamine monotherapy) and Fig. 2B (ketamine and midazolam combination). Fig. 2C is a bar graph illustrating the effect that midazolam and ketamine dual therapy had on EEG power integral (a measure of seizure severity) compared to vehicle or midazolam monotherapy at 1 h after treatment. Some of these data are captured in Fig. 2B in Niquet et al. [1], with additional groups shown here. The number of rats that developed SRS is in Fig. 3C in Niquet et al. [1] and in Fig. 2D in Niquet et al. [4]. Raw data for Fig. 2 are in a supplementary excel file titled "Data Combination Therapy Against Soman" (Fig. 2A, KET survival; Fig. 2B, MDZKET Survival; Fig. 2C, KET & MDZKET Power).
We present data from a comparison of the effects of monotherapy and dual therapy on the performance deficits that follow soman exposure. Latency to locate the platform in the Morris water maze is shown in Fig. 3E in Niquet et al. [1] and Fig. 2F in Niquet et al. [4], with raw data in the excel file "Morris Water Maze." Additional measures of performance in the Morris water maze are shown in Fig. 3A (training sessions, percent time in target quadrant), 3B (training sessions, distance travelled), 3C (training sessions, thigmotaxis) and 3D (probe trial, Gallagher score), with raw data included in the supplementary excel file (Morris water maze). We also provide data from an evaluation of the efficacy of adding valproic acid to a combination therapy of ketamine and midazolam against soman-induced seizure severity [4]. Raw data for Fig. 1B in Niquet et al. [4] are included in the excel file labelled "MDZ_KET_VPA_Power." Animals Male Sprague-Dawley rats (350-400 g; Charles River), individually housed and maintained on a reverse 12 h light-dark cycle, were implanted with F40-EET telemetry transmitters. Seizure recording and analysis Rats anesthetized with isoflurane were surgically implanted with F40-EET telemetry transmitters (DSI, Inc.) to record bi-hemispheric cortical EEG activity as previously described [5]. Following 7-10 days of recovery, rats were exposed to soman and monitored for seizure onset. For EEG activity recording, an RPC-1 physiotel receiver from DSI was placed under each rat's home cage for continuous data collection (24 h/day) using Dataquest ART Acquisition software (DSI, Inc.). The Dataquest EEG files were converted to European Data Format (edf) [6] using Neuroscore 1.1.1 (DSI), and the EEG channel with the best signal-to-noise ratio was chosen for each animal. Signals from each EEG channel were visually screened to identify the presence of artifacts, and the channel with the fewest artifacts was chosen for further analysis.
In cases where several data sets were accumulated per animal, the datasets were linked to each other in MATLAB, allowing proper representation of the data. The signal was filtered using a Butterworth filter (pass band of 0.1-125 Hz; notch filter of 60 Hz [7]). Epileptiform activity was identified using Dataquest ART 4.1 (analysis software), Neuroscore 1.1.1 (DSI) and a customized MATLAB (MATLAB 2008a; MathWorks) algorithm according to de Araujo Furtado et al. [7], and confirmed by visual screening. Converted edf files were analyzed to identify the time spent in status epilepticus and the development of SRS. The EEG ratio power integral was calculated by taking the average of the power spectra of each hour period through a customized MATLAB algorithm and applying the formula [decibels = 10*log(V^2_sample/V^2_normal)]*60 min, resulting in decibels/h. The range of frequencies analyzed was 0.1-100 Hz, and the data represent the full spectrum and the ratio of power of the EEG signal in the first 24 h and in specific time periods of 1 and 6 hours after onset of status epilepticus. Fig. 3. Ketamine (KET; 30 mg/kg) with or without midazolam (MDZ; 3 mg/kg) was administered 40 min after soman (GD)-induced seizure onset and compared with no-agent control rats (n = 10-11/group). Morris water maze testing was conducted one month after GD exposure. GD-exposed rats treated with combination therapy of KET and MDZ (GD/MDZ/KET) performed similarly to no-agent control (No GD) rats. Rats treated with MDZ monotherapy (GD/MDZ) or KET monotherapy (GD/KET) spent less time in the target quadrant (A), travelled a greater distance (B), spent more time in thigmotaxis (C), had a greater cumulative distance in the probe trial compared to no-agent control rats (D), and had greater latency to locate the platform [1,4]. Data shown are mean ± SEM. *p < 0.05.
Behavioral assessments One month after soman exposure and treatment, rats were evaluated for spatial memory acquisition and retention in the Morris water maze (MWM) test. A hidden platform (10 × 10 cm) was placed in a fixed position 1.25 cm below the surface of a 170 cm diameter pool filled with paint-blackened water (26 ± 1 °C; for detailed methods, see Schultz et al. [5]). Briefly, rats received four 60 s trials per session and 2 training sessions per day, with a 30 min rest period between sessions, for a total of 8 trials/day. After three training days (6 training sessions total), the platform was removed, and two 60 s probe trials were conducted. A video tracking program (HVS Watermaze 2100, HVS Image, Cambridge, UK) was used to measure latency to escape, path length, speed, heading error, thigmotaxis, target quadrant time, Gallagher score [8], and number of platform passes (definitions listed below). Following the second probe test, a visual acuity test was conducted in which the latency to locate a visible platform was evaluated over four successive 60 s trials. Data are included in supplementary material (excel sheet 9) with corresponding labels listed below. Latency to escape Time in seconds from the start of the trial for the rat to reach the platform and end the trial. A maximum latency of 60 s is assigned if the rat fails to find the platform within 60 s. In the supplementary excel file, listed as S1Lat, S2Lat, etc. Path length The length of the path that the rat took from the starting location to its location at the end of the trial (meters). Listed in the supplementary excel data file as S1Path, S2Path, etc. Speed Average speed over the entire trial (meters per second): path length/latency. Listed in the supplementary excel data file as S1Speed, S2Speed, etc. Heading error The absolute value of the angle between the platform, the rat's starting position, and the rat's position after traveling 20 cm.
A smaller heading error indicates a more accurate initial direction of the rat navigating towards the platform. Listed in the supplementary excel data file as S1Head, S2Head, etc.; for the probe trial, listed as P1Head, P2Head. Thigmotaxis The percent of trial time the rat spent in the perimeter of the water maze. Listed in the supplementary excel data file as S1Thig, S2Thig, etc. Target quadrant time The percent of trial time the rat spent in the target quadrant. Listed in the supplementary excel data file as S1TargetQuad, S2TargetQuad, etc. Gallagher score (cumulative) The Gallagher score is the distance between the rat and the platform at every second of the trial [8]. The cumulative version is the total Gallagher score for the entire trial. Listed in the supplementary excel data file as S1GalCum, S2GalCum, etc.; for the probe trial, listed as P1Gal, P2Gal. Platform passes in probe trial The number of times during the probe trial that the rat passed through the platform location. Listed in the supplementary excel data file as P1Pass, P2Pass. Statistical analysis Statistical analyses were performed using SPSS (IBM Inc., Armonk, NY), and graphs were compiled using SigmaPlot (Systat Software Inc., San Jose, CA). Survival across time and spontaneous recurrent seizure onset were analyzed with a Kaplan-Meier analysis, followed by a log-rank test to determine the treatment effect on the distributions. EEG power integrals were analyzed using an ANOVA; significant interactions were followed up at each time point (baseline, 1 h, 6 h) with multiple comparisons of treatment groups. Measures in the Morris water maze were analyzed using a repeated-measures ANOVA. For significant interactions between group and repeat (time or trial), a one-way ANOVA was used to compare groups at each repeat and to compare each group over time. Differences were considered statistically significant when p < 0.05.
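The two geometric measures defined above, the cumulative Gallagher score and the heading error, can be made concrete from tracked coordinates. The published values come from the HVS Image tracking software; this is only an illustrative Python sketch, and the function names and the assumption of one position sample per second are ours.

```python
import numpy as np

def cumulative_gallagher(path_xy, platform_xy):
    """Cumulative Gallagher score: the rat-to-platform distance sampled at every
    second of the trial (one row per second assumed), summed over the trial."""
    d = np.linalg.norm(np.asarray(path_xy, float) - np.asarray(platform_xy, float), axis=1)
    return float(d.sum())

def heading_error(start_xy, platform_xy, point_after_20cm_xy):
    """Absolute angle (degrees) between the start-to-platform direction and the
    start-to-position direction after the rat has travelled 20 cm."""
    u = np.asarray(platform_xy, float) - np.asarray(start_xy, float)
    v = np.asarray(point_after_20cm_xy, float) - np.asarray(start_xy, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```

A rat that swims straight towards the platform scores a heading error of 0°, while one that sets off at right angles scores 90°.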
Disclaimer The views expressed are solely those of the authors and do not necessarily represent the official views of the CCRP, NIAID, NIH, HHS, USAMRICD or DoD.
Notch Signalling Is Required for the Formation of Structurally Stable Muscle Fibres in Zebrafish Background Accurate regulation of Notch signalling is central to developmental processes in a variety of tissues, but its function in pectoral fin development in zebrafish is still unknown. Methodology/Principal Findings Here we show that core elements necessary for a functional Notch pathway are expressed in developing pectoral fins in or near prospective muscle territories. Blocking Notch signalling at different levels of the pathway consistently leads to the formation of thin, wavy, fragmented and mechanically weak muscle fibres and to the loss of stress fibres in endoskeletal disc cells in pectoral fins. Although the structural muscle genes encoding Desmin and Vinculin are normally transcribed in Notch-disrupted pectoral fins, their protein levels are severely reduced, suggesting that the weak mechanical forces produced by the muscle fibres are unable to stabilize/localize these proteins. Moreover, in Notch-signalling-disrupted pectoral fins there is a decrease in the number of Pax7-positive cells, indicative of a defect in myogenesis. Conclusions/Significance We propose that by controlling the differentiation of myogenic progenitor cells, Notch signalling might secure the formation of structurally stable muscle fibres in the zebrafish pectoral fin. Introduction The development of many organs starts with the formation of a primordium at specific embryonic locations in response to combinatorial positional signals. This is the case for the appendages. The limb/fin mesenchyme precursor cells protrude from the embryonic trunk to form a small bud covered by a layer of ectoderm, which will give rise to precise arrangements of differentiated cells such as cartilage/bone and muscle [1,2]. The genetic network that triggers and controls paired fin outgrowth seems to be similar to the developmental program of the tetrapod limb until larval stages.
After that, morphological and genetic differences become obvious, underlying the diversity of appendages formed in different vertebrates. In amniote tetrapods, paired appendage outgrowth is controlled along the proximal-distal (PD), anterior-posterior (AP) and dorsal-ventral (DV) axes by three organizing centres. The apical ectodermal ridge (AER) promotes outgrowth and skeletal patterning along the PD axis; the zone of polarizing activity (ZPA) patterns the AP axis; the non-ridge ectoderm specifies the DV axis [1]. The apical ectodermal fold (AEF) of the zebrafish pectoral fin bud, like the tetrapod AER, expresses several Fgf molecules, suggesting that it performs comparable functions [2]. At later stages of pectoral fin outgrowth, shh is activated in a posterior domain of the fin bud, defining a ZPA-like region in zebrafish that appears to control AP patterning, in a manner similar to that described for chick and mouse [3,4,5]. Tetrapod DV patterning relies on the activity of Wnt7a in the dorsal ectoderm to specify dorsal structures [6,7] and of En1 in the ventral ectoderm to allow ventral fates to be generated in the limb bud [8]. The fact that the homologous genes are expressed in similar territories within the zebrafish pectoral fin buds suggests a conserved role in DV patterning [9,10]. Although important, Fgf, Hh and Wnt signalling are insufficient to account for the diversity of appendage patterning between species. Numerous studies in tetrapods provide gene expression and functional data compatible with a role for Notch signalling in several steps of limb development. Notch is a transmembrane receptor that, upon binding to its ligands (Delta or Jagged), undergoes a series of proteolytic events that result in the cleavage of the Notch intracellular domain (NICD) by gamma-secretase.
NICD is then translocated to the nucleus where it associates with the DNA-binding transcriptional repressor CSL (RBPjk, Suppressor of Hairless, Lag-1), turning it into a transcriptional activator, which then drives transcription of Notch target genes, such as bHLH transcription factors of the Hairy/Enhancer of Split (Hes) homologues or Hes-related (Her) family in vertebrates. Importantly, activation of the Notch receptor by its ligands requires the addition of ubiquitin to the ligand, and this ubiquitination is performed by the E3 ubiquitin ligase Mind bomb [11,12]. In the tetrapod limb, it was shown that the expression of one Notch downstream target gene is dynamic and oscillates with a periodicity of 6 hours in the distal forelimb mesenchyme, suggesting that a Notch-dependent molecular clock governs the timing of formation of autopod skeletal elements [13]. In addition, Notch signalling seems to be required for a variety of functions including AER signalling [14,15,16], chondrogenic differentiation [17,18] and myogenesis, the process that leads to skeletal muscle formation [19,20]. A resident progenitor population expressing Pax3 and/or Pax7 is maintained in the developing skeletal muscle [21,22]. Later in development the progenitor population generates satellite cells, which are marked by the expression of Pax7 [21,22]. Therefore, in developing and adult muscle, pools of undifferentiated cells are preserved in a latent state to undergo myogenic differentiation, which allows growth and/or regeneration of muscle fibres. Importantly, controlled myogenic differentiation and the maintenance of progenitors in skeletal muscles have been shown to require Notch signalling [20,23,24]. Nevertheless, the function of the Notch pathway during early limb and pectoral fin development is still unknown. Here we show that several components of the Notch pathway are expressed in the myogenic mesenchyme of zebrafish pectoral fins.
Without Notch signalling the muscle fibres become thin, wavy and fragmented, and no stress fibres are formed in endoskeletal disc cells. Desmin and Vinculin proteins lose their normal localization in pectoral fins, indicating that the muscle fibres formed in the absence of Notch signalling produce weak mechanical forces. We also observed a decrease in the number of Pax7-positive myogenic progenitor cells in Notch-signalling-disrupted pectoral fins. We propose that the lack of integrity of the muscle fibres observed might be due to altered myogenesis that results in a premature reduction of myogenic progenitor cells. Core Elements of the Notch Pathway are Expressed in Pectoral Fin Myogenic Mesenchyme In zebrafish, the pectoral fins arise at 24 hours post-fertilization (hpf) as small buds of mesenchymal cells on each side of the trunk. At this time point, little or no expression of the Notch ligand jagged2, the Notch transmembrane receptors notch1a, notch2 and notch3 and the direct Notch targets her6, her7 and her13.2 was detected by whole-mount in situ hybridization at the level of the pectoral fin buds. Later in development, at 36 hpf, when the main signalling centres of the pectoral fin are established, expression of jagged2, notch1a, notch2, notch3, her6, her7 and her13.2 was detected broadly in the pectoral fin (Fig. S1A-G). At 48 hpf, the centre of the fin bud is occupied by a chondrogenic condensation that will form the endoskeletal disc. This condensation divides the mesenchymal cell population into a dorsal and a ventral myogenic mesenchyme that will give rise to the fin musculature [25]. Histological sections at this time point showed expression of jagged2, notch1a, notch3, her6 and her13.2 in the myogenic mesenchyme (Fig. 1A'', B'', D'', E'', G''). In addition, jagged2 and notch3 were also expressed in the base of the apical ectodermal fold (AEF) (Fig. 1A', A'', D', D''). Expression of notch2 and her7 seems to be present in the entire fin (Fig.
1C'', F''). An additional important component of the Notch signalling pathway is Mind bomb (Mib). This E3 ubiquitin ligase is essential for activation of Notch signalling as it promotes ubiquitination and internalization of Notch ligands [11,12]. Our analysis reveals that both zebrafish mib genes (mib1 and mib2) are also expressed in the myogenic mesenchyme in the pectoral fin ( Fig. 1H'', I''). At 72 hpf the expression patterns of all these genes were maintained, but they were no longer detected at later stages ( Fig. S1H-N). The expression pattern of other Notch-related genes such as the ligands deltaC and deltaD and the direct targets her1, her11, her12 and her15 was also studied, but no expression was detected in zebrafish pectoral fins (data not shown). Combining our data with the previously reported ubiquitous coexpression of the two Supressor-of-Hairless Su(H) paralogs Su(H)1 and Su(H)2 [26] that are essential components of the transcriptional activation complex downstream of Notch, we conclude that the core elements necessary for a functional Notch pathway are transiently expressed in the developing pectoral fin. Defective Pectoral Fins are Formed Upon Notch Signalling Perturbation To uncover the role of Notch signalling during pectoral fin development, we started by analysing the gross pectoral fin morphology of the mib ta52b , a severe Notch signalling mutant in which both mib1 and mib2 are affected [12,27]. A sibling control pectoral fin, at 5 days-post-fertilization (dpf), is composed of a cartilaginous endoskeletal disc with individual cells surrounded by thin matrix deposits and a large fin fold supported by dermal fin rays with a characteristic open shape ( Fig. 2A, A') [25]. In clear contrast, 5 dpf mib ta52b mutant larvae show pectoral fins with a clear disorganized endoskeletal disc and a misshapen fin fold (Fig. 2B, B'). 
In addition, we used previously validated antisense morpholinos to block the Notch pathway at several levels, namely against the jagged2 ligand [28] and the two Su(H) genes downstream of the Notch receptor [26]. In both jagged2 and Su(H)1+2 morphants, pectoral fins with disorganized endoskeletal disc cells were observed in 5 dpf larvae, similar to the mib ta52b mutant phenotype (compare Fig. 2C-D' with Fig. 2B, B'). Thus, correct Notch pathway activity is essential for normal pectoral fin formation. Early interference with Notch signalling, which controls several early developmental processes [29], might have indirect effects that later impact on pectoral fin formation. To examine the importance of Notch signalling at developmental stages closer to the time of fin formation, we made use of the γ-secretase inhibitor drug DAPT to fully block Notch signalling in a time-controlled manner [30]. We compared the pectoral fin phenotype at 5 dpf obtained when DAPT was added to the embryo medium at the 1-cell stage with that obtained when DAPT was added at 21 hpf, when pectoral fin formation is initiated. Control DMSO-treated embryos developed normal fins (Fig. 2E, E'), whereas all embryos treated with DAPT at the 1-cell stage or at 21 hpf showed a disorganization of endoskeletal disc cells similar to that in mib ta52b mutants and in jagged2 and Su(H)1+2 morphants (Fig. 2F, F'). These experiments strongly suggest that the pectoral fin phenotype described when Notch signalling is perturbed results from an effect on fin development. Altogether, these results show that canonical Notch signalling is required for proper development of pectoral fins in zebrafish.
Patterning and Cell Lineage Specification in the Pectoral Fins are not Affected in the Absence of Notch Signalling

To investigate whether the fin developmental defects upon Notch signalling perturbation result from an earlier disruption of the AEF and/or ZPA signalling centres, we examined the expression of key components of the Fgf and Hh pathways. In mib ta52b mutants the expression of fgf8a and fgf24 is restricted to the AEF, as in siblings (Fig. S2A-D'). mkp3, a readout of Fgf activity, is expressed in a PD gradient, as expected, but is slightly up-regulated proximally in the mib ta52b mutants (Fig. S2E-F). The expression patterns of shh, ptc1 and gli1 are unaffected in the mib ta52b mutants. In both mib ta52b mutants and their siblings, the expression of shh and its receptor ptc1 appears restricted to the ZPA (Fig. S2G-J') and gli1, a readout of Hh activity, is expressed in the fin mesenchyme (Fig. S2K-L'). As in tetrapods, signals from the AEF and the ZPA might be able to regulate hox gene expression in zebrafish and in this way specify regional identity along the fin [31]. Once again, no alteration in the expression of hoxa9b, hoxa11b and hoxa13a was observed between the mib ta52b mutants and their siblings (Fig. S2M-R'). This analysis demonstrates that patterning along the PD and AP axes of the pectoral fin is broadly unaffected upon Notch signalling impairment. Our data show that several Notch signalling components were expressed at early stages of pectoral fin development at the level of the myogenic mesenchyme (Fig. 1), raising the possibility that this pathway might be important to define muscle versus cartilage lineages within the pectoral fin. To address the possibility that Notch signalling interferes with early stages of muscle or cartilage differentiation, we performed an in situ hybridization analysis using myod, an early marker of myogenic differentiation [32], and the cartilage markers sox9a and sox9b [33].
At 68 hpf, sox9a, sox9b and myod positive cells were detected within the fins of mib ta52b mutants and their siblings (Fig. 3A-F'), indicating that both pectoral fin cell lineages are specified in the absence of Notch signalling. Moreover, in a double fluorescent in situ analysis with sox9b and myod, we found no signs of cell mixing between the cartilage and muscle lineages in the absence of Notch signalling (Fig. 3G, H). Overall, these results suggest that defects of patterning, cell lineage specification or muscle precursor ingression are not the cause of the pectoral fin phenotype observed when Notch signalling is perturbed.

Notch Signalling Impacts on Skeletal Muscle Fibre Integrity and Stress Fibre Formation in Pectoral Fins

To characterize the pectoral fin architecture with cellular resolution later in development, after tissue differentiation has occurred, we used DAPI and phalloidin to label cell nuclei and filamentous actin, respectively. In control embryos at 3 (data not shown) and 5 dpf, the endoskeletal disc cells possess an actin cytoskeleton organized in transversal stress fibres (Fig. 4A', F'). In mib ta52b and mib m178 mutants, in jagged2 and Su(H)1+2 morphants and in DAPT-treated embryos, the actin organization was distinct from controls; the actin filaments accumulated at the periphery of the endoskeletal disc cells (Fig. 4B'-E', G'). The endoskeletal disc separates the fin musculature into two opposing muscles, the abductor and the adductor. Within these muscles, the individual muscle fibres run in sheets in a semiradial fashion along the PD axis of the muscles [34]. This was exactly what we found in both 3 dpf (data not shown) and 5 dpf control pectoral fins, where well-aligned striated muscle fibres extend along the PD axis (Fig. 4A'', F''). In mib ta52b and mib m178 mutants, in jagged2 and Su(H)1+2 morphants and in DAPT-treated embryos, striated muscle fibres are formed.
However, the fibres are thinner, wavy and fragmented, leading to gaps in the fin musculature. We used transmission electron microscopy to characterize the skeletal muscle fibre phenotype observed in the mib ta52b mutants. We found that, in contrast to the sibling pectoral fins, which show well-organized myofibrils with sarcomeres in register, mib ta52b mutants show severely disrupted myofibrils. Although these myofibrils have sarcomeric organization, they are fragmentary and are often separated by large areas of cytoplasm (compare Fig. 5A with Fig. 5B). These results demonstrate that Notch signalling is essential for both muscle integrity and skeletal architecture in pectoral fins.

Mechanically Fragile Skeletal Muscle Fibres are Formed in Pectoral Fins in the Absence of Notch Signalling

The lack of integrity of the myofibrils that we uncovered in mib ta52b mutants could be due to a mis-regulation of the muscle-specific intermediate filament protein Desmin. It has been shown that Desmin localizes to the Z-discs of the sarcomeres and plays a fundamental role in maintaining the integrity of the myofibrils and their connection to the subsarcolemmal cytoskeleton, thereby ensuring mechanically resilient muscles [35,36]. By using immunohistochemistry, we were able to show that Desmin protein is abundantly expressed in the skeletal muscle fibres of 5 dpf sibling and DMSO-treated pectoral fins (Fig. 5C, G), but is severely downregulated in mib ta52b mutants and DAPT-treated embryos (Fig. 5D, H). Interestingly, the lack of Desmin protein is not a consequence of a lack of desmin mRNA transcription, as detected by in situ hybridization in mib ta52b mutant pectoral fins (Fig. 6B, D). These results suggest that desmin might not be the primary cause of the muscle phenotype in mib ta52b mutants, but they unveil the possibility, amongst others, that forces generated by mechanically stable muscle fibres are essential to maintain appropriate Desmin protein levels.
Myotendinous junctions (MTJ) form a mechanical unit that provides structural stability, linking muscle fibres and extracellular matrix (ECM) molecules [37]. Therefore, structurally unstable muscle fibres could result from impaired formation/function of the MTJ. In the zebrafish pectoral fin, the two opposing muscles originate from a proximal bone called the cleithrum and insert distally in the fin membrane at the end of the endoskeletal disc [34], presumably through an MTJ-type linkage. As the MTJ has not been characterized in zebrafish fins, we looked for components of this type of linkage system in 5 dpf control pectoral fins. We found that Vinculin, a focal adhesion protein involved in linking actin filaments in the cytoplasm through integrins to the ECM molecules in the extracellular space [38], is localized in the pectoral fin where the muscle fibres are thought to insert distally in wild-type control embryos (Fig. 5E). In contrast, Vinculin is not detected at the distal end of the muscle fibres in mib ta52b mutants and DAPT-treated embryos (Fig. 5F, H). Thus, abrogation of Notch signalling leads to defects in MTJ assembly. Vinculin recruitment to focal adhesions is force-dependent [38,39], raising the possibility that loss of Vinculin in the pectoral fins could be a consequence of the muscle phenotype and not its primary cause. To distinguish between these two possibilities we started by looking at the mRNA expression of vinculin (vcl) in pectoral fins of control and mib ta52b mutant embryos. In zebrafish, there are two vcl genes (Leslie, Hinits, Williams and Hughes; unpublished data). vcla is expressed in the somites and in pectoral fins (Fig. 6G) and vclb in the myogenic mesenchyme of pectoral fins (Fig. 6I, J, M). Expression of both vcla and vclb mRNAs occurs in mib ta52b mutants (Fig. 6H, K, L, N), showing that Notch signalling does not act via transcriptional control of Vinculin genes.
We tested independently the possibility that impaired mechanical forces produced by unstable muscle fibres impact on the stabilization of Desmin and Vinculin protein levels. To that end, we inhibited muscle contraction, and therefore force generation, using the myosin inhibitor blebbistatin [40] and MS222, a muscle relaxant that operates by preventing action potentials [41]. We added blebbistatin or MS222 to wild-type embryos at 2 dpf and performed an immunostaining to detect the localization of Desmin and Vinculin at 5 dpf. Control embryos showed the expected striated Desmin expression (Fig. 5I, O) and Vinculin localization at the distal end of the pectoral fin muscle fibres (Fig. 5K, O), whereas in the blebbistatin- and MS222-treated embryos Desmin was severely reduced (Fig. 5J, P) and Vinculin accumulation was not visible (Fig. 5L, P). Similar to what we observed in the mib ta52b mutants, the transcription of desmin and vclb is not affected in blebbistatin-treated embryos (Fig. 6E, F, O, P). This further supports the idea that the lack of Desmin and Vinculin in the pectoral fin seen in the absence of Notch signalling is a consequence of the muscle phenotype. Moreover, it also shows, in the context of a living embryo, that the accumulation of these proteins is dependent on mechanical forces produced by the skeletal muscles. Interestingly, we could also see that in blebbistatin- and MS222-treated embryos, actin filaments accumulate at the periphery of the endoskeletal disc cells (Fig. 5N, compare with 5M and data not shown).

Notch Signalling is Required to Set up a Pax7 Myogenic Progenitor Cell Population in Pectoral Fins

The lack of integrity of the muscle fibres observed in the zebrafish mib ta52b mutants could result from inefficient muscle differentiation due to premature uncontrolled differentiation accompanied by depletion of progenitor cells.
To test this possibility, we performed immunohistochemistry against the paired-box transcription factor Pax7, a marker of proliferating muscle progenitor cells. We observed a severe reduction in the number of Pax7-positive cells in the mib ta52b mutant pectoral fins when compared with siblings at all time points analysed. At 2 dpf, an average of 4 Pax7-positive cells was found in mib ta52b mutants (n = 7), in contrast to siblings, which showed an average of 13.8 (n = 6). At 3 dpf, we counted on average 5 Pax7-positive cells in mib ta52b mutants (n = 10) and on average 13.3 Pax7-positive cells in their siblings (n = 13). At 4 dpf, an average of 4.6 Pax7-positive cells was counted in mib ta52b mutants (n = 14) and an average of 14.5 Pax7-positive cells was found in their siblings (n = 7). At 5 dpf, the mib ta52b mutants showed on average 7 (n = 15) Pax7-positive cells, significantly fewer than the 11.9 (n = 14) in siblings at this stage (Fig. 7A-C). These results suggest that Notch signalling is required early on to establish a Pax7 myogenic progenitor cell pool.

Figure 5. Mechanically weak muscle fibres are produced when Notch signalling is perturbed. Transmission electron microscopy was performed on 5 dpf pectoral fins of siblings (A) and mib ta52b mutants (B) to analyse the ultrastructure of skeletal muscle fibres. Myofibrils with clearly aligned sarcomeres are formed in the sibling embryo (A), while disintegrating myofibrils with poorly aligned sarcomeres are found in mib ta52b mutants (B). Immunostaining performed in pectoral fins at 5 dpf using Desmin, Vinculin, phalloidin and DAPI to label the Z-discs of the sarcomeres, the zone where the skeletal muscle fibres insert distally, filamentous actin and the cell nuclei, respectively (C-P), demonstrates that Desmin (n = 10) (D) and Vinculin (n = 12) (F) are downregulated in mib ta52b mutants when compared with their siblings (n = 10) (C, E). The same is observed in embryos treated with DAPT at 48 hpf and fixed at 5 dpf (n = 9) (H), when compared with the DMSO-treated embryos (n = 10) (G). In embryos treated with blebbistatin, Desmin (n = 9) and Vinculin (n = 12) are also downregulated (J, L) when compared with the DMSO-treated embryos (n = 12) (I, K). In blebbistatin-treated embryos, the endoskeletal disc cells present high levels of actin at the periphery and the skeletal muscle fibres are wavy with gaps between them (n = 15) (N). A down-regulation of Desmin and Vinculin is also observed in MS222-treated embryos (n = 14) (P) when compared with the control embryos (n = 13) (O). doi:10.1371/journal.pone.0068021.g005

Discussion

Fish pectoral fins are, in many respects, homologues of tetrapod forelimbs. However, a clear contrast exists concerning the role of Notch signalling during pectoral fin development. We started by performing a thorough expression analysis of the Notch pathway components in developing pectoral fin buds. We show that the core components needed for a functional Notch pathway are expressed in pectoral fin buds at similar stages. Namely, we found that the Notch ligand jagged2, the transmembrane receptors notch1a, notch2 and notch3, the Notch direct targets her6, her7 and her13.2 and the E3 ubiquitin ligases mib1 and mib2 are all expressed in the entire fin bud at 36 hpf. Whereas mRNA of notch2 and her7 is present in the entire fin bud at 48 hpf, we found that expression of jagged2, notch1a, notch3, her6, her13.2, mib1 and mib2 becomes restricted to the myogenic mesenchyme. We also found expression of jagged2 and notch3 at the apical ectodermal fold (AEF) at this later stage. The expression pattern of all these genes is maintained at least until 72 hpf.
However, and in contrast to the situation in the chick embryo, where cyclic gene expression was described for the Notch-based clock gene hairy2 [13], we were unable to detect any sign of dynamic gene expression of any of the zebrafish cyclic genes, namely her1, her7, her12 and her15, in the developing pectoral fin. Nevertheless, the expression of several elements of the Notch pathway in the myogenic region and in the apical ectodermal ridge (AER) in fish pectoral fins was similar to that reported in developing chick and mouse limbs [15,16,19,20], suggesting a conserved role for Notch signalling in vertebrate appendage development. We used a strong Notch signalling mutant allele (mib ta52b) [12,27] to dissect the function of the Notch pathway in pectoral fin development. We started by analysing the impact that Notch signalling could have on the early establishment of signalling centres, namely the AEF and the ZPA, as these are crucial to pattern the fin along the PD and the AP axes, respectively. We found that in the absence of Notch signalling the majority of the components of the Fgf signalling pathway that are mediators of AEF function and of the Hh pathway that are mediators of ZPA function appear normal. Fitting with these data is the fact that the normal AEF and ZPA function observed in the absence of Notch signalling translates into a normal regional identity along the fin bud, as evaluated by the expression of hox genes. In the absence of Notch signalling we also found that the early specification of cartilage and muscle lineages was apparently normal in the pectoral fins, as we could detect expression of sox9a/b and myod in the correct territories, respectively.
Figure 7. Pax7-positive cell numbers in sibling and mib ta52b mutant pectoral fins at 2 dpf (sibling n = 6, mib ta52b n = 7, t-test p = 0.004), 3 dpf (sibling n = 13, mib ta52b n = 10, t-test p = 0.0002), 4 dpf (sibling n = 7, mib ta52b n = 14, t-test p = 0.0019) and 5 dpf (sibling n = 14, mib ta52b n = 15, t-test p = 0.003). Error bars = SD. 5 dpf pectoral fins immunostained for Pax7 and DAPI to label muscle progenitor cells and nuclei, respectively, in (B) siblings and (C) mib ta52b mutants. doi:10.1371/journal.pone.0068021.g007

This also compares with the results described for notch1, jagged2 and delta1 mouse mutants, in which no muscle or cartilage/bone specification problems were reported [14,15,16,20]. Thus, we did not detect an early patterning/specification phenotype in the pectoral fins of mib ta52b mutants. A striking phenotype was observed in Notch signalling-defective muscle at 3-5 dpf. Gaps in the fin musculature were detected in mib ta52b mutants that seem to result from the formation of thin, wavy and fragmented striated muscle fibres. At the ultrastructural level, we found myofibrils with recognisable sarcomeres present in mib ta52b mutant muscle fibres. However, their integrity was severely disrupted. A role for Mib2 in maintaining the integrity of fully differentiated muscles was shown in Drosophila. In the absence of mib2, apoptotic degeneration [42] and sarcomere instability [43] occur in skeletal muscles. Interestingly, Mib2 seems to maintain muscle integrity in a novel Notch-independent pathway, as mib1 fails to rescue the muscle mutant phenotype of mib2 in Drosophila [42,43]. These results in Drosophila raised the possibility that the skeletal muscle phenotype that we have uncovered in the mib ta52b mutant allele could be due to Mib2 acting in a Notch-independent manner, since both mib1 and mib2 are affected in this mutant allele [12,27]. Therefore, we analysed the mutant allele mib m178, where only mib1 is affected [11,44,45], and showed that thin, wavy and fragmented striated muscle fibres are still formed.
In addition, we downregulated the ligand jagged2 and the transcriptional components Su(H)1+2 downstream of the Notch receptor and used the DAPT drug to block Notch signalling in a time-controlled manner. In all these situations a similar severe disruption of the skeletal muscle fibres was observed in the pectoral fins at 3-5 dpf. Therefore, we conclude that in zebrafish canonical Notch signalling through the jagged2 ligand is essential to promote the integrity of the muscle fibres. To dissect further the reason behind the muscle phenotype in mib ta52b mutants, we examined expression of the muscle cytoskeletal components Desmin and Vinculin. Desmin is an intermediate filament protein that anchors the myofibrils to each other and to the subsarcolemmal cytoskeleton [35,36], and Vinculin is a focal adhesion molecule involved in the anchorage of the actin filaments of the muscle fibres to the extracellular matrix [38]. Although the genes encoding Desmin and Vinculin were apparently transcribed normally, we detected a severe downregulation of Desmin and Vinculin protein levels in mib ta52b mutant pectoral fins. As recruitment of Vinculin to focal adhesions is partly force-dependent, our results raise the possibility that in the absence of Notch signalling the muscle fibres do not generate sufficient mechanical force to localize/stabilize Vinculin at the distal end of the muscle fibres. The similar behaviour of Desmin suggests that the accumulation of this protein could also be dependent on forces generated by the muscle fibres. In fact, we showed in an independent manner that, in embryos where muscle contraction was inhibited using blebbistatin or MS222, the levels of Desmin and Vinculin proteins were severely downregulated, even though the levels of mRNA were not affected. Why might Notch signalling defects lead to weak muscle fibres and subsequent failure of cytoskeletal maturation?
We found a decrease of Pax7-positive cells in mib ta52b mutants from early stages of pectoral fin development, suggestive of a premature uncontrolled differentiation of myogenic progenitors, as has been shown in other contexts in which Notch signalling was blocked [20,23,24]. We propose that altered myogenesis in the absence of Notch signalling in mib ta52b mutant pectoral fins, reflected by the lack of Pax7-positive cells, might underlie defective muscle fibre generation, which in turn triggers a failure of normal muscle cytoskeletal maturation. An additional indication that the muscle fibres formed in pectoral fins in the absence of Notch signalling are unable to generate normal mechanical forces is the observation that actin stress fibres are not formed in endoskeletal disc cells. A similar actin defect is observed in immotile blebbistatin- and MS222-treated embryos. It has been shown that external forces applied to the cytoskeleton cause the formation of stress fibres, as force promotes actin filament aggregation in an orientation parallel to the direction of the force application [46]. Interestingly, several studies have found evidence for a relationship between muscle force and bone formation/integrity [32,47,48,49,50,51,52,53]. However, without Notch signalling our embryos do not survive beyond 5 dpf, so we were unable to determine the effect of the lack of stress fibres in the endoskeleton on the later formation of bone elements in pectoral fins.

Ethics Statement

All experiments involving animals were approved by the Animal User and Ethical Committees at Instituto de Medicina Molecular, in accordance with directives from Direcção Geral Veterinária (PORT1005/92) or under UK Home Office licence.

Zebrafish Embryos

Embryos from AB lines and mib ta52b and mib m178 mutants were kept at 28°C and staged according to [54].

In situ Hybridization

Single whole-mount in situs were performed as described [55].
Double whole-mount fluorescent in situs were performed as described [56] with modifications; the red signal was developed with FAST RED (Roche AP substrate) and the green with Tyramide FITC (POD substrate).

Drug Treatment

DAPT (100 mM final concentration, D5942-Sigma) or DMSO control was added to embryo medium at the indicated time points (1-cell stage, 21 hpf or 48 hpf) and then washed out at 4-5 dpf. Blebbistatin (6 mM final concentration, B0560-Sigma) or DMSO control was added to embryo medium at 48 or 72 hpf and then washed out at 5 dpf. MS222 (0.016%, A5040-Sigma) was added at 48 hpf to embryo medium and washed out at 4-5 dpf, whereas control embryos were grown in embryo medium until 5 dpf.

Imaging

Following the in situs, whole embryos were photographed with a LEICA Z6 APO stereoscope coupled to a LEICA DFC490 camera, and detached pectoral fins and sections were photographed with a Leica DMR microscope coupled to a Leica DC500 camera. Following the immunostainings, detached pectoral fins and histological sections of pectoral fins at 48 hpf were examined with a Zeiss LSM 510 Meta confocal microscope. Two- and three-colour confocal z-series images were acquired using sequential laser excitation, converted into a single-plane projection and analyzed using ImageJ software (LSM Reader).

Transmission Electron Microscopy (TEM)

For TEM analysis, wild-type and mib ta52b mutant embryos at 5 dpf were fixed with 2.5% glutaraldehyde in 0.1 M sodium cacodylate buffer, post-fixed in 1% osmium tetroxide, dehydrated through an ethanol series and then embedded in Spurr's resin. The embedded embryos were serially sectioned (transverse sections) at the level of the pectoral fins using an ultramicrotome. The ultrathin sections (70 nm) were then stained with uranyl acetate and lead citrate and viewed using a Jeol Jem-1010 electron microscope. Figures were assembled using Adobe Photoshop CS2.
Statistical Methods

In detached pectoral fins and histological sections of siblings and mib ta52b mutants, cells were counted manually in Pax7- and DAPI-stained preparations photographed with a Zeiss LSM 510 confocal microscope. In the case of the histological sections, the number of Pax7-positive cells was counted in each slice of the sectioned pectoral fin. A two-tailed Student's t-test was used. Statistical significance was determined as a p-value of 0.05 or less.
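The two-sample comparison used for the Pax7 counts can be sketched as follows. This is a minimal stand-alone implementation of the pooled two-sample Student's t statistic; the example counts are hypothetical stand-ins (the paper reports only group means and sample sizes, not the raw per-fin counts).

```python
import math

def student_t(a, b):
    """Pooled two-sample Student's t statistic and degrees of freedom.

    Compares the means of two independent samples assuming equal
    variances, as in a classic (non-Welch) Student's t-test.
    """
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Sums of squared deviations from each group mean.
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    df = na + nb - 2
    sp2 = (ssa + ssb) / df                       # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, df

# Hypothetical Pax7-positive cell counts per fin (NOT the study's raw data).
sibling = [13, 14, 15, 12, 14, 15]
mutant = [4, 5, 3, 4, 6, 4, 5]

t, df = student_t(sibling, mutant)
# t is then compared with the two-tailed critical value for df degrees
# of freedom to obtain the p-value.
```

In practice one would use a statistics library (e.g. `scipy.stats.ttest_ind`) to obtain the p-value directly; the hand-rolled version above only makes the arithmetic behind the reported p-values explicit.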
Uniqueness and numerical methods in inverse obstacle scattering

The inverse problem we consider in this tutorial is to determine the shape of an obstacle from the knowledge of the far field pattern for scattering of time-harmonic plane waves. In the first part we will concentrate on the issue of uniqueness, i.e., we will investigate under what conditions an obstacle and its boundary condition can be identified from a knowledge of its far field pattern for incident plane waves. We will review some classical and some recent results and draw attention to open problems. In the second part we will survey numerical methods for solving inverse obstacle scattering problems. Roughly speaking, these methods can be classified into three groups. Iterative methods interpret the inverse obstacle scattering problem as a nonlinear ill-posed operator equation and apply iterative schemes such as regularized Newton methods, Landweber iterations or conjugate gradient methods for its solution. Decomposition methods, in principle, separate the inverse scattering problem into an ill-posed linear problem to reconstruct the scattered wave from its far field and the subsequent determination of the boundary of the scatterer from the boundary condition. Finally, the third group consists of the more recently developed sampling methods. These are based on the numerical evaluation of criteria in terms of indicator functions that decide whether a point lies inside or outside the scatterer. The tutorial will give a survey by describing one or two representatives of each group, including a discussion of the various advantages and disadvantages.

Introduction

The propagation of acoustic waves in a homogeneous isotropic medium with constant speed of sound c is governed by the wave equation $\Delta U = \frac{1}{c^2}\,\frac{\partial^2 U}{\partial t^2}$. For time-harmonic waves of the form $U(x,t) = \operatorname{Re}\{u(x)\,e^{-i\omega t}\}$ with frequency $\omega > 0$, the complex-valued space-dependent part $u$ satisfies the Helmholtz equation $\Delta u + k^2 u = 0$ with wave number $k = \omega/c$, and the scattered wave $u^s$ is required to satisfy the Sommerfeld radiation condition $\lim_{r \to \infty} r \left( \frac{\partial u^s}{\partial r} - ik u^s \right) = 0$, $r = |x|$, uniformly for all directions.
The total wave u is obtained via superposition, $u = u^i + u^s$, and for most of this tutorial we assume the incident wave to be a plane wave, that is, $u^i(x) = e^{ik\,x \cdot d}$ with a unit vector d giving the direction of propagation. The scattering problem then consists of finding the total field u satisfying the Helmholtz equation in the exterior of the scatterer D together with the boundary condition

$Bu = 0$ on $\partial D$,     (1.2)

where $Bu = u$ for a sound-soft scatterer (Dirichlet condition) or $Bu = \frac{\partial u}{\partial \nu} + ik\lambda u$ for the impedance condition, with the exterior unit normal vector ν to ∂D and some impedance function λ ≥ 0 on ∂D. Note that the Neumann boundary condition for sound-hard scatterers is included as the case where λ = 0. For simplicity, throughout the tutorial we assume that the boundary ∂D of the scatterer D is C² smooth. The Sommerfeld radiation condition characterizes outgoing waves and ensures uniqueness for the obstacle scattering problem for both of the above boundary conditions (see [7]). For brevity, solutions u^s to the Helmholtz equation that satisfy the Sommerfeld radiation condition are called radiating solutions. They can be shown to have an asymptotic behavior of the form

$u^s(x) = \frac{e^{ik|x|}}{|x|} \left\{ u_\infty(\hat{x}) + O\!\left(\frac{1}{|x|}\right) \right\}, \quad |x| \to \infty,$     (1.4)

uniformly with respect to all directions $\hat{x} = x/|x|$. The function u_∞ is known as the far field pattern of the scattered wave and is an analytic function of $\hat{x}$ on the unit sphere Ω := {x ∈ IR³ : |x| = 1}. As one of the most important tools in scattering theory, Rellich's lemma (see Theorem 2.13 in [7]) provides a one-to-one correspondence between a radiating solution u^s to the Helmholtz equation and its far field pattern u_∞ in the sense that u_∞ = 0 on Ω (or on an open subset of Ω) implies that u^s = 0 in its domain of definition. The inverse scattering problem that we are concerned with is to determine the shape and location of the scatterer D from a knowledge of the far field pattern u_∞ for one or several incident plane waves. We note that this inverse problem is nonlinear in the sense that the scattered wave depends nonlinearly on the scatterer D. More importantly, it is ill-posed, since the determination of D does not depend continuously on the far field pattern in any reasonable norm. This issue of ill-posedness will be handled using standard regularization techniques, e.g., Tikhonov regularization (see [7]).
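As a quick numerical sanity check, the following self-contained sketch verifies that the plane wave $e^{ik\,x\cdot d}$ satisfies the Helmholtz equation $\Delta u + k^2 u = 0$, using second-order central differences; the wave number, direction and test point are arbitrary choices for illustration, not values from the tutorial.

```python
import cmath
import math

k = 2.0
d = (1 / math.sqrt(3), 1 / math.sqrt(3), 1 / math.sqrt(3))  # unit direction

def u_inc(x, y, z):
    """Incident plane wave u^i(x) = exp(i k x . d)."""
    return cmath.exp(1j * k * (d[0] * x + d[1] * y + d[2] * z))

def laplacian(f, p, h=1e-4):
    """Second-order central-difference Laplacian of f at point p."""
    lap = 0.0
    for i in range(3):
        q_plus = list(p); q_plus[i] += h
        q_minus = list(p); q_minus[i] -= h
        lap += f(*q_plus) - 2 * f(*p) + f(*q_minus)
    return lap / h**2

p = (0.3, -0.7, 1.1)  # arbitrary test point
residual = abs(laplacian(u_inc, p) + k**2 * u_inc(*p))
# residual is zero up to discretization and rounding error
```

The same finite-difference check applies to any claimed solution of the Helmholtz equation, e.g. the spherical wave fields appearing later in the tutorial.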
We illustrate the nonlinearity and ill-posedness of the inverse obstacle scattering problem by looking at a simple example. For this we consider as incident field the entire solution v^i to the Helmholtz equation given by

$v^i(x) = \frac{\sin k|x|}{k|x|};$

the field v^i is a Herglotz wave function (see [7]), i.e., a superposition of plane waves. For D a sound-soft ball of radius R centered at the origin the scattered wave is given by

$v^s(x) = -\,\frac{\sin kR}{k}\,\frac{e^{ik(|x|-R)}}{|x|}.$

This leads to the total field

$v(x) = \frac{\sin k|x|}{k|x|} - \frac{\sin kR}{k}\,\frac{e^{ik(|x|-R)}}{|x|}$

and the far field pattern

$v_\infty = -\,e^{-ikR}\,\frac{\sin kR}{k}.$     (1.8)

Therefore, given the a priori information that the scatterer is a ball centered at the origin, (1.8) provides a nonlinear equation for determining the radius R. Concerning the ill-posedness we consider a perturbed far field pattern

$v_\infty^\delta = v_\infty + \delta\, Y_n$

with some δ ∈ IR and a spherical harmonic Y_n of degree n. Then, in view of the asymptotic behavior of the spherical Hankel functions for large argument, the corresponding total field is given in terms of an outgoing spherical wave function

$v^\delta(x) = v(x) + \delta\, k\, i^{\,n+1}\, h_n^{(1)}(k|x|)\, Y_n(\hat{x})$

with the spherical Hankel function h_n^{(1)} of order n and of the first kind (see Section 2.4 in [7]). This implies

$\|v^\delta - v\|_{\infty,\,\partial D} = |\delta|\, k\, |h_n^{(1)}(kR)|\, \|Y_n\|_{\infty,\,\Omega}$

and consequently, by the asymptotics of the spherical Hankel functions for large order, it follows that $\|v^\delta - v\|_{\infty,\,\partial D} \to \infty$ as $n \to \infty$ for every fixed δ ≠ 0. This illustrates that small changes in the data v_∞ can cause large errors in the solution of the inverse problem, or a solution even may not exist anymore, since v^δ may fail to have a closed surface as its zero level surface.
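The blow-up for large order n can be seen numerically. The sketch below (a minimal stdlib-only illustration, not code from the tutorial) uses the spherical Bessel function y_n, computed by the standard three-term recurrence, as a lower bound for |h_n^{(1)}| = |j_n + i y_n| ≥ |y_n|; at kR = 1 the amplification applied to a perturbation of the far field data grows explosively with n.

```python
import math

def abs_h1_lower(n, x):
    """Lower bound |h_n^{(1)}(x)| >= |y_n(x)|, with y_n the spherical
    Bessel function of the second kind, computed via the upward
    recurrence f_{n+1}(x) = ((2n+1)/x) f_n(x) - f_{n-1}(x),
    which is numerically stable for y_n."""
    y_prev = -math.cos(x) / x                        # y_0(x)
    y_curr = -math.cos(x) / x**2 - math.sin(x) / x   # y_1(x)
    if n == 0:
        return abs(y_prev)
    for m in range(1, n):
        y_prev, y_curr = y_curr, (2 * m + 1) / x * y_curr - y_prev
    return abs(y_curr)

kR = 1.0
growth = [abs_h1_lower(n, kR) for n in range(16)]
# |h_n^{(1)}(kR)| grows roughly like (2n-1)!!/(kR)^{n+1}, so the error
# amplification for a far field perturbation of degree n explodes with n.
```

This is exactly the mechanism behind the ill-posedness: a data perturbation of fixed size δ in the degree-n far field mode is multiplied by |h_n^{(1)}(kR)| when mapped back to the scatterer.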
In addition, as is the case, for example, in applications of inverse scattering techniques in land mine detection, the background might not be homogeneous and then must be modelled as a layered medium. In this tutorial our main concern is with the issues of uniqueness and (stabilized) reconstruction algorithms. In the subsequent section 2 we will address the issue of uniqueness. After settling the uniqueness issue one might be tempted to ask for existence of solutions to the inverse scattering problem. However, for inverse problems, in general, this is the wrong question to ask. For inverse scattering problems, positive answers would need to characterize far field patterns for which the corresponding total field vanishes on a closed surface and this problem is beyond the capability of analysis. This is also reflected through the above example for the ill-posedness of the inverse obstacle scattering problem. Therefore, after settling uniqueness, the main task in inverse obstacle scattering is to design methods for the approximate and stable solution under the assumption of a correct or a perturbed far field pattern for a scatterer D. The remaining sections 3-5 will introduce the main ideas of iterative methods, decomposition methods and sampling methods for approximately solving the inverse obstacle scattering problem. Although most of our analysis in sections 3-5 can be extended to the impedance and/or the Neumann boundary condition, we confine our presentation of reconstruction methods to the case of the Dirichlet boundary condition. For more detailed presentations of the current state of research in inverse obstacle scattering we refer to the monographs [3,7,37] and the surveys [5,9,26,39,40].
Uniqueness

Since by Rellich's lemma the far field pattern uniquely determines the scattered wave and consequently the total wave in the exterior of the scatterer, the question of uniqueness for the inverse problem is equivalent to the question whether the total wave can satisfy the boundary condition (1.2) for two different domains D_1 and D_2. We immediately can exclude the case where the two scatterers are disjoint, i.e., D_1 ∩ D_2 = ∅. In this situation, the scattered wave u^s is well defined in all of IR^3, since it is defined in the exterior of both D_1 and D_2. Consequently, the scattered wave u^s is an entire solution to the Helmholtz equation satisfying the radiation condition and therefore it must be identically zero. However, then the total wave coincides with the incident field and this leads to a contradiction, because the plane wave by itself cannot satisfy the boundary condition. For the Dirichlet and Neumann condition this is obvious, since the plane wave is given by an exponential function. For the impedance boundary condition, Bu^i = 0 on ∂D would imply that ν · d + λ = 0 on ∂D. This, with the aid of λ ≥ 0, leads to a contradiction via the divergence theorem, since ∫_∂D ν · d ds = 0 forces λ = 0 on ∂D and then ν · d = 0 on all of ∂D, which is impossible for a closed surface. Hence, non-uniqueness can occur only when D_1 ∩ D_2 ≠ ∅, and, presently, this case cannot be excluded on the knowledge of the far field pattern for scattering of one incident plane wave only. However, when we have overdetermined data in the sense that the far field pattern is known for all incident directions we have the following classical uniqueness result for sound-soft scatterers due to Schiffer.

Theorem 2.1 Assume that D_1 and D_2 are two sound-soft scatterers such that their far field patterns coincide for all incident directions at a fixed wave number. Then D_1 = D_2.

The idea of the proof is that, for two different scatterers producing the same far field patterns, the total waves would provide an infinite number of linearly independent Dirichlet eigenfunctions of −∆ for the eigenvalue k^2 in a bounded domain D* contained in the complement of the unbounded component G of the complement of D_1 ∪ D_2, contradicting the finite multiplicity of Dirichlet eigenvalues. Schiffer's uniqueness result was obtained around 1960 and appeared as a private communication in the monograph by Lax and Phillips [31]. We note that the proof presented in [31] contains a slight technical fault since the above argument does not work if D* is replaced by the set chosen in [31]. By analyticity the far field pattern is completely determined on the whole unit sphere by only knowing it on some surface patch.
Therefore, Schiffer's result and, simultaneously, all other results of this section carry over to the case of limited aperture problems where the far field is only known on some open subset of Ω. Using the strong monotonicity property of the Dirichlet eigenvalues of −∆ and extending Schiffer's ideas, Colton and Sleeman [10] showed that a sound-soft scatterer is uniquely determined by the far field pattern for one incident plane wave under the a priori assumption that it is contained in a ball of radius R such that kR < π. More recently, exploiting the fact that the wave functions are complex-valued, this bound was improved to kR < 4.49 by Gintides [13]. Schiffer's proof cannot be generalized to other boundary conditions. This is due to the fact that the finiteness of the dimension of the eigenspaces for eigenvalues of −∆ for the Neumann or impedance boundary condition requires the boundary of the intersection D* from the proof of Theorem 2.1 to be sufficiently smooth. Therefore, a different approach is required for establishing uniqueness for the inverse scattering problem for other boundary conditions. Assuming two different scatterers that produce the same far field patterns for all incident directions, Isakov [19] obtained a contradiction by considering a sequence of solutions with a singularity moving towards a boundary point of one scatterer that is not contained in the other scatterer. He used weak solutions and the analysis is technically involved. Later on, Kirsch and Kress [24] realized that the proof can be simplified by using classical solutions rather than weak solutions and by obtaining the contradiction by considering pointwise limits of the singular solutions rather than limits of L^2 integrals. Only after this new uniqueness proof was published, it was also observed by the authors that for scattering from impenetrable objects it is not required to know the boundary condition for the scattered wave, as stated in the following theorem.

Theorem 2.2 Assume that D_1 and D_2 are two scatterers with boundary conditions B_1 and B_2 such that their far field patterns coincide for all incident directions at a fixed wave number. Then D_1 = D_2 and B_1 = B_2.
In the proof of that theorem, in addition to scattering of plane waves, we also need to consider scattering of point sources Φ(·, z) with source location z ∈ IR^3 \ D̄, given through the fundamental solution

Φ(x, z) := e^{ik|x−z|}/(4π|x−z|),  x ≠ z,

to the Helmholtz equation in IR^3. We denote the corresponding scattered wave by w^s(·, z) and its far field pattern by w_∞(·, z). Scattering by plane waves and by point sources is related through the mixed reciprocity relation (see [26,37])

4π w_∞(−d, z) = u^s(z, d),  z ∈ IR^3 \ D̄,  d ∈ Ω,    (2.1)

which is valid both for the sound-soft and impedance boundary condition.

Proof. Following Potthast [37] we simplify the approach of Kirsch and Kress through the use of the mixed reciprocity relation (2.1). Let u_∞,1 and u_∞,2 be the far field patterns for plane wave incidence and let w^s_1 and w^s_2 be the scattered waves for point source incidence corresponding to D_1 and D_2, respectively. With (2.1) and two applications of Rellich's lemma, first for scattering of plane waves and then for scattering of point sources, from the assumption u_∞,1 = u_∞,2 for all incident directions it follows that w^s_1(·, z) = w^s_2(·, z) in G for all z ∈ G. Here, as in the previous proof, G denotes the unbounded component of the complement of D_1 ∪ D_2. Now assume that D_1 ≠ D_2. Then, without loss of generality, there exists x* ∈ ∂G such that x* ∈ ∂D_1 and x* ∉ D̄_2. In particular, denoting by ν the outward unit normal to ∂D_1, we have z_n := x* + (1/n) ν(x*) ∈ G for sufficiently large n. Then, on one hand, we obtain that the limit lim_{n→∞} w^s_2(z_n, z_n) exists, since w^s_2 is continuously differentiable in a neighborhood of x* ∉ D̄_2 due to reciprocity and the well-posedness of the direct scattering problem with boundary condition B_2 on ∂D_2. On the other hand we find that lim_{n→∞} |w^s_1(z_n, z_n)| = ∞ because of the boundary condition for w^s_1(·, z_n) on ∂D_1 and the singularity of the incident field Φ(·, z_n) at z_n. This contradicts w^s_1(z_n, z_n) = w^s_2(z_n, z_n) for all sufficiently large n, and therefore D_1 = D_2. Finally, to establish that λ_1 = λ_2 for the case of two impedance boundary conditions B_1 and B_2 we set D = D_1 = D_2 and assume that λ_1 ≠ λ_2. Then from Rellich's lemma and the boundary conditions, considering one incident field, we have that the total waves coincide, u := u_1 = u_2 in IR^3 \ D̄, and satisfy both ∂u/∂ν + ikλ_1 u = 0 and ∂u/∂ν + ikλ_2 u = 0 on ∂D. Hence, (λ_1 − λ_2)u = 0 on ∂D.
From this, in view of the fact that λ_1 ≠ λ_2, by Holmgren's theorem (see [26]) and the boundary condition we obtain that u = 0 in IR^3 \ D̄. This leads to the contradiction that the incident field must satisfy the radiation condition. Hence, λ_1 = λ_2. The case when one of the boundary conditions is the sound-soft boundary condition is dealt with analogously. Although there is widespread belief that the far field pattern for one single direction and one single wave number determines the scatterer without any additional a priori information, establishing this result still remains a challenging open problem. To illustrate the difficulty of a proof, we consider scattering of the entire solution v^i given by (1.5) from a sound-soft ball D of radius R centered at the origin. Then from (1.7) we observe that the total field v vanishes on the spheres with radius R + mπ/k centered at the origin for all integers m. This indicates that proving uniqueness of the inverse obstacle scattering problem with one single incident plane wave needs to incorporate special features of the incident field. Some progress has recently been obtained by Cheng and Yamamoto [4], Alessandrini and Rondi [1], and Liu and Zou [32] who established uniqueness with one incident plane wave for polyhedral scatterers. Assuming that there exist two polyhedral scatterers producing the same far field pattern for one incident plane wave, the main idea of their proofs is to use the reflection principle to construct a zero field line extending to infinity. However, in view of the fact that the scattered wave tends to zero uniformly at infinity, this contradicts the property that the incident plane wave has modulus one everywhere.

Iterative methods

We now turn to reconstruction methods and as a first group we describe iterative methods.
Here the inverse problem is interpreted as a nonlinear ill-posed operator equation which is solved by iteration methods such as regularized Newton methods, Landweber iterations or conjugate gradient methods. The solution to the direct scattering problem with a fixed incident plane wave u^i defines an operator A : ∂D ↦ u_∞ that maps the boundary ∂D of the scatterer D onto the far field pattern u_∞ of the scattered wave. In terms of this operator, given a far field pattern u_∞, the inverse problem just consists in solving the nonlinear and ill-posed operator equation A(∂D) = u_∞ for the unknown surface ∂D. In order to define the operator A rigorously, the most appropriate approach is to choose a fixed reference domain D of class C^2 and consider a family of scatterers D_h with boundaries represented in the form

∂D_h = {x + h(x) : x ∈ ∂D},    (3.1)

where h : ∂D → IR^3 is of class C^2 and sufficiently small in the C^2 norm on ∂D. Then we may consider the operator A as a mapping from a ball V := {h ∈ C^2(∂D) : ‖h‖_{C^2} < a} ⊂ C^2(∂D) with sufficiently small radius a > 0 into L^2(Ω). However, for ease of presentation, we proceed differently and consider only starlike domains, i.e., domains D_r that allow a parameterization of the form

∂D_r = {r(x̂) x̂ : x̂ ∈ Ω},    (3.2)

where r : Ω → IR is a positive function representing the radial distance from the origin. Then, we may interpret the operator A as a mapping A : r ↦ u_∞ and, consequently, the inverse obstacle scattering problem consists in solving

A(r) = u_∞    (3.3)

for the unknown radial function r. Since A is nonlinear, we may linearize in terms of a Fréchet derivative A′(r). Then given a current approximation r for the solution of (3.3), in order to obtain an update r + q, instead of solving the full equation A(r + q) = u_∞ we solve the approximate linear equation

A(r) + A′(r) q = u_∞    (3.4)

for q. We note that the linearized equation inherits the ill-posedness of the nonlinear equation and therefore regularization is required. As in the classical Newton iterations this linearization procedure is iterated until some stopping criterion is satisfied.
The Fréchet differentiability of the operator A is addressed in the following theorem.

Theorem 3.1 The operator A : r ↦ u_∞ is Fréchet differentiable with A′(r) q = v_{q,∞}, where v_{q,∞} is the far field pattern of the solution v_q to the Dirichlet problem for the Helmholtz equation in IR^3 \ D̄_r satisfying the Sommerfeld radiation condition and the boundary condition

v_q = − q x̂ · ν ∂u/∂ν on ∂D_r    (3.5)

with u = u^i + u^s the total wave for scattering from the domain D_r.

The boundary condition (3.5) for the derivative can be obtained formally by differentiating the boundary condition u = 0 on ∂D_r with respect to ∂D_r by the chain rule. It was obtained by Roger [41] who first employed Newton type iterations for the approximate solution of inverse obstacle scattering problems. Rigorous foundations for the Fréchet differentiability were given by Kirsch [20] in the sense of a domain derivative via variational methods and by Potthast [33] via boundary integral equation techniques. Alternative proofs were contributed by Kress and Päivärinta [28] based on Green's theorems and a factorization of the difference of the far field for the domains D_r and D_{r+q} and by Hohage [16] and Schormann [42] via the implicit function theorem. To justify the application of regularization methods for stabilizing (3.4) one has to establish injectivity and dense range of the operator A′(r) : L^2(Ω) → L^2(Ω). This is settled for the Dirichlet and impedance boundary condition for large λ and remains an open problem for the Neumann boundary condition [29]. In the classical Tikhonov regularization, (3.4) is replaced by

α q_α + [A′(r)]* A′(r) q_α = [A′(r)]* (u_∞ − A(r))

with some positive regularization parameter α and the L^2 adjoint [A′(r)]* of A′(r). For details on the numerical implementation, in particular on the choice of the regularization parameter, and numerical examples in two dimensions we refer to [7,15,20,25,27] and the references therein. The numerical examples strongly indicate that it is advantageous to use some Sobolev norm instead of the L^2 norm as penalty term in the Tikhonov regularization.
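The effect of the Tikhonov term can be seen already in a toy computation (illustrative only, not the actual scattering operator): for a diagonal operator with rapidly decaying singular values, mimicking the smoothing character of the linearized operator, the naive solve amplifies tiny data errors enormously while the regularized solution stays usable:

```python
import numpy as np

n = 30
s = 2.0 ** -np.arange(n)                      # toy singular values, rapid decay
x_true = np.ones(n)
u = s * x_true                                # exact data of the diagonal operator
u_delta = u + 1e-6 * (-1.0) ** np.arange(n)   # tiny measurement error

x_naive = u_delta / s                         # unregularized solve blows up
alpha = 1e-9
x_tik = s * u_delta / (s ** 2 + alpha)        # Tikhonov solution componentwise

err_naive = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
err_tik = np.linalg.norm(x_tik - x_true) / np.linalg.norm(x_true)
```

The price of stability is a bias in the components belonging to small singular values, which is why the choice of the regularization parameter matters.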
Numerical examples in three dimensions have been more recently reported by Farhat et al. [12] and by Harbrecht and Hohage [14]. In closing the section on Newton iterations we note as their main advantages that this approach is conceptually simple and, as the numerical examples indicate, leads to highly accurate reconstructions with reasonable stability against errors in the far field pattern. On the other hand, it should be noted that for the numerical implementation an efficient forward solver is needed and good a priori information is required in order to ensure convergence. In addition, on the theoretical side, although some progress has been made through the work of Hohage [16] and Potthast [38], the convergence of regularized Newton iterations for inverse obstacle scattering problems has not been completely settled.

Decomposition methods

The main idea of so-called decomposition methods is to break up the inverse obstacle scattering problem into two parts: the first part deals with the ill-posedness by constructing the scattered wave u^s from its far field pattern u_∞ and the second part deals with the nonlinearity by determining the unknown boundary ∂D of the scatterer as the location where the boundary condition for the total field u^i + u^s is satisfied in a least-squares sense. In the potential method due to Kirsch and Kress [23], for the first part, enough a priori information on the unknown scatterer D is assumed so one can place a closed surface Γ inside D. Then the scattered field u^s is sought as a single-layer potential

u^s(x) = ∫_Γ φ(y) Φ(x, y) ds(y)    (4.1)

with an unknown density φ ∈ L^2(Γ). In this case the far field pattern u_∞ has the representation u_∞(x̂) = (1/4π) ∫_Γ e^{−ik x̂·y} φ(y) ds(y). Given the far field pattern u_∞, the density φ is now found by solving the integral equation of the first kind

S_∞ φ = u_∞    (4.2)

with the compact integral operator (S_∞ φ)(x̂) := (1/4π) ∫_Γ e^{−ik x̂·y} φ(y) ds(y), x̂ ∈ Ω. Due to the analytic kernel of S_∞, the integral equation (4.2) is severely ill-posed.
For a stable numerical solution of (4.2) Tikhonov regularization can be applied, that is, the ill-posed equation (4.2) is replaced by

α φ_α + S_∞^* S_∞ φ_α = S_∞^* u_∞    (4.3)

with some positive regularization parameter α and the adjoint S_∞^* of S_∞ : L^2(Γ) → L^2(Ω). Given an approximation of the scattered wave u^s_α, obtained by inserting a solution φ_α of (4.3) into the potential (4.1), the unknown boundary ∂D is then determined by requiring the sound-soft boundary condition

u^i + u^s = 0 on ∂D    (4.4)

to be satisfied in a least-squares sense, i.e., by minimizing the L^2 norm of the defect over a suitable set of admissible surfaces Λ. Of course, instead of solving this minimization problem we also can confine ourselves to visualizing ∂D by color coding the values of the modulus |u| of the total field u ≈ u^i + u^s_α on a sufficiently fine grid over IR^3. Clearly, we can expect (4.2) to have a solution φ ∈ L^2(Γ) if and only if u_∞ is the far field of a radiating solution to the Helmholtz equation in the exterior of Γ with sufficiently smooth boundary values on Γ. Hence, the solvability of (4.2) is related to regularity properties of the scattered wave which, in general, cannot be known in advance for the unknown scatterer D. Nevertheless, it is possible to provide a solid theoretical foundation to the above procedure (see [7,23]). The point source method of Potthast [34,35,37] can also be interpreted as a decomposition method. Its motivation is based on Green's representation for the scattered wave for a sound-soft obstacle

u^s(x) = −∫_∂D ∂u/∂ν(y) Φ(x, y) ds(y),  x ∈ IR^3 \ D̄,    (4.5)

and its far field pattern

u_∞(x̂) = −(1/4π) ∫_∂D ∂u/∂ν(y) e^{−ik x̂·y} ds(y),  x̂ ∈ Ω.    (4.6)

For z ∈ IR^3 \ D̄ one chooses an auxiliary domain B_z with D̄ ⊂ B_z and z ∉ B̄_z and approximates the point source Φ(·, z) by a Herglotz wave function with kernel g_z. Under the assumption that there does not exist a nontrivial solution to the Helmholtz equation in B_z with homogeneous Dirichlet boundary condition on ∂B_z, the Herglotz wave functions are dense in H^{1/2}(∂B_z) [8,11] and consequently the approximation (4.7) can be achieved uniformly with respect to y on compact subsets of B_z.
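As a quick numerical sanity check of the Herglotz wave function concept (a sketch, not part of the method itself), the superposition of plane waves with the constant kernel g = 1/(4π) reproduces the entire solution sin(k|x|)/(k|x|) from the example in the introduction, since ∫_Ω e^{ik x·d} ds(d) = 4π sin(k|x|)/(k|x|):

```python
import numpy as np

k, r = 1.0, 1.0   # wave number and |x|; by symmetry take x = (0, 0, r)

# midpoint rule in spherical coordinates on the unit sphere Omega
n = 200
theta = (np.arange(n) + 0.5) * np.pi / n
# x . d = r cos(theta); the phi-integration just contributes 2*pi
integral = 2 * np.pi * np.sum(
    np.exp(1j * k * r * np.cos(theta)) * np.sin(theta)) * (np.pi / n)

exact = 4 * np.pi * np.sin(k * r) / (k * r)
```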
Then we can insert (4.7) into (4.5) and use (4.6) to obtain

u^s(z) ≈ 4π ∫_Ω u_∞(−d) g_z(d) ds(d)    (4.8)

as an approximation for the scattered wave u^s. Knowing an approximation for the scattered wave, the boundary ∂D can be found as above from the boundary condition (4.4). The approximation (4.7), for example, can be obtained by solving the ill-posed linear integral equation

∫_Ω e^{ik y·d} g_z(d) ds(d) = Φ(y, z),  y ∈ ∂B_z,    (4.9)

via Tikhonov regularization and the Morozov discrepancy principle. Note that although the integral equation (4.9), in general, is not solvable, the approximation property (4.8) is ensured through the above denseness result on Herglotz wave functions. As a first advantage of the decomposition methods we note that with the idea of separating the ill-posedness and the nonlinearity they are again conceptually straightforward. The second and main advantage consists of the fact that their numerical implementation does not require a forward solver. As a disadvantage, as in the Newton method of the previous section, if we go beyond visualization of the level surfaces of |u| and proceed with the minimization, good a priori information on the unknown scatterer is needed. Furthermore, the accuracy of the reconstructions is slightly inferior to that of the Newton iterations. More recently a hybrid method combining ideas of the decomposition methods and the Newton iterations of the previous section has been suggested [27,30,43]. In principle, this approach may be considered as a modification of the potential method due to Kirsch and Kress in the sense that the auxiliary surface Γ is viewed as an approximation for the unknown boundary and, keeping φ_α fixed as a regularized solution of (4.2), Γ is updated via linearizing the boundary condition (4.4) around Γ. For its brief description we assume the scatterer to be starlike and recall the representation (3.2).
Given a far field u_∞ and a current approximation ∂D_r with radial function r for the boundary surface, we solve the ill-posed integral equation (4.2), with Γ replaced by ∂D_r, by Tikhonov regularization and set u^s equal to the single-layer potential (4.1) with the regularized density. Then we evaluate the boundary values of u = u^i + u^s and its derivatives on ∂D_r via the jump relations and find an update r + q by linearizing the boundary condition u|_{∂D_{r+q}} = 0, that is, by solving

u|_{∂D_r} + q x̂ · grad u|_{∂D_r} = 0

for q. In an obvious way, these two steps are iterated. Clearly, this approach does not require a forward solver and connects ideas of Newton iterations and decomposition methods. From numerical examples (see [27,30,43]) it can be concluded that the quality of the reconstructions is similar to that of Newton iterations in the spirit of the previous section. Without giving any details on the computations, in Fig. 1-4 we present some examples for reconstructions by the above hybrid method obtained by Pedro Serranho. The numerical quadratures were based on Wienert's method [44] as described in section 3.6 of [7] and the radial distance functions were approximated by linear combinations of spherical harmonics up to order eight. In each example the figure on the left hand side gives the exact boundary shape, the figure in the middle the reconstruction with one incident wave in direction of the arrow and the figure on the right hand side gives the difference between the exact and the approximate radial function. The reconstructions are obtained with 2% random noise added to the synthetic far field pattern. The wave number is k = 1.

Sampling methods

The common idea of sampling methods is to decide, via an indicator function f evaluated at sampling points z ∈ IR^3, whether z lies inside or outside the scatterer D; for the numerical reconstructions the criterion is evaluated numerically for a grid of points. As opposed to the two previous groups of methods, in principle, the sampling methods need full data u_∞(x̂, d) for all x̂, d ∈ Ω. We begin by describing the linear sampling method as suggested by Colton and Kirsch [6].
Its basic idea is to find a Herglotz wave function with kernel g, i.e., a superposition of plane waves, such that the corresponding scattered wave v^s coincides with a point source Φ(·, z) located at a point z ∈ D. To this aim we define the far field operator F : L^2(Ω) → L^2(Ω) as the integral operator

(F g)(x̂) := ∫_Ω u_∞(x̂, d) g(d) ds(d),  x̂ ∈ Ω,    (5.1)

with kernel given through the far field pattern. Obviously, by superposition, F g is the far field pattern corresponding to scattering of the Herglotz wave function with kernel g. Then, to achieve the above goal, we have to find the kernel g(·, z) of the Herglotz wave function as a solution to the integral equation of the first kind

F g = Φ_∞(·, z)    (5.2)

with the far field

Φ_∞(x̂, z) = (1/4π) e^{−ik x̂·z}    (5.3)

of the fundamental solution. Assume that g solves equation (5.2). Then, by Rellich's lemma, we have that v^s = Φ(·, z) in the exterior of D. Letting x tend to the boundary and using the boundary condition v^i + v^s = 0 on ∂D we conclude that the Herglotz wave function v^i with kernel g is a solution to the interior Dirichlet problem

∆v^i + k^2 v^i = 0 in D,    (5.4)
v^i = −Φ(·, z) on ∂D.    (5.5)

Conversely, if the Herglotz wave function v^i with kernel g solves (5.4)-(5.5) then its kernel g is a solution of (5.2). Hence, if a solution g(·, z) to the integral equation (5.2) of the first kind exists for all z ∈ D, then from the boundary condition (5.5) for the Herglotz wave function we conclude that ‖g(·, z)‖_{L^2(Ω)} → ∞ as the source point z approaches the boundary ∂D. Therefore, in principle, the boundary ∂D may be found by solving the integral equation (5.2) for z taken from a sufficiently fine grid in IR^3 and determining ∂D as the location of those points z where ‖g(·, z)‖_{L^2(Ω)} becomes large. However, in general, the solution to the interior Dirichlet problem (5.4)-(5.5) will have an extension as a Herglotz wave function across the boundary ∂D only in very special cases (for example if D is a ball with center at z). Hence, the integral equation of the first kind (5.2) will have a solution only in special cases.
Nevertheless, by making use of the denseness properties of the Herglotz wave functions as mentioned above, the following result can be established (see [6]).

Theorem 5.1 Under the assumption that there does not exist a nontrivial solution to the Helmholtz equation in D with homogeneous Dirichlet boundary condition on ∂D, for every ε > 0 and z ∈ D there exists a function g(·, z) ∈ L^2(Ω) such that

‖F g(·, z) − Φ_∞(·, z)‖_{L^2(Ω)} ≤ ε

and the Herglotz wave function v^i with kernel g(·, z) becomes unbounded as z approaches the boundary ∂D.

From this it can be expected that solving the integral equation (5.2) and scanning the values for ‖g(·, z)‖_{L^2(Ω)} will yield an approximation for ∂D through those points where the norm of g is large. A possible procedure with noisy data u_∞,δ satisfying ‖u_∞,δ − u_∞‖_{L^2(Ω×Ω)} ≤ δ with error level δ is as follows. Denote by F_δ the far field operator F with the kernel u_∞ replaced by the data u_∞,δ. Then for each z from a grid in IR^3 determine g_δ = g_δ(·, z) by minimizing the Tikhonov functional

‖F_δ g − Φ_∞(·, z)‖²_{L^2(Ω)} + α ‖g‖²_{L^2(Ω)},

where the regularization parameter α is chosen according to Morozov's generalized discrepancy principle, i.e., α = α(z) is chosen such that

‖F_δ g_δ(·, z) − Φ_∞(·, z)‖_{L^2(Ω)} = δ ‖g_δ(·, z)‖_{L^2(Ω)}.

Then the unknown boundary is determined by those points where ‖g_δ(·, z)‖_{L^2(Ω)} sharply increases. However, there is a problem with the linear sampling method since it is not clear whether the regularized solution obtained for (5.2) by Tikhonov regularization via the discrepancy principle actually provides an approximation in the sense of Theorem 5.1. A remedy, at least for the case of a sound-soft obstacle, has been provided by Arens [2] through connecting the linear sampling method to the factorization method that we are now going to describe as a second example of a sampling method. The drawback that for z ∈ D the integral equation (5.2), in general, is not solvable is remedied through the factorization method due to Kirsch [21]. In this method, (5.2) is replaced by

(F* F)^{1/4} g = Φ_∞(·, z)    (5.6)

leading to the following characterization of the scatterer D.

Theorem 5.2 The point z belongs to D if and only if Φ_∞(·, z) is contained in the range of (F* F)^{1/4}.

For a proof we refer to [21,22].
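The discrepancy-principle choice of α can be sketched numerically (toy data in a singular basis, not a scattering computation; for simplicity the plain rather than the generalized discrepancy principle is used): the residual of the Tikhonov solution is monotonically increasing in α, so the parameter matching a prescribed noise level δ is found by bisection:

```python
import numpy as np

s = 2.0 ** -np.arange(10)   # toy singular values of a compact operator
f = s.copy()                # data in the singular basis (in the range of F)
delta = 0.01                # assumed noise level

def discrepancy(alpha):
    # residual of the Tikhonov solution has coefficients -alpha f_i/(s_i^2+alpha)
    r = alpha * f / (s ** 2 + alpha)
    return np.sqrt(np.sum(r ** 2))

lo, hi = 1e-16, 1e6         # bracket: discrepancy(lo) < delta < discrepancy(hi)
for _ in range(200):
    mid = np.sqrt(lo * hi)  # bisection on a logarithmic scale
    if discrepancy(mid) < delta:
        lo = mid
    else:
        hi = mid
alpha_star = hi
```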
Comparing equations (5.2) and (5.6), the above results can be interpreted in the sense that, as compared with (F* F)^{1/4}, the operator F itself is too much smoothing since Φ_∞(·, z) does not belong to its range F(L^2(Ω)) if z ∈ D. The results also imply that, in contrast to the linear sampling method, if Tikhonov regularization with the regularization parameter chosen by the Morozov discrepancy principle is used to solve equation (5.6) with noisy data u_∞, then g(·, z) converges as the noise level tends to zero if and only if z ∈ D. The most convenient approach to a numerical implementation of Theorem 5.2 is via Picard's criterion for the solvability of ill-posed linear operator equations in terms of a singular system of F. Both for the linear sampling method and the factorization method the indicator function f is given through the norm f(z) := ‖g(·, z)‖_{L^2(Ω)} of the solutions to (5.2) and (5.6), respectively. For Potthast's [36,37,40] singular source method, that we will now consider as a third and final example of sampling methods, the indicator function is given by f(z) := w^s(z, z) through the value of the scattered wave w^s(·, z) for the singular source Φ(·, z) as incident field, evaluated at the source point z. The values w^s(z, z) will be small for points z ∈ IR^3 \ D̄ that are away from the boundary and will blow up when z approaches the boundary due to the singularity of the incident field. Clearly, the singular source method can be viewed as a straightforward numerical implementation of the uniqueness proof for Theorem 2.2. Assuming the far field pattern for plane wave incidence to be known for all incident and observation directions, the indicator function w^s(z, z) can be obtained by two applications of (4.8) and the mixed reciprocity principle (2.1).
Combining (2.1) and (4.8) we obtain the approximation

w^s(z, z) ≈ 4π ∫_Ω ∫_Ω u_∞(−d̂, d) g_z(d̂) g_z(d) ds(d̂) ds(d).

The probe method as suggested by Ikehata [17,18] uses as indicator function an energy integral for w^s(·, z) instead of the point evaluation w^s(z, z). In this sense, it follows the uniqueness proof of Isakov whereas the singular source method mimics the uniqueness proof of Kirsch and Kress. The theoretical foundation of sampling methods provides beautiful and exciting mathematics. Their main advantage consists of their simple implementation and the fact that no a priori information on the shape and location of the obstacle is required. In addition, in general, also the boundary condition need not be known in advance. On the other hand, as a disadvantage, the sampling methods require a lot of data and do not provide very sharp boundaries due to the need to decide numerically the question of how large infinity is.
Hospital efficiency in the Eastern Mediterranean region: A systematic review and meta-analysis

Background

Recent rising costs and shortages of healthcare resources make it necessary to address the issue of hospital efficiency. Increasing the efficiency of hospitals can result in the better and more sustainable achievement of their organizational goals.

Objective

The purpose of this research is to examine hospital efficiency in the Eastern Mediterranean Region (EMR) using data envelopment analysis (DEA).

Methods

This study is a systematic review and meta-analysis of all articles published on hospital efficiency in Eastern Mediterranean countries between January 1999 and September 2020, identified by searching PubMed through MEDLINE, Web of Science, Scopus, Science Direct, and Google Scholar. The reference lists of these articles were checked for additional relevant studies. Finally, 37 articles were selected, and data were analyzed through Comprehensive Meta-Analysis Software (v.2.2.064).

Results

Using the random-effects model, the mean hospital efficiency in Eastern Mediterranean hospitals was 0.882 ± 0.01 at 95% CI. Technical efficiency (TE) was higher in some countries such as Iraq (0.976 ± 0.035), Oman (0.926 ± 0.032), and Iran (0.921 ± 0.012). A significant statistical correlation was observed between hospital efficiency and the year of publication and sample size (p < 0.05).

Conclusion

Efficiency plays a significant role in hospital growth and development. Therefore, it is important for healthcare managers and policymakers in the EMR to identify the causes of inefficiency, improve TE, and develop cost-effective strategies.

Given the nature of hospitals and the absence of usual market indicators, there is a clear necessity for appropriate performance measurement tools to seek out best practices and identify gaps for improvement (4,5).
A wide variety of analytic methods has been utilized by researchers to measure hospital efficiency in terms of cost and production frontiers and the associated inefficiency of individual organizations (6)(7)(8). These techniques can be divided into two main categories: parametric and non-parametric methods. Parametric methods use econometric techniques to estimate the parameters of a specific cost or production function, and non-parametric methods use observed real-world data to draw the shape of the frontier (5). The premier parametric method in use is stochastic frontier analysis (SFA), which uses multivariate regression analysis to estimate a cost or production function, where the decomposed unexplained error term represents inefficiency (which, in the case of a cost function, will always be greater than zero) (5). Most non-parametric methods take the form of data envelopment analysis (DEA) and its many variants. DEA uses linear programming methods to infer a piecewise linear production possibility frontier, in effect seeking out those efficient observations that dominate (or envelop) the others. In contrast to parametric methods, DEA can handle multiple inputs and outputs without difficulty. DEA determines a best practice frontier of various decision-making units (DMUs) that envelops all inefficient DMUs. The estimation of the technical efficiency score is the major concern of almost all DEA models, indicating that the proper allocation of resources is not part of the calculations. Compared to parametric methods that need to specify a production function before measurement, DEA is not subject to production function specification (9,10).
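As an illustration of the DEA envelopment formulation just described (a minimal sketch with invented hospital data, not a computation from the reviewed studies), the input-oriented CCR model can be solved per DMU with a generic linear-programming routine, here scipy:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency for each DMU.

    X: (m_inputs, n_dmus), Y: (s_outputs, n_dmus). For DMU o:
    minimize theta subject to X @ lam <= theta * X[:, o],
    Y @ lam >= Y[:, o], lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    eff = []
    for o in range(n):
        # decision vector z = (theta, lam_1, ..., lam_n)
        c = np.r_[1.0, np.zeros(n)]
        A_in = np.hstack([-X[:, [o]], X])          # X lam - theta x_o <= 0
        A_out = np.hstack([np.zeros((s, 1)), -Y])  # -Y lam <= -y_o
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[:, o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1))
        eff.append(res.fun)
    return np.array(eff)

X = np.array([[2.0, 4.0, 3.0]])   # one input per hospital (e.g. beds)
Y = np.array([[2.0, 2.0, 3.0]])   # one output per hospital (e.g. admissions)
eff = dea_ccr_input(X, Y)
```

Hospitals 1 and 3 lie on the best-practice frontier (efficiency 1), while hospital 2 could proportionally shrink its input to 50% and still produce its output.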
In recent years, a vast number of studies have been conducted in high-income countries benefiting from cutting-edge methodologies (8,11), with some of them incorporating preferences into the analysis (7,12,13), as well as in the EMR, aiming at measuring hospital efficiency through both parametric and non-parametric approaches (14-16). A context-specific overview and analysis of existing articles are helpful for everyone interested in the field of efficiency measurement in healthcare with a focus on hospitals. According to our preliminary search, two systematic reviews have been conducted to address the issue in the hospital setting (17,18). The study by Ravaghi et al. explored the potential sources of inefficiency in EMR hospitals reported by 56 eligible studies and summarized the possible solutions by using qualitative synthesis (18). The second review included 22 eligible studies from the Gulf region and estimated the technical efficiency (TE) through pooled estimation. Although this study systematically reviewed the existing literature and addressed one important aspect of hospital economic performance, its exclusive focus on Gulf region countries might limit the generalizability of its findings to other similar settings (17). This systematic review aimed to deeply scrutinize the published literature on hospital efficiency in EMR hospitals and to pool the technical efficiency estimates reported by previous studies through meta-analysis.

Methods

The present study is a systematic review and meta-analysis to examine hospital efficiency in the EMR using DEA.
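The random-effects pooling used in such meta-analyses is commonly the DerSimonian-Laird estimator; a minimal sketch (not the actual software used by the authors, which was Comprehensive Meta-Analysis; the two studies below are hypothetical):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool study effects under a random-effects model (DerSimonian-Laird)."""
    e = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    theta_f = np.sum(w * e) / np.sum(w)           # fixed-effect pooled mean
    q = np.sum(w * (e - theta_f) ** 2)            # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(e) - 1)) / c)       # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * e) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# two hypothetical efficiency studies; homogeneous, so tau^2 collapses to 0
pooled, se, tau2 = dersimonian_laird([0.80, 0.90], [0.01, 0.01])
```

When the studies are heterogeneous, Q exceeds its degrees of freedom, tau^2 becomes positive, and the pooled estimate down-weights precise outlying studies relative to a fixed-effect analysis.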
Eligibility criteria

Studies were included in this systematic review if they (1) measured efficiency using a statistical method, (2) used the hospital as the unit of analysis, (3) measured hospital efficiency using data envelopment analysis, (4) reported the data necessary to calculate efficiency, (5) were written in English, (6) were performed in the EMR, (7) contained the data required for analysis (through access to the full text or by request from the author), and (8) reported the mean and SD of TE (VRS TE or CRS TE). Studies were excluded if they (1) used methods other than DEA (for example, SFA or the Pabon Lasso model), (2) were performed in private hospitals or in settings other than a hospital, or (3) were theses, case series, randomized controlled trials, case-control studies, commentaries, letters to the editor, book chapters, books, editorials, expert opinions, brief reports, or reviews.

Search sources and search strategies

PubMed through MEDLINE, Web of Science, Scopus, Science Direct, and Google Scholar were searched from January 1999 to September 2020. All keywords were in English, and the search was restricted to English-language publications. The electronic search was complemented by hand-searching of related articles as well as the reference lists of the final studies (Table 1).

Screening and study selection

Search results were imported and managed via EndNote X8 (Thomson Reuters, New York, USA). Duplicates were first removed electronically and then manually. Subsequently, the titles and abstracts of the retrieved studies were independently screened by two reviewers (AM and MA), and disagreements were resolved with the help of a third reviewer (HR). The full text of potentially eligible studies was retrieved and reviewed by the two reviewers. Email or ResearchGate contact was used to obtain the full text or English versions of inaccessible studies.
Data extraction

Two reviewers (MA and AM) extracted data on the country where the study was conducted, year of publication, research purpose, sample size, data collection method, number of hospitals examined, and mean and standard deviation (SD) of TE.

Table 1. Search strategies and records retrieved per database:

Web of Science (172 records): ((TS=("Data Envelopment Analysis" OR efficiency) AND TS=("Afghanistan" OR "Bahrain" OR "Djibouti" OR "Egypt" OR "Iran (Islamic Republic of)" OR "Iraq" OR "Jordan" OR "Kuwait" OR "Lebanon" OR "Libya" OR "Morocco" OR "Oman" OR "Pakistan" OR "Qatar" OR "Saudi Arabia" OR "Somalia" OR "Sudan" OR "Syrian Arab Republic" OR "Tunisia" OR "United Arab Emirates" OR "Yemen" OR "Palestine") AND TS=(hospital))) AND LANGUAGE: (English) AND DOCUMENT TYPES: (Article); Indexes = SCI-EXPANDED, SSCI, AandHCI, CPCI-S, CPCI-SSH, ESCI; Timespan = All years.

Google Scholar (2,200 records): "Data Envelopment Analysis" OR efficiency AND "Afghanistan" OR "Bahrain" OR "Djibouti" OR "Egypt" OR "Iran (Islamic Republic of)" OR "Iraq" OR "Jordan" OR "Kuwait" OR "Lebanon" OR "Libya" OR "Morocco" OR "Oman" OR "Pakistan" OR "Qatar" OR "Saudi Arabia" OR "Somalia" OR "Sudan" OR "Syrian Arab Republic" OR "Tunisia" OR "United Arab Emirates" OR "Yemen" OR "Palestine".

Science Direct (24 records): ("Data Envelopment Analysis" OR efficiency) AND "Eastern Mediterranean countries" AND (hospital).

(A further 391 records came from another database; that row of the table is incomplete in the source.)

The methodological quality of the included studies was assessed using the checklist by Mitton et al. (19) (see the Appendix). Each question was given a score of 0 (not present or reported), 1 (present but low quality), 2 (present and mid-range quality), or 3 (present and high quality). The criteria for assessment of quality included the literature review and identification of research gaps; research questions, hypotheses, and design; population and sampling; the data collection process and instruments; and the analysis and reporting of results. The assessment was conducted by both AM and MA, and discrepancies were resolved either by discussion or by the third reviewer (HR).
Data analysis

Since the mean and standard deviation of TE had not been reported by most of the included studies, we dealt with this missing information by contacting the authors of these studies or by calculating the values from the available data. A meta-analysis was conducted to synthesize the mean technical efficiency (TE) using the random-effects model with sample-size weighting (20). The results are presented with 95% confidence intervals (95% CIs) (20). Statistical heterogeneity among the studies was assessed with Cochran's Q statistic and the I² index (21,22). As the analytical results revealed high heterogeneity (96.07%), the random-effects model was employed and covariates were examined using the meta-regression function. All statistical analyses were conducted using Comprehensive Meta-Analysis software (v.2.2.064).

Results

The initial search resulted in 3,796 articles. After excluding duplicates and irrelevant articles, 2,725 studies were selected for abstract examination, of which 2,674 were removed after the abstracts were reviewed. We then scrutinized 51 full-text articles for eligibility and excluded 14 that did not satisfy our inclusion/exclusion criteria [four were review articles (17,18,23,24), five used different estimation methods (14,16,25-29), one was conducted in a single hospital ward (30), and two did not report mean and SD (neither VRS TE nor CRS TE) (31,32)]. Finally, 37 articles were found eligible for inclusion in this systematic review and meta-analysis. The reference lists of these 37 articles were manually searched, but no additional studies were included (Figure 1). The PRISMA flow diagram (33) was followed in this study.

Characteristics of the included studies

Over half of the studies had been published after 2010, with most having been conducted in 2017 and 2014 (Figure 2). Studies were conducted in only 11 of the 22 EMR countries.
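The random-effects pooling and heterogeneity statistics described in the data analysis can be sketched as below. This is a minimal DerSimonian-Laird illustration with invented study numbers; the exact weighting scheme implemented by the Comprehensive Meta-Analysis software may differ:

```python
import numpy as np

def dersimonian_laird(means, sds, ns):
    """Pool study-level mean TE scores under a random-effects model.
    means/sds: per-study mean and SD of TE; ns: per-study sample sizes."""
    y = np.asarray(means, float)
    v = np.asarray(sds, float) ** 2 / np.asarray(ns, float)  # variance of each mean
    w = 1.0 / v                                    # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q
    df = len(y) - 1
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)                  # between-study variance
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0  # I^2 heterogeneity (%)
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Invented example: four studies' mean TE, SD, and number of hospitals
means = [0.82, 0.91, 0.76, 0.95]
sds   = [0.10, 0.08, 0.12, 0.05]
ns    = [40, 25, 60, 12]
pooled, ci, i2 = dersimonian_laird(means, sds, ns)
```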
The overwhelming majority of these were conducted in Iran (N = 20) and Saudi Arabia (N = 4). The sample size varied from three (34) to 270 (35) hospitals. Health reports, interviews, hospital records, or annual statistical records were reported as the sources of data. Efficiency had been assessed in light of various concepts, including technical, scale, and pure efficiency, with a primary focus on TE in the reviewed studies. The reviewed studies varied in the models used to estimate the TE of public hospitals: twelve studies used both constant and variable returns to scale (CRS and VRS), whereas 19 applied variable returns to scale (VRS) and 6 used constant returns to scale (CRS). The inputs used in the included studies are presented in Table 2, with a range of 2-5. Predominant inputs were the labor (including full-time and part-time physicians, full-time and part-time nurses, midwives, non-medical staff, and dentists) and capital (number of beds) variables. Two studies (36,37) used capital expenses among the inputs. Numerous output dimensions were used in the efficiency models (range: 1-9 variables). Output variables focused on the numbers of outpatient visits and inpatient admissions. Twelve studies used bed turnover (BTR) and bed occupancy (BOR) rates, and 10 studies used average length of stay (ALS), while one study (38) used the in-hospital mortality rate as an output variable (Table 2).

The methodological quality of included studies

No articles were excluded based on the quality appraisal. All the included studies acquired more than 70% of the overall score, and 95% (N = 35) of the studies were in the third quartile (≥75% of the overall score). More than 65% (N = 13) of the studies had developed a good research question, and most of them adopted an appropriate sample size (92%, N = 34). With respect to the data collection method, 100% of the studies followed the standard guideline in collecting data and acquired the full score on this item.
The analysis and reporting of results was the one item on which most of the studies could not obtain a full score; only 33% (N = 12) of the studies received a full score here. The quality assessment scores are presented in Table 2.

To examine the consistency of efficiency assessments, we conducted a meta-analysis of the estimated TE scores reported in the reviewed studies. The mean and standard deviation of TE under the CRS model in Eastern Mediterranean hospitals were 0.826 ± 0.03 at the 95% confidence level. According to the random-effects model, TE was highest in Iran (0.988 ± 0.010) in 2012 (Figure 3). The mean and standard deviation of TE under the VRS model in Eastern Mediterranean hospitals were 0.892 ± 0.012 at the 95% confidence level. According to the random-effects model, TE was high in Kuwait (1.00 ± 0.046) (Figure 4). Studies examining fewer hospitals reported higher efficiency scores than studies using more hospitals, and studies published in lower-middle-income countries reported higher TE scores than others (Table 3). The heterogeneity test indicated a high level of heterogeneity between the studies (I² = 96.07%, P = 0.0001). Therefore, potential sources of heterogeneity were examined using the meta-regression function. The results, displayed in Table 4, indicate that the year of publication and the sample size of the articles caused heterogeneity between the reviewed studies (p < 0.05). The meta-regression with VRS, based on the year of study, showed that each additional year of study increases hospital efficiency by 0.003 units, while efficiency decreases by 0.00008 as the sample size of the articles increases. On the other hand, the meta-regression with CRS, based on the year of study, showed that each additional year of study decreases hospital efficiency by 0.006 units.
Moreover, under the CRS model hospital efficiency decreases by 0.006 as the sample size of the articles increases.

Discussion

Several systematic reviews have been conducted on hospital efficiency worldwide (18,70,71). For example, a 2018 study reviewed 57 articles using DEA (18), and a 2014 study reviewed 23 articles using DEA, SFA, and the balanced scorecard (71). To our knowledge, this is the first attempt to measure hospital efficiency using meta-analysis in the Eastern Mediterranean Region. There has been a growing trend in recent years toward measuring the efficiency of hospitals using different methods. In this study, we reviewed studies that measured the TE of hospitals in EMR countries: a total of 37 articles that calculated hospital efficiency using DEA were eligible for inclusion in the meta-analysis. It must be noted that the vast majority of studies on hospital efficiency were conducted in Iran. This may partly be due to the Iranian Ministry of Health and Medical Education's attempts at reducing hospital costs; in addition, efficiency, and strategies for improving it, have become a key priority for the Iranian government. A mean TE of 0.882 ± 0.01 was estimated for Eastern Mediterranean countries. This finding is consistent with the results of previous studies in other countries (24,72,73). Pereira et al. (4) examined convergence in productivity and indicated that in the EMR the performance spread among countries is decreasing while the gap between the best- and worst-practice frontiers is increasing. They also showed that the innovator EMR countries are Egypt, Jordan, Kuwait, Qatar, Tunisia, and the United Arab Emirates, and that the lagging EMR WHO Member State is Somalia. In a study by Du (73) on the economic performance of Chinese hospitals, the mean hospital efficiency was estimated at 0.74, 0.902, and 0.805 in the Central, Eastern, and Western regions of the country, respectively (73). Blatnik et al.
(72) examined hospital efficiency in Slovenia and reported a mean TE of 0.936 (72). These extensive empirical works indicate that hospital efficiency can vary significantly across countries and regions (4,11). According to our findings, mean hospital efficiency varied among high-income countries such as Saudi Arabia, Oman, the United Arab Emirates, and Bahrain: Oman had the highest mean TE, and Bahrain the lowest. According to the 2017 WHO report "Eastern Mediterranean Region Framework for health information systems and core indicators for monitoring the health situation and health system performance," Bahrain and Oman had the highest general government expenditure on health as a percentage of general government expenditure (10.5 and 6.8%, respectively) among the four countries (74). Mean hospital efficiency also varied among low- and middle-income countries such as Pakistan, Afghanistan, Iran, Jordan, Tunisia, Palestine, and Iraq: among these countries, Iraq and Iran stood at the top of the list, whereas Pakistan had the lowest mean TE. WHO's 2017 world health report highlighted that, among these seven countries, Iran had the highest and Pakistan the lowest general government expenditure on health as a percentage of general government expenditure (17.5 and 4.7%, respectively) (74). Therefore, hospital managers and policymakers must focus on improving efficiency and reducing healthcare costs in regions with lower hospital efficiency. Furthermore, a study using the 'Sustainable Public Health Index' showed that Bahrain, Egypt, Iran (the Islamic Republic of), Jordan, Kuwait, Libya, Morocco, Oman, Pakistan, Qatar, Saudi Arabia, the Syrian Arab Republic, and the United Arab Emirates were the efficient countries among the EMR WHO Member States (75).
Hospital internal structure (11), regional differences (4,11), and decision-maker participation in the assessment (13) of the environmental, social, and economic sustainability of the hospital (7) have a significant impact on hospital efficiency. The development of outpatient care (23), the reduction of supplier-induced demand (76), the strengthening of hospital management and quality management (70,77), the strengthening of governance and regulation (78), and enhanced resilience to crises such as COVID-19 (8) are recommended as effective strategies for increasing hospital efficiency. In addition, hospitals can serve as productive business entities through health system structure reform at the macro level, proper implementation of healthcare stratification, and responsiveness of insurance companies (23,79). This allows hospitals to increase patient satisfaction and provide safe, high-quality care. Hospital efficiency is measured through a set of input and output variables (80,81). The present findings show that the most commonly used input variables in studies on hospital efficiency in the EMR are the number of employees and the number of beds, while the most commonly used output variables are the numbers of outpatient visits, inpatient admissions, and operations. For example, in a study on hospital efficiency in Oman, Ramanathan (2005) used outpatient visits, inpatient services, and surgical operations as outputs, and the number of beds and manpower as inputs (15). In addition, some studies have used other inputs such as work hours, non-labor costs (i.e., equipment, food, and drugs), and the area of the hospital in square meters (82,83), and outputs such as mortality rate, number of nursing students, number of medical students, number of nursing and medical training weeks, and number of scientific publications (84,85).
Pereira proposed a framework for constructing a "sustainable public health index" and assessed the performance of the WHO Member States using the 13 indicators of the UN's SDG 3 targets as inputs and outputs (75). They found that the EMR ranked fourth among the six WHO regions (75). Researchers should use more input and output variables when measuring hospital efficiency to increase the accuracy of their findings. In some countries, mean efficiency has increased significantly in recent years. The present systematic review showed that, on average, small-scale (47) and public (61) hospitals have a lower level of efficiency. For example, Chaabouni and Abednnadher (2016), who examined Tunisian public hospitals, reported a positive association between cost-effectiveness and hospital size: the mean cost-effectiveness was 0.995 in large hospitals compared with 0.875 in small hospitals (47). In a study on Iranian hospitals, Ketabi (2011) showed that CCUs in 83.3% of teaching hospitals and 60% of private hospitals perform inefficiently (44). This was attributed to an excess of medical equipment as well as personnel and technological capabilities. Teaching hospitals were less efficient because of bureaucratic processes, and private hospitals had lower BORs. There is a larger demand for care in public hospitals than in private hospitals, and thus public hospital managers in particular must make optimal use of their resources. The present review showed that hospital efficiency decreases by 0.00008 as the sample size of the articles increases, while under the VRS model it increases by 0.003 units as the publication date increases by 1 year; under the CRS model, by contrast, the time sequence of studies indicates lower levels of efficiency in recent years compared with previous years.

Conclusion

The results of this systematic review and meta-analysis of hospital efficiency in Eastern Mediterranean countries highlighted that the
reviewed studies varied in the model used to estimate technical efficiency in public hospitals (CRS and VRS). The EMR studies have based their analyses on hospital inputs. A statistically significant correlation was also observed between hospital efficiency and both the year of publication and the sample size. The results of this article should, however, be interpreted cautiously. The pooled estimation of hospital efficiency reflects the performance of only a limited number of Eastern Mediterranean countries; this gap in the literature indicates that the reviewed studies are not comprehensive in terms of coverage and methodology. Other variables, such as the ownership or type of hospital, can affect the results of efficiency analysis, but the small sample size restricted control of these variables. In recent years, the number of studies on efficiency has increased significantly, likely due to growing interest in the subject driven by resource scarcity. To enable effective and efficient hospital management and improvement in hospital efficiency, health managers and policymakers must identify the causes of hospital inefficiency. An effective way of increasing hospital efficiency is to use evidence-based interventions. Therefore, health policymakers in Eastern Mediterranean countries must first identify the causes of hospital inefficiency and then take the necessary remedial actions to facilitate the optimal use of scarce resources.

Author contributions

MA and HR designed the research. MA, AM-A, and PI conducted it. MA and PI extracted the data. MA, HR, VB, AM-A, and PI wrote the study. MA had primary responsibility for the final content. All authors read and approved the final manuscript.
Can the evolution of music be analyzed in a quantitative manner?

We propose a methodology to study music development by applying multivariate statistics to composers' characteristics. Seven representative composers were considered in terms of eight main musical features. Grades were assigned to each characteristic and their correlations were analyzed. A bootstrap method was applied to simulate hundreds of artificial composers influenced by the seven representatives chosen. Afterwards, we quantified non-numeric relations like dialectics, opposition, and innovation. Composers' differences in style and technique were represented as geometrical distances in the feature space, making it possible to quantify, for example, how much Bach and Stockhausen differ from other composers or how much Beethoven influenced Brahms. In addition, we compared the results with a prior investigation on philosophy. Opposition, strong in philosophy, was not remarkable in music. Supporting an observation already made by music theorists, strong influences were identified between composers through the quantification of dialectics, implying inheritance and suggesting a stronger master-disciple evolution when compared with the philosophy analysis.

I. INTRODUCTION

In the history of music, composers developed their own styles along a continuous search for coherence or unity. In the words of Anton Webern 2, "[...] ever since music has been written most great artists have striven to make this unity ever clearer. Everything that has happened aims at this [...]". Along this process we can identify a constant heritage of style from one composer to another, as a gradual development from its predecessor, contrasting with the necessity for innovation. Quoting Lovelock: "[...] by experiment that progress is possible; it is the man with the forward-looking type of mind [...] who forces man out of the rut of 'what was good enough for my father is good enough for me'." 3
Thus, development in music follows a dichotomy: while composers aim at innovation, creating their own styles, their technique is based on the works of their predecessors, in a master-apprentice tradition. Other fields, like philosophy, demonstrate a well-defined trend when considering innovation: unlike music, the quest for difference seems to drive philosophical changes 4. Recently, this observation became more evident with the application of a quantitative method 1 in which multivariate statistics was used to measure non-numeric relations and to represent historical development as time series. More specifically, the method consists of scoring memorable philosophers on some relevant characteristics. The group of philosophers was chosen based on historical relevance. The scores assigned to each philosopher's characteristics define a state vector in a feature space. Correlations between these characteristic vectors were identified and principal component analysis (PCA) was applied to represent philosophical history as a planar space in which we could identify interesting properties. Furthermore, concepts like dialectics can be modeled as mathematical relations between the philosophical states. Here, we extend that analysis to music. The application of statistical analysis to music is not recent. In musicology, statistical methods have been used to identify many musical characteristics. Simonton 5,6 used time-series analysis to measure the creative productivity of composers based on their music and popularity. Kozbelt 7,8 also analyzed productivity, but based on the measured performance time of the compositions, and investigated the relation between productivity and versatility.

a) http://automata.cc; Electronic mail: vilson@void.cc
b) http://www.estudiolivre.org/el-user.php?view user=gk; Electronic mail: renato.fabbri@gmail.com
c) Electronic mail: gonzalo@ifsc.usp.br
d) http://cyvision.ifsc.usp.br/˜luciano/; Electronic mail: ldfcosta@gmail.com
More recent works 9,10 use machine-learning algorithms to recognize the musical styles of selected compositions. Differently from these works, we are not interested in applying statistical analysis to music itself but in characterizing composers. Eight characteristics were described and scored by the authors, based on the recurrent appearance of these attributes in music pieces. We chose seven representative composers from different periods of music history. This group was chosen purposely to model their influence on contemporaries, represented as a group of "artificial composers" sampled by a bootstrap method 11. The same statistical method used in philosophy 1 was applied to this set of composers and their characteristics, allowing us to compare the results from both fields. The results reflect contrasting historical facts, recognized along the history of music, quantified by the application of distance metrics, which allowed us to formalize concepts like dialectics, innovation, and opposition, resulting in interpretations of music development that are compatible with perspectives from musicians and theorists 2,3.

II. MATHEMATICAL DESCRIPTION

A sequence S of music composers was chosen based on their relevance in each period of classical music history. As done for philosophers 1, the set of C measurements defines a C-dimensional space, henceforth referred to as the musical space. The characteristic vector v_i of each composer i defines a respective composer state in the musical space. For this set of composers, we defined the same relations adapted from the philosophers analysis 1, summarized in Table I (e.g., the opposite state r_i = v_i + 2(a_i − v_i)). It is important to note some details about these relations. Given the set of composers as a time-sequence S, the average state a_i at time i is defined. The opposite state is defined as the "counterpoint" of a musical state v_i with respect to its average state: everything running along the opposite direction of v_i is understood as opposition.
In other words, any displacement from v_i along the direction r_i is a contrary move, and any displacement from v_i along the direction −r_i is an emphasis move. Given a musical state v_i and its opposite state r_i, we can define the opposition vector D_i. These details are better understood by analyzing Figure 1. Considering the time-sequence S, we defined relations between pairs of composers. The musical move implied by two successive composers at times i and j corresponds to the vector M_{i,j} extending from v_i to v_j. Given the musical move, we can quantify the intensity of opposition by the projection of M_{i,j} along the opposition vector D_i, normalized, yielding the opposition index. Considering the same musical move, the skewness index is the distance between v_j and the line L_i defined by the vector D_i, and therefore quantifies how much the new musical state departs from the respective opposition move. A relationship between a triple of successive composers can also be defined. Considering i, j, and k respectively as the thesis, antithesis, and synthesis, we defined the counter-dialectics index as the distance between the musical state v_k and the middle line ML_{i,j} defined by the thesis and antithesis, as shown in Figure 2. In higher-dimensional musical spaces, the middle hyperplane defined by the points at equal distances from both v_i and v_j should be used instead of the middle line ML_{i,j}; the proposed equation for counter-dialectics scales to hyperplanes. The counter-dialectics index is used instead of a dialectics index to maintain compatibility with the point-to-line distance adopted for the definition of skewness.

III. MUSICAL CHARACTERISTICS

To create the musical space we derived eight variables corresponding to distinct characteristics commonly found in music compositions.
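Under one plausible reading of these geometric definitions, the three indices can be sketched as follows. The function names, the toy vectors, and the exact normalization conventions are our own assumptions; the paper's Table I may normalize differently:

```python
import numpy as np

def opposition_index(v_i, v_j, a_i):
    """Projection of the move M_ij = v_j - v_i onto the opposition
    vector D_i = r_i - v_i = 2(a_i - v_i), normalized by |D_i|^2."""
    D = 2.0 * (a_i - v_i)
    M = v_j - v_i
    return float(np.dot(M, D) / np.dot(D, D))

def skewness_index(v_i, v_j, a_i):
    """Distance from v_j to the line through v_i with direction D_i."""
    D = 2.0 * (a_i - v_i)
    M = v_j - v_i
    proj = np.dot(M, D) / np.dot(D, D) * D      # component of M along D
    return float(np.linalg.norm(M - proj))

def counter_dialectics_index(v_i, v_j, v_k):
    """Distance from the synthesis v_k to the hyperplane of points
    equidistant from the thesis v_i and the antithesis v_j."""
    n = v_j - v_i                               # hyperplane normal
    mid = (v_i + v_j) / 2.0                     # a point on the hyperplane
    return float(abs(np.dot(v_k - mid, n)) / np.linalg.norm(n))

# Toy 2-dimensional states for illustration
v_i = np.array([1.0, 1.0])
v_j = np.array([3.0, 1.0])
a_i = np.array([2.0, 2.0])
```

With these toy states, D_i = (2, 2) and the move (2, 0) splits evenly into an opposition component (index 0.5) and a skew component (distance sqrt(2)).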
The characteristics are related to the basic elements of music (melody, harmony, rhythm, timbre, form, and tessitura 12) and to non-musical issues, like historical events that influenced the compositions, for example the presence of the Church. All eight characteristics are listed below:

Sacred - Secular (S-S_c): sacred or religious music is composed under religious influence or used for religious purposes. Masses, motets, and hymns, dedicated to the Christian liturgy, are well-known examples 3. Secular music has little or no relation to religion and includes popular songs like Italian madrigals and German lieder 12.

Short duration - Long duration (D_s-D_l): compositions are counted as having short duration when they last no more than a few minutes in execution; long-duration compositions have at least 20 minutes of execution. The same criterion was adopted by Kozbelt 7,8 in his analysis of execution time.

Harmony - Counterpoint (H-C): harmony regards the vertical combination of notes, while counterpoint focuses on horizontal combinations 12.

Vocal - Instrumental (V-I): compositions using only voices (e.g., cantata) or exclusively instruments (e.g., sonata). It is interesting to note the preference for voices over instruments in sacred compositions 3.

Non-discursive - Discursive (D_n-D): compositions based or not on verbal discourse, like programmatic music or Baroque rhetoric, where the composer wants to "tell a story," invoking images in the listener's mind 12. Its contrary part is known as absolute music, written to be appreciated simply for what it is.

Motivic Stability - Motivic Variety (M_s-M_v): motivic pieces present an equilibrium between repetition, reuse, and variation of melodic motives. Bach is notable for his development by variation of motives, contrasting with Mozart's constantly inventive use of new materials 2.
Rhythmic Simplicity - Rhythmic Complexity (R_s-R_c): the presence or absence of polyrhythms, the use of independent simultaneous rhythms (also known as rhythmic counterpoint 12), a characteristic constantly found in Romanticism and in the works of 20th-century composers like Stravinsky.

Harmonic Stability - Harmonic Variety (H_s-H_v): the rate of tonality change along a piece, or its stability. After the highly polyphonic development of the Renaissance, Webern regarded Beethoven as the composer who returned to the maximum exploration of harmonic variety 2.

IV. RESULTS AND DISCUSSION

Memorable composers were chosen as key representatives of musical development. This group was chosen purposely to model their influence over contemporaries, creating a concise parallel with music history. We modeled this group of influenced composers as new artificial samples generated by a bootstrap method, explained later in this section. The sequence is ordered chronologically and presented in Table II, with each composer related to his historical period. The quantification of the eight musical characteristics was performed jointly by the authors of this article and is shown in Table III. The scores were numerical values between 1 and 9: values closer to 1 indicate that the composer tended toward the first element of each characteristic pair, and vice versa. We emphasize that the focus of this work is not on the specific characteristics used or their attributed numerical values, which can be disputed, but on the techniques employed for the quantitative analysis. This data set defines an 8-dimensional musical space where each dimension corresponds to a characteristic that applies to all 7 composers. Such a small data set is not adequate for statistical analysis, and its immediate analysis would be highly biased by the small sample.

A.
Bootstrap method for sampling artificial composers

To simulate a more realistic musical trajectory, we used a bootstrap method to generate artificial composers contemporary with the seven chosen. The bootstrap routine generated randomized score vectors r. The values are not totally random, following a probability distribution that models the original n = 7 score charts, given by

p(r) = Σ_{i=1}^{n} e^{−d_i² / (2σ²)},

where d_i is the distance between a random score vector r and the i-th original score chart. At each step a value p(r) is generated and compared with a random normalized value, characterizing a Monte Carlo 13 acceptance step for choosing the set of samples. These samples simulate new randomized composer score charts, while respecting the historical influence of the 7 original exponents. Higher values of p(r) imply a stronger influence of the original scores over r. For the analysis we used 1000 bootstrap samples obtained by this process together with the original scores, considering σ = 1.1. Other values of σ were tried, yielding distributions with bootstrap samples closer to or further from the original musical states, which did not affect the musical space substantially. Pearson correlation coefficients between the eight musical characteristics are presented in Table IV; emphasized coefficients have absolute values larger than 0.5. We can identify some interesting relations between the pairs of characteristics that reflect important facts in music history. For instance, a Pearson correlation coefficient of 0.69 was obtained for the pairs S-S_c (Sacred or Secular) and V-I (Vocal or Instrumental), indicating that sacred music tends to be more vocal than instrumental. The coefficient of 0.56 between S-S_c and R_s-R_c (Rhythmic Simplicity or Complexity) also shows that it does not commonly use polyrhythms.
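A minimal sketch of this acceptance step, assuming uniform proposals on the score range and a normalization of p(r) by n (details the text does not fully specify), could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_artificial_composers(scores, n_samples=1000, sigma=1.1):
    """Rejection-sample score vectors concentrated near the original
    composers' score charts (a Gaussian-mixture acceptance probability).
    scores: (n_composers, n_features) array with entries in [1, 9]."""
    scores = np.asarray(scores, float)
    accepted = []
    while len(accepted) < n_samples:
        r = rng.uniform(1.0, 9.0, size=scores.shape[1])   # uniform proposal
        d2 = np.sum((scores - r) ** 2, axis=1)            # squared distances d_i^2
        p = np.sum(np.exp(-d2 / (2.0 * sigma ** 2)))      # p(r) from the text
        p = min(p / len(scores), 1.0)                     # assumed normalization
        if rng.uniform() < p:                             # Monte Carlo acceptance
            accepted.append(r)
    return np.array(accepted)

# Two 2-feature "composers" as a toy stand-in for the 7 x 8 score chart
toy_scores = np.array([[2.0, 8.0], [8.0, 2.0]])
samples = sample_artificial_composers(toy_scores, n_samples=200)
```

Accepted samples cluster within a few σ of the original score charts, which is the "historical influence" the text describes.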
Negative coefficients of -0.33 for the pairs V-I and Dn-D (Non-discursive or Discursive) indicate that composers who used just voices in their compositions also preferred to use programmatic music techniques such as baroque rhetoric. PCA was applied to this set of data, yielding the new variances given in Table V in terms of percentages of the total variance. We can note the concentration of variance along the first four PCA axes, a common effect also observed while analyzing philosophers' characteristics 1 . This would usually mean that we could consider just four dimensions, but as we will see below our measurements differ considerably with the inclusion of all eight components. As done for the philosophers' analysis, we performed 1000 perturbations of the original scores by adding to each score the values -2, -1, 0, 1 or 2 with uniform probability. In other words, we wanted to test whether scoring errors could be sufficient to cause relevant effects on the PCA projections. Interestingly, the values of average and standard deviation for both original and perturbed positions listed in Table VI show relatively small changes. It is therefore reasonable to say that small errors in the values assigned as scores of composer characteristics did not affect the quantification too much. C. Results Table VII shows the normalized weights of the contributions of each original property on the eight axes. Most of the characteristics contribute almost equally in defining the axes. Figure 3 presents a 2-dimensional space considering the first two main axes. The arrows follow the time sequence along the seven composers. Each of these arrows corresponds to a musical move from one composer state to the next. Bach is found far from the rest of the composers, which suggests his key role, acknowledged by other great composers like Beethoven and Webern 2 : "In fact Bach composed everything, concerned himself with everything that gives food for thought!". 
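The correlation, PCA, and perturbation steps above can be sketched with plain numpy. The score matrix here is a random stand-in (the real input would be the 7 original plus 1000 bootstrap score charts); PCA is done via SVD of the centered data, so squared singular values give the per-axis variances of Table V, and the -2..2 perturbation mirrors the robustness test in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
scores = rng.uniform(1, 9, size=(1007, 8))  # stand-in: originals + bootstrap

# Pearson correlation between the eight characteristics (Table IV analogue)
corr = np.corrcoef(scores, rowvar=False)    # 8 x 8 symmetric matrix

# PCA via SVD of the centered matrix; squared singular values give the
# variance captured by each new axis (Table V analogue)
centered = scores - scores.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / (s**2).sum()             # fractions of total variance

# Planar projection onto the two main axes (the space of Figure 3)
plane = centered @ vt[:2].T

# Robustness check: add -2..2 uniformly to each score and re-project,
# as in the perturbation analysis described in the text
perturbed = scores + rng.integers(-2, 3, size=scores.shape)
plane_pert = (perturbed - perturbed.mean(axis=0)) @ vt[:2].T
```

Comparing `plane` with `plane_pert` (e.g., mean displacement per point) is the kind of check summarized by the averages and standard deviations of Table VI.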
The greatest subsequent change takes place from Bach to Mozart, reflecting a substantial difference in style. We can identify a strong relationship between Beethoven and Brahms, supporting the belief of the virtuoso Hans von Bülow 14 when he described the 1st Symphony of Brahms as being, in reality, the 10th Symphony of Beethoven, claiming Brahms as the true successor of Beethoven. Stravinsky is near to Beethoven and Brahms, presumably due to his heterogeneity 3,12 . Beethoven is also near to Mozart, who deeply influenced Beethoven, mainly in his early works. For Webern, Beethoven was the only classicist who really came close to the coherence found in the pieces of the Burgundian School: "Not even in Haydn and Mozart do we see these two forms as clearly as in Beethoven. The period and the eight-bar sentence are at their purest in Beethoven; in his predecessors we find only traces of them" 2 . This could explain the proximity of Beethoven to the Renaissance composer Monteverdi. Stockhausen is a deviating point when compared with the others, and it could present even more detachment if we had considered vanguard characteristics (e.g., timbre exploration using electronic devices 3) not shared by his precursors. To complement the analysis, Table VIII gives the opposition and skewness indices for each of the six musical moves, showing that the movements are driven by rather small opposition and strong skewness. In other words, most musical moves seem to seek more innovation than opposition. Dialectics is also shown in Table IX and will play a key role in the next section. We performed Ward's hierarchical clustering 15 to complement the analysis. This algorithm clusters the original scores taking into account their distances. The generated dendrogram in Figure 4 shows the composers according to their similarity. The representation supports the observations discussed previously. It is interesting to note the cluster formed by Beethoven and Brahms, reflecting their heritage. 
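The clustering step can be reproduced with SciPy's implementation of Ward's method. As before, the 7x8 score chart below is a hypothetical stand-in for Table III (the composer names follow the set discussed in the text); only the pipeline is illustrated, not the paper's actual dendrogram.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# Hypothetical 7 x 8 score charts (real values come from Table III)
scores = rng.uniform(1, 9, size=(7, 8))
composers = ["Monteverdi", "Bach", "Mozart", "Beethoven",
             "Brahms", "Stravinsky", "Stockhausen"]

Z = linkage(scores, method="ward")               # Ward's minimum-variance merges
labels = fcluster(Z, t=3, criterion="maxclust")  # cut dendrogram into 3 clusters

for name, c in zip(composers, labels):
    print(f"{name}: cluster {c}")
```

Plotting `Z` with `scipy.cluster.hierarchy.dendrogram` and `composers` as leaf labels yields a figure of the same kind as Figure 4.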
Stravinsky and Stockhausen form another cluster, while Mozart remains in isolation, as do Bach and Monteverdi. Both relations were also present in the planar space shown in Figure 3. V. COMPARISONS WITH PHILOSOPHERS ANALYSIS The results of the composer analysis are surprising when compared with those for philosophers 1 . It is important to note that we preserved the number of characteristics and performed the same bootstrap method to generate a larger set of samples, making this comparison possible. The variances after PCA (Table X) concentrate in the first four new axes, similar to the variances for composers shown in Table V. If we compare the discussed musical space with the philosophical one in Figure 5, we identify opposing movements throughout the history of philosophy, in contrast to music. This reveals a striking characteristic of the way philosophers seem to have evolved their ideas, driven by opposition ($W_{i,j}$), as shown in Table XI, while composers tend to be more influenced by their predecessors as far as their dialectics measures ($1/d_{i \to k}$) are concerned. In general, the musical movements had minor opposition and, recalling the beginning of this work, this reflects the master-apprentice tradition present in music: composers tend to build their own works confirming their precursors' legacy, resulting in greater dialectics than the related measures for philosophers. This reveals a crucial difference in the treatment of memory along the development of philosophy and music: using the same techniques as this article does 1 , one can verify that a philosopher was influenced by the opposition of ideas from his direct predecessor, while here composers were commonly influenced by both of their predecessors. Therefore, we can argue that philosophy presents a memory-1 state, while music presents memory-2, where memory-N denotes the number N of past generations whose influence on a philosopher or composer is being considered. 
Considering the linearity of musical movements, we can identify the abscissa as a "time axis" representing the development of music along history, with some composers like Beethoven returning to Monteverdi and others, like Stravinsky and Stockhausen, advancing to the modern age. The opposition and skewness indices for philosophers listed in Table XI endorse the minor role of opposition for composers in the period considered. We can observe strong opposition in philosophical moves contrasted with small opposition in musical movements. Also, the dialectics presents a phase difference, suggesting a latency in the transfer of knowledge and aesthetics between these two human fields. When comparing dialectics, other curious facts arise: the dialectics indices for musicians in Table IX indicate considerably stronger moves than those for philosophers in Table XII. Both indices are also shown in Figure 6, where we can see a constant decrease of counter-dialectics. This makes it possible to argue that dialectics is stronger in music, where a constant return to the origins is clearly visible. This reveals the nature of musical development, based on the search for a unity. In the words of Webern, the search for "comprehensibility", but always influenced by the old masters. VI. CONCLUDING REMARKS Motivated by the understanding of how innovation evolves in music history, we extended a quantitative method recently applied to the study of philosophical ideas 1 . Statistical methods are nowadays commonly used for the study of music features and composer productivity, but the analysis of how composer characteristics change along music history has been less explored. The method differs in how the characteristics concerning composers are treated: scores are assigned to each feature common in musical works. These scores reveal not the exact profile of composers, but a tendency of how their techniques are usually present. 
To make the simulation more realistic, we considered not just the small number of 7 composers, but derived 1000 additional "artificial composers" through a bootstrap method. The larger data set made statistical analysis possible, considering not just the originally scored composers, but also other samples that respect the historical presence of the former. These thousand additional composers were modeled by a probability distribution, avoiding the bias that would be caused by the use of only 7 composers. In order to investigate the relationships between these scorings, we applied Pearson correlation analysis. The results demonstrated strong correlations between some characteristics, which allows us to group these values, creating a reduced number of features that summarize the most important characteristics. PCA was also applied to these components, reducing the complex space to a planar graph where some of the most interesting properties can be visualized. Historical landmarks in music are well defined in the planar space, like the isolation of Bach, Mozart and Stockhausen, the proximity between Beethoven and Brahms, the distance between Bach and Mozart, the heterogeneity of Stravinsky and the vanguard of contemporary composers like Stockhausen. Even less visible relations, like Beethoven's trend to return to the maximum domain of polyphony present in the Renaissance, were also clearly observable, demonstrating the chronological nature of the space. The dichotomy between the master-apprentice tradition in music and the quest for innovation that opened this discussion could be visualized quantitatively. Each composer demonstrated his own style, differing considerably from his predecessor, as clearly shown when analyzing pairs of subsequent composers like Bach and Mozart, Mozart and Beethoven, or Stravinsky and Stockhausen. 
Conversely, the inheritance of predecessors' styles is also present when analyzing the direct relations between Mozart and Beethoven or Beethoven and Brahms, or indirect ones between Bach and Beethoven or Beethoven and Monteverdi. The entire scenario presented a "continual pattern" between composers, motivated by the influence of their predecessors, but also showed a force repelling them: innovation, or in the words of William Lovelock 3 , the "experimentation" that makes progress possible. Along the analysis we noticed interesting differences when comparing composers with philosophers. While in philosophy innovation is notably marked by the opposition of each philosopher's ideas, this is less present for music composers. The lack of strong opposition movements and the prominent presence of dialectics in the musical space indicate that musical innovation is driven by a constant heritage of each composer from his predecessors. We represented this characteristic by referring to a memory state where philosophers show memory-1 (each philosopher was influenced by the opposite ideas of his direct predecessor) while composers show memory-2 (inheriting the style of both of their direct predecessors). The analysis of both dialectics values also showed surprising results: in philosophy the dialectics indices are arranged in an increasing series, showing a strong influence of dialectics on the development of philosophy, while the dialectics indices in music exhibit the same pattern, but with an offset. This behavior presumably indicates a constant quest for coherence by the composers, a fact notably observed in the studies of Anton Webern 2 , suggesting that the two fields should have somewhat the same kernel with a latency between the effects. Another result is that the quantitative methodology initially applied to the analysis of philosophy 1 proved to be extensible to other fields of knowledge, in this case music, reflecting with considerable efficiency details concerning the specific field. 
Computational analysis of music scores could be applied to automate the quantification of composer characteristics, such as the identification of melodic and harmonic patterns, the presence or absence of polyrhythms, and motivic and harmonic stability 16 . More composers could be inserted into the set for the analysis of a wider time-line, possibly including more representatives of each musical period. We want to end this work going back to Webern, who early envisioned these relations: "It is clear that where relatedness and unity are omnipresent, comprehensibility is also guaranteed. And all the rest is dilettantism, nothing else, for all time, and always has been. That's so not only in music but everywhere."
Social Capital: a Basis for Community Participation in Fostering Environmental Education and the Heritage Tourism Development of Cibalay Megalithic Site Social capital is an often-unrecognized staple of community participation in a tourism site's development, even though it comprises elements essential for successful community-based participation. This paper discusses how the host community's participation in the tourism development of Cibalay Megalithic Site was driven by local social capital. Cibalay Megalithic Site is one of the last reminders of ancient beliefs and is an iconic landmark, located within the Bogor Regency of West Java, Indonesia. It is also within the protected area of Gunung Halimun Salak National Park. Cibalay Megalithic Site is a product of a socio-cultural environment, deriving from the relationship between man and nature. Thus, its tourism development should interpret this history and promulgate environmental education as one of the key elements of sustainable tourism. The local Village of Tapos I was established as a tourism village; within this village, the hamlet of Sinar Wangi was declared a conservation hamlet. Both designations were achieved due to local initiatives of the host community in developing local tourism, with Cibalay Megalithic Site as the iconic tourism focus of the area. The high level of trust towards local figures and visitors, the conservation norm of "leuweung hejo, masyarakat ngejo" (if the forest is green, then the people will be prosperous) underlying everyday local life that indicates the importance of nurturing nature, good inter-personal relations between village members, and good social networking with outsiders: all combined to create the conditions and motivation that facilitated collective action in developing local heritage tourism. 
Introduction A great deal of tourism relies on places with natural, indigenous and historic significance on which tourism products are based; these can be categorized as natural and cultural tourism, both forms of heritage that must be preserved. Heritage sites are attractive tourism destinations all around the world because they provide identity to the site (Sharpley and Telfer, 2002), especially in relation to culture, raising the importance of heritage tourism. Heritage tourism can be defined as visiting historical and archaeological sites for the purpose of acquiring knowledge or entertainment (Hasan and Jobaid, 2014), in which arts, culture and heritage form a key attraction for visitors and a focus of their activities (Columbia, 2014). Cultural heritage resources play a significant role in the sustainable cultural, social and economic development of communities (sustainable tourism), so the physical fabric that has influenced their creation must be maintained (Chourasia and Chourasia in Ismail, Masron and Ahmad, 2014).
e-ISSN: 2407-392X. p-ISSN: 2541-0857
In response to an increasing number of tourists demanding specialist tourism products (Pforr and Megerle, 2006; Ali-Knight, 2011), heritage tourism can evolve as a potential niche tourism product and market that enhances visitors' awareness and appreciation of human civilization and human-built resources. This can be achieved through outdoor and recreational activities (Ali-Knight, 2011; Farsani, Coelho and Costa, 2011). A heritage tourist should learn about and appreciate the heritage of the area he or she is visiting. Through environmental interpretation, which focuses on historical aspects, visitors' satisfaction will be enhanced, contributing to the conservation objectives of the product/site. Heritage tourism can share experiences with other modalities of tourism, yet remains distinct in its purpose and adds a new dimension to the tourism product offered. 
Rich in its culture, Indonesia has been a popular holiday destination, not just in terms of ethnicity but also of human civilization. Preserving the historic remains of civilization for the enrichment and education of present and future generations is crucial to add wealth to our understanding of our own nation's heritage and, specifically, of our local cultural heritage. Enhancing the cultural and heritage offering through sharing cultural stories and history with tourists and promoting historic places can create a richer, more memorable tourism experience (Columbia, 2014). This is actually the great strength of the Cibalay Megalith Site ('structures made of large stones, usually rough and unhewn, which conform to certain well marked types' (Perry, 1918: 10)), i.e., providing a first-hand experience from original objects, in accordance with Tilden's (1977) definition of interpretation: "An educational activity which aims to reveal meanings and relationships through the use of original objects, by firsthand experience, and by illustrative media, rather than simply to communicate factual information" (p. 11). First-hand experience is crucial to the understanding of the objects, as stated by Hooper-Greenhill (1994: 11): "The real experiences that we offer, of objects, of buildings, of sites and of people, are essential to learning". Experience suggests that many visitors to a megalith site, when confronted with a megalith resource, form an innate curiosity about man's historical culture and start to deliberate questions on human history. In order to respect the cultural significance of the destination, the local community needs to be directly involved in the planning and management of the site. Therefore cultural and heritage tourism development must encourage local participation; consequently, its planning and development must focus on the power of the people to withstand negative changes and alterations to their life and surroundings. 
Milic, Jovanovic and Krstic (2008) conclude that the local community is the local driver of tourism activities and an important factor for sustainable cultural tourism, as stated in Cole (2008: 58): "support and pride in tourism development are especially important in the case of cultural tourism where the community is part of a product". The authors based their conclusions on the observation that tourism services are mostly dependent on local institutions and the participation of local communities. The emotional and cognitive bonds that individuals form with a place foster a sense of stewardship or desire to protect and care for that place (Halpenny, 2010). These are related to expanding access to natural resources, which is determined by the availability of local networks, collective actions, mutual trust, and social norms. These elements make up the working definition of social capital, in addition to cooperation, relationships, and social interaction (Pawar, 2006). This is in line with the results of Thoyre (2008) and Liu et al. (2014), which confirm that a high level of social capital encourages community behaviour in environmental protection. All of these findings suggest the potential of social capital in enhancing community participation in cultural heritage tourism development. This paper explores the social capital elements that motivate local communities to support heritage tourism development within their areas. It examines the existing elements of social capital, including relationships within the communities, and how they relate to community empowerment and conservation education as a gateway to sustainable tourism. Methodology This study was designed as a research survey analyzing social phenomena. The research is directed toward finding facts on the basis of factual phenomena of social capital that may be considered as supporting the development of tourism. A. 
Study Site The study was conducted in the Village of Tapos I within the Sub-district of Tenjolaya, District of Bogor, in the Province of West Java. The object of the local heritage tourism is the Cibalay Megalithic Site (Figure 1). Figure 1 Cibalay Megalithic Site. Cibalay Megalithic Site is declared a Cultural Heritage Site and is protected and regulated by Act No. 5/1992 on Cultural Heritage Sites. It is also located within the protected area of Gunung Halimun-Salak National Park. Thus, formally, its status is legally protected both by the Act and by its location within a protected area. The closest hamlet to the location is the Sinar Wangi Hamlet, which has been given the status of a conservation hamlet. The management of Cibalay Megalithic Site is organized by the Tourism Village Forum of Tapos 1 Village, which started in 2007 with the aim of promoting the tourism potential to improve local welfare. Members of the forum are generally youths who are also members of the Tapos 1 Youth Village Tourism group and of the Activator Tourism Group, consisting of members of various organizations such as the Regional Disaster Management Agency and the National Community Empowerment Program. Both are chaired by the same person, as is the Tourism Village Forum. In addition to the Tourism Village Forum, there is also a group of officers guarding the site from the Cultural Heritage Preservation Office of Serang, totalling 6 people (1 civil servant and 5 permanent employees). B. Data Collection Method and Analysis This research is descriptive in nature. The data collected covered the elements of social capital and the general condition of the location (Table 1). The elements of social capital that were studied were selected based on the scope of social capital at the micro (community) level as given by Grootaert and Bastelaer (2001), namely trust, local norms and values, local institutions (collective actions and coordination) and networks. 
A triangulation method for social-related data and information was used, consisting of interviews, observation and literature studies. Interviews were conducted on every weekend between August 18th and September 18th, 2014, with key informants and respondents from among community members, as shown in Table 1. The information gained from these interviews was supplemented by literature research in areas related to tourism, social capital, human behaviour and conservation education. The data and information collected were analyzed descriptively. C. Limitation of the study There are limitations to the study with regard to measuring social capital. This research did not use any quantitative measurement to determine the level of social capital within the village. Rather, it used a qualitative approach based on the analysis of other evidence in the field. A. Elements of Social Capital for Tourism Development There are few opportunities and limited potential for the local communities to invest in and operate tourism businesses by themselves. Observations from Thailand (Thammajinda, 2013) and Indonesia reveal that benefits from tourism development have tended to bypass local people in favour of outside investors, making the latter the main tourism actors who dominate tourism development in many local destinations. In the context of human development and the Cibalay Megalithic Site, social capital has a great influence because some dimensions of human development are strongly influenced by social capital, such as the ability to solve problems together, to raise collective consciousness to improve the quality of life, and to look for opportunities to improve welfare. The existence of strong social ties will lead to an increase in welfare. This situation will increase the possibility of accelerating the development of individuals and groups within the community. 
The social capital elements of the communities within the study sites are tabulated below in Table 2. 1) Trust Among the people in Tapos 1 Village, trust towards government leaders, community leaders and religious leaders was highly influenced by the roles of the new village head at the time and of local and religious figures who actively participated in every village activity, such as Qur'an recitals in every hamlet, mutual cooperation work such as repairing roads and keeping the environment clean, and attendance at every village meeting and village development planning consultative meeting. These have encouraged the community members to actively participate in these activities as well. Trust in intra- and inter-personal social relations, between individuals and between groups, was also high and evident in everyday life. People can entrust a message for another member of the community to a fellow community member without worrying that the message will not be conveyed. On the contrary, interpersonal economic relations related to income/livelihood have prompted some dislike within the community. This was indicated by a little resentment shown towards the managers/caretakers of the Cibalay Site. For example, some envy could occur when a member of the community received visitors for an overnight stay at their home (homestay), despite the previous agreement on the criteria for the selection of houses. However, this did not cause any conflicts. Apart from the employment opportunity, this was caused by the different perceptions about the historical background of the Cibalay Site between the officers and the local community. This fact has made the guarding officers more cautious in conveying the story/myth/history of the site to the locals, so as not to create conflict, and they tell visitors who want to learn more about the history of the area to go directly to the head of the guarding officers. 
This is also influenced by the lack of knowledge and willingness of the staff to add to their insights. The gap was also influenced by the status of the officers, who have no close relationship with the leadership of the village government, yet have formal power status due to the support from the Cultural Heritage Preservation Office in Serang. Nevertheless, one of the officers is also a member, and the head, of one of the local community housing groups (RW). This is one of the ways of managing emerging conflict. Therefore, to increase trust in and community participation around the heritage site, the local community must be involved in the planning and management of the Cibalay Site, including reaching a communal agreement on the historical background of the site. The history is important to be told, especially since the area receives a great number of visitors during weekends and national holidays. The local community was very open to visitors, which shows a high trust towards visitors. This was enhanced by the local Sundanese philosophy among the Tapos 1 Villagers in treating guests, i.e., "Someah hade kasemah," which can be translated as "polite, kind, and caring towards visitors". The arrival of visitors was also highly expected by the community, as a way to obtain additional revenue. The informal and subjective elements of interpersonal behaviour shaped people's minds and attitudes about interacting with others. When members of communities trust each other and the institutions that operate among them, they have easier access to reaching agreements and conducting collective actions and cooperation, resulting in networks. Trust improves cooperation by reducing expenses and improving the exchange of resources, skills and knowledge (Pretty and Ward, 2001). When trust in the social structure increases, it enhances individuals' willingness to trust people with whom they are not familiar. 
As a result, individuals are more likely to start and join a local organization that aims to improve the social, economic, or local environment (Pretty and Smith, 2004). It was evident from the research that economic and social motivations formed the level of trust that the communities showed toward each other, their leaders and outsiders. The basic motivation of trust towards outsiders was largely economic, such as the opportunity for employment and increasing local income. 2) Social Norms The influence of local art and religious leaders in the villages has a strong effect on the implementation of social norms, while trust in the village government determines the implementation of village government regulations. Social norms played an important role in controlling and shaping the behaviour that grew in the community. Formal and informal rules, norms and sanctions are instrumental in putting the interests of the group over individual interests in the formation of positive attitudes and behaviours towards the environment. According to Hasbullah (2006), maintenance of group norms (adherence to the norms of religion, morality, and politeness) will strengthen the communities' social capital. Customary norms no longer hold true for the Tapos 1 Village community. Nowadays, people follow the government norms or regulations, including the rules of the village, and achieve agreement among members of the community. One of the norms that is still strongly applied by the community is embodied in the local motto of "leuweung hejo, masyarakat kudu ngejo", which can be translated as, "if we want the forest to keep providing for us, then we should be kind to the environment". 
This motto is engraved in the everyday life of the people in Tapos 1 Village. The motto is also motivated by the designation of one of the hamlets, Sinar Wangi, as a conservation village. Apart from customary norms, religious norms are very evident from the regular weekly Holy Qur'an recitals. The community also has a tendency to follow and implement what has been agreed and shared by habit among fellow citizens, as well as the provisions of the religion or beliefs held. There was also strong social control among members of the communities, in the form of sanctions for people who violated the existing norms, because it is their belief that members of the community should not give the village a bad name. 3) Collective actions and cooperation The desire to gain and share experiences with other community members was apparent during the Qur'an recitals. In addition, the Tourism Forum also frequently held informal weekly meetings called "ngariung" at the campsite of Mount Salak on Saturday/Sunday, open in general to every community member who wants to join. The meetings are used to discuss the development of the Tourism Village Forum as well as the development of tourism in Tapos 1 Village in general. The frequency of community participation in social organizations is generally high, especially among the youths. Among adults, participation is generally moderate and normative. This was observed to affect participation in decision-making in social organizations, in which youths were more active in participation and decision-making. Various collective actions based on mutual trust would increase participation in a variety of shapes and dimensions, especially in the context of building common progress. The purposes of collective action within the study site consisted primarily of community-organized activities for religious purposes and for providing environmental services. 
Trust fostered the collective actions and cooperation within the studied village. This was in line with the high level of trust toward the village government, which validated the importance of local leadership (the village government) in empowering the community. There is also a form of citizen awareness of the surrounding forest environment, especially among residents of Sinar Wangi Hamlet. Community participation in maintaining the forest environment led to Kampung Sinar Wangi being established as one of the pilot conservation villages of the Gunung Halimun Salak National Park. Therefore, residents of Kampung Sinar Wangi have a strong motivation to foster stewardship among fellow community members in protecting the environment in the surrounding forests and villages. 4) Networks As a 'bottom-up' phenomenon, social capital is created when individuals develop network connections. In other words, strong social capital depends on the capacity of community groups to build networks. The research confirmed that trust provided the foundation for norms and collective actions, which together determined the level of networking. Table 3 confirmed that the higher the level of trust that the communities have towards their fellow communities, local figures, leaders, and outsiders, the higher the ability to establish networks and build local organizations/associations. The high trust towards outsiders proved to enhance the openness of the communities to develop networks. Another important result was that the higher the need to improve economic conditions, the higher the participation in tourism. Various social networks are observed in Tapos 1 Village. This is apparent since every social network has a foundation in the form of institutional organizations/forums/groups, whose membership is formal and open to the public. Willingness to establish a cooperation network among community members was shown by the presence of community figures who pioneered the establishment of networks, which was then followed by community participation. This can be achieved only with a high level of trust in the local figures. The local community is also very receptive and cooperates well in establishing social networks to develop tourism within the village. This was evident from some of the networks developed to date, which include the Department of Tourism and Culture of the District of Bogor, the West Java Tourism Village Forum, and the Regional Disaster Management Agency. The underlying motivation was to enhance the welfare of the village through tourism development and other village activities. Heritage tourism development at the Cibalay Megalithic Site has been providing benefits to the local village such as employment, homestay ownership, parking spaces and parking fees, trade, etc. The road leading to the village has also been improved, and various tourism objects have emerged due to the arrival of visitors who wanted to visit the heritage site.
This eventually adds revenue to the local government, and more visitors are expected to visit the Cibalay Megalithic Site, especially with the assistance of media coverage that has brought up the importance of megalithic sites as an educational medium for learning about our civilization's heritage. B. Social capital as a basis for community participation in fostering environmental education The sustainability of the Cibalay megalithic site is greatly influenced by the community and the visitors who come to the site. They have to demonstrate pro-environmental behavior in order for the site to be sustained. Pro-environmental behavior can be cultivated in both residents and visitors by means of environmental education: a management tool and a means to strengthen people's environmental concern, which would lead to environmentally responsible or pro-environmental behaviour (Stapp, 1969; Hungerford and Volk, 1990). Active local environmental groups can provide the source to generate social capital in the community (Klyza, Savage and Isham, 2004). Such a group existed in Tapos 1 Village, i.e., the Tourism Village Forum. The Forum had a strong influence on the community, particularly in tourism management of the village. Using this influence, the Forum could encourage community participation in environmental education, in which the importance of human history and the value of Mount Salak and the Cibalay megalithic site as tourism destinations were emphasized. Values can motivate people to engage in certain behavior (Kollmuss and Agyeman, 2002). Conveying the values of Mount Salak and the site is expected to increase people's appreciation of the resources, enhance their affinity with the resources, and in turn urge them to protect the resources. Values are related to norms. Changing the way people value things is expected to also change the norms which regulate their actions. Changing social norms can ensure the long-term protection of the environment (Pretty and Smith, 2004).
Norms, rules, and values are the means to achieve long-term sustainability (Liu et al., 2014). In addition, people's participation can be promoted by environmental education (EE). For the residents, EE can take the path of formal and non-formal education, while for the visitors it would be best delivered through interpretation/interpretive programs. Formal EE means providing EE issues in the curriculum, either by integrating the issues into the existing subjects or by designing a new subject. Non-formal EE can make use of activities that already exist in the community, such as Islamic study, informal discussion, or training and courses by design. A well-designed and well-implemented interpretation can increase visitors' knowledge of the host area, positive attitudes toward the resources, general environmental behavioral intentions, and support of conservation (Powell and Ham, 2008). Interpretive programs for visitors can take advantage of the local guides and the various exhibits installed at the site to inform visitors of the history and the value of the site. Community participation in sustainable tourism is also the key to the success of conservation efforts, because many tourism activities take place within protected areas (Sunkar, Rachmawati and Cereno, 2013). This is also supported by the statement of Pretty and Smith (2004) that many studies have shown increased activity in the conservation of natural resources in and around communities of protected areas, which reflected good relationships between members of the community, between and within groups, and across networks. Social networks are indispensable for the success and sustainability of ecotourism development.
According to Pretty and Smith (2004), strong positive relationships within and between social groups could significantly lower the cost of tourism operations through cooperation, facilitation, collaboration, investment in collective action, a reduced likelihood of an individual engaging in activities that generate negative impacts on the group, and increased chances of innovation. This research found that trust, especially in the village government, was the most crucial element of social capital, forming the foundation for successful cultural heritage development because it led to networks that would empower the communities. This result was similar to that of Oktadiyani (2010), who found that trust and norms were the major elements of social capital in the communities of the Kutai National Park buffer zone, while research by Baksh et al. (2013) found that the development of ecotourism in the region was strongly influenced by networks, followed by public participation, although it was not influenced by beliefs and norms. Conclusion In the context of human development and the Cibalay Megalithic Site, social capital has a great influence because some dimensions of human development that were evident in Tapos 1 Village were strongly influenced by social capital, such as the ability to solve problems together, raise collective consciousness to improve the quality of life, and look for opportunities to improve welfare. The existence of strong social ties will lead to an increase in welfare. This situation will increase the possibility of accelerating the development of individuals and groups within the community. Local village leadership had significant effects on maintaining a solid rural community. The higher the social capital, the stronger the ability of the community to resist changes to themselves and to their environment.
When enhanced with conservation education, it would improve the conduct of sustainable tourism by empowering the community's capacity to work together to address common needs, fostering greater inclusion and cohesion, and increasing transparency and accountability. This research found that trust, especially in the village government, was the most crucial element of social capital, forming the foundation for successful cultural heritage development because it led to networks that would empower the communities. In addition, this research also found that the participation of the local community is crucial in the planning and management of the Cibalay Megalithic Site to provide visitors with environmental education and information on the historical background of the area. Acknowledgement Thanks are due to the Indonesian Ministry of Education and Culture and Bogor Agricultural University, who made this research possible by awarding it funding through national competitive grants (DIPA IPB No. SPK. 21/IT3.11/LT/2014, dated June 2nd, 2014).
Development of Automatic Accumulating Equipment for Roller-Type Onion Pot-Seeding Machine In this study, automatic accumulating equipment for a roller-type onion pot-seeding machine was developed. The developed accumulating equipment accumulates pot trays that have completed seeding in multiple stages using a conveyor belt and an elevator plate, and can be easily attached to existing roller-type onion pot-seeding machines as it is fabricated as a module. The components were selected to meet the conditions required for the accumulating equipment. A factorial experiment was performed with the transport speed of the conveyor belt and the one-time operating time of the actuator as test factors. Through the experiment, appropriate operating conditions to minimize the seed bounce rate in the accumulating process were derived. If the developed accumulating equipment is applied to a roller-type onion pot-seeding machine, the labor force and labor load required for seeding will be reduced. Introduction A need for ergonomic design is emerging to prevent occupational musculoskeletal diseases that occur owing to the uncomfortable working postures and repeated motions of workers [1,2]. In particular, agricultural work is often performed in harsh conditions such as high temperatures and dusty environments. Thus, studies to reduce the labor load and increase the work efficiency of workers are required [3]. In the case of the roller-type pot-seeding machine used for onion seeding, the work efficiency is high, and the economic efficiency and durability are excellent [4]. However, pot trays that complete the seeding process must be carried one at a time by workers, as there is no pot tray accumulating equipment. The labor load of the workers is very high because they must repeatedly assume the uncomfortable posture of bending their backs to pick up the pot trays until the seeding work ends.
Therefore, the development of an equipment to accumulate the seeded pot trays in multiple stages and carry them at once is required. Kang et al. [5] developed accumulating equipment for wood to reduce the labor intensity and to shorten the time required in the wood packaging process. Chang et al. [6] developed a high-efficiency accumulator suitable for cabbage-accumulating work through a mechanical and kinematic analysis. Park et al. [7] developed equipment capable of accumulating up to three pot trays for the automation of the seedling raising process in plant factories. The accumulating equipment used in the seedling raising area can be classified into the round bar type, angle beam type, and flat plate type depending on the element that lifts the pot trays. The round bar type and angle beam type have high power transmission efficiency and relatively low production cost because they use simple mechanical components such as chains and gears. These types, however, cannot be used when the flexibility of the pot trays is high because the contact area between the lift element and pot tray is narrow. For the flat plate type, the production cost is high and the structure is complicated because hydraulic devices such as actuators are used. However, it is not significantly affected by the flexibility of the pot trays because the contact area between the lift element and pot trays is wide [8]. As the pot trays used for onion seeding can be easily bent owing to their high flexibility, the flat plate-type accumulating equipment is required to reduce the bending of the pot trays, thereby minimizing the seed bounce in the accumulating process [9]. In this study, flat plate-type accumulating equipment for a roller-type onion pot-seeding machine was developed, and appropriate operating conditions to minimize the seed bounce in the accumulating process were derived. 
Structure of Existing Roller-Type Onion Pot-Seeding Machine [10] A roller-type onion pot-seeding machine consists of an autofeeding device, hoppers, rollers, a seeding device, and a power source. Their major specifications and shapes are shown in Table 1 and Figure 1. In operation, pot trays are supplied and transported at a constant speed by the autofeeding device. Seeding is performed on the pot trays in the following sequence: bed soil input by hoppers 1 and 2, bed soil compression by rollers 1, 2, and 3, seeding by the seeding device, and covering soil input by hopper 3. For the roller-type onion pot-seeding machine, work is completed with two bed soil inputs, one seeding, and one covering soil input, and there is no separate accumulating equipment.

Table 1. Specifications of existing roller-type onion pot-seeding machine [11].
Classification | Contents
Length × Width × Height | 3477 mm × 510 mm × 975 mm
Total weight | 165 kg
Operation capacity | 360 pot trays/h
Number of cells in pot trays | 448

Figure 1. Picture of existing roller-type onion pot-seeding machine.

Requirements for Accumulating Equipment Based on the purpose of the accumulating equipment, components must be selected by considering the following.
A. Manipulation must be simple for the convenience of the worker.
B. The system must be constructed so that the impact occurring in the accumulating process is minimized and pot trays can be accumulated safely.
C. The equipment must be easily attached to and detached from the existing roller-type pot-seeding machine in order to increase its usefulness.
D. If pot trays are over-accumulated, their weight may exert excessive pressure on the bottom pot tray. Therefore, workers must be informed of the degree of accumulation in order to prevent over-accumulation.
Details of Developed Accumulating Equipment The main components of the accumulating equipment are the conveyor belt, elevator plate, limit switch, pneumatic-type actuator, stopper bracket, alarm bell, control box, and connecting bracket. Their characteristics and functions are as follows:
A. Conveyor belt: This transports the pot trays, which pass through hopper 3 of the existing pot-seeding machine and complete the seeding work, to the accumulating plate (Figure 2). The conveyor belt is driven by a motor installed in the lower portion of the supporting frame. Its transport speed can be adjusted from 0.03 to 0.3 m/s using the speed controller, which has 10 stages; the speed of the first stage is 0.03 m/s and each stage increases the speed by 0.03 m/s. To transfer the pot trays to the elevator plate, two pairs of conveyor belts are installed with a spacing of approximately 285 mm.
B. Elevator plate: This moves up and down by means of a pneumatic-type actuator with an air compressor. The area of the elevator plate was designed to be almost identical to that of a pot tray so that it could stably support pot trays, which bend easily owing to their high flexibility (Figures 3 and 4).
C. Limit switch: When pressed by contact, the limit switch stops the conveyor belt by sending a stop signal to the motor, and simultaneously operates the pneumatic-type actuator by sending a start signal to the air compressor. Located on the supporting frame behind the elevator plate, the limit switch is pressed by the end of a pot tray (Figure 5).
D. Pneumatic-type actuator: Located at the bottom of the elevator plate, the pneumatic-type actuator performs a stroke movement using the air compressor as a power source. Upon receiving a start signal from the limit switch, it performs a one-time ascent and descent of the elevator plate with the pot tray through the stroke movement.
The stroke time can be adjusted between 0.1 and 3 s by changing the air supply rate of the air compressor through the timer. The shape and specifications of the pneumatic-type actuator are shown in Figure 6 and Table 2, respectively.
E. Stopper bracket: As a key element in accumulating pot trays, the stopper bracket consists of a bracket, guide, and spring. The bracket can tilt upward only, by up to 90°, and thus does not block the motion when the elevator plate with a pot tray is raised. After the elevator plate passes, the bracket returns to its original position by spring force. The side of the pot tray rests on the bracket when the elevator plate descends. The side length of the elevator plate is slightly shorter than that of a pot tray, so it does not touch the bracket during ascent and descent (Figure 7). Therefore, during the one-time ascent-descent motion, the elevator plate returns to its original position and only the pot tray is supported by the bracket (Figure 8). When the first pot tray is fixed by the stopper bracket, the elevator plate carrying the second pot tray rises and pushes the first and second pot trays up together. When the elevator plate descends, the two pot trays are supported by the stopper bracket. The third and fourth pot trays are accumulated onto the stopper bracket in a similar sequence; the first pot tray ends up at the top. A total of six stopper brackets are located on both sides of the elevator plate so that the pot trays are supported at six points, minimizing bending of the pot trays.
F. Alarm bell: When the buzzer is pressed by contact, the alarm bell operates to inform the workers of the accumulation status. Located at the side of the top of the guide, the buzzer is designed to be pressed when the elevator plate ascends with four pot trays.
Moving more than five pot trays at once is difficult, and excessive deformation occurs under their self-weight. Therefore, the bell sound reminds the workers that four pot trays have been accumulated (Figure 9). The number of pot trays that triggers the bell can be adjusted by changing the height of the alarm bell.
G. Control box: Equipped with a motor speed controller, air compressor timer, accumulating equipment power on/off switch, and emergency stop switch, the control box allows the worker to control the overall operating conditions of the accumulating equipment in one place (Figure 10). The control box was installed on the front of the accumulating equipment considering the workers' movements, approximately 950 mm above the ground so that workers can operate it without bending significantly at the waist.
H. Connecting bracket: This is used to connect the accumulating equipment to a pot-seeding machine. There are two connecting brackets on the steel pipe (pipe 2) located at the front of the accumulating equipment. The hole diameter of the connecting bracket was designed to fit the diameter of the steel pipe (pipe 1) located at the end of the roller-type pot-seeding machine. The developed accumulating equipment can be easily attached to and detached from the existing roller-type pot-seeding machine by fastening the connecting bracket (Figure 11).
The operation method of the accumulating equipment is as follows. The pot tray that completed seeding is transported to the elevator plate by the conveyor belt. When contact occurs between the end of the pot tray and the limit switch, the limit switch sends a stop signal to the motor to stop the conveyor belt, and simultaneously sends a start signal to the air compressor to operate the pneumatic-type actuator. In this instance, the pot tray and the elevator plate ascend and then descend owing to the stroke movement of the pneumatic-type actuator.
The elevator plate returns to the original position as it descends, but the pot tray is accumulated onto the stopper bracket. When the elevator plate returns to its original position, the conveyor belt operates again to accumulate the second and third pot trays onto the stopper bracket in sequence in the same manner ( Figure 12). Figure 13 shows a flowchart of the accumulating process. The developed accumulating equipment has a simple configuration and can be easily attached to or detached from a roller-type pot-seeding machine by a worker. The control box provides very convenient operation. Moreover, pot trays can be supported and accumulated in a stable manner using six stopper brackets, and the information on the degree of accumulating is confirmed by the alarm bell. Therefore, the developed accumulating equipment meets all conditions required for the accumulating equipment. Figure 14 shows the overall shape of the developed accumulating equipment. Factorial Experiment to Derive Appropriate Operating Conditions If the operating conditions of the accumulating equipment are not appropriate, the seed bounce, in which the seeds in the pot tray cells are thrown out by the impact occurring in the accumulating process, may occur ( Figure 15). The operating conditions that affect the seed bounce are the transport speed of the conveyor belt and the elevator plate ascending/descending time (one-time operating time of the actuator). If the transport speed of the conveyor belt is faster than the proper speed, a large impulse occurs when the pot tray and the limit switch collide, thus causing the seed bounce. Moreover, if the ascending/descending time of the elevator plate is shorter or longer than the proper time, the seed bounce occurs owing to the impulse generated when the pot tray and the stopper bracket collide or owing to the deformation generated from contact with the stopper bracket. 
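The limit-switch-driven sequence described above (stop the belt, perform one actuator stroke that leaves the tray on the stopper bracket, resume the belt, and sound the alarm at four trays) can be sketched as a small control loop. This is a hypothetical illustration of the logic only; the class and method names below are not from the paper.

```python
# Hypothetical sketch of the accumulating control sequence: the limit switch
# stops the belt and triggers one actuator stroke; each stroke leaves one
# more tray on the stopper bracket; the alarm bell sounds at four trays.

class AccumulatorController:
    ALARM_THRESHOLD = 4  # trays accumulated before the alarm bell sounds

    def __init__(self):
        self.belt_running = True
        self.accumulated = 0
        self.alarm = False

    def on_limit_switch(self):
        """A pot tray has reached the elevator plate and pressed the switch."""
        self.belt_running = False   # stop signal to the conveyor motor
        self._actuator_stroke()     # start signal to the air compressor
        self.belt_running = True    # belt resumes once the plate is back down

    def _actuator_stroke(self):
        # One ascent/descent of the elevator plate: the plate returns to its
        # original position, but the tray stays on the stopper bracket.
        self.accumulated += 1
        if self.accumulated >= self.ALARM_THRESHOLD:
            self.alarm = True       # remind the worker to unload the stack


ctrl = AccumulatorController()
for _ in range(4):                  # four trays arrive in sequence
    ctrl.on_limit_switch()
print(ctrl.accumulated, ctrl.alarm)  # 4 True
```

The sketch only models the event ordering in Figure 13's flowchart, not the timing of the stroke itself.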
Therefore, a factorial experiment was performed to minimize the seed bounce occurring in the accumulating process. The transport speed of the conveyor belt and the one-time operating time of the actuator were selected as test factors. The pot tray movement speed of a roller-type onion pot-seeding machine is 0.075 m/s [12]. To smoothly accumulate a pot tray that has completed seeding, the speed of the conveyor belt must be higher than the movement speed of the pot tray. A preliminary test also confirmed that excessive seed bounce occurred when the conveyor belt speed exceeded 0.3 m/s. As the transport speed of the conveyor belt can be adjusted from 0.03 m/s to 0.3 m/s in 0.03 m/s increments, it was set to seven steps of 0.09, 0.12, 0.15, 0.18, 0.21, 0.24, and 0.27 m/s in the factorial experiment. The preliminary test further confirmed that excessive seed bounce occurred when the actuator operating time was shorter than 0.7 s or longer than 1.6 s. Therefore, the actuator operating time was set to eight steps of 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, and 1.5 s. This gives a total of 56 conditions (7 × 8) of conveyor belt speed and actuator operating time in the factorial experiment. Three repeated tests were conducted for each condition, resulting in a total of 168 tests. The developed accumulating equipment was attached to the existing roller-type pot-seeding machine to accumulate the pot trays that completed seeding. After accumulating the pot trays, the number of seeds thrown out of the pot tray cells owing to impacts in the accumulating process was measured. The seeding rate of the pot trays after the seeding device was checked before the tests and found to be 100% (Figure 16); thus, any empty cells in the pot trays are due to the accumulating process. The results of each test condition were analyzed using the average values of the three repeated tests as representative values.
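The factorial design above can be enumerated directly: 7 belt speeds × 8 actuator times gives 56 conditions, and 3 repeats give 168 test runs. A minimal sketch using the factor levels stated in the text:

```python
# Enumerating the factorial design: 7 conveyor belt speeds x 8 actuator
# operating times = 56 conditions; 3 repeats each = 168 test runs.
from itertools import product

belt_speeds = [0.09, 0.12, 0.15, 0.18, 0.21, 0.24, 0.27]   # m/s
actuator_times = [0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5]  # s
repeats = 3

conditions = list(product(belt_speeds, actuator_times))
total_runs = len(conditions) * repeats
print(len(conditions), total_runs)  # 56 168
```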
For the bed soil and seeds used in the tests, products commonly used in South Korea were applied. Figure 17 shows the pot tray cells from which seeds were thrown out during accumulating. Figure 18 shows the number of empty cells according to the pneumatic-type actuator operating time. As the operating time increased, the number of empty cells tended to increase. Under all operating time conditions, the number of empty cells was lowest when the conveyor belt speed was 0.09 m/s and highest when it was 0.27 m/s. The number of empty cells appears to have increased with the conveyor belt speed because the impulse generated by contact with the limit switch increased. Figure 19 shows the number of empty cells according to the conveyor belt speed. As the conveyor belt speed increased, the number of empty cells tended to increase. Under almost all conveyor belt speed conditions, the number of empty cells was lowest when the operating time of the pneumatic-type actuator was 1.0 s and highest when it was 1.5 s. Regarding the operating time of the pneumatic-type actuator, the number of empty cells was high when the time was either too short or too long. When the operating time was too short, the number of empty cells appears to have increased because the movement speed of the elevator plate increased, and thus the impulse generated when the pot tray was placed onto the stopper bracket increased. When the operating time was too long, the number of empty cells appears to have increased because the movement speed of the elevator plate decreased, and thus the deformation due to contact with the stopper bracket increased as the pot tray ascended (Figure 20). Since the relative motion of, or contact with, the stopper bracket causes empty cells, most of the empty cells occurred at the edge of the pot tray.
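The per-condition analysis amounts to averaging the three repeats and converting the empty-cell count into the empty rate of Eq. (1) over the 448 cells of a tray (Table 1). A minimal sketch follows; the empty-cell counts here are hypothetical, chosen only to illustrate the calculation (the first two conditions are set to reproduce a rate near the 0.07% reported later).

```python
# Averaging repeats and converting empty-cell counts into an empty rate:
# E_s = N_s / N_c * 100, with N_c = 448 cells per pot tray (Table 1).
# The counts below are hypothetical example values, not measured data.

N_C = 448  # total cells per pot tray

def empty_rate(empty_counts):
    """Mean empty rate (%) over the repeated tests of one condition."""
    mean_empty = sum(empty_counts) / len(empty_counts)
    return mean_empty / N_C * 100

# hypothetical (belt speed m/s, actuator time s) -> counts from 3 repeats
results = {
    (0.09, 1.0): [0, 1, 0],    # ~0.07 %
    (0.12, 1.0): [1, 0, 0],    # ~0.07 %
    (0.27, 1.5): [9, 11, 10],  # worst case in this sketch
}
rates = {cond: empty_rate(counts) for cond, counts in results.items()}
best = min(rates, key=rates.get)
```

Ties on the empty rate, as between the two 1.0 s conditions, would then be broken in favour of the faster belt speed for work efficiency, as the paper does.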
To verify whether the conveyor belt speed and the operating time of the pneumatic-type actuator had a significant impact on the seed bounce rate, an analysis of variance (two-way ANOVA) was conducted (Table 3). The conveyor belt speed, the operating time of the pneumatic-type actuator, and their interaction all had a significant impact on the seed bounce rate at the 5% significance level. Therefore, the conveyor belt speed and the operating time of the pneumatic-type actuator can be judged to be major factors affecting the seed bounce rate. Table 4 lists the empty rates under each test condition. The empty rate is defined as the ratio of the number of empty cells in the pot tray to the total number of cells in the pot tray, expressed as a percentage, as shown in (1):

E_s = (N_s / N_c) × 100 (1)

where E_s = empty rate of the pot tray cells (%), N_s = number of empty cells in the pot tray, and N_c = total number of cells in the pot tray. A condition with a small empty rate represents a good operating condition with a low seed bounce rate. Derivation of Appropriate Operating Conditions Analysis of the average empty rate showed that it decreased as the conveyor belt speed decreased. For the operating time of the pneumatic-type actuator, the empty rate was lowest at 1.0 s, followed by 0.9, 0.8, 1.1, 1.2, 1.3, 1.4, and 1.5 s. For individual conditions, the empty rate was lowest at 0.07% when the actuator operating time was 1.0 s and the conveyor belt speed was 0.09 m/s or 0.12 m/s. Although the empty rate was the same under these two conditions, a faster conveyor belt speed leads to higher work efficiency. Therefore, an actuator operating time of 1.0 s and a conveyor belt speed of 0.12 m/s were judged to be the optimal operating conditions considering both the seed bounce rate and work efficiency. Conclusions In this study, accumulating equipment for a roller-type onion pot-seeding machine was developed, and the appropriate operating conditions were derived through factorial experiments.
The developed accumulating equipment is a flat plate type. Its main components are a conveyor belt, elevator plate, pneumatic-type actuator, stopper bracket, limit switch, control box, alarm bell, air compressor, and connecting brackets. The accumulating equipment operates as follows: when the pot tray that completed seeding is transported onto the elevator plate by the conveyor belt, the conveyor belt stops and the pneumatic-type actuator starts, triggered by the signal from the limit switch. The elevator plate returns to its original position after being raised and lowered by the pneumatic-type actuator, but the pot tray is accumulated onto the stopper bracket. When the elevator plate returns to its original position, the conveyor belt operates again and accumulates the second and third pot trays onto the stopper bracket in sequence in the same manner. Factorial experiments were conducted to derive the appropriate operating conditions of the developed accumulating equipment. The experiments varied the conveyor belt speed from 0.09 to 0.27 m/s and the operating time of the pneumatic-type actuator from 0.8 to 1.5 s, and the seed bounce was measured under each condition. As a result, a conveyor belt speed of 0.12 m/s and an actuator operating time of 1.0 s were found to be the appropriate operating conditions, yielding the lowest seed bounce rate with high work efficiency. The factorial experiments were conducted under the conservative condition that the seeded pot trays were not covered with soil; therefore, if no seeds are thrown out under these conditions, no empty cells will occur in the pot trays in real seeding work. The application of the derived operating conditions to the developed accumulating equipment is expected to reduce the labor load and increase the work efficiency of onion seeding.
Measuring Company Performance and Customer Satisfaction

Measurement plays a critical role in the everyday management of service businesses and in the development, implementation and adaptation of both strategy and operations. It also plays an important role in monitoring and enhancing the quality of the service experience and in the co-creation of service innovations. Measurement plays an important role in underpinning and informing the everyday and strategic management of service businesses. Reading a business is part of a strategic approach to management in which a firm continually observes the business to monitor and evaluate the relationships between strategy, operations and the production of value, or outcomes. This chapter will first present and discuss core instruments for measuring service production performance. This is followed by a discussion of service quality, customer satisfaction and measures of marketing performance. Finally, this chapter explores measurement tools intended to explore firms' innovation capabilities.

Electronic Supplementary Material: The online version of this chapter (10.1007/978-3-030-52060-1_13) contains supplementary material, which is available to authorized users.

Chapter questions:
• What measurement instruments have been developed to control and manage service businesses?
• How are measurement tools used by service firms?

Keywords:
• Measurement tools
• Measuring service performance
• Measuring customer satisfaction and marketing
• Benchmarking
• Innovation capability measure

Measurement plays a critical role in the everyday management of service businesses and in the development, implementation and adaptation of both strategy and operations. It also plays an important role in monitoring and enhancing the quality of the service experience and in the co-creation of service innovations. Measurement plays an important role in underpinning and informing the everyday and strategic management of service businesses.
Reading a business is part of a strategic approach to management in which a firm continually observes the business to monitor and evaluate the relationships between strategy, operations and the production of value, or outcomes. This involves identifying areas for improvement that could lead to adjustments to routines or everyday practices. This is a process of strategic reflexivity in which reading, including measurement, is a continual process supporting management practices. Each chapter in this book contributes to understanding the ways in which service businesses develop and apply tools that are designed to measure performance, innovation and the evolution of the business. Nevertheless, it is important to focus on the ways in which service businesses develop and apply approaches to measurement as part of everyday business practices. This chapter explores the application of approaches to measurement and evaluation by service businesses. These include approaches to measuring service firms' key performance, particularly production performance, customer satisfaction and innovation capabilities. This chapter engages with the debates explored in Chap. 5 on service operations and productivity, Chap. 7 on service innovation and Chap. 8 on customer satisfaction by exploring operational measurement tools and explaining them in detail. These company performance measurements are intended to measure failure and problems in service production and delivery, for example, how to handle queuing problems and complaints, but also the measurement of efficiency in production and delivery systems. The costs related to an increase in performance are also part of some business approaches to measurement. Running a service business is not just a matter of improving
performance, as the cost of any alteration must also be taken into consideration; service businesses must be profitable and sustainable. It is important to appreciate that measurement involves both marketing, including a focus on customers, and operations. On the one hand, quantified marketing and customer measurements are often explored in marketing textbooks (e.g. Palmer 2005; Lovelock and Wirtz 2011; Hollesen 2015) and these are mostly directed towards measuring customer satisfaction. On the other hand, production performance measurements tend to be explored in service operations textbooks (e.g. Wright and Race 2004; Johnston and Clark 2005). This chapter will first present and discuss core instruments for measuring production performance. This is followed by a discussion of service quality, customer satisfaction and marketing performance measures. Finally, this chapter explores measurement tools intended to explore firms' innovation capabilities.

Capacity Planning and Yield Management

Services are traditionally considered to be labour intensive, and this restricts the ability of firms to produce and deliver a certain amount of services at a given time. If a service firm cannot fulfil demand, then it loses business and creates dissatisfied customers. If a firm has unutilized overcapacity, then it also loses money, as this overcapacity involves additional costs including salaries and other costs related to under-employed staff. It is difficult to balance service delivery with customer demand given variations in demand, and it may be difficult to predict current and future demand. Therefore, the challenge for service businesses involves balancing decisions regarding capacity and utilization with actual and potential customer demand. Pricing is important and can be a means to regulate demand, and this might enable a firm to balance capacity with demand.
Systematic approaches and tools to investigate and regulate the relationships between capacity and demand have been developed (Belobaba 1989; Adenso-Diaz and Gonzáles-Torres 2002). This approach has been conceptualized as a process of yield management (Kimes 1989; Berman 2005), a process in which measurement and operational delivery are used to allocate capacity to certain types of customers at a given time. This section explores problems related to capacity planning and yield management, including a discussion of measurement. Capacity planning involves addressing alterations in service demand throughout the day and the week. This is particularly relevant for services that must be produced at the moment of consumption and cannot be stored. These services include restaurants, hotels, transportation services, call centres and medical clinics. Capacity planning is also relevant for lawyers and accountants, for example. Here it may be possible for firms to apply flexible approaches to staff management to try to equalize employee workloads. One implication is that accountants work exceptionally long hours towards the end of a tax or financial year, as accounts need to be audited and approved. Capacity must be planned, and the first step includes measuring demand, for example, counting the number of customers every hour or every day. Statistics and demand curves can then be created to inform operational decisions. One example is provided in Fig. 13.1, which highlights a peak in demand within a call centre. In this example, there are several peaks, making it difficult for a service business to optimize the service delivery process. Here the challenge involves developing an answer to the question: how many call centre operators should be employed per hour? The first observation is that customer demand peaks around lunchtime (12:00-13:00).
Call centre operators must work during this time and their lunch breaks must be scheduled around the lunchtime peak. There are relatively few calls during the first and last hour and thus call centre capacity can be reduced. This level of customer demand variation makes it difficult for a call centre to develop an optimum solution. There are several options for the operator:

1. Employ more staff during the peaks. This might be difficult given the problems of persuading employees to work for 60 minutes and then to take an unpaid break.
2. Link call centres together, ensuring that overcapacity at one centre can be transferred automatically to the next available centre.
3. Filter out customers who could be dealt with by some form of automated system, ensuring that these customers are handled automatically.
4. Estimate the average demand during the day, accepting that employees must expect extremely busy periods and times during which there is limited demand.

The key issue is to monitor demand and to employ call centre operatives on relatively flexible contracts. Call centre operatives need to meet key performance indicators including response times and the average time required to deal with each customer. There is thus a clear focus on productivity combined with the quality of the service experience (see Chaps. 5 and 8). An alternative strategy is to engage in nudges intended to persuade customers to change their behaviours and to use the call centre during times of reduced average demand. This can be achieved by informing customers of waiting times and, during key demand peaks, trying to persuade them to engage with the company at another time or via another service delivery channel, for example, a chatbot. Where there is a direct price for a service, for example, the number of guests in a restaurant, then customers' behaviour can be influenced by price differentiation.
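The hourly staffing question raised above can be sketched in code. The following is a minimal illustration, not a method from the chapter: the demand figures, average handle time and target utilisation are invented assumptions, and a real call centre would use a proper queueing model.

```python
# Hypothetical staffing sketch: convert an hourly demand curve into a
# minimum number of call centre operators per hour. All figures below
# (handle time, utilisation target, demand curve) are illustrative
# assumptions, not values from the chapter.
import math

AVG_HANDLE_TIME_MIN = 4.0   # assumed average minutes spent per call
TARGET_UTILISATION = 0.85   # assumed fraction of each operator-hour on calls

def operators_needed(calls_per_hour: int) -> int:
    """Minimum operators so offered workload stays below target utilisation."""
    workload_hours = calls_per_hour * AVG_HANDLE_TIME_MIN / 60.0
    return math.ceil(workload_hours / TARGET_UTILISATION)

# Illustrative demand curve with a lunchtime peak (cf. Fig. 13.1)
demand = {"09:00": 40, "10:00": 60, "11:00": 90,
          "12:00": 150, "13:00": 140, "14:00": 80}
staffing = {hour: operators_needed(calls) for hour, calls in demand.items()}
print(staffing)
```

A rule like this makes the trade-off in option 4 visible: averaging demand over the day would understaff the 12:00 peak and overstaff the opening hour.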
The price can be reduced during hours or days with limited demand and increased during periods of peak demand. This leads to 'happy hours' with reduced prices in bars during the late afternoon and differentiated prices for flights and hotel rooms based on demand variations. Online retailers apply data analytics to predict demand, altering prices to maximize profit as part of a process of dynamic pricing known as surge pricing, demand pricing or time-based pricing. Pricing algorithms have been developed that challenge the relationship between product pricing, customer value and the pricing of competitor products (see Chap. 5, Sect. 5.3.2). Innovations in revenue management pricing systems have enabled companies, particularly in the hospitality and leisure industries, to link demand with alterations in pricing. This has led to dynamic pricing in which a provider instantly alters the price of a service or good as demand alters. Thus, prices will fall if there is a reduction in demand and will increase during times of peak demand. The integration of artificial intelligence with algorithmic pricing programmes has replaced human decision-making with automated systems that monitor demand and supply to facilitate dynamic pricing. Dynamic pricing is one approach to yield management, or the process by which firms maximize revenue through production planning. The aim is to reduce delivery costs by optimization focused on minimizing overcapacity as much as possible and increasing prices as much as possible. The latter is restricted by the customers' willingness to pay; they may, for example, engage in comparator pricing, seeking the least-cost approach. Which price customers are willing to pay, and at which time, can be calculated theoretically but is very difficult to measure in practice. Competitors also alter prices regularly to optimize yields.
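The dynamic-pricing logic described above can be sketched very simply. This is a toy multiplicative rule invented for illustration, not any commercial pricing algorithm; the floor and cap parameters are assumptions standing in for the customer's willingness to pay and a promotional discount limit.

```python
# Minimal dynamic-pricing sketch (an invented rule, not a real algorithm):
# price is scaled by the ratio of current demand to available capacity,
# clamped between a discount floor and a surge cap.
def dynamic_price(base_price: float, demand: int, capacity: int,
                  floor: float = 0.7, cap: float = 2.0) -> float:
    """Raise the price as demand approaches capacity; discount when slack."""
    load = demand / capacity
    multiplier = min(max(load, floor), cap)
    return round(base_price * multiplier, 2)

print(dynamic_price(100.0, demand=30, capacity=100))   # quiet period: floor applies
print(dynamic_price(100.0, demand=100, capacity=100))  # balanced: base price
print(dynamic_price(100.0, demand=250, capacity=100))  # surge: cap applies
```

The cap reflects the point made in the text that price increases are restricted by customers' willingness to pay and by comparator pricing against competitors.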
A service company may engage in pricing experiments by lowering and increasing prices at different hours during the day or over a week and then measuring the impacts on customer demand and revenues, including the ratio between the costs of service delivery and profitability. Service businesses can also ask customers how much they might be willing to pay for services at different times, although this may be a more laborious process. Such measurements can reasonably be applied only to services that are paid for directly, including airplane tickets and hotel rooms. Developments in dynamic pricing, using algorithmic pricing programmes, have transformed the ways in which online service businesses manage the relationship between price, demand and operations. A service might be provided free because it is included as part of a much more complex service package, for example, an online or telephone ticket service for booking tickets to some event. In this case, a price experiment regarding the delivery of this type of service will not measure customers' reactions to waiting times. A survey might, however, be developed to assess the waiting times that different customer segments would find acceptable. Another technique for increasing revenue involves reducing the price of a service. This might be a single service that is paid for, or it might involve a service that is included as part of a larger service package or bundle. That will reduce profit levels per customer, but it might also create an increased tolerance for longer service delivery waiting times, because customers perceive that they are purchasing a discounted service and expect that this will involve longer waiting times. A service firm can, therefore, decrease demand, particularly during periods of peak demand, and reduce delivery costs related to a reduction in delivery capacity. The aim is to try to spread demand, avoiding peaks, resulting in a reduction in service delivery costs. This is an optimization process.
Several effects must be taken into consideration and assessed in calculating the preconditions for, and revenue effects of, changing service delivery capacity. These preconditions and effects are:

• Customers' price and waiting tolerance, and changes in these caused by other service companies' offers
• The costs of expanding capacity, or the costs saved by reducing capacity
• Lost revenue from customers leaving the firm permanently in response to an increase in waiting times, resulting in an erosion of the firm's client base and a reduction in customer loyalty
• The administrative costs of managing a differentiated price system

Many of these capacity problems can be overcome by digitizing service delivery processes (Wirtz et al. 2018). Self-service IT systems have much greater capacity compared to labour-intensive systems, and peak loads are normally not a problem. Furthermore, such capital-intensive systems are much cheaper as they have reduced variable costs. They come with other advantages, including reductions in the costs of obtaining and analysing detailed knowledge about every service transaction. Such systems produce a continual stream of data, and data analytics can be applied to identify patterns to inform operational decisions. This highlights the enhanced importance of big data and the ability of service firms to monitor and modify service delivery. Nevertheless, not all services can be digitized, including, for example, restaurant meals and live theatrical performances. For services where capacity is difficult and expensive to alter, for example, hotel rooms and airplane seats, companies have developed a system based on overbooking. They sell more beds and seats than they have on the understanding that not all people will arrive. The advantage of this is full utilization of capacity and thereby increased revenue.
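The overbooking trade-off can be illustrated with a simple binomial no-show model. This is an assumption for illustration only (the chapter does not prescribe a model): each booking independently fails to show with probability p, no-shows still pay (as the text notes they often must), and each bumped customer costs a fixed compensation.

```python
# Overbooking sketch under stated assumptions: binomial no-shows, no-shows
# still pay, fixed compensation per bumped customer. Figures are invented.
from math import comb

def expected_profit(bookings: int, capacity: int, p_no_show: float,
                    price: float, bump_cost: float) -> float:
    """Expected revenue minus expected compensation for arrivals over capacity."""
    q = 1 - p_no_show  # show-up probability per booking
    exp_comp = 0.0
    for shows in range(capacity + 1, bookings + 1):
        prob = comb(bookings, shows) * q**shows * p_no_show**(bookings - shows)
        exp_comp += prob * (shows - capacity) * bump_cost
    return bookings * price - exp_comp

# Compare selling exactly capacity vs. modest overbooking (100-room hotel)
for sold in (100, 105, 110):
    print(sold, round(expected_profit(sold, 100, 0.08, 120.0, 300.0), 2))
```

A real calculation would also price in the harder-to-measure effects the text lists next: lost loyalty, public relations damage and the burden on staff who face bumped customers.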
In addition, a service company may even benefit from selling the same room or seat twice, as those who fail to utilize the service may still have to pay, for example, for a hotel room or a flight reservation. This approach only works when the exact number of predicted non-arrivals occurs. If the number of non-arrivals is less than predicted, then the service company is in difficulty: it has more customers than capacity. A calculation is then made regarding losing customers, decreased customer satisfaction and loyalty. One solution is to pay compensation (e.g. airline companies) or to transfer customers to a competitor. All this adds additional costs that must be weighed against the increased income obtained by selling all seats or rooms and perhaps occasionally obtaining double payment. The management of services involving overbooking requires measurement of the following variables:

• The probability of the number of customers who will not show up at a given time and place.
• The costs of procuring alternative services for customers, including paying exemplary damages.
• The combined value of lost customers, customer satisfaction and loyalty, including possible impacts on public relations.

Furthermore, overbooking creates complaints and often results in aggressive behaviour towards employees from customers who have booked and paid to receive a service but are unable to obtain it at the specified time and place. Employees may react by working to rule, impacting on productivity, an increase in sickness absence and enhanced labour turnover. The financial impacts may be difficult to measure, but these wider impacts must be taken into consideration in the design of service businesses that adopt overbooking strategies. These businesses must also balance the tension between business efficiency, customer satisfaction and responsible business behaviour.

Queue and Waiting Measures and Management

There is a continuum of service businesses.
On one side are capital-intensive businesses that apply technological solutions to cope with balancing supply and demand. On the other are labour-intensive businesses in which it is difficult to avoid queues and waiting lines emerging during times of peak demand, for example, self-service restaurants, call centres, car rentals and retail. A service company can investigate when and how such queues and waiting lines emerge. Techniques can be developed to make queuing more acceptable for customers, reducing customer dissatisfaction levels. This was especially the case during Covid-19, when social distancing had to be applied to the management of retail demand. People do not want to wait in line in queues. They become stressed and dissatisfied and may decide to purchase services from another provider. Sometimes people turn around and leave when they see a queue, for example, in a supermarket or restaurant. Nevertheless, queuing frequently cannot be avoided. It is important for a service company to explore acceptable customer queuing times. The company should attempt to reduce waiting time to this level, but this increases the costs of service delivery. For example, opening another supermarket checkout involves transferring an employee from one task to another. Furthermore, different customers will have different perceptions regarding acceptable waiting times. It is difficult to avoid the negative impacts of queuing, and this involves both optimizing service delivery systems and queue management. On the one hand, the service business must balance customer dissatisfaction with increased costs. On the other hand, queues provide an opportunity to engage with customers and to provide distractions that might enhance sales. A service business can measure and calculate when queuing occurs and acceptable customer waiting times. This can be measured when queues occur, including how many people are waiting and for how long.
A queue, for example, in a supermarket or at a hotel reception, can be monitored and additional employees deployed to deal with peak demand, or technological solutions developed, for example, automated check-in procedures at a hotel. For digitized services, an algorithm can be developed that automatically monitors demand and supply and tries to optimize delivery. In both cases, it is important to monitor when people leave the queue without completing a service transaction. This is a major problem for e-commerce retailers, as customers select items and add them to their baskets but never complete the transaction. All service businesses involving customer queuing must develop an appropriate approach to measuring and managing queues and to enhancing the customer experience of queuing. There are theoretical and mathematical queuing models that can calculate when queues occur and waiting times. Let us consider one example based on calculating queue waiting times at supermarket checkout counters. A queue will keep growing whenever the offered load exceeds the available counters. With c open counters, a service time of m minutes per customer and n customers arriving per minute, the offered load is

ρ = (n × m) / c

and the queue grows without bound whenever ρ ≥ 1. An example is as follows. One counter is open (c = 1), it takes 2 minutes to serve one customer (m = 2) and 3 customers arrive every minute (n = 3):

ρ = (3 × 2) / 1 = 6

Customers will continue to wait, and the waiting line will keep expanding. Six counters would be required to avoid queues forming (ρ = (3 × 2) / 6 = 1). Redundant capacity will occur if more than six counters are provided. This calculation is about balancing the cost of increasing capacity by opening more service lines against the loss of revenue as customers decide to leave a queue and perhaps obtain services from a competitor. This relationship between queue waiting, capacity and the identification of a balance point can be graphed (Fig. 13.2). Optimal capacity occurs where the missing income curve and the additional cost curve intersect.
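The checkout example can be checked in code. This is a minimal sketch of the offered-load calculation only; a full queueing model (e.g. Erlang C for waiting probabilities) would be needed for actual waiting times.

```python
# Offered-load sketch for the supermarket example: n customers per minute,
# m minutes of service per customer, c open counters. The queue grows
# without bound whenever rho = n*m/c reaches 1.
import math

def utilisation(n: float, m: float, c: int) -> float:
    """Offered load rho across c counters."""
    return n * m / c

def counters_needed(n: float, m: float) -> int:
    """Smallest number of counters keeping rho at or below 1."""
    return math.ceil(n * m)

print(utilisation(n=3, m=2, c=1))   # one counter: load far above 1, queue explodes
print(counters_needed(n=3, m=2))    # the text's six-counter answer
```

Note that c = 6 gives ρ exactly 1, which is the borderline case; in practice a firm would hold ρ strictly below 1 to leave headroom for demand variation.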
An extension of capacity is profitable when the cost of extending the capacity is less than the missed income it recovers:

C_capacity < R_missed

that is, when the cost of the additional capacity (C_capacity) is less than the income that would otherwise be missed (R_missed) because customers decide to leave the queue. Equivalently, capacity should be extended when the value of the reduction in waiting time exceeds the customer income missed through customers deciding to leave the queue. There is an important distinction to be made between actual and perceived queuing time. Companies can try to manage not only the actual waiting time but also customers' perceptions of the length of time that they have spent queuing. These techniques involve distractions that try to ensure that those waiting ignore the actual amount of time spent waiting and are instead distracted by some experience. This includes companies trying to increase customers' tolerance for waiting by deploying a number of psychological mechanisms, including:

• Management of waiting numbers. A customer can be provided with a number ensuring that they no longer need to stand in line. This enhances the customer experience, and waiting may become more comfortable and therefore acceptable. In a telephone queue, the service company may offer to return the customer's call whilst retaining their place in the queue.
• Reducing customer sight lines as they queue. This involves ensuring that a queue does not take the form of a single long line but is designed to restrict customer sight lines. This may reduce the perceived length of the queue.
• Information provision. Service businesses must manage customer expectations regarding queuing. Part of this task involves providing information regarding waiting times. This type of information provision may increase customer tolerance levels, encouraging customers to continue to queue. Information provision makes people feel safer and more in control. Nevertheless, notification of very long waiting times will encourage customers to leave the queue.
• Entertainment. Provision of entertainment, for example, music or visuals on screens, whilst people wait may make them overlook the time spent queuing. In this case, entertainment is designed to distract or divert customers' attention from the experience of queuing. However, it might also backfire, as people may leave the queue to avoid the entertainment. This type of diversion also involves placing merchandise and signage adjacent to the queue. For some service businesses, queues provide an opportunity to engage with customers, as they represent an unusual time when customers face the same direction. This provides an important opportunity to communicate messages to customers.
• Provision of complaint personnel. Employees may 'work' the queue to distract customers from the queuing experience by diverting their attention. In this case, customers may complain to employees about the length of time spent queuing. The danger is that this type of employment is not attractive, as the role involves the management of unhappy customers. This type of human interaction between service providers and customers may also be used to educate customers to reduce service delivery times. At airports, employees inform customers in queues about preparations they should make before reaching a check-in point.

The key issue for a service firm is to balance the cost of service delivery, as measured by staffing levels, against the loss of income as frustrated customers decide to leave the queue without completing transactions. One solution is to hire part-time assistants to cover peak times. This is only a partial solution, as such employees may be less experienced and will perhaps work at a slower rate compared to full-time employees.
Balanced Scorecards

One of the most important generic instruments that has been used to measure company performance over recent decades is the balanced scorecard approach (Kaplan and Norton 1996). This approach has also been applied to services. A balanced scorecard approach tries to develop a holistic account of an organization, ensuring that productivity enhancement occurs across the whole organization. The balanced scorecard is a tool to measure how close a company comes to implementing its strategies and business goals. Several quantitative measures and several qualitative assessments are included and placed in a matrix. A balanced scorecard is not a single index summarizing all variables; rather, it is a series of measures of the important factors or goals which best reflect a company's strategy and business goals. The first step is, therefore, to define which factors or variables should be included in a firm's balanced scorecard. Kaplan and Norton (1996) identified four performance areas which should be included:

1. Financial performance
2. Customer relations
3. Internal business processes
4. Innovation, learning and growth

Within these four areas a single company can define which variables and factors it wants to include. The balanced scorecard approach is a tool that supports a company by providing a set of measures that management can use to assess how far a firm is meeting its stated goals. It is not a type of scientific prediction index that will lead to theoretically pre-defined and guaranteed results. The selection of variables and factors should be decided carefully, based on the identification of which factors are most critical for the company's business success. These factors should be measurable, either quantitatively or as a qualitative description of the state of the company. The metrics included within a balanced scorecard must be identified using a SMART approach. They must be Specific, Measurable, Achievable, Realistic and Timely.
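Kaplan and Norton's four-perspective structure can be sketched as a simple target-versus-actual comparison. The metrics, targets and actual values below are invented for illustration; a real scorecard would use the firm's own SMART metrics, and some would need lower-is-better handling.

```python
# Minimal balanced-scorecard sketch across the four Kaplan-Norton
# perspectives. All metrics and numbers are invented; each metric here is
# assumed to be higher-is-better for simplicity.
scorecard = {
    "Financial performance":        {"metric": "Revenue growth (%)",        "target": 8,  "actual": 6},
    "Customer relations":           {"metric": "Net promoter score",        "target": 45, "actual": 50},
    "Internal business processes":  {"metric": "Orders on time (%)",        "target": 95, "actual": 95},
    "Innovation, learning, growth": {"metric": "Training hours/employee",   "target": 20, "actual": 14},
}

def status(target: float, actual: float) -> str:
    return "met" if actual >= target else "below target"

def report(card: dict) -> dict:
    """One status per perspective, keeping areas separate rather than
    collapsing them into a single index."""
    return {area: status(v["target"], v["actual"]) for area, v in card.items()}

print(report(scorecard))
```

Keeping one status per area, rather than summing them, reflects the text's point that a single combined index would hide more than it reveals.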
They must not include aspects of business performance that cannot be measured, and the SMART metrics must be closely aligned with a company's strategic plan. It is worth noting that this approach should not dominate the everyday management of a business, as critical processes might not be fully reflected in a set of quantifiable measures. Thus, it is important to remember that not everything can be measured using SMART metrics. The factors included in a balanced scorecard reflect different operational processes and stakeholder interests. All companies must satisfy owners, customers and employees. Sometimes these three different stakeholder groups can be satisfied by a company performing across the same set of SMART metrics, but often there may be divergent interests leading to conflict. Different factors and variables in the balanced scorecard approach promote different stakeholders' interests. A company then must decide which factors are the most important to include. That is why it is meaningless to combine all factors in one index. Creating a single index would be possible, but it would hide more than it would reveal (Table 13.1). A balanced scorecard can be created for individual employees and managers to measure their individual performance or, for managers, their department's performance. The measurement results can be used to discuss and adjust employees' and managers' tasks and work performance. These alterations should be designed to ensure that the company comes closer to realizing its strategic and business goals. Safari (2016) developed a measurement tool that can quantitatively measure individuals' and teams' performance. The tool is based on identifying and setting target goals which are predefined for a period and an employee. The tool is an index that measures whether the employee, within a designated period, has under- or over-performed. These target goals are expressed in measurable SMART terms.
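Such a target-versus-actual credit index can be sketched as follows. The 180/150 loan-application figures follow the chapter's worked example; the second task and its numbers are invented for illustration.

```python
# Credit-grid sketch: credit = actual minus agreed goal, plus the
# over/under-performance as a percentage of the goal. The loan-application
# figures match the chapter's example; "customer meetings" is invented.
def credit(actual: int, goal: int) -> tuple[int, float]:
    """Return (credit points, over/under-performance in %)."""
    return actual - goal, round((actual - goal) / goal * 100, 1)

tasks = {"loan applications": (180, 150),
         "customer meetings": (95, 100)}   # (actual, goal)

total = 0
for name, (actual, goal) in tasks.items():
    points, pct = credit(actual, goal)
    total += points
    print(f"{name}: credit {points:+d} ({pct:+.1f}%)")
print("total credit points:", total)
```

Summing credits across tasks mirrors how the chapter describes building an employee's total credit points for a period, which might then feed a bonus calculation.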
Table 13.2 is an example of how this type of approach, known as a credit grid, is created for an employee involved in personal customer handling in a retail bank. The credits are measured as the number of work tasks fulfilled over a period. For example, if an employee, within a designated period, has agreed 180 loan applications and the agreed goal was 150, then there is a credit of +30 and an over-performance of 20%. The credit points of all work tasks are then summarized, creating the employee's total credit points for a designated period. This calculation might be used to calculate bonuses recognizing over-performance. Such individual performance measures are widely applied by service companies to provide SMART measures to support salary systems. They can motivate employees and managers to enhance performance, but within the parameters set by the company's strategy. Such systems have, however, been criticized by employees and trade unions for being very Tayloristic (cf. Frederick Taylor's famous industrial employee management system) and for being related to enhanced employee stress levels, illness and attrition. They might also be considered an approach in which employees are very closely monitored. The key issue is overall performance, including the relationship between service delivery costs and profitability. There is another critical issue to consider: the provision of services that differentiate one company from another and perhaps one service worker from another. There are many aspects of service employment and service co-creation that are perhaps impossible to measure (see Chaps. 8 and 9). These more intangible aspects must not be overlooked, as they perhaps underpin the performance measured by any SMART metrics.

Measuring Quality, Marketing Performance and Customer Satisfaction

Customer satisfaction and sales performance are connected to the concept of service quality. Consequently, their measurement is often conceptualized as measures of service quality.
Service quality and customer satisfaction are central to marketing practice and theory. Sustainable competitive advantage within service businesses is founded upon delivering high-quality services creating satisfied customers. Customer satisfaction is related to customer retention and loyalty and underpins profitability, market share and the relationship between investment and return. Service quality is also related to cost and profit (see Chap. 8 and the quality of service experiences). Improving service quality may increase service delivery costs, and the key issue is perhaps the relationship between the additional costs and any increase in revenue. Service quality can also be considered as part of an analysis of competitors' processes and products. A service firm should perhaps not provide a quality that is better than competitors', at least not if this difference is not reflected in higher prices. Such considerations must be reflected in measures of service quality and customer satisfaction. There are many costs related to the enhancement of service quality. These are both tangible, or easy-to-measure, costs and more intangible costs and related impacts. In Table 13.3 some of these potential additional costs are considered. In this section we explore some of the most common measurement instruments of service quality and customer satisfaction.

Dimensions in Measuring Service Quality and Customer Satisfaction

Before measurement, service quality must be defined operationally or, in other words, a framework must be developed enabling a company to measure quality quantitatively. The first decision that must be made concerns what a company wants to measure. There are choices here. A company can measure the quality and satisfaction of a single service, the service quality and satisfaction of a customer, or customer attitudes towards the service firm, or all of these.
Measurement instruments have been developed that measure different aspects of quality and customer satisfaction (Table 13.4). A decision needs to be made regarding the most appropriate individual to assess service quality. Sometimes the customer cannot be easily identified, for example, in the provision of business services. Business service providers, including the provision of outsourced work canteens, office cleaning, call centres or personnel administration (including pay and pensions), have many possible different stakeholders (Table 13.5): • The customer: a firm commissions, contracts and pays for a service that it has decided to outsource. The general manager is responsible for the firm's business model, including balancing quality over cost. • The purchaser: often the service contract is negotiated and administered by a departmental manager, or a specialist procurement function within a firm. The decision is one about balancing risk versus cost. • The user: customers, in other words the firm's employees, use the service, for example, the canteen or personnel administration. They want the best service, for example, quality food at an affordable price. • The customer of the company that has outsourced the function/task/activity: customers of the firm that has outsourced the service also experience the outputs of these business services, for example, the cleanliness of an office or the service quality of a call centre. The selection of which group to include in any assessment of service quality is a difficult decision; all stakeholders' satisfaction and quality assessments are important for customer firms and for providers of outsourced business services. A debate needs to occur regarding the definition of each dimension included in any measurement of service quality. A service firm will not be able to assess all quality dimensions and should not attempt to do so as the rewards will not reflect the costs, including time, of this type of appraisal. A service firm should, therefore, only attempt to measure and enhance quality up to an optimal point, defined as the point at which income from improved service quality exceeds the costs of improving quality (Chap. 8). This can be calculated when a firm knows the additional expenditure required to enhance quality and is then able to calculate the impacts on profitability. Any attempt to enhance quality must balance the importance of each measure of service quality against performance (Fig. 13.3). Mapping Service Quality and Customer Satisfaction Many methods and tools for measuring service quality and customer satisfaction exist (see also Forsyth 1999; Bourne and Neely 2003; Franceschini et al. 2009). It is important to appreciate the interrelationships between service quality and customer satisfaction. Both customer satisfaction and service quality are multidimensional constructs requiring similar measures (Sureshchandar et al. 2002). In the following we explore the most important and widely used measurement instruments. Total Service Quality Indexes Different indexes that combine service quality dimensions can be constructed. In Table 13.6 an example of such an index of service quality is provided which includes the importance of each indicator and performance for a provider of broadband services. The index includes economic performance and employee satisfaction (which is important for customers' experience of the service encounter (cf. Chap. 8)). The Service Journey This is an instrument to systematically assess how a typical customer experiences service delivery, with all the touch points that they might have with a service firm. This experience is seen as a journey during which the customer becomes aware of the problem which the service will solve, to the solution of this problem or, in the worst-case scenario, there is perhaps no solution.
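Returning to the total service quality index described above: a common way to combine an importance weight and a performance score per indicator into one number is an importance-weighted mean of performance. This is a minimal sketch of that idea; the indicator names and figures are illustrative assumptions, not the data of Table 13.6.

```python
def quality_index(indicators):
    """Importance-weighted mean performance across all indicators."""
    weighted = sum(imp * perf for imp, perf in indicators.values())
    total_importance = sum(imp for imp, _ in indicators.values())
    return weighted / total_importance

# indicator: (importance weight, performance score on a 1-7 scale)
# Values are invented for a hypothetical broadband provider.
broadband = {
    "connection stability":  (5, 6.2),
    "customer support":      (4, 4.8),
    "price transparency":    (3, 5.5),
    "employee satisfaction": (2, 5.9),
}

print(round(quality_index(broadband), 2))
```

One design choice here is that a highly weighted, poorly performing indicator drags the index down most, which mirrors the prioritization logic of the importance/performance matrix (Fig. 13.3).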
The customer's journey can be drawn as a model using a blueprint technique (Bitner et al. 2008). This tool can be applied by a service firm to understand and re-design the service delivery system. It can be used as a framework to formulate questions in customer surveys and to collect experiences from employees about each customer touch point. Customers can also be asked to experience the journey in a laboratory-like situation. Figure 13.4 is an illustrative example of such a service journey model based on a railway company. At each touch point the customer can be satisfied, which brings them closer to becoming a satisfied loyal customer, or they can be dissatisfied, encouraging them to find an alternative mode of transportation. Surveys Service firms often use standardized surveys to ask customers about service quality. We have all experienced several such surveys. They typically ask actual customers to comment on a series of concrete and detailed points, often just after the service has been delivered (Fig. 13.5). These questionnaires can be varied in many ways. They are either given to customers at the end of the service delivery process or sent to customers after the service event has concluded, either by e-mail or via smartphones. A problem is the reliability of these customer surveys. People receive many service quality questionnaires and there are low response rates. There is a risk that only the most dissatisfied customers respond, and perhaps the over-satisfied, extremely loyal, customers are also more likely to respond. (Fig. 13.3: importance versus quality performance; the higher the importance and the lower the performance, the more a dimension should be prioritized in quality work.)
Web-Based Satisfaction Scoring Platforms There exist many Web-based review platforms where people can evaluate service quality and service experiences, both quantitatively via standard questions based on categories and qualitatively by providing comments (see also Chaps. 3, 4 and 11 for digitalisation tendencies in services). These may be provided on the service provider's website, or via a third-party platform. Often service firms are given overall scores enabling potential clients to compare one provider with another. These third-party review platforms include TripAdvisor for hotels and restaurants and Trustpilot for customer services. These platforms are business models in their own right generating revenue from linking customers with service providers (see Chap. 3). It is important that the revenue-based nature of these operations is considered in any assessment of their ability to provide an independent assessment of the quality of a service firm's products. Customers provide feedback on service providers for free and the review platforms play an increasingly important role in influencing consumer decision-making. Customers can easily access and explore these online reviews to compare the levels of service quality provided by different firms. These review platforms do not provide a completely objective and balanced review of service quality. There are problems with response bias as well as the possibilities of false or fake reviews. False reviews might be provided by employees of the service provider or by rival companies. Often service firms are able to comment on individual reviews and these responses may influence customer assessments. Focus Groups A focus group is a qualitative method intended to obtain information about customers' impressions of a service firm's service quality. A service firm can invite a small number of customers, preferably about eight to ten, to discuss service quality and their experiences of a firm.
The customers are, for example, asked to discuss some good and some bad service experiences with this service firm and perhaps to compare with other service experiences that were extremely good or extremely bad. A focus group should develop into a conversation amongst customers and during this discussion the customers influence and often determine the structure of the conversation. The focus group organizer is only able to guide or shape the discussion rather than completely control it. Focus groups are recorded and transcribed and then analysed using qualitative approaches based on coding and the identification of key themes. The advantage of a focus group discussion compared to a standardized survey is that a service firm can acquire a detailed understanding of quality problems including identifying customers' criteria for good service quality. There is always the possibility that quality issues not identified by the service firm emerge during focus group discussions. Critical Incidents and Service Quality Critical incidents are events or occurrences where a service delivery fails in some way or some problem is avoided. All service firms should identify, monitor and explore critical incidents as they occur during the service co-creation process. Employees involved in an event must provide a detailed account of the incident. A firm must explore the origins of any potential service quality failure and learn to avoid them. A critical incident could, for example, involve an accountancy firm failing to identify an account irregularity, or failing to meet an expected deadline. Some critical incidents are so critical that they threaten the continued existence of the service business. Critical incidents must be considered as the outcome of a set of processes which must be analysed to identify what, where and how the problem emerged. This involves a process of tracing back the processes and the ways in which they worked to produce a critical incident in service delivery. 
The key point of the critical incident approach is to identify what occurred to ensure that any possible recurrence can be avoided. This might involve alterations to employee training combined with adaptations to everyday service delivery routines, or everyday practices. Benchmarking Often a service firm, or a department within a service firm (e.g. a bank branch), is measured and benchmarked against other similar service firms or departments. Benchmarking against other service firms, particularly competitors, is intended to challenge a service firm's existing services to enhance service quality. A key issue is to highlight, develop and refine differentiation in the marketplace by product and process. Benchmarking can be used as the basis for quality improvements and to optimize service quality. The service quality of a service firm should meet that of comparable competitors but should not be significantly higher. This is to highlight the importance of product and process segmentation by price and quality. The balance between price and quality determines profitability. Benchmarking the internal departments of a service business is intended to inspire, encourage and sometimes force each department to enhance their service quality and customer satisfaction rates. This should be undertaken without increasing costs. Productivity within each department could also increase through benchmarking. SERVQUAL A much-used benchmarking measure of relative service quality, or the service quality of a service firm in relation to its competitors, is the SERVQUAL scale. This was developed by Parasuraman, Zeithaml and Berry (1988). This scale measures service gaps, for example, where a service firm provides a lower or higher perceived service quality than its competitors. The SERVQUAL scale is based on a survey requesting customers to provide details about their experiences of a service firm.
Customers are asked to provide feedback on a series of service quality and customer satisfaction variables. The SERVQUAL method is divided into two measures. First, an expectation section which measures how important each variable is for the customer. Second, a perception section which measures how the customer assesses this service firm's performance against each variable. This tool is constructed to measure service quality gaps, or how the perception of the actual quality lives up to the interviewees' expectations. The actual service quality may be lower, similar or higher than customer expectations. A third measure might be added, namely how customers assess competitors' services compared to the service firm under examination. This could be achieved by asking interviewees to indicate their perception of competitors' service quality (the B-score column in Table 13.7). The SERVQUAL tool, as it was developed by Parasuraman, Zeithaml and Berry (1988), includes five dimensions (Table 13.7). On each dimension, several variables are measured, and the interviewee is asked to assess each variable on a 7-point scale where 1 is 'strongly disagree' and 7 is 'strongly agree'. The interviewee should answer questions regarding their expectation of the service in general and then explore their actual perceptions. Each answer is scored from 1 to 7. A service firm can then calculate the gap for each service variable in terms of the relationship between expected and perceived outcomes. This highlights whether the perceived quality variable was given a higher or lower score compared to customer expectations. A total quality score gap can then be calculated. It is not only the size of the gaps between expectations and perceptions that is important. The results must also be seen in relation to the size of the expectation score.
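The per-variable and total gap calculations described above amount to subtracting the expectation score from the perception score on the 7-point scale; negative gaps mean the service fell short of expectations. This is a hedged sketch of that arithmetic only: the variable names below echo SERVQUAL's dimension labels, but the scores are invented for illustration and this is not the full Parasuraman, Zeithaml and Berry instrument.

```python
def servqual_gaps(scores):
    """Gap per variable = perception minus expectation; plus the total gap score."""
    gaps = {var: p - e for var, (e, p) in scores.items()}
    total = sum(gaps.values())
    return gaps, total

# variable: (expectation, perception), each on the 1-7 scale (illustrative values)
scores = {
    "reliability":    (7, 5),
    "responsiveness": (6, 6),
    "assurance":      (5, 6),
}

gaps, total = servqual_gaps(scores)
print(gaps)                 # reliability falls short, assurance exceeds expectations
print("total gap:", total)
```

A competitor comparison (the B-score column of Table 13.7) could be added as a third value per variable and differenced in the same way.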
In principle, interviewees are free to expect that a service firm provides the highest service quality on all variables; however, they are supposed to provide realistic expectations. Another problem with SERVQUAL is that price is absent from the analysis of the appraisal of service quality. A service firm, applying the SERVQUAL scale to improve service quality, may fail to retain or attract customers because of price differences between competitor products. There is a risk that, despite improved service quality with perhaps the same pricing, customers decide to purchase from another firm offering similar services, but at a lower price. An ideal scale should also ask customers about pricing and the relationship between expectations and perceptions. Innovation Capability Measures Service innovation is important for the development of service firms (Chap. 7), and also for manufacturing companies (Chap. 12). It is more difficult to measure service firms' innovation capabilities compared to measuring manufacturing firms' technological innovation capabilities. Service innovation often does not involve research and development (R&D) activities and technological investments. R&D innovation usually involves significant capital and revenue investment and a formal innovation process which is relatively easy to measure. Service innovation capability is a complex phenomenon that amongst other variables includes employees' entrepreneurship, management systems, customer interaction, co-creation of innovation with customers, service processes and a series of other factors that are difficult to measure. An ideal measure of service innovation capability does not exist. Sundbo (2017) has developed a quantitative measurement instrument that identifies inputs to and outcomes of innovation processes. This instrument was developed based on a literature review and was tested using two cases.
The conclusion was that it was impossible to construct a single measure, an index, of service innovation capabilities. The challenge is that so many different factors influence a firm's service innovation capabilities. The result of the test was that some variables can be measured in companies and, combined, the measures can provide an estimation of a service firm's innovative capability. The factors included in this approach are divided into input factors (factors that determine innovation) and outcome factors (measures of the result of the innovation effort). The factors, and suggestions for quantitative measures, include the following: Input Factors These are investments in either money or time, and time can be converted into an indicator of cost or a measure of financial value. • Working hours within the firm (at all levels and including all activities, such as interacting with customers in co-creation activities, converted into a measure of financial value). • External advice and knowledge procurement (both of which can be expensive, e.g. consultancy or paid research). • Expenditure on technology and other materials. • Public support (income, e.g. public grants or free advice provided). • Network benefits (e.g. competitors who are also collaborators, representatives from the value chain and others from networks; inputs from network activities are normally free; however, the firm's time used in network activities should be deducted). Outcome Factors Four types of outcome factors were identified as important: 1. Income and growth (a) Turnover (more sales, increased turnover). relationships to be formed with new external actors, then these can be a future innovation resource). The conclusion is that not many instruments have been developed to measure service companies' innovation capabilities and it is difficult to construct a simple measure. Nevertheless, a multi-dimensional measurement framework can be developed. Wrapping Up There are two types of services.
On the one hand, there are services that are customized and co-created between a service provider and consumer. These are labour-intensive services that are highly customized service experiences. On the other hand, there are standardized services that are increasingly extremely capital intensive rather than labour intensive. There is an on-going process by which services are becoming increasingly digitized and automated. Measuring the quality of a co-created, highly customized service experience is difficult. The primary measure is based on the scale of repeat business. It is important that the measurement of service quality and customer experiences is not an end in itself. The aim of all investments in the measurement of service businesses must be a focus on service quality enhancement. It must be accepted that there are some aspects of a service experience that are more about emotions, feelings and perceptions, and these are difficult to measure and compare. There is also the added difficulty that a service experience is a very immediate experience and perceptions of the experience may fade with time. An important question is when and how to measure service quality and customer satisfaction. Measurement of the service outcome will not enable a service provider to modify the service process to enhance quality and customer satisfaction. Too much measurement during the service delivery process may interfere with and perhaps undermine service quality. A further complication is the relationship between service quality and customer satisfaction. All service firms need to develop an approach that balances the interrelationships between price, quality, value and expectation. This is critical as it is this relationship that plays an important role in the long-term viability of service businesses and their continued ability to compete. Learning Outcomes • Many tools to measure service company performance, service quality and customer satisfaction exist.
• Before measuring, service quality must be defined operationally or, in other words, a framework must be developed enabling a company to measure quality quantitatively. • Customer satisfaction and service quality are complex multi-dimensional constructs. • SERVQUAL is one of the most used instruments. There might be difficulties in using SERVQUAL because it does not measure the effect of price. • Mapping the service journey is another widely used instrument. • Queue handling is one of the areas where most measurement instruments have been developed. • Very few tools have been developed to measure service firms' innovation capabilities.
Transcriptional regulation of flavonol biosynthesis in plants Abstract Flavonols are a class of flavonoids that play a crucial role in regulating plant growth and promoting stress resistance. They are also important dietary components in horticultural crops due to their benefits for human health. In past decades, research on the transcriptional regulation of flavonol biosynthesis in plants has increased rapidly. This review summarizes recent progress in flavonol-specific transcriptional regulation in plants, encompassing characterization of different categories of transcription factors (TFs) and microRNAs as well as elucidation of different transcriptional mechanisms, including direct and cascade transcriptional regulation. Direct transcriptional regulation involves TFs, such as MYB, AP2/ERF, and WRKY, which can directly target the key flavonol synthase gene or other early genes in flavonoid biosynthesis. In addition, different regulation modules in cascade transcriptional regulation involve microRNAs targeting TFs, regulation between activators, interaction between activators and repressors, and degradation of activators or repressors induced by UV-B light or plant hormones. Such sophisticated regulation of the flavonol biosynthetic pathway in response to UV-B radiation or hormones may allow plants to fine-tune flavonol homeostasis, thereby balancing plant growth and stress responses in a timely manner. Based on orchestrated regulation, molecular design strategies will be applied to breed horticultural crops with excellent health-promoting effects and high resistance. Introduction Flavonols belong to one class of flavonoids with the C6-C3-C6 basic structure and are characterized by a carbon-carbon double bond (C2 and C3 positions), a hydroxyl group (C3 position), and a carbonyl group (C4 position) in the heterocyclic C-ring (Fig.
1a). Due to the varying substitutions of hydroxylation and methoxylation on the A and B rings, flavonol aglycones can be classified into more than 10 different types [1]. Among them, kaempferol, quercetin, and myricetin are the most prevalent flavonol aglycones in the plant kingdom [1,2] (Fig. 1a). Flavonols are commonly found in glycosylated forms and are ubiquitously distributed in various plant tissues [1,2]. They play a vital role in plant development and stress resistance, including regulating auxin transport, affecting root and pollen development, influencing pollinator preference and reproductive isolation, UV-B protection, and enhancing disease resistance [3-8]. To date, more than 26 000 articles can be retrieved from the PubMed database (https://pubmed.ncbi.nlm.nih.gov/) by using the keyword 'quercetin', one class of flavonol aglycone. In the past few decades, significant progress has been made in understanding the transcription factors (TFs) and microRNAs (miRNAs) that regulate flavonol biosynthesis in plants. However, to date there has been no systematic review of the transcriptional regulation of flavonol biosynthesis in plants. This review focuses on recent advances in knowledge and understanding of the transcriptional regulation of flavonol biosynthesis in plants. Progress in the characterization of TFs and miRNAs regulating flavonol biosynthesis will be summarized, with particular emphasis on direct transcriptional regulation and cascade transcriptional regulation of flavonol biosynthesis.
Direct transcriptional regulation involves TFs, such as MYB, AP2/ERF, and WRKY, which directly target the key FLS gene and other early genes in the flavonoid biosynthesis pathway. In addition, different regulation modules in cascade transcriptional regulation will be summarized and discussed; these consist of miRNAs targeting TFs, regulation between activators, interaction between activators and repressors, and degradation of activators or repressors induced by environmental signals such as UV-B or plant hormones. This review may provide valuable insights for the production of horticultural crops with a high content of flavonols that are beneficial for human health. Flavonol aglycones finally undergo modifications by uridine diphosphate-glycosyltransferases (UGTs), O-methyltransferases (OMTs), and acyltransferases (ATs), leading to the formation of various and stable flavonol derivatives [34-36]. In addition, dihydroflavonols also can be acted on by dihydroflavonol 4-reductase (DFR) to produce leucoanthocyanidins and to direct flavonoid metabolism to the synthesis of anthocyanins and proanthocyanidins. The competition between FLS and DFR regulates metabolic flux to different branches of the flavonoid biosynthetic pathway [37].
R2R3-MYB transcription factors The MYB (V-myb avian myeloblastosis viral oncogene homolog) family is widely present in all eukaryotes and represents one of the largest TF families in plants. The N-terminus of MYB family proteins contains a highly conserved DNA-binding domain which typically consists of one to four imperfect repeats [38]. Each repeat of ∼50-53 amino acids forms a helix-turn-helix structure, allowing them to bind to the major groove of the DNA double helix [39]. MYB proteins are classified into four types: MYB-related (R1/2-MYB, R3-MYB), R2R3-MYB, 3R-MYB (R1R2R3-MYB), and 4R-MYB [38,40]. R2R3-MYB TFs are the largest class in plants and are composed of two DNA-binding repeats. Based on phylogenetic relationships and the presence of conserved motifs in the C-terminal region, R2R3-MYB proteins are classified into different subgroups, including subgroup 4 (SG4), SG7, SG19, etc. [38,40,41]. In past decades there has been an increasing number of studies focusing on R2R3-MYB TFs involved in regulating flavonol biosynthesis (Fig.
1b and Table 2). SG7 R2R3-MYBs. The SG7 R2R3-MYBs are flavonol-specific regulators with the characteristic SG7 (GRTxRSxMK) motif and have been characterized in numerous plants. In Arabidopsis thaliana, AtMYB12, along with its homologs AtMYB11 and AtMYB111, belongs to SG7 of the R2R3-MYB family. These proteins controlled flavonol biosynthesis by independently activating expression of AtCHS, AtCHI, AtF3H, and AtFLS1 [42,43]. MdMYB22 from apple and CsMYB12 from tea are the homologs of AtMYB12 and have been shown to act as activators via binding to the FLS promoter and activating its transcription in vivo, as confirmed through yeast one-hybrid and luciferase assays [44,45]. In Freesia hybrida, chromatin immunoprecipitation-quantitative polymerase chain reaction (ChIP-qPCR) and β-glucuronidase assays indicated that FhMYB1/2/3/4 bind to MYBCORE and AC-rich elements in promoters of FhCHI2 and FhFLS1 to activate their transcription [46]. Furthermore, MrMYB12 from Chinese bayberry (Morella rubra) was found to bind to the MYBCORE element in the MrFLS2 promoter and activate its expression, as demonstrated by EMSA and luciferase assays [47]. These results demonstrate that SG7 R2R3-MYBs can directly target the key gene FLS and other early genes in the flavonoid biosynthetic pathway.
In addition, secondary metabolite profiling analyzed by LC-MS showed a selective reduction of glycosylated flavonol derivatives in single, double, or triple mutants of Arabidopsis AtMYB11, AtMYB12, and AtMYB111 [43]. Meanwhile, the accumulation of other phenolic compounds in the mutant seedlings remained essentially unchanged [43]. In tomato, according to an LC-MS analysis, levels of 13 glycosylated flavonol derivatives, quercetin, naringenin, and naringenin chalcone were reduced in pf mutants with truncation of SlMYB12 [48]. In Petunia axillaris, a myb-fl CRISPR mutant strongly reduced flavonol levels and expression of FLS and HT1 (F3′H) [49]. In addition, overexpression of one SG7 R2R3-MYB gene, such as AtMYB12 [37], Gentiana triflora GtMYBP3/4 [50], Epimedium sagittatum EsMYBF1 [51], peach (Prunus persica) PpMYB15/PpMYBF1 [52], or Chinese bayberry MrMYB12 [53], resulted in flavonol accumulation in tobacco (Nicotiana tabacum) flowers. This was caused by upregulated expression of NtFLS and other early genes in the flavonoid biosynthesis pathway. Transgenic tobacco flowers changed from red to pale or pure white and showed a reduction in anthocyanin content, even though expression of NtDFR and other anthocyanin biosynthetic genes was unaffected [37,50,52,53]. Such apparent changes in flower color suggest that FLS may effectively compete with DFR to redirect the flux towards flavonol biosynthesis and away from anthocyanin biosynthesis. These findings indicate that SG7 R2R3-MYB TFs may be flavonol-specific activators in plants.
High expression of SG7 R2R3-MYB genes may be one of the reasons for high accumulation of flavonols in horticultural crops such as tea, apple, and Chinese bayberry (Table 1). In tea, high expression of CsMYB12 in the first leaf or UV-B-irradiated leaf resulted in a high accumulation of quercetin glycosides and kaempferol glycosides [45]. In apple, expression of MdMYB22 was positively correlated with the flavonol content in fruit of F1 hybrid populations of a cross between Malus sieversii f. niedzwetzkyana and M. domestica [44]. In Chinese bayberry, the transcript level of MrMYB12 was induced by UV-B irradiation and was correlated with high accumulation of quercetin derivatives [47]. SG19 R2R3-MYBs. In addition to SG7 R2R3-MYBs, SG19 R2R3-MYB TFs were also found to be involved in the positive regulation of flavonol biosynthesis by activating transcription of the FLS gene. In F. hybrida, an SG19 R2R3-MYB protein, FhMYB21L2, could directly target and regulate one FLS member, FhFLS2, thus participating in flavonol accumulation in later developmental stages of the flower [46]. On the other hand, four SG7 R2R3-MYB members, namely FhMYBF1/2/3/4, were found to regulate expression of FhFLS1 and flavonol biosynthesis in early developmental stages of the flower [46]. In Arabidopsis, SG19 R2R3-MYB members, including AtMYB21/24/57, have also been shown to be involved in regulating flavonol biosynthesis by controlling expression of AtFLS1 [46,71]. An SG19 R2R3-MYB TF, MdMYB8, was also identified in Malus crab apple and was responsible for quercetin 7-O-glucoside accumulation by regulating expression of MdCHS and MdFLS [72].
Other R2R3-MYBs. SG4 R2R3-MYB members contain an EAR motif (LNL[D/E]L) and have been identified as negative regulators of flavonols and other phenolic compounds. In Arabidopsis, the atmyb4 atmyb7 mutant showed an increase in the accumulation of flavonols and anthocyanins, caused by induced expression of general phenylpropanoid pathway genes, such as C4H, 4CL1, etc. [73]. In Tartary buckwheat, SG4 R2R3-MYB members FtMYB14/15/16 could directly target the FtPAL gene and inhibit the activity of its promoter, thereby negatively regulating rutin biosynthesis [74,75]. FtMYB11 and FtMYB13 are non-typical R2R3-MYB repressors that do not belong to SG4 R2R3-MYB, while their function is similar to that of FtMYB14/15/16 [74,76]. These SG4 R2R3-MYBs target and negatively regulate the general phenylpropanoid pathway genes. In addition, AtMYB13 is an SG2 R2R3-MYB and could activate promoters of AtCHS, AtCHI, and AtFLS1, thereby enhancing flavonol accumulation in Arabidopsis seedlings [7]. In Chinese bayberry, two SG44 R2R3-MYB proteins, MrMYB5 and MrMYB5L, could activate transcription of MrF3′5′H and MrFLS1 by binding to their promoters, based on EMSA and luciferase assays [47]. MrMYB5 or MrMYB5L also interacted with MrbHLH2 to synergistically regulate expression of MrF3′5′H and MrFLS1, and these MYB-bHLH protein complexes play a crucial role in regulating myricetin biosynthesis [47]. This specific transcriptional mechanism of flavonol biosynthesis might be the reason for the high accumulation of myricetin derivatives in fruit and leaf of Chinese bayberry (Table 1). WRKY transcription factors The WRKY proteins have been shown to be regulators of flavonol biosynthesis, including activators and repressors (Fig.
1b and Table 2). In apple, overexpression of MdWRKY11 promoted flavonoid accumulation and upregulated expression of MdF3H, MdFLS, MdDFR, MdANS, and MdUFGT in apple calli [77]. In tobacco, chromatin immunoprecipitation assays and overexpression experiments demonstrated that NtWRKY11b could target and activate promoters of NtMYB12, NtFLS, NtGT5, and NtUFGT, thereby inducing flavonol accumulation [78]. Overexpression of VqWRKY31 from Vitis quinquangularis in grape increased accumulation of flavonoids and stilbenes and promoted expression of VvCHS, VvCHI, VvDFR, VvFLS, and VvSTS (stilbene synthase), which enhanced powdery mildew resistance [79]. Moreover, VvWRKY70 was identified as a transcriptional repressor of flavonol biosynthesis in grape by inhibiting the transcriptional activity of VvFLS4 and VvCHS2/3 [80]. Overexpression of VvWRKY70 caused a reduction of flavonol contents in transgenic grape calli [80]. These WRKY TFs can directly regulate expression of FLS genes to participate in regulation of flavonol biosynthesis.

In addition, there were other WRKY activators involved in regulation of flavonol accumulation by activating early genes in flavonoid biosynthesis (Fig.
1b and Table 2). For example, RNAi and overexpression experiments demonstrated that Arabidopsis AtWRKY23 functioned as a positive regulator of flavonol biosynthesis by activating AtF3′H expression and was also required for proper root growth and development [5]. In cotton (Gossypium hirsutum), ChIP-qPCR, yeast two-hybrid, bimolecular fluorescence complementation, and firefly luciferase complementation imaging assays revealed that GhWRKY41 could form a homodimer to directly activate its own expression and expression of GhC4H and Gh4CL, which promoted accumulation of flavonoids and lignin to improve cotton resistance to Verticillium dahliae [81]. Overexpression of VqWRKY56 from Vitis quinquangularis increased flavonoid content by directly targeting VvCHS3 and other flavonoid biosynthetic genes in the transgenic grape leaf, which reduced susceptibility to powdery mildew [82].

bZIP transcription factors

A well-studied example is Arabidopsis ELONGATED HYPOCOTYL 5 (HY5), a bZIP TF that activates expression of AtCHS, AtFLS, and other genes to regulate flavonoid accumulation during photomorphogenesis in seedlings [83]. In recent years, several newly discovered bZIP activators have also been found to be involved in positive regulation of flavonol biosynthesis (Fig. 1b and Table 2). In Populus tremula × P.
alba, overexpression and suppression experiments indicated that PtabZIP1L could positively regulate flavonoid accumulation by affecting expression of PtaFLS2/4, which mediated lateral root development and drought resistance [84]. In grape, CRISPR/Cas9-mediated mutagenesis of VvbZIP36 promoted anthocyanin accumulation but inhibited flavonol biosynthesis in the leaf, which was associated with upregulation of anthocyanin biosynthetic genes and downregulation of VvFLS2/4 and two VvFLR (flavonol-3-O-rhamnosyltransferase) genes, respectively [85]. Overexpression of grape VvibZIP22 in tobacco promoted accumulation of flavonols and anthocyanins and induced expression of NtPAL, NtCHS, NtDFR, and NtANS [86]. In rice, OsbZIP48 was identified as a positive regulator of flavonoid biosynthesis through a metabolite-based genome-wide association study [87]. Yeast one-hybrid and luciferase assays further demonstrated that OsbZIP48 could directly bind to promoters of Os4CL5 and OsCHS and activate their transcription [87]. Interestingly, pear PpbZIP44 could positively regulate expression of PpF3H and PpADT, which encodes arogenate dehydratase, an enzyme of primary metabolism that is a key determinant of carbon flow into the phenylpropanoid pathway [88]. Therefore, transient overexpression of PpbZIP44 in pear fruit promoted accumulation of phenylalanine and flavonoids [88].

AP2/ERF transcription factors

In recent years, several AP2/ERF TFs have been reported to be involved in positive or negative regulation of flavonol biosynthesis (Fig.
1b and Table 2), but it is unknown whether these ERFs are regulated by ethylene signals. In tomato, overexpression of SlERF.G3-like activated expression of SlFLS and other early genes in the flavonoid biosynthesis pathway, such as SlCHS1/2, SlCHI, SlF3H, and SlF3′H, which resulted in induction of flavonol content in fruit [89]. SlERF.G3-like appeared to act independently of the flavonol-specific activator SlMYB12 [89]. Overexpressing MdAP2-34 in apple callus induced flavonol accumulation by targeting and activating the MdF3′H promoter [90]. Overexpression of citrus CsERF003 in tomato led to accumulation of flavonol glycosides and naringenin chalcone by activating expression of SlPAL, SlC4H, Sl4CL, SlCHS, SlCHI, SlF3′H, and SlFLS [91]. In addition, an ERF transcription repressor, FtERF-EAR3, was identified to inhibit FtF3H expression and flavonol biosynthesis by binding to the GCC-box in the FtF3H promoter in Tartary buckwheat [92].

Other transcription factors

Members of other TF families have also been reported to be involved in regulation of flavonol biosynthesis and can be divided into two classes (Fig.
1b and Table 2). First, TFs such as MdSCL8, AaYABBY5, and PtHSFA5a directly target FLS and regulate its expression. In apple, 5-aminolevulinic acid (ALA) inhibited expression of MdSCL8, which alleviated its transcriptional repression of MdFLS1 and promoted flavonol accumulation [93]. In Artemisia annua, overexpression of AaYABBY5 upregulated expression of AaPAL, AaCHS, AaCHI, AaFLS, AaFSII, AaLDOX, and AaUFGT, resulting in a significant increase in total flavonoid content [94]. In Populus tomentosa, overexpression of PtHSFA5a upregulated expression of PtCHS1, PtF3′H2, and PtFLS1/2, leading to a significant increase in flavonol content in the transgenic poplar [95]. EMSA, ChIP-qPCR, and luciferase assays demonstrated that PtHSFA5a can directly bind to the promoters of PtCHS1 and PtFLS1 to enhance their transcription [95]. Second, TFs act as regulators by targeting early genes in flavonoid biosynthesis. In Arabidopsis, the REPLUMLESS (RPL) TF was necessary for bacterial resistance and could repress flavonol accumulation by inhibiting expression of the CHI gene, which regulated auxin transport to promote plant growth [96]. In tobacco, overexpression and suppression experiments showed that an HD-ZIP IV TF, NtHDG2, could regulate flavonol biosynthesis by targeting and activating promoters of NtF3′H and NtF3GT [97]. In sweet potato (Ipomoea batatas), EMSA and ChIP-qPCR indicated that IbBBX29 could bind to specific T/G-boxes in the promoters of IbCHS1, IbCHI1, and IbF3′H to activate their expression [98]. Overexpression of IbBBX29 increased contents of flavonols and other flavonoids by upregulating expression of flavonoid biosynthetic genes in storage roots of sweet potato [98].
Cascade transcriptional regulation of flavonol biosynthesis

Post-transcriptional regulation

miRNAs are a class of non-coding RNAs of 20-24 nt in length that regulate post-transcriptional processes by recognizing target genes through base complementarity, leading to mRNA cleavage or translational inhibition [99]. With the application and deep exploration of genomics, several miRNAs involved in flavonol biosynthesis have been identified in plants (Fig. 1b and Table 2). The miR858-MYB regulatory modules have been reported to regulate flavonol biosynthesis in plants. In Arabidopsis, transgenic experiments indicated that AtmiR858a directly targeted and cleaved SG7 R2R3-MYB genes, including AtMYB11/12/111, resulting in the negative regulation of flavonol biosynthesis [100]. Subsequent research revealed that primary AtMIR858a encoded a small peptide, miPEP858a, which was involved in transcriptional regulation of AtmiR858a [101]. This peptide could negatively regulate flavonol biosynthesis by inhibiting expression of AtMYB12 through AtmiR858a. In potato, overexpression of StmiR858 inhibited expression of StMYB12A/C genes, leading to a reduction in flavonol accumulation [15]. As research progresses, members of other miRNA families and their different target genes continue to be discovered. In cotton, overexpression of GhSPL10, a target of GhmiR157a, resulted in a significant increase in flavonol accumulation, which promoted initial cellular dedifferentiation and callus proliferation [102]. In Arabidopsis, overexpression of apple MdmiR172 targeting MdAP2_1a reduced levels of anthocyanins and flavonols as well as expression of AtFLS1 and other flavonoid biosynthetic genes in plantlets, which may be caused by regulation of the MdAP2_1a-MdMYB10 module [103]. Current research on miRNAs regulating flavonol biosynthesis is quite limited, warranting further investigation.
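The base-complementarity recognition described above can be illustrated with a short sketch. This is a hypothetical toy (the sequences are invented, and only perfect complementarity of the seed region is checked; real miRNA target recognition also involves mismatches, G:U wobble pairs, and site context):

```python
# Toy illustration of miRNA seed matching: a canonical target site in an
# mRNA is the reverse complement of the miRNA seed (nucleotides 2-8).

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Reverse complement of an RNA string (5'->3' in, 5'->3' out)."""
    return "".join(COMPLEMENT[nt] for nt in reversed(rna))

def seed_sites(mirna: str, mrna: str) -> list[int]:
    """Return 0-based positions in mrna that match the miRNA seed."""
    seed = mirna[1:8]                 # nucleotides 2-8 of the miRNA
    site = reverse_complement(seed)   # the sequence the mRNA must contain
    return [i for i in range(len(mrna) - len(site) + 1)
            if mrna[i:i + len(site)] == site]

# Invented 22-nt miRNA and a target transcript fragment
mirna = "UGGAGUGUGACAAUGGUGUUUG"
mrna = "AAACACCAUUGUCACACUCCAACC"
print(seed_sites(mirna, mrna))        # -> [13]
```

In the cleavage pathway this match would direct endonucleolytic cutting of the transcript; in the translational-inhibition pathway the bound complex blocks ribosome progression instead.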
UV-B regulation

Solar ultraviolet (UV) light consists of UV-A (320-400 nm) and a portion of UV-B (280-320 nm). Elevated UV-B radiation leads to the generation of numerous free radicals within plants, which induces damage to DNA, RNA, and proteins. Flavonols serve as efficient scavengers of free radicals and UV-B absorbers [4]. Preharvest UV-B radiation was found to increase flavonol accumulation in apple and grape fruits [104,105]. In many studies, UV-B radiation has been applied as a postharvest treatment to improve flavonol content in vegetables and fruits, such as apple [106], asparagus [107], broccoli [108], Chinese bayberry [47], cucumber [109], kale (Brassica oleracea var. sabellica) [110], mango [111], onion [112], peach [35], and tomato [113]. Recently, significant progress has been made in understanding the regulatory mechanisms of plant flavonol biosynthesis in response to the UV-B signal in UVR8-dependent and UVR8-independent ways, involving R2R3-MYB and other non-MYB TFs (Fig. 2 and Table 2).

UVR8-dependent UV-B signaling pathway. The UV-B signal is perceived by the photoreceptor UV RESISTANCE LOCUS 8 (UVR8), which is a plant-specific and highly conserved protein [114,115]. UVR8 was inactive in its dimeric form in the absence of UV-B, and CONSTITUTIVELY PHOTOMORPHOGENIC 1 (COP1) induced ubiquitination and degradation of ELONGATED HYPOCOTYL 5 (HY5) by the 26S proteasome, thus repressing expression of downstream target genes [114,116] (Fig. 2). After UV-B perception, dimeric UVR8 underwent monomerization to form monomeric UVR8, which interacted with COP1 to form a complex that repressed COP1 activity [114,116]. The central TF HY5 was ultimately stabilized and activated transcription of UV-B-responsive genes [116,117].

In Arabidopsis, HY5 could directly target and activate expression of AtMYB12 and AtMYB111 under UV-B radiation, leading to flavonol accumulation in the seedlings [4,118] (Fig.
2). This HY5-SG7 R2R3-MYB regulatory module has also been identified in horticultural crops such as grape [119], apple [104], and tea [45]. MdHY5 and MdMYB22 (a homolog of AtMYB12) from apple could also synergistically regulate transcription of MdCHS and MdFLS, leading to the induction of flavonol accumulation under UV-B radiation [104]. These results indicated that HY5 regulates flavonol biosynthesis in two steps. First, it directly binds to promoters of SG7 R2R3-MYBs and induces their expression. Second, it can interact with SG7 R2R3-MYBs to synergistically regulate expression of FLS and other early genes in flavonoid biosynthesis. In addition to the HY5-SG7 R2R3-MYB module, monomeric AtUVR8 in Arabidopsis could also directly interact with AtMYB13 to form a complex that enhanced the affinity of AtMYB13 for promoters of AtCHS, AtCHI, and AtFLS1 [7] (Fig. 2). This UVR8-MYB13 module further promoted flavonol accumulation and plant resistance to UV-B stress [7].

UVR8-independent UV-B signaling pathway. Apart from the UVR8-dependent UV-B signaling pathway, a UVR8-independent stress response has recently been identified in plants (Fig.
2 and Table 2). Under white light conditions without UV-B radiation, the brassinosteroid signal led to activation of BRI1-EMS-SUPPRESSOR 1 (BES1), a master TF in brassinosteroid signal transduction, in Arabidopsis [120]. This activation of AtBES1 resulted in downregulation of AtMYB11, AtMYB12, and AtMYB111 expression, consequently leading to a decrease in flavonol accumulation [120]. However, when Arabidopsis plants were exposed to UV-B radiation, the UVR8-COP1-HY5 module was activated to initiate UV-B photomorphogenesis, including activation of SG7 R2R3-MYB expression [120]. In addition, UV-B stress could also inhibit expression of AtBES1 in a UVR8-independent manner, which removed inhibition of SG7 R2R3-MYB expression and promoted flavonol accumulation [120]. The UV-B stress-induced inhibition of AtBES1 expression reallocated more energy towards flavonol biosynthesis, which promptly shifted plants from brassinosteroid-promoted growth to UV-B stress response and ensured normal plant growth under adverse conditions.

Phytohormonal regulation

Jasmonates. Jasmonates (JAs) are vital plant hormones that trigger a cascade of stress-related gene expression in response to biotic and abiotic stress through the SCF^COI1-jasmonate-ZIM domain (JAZ) signaling module. Induction of flavonol accumulation by JAs has been observed in plants such as blackberry (Rubus sp.) [121], G.
biloba [122], and Tartary buckwheat [76]. In Arabidopsis, JAZ proteins interfered with MYB-bHLH complexes, which were composed of IIIe-bHLH TFs (MYC2, MYC3, MYC4, and MYC5) and SG19 R2R3-MYB TFs (MYB21 and MYB24) [123]. JA signals were perceived by CORONATINE-INSENSITIVE PROTEIN 1 (COI1), which recruited JAZ proteins to the Skp1/Cullin/F-box (SCF^COI1) complex for ubiquitination and subsequent degradation by the 26S proteasome pathway [124,125]. Thus, the degradation of JAZ proteins led to release of the MYB-bHLH complexes, which regulated expression of downstream genes involved in stamen development [123,126] (Fig. 3). A further study demonstrated that AtMYB21 and AtMYB24 activated transcription of AtFLS1 to induce accumulation of pollen-specific flavonols, which enhanced reactive oxygen species (ROS) scavenging capacity and contributed to male fertility [71].

Auxin. Auxin is an important plant hormone for plant growth and development, acting through the SCF^TIR1-IAA-ARF module [128]. Auxin has also been reported to positively regulate flavonol biosynthesis [129,130]. In Arabidopsis, Lewis et al. [130] found that auxin could upregulate expression of AtMYB12, AtCHS, AtCHI, AtF3′H, and AtFLS through the Transport Inhibitor Response1 (TIR1) signaling pathway [130]. They speculated that auxin response factors (ARFs) play a crucial role in this process [130] (Fig. 3). Recently, AtARF2 was identified as a positive regulator of flavonol biosynthesis through directly activating transcription of the AtMYB12 and AtFLS genes [131]. Another study indicated that auxin-induced flavonol accumulation also depended on the ARF pathway [5] (Fig. 3 and Table 2). In Arabidopsis, auxin may mediate degradation of SOLITARY ROOT/INDOLE-3-ACETIC ACID14 (SLR/IAA14) by the SCF^TIR1 complex, which led to release of AtARF7/19 and subsequent induction of AtWRKY23 expression, thereby activating AtF3′H expression to induce flavonol biosynthesis in the roots [5,132].
Gibberellic acid. Gibberellic acid (GA) is an important plant hormone for plant growth and participates in negative regulation of flavonol biosynthesis through the GID1-SCF^SLY1/GID2-DELLA signaling module [133-135]. In plants, GA signals promoted the interaction between GID1 (GA-INSENSITIVE DWARF1) and DELLA proteins, enhancing the binding affinity of the GID1-DELLA complex with the SCF^SLY1/GID2 complex. This interaction led to the degradation of DELLA proteins by the 26S proteasome pathway [133,134] (Fig. 3). Recently, a study in Arabidopsis revealed that GA negatively regulated flavonol biosynthesis through the DELLA-SG7 R2R3-MYB module [135] (Fig. 3 and Table 2). In the absence of GA, DELLA protein accumulated and physically interacted with SG7 R2R3-MYBs, which enhanced the transcriptional activation activity of AtMYB12 and AtMYB111 on promoters of AtFLS1 and AtF3H [135]. This promoted flavonol biosynthesis in Arabidopsis roots and inhibited auxin transport and root growth. Conversely, GA signaling promoted degradation of the DELLA protein by the 26S proteasome pathway. Subsequently, this reduced the transcriptional activation activity of SG7 R2R3-MYB proteins and flavonol content in Arabidopsis roots, which led to an increase of auxin accumulation in root tip cells and promotion of root growth [135].
Abscisic acid. Abscisic acid (ABA) is a plant hormone that plays an important role in plant growth and fruit ripening [136]. ABA can also modulate the stomatal aperture by directly promoting production of ROS in plant guard cells [137]. Flavonols, as important ROS scavengers, are involved in the regulation of ABA-induced stomatal closure in plants [138,139]. In tobacco, ABA treatment could inhibit expression of NtMYB184 (a flavonol-specific activator), which reduced production of flavonols and thus increased ROS levels to regulate stomatal closure [65]. ALA, known as a new natural plant growth regulator, can reverse ABA-induced stomatal closure [140]. In apple, ALA treatment enhanced protein abundance and phosphorylation of protein phosphatase 2AC (MdPP2AC), which promoted the interactions of different PP2A subunits and increased holoenzyme activity [140]. Phosphorylated PP2A interacted with and dephosphorylated MdSnRK2.6 (sucrose non-fermenting 1-related protein kinase 2.6), which induced flavonol accumulation and thus reduced ROS levels in the guard cells to open stomata [140]. In addition, ALA treatment could inhibit expression of MdSCL8 (a flavonol repressor), which may promote expression of MdFLS1 and flavonol accumulation to participate in regulation of stomata opening [93].

Other regulation modules. Several members of other TF families were identified as repressors that negatively regulate the transcriptional activity of flavonol-related activators (Table 3). In Arabidopsis, AtMYB4 could repress activation of the AtCHS and AtFLS promoters by AtMYB12 and AtMYB111 [141]. In tomato, loss-of-function and luciferase assays indicated that SlSPL-CNR functioned as a negative regulator of flavonol biosynthesis by repressing SlMYB12 transcription activity [142]. In P.
tomentosa, yeast two-hybrid, pull-down, co-immunoprecipitation, and luciferase assays demonstrated that PtIAA17.1 could interact with PtHSFA5a to suppress PtHSFA5a-mediated activation of PtCHS1 and PtFLS1 [75]. Salt stress enhanced the stability of PtIAA17.1, resulting in the promotion of its interaction with PtHSFA5a and the repression of flavonol biosynthesis [95]. In addition, a receptor-like kinase (OsRLCK160) could regulate flavonoid accumulation in rice by interacting with and phosphorylating OsbZIP48 [87].

Conclusions and perspective

Flavonols are an important branch of the flavonoids with potent biological activities, and they are abundant in horticultural crops. Due to the broad functions of flavonols in plants, sophisticated regulatory networks involving different types of TFs (activators and repressors) and miRNAs have evolved, illustrating fine-tuned flavonol homeostasis under specific environmental conditions or hormonal signals. Significant progress has been achieved in unraveling the transcriptional regulation of flavonol biosynthesis in Arabidopsis and some horticultural crops, such as tea, apple, and Chinese bayberry. However, more flavonol-rich horticultural crops deserve in-depth investigation to discover novel TFs and elucidate the specific regulatory mechanisms of flavonol biosynthesis. Besides transcriptional regulation, post-transcriptional regulation, post-translational modifications, and epigenetic regulation of flavonol biosynthesis in plants are also worthy of further exploration.
In addition, flavonol derivatives are associated with the astringency of horticultural crops. Increasing the flavonol content of fruits and vegetables by genetic selection, genetic engineering, or physical treatment may be of interest for human health, but it might negatively affect taste quality. Structural modification of secondary metabolites by decorations such as hydroxylation and glycosylation can alter the taste of the compounds. So far, there is limited research reporting improvement of the flavor quality of flavonols by structural modification, which deserves future study. This will provide valuable insights for utilizing molecular design breeding and synthetic biology to enhance flavonol accumulation in horticultural crops with significant bioactivities and unaffected flavor quality, thereby facilitating the development of the horticultural industry.

Figure 2. Cascade transcriptional regulation of flavonol biosynthesis in plants under UV-B radiation. The yellow box with bold letters means promotion of flavonol biosynthesis, whereas the blue box with non-bold letters represents repression of flavonol biosynthesis. Line thickness indicates the activity of transcriptional activation or repression. Light orange and blue-gray ovals represent activators and repressors, respectively.

Figure 3. Cascade transcriptional regulation of flavonol biosynthesis in plants in response to different hormonal signals. Yellow boxes with bold letters mean promotion of flavonol biosynthesis, whereas blue boxes with non-bold letters represent repression of flavonol biosynthesis. Line thickness indicates the activity of transcriptional activation or repression. Light yellow and light gray backgrounds indicate activation and repression of flavonol biosynthesis, respectively. Light orange and blue-gray ovals represent activators and repressors, respectively. JA, jasmonates; GA, gibberellic acid; Ub, ubiquitin.

Table 1.
Flavonol content in the edible portion of staple crops, medicinal plants, and horticultural plants. Columns: Species | Content (mg/100 g FW) | Reference. Flavonols in plants are usually present in glycosylated forms; therefore, the flavonol content was defined as the total content of flavonol glycosides. *Dry weight basis; FW, fresh weight; n.d., not detected.

Table 2. Different types of transcription factors or regulation modules involved in regulating flavonol biosynthesis in plants.

Table 3. Different regulation modules in cascade transcriptional regulation of flavonol biosynthesis in plants.
Crowd-sourcing and author submission as alternatives to professional curation

Can we decrease the costs of database curation by crowd-sourcing curation work or by offloading curation to publication authors? This perspective considers the significant experience accumulated by the bioinformatics community with these two alternatives to professional curation in the last 20 years; that experience should be carefully considered when formulating new strategies for biological databases. The vast weight of empirical evidence to date suggests that crowd-sourced curation is not a successful model for biological databases. Multiple approaches to crowd-sourced curation have been attempted by multiple groups, and extremely low participation rates by 'the crowd' are the overwhelming outcome. The author-curation model shows more promise for boosting curator efficiency. However, its limitations include that the quality of author-submitted annotations is uncertain, the response rate is low (but significant), and to date author curation has involved relatively simple forms of annotation involving one or a few types of data. Furthermore, shifting curation to authors may simply redistribute costs rather than decrease them; author curation may in fact increase costs because of the overhead involved in having every curating author learn what professional curators know: curation conventions, curation software and curation procedures.

Introduction

Can we decrease the costs of database curation by crowd-sourcing curation work or by offloading curation to publication authors? Bourne et al. (1) proposed these two alternatives to professional curation. Both alternatives are forms of manual curation by individuals who are not primarily trained and paid to be professional curators.
The bioinformatics community has accumulated significant experience with these two alternatives to professional curation in the last 20 years; that experience should be carefully considered when formulating new strategies for biological databases, such as the funding cuts being considered for several model-organism databases (2,3) and the funding cuts recently applied by NIH to the EcoCyc and MetaCyc databases (the NIH grant for EcoCyc was cut by 18% at its last renewal; the NIH grant supporting MetaCyc/BioCyc was cut by 27% at its last renewal).

Crowd-sourcing as an alternative to professional curation

Even before the founding of Wikipedia in 2001, crowd-sourcing of curation (also known as community curation) was an appealing possibility for the life sciences. Community annotation was envisioned as a major advantage of The Genome Sequence Database (GSDB) (4) compared with GenBank. GSDB developed a community annotation tool called GSDB Annotator to facilitate annotation contributions from scientists. However, few community-contributed annotations were received by GSDB, which was among the reasons for its demise several years later. Dr M. Cherry reports that another experiment in community curation was performed by the Saccharomyces Genome Database (SGD). After receiving encouragement from the yeast community at an experimental conference, SGD authored a software tool for web-based submission of community curation. So little curation was submitted by the yeast community that this tool was discontinued by SGD. The EcoCyc and MetaCyc projects also experimented with a type of community curation in the 2000s. Aware of past low participation rates for community curation, and postulating that reviewing existing database entries would take less time than authoring new entries, we began systematically contacting authors of articles we had curated to request that they review the gene annotations and pathways we had curated.
Again, the response rate was so low that we discontinued the practice. More recently, the EcoliWiki project asked Escherichia coli scientists to contribute information on E. coli genes, proteins, and strains. Despite engaging and persistent advocacy by the project's director, Dr J. Hu, at multiple E. coli conferences, the response rate was fairly low (data supplied by Hu show that from 2007 to 2010, an average of 46 people per year contributed 583 wiki page updates; but from 2013 to 2015 the numbers decreased to an average of 10 people per year contributing 76 updates). The Gene Wiki project (5) has created Wikipedia pages for all human genes that combine data programmatically extracted from structured databases with text authored by unpaid human contributors. These articles contain 1.42 million words of text contributed by 6830 distinct editors for >10 000 genes. However, since the information within Gene Wiki articles is not captured within a structured ontology, these Wiki data are not accessible to computational analysis; thus the successes of the Gene Wiki project are not easily transferable to curation of structured databases. Similar comments apply to the Rfam database (6) and its use of Wikipedia to curate textual descriptions of RNA families.

Direct curation by authors

Author-submitted curation is a type of crowd-sourced curation in which the curator is the person who knows the published work best and will benefit the most from the promotion of the work. Several moderate success stories are emerging for author-submitted curation. The TAIR project allows authors of submitted articles to submit Gene Ontology (GO) term annotations on Arabidopsis genes at the time they submit an article. Dr T. Berardini, who supervises TAIR curation, states that TAIR has received 800 such submissions over the years; in 2015, 87 authors submitted 2686 GO annotations for 98 articles. TAIR director Dr E.
Huala believes that the reasons for this success include that TAIR has established relationships with plant journals that ask authors to submit data; that data are submitted at the time authors are most excited about publicizing their work; that the online submission form is simple; and that authors realize that curating their articles in TAIR raises the profile of their work in the scientific community. Canto is a web-based tool designed to allow publication authors to enter biological knowledge about genes, proteins and protein interactions (7). Canto was developed by PomBase for fission yeast literature curation but was designed to be easily deployed for other organisms, or to use additional ontologies. Canto provides a series of web-based forms that allow an author curator to specify what genes are mentioned in an article, and to specify GO terms, protein modifications, interactions, phenotypes, and alleles for those genes. Overall, authors have submitted 5300 distinct annotations from 300 publications to PomBase via Canto (8). In 2015, 18% of annotations entering PomBase were submitted by authors via Canto, and 82% were entered by professional curators (8), meaning that author curation made a significant dent in the curation workload. FlyBase speeds publication triage by sending email requests to authors of newly published articles requesting that, via an online tool, the authors list the genes studied in a publication and indicate the types of data described in the article (9). The author response rate to the FlyBase email requests was a respectable 44% over a nine-month period. Bunt et al. found that author response rates to these email requests were higher for recently published articles than if authors were contacted 2-13 months after publication of their article (35% response rate). On a yearly basis, Bunt et al.
report that this author triage system frees up 2-3 months of FlyBase curator time that would have otherwise been spent on article triage.

Discussion

It is interesting that all of the success stories come from the author-curation model rather than the crowd-sourced curation model. We should note that since authors are members of 'the crowd', nothing would prevent authors of an article from participating in crowd-sourced curation. However, it appears that explicitly targeting authors to participate, particularly near the time of publication when they are most enthusiastic about the work, yields a significantly higher success rate than wider appeals to 'the crowd'. But let us consider other issues and trade-offs around the author-curation model. Bourne et al. are certainly concerned with the costs of biological databases, and particularly with curation costs. But will shifting work from professional curators to the crowd or to authors really save money? Probably some members of the crowd work for free, such as retired scientists or hobbyists. Yet crowd-sourced curation appears to have a very low participation rate, so its cost-saving potential seems quite low. In addition, some members of the crowd will seek payment for their work, and authors are usually professional (paid) scientists, so to a degree I see a shell game here: costs are simply being shifted from one bin (professional curators) to another (authors). That is, whether a curator is being paid for N hours of work or an author is being paid for M hours of work, someone is still being paid. We can argue about who works more efficiently, or about the notion that the NIH may save some money if the authors who do the curation are graduate students who make lower salaries than professional curators, or are paid from funding sources other than government grants. The point is that author time costs money too, and every hour an author spends curating is taking away from their time in the laboratory.
But one could also argue that professional curators, who are more familiar than authors with curation practices and curation software, will also curate faster and more accurately than authors.

Have Bourne et al. identified a key inefficiency in their statement 'There is an unnecessary cost in a researcher interpreting data and putting that interpretation into a research article, only to have a biocurator extract that information from the article and associate it back with the data'? Indeed, why not have the people who understand the work best, the authors, enter their results directly into one or more relevant databases? One reason is that professional curators have unique skills and training that the average bench scientist lacks. Indeed, in our 20+ years of experience in developing curated databases, we have found that some PhD-level life scientists cannot develop into successful curators even after prolonged training.

Curator training encompasses multiple topics. One topic is curation conventions, since a significant goal of biological databases is to standardize the inconsistent terminology found in the life-sciences literature. For example, EcoCyc defines conventions for naming proteins, metabolites and metabolic pathways. The EcoCyc Curator's Guide (10) also defines conventions for defining the boundaries of metabolic pathways, conventions for what units to use for different database fields, style guidelines for writing mini-review summaries for genes and pathways, citation guidelines and conventions for assigning evidence codes to database entries. Curators also receive training in how to use the curation software used to enter new information into a database. For example, in EcoCyc, we enter a new metabolic pathway by first entering each metabolic reaction in the pathway (which involves entering reactant and product compounds not already present in the database), and then defining the pathway itself.
Separate editing tools exist for metabolites, for reactions, for enzymes and for pathways. Each is fairly complicated to use; for example, the reaction editor includes a reaction-balance checker and allows users to specify reaction-directionality information as well as the cellular compartment(s) in which a reaction occurs. This software is nearly impossible to use without significant study or training.

Curators are also trained in the methods needed to ensure that the information that is entered into a database is amenable to computational analysis, such as the use of ontologies, and the persistent determination to refrain from entering stray commentary and other nonconformant text into controlled database fields. Ultimately these methods ensure that the EcoCyc database can be computationally converted into an executable metabolic model, thus avoiding the need (and cost) of having separate curation efforts for a model-organism database and a metabolic model that now occur for most organisms.

For review-level databases it may be preferable that synthesis of information from multiple articles be performed by neutral third parties such as database curators. Another unappreciated role of professional curators is to correct the errors that are rampant throughout the experimental literature; if database entries resulting from a publication were authored by the same person who authored the publication, they would likely promulgate the same errors from the publication into the database; fresh eyes are more likely to notice errors. In my view the inefficiency identified by Bourne et al. of having a person different from the author curate an article is more than offset by multiple inefficiencies of the author-curates model, where one author of every curated publication must learn curation methods, conventions, ontologies and software, probably for multiple databases over the course of multiple publications!
Given the lack of interest most scientists have shown in crowd-sourced curation, it seems likely that if curation were forced upon them, some authors would take shortcuts in the process, skimping on what information they enter, and circumventing curation methods. The result will be incomplete, low-quality database entries. As discussed in (11), there is variation in the complexity of different curation tasks. We posit that 'the ease of replacing professional curators with some other approach (here, author curation) will depend on the complexity of the curation to be performed'. For example, it will be more challenging for authors to curate multiple types of data (e.g. gene functions, gene-regulation mechanisms and sequence variation) than a single type of data (e.g. gene functions alone).

If database budgets are slashed by funding agencies, will scientists come to the rescue, such as by volunteering their time to assist in database curation? Many databases have lost their funding over the years; I know of no instance where scientists have come to the rescue in this way. For example, for many years the National Science Foundation biological databases program had a policy of funding databases for one grant cycle only; few if any of the databases funded under this program found any alternative source of funding after the first cycle. We do have the recent example of the TAIR Arabidopsis database, which lost its funding a few years ago and has now begun a successful subscription model to raise funds for curation and operations from the scientific community (12). In this case scientists came to the aid of a database by purchasing subscriptions.

Conclusions

The vast weight of empirical evidence to date suggests that crowd-sourced curation is not a successful model for biological databases. Multiple approaches to crowd-sourced curation have been attempted by multiple groups, and extremely low participation rates by 'the crowd' are the overwhelming outcome.
The author-curation model developed by TAIR, Canto, and FlyBase does show promise for boosting curator efficiency, and should be explored by other databases. However, note its limitations. This model has taken years to develop. The quality of author-submitted annotations is still uncertain (and should not be taken for granted given the complexity of GO). The response rate is significant but low. And to date author curation has involved relatively simple forms of annotation involving one or a few types of data. Furthermore, shifting curation to authors may simply redistribute costs rather than decrease costs; author curation may in fact increase costs because of the overhead involved in having every curating author learn what professional curators know: curation conventions (e.g. naming and style guidelines), curation software, and curation procedures. The more complex the database, the more the balance is likely to tip in favor of professional curation, because authors will require more training to produce the high-quality curation achieved by professional curators.

Funding

This work was supported by Award Number GM077678 from the National Institute of General Medical Sciences of the National Institutes of Health. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Conflict of interest. None declared.
Implementation of Federal Waivers for Feeding Children in Early Care and Education During the COVID-19 Pandemic

Objective: To capture Child and Adult Care Food Program (CACFP) state directors' experiences implementing federal waivers for feeding children in early care and education (ECE) settings during coronavirus disease 2019.
Design: Qualitative semistructured interviews.
Setting: Virtual interviews with state CACFP directors.
Participants: Child and Adult Care Food Program directors from 21 states from December 2020 to May 2021.
Phenomenon of Interest: Implementation of state-level waivers.
Analysis: Qualitative thematic analysis.
Results: State directors reported that the coronavirus disease 2019 waivers allowed ECE programs to continue feeding children despite being closed or having limited enrollment. The meal pattern, noncongregate feeding, parent/guardian meal pick-up, and monitoring waivers were most frequently used by states. Challenges included maintaining integrity to CACFP meal pattern requirements, addressing the limited capacity of ECE to produce and distribute noncongregate meals, and adapting technology for virtual reviews. Suggested improvements included streamlined communication from the US Department of Agriculture, standing waivers for emergencies, ongoing flexibilities for feeding children, and strategies to increase CACFP enrollment and reduce financial viability requirements for ECE.
Conclusions and Implications: Results indicate the need for the US Department of Agriculture to consider issuing and extending waivers, increasing ECE participation in CACFP, and ensuring timely communication and guidance on waiver tracking.

INTRODUCTION

The Child and Adult Care Food Program (CACFP) is a critical component of the federal nutrition safety net, ensuring access to healthful foods for income-eligible children participating in early care and education (ECE) programs.
The CACFP reaches more than 4.2 million children daily through reimbursements to ECE programs to provide meals and snacks to children that meet meal pattern requirements. 1 Research has shown that children enrolled in CACFP-participating ECE programs have improved access to nutritious foods compared with those enrolled in nonparticipating ECE programs 2-5 and compared with what is available to children at home. 6 Furthermore, CACFP participation has economic implications; low-income households can reduce their food expenses, 7 and ECE providers receive reimbursements for food purchases and free nutrition education and resources. 1

Early in the coronavirus disease 2019 (COVID-19) pandemic, many ECE programs closed, operated at limited capacity, or experienced reduced enrollment as parents opted to keep children home. 8 These changes resulted in a 20% decrease in average daily attendance in CACFP-participating ECE programs and placed more than 900,000 children at risk of losing access to the healthful meals that CACFP-participating ECE programs provide. 9 Program closures also increased families' risk of food insecurity and children's risk of nutrition-related health conditions, such as obesity. 9 Prompted partly by the widespread ECE closures, Congress passed the Families First Coronavirus Response (FFCR) Act that allowed the US Department of Agriculture (USDA) to provide waivers to states that enabled CACFP-participating ECE programs to continue distributing nutritious food to children.
10 Key waivers created by the FFCR Act that impacted CACFP-participating ECE programs included the meal times waiver, which allowed ECE programs to serve meals outside the standard mealtimes typically required by USDA 11 ; the noncongregate feeding waiver, which allowed ECE programs to serve meals outside of a group setting 12 ; the parent/guardian meal pick-up waiver, which allowed parents or guardians to pick up to-go meals without having their children present 13 ; the monitoring waiver, which relaxed state CACFP agencies' requirements for in-person monitoring of ECE programs 14 ; and the meal pattern flexibility waiver, which allowed for reimbursement of meals that did not meet the meal pattern requirements. 15

Although the federal waivers were available to all states, each state had to formally opt in to use any or all of the waivers. Once states' requests were approved, state CACFP agencies approved individual CACFP-participating ECE programs' use of the waivers. 1

Despite the significant resources invested in creating these waivers, and most states opting to use all the waivers, 10 very little is known about the implementation of the waivers. Significant variation in waiver implementation among states 16 may have resulted in diet-related inequities for children served by CACFP, underscoring a critical need to understand the state-level implementation of these waivers. Understanding how the waivers were implemented has implications for improving CACFP: components of waiver implementation that did not work can be identified and refined before the next emergency. Thus, we aimed to capture the experiences of CACFP directors on state-level waiver implementation to better inform future program and policy efforts for feeding young, low-income children during ECE program closures or interruptions such as those caused by COVID-19.
Research Design

We followed a basic qualitative research approach 17 wherein semistructured interviews were used to explore state CACFP directors' perspectives and experiences regarding the challenges and facilitators of implementing waivers for CACFP-participating ECE programs during the COVID-19 pandemic. The University of Nebraska-Lincoln Institutional Review Board approved all procedures and deemed this research study exempt.

Participants and Recruitment

Individuals were eligible to participate in the study if they were a state-level CACFP director or another state employee who assisted in implementing CACFP waivers. Hereafter, all participants are considered state CACFP directors regardless of their official position title. Researchers obtained contact information for CACFP directors through searches of each state's CACFP website. State CACFP directors with contact information published on the state's CACFP website (n = 42 states) were sent an email and invited to participate in this study. If participants did not respond, 1 follow-up email was sent each week for 2 weeks following the initial invitation until researchers completed 3 attempts to connect. Participants were offered a $30 gift card. Twenty-four directors from 21 states agreed to participate, 8 declined, and 13 did not respond. All participants gave written, informed consent to participate.

Data Collection

Semistructured interview questions were developed by the coauthors and other members of the Nutrition and Obesity Policy Research and Evaluation Network Early Childhood COVID-19 Work Group. 18 Questions were reviewed by an expert committee with backgrounds in CACFP policy, ECE nutrition, and/or qualitative methods (Table 1). Interviews were conducted via Zoom (Zoom Video Communications, Inc, 2021) by experienced qualitative researchers from December 2020 through May 2021. Interviewers did not have any previous relationships with participants. All interviews lasted between 45 and 75 minutes.
Participants were sent the interview questions before the interviews, and the authors reiterated the goals of the study at the beginning of the interview. After each set of 2-3 interviews, the interviewers met to discuss major themes identified during the interviews. The interviews continued until the researchers determined that saturation was reached or no new information was revealed. 19 All interviews were video and audio recorded.

Data Analysis

Interview recordings were transcribed verbatim, checked for accuracy, and uploaded to NVivo (version 12, QSR International Pty, Ltd, 2020) for data analysis. In this study, thematic analysis followed the realist method, which reports experiences, meanings, and the reality of participants. 20 Themes were identified using an inductive approach at the semantic level, meaning our themes were identified from the explicit, surface meaning of the data. 20 Development of themes focused primarily on participants' voices, and emergent themes were descriptive to capture the semantic meaning and summarize the range of participants' experiences. 20 Data were coded using the 6 steps of thematic analysis as follows. 20 First, coders familiarized themselves with the data by thoroughly reading the transcripts from each state multiple times and identifying patterns of responses. Second, codes were developed inductively by identifying units of meaning derived from the transcripts, and a codebook was developed. Third, codes were generated and grouped into potential themes and subthemes. Coders discussed grouping and arranging codes and reached a verbal agreement for all potential themes and subthemes. Fourth, possible themes and subthemes were reviewed between authors. The themes were reviewed for consistency with the codes to ensure they represented the data. Fifth, themes were defined, named, and assessed to ensure the data supported them. Sixth, a final report included the themes, subthemes, and representative quotes.
All authors reviewed the initial themes and the final report to ensure that the data supported all generated themes and subthemes. The authors discussed any inconsistencies until an agreement was reached. Throughout the process, strategies to promote trustworthiness 21 were employed. These strategies included establishing credibility through peer debriefings with all authors present 21 ; establishing dependability 21 through the use of audit trails 19,21 that documented all decision-making during data analysis and records of codebooks, raw data, field notes, and transcripts; and maintaining reflexivity throughout the process by monitoring our biases through peer consultations and frequent team meetings. 21

RESULTS

State CACFP directors perceived that the waivers were critical to ensuring that young children were fed and families could avoid food insecurity early in the pandemic. As 1 state director said, "I think within our state, if we had not opted in and been granted those flexibilities [waivers], we would have had mass food insecurity in our state." Specifically, the meal pattern flexibility waiver was widely used and essential in states in which food supply shortages occurred. See Table 2 for more representative quotes. Furthermore, state CACFP directors reported that the monitoring waiver, which reduced requirements for monitoring, granted flexibility and time for dedicated staff to provide technical assistance, implement waivers, troubleshoot, and maintain safety.

Table 2. Themes, challenges, and representative quotes

Theme 1: Waivers allowed child care programs to continue feeding children when children could not attend child care each day:
"The waivers gave options for the child care to continue serving the meals, because there are [usually] a lot of the requirements that they have to stay within the mealtimes and serving the meals on site, and [allowed] that flexibility to be able to continue serving kids. And so, the ones not attending child care, they were able to do the grab and go meals and have parent pickup, but they also have the delivery option to some of the families that could not come out of their home.
That was a good option that USDA provided."

The meal pattern flexibility waiver was widely implemented in states with food supply shortages, especially at the beginning of the pandemic:
"The waivers for the meal pattern were important because they couldn't get milk or they couldn't get whole grain rich items, so there was some food shortages in the beginning, not so much probably through fall and this era time. We are now experiencing milk shortage, so that waiver has been used in that way."

The monitoring waiver granted flexibility so there could be dedicated staff for technical assistance, waiver implementation, troubleshooting, and safety:
"The sponsor monitoring waiver was really helpful for those agencies, especially some of the agencies that have school, or have child cares all across different towns because then they didn't have to go from one town to the next and potentially spread the virus if they're going from a community that has a really high rate right now, and then going somewhere else that maybe has a lower rate. That's the biggest thing we've heard. They have all really appreciated that waiver so that they could relax some of that monitoring."

Theme 2: The meal pattern, noncongregate feeding, parent/guardian meal pick-up, and monitoring waivers were most commonly used by states and used in conjunction with each other, and state CACFP directors reported challenges and strategies regarding waiver implementation.

Meal pattern flexibility waiver - Challenge: Preserving meal pattern integrity
State CACFP directors preserved the integrity of meal pattern requirements by approving the waiver with adequate justification and providing resources for healthy substitutes for foods that were not available:
"And so, with the CACFP meal pattern, one of the challenges was probably in some of the rural areas they were having a hard time finding whole grain products to meet the whole grain requirement.
So, we did a lot of flexibility on that, but made sure that they had a grain. But we allowed waivers in the meal pattern with whole grain rich. Just to make sure that they were serving all the components and they had a grain."
"The one waiver that we did see utilized more in the beginning of the pandemic, not so much now, is the meal pattern waiver because it was a meal pattern waiver issued for CACFP. We approved to do it on a case-by-case basis, they had to tell us specifically what the issue was [before] we would grant waivers."

Noncongregate feeding waiver - Challenge: Feeding families
The utility of the noncongregate feeding waiver was limited because it only allowed child care programs to serve meals to children enrolled in child care and not all children in the family:
"I think the summer feeding was absolutely critical or having some method outside of CACFP that could feed the complete family. Because that was one of the big issues we were struggling with was feeding that family as a whole. You know if they have three children and two are in school and one's in daycare we have to have a reasonable method of how we're going to feed all the children in the family."

Challenge: Limited capacity
It was challenging for child care providers to implement the noncongregate feeding waiver because they had limited capacity for staffing, packaging, delivery, and storage space:
"With the non-congregate feeding waiver, one of the things that we got questions on that was an issue for some of our providers was getting the paper products or to-go containers, and then also the added expense of that, met with also like a delivery expense that they didn't necessarily have before, just some extra expenses around that that isn't really covered in their normal reimbursement because they're probably doing fewer meals, and getting less reimbursement but also adding this additional service on to meet their participants where they are.
That was a huge thing with gas and mileage and delivering, as well as those to-go containers because as we know, a lot of restaurants and other types of programs switched to a lot of to-go containers for their own services that they were offering so it was harder for them to find those things."
"Being able to do more than one day distribution was important because the staffing. It was difficult to have to staff up daily for those small number of meals."

Overcoming challenges: Limited capacity
Child care programs addressed challenges by preparing meals that were easy to pack, offering bulk products, and coordinating meal deliveries with the Department of Transportation:
"We did find that [child care programs] were much more likely to give out the cold meals, instead of hot meals. And so, in this case, they could prepare them in the morning, and instead of putting them in the fridge they could put them in a cooler, or something like that and maintain the temperature that way. They also were more likely to, first of all, for supply reasons, but also because it does take up a little bit less space, I think, they would give out more bulk quantities. Instead of giving out multiple, for those sites that could give out some bulk, instead of giving out like individual small cartons of milk, the family might get a gallon of milk and that might include all the breakfast and lunch milk quantity for the week, and then they wouldn't have to give out milk with every single meal."
"Most of our school districts developed what we call distribution routes, and parents could come to a particular school site, but also families could call in and they would be placed on a list, and the distribution routes would be the traditional bus routes. So they were going right to the kids home and for some areas of the community if we had a call from an apartment complex where not a lot of kids go, we may have had some younger kids, the school district would include them as a route or stop as well.
And once we received that flexibility, that meal time waiver, that mealtime restriction flexibility that allowed us to kind of bundle meals, it helped to reduce transportation costs."

Challenge: Food safety
It was challenging to adapt noncongregate meals for pick up or delivery and meet food safety standards:
"We did provide overarching food safety guidelines, but food safety. The actual authority in [our state] is county by county so each county might have slightly different requirements and so just making sure they were meeting the county requirements for food safety was always a concern."

Overcoming challenges: Food safety
As a strategy to ensure food safety, some state CACFP directors reported providing technical assistance and educational materials:
"We did a lot of technical assistance on how to provide a take home meal and what guidance to give them about storage and preparation or that sort of thing. We had not done any take home meals before, so making sure they held temperature and those sorts of things we had to provide a lot of education on."

Parent/guardian meal pick-up waiver - Challenge: CACFP verification
It was challenging for child care programs to verify CACFP participants during parent pick up:
"The problem, you know, just being realistic, [parent pick-up is] a great thing and it's a very necessary thing, but it also does allow and cause some concerns as an administering agent because the rules are kind of loosey goosey. And, it does allow for people to maybe bend them in not the way they were intended or to add a couple meals here there because there really is no way. I mean people could pull up, the parents could even pull up so they can say, "We have five kids," and they would give them 35 meals.
And maybe they have no kids, and ..."

"... really close together to make sure a) we weren't overlapping in service and both serving the same thing and b) that the schools were picking a side, either they were going to stay with the school lunch or the same with summer, or they were going to go with our program. And so, we had to work together to make sure that we were picking the right waivers we were implementing everything and then we were also doing our administrative oversight to make sure there wasn't duplicate participation."

Monitoring waiver - Challenge: Adoption of technology
Adapting to technology for monitoring was challenging for child care programs:
"Technology has been a big point of discussions specifically with our offsite reviews. I mentioned this before I'll mention it again though, there's a lot of very rural frontier areas in [our state] and with that comes the lack of a high bandwidth. So doing a Zoom call or a FaceTime or Skype would either break up or you wouldn't be able to get completely through a call, that's happened to me and it's also happened to sponsors while they're trying to conduct their monitoring reviews."
"The monitoring waiver to basically do desk reviews is great in certain circumstances, but what we're finding is it's taking us longer to get the review done and we're having more issues because those things that we would normally just observe or get when we're out there on site, then become a challenge because we're back and forth and back and forth saying you didn't send me this, or we're missing this particular piece, or I need you to take a picture of your notification that you have your justice for all poster posted, and I need to have you send your labels for your meal, take pictures of your labels. People struggle with that and that's a big burden on our sponsors."
Overcoming challenges: Adoption of technology
Using alternative strategies such as phone interviews, sending supporting documentation, and following best practices for virtual reviews:
"[Child care programs] did the best they could with [technology], whereas our sponsors reported that you know there was a lot of pictures that were sent, there was a lot of telephone conversations."
"We used the best practices for monitoring document for state agencies as well for really streamlining how our virtual monitoring for our reviews, for how that would play out for this fiscal year because we were really just scrambling and being flexible, yet meeting our requirements once COVID hit. When we switch to doing our virtual reviews because that was not something that we had ever done either, so we worked through some of the challenges, but we were like happy to see the best practices document, and it helped us define what our protocols are for our reviews during COVID for this fiscal year."

Theme 3: Implications for policy. Timely communication from USDA, standing waivers and continued flexibilities for feeding children, increasing CACFP enrollment, and reducing financial burden on child care are continued critical needs.

Timely and clear communication from USDA: State CACFP directors reported the need for clear communication regarding waiver usage and tracking from USDA.
Timelier and streamlined communication from USDA regarding waiver implementation, waiver extensions, and responding to questions:
"I think that the thing that has been the most difficult for everybody throughout this is just like the lack of agility in terms of responding to something like this. So, I think that the USDA did the best that they could, given the circumstance. But, I mean, it wasn't fast enough. We were not hearing back on waiver requests. We weren't issuing them quickly enough."
"There was a lot of nationwide waivers that came out so it was getting very confusing on which waivers the sponsors needed to use, and which one was still effective, and which one had expired. So, that was very challenging to make sure that they understood which waiver was still effective."

Challenges regarding tracking waiver usage and understanding what data to report back to USDA:
"I think a lot of questions from our providers to our sponsors to us was, "What is my record keeping look like during COVID?" That was a huge question about what's required, especially during the non-congregate feeding, "What do we really need to keep because they're not really in attendance? Do we keep an attendance?"

Streamline waiver communication where USDA communicates about waivers with the state directors, who then communicate with sponsors and providers:
"I will say one of the largest issues that we had with the waiver information is that the waivers were released, the sponsors understood them, or knew about them but maybe didn't understand how they were supposed to be used, so were asking to be able to utilize them before the state agency truly understood the purpose of the waiver and the intent of the waiver and to what extent it could be used. So I would say that was probably one of our largest hurdles is that the information was available to the public and. Yes we didn't get the guidance as timely as we could have. We had to tell a lot of our sponsors which fortunately we have a good working relationship with our sponsors and they understand that that can be an issue, that information gets publicized before we really know what's going on with it."

Offer a wide variety of mechanisms for timely and effective waiver communication from state agencies to CACFP sponsors and child care programs:
"So, we already had a broadcast email system to communicate with all the sponsors on.
Anytime there's policy memos that come out, updates and such, we send out broadcast emails to all the sponsors. And so, we used that same system to communicate with them."
"Again, we were calling them every day, and we are still calling the sponsors every week, so it was conversational. Each of the staff was doing their own recommendations based upon that need so I can ask them if they had any suggestion, what the suggestions were and the problems, but I do not have anything in writing. That's because it was that one on one thing that we're doing."
"And then we did offer just one-on-one technical assistance. . . . We felt like it was better to communicate with them individually. And just address their questions and their assistance needed that way because technology, for the most part, can sometimes be challenging for those folks."
"We created some resources that we can give you links to. One of them is our 'At a Glance' document that summarizes all of the currently available waivers, and what their deadlines are, and gives like a quick synopsis of what this is. We have a 'Frequently Asked Questions' document that we just kind of collated all of our most frequently asked questions during COVID with their answers. We can give you a link to that, and then we also have our CACFP Training Calendar that we created at some point during COVID to help them know when we have our different trainings live that are available."
Standing waivers and continued flexibility: State CACFP directors reported the need for standing waivers to implement during emergencies and continued flexibility to implement the meal pattern and monitoring waivers
Flexibility to transition between normal and emergency regulations moving forward
"I have learned that when there are public emergencies such as a pandemic or now in the Midwest a big storm, that if there was an easy way to transition from current regulation to adjusted regulation without having to opt in or have a big formal process, I mean, yes, we do need to provide a plan of how we're going to ensure program integrity. I just feel that it will be easier for our organizations to say, 'Okay, well, this happened, so we can automatically go back to our pandemic plan.'"
Permanent waiver allowances for continuing to feed children during situations such as child care closures or isolation for illness, during evenings, weekends, and holidays
"I would love to see us continue providing meals to our programs on weekends for children. I would like to see recognition of the fact that children are hungry on weekends and holidays, too, and I would like to see, with COVID again, it has brought this to the forefront, I think. We always knew that children were hungry, people who work with it on weekends and holidays, but I would like to see CACFP have the ability to feed children, to give children food on weekends and holidays."
"There are a lot of advocacy groups out there that are pushing for these [waivers] to continue forever. You fed them, basically, we fed them free for a year. Clearly, we can continue to do that. I hear that on several calls in our state from advocacy groups, and when we have our regional call with our USDA office, other states are saying the same thing. There's a huge push for universal free feeding on all programs.
Because they feel like it's clear that we can do it because we've had to do it for a year, so let's just keep it up."
Continued flexibility from USDA so states can adjust meal patterns and monitoring requirements to their specific needs
"I think, and I mentioned it earlier, I would like to see USDA allow the state to use waivers when needed. I'll give you an example. We realize the importance of whole grain products, but when you live in a rural area, and you have maybe one little tiny local mom and pop store, it's hard to find whole grain, and our providers sometimes have to travel 20 to 30 miles to find a loaf of true whole grain bread or products that are whole grain. I don't want to see that, and I want to be able to use some waivers when they're necessary."
"The continued flexibility from USDA has been so helpful, and allowing us as a state agency to work with our sponsors for what works best for them, rather than USDA prescribing, 'This is what you have to do.' They understand that every state is different and every region is different, and so being able to have that flexibility to work with the sponsors as needed and having USDA be willing to grant flexibilities when needed, is really, really helpful."
Increase CACFP enrollment and reduce financial burden on child care: CACFP directors suggested strategies such as changing financial viability standards for CACFP and lowering income eligibility requirements for for-profit centers; some states also reported state-funded grants and resources to increase CACFP participation and alleviate the financial burden on child care
Support child care providers to leverage funding through the state or other sources by developing a repository of funding sources for child care and supporting providers to apply for such funding
". . . sponsors, and they're stable. They're holding steady with providers. They may have lost a couple, but if anything, they've probably added more because there's a hunger issue, and the providers are recognizing the value of programs like CACFP."
"Well, I also mentioned the Office of Childcare and Development, which distributes our state-funded reimbursements for families for child care. Our partnership with them was very important; they offered several grants throughout COVID to child care providers. Yeah, they've offered grants that child cares could apply for, and then those child cares could then credit families for their child care fees even if they were not getting state assistance."
Change financial viability standards for CACFP participants because programs may no longer be eligible because of the financial effects of COVID-19
"I think that the financial viability standards that are embedded within CACFP are limiting a lot of our smaller and sometimes our newer centers and organizations. . . . Of course we expected a downward trend this year, but so many of our organizations have not been able to meet that standard because of COVID. The pandemic has kind of put them back, pushed them back a couple of years maybe. And I think if the state agencies could have some flexibility when it comes to that particular performance standard. They're willing to have that program accountability measure. They just don't have contingency funds. They have just enough money to pay the bills that they get. So if we could have a bit more flexibility when it comes to new organizations, even if it's probationary. But we have a lot of sponsoring organizations that are fearful of bringing on sites that aren't financially solvent. We don't mean those where the house is about to burn down, but if you're just making ends meet, this is really who we should be looking for because that's who really would benefit from the program."
For-profit centers may need lower eligibility requirements to continue to participate in CACFP
"The other thing that was a really negative impact was on our for-profit centers, because they still had to show that they were 25% or above [low-income] in the children that they served, and when they were taking care of first responders, that skewed that number, so then they weren't able to claim on CACFP because the income level for their children was below that 25%. We did ask for a waiver from USDA and we have gotten no response. And so, for example, we have someone who called us who was at 24.5% free or reduced who still could not claim."
Meal pattern flexibility waiver. Subtheme: State CACFP directors felt the need to preserve the integrity of meal pattern requirements before approving its use. The meal pattern flexibility waiver allowed for reimbursement of meals that did not meet the meal pattern requirements. One of the biggest challenges with implementing the meal pattern flexibility waiver was maintaining integrity to the meal pattern. This was difficult because it required additional time and resources to work with each ECE program to determine the best options available and follow meal pattern requirements as closely as possible. For example, when ECE programs could not purchase whole grains, state agencies worked with them to still serve whole grain-rich food items by finding an alternative or encouraging ECE programs to at least offer some type of grain. One state CACFP director said, We do ask folks who are operating, "What are you serving?" because we don't want them going from fresh fruits and vegetables, bananas, broccoli, and chicken breasts to honey buns, chocolate milk, and vanilla wafers. We are regulating, and they know that the Meal Pattern [Flexibility] Waiver is not a free-for-all. We do say, "If you can afford to stick with the meal pattern, of course, stick with the meal pattern." [However] we understand there are going to be times when now that item might not even be available to you. In addition, some state CACFP directors confirmed the need for this waiver before approving ECE programs to use it. For example, several states reported only authorizing ECE programs to use the meal pattern flexibility waiver after verifying a food shortage in their area. By following these strategies, state CACFP directors used the meal pattern flexibility waiver when necessary without compromising the CACFP meal pattern requirements. Noncongregate feeding waiver. Subtheme: State directors perceived the utility of the noncongregate feeding waiver was limited because it only allowed child care programs to serve meals to children enrolled in child care and not all children in the family. The noncongregate feeding waiver allowed ECE programs to serve meals outside a group setting. Although the noncongregate feeding waiver was widely implemented across states, CACFP directors reported challenges related to inherent limitations with this waiver. Specifically, the first challenge was for families with school-aged children and children enrolled in ECE programs. For these families, ECE programs participating in CACFP could only provide meals for the children enrolled in the ECE program, meaning families had to find other sources of meals for their school-aged children. Subtheme: It was challenging for child care providers to implement the noncongregate feeding waiver because they had limited capacity for staffing, packaging, delivery, and storage space. Early care and education programs did not always have the capacity or infrastructure to implement noncongregate meals.
Before COVID-19, ECE programs that served children prepared meals on site or had meals delivered by vendors. As the noncongregate feeding waiver allowed ECE programs to distribute meals outside of the group setting, ECE programs were then required to develop or purchase meals that could be delivered to children elsewhere. Commonly, ECE programs did not have sufficient staff to produce, package, and distribute the to-go meals. In addition, several ECE programs did not have storage or refrigerator space for the to-go meals, nor did ECE programs have the resources to deliver meals to children whose parents could not pick up meals. One state CACFP director said, [ECE providers] would tell us, 'Oh, I want to give out a week's worth of meals.' And we had to say, 'Okay, let's stop and think about this, because how are you going to do that? You don't have huge commercial refrigerators. Do you have the staff to be able to prepare all those meals at once and get them out?' Subtheme: Child care programs addressed challenges by preparing meals that were easy to pack, offering bulk products, and coordinating meal deliveries. To overcome the challenges of packaging and delivering meals, CACFP directors reported working with ECE programs to develop menus with food items that were easy to package and encouraged programs to offer foods in bulk packaging (eg, milk, rice, and bread for the whole week). Prepackaging of foods allowed programs to meet needs for the entire week rather than 1 day of meals. Regarding delivering meals, 1 innovative strategy some state CACFP directors reported was partnering with the state's Department of Transportation to deliver meals to children's homes using school buses. Subtheme: It was challenging to adapt noncongregate meals for pick up or delivery and meet food safety standards. Early care and education programs found it challenging to adapt noncongregate meals for parent pick up or delivery while maintaining food safety standards.
Safely holding food at appropriate internal temperatures was a new challenge for several ECE programs that were used to preparing meals before serving them to children. State CACFP directors and their staff provided programs with technical assistance and educational materials to overcome food safety challenges. Parent/guardian meal pick-up waiver. Subtheme: It was challenging for child care programs to verify CACFP participants during parent pick-up and prevent accidental duplication of meals with other child nutrition programs. The parent/guardian meal pick-up waiver allowed parents or guardians to pick up to-go meals without having their children present. Given the waiver stipulation that CACFP can provide meals only to children enrolled in CACFP-participating ECE programs, state directors reported that it was challenging to verify if the parents were picking up meals for CACFP-participating children and, if so, how many. In addition, state CACFP directors reported challenges not duplicating meals served by other child nutrition programs such as the Summer Food Service Program (SFSP). For example, state directors reported that some SFSP sites also acted as CACFP sites, meaning they submitted claims for meals served to both programs. The CACFP and SFSP worked closely with these sites to ensure that meals were submitted appropriately for reimbursement. One strategy to overcome this challenge was for programs to delegate which meals would be claimed with each program. For example, breakfast and snacks were claimed through CACFP, and lunch was claimed through SFSP to ensure no accidental overlap in program reimbursements. Monitoring waiver. Subtheme: Adapting to technology for monitoring was challenging for child care programs. The monitoring waiver relaxed state CACFP agencies' requirements for in-person monitoring of ECE programs.
State CACFP directors reported that this waiver granted them the flexibility and time to dedicate staff to technical assistance for programs implementing waivers and helped keep their staff safe because they no longer had to travel throughout the state to visit ECE programs. Despite the comprehensive implementation of the monitoring waiver, state CACFP directors reported that adapting to technology was challenging for ECE directors and providers. For example, state directors described how ECE directors and providers could not always email or scan the required monitoring documents during virtual monitoring. Furthermore, states with programs in rural areas reported challenges using video calls because of the lack of internet connection. To overcome these challenges, state CACFP directors reported using alternative strategies such as conducting phone interviews, allowing programs to send supporting documents by email after virtual monitoring sessions, and following best practices for virtual monitoring created by USDA. Theme 3. Implications for Policy State CACFP directors reported their current critical needs and implications for policy moving forward. Specific themes emerged around the timing of USDA communication, continued or permanent flexibilities for feeding children, and financial implications for ECE programs. Timely and clear communication from USDA. Subtheme: Timelier and streamlined communication from USDA regarding waiver implementation, waiver extensions, and responding to questions is a critical need. State CACFP directors reported a need for more timely communication from USDA regarding waiver implementation, waiver extensions, and response to questions raised by state agencies. State CACFP directors reported that information about waiver allowances and extensions was often not approved or communicated fast enough, which made planning and communication with ECE programs more complicated. 
For example, ECE programs needed to know what waivers would be continued ahead of time to plan for preparation and distribution. However, directors reported that they often would not know if a waiver would be extended early enough to help their ECE programs make accurate plans. Subtheme: Streamlined waiver communication was needed, whereby the USDA communicates about waivers with the state directors, who then communicate with sponsors and providers. Streamlined communication from USDA to state agencies is needed to prevent confusion about waiver implementation. Directors reported that USDA would simultaneously release information on waivers to all states and CACFP-participating ECE programs. Early care and education programs would then call their state CACFP agency, asking questions before the state agency could review the waiver and understand its implications. A streamlined process, in which USDA first informs state directors, who then relay guidance to sponsors and providers, could prevent this confusion. One director explained, [The challenges were] interpreting the policy memos and walking through what an implementation plan at the institution level looks like and what the state is asking as far as the data that these folks are to collect and report to us so then we can report to FNS. Standing waivers and continued flexibility. Subtheme: Permanent waiver allowances for continuing to feed children during emergencies and flexibility to transition between normal and emergency regulations are needed. State CACFP directors reported a critical need for permanent standing waivers and continued flexibility. Specifically, they wanted to make decisions to transition between standard regulations and emergency flexibilities moving forward to save time rather than waiting for communication from USDA. This would enable states to respond efficiently to natural disasters or other emergencies and allow CACFP programs to continue serving food to young children in need.
Further, directors reported the need for permanent waiver allowances to enable programs to continue feeding children during ECE program closures, evenings, weekends, or holidays because of concerns about children not receiving enough food at home. Directors felt that they finally learned how to implement noncongregate meals efficiently, enabling them to feed children who could not attend. Subtheme: Continued flexibility from USDA so states can adjust meal patterns and monitoring requirements to their specific needs. States also reported the need for continued flexibility from USDA to respond to and adjust requirements to meet each state's unique needs. For example, states with large rural populations are spending more time and incurring extra costs to drive to remote ECE programs for routine monitoring when the option of virtual monitoring could be just as efficient. Another state CACFP director explained, I think the continued flexibility from USDA has been so helpful and allowing us as a state agency to work with our sponsors for what works best for them, rather than USDA prescribing what you have to do. They understand that every state is different, and every region is different, and being able to have that flexibility to work with the sponsors as needed and having USDA be willing to grant flexibilities when needed, is really, really helpful. Increase CACFP enrollment and reduce the financial burden on child care. Subtheme: Support child care providers to leverage funding through the state or other sources and change the financial viability standards. Directors also reported a critical need to increase CACFP program enrollment and reduce the financial burden on ECE programs. Suggested strategies included having state CACFP agencies support ECE providers to leverage funding through the state or developing a repository of funding sources for ECE programs that could apply. 
Furthermore, states reported the need to change the financial viability standards, given the concern that several programs would fail to meet the current standards following the financial repercussions of the COVID-19 pandemic. To participate in CACFP, an ECE must demonstrate that It has adequate financial resources to operate the CACFP on a daily basis, has adequate sources of funds to continue to pay employees and suppliers during periods of temporary interruptions in Program payments and/or to pay debts when fiscal claims have been assessed against the institution, and can document financial viability (for example, through audits, financial statements, etc). 25 State directors were concerned that ECE programs would not meet the financial viability standards given the reduced child enrollment and consequential loss of income. One state CACFP director said, [Financial viability] is something that we anticipate as a future challenge because we're tasked with assessing their financial viability on an annual basis. We're really concerned that next year when we do that, their financials from this year period are not going to reflect viability. Subtheme: For-profit centers may need lower eligibility requirements to continue to participate in CACFP. Finally, states reported that for-profit centers needed lower eligibility requirements. Several for-profit ECE programs were no longer eligible for CACFP because of closures, reduced enrollment, and state mandates limiting capacity. For example, state directors reported that several of their for-profit ECE programs experienced reduced enrollment of children. When children from low-income families were not attending the ECE program, it reduced the program's percentage of children that met the CACFP income eligibility guidelines. Consequentially, these ECE programs were no longer eligible for CACFP because they did not meet CACFP requirements for enrollment of children from low-income households. 
DISCUSSION State CACFP directors reported that the waivers helped ECE programs continue feeding children during the COVID-19 pandemic. This is consistent with a previous study whereby ECE programs participating in CACFP in Arizona and Pennsylvania were more likely to offer noncongregate meals or meal delivery to families unable to attend during COVID-19 than non-CACFP sites, 26 which was a key flexibility provided by the waivers. Although several waivers were available, directors mentioned that 4 specific waivers, the meal pattern, noncongregate feeding, parent/guardian meal pick-up, and monitoring waivers, were the most used and helpful in feeding children during the pandemic. Directors reported how several of these waivers had to be used in conjunction with others. Combining waivers, such as the noncongregate feeding and parent/guardian meal pick-up waivers, could help increase state CACFP directors' efficiency in approving waivers. This solution could also reduce confusion and paperwork for both ECE programs and state agencies. Overall, the CACFP state directors were consistent in their perspectives about waiver usage, benefits, challenges, and policy implications for USDA. Commonly reported challenges for waiver implementation included concern over meal pattern integrity and limited capacity in ECE programs to provide noncongregate meals while maintaining food safety. Other problems included verification of enrollment and preventing accidental duplication of services between child nutrition programs. A previous study conducted with food service staff, superintendents, and community partners of school-aged children reported similar challenges in ensuring that food delivered via noncongregate feeding was safe. 27 These challenges indicate a need for increased training and resources to develop and ensure safe food delivery systems across child nutrition programs.
Although directors reported several challenges with implementing the waivers, they also shared effective solutions that helped them overcome these challenges. For example, directors reported that ECE programs provided meals by offering products in bulk and using bus routes to deliver meals. In another emergency in which children cannot congregate to receive meals, child nutrition programs can leverage existing infrastructure for meal deliveries and offer items in bulk. 27 State CACFP directors were also concerned about ensuring integrity to the CACFP meal patterns while implementing the meal pattern flexibility waiver. Research has established that participation in CACFP improves the nutritional quality of foods and beverages served in ECE settings and is associated with fewer barriers to serving healthy foods. 3 A previous study conducted with ECE providers found that meeting the meal pattern requirement, especially at the beginning of the pandemic, was challenging given the food shortages. 28 Because CACFP participation benefits nutritional quality, CACFP directors viewed adherence to the CACFP meal pattern as essential. Several factors impacted the ability of providers to follow the mealtime requirements, including food shortages of whole grains and dairy and the limited capacity for staffing, packaging, delivery, and storage space. However, there is a need to better understand the level of regulation and monitoring necessary for child care programs to adhere to the meal pattern requirements to ensure the healthfulness of foods. State CACFP directors identified unique areas in which they perceived a critical need for more support or policy changes. Although the need for timely and streamlined communication from USDA regarding waiver availability and tracking was uniquely reported by the present research, directors' suggestions regarding more financial support for ECEs echo recommendations from previous research.
For example, Kuhns and Adams 29 reported that ECE programs that remained open and ECE programs that closed but continued to receive funding through public programs or philanthropy could continue feeding children during COVID-19 through grab-and-go meals. Conversely, ECE programs that closed and did not receive external funding were less likely to provide meals for children. 29 Early care and education program closures, whether because of state mandates or financial strains, leave a gap in service for families who rely on these programs for food. However, ensuring these programs have the funding and support to continue providing meals could help close the gap in food access. Funding and support for ECE programs could come from state or local governments. In addition, directors reported that families with children of multiple ages were concerned about not getting enough food for all their children because they were only receiving meals for their children enrolled in ECE, whereas SFSP offered meals to all children aged < 18 years. 30 Furthermore, neither of these programs provided meals to parents or guardians. Increased coordination and communication across various nutrition assistance programs and food resources could have helped families access these resources more efficiently, regardless of their child's age. State CACFP directors also reported the need for more flexibility in the program, especially during times of emergency. The flexibility to swiftly transition between normal and emergency operations could be useful beyond a pandemic. 
For example, if a storm or other situation arose that prevented children from attending ECE programs or prevented ECE programs from serving meals that met all meal pattern requirements because of food shortages, having infrastructure and protocols in place that allowed individual states to determine if there were a need to use emergency waivers would allow states and CACFP-participating ECE programs to quickly respond and ensure there was no gap in meals for young children. In addition, state CACFP directors reported the need for CACFP to provide meals for children outside of scheduled ECE program hours, such as evenings, weekends, and holidays. The extension of CACFP services could help children experiencing food insecurity receive a continuous supply of healthful foods between the ECE and home settings. Schools across the US have integrated weekend feeding or backpack programs that provide food to children over the weekends. 31 Weekend feeding programs are often provided by nonprofit organizations and food banks and have implications for improving academic performance in school-age students. 32,33 Integrating such programs through CACFP in ECE settings could further support low-income children who do not have access to healthful meals when they cannot attend ECE. This study had some limitations. First, this study included the perspectives of state-level CACFP directors for 21 US states, so the findings may not be transferable to other states. However, there was representation of at least 1 state from each region of the US. Another limitation was the semistructured interview process, introducing social desirability bias from the state CACFP directors, whereby directors who felt their state had successfully provided meals to children may have been more likely to participate. Finally, state CACFP directors opted to participate, increasing the risk of self-selection bias.
IMPLICATIONS FOR RESEARCH AND PRACTICE Child and Adult Care Food Program directors reported that the waivers were valuable for ensuring the continuity of healthy meals distributed to young children in ECE. Further research is required to explore whether increased coordination and communication across nutrition assistance programs could have helped families access food resources more efficiently. In addition, research is needed to better understand the regulation and monitoring of meal pattern requirement adherence during times of emergency to ensure that children continue to receive healthy foods. Additional research is needed to explore CACFP perspectives on effectively feeding children during emergencies at the federal, CACFP sponsor, program, and parent/guardian levels. Finally, research is needed to explore how state CACFP characteristics, such as rurality, racial demographics, or prevalence of low-income children attending ECE programs, impacted state CACFP programs' ability to continue feeding children during the COVID-19 pandemic. Although some state directors reported challenges in implementing waivers, others shared novel workarounds and success stories in implementing the waivers. For future emergencies and to improve the CACFP program, USDA can consider including suggestions to overcome commonly reported challenges for successful waiver implementation. Specific considerations to continue feeding children in ECE settings include implementing standing waivers for use during emergencies, permanent waiver flexibilities to feed children when they cannot attend ECE programs, continued flexibility to adjust meal pattern requirements to meet specific state needs, and reducing financial viability standards for CACFP participation.
Taken together, the need for continued funding and support for ECE programs to operate during emergencies, increased coordination and communication across various nutrition assistance programs, and increased flexibility for state CACFP agencies to respond to emergencies and provide nutritious foods for children when they cannot attend ECE carries strong implications for policy change. Addressing these changes through a policy such as the Child Nutrition Reauthorization Act 34 can positively affect CACFP operations and improve access to nutritious foods for young children across the US. Future research is needed to examine the impact of this policy and programmatic recommendations for improving waiver implementation, increasing CACFP enrollment, and feeding young children in ECE.
Numerical Solution of the Heat Transfer Equation Coupled with the Darcy Flow Using the Finite Element Method

The finite element approach was utilized in this study to solve numerically the two-dimensional time-dependent heat transfer equation coupled with the Darcy flow. The Picard-Lindelöf Theorem was used to prove the existence and uniqueness of the solution. The prior and posterior error estimates are then derived for the numerical scheme. Numerical examples were provided to show the effectiveness of the theoretical results. The essential code development in this study was done using MATLAB computer simulation.

Introduction

Let Ω ⊂ ℝ² be an open, bounded, simply connected domain in ℝ² with a continuous Lipschitz boundary ∂Ω. The equation of heat transfer coupled with the Darcy flow is based on heat transfer in a porous medium [1]. The governing equations for the transfer of heat together with the Darcy flow are stated as a system of partial differential equations in which h and T are the unknowns and q = −K·∇h is the Darcy velocity. Such a problem is often investigated using the partial differential equation framework, and exact methods such as separation of variables, change of variables, and transformations are commonly employed to find exact solutions in a particular domain. However, these approaches cannot be used when dealing with nonhomogeneous initial or boundary conditions, nonlinear partial differential equations, or irregular domains, or when resources are limited. As a result, many numerical approaches are utilized to obtain approximate solutions of the equations arising in the problems they describe. Many papers have been written on the subject of heat convection in liquid media, in which the motion is described by the Navier-Stokes equations in combination with the heat equation [2][3][4][5][6][7][8].
An alternative model, coupling Darcy flow with the heat equation under constant viscosity but with an external force varying as a function of temperature, has been studied in [9,10] and discretized with a spectral method. Ganapathy and Mohan [11] studied the mechanics of the steady-state evolution of fluid motion and heat transfer produced by a temperature difference in a given geometry, under the assumption that the Darcy flow model was valid. They discovered that the motion of the fluid on the free surface behaves similarly to axial flow. He et al. [12] built, analyzed, and compared the governing equations of porous media under the effects of external energy sources. An analog simulation was carried out on the model, created with the COMSOL multiphysics software, taking into consideration the effect of various physical and geometrical characteristics of the porous medium. The simulation shows that when external energy sources are present, the porous media are pushed forward along the transmission direction of the external energy in the phase change range at the macro level. Teskeredžić et al. [13] implemented a numerical technique for fluid flow, heat transfer, and stress analysis in phase change problems. The approach relies on numerical meshes made up of cells of arbitrary polyhedral shape, solving the integral-form equations governing momentum, mass, and energy balance. Ahmad et al. [14] looked at numerically solving the nonlinear differential equations for heat transmission in micropolar fluids across a stretching domain. With proper consideration of micropolar fluid theory, that study delivers realistic and distinct results. El-Hakiem et al. [15] solved the problem of hydromagnetic dispersion in non-Darcy mixed-convection heat and mass transfer from a vertical surface embedded in a porous medium using a similarity solution.
Furthermore, Mansour and El-Shaer [16] investigated the effect of a magnetic field on non-Darcy axisymmetric free convection in a power-law fluid-saturated porous medium with variable permeability. Motivated by the preceding findings, the finite element method is introduced to solve numerically the governing equations (1)-(2). The finite element technique is a computational method for obtaining approximate solutions to partial differential equations (PDEs).

Mathematical Model Analysis

2.1. Initial Boundary Condition and Basic Principles. Equations (1)-(2) are to be solved by specifying an initial condition as well as boundary conditions, where T₀, h₀, p, q, r, and s are assigned functions and {∂Ω_D, ∂Ω_N} specifies a boundary partition, that is, ∂Ω_D ∪ ∂Ω_N = ∂Ω and ∂Ω_D ∩ ∂Ω_N = ∅. Here ∂Ω_N is the Neumann boundary and ∂Ω_D is the Dirichlet boundary. An integral formulation, with the solution belonging to the Sobolev space H¹(Ω) and the underlying space L²(Ω), is utilized, since the problem cannot be expressed pointwise in terms of second derivatives. The space L²(Ω) has the standard definition [17], the Sobolev space H¹(Ω) is defined in the usual way, and H¹₀(Ω) denotes the subspace of functions with vanishing boundary values. Sobolev's imbedding is used: for all real numbers p ≥ 1, there exist constants S⁰_p and S_p such that the imbedding inequality (10) holds; equation (10) simplifies to Poincaré's inequality when p = 2.

Weak Formulation. In order to find the numerical solution of the governing equations (1)-(2) using the finite element method, a variational formulation is introduced. This proceeds formally by multiplying the governing equations (1)-(2), for each t > 0, by a basis function v = v(x, y) and integrating over Ω.
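To make the weak-form integrals concrete, here is a small illustrative sketch (not from the paper): the local mass matrix ∫ ϕ_j ϕ_i over the reference triangle for linear Lagrange basis functions, the same 3 × 3 object that appears later among the element matrices. The edge-midpoint quadrature rule is exact for this quadratic integrand, so it reproduces the known analytic result (Area/12)(1 + δ_ij) exactly.

```python
import numpy as np

# Illustrative sketch (not from the paper): the local mass matrix
# m_ji = integral of phi_j * phi_i over the reference triangle with
# vertices (0,0), (1,0), (0,1), area 1/2, for linear (P1) hat functions.
# The edge-midpoint quadrature rule is exact for quadratic integrands,
# so it reproduces the analytic result (Area/12) * (1 + delta_ij).

area = 0.5
phi = [lambda x, y: 1.0 - x - y,   # hat function at vertex (0,0)
       lambda x, y: x,             # hat function at vertex (1,0)
       lambda x, y: y]             # hat function at vertex (0,1)
mids = [(0.5, 0.0), (0.5, 0.5), (0.0, 0.5)]   # edge midpoints

M = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        M[i, j] = (area / 3.0) * sum(phi[i](*p) * phi[j](*p) for p in mids)

M_exact = (area / 12.0) * (np.ones((3, 3)) + np.eye(3))
print(np.max(np.abs(M - M_exact)))   # essentially zero: the rule is exact here
```

The same loop with ∇ϕ_j · ∇ϕ_i in place of ϕ_j ϕ_i yields the local stiffness matrix discussed below.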
V = H¹(Ω) is set, and h(t), T(t) ∈ V are sought for each t > 0 satisfying the weak equations (11) and (12). By putting the Darcy law, that is, q = −K∇h, into (11) and (12), we obtain equations (13) and (14). Then, by using Green's theorem stated in [18], equations (13) and (14) can be rewritten as (15) and (16). In (16), the basis functions are taken to be linear basis functions. Bringing (19) and (20) into (15) and (16), respectively, we obtain the variational formulation (23) and (24). Now, considering the Galerkin approximation of the variational formulation (23) and (24) for all t > 0, let V_l be the space of Lagrange finite elements of degree r, vanishing on the boundary, with respect to a mesh of mesh size l, and define the approximate solutions h_l(t), T_l(t) ∈ V_l with T_l(0) = T_0l and h_l(0) = h_0l, where V_l ⊂ V is an appropriate finite-dimensional space and T_0l and h_0l are convenient approximations of T_0 and h_0 in the space V_l, respectively. The weak formulation (25) and (26) is called the semidiscretization of (13) and (14).

Abstract and Applied Analysis

We now solve (26) using the finite element technique. Since H¹(Ω) is separable, it has a countable basis {ϕ_i}_{i≥1}. Let V_l denote the space spanned by the first l basis functions, {ϕ_i}_{1≤i≤l}. The reduced problem (26) is discretized in V_l by a square system of equations: find T_l = Σ_{1≤i≤l} T_i ϕ_i ∈ V_l, the solution of (27). The Picard-Lindelöf Theorem [19] guarantees the existence and uniqueness of the solution of the variational formulation (27); under the hypotheses of the theorem, there exists a unique solution T ∈ L²((0, T_f), V) of the initial value problem. The detail of the proof is found in [19]. Condition (28) is frequently referred to as positivity or coercivity, while (29) is called boundedness or continuity.

2.4. Coercivity/Positivity. We need to constrain the flow velocities to guarantee that b(v, v) is positive.
That is, we require bounded velocities, assuming the bound ‖q‖_∞ < ∞. Using the Poincaré inequality, ‖v‖_{H¹₀} ≤ C_Ω ‖v‖_{H¹}, where C_Ω is some constant [19], the coercivity condition follows.

Continuity/Boundedness. The second condition, boundedness, follows by applying the Cauchy-Schwarz inequality, with μ₂ = ((1/Pe) + C_Ω ‖q‖_∞) > 0. Hence, the continuity condition is satisfied. Therefore, the solution of the weak formulation of our problem exists and is unique.

Approximate Solution. Let e denote the element number in a region Ω. To provide an algebraic interpretation of (25) and (26), a basis ϕ_j is introduced for V_l, and it is observed that it suffices to test (25) against each basis function, where

(i) M^e is the 3 × 3 matrix whose elements are given by m^e_ji = ∫_{Ω_e} ϕ_j ϕ_i dΩ_e;
(ii) A^e is the 3 × 3 matrix whose elements are obtained from (19) as a^e_ji = ∫_{Ω_e} K ∇ϕ_j · ∇ϕ_i dΩ_e;
(iii) B^e is the 3 × 3 matrix whose elements are obtained from (20) as b^e_ji = ∫_{Ω_e} q · ∇ϕ_j ϕ_i dΩ_e + (1/Pe) ∫_{Ω_e} ∇ϕ_j · ∇ϕ_i dΩ_e.

The matrix A^e is called the local stiffness matrix, and M^e is called the local mass matrix. Now, by the assembling process, (37) and (38) are used to get the overall approximate solution; after some algebraic manipulation the system can be put into the form (39)-(40), in which h = [h₁, h₂, ⋯, h_N]^T, T = [T₁, T₂, ⋯, T_N]^T, the N × N matrix M is the global mass matrix, the N × N matrix A is the global stiffness matrix, and the vectors g and r are the global force vectors. Here, N denotes the total number of nodes in the problem as a whole, and the components of h and T are now labeled by their global node numbers. For the numerical solution of the ODE system (39) and (40), many methods are available; of these, the backward Euler method is used, which is first-order accurate with respect to Δt = t_{k+1} − t_k. Before analyzing the weak formulation (26), its convergence is analyzed, since it is less involved.
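The backward Euler step described above can be sketched as follows. This is an illustrative stand-in, not the paper's MATLAB code: the matrices below are small hypothetical ones, not assembled from a real mesh.

```python
import numpy as np

# Sketch of the backward Euler time stepping for a semi-discrete FEM system
# of the form  M dT/dt + A T = r  (cf. the ODE system (39)-(40)).  Each step
# solves the linear system  (M + dt*A) T^{k+1} = M T^k + dt*r.
# M, A, r, T0 here are hypothetical stand-ins for the assembled global matrices.

def backward_euler(M, A, r, T0, dt, nsteps):
    """Advance M dT/dt + A T = r with the implicit (backward) Euler method."""
    T = np.asarray(T0, dtype=float).copy()
    lhs = M + dt * A          # constant system matrix; factor once in real codes
    for _ in range(nsteps):
        T = np.linalg.solve(lhs, M @ T + dt * r)
    return T

# Toy system with M = A = I and r = 0, whose exact solution is T(t) = exp(-t) T0,
# so the first-order accuracy in dt can be observed directly.
M, A, r = np.eye(2), np.eye(2), np.zeros(2)
T0 = np.array([1.0, 2.0])
T_num = backward_euler(M, A, r, T0, dt=0.001, nsteps=1000)   # integrate to t = 1
print(np.max(np.abs(T_num - np.exp(-1.0) * T0)))             # small, O(dt)
```

In a production code the factorization of M + Δt·A would be reused across steps; the unconditional stability of the implicit method is what permits the relatively large Δt = 0.1 used in the numerical tests below.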
The key to the analysis of the weak formulation is to compare T_l not directly to T, but rather to an appropriate representative w_l ∈ C¹([0, T_f], V_l). For w_l, we choose the elliptic projection of T, defined by (42).

Error Estimates. From [20], by the finite element theory for elliptic problems, we have the L² estimate (43) for some constant p. If we differentiate (42), we see that ∂w_l/∂t is the elliptic projection of ∂T/∂t. Let y_l = w_l − T_l. Subtracting (26) from (45), we get (46). Now, for each t, we choose v = y_l(t) ∈ V_l, noting that the resulting identity holds for any function y ∈ C¹([0, T]; L²(Ω)). Thus, we get (48). Canceling the same expression on both sides of (48), we obtain (49), where T = kM and E is the consistency error. In this way, we prove the error bound (51).

Numerical Tests

We have carried out two test problems to demonstrate the performance of the given algorithm. The accuracy of the method is measured by the error norm L_∞ = ‖T_exact − T_numeric‖_{L_∞}. The demonstration problem is a 2D heat equation over a rectangular region; that is, finding the temperature in the plate as a function of time and position using the finite element method. The heat equation used for demonstration in this section is T_t − α∇²T = f (67), where α is the thermal diffusivity and f is the heat generation or source function.

Case 1: 2D Heat Equation Whose Exact Solution Is Nonlinear. Assume that the exact solution of (67) is

T(x, y, t) = t y sin(πx) cos((π/2)y) + y sin(πx) cos((π/2)y), 0 ≤ x, y ≤ 1, t ≥ 0. (68)

By using (68), we find the value of f in (67). The initial boundary value problem (strong formulation) is then posed with the boundary condition T = 0 on the boundary ∂Ω and the initial condition T(x, y, 0) = y sin(πx) cos((π/2)y) on Ω, where

f = y sin(πx) cos((π/2)y) [1 + α(π² + π²/4)(t + 1)] + απ(t + 1) sin(πx) sin((π/2)y).

The numerical result is compared with the exact result for different values of time and number of collocation points.
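The manufactured solution of Case 1 can be checked numerically. Assuming the convention T_t − α(T_xx + T_yy) = f, the closed-form source term below follows by differentiating the exact solution (68); central finite differences then confirm the identity at a sample interior point.

```python
import numpy as np

# Consistency check of the Case 1 manufactured solution, assuming the heat
# equation convention  T_t - alpha * (T_xx + T_yy) = f.  The closed-form f
# below is obtained by differentiating the exact solution
# T(x, y, t) = (t + 1) y sin(pi x) cos((pi/2) y).

alpha = 1.0

def T(x, y, t):
    return (t + 1.0) * y * np.sin(np.pi * x) * np.cos(0.5 * np.pi * y)

def f(x, y, t):
    sx = np.sin(np.pi * x)
    cy = np.cos(0.5 * np.pi * y)
    sy = np.sin(0.5 * np.pi * y)
    return (y * sx * cy * (1.0 + alpha * (np.pi**2 + np.pi**2 / 4.0) * (t + 1.0))
            + alpha * np.pi * (t + 1.0) * sx * sy)

# Central differences approximate T_t and the Laplacian at an interior point.
h = 1e-4
x, y, t = 0.3, 0.4, 0.7
Tt  = (T(x, y, t + h) - T(x, y, t - h)) / (2.0 * h)
Txx = (T(x + h, y, t) - 2.0 * T(x, y, t) + T(x - h, y, t)) / h**2
Tyy = (T(x, y + h, t) - 2.0 * T(x, y, t) + T(x, y - h, t)) / h**2
residual = Tt - alpha * (Txx + Tyy) - f(x, y, t)
print(abs(residual))   # limited only by finite-difference truncation/rounding
```

The same check also confirms that T vanishes on the boundary, e.g. at y = 1 where cos((π/2)y) = 0.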
To demonstrate the efficiency of the method, the absolute errors at some arbitrary points are reported in Table 1. To obtain the numerical results, MATLAB software is used. The equation is solved for Δt = 0.1, α = 1, and N = 10. The numerical and exact solutions, and the absolute error between them, are shown in Figure 1. As seen in Table 1, the reported absolute errors are as expected, that is, of order O(Δt + l²).

Case 2: 2D Heat Equation Whose Exact Solution Is Linear. Assume that the exact solution of (67) is T(x, y, t) = x + y + t (72). By using (72), the value of f in (67) is f = 1. The initial boundary value problem (strong formulation) is then posed with the boundary conditions T(0, y, t) = y + t, T(1, y, t) = 1 + y + t, T(x, 0, t) = x + t, and T(x, 1, t) = 1 + x + t on the boundary ∂Ω, and the initial condition T(x, y, 0) = x + y on Ω, where f = 1. The numerical and exact solutions, and the absolute error between them, for example 2 are shown in Figure 2. The numerical result is compared with the exact result for different values of time and numbers of collocation points. To demonstrate the efficiency of the method, the absolute errors at some arbitrary points are reported in Table 2. The equation is again solved for Δt = 0.1, α = 1, and N = 10. As seen in Table 2, the reported absolute errors are as expected, that is, of order O(Δt + l²). Hence, from Tables 1 and 2, it is concluded that the absolute error between the exact solution and the FEM solution is of order O(Δt + l²), indicating that the finite element method is well suited to solving such differential equations numerically on general geometries.

Conclusion

In this study, a mathematical model of a two-dimensional heat transfer equation coupled with the Darcy flow has been presented.
The governing equations of the mathematical model form a system of partial differential equations and are solved using the finite element technique. After the finite element method was applied, the governing equations were discretized into a set of ordinary differential equations. The backward Euler method was then applied to find the numerical solution of this set of ordinary differential equations. The method was tested on two-dimensional time-dependent heat transfer in a plate, for cases in which the exact solution is nonlinear or linear. A convergence result is proven, and numerical examples are provided to illustrate the convergence behavior of the approximation generated by the finite element method. Further, the prior and posterior error estimates are derived for the numerical scheme.

Data Availability

No data is available for this research.

Conflicts of Interest

The author declares that he has no conflicts of interest.
Two-Loop N_F = 1 QED Bhabha Scattering: Soft Emission and Numerical Evaluation of the Differential Cross-section

Recently, we evaluated the virtual cross-section for Bhabha scattering in pure QED, up to corrections of order alpha^4 (N_F = 1). This calculation is valid for arbitrary values of the squared center of mass energy s and momentum transfer t; the electron and positron mass m was considered a finite, non-vanishing quantity. In the present work, we supplement the previous calculation by considering the contribution of the soft photon emission diagrams to the differential cross-section, up to and including terms of order alpha^4 (N_F = 1). Adding the contribution of the real corrections to the renormalized virtual ones, we obtain a UV- and IR-finite differential cross-section; we evaluate this quantity numerically for a significant set of values of the squared center of mass energy s.

Introduction

The relevance of the Bhabha scattering process (e+ e− → e+ e−) in the study of the phenomenology of particle physics can hardly be overestimated. This is due to the fact that Bhabha scattering is the process employed to determine the luminosity of all present e+ e− colliders, at both high (∼ 100 GeV) and intermediate (∼ 1−10 GeV) energies, as well as at a future linear collider. For this reason, Bhabha scattering has been studied in great detail within the context of the electroweak Standard Model (we refer the interested reader to [1] and references therein). In recent years, it was pointed out that, while the one-loop radiative corrections to Bhabha scattering and the corresponding real emission corrections were well known within the full electroweak Standard Model [2,3], the two-loop corrections had not been calculated even in pure QED, although a large amount of work was devoted to the study of the contributions enhanced by factors of ln(s/m_e²) [4].
The reason for this situation was identified as the technical problem of calculating the necessary two-loop box diagrams. Several groups started to work on different aspects of the problem. In [5], the two-loop QED virtual cross-section in the limit of zero electron mass was calculated, while in [6] the IR divergent structure of that result was carefully studied. A very interesting and ambitious project aiming at the complete evaluation of the Bhabha scattering cross-section in QED, without neglecting the electron mass m, was presented in [7]. Some of the necessary Feynman diagrams were calculated in [8,9,10]. By employing the results of [8,9], it was possible to complete the calculation of the virtual Bhabha scattering unpolarized differential cross-section, including the contribution of two-loop graphs involving a closed fermion loop (conventionally indicated as corrections of order α 4 (N F = 1), where α is the fine structure constant) [11]. The cross-section presented in [11] is valid for arbitrary values of the squared center of mass energy s and momentum transfer t. The full dependence of the cross-section on the electron and positron mass m was retained. In calculating the Feynman diagrams, both UV and IR divergences were regularized with the continuous D-dimensional regularization scheme [12], while the diagrams were evaluated analytically in [8,9] by means of the Laporta algorithm [13] and the differential equations technique for the evaluation of the master integrals [14]. In [11], the renormalization program was carried out in the on-shell scheme; therefore the final result, expressed in terms of 1- and 2-dimensional harmonic polylogarithms (HPLs, 2dHPLs) [15] of maximum weight 3, is free from UV divergences. However, the cross-section of [11] still includes IR divergent terms that appear as a pole in (D − 4).
As is well known, the IR divergent term in the cross-section of [11] cancels if one adds to it the contribution of events of the type e+ e− → e+ e− γ, in the limit in which the photon in the final state carries an energy which is small with respect to the squared center of mass energy. Such events are commonly referred to as soft photon events, and their contributions to the cross-section are known as real (as opposed to virtual) corrections. The purpose of the present paper is to calculate the contribution of the soft radiation diagrams (up to and including order α 4 (N F = 1)) to the Bhabha scattering differential cross-section, in order to verify that the IR divergent terms present in this contribution cancel against the IR divergent terms in the cross-section in [11], as well as to ultimately obtain the IR finite differential cross-section by adding the virtual and real corrections. In calculating the contribution of the soft photon emission diagrams to the cross-section, we integrated over the soft photon phase space, taking into consideration the emission of photons with an energy smaller than a certain energy threshold ω. This threshold is supposed to be small with respect to the beam energy E. In other words, we assumed that the idealized detector, to be used to measure the differential cross-section we calculated, can tag all the events in which one photon appears in the final state, provided that the photon carries an energy larger than the chosen threshold. We also assumed that all the events in which a photon with an energy smaller than the threshold appears in the final state are, for the idealized detector, indistinguishable from the Bhabha scattering events without photon emission. This is the standard textbook approach to the calculation of the real corrections to a given cross-section; admittedly, it is not enough to consider cross-sections measured in realistic experiments, where other aspects must be taken into account (e.g.,
hard bremsstrahlung effects, detector geometry). Very often, the experimental setup is so complex that the only effective tool to obtain a realistic cross-section is the Monte Carlo method. Nevertheless, we thought it useful to calculate the soft photon emission following this approach, in order to diagrammatically show how the cancellation of the IR divergences works, as well as to provide a benchmark for future, more realistic numerical calculations. Instead of presenting the lengthy analytical expression of the IR finite differential cross-section, we preferred to implement computer codes that evaluate the differential cross-section at order α 4 (N F = 1) for arbitrary values of the beam energy and scattering angle in the center of mass, E and θ, respectively. We have found that the corrections at order α 4 (N F = 1) are positive and very small with respect to the corrections of order α 3 (which are negative), so that their constructive contribution is strongly suppressed in the energy range of interest at present and future colliders. The relative weight of the corrections of order α 3 and α 4 (N F = 1) does increase in magnitude with the beam energy and, at a given energy, with the scattering angle. In order to check our numerical consistency, we reproduced the results in [2] and [3] relevant for our purposes. Our routines for the numerical evaluation of the Bhabha scattering cross-section up to corrections of order α 4 (N F = 1), written both in Mathematica [17] and in Fortran77, can be obtained from the authors [18]. The paper is organized as follows: in Section 2, we discuss the calculation of the soft real corrections to the Bhabha scattering differential cross-section 1 , up to and including terms of order α 4 (N F = 1). In addition, we investigate the cancellation of the IR divergent terms of the Bhabha scattering virtual cross-section calculated in [11] against the IR divergent terms originating from the soft photon emission diagrams considered here.
In Section 3, we present the numerical results obtained by evaluating the IR finite Bhabha scattering cross-section (given by the sum of virtual and soft corrections up to terms of order α 4 (N F = 1)), for different significant choices of the beam energy E. Also, we compare the complete cross-section with its expansion in powers of the electron mass. We find that the first term in the expansion approximates sufficiently well the numerical value of the complete cross-section for all the beam energies relevant at present and future colliders. In Section 4, we present our conclusions. Finally, two appendices include the expressions of the integrals occurring in the evaluation of the soft radiative corrections relevant to our calculation, as well as the leading terms in the expansion in the limit m → 0 of the Bhabha scattering differential cross-section we studied.

The Real Corrections

In this Section, we discuss the calculation of the real corrections, due to the emission of a soft photon, to the Bhabha scattering unpolarized differential cross-section in pure QED. In particular, we obtain the real corrections of order α 3 and α 4 (N F = 1). In both cases, we must consider events involving a single soft photon in the final state, where p 1 , p 2 , p 3 , p 4 , and k are the momenta carried by the incoming electron, incoming positron, outgoing electron, outgoing positron, and outgoing soft photon, respectively. All of the particles in the initial and final states are on-shell, so that p_i² = −m² (i = 1, …, 4) and k² = 0. Therefore, we introduce the quantities s and s′ defined as follows: By definition, the soft photon approximation consists of neglecting k in the numerator of the scattering amplitude and of setting s′ = s everywhere.
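The soft-limit kinematics just described can be made concrete with a brief sketch. These are the standard elastic relations for equal-mass 2 → 2 scattering; the paper's specific equations are elided in this extraction, but are of this form, and they satisfy the constraint s + t + u = 4m².

```python
import math

# Soft-photon-limit kinematics sketch (s' = s): the standard relations linking
# the Mandelstam invariants to the beam energy E and the center-of-mass
# scattering angle theta for elastic e+ e- -> e+ e-.  The electron mass value
# is approximate and used only for illustration.
m = 0.000511  # electron mass in GeV (approximate)

def mandelstam(E, theta):
    s = 4.0 * E**2
    t = -(s - 4.0 * m**2) * math.sin(theta / 2.0)**2
    u = -(s - 4.0 * m**2) * math.cos(theta / 2.0)**2
    return s, t, u

s, t, u = mandelstam(E=22.0, theta=math.pi / 2)   # one of the benchmark energies
print(s + t + u - 4.0 * m**2)   # zero up to rounding: s + t + u = 4 m^2
```

Note that t, u ≤ 0 and s ≥ 4m² automatically, which is the physical region quoted below for the dimensionless variables x, y, and z.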
In this approximation, the kinematical relations that link the Mandelstam invariants s, t, and u to the beam energy (E) and the scattering angle in the center of mass frame (θ) take the standard elastic form. In the following, we often employ the dimensionless variables x, y, and z, related to the Mandelstam invariants s, t, and u by relations valid in the physical region s ≥ 4m², t, u ≤ 0. The complete Bhabha scattering differential cross-section up to order α 4 (N F = 1) can be written as a sum in which σ₀(s, t, m²) is the tree-level (Born) cross-section and σ_i^T(s, t, m²) (i = 1, 2) are the sums of the virtual and real corrections at order α 3 and α 4 (N F = 1), respectively.

Real Corrections at Order α 3

We first consider the order α 3 contribution to the cross-section, where the superscripts V and S stand for "virtual" and "soft". The one-loop virtual cross-section dσ₁^V/dΩ can be found in Eq. (67) of [11]; we devote the remaining part of this subsection to the calculation of dσ₁^S/dΩ. The diagrams contributing to the real corrections to the Bhabha scattering cross-section at order α 3 are shown in Fig. 1; the real photon can be emitted by any of the incoming or outgoing fermion lines of the s- and t-channel Bhabha scattering tree-level diagrams. At this stage, it is convenient to introduce an auxiliary quantity in which dσ₀^D/dΩ is the Born Bhabha scattering cross-section obtained by calculating the traces over the Dirac indices in D dimensions. The contribution of the s- and t-channel diagrams and of their interference to the r.h.s. of Eqs. (14, 15) is explicit.

Figure 1: Diagrams contributing to the real corrections at order α 3 .

It is then straightforward to show that, in the soft photon approximation, the contribution of the diagrams in Fig.
1 to the unpolarized differential cross-section is given in terms of the IR divergent quantity J ij , with ǫ i = +1 for i = 1, 4 and ǫ i = −1 for i = 2, 3, and where I ij indicates the soft integral of Eq. (18). In Eq. (18), D is the dimensional regulator; furthermore, the superscript on the integral sign indicates that the integration should be taken over the region |k| = k⁰ < ω, with ω representing the cut-off on the unobserved soft-photon energy. The integral in Eq. (18) can be evaluated according to the standard technique discussed in detail 2 in Ref. [16]. It is important to observe that the integrals I ij depend only on the scalar product p i · p j (aside from an obvious dependence on E and m), so that I ij = I ji , I 11 = I 22 = I 33 = I 44 , I 12 = I 34 , I 13 = I 24 , I 14 = I 23 . Consequently, the quantities J ij satisfy the same symmetry relations, and Eq. (16) simplifies accordingly. The explicit expressions of I 1j (j = 1, …, 4), in which γ is the Euler constant and µ the 't Hooft scale, can be found in Appendix A. Our choice of the normalization constants removes, in all the divergent integrals, the finite terms associated with the use of dimensional regularization and sets the 't Hooft scale to m; this choice is consistent with the normalization employed in Ref. [11].

Figure 4: Diagrams contributing to the real corrections at order α 4 (N F = 1).

The IR pole that originates from the integral I 1j is multiplied, in Eq. (20), by the terms proportional to (D − 4) in σ₀^D; this product provides a finite contribution to σ₁^S. Terms proportional to (D − 4) in σ₁^S must be neglected. It is useful to understand how the cancellation of the IR poles works from a diagrammatic point of view. Fig. 2 and Fig. 3 schematically describe the situation. The contribution to the differential cross-section of the interference of the two diagrams shown in the first term of each line is IR divergent.
This IR divergence cancels against the contribution to the real cross-section given by the second term in each line, where the product of the two tree-level diagrams represents the contribution of their interference to the cross-section in Born's approximation. We remind the reader that the one-loop photon self-energy diagrams are IR finite.

Real Corrections at Order α 4 (N F = 1)

The order α 4 (N F = 1) contribution to the cross-section involves the two-loop virtual cross-section dσ₂^V/dΩ, which can be found in Eq. (68) of [11]. The IR divergencies present in the Bhabha scattering differential cross-section at order α 4 (N F = 1) cancel against the contribution to the real cross-section of the interference of the diagrams in Fig. 4 with the single photon emission tree-level diagrams in Fig. 1. In discussing the soft corrections to the cross-section at order α 4 (N F = 1) it is convenient to introduce the quantity of Eq. (22). The first term in the r.h.s. of Eq. (22) is the contribution to the virtual cross-section of the one-loop self-energy diagrams and corresponding counter-term diagrams, where Π₀^(1l,0) is the UV renormalized photon self-energy. The latter quantity has been discussed in detail in Section 4 of [11]. The second term in the r.h.s. of Eq. (22) involves Π₀^(1l,1), the term proportional to (D − 4) in the expansion of the renormalized photon self-energy (the explicit expression of this quantity can be found in the appendix of [11]). The contribution of the soft photon emission to the real corrections to the Bhabha scattering cross-section at order α 4 (N F = 1) is given in terms of the integrals J 1j (j = 1, · · · , 4) introduced in the previous subsection. The term proportional to (D − 4) in Eq. (22) provides a finite contribution to the real corrections in Eq. (25), since J 1j contains an IR pole. Terms proportional to (D − 4) in σ₂^S(s, t, m²) are then neglected. Figs.
5-8 explicitly show how the cancellation of the IR divergences takes place from a diagrammatic point of view: the contribution to the virtual cross-section of the interference of the diagrams in the first term of each line is IR divergent; such divergence cancels against the interference of the two diagrams in the second term multiplied by the appropriate combination of J 1j integrals.

Figure 6: Cancellation of the IR divergencies of the products of one-loop self-energy and vertex diagrams.

We observe that in the last two lines of Fig. 5, in the last line of Fig. 6, and in the second line of Fig. 7, the subtraction of the real radiation in the second term of the l.h.s. does not completely cancel the IR pole in the corresponding virtual correction (first term in the l.h.s.). A residual IR pole, proportional to ζ(2), remains. As expected, the sum of the residual poles vanishes and the cross-section is therefore IR finite.

Numerical Results

In order to numerically evaluate the Bhabha scattering cross-section up to corrections of order α 4 (N F = 1), we developed two computer codes. One of them was written in Mathematica [17], while the other was written in Fortran77. Following [3], in the numerical calculation we fixed the energy cut-off on the undetected soft photon to ω = 0.1 E, where E is the beam energy. We compared the results of the two codes for beam energies ranging from E = 0.01 GeV to E = 500 GeV, and for arbitrary choices of the scattering angle θ, finding complete agreement. In Fig. 9, we show the differential cross-section for E = 22 GeV and for 0 < θ < π. The dashed-dotted line represents the cross-section in the Born approximation (Eq. (12)), while the continuous (dashed) line represents the cross-section at order α 3 (α 4 (N F = 1)), Eqs. (13, 21). The radiative corrections at order α 3 are negative; therefore, they lower the cross-section.
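For orientation only: in the massless limit m → 0 the Born cross-section reduces to the familiar textbook form. The sketch below checks the channel-by-channel sum against the compact expression; this is the standard QED result, not the massive expression of Eq. (12) actually plotted in Fig. 9.

```python
import math

# Massless tree-level Bhabha cross-section (standard QED, illustration only;
# the paper keeps the full electron-mass dependence).  The channel-by-channel
# sum is checked against the familiar compact form
#   dsigma/dOmega = (alpha^2 / 4s) * ((3 + cos^2 theta) / (1 - cos theta))^2 .
alpha = 1.0 / 137.035999   # fine structure constant (approximate)

def dsigma_channels(s, theta):
    c = math.cos(theta)
    t = -s * (1.0 - c) / 2.0          # massless limit of the elastic relations
    u = -s * (1.0 + c) / 2.0
    A = (u**2 + s**2) / t**2 + (u**2 + t**2) / s**2 + 2.0 * u**2 / (s * t)
    return alpha**2 / (2.0 * s) * A   # t-channel, s-channel, and interference

def dsigma_compact(s, theta):
    c = math.cos(theta)
    return alpha**2 / (4.0 * s) * ((3.0 + c**2) / (1.0 - c))**2

s = (2.0 * 22.0)**2   # squared c.m. energy for E = 22 GeV beams
print(dsigma_compact(s, math.pi / 2) / dsigma_channels(s, math.pi / 2))   # ~ 1.0
```

The strong forward peaking visible in Fig. 9 comes from the (1 − cos θ)⁻² factor, i.e. from the t-channel photon exchange.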
The corrections at order α^4 (N_F = 1) are negative and very small with respect to the corrections of order α^3; they lower the differential cross-section, although, as seen in Fig. 9, the effect is difficult to appreciate graphically. The relative weight of the corrections of order α^3 and α^4 (N_F = 1) is shown in Fig. 10 and Fig. 11 for six different choices of the beam energy. In Fig. 10, we plotted the ratio of the order α^3 corrections to the tree-level cross-section: It is evident that the relative weight of the correction increases in magnitude with the beam energy and, at a given energy, with the scattering angle. A similar plot can be found in [3] for E = 22 GeV, where the full set of one-loop corrections in the Standard Model was considered. At θ = π/2, the corrections range from −9% of the Born cross-section at E = 10 MeV to −37% at E = 500 GeV. In Fig. 11, we plotted the ratio of the order α^4 (N_F = 1) corrections to the complete order α^3 cross-section: The corrections increase in magnitude with the energy and, at fixed energy, with the scattering angle. The relative weight of these corrections at θ = π/2 ranges from −0.08% of the complete order α^3 cross-section at E = 10 MeV to −4.3% at E = 500 GeV. Figs. 12 and 13 show the dependence of the cross-section on the beam energy, for small and large scattering angles respectively. Similar plots for a wider choice of angles can be found, limited to the one-loop corrections, in [2]. Using our codes, we reproduced the plots shown in [2], finding agreement. To complete the analysis of the numerical results, we expanded the analytic expression of the cross-section in the limit in which the squared electron mass is negligible with respect to the kinematic invariants s, t, and u. We define the leading terms of the cross-section in the limit m^2 → 0 through the relations: where i = 1, 2 and where the subscript "L" stands for "leading". The expressions of (dσ_Ti/dΩ)|_L can be found in Appendix B.
In Figs. 14 and 15 we plotted (for a fixed value of the scattering angle) the quantities: respectively. A glance at the figures shows that the leading terms of the cross-section approximate the complete results increasingly well as the beam energy grows. The leading terms of the cross-section fail to reproduce the complete result in the extremely forward and backward regions, where t or u becomes smaller than m^2. The codes for the numerical evaluation of the Bhabha scattering differential cross-section in pure QED up to order α^4 (N_F = 1) are available from the authors [18].

Conclusions

In the present paper, we completed the evaluation of the Bhabha scattering cross-section at order α^4 (N_F = 1) in pure QED. The calculation was performed without neglecting the electron mass m, and is valid for all physical values of the independent Mandelstam invariants s and t. The master integrals necessary for the evaluation of the virtual corrections were calculated in [8] and [9], while the UV-renormalized unpolarized differential cross-section was obtained in [11]. The calculation was completed by providing the real corrections in the approximation of soft photon emission up to order α^4 (N_F = 1), as well as by explicitly showing that the IR poles present in such corrections cancel the remaining IR poles of the virtual cross-section calculated in [11]. Finally, we developed computer codes for the numerical evaluation of the UV- and IR-finite cross-section. We compared our findings with the results present in the literature (where possible), obtaining complete agreement. We verified that the effect of the terms proportional to positive powers of the electron mass m is negligible in the energy range of interest at present and future colliders.
after the integration over the photon phase space has been carried out, the integral becomes In the equation above, the quantity ρ_ij has a particularly simple expression in terms of the dimensionless quantities x, y, and z introduced in Eqs. (7)-(9): The quantity I_1j in Eq. (31) can be expressed as a simple integral as follows: After observing that the scalar products p_1 · p_j also have a simple form in terms of x, y, and z, one finds: In addition, the quantity ΔI_1j can also be expressed in integral form: where the Lorentz vector P^μ_1j is defined by the relation P^μ_1j = p^μ_j + r ρ_1j (p^μ_1 − p^μ_j).
Rituximab With Involved Field Irradiation for Early-stage Nodal Follicular Lymphoma

Abstract

The MabThera and Involved field Radiotherapy study investigated the efficacy and safety of involved field (IF) radiotherapy in combination with the anti-CD20 antibody Rituximab for early-stage follicular lymphoma (FL) in a prospective, single-arm, multicenter phase 2 design. Eighty-five stage I-II FL patients received 8 cycles of Rituximab (375 mg/m²) and IF irradiation (30/40 Gy). The primary endpoint was progression-free survival (PFS) 2 years from treatment start. Secondary endpoints were overall survival (OS), complete response rates, toxicity, quality of life, and minimal residual disease (MRD) response, with protocol-defined visits up to month 30. For the primary endpoint, PFS at 2 years was 85% for the intention-to-treat set. Long-term data were captured in selected sites and evaluated as a post hoc analysis in the per protocol (PP) set: PFS and OS were 78% and 96% at 5 years, with a median follow-up of 66 and 78 months, respectively. There were 17/76 recurrences in the PP set, of which 14 were outside the radiation volume only. MRD analyses revealed a clonal marker in 36% of patients at diagnosis. All but 1 marker-positive patient experienced a molecular treatment response. There were 13 serious adverse events (4 related to the therapy) during the first 30 months. IF radiotherapy combined with Rituximab is well tolerated and highly efficient, with low rates of recurrence in the first years in early-stage FL. The efficacy is comparable with more aggressive therapy approaches without compromising quality of life, and is maintained over an extended follow-up of more than 5 years.

Introduction

Treatment of early-stage nodal follicular lymphoma (FL) has been debated for many years. 1 Radiation therapy is an effective treatment in this setting.
Although large radiation volumes seem to reduce the relapse rate, 2,3 the National Comprehensive Cancer Network guidelines recommend radiation treatment only of the pathologically involved regions (involved site radiation therapy), without prophylactic treatment of additional lymph node areas, predominantly due to the higher toxicity profile of large-field radiation. Since the anti-CD20 antibody Rituximab has been proven to eliminate minimal residual disease (MRD) in advanced stages [4][5][6] and enhances radiation sensitivity in vitro, 7 a combination of Rituximab and radiation limited to the involved region might solve this dilemma. The German Low-Grade Lymphoma Study Group (GLSG), in cooperation with the German working group of radiation oncology (ARO), conducted a prospective, single-arm, multicenter phase II study investigating the efficacy and toxicity of involved-field radiotherapy in combination with Rituximab treatment (MIR = MabThera and Involved field Radiotherapy) in early-stage nodal FL.

Patients

Eighty-five patients (47 males/38 females) in 16 centers were registered between February 2008 and October 2010. Median age was 55 years (min/max: 21/75 years). Further details of the patients' characteristics are shown in Table 1. All 85 patients started the therapy and were included in the intention-to-treat (ITT) population, which was the primary analysis set. The per protocol (PP) set was used for sensitivity analysis. Nine patients were excluded from the PP set due to violation of the inclusion criteria (stage [n = 3], histology [n = 1]) or application of <50% of the Rituximab dose (allergic reaction to Rituximab [n = 1], noncompliance [n = 1], withdrawal of consent [n = 2], missing Rituximab documentation [n = 1]). Out of 16 centers, 13 participated in the second period of extended follow-up data collection beyond month 30. Data of 60 patients were available for the post hoc analysis of long-term efficacy.
The post hoc defined PFS at 5 years of the extended follow-up period has been estimated at 78%, with a median follow-up of 66 months for the PP set (Fig. 1). No prognostic markers for PFS could be identified (Table 2). Overall survival. There were 2 deaths during the protocol-defined 30-month follow-up. Both deaths were due to secondary cancers. Overall survival (OS) at 2 years was 97% (95% CI = [90; 99]) for the ITT and 97% (89; 99) for the PP set. At 5 years, the OS was 96%, with a median follow-up of 78 months in the post hoc analysis of the extended follow-up period (Fig. 2). There were no lymphoma-related deaths. Recurrences. A total of 17 recurrences were observed in the PP set. Three recurrences were within the radiation volume. Two recurrences occurred as aggressive lymphoma (at 65 and 90 months) outside the radiation volume. Details of the recurrences are listed in Table 3. Patients with an MRD marker did not differ from the whole study cohort with respect to age, performance status, lactate dehydrogenase (LDH), or Follicular Lymphoma International Prognostic Index. Most marker-positive samples showed CLC levels below the quantitative range (QR). Nine patients showed CLC above the QR (median CLC level 2.3 × 10^-4, range: 1.0 × 10^-4 to 4.3 × 10^-3). In BM samples, 3/5 showed a median infiltration of 3.2 × 10^-4 (range: 2.1 × 10^-4 to 5.8 × 10^-4), while 2 were below the QR. Circulating lymphoma cells at diagnosis and MRD. Protocol-defined MRD negativity at week 18 by t(14;18) RQ-PCR was observed in 20/21 (95%) patients with an available sample. Of 21 patients, 14 (67%) achieved a CR, 1 patient a CRu, and 6 patients (30%) a partial response (PR). The only patient with persistent MRD positivity had a PR at restaging in week 18, remained marker positive, and showed no recurrence up to an extended follow-up of 67 months. Of 19 patients without progression after treatment, 15 (79%) were consistently MRD negative in PB at all follow-up visits.
In 4 patients in remission (21%), MRD positivity reappeared, but below the QR of 10^-4. (Table 3 legend: initial MRD marker [yes = presence of MRD marker initially; no = no initial MRD marker; nd = not done], histology of recurrence [nd = not done], and time between first Rituximab and retreatment [figure in parentheses = no treatment until last information]. CR = complete response, MRD = minimal residual disease, na = not applicable [no residual lymphoma before start of treatment], PP = per protocol, PR = partial response, SD = stable disease.) All 4 patients showed fluctuating MRD levels with alternating MRD positivity/negativity during follow-up. Three initially marker-positive patients showed clinical progression at months 22, 24, and 33 after start of treatment. In 2 patients, follow-up samples were MRD positive at or prior to the time of relapse. In the third patient, no sample at relapse was available. The most common adverse events, organized by organ class and CTC grade, are summarized in Table 4. Thirteen patients experienced serious adverse events within the 30-month follow-up. Four of these events were classified as possibly related to therapy: allergic reaction to Rituximab, bacteremia, pulmonary embolism, and stomatitis. All patients completely recovered from those side effects. Secondary neoplasia. Two patients died of secondary cancer during the protocol-defined period: a peritoneal mesothelioma was diagnosed 6 months after the first Rituximab application and 4 months after radiation of the right cervical and left axillary region. The patient died 12.3 months after start of treatment. A metastatic non-small cell lung cancer was diagnosed in a second patient 12 months after radiation of the left cervical and right hilar region. The patient had a history of heavy smoking (>80 pack years) and died 22.6 months after start of therapy. Both secondary cancers were judged not related to therapy. A third patient developed a small cell lung cancer at month 80.
Quality of life

The treatment had no substantial influence on patients' quality of life. There was a minor impairment of global health status, physical and social performance, and occurrence of fatigue at the end of therapy. However, there was also a trend toward overall improvement of quality of life at the end of follow-up (Fig. 3).

Discussion

Treatment of early-stage FL has been a controversial issue for the last decades. Especially in elderly patients, a watch-and-wait strategy might be considered, since only 25% receive therapy within the first 5 years after diagnosis and the 10-year survival is 85% in a small cohort of patients. 8 On the other hand, radiation therapy has been shown to be very effective, and early radiation therapy seems to improve disease-specific survival and OS based on SEER database and US National Cancer Database analyses. 9,10 The MIR concept might be an alternative curative approach, ensuring quality of life and carrying only a minimal risk of severe side effects. However, larger treatment fields proved to be more effective for long-term control than treatment of smaller volumes only. 2,3 Since larger radiation volumes are associated with more toxicity, the NCCN guidelines still recommend only localized radiation therapy for those patients. The ARO 98-01 trial randomized total lymphatic irradiation against extended field irradiation in early-stage FL. Preliminary results showed recurrences mainly in the nonirradiated areas, with a significantly better PFS in patients treated with total lymphatic irradiation. 11 The Stanford data 2 and the ARO 98-01 data 11 imply that a part of the patients with early-stage FL actually do not have limited-stage disease but present with occult systemic disease in need of systemic treatment. This is also supported by the results of the PCR-based analysis of CLC in the diagnostic PB in our study. In 22 stage I/II patients (35%), CLC were detected in PB at diagnosis.
Also, submicroscopic BM infiltration detected by PCR was frequent, with 6/13 cases (46%) despite a negative BM histology. Friedberg et al 12 reported that systemic chemotherapy was advantageous compared with limited-field irradiation alone in stage I FL. A combination of chemotherapy and involved field (IF) irradiation improved PFS compared with IF radiation only, but was hampered by toxicity. [13][14][15] Recently, the results of a randomized trial were published investigating Cyclophosphamide, Vincristine, Prednisone (CVP) chemotherapy as additional systemic treatment in combination with IF radiotherapy versus radiotherapy alone. 16 Rituximab was added later to the systemic therapy in 31 patients. Radiation therapy alone was significantly inferior in PFS compared with the combined modality approach. The best results were achieved using Rituximab, CVP, and radiation, with a PFS of approximately 86% at 5 years. The current MIR study combined IF radiation therapy only with Rituximab. The PFS at 2 years was at least as good as in prospective large-radiation-field series, but with significantly lower toxicity. 3,11 The presented PFS at 5 years of 78% was close to the 86% reported by MacManus, without the accompanying toxicity of chemotherapy. 16 However, our data have a limitation, since they are based on a post hoc analysis of prospectively collected long-term data. This might result in a bias due to less extensive follow-up examinations or drop-outs. This could also be an explanation for the reduction of recurrences during the extended follow-up period. However, other published data also show a marked decrease of the PFS in the initial years and a trend toward a plateau thereafter.
2,3,12,17 Our data confirm the results of a larger retrospective Italian study, which suggested an improved patient outcome with the addition of Rituximab to limited-field radiation therapy 17 : the 5-year PFS of the current study is even superior to their data (5-year PFS 68%). The Italian study also included FL grade 3a and/or applied only 4 cycles of Rituximab instead of the 8 cycles in the MIR study. 17 Response duration was associated with a continuous MRD response, as 15/19 patients (79%) without progression were consistently MRD negative in PB at all follow-up time points. Only a few relapsing patients had follow-up samples available for MRD evaluation. However, it seems that MRD reappearance is associated with clinical relapse. A possible treatment concept evolving from our results could be Rituximab retreatment already at MRD relapse to prevent clinical recurrences. Alternatively, Obinutuzumab may result in significantly higher efficacy compared with Rituximab, based on the studies in CLL and advanced FL. [18][19][20] Given the higher rate of out-of-field recurrences in the current study, radiation did have an additional effect beyond Rituximab, with an increase of the CR rate from 29% to 79%. It is not clear whether a lower radiation dose would show the same efficacy. Lowry et al 21 published a randomized trial of radiation alone with 24 Gy versus 40 Gy (2 Gy fraction size), showing no benefit for the higher-dose group in indolent lymphomas. Therefore, 12 × 2 Gy might have given similar results. However, a WHO diagnosis could be made in only 80% of the indolent lymphomas in this British trial, and of those, only 60% were proven FL according to central pathologic review. 21 In the current study, all specimens underwent a central pathologic review, which confirmed FL grade 1 or 2 corresponding to the inclusion criteria. Only 1 patient had to be excluded from the PP population due to the diagnosis of Hodgkin lymphoma in the central review.
Radiation therapy in combination with Rituximab induced an MRD response at week 18 in all patients except one. This demonstrates the impact of Rituximab on lymphoma cell clearance, as data from IF radiation therapy alone detected MRD responses in only 60%. In contrast to IF radiation alone, where only patients with a low level of CLC (<1:100,000) achieved PCR negativity, lymphoma cell clearance in the MIR study was independent of the pretherapeutic CLC load. 22 There are only few publications analyzing the presence of CLC at diagnosis in small series with early-stage FL. Lambrechts et al 23 detected t(14;18)-positive cells in 75% of a series of 12 early-stage I/II FL. Pulsoni 22 investigated a series of 24 early-stage FL and detected CLC in 66% of all cases. This higher detection frequency may in part result from the relatively contamination-prone nested PCR approach applied in both studies and the fact that nested PCR approaches detect t(14;18)-positive cells in about 40% of healthy individuals. 24 By contrast, the MRD detection rate of 34% in our series was confirmed by a quality-controlled RQ-PCR assay 25 and standardized data analysis according to the EuroMRD guideline, 26 thereby providing the most precise frequency determination of CLC in FL. The results show that in 1/3 of patients with early-stage disease, lymphoma cells have spread from the affected lymph node to PB or BM. Presence of a clonal marker is obviously dependent on stage, as a clonal marker was detected more frequently in patients with Ann Arbor stage II (58%) compared with stage I (21%) in our series. It seems safe to assume that detection of a molecular marker at diagnosis is a sensitive indicator of tumor burden, since in advanced stage III/IV FL the frequency of CLC detection increases up to 80% (t(14;18) and IGVH multiplex PCR) and is associated with a high tumor load. 27 In contrast to Ruella et al, 17 our data did not reveal initial MRD positivity as a negative prognostic marker.
This might be explained by the fact that Rituximab was applied only 4 times in the Italian study instead of 8 times in the MIR study. However, Ruella et al also investigated only BM, while most of our analyses are based on blood samples. Also, 57% of MRD-positive samples in the MIR study showed only minor infiltration below the QR, suggesting a low systemic lymphoma burden. Compared with advanced FL, 28 the level of CLC measured by RQ-PCR is low in stage I/II FL in the MIR study, with 57% showing CLC levels below the QR of 10^-4, reflecting a lower overall tumor burden. Rituximab and IF radiotherapy were well tolerated. There were only 4 serious adverse events possibly related to the treatment, which is clearly a lower rate than for the more aggressive approaches combining chemotherapy and small-field radiation. MacManus et al 16 reported that 45 grade 3 or 4 toxicities were counted in 69 patients receiving CVP or R-CVP in addition to IF radiotherapy. There were 3 cases of secondary cancers (ie, non-small cell lung cancer, mesothelioma, and small cell lung cancer) in our cohort; 2 of these occurred within 12 months after initiation of the lymphoma therapy and were outside of the radiation fields. A correlation with the study treatment seems unlikely, since a large meta-analysis shows that the addition of Rituximab to standard treatment is not associated with an increased risk of secondary malignancies. 29 The low toxicity profile of the MIR schedule is also reflected by the quality of life data. The MIR treatment had no significant negative influence on quality of life, similar to patients with advanced disease receiving only Rituximab. 30 Quality of life scores 1 year after radiotherapy only did not differ from the normal population in the PHAROS registry, which used the same EORTC-C30 questionnaire as the current study.
By contrast, after immunochemotherapy, patients reported significantly higher fatigue scores 1 year after therapy compared with patients who received radiotherapy only or with the normal population. 31 In the current study, fatigue was not significantly influenced by immuno-radiotherapy, with only slightly elevated fatigue scores at week 18. In conclusion, IF radiotherapy in combination with Rituximab is well tolerated and shows low rates of recurrence in early-stage FL. Results are comparable with more aggressive therapeutic approaches but without compromising quality of life, and are maintained over an extended follow-up of more than 5 years.

Study design and patient entry criteria

Details of the study design are published elsewhere. 32 Briefly, the MIR study recruited patients with CD20-positive nodal FL grade 1/2 according to the WHO classification 2001, in localized stage I/II (Ann Arbor), younger than 76 years and with Eastern Cooperative Oncology Group performance status 0 to 2. Patients with bulky disease (>7 cm), prior radiotherapy, prior chemotherapy or immunotherapy, or a prior diagnosis of a malignant neoplasia were excluded. The primary endpoint was PFS at 24 months as defined by Cheson. 33 Secondary endpoints were CR rate after Rituximab monotherapy (week 7) and after completion of treatment (week 18), relapse pattern, OS, toxicity, and quality of life. The study was approved by the local ethical committees of the Medical Faculty of the University of Heidelberg (AFmu-085/2007), the federal Paul-Ehrlich Institute, and the German agency for radiation protection, and registered (ClinicalTrials.gov ID: NCT00509184). All patients gave their written informed consent. Since long-term follow-up is of interest for the community, the study centers were encouraged to further follow the patients after the last protocol-defined visit at month 30 (extended evaluation period).
However, the frequency and extent (clinical and imaging) of follow-up examinations were at the discretion of the local investigator, and the data were not centrally monitored. These data were retrospectively collected for a post hoc analysis of PFS and OS at 5 years.

Treatment

Treatment consisted of 4 weekly administrations of Rituximab (MabThera, Roche Pharma, 375 mg/m²). In week 7, patients received a restaging and radiation planning CT of the involved region. Four further weekly administrations of Rituximab were given in weeks 9 to 12. Radiation treatment of the involved lymph node regions (adapted from Yahalom and Mauch 34 ) was initiated in week 9 and applied in 2 Gy fractions (5 times/wk) up to a total dose of 30 Gy. In case of remaining lymphoma after the initial Rituximab therapy in week 7, the residual region was boosted with 5 × 2 Gy in week 12.

Follow-up

Protocol-defined follow-up visits were performed in week 18 and months 6, 12, 18, 24, and 30. These visits included the assessment of medical status, a physical examination, and analysis of blood cell counts and LDH. Three-dimensional imaging (CT/MRI) of neck, thorax, abdomen, and pelvis was mandatory at all time points except week 18 (involved region only). Further follow-up examination during the extended evaluation period was at the centers' discretion, as mentioned above.

Quality assurance

Histologic specimens were centrally reviewed by the GLSG pathology reference panel. Staging imaging series were centrally reviewed and the extent of radiation was recommended accordingly. For data of the first 30 months, core data monitoring was performed in all patients, and additionally 100% of source data were verified in 20% of the patients.

Statistics

Primary endpoint. PFS estimation for the primary endpoint was based on the prior trial ARO 98-01 (extended field radiotherapy vs total lymphatic radiotherapy in early stages of nodal FL). An interim analysis of this trial showed a PFS of 75% at 2 years.
35 The MIR study aimed to improve this outcome. Kaplan-Meier estimates and 2-sided 90% CIs were calculated at 6, 12, 18, 24, and 30 months after start of treatment. The null hypothesis (PFS at 24 months ≤ 75%) is rejected if the lower boundary of the 2-sided 90% CI at month 24 is above 75%, corresponding to a 1-sided test with α = 0.05. Secondary endpoints. Remission was evaluated according to Cheson 1999. 33 The complete remission rate (CR/CRu) at week 7 and week 18 was evaluated only for patients with remaining lymphoma after the diagnostic biopsy procedure. In addition to the protocol-predefined endpoints, the rate of CR at month 6 was calculated. OS was calculated from the beginning of therapy until death or last follow-up (Kaplan-Meier method). QLQ-C30 (EORTC, version 3.0) questionnaires were used for the assessment of quality of life before treatment as well as in week 18 and month 30. Missing values were imputed using the LOCF (last observation carried forward) or BOCF (baseline observation carried forward) method. Toxicity was evaluated according to the CTCAE version 3.0 scoring system. Kaplan-Meier estimates were used for the post hoc evaluation of PFS and OS at 5 years during the extended evaluation period. MRD analysis. MRD marker screening was performed prior to start of Rituximab in PB and BM. MRD was assessed at week 18 and months 12, 24, and 30. DNA was extracted with the Qiagen Blood Mini Kit (Qiagen, Hilden, Germany). In PB or BM, lymphoma cells were assessed by t(14;18) or IGH rearrangement multiplex PCR as published. 36 RQ-PCR assays with generic primers for the t(14;18) breakpoints and/or allele-specific primers for the Ig rearrangements were performed on an ABI PRISM 7700 thermal cycler (Applied Biosystems, CA). MRD quantification was performed as previously described 25 and evaluated according to EuroMRD criteria. 26 Assays were designed to reach a sensitivity of 1 × 10^-5, with Albumin as control gene to correct for DNA amount or PCR inhibitors.
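The Kaplan-Meier (product-limit) estimation used for PFS and OS above can be illustrated with a minimal sketch; the data in the test are hypothetical toy values, not study data, and ties are handled with events counted before censorings at the same time, as is conventional:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate from (time, event) pairs.

    events[i] is 1 for an observed progression/death, 0 for censoring.
    Returns a list of (time, S(t)) steps at each event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    steps = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = 0  # events at time t
        c = 0  # censorings at time t
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                d += 1
            else:
                c += 1
            i += 1
        if d:
            surv *= 1.0 - d / n_at_risk  # multiply by conditional survival
            steps.append((t, surv))
        n_at_risk -= d + c  # remove both events and censorings from risk set
    return steps
```

Censored subjects lower the number at risk without contributing a step, which is exactly why a Kaplan-Meier PFS estimate differs from a naive fraction of patients without progression.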
37 MRD positivity of a sample was defined as any of the triplicates being positive by RQ-PCR analysis. Molecular complete remission was defined as the absence of PCR-detectable neoplastic cells in PB and/or BM at any time point, with a sensitivity of at least 10^-4. MRD analyses in PB and/or BM were pooled, and the higher MRD value was used for calculation.
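The LOCF/BOCF imputation scheme described for the quality-of-life questionnaires in the statistics section can be sketched as follows; the function name and toy scores are illustrative, not taken from the study's analysis code:

```python
def impute_locf(series, baseline=None):
    """Last-observation-carried-forward imputation with baseline fallback.

    series: per-visit scores in chronological order, None for missing values.
    baseline: pre-treatment score used when no later observation exists yet
    (this is the BOCF fallback).
    """
    out = []
    last = baseline
    for value in series:
        if value is not None:
            last = value  # update the carried-forward observation
        out.append(last)
    return out
```

So a patient missing the month-30 questionnaire keeps the week-18 score, and a patient missing all post-treatment questionnaires keeps the baseline score.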